cwl_utils-0.37/LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

   "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
cwl_utils-0.37/MANIFEST.in

include README.rst
include Makefile
include MANIFEST.in
include LICENSE
include *requirements.txt
include create_cwl_from_objects.py
include load_cwl_by_path.py
include cwl_utils/cwlNodeEngine.js
include cwl_utils/cwlNodeEngineJSConsole.js
include cwl_utils/cwlNodeEngineWithContext.js
include testdata/*.cwl
include testdata/*.yaml
include testdata/*.yml
include testdata/*.input
include testdata/*.json
include testdata/*.ttl
include testdata/*.owl
include testdata/*.js
include testdata/remote-cwl/*.cwl
include testdata/workflows/*.cwl
include testdata/workflows/*.yaml
include testdata/types/*.yml
include testdata/checker_wf/*.cwl
include testdata/extensions/*.cwl
include cwl_utils/py.typed
include docs/conf.py docs/Makefile docs/_static/favicon.ico docs/requirements.txt
include docs/*.rst
global-exclude *~
global-exclude *.pyc

cwl_utils-0.37/Makefile

# This file is part of cwl-utils,
# https://github.com/common-workflow-language/cwl-utils/, and is
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Contact: common-workflow-language@googlegroups.com

# make format to fix most python formatting errors
# make pylint to check Python code for enhanced compliance including naming
#  and documentation
# make coverage-report to check coverage of the python scripts by the tests

MODULE=cwl_utils
PACKAGE=cwl-utils
EXTRAS=

# `SHELL=bash` doesn't work for some, so don't use BASH-isms like
# `[[` conditional expressions.
PYSOURCES=$(filter-out $(MODULE)/parser/cwl_v%,$(shell find $(MODULE) -name "*.py")) \
	  $(wildcard tests/*.py) create_cwl_from_objects.py load_cwl_by_path.py \
	  ${MODULE}/parser/cwl_v1_?_utils.py docs/conf.py
DEVPKGS=build diff_cover pylint pep257 ruff 'tox<4' tox-pyenv \
	wheel autoflake pyupgrade bandit auto-walrus \
	-rlint-requirements.txt -rtest-requirements.txt -rmypy-requirements.txt
DEBDEVPKGS=pep8 python-autopep8 pylint python-coverage ruff sloccount \
	   python-flake8 python-mock shellcheck
VERSION=v$(shell echo $$(tail -n 1 cwl_utils/__meta__.py | awk '{print $$3}'))
mkfile_dir := $(dir $(abspath $(lastword $(MAKEFILE_LIST))))
UNAME_S=$(shell uname -s)

## all : default task (install cwl-utils in dev mode)
all: dev

## help : print this help message and exit
help: Makefile
	@sed -n 's/^##//p' $<

## cleanup : shortcut for "make sort_imports format flake8 pydocstyle"
cleanup: sort_imports format flake8 pydocstyle

## install-dep : install most of the development dependencies via pip
install-dep: install-dependencies

install-dependencies:
	pip install -U pip setuptools wheel
	pip install --upgrade $(DEVPKGS)

## install-deb-dep : install many of the dev dependencies via apt-get
install-deb-dep:
	sudo apt-get install $(DEBDEVPKGS)

## install : install the cwl-utils package and the scripts
install: FORCE
	pip install .$(EXTRAS)

## dev : install the cwl-utils package in dev mode
dev: install-dep
	pip install -U pip setuptools wheel
	pip install -e .$(EXTRAS)

## dist : create a module package for distribution
dist: dist/${MODULE}-$(VERSION).tar.gz

dist/${MODULE}-$(VERSION).tar.gz: $(SOURCES)
	python -m build

## docs : make the docs
docs: FORCE
	cd docs && $(MAKE) html

## clean : clean up all temporary / machine-generated files
clean: FORCE
	rm -f ${MODULE}/*.pyc tests/*.pyc
	rm -Rf .coverage
	rm -f diff-cover.html

# Linting and code style related targets
## sort_import : sorting imports using isort: https://github.com/timothycrosley/isort
sort_imports: $(PYSOURCES)
	isort $^

remove_unused_imports: $(PYSOURCES)
	autoflake --in-place --remove-all-unused-imports $^

pep257: pydocstyle
## pydocstyle : check Python docstring style
pydocstyle: $(PYSOURCES)
	ruff check $^

## codespell : check for common misspellings
codespell:
	codespell -w $(shell git ls-files | grep -v mypy-stubs)

## format : check/fix all code indentation and formatting (runs black)
format: $(PYSOURCES) FORCE
	black $(PYSOURCES)

format-check: $(PYSOURCES)
	black --diff --check $^

## pylint : run static code analysis on Python code
pylint: $(PYSOURCES)
	pylint --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" \
		$^ -j0|| true

pylint_report.txt: $(PYSOURCES)
	pylint --msg-template="{path}:{line}: [{msg_id}({symbol}), {obj}] {msg}" \
		$^ -j0> $@ || true

diff_pylint_report: pylint_report.txt
	diff-quality --compare-branch=main --violations=pylint pylint_report.txt

.coverage: testcov

coverage: .coverage
	coverage report

coverage.xml: .coverage
	coverage xml

coverage.html: htmlcov/index.html

htmlcov/index.html: .coverage
	coverage html
	@echo Test coverage of the Python code is now in htmlcov/index.html

coverage-report: .coverage
	coverage report

diff-cover: coverage.xml
	diff-cover --compare-branch=main $^

diff-cover.html: coverage.xml
	diff-cover --compare-branch=main $^ --html-report $@

## test : run the cwl-utils test suite
test: $(PYSOURCES)
	python -m pytest -rsx ${PYTEST_EXTRA}

## testcov : run the cwl-utils test suite and collect coverage
testcov: $(PYSOURCES)
	pytest --cov ${PYTEST_EXTRA}

sloccount.sc: $(PYSOURCES) Makefile
	sloccount --duplicates --wide --details $^ > $@

## sloccount : count lines of code
sloccount: $(PYSOURCES) Makefile
	sloccount $^

list-author-emails:
	@echo 'name, E-Mail Address'
	@git log --format='%aN,%aE' | sort -u | grep -v 'root'

mypy3: mypy
mypy: ${PYSOURCES}
	MYPYPATH=$$MYPYPATH:mypy-stubs mypy $^

shellcheck: FORCE
	shellcheck release-test.sh

pyupgrade: $(PYSOURCES)
	pyupgrade --exit-zero-even-if-changed --py39-plus $^
	auto-walrus $^

release-test: FORCE
	git diff-index --quiet HEAD -- || ( echo You have uncommitted changes, please commit them and try again; false )
	./release-test.sh

release: release-test
	. testenv2/bin/activate && \
	pip install build && \
	python -m build testenv2/src/${PACKAGE} && \
	pip install twine && \
	twine upload testenv2/src/${PACKAGE}/dist/* && \
	git tag ${VERSION} && git push --tags

flake8: $(PYSOURCES)
	flake8 $^

cwl_utils/parser/cwl_v1_0.py: FORCE
	schema-salad-tool --codegen python \
		--codegen-parser-info "org.w3id.cwl.v1_0" \
		https://github.com/common-workflow-language/common-workflow-language/raw/codegen/v1.0/extensions.yml \
		> $@

cwl_utils/parser/cwl_v1_1.py: FORCE
	schema-salad-tool --codegen python \
		--codegen-parser-info "org.w3id.cwl.v1_1" \
		https://github.com/common-workflow-language/cwl-v1.1/raw/codegen/extensions.yml \
		> $@

cwl_utils/parser/cwl_v1_2.py: FORCE
	schema-salad-tool --codegen python \
		--codegen-parser-info "org.w3id.cwl.v1_2" \
		https://github.com/common-workflow-language/cwl-v1.2/raw/codegen/extensions.yml \
		> $@

regen_parsers: cwl_utils/parser/cwl_v1_*.py

FORCE:

# Use this to print the value of a Makefile variable
# Example `make print-VERSION`
# From https://www.cmcrossroads.com/article/printing-value-makefile-variable
print-% : ; @echo $* = $($*)

cwl_utils-0.37/PKG-INFO

Metadata-Version: 2.2
Name: cwl-utils
Version: 0.37
Author-email: Common Workflow Language project
 contributors <common-workflow-language@googlegroups.com>
License: Apache 2.0
Project-URL: Documentation, https://cwl-utils.readthedocs.io/
Project-URL: Homepage, https://www.commonwl.org
Project-URL: Repository, https://github.com/common-workflow-language/cwl-utils
Project-URL: Issues, https://github.com/common-workflow-language/cwl-utils/issues
Project-URL: Changelog, https://github.com/common-workflow-language/cwl-utils/releases
Project-URL: Related Tools, https://www.commonwl.org/tools/
Keywords: cwl,commonwl,common-workflow-language,sciworkflows
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: File Formats
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Distributed Computing
Classifier: Typing :: Typed
Requires-Python: <3.14,>=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: cwl-upgrader>=1.2.3
Requires-Dist: packaging
Requires-Dist: rdflib
Requires-Dist: requests
Requires-Dist: schema-salad<9,>=8.8.20250205075315
Requires-Dist: ruamel.yaml<0.19,>=0.17.6
Requires-Dist: typing_extensions; python_version < "3.10"
Provides-Extra: pretty
Requires-Dist: cwlformat; extra == "pretty"
Provides-Extra: testing
Requires-Dist: pytest<9; extra == "testing"
Requires-Dist: pytest-mock; extra == "testing"

|Linux Build Status| |Code coverage| |Documentation Status|

.. |Linux Build Status| image:: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml/badge.svg?branch=main
   :target: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml
.. |Code coverage| image:: https://codecov.io/gh/common-workflow-language/cwl-utils/branch/main/graph/badge.svg
   :target: https://codecov.io/gh/common-workflow-language/cwl-utils
.. |Documentation Status| image:: https://readthedocs.org/projects/cwl-utils/badge/?version=latest
   :target: https://cwl-utils.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

cwl-utils
---------

Python Utilities and Autogenerated Classes for loading and parsing
`CWL v1.0 <https://www.commonwl.org/v1.0/>`__,
`CWL v1.1 <https://www.commonwl.org/v1.1/>`__, and
`CWL v1.2 <https://www.commonwl.org/v1.2/>`__ documents.

Requires Python 3.9+

Installation
------------

::

   pip3 install cwl-utils

To install from source::

   git clone https://github.com/common-workflow-language/cwl-utils.git
   cd cwl-utils
   pip3 install .

Usage
-----

Pull all referenced software container images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-docker-extract`` is useful to cache or pre-pull all software
container images referenced in a CWL CommandLineTool or CWL Workflow
(including all referenced CommandLineTools and sub-Workflows and so on).

The default behaviour is to use the Docker engine to download and save
the software container images in Docker format.

.. code:: bash

   cwl-docker-extract path_to_my_workflow.cwl

.. code:: bash

   cwl-docker-extract --dir DIRECTORY path_to_my_workflow.cwl

Or you can use the Singularity software container engine to download and
save the software container images and convert them to the Singularity
format at the same time.

.. code:: bash

   cwl-docker-extract --singularity --dir DIRECTORY path_to_my_workflow.cwl

Print all referenced software packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-cite-extract`` prints all software packages found (recursively) in
the specified CWL document.

Currently the package name and any listed specs and version fields are
printed for all ``SoftwareRequirement``\ s found.

.. code:: bash

   cwl-cite-extract path_to_my_workflow.cwl

Replace CWL Expressions with concrete steps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-expression-refactor`` refactors CWL documents so that any CWL
Expression evaluations are separate steps (either CWL ExpressionTools or
CWL CommandLineTools). This allows execution by CWL engines that do not
want to support inline expression evaluation outside of concrete steps,
or do not want to directly support CWL's optional
``InlineJavascriptRequirement`` at all.

.. code:: bash

   cwl-expression-refactor directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Split a packed CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-graph-split`` splits a packed CWL document file into multiple
files. Packed CWL documents use the ``$graph`` construct to contain
multiple CWL Process objects (Workflow, CommandLineTool, ExpressionTool,
Operation). Typically packed CWL documents contain a CWL Workflow under
the name "main" and the workflow steps (including any sub-workflows).

.. code:: bash

   cwl-graph-split --outdir optional/directory/path/to/save/outputs path_to_my_workflow.cwl

Normalize a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-normalizer`` normalizes one or more CWL documents so that, for
each document, a JSON format CWL document is produced with it and all of
its dependencies packed together, upgrading to CWL v1.2 as needed. Can
optionally refactor CWL Expressions into separate steps in the manner of
cwl-expression-refactor.

.. code:: bash

   cwl-normalizer directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Generate JSON Schema for Workflow Parameters from a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-inputs-schema-gen`` generates a JSON Schema for workflow input
parameters from a CWL document.

.. code:: bash

   cwl-inputs-schema-gen path_to_my_workflow.cwl

Using the CWL Parsers
~~~~~~~~~~~~~~~~~~~~~

.. code:: python

   from pathlib import Path

   from cwl_utils.parser import load_document_by_uri, save

   # File Input - This is the only thing you will need to adjust or take in as an input to your function:
   cwl_file = Path("testdata/md5sum.cwl")  # or a plain string works as well

   # Import CWL Object
   cwl_obj = load_document_by_uri(cwl_file)

   # View CWL Object
   print("List of object attributes:\n{}".format("\n".join(map(str, dir(cwl_obj)))))

   # Export CWL Object into a built-in typed object
   saved_obj = save(cwl_obj)
   print(f"Export of the loaded CWL object: {saved_obj}.")

Development
-----------

Regenerate parsers
~~~~~~~~~~~~~~~~~~

To regenerate the parsers, install the ``schema_salad`` package and run
the following commands:

``cwl_utils/parser/cwl_v1_0.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/common-workflow-language/raw/codegen/v1.0/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_0" > cwl_utils/parser/cwl_v1_0.py``

``cwl_utils/parser/cwl_v1_1.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.1/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_1" > cwl_utils/parser/cwl_v1_1.py``

``cwl_utils/parser/cwl_v1_2.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.2/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_2" > cwl_utils/parser/cwl_v1_2.py``

Release
~~~~~~~

To release CWLUtils, bump the version in ``cwl_utils/__meta__.py``, and
tag that commit with the new version. The
`gh-action-pypi-publish <https://github.com/pypa/gh-action-pypi-publish>`__
should release that tag.
cwl_utils-0.37/README.rst

|Linux Build Status| |Code coverage| |Documentation Status|

.. |Linux Build Status| image:: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml/badge.svg?branch=main
   :target: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml
.. |Code coverage| image:: https://codecov.io/gh/common-workflow-language/cwl-utils/branch/main/graph/badge.svg
   :target: https://codecov.io/gh/common-workflow-language/cwl-utils
.. |Documentation Status| image:: https://readthedocs.org/projects/cwl-utils/badge/?version=latest
   :target: https://cwl-utils.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

cwl-utils
---------

Python Utilities and Autogenerated Classes for loading and parsing
`CWL v1.0 <https://www.commonwl.org/v1.0/>`__,
`CWL v1.1 <https://www.commonwl.org/v1.1/>`__, and
`CWL v1.2 <https://www.commonwl.org/v1.2/>`__ documents.

Requires Python 3.9+

Installation
------------

::

   pip3 install cwl-utils

To install from source::

   git clone https://github.com/common-workflow-language/cwl-utils.git
   cd cwl-utils
   pip3 install .

Usage
-----

Pull all referenced software container images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-docker-extract`` is useful to cache or pre-pull all software
container images referenced in a CWL CommandLineTool or CWL Workflow
(including all referenced CommandLineTools and sub-Workflows and so on).

The default behaviour is to use the Docker engine to download and save
the software container images in Docker format.

.. code:: bash

   cwl-docker-extract path_to_my_workflow.cwl

.. code:: bash

   cwl-docker-extract --dir DIRECTORY path_to_my_workflow.cwl

Or you can use the Singularity software container engine to download and
save the software container images and convert them to the Singularity
format at the same time.

.. code:: bash

   cwl-docker-extract --singularity --dir DIRECTORY path_to_my_workflow.cwl

Print all referenced software packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-cite-extract`` prints all software packages found (recursively) in
the specified CWL document.

Currently the package name and any listed specs and version fields are
printed for all ``SoftwareRequirement``\ s found.

.. code:: bash

   cwl-cite-extract path_to_my_workflow.cwl

Replace CWL Expressions with concrete steps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-expression-refactor`` refactors CWL documents so that any CWL
Expression evaluations are separate steps (either CWL ExpressionTools or
CWL CommandLineTools). This allows execution by CWL engines that do not
want to support inline expression evaluation outside of concrete steps,
or do not want to directly support CWL's optional
``InlineJavascriptRequirement`` at all.

.. code:: bash

   cwl-expression-refactor directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Split a packed CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-graph-split`` splits a packed CWL document file into multiple
files. Packed CWL documents use the ``$graph`` construct to contain
multiple CWL Process objects (Workflow, CommandLineTool, ExpressionTool,
Operation). Typically packed CWL documents contain a CWL Workflow under
the name "main" and the workflow steps (including any sub-workflows).

.. code:: bash

   cwl-graph-split --outdir optional/directory/path/to/save/outputs path_to_my_workflow.cwl

Normalize a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-normalizer`` normalizes one or more CWL documents so that, for
each document, a JSON format CWL document is produced with it and all of
its dependencies packed together, upgrading to CWL v1.2 as needed. Can
optionally refactor CWL Expressions into separate steps in the manner of
cwl-expression-refactor.

.. code:: bash

   cwl-normalizer directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Generate JSON Schema for Workflow Parameters from a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-inputs-schema-gen`` generates a JSON Schema for workflow input
parameters from a CWL document.

.. code:: bash

   cwl-inputs-schema-gen path_to_my_workflow.cwl

Using the CWL Parsers
~~~~~~~~~~~~~~~~~~~~~

.. code:: python

   from pathlib import Path

   from cwl_utils.parser import load_document_by_uri, save

   # File Input - This is the only thing you will need to adjust or take in as an input to your function:
   cwl_file = Path("testdata/md5sum.cwl")  # or a plain string works as well

   # Import CWL Object
   cwl_obj = load_document_by_uri(cwl_file)

   # View CWL Object
   print("List of object attributes:\n{}".format("\n".join(map(str, dir(cwl_obj)))))

   # Export CWL Object into a built-in typed object
   saved_obj = save(cwl_obj)
   print(f"Export of the loaded CWL object: {saved_obj}.")

Development
-----------

Regenerate parsers
~~~~~~~~~~~~~~~~~~

To regenerate the parsers, install the ``schema_salad`` package and run
the following commands:

``cwl_utils/parser/cwl_v1_0.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/common-workflow-language/raw/codegen/v1.0/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_0" > cwl_utils/parser/cwl_v1_0.py``

``cwl_utils/parser/cwl_v1_1.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.1/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_1" > cwl_utils/parser/cwl_v1_1.py``

``cwl_utils/parser/cwl_v1_2.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.2/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_2" > cwl_utils/parser/cwl_v1_2.py``

Release
~~~~~~~

To release CWLUtils, bump the version in ``cwl_utils/__meta__.py``, and
tag that commit with the new version. The
`gh-action-pypi-publish <https://github.com/pypa/gh-action-pypi-publish>`__
should release that tag.

cwl_utils-0.37/create_cwl_from_objects.py

#!/usr/bin/env python3
# SPDX-License-Identifier: Apache-2.0
import sys

import ruamel.yaml

from cwl_utils.parser import cwl_v1_2 as cwl


def main() -> None:
    """Generate a CWL object to match "cat-tool.cwl"."""
    inputs = [cwl.CommandInputParameter(id="file1", type_="File")]
    outputs = [
        cwl.CommandOutputParameter(
            id="output",
            type_="File",
            outputBinding=cwl.CommandOutputBinding(glob="output"),
        )
    ]
    cat_tool = cwl.CommandLineTool(
        inputs=inputs,
        outputs=outputs,
        cwlVersion="v1.2",
        baseCommand="cat",
        stdin="$(inputs.file1.path)",
        stdout="output",
    )
    yaml = ruamel.yaml.YAML()
    yaml.dump(cat_tool.save(), sys.stdout)


if __name__ == "__main__":
    main()

cwl_utils-0.37/cwl_utils/

cwl_utils-0.37/cwl_utils/__init__.py

# SPDX-License-Identifier: Apache-2.0
"""Python Utilities and Autogenerated Classes for loading and parsing CWL documents."""

cwl_utils-0.37/cwl_utils/__meta__.py

# SPDX-License-Identifier: Apache-2.0
"""Global version number for the cwl_utils package."""
__version__ = "0.37"

cwl_utils-0.37/cwl_utils/cite_extract.py

#!/usr/bin/env python3
# SPDX-License-Identifier: Apache-2.0
import argparse
import sys
from collections.abc import Iterator
from typing import cast

import cwl_utils.parser as cwl


def arg_parser() -> argparse.ArgumentParser:
    """Construct the argument parser."""
    parser = argparse.ArgumentParser(
        description="Print information about software used in a CWL document (Workflow or CommandLineTool). "
        "For CWL Workflows, all steps will also be searched (recursively)."
    )
    parser.add_argument(
        "input", help="Input CWL document (CWL Workflow or CWL CommandLineTool)"
    )
    return parser


def run(args: argparse.Namespace) -> int:
    """Extract the software requirements."""
    for req in traverse(cwl.load_document_by_uri(args.input)):
        process_software_requirement(req)
    return 0


def main() -> int:
    """Console entry point."""
    return run(arg_parser().parse_args(sys.argv[1:]))


def extract_software_reqs(
    process: cwl.Process,
) -> Iterator[cwl.SoftwareRequirement]:
    """Return an iterator over any SoftwareRequirements found in the given process."""
    if process.requirements:
        for req in process.requirements:
            if isinstance(req, cwl.SoftwareRequirementTypes):
                yield req
    if process.hints:
        for req in process.hints:
            if isinstance(req, cwl.SoftwareRequirementTypes):
                yield req


def process_software_requirement(req: cwl.SoftwareRequirement) -> None:
    """Pretty print the software package information."""
    for package in req.packages:
        print(
            "Package: {}, version: {}, specs: {}".format(
                package.package, package.version, package.specs
            )
        )


def traverse(process: cwl.Process) -> Iterator[cwl.SoftwareRequirement]:
    """Extract the software packages for this process, and any steps."""
    yield from extract_software_reqs(process)
    if isinstance(process, cwl.WorkflowTypes):
        yield from traverse_workflow(process)


def get_process_from_step(step: cwl.WorkflowStep) -> cwl.Process:
    """Return the process for this step, loading it if needed."""
    if isinstance(step.run, str):
        return cast(cwl.Process, cwl.load_document_by_uri(step.run))
    return cast(cwl.Process,
                step.run)


def traverse_workflow(workflow: cwl.Workflow) -> Iterator[cwl.SoftwareRequirement]:
    """Iterate over the given workflow, extracting the software packages."""
    for step in workflow.steps:
        yield from extract_software_reqs(step)
        yield from traverse(get_process_from_step(step))


if __name__ == "__main__":
    sys.exit(main())

cwl_utils-0.37/cwl_utils/cwlNodeEngine.js:

"use strict";
process.stdin.setEncoding("utf8");
var incoming = "";
process.stdin.on("data", function(chunk) {
  incoming += chunk;
  var i = incoming.indexOf("\n");
  if (i > -1) {
    try {
      var fn = JSON.parse(incoming.substr(0, i));
      incoming = incoming.substr(i+1);
      process.stdout.write(JSON.stringify(require("vm").runInNewContext(fn, {})) + "\n");
    } catch(e) {
      console.error(e);
    }
    /*strings to indicate the process has finished*/
    console.log("r1cepzbhUTxtykz5XTC4");
    console.error("r1cepzbhUTxtykz5XTC4");
  }
});
process.stdin.on("end", process.exit);

cwl_utils-0.37/cwl_utils/cwlNodeEngineJSConsole.js:

"use strict";
function js_console_log() {
  console.error("[log] "+require("util").format.apply(this, arguments).split("\n").join("\n[log] "));
}
function js_console_err() {
  console.error("[err] "+require("util").format.apply(this, arguments).split("\n").join("\n[err] "));
}
process.stdin.setEncoding("utf8");
var incoming = "";
process.stdin.on("data", function(chunk) {
  incoming += chunk;
  var i = incoming.indexOf("\n");
  if (i > -1) {
    try {
      var fn = JSON.parse(incoming.substr(0, i));
      incoming = incoming.substr(i+1);
      process.stdout.write(JSON.stringify(require("vm").runInNewContext(fn, {
        console: {
          log: js_console_log,
          error: js_console_err
        }
      })) + "\n");
    } catch(e) {
      console.error(e);
    }
    /*strings to indicate
       the process has finished*/
    console.log("r1cepzbhUTxtykz5XTC4");
    console.error("r1cepzbhUTxtykz5XTC4");
  }
});
process.stdin.on("end", process.exit);

cwl_utils-0.37/cwl_utils/cwlNodeEngineWithContext.js:

"use strict";
process.stdin.setEncoding("utf8");
var incoming = "";
var firstInput = true;
var context = {};
process.stdin.on("data", function(chunk) {
  incoming += chunk;
  var i = incoming.indexOf("\n");
  while (i > -1) {
    try {
      var input = incoming.substr(0, i);
      incoming = incoming.substr(i+1);
      var fn = JSON.parse(input);
      if (firstInput) {
        context = require("vm").runInNewContext(fn, {});
      } else {
        process.stdout.write(JSON.stringify(require("vm").runInNewContext(fn, context)) + "\n");
      }
    } catch(e) {
      console.error(e);
    }
    if (firstInput) {
      firstInput = false;
    } else {
      /*strings to indicate the process has finished*/
      console.log("r1cepzbhUTxtykz5XTC4");
      console.error("r1cepzbhUTxtykz5XTC4");
    }
    i = incoming.indexOf("\n");
  }
});
process.stdin.on("end", process.exit);

cwl_utils-0.37/cwl_utils/cwl_v1_0_expression_refactor.py:

#!/usr/bin/env python3
# SPDX-License-Identifier: Apache-2.0
# Copyright 2018-2021 Michael R.
# Crusoe
"""CWL Expression refactoring tool for CWL v1.0."""
import copy
import hashlib
import uuid
from collections.abc import Mapping, MutableSequence, Sequence
from typing import Any, Optional, Union, cast

from ruamel import yaml
from schema_salad.sourceline import SourceLine
from schema_salad.utils import json_dumps

import cwl_utils.parser.cwl_v1_0 as cwl
import cwl_utils.parser.cwl_v1_0_utils as utils
from cwl_utils.errors import JavascriptException, WorkflowException
from cwl_utils.expression import do_eval, interpolate
from cwl_utils.types import CWLObjectType, CWLOutputType


def expand_stream_shortcuts(process: cwl.CommandLineTool) -> cwl.CommandLineTool:
    """Rewrite the "type: stdout" shortcut to use an explicit random filename."""
    if not process.outputs:
        return process
    result = None
    for index, output in enumerate(process.outputs):
        if output.type_ == "stdout":  # TODO: add 'stdin' for CWL v1.1
            if not result:
                result = copy.deepcopy(process)
            stdout_path = process.stdout
            if not stdout_path:
                stdout_path = str(
                    hashlib.sha1(  # nosec
                        json_dumps(cwl.save(process)).encode("utf-8")
                    ).hexdigest()
                )
                result.stdout = stdout_path
            result.outputs[index].type_ = "File"
            output.outputBinding = cwl.CommandOutputBinding(stdout_path, None, None)
    if result:
        return result
    return process


def escape_expression_field(contents: str) -> str:
    """Escape sequences similar to CWL expressions or param references."""
    return contents.replace("${", "$/{").replace("$(", "$/(")


def clean_type_ids(
    cwltype: Union[cwl.ArraySchema, cwl.InputRecordSchema],
) -> Union[cwl.ArraySchema, cwl.InputRecordSchema]:
    """Simplify type identifiers."""
    result = copy.deepcopy(cwltype)
    if isinstance(result, cwl.ArraySchema):
        if isinstance(result.items, MutableSequence):
            for item in result.items:
                if hasattr(item, "id"):
                    item.id = item.id.split("#")[-1]
        elif isinstance(result.items, cwl.InputRecordSchema):
            if result.items.name:
                result.items.name = result.items.name.split("/")[-1]
            if result.items.fields:
                for field in
result.items.fields:
                    field.name = field.name.split("/")[-1]
    elif isinstance(result, cwl.InputRecordSchema):
        if result.name:
            result.name = result.name.split("/")[-1]
        if result.fields:
            for field in result.fields:
                field.name = field.name.split("/")[-1]
    return result


def get_expression(
    string: str, inputs: CWLObjectType, self: Optional[CWLOutputType]
) -> Optional[str]:
    """
    Find and return a normalized CWL expression, if any.

    CWL expressions in the $() form are converted to the ${} form.
    """
    if not isinstance(string, str):
        return None
    if string.strip().startswith("${"):
        return string
    if "$(" in string:
        runtime: CWLObjectType = {
            "cores": 0,
            "ram": 0,
            "outdir": "/root",
            "tmpdir": "/tmp",  # nosec
            "outdirSize": 0,
            "tmpdirSize": 0,
        }
        try:
            do_eval(
                string,
                inputs,
                context=self,
                requirements=[],
                outdir="",
                tmpdir="",
                resources={},
            )
        except (WorkflowException, JavascriptException):
            if (
                string[0:2] != "$("
                or not string.endswith(")")
                or len(string.split("$(")) > 2
            ):
                # then it is a string interpolation
                return cast(
                    str,
                    interpolate(
                        scan=string,
                        rootvars={
                            "inputs": inputs,
                            "context": self,
                            "runtime": runtime,
                        },
                        fullJS=True,
                        escaping_behavior=2,
                        convert_to_expression=True,
                    ),
                )
            else:
                # it is a CWL Expression in $() with no string interpolation
                return "${return " + string.strip()[2:-1] + ";}"
    return None


def etool_to_cltool(
    etool: cwl.ExpressionTool, expressionLib: Optional[list[str]] = None
) -> cwl.CommandLineTool:
    """Convert an ExpressionTool to a CommandLineTool."""
    inputs = yaml.comments.CommentedSeq()  # preserve the order
    for inp in etool.inputs:
        inputs.append(
            cwl.CommandInputParameter(
                id=inp.id,
                label=inp.label,
                secondaryFiles=inp.secondaryFiles,
                streamable=inp.streamable,
                doc=inp.doc,
                format=inp.format,
                default=inp.default,
                type_=inp.type_,
                extension_fields=inp.extension_fields,
                loadingOptions=inp.loadingOptions,
            )
        )
    outputs = yaml.comments.CommentedSeq()
    for outp in etool.outputs:
        outputs.append(
            cwl.CommandOutputParameter(
                id=outp.id,
                label=outp.label,
                secondaryFiles=outp.secondaryFiles,
                streamable=outp.streamable,
                doc=outp.doc,
                format=outp.format,
                type_=outp.type_,
                extension_fields=outp.extension_fields,
                loadingOptions=outp.loadingOptions,
            )
        )
    contents = """"use strict";
var inputs=$(inputs);
var runtime=$(runtime);"""
    if expressionLib:
        contents += "\n" + "\n".join(expressionLib)
    contents += (
        """
var ret = function(){"""
        + escape_expression_field(etool.expression.strip()[2:-1])
        + """}();
process.stdout.write(JSON.stringify(ret));"""
    )
    listing = [cwl.Dirent(entryname="expression.js", entry=contents, writable=None)]
    iwdr = cwl.InitialWorkDirRequirement(listing)
    containerReq = cwl.DockerRequirement(dockerPull="node:alpine")
    softwareHint = cwl.SoftwareRequirement(
        packages=[cwl.SoftwarePackage(package="nodejs")]
    )
    return cwl.CommandLineTool(
        inputs=inputs,
        outputs=outputs,
        id=etool.id,
        requirements=[iwdr],
        hints=[containerReq, softwareHint],
        label=etool.label,
        doc=etool.doc,
        cwlVersion=etool.cwlVersion,
        baseCommand=["nodejs", "expression.js"],
        stdout="cwl.output.json",
        extension_fields=etool.extension_fields,
        loadingOptions=etool.loadingOptions,
    )


def traverse(
    process: Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow],
    replace_etool: bool,
    inside: bool,
    skip_command_line1: bool,
    skip_command_line2: bool,
) -> tuple[Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow], bool]:
    """Convert the given process and any subprocesses."""
    if not inside and isinstance(process, cwl.CommandLineTool):
        process = expand_stream_shortcuts(process)
        wf_inputs = []
        wf_outputs = []
        step_inputs = []
        step_outputs = []
        if process.inputs:
            for inp in process.inputs:
                inp_id = inp.id.split("#")[-1]
                step_inputs.append(
                    cwl.WorkflowStepInput(
                        id=inp_id,
                        source=inp_id,
                        extension_fields=inp.extension_fields,
                        loadingOptions=inp.loadingOptions,
                    )
                )
                wf_inputs.append(
                    cwl.InputParameter(
                        id=inp_id,
                        label=inp.label,
                        secondaryFiles=inp.secondaryFiles,
                        streamable=inp.streamable,
                        doc=inp.doc,
                        format=inp.format,
                        default=inp.default,
                        type_=inp.type_,
                        extension_fields=inp.extension_fields,
                        loadingOptions=inp.loadingOptions,
                    )
                )
        if process.outputs:
            for outp in process.outputs:
                outp_id = outp.id.split("#")[-1]
                step_outputs.append(outp_id)
                wf_outputs.append(
                    cwl.WorkflowOutputParameter(
                        id=outp_id,
                        label=outp.label,
                        secondaryFiles=outp.secondaryFiles,
                        streamable=outp.streamable,
                        doc=outp.doc,
                        format=outp.format,
                        outputSource=f"main/{outp_id}",
                        type_=outp.type_,
                        extension_fields=outp.extension_fields,
                        loadingOptions=outp.loadingOptions,
                    )
                )
        step = cwl.WorkflowStep(
            id="#main",
            in_=step_inputs,
            out=step_outputs,
            run=copy.deepcopy(process),
        )
        workflow = cwl.Workflow(
            inputs=wf_inputs,
            outputs=wf_outputs,
            steps=[step],
            cwlVersion=process.cwlVersion,
        )
        result, modified = traverse_workflow(
            workflow, replace_etool, skip_command_line1, skip_command_line2
        )
        if modified:
            return result, True
        else:
            return process, False
    if isinstance(process, cwl.ExpressionTool) and replace_etool:
        expression = get_expression(process.expression, empty_inputs(process), None)
        # Why call get_expression on an ExpressionTool?
# It normalizes the form of $() CWL expressions into the ${} style if expression: process2 = copy.deepcopy(process) process2.expression = expression else: process2 = process return etool_to_cltool(process2), True if isinstance(process, cwl.Workflow): return traverse_workflow( process, replace_etool, skip_command_line1, skip_command_line2 ) return process, False def load_step( step: cwl.WorkflowStep, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """If the step's Process is not inline, load and process it.""" modified = False if isinstance(step.run, str): step.run, modified = traverse( cwl.load_document(step.run, baseuri=step.loadingOptions.fileuri), replace_etool, True, skip_command_line1, skip_command_line2, ) return modified def generate_etool_from_expr( expr: str, target: Union[cwl.CommandInputParameter, cwl.InputParameter], no_inputs: bool = False, self_type: Optional[ Union[ cwl.InputParameter, cwl.CommandInputParameter, list[Union[cwl.InputParameter, cwl.CommandInputParameter]], ] ] = None, # if the "self" input should be a different type than the "result" output extra_processes: Optional[ Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]] ] = None, ) -> cwl.ExpressionTool: """Convert a CWL Expression into an ExpressionTool.""" inputs = yaml.comments.CommentedSeq() if not no_inputs: if not self_type: self_type = target if isinstance(self_type, list): new_type: Union[ list[Union[cwl.ArraySchema, cwl.InputRecordSchema]], Union[cwl.ArraySchema, cwl.InputRecordSchema], ] = [clean_type_ids(t.type_) for t in self_type if t.type_] elif self_type.type_: new_type = clean_type_ids(self_type.type_) else: raise WorkflowException(f"Don't know how to make type from {self_type!r}.") inputs.append( cwl.InputParameter( id="self", label=self_type.label if not isinstance(self_type, list) else None, secondaryFiles=( self_type.secondaryFiles if not isinstance(self_type, list) else None ), streamable=( self_type.streamable if 
not isinstance(self_type, list) else None ), doc=self_type.doc if not isinstance(self_type, list) else None, format=self_type.format if not isinstance(self_type, list) else None, type_=new_type, extension_fields=( self_type.extension_fields if not isinstance(self_type, list) else None ), loadingOptions=( self_type.loadingOptions if not isinstance(self_type, list) else None ), ) ) outputs = yaml.comments.CommentedSeq() outputs.append( cwl.ExpressionToolOutputParameter( id="result", label=target.label, secondaryFiles=target.secondaryFiles, streamable=target.streamable, doc=target.doc, format=target.format, type_=target.type_, extension_fields=target.extension_fields, loadingOptions=target.loadingOptions, ) ) expression = "${" if not no_inputs: expression += "\n var self=inputs.self;" expression += ( """ return {"result": function(){""" + expr[2:-2] + """}()}; }""" ) inlineJSReq = cwl.InlineJavascriptRequirement( find_expressionLib(extra_processes) if extra_processes else None ) return cwl.ExpressionTool( id="_:" + str(uuid.uuid4()), inputs=inputs, outputs=outputs, expression=expression, requirements=[inlineJSReq], cwlVersion="v1.0", ) def get_input_for_id( name: str, tool: Union[cwl.CommandLineTool, cwl.Workflow] ) -> Optional[cwl.CommandInputParameter]: """Determine the CommandInputParameter for the given input name.""" name = name.split("/")[-1] for inp in cast(list[cwl.CommandInputParameter], tool.inputs): if inp.id and inp.id.split("#")[-1].split("/")[-1] == name: return inp if isinstance(tool, cwl.Workflow) and "/" in name: stepname, stem = name.split("/", 1) for step in tool.steps: if step.id == stepname: result = get_input_for_id(stem, step.run) if result: return result return None def find_expressionLib( processes: Sequence[ Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool, cwl.WorkflowStep] ], ) -> Optional[list[str]]: """ Return the expressionLib from the highest priority InlineJavascriptRequirement. 
processes: should be in order of least important to most important (Workflow, WorkflowStep, ... CommandLineTool/ExpressionTool) """ for process in reversed(copy.copy(processes)): if process.requirements: for req in process.requirements: if isinstance(req, cwl.InlineJavascriptRequirement): return cast(Optional[list[str]], copy.deepcopy(req.expressionLib)) return None def replace_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: Union[cwl.CommandInputParameter, cwl.InputParameter], source: Optional[Union[str, list[Any]]], replace_etool: bool = False, extra_process: Optional[ Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool] ] = None, source_type: Optional[cwl.CommandInputParameter] = None, ) -> None: """Modify the given workflow, replacing the expr with an standalone ExpressionTool.""" extra_processes: list[ Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool] ] = [workflow] if extra_process: extra_processes.append(extra_process) etool: cwl.ExpressionTool = generate_etool_from_expr( expr, target, source is None, source_type, extra_processes ) if replace_etool: processes: list[Union[cwl.WorkflowStep, cwl.Workflow, cwl.CommandLineTool]] = [ workflow ] if extra_process: processes.append(extra_process) final_tool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool( etool, find_expressionLib(processes) ) else: final_tool = etool inps = [] if source: inps.append(cwl.WorkflowStepInput(id="self", source=source)) workflow.steps.append( cwl.WorkflowStep( id=name, in_=inps, out=[cwl.WorkflowStepOutput("result")], run=final_tool, ) ) def replace_wf_input_ref_with_step_output( workflow: cwl.Workflow, name: str, target: str ) -> None: """Refactor all reference to a workflow input to the specified step output.""" if workflow.steps: for step in workflow.steps: if step.in_: for inp in step.in_: if inp.source: if inp.source == name: inp.source = target if isinstance(inp.source, MutableSequence): for index, source in 
enumerate(inp.source): if source == name: inp.source[index] = target if workflow.outputs: for outp in workflow.outputs: if outp.outputSource: if outp.outputSource == name: outp.outputSource = target if isinstance(outp.outputSource, MutableSequence): for index, outputSource in enumerate(outp.outputSource): if outputSource == name: outp.outputSource[index] = target def empty_inputs( process_or_step: Union[ cwl.CommandLineTool, cwl.WorkflowStep, cwl.ExpressionTool, cwl.Workflow ], parent: Optional[cwl.Workflow] = None, ) -> dict[str, Any]: """Produce a mock input object for the given inputs.""" result = {} if isinstance(process_or_step, cwl.Process): for param in process_or_step.inputs: result[param.id.split("#")[-1]] = example_input(param.type_) else: for param in process_or_step.in_: param_id = param.id.split("/")[-1] if param.source is None and param.valueFrom: result[param_id] = example_input("string") elif param.source is None and param.default: result[param_id] = param.default else: try: result[param_id] = example_input( utils.type_for_source(process_or_step.run, param.source, parent) ) except WorkflowException: pass return result def example_input(some_type: Any) -> Any: """Produce a fake input for the given type.""" # TODO: accept some sort of context object with local custom type definitions if some_type == "Directory": return { "class": "Directory", "location": "https://www.example.com/example", "basename": "example", "listing": [ { "class": "File", "basename": "example.txt", "size": 23, "contents": "hoopla", "nameroot": "example", "nameext": "txt", } ], } if some_type == "File": return { "class": "File", "location": "https://www.example.com/example.txt", "basename": "example.txt", "size": 23, "contents": "hoopla", "nameroot": "example", "nameext": "txt", } if some_type == "int": return 23 if some_type == "string": return "hoopla!" 
if some_type == "boolean": return True return None EMPTY_FILE: CWLOutputType = { "class": "File", "basename": "em.pty", "nameroot": "em", "nameext": "pty", } TOPLEVEL_SF_EXPR_ERROR = ( "Input '{}'. Sorry, CWL Expressions as part of a secondaryFiles " "specification in a Workflow level input or standalone CommandLine Tool " "are not able to be refactored into separate ExpressionTool or " "CommandLineTool steps." ) TOPLEVEL_FORMAT_EXPR_ERROR = ( "Input '{}'. Sorry, CWL Expressions as part of a format " "specification in a Workflow level input are not able to be refactored " "into separate ExpressionTool/CommandLineTool steps." ) def process_workflow_inputs_and_outputs( workflow: cwl.Workflow, replace_etool: bool ) -> bool: """Do any needed conversions on the given Workflow's inputs and outputs.""" modified = False inputs = empty_inputs(workflow) for index, param in enumerate(workflow.inputs): with SourceLine(workflow.inputs, index, WorkflowException): if param.format and get_expression(param.format, inputs, None): raise SourceLine( param.loadingOptions.original_doc, "format", raise_type=WorkflowException, ).makeError(TOPLEVEL_FORMAT_EXPR_ERROR.format(param.id.split("#")[-1])) if param.secondaryFiles: if get_expression(param.secondaryFiles, inputs, EMPTY_FILE): raise SourceLine( param.loadingOptions.original_doc, "secondaryFiles", raise_type=WorkflowException, ).makeError(TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1])) elif isinstance(param.secondaryFiles, MutableSequence): for index2, entry in enumerate(param.secondaryFiles): if get_expression(entry, inputs, EMPTY_FILE): raise SourceLine( param.loadingOptions.original_doc, index2, raise_type=WorkflowException, ).makeError( f"Entry {index}," + TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1]) ) return modified def process_workflow_reqs_and_hints( workflow: cwl.Workflow, replace_etool: bool ) -> bool: """ Convert any expressions in a workflow's reqs and hints. 
Each expression will be converted to an additional step. The converted requirement will be copied to all workflow steps that don't have that requirement type. Those affected steps will gain an additional input from the relevant synthesized expression step. """ # TODO: consolidate the generated etools/cltools into a single "_expression_workflow_reqs" step # TODO: support resourceReq.* references to Workflow.inputs? # ^ By refactoring replace_expr_etool to allow multiple inputs, # and connecting all workflow inputs to the generated step modified = False inputs = empty_inputs(workflow) generated_res_reqs: list[tuple[str, Union[int, str]]] = [] generated_iwdr_reqs: list[tuple[str, Union[int, str]]] = [] generated_envVar_reqs: list[tuple[str, Union[int, str]]] = [] prop_reqs: tuple[ Union[ type[cwl.EnvVarRequirement], type[cwl.ResourceRequirement], type[cwl.InitialWorkDirRequirement], ], ..., ] = () resourceReq: Optional[cwl.ResourceRequirement] = None envVarReq: Optional[cwl.EnvVarRequirement] = None iwdr: Optional[cwl.InitialWorkDirRequirement] = None if workflow.requirements is not None: for req in cast(list[cwl.ProcessRequirement], workflow.requirements): if req and isinstance(req, cwl.EnvVarRequirement): if req.envDef: for index, envDef in enumerate(req.envDef): if envDef.envValue: expression = get_expression(envDef.envValue, inputs, None) if expression: modified = True target = cwl.InputParameter( id=None, type_="string", ) etool_id = ( "_expression_workflow_EnvVarRequirement_{}".format( index ) ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) if envVarReq is None: envVarReq = copy.deepcopy(req) prop_reqs += (cwl.EnvVarRequirement,) newEnvDef = copy.deepcopy(envDef) newEnvDef.envValue = f"$(inputs._envDef{index})" envVarReq.envDef[index] = newEnvDef generated_envVar_reqs.append((etool_id, index)) if req and isinstance(req, cwl.ResourceRequirement): for attr in cwl.ResourceRequirement.attrs: this_attr = getattr(req, attr, 
None) if this_attr: expression = get_expression(this_attr, inputs, None) if expression: modified = True target = cwl.InputParameter(id=None, type_="long") etool_id = ( "_expression_workflow_ResourceRequirement_{}".format( attr ) ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) if not resourceReq: resourceReq = cwl.ResourceRequirement( loadingOptions=workflow.loadingOptions, ) prop_reqs += (cwl.ResourceRequirement,) setattr(resourceReq, attr, f"$(inputs._{attr})") generated_res_reqs.append((etool_id, attr)) if req and isinstance(req, cwl.InitialWorkDirRequirement): if req.listing: if isinstance(req.listing, str): expression = get_expression(req.listing, inputs, None) if expression: modified = True target = cwl.InputParameter( id=None, type_=cwl.InputArraySchema( ["File", "Directory"], "array", None, None ), ) etool_id = "_expression_workflow_InitialWorkDirRequirement" replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr = cwl.InitialWorkDirRequirement( listing="$(inputs._iwdr_listing)", loadingOptions=workflow.loadingOptions, ) prop_reqs += (cwl.InitialWorkDirRequirement,) else: iwdr = copy.deepcopy(req) for index, entry in enumerate(req.listing): expression = get_expression(entry, inputs, None) if expression: modified = True target = cwl.InputParameter( id=None, type_=cwl.InputArraySchema( ["File", "Directory"], "array", None, None ), ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = f"$(inputs._iwdr_listing_{index}" generated_iwdr_reqs.append((etool_id, index)) elif isinstance(entry, cwl.Dirent): if entry.entry: expression = get_expression( entry.entry, inputs, None ) if expression: expr: str = expression expr_result = do_eval( ex=entry.entry, jobinput=inputs, requirements=[], outdir="", tmpdir="", resources={}, ) modified = True if ( 
isinstance(expr_result, Mapping) and "class" in expr_result and ( expr_result["class"] == "File" or expr_result["class"] == "Directory" ) ): target = cwl.InputParameter( id=None, type_=expr_result["class"], ) replace_expr_with_etool( expr, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( "$(inputs._iwdr_listing_{}".format( index ) ) generated_iwdr_reqs.append( (etool_id, index) ) elif isinstance(expr_result, str): target = cwl.InputParameter( id=None, type_=["File"], ) if entry.entryname is None: raise SourceLine( entry.loadingOptions.original_doc, index, raise_type=WorkflowException, ).makeError( f"Entry {index}," + "Invalid CWL, if 'entry' " "is a string, then entryName must be specified." ) expr = ( '${return {"class": "File", "basename": "' + entry.entryname + '", "contents": (function(){' + expr[2:-1] + "})() }; }" ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expr, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( f"$(inputs._iwdr_listing_{index}" ) generated_iwdr_reqs.append((etool_id, index)) elif entry.entryname: expression = get_expression( entry.entryname, inputs, None ) if expression: modified = True target = cwl.InputParameter( id=None, type_="string", ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( f"$(inputs._iwdr_listing_{index}" ) generated_iwdr_reqs.append((etool_id, index)) if generated_iwdr_reqs: prop_reqs += (cwl.InitialWorkDirRequirement,) else: iwdr = None if envVarReq and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.EnvVarRequirement): continue else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(envVarReq) for entry in 
generated_envVar_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_envDef{entry[1]}", source=f"{entry[0]}/result", ) ) if resourceReq and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.ResourceRequirement): continue else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(resourceReq) for entry in generated_res_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_{entry[1]}", source=f"{entry[0]}/result", ) ) if iwdr and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.InitialWorkDirRequirement): continue else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(iwdr) if generated_iwdr_reqs: for entry in generated_iwdr_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_iwdr_listing_{index}", source=f"{entry[0]}/result", ) ) else: step.in_.append( cwl.WorkflowStepInput( id="_iwdr_listing", source="_expression_workflow_InitialWorkDirRequirement/result", ) ) if workflow.requirements: workflow.requirements[:] = [ x for x in workflow.requirements if not isinstance(x, prop_reqs) ] return modified def process_level_reqs( process: cwl.CommandLineTool, step: cwl.WorkflowStep, parent: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Convert expressions inside a process into new adjacent steps.""" # This is for reqs inside a Process (CommandLineTool, ExpressionTool) # differences from process_workflow_reqs_and_hints() are: # - the name of the generated ETools/CTools contains the name of the step, not "workflow" # - Generated ETools/CTools are adjacent steps # - Replace the CWL Expression inplace with a CWL parameter reference # - Don't create a new Requirement, nor delete the existing Requirement # - the Process is 
passed to replace_expr_with_etool for later searching for JS expressionLibs # - in addition to adding the input to the step for the ETool/CTool result, # add it to the Process.inputs as well if not process.requirements: return False modified = False target_process = step.run inputs = empty_inputs(process) generated_res_reqs: list[tuple[str, str]] = [] generated_iwdr_reqs: list[tuple[str, Union[int, str], Any]] = [] generated_envVar_reqs: list[tuple[str, Union[int, str]]] = [] if not step.id: return False step_name = step.id.split("#", 1)[-1] for req_index, req in enumerate(process.requirements): if req and isinstance(req, cwl.EnvVarRequirement): if req.envDef: for env_index, envDef in enumerate(req.envDef): if envDef.envValue: expression = get_expression(envDef.envValue, inputs, None) if expression: modified = True target = cwl.InputParameter(id=None, type_="string") etool_id = "_expression_{}_EnvVarRequirement_{}".format( step_name, env_index ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index][ env_index ].envValue = f"$(inputs._envDef{env_index})" generated_envVar_reqs.append((etool_id, env_index)) if req and isinstance(req, cwl.ResourceRequirement): for attr in cwl.ResourceRequirement.attrs: this_attr = getattr(req, attr, None) if this_attr: expression = get_expression(this_attr, inputs, None) if expression: modified = True target = cwl.InputParameter(id=None, type_="long") etool_id = "_expression_{}_ResourceRequirement_{}".format( step_name, attr ) replace_clt_hintreq_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) setattr( target_process.requirements[req_index], attr, f"$(inputs._{attr})", ) generated_res_reqs.append((etool_id, attr)) if ( not skip_command_line2 and req and isinstance(req, cwl.InitialWorkDirRequirement) ): if req.listing: if isinstance(req.listing, str): expression = get_expression(req.listing, inputs, None) if expression: modified 
= True target_type = cwl.InputArraySchema( ["File", "Directory"], "array", None, None ) target = cwl.InputParameter(id=None, type_=target_type) etool_id = "_expression_{}_InitialWorkDirRequirement".format( step_name ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].listing = ( "$(inputs._iwdr_listing)", ) step.in_.append( cwl.WorkflowStepInput( id="_iwdr_listing", source=f"{etool_id}/result", ) ) add_input_to_process( target_process, "_iwdr_listing", target_type, process.loadingOptions, ) else: for listing_index, entry in enumerate(req.listing): expression = get_expression(entry, inputs, None) if expression: modified = True target_type = cwl.InputArraySchema( ["File", "Directory"], "array", None, None ) target = cwl.InputParameter( id=None, type_=target_type, ) etool_id = ( "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].listing[ listing_index ] = f"$(inputs._iwdr_listing_{listing_index})" generated_iwdr_reqs.append( (etool_id, listing_index, target_type) ) elif isinstance(entry, cwl.Dirent): if entry.entry: expression = get_expression(entry.entry, inputs, None) if expression: modified = True if entry.entryname is not None: entryname_expr = get_expression( entry.entryname, inputs, None ) entryname = ( entry.entryname if entryname_expr else f'"{entry.entryname}"' # noqa: B907 ) new_expression = ( "${var result; var entryname = " + entryname + "; var entry = " + entry.entry[2:-1] + """; if (typeof entry === 'string' || entry instanceof String) { result = {"class": "File", "basename": entryname, "contents": entry} ; if (typeof entryname === 'string' || entryname instanceof String) { result.basename = entryname ; } } else { result = entry ; } return result; }""" ) else: new_expression = expression d_target_type =
["File", "Directory"] target = cwl.InputParameter( id=None, type_=d_target_type, ) etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) replace_clt_hintreq_expr_with_etool( new_expression, etool_id, parent, target, step, replace_etool, ) target_process.requirements[req_index].listing[ listing_index ].entry = "$(inputs._iwdr_listing_{})".format( listing_index ) generated_iwdr_reqs.append( (etool_id, listing_index, d_target_type) ) elif entry.entryname: expression = get_expression( entry.entryname, inputs, None ) if expression: modified = True target = cwl.InputParameter( id=None, type_="string", ) etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].listing[ listing_index ].entryname = "$(inputs._iwdr_listing_{})".format( listing_index ) generated_iwdr_reqs.append( (etool_id, listing_index, "string") ) for entry in generated_envVar_reqs: name = f"_envDef{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, "string", process.loadingOptions) for entry in generated_res_reqs: name = f"_{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, "long", process.loadingOptions) for entry in generated_iwdr_reqs: name = f"_iwdr_listing_{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, entry[2], process.loadingOptions) return modified def add_input_to_process( process: cwl.Process, name: str, inptype: Any, loadingOptions: cwl.LoadingOptions ) -> None: """Add a new InputParameter to the given CommandLineTool.""" if isinstance(process, cwl.CommandLineTool): process.inputs.append( cwl.CommandInputParameter( id=name, type_=inptype, 
loadingOptions=loadingOptions, ) ) def traverse_CommandLineTool( clt: cwl.CommandLineTool, parent: cwl.Workflow, step: cwl.WorkflowStep, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Extract any CWL Expressions within the given CommandLineTool into sibling steps.""" modified = False # don't modify clt, modify step.run target_clt = step.run inputs = empty_inputs(clt) if not step.id: return False step_id = step.id.split("#")[-1] if clt.arguments and not skip_command_line1: for index, arg in enumerate(clt.arguments): if isinstance(arg, str): expression = get_expression(arg, inputs, None) if expression: modified = True inp_id = f"_arguments_{index}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "Any" target = cwl.InputParameter(id=None, type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) target_clt.arguments[index] = cwl.CommandLineBinding( valueFrom=f"$(inputs.{inp_id})" ) target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=target_type, ) ) step.in_.append( cwl.WorkflowStepInput( f"{etool_id}/result", None, inp_id, None, None ) ) remove_JSReq(target_clt, skip_command_line1) elif isinstance(arg, cwl.CommandLineBinding) and arg.valueFrom: expression = get_expression(arg.valueFrom, inputs, None) if expression: modified = True inp_id = f"_arguments_{index}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "Any" target = cwl.InputParameter(id=None, type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) target_clt.arguments[index].valueFrom = "$(inputs.{})".format( inp_id ) target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=target_type, ) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) remove_JSReq(target_clt, skip_command_line1) for streamtype in "stdout", "stderr": # add 'stdin' for v1.1 version stream_value = getattr(clt, 
streamtype) if stream_value: expression = get_expression(stream_value, inputs, None) if expression: modified = True inp_id = f"_{streamtype}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "string" target = cwl.InputParameter(id=None, type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) setattr(target_clt, streamtype, f"$(inputs.{inp_id})") target_clt.inputs.append( cwl.CommandInputParameter(id=inp_id, type_=target_type) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) for inp in clt.inputs: if not skip_command_line1 and inp.inputBinding and inp.inputBinding.valueFrom: expression = get_expression( inp.inputBinding.valueFrom, inputs, example_input(inp.type_) ) if expression: modified = True self_id = inp.id.split("#")[-1] inp_id = f"_{self_id}_valueFrom" etool_id = f"_expression_{step_id}{inp_id}" replace_step_clt_expr_with_etool( expression, etool_id, parent, inp, step, replace_etool, self_id ) inp.inputBinding.valueFrom = f"$(inputs.{inp_id})" target_clt.inputs.append( cwl.CommandInputParameter(id=inp_id, type_=inp.type_) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) for outp in clt.outputs: if outp.outputBinding: if outp.outputBinding.glob: expression = get_expression(outp.outputBinding.glob, inputs, None) if expression: modified = True inp_id = "_{}_glob".format(outp.id.split("#")[-1]) etool_id = f"_expression_{step_id}{inp_id}" glob_target_type = ["string", cwl.ArraySchema("string", "array")] target = cwl.InputParameter(id=None, type_=glob_target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) outp.outputBinding.glob = f"$(inputs.{inp_id})" target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=glob_target_type, ) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) if outp.outputBinding.outputEval and not skip_command_line2: 
self: CWLOutputType = [ { "class": "File", "basename": "base.name", "nameroot": "base", "nameext": "name", "path": "/tmp/base.name", # nosec "dirname": "/tmp", # nosec } ] if outp.outputBinding.loadContents: cast(dict[Any, Any], self)[0]["contents"] = "stuff" expression = get_expression(outp.outputBinding.outputEval, inputs, self) if expression: modified = True outp_id = outp.id.split("#")[-1] inp_id = f"_{outp_id}_outputEval" etool_id = f"expression{inp_id}" sub_wf_outputs = cltool_step_outputs_to_workflow_outputs( step, etool_id, outp_id ) self_type = cwl.InputParameter( id=None, type_=cwl.InputArraySchema("File", "array", None, None), ) etool = generate_etool_from_expr( expression, outp, False, self_type, [clt, step, parent] ) if outp.outputBinding.loadContents: etool.inputs[0].type_.inputBinding = cwl.CommandLineBinding( loadContents=True ) etool.inputs.extend(cltool_inputs_to_etool_inputs(clt)) sub_wf_inputs = cltool_inputs_to_etool_inputs(clt) orig_step_inputs = copy.deepcopy(step.in_) for orig_step_input in orig_step_inputs: orig_step_input.id = orig_step_input.id.split("/")[-1] if isinstance(orig_step_input.source, MutableSequence): for index, source in enumerate(orig_step_input.source): orig_step_input.source[index] = source.split("#")[-1] else: orig_step_input.source = orig_step_input.source.split("#")[ -1 ] orig_step_inputs[:] = [ x for x in orig_step_inputs if not x.id.startswith("_") ] for inp in orig_step_inputs: inp.source = inp.id inp.linkMerge = None if replace_etool: processes = [parent] final_etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = ( etool_to_cltool(etool, find_expressionLib(processes)) ) else: final_etool = etool etool_step = cwl.WorkflowStep( id=etool_id, in_=orig_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=final_etool, scatterMethod=step.scatterMethod, ) new_clt_step = copy.copy( step ) # a deepcopy would be convenient, but params2.cwl gives it problems new_clt_step.id = new_clt_step.id.split("#")[-1] 
new_clt_step.run = copy.copy(step.run) new_clt_step.run.id = None remove_JSReq(new_clt_step.run, skip_command_line1) for new_outp in new_clt_step.run.outputs: if new_outp.id.split("#")[-1] == outp_id: if isinstance( new_outp, ( cwl.WorkflowOutputParameter, cwl.ExpressionToolOutputParameter, ), ): new_outp.type_ = cwl.OutputArraySchema( items="File", type_="array" ) elif isinstance(new_outp, cwl.CommandOutputParameter): if new_outp.outputBinding: new_outp.outputBinding.outputEval = None new_outp.outputBinding.loadContents = None new_outp.type_ = cwl.CommandOutputArraySchema( items="File", type_="array", ) else: raise Exception( "Unimplemented OutputParameter type: %s", type(new_outp), ) new_clt_step.in_ = copy.deepcopy(step.in_) for inp in new_clt_step.in_: inp.id = inp.id.split("/")[-1] inp.source = inp.id inp.linkMerge = None for index, out in enumerate(new_clt_step.out): new_clt_step.out[index] = out.split("/")[-1] for tool_inp in new_clt_step.run.inputs: tool_inp.id = tool_inp.id.split("#")[-1] for tool_out in new_clt_step.run.outputs: tool_out.id = tool_out.id.split("#")[-1] sub_wf_steps = [new_clt_step, etool_step] sub_workflow = cwl.Workflow( inputs=sub_wf_inputs, outputs=sub_wf_outputs, steps=sub_wf_steps, cwlVersion=parent.cwlVersion, ) if step.scatter: new_clt_step.scatter = None step.run = sub_workflow rename_step_source( sub_workflow, f"{step_id}/{outp_id}", f"{etool_id}/result", ) orig_step_inputs.append( cwl.WorkflowStepInput(id="self", source=f"{step_id}/{outp_id}") ) if not parent.requirements: parent.requirements = [cwl.SubworkflowFeatureRequirement()] else: has_sub_wf_req = False for req in parent.requirements: if isinstance(req, cwl.SubworkflowFeatureRequirement): has_sub_wf_req = True if not has_sub_wf_req: parent.requirements.append( cwl.SubworkflowFeatureRequirement() ) return modified def rename_step_source(workflow: cwl.Workflow, old: str, new: str) -> None: """Update step source names to the new name.""" def simplify_wf_id(uri: str) -> str: 
return uri.split("#")[-1].split("/", 1)[1] def simplify_step_id(uri: str) -> str: return uri.split("#")[-1] for wf_outp in workflow.outputs: if wf_outp.outputSource and simplify_wf_id(wf_outp.outputSource) == old: wf_outp.outputSource = new for step in workflow.steps: if step.in_: for inp in step.in_: if inp.source: if isinstance(inp.source, str): source_id = ( simplify_step_id(inp.source) if "#" in inp.source else inp.source ) if source_id == old: inp.source = new else: for index, source in enumerate(inp.source): if simplify_step_id(source) == old: inp.source[index] = new def remove_JSReq( process: Union[cwl.CommandLineTool, cwl.WorkflowStep, cwl.Workflow], skip_command_line1: bool, ) -> None: """Since the InlineJavascriptRequirement is no longer needed, remove it.""" if skip_command_line1 and isinstance(process, cwl.CommandLineTool): return if process.hints: process.hints[:] = [ hint for hint in process.hints if not isinstance(hint, cwl.InlineJavascriptRequirement) ] if not process.hints: process.hints = None if process.requirements: process.requirements[:] = [ req for req in process.requirements if not isinstance(req, cwl.InlineJavascriptRequirement) ] if not process.requirements: process.requirements = None def replace_step_clt_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.InputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> None: """Convert a step-level CWL Expression to a sibling expression step.""" etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] = [x for x
in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) def replace_clt_hintreq_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.InputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> Union[cwl.CommandLineTool, cwl.ExpressionTool]: """Factor out an expression inside a CommandLineTool req or hint into a sibling step.""" # Same as replace_step_clt_expr_with_etool or different? etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] = [x for x in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) return etool def cltool_inputs_to_etool_inputs( tool: cwl.CommandLineTool, ) -> list[cwl.InputParameter]: """Copy CommandLineTool input objects into the equivalent ExpressionTool input objects.""" inputs = yaml.comments.CommentedSeq() if tool.inputs: for clt_inp in tool.inputs: clt_inp_id = clt_inp.id.split("#")[-1].split("/")[-1] if not clt_inp_id.startswith("_"): inputs.append( cwl.InputParameter( id=clt_inp_id, label=clt_inp.label, secondaryFiles=clt_inp.secondaryFiles, streamable=clt_inp.streamable, doc=clt_inp.doc, format=clt_inp.format, default=clt_inp.default, type_=clt_inp.type_, extension_fields=clt_inp.extension_fields, loadingOptions=clt_inp.loadingOptions, ) ) return inputs def cltool_step_outputs_to_workflow_outputs( cltool_step: 
cwl.WorkflowStep, etool_step_id: str, etool_out_id: str ) -> list[cwl.OutputParameter]: """ Copy CommandLineTool outputs into the equivalent Workflow output parameters. Connects the outputSources for each of the new output parameters to the step they came from. """ outputs = yaml.comments.CommentedSeq() if not cltool_step.id: raise WorkflowException(f"Missing step id from {cltool_step}.") default_step_id = cltool_step.id.split("#")[-1] if cltool_step.run.outputs: for clt_out in cltool_step.run.outputs: clt_out_id = clt_out.id.split("#")[-1].split("/")[-1] if clt_out_id == etool_out_id: outputSource = f"{etool_step_id}/result" else: outputSource = f"{default_step_id}/{clt_out_id}" if not clt_out_id.startswith("_"): outputs.append( cwl.WorkflowOutputParameter( id=clt_out_id, label=clt_out.label, secondaryFiles=clt_out.secondaryFiles, streamable=clt_out.streamable, doc=clt_out.doc, format=clt_out.format, outputSource=outputSource, type_=clt_out.type_, extension_fields=clt_out.extension_fields, loadingOptions=clt_out.loadingOptions, ) ) return outputs def generate_etool_from_expr2( expr: str, target: cwl.InputParameter, inputs: Sequence[Union[cwl.InputParameter, cwl.CommandInputParameter]], self_name: Optional[str] = None, process: Optional[Union[cwl.CommandLineTool, cwl.ExpressionTool]] = None, extra_processes: Optional[ Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]] ] = None, ) -> cwl.ExpressionTool: """Generate an ExpressionTool to achieve the same result as the given expression.""" outputs = yaml.comments.CommentedSeq() outputs.append( cwl.ExpressionToolOutputParameter( id="result", label=target.label, secondaryFiles=target.secondaryFiles, streamable=target.streamable, doc=target.doc, format=target.format, type_=target.type_, ) ) expression = "${" if self_name: expression += f"\n var self=inputs.{self_name};" expression += ( """ return {"result": function(){""" + expr[2:-2] + """}()}; }""" ) hints = None procs: list[ Union[cwl.CommandLineTool, 
cwl.ExpressionTool, cwl.Workflow, cwl.WorkflowStep] ] = [] if process: procs.append(process) if extra_processes: procs.extend(extra_processes) inlineJSReq = cwl.InlineJavascriptRequirement(find_expressionLib(procs)) reqs = [inlineJSReq] if process: if process.hints: hints = copy.deepcopy(process.hints) hints[:] = [ x for x in hints if not isinstance(x, cwl.InitialWorkDirRequirement) ] if process.requirements: reqs.extend(copy.deepcopy(process.requirements)) reqs[:] = [ x for x in reqs if not isinstance(x, cwl.InitialWorkDirRequirement) ] return cwl.ExpressionTool( id="_:" + str(uuid.uuid4()), inputs=inputs, outputs=outputs, expression=expression, requirements=reqs, cwlVersion="v1.0", ) def traverse_step( step: cwl.WorkflowStep, parent: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Process the given WorkflowStep.""" modified = False inputs = empty_inputs(step, parent) if not step.id: return False step_id = step.id.split("#")[-1] original_process = copy.deepcopy(step.run) original_step_ins = copy.deepcopy(step.in_) for inp in step.in_: if inp.valueFrom: if not inp.source: self = None else: if isinstance(inp.source, MutableSequence): self = [] for source in inp.source: if not step.scatter: self.append( example_input( utils.type_for_source(parent, source.split("#")[-1]) ) ) else: scattered_source_type = utils.type_for_source( parent, source ) if isinstance(scattered_source_type, list): for stype in scattered_source_type: self.append(example_input(stype.type_)) else: self.append(example_input(scattered_source_type.type_)) else: if not step.scatter: self = example_input( utils.type_for_source(parent, inp.source.split("#")[-1]) ) else: scattered_source_type2 = utils.type_for_source( parent, inp.source ) if isinstance(scattered_source_type2, list): self = example_input(scattered_source_type2[0].type_) else: self = example_input(scattered_source_type2.type_) expression = get_expression(inp.valueFrom, inputs, self) if 
expression: modified = True etool_id = "_expression_{}_{}".format(step_id, inp.id.split("/")[-1]) target = get_input_for_id(inp.id, original_process) if not target: raise WorkflowException("target not found") input_source_id = None source_type: Optional[ Union[list[cwl.InputParameter], cwl.InputParameter] ] = None if inp.source: if isinstance(inp.source, MutableSequence): input_source_id = [] source_types: list[cwl.InputParameter] = [] for source in inp.source: source_id = source.split("#")[-1] input_source_id.append(source_id) temp_type = utils.type_for_source( step.run, source_id, parent ) if isinstance(temp_type, list): for ttype in temp_type: if ttype not in source_types: source_types.append(ttype) else: if temp_type not in source_types: source_types.append(temp_type) source_type = cwl.InputParameter( id=None, type_=cwl.ArraySchema(source_types, "array"), ) else: input_source_id = inp.source.split("#")[-1] source_type = utils.param_for_source_id( step.run, input_source_id, parent ) # target.id = target.id.split('#')[-1] if isinstance(original_process, cwl.ExpressionTool): found_JSReq = False reqs: list[cwl.ProcessRequirement] = [] if original_process.hints: reqs.extend(original_process.hints) if original_process.requirements: reqs.extend(original_process.requirements) for req in reqs: if isinstance(req, cwl.InlineJavascriptRequirement): found_JSReq = True if not found_JSReq: if not step.run.requirements: step.run.requirements = [] expr_lib = find_expressionLib([parent]) step.run.requirements.append( cwl.InlineJavascriptRequirement(expr_lib) ) replace_step_valueFrom_expr_with_etool( expression, etool_id, parent, target, step, inp, original_process, original_step_ins, input_source_id, replace_etool, source_type, ) inp.valueFrom = None inp.source = f"{etool_id}/result" # TODO: skip or special process for sub workflows? 
process_modified = process_level_reqs( original_process, step, parent, replace_etool, skip_command_line1, skip_command_line2, ) if process_modified: modified = True if isinstance(original_process, cwl.CommandLineTool): clt_modified = traverse_CommandLineTool( original_process, parent, step, replace_etool, skip_command_line1, skip_command_line2, ) if clt_modified: modified = True return modified def workflow_step_to_InputParameters( step_ins: list[cwl.WorkflowStepInput], parent: cwl.Workflow, except_in_id: str ) -> list[cwl.InputParameter]: """Create InputParameters to match the given WorkflowStep inputs.""" params = [] for inp in step_ins: if not inp.id: continue inp_id = inp.id.split("#")[-1].split("/")[-1] if inp.source and inp_id != except_in_id: param = copy.deepcopy( utils.param_for_source_id(parent, sourcenames=inp.source) ) if isinstance(param, list): for p in param: if not p.type_: raise WorkflowException( f"Don't know how to get type id for {p!r}." ) p.id = inp_id p.type_ = clean_type_ids(p.type_) params.append(p) else: if not param.type_: raise WorkflowException( f"Don't know how to get type id for {param!r}." 
) param.id = inp_id param.type_ = clean_type_ids(param.type_) params.append(param) return params def replace_step_valueFrom_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: Union[cwl.CommandInputParameter, cwl.InputParameter], step: cwl.WorkflowStep, step_inp: cwl.WorkflowStepInput, original_process: Union[cwl.CommandLineTool, cwl.ExpressionTool], original_step_ins: list[cwl.WorkflowStepInput], source: Optional[Union[str, list[str]]], replace_etool: bool, source_type: Optional[Union[cwl.InputParameter, list[cwl.InputParameter]]] = None, ) -> None: """Replace a WorkflowStep level 'valueFrom' expression with a sibling ExpressionTool step.""" if not step_inp.id: raise WorkflowException(f"Missing id in {step_inp}.") step_inp_id = step_inp.id.split("/")[-1] etool_inputs = workflow_step_to_InputParameters( original_step_ins, workflow, step_inp_id ) if source: source_param = cwl.InputParameter(id="self", type_="Any") # TODO: would be nicer to derive a proper type; but in the face of linkMerge, this is easier for now etool_inputs.append(source_param) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, "self" if source else None, original_process, [workflow, step], ) if replace_etool: processes: list[ Union[ cwl.Workflow, cwl.CommandLineTool, cwl.ExpressionTool, cwl.WorkflowStep ] ] = [ workflow, step, ] cltool = etool_to_cltool(temp_etool, find_expressionLib(processes)) etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = cltool else: etool = temp_etool wf_step_inputs = copy.deepcopy(original_step_ins) if source: wf_step_inputs.append(cwl.WorkflowStepInput(id="self", source=step_inp.source)) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] if wf_step_input.valueFrom: wf_step_input.valueFrom = None if wf_step_input.source: if isinstance(wf_step_input.source, MutableSequence): for index, inp_source in enumerate(wf_step_input.source): wf_step_input.source[index] = inp_source.split("#")[-1] else: 
wf_step_input.source = wf_step_input.source.split("#")[-1] wf_step_inputs[:] = [ x for x in wf_step_inputs if x.id and not (x.id.startswith("_") or x.id.endswith(step_inp_id)) ] scatter = copy.deepcopy(step.scatter) if isinstance(scatter, str): scatter = [scatter] if isinstance(scatter, MutableSequence): for index, entry in enumerate(scatter): scatter[index] = entry.split("/")[-1] if scatter and step_inp_id in scatter: scatter = ["self"] # do we still need to scatter? else: scatter = None workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, scatter=scatter, scatterMethod=step.scatterMethod, ) ) def traverse_workflow( workflow: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> tuple[cwl.Workflow, bool]: """Traverse a workflow, processing each step.""" modified = False for index, step in enumerate(workflow.steps): if isinstance(step.run, cwl.ExpressionTool) and replace_etool: workflow.steps[index].run = etool_to_cltool(step.run) modified = True else: step_modified = load_step( step, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True for step in workflow.steps: if not step.id.startswith("_expression"): step_modified = traverse_step( step, workflow, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True if process_workflow_inputs_and_outputs(workflow, replace_etool): modified = True if process_workflow_reqs_and_hints(workflow, replace_etool): modified = True if workflow.requirements: workflow.requirements[:] = [ x for x in workflow.requirements if not isinstance( x, (cwl.InlineJavascriptRequirement, cwl.StepInputExpressionRequirement) ) ] else: workflow.requirements = None return workflow, modified
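The refactoring above leans on `get_expression` (defined later in this module) to convert a lone `$()` CWL parameter reference into the `${}` JavaScript-expression form before extracting it into a sibling step. A minimal, stdlib-only sketch of that fallback branch follows; the function name is hypothetical, and string interpolation plus the `do_eval` check from the real implementation are omitted:

```python
from typing import Optional


def normalize_simple_expression(string: str) -> Optional[str]:
    """Return the ${} form of a lone $() expression, or None if not applicable.

    Sketch only: mirrors the simple-case branch of get_expression, where a
    string that is exactly one $() reference becomes "${return ...;}".
    """
    stripped = string.strip()
    if stripped.startswith("${"):
        return string  # already in ${} form
    # Only the simple case: a single $() spanning the whole string.
    # Interpolations such as "prefix-$(inputs.x)" are out of scope here.
    if (
        stripped.startswith("$(")
        and stripped.endswith(")")
        and len(stripped.split("$(")) == 2
    ):
        return "${return " + stripped[2:-1] + ";}"
    return None


print(normalize_simple_expression("$(inputs.threads)"))  # ${return inputs.threads;}
```

The resulting `${}` body is what `generate_etool_from_expr` later wraps into an ExpressionTool's `expression` field.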
#!/usr/bin/env python3 # SPDX-License-Identifier: Apache-2.0 # Copyright 2018-2021 Michael R. Crusoe """CWL Expression refactoring tool for CWL v1.1.""" import copy import hashlib import uuid from collections.abc import Mapping, MutableSequence, Sequence from typing import Any, Optional, Union, cast from ruamel import yaml from schema_salad.sourceline import SourceLine from schema_salad.utils import json_dumps import cwl_utils.parser.cwl_v1_1 as cwl import cwl_utils.parser.cwl_v1_1_utils as utils from cwl_utils.errors import JavascriptException, WorkflowException from cwl_utils.expression import do_eval, interpolate from cwl_utils.types import CWLObjectType, CWLOutputType def expand_stream_shortcuts(process: cwl.CommandLineTool) -> cwl.CommandLineTool: """Rewrite the "type: stdout" shortcut to use an explicit random filename.""" if not process.outputs: return process result = None for index, output in enumerate(process.outputs): if output.type_ == "stdout": # TODO: add 'stdin' for CWL v1.1 if not result: result = copy.deepcopy(process) stdout_path = process.stdout if not stdout_path: stdout_path = str( hashlib.sha1( # nosec json_dumps(cwl.save(process)).encode("utf-8") ).hexdigest() ) result.stdout = stdout_path result.outputs[index].type_ = "File" output.outputBinding = cwl.CommandOutputBinding(stdout_path, None, None) if result: return result return process def escape_expression_field(contents: str) -> str: """Escape sequences similar to CWL expressions or param references.""" return contents.replace("${", "$/{").replace("$(", "$/(") def clean_type_ids( cwltype: Union[cwl.ArraySchema, cwl.InputRecordSchema], ) -> Union[cwl.ArraySchema, cwl.InputRecordSchema]: """Simplify type identifiers.""" result = copy.deepcopy(cwltype) if isinstance(result, cwl.ArraySchema): if isinstance(result.items, MutableSequence): for item in result.items: if
hasattr(item, "id"): item.id = item.id.split("#")[-1] elif isinstance(result.items, cwl.InputRecordSchema): if result.items.name: result.items.name = result.items.name.split("/")[-1] if result.items.fields: for field in result.items.fields: field.name = field.name.split("/")[-1] elif isinstance(result, cwl.InputRecordSchema): if result.name: result.name = result.name.split("/")[-1] if result.fields: for field in result.fields: field.name = field.name.split("/")[-1] return result def get_expression( string: str, inputs: CWLObjectType, self: Optional[CWLOutputType] ) -> Optional[str]: """ Find and return a normalized CWL expression, if any. CWL expressions in the $() form are converted to the ${} form. """ if not isinstance(string, str): return None if string.strip().startswith("${"): return string if "$(" in string: runtime: CWLObjectType = { "cores": 0, "ram": 0, "outdir": "/root", "tmpdir": "/tmp", # nosec "outdirSize": 0, "tmpdirSize": 0, } try: do_eval( string, inputs, context=self, requirements=[], outdir="", tmpdir="", resources={}, ) except (WorkflowException, JavascriptException): if ( string[0:2] != "$(" or not string.endswith(")") or len(string.split("$(")) > 2 ): # then it is a string interpolation return cast( str, interpolate( scan=string, rootvars={ "inputs": inputs, "context": self, "runtime": runtime, }, fullJS=True, escaping_behavior=2, convert_to_expression=True, ), ) else: # it is a CWL Expression in $() with no string interpolation return "${return " + string.strip()[2:-1] + ";}" return None def etool_to_cltool( etool: cwl.ExpressionTool, expressionLib: Optional[list[str]] = None ) -> cwl.CommandLineTool: """Convert a ExpressionTool to a CommandLineTool.""" inputs = yaml.comments.CommentedSeq() # preserve the order for inp in etool.inputs: inputs.append( cwl.CommandInputParameter( id=inp.id, label=inp.label, secondaryFiles=inp.secondaryFiles, streamable=inp.streamable, doc=inp.doc, format=inp.format, default=inp.default, type_=inp.type_, 
extension_fields=inp.extension_fields, loadingOptions=inp.loadingOptions, ) ) outputs = yaml.comments.CommentedSeq() for outp in etool.outputs: outputs.append( cwl.CommandOutputParameter( id=outp.id, label=outp.label, secondaryFiles=outp.secondaryFiles, streamable=outp.streamable, doc=outp.doc, format=outp.format, type_=outp.type_, extension_fields=outp.extension_fields, loadingOptions=outp.loadingOptions, ) ) contents = """"use strict"; var inputs=$(inputs); var runtime=$(runtime);""" if expressionLib: contents += "\n" + "\n".join(expressionLib) contents += ( """ var ret = function(){""" + escape_expression_field(etool.expression.strip()[2:-1]) + """}(); process.stdout.write(JSON.stringify(ret));""" ) listing = [cwl.Dirent(entryname="expression.js", entry=contents, writable=None)] iwdr = cwl.InitialWorkDirRequirement(listing) containerReq = cwl.DockerRequirement(dockerPull="node:alpine") softwareHint = cwl.SoftwareRequirement( packages=[cwl.SoftwarePackage(package="nodejs")] ) return cwl.CommandLineTool( inputs=inputs, outputs=outputs, id=etool.id, requirements=[iwdr], hints=[containerReq, softwareHint], label=etool.label, doc=etool.doc, cwlVersion=etool.cwlVersion, baseCommand=["nodejs", "expression.js"], stdout="cwl.output.json", extension_fields=etool.extension_fields, loadingOptions=etool.loadingOptions, ) def traverse( process: Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow], replace_etool: bool, inside: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> tuple[Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow], bool]: """Convert the given process and any subprocesses.""" if not inside and isinstance(process, cwl.CommandLineTool): process = expand_stream_shortcuts(process) wf_inputs = [] wf_outputs = [] step_inputs = [] step_outputs = [] if process.inputs: for inp in process.inputs: inp_id = inp.id.split("#")[-1] step_inputs.append( cwl.WorkflowStepInput( id=inp_id, source=inp_id, extension_fields=inp.extension_fields, 
                        loadingOptions=inp.loadingOptions,
                    )
                )
                wf_inputs.append(
                    cwl.WorkflowInputParameter(
                        id=inp_id,
                        label=inp.label,
                        secondaryFiles=inp.secondaryFiles,
                        streamable=inp.streamable,
                        doc=inp.doc,
                        format=inp.format,
                        default=inp.default,
                        type_=inp.type_,
                        extension_fields=inp.extension_fields,
                        loadingOptions=inp.loadingOptions,
                    )
                )
        if process.outputs:
            for outp in process.outputs:
                outp_id = outp.id.split("#")[-1]
                step_outputs.append(outp_id)
                wf_outputs.append(
                    cwl.WorkflowOutputParameter(
                        id=outp_id,
                        label=outp.label,
                        secondaryFiles=outp.secondaryFiles,
                        streamable=outp.streamable,
                        doc=outp.doc,
                        format=outp.format,
                        outputSource=f"main/{outp_id}",
                        type_=outp.type_,
                        extension_fields=outp.extension_fields,
                        loadingOptions=outp.loadingOptions,
                    )
                )
        step = cwl.WorkflowStep(
            id="#main",
            in_=step_inputs,
            out=step_outputs,
            run=copy.deepcopy(process),
        )
        workflow = cwl.Workflow(
            inputs=wf_inputs,
            outputs=wf_outputs,
            steps=[step],
            cwlVersion=process.cwlVersion,
        )
        result, modified = traverse_workflow(
            workflow, replace_etool, skip_command_line1, skip_command_line2
        )
        if modified:
            return result, True
        else:
            return process, False
    if isinstance(process, cwl.ExpressionTool) and replace_etool:
        expression = get_expression(process.expression, empty_inputs(process), None)
        # Why call get_expression on an ExpressionTool?
        # It normalizes the form of $() CWL expressions into the ${} style
        if expression:
            process2 = copy.deepcopy(process)
            process2.expression = expression
        else:
            process2 = process
        return etool_to_cltool(process2), True
    if isinstance(process, cwl.Workflow):
        return traverse_workflow(
            process, replace_etool, skip_command_line1, skip_command_line2
        )
    return process, False


def load_step(
    step: cwl.WorkflowStep,
    replace_etool: bool,
    skip_command_line1: bool,
    skip_command_line2: bool,
) -> bool:
    """If the step's Process is not inline, load and process it."""
    modified = False
    if isinstance(step.run, str):
        step.run, modified = traverse(
            cwl.load_document(step.run, baseuri=step.loadingOptions.fileuri),
            replace_etool,
            True,
            skip_command_line1,
            skip_command_line2,
        )
    return modified


def generate_etool_from_expr(
    expr: str,
    target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter],
    no_inputs: bool = False,
    self_type: Optional[
        Union[
            cwl.WorkflowInputParameter,
            cwl.CommandInputParameter,
            list[Union[cwl.WorkflowInputParameter, cwl.CommandInputParameter]],
        ]
    ] = None,  # if the "self" input should be a different type than the "result" output
    extra_processes: Optional[
        Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]]
    ] = None,
) -> cwl.ExpressionTool:
    """Convert a CWL Expression into an ExpressionTool."""
    inputs = yaml.comments.CommentedSeq()
    if not no_inputs:
        if not self_type:
            self_type = target
        if isinstance(self_type, list):
            new_type: Union[
                list[Union[cwl.ArraySchema, cwl.InputRecordSchema]],
                Union[cwl.ArraySchema, cwl.InputRecordSchema],
            ] = [clean_type_ids(t.type_) for t in self_type]
        else:
            new_type = clean_type_ids(self_type.type_)
        inputs.append(
            cwl.WorkflowInputParameter(
                id="self",
                label=self_type.label if not isinstance(self_type, list) else None,
                secondaryFiles=(
                    self_type.secondaryFiles
                    if not isinstance(self_type, list)
                    else None
                ),
                streamable=(
                    self_type.streamable if not isinstance(self_type, list) else None
                ),
                doc=self_type.doc if not
                isinstance(self_type, list) else None,
                format=self_type.format if not isinstance(self_type, list) else None,
                type_=new_type,
                extension_fields=(
                    self_type.extension_fields
                    if not isinstance(self_type, list)
                    else None
                ),
                loadingOptions=(
                    self_type.loadingOptions
                    if not isinstance(self_type, list)
                    else None
                ),
            )
        )
    outputs = yaml.comments.CommentedSeq()
    outputs.append(
        cwl.ExpressionToolOutputParameter(
            id="result",
            label=target.label,
            secondaryFiles=target.secondaryFiles,
            streamable=target.streamable,
            doc=target.doc,
            format=target.format,
            type_=target.type_,
            extension_fields=target.extension_fields,
            loadingOptions=target.loadingOptions,
        )
    )
    expression = "${"
    if not no_inputs:
        expression += "\n  var self=inputs.self;"
    expression += (
        """
  return {"result": function(){"""
        + expr[2:-2]
        + """}()};
}"""
    )
    inlineJSReq = cwl.InlineJavascriptRequirement(
        find_expressionLib(extra_processes) if extra_processes else None
    )
    return cwl.ExpressionTool(
        id="_:" + str(uuid.uuid4()),
        inputs=inputs,
        outputs=outputs,
        expression=expression,
        requirements=[inlineJSReq],
        cwlVersion="v1.0",
    )


def get_input_for_id(
    name: str, tool: Union[cwl.CommandLineTool, cwl.Workflow]
) -> Optional[cwl.CommandInputParameter]:
    """Determine the CommandInputParameter for the given input name."""
    name = name.split("/")[-1]
    for inp in cast(list[cwl.CommandInputParameter], tool.inputs):
        if inp.id and inp.id.split("#")[-1].split("/")[-1] == name:
            return inp
    if isinstance(tool, cwl.Workflow) and "/" in name:
        stepname, stem = name.split("/", 1)
        for step in tool.steps:
            if step.id == stepname:
                result = get_input_for_id(stem, step.run)
                if result:
                    return result
    return None


def find_expressionLib(
    processes: Sequence[
        Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool, cwl.WorkflowStep]
    ],
) -> Optional[list[str]]:
    """
    Return the expressionLib from the highest priority InlineJavascriptRequirement.

    processes: should be in order of least important to most important
    (Workflow, WorkflowStep, ...
    CommandLineTool/ExpressionTool)
    """
    for process in reversed(copy.copy(processes)):
        if process.requirements:
            for req in process.requirements:
                if isinstance(req, cwl.InlineJavascriptRequirement):
                    return cast(Optional[list[str]], copy.deepcopy(req.expressionLib))
    return None


def replace_expr_with_etool(
    expr: str,
    name: str,
    workflow: cwl.Workflow,
    target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter],
    source: Optional[Union[str, list[Any]]],
    replace_etool: bool = False,
    extra_process: Optional[
        Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]
    ] = None,
    source_type: Optional[cwl.CommandInputParameter] = None,
) -> None:
    """Modify the given workflow, replacing the expr with a standalone ExpressionTool."""
    extra_processes: list[
        Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]
    ] = [workflow]
    if extra_process:
        extra_processes.append(extra_process)
    etool: cwl.ExpressionTool = generate_etool_from_expr(
        expr, target, source is None, source_type, extra_processes
    )
    if replace_etool:
        processes: list[Union[cwl.WorkflowStep, cwl.Workflow, cwl.CommandLineTool]] = [
            workflow
        ]
        if extra_process:
            processes.append(extra_process)
        final_tool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool(
            etool, find_expressionLib(processes)
        )
    else:
        final_tool = etool
    inps = []
    if source:
        inps.append(cwl.WorkflowStepInput(id="self", source=source))
    workflow.steps.append(
        cwl.WorkflowStep(
            id=name,
            in_=inps,
            out=[cwl.WorkflowStepOutput("result")],
            run=final_tool,
        )
    )


def replace_wf_input_ref_with_step_output(
    workflow: cwl.Workflow, name: str, target: str
) -> None:
    """Refactor all references to a workflow input into the specified step output."""
    if workflow.steps:
        for step in workflow.steps:
            if step.in_:
                for inp in step.in_:
                    if inp.source:
                        if inp.source == name:
                            inp.source = target
                        if isinstance(inp.source, MutableSequence):
                            for index, source in enumerate(inp.source):
                                if source == name:
                                    inp.source[index] = target
    if workflow.outputs:
        for outp in workflow.outputs:
            if outp.outputSource:
                if outp.outputSource == name:
                    outp.outputSource = target
                if isinstance(outp.outputSource, MutableSequence):
                    for index, outputSource in enumerate(outp.outputSource):
                        if outputSource == name:
                            outp.outputSource[index] = target


def empty_inputs(
    process_or_step: Union[
        cwl.CommandLineTool, cwl.WorkflowStep, cwl.ExpressionTool, cwl.Workflow
    ],
    parent: Optional[cwl.Workflow] = None,
) -> dict[str, Any]:
    """Produce a mock input object for the given inputs."""
    result = {}
    if isinstance(process_or_step, cwl.Process):
        for param in process_or_step.inputs:
            result[param.id.split("#")[-1]] = example_input(param.type_)
    else:
        for param in process_or_step.in_:
            param_id = param.id.split("/")[-1]
            if param.source is None and param.valueFrom:
                result[param_id] = example_input("string")
            elif param.source is None and param.default:
                result[param_id] = param.default
            else:
                try:
                    result[param_id] = example_input(
                        utils.type_for_source(process_or_step.run, param.source, parent)
                    )
                except WorkflowException:
                    pass
    return result


def example_input(some_type: Any) -> Any:
    """Produce a fake input for the given type."""
    # TODO: accept some sort of context object with local custom type definitions
    if some_type == "Directory":
        return {
            "class": "Directory",
            "location": "https://www.example.com/example",
            "basename": "example",
            "listing": [
                {
                    "class": "File",
                    "basename": "example.txt",
                    "size": 23,
                    "contents": "hoopla",
                    "nameroot": "example",
                    "nameext": "txt",
                }
            ],
        }
    if some_type == "File":
        return {
            "class": "File",
            "location": "https://www.example.com/example.txt",
            "basename": "example.txt",
            "size": 23,
            "contents": "hoopla",
            "nameroot": "example",
            "nameext": "txt",
        }
    if some_type == "int":
        return 23
    if some_type == "string":
        return "hoopla!"
    if some_type == "boolean":
        return True
    return None


EMPTY_FILE: CWLOutputType = {
    "class": "File",
    "basename": "em.pty",
    "nameroot": "em",
    "nameext": "pty",
}

TOPLEVEL_SF_EXPR_ERROR = (
    "Input '{}'. Sorry, CWL Expressions as part of a secondaryFiles "
    "specification in a Workflow level input or standalone CommandLine Tool "
    "are not able to be refactored into separate ExpressionTool or "
    "CommandLineTool steps."
)

TOPLEVEL_FORMAT_EXPR_ERROR = (
    "Input '{}'. Sorry, CWL Expressions as part of a format "
    "specification in a Workflow level input are not able to be refactored "
    "into separate ExpressionTool/CommandLineTool steps."
)


def process_workflow_inputs_and_outputs(
    workflow: cwl.Workflow, replace_etool: bool
) -> bool:
    """Do any needed conversions on the given Workflow's inputs and outputs."""
    modified = False
    inputs = empty_inputs(workflow)
    for index, param in enumerate(workflow.inputs):
        with SourceLine(workflow.inputs, index, WorkflowException):
            if param.format and get_expression(param.format, inputs, None):
                raise SourceLine(
                    param.loadingOptions.original_doc,
                    "format",
                    raise_type=WorkflowException,
                ).makeError(TOPLEVEL_FORMAT_EXPR_ERROR.format(param.id.split("#")[-1]))
            if param.secondaryFiles:
                if hasattr(param.secondaryFiles, "pattern") and get_expression(
                    param.secondaryFiles.pattern, inputs, EMPTY_FILE
                ):
                    raise SourceLine(
                        param.loadingOptions.original_doc,
                        "secondaryFiles",
                        raise_type=WorkflowException,
                    ).makeError(TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1]))
                elif isinstance(param.secondaryFiles, MutableSequence):
                    for index2, entry in enumerate(param.secondaryFiles):
                        if get_expression(entry.pattern, inputs, EMPTY_FILE):
                            raise SourceLine(
                                param.loadingOptions.original_doc,
                                index2,
                                raise_type=WorkflowException,
                            ).makeError(
                                f"Entry {index}, "
                                + TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1])
                            )
    return modified


def process_workflow_reqs_and_hints(
    workflow: cwl.Workflow, replace_etool: bool
) -> bool:
    """
    Convert any expressions in a workflow's reqs and hints.

    Each expression will be converted to an additional step.

    The converted requirement will be copied to all workflow steps
    that don't have that requirement type.
    Those affected steps will gain an additional input from the relevant
    synthesized expression step.
    """
    # TODO: consolidate the generated etools/cltools into a single "_expression_workflow_reqs" step
    # TODO: support resourceReq.* references to Workflow.inputs?
    #       ^ By refactoring replace_expr_etool to allow multiple inputs,
    #         and connecting all workflow inputs to the generated step
    modified = False
    inputs = empty_inputs(workflow)
    generated_res_reqs: list[tuple[str, Union[int, str]]] = []
    generated_iwdr_reqs: list[tuple[str, Union[int, str]]] = []
    generated_envVar_reqs: list[tuple[str, Union[int, str]]] = []
    prop_reqs: tuple[
        Union[
            type[cwl.EnvVarRequirement],
            type[cwl.ResourceRequirement],
            type[cwl.InitialWorkDirRequirement],
        ],
        ...,
    ] = ()
    resourceReq: Optional[cwl.ResourceRequirement] = None
    envVarReq: Optional[cwl.EnvVarRequirement] = None
    iwdr: Optional[cwl.InitialWorkDirRequirement] = None
    if workflow.requirements is not None:
        for req in cast(list[cwl.ProcessRequirement], workflow.requirements):
            if req and isinstance(req, cwl.EnvVarRequirement):
                if req.envDef:
                    for index, envDef in enumerate(req.envDef):
                        if envDef.envValue:
                            expression = get_expression(envDef.envValue, inputs, None)
                            if expression:
                                modified = True
                                target = cwl.WorkflowInputParameter(
                                    id=None,
                                    type_="string",
                                )
                                etool_id = (
                                    "_expression_workflow_EnvVarRequirement_{}".format(
                                        index
                                    )
                                )
                                replace_expr_with_etool(
                                    expression,
                                    etool_id,
                                    workflow,
                                    target,
                                    None,
                                    replace_etool,
                                )
                                if envVarReq is None:
                                    envVarReq = copy.deepcopy(req)
                                    prop_reqs += (cwl.EnvVarRequirement,)
                                newEnvDef = copy.deepcopy(envDef)
                                newEnvDef.envValue = f"$(inputs._envDef{index})"
                                envVarReq.envDef[index] = newEnvDef
                                generated_envVar_reqs.append((etool_id, index))
            if req and isinstance(req, cwl.ResourceRequirement):
                for attr in cwl.ResourceRequirement.attrs:
                    this_attr = getattr(req, attr, None)
                    if this_attr:
                        expression = get_expression(this_attr, inputs, None)
                        if expression:
                            modified = True
                            target = cwl.WorkflowInputParameter(id=None,
                                                               type_="long")
                            etool_id = (
                                "_expression_workflow_ResourceRequirement_{}".format(
                                    attr
                                )
                            )
                            replace_expr_with_etool(
                                expression,
                                etool_id,
                                workflow,
                                target,
                                None,
                                replace_etool,
                            )
                            if not resourceReq:
                                resourceReq = cwl.ResourceRequirement(
                                    loadingOptions=workflow.loadingOptions,
                                )
                                prop_reqs += (cwl.ResourceRequirement,)
                            setattr(resourceReq, attr, f"$(inputs._{attr})")
                            generated_res_reqs.append((etool_id, attr))
            if req and isinstance(req, cwl.InitialWorkDirRequirement):
                if req.listing:
                    if isinstance(req.listing, str):
                        expression = get_expression(req.listing, inputs, None)
                        if expression:
                            modified = True
                            target = cwl.WorkflowInputParameter(
                                id=None,
                                type_=cwl.InputArraySchema(
                                    ["File", "Directory"], "array", None, None
                                ),
                            )
                            etool_id = "_expression_workflow_InitialWorkDirRequirement"
                            replace_expr_with_etool(
                                expression,
                                etool_id,
                                workflow,
                                target,
                                None,
                                replace_etool,
                            )
                            iwdr = cwl.InitialWorkDirRequirement(
                                listing="$(inputs._iwdr_listing)",
                                loadingOptions=workflow.loadingOptions,
                            )
                            prop_reqs += (cwl.InitialWorkDirRequirement,)
                    else:
                        iwdr = copy.deepcopy(req)
                        for index, entry in enumerate(req.listing):
                            expression = get_expression(entry, inputs, None)
                            if expression:
                                modified = True
                                target = cwl.WorkflowInputParameter(
                                    id=None,
                                    type_=cwl.InputArraySchema(
                                        ["File", "Directory"], "array", None, None
                                    ),
                                )
                                etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format(
                                    index
                                )
                                replace_expr_with_etool(
                                    expression,
                                    etool_id,
                                    workflow,
                                    target,
                                    None,
                                    replace_etool,
                                )
                                iwdr.listing[index] = f"$(inputs._iwdr_listing_{index})"
                                generated_iwdr_reqs.append((etool_id, index))
                            elif isinstance(entry, cwl.Dirent):
                                if entry.entry:
                                    expression = get_expression(
                                        entry.entry, inputs, None
                                    )
                                    if expression:
                                        expr: str = expression
                                        expr_result = do_eval(
                                            ex=entry.entry,
                                            jobinput=inputs,
                                            requirements=[],
                                            outdir="",
                                            tmpdir="",
                                            resources={},
                                        )
                                        modified = True
                                        if (
                                            isinstance(expr_result, Mapping)
                                            and "class" in expr_result
                                            and (
                                                expr_result["class"] == "File"
                                                or expr_result["class"] == "Directory"
                                            )
                                        ):
                                            target = cwl.WorkflowInputParameter(
                                                id=None,
                                                type_=expr_result["class"],
                                            )
                                            replace_expr_with_etool(
                                                expr,
                                                etool_id,
                                                workflow,
                                                target,
                                                None,
                                                replace_etool,
                                            )
                                            iwdr.listing[index] = (
                                                "$(inputs._iwdr_listing_{})".format(
                                                    index
                                                )
                                            )
                                            generated_iwdr_reqs.append(
                                                (etool_id, index)
                                            )
                                        elif isinstance(expr_result, str):
                                            target = cwl.WorkflowInputParameter(
                                                id=None,
                                                type_=["File"],
                                            )
                                            if entry.entryname is None:
                                                raise SourceLine(
                                                    entry.loadingOptions.original_doc,
                                                    index,
                                                    raise_type=WorkflowException,
                                                ).makeError(
                                                    f"Entry {index}, "
                                                    + "Invalid CWL, if 'entry' "
                                                    "is a string, then entryName must be specified."
                                                )
                                            expr = (
                                                '${return {"class": "File", "basename": "'
                                                + entry.entryname
                                                + '", "contents": (function(){'
                                                + expr[2:-1]
                                                + "})() }; }"
                                            )
                                            etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format(
                                                index
                                            )
                                            replace_expr_with_etool(
                                                expr,
                                                etool_id,
                                                workflow,
                                                target,
                                                None,
                                                replace_etool,
                                            )
                                            iwdr.listing[index] = (
                                                f"$(inputs._iwdr_listing_{index})"
                                            )
                                            generated_iwdr_reqs.append((etool_id, index))
                                elif entry.entryname:
                                    expression = get_expression(
                                        entry.entryname, inputs, None
                                    )
                                    if expression:
                                        modified = True
                                        target = cwl.WorkflowInputParameter(
                                            id=None,
                                            type_="string",
                                        )
                                        etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format(
                                            index
                                        )
                                        replace_expr_with_etool(
                                            expression,
                                            etool_id,
                                            workflow,
                                            target,
                                            None,
                                            replace_etool,
                                        )
                                        iwdr.listing[index] = (
                                            f"$(inputs._iwdr_listing_{index})"
                                        )
                                        generated_iwdr_reqs.append((etool_id, index))
                        if generated_iwdr_reqs:
                            prop_reqs += (cwl.InitialWorkDirRequirement,)
                        else:
                            iwdr = None
    if envVarReq and workflow.steps:
        for step in workflow.steps:
            if step.id.split("#")[-1].startswith("_expression_"):
                continue
            if step.requirements:
                for req in step.requirements:
                    if isinstance(req, cwl.EnvVarRequirement):
                        continue
            else:
                step.requirements = yaml.comments.CommentedSeq()
            step.requirements.append(envVarReq)
            for entry in generated_envVar_reqs:
                step.in_.append(
                    cwl.WorkflowStepInput(
                        id=f"_envDef{entry[1]}",
                        source=f"{entry[0]}/result",
                    )
                )
    if resourceReq and workflow.steps:
        for step in workflow.steps:
            if step.id.split("#")[-1].startswith("_expression_"):
                continue
            if step.requirements:
                for req in step.requirements:
                    if isinstance(req, cwl.ResourceRequirement):
                        continue
            else:
                step.requirements = yaml.comments.CommentedSeq()
            step.requirements.append(resourceReq)
            for entry in generated_res_reqs:
                step.in_.append(
                    cwl.WorkflowStepInput(
                        id=f"_{entry[1]}",
                        source=f"{entry[0]}/result",
                    )
                )
    if iwdr and workflow.steps:
        for step in workflow.steps:
            if step.id.split("#")[-1].startswith("_expression_"):
                continue
            if step.requirements:
                for req in step.requirements:
                    if isinstance(req, cwl.InitialWorkDirRequirement):
                        continue
            else:
                step.requirements = yaml.comments.CommentedSeq()
            step.requirements.append(iwdr)
            if generated_iwdr_reqs:
                for entry in generated_iwdr_reqs:
                    step.in_.append(
                        cwl.WorkflowStepInput(
                            id=f"_iwdr_listing_{entry[1]}",
                            source=f"{entry[0]}/result",
                        )
                    )
            else:
                step.in_.append(
                    cwl.WorkflowStepInput(
                        id="_iwdr_listing",
                        source="_expression_workflow_InitialWorkDirRequirement/result",
                    )
                )
    if workflow.requirements:
        workflow.requirements[:] = [
            x for x in workflow.requirements if not isinstance(x, prop_reqs)
        ]
    return modified


def process_level_reqs(
    process: cwl.CommandLineTool,
    step: cwl.WorkflowStep,
    parent: cwl.Workflow,
    replace_etool: bool,
    skip_command_line1: bool,
    skip_command_line2: bool,
) -> bool:
    """Convert expressions inside a process into new adjacent steps."""
    # This is for reqs inside a Process (CommandLineTool, ExpressionTool)
    # differences from process_workflow_reqs_and_hints() are:
    # - the name of the generated ETools/CTools contains the name of the step, not "workflow"
    # - Generated ETools/CTools are adjacent steps
    # - Replace the CWL Expression inplace with a CWL parameter reference
    # - Don't create a new Requirement, nor delete the existing Requirement
    # - the Process is passed to replace_expr_with_etool for later searching for JS expressionLibs
    # - in addition to adding the input to the
    #   step for the ETool/CTool result,
    #   add it to the Process.inputs as well
    if not process.requirements:
        return False
    modified = False
    target_process = step.run
    inputs = empty_inputs(process)
    generated_res_reqs: list[tuple[str, str]] = []
    generated_iwdr_reqs: list[tuple[str, Union[int, str], Any]] = []
    generated_envVar_reqs: list[tuple[str, Union[int, str]]] = []
    if not step.id:
        return False
    step_name = step.id.split("#", 1)[-1]
    for req_index, req in enumerate(process.requirements):
        if req and isinstance(req, cwl.EnvVarRequirement):
            if req.envDef:
                for env_index, envDef in enumerate(req.envDef):
                    if envDef.envValue:
                        expression = get_expression(envDef.envValue, inputs, None)
                        if expression:
                            modified = True
                            target = cwl.WorkflowInputParameter(id=None, type_="string")
                            etool_id = "_expression_{}_EnvVarRequirement_{}".format(
                                step_name, env_index
                            )
                            replace_expr_with_etool(
                                expression,
                                etool_id,
                                parent,
                                target,
                                None,
                                replace_etool,
                                process,
                            )
                            target_process.requirements[req_index][
                                env_index
                            ].envValue = f"$(inputs._envDef{env_index})"
                            generated_envVar_reqs.append((etool_id, env_index))
        if req and isinstance(req, cwl.ResourceRequirement):
            for attr in cwl.ResourceRequirement.attrs:
                this_attr = getattr(req, attr, None)
                if this_attr:
                    expression = get_expression(this_attr, inputs, None)
                    if expression:
                        modified = True
                        target = cwl.WorkflowInputParameter(id=None, type_="long")
                        etool_id = "_expression_{}_ResourceRequirement_{}".format(
                            step_name, attr
                        )
                        replace_clt_hintreq_expr_with_etool(
                            expression, etool_id, parent, target, step, replace_etool
                        )
                        setattr(
                            target_process.requirements[req_index],
                            attr,
                            f"$(inputs._{attr})",
                        )
                        generated_res_reqs.append((etool_id, attr))
        if (
            not skip_command_line2
            and req
            and isinstance(req, cwl.InitialWorkDirRequirement)
        ):
            if req.listing:
                if isinstance(req.listing, str):
                    expression = get_expression(req.listing, inputs, None)
                    if expression:
                        modified = True
                        target_type = cwl.InputArraySchema(
                            ["File", "Directory"], "array", None, None
                        )
                        target = cwl.WorkflowInputParameter(id=None, type_=target_type)
                        etool_id = "_expression_{}_InitialWorkDirRequirement".format(
                            step_name
                        )
                        replace_expr_with_etool(
                            expression,
                            etool_id,
                            parent,
                            target,
                            None,
                            replace_etool,
                            process,
                        )
                        target_process.requirements[req_index].listing = (
                            "$(inputs._iwdr_listing)"
                        )
                        step.in_.append(
                            cwl.WorkflowStepInput(
                                id="_iwdr_listing",
                                source=f"{etool_id}/result",
                            )
                        )
                        add_input_to_process(
                            target_process,
                            "_iwdr_listing",
                            target_type,
                            process.loadingOptions,
                        )
                else:
                    for listing_index, entry in enumerate(req.listing):
                        expression = get_expression(entry, inputs, None)
                        if expression:
                            modified = True
                            target_type = cwl.InputArraySchema(
                                ["File", "Directory"], "array", None, None
                            )
                            target = cwl.WorkflowInputParameter(
                                id=None,
                                type_=target_type,
                            )
                            etool_id = (
                                "_expression_{}_InitialWorkDirRequirement_{}".format(
                                    step_name, listing_index
                                )
                            )
                            replace_expr_with_etool(
                                expression,
                                etool_id,
                                parent,
                                target,
                                None,
                                replace_etool,
                                process,
                            )
                            target_process.requirements[req_index].listing[
                                listing_index
                            ] = f"$(inputs._iwdr_listing_{listing_index})"
                            generated_iwdr_reqs.append(
                                (etool_id, listing_index, target_type)
                            )
                        elif isinstance(entry, cwl.Dirent):
                            if entry.entry:
                                expression = get_expression(entry.entry, inputs, None)
                                if expression:
                                    modified = True
                                    if entry.entryname is not None:
                                        entryname_expr = get_expression(
                                            entry.entryname, inputs, None
                                        )
                                        entryname = (
                                            entry.entryname
                                            if entryname_expr
                                            else f'"{entry.entryname}"'  # noqa: B907
                                        )
                                        new_expression = (
                                            "${var result; var entryname = "
                                            + entryname
                                            + "; var entry = "
                                            + entry.entry[2:-1]
                                            + """;
if (typeof entry === 'string' || entry instanceof String) {
  result = {"class": "File", "basename": entryname, "contents": entry} ;
  if (typeof entryname === 'string' || entryname instanceof String) {
    result.basename = entryname ;
  }
} else {
  result = entry ;
}
return result; }"""
                                        )
                                    else:
                                        new_expression = expression
                                    d_target_type = ["File", "Directory"]
                                    target = cwl.WorkflowInputParameter(
                                        id=None,
                                        type_=d_target_type,
                                    )
                                    etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format(
                                        step_name, listing_index
                                    )
                                    replace_clt_hintreq_expr_with_etool(
                                        new_expression,
                                        etool_id,
                                        parent,
                                        target,
                                        step,
                                        replace_etool,
                                    )
                                    target_process.requirements[req_index].listing[
                                        listing_index
                                    ].entry = "$(inputs._iwdr_listing_{})".format(
                                        listing_index
                                    )
                                    generated_iwdr_reqs.append(
                                        (etool_id, listing_index, d_target_type)
                                    )
                            elif entry.entryname:
                                expression = get_expression(
                                    entry.entryname, inputs, None
                                )
                                if expression:
                                    modified = True
                                    target = cwl.WorkflowInputParameter(
                                        id=None,
                                        type_="string",
                                    )
                                    etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format(
                                        step_name, listing_index
                                    )
                                    replace_expr_with_etool(
                                        expression,
                                        etool_id,
                                        parent,
                                        target,
                                        None,
                                        replace_etool,
                                        process,
                                    )
                                    target_process.requirements[req_index].listing[
                                        listing_index
                                    ].entryname = "$(inputs._iwdr_listing_{})".format(
                                        listing_index
                                    )
                                    generated_iwdr_reqs.append(
                                        (etool_id, listing_index, "string")
                                    )
    for entry in generated_envVar_reqs:
        name = f"_envDef{entry[1]}"
        step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result"))
        add_input_to_process(target_process, name, "string", process.loadingOptions)
    for entry in generated_res_reqs:
        name = f"_{entry[1]}"
        step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result"))
        add_input_to_process(target_process, name, "long", process.loadingOptions)
    for entry in generated_iwdr_reqs:
        name = f"_iwdr_listing_{entry[1]}"
        step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result"))
        add_input_to_process(target_process, name, entry[2], process.loadingOptions)
    return modified


def add_input_to_process(
    process: cwl.Process, name: str, inptype: Any, loadingOptions: cwl.LoadingOptions
) -> None:
    """Add a new InputParameter to the given CommandLineTool."""
    if isinstance(process, cwl.CommandLineTool):
        process.inputs.append(
            cwl.CommandInputParameter(
                id=name,
                type_=inptype,
                loadingOptions=loadingOptions,
            )
        )


def traverse_CommandLineTool(
    clt: cwl.CommandLineTool,
    parent: cwl.Workflow,
    step: cwl.WorkflowStep,
    replace_etool: bool,
    skip_command_line1: bool,
    skip_command_line2: bool,
) -> bool:
    """Extract any CWL Expressions within the given CommandLineTool into sibling steps."""
    modified = False
    # don't modify clt, modify step.run
    target_clt = step.run
    inputs = empty_inputs(clt)
    if not step.id:
        return False
    step_id = step.id.split("#")[-1]
    if clt.arguments and not skip_command_line1:
        for index, arg in enumerate(clt.arguments):
            if isinstance(arg, str):
                expression = get_expression(arg, inputs, None)
                if expression:
                    modified = True
                    inp_id = f"_arguments_{index}"
                    etool_id = f"_expression_{step_id}{inp_id}"
                    target_type = "Any"
                    target = cwl.WorkflowInputParameter(id=None, type_=target_type)
                    replace_step_clt_expr_with_etool(
                        expression, etool_id, parent, target, step, replace_etool
                    )
                    target_clt.arguments[index] = cwl.CommandLineBinding(
                        valueFrom=f"$(inputs.{inp_id})"
                    )
                    target_clt.inputs.append(
                        cwl.CommandInputParameter(
                            id=inp_id,
                            type_=target_type,
                        )
                    )
                    step.in_.append(
                        cwl.WorkflowStepInput(
                            f"{etool_id}/result", None, inp_id, None, None
                        )
                    )
                    remove_JSReq(target_clt, skip_command_line1)
            elif isinstance(arg, cwl.CommandLineBinding) and arg.valueFrom:
                expression = get_expression(arg.valueFrom, inputs, None)
                if expression:
                    modified = True
                    inp_id = f"_arguments_{index}"
                    etool_id = f"_expression_{step_id}{inp_id}"
                    target_type = "Any"
                    target = cwl.WorkflowInputParameter(id=None, type_=target_type)
                    replace_step_clt_expr_with_etool(
                        expression, etool_id, parent, target, step, replace_etool
                    )
                    target_clt.arguments[index].valueFrom = "$(inputs.{})".format(
                        inp_id
                    )
                    target_clt.inputs.append(
                        cwl.CommandInputParameter(
                            id=inp_id,
                            type_=target_type,
                        )
                    )
                    step.in_.append(
                        cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result")
                    )
                    remove_JSReq(target_clt, skip_command_line1)
    for streamtype in "stdout", "stderr":  # add 'stdin' for v1.1 version
        stream_value = getattr(clt, streamtype)
        if stream_value:
            expression = get_expression(stream_value, inputs, None)
            if expression:
                modified = True
                inp_id = f"_{streamtype}"
                etool_id = f"_expression_{step_id}{inp_id}"
                target_type = "string"
                target = cwl.WorkflowInputParameter(id=None, type_=target_type)
                replace_step_clt_expr_with_etool(
                    expression, etool_id, parent, target, step, replace_etool
                )
                setattr(target_clt, streamtype, f"$(inputs.{inp_id})")
                target_clt.inputs.append(
                    cwl.CommandInputParameter(id=inp_id, type_=target_type)
                )
                step.in_.append(
                    cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result")
                )
    for inp in clt.inputs:
        if not skip_command_line1 and inp.inputBinding and inp.inputBinding.valueFrom:
            expression = get_expression(
                inp.inputBinding.valueFrom, inputs, example_input(inp.type_)
            )
            if expression:
                modified = True
                self_id = inp.id.split("#")[-1]
                inp_id = f"_{self_id}_valueFrom"
                etool_id = f"_expression_{step_id}{inp_id}"
                replace_step_clt_expr_with_etool(
                    expression, etool_id, parent, inp, step, replace_etool, self_id
                )
                inp.inputBinding.valueFrom = f"$(inputs.{inp_id})"
                target_clt.inputs.append(
                    cwl.CommandInputParameter(id=inp_id, type_=inp.type_)
                )
                step.in_.append(
                    cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result")
                )
    for outp in clt.outputs:
        if outp.outputBinding:
            if outp.outputBinding.glob:
                expression = get_expression(outp.outputBinding.glob, inputs, None)
                if expression:
                    modified = True
                    inp_id = "_{}_glob".format(outp.id.split("#")[-1])
                    etool_id = f"_expression_{step_id}{inp_id}"
                    glob_target_type = ["string", cwl.ArraySchema("string", "array")]
                    target = cwl.WorkflowInputParameter(id=None, type_=glob_target_type)
                    replace_step_clt_expr_with_etool(
                        expression, etool_id, parent, target, step, replace_etool
                    )
                    outp.outputBinding.glob = f"$(inputs.{inp_id})"
                    target_clt.inputs.append(
                        cwl.CommandInputParameter(
                            id=inp_id,
                            type_=glob_target_type,
                        )
                    )
                    step.in_.append(
                        cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result")
                    )
            if outp.outputBinding.outputEval and not skip_command_line2:
                self: CWLOutputType = [
                    {
                        "class": "File",
                        "basename": "base.name",
                        "nameroot": "base",
                        "nameext": "name",
                        "path": "/tmp/base.name",  # nosec
                        "dirname": "/tmp",  # nosec
                    }
                ]
                if outp.outputBinding.loadContents:
                    cast(dict[Any, Any], self)[0]["contents"] = "stuff"
                expression = get_expression(outp.outputBinding.outputEval, inputs, self)
                if expression:
                    modified = True
                    outp_id = outp.id.split("#")[-1]
                    inp_id = f"_{outp_id}_outputEval"
                    etool_id = f"expression{inp_id}"
                    sub_wf_outputs = cltool_step_outputs_to_workflow_outputs(
                        step, etool_id, outp_id
                    )
                    self_type = cwl.WorkflowInputParameter(
                        id=None,
                        type_=cwl.InputArraySchema("File", "array", None, None),
                    )
                    etool = generate_etool_from_expr(
                        expression, outp, False, self_type, [clt, step, parent]
                    )
                    if outp.outputBinding.loadContents:
                        etool.inputs[0].type_.inputBinding = cwl.CommandLineBinding(
                            loadContents=True
                        )
                    etool.inputs.extend(cltool_inputs_to_etool_inputs(clt))
                    sub_wf_inputs = cltool_inputs_to_etool_inputs(clt)
                    orig_step_inputs = copy.deepcopy(step.in_)
                    for orig_step_input in orig_step_inputs:
                        orig_step_input.id = orig_step_input.id.split("/")[-1]
                        if isinstance(orig_step_input.source, MutableSequence):
                            for index, source in enumerate(orig_step_input.source):
                                orig_step_input.source[index] = source.split("#")[-1]
                        else:
                            orig_step_input.source = orig_step_input.source.split("#")[
                                -1
                            ]
                    orig_step_inputs[:] = [
                        x for x in orig_step_inputs if not x.id.startswith("_")
                    ]
                    for inp in orig_step_inputs:
                        inp.source = inp.id
                        inp.linkMerge = None
                    if replace_etool:
                        processes = [parent]
                        final_etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = (
                            etool_to_cltool(etool, find_expressionLib(processes))
                        )
                    else:
                        final_etool = etool
                    etool_step = cwl.WorkflowStep(
                        id=etool_id,
                        in_=orig_step_inputs,
                        out=[cwl.WorkflowStepOutput("result")],
                        run=final_etool,
                        scatterMethod=step.scatterMethod,
                    )
                    new_clt_step = copy.copy(
                        step
                    )  # a deepcopy would be convenient, but params2.cwl gives it problems
                    new_clt_step.id = new_clt_step.id.split("#")[-1]
                    new_clt_step.run = copy.copy(step.run)
                    new_clt_step.run.id = None
                    remove_JSReq(new_clt_step.run, skip_command_line1)
                    for new_outp in new_clt_step.run.outputs:
                        if new_outp.id.split("#")[-1] == outp_id:
                            if isinstance(
                                new_outp,
                                (
                                    cwl.WorkflowOutputParameter,
                                    cwl.ExpressionToolOutputParameter,
                                ),
                            ):
                                new_outp.type_ = cwl.OutputArraySchema(
                                    items="File", type_="array"
                                )
                            elif isinstance(new_outp, cwl.CommandOutputParameter):
                                if new_outp.outputBinding:
                                    new_outp.outputBinding.outputEval = None
                                    new_outp.outputBinding.loadContents = None
                                new_outp.type_ = cwl.CommandOutputArraySchema(
                                    items="File",
                                    type_="array",
                                )
                            else:
                                raise Exception(
                                    "Unimplemented OutputParameter type: %s",
                                    type(new_outp),
                                )
                    new_clt_step.in_ = copy.deepcopy(step.in_)
                    for inp in new_clt_step.in_:
                        inp.id = inp.id.split("/")[-1]
                        inp.source = inp.id
                        inp.linkMerge = None
                    for index, out in enumerate(new_clt_step.out):
                        new_clt_step.out[index] = out.split("/")[-1]
                    for tool_inp in new_clt_step.run.inputs:
                        tool_inp.id = tool_inp.id.split("#")[-1]
                    for tool_out in new_clt_step.run.outputs:
                        tool_out.id = tool_out.id.split("#")[-1]
                    sub_wf_steps = [new_clt_step, etool_step]
                    sub_workflow = cwl.Workflow(
                        inputs=sub_wf_inputs,
                        outputs=sub_wf_outputs,
                        steps=sub_wf_steps,
                        cwlVersion=parent.cwlVersion,
                    )
                    if step.scatter:
                        new_clt_step.scatter = None
                    step.run = sub_workflow
                    rename_step_source(
                        sub_workflow,
                        f"{step_id}/{outp_id}",
                        f"{etool_id}/result",
                    )
                    orig_step_inputs.append(
                        cwl.WorkflowStepInput(id="self", source=f"{step_id}/{outp_id}")
                    )
                    if not parent.requirements:
                        parent.requirements = [cwl.SubworkflowFeatureRequirement()]
                    else:
                        has_sub_wf_req = False
                        for req in parent.requirements:
                            if isinstance(req, cwl.SubworkflowFeatureRequirement):
                                has_sub_wf_req = True
                        if not has_sub_wf_req:
                            parent.requirements.append(
                                cwl.SubworkflowFeatureRequirement()
                            )
    return modified


def rename_step_source(workflow: cwl.Workflow, old: str, new: str) -> None:
    """Update step source names to the new name."""

    def simplify_wf_id(uri: str) -> str:
return uri.split("#")[-1].split("/", 1)[1] def simplify_step_id(uri: str) -> str: return uri.split("#")[-1] for wf_outp in workflow.outputs: if wf_outp.outputSource and simplify_wf_id(wf_outp.outputSource) == old: wf_outp.outputSource = new for step in workflow.steps: if step.in_: for inp in step.in_: if inp.source: if isinstance(inp.source, str): source_id = ( simplify_step_id(inp.source) if "#" in inp.source else inp.source ) if source_id == old: inp.source = new else: for index, source in enumerate(inp.source): if simplify_step_id(source) == old: inp.source[index] = new def remove_JSReq( process: Union[cwl.CommandLineTool, cwl.WorkflowStep, cwl.Workflow], skip_command_line1: bool, ) -> None: """Since the InlineJavascriptRequirement is no longer needed, remove it.""" if skip_command_line1 and isinstance(process, cwl.CommandLineTool): return if process.hints: process.hints[:] = [ hint for hint in process.hints if not isinstance(hint, cwl.InlineJavascriptRequirement) ] if not process.hints: process.hints = None if process.requirements: process.requirements[:] = [ req for req in process.requirements if not isinstance(req, cwl.InlineJavascriptRequirement) ] if not process.requirements: process.requirements = None def replace_step_clt_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.WorkflowInputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> None: """Convert a step-level CWL Expression to a sibling expression step.""" etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] =
[x for x in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) def replace_clt_hintreq_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.WorkflowInputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> Union[cwl.CommandLineTool, cwl.ExpressionTool]: """Factor out an expression inside a CommandLineTool req or hint into a sibling step.""" # Nearly identical to replace_step_clt_expr_with_etool, except that the generated tool is returned. etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] = [x for x in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) return etool def cltool_inputs_to_etool_inputs( tool: cwl.CommandLineTool, ) -> list[cwl.WorkflowInputParameter]: """Copy CommandLineTool input objects into the equivalent ExpressionTool input objects.""" inputs = yaml.comments.CommentedSeq() if tool.inputs: for clt_inp in tool.inputs: clt_inp_id = clt_inp.id.split("#")[-1].split("/")[-1] if not clt_inp_id.startswith("_"): inputs.append( cwl.WorkflowInputParameter( id=clt_inp_id, label=clt_inp.label, secondaryFiles=clt_inp.secondaryFiles, streamable=clt_inp.streamable, doc=clt_inp.doc, format=clt_inp.format, default=clt_inp.default, type_=clt_inp.type_, extension_fields=clt_inp.extension_fields, loadingOptions=clt_inp.loadingOptions, ) ) return inputs def
cltool_step_outputs_to_workflow_outputs( cltool_step: cwl.WorkflowStep, etool_step_id: str, etool_out_id: str ) -> list[cwl.OutputParameter]: """ Copy CommandLineTool outputs into the equivalent Workflow output parameters. Connects the outputSources for each of the new output parameters to the step they came from. """ outputs = yaml.comments.CommentedSeq() if not cltool_step.id: raise WorkflowException(f"Missing step id from {cltool_step}.") default_step_id = cltool_step.id.split("#")[-1] if cltool_step.run.outputs: for clt_out in cltool_step.run.outputs: clt_out_id = clt_out.id.split("#")[-1].split("/")[-1] if clt_out_id == etool_out_id: outputSource = f"{etool_step_id}/result" else: outputSource = f"{default_step_id}/{clt_out_id}" if not clt_out_id.startswith("_"): outputs.append( cwl.WorkflowOutputParameter( id=clt_out_id, label=clt_out.label, secondaryFiles=clt_out.secondaryFiles, streamable=clt_out.streamable, doc=clt_out.doc, format=clt_out.format, outputSource=outputSource, type_=clt_out.type_, extension_fields=clt_out.extension_fields, loadingOptions=clt_out.loadingOptions, ) ) return outputs def generate_etool_from_expr2( expr: str, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], inputs: Sequence[Union[cwl.WorkflowInputParameter, cwl.CommandInputParameter]], self_name: Optional[str] = None, process: Optional[Union[cwl.CommandLineTool, cwl.ExpressionTool]] = None, extra_processes: Optional[ Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]] ] = None, ) -> cwl.ExpressionTool: """Generate an ExpressionTool to achieve the same result as the given expression.""" outputs = yaml.comments.CommentedSeq() outputs.append( cwl.ExpressionToolOutputParameter( id="result", label=target.label, secondaryFiles=target.secondaryFiles, streamable=target.streamable, doc=target.doc, format=target.format, type_=target.type_, ) ) expression = "${" if self_name: expression += f"\n var self=inputs.{self_name};" expression += ( """ return 
{"result": function(){""" + expr[2:-2] + """}()}; }""" ) hints = None procs: list[ Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow, cwl.WorkflowStep] ] = [] if process: procs.append(process) if extra_processes: procs.extend(extra_processes) inlineJSReq = cwl.InlineJavascriptRequirement(find_expressionLib(procs)) reqs = [inlineJSReq] if process: if process.hints: hints = copy.deepcopy(process.hints) hints[:] = [ x for x in hints if not isinstance(x, cwl.InitialWorkDirRequirement) ] if process.requirements: reqs.extend(copy.deepcopy(process.requirements)) reqs[:] = [ x for x in reqs if not isinstance(x, cwl.InitialWorkDirRequirement) ] return cwl.ExpressionTool( id="_:" + str(uuid.uuid4()), inputs=inputs, outputs=outputs, expression=expression, requirements=reqs, cwlVersion="v1.0", ) def traverse_step( step: cwl.WorkflowStep, parent: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Process the given WorkflowStep.""" modified = False inputs = empty_inputs(step, parent) if not step.id: return False step_id = step.id.split("#")[-1] original_process = copy.deepcopy(step.run) original_step_ins = copy.deepcopy(step.in_) for inp in step.in_: if inp.valueFrom: if not inp.source: self = None else: if isinstance(inp.source, MutableSequence): self = [] for source in inp.source: if not step.scatter: self.append( example_input( utils.type_for_source(parent, source.split("#")[-1]) ) ) else: scattered_source_type = utils.type_for_source( parent, source ) if isinstance(scattered_source_type, list): for stype in scattered_source_type: self.append(example_input(stype.type_)) else: self.append(example_input(scattered_source_type.type_)) else: if not step.scatter: self = example_input( utils.type_for_source(parent, inp.source.split("#")[-1]) ) else: scattered_source_type2 = utils.type_for_source( parent, inp.source ) if isinstance(scattered_source_type2, list): self = example_input(scattered_source_type2[0].type_) else: 
self = example_input(scattered_source_type2.type_) expression = get_expression(inp.valueFrom, inputs, self) if expression: modified = True etool_id = "_expression_{}_{}".format(step_id, inp.id.split("/")[-1]) target = get_input_for_id(inp.id, original_process) if not target: raise WorkflowException("target not found") input_source_id = None source_type: Optional[ Union[list[cwl.WorkflowInputParameter], cwl.WorkflowInputParameter] ] = None if inp.source: if isinstance(inp.source, MutableSequence): input_source_id = [] source_types: list[cwl.WorkflowInputParameter] = [] for source in inp.source: source_id = source.split("#")[-1] input_source_id.append(source_id) temp_type = utils.type_for_source( step.run, source_id, parent ) if isinstance(temp_type, list): for ttype in temp_type: if ttype not in source_types: source_types.append(ttype) else: if temp_type not in source_types: source_types.append(temp_type) source_type = cwl.WorkflowInputParameter( id=None, type_=cwl.ArraySchema(source_types, "array"), ) else: input_source_id = inp.source.split("#")[-1] source_type = utils.param_for_source_id( step.run, input_source_id, parent ) # target.id = target.id.split('#')[-1] if isinstance(original_process, cwl.ExpressionTool): found_JSReq = False reqs: list[cwl.ProcessRequirement] = [] if original_process.hints: reqs.extend(original_process.hints) if original_process.requirements: reqs.extend(original_process.requirements) for req in reqs: if isinstance(req, cwl.InlineJavascriptRequirement): found_JSReq = True if not found_JSReq: if not step.run.requirements: step.run.requirements = [] expr_lib = find_expressionLib([parent]) step.run.requirements.append( cwl.InlineJavascriptRequirement(expr_lib) ) replace_step_valueFrom_expr_with_etool( expression, etool_id, parent, target, step, inp, original_process, original_step_ins, input_source_id, replace_etool, source_type, ) inp.valueFrom = None inp.source = f"{etool_id}/result" # TODO: skip or special process for sub workflows? 
process_modified = process_level_reqs( original_process, step, parent, replace_etool, skip_command_line1, skip_command_line2, ) if process_modified: modified = True if isinstance(original_process, cwl.CommandLineTool): clt_modified = traverse_CommandLineTool( original_process, parent, step, replace_etool, skip_command_line1, skip_command_line2, ) if clt_modified: modified = True return modified def workflow_step_to_WorkflowInputParameters( step_ins: list[cwl.WorkflowStepInput], parent: cwl.Workflow, except_in_id: str ) -> list[cwl.WorkflowInputParameter]: """Create WorkflowInputParameters to match the given WorkflowStep inputs.""" params = [] for inp in step_ins: if not inp.id: continue inp_id = inp.id.split("#")[-1].split("/")[-1] if inp.source and inp_id != except_in_id: param = copy.deepcopy( utils.param_for_source_id(parent, sourcenames=inp.source) ) if isinstance(param, list): for p in param: p.id = inp_id p.type_ = clean_type_ids(p.type_) params.append(p) else: param.id = inp_id param.type_ = clean_type_ids(param.type_) params.append(param) return params def replace_step_valueFrom_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], step: cwl.WorkflowStep, step_inp: cwl.WorkflowStepInput, original_process: Union[cwl.CommandLineTool, cwl.ExpressionTool], original_step_ins: list[cwl.WorkflowStepInput], source: Optional[Union[str, list[str]]], replace_etool: bool, source_type: Optional[ Union[cwl.WorkflowInputParameter, list[cwl.WorkflowInputParameter]] ] = None, ) -> None: """Replace a WorkflowStep level 'valueFrom' expression with a sibling ExpressionTool step.""" if not step_inp.id: raise WorkflowException(f"Missing id in {step_inp}.") step_inp_id = step_inp.id.split("/")[-1] etool_inputs = workflow_step_to_WorkflowInputParameters( original_step_ins, workflow, step_inp_id ) if source: source_param = cwl.WorkflowInputParameter(id="self", type_="Any") # TODO: would be nicer to 
derive a proper type; but in the face of linkMerge, this is easier for now etool_inputs.append(source_param) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, "self" if source else None, original_process, [workflow, step], ) if replace_etool: processes: list[ Union[ cwl.Workflow, cwl.CommandLineTool, cwl.ExpressionTool, cwl.WorkflowStep ] ] = [ workflow, step, ] cltool = etool_to_cltool(temp_etool, find_expressionLib(processes)) etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = cltool else: etool = temp_etool wf_step_inputs = copy.deepcopy(original_step_ins) if source: wf_step_inputs.append(cwl.WorkflowStepInput(id="self", source=step_inp.source)) for wf_step_input in wf_step_inputs: if not wf_step_input.id: continue wf_step_input.id = wf_step_input.id.split("/")[-1] if wf_step_input.valueFrom: wf_step_input.valueFrom = None if wf_step_input.source: if isinstance(wf_step_input.source, MutableSequence): for index, inp_source in enumerate(wf_step_input.source): wf_step_input.source[index] = inp_source.split("#")[-1] else: wf_step_input.source = wf_step_input.source.split("#")[-1] wf_step_inputs[:] = [ x for x in wf_step_inputs if x.id and not (x.id.startswith("_") or x.id.endswith(step_inp_id)) ] scatter = copy.deepcopy(step.scatter) if isinstance(scatter, str): scatter = [scatter] if isinstance(scatter, MutableSequence): for index, entry in enumerate(scatter): scatter[index] = entry.split("/")[-1] if scatter and step_inp_id in scatter: scatter = ["self"] # do we still need to scatter? 
else: scatter = None workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, scatter=scatter, scatterMethod=step.scatterMethod, ) ) def traverse_workflow( workflow: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> tuple[cwl.Workflow, bool]: """Traverse a workflow, processing each step.""" modified = False for index, step in enumerate(workflow.steps): if isinstance(step.run, cwl.ExpressionTool) and replace_etool: workflow.steps[index].run = etool_to_cltool(step.run) modified = True else: step_modified = load_step( step, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True for step in workflow.steps: if not step.id.startswith("_expression"): step_modified = traverse_step( step, workflow, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True if process_workflow_inputs_and_outputs(workflow, replace_etool): modified = True if process_workflow_reqs_and_hints(workflow, replace_etool): modified = True if workflow.requirements: workflow.requirements[:] = [ x for x in workflow.requirements if not isinstance( x, (cwl.InlineJavascriptRequirement, cwl.StepInputExpressionRequirement) ) ] else: workflow.requirements = None return workflow, modified
# cwl_utils-0.37/cwl_utils/cwl_v1_2_expression_refactor.py
#!/usr/bin/env python3 # SPDX-License-Identifier: Apache-2.0 # Copyright 2018-2021 Michael R.
Crusoe """CWL Expression refactoring tool for CWL v1.2.""" import copy import hashlib import uuid from collections.abc import Mapping, MutableSequence, Sequence from typing import Any, Optional, Union, cast from ruamel import yaml from schema_salad.sourceline import SourceLine from schema_salad.utils import json_dumps import cwl_utils.parser.cwl_v1_2 as cwl import cwl_utils.parser.cwl_v1_2_utils as utils from cwl_utils.errors import JavascriptException, WorkflowException from cwl_utils.expression import do_eval, interpolate from cwl_utils.types import CWLObjectType, CWLOutputType def expand_stream_shortcuts(process: cwl.CommandLineTool) -> cwl.CommandLineTool: """Rewrite the "type: stdout" shortcut to use an explicit random filename.""" if not process.outputs: return process result = None for index, output in enumerate(process.outputs): if output.type_ == "stdout": # TODO: add 'stdin' for CWL v1.1 if not result: result = copy.deepcopy(process) stdout_path = process.stdout if not stdout_path: stdout_path = str( hashlib.sha1( # nosec json_dumps(cwl.save(process)).encode("utf-8") ).hexdigest() ) result.stdout = stdout_path result.outputs[index].type_ = "File" output.outputBinding = cwl.CommandOutputBinding(stdout_path, None, None) if result: return result return process def escape_expression_field(contents: str) -> str: """Escape sequences similar to CWL expressions or param references.""" return contents.replace("${", "$/{").replace("$(", "$/(") def clean_type_ids( cwltype: Union[cwl.ArraySchema, cwl.InputRecordSchema], ) -> Union[cwl.ArraySchema, cwl.InputRecordSchema]: """Simplify type identifiers.""" result = copy.deepcopy(cwltype) if isinstance(result, cwl.ArraySchema): if isinstance(result.items, MutableSequence): for item in result.items: if hasattr(item, "id"): item.id = item.id.split("#")[-1] elif isinstance(result.items, cwl.InputRecordSchema): if result.items.name: result.items.name = result.items.name.split("/")[-1] if result.items.fields: for field in
result.items.fields: field.name = field.name.split("/")[-1] elif isinstance(result, cwl.InputRecordSchema): if result.name: result.name = result.name.split("/")[-1] if result.fields: for field in result.fields: field.name = field.name.split("/")[-1] return result def get_expression( string: str, inputs: CWLObjectType, self: Optional[CWLOutputType] ) -> Optional[str]: """ Find and return a normalized CWL expression, if any. CWL expressions in the $() form are converted to the ${} form. """ if not isinstance(string, str): return None if string.strip().startswith("${"): return string if "$(" in string: runtime: CWLObjectType = { "cores": 0, "ram": 0, "outdir": "/root", "tmpdir": "/tmp", # nosec "outdirSize": 0, "tmpdirSize": 0, } try: do_eval( string, inputs, context=self, requirements=[], outdir="", tmpdir="", resources={}, ) except (WorkflowException, JavascriptException): if ( string[0:2] != "$(" or not string.endswith(")") or len(string.split("$(")) > 2 ): # then it is a string interpolation return cast( str, interpolate( scan=string, rootvars={ "inputs": inputs, "context": self, "runtime": runtime, }, fullJS=True, escaping_behavior=2, convert_to_expression=True, ), ) else: # it is a CWL Expression in $() with no string interpolation return "${return " + string.strip()[2:-1] + ";}" return None def etool_to_cltool( etool: cwl.ExpressionTool, expressionLib: Optional[list[str]] = None ) -> cwl.CommandLineTool: """Convert a ExpressionTool to a CommandLineTool.""" inputs = yaml.comments.CommentedSeq() # preserve the order for inp in etool.inputs: inputs.append( cwl.CommandInputParameter( id=inp.id, label=inp.label, secondaryFiles=inp.secondaryFiles, streamable=inp.streamable, doc=inp.doc, format=inp.format, default=inp.default, type_=inp.type_, extension_fields=inp.extension_fields, loadingOptions=inp.loadingOptions, ) ) outputs = yaml.comments.CommentedSeq() for outp in etool.outputs: outputs.append( cwl.CommandOutputParameter( id=outp.id, label=outp.label, 
secondaryFiles=outp.secondaryFiles, streamable=outp.streamable, doc=outp.doc, format=outp.format, type_=outp.type_, extension_fields=outp.extension_fields, loadingOptions=outp.loadingOptions, ) ) contents = """"use strict"; var inputs=$(inputs); var runtime=$(runtime);""" if expressionLib: contents += "\n" + "\n".join(expressionLib) contents += ( """ var ret = function(){""" + escape_expression_field(etool.expression.strip()[2:-1]) + """}(); process.stdout.write(JSON.stringify(ret));""" ) listing = [cwl.Dirent(entryname="expression.js", entry=contents, writable=None)] iwdr = cwl.InitialWorkDirRequirement(listing) containerReq = cwl.DockerRequirement(dockerPull="node:alpine") softwareHint = cwl.SoftwareRequirement( packages=[cwl.SoftwarePackage(package="nodejs")] ) return cwl.CommandLineTool( inputs=inputs, outputs=outputs, id=etool.id, requirements=[iwdr], hints=[containerReq, softwareHint], label=etool.label, doc=etool.doc, cwlVersion=etool.cwlVersion, baseCommand=["nodejs", "expression.js"], stdout="cwl.output.json", extension_fields=etool.extension_fields, loadingOptions=etool.loadingOptions, ) def traverse( process: Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow], replace_etool: bool, inside: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> tuple[Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow], bool]: """Convert the given process and any subprocesses.""" if not inside and isinstance(process, cwl.CommandLineTool): process = expand_stream_shortcuts(process) wf_inputs = [] wf_outputs = [] step_inputs = [] step_outputs = [] if process.inputs: for inp in process.inputs: inp_id = inp.id.split("#")[-1] step_inputs.append( cwl.WorkflowStepInput( id=inp_id, source=inp_id, extension_fields=inp.extension_fields, loadingOptions=inp.loadingOptions, ) ) wf_inputs.append( cwl.WorkflowInputParameter( id=inp_id, label=inp.label, secondaryFiles=inp.secondaryFiles, streamable=inp.streamable, doc=inp.doc, format=inp.format, 
default=inp.default, type_=inp.type_, extension_fields=inp.extension_fields, loadingOptions=inp.loadingOptions, ) ) if process.outputs: for outp in process.outputs: outp_id = outp.id.split("#")[-1] step_outputs.append(outp_id) wf_outputs.append( cwl.WorkflowOutputParameter( id=outp_id, label=outp.label, secondaryFiles=outp.secondaryFiles, streamable=outp.streamable, doc=outp.doc, format=outp.format, outputSource=f"main/{outp_id}", type_=outp.type_, extension_fields=outp.extension_fields, loadingOptions=outp.loadingOptions, ) ) step = cwl.WorkflowStep( id="#main", in_=step_inputs, out=step_outputs, run=copy.deepcopy(process), ) workflow = cwl.Workflow( inputs=wf_inputs, outputs=wf_outputs, steps=[step], cwlVersion=process.cwlVersion, ) result, modified = traverse_workflow( workflow, replace_etool, skip_command_line1, skip_command_line2 ) if modified: return result, True else: return process, False if isinstance(process, cwl.ExpressionTool) and replace_etool: expression = get_expression(process.expression, empty_inputs(process), None) # Why call get_expression on an ExpressionTool? 
# It normalizes the form of $() CWL expressions into the ${} style if expression: process2 = copy.deepcopy(process) process2.expression = expression else: process2 = process return etool_to_cltool(process2), True if isinstance(process, cwl.Workflow): return traverse_workflow( process, replace_etool, skip_command_line1, skip_command_line2 ) return process, False def load_step( step: cwl.WorkflowStep, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """If the step's Process is not inline, load and process it.""" modified = False if isinstance(step.run, str): step.run, modified = traverse( cwl.load_document(step.run, baseuri=step.loadingOptions.fileuri), replace_etool, True, skip_command_line1, skip_command_line2, ) return modified def generate_etool_from_expr( expr: str, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], no_inputs: bool = False, self_type: Optional[ Union[ cwl.WorkflowInputParameter, cwl.CommandInputParameter, list[Union[cwl.WorkflowInputParameter, cwl.CommandInputParameter]], ] ] = None, # if the "self" input should be a different type than the "result" output extra_processes: Optional[ Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]] ] = None, ) -> cwl.ExpressionTool: """Convert a CWL Expression into an ExpressionTool.""" inputs = yaml.comments.CommentedSeq() if not no_inputs: if not self_type: self_type = target if isinstance(self_type, list): new_type: Union[ list[Union[cwl.ArraySchema, cwl.InputRecordSchema]], Union[cwl.ArraySchema, cwl.InputRecordSchema], ] = [clean_type_ids(t.type_) for t in self_type] else: new_type = clean_type_ids(self_type.type_) inputs.append( cwl.WorkflowInputParameter( id="self", label=self_type.label if not isinstance(self_type, list) else None, secondaryFiles=( self_type.secondaryFiles if not isinstance(self_type, list) else None ), streamable=( self_type.streamable if not isinstance(self_type, list) else None ), doc=self_type.doc if not 
isinstance(self_type, list) else None, format=self_type.format if not isinstance(self_type, list) else None, type_=new_type, extension_fields=( self_type.extension_fields if not isinstance(self_type, list) else None ), loadingOptions=( self_type.loadingOptions if not isinstance(self_type, list) else None ), ) ) outputs = yaml.comments.CommentedSeq() outputs.append( cwl.ExpressionToolOutputParameter( id="result", label=target.label, secondaryFiles=target.secondaryFiles, streamable=target.streamable, doc=target.doc, format=target.format, type_=target.type_, extension_fields=target.extension_fields, loadingOptions=target.loadingOptions, ) ) expression = "${" if not no_inputs: expression += "\n var self=inputs.self;" expression += ( """ return {"result": function(){""" + expr[2:-2] + """}()}; }""" ) inlineJSReq = cwl.InlineJavascriptRequirement( find_expressionLib(extra_processes) if extra_processes else None ) return cwl.ExpressionTool( id="_:" + str(uuid.uuid4()), inputs=inputs, outputs=outputs, expression=expression, requirements=[inlineJSReq], cwlVersion="v1.0", ) def get_input_for_id( name: str, tool: Union[cwl.CommandLineTool, cwl.Workflow] ) -> Optional[cwl.CommandInputParameter]: """Determine the CommandInputParameter for the given input name.""" name = name.split("/")[-1] for inp in cast(list[cwl.CommandInputParameter], tool.inputs): if inp.id and inp.id.split("#")[-1].split("/")[-1] == name: return inp if isinstance(tool, cwl.Workflow) and "/" in name: stepname, stem = name.split("/", 1) for step in tool.steps: if step.id == stepname: result = get_input_for_id(stem, step.run) if result: return result return None def find_expressionLib( processes: Sequence[ Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool, cwl.WorkflowStep] ], ) -> Optional[list[str]]: """ Return the expressionLib from the highest priority InlineJavascriptRequirement. processes: should be in order of least important to most important (Workflow, WorkflowStep, ... 
CommandLineTool/ExpressionTool) """ for process in reversed(copy.copy(processes)): if process.requirements: for req in process.requirements: if isinstance(req, cwl.InlineJavascriptRequirement): return cast(Optional[list[str]], copy.deepcopy(req.expressionLib)) return None def replace_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], source: Optional[Union[str, list[Any]]], replace_etool: bool = False, extra_process: Optional[ Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool] ] = None, source_type: Optional[cwl.CommandInputParameter] = None, ) -> None: """Modify the given workflow, replacing the expr with an standalone ExpressionTool.""" extra_processes: list[ Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool] ] = [workflow] if extra_process: extra_processes.append(extra_process) etool: cwl.ExpressionTool = generate_etool_from_expr( expr, target, source is None, source_type, extra_processes ) if replace_etool: processes: list[Union[cwl.WorkflowStep, cwl.Workflow, cwl.CommandLineTool]] = [ workflow ] if extra_process: processes.append(extra_process) final_tool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool( etool, find_expressionLib(processes) ) else: final_tool = etool inps = [] if source: inps.append(cwl.WorkflowStepInput(id="self", source=source)) workflow.steps.append( cwl.WorkflowStep( id=name, in_=inps, out=[cwl.WorkflowStepOutput("result")], run=final_tool, ) ) def replace_wf_input_ref_with_step_output( workflow: cwl.Workflow, name: str, target: str ) -> None: """Refactor all reference to a workflow input to the specified step output.""" if workflow.steps: for step in workflow.steps: if step.in_: for inp in step.in_: if inp.source: if inp.source == name: inp.source = target if isinstance(inp.source, MutableSequence): for index, source in enumerate(inp.source): if source == name: inp.source[index] = target if workflow.outputs: for outp in 
workflow.outputs: if outp.outputSource: if outp.outputSource == name: outp.outputSource = target if isinstance(outp.outputSource, MutableSequence): for index, outputSource in enumerate(outp.outputSource): if outputSource == name: outp.outputSource[index] = target def empty_inputs( process_or_step: Union[ cwl.CommandLineTool, cwl.WorkflowStep, cwl.ExpressionTool, cwl.Workflow ], parent: Optional[cwl.Workflow] = None, ) -> dict[str, Any]: """Produce a mock input object for the given inputs.""" result = {} if isinstance(process_or_step, cwl.Process): for param in process_or_step.inputs: result[param.id.split("#")[-1]] = example_input(param.type_) else: for param in process_or_step.in_: param_id = param.id.split("/")[-1] if param.source is None and param.valueFrom: result[param_id] = example_input("string") elif param.source is None and param.default: result[param_id] = param.default else: try: result[param_id] = example_input( utils.type_for_source(process_or_step.run, param.source, parent) ) except WorkflowException: pass return result def example_input(some_type: Any) -> Any: """Produce a fake input for the given type.""" # TODO: accept some sort of context object with local custom type definitions if some_type == "Directory": return { "class": "Directory", "location": "https://www.example.com/example", "basename": "example", "listing": [ { "class": "File", "basename": "example.txt", "size": 23, "contents": "hoopla", "nameroot": "example", "nameext": "txt", } ], } if some_type == "File": return { "class": "File", "location": "https://www.example.com/example.txt", "basename": "example.txt", "size": 23, "contents": "hoopla", "nameroot": "example", "nameext": "txt", } if some_type == "int": return 23 if some_type == "string": return "hoopla!" if some_type == "boolean": return True return None EMPTY_FILE: CWLOutputType = { "class": "File", "basename": "em.pty", "nameroot": "em", "nameext": "pty", } TOPLEVEL_SF_EXPR_ERROR = ( "Input '{}'. 
Sorry, CWL Expressions as part of a secondaryFiles " "specification in a Workflow level input or standalone CommandLine Tool " "are not able to be refactored into separate ExpressionTool or " "CommandLineTool steps." ) TOPLEVEL_FORMAT_EXPR_ERROR = ( "Input '{}'. Sorry, CWL Expressions as part of a format " "specification in a Workflow level input are not able to be refactored " "into separate ExpressionTool/CommandLineTool steps." ) PICKVALUE_FIRST_NON_NULL_EXPR = """${ for (let i = 0; i < self.length; i++) { if (self[i] !== null){ return self[i]; } } throw 'pickValue=first_non_null, but no non-null value found: ' + self; } """ PICKVALUE_THE_ONLY_NON_NULL_EXPR = """${ var found = null; self.forEach(function(item) { if (item !== null) { if (found !== null) { throw 'pickValue=the_only_non_null, but multiple non-null values found: ' + self; } else { found = item; } } }) if (found !== null) { return found; } throw 'pickValue=the_only_non_null, but no non-null value found: ' + self; } """ PICKVALUE_ALL_NON_NULL_EXPR = """${ var results = []; self.forEach(function(item) { if (item !== null){ results.push(item); } }) return results; } """ PICKVALUE_ERROR = ( "pickValue '{}' is invalid. 
Should be one of 'first_non_null', " "'the_only_non_null', 'all_non_null'" ) def process_workflow_inputs_and_outputs( workflow: cwl.Workflow, replace_etool: bool ) -> bool: """Do any needed conversions on the given Workflow's inputs and outputs.""" modified = False inputs = empty_inputs(workflow) for index, param in enumerate(workflow.inputs): with SourceLine(workflow.inputs, index, WorkflowException): if param.format and get_expression(param.format, inputs, None): raise SourceLine( param.loadingOptions.original_doc, "format", raise_type=WorkflowException, ).makeError(TOPLEVEL_FORMAT_EXPR_ERROR.format(param.id.split("#")[-1])) if param.secondaryFiles: if hasattr(param.secondaryFiles, "pattern") and get_expression( param.secondaryFiles.pattern, inputs, EMPTY_FILE ): raise SourceLine( param.loadingOptions.original_doc, "secondaryFiles", raise_type=WorkflowException, ).makeError(TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1])) elif isinstance(param.secondaryFiles, MutableSequence): for index2, entry in enumerate(param.secondaryFiles): if get_expression(entry.pattern, inputs, EMPTY_FILE): raise SourceLine( param.loadingOptions.original_doc, index2, raise_type=WorkflowException, ).makeError( f"Entry {index}," + TOPLEVEL_SF_EXPR_ERROR.format(param.id.split("#")[-1]) ) generated_pickValue_steps = [] for index3, param2 in enumerate(workflow.outputs): with SourceLine(workflow.outputs, index3, WorkflowException): if param2.pickValue: if param2.pickValue == "first_non_null": expression = PICKVALUE_FIRST_NON_NULL_EXPR elif param2.pickValue == "the_only_non_null": expression = PICKVALUE_THE_ONLY_NON_NULL_EXPR elif param2.pickValue == "all_non_null": expression = PICKVALUE_ALL_NON_NULL_EXPR else: raise WorkflowException(PICKVALUE_ERROR.format(param2.pickValue)) modified = True etool_id = "_pickValue_workflow_step_{}".format( param2.id.split("#")[-1] ) target_type = copy.deepcopy(param2.type_) if isinstance(target_type, cwl.OutputArraySchema): target_type.name = "" target 
= cwl.WorkflowInputParameter(id=None, type_=target_type) if not isinstance(param2.outputSource, list): sources = param2.outputSource.split("#")[-1] else: sources = [s.split("#")[-1] for s in param2.outputSource] source_type_items = utils.type_for_source(workflow, sources) if isinstance(source_type_items, cwl.ArraySchema): if isinstance(source_type_items.items, list): if "null" not in source_type_items.items: source_type_items.items.append("null") elif source_type_items.items != "null": source_type_items.items = ["null", source_type_items.items] elif isinstance(source_type_items, list): if "null" not in source_type_items: source_type_items.append("null") elif source_type_items != "null": source_type_items = ["null", source_type_items] source_type = cwl.CommandInputParameter(type_=source_type_items) replace_expr_with_etool( expression, etool_id, workflow, target, sources, replace_etool, None, source_type, ) param2.outputSource = f"{etool_id}/result" param2.pickValue = None generated_pickValue_steps.append(etool_id) return modified def process_workflow_reqs_and_hints( workflow: cwl.Workflow, replace_etool: bool ) -> bool: """ Convert any expressions in a workflow's reqs and hints. Each expression will be converted to an additional step. The converted requirement will be copied to all workflow steps that don't have that requirement type. Those affected steps will gain an additional input from the relevant synthesized expression step. """ # TODO: consolidate the generated etools/cltools into a single "_expression_workflow_reqs" step # TODO: support resourceReq.* references to Workflow.inputs? 
# ^ By refactoring replace_expr_etool to allow multiple inputs, # and connecting all workflow inputs to the generated step modified = False inputs = empty_inputs(workflow) generated_res_reqs: list[tuple[str, Union[int, str]]] = [] generated_iwdr_reqs: list[tuple[str, Union[int, str]]] = [] generated_envVar_reqs: list[tuple[str, Union[int, str]]] = [] prop_reqs: tuple[ Union[ type[cwl.EnvVarRequirement], type[cwl.ResourceRequirement], type[cwl.InitialWorkDirRequirement], ], ..., ] = () resourceReq: Optional[cwl.ResourceRequirement] = None envVarReq: Optional[cwl.EnvVarRequirement] = None iwdr: Optional[cwl.InitialWorkDirRequirement] = None if workflow.requirements is not None: for req in cast(list[cwl.ProcessRequirement], workflow.requirements): if req and isinstance(req, cwl.EnvVarRequirement): if req.envDef: for index, envDef in enumerate(req.envDef): if envDef.envValue: expression = get_expression(envDef.envValue, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter( id=None, type_="string", ) etool_id = ( "_expression_workflow_EnvVarRequirement_{}".format( index ) ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) if envVarReq is None: envVarReq = copy.deepcopy(req) prop_reqs += (cwl.EnvVarRequirement,) newEnvDef = copy.deepcopy(envDef) newEnvDef.envValue = f"$(inputs._envDef{index})" envVarReq.envDef[index] = newEnvDef generated_envVar_reqs.append((etool_id, index)) if req and isinstance(req, cwl.ResourceRequirement): for attr in cwl.ResourceRequirement.attrs: this_attr = getattr(req, attr, None) if this_attr: expression = get_expression(this_attr, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter(id=None, type_="long") etool_id = ( "_expression_workflow_ResourceRequirement_{}".format( attr ) ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) if not resourceReq: resourceReq = cwl.ResourceRequirement( 
loadingOptions=workflow.loadingOptions, ) prop_reqs += (cwl.ResourceRequirement,) setattr(resourceReq, attr, f"$(inputs._{attr})") generated_res_reqs.append((etool_id, attr)) if req and isinstance(req, cwl.InitialWorkDirRequirement): if req.listing: if isinstance(req.listing, str): expression = get_expression(req.listing, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter( id=None, type_=cwl.InputArraySchema( ["File", "Directory"], "array", None, None ), ) etool_id = "_expression_workflow_InitialWorkDirRequirement" replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr = cwl.InitialWorkDirRequirement( listing="$(inputs._iwdr_listing)", loadingOptions=workflow.loadingOptions, ) prop_reqs += (cwl.InitialWorkDirRequirement,) else: iwdr = copy.deepcopy(req) for index, entry in enumerate(req.listing): expression = get_expression(entry, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter( id=None, type_=cwl.InputArraySchema( ["File", "Directory"], "array", None, None ), ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = f"$(inputs._iwdr_listing_{index})" generated_iwdr_reqs.append((etool_id, index)) elif isinstance(entry, cwl.Dirent): if entry.entry: expression = get_expression( entry.entry, inputs, None ) if expression: expr: str = expression expr_result = do_eval( ex=entry.entry, jobinput=inputs, requirements=[], outdir="", tmpdir="", resources={}, ) modified = True if ( isinstance(expr_result, Mapping) and "class" in expr_result and ( expr_result["class"] == "File" or expr_result["class"] == "Directory" ) ): target = cwl.WorkflowInputParameter( id=None, type_=expr_result["class"], ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expr, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( "$(inputs._iwdr_listing_{})".format( index ) )
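Every branch above hinges on whether a requirement field holds a plain literal or a CWL expression (the `get_expression` checks). As an illustration only, and not the real `cwl_utils` helper, the `$(...)` / `${...}` convention that decides this can be sketched as:

```python
def looks_like_expression(value: str) -> bool:
    # Sketch only: the real get_expression() also handles mixed strings
    # and distinguishes parameter references from JavaScript bodies.
    value = value.strip()
    if value.startswith("$(") and value.endswith(")"):
        return True  # parameter reference, e.g. $(inputs.threads * 2)
    if value.startswith("${") and value.endswith("}"):
        return True  # JavaScript expression body
    return False
```

Only values for which this kind of check succeeds get hoisted into a synthesized expression step; literals are left in place.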
generated_iwdr_reqs.append( (etool_id, index) ) elif isinstance(expr_result, str): target = cwl.WorkflowInputParameter( id=None, type_=["File"], ) if entry.entryname is None: raise SourceLine( entry.loadingOptions.original_doc, index, raise_type=WorkflowException, ).makeError( f"Entry {index}, " + "Invalid CWL: if 'entry' " "is a string, then entryname must be specified." ) expr = ( '${return {"class": "File", "basename": "' + entry.entryname + '", "contents": (function(){' + expr[2:-1] + "})() }; }" ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expr, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( f"$(inputs._iwdr_listing_{index})" ) generated_iwdr_reqs.append((etool_id, index)) elif entry.entryname: expression = get_expression( entry.entryname, inputs, None ) if expression: modified = True target = cwl.WorkflowInputParameter( id=None, type_="string", ) etool_id = "_expression_workflow_InitialWorkDirRequirement_{}".format( index ) replace_expr_with_etool( expression, etool_id, workflow, target, None, replace_etool, ) iwdr.listing[index] = ( f"$(inputs._iwdr_listing_{index})" ) generated_iwdr_reqs.append((etool_id, index)) if generated_iwdr_reqs: prop_reqs += (cwl.InitialWorkDirRequirement,) else: iwdr = None if envVarReq and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.EnvVarRequirement): continue else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(envVarReq) for entry in generated_envVar_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_envDef{entry[1]}", source=f"{entry[0]}/result", ) ) if resourceReq and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.ResourceRequirement): continue
else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(resourceReq) for entry in generated_res_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_{entry[1]}", source=f"{entry[0]}/result", ) ) if iwdr and workflow.steps: for step in workflow.steps: if step.id.split("#")[-1].startswith("_expression_"): continue if step.requirements: for req in step.requirements: if isinstance(req, cwl.InitialWorkDirRequirement): continue else: step.requirements = yaml.comments.CommentedSeq() step.requirements.append(iwdr) if generated_iwdr_reqs: for entry in generated_iwdr_reqs: step.in_.append( cwl.WorkflowStepInput( id=f"_iwdr_listing_{entry[1]}", source=f"{entry[0]}/result", ) ) else: step.in_.append( cwl.WorkflowStepInput( id="_iwdr_listing", source="_expression_workflow_InitialWorkDirRequirement/result", ) ) if workflow.requirements: workflow.requirements[:] = [ x for x in workflow.requirements if not isinstance(x, prop_reqs) ] return modified def process_level_reqs( process: cwl.CommandLineTool, step: cwl.WorkflowStep, parent: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Convert expressions inside a process into new adjacent steps.""" # This is for reqs inside a Process (CommandLineTool, ExpressionTool) # differences from process_workflow_reqs_and_hints() are: # - the name of the generated ETools/CTools contains the name of the step, not "workflow" # - Generated ETools/CTools are adjacent steps # - Replace the CWL Expression inplace with a CWL parameter reference # - Don't create a new Requirement, nor delete the existing Requirement # - the Process is passed to replace_expr_with_etool for later searching for JS expressionLibs # - in addition to adding the input to the step for the ETool/CTool result, # add it to the Process.inputs as well if not process.requirements: return False modified = False target_process = step.run inputs = empty_inputs(process) generated_res_reqs: list[tuple[str, str]] = []
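`process_level_reqs`, just defined above, names its synthesized sibling steps after the parent step rather than after "workflow": the step name is recovered by stripping the document fragment from the packed step id, and that name is embedded in the generated tool id. A small sketch of that convention (the URI is invented):

```python
def step_name_from_id(step_id: str) -> str:
    # Mirrors step.id.split("#", 1)[-1] in process_level_reqs():
    # drop everything up to and including the first '#'.
    return step_id.split("#", 1)[-1]


def etool_id_for(step_id: str, req_name: str, index: int) -> str:
    # Naming convention for the synthesized expression steps.
    return f"_expression_{step_name_from_id(step_id)}_{req_name}_{index}"
```

This is why two steps running the same tool never collide: each generated id is prefixed with the consuming step's own name.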
generated_iwdr_reqs: list[tuple[str, Union[int, str], Any]] = [] generated_envVar_reqs: list[tuple[str, Union[int, str]]] = [] if not step.id: return False step_name = step.id.split("#", 1)[-1] for req_index, req in enumerate(process.requirements): if req and isinstance(req, cwl.EnvVarRequirement): if req.envDef: for env_index, envDef in enumerate(req.envDef): if envDef.envValue: expression = get_expression(envDef.envValue, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter(id=None, type_="string") etool_id = "_expression_{}_EnvVarRequirement_{}".format( step_name, env_index ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].envDef[ env_index ].envValue = f"$(inputs._envDef{env_index})" generated_envVar_reqs.append((etool_id, env_index)) if req and isinstance(req, cwl.ResourceRequirement): for attr in cwl.ResourceRequirement.attrs: this_attr = getattr(req, attr, None) if this_attr: expression = get_expression(this_attr, inputs, None) if expression: modified = True target = cwl.WorkflowInputParameter(id=None, type_="long") etool_id = "_expression_{}_ResourceRequirement_{}".format( step_name, attr ) replace_clt_hintreq_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) setattr( target_process.requirements[req_index], attr, f"$(inputs._{attr})", ) generated_res_reqs.append((etool_id, attr)) if ( not skip_command_line2 and req and isinstance(req, cwl.InitialWorkDirRequirement) ): if req.listing: if isinstance(req.listing, str): expression = get_expression(req.listing, inputs, None) if expression: modified = True target_type = cwl.InputArraySchema( ["File", "Directory"], "array", None, None ) target = cwl.WorkflowInputParameter(id=None, type_=target_type) etool_id = "_expression_{}_InitialWorkDirRequirement".format( step_name ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, )
target_process.requirements[req_index].listing = ( "$(inputs._iwdr_listing)" ) step.in_.append( cwl.WorkflowStepInput( id="_iwdr_listing", source=f"{etool_id}/result", ) ) add_input_to_process( target_process, "_iwdr_listing", target_type, process.loadingOptions, ) else: for listing_index, entry in enumerate(req.listing): expression = get_expression(entry, inputs, None) if expression: modified = True target_type = cwl.InputArraySchema( ["File", "Directory"], "array", None, None ) target = cwl.WorkflowInputParameter( id=None, type_=target_type, ) etool_id = ( "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].listing[ listing_index ] = f"$(inputs._iwdr_listing_{listing_index})" generated_iwdr_reqs.append( (etool_id, listing_index, target_type) ) elif isinstance(entry, cwl.Dirent): if entry.entry: expression = get_expression(entry.entry, inputs, None) if expression: modified = True if entry.entryname is not None: entryname_expr = get_expression( entry.entryname, inputs, None ) entryname = ( entry.entryname if entryname_expr else f'"{entry.entryname}"' # noqa: B907 ) new_expression = ( "${var result; var entryname = " + entryname + "; var entry = " + entry.entry[2:-1] + """; if (typeof entry === 'string' || entry instanceof String) { result = {"class": "File", "basename": entryname, "contents": entry} ; if (typeof entryname === 'string' || entryname instanceof String) { result.basename = entryname ; } } else { result = entry ; } return result; }""" ) else: new_expression = expression d_target_type = ["File", "Directory"] target = cwl.WorkflowInputParameter( id=None, type_=d_target_type, ) etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) replace_clt_hintreq_expr_with_etool( new_expression, etool_id, parent, target, step, replace_etool, )
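The `new_expression` assembled above wraps a Dirent `entry` expression in JavaScript so that a string result becomes a File literal carrying the requested `entryname`, while an already structured File/Directory result passes through unchanged. The same decision, sketched in Python purely for illustration (the actual string surgery on the JS body is omitted):

```python
def wrap_dirent_result(entry_result, entryname):
    # Mirrors the generated JavaScript: strings become File literals
    # named after entryname; File/Directory objects pass through as-is.
    if isinstance(entry_result, str):
        return {"class": "File", "basename": entryname, "contents": entry_result}
    return entry_result
```

This is what lets a tool write `entry: $(inputs.message)` with an `entryname` and still get a staged file regardless of whether the expression yields text or a File object.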
target_process.requirements[req_index].listing[ listing_index ].entry = "$(inputs._iwdr_listing_{})".format( listing_index ) generated_iwdr_reqs.append( (etool_id, listing_index, d_target_type) ) elif entry.entryname: expression = get_expression( entry.entryname, inputs, None ) if expression: modified = True target = cwl.WorkflowInputParameter( id=None, type_="string", ) etool_id = "_expression_{}_InitialWorkDirRequirement_{}".format( step_name, listing_index ) replace_expr_with_etool( expression, etool_id, parent, target, None, replace_etool, process, ) target_process.requirements[req_index].listing[ listing_index ].entryname = "$(inputs._iwdr_listing_{})".format( listing_index ) generated_iwdr_reqs.append( (etool_id, listing_index, "string") ) for entry in generated_envVar_reqs: name = f"_envDef{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, "string", process.loadingOptions) for entry in generated_res_reqs: name = f"_{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, "long", process.loadingOptions) for entry in generated_iwdr_reqs: name = f"_iwdr_listing_{entry[1]}" step.in_.append(cwl.WorkflowStepInput(id=name, source=f"{entry[0]}/result")) add_input_to_process(target_process, name, entry[2], process.loadingOptions) return modified def add_input_to_process( process: cwl.Process, name: str, inptype: Any, loadingOptions: cwl.LoadingOptions ) -> None: """Add a new InputParameter to the given CommandLineTool.""" if isinstance(process, cwl.CommandLineTool): process.inputs.append( cwl.CommandInputParameter( id=name, type_=inptype, loadingOptions=loadingOptions, ) ) def traverse_CommandLineTool( clt: cwl.CommandLineTool, parent: cwl.Workflow, step: cwl.WorkflowStep, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Extract any CWL Expressions within the given 
CommandLineTool into sibling steps.""" modified = False # don't modify clt, modify step.run target_clt = step.run inputs = empty_inputs(clt) if not step.id: return False step_id = step.id.split("#")[-1] if clt.arguments and not skip_command_line1: for index, arg in enumerate(clt.arguments): if isinstance(arg, str): expression = get_expression(arg, inputs, None) if expression: modified = True inp_id = f"_arguments_{index}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "Any" target = cwl.WorkflowInputParameter(id=None, type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) target_clt.arguments[index] = cwl.CommandLineBinding( valueFrom=f"$(inputs.{inp_id})" ) target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=target_type, ) ) step.in_.append( cwl.WorkflowStepInput( f"{etool_id}/result", None, inp_id, None, None ) ) remove_JSReq(target_clt, skip_command_line1) elif isinstance(arg, cwl.CommandLineBinding) and arg.valueFrom: expression = get_expression(arg.valueFrom, inputs, None) if expression: modified = True inp_id = f"_arguments_{index}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "Any" target = cwl.WorkflowInputParameter(id=None, type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) target_clt.arguments[index].valueFrom = "$(inputs.{})".format( inp_id ) target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=target_type, ) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) remove_JSReq(target_clt, skip_command_line1) for streamtype in "stdout", "stderr": # add 'stdin' for v1.1 version stream_value = getattr(clt, streamtype) if stream_value: expression = get_expression(stream_value, inputs, None) if expression: modified = True inp_id = f"_{streamtype}" etool_id = f"_expression_{step_id}{inp_id}" target_type = "string" target = cwl.WorkflowInputParameter(id=None, 
type_=target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) setattr(target_clt, streamtype, f"$(inputs.{inp_id})") target_clt.inputs.append( cwl.CommandInputParameter(id=inp_id, type_=target_type) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) for inp in clt.inputs: if not skip_command_line1 and inp.inputBinding and inp.inputBinding.valueFrom: expression = get_expression( inp.inputBinding.valueFrom, inputs, example_input(inp.type_) ) if expression: modified = True self_id = inp.id.split("#")[-1] inp_id = f"_{self_id}_valueFrom" etool_id = f"_expression_{step_id}{inp_id}" replace_step_clt_expr_with_etool( expression, etool_id, parent, inp, step, replace_etool, self_id ) inp.inputBinding.valueFrom = f"$(inputs.{inp_id})" target_clt.inputs.append( cwl.CommandInputParameter(id=inp_id, type_=inp.type_) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) for outp in clt.outputs: if outp.outputBinding: if outp.outputBinding.glob: expression = get_expression(outp.outputBinding.glob, inputs, None) if expression: modified = True inp_id = "_{}_glob".format(outp.id.split("#")[-1]) etool_id = f"_expression_{step_id}{inp_id}" glob_target_type = ["string", cwl.ArraySchema("string", "array")] target = cwl.WorkflowInputParameter(id=None, type_=glob_target_type) replace_step_clt_expr_with_etool( expression, etool_id, parent, target, step, replace_etool ) outp.outputBinding.glob = f"$(inputs.{inp_id})" target_clt.inputs.append( cwl.CommandInputParameter( id=inp_id, type_=glob_target_type, ) ) step.in_.append( cwl.WorkflowStepInput(id=inp_id, source=f"{etool_id}/result") ) if outp.outputBinding.outputEval and not skip_command_line2: self: CWLOutputType = [ { "class": "File", "basename": "base.name", "nameroot": "base", "nameext": "name", "path": "/tmp/base.name", # nosec "dirname": "/tmp", # nosec } ] if outp.outputBinding.loadContents: cast(dict[Any, Any], 
self)[0]["contents"] = "stuff" expression = get_expression(outp.outputBinding.outputEval, inputs, self) if expression: modified = True outp_id = outp.id.split("#")[-1] inp_id = f"_{outp_id}_outputEval" etool_id = f"expression{inp_id}" sub_wf_outputs = cltool_step_outputs_to_workflow_outputs( step, etool_id, outp_id ) self_type = cwl.WorkflowInputParameter( id=None, type_=cwl.InputArraySchema("File", "array", None, None), ) etool = generate_etool_from_expr( expression, outp, False, self_type, [clt, step, parent] ) if outp.outputBinding.loadContents: etool.inputs[0].type_.inputBinding = cwl.CommandLineBinding( loadContents=True ) etool.inputs.extend(cltool_inputs_to_etool_inputs(clt)) sub_wf_inputs = cltool_inputs_to_etool_inputs(clt) orig_step_inputs = copy.deepcopy(step.in_) for orig_step_input in orig_step_inputs: orig_step_input.id = orig_step_input.id.split("/")[-1] if isinstance(orig_step_input.source, MutableSequence): for index, source in enumerate(orig_step_input.source): orig_step_input.source[index] = source.split("#")[-1] else: orig_step_input.source = orig_step_input.source.split("#")[ -1 ] orig_step_inputs[:] = [ x for x in orig_step_inputs if not x.id.startswith("_") ] for inp in orig_step_inputs: inp.source = inp.id inp.linkMerge = None if replace_etool: processes = [parent] final_etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = ( etool_to_cltool(etool, find_expressionLib(processes)) ) else: final_etool = etool etool_step = cwl.WorkflowStep( id=etool_id, in_=orig_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=final_etool, scatterMethod=step.scatterMethod, ) new_clt_step = copy.copy( step ) # a deepcopy would be convenient, but params2.cwl gives it problems new_clt_step.id = new_clt_step.id.split("#")[-1] new_clt_step.run = copy.copy(step.run) new_clt_step.run.id = None remove_JSReq(new_clt_step.run, skip_command_line1) for new_outp in new_clt_step.run.outputs: if new_outp.id.split("#")[-1] == outp_id: if isinstance( new_outp, ( 
cwl.WorkflowOutputParameter, cwl.ExpressionToolOutputParameter, ), ): new_outp.type_ = cwl.OutputArraySchema( items="File", type_="array" ) elif isinstance(new_outp, cwl.CommandOutputParameter): if new_outp.outputBinding: new_outp.outputBinding.outputEval = None new_outp.outputBinding.loadContents = None new_outp.type_ = cwl.CommandOutputArraySchema( items="File", type_="array", ) else: raise Exception( "Unimplemented OutputParameter type: %s", type(new_outp), ) new_clt_step.in_ = copy.deepcopy(step.in_) for inp in new_clt_step.in_: inp.id = inp.id.split("/")[-1] inp.source = inp.id inp.linkMerge = None for index, out in enumerate(new_clt_step.out): new_clt_step.out[index] = out.split("/")[-1] for tool_inp in new_clt_step.run.inputs: tool_inp.id = tool_inp.id.split("#")[-1] for tool_out in new_clt_step.run.outputs: tool_out.id = tool_out.id.split("#")[-1] sub_wf_steps = [new_clt_step, etool_step] sub_workflow = cwl.Workflow( inputs=sub_wf_inputs, outputs=sub_wf_outputs, steps=sub_wf_steps, cwlVersion=parent.cwlVersion, ) if step.scatter: new_clt_step.scatter = None step.run = sub_workflow rename_step_source( sub_workflow, f"{step_id}/{outp_id}", f"{etool_id}/result", ) orig_step_inputs.append( cwl.WorkflowStepInput(id="self", source=f"{step_id}/{outp_id}") ) if not parent.requirements: parent.requirements = [cwl.SubworkflowFeatureRequirement()] else: has_sub_wf_req = False for req in parent.requirements: if isinstance(req, cwl.SubworkflowFeatureRequirement): has_sub_wf_req = True if not has_sub_wf_req: parent.requirements.append( cwl.SubworkflowFeatureRequirement() ) return modified def rename_step_source(workflow: cwl.Workflow, old: str, new: str) -> None: """Update step source names to the new name.""" def simplify_wf_id(uri: str) -> str: return uri.split("#")[-1].split("/", 1)[1] def simplify_step_id(uri: str) -> str: return uri.split("#")[-1] for wf_outp in workflow.outputs: if wf_outp.outputSource and simplify_wf_id(wf_outp.outputSource) == old: 
wf_outp.outputSource = new for step in workflow.steps: if step.in_: for inp in step.in_: if inp.source: if isinstance(inp.source, str): source_id = ( simplify_step_id(inp.source) if "#" in inp.source else inp.source ) if source_id == old: inp.source = new else: for index, source in enumerate(inp.source): if simplify_step_id(source) == old: inp.source[index] = new def remove_JSReq( process: Union[cwl.CommandLineTool, cwl.WorkflowStep, cwl.Workflow], skip_command_line1: bool, ) -> None: """Since the InlineJavascriptRequirement is no longer needed, remove it.""" if skip_command_line1 and isinstance(process, cwl.CommandLineTool): return if process.hints: process.hints[:] = [ hint for hint in process.hints if not isinstance(hint, cwl.InlineJavascriptRequirement) ] if not process.hints: process.hints = None if process.requirements: process.requirements[:] = [ req for req in process.requirements if not isinstance(req, cwl.InlineJavascriptRequirement) ] if not process.requirements: process.requirements = None def replace_step_clt_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.WorkflowInputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> None: """Convert a step level CWL Expression to a sibling expression step.""" etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] = [x for x in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) def
replace_clt_hintreq_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: cwl.WorkflowInputParameter, step: cwl.WorkflowStep, replace_etool: bool, self_name: Optional[str] = None, ) -> Union[cwl.CommandLineTool, cwl.ExpressionTool]: """Factor out an expression inside a CommandLineTool req or hint into a sibling step.""" # Same as replace_step_clt_expr_with_etool or different? etool_inputs = cltool_inputs_to_etool_inputs(step.run) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, self_name, step.run, [workflow] ) if replace_etool: processes = [workflow] etool: Union[cwl.CommandLineTool, cwl.ExpressionTool] = etool_to_cltool( temp_etool, find_expressionLib(processes) ) else: etool = temp_etool wf_step_inputs = copy.deepcopy(step.in_) for wf_step_input in wf_step_inputs: wf_step_input.id = wf_step_input.id.split("/")[-1] wf_step_inputs[:] = [x for x in wf_step_inputs if not x.id.startswith("_")] workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, ) ) return etool def cltool_inputs_to_etool_inputs( tool: cwl.CommandLineTool, ) -> list[cwl.WorkflowInputParameter]: """Copy CommandLineTool input objects into the equivalent ExpressionTool input objects.""" inputs = yaml.comments.CommentedSeq() if tool.inputs: for clt_inp in tool.inputs: clt_inp_id = clt_inp.id.split("#")[-1].split("/")[-1] if not clt_inp_id.startswith("_"): inputs.append( cwl.WorkflowInputParameter( id=clt_inp_id, label=clt_inp.label, secondaryFiles=clt_inp.secondaryFiles, streamable=clt_inp.streamable, doc=clt_inp.doc, format=clt_inp.format, default=clt_inp.default, type_=clt_inp.type_, extension_fields=clt_inp.extension_fields, loadingOptions=clt_inp.loadingOptions, ) ) return inputs def cltool_step_outputs_to_workflow_outputs( cltool_step: cwl.WorkflowStep, etool_step_id: str, etool_out_id: str ) -> list[cwl.OutputParameter]: """ Copy CommandLineTool outputs into the equivalent Workflow output 
parameters. Connects the outputSources for each of the new output parameters to the step they came from. """ outputs = yaml.comments.CommentedSeq() if not cltool_step.id: raise WorkflowException(f"Missing step id from {cltool_step}.") default_step_id = cltool_step.id.split("#")[-1] if cltool_step.run.outputs: for clt_out in cltool_step.run.outputs: clt_out_id = clt_out.id.split("#")[-1].split("/")[-1] if clt_out_id == etool_out_id: outputSource = f"{etool_step_id}/result" else: outputSource = f"{default_step_id}/{clt_out_id}" if not clt_out_id.startswith("_"): outputs.append( cwl.WorkflowOutputParameter( id=clt_out_id, label=clt_out.label, secondaryFiles=clt_out.secondaryFiles, streamable=clt_out.streamable, doc=clt_out.doc, format=clt_out.format, outputSource=outputSource, type_=clt_out.type_, extension_fields=clt_out.extension_fields, loadingOptions=clt_out.loadingOptions, ) ) return outputs def generate_etool_from_expr2( expr: str, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], inputs: Sequence[Union[cwl.WorkflowInputParameter, cwl.CommandInputParameter]], self_name: Optional[str] = None, process: Optional[Union[cwl.CommandLineTool, cwl.ExpressionTool]] = None, extra_processes: Optional[ Sequence[Union[cwl.Workflow, cwl.WorkflowStep, cwl.CommandLineTool]] ] = None, ) -> cwl.ExpressionTool: """Generate an ExpressionTool to achieve the same result as the given expression.""" outputs = yaml.comments.CommentedSeq() outputs.append( cwl.ExpressionToolOutputParameter( id="result", label=target.label, secondaryFiles=target.secondaryFiles, streamable=target.streamable, doc=target.doc, format=target.format, type_=target.type_, ) ) expression = "${" if self_name: expression += f"\n var self=inputs.{self_name};" expression += ( """ return {"result": function(){""" + expr[2:-2] + """}()}; }""" ) hints = None procs: list[ Union[cwl.CommandLineTool, cwl.ExpressionTool, cwl.Workflow, cwl.WorkflowStep] ] = [] if process: procs.append(process) if 
extra_processes: procs.extend(extra_processes) inlineJSReq = cwl.InlineJavascriptRequirement(find_expressionLib(procs)) reqs = [inlineJSReq] if process: if process.hints: hints = copy.deepcopy(process.hints) hints[:] = [ x for x in hints if not isinstance(x, cwl.InitialWorkDirRequirement) ] if process.requirements: reqs.extend(copy.deepcopy(process.requirements)) reqs[:] = [ x for x in reqs if not isinstance(x, cwl.InitialWorkDirRequirement) ] return cwl.ExpressionTool( id="_:" + str(uuid.uuid4()), inputs=inputs, outputs=outputs, expression=expression, requirements=reqs, cwlVersion="v1.0", ) def traverse_step( step: cwl.WorkflowStep, parent: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> bool: """Process the given WorkflowStep.""" modified = False inputs = empty_inputs(step, parent) if not step.id: return False step_id = step.id.split("#")[-1] original_process = copy.deepcopy(step.run) original_step_ins = copy.deepcopy(step.in_) for inp in step.in_: if inp.valueFrom: if not inp.source: self = None else: if isinstance(inp.source, MutableSequence): self = [] for source in inp.source: if not step.scatter: self.append( example_input( utils.type_for_source(parent, source.split("#")[-1]) ) ) else: scattered_source_type = utils.type_for_source( parent, source ) if isinstance(scattered_source_type, list): for stype in scattered_source_type: self.append(example_input(stype.type_)) else: self.append(example_input(scattered_source_type.type_)) else: if not step.scatter: self = example_input( utils.type_for_source(parent, inp.source.split("#")[-1]) ) else: scattered_source_type2 = utils.type_for_source( parent, inp.source ) if isinstance(scattered_source_type2, list): self = example_input(scattered_source_type2[0].type_) else: self = example_input(scattered_source_type2.type_) expression = get_expression(inp.valueFrom, inputs, self) if expression: modified = True etool_id = "_expression_{}_{}".format(step_id, inp.id.split("/")[-1]) 
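The expression assembly in `generate_etool_from_expr2` above splices the original body (`expr[2:-2]`, i.e. with the surrounding `${` and `}` stripped) into a function whose return value is packed into a `{"result": ...}` object, optionally binding `self` first. A sketch of that string assembly, assuming as the code above does that `expr` arrives in `${ ... }` form:

```python
def wrap_expression(expr: str, self_name=None) -> str:
    # Re-wrap a "${ ... }" expression body so the generated
    # ExpressionTool returns its value under the "result" key,
    # mirroring the concatenation in generate_etool_from_expr2.
    wrapped = "${"
    if self_name is not None:
        wrapped += f"\n  var self=inputs.{self_name};"
    wrapped += '\n  return {"result": function(){' + expr[2:-2] + '}()};\n}'
    return wrapped
```

The `result` key is what every synthesized step exposes as its single `result` output, which is why all the `source=f"{etool_id}/result"` wirings elsewhere in this module work.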
target = get_input_for_id(inp.id, original_process) if not target: raise WorkflowException("target not found") input_source_id = None source_type: Optional[ Union[list[cwl.WorkflowInputParameter], cwl.WorkflowInputParameter] ] = None if inp.source: if isinstance(inp.source, MutableSequence): input_source_id = [] source_types: list[cwl.WorkflowInputParameter] = [] for source in inp.source: source_id = source.split("#")[-1] input_source_id.append(source_id) temp_type = utils.type_for_source( step.run, source_id, parent ) if isinstance(temp_type, list): for ttype in temp_type: if ttype not in source_types: source_types.append(ttype) else: if temp_type not in source_types: source_types.append(temp_type) source_type = cwl.WorkflowInputParameter( id=None, type_=cwl.ArraySchema(source_types, "array"), ) else: input_source_id = inp.source.split("#")[-1] source_type = utils.param_for_source_id( step.run, input_source_id, parent ) # target.id = target.id.split('#')[-1] if isinstance(original_process, cwl.ExpressionTool): found_JSReq = False reqs: list[cwl.ProcessRequirement] = [] if original_process.hints: reqs.extend(original_process.hints) if original_process.requirements: reqs.extend(original_process.requirements) for req in reqs: if isinstance(req, cwl.InlineJavascriptRequirement): found_JSReq = True if not found_JSReq: if not step.run.requirements: step.run.requirements = [] expr_lib = find_expressionLib([parent]) step.run.requirements.append( cwl.InlineJavascriptRequirement(expr_lib) ) replace_step_valueFrom_expr_with_etool( expression, etool_id, parent, target, step, inp, original_process, original_step_ins, input_source_id, replace_etool, source_type, ) inp.valueFrom = None inp.source = f"{etool_id}/result" if step.when: expression = get_expression(string=step.when, inputs=inputs, self=None) if expression: modified = True replace_step_when_expr_with_etool( expression, parent, step, original_step_ins, replace_etool ) # TODO: skip or special process for sub workflows? 
process_modified = process_level_reqs( original_process, step, parent, replace_etool, skip_command_line1, skip_command_line2, ) if process_modified: modified = True if isinstance(original_process, cwl.CommandLineTool): clt_modified = traverse_CommandLineTool( original_process, parent, step, replace_etool, skip_command_line1, skip_command_line2, ) if clt_modified: modified = True return modified def workflow_step_to_WorkflowInputParameters( step_ins: list[cwl.WorkflowStepInput], parent: cwl.Workflow, except_in_id: str ) -> list[cwl.WorkflowInputParameter]: """Create WorkflowInputParameters to match the given WorkflowStep inputs.""" params = [] for inp in step_ins: if not inp.id: continue inp_id = inp.id.split("#")[-1].split("/")[-1] if inp.source and inp_id != except_in_id: param = copy.deepcopy( utils.param_for_source_id(parent, sourcenames=inp.source) ) if isinstance(param, list): for p in param: p.id = inp_id p.type_ = clean_type_ids(p.type_) params.append(p) else: param.id = inp_id param.type_ = clean_type_ids(param.type_) params.append(param) return params def replace_step_valueFrom_expr_with_etool( expr: str, name: str, workflow: cwl.Workflow, target: Union[cwl.CommandInputParameter, cwl.WorkflowInputParameter], step: cwl.WorkflowStep, step_inp: cwl.WorkflowStepInput, original_process: Union[cwl.CommandLineTool, cwl.ExpressionTool], original_step_ins: list[cwl.WorkflowStepInput], source: Optional[Union[str, list[str]]], replace_etool: bool, source_type: Optional[ Union[cwl.WorkflowInputParameter, list[cwl.WorkflowInputParameter]] ] = None, ) -> None: """Replace a WorkflowStep level 'valueFrom' expression with a sibling ExpressionTool step.""" if not step_inp.id: raise WorkflowException(f"Missing id in {step_inp}.") step_inp_id = step_inp.id.split("/")[-1] etool_inputs = workflow_step_to_WorkflowInputParameters( original_step_ins, workflow, step_inp_id ) if source: source_param = cwl.WorkflowInputParameter(id="self", type_="Any") # TODO: would be nicer to 
derive a proper type; but in the face of linkMerge, this is easier for now etool_inputs.append(source_param) temp_etool = generate_etool_from_expr2( expr, target, etool_inputs, "self" if source else None, original_process, [workflow, step], ) if replace_etool: processes: list[ Union[ cwl.Workflow, cwl.CommandLineTool, cwl.ExpressionTool, cwl.WorkflowStep ] ] = [ workflow, step, ] cltool = etool_to_cltool(temp_etool, find_expressionLib(processes)) etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = cltool else: etool = temp_etool wf_step_inputs = copy.deepcopy(original_step_ins) if source: wf_step_inputs.append(cwl.WorkflowStepInput(id="self", source=step_inp.source)) for wf_step_input in wf_step_inputs: if not wf_step_input.id: continue wf_step_input.id = wf_step_input.id.split("/")[-1] if wf_step_input.valueFrom: wf_step_input.valueFrom = None if wf_step_input.source: if isinstance(wf_step_input.source, MutableSequence): for index, inp_source in enumerate(wf_step_input.source): wf_step_input.source[index] = inp_source.split("#")[-1] else: wf_step_input.source = wf_step_input.source.split("#")[-1] wf_step_inputs[:] = [ x for x in wf_step_inputs if x.id and not (x.id.startswith("_") or x.id.endswith(step_inp_id)) ] scatter = copy.deepcopy(step.scatter) if isinstance(scatter, str): scatter = [scatter] if isinstance(scatter, MutableSequence): for index, entry in enumerate(scatter): scatter[index] = entry.split("/")[-1] if scatter and step_inp_id in scatter: scatter = ["self"] # do we still need to scatter? 
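Before appending the new sibling ExpressionTool step, `replace_step_valueFrom_expr_with_etool` prunes the copied step inputs: synthetic ids (prefixed with `_`) and the input whose `valueFrom` is being extracted are dropped. A simplified sketch of that filter over plain strings (the function name and example ids are illustrative only):

```python
def prune_step_inputs(input_ids: list[str], target_id: str) -> list[str]:
    """Drop "_"-prefixed synthetic inputs and the input being replaced.

    Simplified from the ``wf_step_inputs[:] = [...]`` list rewrite above,
    which operates on WorkflowStepInput objects rather than bare strings.
    """
    return [
        i
        for i in input_ids
        if i and not (i.startswith("_") or i.endswith(target_id))
    ]


print(prune_step_inputs(["file1", "_when", "threshold"], "threshold"))  # -> ['file1']
```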
else: scatter = None workflow.steps.append( cwl.WorkflowStep( id=name, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, scatter=scatter, scatterMethod=step.scatterMethod, ) ) def replace_step_when_expr_with_etool( expr: str, workflow: cwl.Workflow, step: cwl.WorkflowStep, original_step_ins: list[cwl.WorkflowStepInput], replace_etool: bool, ) -> None: """Replace a WorkflowStep level 'when' expression with a sibling ExpressionTool step.""" if not step.id: raise WorkflowException(f"Missing id from {step}.") etool_id = "_when_expression_{}".format(step.id.split("#")[-1]) etool_inputs = workflow_step_to_WorkflowInputParameters( original_step_ins, workflow, "" ) temp_etool = generate_etool_from_expr2( expr, cwl.WorkflowInputParameter(id=None, type_="boolean"), etool_inputs, None, None, [workflow, step], ) if replace_etool: processes: list[ Union[ cwl.Workflow, cwl.CommandLineTool, cwl.ExpressionTool, cwl.WorkflowStep ] ] = [ workflow, step, ] cltool = etool_to_cltool(temp_etool, find_expressionLib(processes)) etool: Union[cwl.ExpressionTool, cwl.CommandLineTool] = cltool else: etool = temp_etool wf_step_inputs = copy.deepcopy(original_step_ins) for wf_step_input in wf_step_inputs: if not wf_step_input.id: continue wf_step_input.id = wf_step_input.id.split("/")[-1] if wf_step_input.source: if isinstance(wf_step_input.source, MutableSequence): for index, inp_source in enumerate(wf_step_input.source): wf_step_input.source[index] = inp_source.split("#")[-1] else: wf_step_input.source = wf_step_input.source.split("#")[-1] wf_step_inputs[:] = [x for x in wf_step_inputs if x.id and not x.id.startswith("_")] scatter = copy.deepcopy(step.scatter) if isinstance(scatter, str): scatter = [scatter] if isinstance(scatter, MutableSequence): for index, entry in enumerate(scatter): scatter[index] = entry.split("/")[-1] scatter = step.scatter workflow.steps.append( cwl.WorkflowStep( id=etool_id, in_=wf_step_inputs, out=[cwl.WorkflowStepOutput("result")], run=etool, 
scatter=scatter, scatterMethod=step.scatterMethod, ) ) step.when = "$(inputs._when)" step.in_.append(cwl.WorkflowStepInput(id="_when", source=f"{etool_id}/result")) def traverse_workflow( workflow: cwl.Workflow, replace_etool: bool, skip_command_line1: bool, skip_command_line2: bool, ) -> tuple[cwl.Workflow, bool]: """Traverse a workflow, processing each step.""" modified = False for index, step in enumerate(workflow.steps): if isinstance(step.run, cwl.ExpressionTool) and replace_etool: workflow.steps[index].run = etool_to_cltool(step.run) modified = True else: step_modified = load_step( step, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True for step in workflow.steps: if not step.id.startswith("_expression"): step_modified = traverse_step( step, workflow, replace_etool, skip_command_line1, skip_command_line2 ) if step_modified: modified = True if process_workflow_inputs_and_outputs(workflow, replace_etool): modified = True if process_workflow_reqs_and_hints(workflow, replace_etool): modified = True if workflow.requirements: workflow.requirements[:] = [ x for x in workflow.requirements if not isinstance( x, (cwl.InlineJavascriptRequirement, cwl.StepInputExpressionRequirement) ) ] else: workflow.requirements = None return workflow, modified ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/docker_extract.py0000755000175100001660000001114714753463342020455 0ustar00runnerdocker#!/usr/bin/env python3 # SPDX-License-Identifier: Apache-2.0 import argparse import os import sys from collections.abc import Iterator from typing import cast import ruamel.yaml import cwl_utils.parser as cwl from cwl_utils.image_puller import ( DockerImagePuller, ImagePuller, SingularityImagePuller, ) def arg_parser() -> argparse.ArgumentParser: """Argument parser.""" parser = argparse.ArgumentParser( description="Save container images specified in a CWL document (Workflow or 
CommandLineTool). " "For CWL Workflows, all steps will also be searched (recursively)." ) parser.add_argument( "input", help="Input CWL document (CWL Workflow or CWL CommandLineTool)" ) parser.add_argument("--dir", help="Directory in which to save images") parser.add_argument( "-s", "--singularity", help="Use singularity to pull the image", action="store_true", ) parser.add_argument( "--container-engine", dest="container_engine", help="Specify which command to use to run OCI containers. " "Defaults to 'docker' (or 'singularity' if --singularity/-s is passed).", ) parser.add_argument( "--force-download", help="Force pulling a newer container.", action="store_true" ) return parser def run(args: argparse.Namespace) -> list[cwl.DockerRequirement]: """Extract the docker reqs and download them using Singularity or Docker.""" if args.singularity and not args.dir: print("Error! Must specify --dir if using --singularity") sys.exit(1) if args.dir: os.makedirs(args.dir, exist_ok=True) top = cwl.load_document_by_uri(args.input) reqs: list[cwl.DockerRequirement] = [] for req in traverse(top): reqs.append(req) if not req.dockerPull: print( "Unable to save image from due to lack of 'dockerPull':", file=sys.stderr, ) yaml = ruamel.yaml.YAML() yaml.dump(req.save(), sys.stderr) continue if args.singularity: image_puller: ImagePuller = SingularityImagePuller( req.dockerPull, args.dir, ( args.container_engine if args.container_engine is not None else "singularity" ), args.force_download, ) else: image_puller = DockerImagePuller( req.dockerPull, args.dir, ( args.container_engine if args.container_engine is not None else "docker" ), args.force_download, ) image_puller.save_docker_image() return reqs def extract_docker_requirements( process: cwl.Process, ) -> Iterator[cwl.DockerRequirement]: """Yield an iterator of the docker reqs, normalizing the pull request.""" for req in extract_docker_reqs(process): if isinstance(req.dockerPull, str) and ":" not in req.dockerPull: req.dockerPull += 
":latest" yield req def extract_docker_reqs(process: cwl.Process) -> Iterator[cwl.DockerRequirement]: """For the given process, extract the DockerRequirement(s).""" if process.requirements: for req in process.requirements: if isinstance(req, cwl.DockerRequirementTypes): yield req if process.hints: for req in process.hints: if isinstance(req, cwl.DockerRequirementTypes): yield req def traverse(process: cwl.Process) -> Iterator[cwl.DockerRequirement]: """Yield the iterator for the docker reqs, including an workflow steps.""" yield from extract_docker_requirements(process) if isinstance(process, cwl.WorkflowTypes): yield from traverse_workflow(process) def get_process_from_step(step: cwl.WorkflowStep) -> cwl.Process: """Return the process for this step, loading it if necessary.""" if isinstance(step.run, str): return cast(cwl.Process, cwl.load_document_by_uri(step.run)) return cast(cwl.Process, step.run) def traverse_workflow(workflow: cwl.Workflow) -> Iterator[cwl.DockerRequirement]: """Iterate over the steps of this workflow, yielding the docker reqs.""" for step in workflow.steps: yield from extract_docker_reqs(step) yield from traverse(get_process_from_step(step)) def main() -> int: """Command line entry point.""" run(arg_parser().parse_args(sys.argv[1:])) return 0 if __name__ == "__main__": sys.exit(main()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/errors.py0000644000175100001660000000166114753463342016765 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Common Exceptions.""" class ArrayMissingItems(BaseException): """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" class JavascriptException(Exception): pass class MissingKeyField(BaseException): """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" class MissingTypeName(BaseException): """From 
https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" class RecordMissingFields(BaseException): """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" class SubstitutionError(Exception): pass class WorkflowException(Exception): pass class GraphTargetMissingException(WorkflowException): """When a $graph is encountered and there is no target and no main/#main.""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/expression.py0000644000175100001660000002522514753463342017652 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """CWL Expression parsing.""" import asyncio import copy import inspect import json from collections.abc import Awaitable, MutableMapping from typing import Any, Optional, Union, cast from schema_salad.utils import json_dumps from cwl_utils.errors import JavascriptException, SubstitutionError, WorkflowException from cwl_utils.loghandler import _logger from cwl_utils.sandboxjs import JSEngine, default_timeout, get_js_engine, param_re from cwl_utils.types import CWLObjectType, CWLOutputType from cwl_utils.utils import bytes2str_in_dicts def _convert_dumper(string: str) -> str: return f"{json.dumps(string)} + " def scanner(scan: str) -> Optional[tuple[int, int]]: """Find JS relevant punctuation in a string.""" DEFAULT = 0 DOLLAR = 1 PAREN = 2 BRACE = 3 SINGLE_QUOTE = 4 DOUBLE_QUOTE = 5 BACKSLASH = 6 i = 0 stack = [DEFAULT] start = 0 while i < len(scan): state = stack[-1] c = scan[i] if state == DEFAULT: if c == "$": stack.append(DOLLAR) elif c == "\\": stack.append(BACKSLASH) elif state == BACKSLASH: stack.pop() if stack[-1] == DEFAULT: return i - 1, i + 1 elif state == DOLLAR: if c == "(": start = i - 1 stack.append(PAREN) elif c == "{": start = i - 1 stack.append(BRACE) else: stack.pop() i -= 1 elif state == PAREN: if c == "(": stack.append(PAREN) elif c == ")": stack.pop() if 
stack[-1] == DOLLAR: return start, i + 1 elif c == "'": stack.append(SINGLE_QUOTE) elif c == '"': stack.append(DOUBLE_QUOTE) elif state == BRACE: if c == "{": stack.append(BRACE) elif c == "}": stack.pop() if stack[-1] == DOLLAR: return start, i + 1 elif c == "'": stack.append(SINGLE_QUOTE) elif c == '"': stack.append(DOUBLE_QUOTE) elif state == SINGLE_QUOTE: if c == "'": stack.pop() elif c == "\\": stack.append(BACKSLASH) elif state == DOUBLE_QUOTE: if c == '"': stack.pop() elif c == "\\": stack.append(BACKSLASH) i += 1 if len(stack) > 1 and not (len(stack) == 2 and stack[1] in (BACKSLASH, DOLLAR)): raise SubstitutionError( "Substitution error, unfinished block starting at position {}: '{}' stack was {}".format( start, scan[start:], stack ) ) return None def evaluator( js_engine: JSEngine, ex: str, obj: CWLObjectType, jslib: str, fullJS: bool, **kwargs: Any, ) -> Optional[CWLOutputType]: js_engine = js_engine or get_js_engine() expression_parse_exception = None if (match := param_re.match(ex)) is not None: first_symbol = match.group(1) first_symbol_end = match.end(1) if first_symbol_end + 1 == len(ex) and first_symbol == "null": return None try: if first_symbol not in obj: raise WorkflowException("%s is not defined" % first_symbol) if inspect.iscoroutinefunction(js_engine.regex_eval): return asyncio.get_event_loop().run_until_complete( cast( Awaitable[CWLOutputType], js_engine.regex_eval( first_symbol, ex[first_symbol_end:-1], cast(CWLOutputType, obj[first_symbol]), **kwargs, ), ) ) else: return cast( CWLOutputType, js_engine.regex_eval( first_symbol, ex[first_symbol_end:-1], cast(CWLOutputType, obj[first_symbol]), **kwargs, ), ) except WorkflowException as werr: expression_parse_exception = werr if fullJS: if inspect.iscoroutinefunction(js_engine.eval): return asyncio.get_event_loop().run_until_complete( cast(Awaitable[CWLOutputType], js_engine.eval(ex, jslib, **kwargs)) ) else: return cast(CWLOutputType, js_engine.eval(ex, jslib, **kwargs)) else: if 
expression_parse_exception is not None: raise JavascriptException( "Syntax error in parameter reference '%s': %s. This could be " "due to using Javascript code without specifying " "InlineJavascriptRequirement." % (ex[1:-1], expression_parse_exception) ) else: raise JavascriptException( "Syntax error in parameter reference '%s'. This could be due " "to using Javascript code without specifying " "InlineJavascriptRequirement." % ex ) def interpolate( scan: str, rootvars: CWLObjectType, jslib: str = "", fullJS: bool = False, strip_whitespace: bool = True, escaping_behavior: int = 2, convert_to_expression: bool = False, js_engine: Optional[JSEngine] = None, **kwargs: Any, ) -> Optional[CWLOutputType]: """ Interpolate and evaluate. Note: only call with convert_to_expression=True on CWL Expressions in $() form that need interpolation. """ if strip_whitespace: scan = scan.strip() parts = [] if convert_to_expression: dump = _convert_dumper parts.append("${return ") else: def dump(string: str) -> str: return string w = scanner(scan) while w: if convert_to_expression: parts.append(f'"{scan[0: w[0]]}" + ') # noqa: B907 else: parts.append(scan[0 : w[0]]) if scan[w[0]] == "$": if not convert_to_expression: js_engine = js_engine or get_js_engine() e = evaluator( js_engine, scan[w[0] + 1 : w[1]], rootvars, jslib, fullJS, **kwargs ) if w[0] == 0 and w[1] == len(scan) and len(parts) <= 1: return e leaf = json_dumps(e, sort_keys=True) if leaf[0] == '"': leaf = json.loads(leaf) parts.append(leaf) else: parts.append( "function(){var item =" + scan[w[0] : w[1]][2:-1] + '; if (typeof(item) === "string"){ return item; } ' "else { return JSON.stringify(item); }}() + " ) elif scan[w[0]] == "\\": if escaping_behavior == 1: # Old behavior. Just skip the next character. e = scan[w[1] - 1] parts.append(dump(e)) elif escaping_behavior == 2: # Backslash quoting requires a three character lookahead. 
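The two escaping behaviors in `interpolate` branch on a short lookahead: behavior 1 (CWL v1.0/v1.1) simply emits the character after the backslash, while behavior 2 (v1.2+) uses the three-character lookahead to decide whether the backslash suppresses a parameter reference, escapes another backslash, or is plain text. A simplified standalone sketch of the behavior-2 lookahead (the function name and tuple return are ours; the real code advances a scanner window instead):

```python
def unescape_v12(e: str) -> tuple[str, int]:
    """Given text starting at a backslash, return (emitted text, chars consumed)."""
    if e[:3] in ("\\$(", "\\${"):
        # Suppressed start of a parameter reference: drop the backslash.
        return e[1:3], 3
    if e[1] == "\\":
        # Double backslash becomes a single backslash.
        return "\\", 2
    # Any other text is kept as-is, backslash included.
    return e[:2], 2


print(unescape_v12(r"\$(foo)"))  # -> ('$(', 3)
print(unescape_v12(r"\\rest"))   # -> ('\\', 2)
```

Note the asymmetry this creates between versions: under behavior 1 the input `\x` yields `x`, while under behavior 2 it stays `\x`, which is why `do_eval` below selects the behavior from `cwlVersion`.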
e = scan[w[0] : w[1] + 1] if e in ("\\$(", "\\${"): # Suppress start of a parameter reference, drop the # backslash. parts.append(dump(e[1:])) w = (w[0], w[1] + 1) elif e[1] == "\\": # Double backslash, becomes a single backslash parts.append(dump("\\")) else: # Some other text, add it as-is (including the # backslash) and resume scanning. parts.append(dump(e[:2])) else: raise Exception("Unknown escaping behavior %s" % escaping_behavior) scan = scan[w[1] :] w = scanner(scan) if convert_to_expression: parts.append(f'"{scan}"') # noqa: B907 parts.append(";}") else: parts.append(scan) return "".join(parts) def jshead(engine_config: list[str], rootvars: CWLObjectType) -> str: """Make sure all the byte strings are converted to str in `rootvars` dict.""" return "\n".join( engine_config + [f"var {k} = {json_dumps(v, indent=4)};" for k, v in rootvars.items()] ) def needs_parsing(snippet: Any) -> bool: return isinstance(snippet, str) and ("$(" in snippet or "${" in snippet) def do_eval( ex: Optional[CWLOutputType], jobinput: CWLObjectType, requirements: list[CWLObjectType], outdir: Optional[str], tmpdir: Optional[str], resources: dict[str, Union[float, int]], context: Optional[CWLOutputType] = None, timeout: float = default_timeout, strip_whitespace: bool = True, cwlVersion: str = "", **kwargs: Any, ) -> Optional[CWLOutputType]: """ Evaluate the given CWL expression, in context. :param timeout: The maximum number of seconds to wait while executing. 
""" runtime = cast(MutableMapping[str, Union[int, str, None]], copy.deepcopy(resources)) runtime["tmpdir"] = tmpdir if tmpdir else None runtime["outdir"] = outdir if outdir else None rootvars = cast( CWLObjectType, bytes2str_in_dicts({"inputs": jobinput, "self": context, "runtime": runtime}), ) if isinstance(ex, str) and needs_parsing(ex): fullJS = False jslib = "" for r in reversed(requirements): if r["class"] == "InlineJavascriptRequirement": fullJS = True jslib = jshead(cast(list[str], r.get("expressionLib", [])), rootvars) break try: return interpolate( ex, rootvars, timeout=timeout, fullJS=fullJS, jslib=jslib, strip_whitespace=strip_whitespace, escaping_behavior=( 1 if cwlVersion in ( "v1.0", "v1.1.0-dev1", "v1.1", "v1.2.0-dev1", "v1.2.0-dev2", "v1.2.0-dev3", ) else 2 ), **kwargs, ) except Exception as e: _logger.exception(e) raise WorkflowException("Expression evaluation error:\n%s" % str(e)) from e else: return ex ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/expression_refactor.py0000755000175100001660000001360314753463342021537 0ustar00runnerdocker#!/usr/bin/env python3 # SPDX-License-Identifier: Apache-2.0 # Copyright 2018-2021 Michael R. 
Crusoe """CWL Expression refactoring tool for CWL.""" import argparse import logging import shutil import sys from collections.abc import MutableMapping, MutableSequence from pathlib import Path from typing import TYPE_CHECKING, Any, Callable, Optional, Union from ruamel.yaml.main import YAML from ruamel.yaml.scalarstring import walk_tree from cwl_utils import ( cwl_v1_0_expression_refactor, cwl_v1_1_expression_refactor, cwl_v1_2_expression_refactor, ) from cwl_utils.errors import WorkflowException from cwl_utils.loghandler import _logger as _cwlutilslogger from cwl_utils.parser import cwl_v1_0, cwl_v1_1, cwl_v1_2 if TYPE_CHECKING: from typing_extensions import Protocol else: Protocol = object _logger = logging.getLogger("cwl-expression-refactor") # pylint: disable=invalid-name defaultStreamHandler = logging.StreamHandler() # pylint: disable=invalid-name _logger.addHandler(defaultStreamHandler) _logger.setLevel(logging.INFO) _cwlutilslogger.setLevel(100) save_type = Optional[ Union[MutableMapping[str, Any], MutableSequence[Any], int, float, bool, str] ] class saveCWL(Protocol): """Shortcut type for CWL v1.x parse.save().""" def __call__( self, val: Any, top: bool = True, base_url: str = "", relative_uris: bool = True, ) -> save_type: """Must use this instead of a Callable due to the keyword args.""" ... def arg_parser() -> argparse.ArgumentParser: """Build the argument parser.""" parser = argparse.ArgumentParser( description="Tool to refactor CWL documents so that any CWL expression " "are separate steps as either ExpressionTools or CommandLineTools. Exit code 7 " "means a single CWL document was provided but it did not need modification." 
) parser.add_argument( "--etools", help="Output ExpressionTools, don't go all the way to CommandLineTools.", action="store_true", ) parser.add_argument( "--skip-some1", help="Don't process CommandLineTool.inputs.inputBinding and CommandLineTool.arguments sections.", action="store_true", ) parser.add_argument( "--skip-some2", help="Don't process CommandLineTool.outputEval or " "CommandLineTool.requirements.InitialWorkDirRequirement.", action="store_true", ) parser.add_argument("dir", help="Directory in which to save converted files") parser.add_argument( "inputs", nargs="+", help="One or more CWL documents.", ) return parser def parse_args(args: list[str]) -> argparse.Namespace: """Parse the command line options.""" return arg_parser().parse_args(args) def main() -> None: """Console entry point.""" sys.exit(run(sys.argv[1:])) def run(args: list[str]) -> int: """Collect the arguments and run.""" return refactor(parse_args(args)) def refactor(args: argparse.Namespace) -> int: """Primary processing loop.""" return_code = 0 yaml = YAML(typ="rt") yaml.preserve_quotes = True for document in args.inputs: _logger.info("Processing %s.", document) with open(document) as doc_handle: result = yaml.load(doc_handle) version = result["cwlVersion"] uri = Path(document).resolve().as_uri() if version == "v1.0": top = cwl_v1_0.load_document_by_yaml(result, uri) traverse: Callable[[Any, bool, bool, bool, bool], tuple[Any, bool]] = ( cwl_v1_0_expression_refactor.traverse ) save: saveCWL = cwl_v1_0.save elif version == "v1.1": top = cwl_v1_1.load_document_by_yaml(result, uri) traverse = cwl_v1_1_expression_refactor.traverse save = cwl_v1_1.save elif version == "v1.2": top = cwl_v1_2.load_document_by_yaml(result, uri) traverse = cwl_v1_2_expression_refactor.traverse save = cwl_v1_2.save else: _logger.error( "Sorry, %s is not a supported CWL version by this tool.", version ) return -1 try: result, modified = traverse( top, not args.etools, False, args.skip_some1, args.skip_some2 ) output = 
Path(args.dir) / Path(document).name if not modified: if len(args.inputs) > 1: shutil.copyfile(document, output) continue else: return 7 if not isinstance(result, MutableSequence): result_json = save( result, base_url=( result.loadingOptions.fileuri if result.loadingOptions.fileuri else "" ), ) # ^^ Setting the base_url and keeping the default value # for relative_uris=True means that the IDs in the generated # JSON/YAML are kept clean of the path to the input document else: result_json = [ save(result_item, base_url=result_item.loadingOptions.fileuri) for result_item in result ] walk_tree(result_json) # ^ converts multiline strings to nice multiline YAML with open(output, "w", encoding="utf-8") as output_filehandle: output_filehandle.write( "#!/usr/bin/env cwl-runner\n" ) # TODO: teach the codegen to do this? yaml.dump(result_json, output_filehandle) except WorkflowException as exc: return_code = 1 _logger.exception("Skipping %s due to error.", document, exc_info=exc) return return_code if __name__ == "__main__": main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/file_formats.py0000644000175100001660000000427714753463342020131 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """ CWL file formats utilities. 
For more information, please visit https://www.commonwl.org/user_guide/16-file-formats/ """ from typing import Optional, Union from rdflib import OWL, RDFS, Graph, URIRef from schema_salad.exceptions import ValidationException from schema_salad.utils import aslist, json_dumps from cwl_utils.types import CWLObjectType def formatSubclassOf( fmt: str, cls: str, ontology: Optional[Graph], visited: set[str] ) -> bool: """Determine if `fmt` is a subclass of `cls`.""" if URIRef(fmt) == URIRef(cls): return True if ontology is None: return False if fmt in visited: return False visited.add(fmt) uriRefFmt = URIRef(fmt) for _s, _p, o in ontology.triples((uriRefFmt, RDFS.subClassOf, None)): # Find parent classes of `fmt` and search upward if formatSubclassOf(o, cls, ontology, visited): return True for _s, _p, o in ontology.triples((uriRefFmt, OWL.equivalentClass, None)): # Find equivalent classes of `fmt` and search horizontally if formatSubclassOf(o, cls, ontology, visited): return True for s, _p, _o in ontology.triples((None, OWL.equivalentClass, uriRefFmt)): # Find equivalent classes of `fmt` and search horizontally if formatSubclassOf(s, cls, ontology, visited): return True return False def check_format( actual_file: Union[CWLObjectType, list[CWLObjectType]], input_formats: Union[list[str], str], ontology: Optional[Graph], ) -> None: """Confirm that the format present is valid for the allowed formats.""" for afile in aslist(actual_file): if not afile: continue if "format" not in afile: raise ValidationException( f"File has no 'format' defined: {json_dumps(afile, indent=4)}" ) for inpf in aslist(input_formats): if afile["format"] == inpf or formatSubclassOf( afile["format"], inpf, ontology, set() ): return raise ValidationException( f"File has an incompatible format: {json_dumps(afile, indent=4)}" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/cwl_utils/graph_split.py0000755000175100001660000002300114753463342017760 0ustar00runnerdocker#!/usr/bin/env python # SPDX-License-Identifier: Apache-2.0 # Copyright 2019-2020 Michael R. Crusoe # Copyright 2020 Altair Wei """ Unpacks the result of `cwltool --unpack`. Only tested with a single v1.0 workflow. """ import argparse import json import os import sys from collections.abc import MutableMapping from typing import IO, TYPE_CHECKING, Any, Union, cast from cwlformat.formatter import stringify_dict from ruamel.yaml.dumper import RoundTripDumper from ruamel.yaml.main import YAML, dump from ruamel.yaml.representer import RoundTripRepresenter from schema_salad.sourceline import SourceLine, add_lc_filename if TYPE_CHECKING: from _typeshed import StrPath def arg_parser() -> argparse.ArgumentParser: """Build the argument parser.""" parser = argparse.ArgumentParser(description="Split a packed CWL document.") parser.add_argument("cwlfile") parser.add_argument( "-m", "--mainfile", default=None, type=str, help="Specify the name of the main document.", ) parser.add_argument( "-f", "--output-format", choices=["json", "yaml"], type=str, default="json", help="Specify the format of the output CWL files.", ) parser.add_argument( "-p", "--pretty", action="store_true", default=False, help="Beautify the output CWL document, only works with yaml format.", ) parser.add_argument( "-C", "--outdir", type=str, default=os.getcwd(), help="Output folder for the unpacked CWL files.", ) return parser def main() -> None: """Console entry point.""" sys.exit(run(sys.argv[1:])) def run(args: list[str]) -> int: """Split the packed CWL at the path of the first argument.""" options = arg_parser().parse_args(args) with open(options.cwlfile) as source_handle: graph_split( source_handle, options.outdir, options.output_format, options.mainfile, options.pretty, ) return 0 def graph_split( sourceIO: IO[str], output_dir: "StrPath", output_format: str, mainfile: str, pretty: bool, ) -> None: 
"""Loop over the provided packed CWL document and split it up.""" yaml = YAML(typ="rt") yaml.preserve_quotes = True source = yaml.load(sourceIO) add_lc_filename(source, sourceIO.name) if "$graph" not in source: print("No $graph, so not for us.") return version = source.pop("cwlVersion") def my_represent_none( self: Any, data: Any ) -> Any: # pylint: disable=unused-argument """Force clean representation of 'null'.""" return self.represent_scalar("tag:yaml.org,2002:null", "null") RoundTripRepresenter.add_representer(type(None), my_represent_none) for entry in source["$graph"]: entry_id = entry.pop("id").lstrip("#") entry["cwlVersion"] = version imports = rewrite(entry, entry_id) if imports: for import_name in imports: rewrite_types(entry, f"#{import_name}", False) if entry_id == "main": if mainfile is None: entry_id = f"unpacked_{os.path.basename(sourceIO.name)}" else: entry_id = mainfile output_file = os.path.join(output_dir, entry_id + ".cwl") if output_format == "json": json_dump(entry, output_file) elif output_format == "yaml": yaml_dump(entry, output_file, pretty) def rewrite(document: Any, doc_id: str) -> set[str]: """Rewrite the given element from the CWL $graph.""" imports = set() if isinstance(document, list) and not isinstance(document, str): for entry in document: imports.update(rewrite(entry, doc_id)) elif isinstance(document, dict): this_id = document["id"] if "id" in document else None for key, value in document.items(): with SourceLine(document, key, Exception): if key == "run" and isinstance(value, str) and value[0] == "#": document[key] = f"{value[1:]}.cwl" elif key in ("id", "outputSource") and value.startswith("#" + doc_id): document[key] = value[len(doc_id) + 2 :] elif key == "out" and isinstance(value, list): def rewrite_id(entry: Any) -> Union[MutableMapping[Any, Any], str]: if isinstance(entry, MutableMapping): if entry["id"].startswith(this_id): assert isinstance(this_id, str) # nosec B101 entry["id"] = cast(str, entry["id"])[len(this_id) + 1 
:] return entry elif isinstance(entry, str): if this_id and entry.startswith(this_id): return entry[len(this_id) + 1 :] return entry raise Exception(f"{entry} is neither a dictionary nor string.") document[key][:] = [rewrite_id(entry) for entry in value] elif key in ("source", "scatter", "items", "format"): if ( isinstance(value, str) and value.startswith("#") and "/" in value ): referrant_file, sub = value[1:].split("/", 1) if referrant_file == doc_id: document[key] = sub else: document[key] = f"{referrant_file}#{sub}" elif isinstance(value, list): new_sources = list() for entry in value: if entry.startswith("#" + doc_id): new_sources.append(entry[len(doc_id) + 2 :]) else: new_sources.append(entry) document[key] = new_sources elif key == "$import": rewrite_import(document) elif key == "class" and value == "SchemaDefRequirement": return rewrite_schemadef(document) else: imports.update(rewrite(value, doc_id)) return imports def rewrite_import(document: MutableMapping[str, Any]) -> None: """Adjust the $import directive.""" external_file = document["$import"].split("/")[0][1:] document["$import"] = external_file def rewrite_types(field: Any, entry_file: str, sameself: bool) -> None: """Clean up the names of the types.""" if isinstance(field, list) and not isinstance(field, str): for entry in field: rewrite_types(entry, entry_file, sameself) return if isinstance(field, dict): for key, value in field.items(): for name in ("type", "items"): if key == name: if isinstance(value, str) and value.startswith(entry_file): if sameself: field[key] = value[len(entry_file) + 1 :] else: field[key] = "{d[0]}#{d[1]}".format( d=value[1:].split("/", 1) ) if isinstance(value, dict): rewrite_types(value, entry_file, sameself) if isinstance(value, list) and not isinstance(value, str): for entry in value: rewrite_types(entry, entry_file, sameself) def rewrite_schemadef(document: MutableMapping[str, Any]) -> set[str]: """Dump the schemadefs to their own file.""" for entry in 
document["types"]: if "$import" in entry: rewrite_import(entry) elif "name" in entry and "/" in entry["name"]: entry_file, entry["name"] = entry["name"].split("/") for field in entry["fields"]: field["name"] = field["name"].split("/")[2] rewrite_types(field, entry_file, True) with open(entry_file[1:], "a", encoding="utf-8") as entry_handle: dump([entry], entry_handle, Dumper=RoundTripDumper) entry["$import"] = entry_file[1:] del entry["name"] del entry["type"] del entry["fields"] seen_imports = set() def seen_import(entry: MutableMapping[str, Any]) -> bool: if "$import" in entry: external_file = entry["$import"] if external_file not in seen_imports: seen_imports.add(external_file) return True return False return True types = document["types"] document["types"][:] = [entry for entry in types if seen_import(entry)] return seen_imports def json_dump(entry: Any, output_file: str) -> None: """Output object as JSON.""" with open(output_file, "w", encoding="utf-8") as result_handle: json.dump(entry, result_handle, indent=4) def yaml_dump(entry: Any, output_file: str, pretty: bool) -> None: """Output object as YAML.""" yaml = YAML(typ="rt") yaml.default_flow_style = False yaml.map_indent = 4 yaml.sequence_indent = 2 with open(output_file, "w", encoding="utf-8") as result_handle: if pretty: result_handle.write(stringify_dict(entry)) else: yaml.dump( entry, result_handle, ) if __name__ == "__main__": main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/image_puller.py0000644000175100001660000001122714753463342020115 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Classes for docker-extract.""" import logging import os import subprocess # nosec from abc import ABC, abstractmethod from pathlib import Path from typing import Optional, Union from .singularity import get_version as get_singularity_version from .singularity import is_version_2_6 as is_singularity_version_2_6 from .singularity 
logging.basicConfig(level=logging.INFO)
_LOGGER = logging.getLogger(__name__)


class ImagePuller(ABC):
    def __init__(
        self,
        req: str,
        save_directory: Optional[Union[str, Path]],
        cmd: str,
        force_pull: bool,
    ) -> None:
        """Create an ImagePuller."""
        self.req = req
        self.save_directory = save_directory
        self.cmd = cmd
        self.force_pull = force_pull

    @abstractmethod
    def get_image_name(self) -> str:
        """Get the engine-specific image name."""

    @abstractmethod
    def save_docker_image(self) -> None:
        """Download and save the image to disk."""

    @staticmethod
    def _run_command_pull(cmd_pull: list[str]) -> None:
        try:
            subprocess.run(  # nosec
                cmd_pull, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
            )
        except subprocess.CalledProcessError as err:
            if err.output:
                raise subprocess.SubprocessError(err.output) from err
            raise err


class DockerImagePuller(ImagePuller):
    """Pull docker image with Docker."""

    def get_image_name(self) -> str:
        """Get the name of the tarball."""
        name = "".join(self.req.split("/")) + ".tar"
        # Replace colons with underscores in the name.
        # See https://github.com/containers/podman/issues/489
        name = name.replace(":", "_")
        return name

    def generate_udocker_loading_command(self) -> str:
        """Generate the udocker loading command."""
        return f"udocker load -i {self.get_image_name()}"

    def save_docker_image(self) -> None:
        """Download and save the software container image to disk as a docker tarball."""
        _LOGGER.info(f"Pulling {self.req} with {self.cmd}...")
        cmd_pull = [self.cmd, "pull", self.req]
        ImagePuller._run_command_pull(cmd_pull)
        _LOGGER.info(f"Image successfully pulled: {self.req}")
        if self.save_directory:
            dest = os.path.join(self.save_directory, self.get_image_name())
            if self.save_directory and self.force_pull:
                os.remove(dest)
            cmd_save = [self.cmd, "save", "-o", dest, self.req]
            subprocess.run(cmd_save, check=True)  # nosec
            _LOGGER.info(f"Image successfully saved: {dest!r}.")
            print(self.generate_udocker_loading_command())


class SingularityImagePuller(ImagePuller):
    """Pull docker image with Singularity."""

    CHARS_TO_REPLACE = ["/", ":"]
    NEW_CHAR = "_"

    def get_image_name(self) -> str:
        """Determine the file name appropriate to the installed version of Singularity."""
        image_name = self.req
        for char in self.CHARS_TO_REPLACE:
            image_name = image_name.replace(char, self.NEW_CHAR)
        if is_singularity_version_2_6():
            suffix = ".img"
        elif is_singularity_version_3_or_newer():
            suffix = ".sif"
        else:
            raise Exception(
                f"Don't know how to handle this version of singularity: {get_singularity_version()}."
            )
        return f"{image_name}{suffix}"

    def save_docker_image(self) -> None:
        """Pull down the Docker software container image and save it in the Singularity image format."""
        save_directory: Union[str, Path]
        if self.save_directory:
            save_directory = self.save_directory
            if (
                os.path.exists(os.path.join(save_directory, self.get_image_name()))
                and not self.force_pull
            ):
                _LOGGER.info(f"Already cached {self.req} with Singularity.")
                return
        _LOGGER.info(f"Pulling {self.req} with Singularity...")
        cmd_pull = [
            self.cmd,
            "pull",
        ]
        if self.force_pull:
            cmd_pull.append("--force")
        cmd_pull.extend(
            [
                "--name",
                os.path.join(save_directory, self.get_image_name()),
                f"docker://{self.req}",
            ]
        )
        ImagePuller._run_command_pull(cmd_pull)
        _LOGGER.info(
            f"Image successfully pulled: {save_directory}/{self.get_image_name()}"
        )


# ---- cwl_utils-0.37/cwl_utils/inputs_schema_gen.py ----
#!/usr/bin/env python3
# SPDX-License-Identifier: Apache-2.0
# Copyright 2024 Hirotaka Suetake
# Copyright 2024 Alexis Lucattini
"""Generate JSON Schema from CWL inputs object."""
import argparse
import json
import logging
import sys
from copy import deepcopy
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from urllib.parse import urlparse

import requests

# Get TypeGuard from typing_extensions when running on Python < 3.10
if sys.version_info[:2] < (3, 10):
    from typing_extensions import TypeGuard
else:
    from typing import TypeGuard

from cwl_utils.loghandler import _logger as _cwlutilslogger
from cwl_utils.parser import (
    CommandLineTool,
    Directory,
    File,
    InputArraySchema,
    InputArraySchemaTypes,
    InputEnumSchema,
    InputEnumSchemaTypes,
    InputRecordSchema,
    InputRecordSchemaTypes,
    Workflow,
    WorkflowInputParameter,
    load_document_by_uri,
)
from cwl_utils.utils import (
    get_value_from_uri,
    is_local_uri,
    is_uri,
    sanitise_schema_field,
    to_pascal_case,
)

_logger = logging.getLogger("cwl-inputs-schema-gen")  # pylint: disable=invalid-name
defaultStreamHandler = logging.StreamHandler()  # pylint: disable=invalid-name
_logger.addHandler(defaultStreamHandler)
_logger.setLevel(logging.INFO)
_cwlutilslogger.setLevel(100)

# Globals

# Maps CWL types to JSON Schema types
PRIMITIVE_TYPES_MAPPING = {
    "boolean": "boolean",
    "string": "string",
    "int": "integer",
    "float": "number",
    "long": "number",
    "double": "number",
    "null": "null",
}

JSON_TEMPLATE_PATH = (
    Path(__file__)
    .parent.joinpath("./templates/workflow_input_json_schema_template.json")
    .absolute()
    .resolve()
)

# Some type hinting
InputType = Union[
    InputArraySchema, InputEnumSchema, InputRecordSchema, str, File, Directory
]
# Don't need type checking at runtime


class JSONSchemaProperty:
    """Generate a JSON schema property from a CWL input parameter."""

    def __init__(
        self,
        name: str,
        type_: Union[InputType, List[InputType], str, Any],
        description: Optional[str] = "",
        required: Optional[bool] = False,
    ):
        """Initialise the JSONSchemaProperty object."""
        # Initialise values
        self.name: str = name
        self.type_: Union[InputType, List[InputType], str, Any] = type_
        self.description = description
        self.required = required
        self.type_dict = self.generate_type_dict()

    def generate_type_dict(self) -> Dict[str, Any]:
        """Generate the type dict for a property from a CWL input parameter type."""
        # If the type is a list and contains null, then the property is not required
        if isinstance(self.type_, List) and "null" in self.type_:
            self.required = False
            self.type_ = list(filter(lambda type_item: type_item != "null", self.type_))

            # Check if we're down to one item, we can then squeeze
            if len(self.type_) == 1:
                self.type_ = self.type_[0]

        # type_ is still a list therefore we offer multiple input types for this parameter
        if isinstance(self.type_, List):
            # We use the oneOf keyword to specify multiple types
            type_dict = self.generate_type_dict_from_type_list(self.type_)
        # type_ is a single type
        else:
            type_dict = self.generate_type_dict_from_type(self.type_)

        # Add in the description to the type dict
        type_dict.update({"description": self.description})

        return type_dict

    def generate_type_dict_from_type(self, type_item: Any) -> Dict[str, Any]:
        """
        Generate the type dict for a property from a CWL input parameter type.

        We call this function for each type in the type_ list.
        In the case there are multiple types, each dict is added to the oneOf list.
        """
        # Primitive types should have a 1-1 mapping
        # between a CWL input parameter type and a JSON schema type
        if isinstance(type_item, str):
            if type_item in PRIMITIVE_TYPES_MAPPING.keys():
                return {"type": PRIMITIVE_TYPES_MAPPING[type_item]}
            elif type_item in ["stdin"]:
                return {"$ref": "#/definitions/File"}
            elif type_item in ["File", "Directory", "Any"]:
                return {"$ref": f"#/definitions/{type_item}"}
            # When item is a record schema type
            elif is_uri(type_item):
                return {
                    "$ref": f"#/definitions/{to_pascal_case(get_value_from_uri(type_item))}"
                }
            else:
                raise ValueError(f"Unknown type: {type_item}")
        elif isinstance(type_item, InputArraySchemaTypes):
            return {
                "type": "array",
                "items": self.generate_type_dict_from_type(type_item.items),
            }
        elif isinstance(type_item, InputEnumSchemaTypes):
            return {
                "type": "string",
                "enum": list(
                    map(
                        lambda symbol_iter: get_value_from_uri(symbol_iter),
                        type_item.symbols,
                    )
                ),
            }
        elif isinstance(type_item, InputRecordSchemaTypes):
            if type_item.fields is None:
                return {"type": "object"}
            if not isinstance(type_item.fields, List):
                _cwlutilslogger.error(
                    "Expected fields of InputRecordSchemaType to be a list"
                )
                raise TypeError
            return {
                "type": "object",
                "properties": {
                    get_value_from_uri(prop.name): self.generate_type_dict_from_type(
                        prop.type_
                    )
                    for prop in type_item.fields
                },
            }
        elif isinstance(type_item, Dict):
            # Nested import
            # {'$import': '../relative/path/to/schema'}
            if "$import" in type_item.keys():
                # This path is a relative path to import
                return {
                    "$ref": f"#/definitions/{to_pascal_case(get_value_from_uri(type_item['$import']))}"
                }
            else:
                raise ValueError(f"Unknown type: {type_item}")
        elif isinstance(type_item, List):
            # Nested schema
            return {
                "oneOf": list(
                    map(
                        lambda type_iter: self.generate_type_dict_from_type(type_iter),
                        type_item,
                    )
                )
            }
        else:
            raise ValueError(f"Unknown type: {type_item}")

    def generate_type_dict_from_type_list(
        self, type_: List[InputType]
    ) -> Dict[str, Any]:
        """Given a list of types, generate a JSON schema property dict wrapped in oneOf list."""
        return {
            "oneOf": list(
                map(
                    lambda type_item: self.generate_type_dict_from_type(type_item),
                    type_,
                )
            )
        }

    def to_dict(self) -> Dict[str, Any]:
        """Return as a dictionary."""
        return {self.name: self.type_dict}


def get_is_required_from_input_parameter(
    input_parameter: WorkflowInputParameter,
) -> bool:
    """Given an input parameter, return if it is required."""
    if isinstance(input_parameter.type_, str) and input_parameter.type_.endswith("?"):
        return False
    if input_parameter.default is not None:
        return False
    if isinstance(input_parameter.type_, List) and "null" in input_parameter.type_:
        return False
    if isinstance(input_parameter.type_, InputRecordSchemaTypes):
        if input_parameter.type_ is not None:
            if (isinstance(input_parameter.type_.type_, str)) and (
                input_parameter.type_.type_.endswith("?")
            ):
                return False
    return True


def generate_json_schema_property_from_input_parameter(
    input_parameter: WorkflowInputParameter,
) -> JSONSchemaProperty:
    """
    Given an input parameter, generate a JSON schema property.

    :param input_parameter:
    :return:
    """
    # Get the input name and documentation for description
    input_name = get_value_from_uri(str(input_parameter.id))
    doc = input_parameter.doc
    required = get_is_required_from_input_parameter(input_parameter)

    return JSONSchemaProperty(
        name=input_name,
        type_=input_parameter.type_,
        description=doc if doc is not None else "",
        required=required,
    )


def generate_definition_from_schema(schema: InputRecordSchema) -> Dict[str, Any]:
    """
    Given a schema, generate a JSON schema definition.

    :param schema:
    :return:
    """
    # Sanitise each field of the schema
    sanitised_fields = {}

    if schema.fields is None:
        return {}

    for field in schema.fields:
        sanitised_fields.update(
            {
                get_value_from_uri(field.name): sanitise_schema_field(
                    {"type": field.type_}
                )
            }
        )

    # Generate JSON properties
    property_list = []

    for prop_name, prop_obj in sanitised_fields.items():
        # Simplify type first by removing nulls
        required = True

        # If the property object is a string, then it's a reference to another schema
        if isinstance(prop_obj, str):
            raise TypeError("Property Object should be a dictionary")

        if isinstance(prop_obj.get("type", []), List):
            if "null" in prop_obj.get("type", []):
                required = False
            prop_obj["type"] = list(
                filter(lambda type_item: type_item != "null", prop_obj.get("type", []))
            )
            # Check if we're down to one item
            if len(prop_obj["type"]) == 1:
                prop_obj["type"] = prop_obj["type"][0]

        # Generate JSONSchema Property
        prop = JSONSchemaProperty(
            name=prop_name,
            type_=prop_obj.get("type"),
            description=prop_obj.get("doc", ""),
            required=required,
        )
        property_list.append(prop)

    return {
        to_pascal_case(get_value_from_uri(str(schema.name))): {
            "type": "object",
            "properties": {prop.name: prop.type_dict for prop in property_list},
            "required": [prop.name for prop in property_list if prop.required],
        }
    }


def cwl_to_jsonschema(cwl_obj: Union[Workflow, CommandLineTool]) -> Any:
    """
    cwl_obj: A CWL Object.

    Returns: A JSONSchema object.

    Example:
        cwl_obj = load_document_by_uri()
        jsonschema = cwl_to_jsonschema(cwl_inputs)
    """
    # Initialise the schema from the workflow input json schema template
    with open(JSON_TEMPLATE_PATH, "r") as template_h:
        input_json_schema = json.load(template_h)

    # Get the complex schema keys
    def is_complex_record_schema_key(idx_iter: str) -> TypeGuard[bool]:
        if cwl_obj.loadingOptions.idx is None:
            return False
        if cwl_obj.loadingOptions.idx.get(idx_iter) is None:
            return False
        if not isinstance(cwl_obj.loadingOptions.idx.get(idx_iter), tuple):
            return False
        # Get index as a tuple
        input_schema_type, _ = cwl_obj.loadingOptions.idx.get(idx_iter, (None, None))
        if isinstance(input_schema_type, InputRecordSchemaTypes):
            return True
        return False

    complex_schema_keys: List[str] = list(
        filter(
            lambda idx_iter: is_complex_record_schema_key(idx_iter),
            cwl_obj.loadingOptions.idx.keys(),
        )
    )

    # Complex schema values
    def get_complex_schema_values(idx_iter: str) -> InputRecordSchema:
        if not isinstance(cwl_obj.loadingOptions.idx.get(idx_iter), tuple):
            raise TypeError(f"Expected tuple from idx loading options key {idx_iter}")
        # Collect input record schema
        input_record_schema, _ = cwl_obj.loadingOptions.idx.get(idx_iter, (None, None))
        if not isinstance(input_record_schema, InputRecordSchemaTypes):
            raise TypeError(
                f"Expected InputRecordSchemaTypes from idx loading options key {idx_iter}"
            )
        return input_record_schema

    complex_schema_values: List[InputRecordSchema] = list(
        map(
            lambda idx_iter: get_complex_schema_values(idx_iter),
            complex_schema_keys,
        )
    )

    # Load in all $imports to be referred by complex input types
    workflow_schema_definitions_list = list(
        map(
            lambda complex_schema_values_iter: generate_definition_from_schema(
                complex_schema_values_iter
            ),
            complex_schema_values,
        )
    )

    if cwl_obj.requirements is not None:
        try:
            schema_def_requirement = next(
                filter(
                    lambda requirement_iter: requirement_iter.class_
                    == "SchemaDefRequirement",
                    cwl_obj.requirements,
                )
            )
            workflow_schema_definitions_list.extend(
                list(
                    map(
                        lambda schema_def_iter: generate_definition_from_schema(
                            schema_def_iter
                        ),
                        schema_def_requirement.types,
                    )
                )
            )
        except StopIteration:
            pass

    # Convert schema definitions to dict
    workflow_schema_definitions_dict = {}
    for schema_definition in workflow_schema_definitions_list:
        workflow_schema_definitions_dict.update(schema_definition)

    # Generate JSON Schema Properties
    properties = list(
        map(
            lambda workflow_parameter_input_obj: generate_json_schema_property_from_input_parameter(
                workflow_parameter_input_obj
            ),
            cwl_obj.inputs,
        )
    )

    # Generate JSON schema
    input_json_schema.update(
        {
            "type": "object",
            "properties": {
                prop.name: (
                    {"oneOf": [{"type": "null"}, prop.type_dict]}
                    if prop.required is False
                    else prop.type_dict
                )
                for prop in properties
            },
            "required": [prop.name for prop in properties if prop.required],
        }
    )

    # Update definitions from schema
    input_json_schema["definitions"].update(workflow_schema_definitions_dict)

    # Slim down the schema as required
    input_json_schema = slim_definitions(input_json_schema)

    # Add "additionalProperties": false to top of schema
    # input_json_schema["additionalProperties"] = False

    return input_json_schema


# Traverse the properties and return all definitions that are used
def _recursive_search(
    json_data: Dict[str, Any],
    target_key: str,
) -> List[Any]:
    """Given a target key return all instances of a key in a json object."""
    result = []
    if isinstance(json_data, dict):
        for key, value in json_data.items():
            if key == target_key:
                result.append(value)
            else:
                result.extend(_recursive_search(value, target_key))
    elif isinstance(json_data, list):
        for item in json_data:
            result.extend(_recursive_search(item, target_key))
    return result


# Get all the property dependencies
def _get_all_ref_attributes(json_object: Dict[str, Any]) -> List[Any]:
    """Given a json object, return all the reference attributes."""
    return _recursive_search(json_object, "$ref")
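A self-contained sketch (names are illustrative, not from this module) of the `$ref` walk that `_recursive_search`/`_get_all_ref_attributes` perform: every value stored under a `"$ref"` key is gathered recursively, in document order, so that `slim_definitions` can prune definitions nothing points at.

```python
from typing import Any


def collect_refs(node: Any) -> list:
    """Recursively gather every value stored under a '$ref' key."""
    found: list = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "$ref":
                found.append(value)
            else:
                found.extend(collect_refs(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_refs(item))
    return found


example = {
    "properties": {
        "reads": {"$ref": "#/definitions/File"},
        "meta": {"oneOf": [{"type": "null"}, {"$ref": "#/definitions/Sample"}]},
    }
}
print(collect_refs(example))
# -> ['#/definitions/File', '#/definitions/Sample']
```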
def get_property_dependencies(
    property_dict: Dict[str, Any],
    input_json_schema: Dict[str, Any],
    existing_property_dependencies: Optional[List[Any]] = None,
) -> List[str]:
    """Recursively collect all dependencies for a property."""
    # Initialise return list
    if existing_property_dependencies is None:
        existing_property_dependencies = []

    # All reference attributes
    for reference_attribute in _get_all_ref_attributes(property_dict):
        # Get the value from the reference attribute
        reference_value = get_value_from_uri(reference_attribute)
        # If the reference value is not in the existing property dependencies, add it
        if reference_value not in existing_property_dependencies:
            existing_property_dependencies.append(reference_value)
            # Get the property dependencies of the reference value
            existing_property_dependencies.extend(
                get_property_dependencies(
                    input_json_schema["definitions"][reference_value],
                    input_json_schema,
                    existing_property_dependencies,
                )
            )

    return existing_property_dependencies


def slim_definitions(input_json_schema: Dict[str, Any]) -> Dict[str, Any]:
    """
    Slim down the schema to only the definitions that are used by the properties.

    Traverse the properties and return all definitions that are used.
    Remove all other definitions.
    """
    # Copy schema
    input_json_schema = deepcopy(input_json_schema)

    # Get required definitions
    required_definitions = get_property_dependencies(
        input_json_schema.get("properties", {}), input_json_schema
    )

    for definition_key in list(input_json_schema["definitions"].keys()):
        if definition_key not in required_definitions:
            del input_json_schema["definitions"][definition_key]

    return input_json_schema


def arg_parser() -> argparse.ArgumentParser:
    """Build the argument parser."""
    parser = argparse.ArgumentParser(description="Generate JSON Schema from a CWL URI.")
    parser.add_argument("cwl_url", help="URL or Path to the CWL document")
    parser.add_argument(
        "-o",
        "--output",
        type=argparse.FileType("w"),
        default=sys.stdout,
        help="Output file. Default is stdout.",
    )
    return parser


def parse_args(args: List[str]) -> argparse.Namespace:
    """Parse the command line arguments."""
    return arg_parser().parse_args(args)


def main() -> None:
    """Console entry point."""
    sys.exit(run(parse_args(sys.argv[1:])))


def get_cwl_url(url: str) -> str:
    """
    Conform to uri format.

    If no scheme, then assert it is a local file path and exists.
    If the scheme is file, then assert it is a local file path and exists.
    If the scheme is not file, then assert it is a valid Web URL.
    Return either the url or the local path as a uri.
    """
    if not is_uri(url):
        if not Path(url).exists():
            logging.error("The CWL URL is invalid.")
            raise FileNotFoundError
        return Path(url).as_uri()
    elif is_local_uri(url):
        if not Path(urlparse(url).path).exists():
            logging.error("The CWL URL is invalid.")
            raise FileNotFoundError
        return url
    else:
        # urlparse(url).scheme not in ['file']:
        response = requests.get(url, timeout=20)
        if response.status_code != 200:
            logging.error("The CWL URL is invalid.")
            raise FileNotFoundError
        return url


def run(args: argparse.Namespace) -> int:
    """Run the main program."""
    # Check the cwl_url is valid
    cwl_url = get_cwl_url(args.cwl_url)

    # Check the output file is writable
    if args.output.name != "":
        if not Path(args.output.name).parent.is_dir():
            logging.error(
                "The output file is not writable, the output parent directory does not exist"
            )
            return 1

    _logger.info("Loading the CWL document")
    cwl_obj = load_document_by_uri(cwl_url)

    try:
        jsonschema = cwl_to_jsonschema(cwl_obj)
    except Exception as e:
        _logger.exception(
            "Failed to generate JSON Schema from CWL inputs object. Error: %s", e
        )
Error: %s", e ) return 1 args.output.write(json.dumps(jsonschema, indent=2) + "\n") return 0 if __name__ == "__main__": main() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/loghandler.py0000644000175100001660000000046314753463342017567 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Shared logging object.""" import logging _logger = logging.getLogger("cwl_utils") # pylint: disable=invalid-name defaultStreamHandler = logging.StreamHandler() # pylint: disable=invalid-name _logger.addHandler(defaultStreamHandler) _logger.setLevel(logging.INFO) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/normalizer.py0000644000175100001660000001104614753463342017631 0ustar00runnerdocker#!/usr/bin/env python3 # SPDX-License-Identifier: Apache-2.0 # Copyright 2021 Michael R. Crusoe """Normalize CWL documents to CWL v1.2, JSON style.""" import argparse import logging import sys import tempfile from collections.abc import MutableSequence from pathlib import Path from cwlupgrader import main as cwlupgrader from ruamel import yaml from schema_salad.sourceline import add_lc_filename from cwl_utils import cwl_v1_2_expression_refactor from cwl_utils.loghandler import _logger as _cwlutilslogger from cwl_utils.pack import pack from cwl_utils.parser.cwl_v1_2 import load_document_by_yaml, save _logger = logging.getLogger("cwl-normalizer") # pylint: disable=invalid-name defaultStreamHandler = logging.StreamHandler() # pylint: disable=invalid-name _logger.addHandler(defaultStreamHandler) _logger.setLevel(logging.INFO) _cwlutilslogger.setLevel(100) def arg_parser() -> argparse.ArgumentParser: """Build the argument parser.""" parser = argparse.ArgumentParser( description="Normalizes CWL documents. Will upgrade to CWL v1.2, " "and pack the result. Can optionally refactor out CWL expressions." 
) parser.add_argument( "--etools", help="Output ExpressionTools, don't go all the way to CommandLineTools.", action="store_true", ) parser.add_argument( "--skip-some1", help="Don't process CommandLineTool.inputs.inputBinding and CommandLineTool.arguments sections.", action="store_true", ) parser.add_argument( "--skip-some2", help="Don't process CommandLineTool.outputEval or " "CommandLineTool.requirements.InitialWorkDirRequirement.", action="store_true", ) parser.add_argument( "--no-expression-refactoring", help="Don't do any CWL expression refactoring.", action="store_true", ) parser.add_argument("dir", help="Directory in which to save converted files") parser.add_argument( "inputs", nargs="+", help="One or more CWL documents.", ) return parser def parse_args(args: list[str]) -> argparse.Namespace: """Parse the command line arguments.""" return arg_parser().parse_args(args) def main() -> None: """Console entry point.""" sys.exit(run(parse_args(sys.argv[1:]))) def run(args: argparse.Namespace) -> int: """Primary processing loop.""" imports: set[str] = set() for document in args.inputs: _logger.info("Processing %s.", document) with open(document) as doc_handle: result = yaml.main.round_trip_load(doc_handle, preserve_quotes=True) add_lc_filename(result, document) version = result.get("cwlVersion", None) if version in ("draft-3", "cwl:draft-3", "v1.0", "v1.1"): result = cwlupgrader.upgrade_document(result, args.dir, imports=imports) else: _logger.error( "Sorry, %s in %s is not a supported CWL version by this tool.", (version, document), ) return -1 uri = Path(document).resolve().as_uri() if not args.no_expression_refactoring: refactored, _ = cwl_v1_2_expression_refactor.traverse( load_document_by_yaml(result, uri), not args.etools, False, args.skip_some1, args.skip_some2, ) if not isinstance(refactored, MutableSequence): result = save( refactored, base_url=( refactored.loadingOptions.fileuri if refactored.loadingOptions.fileuri else "" ), ) # ^^ Setting the base_url 
            else:
                result = [
                    save(result_item, base_url=result_item.loadingOptions.fileuri)
                    for result_item in refactored
                ]
        if "$graph" in result:
            packed = result
        else:
            with tempfile.TemporaryDirectory() as tmpdirname:
                path = Path(tmpdirname) / Path(document).name
                packed = pack(str(path))
        output = Path(args.dir) / Path(document).name
        with open(output, "w", encoding="utf-8") as output_filehandle:
            output_filehandle.write(packed)
    return 0


if __name__ == "__main__":
    main()


# ---- cwl_utils-0.37/cwl_utils/pack.py ----
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2021 Michael R. Crusoe
# Copyright (c) 2020 Seven Bridges
# See https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/LICENSE
"""
CWL document packing functions.

The link resolution is as follows:

We always have two components: the base and the link.
If the link is a url or absolute path, it is what is used to fetch the data.
If the link is a relative path, it is combined with the base and that is what
is used to fetch data.

From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py
"""
import logging
import sys
import urllib.parse
import urllib.request
from collections.abc import ItemsView
from typing import TYPE_CHECKING, Any, Optional, Union, cast

from packaging import version

from cwl_utils import schemadef, utils

if TYPE_CHECKING:
    from _collections_abc import dict_items

logger = logging.getLogger(__name__)


def get_inner_dict(
    cwl: dict[str, Any], path: list[dict[str, Any]]
) -> Optional[dict[str, Any]]:
    if len(path) == 0:
        return cwl
    if isinstance(cwl, dict):
        _v = cwl.get(path[0]["key"])
        if _v is not None:
            return get_inner_dict(_v, path[1:])
    elif isinstance(cwl, list):
        # Going to assume this is a map expressed as list
        for _v in cwl:
            if isinstance(_v, dict):
                if _v.get(path[0]["key_field"]) == path[0]["key"]:
                    return get_inner_dict(_v, path[1:])
    return None


def pack_process(
    cwl: dict[str, Any],
    base_url: urllib.parse.ParseResult,
    cwl_version: str,
    parent_user_defined_types: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
    cwl = listify_everything(cwl)
    cwl = normalize_sources(cwl)
    cwl, user_defined_types = load_schemadefs(cwl, base_url, parent_user_defined_types)
    cwl = resolve_schemadefs(cwl, base_url, user_defined_types)
    cwl = resolve_imports(cwl, base_url)
    cwl = resolve_steps(
        cwl,
        base_url,
        cwl.get("cwlVersion", cwl_version),
        user_defined_types,
    )
    cwl = add_missing_requirements(cwl)
    return cwl


def listify_everything(cwl: dict[str, Any]) -> dict[str, Any]:
    """
    Convert many CWL constructs from their map to the list version.

    See https://www.commonwl.org/v1.1/Workflow.html#map
    """
    for port in ["inputs", "outputs"]:
        cwl[port] = utils.normalize_to_list(
            cwl.get(port, []), key_field="id", value_field="type"
        )
    cwl["requirements"] = utils.normalize_to_list(
        cwl.get("requirements", []), key_field="class", value_field=None
    )
    if cwl.get("class") != "Workflow":
        return cwl
    cwl["steps"] = utils.normalize_to_list(
        cwl.get("steps", []), key_field="id", value_field=None
    )
    for _, v in enumerate(cwl["steps"]):
        if isinstance(v, dict):
            v["in"] = utils.normalize_to_list(
                v.get("in", []), key_field="id", value_field="source"
            )
    return cwl


def normalize_sources(cwl: dict[str, Any]) -> dict[str, Any]:
    """Normalize the steps and output of a CWL Workflow."""
    if cwl.get("class") != "Workflow":
        return cwl

    for _step in cwl.get("steps", {}):
        if not isinstance(_step, dict):
            continue
        _inputs = _step.get("in", {})
        for k, _input in enumerate(_inputs):
            if isinstance(_input, str):
                _inputs[k] = _normalize(_input)
            elif isinstance(_input, dict):
                _src = _input.get("source")
                if isinstance(_src, str):
                    _input["source"] = _normalize(_input["source"])

    _outputs = cwl.get("outputs", {})
    for k, _output in enumerate(_outputs):
        if isinstance(_output, str):
            _outputs[k] = _normalize(_output)
        elif isinstance(_output, dict):
            _src = _output.get("outputSource")
            if isinstance(_src, str):
                _output["outputSource"] = _normalize(_output["outputSource"])

    return cwl


def _normalize(s: str) -> str:
    if s.startswith("#"):
        return s[1:]
    else:
        return s


def load_schemadefs(
    cwl: dict[str, Any],
    base_url: urllib.parse.ParseResult,
    parent_user_defined_types: Optional[dict[str, Any]] = None,
) -> tuple[dict[str, Any], dict[str, Any]]:
    """Internalize any SchemaDefRequirement, and remove it."""
    user_defined_types = schemadef.build_user_defined_type_dict(cwl, base_url)
    if parent_user_defined_types is not None:
        user_defined_types.update(parent_user_defined_types)

    cwl["requirements"] = [
        req
        for req in cwl.get("requirements", [])
        if req.get("class") != "SchemaDefRequirement"
    ]

    return cwl, user_defined_types


def resolve_schemadefs(
    cwl: dict[str, Any],
    base_url: urllib.parse.ParseResult,
    user_defined_types: dict[str, Any],
) -> dict[str, Any]:
    cwl = schemadef.inline_types(cwl, "inputs", base_url, user_defined_types)
    cwl = schemadef.inline_types(cwl, "outputs", base_url, user_defined_types)
    return cwl


def resolve_imports(cwl: Any, base_url: urllib.parse.ParseResult) -> Any:
    if isinstance(cwl, dict):
        itr: Union["dict_items[Any, Any]", ItemsView[Any, Any]] = cwl.items()
    elif isinstance(cwl, list):
        itr = cast(ItemsView[Any, Any], [(n, v) for n, v in enumerate(cwl)])
    else:
        return cwl

    for k, v in itr:
        if isinstance(v, dict):
            if len(v) == 1:
                _k = list(v.keys())[0]
                if _k in ["$import", "$include"]:
                    cwl[k], this_base_url = utils.load_linked_file(
                        base_url, v[_k], is_import=_k == "$import"
                    )

        cwl[k] = resolve_imports(cwl[k], base_url)

    return cwl


def resolve_steps(
    cwl: dict[str, Any],
    base_url: urllib.parse.ParseResult,
    cwl_version: str,
    parent_user_defined_types: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
    """Load and pack all "run" sections of the workflow steps."""
    if isinstance(cwl, str):
        raise RuntimeError(f"{base_url.geturl()}: Expecting a process, found a string")

    if not isinstance(cwl, dict):
        return cwl

    if cwl.get("class") != "Workflow":
        return cwl

    for _, v in enumerate(cwl["steps"]):
        if isinstance(v, dict):
            sys.stderr.write(
                f"\n--\nRecursing into step {base_url.geturl()}:{v['id']}\n"
            )
            _run = v.get("run")
            if isinstance(_run, str):
                v["run"], new_base_url = utils.load_linked_file(
                    base_url, _run, is_import=True
                )
                v["run"] = pack_process(
                    v["run"],
                    new_base_url,
                    cwl.get("cwlVersion", cwl_version),
                )
            else:
                v["run"] = pack_process(
                    v["run"],
                    base_url,
                    cwl.get("cwlVersion", cwl_version),
                    parent_user_defined_types,
                )
            if "cwlVersion" in v["run"]:
                parent_version = version.parse(
                    cwl.get("cwlVersion", cwl_version).strip("v")
                )
                this_version = version.parse(v["run"]["cwlVersion"].strip("v"))
                if this_version > parent_version:
                    # not really enough, but hope for the best
                    cwl["cwlVersion"] = v["run"]["cwlVersion"]

    return cwl


def add_missing_requirements(cwl: dict[str, Any]) -> dict[str, Any]:
    """Due to packing, we may need to add a "SubworkflowFeatureRequirement"."""
    requirements = cwl.get("requirements", [])
    present = {req["class"] for req in requirements}

    def _add_req(_req_name: str) -> None:
        nonlocal requirements
        if _req_name not in present:
            requirements += [{"class": _req_name}]

    if cwl.get("class") == "Workflow":
        sub_workflow = False
        for step in cwl["steps"]:
            if step["run"]["class"] == "Workflow":
                sub_workflow = True
                break
        if sub_workflow:
            _add_req("SubworkflowFeatureRequirement")
    return cwl


def pack(cwl_path: str) -> dict[str, Any]:
    """Pack a CWL document at the given path."""
    sys.stderr.write(f"Packing {cwl_path}\n")
    file_path_url = urllib.parse.urlparse(cwl_path)

    cwl, full_url = cast(
        tuple[dict[str, Any], urllib.parse.ParseResult],
        utils.load_linked_file(base_url=file_path_url, link="", is_import=True),
    )
    if "$graph" in cwl:
        # assume already packed
        return cwl
    cwl = pack_process(cwl, full_url, cwl["cwlVersion"])
    return cwl


# ---- cwl_utils-0.37/cwl_utils/parser/__init__.py ----
# SPDX-License-Identifier: Apache-2.0
import os
from collections.abc import MutableMapping, MutableSequence
from pathlib import Path
from typing import Any, Optional, Union, cast
from urllib.parse import unquote_plus, urlparse

from schema_salad.exceptions import ValidationException
from schema_salad.utils import yaml_no_ts

from ..errors import GraphTargetMissingException
from . import cwl_v1_0, cwl_v1_1, cwl_v1_2
import cwl_v1_0, cwl_v1_1, cwl_v1_2 LoadingOptions = Union[ cwl_v1_0.LoadingOptions, cwl_v1_1.LoadingOptions, cwl_v1_2.LoadingOptions ] """Type union for a CWL v1.x LoadingOptions object.""" Saveable = Union[cwl_v1_0.Saveable, cwl_v1_1.Saveable, cwl_v1_2.Saveable] """Type union for a CWL v1.x Saveable object.""" InputParameter = Union[ cwl_v1_0.InputParameter, cwl_v1_1.InputParameter, cwl_v1_2.InputParameter ] """Type union for a CWL v1.x InputEnumSchema object.""" InputRecordField = Union[ cwl_v1_0.InputRecordField, cwl_v1_1.InputRecordField, cwl_v1_2.InputRecordField, ] """Type union for a CWL v1.x InputRecordSchema object.""" InputSchema = Union[cwl_v1_0.InputSchema, cwl_v1_1.InputSchema, cwl_v1_2.InputSchema] """Type union for a CWL v1.x InputSchema object.""" OutputParameter = Union[ cwl_v1_0.OutputParameter, cwl_v1_1.OutputParameter, cwl_v1_2.OutputParameter ] """Type union for a CWL v1.x OutputParameter object.""" OutputArraySchema = Union[ cwl_v1_0.OutputArraySchema, cwl_v1_1.OutputArraySchema, cwl_v1_2.OutputArraySchema, ] """Type union for a CWL v1.x OutputArraySchema object.""" OutputEnumSchema = Union[ cwl_v1_0.OutputEnumSchema, cwl_v1_1.OutputEnumSchema, cwl_v1_2.OutputEnumSchema, ] """Type union for a CWL v1.x OutputEnumSchema object.""" OutputRecordField = Union[ cwl_v1_0.OutputRecordField, cwl_v1_1.OutputRecordField, cwl_v1_2.OutputRecordField, ] """Type union for a CWL v1.x OutputRecordField object.""" OutputRecordSchema = Union[ cwl_v1_0.OutputRecordSchema, cwl_v1_1.OutputRecordSchema, cwl_v1_2.OutputRecordSchema, ] """Type union for a CWL v1.x OutputRecordSchema object.""" OutputSchema = Union[ cwl_v1_0.OutputSchema, cwl_v1_1.OutputSchema, cwl_v1_2.OutputSchema ] """Type union for a CWL v1.x OutputSchema object.""" Workflow = Union[cwl_v1_0.Workflow, cwl_v1_1.Workflow, cwl_v1_2.Workflow] WorkflowTypes = (cwl_v1_0.Workflow, cwl_v1_1.Workflow, cwl_v1_2.Workflow) """Type union for a CWL v1.x Workflow object.""" WorkflowInputParameter = Union[ 
cwl_v1_0.InputParameter, cwl_v1_1.WorkflowInputParameter, cwl_v1_2.WorkflowInputParameter, ] """Type union for a CWL v1.x WorkflowInputParameter object.""" WorkflowOutputParameter = Union[ cwl_v1_0.WorkflowOutputParameter, cwl_v1_1.WorkflowOutputParameter, cwl_v1_2.WorkflowOutputParameter, ] """Type union for a CWL v1.x WorkflowOutputParameter object.""" WorkflowStep = Union[ cwl_v1_0.WorkflowStep, cwl_v1_1.WorkflowStep, cwl_v1_2.WorkflowStep ] """Type union for a CWL v1.x WorkflowStep object.""" WorkflowStepInput = Union[ cwl_v1_0.WorkflowStepInput, cwl_v1_1.WorkflowStepInput, cwl_v1_2.WorkflowStepInput ] """Type union for a CWL v1.x WorkflowStepInput object.""" WorkflowStepOutput = Union[ cwl_v1_0.WorkflowStepOutput, cwl_v1_1.WorkflowStepOutput, cwl_v1_2.WorkflowStepOutput, ] """Type union for a CWL v1.x WorkflowStepOutput object.""" CommandLineTool = Union[ cwl_v1_0.CommandLineTool, cwl_v1_1.CommandLineTool, cwl_v1_2.CommandLineTool ] CommandLineToolTypes = ( cwl_v1_0.CommandLineTool, cwl_v1_1.CommandLineTool, cwl_v1_2.CommandLineTool, ) """Type union for a CWL v1.x CommandLineTool object.""" CommandLineBinding = Union[ cwl_v1_0.CommandLineBinding, cwl_v1_1.CommandLineBinding, cwl_v1_2.CommandLineBinding, ] """Type union for a CWL v1.x CommandLineBinding object.""" CommandOutputBinding = Union[ cwl_v1_0.CommandOutputBinding, cwl_v1_1.CommandOutputBinding, cwl_v1_2.CommandOutputBinding, ] """Type union for a CWL v1.x CommandOutputBinding object.""" CommandInputParameter = Union[ cwl_v1_0.CommandInputParameter, cwl_v1_1.CommandInputParameter, cwl_v1_2.CommandInputParameter, ] """Type union for a CWL v1.x CommandInputParameter object.""" CommandOutputParameter = Union[ cwl_v1_0.CommandOutputParameter, cwl_v1_1.CommandOutputParameter, cwl_v1_2.CommandOutputParameter, ] """Type union for a CWL v1.x CommandOutputParameter object.""" ExpressionTool = Union[ cwl_v1_0.ExpressionTool, cwl_v1_1.ExpressionTool, cwl_v1_2.ExpressionTool ] """Type union for a CWL v1.x 
ExpressionTool object.""" ExpressionToolOutputParameter = Union[ cwl_v1_0.ExpressionToolOutputParameter, cwl_v1_1.ExpressionToolOutputParameter, cwl_v1_2.ExpressionToolOutputParameter, ] """Type union for a CWL v1.x ExpressionToolOutputParameter object.""" DockerRequirement = Union[ cwl_v1_0.DockerRequirement, cwl_v1_1.DockerRequirement, cwl_v1_2.DockerRequirement ] DockerRequirementTypes = ( cwl_v1_0.DockerRequirement, cwl_v1_1.DockerRequirement, cwl_v1_2.DockerRequirement, ) """Type union for a CWL v1.x DockerRequirement object.""" Process = Union[Workflow, CommandLineTool, ExpressionTool, cwl_v1_2.Operation] """Type Union for a CWL v1.x Process object.""" ProcessRequirement = Union[ cwl_v1_0.ProcessRequirement, cwl_v1_1.ProcessRequirement, cwl_v1_2.ProcessRequirement, ] """Type Union for a CWL v1.x ProcessRequirement object.""" ProcessRequirementTypes = ( cwl_v1_0.ProcessRequirement, cwl_v1_1.ProcessRequirement, cwl_v1_2.ProcessRequirement, ) SoftwareRequirement = Union[ cwl_v1_0.SoftwareRequirement, cwl_v1_1.SoftwareRequirement, cwl_v1_2.SoftwareRequirement, ] SoftwareRequirementTypes = ( cwl_v1_0.SoftwareRequirement, cwl_v1_1.SoftwareRequirement, cwl_v1_2.SoftwareRequirement, ) """Type union for a CWL v1.x SoftwareRequirement object.""" ArraySchema = Union[cwl_v1_0.ArraySchema, cwl_v1_1.ArraySchema, cwl_v1_2.ArraySchema] InputArraySchema = Union[ cwl_v1_0.InputArraySchema, cwl_v1_1.InputArraySchema, cwl_v1_2.InputArraySchema ] InputArraySchemaTypes = ( cwl_v1_0.InputArraySchema, cwl_v1_1.InputArraySchema, cwl_v1_2.InputArraySchema, ) """Type Union for a CWL v1.x ArraySchema object.""" EnumSchema = Union[cwl_v1_0.EnumSchema, cwl_v1_1.EnumSchema, cwl_v1_2.EnumSchema] InputEnumSchema = Union[ cwl_v1_0.InputEnumSchema, cwl_v1_1.InputEnumSchema, cwl_v1_2.InputEnumSchema ] InputEnumSchemaTypes = ( cwl_v1_0.InputEnumSchema, cwl_v1_1.InputEnumSchema, cwl_v1_2.InputEnumSchema, ) """Type Union for a CWL v1.x EnumSchema object.""" RecordSchema = Union[ 
cwl_v1_0.RecordSchema, cwl_v1_1.RecordSchema, cwl_v1_2.RecordSchema ] InputRecordSchema = Union[ cwl_v1_0.InputRecordSchema, cwl_v1_1.InputRecordSchema, cwl_v1_2.InputRecordSchema ] InputRecordSchemaTypes = ( cwl_v1_0.InputRecordSchema, cwl_v1_1.InputRecordSchema, cwl_v1_2.InputRecordSchema, ) """Type Union for a CWL v1.x RecordSchema object.""" File = Union[cwl_v1_0.File, cwl_v1_1.File, cwl_v1_2.File] """Type Union for a CWL v1.x File object.""" SecondaryFileSchema = Union[cwl_v1_1.SecondaryFileSchema, cwl_v1_2.SecondaryFileSchema] """Type Union for a CWL v1.x SecondaryFileSchema object.""" Directory = Union[cwl_v1_0.Directory, cwl_v1_1.Directory, cwl_v1_2.Directory] """Type Union for a CWL v1.x Directory object.""" Dirent = Union[cwl_v1_0.Dirent, cwl_v1_1.Dirent, cwl_v1_2.Dirent] """Type Union for a CWL v1.x Dirent object.""" _Loader = Union[cwl_v1_0._Loader, cwl_v1_1._Loader, cwl_v1_2._Loader] """Type union for a CWL v1.x _Loader.""" def _get_id_from_graph(yaml: MutableMapping[str, Any], id_: Optional[str]) -> Any: if id_ is None: id_ = "main" for el in yaml["$graph"]: if el["id"].lstrip("#") == id_: return el raise GraphTargetMissingException( "Tool file contains graph of multiple objects, must specify " "one of #%s" % ", #".join(el["id"] for el in yaml["$graph"]) ) def cwl_version(yaml: Any) -> Optional[str]: """ Return the cwlVersion of a YAML object. :param yaml: ruamel.yaml object for a CWL document :raises ValidationException: If `yaml` is not a MutableMapping. 
""" if not isinstance(yaml, MutableMapping): raise ValidationException("MutableMapping is required") if "cwlVersion" not in list(yaml.keys()): return None return cast(str, yaml["cwlVersion"]) def load_document_by_uri( path: Union[str, Path], loadingOptions: Optional[LoadingOptions] = None, load_all: bool = False, ) -> Any: """Load a CWL object from a URI or a path.""" if isinstance(path, str): uri = urlparse(path) id_ = uri.fragment or None if not uri.scheme or uri.scheme == "file": real_uri = Path(unquote_plus(uri.path)).resolve().as_uri() base_uri = Path(unquote_plus(uri.path)).resolve().parent.as_uri() else: real_uri = path base_uri = os.path.dirname(path) else: real_uri = path.resolve().as_uri() base_uri = path.resolve().parent.as_uri() id_ = path.resolve().name.split("#")[1] if "#" in path.resolve().name else None if isinstance(loadingOptions, cwl_v1_0.LoadingOptions): loadingOptions = cwl_v1_0.LoadingOptions( fileuri=real_uri, baseuri=base_uri, copyfrom=loadingOptions ) elif isinstance(loadingOptions, cwl_v1_1.LoadingOptions): loadingOptions = cwl_v1_1.LoadingOptions( fileuri=real_uri, baseuri=base_uri, copyfrom=loadingOptions ) elif isinstance(loadingOptions, cwl_v1_2.LoadingOptions): loadingOptions = cwl_v1_2.LoadingOptions( fileuri=real_uri, baseuri=base_uri, copyfrom=loadingOptions ) elif loadingOptions is None: loadingOptions = cwl_v1_2.LoadingOptions(fileuri=real_uri, baseuri=base_uri) else: raise ValidationException( f"Unsupported loadingOptions type: {type(loadingOptions)}" ) doc = loadingOptions.fetcher.fetch_text(real_uri) return load_document_by_string(doc, real_uri, loadingOptions, id_, load_all) def load_document( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[LoadingOptions] = None, id_: Optional[str] = None, load_all: bool = False, ) -> Any: """Load a CWL object from a serialized YAML string or a YAML object.""" if baseuri is None: baseuri = cwl_v1_0.file_uri(os.getcwd()) + "/" if isinstance(doc, str): return 
load_document_by_string(doc, baseuri, loadingOptions, id_) return load_document_by_yaml(doc, baseuri, loadingOptions, id_, load_all) def load_document_by_string( string: str, uri: str, loadingOptions: Optional[LoadingOptions] = None, id_: Optional[str] = None, load_all: bool = False, ) -> Any: """Load a CWL object from a serialized YAML string.""" yaml = yaml_no_ts() result = yaml.load(string) return load_document_by_yaml(result, uri, loadingOptions, id_, load_all) def load_document_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, id_: Optional[str] = None, load_all: bool = False, ) -> Any: """Load a CWL object from a YAML object.""" version = cwl_version(yaml) if "$graph" in yaml and not load_all: yaml = _get_id_from_graph(yaml, id_) yaml["cwlVersion"] = version if version == "v1.0": result = cwl_v1_0.load_document_by_yaml( yaml, uri, cast(Optional[cwl_v1_0.LoadingOptions], loadingOptions) ) elif version == "v1.1": result = cwl_v1_1.load_document_by_yaml( yaml, uri, cast(Optional[cwl_v1_1.LoadingOptions], loadingOptions) ) elif version == "v1.2": result = cwl_v1_2.load_document_by_yaml( yaml, uri, cast(Optional[cwl_v1_2.LoadingOptions], loadingOptions) ) elif version is None: raise ValidationException("could not get the cwlVersion") else: raise ValidationException( f"Version error. 
Did not recognise {version} as a CWL version"
        )
    if isinstance(result, MutableSequence):
        lst = []
        for r in result:
            if "cwlVersion" in r.attrs:
                r.cwlVersion = version
            lst.append(r)
        return lst
    return result


def save(
    val: Optional[Union[Saveable, MutableSequence[Saveable]]],
    top: bool = True,
    base_url: str = "",
    relative_uris: bool = True,
) -> Any:
    """Convert a CWL Python object into a JSON/YAML serializable object."""
    if (
        isinstance(val, cwl_v1_0.Saveable)
        or isinstance(val, cwl_v1_1.Saveable)
        or isinstance(val, cwl_v1_2.Saveable)
    ):
        return val.save(top=top, base_url=base_url, relative_uris=relative_uris)
    if isinstance(val, MutableSequence):
        lst = [
            save(v, top=top, base_url=base_url, relative_uris=relative_uris)
            for v in val
        ]
        if top and all(is_process(v) for v in val):
            vers = [
                e.get("cwlVersion") for i, e in enumerate(lst) if is_process(val[i])
            ]
            latest = max(
                (v for v in vers if v is not None), key=cast(Any, version_split)
            )
            return {"cwlVersion": latest, "$graph": lst}
        return lst
    if isinstance(val, MutableMapping):
        newdict = {}
        for key in val:
            newdict[key] = save(
                val[key], top=False, base_url=base_url, relative_uris=relative_uris
            )
        return newdict
    return val


def is_process(v: Any) -> bool:
    """Test to see if the object is a CWL v1.x Python Process object."""
    return (
        isinstance(v, cwl_v1_0.Process)
        or isinstance(v, cwl_v1_1.Process)
        or isinstance(v, cwl_v1_2.Process)
    )


def version_split(version: str) -> MutableSequence[int]:
    """Split a cwlVersion value into its numerical components."""
    return [int(v) for v in version[1:].split(".")]


# ---- file: cwl_utils-0.37/cwl_utils/parser/cwl_v1_0.py ----
#
# This file was autogenerated using schema-salad-tool --codegen=python
# The code itself is released under the Apache 2.0 license and the help text is
# subject to the license of the original schema.
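The `save` helper above picks the newest `cwlVersion` for a `$graph` document with `max(..., key=version_split)`, so versions compare numerically rather than as strings. A minimal stdlib-only sketch of that selection logic (the sample version strings and the `latest_cwl_version` helper name are illustrative, not from the source):

```python
def version_split(version: str) -> list[int]:
    # "v1.2" -> [1, 2]: drop the leading "v", split on ".", compare numerically.
    return [int(v) for v in version[1:].split(".")]


def latest_cwl_version(versions: list[str]) -> str:
    # Mirrors the max(..., key=version_split) call in save(): a numeric key
    # avoids lexicographic surprises such as "v1.2" sorting after "v1.10".
    return max(versions, key=version_split)


print(latest_cwl_version(["v1.0", "v1.2", "v1.1"]))  # -> v1.2
```

The numeric key is the point of `version_split`: comparing the raw strings would rank a hypothetical "v1.10" below "v1.2".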
import copy import logging import os import pathlib import tempfile import uuid as _uuid__ # pylint: disable=unused-import # noqa: F401 import xml.sax # nosec from abc import ABC, abstractmethod from collections.abc import MutableMapping, MutableSequence, Sequence from io import StringIO from itertools import chain from typing import Any, Optional, Union, cast from urllib.parse import quote, urldefrag, urlparse, urlsplit, urlunsplit from urllib.request import pathname2url from rdflib import Graph from rdflib.plugins.parsers.notation3 import BadSyntax from ruamel.yaml.comments import CommentedMap from schema_salad.exceptions import SchemaSaladException, ValidationException from schema_salad.fetcher import DefaultFetcher, Fetcher, MemoryCachingFetcher from schema_salad.sourceline import SourceLine, add_lc_filename from schema_salad.utils import CacheType, yaml_no_ts # requires schema-salad v8.2+ _vocab: dict[str, str] = {} _rvocab: dict[str, str] = {} _logger = logging.getLogger("salad") IdxType = MutableMapping[str, tuple[Any, "LoadingOptions"]] class LoadingOptions: idx: IdxType fileuri: Optional[str] baseuri: str namespaces: MutableMapping[str, str] schemas: MutableSequence[str] original_doc: Optional[Any] addl_metadata: MutableMapping[str, Any] fetcher: Fetcher vocab: dict[str, str] rvocab: dict[str, str] cache: CacheType imports: list[str] includes: list[str] no_link_check: Optional[bool] container: Optional[str] def __init__( self, fetcher: Optional[Fetcher] = None, namespaces: Optional[dict[str, str]] = None, schemas: Optional[list[str]] = None, fileuri: Optional[str] = None, copyfrom: Optional["LoadingOptions"] = None, original_doc: Optional[Any] = None, addl_metadata: Optional[dict[str, str]] = None, baseuri: Optional[str] = None, idx: Optional[IdxType] = None, imports: Optional[list[str]] = None, includes: Optional[list[str]] = None, no_link_check: Optional[bool] = None, container: Optional[str] = None, ) -> None: """Create a LoadingOptions object.""" 
self.original_doc = original_doc if idx is not None: self.idx = idx else: self.idx = copyfrom.idx if copyfrom is not None else {} if fileuri is not None: self.fileuri = fileuri else: self.fileuri = copyfrom.fileuri if copyfrom is not None else None if baseuri is not None: self.baseuri = baseuri else: self.baseuri = copyfrom.baseuri if copyfrom is not None else "" if namespaces is not None: self.namespaces = namespaces else: self.namespaces = copyfrom.namespaces if copyfrom is not None else {} if schemas is not None: self.schemas = schemas else: self.schemas = copyfrom.schemas if copyfrom is not None else [] if addl_metadata is not None: self.addl_metadata = addl_metadata else: self.addl_metadata = copyfrom.addl_metadata if copyfrom is not None else {} if imports is not None: self.imports = imports else: self.imports = copyfrom.imports if copyfrom is not None else [] if includes is not None: self.includes = includes else: self.includes = copyfrom.includes if copyfrom is not None else [] if no_link_check is not None: self.no_link_check = no_link_check else: self.no_link_check = copyfrom.no_link_check if copyfrom is not None else False if container is not None: self.container = container else: self.container = copyfrom.container if copyfrom is not None else None if fetcher is not None: self.fetcher = fetcher elif copyfrom is not None: self.fetcher = copyfrom.fetcher else: import requests from cachecontrol.caches import SeparateBodyFileCache from cachecontrol.wrapper import CacheControl root = pathlib.Path(os.environ.get("HOME", tempfile.gettempdir())) session = CacheControl( requests.Session(), cache=SeparateBodyFileCache(root / ".cache" / "salad"), ) self.fetcher: Fetcher = DefaultFetcher({}, session) self.cache = self.fetcher.cache if isinstance(self.fetcher, MemoryCachingFetcher) else {} self.vocab = _vocab self.rvocab = _rvocab if self.namespaces is not None: self.vocab = self.vocab.copy() self.rvocab = self.rvocab.copy() for k, v in self.namespaces.items(): 
self.vocab[k] = v self.rvocab[v] = k @property def graph(self) -> Graph: """Generate a merged rdflib.Graph from all entries in self.schemas.""" graph = Graph() if not self.schemas: return graph key = str(hash(tuple(self.schemas))) if key in self.cache: return cast(Graph, self.cache[key]) for schema in self.schemas: fetchurl = ( self.fetcher.urljoin(self.fileuri, schema) if self.fileuri is not None else pathlib.Path(schema).resolve().as_uri() ) if fetchurl not in self.cache or self.cache[fetchurl] is True: _logger.debug("Getting external schema %s", fetchurl) try: content = self.fetcher.fetch_text(fetchurl) except Exception as e: _logger.warning("Could not load extension schema %s: %s", fetchurl, str(e)) continue newGraph = Graph() err_msg = "unknown error" for fmt in ["xml", "turtle"]: try: newGraph.parse(data=content, format=fmt, publicID=str(fetchurl)) self.cache[fetchurl] = newGraph graph += newGraph break except (xml.sax.SAXParseException, TypeError, BadSyntax) as e: err_msg = str(e) else: _logger.warning("Could not load extension schema %s: %s", fetchurl, err_msg) self.cache[key] = graph return graph class Saveable(ABC): """Mark classes than have a save() and fromDoc() function.""" @classmethod @abstractmethod def fromDoc( cls, _doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, ) -> "Saveable": """Construct this object from the result of yaml.load().""" @abstractmethod def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: """Convert this object to a JSON/YAML friendly dictionary.""" def load_field( val: Union[str, dict[str, str]], fieldtype: "_Loader", baseuri: str, loadingOptions: LoadingOptions, lc: Optional[list[Any]] = None, ) -> Any: """Load field.""" if isinstance(val, MutableMapping): if "$import" in val: if loadingOptions.fileuri is None: raise SchemaSaladException("Cannot load $import without fileuri") url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri, 
val["$import"])
            result, metadata = _document_load_by_url(
                fieldtype,
                url,
                loadingOptions,
            )
            loadingOptions.imports.append(url)
            return result
        if "$include" in val:
            if loadingOptions.fileuri is None:
                raise SchemaSaladException("Cannot load $include without fileuri")
            url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri, val["$include"])
            val = loadingOptions.fetcher.fetch_text(url)
            loadingOptions.includes.append(url)
    return fieldtype.load(val, baseuri, loadingOptions, lc=lc)


save_type = Optional[Union[MutableMapping[str, Any], MutableSequence[Any], int, float, bool, str]]


def extract_type(val_type: type[Any]) -> str:
    """Take a type of value, and extract the value as a string."""
    val_str = str(val_type)
    return val_str.split("'")[1]


def convert_typing(val_type: str) -> str:
    """Normalize type names to schema-salad types."""
    if "None" in val_type:
        return "null"
    if "CommentedSeq" in val_type or "list" in val_type:
        return "array"
    if "CommentedMap" in val_type or "dict" in val_type:
        return "object"
    if "False" in val_type or "True" in val_type:
        return "boolean"
    return val_type


def parse_errors(error_message: str) -> tuple[str, str, str]:
    """Parse error messages from several loaders into one error message."""
    if not error_message.startswith("Expected"):
        return error_message, "", ""
    vals = error_message.split("\n")
    if len(vals) == 1:
        return error_message, "", ""
    types = set()
    for val in vals:
        individual_vals = val.split(" ")
        if val == "":
            continue
        if individual_vals[1] == "one":
            individual_vals = val.split("(")[1].split(",")
            for t in individual_vals:
                types.add(t.strip(" ").strip(")\n"))
        elif individual_vals[2] == "<class":
            types.add(individual_vals[3].strip(">").replace("'", ""))
        elif individual_vals[0] == "Value":
            types.add(individual_vals[-1].strip("."))
        else:
            types.add(individual_vals[1].replace(",", ""))
    types = {val for val in types if val != "NoneType"}
    if "str" in types:
        types = {convert_typing(val) for val in types if "'" not in val}
    to_print = ""
    for val in types:
        if "'" in val:
            to_print = "value" if len(types) == 1 else "values"
    if to_print == "":
        to_print = "type" if len(types) == 1 else "types"
    verb_tensage = "is" if len(types) == 1 else "are"
    return str(types).replace("{", "(").replace("}", ")").replace("'", ""), to_print, verb_tensage


def save(
    val: Any,
    top: bool = True,
    base_url: str = "",
    relative_uris: bool = True,
) -> save_type:
    if isinstance(val, Saveable):
        return val.save(top=top, base_url=base_url, relative_uris=relative_uris)
    if isinstance(val, MutableSequence):
        return [save(v, top=False, base_url=base_url, relative_uris=relative_uris) for v in val]
    if isinstance(val, MutableMapping):
        newdict = {}
        for key in val:
            newdict[key] = save(val[key], top=False, base_url=base_url, relative_uris=relative_uris)
        return newdict
    if val is None or isinstance(val, (int, float, bool, str)):
        return val
    raise Exception("Not Saveable: %s" % type(val))


def save_with_metadata(
    val: Any,
    valLoadingOpts: LoadingOptions,
    top: bool = True,
    base_url: str = "",
    relative_uris: bool = True,
) -> save_type:
    """Save and set $namespaces, $schemas, $base and any other metadata fields at the top level."""
    saved_val = save(val, top, base_url, relative_uris)
    newdict: MutableMapping[str, Any] = {}
    if isinstance(saved_val, MutableSequence):
        newdict = {"$graph": saved_val}
    elif isinstance(saved_val, MutableMapping):
        newdict = saved_val
    if valLoadingOpts.namespaces:
        newdict["$namespaces"] = valLoadingOpts.namespaces
    if valLoadingOpts.schemas:
        newdict["$schemas"] = valLoadingOpts.schemas
    if valLoadingOpts.baseuri:
        newdict["$base"] = valLoadingOpts.baseuri
    for k, v in valLoadingOpts.addl_metadata.items():
        if k not in newdict:
            newdict[k] = v
    return newdict


def expand_url(
    url: str,
    base_url: str,
    loadingOptions: LoadingOptions,
    scoped_id: bool = False,
    vocab_term: bool = False,
    scoped_ref: Optional[int] = None,
) -> str:
    if url in ("@id", "@type"):
        return url
    if vocab_term and url in loadingOptions.vocab:
        return url
    if bool(loadingOptions.vocab) and ":" in url:
        prefix = url.split(":")[0]
if prefix in loadingOptions.vocab: url = loadingOptions.vocab[prefix] + url[len(prefix) + 1 :] split = urlsplit(url) if ( (bool(split.scheme) and split.scheme in loadingOptions.fetcher.supported_schemes()) or url.startswith("$(") or url.startswith("${") ): pass elif scoped_id and not bool(split.fragment): splitbase = urlsplit(base_url) frg = "" if bool(splitbase.fragment): frg = splitbase.fragment + "/" + split.path else: frg = split.path pt = splitbase.path if splitbase.path != "" else "/" url = urlunsplit((splitbase.scheme, splitbase.netloc, pt, splitbase.query, frg)) elif scoped_ref is not None and not bool(split.fragment): splitbase = urlsplit(base_url) sp = splitbase.fragment.split("/") n = scoped_ref while n > 0 and len(sp) > 0: sp.pop() n -= 1 sp.append(url) url = urlunsplit( ( splitbase.scheme, splitbase.netloc, splitbase.path, splitbase.query, "/".join(sp), ) ) else: url = loadingOptions.fetcher.urljoin(base_url, url) if vocab_term: split = urlsplit(url) if bool(split.scheme): if url in loadingOptions.rvocab: return loadingOptions.rvocab[url] else: raise ValidationException(f"Term {url!r} not in vocabulary") return url class _Loader: def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: pass class _AnyLoader(_Loader): def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc is not None: return doc raise ValidationException("Expected non-null") class _PrimitiveLoader(_Loader): def __init__(self, tp: Union[type, tuple[type[str], type[str]]]) -> None: self.tp = tp def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, self.tp): raise ValidationException(f"Expected a {self.tp} but got {doc.__class__.__name__}") return doc def __repr__(self) -> str: return 
str(self.tp) class _ArrayLoader(_Loader): def __init__(self, items: _Loader) -> None: self.items = items def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableSequence): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an array." ) r: list[Any] = [] errors: list[SchemaSaladException] = [] fields: list[str] = [] for i in range(0, len(doc)): try: lf = load_field( doc[i], _UnionLoader([self, self.items]), baseuri, loadingOptions, lc=lc ) flatten = loadingOptions.container != "@list" if flatten and isinstance(lf, MutableSequence): r.extend(lf) else: r.append(lf) if isinstance(doc[i], CommentedMap): if doc[i].get("id") is not None: if doc[i].get("id") in fields: errors.append( ValidationException( f"Duplicate field {doc[i].get('id')!r}", SourceLine(doc[i], "id", str), [], ) ) else: fields.append(doc[i].get("id")) except ValidationException as e: e = ValidationException( "array item is invalid because", SourceLine(doc, i, str), [e] ) errors.append(e) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return f"array<{self.items}>" class _MapLoader(_Loader): def __init__( self, values: _Loader, name: Optional[str] = None, container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.values = values self.name = name self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException(f"Expected a map, was {type(doc)}") if self.container is not None or self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, container=self.container, no_link_check=self.no_link_check ) r: dict[str, Any] = {} 
errors: list[SchemaSaladException] = [] for k, v in doc.items(): try: lf = load_field(v, self.values, baseuri, loadingOptions, lc) r[k] = lf except ValidationException as e: errors.append(e.with_sourceline(SourceLine(doc, k, str))) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return self.name if self.name is not None else f"map" class _EnumLoader(_Loader): def __init__(self, symbols: Sequence[str], name: str) -> None: self.symbols = symbols self.name = name def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc in self.symbols: return doc raise ValidationException(f"Expected one of {self.symbols}") def __repr__(self) -> str: return self.name class _SecondaryDSLLoader(_Loader): def __init__(self, inner: _Loader) -> None: self.inner = inner def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: r: list[dict[str, Any]] = [] if isinstance(doc, MutableSequence): for d in doc: if isinstance(d, str): if d.endswith("?"): r.append({"pattern": d[:-1], "required": False}) else: r.append({"pattern": d}) elif isinstance(d, dict): new_dict: dict[str, Any] = {} dict_copy = copy.deepcopy(d) if "pattern" in dict_copy: new_dict["pattern"] = dict_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {d}" ) new_dict["required"] = ( dict_copy.pop("required") if "required" in dict_copy else None ) if len(dict_copy): raise ValidationException( "Unallowed values in secondaryFiles specification entry: {}".format( dict_copy ) ) r.append(new_dict) else: raise ValidationException( "Expected a string or sequence of (strings or mappings)." 
) elif isinstance(doc, MutableMapping): new_dict = {} doc_copy = copy.deepcopy(doc) if "pattern" in doc_copy: new_dict["pattern"] = doc_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {doc}" ) new_dict["required"] = doc_copy.pop("required") if "required" in doc_copy else None if len(doc_copy): raise ValidationException( f"Unallowed values in secondaryFiles specification entry: {doc_copy}" ) r.append(new_dict) elif isinstance(doc, str): if doc.endswith("?"): r.append({"pattern": doc[:-1], "required": False}) else: r.append({"pattern": doc}) else: raise ValidationException("Expected str or sequence of str") return self.inner.load(r, baseuri, loadingOptions, docRoot, lc=lc) class _RecordLoader(_Loader): def __init__( self, classtype: type[Saveable], container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.classtype = classtype self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an object." 
            )
        if self.container is not None or self.no_link_check is not None:
            loadingOptions = LoadingOptions(
                copyfrom=loadingOptions,
                container=self.container,
                no_link_check=self.no_link_check,
            )
        return self.classtype.fromDoc(doc, baseuri, loadingOptions, docRoot=docRoot)

    def __repr__(self) -> str:
        return str(self.classtype.__name__)


class _ExpressionLoader(_Loader):
    def __init__(self, items: type[str]) -> None:
        self.items = items

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        if not isinstance(doc, str):
            raise ValidationException(
                f"Value is a {convert_typing(extract_type(type(doc)))}, "
                f"but valid type for this field is a str."
            )
        return doc


class _UnionLoader(_Loader):
    def __init__(self, alternates: Sequence[_Loader], name: Optional[str] = None) -> None:
        self.alternates = alternates
        self.name = name

    def add_loaders(self, loaders: Sequence[_Loader]) -> None:
        self.alternates = tuple(loader for loader in chain(self.alternates, loaders))

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        errors = []

        if lc is None:
            lc = []

        for t in self.alternates:
            try:
                return t.load(doc, baseuri, loadingOptions, docRoot=docRoot, lc=lc)
            except ValidationException as e:
                if isinstance(t, _ArrayLoader) and len(self.alternates) > 1:
                    continue
                if isinstance(doc, (CommentedMap, dict)):
                    if "class" in doc:
                        if str(doc.get("class")) == str(t):
                            errors.append(
                                ValidationException(
                                    f"Object `{baseuri.split('/')[-1]}` is not valid because:",
                                    SourceLine(doc, next(iter(doc)), str),
                                    [e],
                                )
                            )
                    else:
                        if "array" in str(t):
                            continue
                        else:
                            if "id" in doc:
                                id = baseuri.split("/")[-1] + "#" + str(doc.get("id"))
                                if "id" in lc:
                                    errors.append(
                                        ValidationException(
                                            f"checking object `{id}` using `{t}`",
                                            SourceLine(lc, "id", str),
                                            [e],
                                        )
                                    )
                                else:
                                    errors.append(
                                        ValidationException(
                                            f"checking object `{id}` using `{t}`",
                                            SourceLine(lc, doc.get("id"), str),
                                            [e],
                                        )
                                    )
                            else:
                                if not isinstance(
                                    t, (_PrimitiveLoader)
                                ):  # avoids 'tried was {x}' errors
                                    errors.append(
                                        ValidationException(f"tried `{t}` but", None, [e])
                                    )
                else:
                    # avoids "tried but x" and instead returns the values for parsing
                    errors.append(ValidationException("", None, [e]))

        if isinstance(doc, (CommentedMap, dict)) and "class" in doc:
            if str(doc.get("class")) not in str(self.alternates):
                errors.append(
                    ValidationException(
                        "Field `class` contains undefined reference to "
                        + "`"
                        + "/".join(baseuri.split("/")[0:-1])
                        + "/"
                        + str(doc.get("class"))
                        + "`",
                        SourceLine(doc, "class", str),
                        [],
                    )
                )
        raise ValidationException("", None, errors, "*")

    def __repr__(self) -> str:
        return self.name if self.name is not None else " | ".join(str(a) for a in self.alternates)


class _URILoader(_Loader):
    def __init__(
        self,
        inner: _Loader,
        scoped_id: bool,
        vocab_term: bool,
        scoped_ref: Optional[int],
        no_link_check: Optional[bool],
    ) -> None:
        self.inner = inner
        self.scoped_id = scoped_id
        self.vocab_term = vocab_term
        self.scoped_ref = scoped_ref
        self.no_link_check = no_link_check

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        if self.no_link_check is not None:
            loadingOptions = LoadingOptions(
                copyfrom=loadingOptions, no_link_check=self.no_link_check
            )
        if isinstance(doc, MutableSequence):
            newdoc = []
            for i in doc:
                if isinstance(i, str):
                    newdoc.append(
                        expand_url(
                            i,
                            baseuri,
                            loadingOptions,
                            self.scoped_id,
                            self.vocab_term,
                            self.scoped_ref,
                        )
                    )
                else:
                    newdoc.append(i)
            doc = newdoc
        elif isinstance(doc, str):
            doc = expand_url(
                doc,
                baseuri,
                loadingOptions,
                self.scoped_id,
                self.vocab_term,
                self.scoped_ref,
            )
        if isinstance(doc, str):
            if not loadingOptions.no_link_check:
                errors = []
                try:
                    if not loadingOptions.fetcher.check_exists(doc):
                        errors.append(
                            ValidationException(f"contains undefined reference to `{doc}`")
                        )
                except ValidationException:
                    pass
                if len(errors) > 0:
                    raise 
                        ValidationException("", None, errors)
        return self.inner.load(doc, baseuri, loadingOptions, lc=lc)


class _TypeDSLLoader(_Loader):
    def __init__(self, inner: _Loader, refScope: Optional[int], salad_version: str) -> None:
        self.inner = inner
        self.refScope = refScope
        self.salad_version = salad_version

    def resolve(
        self,
        doc: str,
        baseuri: str,
        loadingOptions: LoadingOptions,
    ) -> Union[list[Union[dict[str, Any], str]], dict[str, Any], str]:
        doc_ = doc
        optional = False
        if doc_.endswith("?"):
            optional = True
            doc_ = doc_[0:-1]

        if doc_.endswith("[]"):
            salad_versions = [int(v) for v in self.salad_version[1:].split(".")]
            items: Union[list[Union[dict[str, Any], str]], dict[str, Any], str] = ""
            rest = doc_[0:-2]
            if salad_versions < [1, 3]:
                if rest.endswith("[]"):
                    # To show the error message with the original type
                    return doc
                else:
                    items = expand_url(rest, baseuri, loadingOptions, False, True, self.refScope)
            else:
                items = self.resolve(rest, baseuri, loadingOptions)
                if isinstance(items, str):
                    items = expand_url(items, baseuri, loadingOptions, False, True, self.refScope)
            expanded: Union[dict[str, Any], str] = {"type": "array", "items": items}
        else:
            expanded = expand_url(doc_, baseuri, loadingOptions, False, True, self.refScope)

        if optional:
            return ["null", expanded]
        else:
            return expanded

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        if isinstance(doc, MutableSequence):
            r: list[Any] = []
            for d in doc:
                if isinstance(d, str):
                    resolved = self.resolve(d, baseuri, loadingOptions)
                    if isinstance(resolved, MutableSequence):
                        for i in resolved:
                            if i not in r:
                                r.append(i)
                    else:
                        if resolved not in r:
                            r.append(resolved)
                else:
                    r.append(d)
            doc = r
        elif isinstance(doc, str):
            doc = self.resolve(doc, baseuri, loadingOptions)

        return self.inner.load(doc, baseuri, loadingOptions, lc=lc)


class _IdMapLoader(_Loader):
    def __init__(self, inner: _Loader, mapSubject: str, mapPredicate: Optional[str]) -> None:
        self.inner = inner
        self.mapSubject = mapSubject
        self.mapPredicate = mapPredicate

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        if isinstance(doc, MutableMapping):
            r: list[Any] = []
            for k in doc.keys():
                val = doc[k]
                if isinstance(val, CommentedMap):
                    v = copy.copy(val)
                    v.lc.data = val.lc.data
                    v.lc.filename = val.lc.filename
                    v[self.mapSubject] = k
                    r.append(v)
                elif isinstance(val, MutableMapping):
                    v2 = copy.copy(val)
                    v2[self.mapSubject] = k
                    r.append(v2)
                else:
                    if self.mapPredicate:
                        v3 = {self.mapPredicate: val}
                        v3[self.mapSubject] = k
                        r.append(v3)
                    else:
                        raise ValidationException("No mapPredicate")
            doc = r

        return self.inner.load(doc, baseuri, loadingOptions, lc=lc)


def _document_load(
    loader: _Loader,
    doc: Union[str, MutableMapping[str, Any], MutableSequence[Any]],
    baseuri: str,
    loadingOptions: LoadingOptions,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> tuple[Any, LoadingOptions]:
    if isinstance(doc, str):
        return _document_load_by_url(
            loader,
            loadingOptions.fetcher.urljoin(baseuri, doc),
            loadingOptions,
            addl_metadata_fields=addl_metadata_fields,
        )

    if isinstance(doc, MutableMapping):
        addl_metadata = {}
        if addl_metadata_fields is not None:
            for mf in addl_metadata_fields:
                if mf in doc:
                    addl_metadata[mf] = doc[mf]

        docuri = baseuri
        if "$base" in doc:
            baseuri = doc["$base"]

        loadingOptions = LoadingOptions(
            copyfrom=loadingOptions,
            namespaces=doc.get("$namespaces", None),
            schemas=doc.get("$schemas", None),
            baseuri=doc.get("$base", None),
            addl_metadata=addl_metadata,
        )

        doc = copy.copy(doc)
        if "$namespaces" in doc:
            doc.pop("$namespaces")
        if "$schemas" in doc:
            doc.pop("$schemas")
        if "$base" in doc:
            doc.pop("$base")

        if "$graph" in doc:
            loadingOptions.idx[baseuri] = (
                loader.load(doc["$graph"], baseuri, loadingOptions),
                loadingOptions,
            )
        else:
            loadingOptions.idx[baseuri] = (
                loader.load(doc, baseuri, loadingOptions, docRoot=baseuri),
                loadingOptions,
            )

        if docuri != 
        baseuri:
            loadingOptions.idx[docuri] = loadingOptions.idx[baseuri]

        return loadingOptions.idx[baseuri]

    if isinstance(doc, MutableSequence):
        loadingOptions.idx[baseuri] = (
            loader.load(doc, baseuri, loadingOptions),
            loadingOptions,
        )
        return loadingOptions.idx[baseuri]

    raise ValidationException(
        "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc)
    )


def _document_load_by_url(
    loader: _Loader,
    url: str,
    loadingOptions: LoadingOptions,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> tuple[Any, LoadingOptions]:
    if url in loadingOptions.idx:
        return loadingOptions.idx[url]

    doc_url, frg = urldefrag(url)

    text = loadingOptions.fetcher.fetch_text(doc_url)
    textIO = StringIO(text)
    textIO.name = str(doc_url)
    yaml = yaml_no_ts()
    result = yaml.load(textIO)
    add_lc_filename(result, doc_url)

    loadingOptions = LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url)

    _document_load(
        loader,
        result,
        doc_url,
        loadingOptions,
        addl_metadata_fields=addl_metadata_fields,
    )

    return loadingOptions.idx[url]


def file_uri(path: str, split_frag: bool = False) -> str:
    """Transform a file path into a URL with file scheme."""
    if path.startswith("file://"):
        return path
    if split_frag:
        pathsp = path.split("#", 2)
        frag = "#" + quote(str(pathsp[1])) if len(pathsp) == 2 else ""
        urlpath = pathname2url(str(pathsp[0]))
    else:
        urlpath = pathname2url(path)
        frag = ""
    if urlpath.startswith("//"):
        return f"file:{urlpath}{frag}"
    return f"file://{urlpath}{frag}"


def prefix_url(url: str, namespaces: dict[str, str]) -> str:
    """Expand short forms into full URLs using the given namespace dictionary."""
    for k, v in namespaces.items():
        if url.startswith(v):
            return k + ":" + url[len(v) :]
    return url


def save_relative_uri(
    uri: Any,
    base_url: str,
    scoped_id: bool,
    ref_scope: Optional[int],
    relative_uris: bool,
) -> Any:
    """Convert any URI to a relative one, obeying the scoping rules."""
    if isinstance(uri, MutableSequence):
        return [
            save_relative_uri(u, base_url, scoped_id, ref_scope, relative_uris)
            for u in uri
        ]
    elif isinstance(uri, str):
        if not relative_uris or uri == base_url:
            return uri
        urisplit = urlsplit(uri)
        basesplit = urlsplit(base_url)
        if urisplit.scheme == basesplit.scheme and urisplit.netloc == basesplit.netloc:
            if urisplit.path != basesplit.path:
                p = os.path.relpath(urisplit.path, os.path.dirname(basesplit.path))
                if urisplit.fragment:
                    p = p + "#" + urisplit.fragment
                return p

            basefrag = basesplit.fragment + "/"
            if ref_scope:
                sp = basefrag.split("/")
                i = 0
                while i < ref_scope:
                    sp.pop()
                    i += 1
                basefrag = "/".join(sp)
            if urisplit.fragment.startswith(basefrag):
                return urisplit.fragment[len(basefrag) :]
            return urisplit.fragment
        return uri
    else:
        return save(uri, top=False, base_url=base_url, relative_uris=relative_uris)


def shortname(inputid: str) -> str:
    """
    Compute the shortname of a fully qualified identifier.

    See https://w3id.org/cwl/v1.2/SchemaSalad.html#Short_names.
    """
    parsed_id = urlparse(inputid)
    if parsed_id.fragment:
        return parsed_id.fragment.split("/")[-1]
    return parsed_id.path.split("/")[-1]


def parser_info() -> str:
    return "org.w3id.cwl.v1_0"


class Documented(Saveable):
    pass


class RecordField(Documented):
    """
    A field of a record.
""" name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` 
field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) 
) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( 
self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class RecordSchema(Saveable): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, 
type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class EnumSchema(Saveable): """ Define an enumerated type. """ name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, EnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = 
doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type"]) class ArraySchema(Saveable): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ArraySchema": _doc = 
copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class MapSchema(Saveable): def __init__( self, type_: Any, values: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.type_ = type_ self.values = values def __eq__(self, other: Any) -> bool: if isinstance(other, MapSchema): return bool(self.type_ == other.type_ and self.values == other.values) return False def __hash__(self) -> int: return hash((self.type_, self.values)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MapSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Map_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], 
detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("values") is None: raise ValidationException("missing required field `values`", None, []) values = load_field( _doc.get("values"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("values") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `values`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("values") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [e], detailed_message=f"the `values` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `type`, `values`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") 
_constructed = cls( type_=type_, values=values, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) if self.values is not None: u = save_relative_uri(self.values, base_url, False, 2, relative_uris) r["values"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["type", "values"]) class UnionSchema(Saveable): def __init__( self, names: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.names = names self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, UnionSchema): return bool(self.names == other.names and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.names, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "UnionSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("names") is None: raise ValidationException("missing required field `names`", None, []) names = load_field( _doc.get("names"), 
uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("names") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `names`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("names") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [e], detailed_message=f"the `names` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Union_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `names`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( names=names, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.names is not None: u = save_relative_uri(self.names, base_url, False, 2, relative_uris) r["names"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["names", "type"]) class CWLArraySchema(ArraySchema): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if 
extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the 
`items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, 
self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class CWLRecordField(RecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( 
str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise 
ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, 
top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class CWLRecordSchema(RecordSchema): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader, 
baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with 
value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class File(Saveable): """ Represents a file (or group of files when `secondaryFiles` is provided) that will be accessible by tools using standard POSIX file system call API such as open(2) and read(2). Files are represented as objects with `class` of `File`. File objects have a number of properties that provide metadata about the file. The `location` property of a File is a URI that uniquely identifies the file. 
Implementations must support the file:// URI scheme and may support other schemes such as http://. The value of `location` may also be a relative reference, in which case it must be resolved relative to the URI of the document it appears in. Alternately to `location`, implementations must also accept the `path` property on File, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs).

If no `location` or `path` is specified, a file object must specify `contents` with the UTF-8 text content of the file. This is a "file literal". File literals do not correspond to external resources, but are created on disk with `contents` when needed for executing a tool. Where appropriate, expressions can return file literals to define new files at runtime. The maximum size of `contents` is 64 kilobytes.

The `basename` property defines the filename on disk where the file is staged. This may differ from the resource name. If not provided, `basename` must be computed from the last path part of `location` and made available to expressions.

The `secondaryFiles` property is a list of File or Directory objects that must be staged in the same directory as the primary file. It is an error for file names to be duplicated in `secondaryFiles`.

The `size` property is the size in bytes of the File. It must be computed from the resource and made available to expressions.

The `checksum` field contains a cryptographic hash of the file content for use in verifying file contents. Implementations may, at user option, enable or disable computation of the `checksum` field for performance or other reasons. However, the ability to compute output checksums is required to pass the CWL conformance test suite.

When executing a CommandLineTool, the files and secondary files may be staged to an arbitrary directory, but must use the value of `basename` for the filename. 
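As an illustration of the file-literal rules above, here is a minimal sketch of such an object as a plain mapping. The field names follow this documentation, but the specific values are hypothetical and the mapping is not produced by this module:

```python
# Hypothetical sketch of a CWL "file literal": no `location` or `path`,
# only inline UTF-8 `contents` (at most 64 kilobytes).
file_literal = {
    "class": "File",
    "basename": "greeting.txt",    # filename used when staged to disk
    "contents": "Hello, world!\n", # inline content instead of a resource
}

# The 64 KiB cap on `contents` described above.
assert len(file_literal["contents"].encode("utf-8")) <= 64 * 1024
```
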
The `path` property must be a file path in the context of the tool execution runtime (local to the compute node, or within the executing container). All computed properties should be available to expressions. File literals also must be staged and `path` must be set.

When collecting CommandLineTool outputs, `glob` matching returns file paths (with the `path` property) and the derived properties. This can all be modified by `outputEval`. Alternately, if the file `cwl.output.json` is present in the output, `outputBinding` is ignored.

File objects in the output must provide either a `location` URI or a `path` property in the context of the tool execution runtime (local to the compute node, or within the executing container).

When evaluating an ExpressionTool, file objects must be referenced via `location` (the expression tool does not have access to files on disk, so `path` is meaningless) or as file literals. It is legal to return a file object with an existing `location` but a different `basename`. The `loadContents` field of ExpressionTool inputs behaves the same as on CommandLineTool inputs; however, it is not meaningful on the outputs. An ExpressionTool may forward file references from input to output by using the same value for `location`. 
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, dirname: Optional[Any] = None, nameroot: Optional[Any] = None, nameext: Optional[Any] = None, checksum: Optional[Any] = None, size: Optional[Any] = None, secondaryFiles: Optional[Any] = None, format: Optional[Any] = None, contents: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "File" self.location = location self.path = path self.basename = basename self.dirname = dirname self.nameroot = nameroot self.nameext = nameext self.checksum = checksum self.size = size self.secondaryFiles = secondaryFiles self.format = format self.contents = contents def __eq__(self, other: Any) -> bool: if isinstance(other, File): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.dirname == other.dirname and self.nameroot == other.nameroot and self.nameext == other.nameext and self.checksum == other.checksum and self.size == other.size and self.secondaryFiles == other.secondaryFiles and self.format == other.format and self.contents == other.contents ) return False def __hash__(self) -> int: return hash( ( self.class_, self.location, self.path, self.basename, self.dirname, self.nameroot, self.nameext, self.checksum, self.size, self.secondaryFiles, self.format, self.contents, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "File": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise 
ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_File_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) dirname = None if "dirname" in _doc: try: dirname = load_field( _doc.get("dirname"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dirname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dirname`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("dirname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [e], detailed_message=f"the `dirname` field with value `{val}` " "is not valid because:", ) ) nameroot = None if "nameroot" in _doc: try: nameroot = load_field( _doc.get("nameroot"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameroot") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameroot`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameroot") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [e], detailed_message=f"the `nameroot` field with value `{val}` " "is not valid because:", ) ) nameext = None if "nameext" in _doc: try: nameext = load_field( _doc.get("nameext"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameext") ) except ValidationException as e: error_message, to_print, 
verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameext`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameext") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [e], detailed_message=f"the `nameext` field with value `{val}` " "is not valid because:", ) ) checksum = None if "checksum" in _doc: try: checksum = load_field( _doc.get("checksum"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("checksum") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `checksum`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("checksum") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [e], detailed_message=f"the `checksum` field with value `{val}` " "is not valid because:", ) ) size = None if "size" in _doc: try: size = load_field( _doc.get("size"), 
union_of_None_type_or_inttype, baseuri, loadingOptions, lc=_doc.get("size") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `size`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("size") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [e], detailed_message=f"the `size` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid 
because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) contents = None if "contents" in _doc: try: contents = load_field( _doc.get("contents"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("contents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `contents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("contents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [e], detailed_message=f"the `contents` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `dirname`, `nameroot`, `nameext`, `checksum`, `size`, `secondaryFiles`, `format`, `contents`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, dirname=dirname, nameroot=nameroot, nameext=nameext, checksum=checksum, size=size, secondaryFiles=secondaryFiles, format=format, contents=contents, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if self.path is not None: u = 
save_relative_uri(self.path, base_url, False, None, relative_uris) r["path"] = u if self.basename is not None: r["basename"] = save( self.basename, top=False, base_url=base_url, relative_uris=relative_uris ) if self.dirname is not None: r["dirname"] = save( self.dirname, top=False, base_url=base_url, relative_uris=relative_uris ) if self.nameroot is not None: r["nameroot"] = save( self.nameroot, top=False, base_url=base_url, relative_uris=relative_uris ) if self.nameext is not None: r["nameext"] = save( self.nameext, top=False, base_url=base_url, relative_uris=relative_uris ) if self.checksum is not None: r["checksum"] = save( self.checksum, top=False, base_url=base_url, relative_uris=relative_uris ) if self.size is not None: r["size"] = save( self.size, top=False, base_url=base_url, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, base_url, True, None, relative_uris) r["format"] = u if self.contents is not None: r["contents"] = save( self.contents, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "location", "path", "basename", "dirname", "nameroot", "nameext", "checksum", "size", "secondaryFiles", "format", "contents", ] ) class Directory(Saveable): """ Represents a directory to present to a command line tool. Directories are represented as objects with `class` of `Directory`. Directory objects have a number of properties that provide metadata about the directory. The `location` property of a Directory is a URI that uniquely identifies the directory. 
Implementations must support the file:// URI scheme and may support other schemes such as http://. Alternately to `location`, implementations must also accept the `path` property on Directory, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs).

A Directory object may have a `listing` field. This is a list of File and Directory objects that are contained in the Directory. For each entry in `listing`, the `basename` property defines the name of the File or Subdirectory when staged to disk. If `listing` is not provided, the implementation must have some way of fetching the Directory listing at runtime based on the `location` field.

If a Directory does not have `location`, it is a Directory literal. A Directory literal must provide `listing`. Directory literals must be created on disk at runtime as needed.

The resources in a Directory literal do not need to have any implied relationship in their `location`. For example, a Directory listing may contain two files located on different hosts. It is the responsibility of the runtime to ensure that those files are staged to disk appropriately. Secondary files associated with files in `listing` must also be staged to the same Directory.

When executing a CommandLineTool, Directories must be recursively staged first and have local values of `path` assigned.

Directory objects in CommandLineTool output must provide either a `location` URI or a `path` property in the context of the tool execution runtime (local to the compute node, or within the executing container).

An ExpressionTool may forward file references from input to output by using the same value for `location`.

Name conflicts (the same `basename` appearing multiple times in `listing` or in any entry in `secondaryFiles` in the listing) are a fatal error.
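As an illustrative sketch (not part of the normative text above), a Directory literal combining these rules might look like:

```yaml
class: Directory
basename: results
listing:
  - class: File
    basename: report.txt
    contents: Hello world
```

Because no `location` is given, this is a Directory literal: the runtime must create `results/report.txt` on disk before tool execution.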
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, listing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Directory" self.location = location self.path = path self.basename = basename self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, Directory): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.listing == other.listing ) return False def __hash__(self) -> int: return hash( (self.class_, self.location, self.path, self.basename, self.listing) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Directory": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Directory_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) listing = None if "listing" in _doc: try: listing = load_field( _doc.get("listing"), union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid 
because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if self.path is not None: u = save_relative_uri(self.path, base_url, False, None, relative_uris) r["path"] = u if self.basename is not None: r["basename"] = save( self.basename, top=False, base_url=base_url, relative_uris=relative_uris ) if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["class", "location", "path", "basename", "listing"]) class SchemaBase(Saveable): pass class Parameter(SchemaBase): """ Define an input or output parameter to a process. """ pass class InputBinding(Saveable): pass class OutputBinding(Saveable): pass class InputSchema(SchemaBase): pass class OutputSchema(SchemaBase): pass class InputRecordField(CWLRecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, inputBinding: Optional[Any] = None, label: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.inputBinding = inputBinding self.label = label def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.inputBinding == other.inputBinding and self.label == other.label ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_, self.inputBinding, self.label)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( 
str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise 
ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `inputBinding`, `label`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, 
"*") _constructed = cls( doc=doc, name=name, type_=type_, inputBinding=inputBinding, label=label, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type", "inputBinding", "label"]) class InputRecordSchema(CWLRecordSchema, InputSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = 
type_ self.label = label self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( 
_doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid 
because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = 
True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "name"]) class InputEnumSchema(EnumSchema, InputSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, InputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( (self.name, self.symbols, 
self.type_, self.label, self.inputBinding) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None 
) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: 
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) 
extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", "label", 
"inputBinding"]) class InputArraySchema(CWLArraySchema, InputSchema): def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, InputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash((self.items, self.type_, self.label, self.inputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field 
is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" 
in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=base_url, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type", "label", "inputBinding"]) class OutputRecordField(CWLRecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, outputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if 
loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.outputBinding = outputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.outputBinding == other.outputBinding ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_, self.outputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: 
_errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, 
"type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, 
expected one of: `doc`, `name`, `type`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type", "outputBinding"]) class OutputRecordSchema(CWLRecordSchema, OutputSchema): def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ 
self.label = label def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required 
field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str),
                        [ValidationException(
                            f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                            detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                else:
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str), [e],
                        detailed_message=f"the `type` field with value `{val}` is not valid because:"))
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"), union_of_None_type_or_strtype,
                    baseuri, loadingOptions, lc=_doc.get("label"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str), [e],
                            detailed_message=f"the `label` field with value `{val}` is not valid because:"))
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(ValidationException(
                        "invalid field `{}`, expected one of: `fields`, `type`, `label`".format(k),
                        SourceLine(_doc, k, str)))
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            fields=fields,
            type_=type_,
            label=label,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.fields is not None:
            r["fields"] = save(self.fields, top=False, base_url=base_url, relative_uris=relative_uris)
        if self.type_ is not None:
            r["type"] = save(self.type_, top=False, base_url=base_url, relative_uris=relative_uris)
        if self.label is not None:
            r["label"] = save(self.label, top=False, base_url=base_url, relative_uris=relative_uris)
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["fields", "type", "label"])


class OutputEnumSchema(EnumSchema, OutputSchema):
    name: str

    def __init__(
        self,
        symbols: Any,
        type_: Any,
        name: Optional[Any] = None,
        label: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())
        self.symbols = symbols
        self.type_ = type_
        self.label = label
        self.outputBinding = outputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, OutputEnumSchema):
            return bool(
                self.name == other.name
                and self.symbols == other.symbols
                and self.type_ == other.type_
                and self.label == other.label
                and self.outputBinding == other.outputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash((self.name, self.symbols, self.type_, self.label, self.outputBinding))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "OutputEnumSchema":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri, loadingOptions, lc=_doc.get("name"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `name`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `name` field is not valid because:",
                            SourceLine(_doc, "name", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `name` field is not valid because:",
                            SourceLine(_doc, "name", str), [e],
                            detailed_message=f"the `name` field with value `{val}` is not valid because:"))
        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                name = "_:" + str(_uuid__.uuid4())
        if not __original_name_is_none:
            baseuri = cast(str, name)
        try:
            if _doc.get("symbols") is None:
                raise ValidationException("missing required field `symbols`", None, [])
            symbols = load_field(
                _doc.get("symbols"), uri_array_of_strtype_True_False_None_None,
                baseuri, loadingOptions, lc=_doc.get("symbols"))
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `symbols`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("symbols")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(ValidationException(
                        "the `symbols` field is not valid because:",
                        SourceLine(_doc, "symbols", str),
                        [ValidationException(
                            f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                            detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                else:
                    _errors__.append(ValidationException(
                        "the `symbols` field is not valid because:",
                        SourceLine(_doc, "symbols", str), [e],
                        detailed_message=f"the `symbols` field with value `{val}` is not valid because:"))
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])
            type_ = load_field(
                _doc.get("type"), typedsl_Enum_nameLoader_2,
                baseuri, loadingOptions, lc=_doc.get("type"))
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str),
                        [ValidationException(
                            f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                            detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                else:
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str), [e],
                        detailed_message=f"the `type` field with value `{val}` is not valid because:"))
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"), union_of_None_type_or_strtype,
                    baseuri, loadingOptions, lc=_doc.get("label"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str), [e],
                            detailed_message=f"the `label` field with value `{val}` is not valid because:"))
        outputBinding = None
        if "outputBinding" in _doc:
            try:
                outputBinding = load_field(
                    _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader,
                    baseuri, loadingOptions, lc=_doc.get("outputBinding"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `outputBinding`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("outputBinding")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `outputBinding` field is not valid because:",
                            SourceLine(_doc, "outputBinding", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `outputBinding` field is not valid because:",
                            SourceLine(_doc, "outputBinding", str), [e],
                            detailed_message=f"the `outputBinding` field with value `{val}` is not valid because:"))
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(ValidationException(
                        "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `outputBinding`".format(k),
                        SourceLine(_doc, k, str)))
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            name=name,
            symbols=symbols,
            type_=type_,
            label=label,
            outputBinding=outputBinding,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.symbols is not None:
            u = save_relative_uri(self.symbols, self.name, True, None, relative_uris)
            r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(self.type_, top=False, base_url=self.name, relative_uris=relative_uris)
        if self.label is not None:
            r["label"] = save(self.label, top=False, base_url=self.name, relative_uris=relative_uris)
        if self.outputBinding is not None:
            r["outputBinding"] = save(self.outputBinding, top=False, base_url=self.name, relative_uris=relative_uris)
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "outputBinding"])


class OutputArraySchema(CWLArraySchema, OutputSchema):
    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.outputBinding = outputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, OutputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.outputBinding == other.outputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash((self.items, self.type_, self.label, self.outputBinding))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "OutputArraySchema":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("items") is None:
                raise ValidationException("missing required field `items`", None, [])
            items = load_field(
                _doc.get("items"),
                uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None,
                baseuri, loadingOptions, lc=_doc.get("items"))
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `items`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("items")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(ValidationException(
                        "the `items` field is not valid because:",
                        SourceLine(_doc, "items", str),
                        [ValidationException(
                            f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                            detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                else:
                    _errors__.append(ValidationException(
                        "the `items` field is not valid because:",
                        SourceLine(_doc, "items", str), [e],
                        detailed_message=f"the `items` field with value `{val}` is not valid because:"))
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])
            type_ = load_field(
                _doc.get("type"), typedsl_Array_nameLoader_2,
                baseuri, loadingOptions, lc=_doc.get("type"))
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str),
                        [ValidationException(
                            f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                            detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                else:
                    _errors__.append(ValidationException(
                        "the `type` field is not valid because:",
                        SourceLine(_doc, "type", str), [e],
                        detailed_message=f"the `type` field with value `{val}` is not valid because:"))
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"), union_of_None_type_or_strtype,
                    baseuri, loadingOptions, lc=_doc.get("label"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str), [e],
                            detailed_message=f"the `label` field with value `{val}` is not valid because:"))
        outputBinding = None
        if "outputBinding" in _doc:
            try:
                outputBinding = load_field(
                    _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader,
                    baseuri, loadingOptions, lc=_doc.get("outputBinding"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `outputBinding`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("outputBinding")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `outputBinding` field is not valid because:",
                            SourceLine(_doc, "outputBinding", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `outputBinding` field is not valid because:",
                            SourceLine(_doc, "outputBinding", str), [e],
                            detailed_message=f"the `outputBinding` field with value `{val}` is not valid because:"))
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(ValidationException(
                        "invalid field `{}`, expected one of: `items`, `type`, `label`, `outputBinding`".format(k),
                        SourceLine(_doc, k, str)))
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            items=items,
            type_=type_,
            label=label,
            outputBinding=outputBinding,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.items is not None:
            u = save_relative_uri(self.items, base_url, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(self.type_, top=False, base_url=base_url, relative_uris=relative_uris)
        if self.label is not None:
            r["label"] = save(self.label, top=False, base_url=base_url, relative_uris=relative_uris)
        if self.outputBinding is not None:
            r["outputBinding"] = save(self.outputBinding, top=False, base_url=base_url, relative_uris=relative_uris)
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "outputBinding"])


class InputParameter(Parameter):
    id: str

    def __init__(
        self,
        id: Any,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        doc: Optional[Any] = None,
        format: Optional[Any] = None,
        inputBinding: Optional[Any] = None,
        default: Optional[Any] = None,
        type_: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.label = label
        self.secondaryFiles = secondaryFiles
        self.streamable = streamable
        self.doc = doc
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.format = format
        self.inputBinding = inputBinding
        self.default = default
        self.type_ = type_

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, InputParameter):
            return bool(
                self.label == other.label
                and self.secondaryFiles == other.secondaryFiles
                and self.streamable == other.streamable
                and self.doc == other.doc
                and self.id == other.id
                and self.format == other.format
                and self.inputBinding == other.inputBinding
                and self.default == other.default
                and self.type_ == other.type_
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.label,
                self.secondaryFiles,
                self.streamable,
                self.doc,
                self.id,
                self.format,
                self.inputBinding,
                self.default,
                self.type_,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "InputParameter":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        id = None
        if "id" in _doc:
            try:
                id = load_field(
                    _doc.get("id"), uri_strtype_True_False_None_None,
                    baseuri, loadingOptions, lc=_doc.get("id"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `id`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("id")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `id` field is not valid because:",
                            SourceLine(_doc, "id", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `id` field is not valid because:",
                            SourceLine(_doc, "id", str), [e],
                            detailed_message=f"the `id` field with value `{val}` is not valid because:"))
        __original_id_is_none = id is None
        if id is None:
            if docRoot is not None:
                id = docRoot
            else:
                _errors__.append(ValidationException("missing id"))
        if not __original_id_is_none:
            baseuri = cast(str, id)
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"), union_of_None_type_or_strtype,
                    baseuri, loadingOptions, lc=_doc.get("label"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str), [e],
                            detailed_message=f"the `label` field with value `{val}` is not valid because:"))
        secondaryFiles = None
        if "secondaryFiles" in _doc:
            try:
                secondaryFiles = load_field(
                    _doc.get("secondaryFiles"),
                    union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader,
                    baseuri, loadingOptions, lc=_doc.get("secondaryFiles"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `secondaryFiles`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("secondaryFiles")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `secondaryFiles` field is not valid because:",
                            SourceLine(_doc, "secondaryFiles", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `secondaryFiles` field is not valid because:",
                            SourceLine(_doc, "secondaryFiles", str), [e],
                            detailed_message=f"the `secondaryFiles` field with value `{val}` is not valid because:"))
        streamable = None
        if "streamable" in _doc:
            try:
                streamable = load_field(
                    _doc.get("streamable"), union_of_None_type_or_booltype,
                    baseuri, loadingOptions, lc=_doc.get("streamable"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `streamable`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("streamable")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `streamable` field is not valid because:",
                            SourceLine(_doc, "streamable", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `streamable` field is not valid because:",
                            SourceLine(_doc, "streamable", str), [e],
                            detailed_message=f"the `streamable` field with value `{val}` is not valid because:"))
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri, loadingOptions, lc=_doc.get("doc"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `doc` field is not valid because:",
                            SourceLine(_doc, "doc", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `doc` field is not valid because:",
                            SourceLine(_doc, "doc", str), [e],
                            detailed_message=f"the `doc` field with value `{val}` is not valid because:"))
        format = None
        if "format" in _doc:
            try:
                format = load_field(
                    _doc.get("format"),
                    uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True,
                    baseuri, loadingOptions, lc=_doc.get("format"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `format`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("format")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `format` field is not valid because:",
                            SourceLine(_doc, "format", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `format` field is not valid because:",
                            SourceLine(_doc, "format", str), [e],
                            detailed_message=f"the `format` field with value `{val}` is not valid because:"))
        inputBinding = None
        if "inputBinding" in _doc:
            try:
                inputBinding = load_field(
                    _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader,
                    baseuri, loadingOptions, lc=_doc.get("inputBinding"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `inputBinding`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("inputBinding")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `inputBinding` field is not valid because:",
                            SourceLine(_doc, "inputBinding", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `inputBinding` field is not valid because:",
                            SourceLine(_doc, "inputBinding", str), [e],
                            detailed_message=f"the `inputBinding` field with value `{val}` is not valid because:"))
        default = None
        if "default" in _doc:
            try:
                default = load_field(
                    _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader,
                    baseuri, loadingOptions, lc=_doc.get("default"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `default`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("default")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `default` field is not valid because:",
                            SourceLine(_doc, "default", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `default` field is not valid because:",
                            SourceLine(_doc, "default", str), [e],
                            detailed_message=f"the `default` field with value `{val}` is not valid because:"))
        type_ = None
        if "type" in _doc:
            try:
                type_ = load_field(
                    _doc.get("type"),
                    typedsl_union_of_None_type_or_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2,
                    baseuri, loadingOptions, lc=_doc.get("type"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `type`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("type")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str), [e],
                            detailed_message=f"the `type` field with value `{val}` is not valid because:"))
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(ValidationException(
                        "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `inputBinding`, `default`, `type`".format(k),
                        SourceLine(_doc, k, str)))
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            label=label,
            secondaryFiles=secondaryFiles,
            streamable=streamable,
            doc=doc,
            id=id,
            format=format,
            inputBinding=inputBinding,
            default=default,
            type_=type_,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.id is not None:
            u = save_relative_uri(self.id, base_url, True, None, relative_uris)
            r["id"] = u
        if self.label is not None:
            r["label"] = save(self.label, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.streamable is not None:
            r["streamable"] = save(self.streamable, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.doc is not None:
            r["doc"] = save(self.doc, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.format is not None:
            u = save_relative_uri(self.format, self.id, True, None, relative_uris)
            r["format"] = u
        if self.inputBinding is not None:
            r["inputBinding"] = save(self.inputBinding, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.default is not None:
            r["default"] = save(self.default, top=False, base_url=self.id, relative_uris=relative_uris)
        if self.type_ is not None:
            r["type"] = save(self.type_, top=False, base_url=self.id, relative_uris=relative_uris)
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "label",
            "secondaryFiles",
            "streamable",
            "doc",
            "id",
            "format",
            "inputBinding",
            "default",
            "type",
        ]
    )


class OutputParameter(Parameter):
    id: str

    def __init__(
        self,
        id: Any,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        doc: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        format: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.label = label
        self.secondaryFiles = secondaryFiles
        self.streamable = streamable
        self.doc = doc
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.outputBinding = outputBinding
        self.format = format

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, OutputParameter):
            return bool(
                self.label == other.label
                and self.secondaryFiles == other.secondaryFiles
                and self.streamable == other.streamable
                and self.doc == other.doc
                and self.id == other.id
                and self.outputBinding == other.outputBinding
                and self.format == other.format
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.label,
                self.secondaryFiles,
                self.streamable,
                self.doc,
                self.id,
                self.outputBinding,
                self.format,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "OutputParameter":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        id = None
        if "id" in _doc:
            try:
                id = load_field(
                    _doc.get("id"), uri_strtype_True_False_None_None,
                    baseuri, loadingOptions, lc=_doc.get("id"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `id`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("id")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `id` field is not valid because:",
                            SourceLine(_doc, "id", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `id` field is not valid because:",
                            SourceLine(_doc, "id", str), [e],
                            detailed_message=f"the `id` field with value `{val}` is not valid because:"))
        __original_id_is_none = id is None
        if id is None:
            if docRoot is not None:
                id = docRoot
            else:
                _errors__.append(ValidationException("missing id"))
        if not __original_id_is_none:
            baseuri = cast(str, id)
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"), union_of_None_type_or_strtype,
                    baseuri, loadingOptions, lc=_doc.get("label"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `label` field is not valid because:",
                            SourceLine(_doc, "label", str), [e],
                            detailed_message=f"the `label` field with value `{val}` is not valid because:"))
        secondaryFiles = None
        if "secondaryFiles" in _doc:
            try:
                secondaryFiles = load_field(
                    _doc.get("secondaryFiles"),
                    union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader,
                    baseuri, loadingOptions, lc=_doc.get("secondaryFiles"))
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `secondaryFiles`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("secondaryFiles")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(ValidationException(
                            "the `secondaryFiles` field is not valid because:",
                            SourceLine(_doc, "secondaryFiles", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")]))
                    else:
                        _errors__.append(ValidationException(
                            "the `secondaryFiles` field is not valid because:",
                            SourceLine(_doc, "secondaryFiles", str), [e],
                            detailed_message=f"the `secondaryFiles` field with value
`{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: 
_errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `outputBinding`, `format`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, outputBinding=outputBinding, format=format, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, 
relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "outputBinding", "format", ] ) class ProcessRequirement(Saveable): """ A process requirement declares a prerequisite that may or must be fulfilled before executing a process. See [`Process.hints`](#process) and [`Process.requirements`](#process). Process requirements are the primary mechanism for specifying extensions to the CWL core specification. """ pass class Process(Saveable): """ The base executable type in CWL is the `Process` object defined by the document. Note that the `Process` object is abstract and cannot be directly executed. """ pass class InlineJavascriptRequirement(ProcessRequirement): """ Indicates that the workflow platform must support inline Javascript expressions. If this requirement is not present, the workflow platform must not perform expression interpolation. 
""" def __init__( self, expressionLib: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InlineJavascriptRequirement" self.expressionLib = expressionLib def __eq__(self, other: Any) -> bool: if isinstance(other, InlineJavascriptRequirement): return bool( self.class_ == other.class_ and self.expressionLib == other.expressionLib ) return False def __hash__(self) -> int: return hash((self.class_, self.expressionLib)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InlineJavascriptRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InlineJavascriptRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e expressionLib = None if "expressionLib" in _doc: try: expressionLib = load_field( _doc.get("expressionLib"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("expressionLib") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expressionLib`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expressionLib") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [e], detailed_message=f"the `expressionLib` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `expressionLib`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( expressionLib=expressionLib, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.expressionLib is not None: r["expressionLib"] = save( self.expressionLib, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if 
top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "expressionLib"]) class SchemaDefRequirement(ProcessRequirement): """ This field consists of an array of type definitions which must be used when interpreting the `inputs` and `outputs` fields. When a `type` field contains an IRI, the implementation must check if the type is defined in `schemaDefs` and use that definition. If the type is not found in `schemaDefs`, it is an error. The entries in `schemaDefs` must be processed in the order listed such that later schema definitions may refer to earlier schema definitions. """ def __init__( self, types: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SchemaDefRequirement" self.types = types def __eq__(self, other: Any) -> bool: if isinstance(other, SchemaDefRequirement): return bool(self.class_ == other.class_ and self.types == other.types) return False def __hash__(self) -> int: return hash((self.class_, self.types)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SchemaDefRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SchemaDefRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried 
`{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("types") is None: raise ValidationException("missing required field `types`", None, []) types = load_field( _doc.get("types"), array_of_union_of_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader, baseuri, loadingOptions, lc=_doc.get("types") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `types`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("types") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `types` field is not valid because:", SourceLine(_doc, "types", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `types` field is not valid because:", SourceLine(_doc, "types", str), [e], detailed_message=f"the `types` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `types`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( types=types, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.types is not None: r["types"] = save( self.types, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "types"]) class EnvironmentDef(Saveable): """ Define an environment variable that will be set in the runtime environment by the workflow platform when executing the command line tool. May be the result of executing an expression, such as getting a parameter from input. 
""" def __init__( self, envName: Any, envValue: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.envName = envName self.envValue = envValue def __eq__(self, other: Any) -> bool: if isinstance(other, EnvironmentDef): return bool( self.envName == other.envName and self.envValue == other.envValue ) return False def __hash__(self) -> int: return hash((self.envName, self.envValue)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvironmentDef": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("envName") is None: raise ValidationException("missing required field `envName`", None, []) envName = load_field( _doc.get("envName"), strtype, baseuri, loadingOptions, lc=_doc.get("envName") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envName`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envName") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), [e], detailed_message=f"the `envName` field with value 
`{val}` " "is not valid because:", ) ) try: if _doc.get("envValue") is None: raise ValidationException("missing required field `envValue`", None, []) envValue = load_field( _doc.get("envValue"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("envValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [e], detailed_message=f"the `envValue` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `envName`, `envValue`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envName=envName, envValue=envValue, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in 
self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.envName is not None: r["envName"] = save( self.envName, top=False, base_url=base_url, relative_uris=relative_uris ) if self.envValue is not None: r["envValue"] = save( self.envValue, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["envName", "envValue"]) class CommandLineBinding(InputBinding): """ When listed under `inputBinding` in the input schema, the term "value" refers to the corresponding value in the input object. For binding objects listed in `CommandLineTool.arguments`, the term "value" refers to the effective value after evaluating `valueFrom`. The binding behavior when building the command line depends on the data type of the value. If there is a mismatch between the type described by the input schema and the effective value, such as resulting from an expression evaluation, an implementation must use the data type of the effective value. - **string**: Add `prefix` and the string to the command line. - **number**: Add `prefix` and the decimal representation to the command line. - **boolean**: If true, add `prefix` to the command line. If false, add nothing. - **File**: Add `prefix` and the value of [`File.path`](#File) to the command line. - **Directory**: Add `prefix` and the value of [`Directory.path`](#Directory) to the command line. - **array**: If `itemSeparator` is specified, add `prefix` and then join the array into a single string with `itemSeparator` separating the items. Otherwise, first add `prefix`, then recursively process individual elements. If the array is empty, it does not add anything to the command line. 
- **object**: Add `prefix` only, and recursively add object fields for which `inputBinding` is specified. - **null**: Add nothing. """ def __init__( self, loadContents: Optional[Any] = None, position: Optional[Any] = None, prefix: Optional[Any] = None, separate: Optional[Any] = None, itemSeparator: Optional[Any] = None, valueFrom: Optional[Any] = None, shellQuote: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents self.position = position self.prefix = prefix self.separate = separate self.itemSeparator = itemSeparator self.valueFrom = valueFrom self.shellQuote = shellQuote def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineBinding): return bool( self.loadContents == other.loadContents and self.position == other.position and self.prefix == other.prefix and self.separate == other.separate and self.itemSeparator == other.itemSeparator and self.valueFrom == other.valueFrom and self.shellQuote == other.shellQuote ) return False def __hash__(self) -> int: return hash( ( self.loadContents, self.position, self.prefix, self.separate, self.itemSeparator, self.valueFrom, self.shellQuote, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if 
str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) position = None if "position" in _doc: try: position = load_field( _doc.get("position"), union_of_None_type_or_inttype, baseuri, loadingOptions, lc=_doc.get("position") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `position`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("position") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [e], detailed_message=f"the `position` field with value `{val}` " "is not valid because:", ) ) prefix = None if "prefix" in _doc: try: prefix = load_field( 
_doc.get("prefix"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("prefix") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `prefix`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("prefix") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [e], detailed_message=f"the `prefix` field with value `{val}` " "is not valid because:", ) ) separate = None if "separate" in _doc: try: separate = load_field( _doc.get("separate"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("separate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `separate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("separate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [e], 
detailed_message=f"the `separate` field with value `{val}` " "is not valid because:", ) ) itemSeparator = None if "itemSeparator" in _doc: try: itemSeparator = load_field( _doc.get("itemSeparator"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("itemSeparator") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `itemSeparator`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("itemSeparator") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [e], detailed_message=f"the `itemSeparator` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) shellQuote = None if "shellQuote" in _doc: try: shellQuote = load_field( _doc.get("shellQuote"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("shellQuote") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shellQuote`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shellQuote") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [e], detailed_message=f"the `shellQuote` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`, `position`, `prefix`, `separate`, `itemSeparator`, `valueFrom`, `shellQuote`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, position=position, prefix=prefix, separate=separate, itemSeparator=itemSeparator, valueFrom=valueFrom, shellQuote=shellQuote, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.position is not None: r["position"] = save( self.position, top=False, base_url=base_url, relative_uris=relative_uris ) if self.prefix is not None: r["prefix"] = save( self.prefix, top=False, base_url=base_url, relative_uris=relative_uris ) if self.separate is not None: r["separate"] = save( self.separate, top=False, base_url=base_url, relative_uris=relative_uris ) if self.itemSeparator is not None: r["itemSeparator"] = save( self.itemSeparator, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.shellQuote is not None: r["shellQuote"] = save( self.shellQuote, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "loadContents", "position", "prefix", "separate", "itemSeparator", "valueFrom", "shellQuote", ] ) class CommandOutputBinding(OutputBinding): """ Describes how to generate an output parameter based on the files 
    produced by a CommandLineTool.

    The output parameter value is generated by applying these operations in the
    following order:

      - glob
      - loadContents
      - outputEval
      - secondaryFiles
    """

    def __init__(
        self,
        glob: Optional[Any] = None,
        loadContents: Optional[Any] = None,
        outputEval: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.glob = glob
        self.loadContents = loadContents
        self.outputEval = outputEval

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputBinding):
            return bool(
                self.glob == other.glob
                and self.loadContents == other.loadContents
                and self.outputEval == other.outputEval
            )
        return False

    def __hash__(self) -> int:
        return hash((self.glob, self.loadContents, self.outputEval))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandOutputBinding":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        glob = None
        if "glob" in _doc:
            try:
                glob = load_field(
                    _doc.get("glob"),
                    union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("glob"),
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `glob`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("glob")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `glob` field is not valid because:",
                                SourceLine(_doc, "glob", str),
                                [
                                    ValidationException(
                                        f"Value is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                        detailed_message=f"Value `{val}` is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                    )
                                ],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `glob` field is not valid because:",
                                SourceLine(_doc, "glob", str),
                                [e],
                                detailed_message=f"the `glob` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        loadContents = None
        if "loadContents" in _doc:
            try:
                loadContents = load_field(
                    _doc.get("loadContents"),
                    union_of_None_type_or_booltype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("loadContents"),
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `loadContents`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("loadContents")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `loadContents` field is not valid because:",
                                SourceLine(_doc, "loadContents", str),
                                [
                                    ValidationException(
                                        f"Value is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                        detailed_message=f"Value `{val}` is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                    )
                                ],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `loadContents` field is not valid because:",
                                SourceLine(_doc, "loadContents", str),
                                [e],
                                detailed_message=f"the `loadContents` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        outputEval = None
        if "outputEval" in _doc:
            try:
                outputEval = load_field(
                    _doc.get("outputEval"),
                    union_of_None_type_or_strtype_or_ExpressionLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("outputEval"),
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `outputEval`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("outputEval")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `outputEval` field is not valid because:",
                                SourceLine(_doc, "outputEval", str),
                                [
                                    ValidationException(
                                        f"Value is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                        detailed_message=f"Value `{val}` is a {val_type}, "
                                        f"but valid {to_print} for this field "
                                        f"{verb_tensage} {error_message}",
                                    )
                                ],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `outputEval` field is not valid because:",
                                SourceLine(_doc, "outputEval", str),
                                [e],
                                detailed_message=f"the `outputEval` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `glob`, `loadContents`, `outputEval`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            glob=glob,
            loadContents=loadContents,
            outputEval=outputEval,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.glob is not None:
            r["glob"] = save(
                self.glob, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.loadContents is not None:
            r["loadContents"] = save(
                self.loadContents,
                top=False,
                base_url=base_url,
                relative_uris=relative_uris,
            )
        if self.outputEval is not None:
            r["outputEval"] = save(
                self.outputEval,
                top=False,
                base_url=base_url,
relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["glob", "loadContents", "outputEval"]) class CommandInputRecordField(InputRecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, inputBinding: Optional[Any] = None, label: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.inputBinding = inputBinding self.label = label def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.inputBinding == other.inputBinding and self.label == other.label ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_, self.inputBinding, self.label)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", 
None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `inputBinding`, `label`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed 
= cls( doc=doc, name=name, type_=type_, inputBinding=inputBinding, label=label, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type", "inputBinding", "label"]) class CommandInputRecordSchema(InputRecordSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = 
label self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), 
idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", 
SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> 
dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "name"]) class CommandInputEnumSchema(InputEnumSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( (self.name, self.symbols, 
self.type_, self.label, self.inputBinding) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( 
str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException 
as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid 
because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", 
"label", "inputBinding"]) class CommandInputArraySchema(InputArraySchema): def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash((self.items, self.type_, self.label, self.inputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required 
field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not 
k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=base_url, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type", "label", "inputBinding"]) class CommandOutputRecordField(OutputRecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, outputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: 
self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.outputBinding = outputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.outputBinding == other.outputBinding ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_, self.outputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name 
is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, 
scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type", "outputBinding"]) class CommandOutputRecordSchema(OutputRecordSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: 
self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: 
name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, name=name, extension_fields=extension_fields, 
loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "name"]) class CommandOutputEnumSchema(OutputEnumSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, outputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.outputBinding = outputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputEnumSchema): return bool( self.name == other.name and 
self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.outputBinding == other.outputBinding ) return False def __hash__(self) -> int: return hash( (self.name, self.symbols, self.type_, self.label, self.outputBinding) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, 
loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value 
`{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top 
refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "outputBinding"])


class CommandOutputArraySchema(OutputArraySchema):
    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.outputBinding = outputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.outputBinding == other.outputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash((self.items, self.type_, self.label, self.outputBinding))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None
    ) -> "CommandOutputArraySchema":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("items") is None:
                raise ValidationException("missing required field `items`", None, [])

            items = load_field(
                _doc.get("items"),
                uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("items")
            )
        except ValidationException as e:
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = 
load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not 
valid because:",
                            SourceLine(_doc, "outputBinding", str),
                            [e],
                            detailed_message=f"the `outputBinding` field with value `{val}` "
                            "is not valid because:",
                        )
                    )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `items`, `type`, `label`, `outputBinding`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            items=items,
            type_=type_,
            label=label,
            outputBinding=outputBinding,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.items is not None:
            u = save_relative_uri(self.items, base_url, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.outputBinding is not None:
            r["outputBinding"] = save(
                self.outputBinding,
                top=False,
                base_url=base_url,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "outputBinding"])


class CommandInputParameter(InputParameter):
    """
    An
input parameter for a CommandLineTool. """ id: str def __init__( self, id: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, format: Optional[Any] = None, inputBinding: Optional[Any] = None, default: Optional[Any] = None, type_: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.inputBinding = inputBinding self.default = default self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.inputBinding == other.inputBinding and self.default == other.default and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.inputBinding, self.default, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value 
`{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) type_ = None if "type" in _doc: try: type_ = load_field( _doc.get("type"), typedsl_union_of_None_type_or_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], 
detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `inputBinding`, `default`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, inputBinding=inputBinding, default=default, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, 
self.id, True, None, relative_uris) r["format"] = u if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "inputBinding", "default", "type", ] ) class CommandOutputParameter(OutputParameter): """ An output parameter for a CommandLineTool. """ id: str def __init__( self, id: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, outputBinding: Optional[Any] = None, format: Optional[Any] = None, type_: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.outputBinding = outputBinding self.format = format self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.outputBinding == other.outputBinding and 
self.format == other.format and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.outputBinding, self.format, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if 
"streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid 
because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this 
field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) type_ = None if "type" in _doc: try: type_ = load_field( _doc.get("type"), typedsl_union_of_None_type_or_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) 
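The field loaders above all share one shape: each `load_field` call is wrapped in `try/except ValidationException`, and failures are appended to `_errors__` rather than raised immediately, so every invalid field in a document is reported in a single pass. A condensed, self-contained sketch of that accumulate-then-raise pattern (the `ValidationException` and `load_doc` below are simplified stand-ins for illustration, not the generated classes):

```python
# Illustrative only: a minimal model of the "accumulate errors, then raise
# once" pattern used by the generated fromDoc() methods. This
# ValidationException is a stand-in, not schema_salad's real class.

class ValidationException(Exception):
    def __init__(self, msg, children=None):
        super().__init__(msg)
        self.msg = msg
        self.children = children or []

    def pretty(self, indent=0):
        # Render the nested error tree, one cause per line.
        lines = [" " * indent + self.msg]
        for child in self.children:
            lines.extend(child.pretty(indent + 2))
        return lines


def load_doc(doc):
    """Validate every field, collecting errors instead of failing fast."""
    errors = []
    if not isinstance(doc.get("inputs"), list):
        errors.append(ValidationException(
            "the `inputs` field is not valid because:",
            [ValidationException("Value is not an array")]))
    if not isinstance(doc.get("outputs"), list):
        errors.append(ValidationException(
            "the `outputs` field is not valid because:",
            [ValidationException("Value is not an array")]))
    if errors:
        # Analogue of: if _errors__: raise ValidationException("", None, _errors__, "*")
        raise ValidationException("document is not valid:", errors)
    return doc
```

This is why a malformed CWL document produces one nested error report listing every bad field, instead of stopping at the first failure.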
extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `outputBinding`, `format`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, outputBinding=outputBinding, format=format, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if 
self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "outputBinding", "format", "type", ] ) class CommandLineTool(Process): """ This defines the schema of the CWL Command Line Tool Description document. """ id: str def __init__( self, inputs: Any, outputs: Any, id: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, cwlVersion: Optional[Any] = None, baseCommand: Optional[Any] = None, arguments: Optional[Any] = None, stdin: Optional[Any] = None, stderr: Optional[Any] = None, stdout: Optional[Any] = None, successCodes: Optional[Any] = None, temporaryFailCodes: Optional[Any] = None, permanentFailCodes: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.label = label self.doc = doc self.cwlVersion = cwlVersion self.class_ = "CommandLineTool" self.baseCommand = baseCommand self.arguments = arguments self.stdin = stdin self.stderr = stderr self.stdout = stdout self.successCodes = successCodes self.temporaryFailCodes = temporaryFailCodes self.permanentFailCodes = permanentFailCodes def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineTool): return bool( self.id == other.id and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.label == 
other.label and self.doc == other.doc and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.baseCommand == other.baseCommand and self.arguments == other.arguments and self.stdin == other.stdin and self.stderr == other.stderr and self.stdout == other.stdout and self.successCodes == other.successCodes and self.temporaryFailCodes == other.temporaryFailCodes and self.permanentFailCodes == other.permanentFailCodes ) return False def __hash__(self) -> int: return hash( ( self.id, self.inputs, self.outputs, self.requirements, self.hints, self.label, self.doc, self.cwlVersion, self.class_, self.baseCommand, self.arguments, self.stdin, self.stderr, self.stdout, self.successCodes, self.temporaryFailCodes, self.permanentFailCodes, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the 
`id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_CommandLineTool_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_CommandInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), 
idmap_outputs_array_of_CommandOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( 
"the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) baseCommand = None if "baseCommand" in _doc: try: baseCommand = load_field( _doc.get("baseCommand"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("baseCommand") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `baseCommand`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("baseCommand") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [e], detailed_message=f"the `baseCommand` field with value `{val}` " "is not valid because:", ) ) arguments = None if "arguments" in _doc: try: arguments = load_field( _doc.get("arguments"), union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("arguments") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `arguments`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("arguments") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [e], detailed_message=f"the `arguments` field with value `{val}` " "is not valid because:", ) ) stdin = None if "stdin" in 
_doc: try: stdin = load_field( _doc.get("stdin"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stdin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [e], detailed_message=f"the `stdin` field with value `{val}` " "is not valid because:", ) ) stderr = None if "stderr" in _doc: try: stderr = load_field( _doc.get("stderr"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stderr") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stderr`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stderr") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stderr` field is not valid because:", SourceLine(_doc, "stderr", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stderr` field is not valid because:", 
SourceLine(_doc, "stderr", str), [e], detailed_message=f"the `stderr` field with value `{val}` " "is not valid because:", ) ) stdout = None if "stdout" in _doc: try: stdout = load_field( _doc.get("stdout"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdout") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stdout`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdout") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [e], detailed_message=f"the `stdout` field with value `{val}` " "is not valid because:", ) ) successCodes = None if "successCodes" in _doc: try: successCodes = load_field( _doc.get("successCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("successCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `successCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("successCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [e], detailed_message=f"the `successCodes` field with value `{val}` " "is not valid because:", ) ) temporaryFailCodes = None if "temporaryFailCodes" in _doc: try: temporaryFailCodes = load_field( _doc.get("temporaryFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("temporaryFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `temporaryFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("temporaryFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [e], detailed_message=f"the `temporaryFailCodes` field with value `{val}` " "is not valid because:", ) ) permanentFailCodes = None if "permanentFailCodes" in _doc: try: permanentFailCodes = load_field( _doc.get("permanentFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("permanentFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `permanentFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val 
= _doc.get("permanentFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [e], detailed_message=f"the `permanentFailCodes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `inputs`, `outputs`, `requirements`, `hints`, `label`, `doc`, `cwlVersion`, `class`, `baseCommand`, `arguments`, `stdin`, `stderr`, `stdout`, `successCodes`, `temporaryFailCodes`, `permanentFailCodes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, label=label, doc=doc, cwlVersion=cwlVersion, baseCommand=baseCommand, arguments=arguments, stdin=stdin, stderr=stderr, stdout=stdout, successCodes=successCodes, temporaryFailCodes=temporaryFailCodes, permanentFailCodes=permanentFailCodes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", 
relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.baseCommand is not None: r["baseCommand"] = save( self.baseCommand, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.arguments is not None: r["arguments"] = save( self.arguments, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stdin is not None: r["stdin"] = save( self.stdin, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stderr is not None: r["stderr"] = save( self.stderr, top=False, base_url=self.id, relative_uris=relative_uris ) 
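The `save()` methods in this module emit only the fields that are actually set (checked with `is not None`), and attach `$namespaces`/`$schemas` metadata only when `top` is true, i.e. when emitting the document root. A minimal, self-contained illustration of that serialization pattern (the `save_tool` function and its field list are hypothetical simplifications, not the generated `CommandLineTool.save()`):

```python
# Illustrative only: simplified model of the save() pattern -- serialize
# the set fields in schema order, skip None, and add document-level
# metadata only at the root.

def save_tool(tool, top=False, namespaces=None):
    r = {}
    for key in ("id", "class", "inputs", "outputs", "baseCommand", "stdout"):
        # `is not None`, not truthiness: an empty list is still emitted.
        if tool.get(key) is not None:
            r[key] = tool[key]
    if top and namespaces:
        # Analogue of r["$namespaces"] = self.loadingOptions.namespaces
        r["$namespaces"] = namespaces
    return r
```

Note the `is not None` test: falsy-but-set values such as empty `inputs`/`outputs` arrays survive a round trip, while unset optional fields are omitted from the output mapping entirely.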
        if self.stdout is not None:
            r["stdout"] = save(
                self.stdout, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.successCodes is not None:
            r["successCodes"] = save(
                self.successCodes,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.temporaryFailCodes is not None:
            r["temporaryFailCodes"] = save(
                self.temporaryFailCodes,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.permanentFailCodes is not None:
            r["permanentFailCodes"] = save(
                self.permanentFailCodes,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "id",
            "inputs",
            "outputs",
            "requirements",
            "hints",
            "label",
            "doc",
            "cwlVersion",
            "class",
            "baseCommand",
            "arguments",
            "stdin",
            "stderr",
            "stdout",
            "successCodes",
            "temporaryFailCodes",
            "permanentFailCodes",
        ]
    )


class DockerRequirement(ProcessRequirement):
    """
    Indicates that a workflow component should be run in a
    [Docker](http://docker.com) container, and specifies how to fetch or build
    the image.

    If a CommandLineTool lists `DockerRequirement` under `hints` (or
    `requirements`), it may (or must) be run in the specified Docker container.

    The platform must first acquire or install the correct Docker image as
    specified by `dockerPull`, `dockerImport`, `dockerLoad` or `dockerFile`.

    The platform must execute the tool in the container using `docker run` with
    the appropriate Docker image and tool command line.

    The workflow platform may provide input files and the designated output
    directory through the use of volume bind mounts.  The platform should
    rewrite file paths in the input object to correspond to the Docker bind
    mounted locations.  That is, the platform should rewrite values in the
    parameter context such as `runtime.outdir`, `runtime.tmpdir` and others to
    be valid paths within the container.

    When running a tool contained in Docker, the workflow platform must not
    assume anything about the contents of the Docker container, such as the
    presence or absence of specific software, except to assume that the
    generated command line represents a valid command within the runtime
    environment of the container.

    ## Interaction with other requirements

    If [EnvVarRequirement](#EnvVarRequirement) is specified alongside a
    DockerRequirement, the environment variables must be provided to Docker
    using `--env` or `--env-file` and interact with the container's preexisting
    environment as defined by Docker.
    """

    def __init__(
        self,
        dockerPull: Optional[Any] = None,
        dockerLoad: Optional[Any] = None,
        dockerFile: Optional[Any] = None,
        dockerImport: Optional[Any] = None,
        dockerImageId: Optional[Any] = None,
        dockerOutputDirectory: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.class_ = "DockerRequirement"
        self.dockerPull = dockerPull
        self.dockerLoad = dockerLoad
        self.dockerFile = dockerFile
        self.dockerImport = dockerImport
        self.dockerImageId = dockerImageId
        self.dockerOutputDirectory = dockerOutputDirectory

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, DockerRequirement):
            return bool(
                self.class_ == other.class_
                and self.dockerPull == other.dockerPull
                and self.dockerLoad == other.dockerLoad
                and self.dockerFile == other.dockerFile
                and self.dockerImport == other.dockerImport
                and self.dockerImageId == other.dockerImageId
                and self.dockerOutputDirectory == other.dockerOutputDirectory
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.class_,
                self.dockerPull,
                self.dockerLoad,
                self.dockerFile,
                self.dockerImport,
                self.dockerImageId,
                self.dockerOutputDirectory,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None
    ) -> "DockerRequirement":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("class") is None:
                raise ValidationException("missing required field `class`", None, [])

            class_ = load_field(
                _doc.get("class"),
                uri_DockerRequirement_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class")
            )

            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}` but")
        except ValidationException as e:
            raise e
        dockerPull = None
        if "dockerPull" in _doc:
            try:
                dockerPull = load_field(
                    _doc.get("dockerPull"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("dockerPull")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `dockerPull`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("dockerPull")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `dockerPull` field is not valid because:",
                                SourceLine(_doc, "dockerPull", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `dockerPull` field is not valid because:",
                                SourceLine(_doc, "dockerPull", str),
                                [e],
                                detailed_message=f"the `dockerPull` field with value `{val}` "
                                                 "is not valid because:",
                            )
                        )
        dockerLoad = None
        if "dockerLoad" in _doc:
            try:
                dockerLoad =
load_field( _doc.get("dockerLoad"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerLoad") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerLoad`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerLoad") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [e], detailed_message=f"the `dockerLoad` field with value `{val}` " "is not valid because:", ) ) dockerFile = None if "dockerFile" in _doc: try: dockerFile = load_field( _doc.get("dockerFile"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerFile") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerFile`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerFile") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerFile` field is not valid because:", SourceLine(_doc, "dockerFile", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerFile` field is not 
valid because:", SourceLine(_doc, "dockerFile", str), [e], detailed_message=f"the `dockerFile` field with value `{val}` " "is not valid because:", ) ) dockerImport = None if "dockerImport" in _doc: try: dockerImport = load_field( _doc.get("dockerImport"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImport") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImport`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImport") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImport` field is not valid because:", SourceLine(_doc, "dockerImport", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImport` field is not valid because:", SourceLine(_doc, "dockerImport", str), [e], detailed_message=f"the `dockerImport` field with value `{val}` " "is not valid because:", ) ) dockerImageId = None if "dockerImageId" in _doc: try: dockerImageId = load_field( _doc.get("dockerImageId"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImageId") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImageId`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImageId") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} 
for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [e], detailed_message=f"the `dockerImageId` field with value `{val}` " "is not valid because:", ) ) dockerOutputDirectory = None if "dockerOutputDirectory" in _doc: try: dockerOutputDirectory = load_field( _doc.get("dockerOutputDirectory"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerOutputDirectory") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerOutputDirectory`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerOutputDirectory") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [e], detailed_message=f"the `dockerOutputDirectory` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid 
field `{}`, expected one of: `class`, `dockerPull`, `dockerLoad`, `dockerFile`, `dockerImport`, `dockerImageId`, `dockerOutputDirectory`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( dockerPull=dockerPull, dockerLoad=dockerLoad, dockerFile=dockerFile, dockerImport=dockerImport, dockerImageId=dockerImageId, dockerOutputDirectory=dockerOutputDirectory, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.dockerPull is not None: r["dockerPull"] = save( self.dockerPull, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerLoad is not None: r["dockerLoad"] = save( self.dockerLoad, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerFile is not None: r["dockerFile"] = save( self.dockerFile, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImport is not None: r["dockerImport"] = save( self.dockerImport, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImageId is not None: r["dockerImageId"] = save( self.dockerImageId, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerOutputDirectory is not None: r["dockerOutputDirectory"] = save( self.dockerOutputDirectory, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory 
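The fromDoc method above follows an error-accumulation pattern: each field is validated independently, failures are collected in a list, and a single combined exception is raised at the end so that all problems are reported together. A self-contained sketch of the same idea; `SimpleValidationError` and `check_docker_fields` are illustrative stand-ins, not the cwl_utils API:

```python
# Sketch of fromDoc-style validation: collect per-field errors, then
# raise them all at once instead of stopping at the first failure.
class SimpleValidationError(Exception):
    def __init__(self, messages):
        super().__init__("; ".join(messages))
        self.messages = messages

def check_docker_fields(doc: dict) -> None:
    errors = []
    for field in ("dockerPull", "dockerLoad", "dockerFile",
                  "dockerImport", "dockerImageId", "dockerOutputDirectory"):
        value = doc.get(field)
        if value is not None and not isinstance(value, str):
            errors.append(f"the `{field}` field is not valid: "
                          f"expected a string, got {type(value).__name__}")
    if errors:
        raise SimpleValidationError(errors)
```

Reporting every invalid field in one exception is what lets the real parser show a complete diagnostic for a malformed document rather than one error per run.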
level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "dockerPull", "dockerLoad", "dockerFile", "dockerImport", "dockerImageId", "dockerOutputDirectory", ] ) class SoftwareRequirement(ProcessRequirement): """ A list of software packages that should be configured in the environment of the defined process. """ def __init__( self, packages: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SoftwareRequirement" self.packages = packages def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwareRequirement): return bool(self.class_ == other.class_ and self.packages == other.packages) return False def __hash__(self) -> int: return hash((self.class_, self.packages)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwareRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SoftwareRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("packages") is None: raise ValidationException("missing required field `packages`", None, []) packages = load_field( _doc.get("packages"), 
idmap_packages_array_of_SoftwarePackageLoader, baseuri, loadingOptions, lc=_doc.get("packages") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `packages`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("packages") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `packages` field is not valid because:", SourceLine(_doc, "packages", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `packages` field is not valid because:", SourceLine(_doc, "packages", str), [e], detailed_message=f"the `packages` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `packages`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( packages=packages, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = 
self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.packages is not None: r["packages"] = save( self.packages, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "packages"]) class SoftwarePackage(Saveable): def __init__( self, package: Any, version: Optional[Any] = None, specs: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.package = package self.version = version self.specs = specs def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwarePackage): return bool( self.package == other.package and self.version == other.version and self.specs == other.specs ) return False def __hash__(self) -> int: return hash((self.package, self.version, self.specs)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwarePackage": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("package") is None: raise ValidationException("missing required field `package`", None, []) package = load_field( _doc.get("package"), strtype, baseuri, loadingOptions, lc=_doc.get("package") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
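As an illustration of the shape SoftwareRequirement.fromDoc expects, here is a requirement fragment written as the Python dict it would receive. Field names (`class`, `packages`, `package`, `version`, `specs`) match the attrs sets above; the package name and spec URL are invented for the example:

```python
# Hypothetical SoftwareRequirement document fragment (values are examples).
software_req = {
    "class": "SoftwareRequirement",
    "packages": [
        {
            "package": "samtools",
            "version": ["1.19"],
            "specs": ["https://anaconda.org/bioconda/samtools"],
        }
    ],
}
```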
required field `package`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("package") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [e], detailed_message=f"the `package` field with value `{val}` " "is not valid because:", ) ) version = None if "version" in _doc: try: version = load_field( _doc.get("version"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("version") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `version`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("version") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [e], detailed_message=f"the `version` field with value `{val}` " "is not valid because:", ) ) specs = None if "specs" in _doc: try: specs = load_field( _doc.get("specs"), 
uri_union_of_None_type_or_array_of_strtype_False_False_None_True, baseuri, loadingOptions, lc=_doc.get("specs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `specs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("specs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), [e], detailed_message=f"the `specs` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `package`, `version`, `specs`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( package=package, version=version, specs=specs, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if 
self.package is not None: r["package"] = save( self.package, top=False, base_url=base_url, relative_uris=relative_uris ) if self.version is not None: r["version"] = save( self.version, top=False, base_url=base_url, relative_uris=relative_uris ) if self.specs is not None: u = save_relative_uri(self.specs, base_url, False, None, relative_uris) r["specs"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["package", "version", "specs"]) class Dirent(Saveable): """ Define a file or subdirectory that must be placed in the designated output directory prior to executing the command line tool. May be the result of executing an expression, such as building a configuration file from a template. """ def __init__( self, entry: Any, entryname: Optional[Any] = None, writable: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.entryname = entryname self.entry = entry self.writable = writable def __eq__(self, other: Any) -> bool: if isinstance(other, Dirent): return bool( self.entryname == other.entryname and self.entry == other.entry and self.writable == other.writable ) return False def __hash__(self) -> int: return hash((self.entryname, self.entry, self.writable)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Dirent": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] entryname = None if "entryname" in _doc: try: entryname = load_field( _doc.get("entryname"), 
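The Dirent docstring above mentions building a configuration file from a template expression. A sketch of such an entry, as the dict Dirent.fromDoc would parse; the filename and expression body are made up for the example:

```python
# Hypothetical Dirent: `entry` is an expression that renders a config
# file into the output directory before the tool runs.
dirent = {
    "entryname": "settings.conf",
    "entry": "$({'threads': inputs.threads})",
    "writable": False,
}
```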
union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entryname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entryname`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entryname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entryname` field is not valid because:", SourceLine(_doc, "entryname", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entryname` field is not valid because:", SourceLine(_doc, "entryname", str), [e], detailed_message=f"the `entryname` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("entry") is None: raise ValidationException("missing required field `entry`", None, []) entry = load_field( _doc.get("entry"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entry") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entry`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entry") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entry` field is not valid because:", SourceLine(_doc, "entry", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entry` field is not 
valid because:", SourceLine(_doc, "entry", str), [e], detailed_message=f"the `entry` field with value `{val}` " "is not valid because:", ) ) writable = None if "writable" in _doc: try: writable = load_field( _doc.get("writable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("writable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `writable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("writable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [e], detailed_message=f"the `writable` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `entryname`, `entry`, `writable`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( entryname=entryname, entry=entry, writable=writable, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, 
Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.entryname is not None: r["entryname"] = save( self.entryname, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.entry is not None: r["entry"] = save( self.entry, top=False, base_url=base_url, relative_uris=relative_uris ) if self.writable is not None: r["writable"] = save( self.writable, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["entryname", "entry", "writable"]) class InitialWorkDirRequirement(ProcessRequirement): """ Define a list of files and subdirectories that must be created by the workflow platform in the designated output directory prior to executing the command line tool. 
""" def __init__( self, listing: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InitialWorkDirRequirement" self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, InitialWorkDirRequirement): return bool(self.class_ == other.class_ and self.listing == other.listing) return False def __hash__(self) -> int: return hash((self.class_, self.listing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InitialWorkDirRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InitialWorkDirRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("listing") is None: raise ValidationException("missing required field `listing`", None, []) listing = load_field( _doc.get("listing"), union_of_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the 
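The `listing` loader above accepts a union of File, Directory, Dirent, plain strings, and expressions. An illustrative InitialWorkDirRequirement mixing a Dirent with an expression string; the contents are invented:

```python
# Hypothetical InitialWorkDirRequirement document fragment.
initial_workdir = {
    "class": "InitialWorkDirRequirement",
    "listing": [
        {"entryname": "data.txt", "entry": "hello", "writable": True},
        "$(inputs.extra_files)",
    ],
}
```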
directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "listing"]) class EnvVarRequirement(ProcessRequirement): """ Define a list of environment variables which will be set in the execution environment of the tool. See `EnvironmentDef` for details. """ def __init__( self, envDef: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "EnvVarRequirement" self.envDef = envDef def __eq__(self, other: Any) -> bool: if isinstance(other, EnvVarRequirement): return bool(self.class_ == other.class_ and self.envDef == other.envDef) return False def __hash__(self) -> int: return hash((self.class_, self.envDef)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvVarRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_EnvVarRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("envDef") is None: raise ValidationException("missing required field `envDef`", None, []) envDef = load_field( _doc.get("envDef"), idmap_envDef_array_of_EnvironmentDefLoader, baseuri, loadingOptions, lc=_doc.get("envDef") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envDef`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envDef") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), [e], detailed_message=f"the `envDef` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `envDef`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envDef=envDef, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" 
else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.envDef is not None: r["envDef"] = save( self.envDef, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "envDef"]) class ShellCommandRequirement(ProcessRequirement): """ Modify the behavior of CommandLineTool to generate a single string containing a shell command line. Each item in the argument list must be joined into a string separated by single spaces and quoted to prevent interpretation by the shell, unless `CommandLineBinding` for that argument contains `shellQuote: false`. If `shellQuote: false` is specified, the argument is joined into the command string without quoting, which allows the use of shell metacharacters such as `|` for pipes. 
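The joining rule described above can be sketched as follows (`join_shell_command` is a hypothetical helper for illustration, not part of this generated module; it uses the standard-library `shlex.quote` for quoting):

```python
import shlex

def join_shell_command(args):
    # args: list of (value, shell_quote) pairs; an argument is quoted
    # unless its CommandLineBinding sets shellQuote: false
    return " ".join(shlex.quote(value) if quote else value
                    for value, quote in args)
```

With `shellQuote: false` on the `|` argument, the pipe survives as a shell metacharacter; with quoting on (the default), values containing spaces are protected.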
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShellCommandRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ShellCommandRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShellCommandRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ShellCommandRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class ResourceRequirement(ProcessRequirement): """ Specify basic hardware resource requirements. "min" is the minimum amount of a resource that must be reserved to schedule a job. If "min" cannot be satisfied, the job should not be run. "max" is the maximum amount of a resource that the job shall be permitted to use. If a node has sufficient resources, multiple jobs may be scheduled on a single node provided each job's "max" resource requirements are met. If a job attempts to exceed its "max" resource allocation, an implementation may deny additional resources, which may result in job failure. If "min" is specified but "max" is not, then "max" == "min". If "max" is specified but "min" is not, then "min" == "max". It is an error if max < min. It is an error if the value of any of these fields is negative. If neither "min" nor "max" is specified for a resource, an implementation may provide a default. 
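The min/max defaulting and validation rules above can be sketched as follows (`resolve_min_max` is a hypothetical helper for illustration, not part of this generated module):

```python
def resolve_min_max(min_val, max_val, default=None):
    # If neither "min" nor "max" is given, an implementation may use a default.
    if min_val is None and max_val is None:
        return default, default
    if max_val is None:  # only "min" given: max == min
        max_val = min_val
    if min_val is None:  # only "max" given: min == max
        min_val = max_val
    if min_val < 0 or max_val < 0:
        raise ValueError("resource values must not be negative")
    if max_val < min_val:
        raise ValueError("max < min is an error")
    return min_val, max_val
```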
""" def __init__( self, coresMin: Optional[Any] = None, coresMax: Optional[Any] = None, ramMin: Optional[Any] = None, ramMax: Optional[Any] = None, tmpdirMin: Optional[Any] = None, tmpdirMax: Optional[Any] = None, outdirMin: Optional[Any] = None, outdirMax: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ResourceRequirement" self.coresMin = coresMin self.coresMax = coresMax self.ramMin = ramMin self.ramMax = ramMax self.tmpdirMin = tmpdirMin self.tmpdirMax = tmpdirMax self.outdirMin = outdirMin self.outdirMax = outdirMax def __eq__(self, other: Any) -> bool: if isinstance(other, ResourceRequirement): return bool( self.class_ == other.class_ and self.coresMin == other.coresMin and self.coresMax == other.coresMax and self.ramMin == other.ramMin and self.ramMax == other.ramMax and self.tmpdirMin == other.tmpdirMin and self.tmpdirMax == other.tmpdirMax and self.outdirMin == other.outdirMin and self.outdirMax == other.outdirMax ) return False def __hash__(self) -> int: return hash( ( self.class_, self.coresMin, self.coresMax, self.ramMin, self.ramMax, self.tmpdirMin, self.tmpdirMax, self.outdirMin, self.outdirMax, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ResourceRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ResourceRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, 
loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e coresMin = None if "coresMin" in _doc: try: coresMin = load_field( _doc.get("coresMin"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [e], detailed_message=f"the `coresMin` field with value `{val}` " "is not valid because:", ) ) coresMax = None if "coresMax" in _doc: try: coresMax = load_field( _doc.get("coresMax"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [e], detailed_message=f"the `coresMax` field with value `{val}` " "is not valid because:", ) ) ramMin = None if "ramMin" in _doc: try: ramMin = load_field( _doc.get("ramMin"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `ramMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [e], detailed_message=f"the `ramMin` field with value `{val}` " "is not valid because:", ) ) ramMax = None if "ramMax" in _doc: try: ramMax = load_field( _doc.get("ramMax"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `ramMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( 
"the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [e], detailed_message=f"the `ramMax` field with value `{val}` " "is not valid because:", ) ) tmpdirMin = None if "tmpdirMin" in _doc: try: tmpdirMin = load_field( _doc.get("tmpdirMin"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `tmpdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [e], detailed_message=f"the `tmpdirMin` field with value `{val}` " "is not valid because:", ) ) tmpdirMax = None if "tmpdirMax" in _doc: try: tmpdirMax = load_field( _doc.get("tmpdirMax"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `tmpdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [e], detailed_message=f"the `tmpdirMax` field with value `{val}` " "is not valid because:", ) ) outdirMin = None if "outdirMin" in _doc: try: outdirMin = load_field( _doc.get("outdirMin"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", SourceLine(_doc, "outdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", SourceLine(_doc, "outdirMin", str), [e], detailed_message=f"the `outdirMin` field with value `{val}` " "is not valid because:", ) ) outdirMax = None if "outdirMax" in _doc: try: outdirMax = load_field( 
_doc.get("outdirMax"), union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [e], detailed_message=f"the `outdirMax` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `coresMin`, `coresMax`, `ramMin`, `ramMax`, `tmpdirMin`, `tmpdirMax`, `outdirMin`, `outdirMax`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( coresMin=coresMin, coresMax=coresMax, ramMin=ramMin, ramMax=ramMax, tmpdirMin=tmpdirMin, tmpdirMax=tmpdirMax, outdirMin=outdirMin, outdirMax=outdirMax, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, 
Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.coresMin is not None: r["coresMin"] = save( self.coresMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.coresMax is not None: r["coresMax"] = save( self.coresMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMin is not None: r["ramMin"] = save( self.ramMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMax is not None: r["ramMax"] = save( self.ramMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.tmpdirMin is not None: r["tmpdirMin"] = save( self.tmpdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.tmpdirMax is not None: r["tmpdirMax"] = save( self.tmpdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMin is not None: r["outdirMin"] = save( self.outdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMax is not None: r["outdirMax"] = save( self.outdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "coresMin", "coresMax", "ramMin", "ramMax", "tmpdirMin", "tmpdirMax", "outdirMin", "outdirMax", ] ) class ExpressionToolOutputParameter(OutputParameter): id: str def __init__( self, id: Any, label: Optional[Any] = None, secondaryFiles: 
Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, outputBinding: Optional[Any] = None, format: Optional[Any] = None, type_: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.outputBinding = outputBinding self.format = format self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ExpressionToolOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.outputBinding == other.outputBinding and self.format == other.format and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.outputBinding, self.format, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionToolOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), 
union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: 
_errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} 
for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) type_ = None if "type" in _doc: try: type_ = load_field( _doc.get("type"), typedsl_union_of_None_type_or_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `outputBinding`, `format`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, outputBinding=outputBinding, format=format, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = 
self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "outputBinding", "format", "type", ] ) class ExpressionTool(Process): """ Execute an expression as a Workflow step. 
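A minimal illustrative sketch (the input/output names here are hypothetical, and this assumes `cwlVersion: v1.0` with `InlineJavascriptRequirement` enabled for the JavaScript expression):

```yaml
# Hypothetical ExpressionTool that doubles an integer input.
class: ExpressionTool
cwlVersion: v1.0
requirements:
  - class: InlineJavascriptRequirement
inputs:
  - id: n
    type: int
outputs:
  - id: doubled
    type: int
expression: "$({'doubled': inputs.n * 2})"
```
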
""" id: str def __init__( self, inputs: Any, outputs: Any, expression: Any, id: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, cwlVersion: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.label = label self.doc = doc self.cwlVersion = cwlVersion self.class_ = "ExpressionTool" self.expression = expression def __eq__(self, other: Any) -> bool: if isinstance(other, ExpressionTool): return bool( self.id == other.id and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.label == other.label and self.doc == other.doc and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.expression == other.expression ) return False def __hash__(self) -> int: return hash( ( self.id, self.inputs, self.outputs, self.requirements, self.hints, self.label, self.doc, self.cwlVersion, self.class_, self.expression, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if 
str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ExpressionTool_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_InputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", 
SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_ExpressionToolOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), 
idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == 
"missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("expression") is None: raise ValidationException("missing required field `expression`", None, []) expression = load_field( _doc.get("expression"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("expression") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expression`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expression") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `expression` field is not valid because:", SourceLine(_doc, "expression", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expression` field is not valid because:", SourceLine(_doc, "expression", str), [e], detailed_message=f"the `expression` field with value `{val}` " "is not valid because:", ) ) 
extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `inputs`, `outputs`, `requirements`, `hints`, `label`, `doc`, `cwlVersion`, `class`, `expression`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, label=label, doc=doc, cwlVersion=cwlVersion, expression=expression, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints 
is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.expression is not None: r["expression"] = save( self.expression, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "inputs", "outputs", "requirements", "hints", "label", "doc", "cwlVersion", "class", "expression", ] ) class WorkflowOutputParameter(OutputParameter): """ Describe an output parameter of a workflow. The parameter must be connected to one or more parameters defined in the workflow that will provide the value of the output parameter. 
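For example (a sketch; the step name `compress_step` and its output `packed` are hypothetical), a workflow output connected to an upstream step output via `outputSource`:

```yaml
outputs:
  - id: result
    type: File
    outputSource: compress_step/packed  # value comes from this step output
```
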
""" id: str def __init__( self, id: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, outputBinding: Optional[Any] = None, format: Optional[Any] = None, outputSource: Optional[Any] = None, linkMerge: Optional[Any] = None, type_: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.outputBinding = outputBinding self.format = format self.outputSource = outputSource self.linkMerge = linkMerge self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.outputBinding == other.outputBinding and self.format == other.format and self.outputSource == other.outputSource and self.linkMerge == other.linkMerge and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.outputBinding, self.format, self.outputSource, self.linkMerge, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, 
loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field 
is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message 
!= str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) outputSource = None if "outputSource" in _doc: try: outputSource = load_field( _doc.get("outputSource"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None, baseuri, loadingOptions, 
lc=_doc.get("outputSource") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputSource`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputSource") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [e], detailed_message=f"the `outputSource` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], 
detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) type_ = None if "type" in _doc: try: type_ = load_field( _doc.get("type"), typedsl_union_of_None_type_or_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `outputBinding`, `format`, `outputSource`, `linkMerge`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, 
secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, outputBinding=outputBinding, format=format, outputSource=outputSource, linkMerge=linkMerge, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.outputSource is not None: u = save_relative_uri(self.outputSource, self.id, False, 1, relative_uris) r["outputSource"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "outputBinding", "format", "outputSource", "linkMerge", "type", ] ) class Sink(Saveable): pass class WorkflowStepInput(Sink): """ The input of a workflow step connects an upstream parameter (from the workflow inputs, or the outputs of other workflow steps) with the input parameters of the underlying step. ## Input object A WorkflowStepInput object must contain an `id` field in the form `#fieldname` or `#prefix/fieldname`. When the `id` field contains a slash `/` the field name consists of the characters following the final slash (the prefix portion may contain one or more slashes to indicate scope). This defines a field of the workflow step input object with the value of the `source` parameter(s). ## Merging To merge multiple inbound data links, [MultipleInputFeatureRequirement](#MultipleInputFeatureRequirement) must be specified in the workflow or workflow step requirements. If the sink parameter is an array, or named in a [workflow scatter](#WorkflowStep) operation, there may be multiple inbound data links listed in the `source` field. The values from the input links are merged depending on the method specified in the `linkMerge` field. If not specified, the default method is "merge_nested". * **merge_nested** The input must be an array consisting of exactly one entry for each input link. If "merge_nested" is specified with a single link, the value from the link must be wrapped in a single-item list. * **merge_flattened** 1. The source and sink parameters must be compatible types, or the source type must be compatible with a single element from the "items" type of the destination array parameter. 2. Source parameters which are arrays are concatenated. Source parameters which are single element types are appended as single elements. 
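
    The two merge methods can be sketched in plain Python (illustrative
    only; these helper names are hypothetical and not part of this parser):

    ```python
    # Illustrative sketch of linkMerge semantics; `values` holds one
    # runtime value per inbound link, in `source` order.

    def merge_nested(values):
        # One list entry per inbound link; a single link still yields
        # a single-item list.
        return list(values)

    def merge_flattened(values):
        # Array-valued sources are concatenated; single-element sources
        # are appended as single elements.
        merged = []
        for v in values:
            if isinstance(v, list):
                merged.extend(v)
            else:
                merged.append(v)
        return merged

    assert merge_nested([[1, 2], 3]) == [[1, 2], 3]
    assert merge_flattened([[1, 2], 3]) == [1, 2, 3]
    ```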
""" id: str def __init__( self, id: Any, source: Optional[Any] = None, linkMerge: Optional[Any] = None, default: Optional[Any] = None, valueFrom: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.source = source self.linkMerge = linkMerge self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.default = default self.valueFrom = valueFrom def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepInput): return bool( self.source == other.source and self.linkMerge == other.linkMerge and self.id == other.id and self.default == other.default and self.valueFrom == other.valueFrom ) return False def __hash__(self) -> int: return hash( (self.source, self.linkMerge, self.id, self.default, self.valueFrom) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepInput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) source = None if "source" in _doc: try: source = load_field( _doc.get("source"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None, baseuri, loadingOptions, lc=_doc.get("source") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `source`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("source") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [e], detailed_message=f"the `source` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) 
else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `source`, `linkMerge`, `id`, `default`, `valueFrom`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( source=source, linkMerge=linkMerge, id=id, default=default, valueFrom=valueFrom, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = 
self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.source is not None: u = save_relative_uri(self.source, self.id, False, 2, relative_uris) r["source"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["source", "linkMerge", "id", "default", "valueFrom"]) class WorkflowStepOutput(Saveable): """ Associate an output parameter of the underlying process with a workflow parameter. The workflow parameter (given in the `id` field) may be used as a `source` to connect with input parameters of other workflow steps, or with an output parameter of the process. 
""" id: str def __init__( self, id: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepOutput): return bool(self.id == other.id) return False def __hash__(self) -> int: return hash((self.id)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepOutput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: 
_errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["id"]) class WorkflowStep(Saveable): """ A workflow step is an executable element of a workflow. It specifies the underlying process implementation (such as `CommandLineTool` or another `Workflow`) in the `run` field and connects the input and output parameters of the underlying process to workflow parameters. # Scatter/gather To use scatter/gather, [ScatterFeatureRequirement](#ScatterFeatureRequirement) must be specified in the workflow or workflow step requirements. 
A "scatter" operation specifies that the associated workflow step or subworkflow should execute separately over a list of input elements. Each job making up a scatter operation is independent and may be executed concurrently. The `scatter` field specifies one or more input parameters which will be scattered. An input parameter may be listed more than once. The declared type of each input parameter implicitly becomes an array of items of the input parameter type. If a parameter is listed more than once, it becomes a nested array. As a result, upstream parameters which are connected to scattered parameters must be arrays. All output parameter types are also implicitly wrapped in arrays. Each job in the scatter results in an entry in the output array. If any scattered parameter runtime value is an empty array, all outputs are set to empty arrays and no work is done for the step, according to applicable scattering rules. If `scatter` declares more than one input parameter, `scatterMethod` describes how to decompose the input into a discrete set of jobs. * **dotproduct** specifies that each of the input arrays is aligned and one element is taken from each array to construct each job. It is an error if all input arrays are not the same length. * **nested_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output must be nested arrays for each level of scattering, in the order that the input arrays are listed in the `scatter` field. * **flat_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output arrays must be flattened to a single level, but otherwise listed in the order that the input arrays are listed in the `scatter` field. 
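
    The job decompositions can be sketched in plain Python (illustrative
    only; these helper names are hypothetical and not part of this parser):

    ```python
    # Illustrative sketch of how scatterMethod builds the job list.
    # nested_crossproduct produces the same jobs as flat_crossproduct;
    # the two differ only in how the *outputs* are nested.
    from itertools import product

    def dotproduct(*arrays):
        # Arrays are aligned; it is an error if their lengths differ.
        if len({len(a) for a in arrays}) > 1:
            raise ValueError("dotproduct requires equal-length arrays")
        return list(zip(*arrays))

    def flat_crossproduct(*arrays):
        # Cartesian product of the inputs, one job per combination.
        return list(product(*arrays))

    assert dotproduct([1, 2], ["a", "b"]) == [(1, "a"), (2, "b")]
    assert flat_crossproduct([1, 2], ["a"]) == [(1, "a"), (2, "a")]
    ```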
# Subworkflows To specify a nested workflow as part of a workflow step, [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) must be specified in the workflow or workflow step requirements. It is a fatal error if a workflow directly or indirectly invokes itself as a subworkflow (recursive workflows are not allowed). """ id: str def __init__( self, id: Any, in_: Any, out: Any, run: Any, requirements: Optional[Any] = None, hints: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, scatter: Optional[Any] = None, scatterMethod: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.in_ = in_ self.out = out self.requirements = requirements self.hints = hints self.label = label self.doc = doc self.run = run self.scatter = scatter self.scatterMethod = scatterMethod def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStep): return bool( self.id == other.id and self.in_ == other.in_ and self.out == other.out and self.requirements == other.requirements and self.hints == other.hints and self.label == other.label and self.doc == other.doc and self.run == other.run and self.scatter == other.scatter and self.scatterMethod == other.scatterMethod ) return False def __hash__(self) -> int: return hash( ( self.id, self.in_, self.out, self.requirements, self.hints, self.label, self.doc, self.run, self.scatter, self.scatterMethod, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStep": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = 
[] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: _errors__.append(ValidationException("missing id")) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("in") is None: raise ValidationException("missing required field `in`", None, []) in_ = load_field( _doc.get("in"), idmap_in__array_of_WorkflowStepInputLoader, baseuri, loadingOptions, lc=_doc.get("in") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `in`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("in") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` 
is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [e], detailed_message=f"the `in` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("out") is None: raise ValidationException("missing required field `out`", None, []) out = load_field( _doc.get("out"), uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("out") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `out`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("out") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [e], detailed_message=f"the `out` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [e], detailed_message=f"the `run` field with value `{val}` " "is not valid because:", ) ) 
scatter = None if "scatter" in _doc: try: scatter = load_field( _doc.get("scatter"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("scatter") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatter`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatter") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [e], detailed_message=f"the `scatter` field with value `{val}` " "is not valid because:", ) ) scatterMethod = None if "scatterMethod" in _doc: try: scatterMethod = load_field( _doc.get("scatterMethod"), uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("scatterMethod") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatterMethod`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatterMethod") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [e], detailed_message=f"the `scatterMethod` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `in`, `out`, `requirements`, `hints`, `label`, `doc`, `run`, `scatter`, `scatterMethod`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, in_=in_, out=out, requirements=requirements, hints=hints, label=label, doc=doc, run=run, scatter=scatter, scatterMethod=scatterMethod, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.in_ is not None: r["in"] = save( self.in_, top=False, base_url=self.id, relative_uris=relative_uris ) if self.out is not None: u = save_relative_uri(self.out, self.id, True, None, relative_uris) r["out"] = u if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: 
r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u if self.scatter is not None: u = save_relative_uri(self.scatter, self.id, False, 0, relative_uris) r["scatter"] = u if self.scatterMethod is not None: u = save_relative_uri( self.scatterMethod, self.id, False, None, relative_uris ) r["scatterMethod"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "in", "out", "requirements", "hints", "label", "doc", "run", "scatter", "scatterMethod", ] ) class Workflow(Process): """ A workflow describes a set of **steps** and the **dependencies** between those steps. When a step produces output that will be consumed by a second step, the first step is a dependency of the second step. When there is a dependency, the workflow engine must execute the preceding step and wait for it to successfully produce output before executing the dependent step. If two steps are defined in the workflow graph that are not directly or indirectly dependent, these steps are **independent**, and may execute in any order or execute concurrently. A workflow is complete when all steps have been executed. Dependencies between parameters are expressed using the `source` field on [workflow step input parameters](#WorkflowStepInput) and [workflow output parameters](#WorkflowOutputParameter). 
The `source` field expresses the dependency of one parameter on another such that when a value is associated with the parameter specified by `source`, that value is propagated to the destination parameter. When all data links inbound to a given step are fulfilled, the step is ready to execute.

## Workflow success and failure

A completed step must result in one of `success`, `temporaryFailure` or `permanentFailure` states. An implementation may choose to retry a step execution which resulted in `temporaryFailure`. An implementation may choose to either continue running other steps of a workflow, or terminate immediately upon `permanentFailure`.

* If any step of a workflow execution results in `permanentFailure`, then the workflow status is `permanentFailure`.
* If one or more steps result in `temporaryFailure` and all other steps complete `success` or are not executed, then the workflow status is `temporaryFailure`.
* If all workflow steps are executed and complete with `success`, then the workflow status is `success`.

# Extensions

[ScatterFeatureRequirement](#ScatterFeatureRequirement) and [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) are available as standard [extensions](#Extensions_and_Metadata) to core workflow semantics.
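The data links described above can be sketched as a minimal two-step fragment; the step ids, tool files, and parameter names below are hypothetical:

```yaml
# step `compress` depends on step `extract`: its input is linked via `source`
steps:
  extract:
    run: extract.cwl          # hypothetical CommandLineTool
    in:
      archive: input_archive
    out: [extracted_dir]
  compress:
    run: compress.cwl         # hypothetical CommandLineTool
    in:
      dir:
        source: extract/extracted_dir   # data link: makes `extract` a dependency
    out: [tarball]
```

Because `compress` consumes `extract/extracted_dir`, the engine must run `extract` to completion before `compress` may start.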
""" id: str def __init__( self, inputs: Any, outputs: Any, steps: Any, id: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, cwlVersion: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.label = label self.doc = doc self.cwlVersion = cwlVersion self.class_ = "Workflow" self.steps = steps def __eq__(self, other: Any) -> bool: if isinstance(other, Workflow): return bool( self.id == other.id and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.label == other.label and self.doc == other.doc and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.steps == other.steps ) return False def __hash__(self) -> int: return hash( ( self.id, self.inputs, self.outputs, self.requirements, self.hints, self.label, self.doc, self.cwlVersion, self.class_, self.steps, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Workflow": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Workflow_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_InputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_WorkflowOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), 
idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == 
"missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("steps") is None: raise ValidationException("missing required field `steps`", None, []) steps = load_field( _doc.get("steps"), idmap_steps_union_of_array_of_WorkflowStepLoader, baseuri, loadingOptions, lc=_doc.get("steps") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `steps`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("steps") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [e], detailed_message=f"the `steps` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in 
_doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `inputs`, `outputs`, `requirements`, `hints`, `label`, `doc`, `cwlVersion`, `class`, `steps`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, label=label, doc=doc, cwlVersion=cwlVersion, steps=steps, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, 
base_url=self.id, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.steps is not None: r["steps"] = save( self.steps, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "inputs", "outputs", "requirements", "hints", "label", "doc", "cwlVersion", "class", "steps", ] ) class SubworkflowFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support nested workflows in the `run` field of [WorkflowStep](#WorkflowStep). 
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SubworkflowFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, SubworkflowFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SubworkflowFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: 
dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class ScatterFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support the `scatter` and `scatterMethod` fields of [WorkflowStep](#WorkflowStep). """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ScatterFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ScatterFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ScatterFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ScatterFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, 
lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class MultipleInputFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support multiple inbound data links listed in the `source` field of [WorkflowStepInput](#WorkflowStepInput). 
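As a sketch, a step input drawing from two upstream outputs might look like the following (step, tool, and parameter names are hypothetical):

```yaml
requirements:
  - class: MultipleInputFeatureRequirement
steps:
  merge:
    run: merge.cwl            # hypothetical tool
    in:
      files:
        source: [stepA/out1, stepB/out2]   # two inbound data links
    out: [merged]
```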
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MultipleInputFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, MultipleInputFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MultipleInputFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, 
Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class StepInputExpressionRequirement(ProcessRequirement): """ Indicate that the workflow platform must support the `valueFrom` field of [WorkflowStepInput](#WorkflowStepInput). """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "StepInputExpressionRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, StepInputExpressionRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "StepInputExpressionRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_StepInputExpressionRequirement_classLoader_False_True_None_None, 
baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class LoadListingRequirement(ProcessRequirement): def __init__( self, loadListing: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = 
loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "LoadListingRequirement" self.loadListing = loadListing def __eq__(self, other: Any) -> bool: if isinstance(other, LoadListingRequirement): return bool( self.class_ == other.class_ and self.loadListing == other.loadListing ) return False def __hash__(self) -> int: return hash((self.class_, self.loadListing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "LoadListingRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("loadListing") is None: raise ValidationException("missing required field `loadListing`", None, []) loadListing = load_field( _doc.get("loadListing"), union_of_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `loadListing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadListing=loadListing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "loadListing"]) class InplaceUpdateRequirement(ProcessRequirement): def __init__( self, inplaceUpdate: Any, extension_fields: 
Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InplaceUpdateRequirement" self.inplaceUpdate = inplaceUpdate def __eq__(self, other: Any) -> bool: if isinstance(other, InplaceUpdateRequirement): return bool( self.class_ == other.class_ and self.inplaceUpdate == other.inplaceUpdate ) return False def __hash__(self) -> int: return hash((self.class_, self.inplaceUpdate)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InplaceUpdateRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inplaceUpdate") is None: raise ValidationException("missing required field `inplaceUpdate`", None, []) inplaceUpdate = load_field( _doc.get("inplaceUpdate"), booltype, baseuri, loadingOptions, lc=_doc.get("inplaceUpdate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inplaceUpdate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inplaceUpdate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", 
str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", str), [e], detailed_message=f"the `inplaceUpdate` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `inplaceUpdate`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( inplaceUpdate=inplaceUpdate, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.inplaceUpdate is not None: r["inplaceUpdate"] = save( self.inplaceUpdate, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "inplaceUpdate"]) class Secrets(ProcessRequirement): def __init__( self, secrets: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Secrets" self.secrets = secrets def __eq__(self, other: Any) -> bool: if isinstance(other, Secrets): return bool(self.class_ == other.class_ and self.secrets == other.secrets) return False def __hash__(self) -> int: return hash((self.class_, self.secrets)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Secrets": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("secrets") is None: raise ValidationException("missing required field `secrets`", None, []) secrets = load_field( _doc.get("secrets"), uri_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("secrets") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secrets`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secrets") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [e], detailed_message=f"the `secrets` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `secrets`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( secrets=secrets, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.secrets is not None: u = save_relative_uri(self.secrets, base_url, False, 0, relative_uris) r["secrets"] = u # top refers to the directory 
level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "secrets"]) class TimeLimit(ProcessRequirement): """ Set an upper limit on the execution time of a CommandLineTool or ExpressionTool. A tool execution which exceeds the time limit may be preemptively terminated and considered failed. May also be used by batch systems to make scheduling decisions. """ def __init__( self, timelimit: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "TimeLimit" self.timelimit = timelimit def __eq__(self, other: Any) -> bool: if isinstance(other, TimeLimit): return bool( self.class_ == other.class_ and self.timelimit == other.timelimit ) return False def __hash__(self) -> int: return hash((self.class_, self.timelimit)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "TimeLimit": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("timelimit") is None: raise ValidationException("missing required field `timelimit`", None, []) timelimit = load_field( _doc.get("timelimit"), 
union_of_inttype_or_strtype, baseuri, loadingOptions, lc=_doc.get("timelimit") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `timelimit`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("timelimit") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [e], detailed_message=f"the `timelimit` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `timelimit`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( timelimit=timelimit, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = 
self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.timelimit is not None: r["timelimit"] = save( self.timelimit, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "timelimit"]) class WorkReuse(ProcessRequirement): """ For implementations that support reusing output from past work (on the assumption that same code and same input produce same results), control whether to enable or disable the reuse behavior for a particular tool or step (to accommodate situations where that assumption is incorrect). A reused step is not executed but instead returns the same output as the original execution. If `enableReuse` is not specified, correct tools should assume it is enabled by default. 
""" def __init__( self, enableReuse: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "WorkReuse" self.enableReuse = enableReuse def __eq__(self, other: Any) -> bool: if isinstance(other, WorkReuse): return bool( self.class_ == other.class_ and self.enableReuse == other.enableReuse ) return False def __hash__(self) -> int: return hash((self.class_, self.enableReuse)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkReuse": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("enableReuse") is None: raise ValidationException("missing required field `enableReuse`", None, []) enableReuse = load_field( _doc.get("enableReuse"), union_of_booltype_or_strtype, baseuri, loadingOptions, lc=_doc.get("enableReuse") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `enableReuse`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("enableReuse") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, 
"enableReuse", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, "enableReuse", str), [e], detailed_message=f"the `enableReuse` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `enableReuse`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( enableReuse=enableReuse, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.enableReuse is not None: r["enableReuse"] = save( self.enableReuse, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "enableReuse"]) class NetworkAccess(ProcessRequirement): """ Indicate whether a process requires outgoing IPv4/IPv6 network access. Choice of IPv4 or IPv6 is implementation and site specific, correct tools must support both. If `networkAccess` is false or not specified, tools must not assume network access, except for localhost (the loopback device). If `networkAccess` is true, the tool must be able to make outgoing connections to network resources. Resources may be on a private subnet or the public Internet. However, implementations and sites may apply their own security policies to restrict what is accessible by the tool. Enabling network access does not imply a publicly routable IP address or the ability to accept inbound connections. """ def __init__( self, networkAccess: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "NetworkAccess" self.networkAccess = networkAccess def __eq__(self, other: Any) -> bool: if isinstance(other, NetworkAccess): return bool( self.class_ == other.class_ and self.networkAccess == other.networkAccess ) return False def __hash__(self) -> int: return hash((self.class_, self.networkAccess)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "NetworkAccess": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), 
uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("networkAccess") is None: raise ValidationException("missing required field `networkAccess`", None, []) networkAccess = load_field( _doc.get("networkAccess"), union_of_booltype_or_strtype, baseuri, loadingOptions, lc=_doc.get("networkAccess") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `networkAccess`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("networkAccess") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [e], detailed_message=f"the `networkAccess` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `networkAccess`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( 
networkAccess=networkAccess, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.networkAccess is not None: r["networkAccess"] = save( self.networkAccess, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "networkAccess"]) class ProcessGenerator(Process): id: str def __init__( self, inputs: Any, outputs: Any, run: Any, id: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, cwlVersion: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.label = label self.doc = doc self.cwlVersion = cwlVersion self.class_ = "ProcessGenerator" self.run = run def __eq__(self, 
other: Any) -> bool: if isinstance(other, ProcessGenerator): return bool( self.id == other.id and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.label == other.label and self.doc == other.doc and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.run == other.run ) return False def __hash__(self) -> int: return hash( ( self.id, self.inputs, self.outputs, self.requirements, self.hints, self.label, self.doc, self.cwlVersion, self.class_, self.run, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ProcessGenerator": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + 
str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_InputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_OutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", 
SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], 
) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but 
valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [e], detailed_message=f"the `run` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `inputs`, `outputs`, `requirements`, `hints`, `label`, `doc`, `cwlVersion`, `class`, `run`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, label=label, doc=doc, cwlVersion=cwlVersion, run=run, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] 
if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "inputs", "outputs", "requirements", "hints", "label", "doc", "cwlVersion", "class", "run", ] ) class MPIRequirement(ProcessRequirement): """ Indicates that a process requires an MPI runtime. 
""" def __init__( self, processes: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MPIRequirement" self.processes = processes def __eq__(self, other: Any) -> bool: if isinstance(other, MPIRequirement): return bool( self.class_ == other.class_ and self.processes == other.processes ) return False def __hash__(self) -> int: return hash((self.class_, self.processes)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MPIRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("processes") is None: raise ValidationException("missing required field `processes`", None, []) processes = load_field( _doc.get("processes"), union_of_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("processes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `processes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("processes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", 
str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", str), [e], detailed_message=f"the `processes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `processes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( processes=processes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.processes is not None: r["processes"] = save( self.processes, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "processes"]) class CUDARequirement(ProcessRequirement): """ Require support for NVIDIA CUDA (GPU hardware acceleration). """ def __init__( self, cudaComputeCapability: Any, cudaVersionMin: Any, cudaDeviceCountMax: Optional[Any] = None, cudaDeviceCountMin: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "CUDARequirement" self.cudaComputeCapability = cudaComputeCapability self.cudaDeviceCountMax = cudaDeviceCountMax self.cudaDeviceCountMin = cudaDeviceCountMin self.cudaVersionMin = cudaVersionMin def __eq__(self, other: Any) -> bool: if isinstance(other, CUDARequirement): return bool( self.class_ == other.class_ and self.cudaComputeCapability == other.cudaComputeCapability and self.cudaDeviceCountMax == other.cudaDeviceCountMax and self.cudaDeviceCountMin == other.cudaDeviceCountMin and self.cudaVersionMin == other.cudaVersionMin ) return False def __hash__(self) -> int: return hash( ( self.class_, self.cudaComputeCapability, self.cudaDeviceCountMax, self.cudaDeviceCountMin, self.cudaVersionMin, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CUDARequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise
ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("cudaComputeCapability") is None: raise ValidationException("missing required field `cudaComputeCapability`", None, []) cudaComputeCapability = load_field( _doc.get("cudaComputeCapability"), union_of_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("cudaComputeCapability") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaComputeCapability`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaComputeCapability") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [e], detailed_message=f"the `cudaComputeCapability` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMax = None if "cudaDeviceCountMax" in _doc: try: cudaDeviceCountMax = load_field( _doc.get("cudaDeviceCountMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [e], detailed_message=f"the `cudaDeviceCountMax` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMin = None if "cudaDeviceCountMin" in _doc: try: cudaDeviceCountMin = load_field( _doc.get("cudaDeviceCountMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [e], detailed_message=f"the `cudaDeviceCountMin` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("cudaVersionMin") is None: raise ValidationException("missing required field `cudaVersionMin`", None, []) cudaVersionMin = 
load_field( _doc.get("cudaVersionMin"), strtype, baseuri, loadingOptions, lc=_doc.get("cudaVersionMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaVersionMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaVersionMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [e], detailed_message=f"the `cudaVersionMin` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `cudaComputeCapability`, `cudaDeviceCountMax`, `cudaDeviceCountMin`, `cudaVersionMin`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( cudaComputeCapability=cudaComputeCapability, cudaDeviceCountMax=cudaDeviceCountMax, cudaDeviceCountMin=cudaDeviceCountMin, cudaVersionMin=cudaVersionMin, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: 
r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.cudaComputeCapability is not None: r["cudaComputeCapability"] = save( self.cudaComputeCapability, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMax is not None: r["cudaDeviceCountMax"] = save( self.cudaDeviceCountMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMin is not None: r["cudaDeviceCountMin"] = save( self.cudaDeviceCountMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaVersionMin is not None: r["cudaVersionMin"] = save( self.cudaVersionMin, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "cudaComputeCapability", "cudaDeviceCountMax", "cudaDeviceCountMin", "cudaVersionMin", ] ) class ShmSize(ProcessRequirement): def __init__( self, shmSize: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShmSize" self.shmSize = shmSize def __eq__(self, other: Any) -> bool: if isinstance(other, ShmSize): return bool(self.class_ == other.class_ 
and self.shmSize == other.shmSize) return False def __hash__(self) -> int: return hash((self.class_, self.shmSize)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShmSize": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("shmSize") is None: raise ValidationException("missing required field `shmSize`", None, []) shmSize = load_field( _doc.get("shmSize"), strtype, baseuri, loadingOptions, lc=_doc.get("shmSize") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shmSize`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shmSize") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [e], detailed_message=f"the `shmSize` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( 
ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `shmSize`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( shmSize=shmSize, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.shmSize is not None: r["shmSize"] = save( self.shmSize, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "shmSize"]) _vocab = { "Any": "https://w3id.org/cwl/salad#Any", "ArraySchema": "https://w3id.org/cwl/salad#ArraySchema", "CUDARequirement": "http://commonwl.org/cwltool#CUDARequirement", "CWLArraySchema": "https://w3id.org/cwl/cwl#CWLArraySchema", "CWLInputFile": "https://w3id.org/cwl/cwl#CWLInputFile", "CWLObjectType": "https://w3id.org/cwl/cwl#CWLObjectType", "CWLRecordField": "https://w3id.org/cwl/cwl#CWLRecordField", "CWLRecordSchema": "https://w3id.org/cwl/cwl#CWLRecordSchema", "CWLType": 
"https://w3id.org/cwl/cwl#CWLType", "CWLVersion": "https://w3id.org/cwl/cwl#CWLVersion", "CommandInputArraySchema": "https://w3id.org/cwl/cwl#CommandInputArraySchema", "CommandInputEnumSchema": "https://w3id.org/cwl/cwl#CommandInputEnumSchema", "CommandInputParameter": "https://w3id.org/cwl/cwl#CommandInputParameter", "CommandInputRecordField": "https://w3id.org/cwl/cwl#CommandInputRecordField", "CommandInputRecordSchema": "https://w3id.org/cwl/cwl#CommandInputRecordSchema", "CommandLineBinding": "https://w3id.org/cwl/cwl#CommandLineBinding", "CommandLineTool": "https://w3id.org/cwl/cwl#CommandLineTool", "CommandOutputArraySchema": "https://w3id.org/cwl/cwl#CommandOutputArraySchema", "CommandOutputBinding": "https://w3id.org/cwl/cwl#CommandOutputBinding", "CommandOutputEnumSchema": "https://w3id.org/cwl/cwl#CommandOutputEnumSchema", "CommandOutputParameter": "https://w3id.org/cwl/cwl#CommandOutputParameter", "CommandOutputRecordField": "https://w3id.org/cwl/cwl#CommandOutputRecordField", "CommandOutputRecordSchema": "https://w3id.org/cwl/cwl#CommandOutputRecordSchema", "Directory": "https://w3id.org/cwl/cwl#Directory", "Dirent": "https://w3id.org/cwl/cwl#Dirent", "DockerRequirement": "https://w3id.org/cwl/cwl#DockerRequirement", "Documented": "https://w3id.org/cwl/salad#Documented", "EnumSchema": "https://w3id.org/cwl/salad#EnumSchema", "EnvVarRequirement": "https://w3id.org/cwl/cwl#EnvVarRequirement", "EnvironmentDef": "https://w3id.org/cwl/cwl#EnvironmentDef", "Expression": "https://w3id.org/cwl/cwl#Expression", "ExpressionPlaceholder": "https://w3id.org/cwl/cwl#ExpressionPlaceholder", "ExpressionTool": "https://w3id.org/cwl/cwl#ExpressionTool", "ExpressionToolOutputParameter": "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter", "File": "https://w3id.org/cwl/cwl#File", "InitialWorkDirRequirement": "https://w3id.org/cwl/cwl#InitialWorkDirRequirement", "InlineJavascriptRequirement": "https://w3id.org/cwl/cwl#InlineJavascriptRequirement", 
"InplaceUpdateRequirement": "http://commonwl.org/cwltool#InplaceUpdateRequirement", "InputArraySchema": "https://w3id.org/cwl/cwl#InputArraySchema", "InputBinding": "https://w3id.org/cwl/cwl#InputBinding", "InputEnumSchema": "https://w3id.org/cwl/cwl#InputEnumSchema", "InputParameter": "https://w3id.org/cwl/cwl#InputParameter", "InputRecordField": "https://w3id.org/cwl/cwl#InputRecordField", "InputRecordSchema": "https://w3id.org/cwl/cwl#InputRecordSchema", "InputSchema": "https://w3id.org/cwl/cwl#InputSchema", "LinkMergeMethod": "https://w3id.org/cwl/cwl#LinkMergeMethod", "LoadListingRequirement": "http://commonwl.org/cwltool#LoadListingRequirement", "MPIRequirement": "http://commonwl.org/cwltool#MPIRequirement", "MapSchema": "https://w3id.org/cwl/salad#MapSchema", "MultipleInputFeatureRequirement": "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement", "NetworkAccess": "http://commonwl.org/cwltool#NetworkAccess", "OutputArraySchema": "https://w3id.org/cwl/cwl#OutputArraySchema", "OutputBinding": "https://w3id.org/cwl/cwl#OutputBinding", "OutputEnumSchema": "https://w3id.org/cwl/cwl#OutputEnumSchema", "OutputParameter": "https://w3id.org/cwl/cwl#OutputParameter", "OutputRecordField": "https://w3id.org/cwl/cwl#OutputRecordField", "OutputRecordSchema": "https://w3id.org/cwl/cwl#OutputRecordSchema", "OutputSchema": "https://w3id.org/cwl/cwl#OutputSchema", "Parameter": "https://w3id.org/cwl/cwl#Parameter", "PrimitiveType": "https://w3id.org/cwl/salad#PrimitiveType", "Process": "https://w3id.org/cwl/cwl#Process", "ProcessGenerator": "http://commonwl.org/cwltool#ProcessGenerator", "ProcessRequirement": "https://w3id.org/cwl/cwl#ProcessRequirement", "RecordField": "https://w3id.org/cwl/salad#RecordField", "RecordSchema": "https://w3id.org/cwl/salad#RecordSchema", "ResourceRequirement": "https://w3id.org/cwl/cwl#ResourceRequirement", "ScatterFeatureRequirement": "https://w3id.org/cwl/cwl#ScatterFeatureRequirement", "ScatterMethod": 
"https://w3id.org/cwl/cwl#ScatterMethod", "SchemaBase": "https://w3id.org/cwl/cwl#SchemaBase", "SchemaDefRequirement": "https://w3id.org/cwl/cwl#SchemaDefRequirement", "Secrets": "http://commonwl.org/cwltool#Secrets", "ShellCommandRequirement": "https://w3id.org/cwl/cwl#ShellCommandRequirement", "ShmSize": "http://commonwl.org/cwltool#ShmSize", "Sink": "https://w3id.org/cwl/cwl#Sink", "SoftwarePackage": "https://w3id.org/cwl/cwl#SoftwarePackage", "SoftwareRequirement": "https://w3id.org/cwl/cwl#SoftwareRequirement", "StepInputExpressionRequirement": "https://w3id.org/cwl/cwl#StepInputExpressionRequirement", "SubworkflowFeatureRequirement": "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement", "TimeLimit": "http://commonwl.org/cwltool#TimeLimit", "UnionSchema": "https://w3id.org/cwl/salad#UnionSchema", "WorkReuse": "http://commonwl.org/cwltool#WorkReuse", "Workflow": "https://w3id.org/cwl/cwl#Workflow", "WorkflowOutputParameter": "https://w3id.org/cwl/cwl#WorkflowOutputParameter", "WorkflowStep": "https://w3id.org/cwl/cwl#WorkflowStep", "WorkflowStepInput": "https://w3id.org/cwl/cwl#WorkflowStepInput", "WorkflowStepOutput": "https://w3id.org/cwl/cwl#WorkflowStepOutput", "array": "https://w3id.org/cwl/salad#array", "boolean": "http://www.w3.org/2001/XMLSchema#boolean", "deep_listing": "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/deep_listing", "dotproduct": "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct", "double": "http://www.w3.org/2001/XMLSchema#double", "draft-2": "https://w3id.org/cwl/cwl#draft-2", "draft-3": "https://w3id.org/cwl/cwl#draft-3", "draft-3.dev1": "https://w3id.org/cwl/cwl#draft-3.dev1", "draft-3.dev2": "https://w3id.org/cwl/cwl#draft-3.dev2", "draft-3.dev3": "https://w3id.org/cwl/cwl#draft-3.dev3", "draft-3.dev4": "https://w3id.org/cwl/cwl#draft-3.dev4", "draft-3.dev5": "https://w3id.org/cwl/cwl#draft-3.dev5", "draft-4.dev1": "https://w3id.org/cwl/cwl#draft-4.dev1", "draft-4.dev2": 
"https://w3id.org/cwl/cwl#draft-4.dev2", "draft-4.dev3": "https://w3id.org/cwl/cwl#draft-4.dev3", "enum": "https://w3id.org/cwl/salad#enum", "flat_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct", "float": "http://www.w3.org/2001/XMLSchema#float", "int": "http://www.w3.org/2001/XMLSchema#int", "long": "http://www.w3.org/2001/XMLSchema#long", "map": "https://w3id.org/cwl/salad#map", "merge_flattened": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened", "merge_nested": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested", "nested_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct", "no_listing": "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/no_listing", "null": "https://w3id.org/cwl/salad#null", "record": "https://w3id.org/cwl/salad#record", "shallow_listing": "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/shallow_listing", "stderr": "https://w3id.org/cwl/cwl#stderr", "stdout": "https://w3id.org/cwl/cwl#stdout", "string": "http://www.w3.org/2001/XMLSchema#string", "union": "https://w3id.org/cwl/salad#union", "v1.0": "https://w3id.org/cwl/cwl#v1.0", "v1.0.dev4": "https://w3id.org/cwl/cwl#v1.0.dev4", } _rvocab = { "https://w3id.org/cwl/salad#Any": "Any", "https://w3id.org/cwl/salad#ArraySchema": "ArraySchema", "http://commonwl.org/cwltool#CUDARequirement": "CUDARequirement", "https://w3id.org/cwl/cwl#CWLArraySchema": "CWLArraySchema", "https://w3id.org/cwl/cwl#CWLInputFile": "CWLInputFile", "https://w3id.org/cwl/cwl#CWLObjectType": "CWLObjectType", "https://w3id.org/cwl/cwl#CWLRecordField": "CWLRecordField", "https://w3id.org/cwl/cwl#CWLRecordSchema": "CWLRecordSchema", "https://w3id.org/cwl/cwl#CWLType": "CWLType", "https://w3id.org/cwl/cwl#CWLVersion": "CWLVersion", "https://w3id.org/cwl/cwl#CommandInputArraySchema": "CommandInputArraySchema", "https://w3id.org/cwl/cwl#CommandInputEnumSchema": "CommandInputEnumSchema", 
"https://w3id.org/cwl/cwl#CommandInputParameter": "CommandInputParameter", "https://w3id.org/cwl/cwl#CommandInputRecordField": "CommandInputRecordField", "https://w3id.org/cwl/cwl#CommandInputRecordSchema": "CommandInputRecordSchema", "https://w3id.org/cwl/cwl#CommandLineBinding": "CommandLineBinding", "https://w3id.org/cwl/cwl#CommandLineTool": "CommandLineTool", "https://w3id.org/cwl/cwl#CommandOutputArraySchema": "CommandOutputArraySchema", "https://w3id.org/cwl/cwl#CommandOutputBinding": "CommandOutputBinding", "https://w3id.org/cwl/cwl#CommandOutputEnumSchema": "CommandOutputEnumSchema", "https://w3id.org/cwl/cwl#CommandOutputParameter": "CommandOutputParameter", "https://w3id.org/cwl/cwl#CommandOutputRecordField": "CommandOutputRecordField", "https://w3id.org/cwl/cwl#CommandOutputRecordSchema": "CommandOutputRecordSchema", "https://w3id.org/cwl/cwl#Directory": "Directory", "https://w3id.org/cwl/cwl#Dirent": "Dirent", "https://w3id.org/cwl/cwl#DockerRequirement": "DockerRequirement", "https://w3id.org/cwl/salad#Documented": "Documented", "https://w3id.org/cwl/salad#EnumSchema": "EnumSchema", "https://w3id.org/cwl/cwl#EnvVarRequirement": "EnvVarRequirement", "https://w3id.org/cwl/cwl#EnvironmentDef": "EnvironmentDef", "https://w3id.org/cwl/cwl#Expression": "Expression", "https://w3id.org/cwl/cwl#ExpressionPlaceholder": "ExpressionPlaceholder", "https://w3id.org/cwl/cwl#ExpressionTool": "ExpressionTool", "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter": "ExpressionToolOutputParameter", "https://w3id.org/cwl/cwl#File": "File", "https://w3id.org/cwl/cwl#InitialWorkDirRequirement": "InitialWorkDirRequirement", "https://w3id.org/cwl/cwl#InlineJavascriptRequirement": "InlineJavascriptRequirement", "http://commonwl.org/cwltool#InplaceUpdateRequirement": "InplaceUpdateRequirement", "https://w3id.org/cwl/cwl#InputArraySchema": "InputArraySchema", "https://w3id.org/cwl/cwl#InputBinding": "InputBinding", "https://w3id.org/cwl/cwl#InputEnumSchema": 
"InputEnumSchema", "https://w3id.org/cwl/cwl#InputParameter": "InputParameter", "https://w3id.org/cwl/cwl#InputRecordField": "InputRecordField", "https://w3id.org/cwl/cwl#InputRecordSchema": "InputRecordSchema", "https://w3id.org/cwl/cwl#InputSchema": "InputSchema", "https://w3id.org/cwl/cwl#LinkMergeMethod": "LinkMergeMethod", "http://commonwl.org/cwltool#LoadListingRequirement": "LoadListingRequirement", "http://commonwl.org/cwltool#MPIRequirement": "MPIRequirement", "https://w3id.org/cwl/salad#MapSchema": "MapSchema", "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement": "MultipleInputFeatureRequirement", "http://commonwl.org/cwltool#NetworkAccess": "NetworkAccess", "https://w3id.org/cwl/cwl#OutputArraySchema": "OutputArraySchema", "https://w3id.org/cwl/cwl#OutputBinding": "OutputBinding", "https://w3id.org/cwl/cwl#OutputEnumSchema": "OutputEnumSchema", "https://w3id.org/cwl/cwl#OutputParameter": "OutputParameter", "https://w3id.org/cwl/cwl#OutputRecordField": "OutputRecordField", "https://w3id.org/cwl/cwl#OutputRecordSchema": "OutputRecordSchema", "https://w3id.org/cwl/cwl#OutputSchema": "OutputSchema", "https://w3id.org/cwl/cwl#Parameter": "Parameter", "https://w3id.org/cwl/salad#PrimitiveType": "PrimitiveType", "https://w3id.org/cwl/cwl#Process": "Process", "http://commonwl.org/cwltool#ProcessGenerator": "ProcessGenerator", "https://w3id.org/cwl/cwl#ProcessRequirement": "ProcessRequirement", "https://w3id.org/cwl/salad#RecordField": "RecordField", "https://w3id.org/cwl/salad#RecordSchema": "RecordSchema", "https://w3id.org/cwl/cwl#ResourceRequirement": "ResourceRequirement", "https://w3id.org/cwl/cwl#ScatterFeatureRequirement": "ScatterFeatureRequirement", "https://w3id.org/cwl/cwl#ScatterMethod": "ScatterMethod", "https://w3id.org/cwl/cwl#SchemaBase": "SchemaBase", "https://w3id.org/cwl/cwl#SchemaDefRequirement": "SchemaDefRequirement", "http://commonwl.org/cwltool#Secrets": "Secrets", "https://w3id.org/cwl/cwl#ShellCommandRequirement": 
"ShellCommandRequirement", "http://commonwl.org/cwltool#ShmSize": "ShmSize", "https://w3id.org/cwl/cwl#Sink": "Sink", "https://w3id.org/cwl/cwl#SoftwarePackage": "SoftwarePackage", "https://w3id.org/cwl/cwl#SoftwareRequirement": "SoftwareRequirement", "https://w3id.org/cwl/cwl#StepInputExpressionRequirement": "StepInputExpressionRequirement", "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement": "SubworkflowFeatureRequirement", "http://commonwl.org/cwltool#TimeLimit": "TimeLimit", "https://w3id.org/cwl/salad#UnionSchema": "UnionSchema", "http://commonwl.org/cwltool#WorkReuse": "WorkReuse", "https://w3id.org/cwl/cwl#Workflow": "Workflow", "https://w3id.org/cwl/cwl#WorkflowOutputParameter": "WorkflowOutputParameter", "https://w3id.org/cwl/cwl#WorkflowStep": "WorkflowStep", "https://w3id.org/cwl/cwl#WorkflowStepInput": "WorkflowStepInput", "https://w3id.org/cwl/cwl#WorkflowStepOutput": "WorkflowStepOutput", "https://w3id.org/cwl/salad#array": "array", "http://www.w3.org/2001/XMLSchema#boolean": "boolean", "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/deep_listing": "deep_listing", "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct": "dotproduct", "http://www.w3.org/2001/XMLSchema#double": "double", "https://w3id.org/cwl/cwl#draft-2": "draft-2", "https://w3id.org/cwl/cwl#draft-3": "draft-3", "https://w3id.org/cwl/cwl#draft-3.dev1": "draft-3.dev1", "https://w3id.org/cwl/cwl#draft-3.dev2": "draft-3.dev2", "https://w3id.org/cwl/cwl#draft-3.dev3": "draft-3.dev3", "https://w3id.org/cwl/cwl#draft-3.dev4": "draft-3.dev4", "https://w3id.org/cwl/cwl#draft-3.dev5": "draft-3.dev5", "https://w3id.org/cwl/cwl#draft-4.dev1": "draft-4.dev1", "https://w3id.org/cwl/cwl#draft-4.dev2": "draft-4.dev2", "https://w3id.org/cwl/cwl#draft-4.dev3": "draft-4.dev3", "https://w3id.org/cwl/salad#enum": "enum", "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct": "flat_crossproduct", "http://www.w3.org/2001/XMLSchema#float": "float", 
"http://www.w3.org/2001/XMLSchema#int": "int", "http://www.w3.org/2001/XMLSchema#long": "long", "https://w3id.org/cwl/salad#map": "map", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened": "merge_flattened", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested": "merge_nested", "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct": "nested_crossproduct", "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/no_listing": "no_listing", "https://w3id.org/cwl/salad#null": "null", "https://w3id.org/cwl/salad#record": "record", "http://commonwl.org/cwltool#LoadListingRequirement/loadListing/LoadListingEnum/shallow_listing": "shallow_listing", "https://w3id.org/cwl/cwl#stderr": "stderr", "https://w3id.org/cwl/cwl#stdout": "stdout", "http://www.w3.org/2001/XMLSchema#string": "string", "https://w3id.org/cwl/salad#union": "union", "https://w3id.org/cwl/cwl#v1.0": "v1.0", "https://w3id.org/cwl/cwl#v1.0.dev4": "v1.0.dev4", } strtype = _PrimitiveLoader(str) inttype = _PrimitiveLoader(int) floattype = _PrimitiveLoader(float) booltype = _PrimitiveLoader(bool) None_type = _PrimitiveLoader(type(None)) Any_type = _AnyLoader() PrimitiveTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", ), "PrimitiveType", ) """ Names of salad data types (based on Avro schema declarations). Refer to the [Avro schema declaration documentation](https://avro.apache.org/docs/current/spec.html#schemas) for detailed information. null: no value boolean: a binary value int: 32-bit signed integer long: 64-bit signed integer float: single precision (32-bit) IEEE 754 floating-point number double: double precision (64-bit) IEEE 754 floating-point number string: Unicode character sequence """ AnyLoader = _EnumLoader(("Any",), "Any") """ The **Any** type validates for any non-null value. 
""" RecordFieldLoader = _RecordLoader(RecordField, None, None) RecordSchemaLoader = _RecordLoader(RecordSchema, None, None) EnumSchemaLoader = _RecordLoader(EnumSchema, None, None) ArraySchemaLoader = _RecordLoader(ArraySchema, None, None) MapSchemaLoader = _RecordLoader(MapSchema, None, None) UnionSchemaLoader = _RecordLoader(UnionSchema, None, None) CWLTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", "File", "Directory", ), "CWLType", ) """ Extends primitive types with the concept of a file and directory as a builtin type. File: A File object Directory: A Directory object """ CWLArraySchemaLoader = _RecordLoader(CWLArraySchema, None, None) CWLRecordFieldLoader = _RecordLoader(CWLRecordField, None, None) CWLRecordSchemaLoader = _RecordLoader(CWLRecordSchema, None, None) FileLoader = _RecordLoader(File, None, None) DirectoryLoader = _RecordLoader(Directory, None, None) CWLObjectTypeLoader = _UnionLoader((), "CWLObjectTypeLoader") union_of_None_type_or_CWLObjectTypeLoader = _UnionLoader( ( None_type, CWLObjectTypeLoader, ) ) array_of_union_of_None_type_or_CWLObjectTypeLoader = _ArrayLoader( union_of_None_type_or_CWLObjectTypeLoader ) map_of_union_of_None_type_or_CWLObjectTypeLoader = _MapLoader( union_of_None_type_or_CWLObjectTypeLoader, "CWLInputFile", "@list", True ) CWLInputFileLoader = map_of_union_of_None_type_or_CWLObjectTypeLoader CWLVersionLoader = _EnumLoader( ( "draft-2", "draft-3.dev1", "draft-3.dev2", "draft-3.dev3", "draft-3.dev4", "draft-3.dev5", "draft-3", "draft-4.dev1", "draft-4.dev2", "draft-4.dev3", "v1.0.dev4", "v1.0", ), "CWLVersion", ) """ Version symbols for published CWL document versions. 
""" ExpressionLoader = _ExpressionLoader(str) InputRecordFieldLoader = _RecordLoader(InputRecordField, None, None) InputRecordSchemaLoader = _RecordLoader(InputRecordSchema, None, None) InputEnumSchemaLoader = _RecordLoader(InputEnumSchema, None, None) InputArraySchemaLoader = _RecordLoader(InputArraySchema, None, None) OutputRecordFieldLoader = _RecordLoader(OutputRecordField, None, None) OutputRecordSchemaLoader = _RecordLoader(OutputRecordSchema, None, None) OutputEnumSchemaLoader = _RecordLoader(OutputEnumSchema, None, None) OutputArraySchemaLoader = _RecordLoader(OutputArraySchema, None, None) InputParameterLoader = _RecordLoader(InputParameter, None, None) OutputParameterLoader = _RecordLoader(OutputParameter, None, None) InlineJavascriptRequirementLoader = _RecordLoader( InlineJavascriptRequirement, None, None ) SchemaDefRequirementLoader = _RecordLoader(SchemaDefRequirement, None, None) EnvironmentDefLoader = _RecordLoader(EnvironmentDef, None, None) CommandLineBindingLoader = _RecordLoader(CommandLineBinding, None, None) CommandOutputBindingLoader = _RecordLoader(CommandOutputBinding, None, None) CommandInputRecordFieldLoader = _RecordLoader(CommandInputRecordField, None, None) CommandInputRecordSchemaLoader = _RecordLoader(CommandInputRecordSchema, None, None) CommandInputEnumSchemaLoader = _RecordLoader(CommandInputEnumSchema, None, None) CommandInputArraySchemaLoader = _RecordLoader(CommandInputArraySchema, None, None) CommandOutputRecordFieldLoader = _RecordLoader(CommandOutputRecordField, None, None) CommandOutputRecordSchemaLoader = _RecordLoader(CommandOutputRecordSchema, None, None) CommandOutputEnumSchemaLoader = _RecordLoader(CommandOutputEnumSchema, None, None) CommandOutputArraySchemaLoader = _RecordLoader(CommandOutputArraySchema, None, None) CommandInputParameterLoader = _RecordLoader(CommandInputParameter, None, None) CommandOutputParameterLoader = _RecordLoader(CommandOutputParameter, None, None) stdoutLoader = _EnumLoader(("stdout",), 
"stdout") """ Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set. The following ``` outputs: an_output_name: type: stdout stdout: a_stdout_file ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: a_stdout_file stdout: a_stdout_file ``` If there is no `stdout` name provided, a random filename will be created. For example, the following ``` outputs: an_output_name: type: stdout ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: random_stdout_filenameABCDEFG stdout: random_stdout_filenameABCDEFG ``` """ stderrLoader = _EnumLoader(("stderr",), "stderr") """ Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set. The following ``` outputs: an_output_name: type: stderr stderr: a_stderr_file ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: a_stderr_file stderr: a_stderr_file ``` If there is no `stderr` name provided, a random filename will be created. 
For example, the following
```
outputs:
  an_output_name:
    type: stderr
```
is equivalent to
```
outputs:
  an_output_name:
    type: File
    streamable: true
    outputBinding:
      glob: random_stderr_filenameABCDEFG

stderr: random_stderr_filenameABCDEFG
```
"""
CommandLineToolLoader = _RecordLoader(CommandLineTool, None, None)
DockerRequirementLoader = _RecordLoader(DockerRequirement, None, None)
SoftwareRequirementLoader = _RecordLoader(SoftwareRequirement, None, None)
SoftwarePackageLoader = _RecordLoader(SoftwarePackage, None, None)
DirentLoader = _RecordLoader(Dirent, None, None)
InitialWorkDirRequirementLoader = _RecordLoader(InitialWorkDirRequirement, None, None)
EnvVarRequirementLoader = _RecordLoader(EnvVarRequirement, None, None)
ShellCommandRequirementLoader = _RecordLoader(ShellCommandRequirement, None, None)
ResourceRequirementLoader = _RecordLoader(ResourceRequirement, None, None)
ExpressionToolOutputParameterLoader = _RecordLoader(
    ExpressionToolOutputParameter, None, None
)
ExpressionToolLoader = _RecordLoader(ExpressionTool, None, None)
LinkMergeMethodLoader = _EnumLoader(
    (
        "merge_nested",
        "merge_flattened",
    ),
    "LinkMergeMethod",
)
"""
The input link merge method, described in [WorkflowStepInput](#WorkflowStepInput).
"""
WorkflowOutputParameterLoader = _RecordLoader(WorkflowOutputParameter, None, None)
WorkflowStepInputLoader = _RecordLoader(WorkflowStepInput, None, None)
WorkflowStepOutputLoader = _RecordLoader(WorkflowStepOutput, None, None)
ScatterMethodLoader = _EnumLoader(
    (
        "dotproduct",
        "nested_crossproduct",
        "flat_crossproduct",
    ),
    "ScatterMethod",
)
"""
The scatter method, as described in [workflow step scatter](#WorkflowStep).
""" WorkflowStepLoader = _RecordLoader(WorkflowStep, None, None) WorkflowLoader = _RecordLoader(Workflow, None, None) SubworkflowFeatureRequirementLoader = _RecordLoader( SubworkflowFeatureRequirement, None, None ) ScatterFeatureRequirementLoader = _RecordLoader(ScatterFeatureRequirement, None, None) MultipleInputFeatureRequirementLoader = _RecordLoader( MultipleInputFeatureRequirement, None, None ) StepInputExpressionRequirementLoader = _RecordLoader( StepInputExpressionRequirement, None, None ) LoadListingRequirementLoader = _RecordLoader(LoadListingRequirement, None, None) InplaceUpdateRequirementLoader = _RecordLoader(InplaceUpdateRequirement, None, None) SecretsLoader = _RecordLoader(Secrets, None, None) TimeLimitLoader = _RecordLoader(TimeLimit, None, None) WorkReuseLoader = _RecordLoader(WorkReuse, None, None) NetworkAccessLoader = _RecordLoader(NetworkAccess, None, None) ProcessGeneratorLoader = _RecordLoader(ProcessGenerator, None, None) MPIRequirementLoader = _RecordLoader(MPIRequirement, None, None) CUDARequirementLoader = _RecordLoader(CUDARequirement, None, None) ShmSizeLoader = _RecordLoader(ShmSize, None, None) array_of_strtype = _ArrayLoader(strtype) union_of_None_type_or_strtype_or_array_of_strtype = _UnionLoader( ( None_type, strtype, array_of_strtype, ) ) uri_strtype_True_False_None_None = _URILoader(strtype, True, False, None, None) union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, RecordSchemaLoader, EnumSchemaLoader, ArraySchemaLoader, MapSchemaLoader, UnionSchemaLoader, strtype, ) ) array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _ArrayLoader( union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype ) 
union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader(
    (
        PrimitiveTypeLoader,
        RecordSchemaLoader,
        EnumSchemaLoader,
        ArraySchemaLoader,
        MapSchemaLoader,
        UnionSchemaLoader,
        strtype,
        array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype,
    )
)
typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2 = _TypeDSLLoader(
    union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype,
    2,
    "v1.1",
)
array_of_RecordFieldLoader = _ArrayLoader(RecordFieldLoader)
union_of_None_type_or_array_of_RecordFieldLoader = _UnionLoader(
    (
        None_type,
        array_of_RecordFieldLoader,
    )
)
idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader = _IdMapLoader(
    union_of_None_type_or_array_of_RecordFieldLoader, "name", "type"
)
Record_nameLoader = _EnumLoader(("record",), "Record_name")
typedsl_Record_nameLoader_2 = _TypeDSLLoader(Record_nameLoader, 2, "v1.1")
union_of_None_type_or_strtype = _UnionLoader(
    (
        None_type,
        strtype,
    )
)
uri_union_of_None_type_or_strtype_True_False_None_None = _URILoader(
    union_of_None_type_or_strtype, True, False, None, None
)
uri_array_of_strtype_True_False_None_None = _URILoader(
    array_of_strtype, True, False, None, None
)
Enum_nameLoader = _EnumLoader(("enum",), "Enum_name")
typedsl_Enum_nameLoader_2 = _TypeDSLLoader(Enum_nameLoader, 2, "v1.1")
uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None = _URILoader(
    union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype,
    False,
    True,
    2,
    None,
)
Array_nameLoader = _EnumLoader(("array",), "Array_name")
typedsl_Array_nameLoader_2 = _TypeDSLLoader(Array_nameLoader, 2, "v1.1")
Map_nameLoader = _EnumLoader(("map",), "Map_name")
typedsl_Map_nameLoader_2 = _TypeDSLLoader(Map_nameLoader, 2, "v1.1")
Union_nameLoader = _EnumLoader(("union",), "Union_name")
typedsl_Union_nameLoader_2 = _TypeDSLLoader(Union_nameLoader, 2, "v1.1")
union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader(
    (
        PrimitiveTypeLoader,
        CWLRecordSchemaLoader,
        EnumSchemaLoader,
        CWLArraySchemaLoader,
        strtype,
    )
)
array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _ArrayLoader(
    union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype
)
union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader(
    (
        PrimitiveTypeLoader,
        CWLRecordSchemaLoader,
        EnumSchemaLoader,
        CWLArraySchemaLoader,
        strtype,
        array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype,
    )
)
uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None = _URILoader(
    union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype,
    False,
    True,
    2,
    None,
)
typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2 = _TypeDSLLoader(
    union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype,
    2,
    "v1.1",
)
array_of_CWLRecordFieldLoader = _ArrayLoader(CWLRecordFieldLoader)
union_of_None_type_or_array_of_CWLRecordFieldLoader = _UnionLoader(
    (
        None_type,
        array_of_CWLRecordFieldLoader,
    )
)
idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader = _IdMapLoader(
    union_of_None_type_or_array_of_CWLRecordFieldLoader, "name", "type"
)
File_classLoader = _EnumLoader(("File",), "File_class")
uri_File_classLoader_False_True_None_None = _URILoader(
    File_classLoader, False, True, None, None
)
uri_union_of_None_type_or_strtype_False_False_None_None = _URILoader(
    union_of_None_type_or_strtype, False, False, None, None
)
union_of_None_type_or_inttype = _UnionLoader(
    (
        None_type,
        inttype,
    )
)
union_of_FileLoader_or_DirectoryLoader = _UnionLoader(
    (
        FileLoader,
        DirectoryLoader,
    )
)
array_of_union_of_FileLoader_or_DirectoryLoader = _ArrayLoader(
    union_of_FileLoader_or_DirectoryLoader
)
union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader(
    (
        None_type,
        array_of_union_of_FileLoader_or_DirectoryLoader,
    )
)
uri_union_of_None_type_or_strtype_True_False_None_True = _URILoader(
    union_of_None_type_or_strtype, True, False, None, True
)
Directory_classLoader = _EnumLoader(("Directory",), "Directory_class")
uri_Directory_classLoader_False_True_None_None = _URILoader(
    Directory_classLoader, False, True, None, None
)
union_of_strtype_or_ExpressionLoader = _UnionLoader(
    (
        strtype,
        ExpressionLoader,
    )
)
array_of_union_of_strtype_or_ExpressionLoader = _ArrayLoader(
    union_of_strtype_or_ExpressionLoader
)
union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_union_of_strtype_or_ExpressionLoader = _UnionLoader(
    (
        None_type,
        strtype,
        ExpressionLoader,
        array_of_union_of_strtype_or_ExpressionLoader,
    )
)
union_of_None_type_or_booltype = _UnionLoader(
    (
        None_type,
        booltype,
    )
)
union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader(
    (
        CWLTypeLoader,
        InputRecordSchemaLoader,
        InputEnumSchemaLoader,
        InputArraySchemaLoader,
        strtype,
    )
)
array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _ArrayLoader(
    union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype
)
union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader(
    (
        CWLTypeLoader,
        InputRecordSchemaLoader,
        InputEnumSchemaLoader,
        InputArraySchemaLoader,
        strtype,
        array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype,
    )
)
typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader(
    union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype,
    2,
    "v1.1",
)
union_of_None_type_or_CommandLineBindingLoader = _UnionLoader(
    (
        None_type,
        CommandLineBindingLoader,
    )
)
array_of_InputRecordFieldLoader = _ArrayLoader(InputRecordFieldLoader)
union_of_None_type_or_array_of_InputRecordFieldLoader = _UnionLoader(
    (
        None_type,
        array_of_InputRecordFieldLoader,
    )
)
idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader = _IdMapLoader(
    union_of_None_type_or_array_of_InputRecordFieldLoader, "name", "type"
)
uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader(
    union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype,
    False,
    True,
    2,
    None,
)
union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader(
    (
        CWLTypeLoader,
        OutputRecordSchemaLoader,
        OutputEnumSchemaLoader,
        OutputArraySchemaLoader,
        strtype,
    )
)
array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _ArrayLoader(
    union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype
)
union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader(
    (
        CWLTypeLoader,
        OutputRecordSchemaLoader,
        OutputEnumSchemaLoader,
        OutputArraySchemaLoader,
        strtype,
        array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype,
    )
)
typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader(
    union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype,
    2,
    "v1.1",
)
union_of_None_type_or_CommandOutputBindingLoader = _UnionLoader(
    (
        None_type,
        CommandOutputBindingLoader,
    )
)
array_of_OutputRecordFieldLoader = _ArrayLoader(OutputRecordFieldLoader)
union_of_None_type_or_array_of_OutputRecordFieldLoader = _UnionLoader(
    (
        None_type,
        array_of_OutputRecordFieldLoader,
    )
)
idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader = _IdMapLoader(
    union_of_None_type_or_array_of_OutputRecordFieldLoader, "name", "type"
)
uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader(
    union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype,
    False,
    True,
    2,
    None,
)
union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader = _UnionLoader(
    (
        None_type,
        strtype,
        array_of_strtype,
        ExpressionLoader,
    )
)
uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True = _URILoader(
    union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader,
    True,
    False,
    None,
    True,
)
union_of_None_type_or_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader(
    (
        None_type,
        CWLTypeLoader,
        InputRecordSchemaLoader,
        InputEnumSchemaLoader,
        InputArraySchemaLoader,
        strtype,
        array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype,
    )
)
typedsl_union_of_None_type_or_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader(
    union_of_None_type_or_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype,
    2,
    "v1.1",
)
union_of_None_type_or_strtype_or_ExpressionLoader = _UnionLoader(
    (
        None_type,
        strtype,
        ExpressionLoader,
    )
)
uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True = _URILoader(
    union_of_None_type_or_strtype_or_ExpressionLoader, True, False, None, True
)
array_of_InputParameterLoader = _ArrayLoader(InputParameterLoader)
idmap_inputs_array_of_InputParameterLoader = _IdMapLoader(
    array_of_InputParameterLoader, "id", "type"
)
array_of_OutputParameterLoader = _ArrayLoader(OutputParameterLoader)
idmap_outputs_array_of_OutputParameterLoader = _IdMapLoader(
array_of_OutputParameterLoader, "id", "type" ) union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _UnionLoader( ( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, LoadListingRequirementLoader, InplaceUpdateRequirementLoader, SecretsLoader, TimeLimitLoader, WorkReuseLoader, NetworkAccessLoader, MPIRequirementLoader, CUDARequirementLoader, ShmSizeLoader, ) ) array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _ArrayLoader( 
union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _UnionLoader( ( None_type, array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, ) ) 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _IdMapLoader( union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, "class", "None", ) union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( 
( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, LoadListingRequirementLoader, InplaceUpdateRequirementLoader, SecretsLoader, TimeLimitLoader, WorkReuseLoader, NetworkAccessLoader, MPIRequirementLoader, CUDARequirementLoader, ShmSizeLoader, Any_type, ) ) array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _ArrayLoader( union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type ) 
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( ( None_type, array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, ) ) 
idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _IdMapLoader( union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_LoadListingRequirementLoader_or_InplaceUpdateRequirementLoader_or_SecretsLoader_or_TimeLimitLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, "class", "None", ) union_of_None_type_or_CWLVersionLoader = _UnionLoader( ( None_type, CWLVersionLoader, ) ) uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None = _URILoader( union_of_None_type_or_CWLVersionLoader, False, True, None, None ) InlineJavascriptRequirement_classLoader = _EnumLoader( ("InlineJavascriptRequirement",), "InlineJavascriptRequirement_class" ) uri_InlineJavascriptRequirement_classLoader_False_True_None_None = _URILoader( InlineJavascriptRequirement_classLoader, False, True, None, None ) union_of_None_type_or_array_of_strtype = _UnionLoader( ( None_type, array_of_strtype, ) ) SchemaDefRequirement_classLoader 
= _EnumLoader( ("SchemaDefRequirement",), "SchemaDefRequirement_class" ) uri_SchemaDefRequirement_classLoader_False_True_None_None = _URILoader( SchemaDefRequirement_classLoader, False, True, None, None ) union_of_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader = ( _UnionLoader( ( InputRecordSchemaLoader, InputEnumSchemaLoader, InputArraySchemaLoader, ) ) ) array_of_union_of_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader = _ArrayLoader( union_of_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader ) union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype = _UnionLoader( ( None_type, strtype, ExpressionLoader, array_of_strtype, ) ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) 
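The generated names above follow a mechanical scheme: `union_of_A_or_B` is a `_UnionLoader` over `(A, B)`, and `array_of_X` wraps `X` in an `_ArrayLoader`, so arbitrarily nested unions and arrays are built by composition. A minimal, self-contained sketch of that composition pattern (the `_Sketch*` classes are hypothetical stand-ins for illustration, not the schema-salad `_Loader` implementations):

```python
class _SketchPrimitiveLoader:
    """Accepts a value only if it is one of the given Python types."""

    def __init__(self, types):
        self.types = types

    def load(self, doc):
        if not isinstance(doc, self.types):
            raise ValueError(f"expected {self.types}, got {type(doc)}")
        return doc


class _SketchArrayLoader:
    """Applies an item loader to every element of a list."""

    def __init__(self, item_loader):
        self.item_loader = item_loader

    def load(self, doc):
        return [self.item_loader.load(item) for item in doc]


class _SketchUnionLoader:
    """Tries each alternate loader in order; first success wins."""

    def __init__(self, alternates):
        self.alternates = alternates

    def load(self, doc):
        errors = []
        for alt in self.alternates:
            try:
                return alt.load(doc)
            except ValueError as exc:
                errors.append(str(exc))
        raise ValueError("no union alternate matched: " + "; ".join(errors))


# Mirrors the naming convention used by the generated code above,
# e.g. union_of_inttype_or_strtype and array_of_strtype.
sketch_strtype = _SketchPrimitiveLoader((str,))
sketch_inttype = _SketchPrimitiveLoader((int,))
sketch_union_of_inttype_or_strtype = _SketchUnionLoader(
    (sketch_inttype, sketch_strtype)
)
sketch_array_of_strtype = _SketchArrayLoader(sketch_strtype)
```

This is why every distinct combination of alternates gets its own module-level name: the generated code builds each composite loader exactly once and reuses it by reference.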
typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CommandInputRecordFieldLoader = _ArrayLoader(CommandInputRecordFieldLoader) union_of_None_type_or_array_of_CommandInputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandInputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader = ( _IdMapLoader( union_of_None_type_or_array_of_CommandInputRecordFieldLoader, "name", "type" ) ) uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, ) ) 
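The `typedsl_*` loaders wrap a union in a `_TypeDSLLoader`, which expands schema-salad's type shorthand (`?` for an optional/nullable type, `[]` for an array type) before delegating to the wrapped union. A simplified sketch of just that suffix expansion, assuming only the `?`/`[]` rules (the hypothetical `expand_type_dsl` helper ignores the scoping and version arguments the real loader takes):

```python
def expand_type_dsl(t: str):
    """Expand salad type-DSL suffixes: '?' -> nullable, '[]' -> array.

    Hypothetical helper; sketches the shorthand expansion only.
    """
    optional = t.endswith("?")
    if optional:
        t = t[:-1]
    expanded = (
        {"type": "array", "items": expand_type_dsl(t[:-2])}
        if t.endswith("[]")
        else t
    )
    # A trailing '?' turns the type into a union with "null".
    return ["null", expanded] if optional else expanded
```

So a field declared as `string[]?` reaches the underlying union loader as the fully spelled-out `["null", {"type": "array", "items": "string"}]`.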
array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CommandOutputRecordFieldLoader = _ArrayLoader(CommandOutputRecordFieldLoader) union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandOutputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = ( _IdMapLoader( union_of_None_type_or_array_of_CommandOutputRecordFieldLoader, "name", "type" ) ) 
uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_None_type_or_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( None_type, CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_None_type_or_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_None_type_or_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) 
union_of_None_type_or_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( None_type, CWLTypeLoader, stdoutLoader, stderrLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_None_type_or_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_None_type_or_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) CommandLineTool_classLoader = _EnumLoader(("CommandLineTool",), "CommandLineTool_class") uri_CommandLineTool_classLoader_False_True_None_None = _URILoader( CommandLineTool_classLoader, False, True, None, None ) array_of_CommandInputParameterLoader = _ArrayLoader(CommandInputParameterLoader) idmap_inputs_array_of_CommandInputParameterLoader = _IdMapLoader( array_of_CommandInputParameterLoader, "id", "type" ) array_of_CommandOutputParameterLoader = _ArrayLoader(CommandOutputParameterLoader) idmap_outputs_array_of_CommandOutputParameterLoader = _IdMapLoader( array_of_CommandOutputParameterLoader, "id", "type" ) 
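The `idmap_*` loaders normalize the compact map form of a field into its list form: a mapping keyed by the map subject (e.g. `id`) becomes a list of records, and a scalar value is assigned to the map predicate (e.g. `type`). A hedged sketch of that normalization (`idmap_to_list` is a hypothetical helper illustrating the idea, not the `_IdMapLoader` internals):

```python
def idmap_to_list(doc, map_subject, map_predicate):
    """Turn {key: value} shorthand into [{map_subject: key, ...}] form.

    Hypothetical helper; sketches schema-salad's identifier-map behavior.
    """
    if isinstance(doc, dict):
        out = []
        for key, value in doc.items():
            if isinstance(value, dict):
                # Full record form: fold the key in as the subject field.
                entry = dict(value)
                entry[map_subject] = key
            else:
                # Scalar shorthand: the value stands for the predicate field.
                entry = {map_subject: key, map_predicate: value}
            out.append(entry)
        return out
    return doc  # already in list form
```

This is why, for example, `inputs: {in1: string}` in a CWL document is equivalent to `inputs: [{id: in1, type: string}]`.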
union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = _UnionLoader( ( strtype, ExpressionLoader, CommandLineBindingLoader, ) ) array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = ( _ArrayLoader(union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader) ) union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = _UnionLoader( ( None_type, array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, ) ) array_of_inttype = _ArrayLoader(inttype) union_of_None_type_or_array_of_inttype = _UnionLoader( ( None_type, array_of_inttype, ) ) DockerRequirement_classLoader = _EnumLoader( ("DockerRequirement",), "DockerRequirement_class" ) uri_DockerRequirement_classLoader_False_True_None_None = _URILoader( DockerRequirement_classLoader, False, True, None, None ) SoftwareRequirement_classLoader = _EnumLoader( ("SoftwareRequirement",), "SoftwareRequirement_class" ) uri_SoftwareRequirement_classLoader_False_True_None_None = _URILoader( SoftwareRequirement_classLoader, False, True, None, None ) array_of_SoftwarePackageLoader = _ArrayLoader(SoftwarePackageLoader) idmap_packages_array_of_SoftwarePackageLoader = _IdMapLoader( array_of_SoftwarePackageLoader, "package", "specs" ) uri_union_of_None_type_or_array_of_strtype_False_False_None_True = _URILoader( union_of_None_type_or_array_of_strtype, False, False, None, True ) InitialWorkDirRequirement_classLoader = _EnumLoader( ("InitialWorkDirRequirement",), "InitialWorkDirRequirement_class" ) uri_InitialWorkDirRequirement_classLoader_False_True_None_None = _URILoader( InitialWorkDirRequirement_classLoader, False, True, None, None ) union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader = _UnionLoader( ( FileLoader, DirectoryLoader, DirentLoader, strtype, ExpressionLoader, ) ) array_of_union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader = _ArrayLoader( 
union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader ) union_of_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader_or_strtype_or_ExpressionLoader = _UnionLoader( ( array_of_union_of_FileLoader_or_DirectoryLoader_or_DirentLoader_or_strtype_or_ExpressionLoader, strtype, ExpressionLoader, ) ) EnvVarRequirement_classLoader = _EnumLoader( ("EnvVarRequirement",), "EnvVarRequirement_class" ) uri_EnvVarRequirement_classLoader_False_True_None_None = _URILoader( EnvVarRequirement_classLoader, False, True, None, None ) array_of_EnvironmentDefLoader = _ArrayLoader(EnvironmentDefLoader) idmap_envDef_array_of_EnvironmentDefLoader = _IdMapLoader( array_of_EnvironmentDefLoader, "envName", "envValue" ) ShellCommandRequirement_classLoader = _EnumLoader( ("ShellCommandRequirement",), "ShellCommandRequirement_class" ) uri_ShellCommandRequirement_classLoader_False_True_None_None = _URILoader( ShellCommandRequirement_classLoader, False, True, None, None ) ResourceRequirement_classLoader = _EnumLoader( ("ResourceRequirement",), "ResourceRequirement_class" ) uri_ResourceRequirement_classLoader_False_True_None_None = _URILoader( ResourceRequirement_classLoader, False, True, None, None ) union_of_None_type_or_inttype_or_strtype_or_ExpressionLoader = _UnionLoader( ( None_type, inttype, strtype, ExpressionLoader, ) ) union_of_None_type_or_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader( ( None_type, CWLTypeLoader, OutputRecordSchemaLoader, OutputEnumSchemaLoader, OutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, ) ) 
typedsl_union_of_None_type_or_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_None_type_or_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, 2, "v1.1", ) ExpressionTool_classLoader = _EnumLoader(("ExpressionTool",), "ExpressionTool_class") uri_ExpressionTool_classLoader_False_True_None_None = _URILoader( ExpressionTool_classLoader, False, True, None, None ) array_of_ExpressionToolOutputParameterLoader = _ArrayLoader( ExpressionToolOutputParameterLoader ) idmap_outputs_array_of_ExpressionToolOutputParameterLoader = _IdMapLoader( array_of_ExpressionToolOutputParameterLoader, "id", "type" ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 1, None ) union_of_None_type_or_LinkMergeMethodLoader = _UnionLoader( ( None_type, LinkMergeMethodLoader, ) ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 2, None ) array_of_WorkflowStepInputLoader = _ArrayLoader(WorkflowStepInputLoader) idmap_in__array_of_WorkflowStepInputLoader = _IdMapLoader( array_of_WorkflowStepInputLoader, "id", "source" ) union_of_strtype_or_WorkflowStepOutputLoader = _UnionLoader( ( strtype, WorkflowStepOutputLoader, ) ) array_of_union_of_strtype_or_WorkflowStepOutputLoader = _ArrayLoader( union_of_strtype_or_WorkflowStepOutputLoader ) union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader = _UnionLoader( (array_of_union_of_strtype_or_WorkflowStepOutputLoader,) ) 
uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None = _URILoader( union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader, True, False, None, None, ) array_of_Any_type = _ArrayLoader(Any_type) union_of_None_type_or_array_of_Any_type = _UnionLoader( ( None_type, array_of_Any_type, ) ) idmap_hints_union_of_None_type_or_array_of_Any_type = _IdMapLoader( union_of_None_type_or_array_of_Any_type, "class", "None" ) union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader( ( strtype, CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, ProcessGeneratorLoader, ) ) uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None = _URILoader( union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader, False, False, None, None, ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 0, None ) union_of_None_type_or_ScatterMethodLoader = _UnionLoader( ( None_type, ScatterMethodLoader, ) ) uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None = _URILoader( union_of_None_type_or_ScatterMethodLoader, False, True, None, None ) Workflow_classLoader = _EnumLoader(("Workflow",), "Workflow_class") uri_Workflow_classLoader_False_True_None_None = _URILoader( Workflow_classLoader, False, True, None, None ) array_of_WorkflowOutputParameterLoader = _ArrayLoader(WorkflowOutputParameterLoader) idmap_outputs_array_of_WorkflowOutputParameterLoader = _IdMapLoader( array_of_WorkflowOutputParameterLoader, "id", "type" ) array_of_WorkflowStepLoader = _ArrayLoader(WorkflowStepLoader) union_of_array_of_WorkflowStepLoader = _UnionLoader((array_of_WorkflowStepLoader,)) idmap_steps_union_of_array_of_WorkflowStepLoader = _IdMapLoader( 
union_of_array_of_WorkflowStepLoader, "id", "None" ) SubworkflowFeatureRequirement_classLoader = _EnumLoader( ("SubworkflowFeatureRequirement",), "SubworkflowFeatureRequirement_class" ) uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None = _URILoader( SubworkflowFeatureRequirement_classLoader, False, True, None, None ) ScatterFeatureRequirement_classLoader = _EnumLoader( ("ScatterFeatureRequirement",), "ScatterFeatureRequirement_class" ) uri_ScatterFeatureRequirement_classLoader_False_True_None_None = _URILoader( ScatterFeatureRequirement_classLoader, False, True, None, None ) MultipleInputFeatureRequirement_classLoader = _EnumLoader( ("MultipleInputFeatureRequirement",), "MultipleInputFeatureRequirement_class" ) uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None = _URILoader( MultipleInputFeatureRequirement_classLoader, False, True, None, None ) StepInputExpressionRequirement_classLoader = _EnumLoader( ("StepInputExpressionRequirement",), "StepInputExpressionRequirement_class" ) uri_StepInputExpressionRequirement_classLoader_False_True_None_None = _URILoader( StepInputExpressionRequirement_classLoader, False, True, None, None ) uri_strtype_False_True_None_None = _URILoader(strtype, False, True, None, None) LoadListingEnumLoader = _EnumLoader( ( "no_listing", "shallow_listing", "deep_listing", ), "LoadListingEnum", ) union_of_LoadListingEnumLoader = _UnionLoader((LoadListingEnumLoader,)) uri_array_of_strtype_False_False_0_None = _URILoader( array_of_strtype, False, False, 0, None ) union_of_inttype_or_strtype = _UnionLoader( ( inttype, strtype, ) ) union_of_booltype_or_strtype = _UnionLoader( ( booltype, strtype, ) ) union_of_inttype_or_ExpressionLoader = _UnionLoader( ( inttype, ExpressionLoader, ) ) union_of_strtype_or_array_of_strtype = _UnionLoader( ( strtype, array_of_strtype, ) ) union_of_None_type_or_inttype_or_ExpressionLoader = _UnionLoader( ( None_type, inttype, ExpressionLoader, ) ) 
union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader(
    (
        CommandLineToolLoader,
        ExpressionToolLoader,
        WorkflowLoader,
        ProcessGeneratorLoader,
    )
)
array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _ArrayLoader(
    union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader
)
union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader(
    (
        CommandLineToolLoader,
        ExpressionToolLoader,
        WorkflowLoader,
        ProcessGeneratorLoader,
        array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
    )
)
CWLObjectTypeLoader.add_loaders(
    (
        booltype,
        inttype,
        floattype,
        strtype,
        FileLoader,
        DirectoryLoader,
        array_of_union_of_None_type_or_CWLObjectTypeLoader,
        map_of_union_of_None_type_or_CWLObjectTypeLoader,
    )
)


def load_document(
    doc: Any,
    baseuri: Optional[str] = None,
    loadingOptions: Optional[LoadingOptions] = None,
) -> Any:
    if baseuri is None:
        baseuri = file_uri(os.getcwd()) + "/"
    if loadingOptions is None:
        loadingOptions = LoadingOptions()
    result, metadata = _document_load(
        union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
        doc,
        baseuri,
        loadingOptions,
    )
    return result


def load_document_with_metadata(
    doc: Any,
    baseuri: Optional[str] = None,
    loadingOptions: Optional[LoadingOptions] = None,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> Any:
    if baseuri is None:
        baseuri = file_uri(os.getcwd()) + "/"
    if loadingOptions is None:
        loadingOptions = LoadingOptions(fileuri=baseuri)
    return _document_load(
union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader, doc, baseuri, loadingOptions, addl_metadata_fields=addl_metadata_fields, ) def load_document_by_string( string: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: yaml = yaml_no_ts() result = yaml.load(string) add_lc_filename(result, uri) if loadingOptions is None: loadingOptions = LoadingOptions(fileuri=uri) result, metadata = _document_load( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader, result, uri, loadingOptions, ) return result def load_document_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """ Shortcut to load via a YAML object. yaml: must be from ruamel.yaml.main.YAML.load with preserve_quotes=True """ add_lc_filename(yaml, uri) if loadingOptions is None: loadingOptions = LoadingOptions(fileuri=uri) result, metadata = _document_load( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader, yaml, uri, loadingOptions, ) return result cwl_utils-0.37/cwl_utils/parser/cwl_v1_0_utils.py # SPDX-License-Identifier: Apache-2.0 import hashlib import logging import os from collections import namedtuple from collections.abc import MutableMapping, MutableSequence from io import StringIO from typing import IO, Any, Optional, Union, cast from urllib.parse import urldefrag from schema_salad.exceptions import ValidationException
from schema_salad.sourceline import SourceLine, add_lc_filename from schema_salad.utils import aslist, json_dumps, yaml_no_ts import cwl_utils.parser import cwl_utils.parser.cwl_v1_0 as cwl import cwl_utils.parser.utils from cwl_utils.errors import WorkflowException from cwl_utils.utils import yaml_dumps CONTENT_LIMIT: int = 64 * 1024 _logger = logging.getLogger("cwl_utils") SrcSink = namedtuple("SrcSink", ["src", "sink", "linkMerge", "message"]) def _compare_records( src: cwl.RecordSchema, sink: cwl.RecordSchema, strict: bool = False ) -> bool: """ Compare two records, ensuring they have compatible fields. This handles normalizing record names, which will be relative to workflow step, so that they can be compared. """ srcfields = {cwl.shortname(field.name): field.type_ for field in (src.fields or {})} sinkfields = { cwl.shortname(field.name): field.type_ for field in (sink.fields or {}) } for key in sinkfields.keys(): if ( not can_assign_src_to_sink( srcfields.get(key, "null"), sinkfields.get(key, "null"), strict ) and sinkfields.get(key) is not None ): _logger.info( "Record comparison failure for %s and %s\n" "Did not match fields for %s: %s and %s", cast( Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], src ).name, cast( Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], sink ).name, key, srcfields.get(key), sinkfields.get(key), ) return False return True def _compare_type(type1: Any, type2: Any) -> bool: if isinstance(type1, cwl.ArraySchema) and isinstance(type2, cwl.ArraySchema): return _compare_type(type1.items, type2.items) elif isinstance(type1, cwl.RecordSchema) and isinstance(type2, cwl.RecordSchema): fields1 = { cwl.shortname(field.name): field.type_ for field in (type1.fields or {}) } fields2 = { cwl.shortname(field.name): field.type_ for field in (type2.fields or {}) } if fields1.keys() != fields2.keys(): return False return all(_compare_type(fields1[k], fields2[k]) for k in fields1.keys()) elif isinstance(type1, MutableSequence) 
and isinstance(type2, MutableSequence): if len(type1) != len(type2): return False for t1 in type1: if not any(_compare_type(t1, t2) for t2 in type2): return False return True else: return bool(type1 == type2) def _inputfile_load( doc: Union[str, MutableMapping[str, Any], MutableSequence[Any]], baseuri: str, loadingOptions: cwl.LoadingOptions, addl_metadata_fields: Optional[MutableSequence[str]] = None, ) -> tuple[Any, cwl.LoadingOptions]: loader = cwl.CWLInputFileLoader if isinstance(doc, str): url = loadingOptions.fetcher.urljoin(baseuri, doc) if url in loadingOptions.idx: return loadingOptions.idx[url] doc_url, frg = urldefrag(url) text = loadingOptions.fetcher.fetch_text(doc_url) textIO = StringIO(text) textIO.name = str(doc_url) yaml = yaml_no_ts() result = yaml.load(textIO) add_lc_filename(result, doc_url) loadingOptions = cwl.LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url) _inputfile_load( result, doc_url, loadingOptions, ) return loadingOptions.idx[url] if isinstance(doc, MutableMapping): addl_metadata = {} if addl_metadata_fields is not None: for mf in addl_metadata_fields: if mf in doc: addl_metadata[mf] = doc[mf] loadingOptions = cwl.LoadingOptions( copyfrom=loadingOptions, baseuri=baseuri, addl_metadata=addl_metadata, ) loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions, docRoot=baseuri), loadingOptions, ) return loadingOptions.idx[baseuri] if isinstance(doc, MutableSequence): loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions), loadingOptions, ) return loadingOptions.idx[baseuri] raise ValidationException( "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc) ) def can_assign_src_to_sink(src: Any, sink: Any, strict: bool = False) -> bool: """ Check for identical type specifications, ignoring extra keys like inputBinding. src: admissible source types sink: admissible sink types In non-strict comparison, at least one source type must match one sink type, except for 'null'. 
In strict comparison, all source types must match at least one sink type. """ if src == "Any" or sink == "Any": return True if isinstance(src, cwl.ArraySchema) and isinstance(sink, cwl.ArraySchema): return can_assign_src_to_sink(src.items, sink.items, strict) if isinstance(src, cwl.RecordSchema) and isinstance(sink, cwl.RecordSchema): return _compare_records(src, sink, strict) if isinstance(src, MutableSequence): if strict: for this_src in src: if not can_assign_src_to_sink(this_src, sink): return False return True for this_src in src: if this_src != "null" and can_assign_src_to_sink(this_src, sink): return True return False if isinstance(sink, MutableSequence): for this_sink in sink: if can_assign_src_to_sink(src, this_sink): return True return False return bool(src == sink) def check_all_types( src_dict: dict[str, Any], sinks: MutableSequence[Union[cwl.WorkflowStepInput, cwl.WorkflowOutputParameter]], type_dict: dict[str, Any], ) -> dict[str, list[SrcSink]]: """Given a list of sinks, check if their types match with the types of their sources.""" validation: dict[str, list[SrcSink]] = {"warning": [], "exception": []} for sink in sinks: if isinstance(sink, cwl.WorkflowOutputParameter): sourceName = "outputSource" sourceField = sink.outputSource elif isinstance(sink, cwl.WorkflowStepInput): sourceName = "source" sourceField = sink.source else: continue if sourceField is not None: if isinstance(sourceField, MutableSequence): linkMerge = sink.linkMerge or ( "merge_nested" if len(sourceField) > 1 else None ) srcs_of_sink = [] for parm_id in sourceField: srcs_of_sink += [src_dict[parm_id]] else: parm_id = cast(str, sourceField) if parm_id not in src_dict: raise SourceLine(sink, sourceName, ValidationException).makeError( f"{sourceName} not found: {parm_id}" ) srcs_of_sink = [src_dict[parm_id]] linkMerge = None for src in srcs_of_sink: check_result = check_types( type_dict[cast(str, src.id)], type_dict[sink.id], linkMerge, getattr(sink, "valueFrom", None), ) if 
check_result == "warning": validation["warning"].append(SrcSink(src, sink, linkMerge, None)) elif check_result == "exception": validation["exception"].append(SrcSink(src, sink, linkMerge, None)) return validation def check_types( srctype: Any, sinktype: Any, linkMerge: Optional[str], valueFrom: Optional[str] = None, ) -> str: """ Check if the source and sink types are correct. Acceptable types are "pass", "warning", or "exception". """ if valueFrom is not None: return "pass" if linkMerge is None: if can_assign_src_to_sink(srctype, sinktype, strict=True): return "pass" if can_assign_src_to_sink(srctype, sinktype, strict=False): return "warning" return "exception" if linkMerge == "merge_nested": return check_types( cwl.ArraySchema(items=srctype, type_="array"), sinktype, None, None ) if linkMerge == "merge_flattened": return check_types(merge_flatten_type(srctype), sinktype, None, None) raise ValidationException(f"Invalid value {linkMerge} for linkMerge field.") def content_limit_respected_read_bytes(f: IO[bytes]) -> bytes: """ Read file content up to 64 kB as a byte array. Truncate content for larger files. """ return f.read(CONTENT_LIMIT) def content_limit_respected_read(f: IO[bytes]) -> str: """ Read file content up to 64 kB as an utf-8 encoded string. Truncate content for larger files. """ return content_limit_respected_read_bytes(f).decode("utf-8") def convert_stdstreams_to_files(clt: cwl.CommandLineTool) -> None: """Convert stdout and stderr type shortcuts to files.""" for out in clt.outputs: if out.type_ == "stdout": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stdout shortcut." 
) if clt.stdout is None: clt.stdout = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stdout) elif out.type_ == "stderr": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stderr shortcut." ) if clt.stderr is None: clt.stderr = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stderr) def load_inputfile( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.0 input file from a serialized YAML string or a YAML object.""" if baseuri is None: baseuri = cwl.file_uri(os.getcwd()) + "/" if loadingOptions is None: loadingOptions = cwl.LoadingOptions() result, metadata = _inputfile_load( doc, baseuri, loadingOptions, ) return result def load_inputfile_by_string( string: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.0 input file from a serialized YAML string.""" yaml = yaml_no_ts() result = yaml.load(string) add_lc_filename(result, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( result, uri, loadingOptions, ) return result def load_inputfile_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.0 input file from a YAML object.""" add_lc_filename(yaml, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( yaml, uri, loadingOptions, ) return result def merge_flatten_type(src: Any) -> Any: """Return the merge flattened type of the source type.""" if isinstance(src, MutableSequence): return [merge_flatten_type(t) for t in src] if isinstance(src, cwl.ArraySchema): return src return 
cwl.ArraySchema(type_="array", items=src) def type_for_step_input( step: cwl.WorkflowStep, in_: cwl.WorkflowStepInput, ) -> Any: """Determine the type for the given step input.""" if in_.valueFrom is not None: return "Any" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.inputs: for step_input in step_run.inputs: if cast(str, step_input.id).split("#")[-1] == in_.id.split("#")[-1]: input_type = step_input.type_ if step.scatter is not None and in_.id in aslist(step.scatter): input_type = cwl.ArraySchema(items=input_type, type_="array") return input_type return "Any" def type_for_step_output( step: cwl.WorkflowStep, sourcename: str, ) -> Any: """Determine the type for the given step output.""" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.outputs: for step_output in step_run.outputs: if ( step_output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): output_type = step_output.type_ if step.scatter is not None: if step.scatterMethod == "nested_crossproduct": for _ in range(len(aslist(step.scatter))): output_type = cwl.ArraySchema( items=output_type, type_="array" ) else: output_type = cwl.ArraySchema(items=output_type, type_="array") return output_type raise ValidationException( "param {} not found in {}.".format( sourcename, yaml_dumps(cwl.save(step)), ) ) def type_for_source( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, linkMerge: Optional[str] = None, ) -> Any: """Determine the type for the given sourcenames.""" scatter_context: list[Optional[tuple[int, str]]] = [] params = param_for_source_id(process, sourcenames, parent, scatter_context) if not isinstance(params, list): new_type = params.type_ if scatter_context[0] is not None: if scatter_context[0][1] == 
"nested_crossproduct": for _ in range(scatter_context[0][0]): new_type = cwl.ArraySchema(items=new_type, type_="array") else: new_type = cwl.ArraySchema(items=new_type, type_="array") if linkMerge == "merge_nested": new_type = cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": new_type = merge_flatten_type(new_type) return new_type new_type = [] for p, sc in zip(params, scatter_context): if isinstance(p, str) and not any(_compare_type(t, p) for t in new_type): cur_type = p elif hasattr(p, "type_") and not any( _compare_type(t, p.type_) for t in new_type ): cur_type = p.type_ else: cur_type = None if cur_type is not None: if sc is not None: if sc[1] == "nested_crossproduct": for _ in range(sc[0]): cur_type = cwl.ArraySchema(items=cur_type, type_="array") else: cur_type = cwl.ArraySchema(items=cur_type, type_="array") new_type.append(cur_type) if len(new_type) == 1: new_type = new_type[0] if linkMerge == "merge_nested": return cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": return merge_flatten_type(new_type) elif isinstance(sourcenames, list) and len(sourcenames) > 1: return cwl.ArraySchema(items=new_type, type_="array") else: return new_type def param_for_source_id( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, scatter_context: Optional[list[Optional[tuple[int, str]]]] = None, ) -> Union[list[cwl.InputParameter], cwl.InputParameter]: """Find the process input parameter that matches one of the given sourcenames.""" if isinstance(sourcenames, str): sourcenames = [sourcenames] params: list[cwl.InputParameter] = [] for sourcename in sourcenames: if not isinstance(process, cwl.Workflow): for param in process.inputs: if param.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(param) if scatter_context is not None: scatter_context.append(None) targets = [process] if parent: 
targets.append(parent) for target in targets: if isinstance(target, cwl.Workflow): for inp in target.inputs: if inp.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(inp) if scatter_context is not None: scatter_context.append(None) for step in target.steps: if ( "/".join(sourcename.split("#")[-1].split("/")[:-1]) == step.id.split("#")[-1] and step.out ): step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) for outp in step.out: outp_id = outp if isinstance(outp, str) else outp.id if ( outp_id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): if step_run and step_run.outputs: for output in step_run.outputs: if ( output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): params.append(output) if scatter_context is not None: if isinstance(step.scatter, str): scatter_context.append( ( 1, step.scatterMethod or "dotproduct", ) ) elif isinstance( step.scatter, MutableSequence ): scatter_context.append( ( len(step.scatter), step.scatterMethod or "dotproduct", ) ) else: scatter_context.append(None) if len(params) == 1: return params[0] elif len(params) > 1: return params raise WorkflowException( "param {} not found in {}\n{}.".format( sourcename, yaml_dumps(cwl.save(process)), (f" or\n {yaml_dumps(cwl.save(parent))}" if parent is not None else ""), ) ) cwl_utils-0.37/cwl_utils/parser/cwl_v1_1.py # This file was autogenerated using schema-salad-tool --codegen=python # The code itself is released under the Apache 2.0 license and the help text is # subject to the license of the original schema.
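The content-limit helpers from `cwl_v1_0_utils.py` above are simple enough to demonstrate standalone. This sketch reproduces their truncation behaviour (`CONTENT_LIMIT` and the function bodies are taken from the module above; the `io.BytesIO` streams are illustrative):

```python
import io

# Mirrors cwl_v1_0_utils.CONTENT_LIMIT: reads are capped at 64 kB.
CONTENT_LIMIT = 64 * 1024


def content_limit_respected_read_bytes(f):
    """Read file content up to 64 kB; larger content is silently truncated."""
    return f.read(CONTENT_LIMIT)


def content_limit_respected_read(f):
    """Same limit, decoded as a UTF-8 string."""
    return content_limit_respected_read_bytes(f).decode("utf-8")


# A stream slightly larger than the limit is cut off at exactly 64 kB.
big = io.BytesIO(b"x" * (CONTENT_LIMIT + 100))
assert len(content_limit_respected_read_bytes(big)) == CONTENT_LIMIT
```

This matches the behaviour CWL mandates for `loadContents`: only the first 64 KiB of a file is made available to expressions, so tools should not rely on seeing the full content of larger files.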
import copy import logging import os import pathlib import tempfile import uuid as _uuid__ # pylint: disable=unused-import # noqa: F401 import xml.sax # nosec from abc import ABC, abstractmethod from collections.abc import MutableMapping, MutableSequence, Sequence from io import StringIO from itertools import chain from typing import Any, Optional, Union, cast from urllib.parse import quote, urldefrag, urlparse, urlsplit, urlunsplit from urllib.request import pathname2url from rdflib import Graph from rdflib.plugins.parsers.notation3 import BadSyntax from ruamel.yaml.comments import CommentedMap from schema_salad.exceptions import SchemaSaladException, ValidationException from schema_salad.fetcher import DefaultFetcher, Fetcher, MemoryCachingFetcher from schema_salad.sourceline import SourceLine, add_lc_filename from schema_salad.utils import CacheType, yaml_no_ts # requires schema-salad v8.2+ _vocab: dict[str, str] = {} _rvocab: dict[str, str] = {} _logger = logging.getLogger("salad") IdxType = MutableMapping[str, tuple[Any, "LoadingOptions"]] class LoadingOptions: idx: IdxType fileuri: Optional[str] baseuri: str namespaces: MutableMapping[str, str] schemas: MutableSequence[str] original_doc: Optional[Any] addl_metadata: MutableMapping[str, Any] fetcher: Fetcher vocab: dict[str, str] rvocab: dict[str, str] cache: CacheType imports: list[str] includes: list[str] no_link_check: Optional[bool] container: Optional[str] def __init__( self, fetcher: Optional[Fetcher] = None, namespaces: Optional[dict[str, str]] = None, schemas: Optional[list[str]] = None, fileuri: Optional[str] = None, copyfrom: Optional["LoadingOptions"] = None, original_doc: Optional[Any] = None, addl_metadata: Optional[dict[str, str]] = None, baseuri: Optional[str] = None, idx: Optional[IdxType] = None, imports: Optional[list[str]] = None, includes: Optional[list[str]] = None, no_link_check: Optional[bool] = None, container: Optional[str] = None, ) -> None: """Create a LoadingOptions object.""" 
self.original_doc = original_doc if idx is not None: self.idx = idx else: self.idx = copyfrom.idx if copyfrom is not None else {} if fileuri is not None: self.fileuri = fileuri else: self.fileuri = copyfrom.fileuri if copyfrom is not None else None if baseuri is not None: self.baseuri = baseuri else: self.baseuri = copyfrom.baseuri if copyfrom is not None else "" if namespaces is not None: self.namespaces = namespaces else: self.namespaces = copyfrom.namespaces if copyfrom is not None else {} if schemas is not None: self.schemas = schemas else: self.schemas = copyfrom.schemas if copyfrom is not None else [] if addl_metadata is not None: self.addl_metadata = addl_metadata else: self.addl_metadata = copyfrom.addl_metadata if copyfrom is not None else {} if imports is not None: self.imports = imports else: self.imports = copyfrom.imports if copyfrom is not None else [] if includes is not None: self.includes = includes else: self.includes = copyfrom.includes if copyfrom is not None else [] if no_link_check is not None: self.no_link_check = no_link_check else: self.no_link_check = copyfrom.no_link_check if copyfrom is not None else False if container is not None: self.container = container else: self.container = copyfrom.container if copyfrom is not None else None if fetcher is not None: self.fetcher = fetcher elif copyfrom is not None: self.fetcher = copyfrom.fetcher else: import requests from cachecontrol.caches import SeparateBodyFileCache from cachecontrol.wrapper import CacheControl root = pathlib.Path(os.environ.get("HOME", tempfile.gettempdir())) session = CacheControl( requests.Session(), cache=SeparateBodyFileCache(root / ".cache" / "salad"), ) self.fetcher: Fetcher = DefaultFetcher({}, session) self.cache = self.fetcher.cache if isinstance(self.fetcher, MemoryCachingFetcher) else {} self.vocab = _vocab self.rvocab = _rvocab if self.namespaces is not None: self.vocab = self.vocab.copy() self.rvocab = self.rvocab.copy() for k, v in self.namespaces.items(): 
self.vocab[k] = v self.rvocab[v] = k @property def graph(self) -> Graph: """Generate a merged rdflib.Graph from all entries in self.schemas.""" graph = Graph() if not self.schemas: return graph key = str(hash(tuple(self.schemas))) if key in self.cache: return cast(Graph, self.cache[key]) for schema in self.schemas: fetchurl = ( self.fetcher.urljoin(self.fileuri, schema) if self.fileuri is not None else pathlib.Path(schema).resolve().as_uri() ) if fetchurl not in self.cache or self.cache[fetchurl] is True: _logger.debug("Getting external schema %s", fetchurl) try: content = self.fetcher.fetch_text(fetchurl) except Exception as e: _logger.warning("Could not load extension schema %s: %s", fetchurl, str(e)) continue newGraph = Graph() err_msg = "unknown error" for fmt in ["xml", "turtle"]: try: newGraph.parse(data=content, format=fmt, publicID=str(fetchurl)) self.cache[fetchurl] = newGraph graph += newGraph break except (xml.sax.SAXParseException, TypeError, BadSyntax) as e: err_msg = str(e) else: _logger.warning("Could not load extension schema %s: %s", fetchurl, err_msg) self.cache[key] = graph return graph class Saveable(ABC): """Mark classes that have a save() and fromDoc() function.""" @classmethod @abstractmethod def fromDoc( cls, _doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, ) -> "Saveable": """Construct this object from the result of yaml.load().""" @abstractmethod def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: """Convert this object to a JSON/YAML friendly dictionary.""" def load_field( val: Union[str, dict[str, str]], fieldtype: "_Loader", baseuri: str, loadingOptions: LoadingOptions, lc: Optional[list[Any]] = None, ) -> Any: """Load field.""" if isinstance(val, MutableMapping): if "$import" in val: if loadingOptions.fileuri is None: raise SchemaSaladException("Cannot load $import without fileuri") url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri,
val["$import"]) result, metadata = _document_load_by_url( fieldtype, url, loadingOptions, ) loadingOptions.imports.append(url) return result if "$include" in val: if loadingOptions.fileuri is None: raise SchemaSaladException("Cannot load $include without fileuri") url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri, val["$include"]) val = loadingOptions.fetcher.fetch_text(url) loadingOptions.includes.append(url) return fieldtype.load(val, baseuri, loadingOptions, lc=lc) save_type = Optional[Union[MutableMapping[str, Any], MutableSequence[Any], int, float, bool, str]] def extract_type(val_type: type[Any]) -> str: """Take a type and extract its name as a string.""" val_str = str(val_type) return val_str.split("'")[1] def convert_typing(val_type: str) -> str: """Normalize type names to schema-salad types.""" if "None" in val_type: return "null" if "CommentedSeq" in val_type or "list" in val_type: return "array" if "CommentedMap" in val_type or "dict" in val_type: return "object" if "False" in val_type or "True" in val_type: return "boolean" return val_type def parse_errors(error_message: str) -> tuple[str, str, str]: """Parse error messages from several loaders into one error message.""" if not error_message.startswith("Expected"): return error_message, "", "" vals = error_message.split("\n") if len(vals) == 1: return error_message, "", "" types = set() for val in vals: individual_vals = val.split(" ") if val == "": continue if individual_vals[1] == "one": individual_vals = val.split("(")[1].split(",") for t in individual_vals: types.add(t.strip(" ").strip(")\n")) elif individual_vals[2] == "<class": types.add(individual_vals[3].strip(">").replace("'", "")) elif individual_vals[0] == "Value": types.add(individual_vals[-1].strip(".")) else: types.add(individual_vals[1].replace(",", "")) types = {val for val in types if val != "NoneType"} if "str" in types: types = {convert_typing(val) for val in types if "'" not in val} to_print = "" for val in types: if "'" in val: to_print = "value" if
len(types) == 1 else "values" if to_print == "": to_print = "type" if len(types) == 1 else "types" verb_tensage = "is" if len(types) == 1 else "are" return str(types).replace("{", "(").replace("}", ")").replace("'", ""), to_print, verb_tensage def save( val: Any, top: bool = True, base_url: str = "", relative_uris: bool = True, ) -> save_type: if isinstance(val, Saveable): return val.save(top=top, base_url=base_url, relative_uris=relative_uris) if isinstance(val, MutableSequence): return [save(v, top=False, base_url=base_url, relative_uris=relative_uris) for v in val] if isinstance(val, MutableMapping): newdict = {} for key in val: newdict[key] = save(val[key], top=False, base_url=base_url, relative_uris=relative_uris) return newdict if val is None or isinstance(val, (int, float, bool, str)): return val raise Exception("Not Saveable: %s" % type(val)) def save_with_metadata( val: Any, valLoadingOpts: LoadingOptions, top: bool = True, base_url: str = "", relative_uris: bool = True, ) -> save_type: """Save and set $namespaces, $schemas, $base and any other metadata fields at the top level.""" saved_val = save(val, top, base_url, relative_uris) newdict: MutableMapping[str, Any] = {} if isinstance(saved_val, MutableSequence): newdict = {"$graph": saved_val} elif isinstance(saved_val, MutableMapping): newdict = saved_val if valLoadingOpts.namespaces: newdict["$namespaces"] = valLoadingOpts.namespaces if valLoadingOpts.schemas: newdict["$schemas"] = valLoadingOpts.schemas if valLoadingOpts.baseuri: newdict["$base"] = valLoadingOpts.baseuri for k, v in valLoadingOpts.addl_metadata.items(): if k not in newdict: newdict[k] = v return newdict def expand_url( url: str, base_url: str, loadingOptions: LoadingOptions, scoped_id: bool = False, vocab_term: bool = False, scoped_ref: Optional[int] = None, ) -> str: if url in ("@id", "@type"): return url if vocab_term and url in loadingOptions.vocab: return url if bool(loadingOptions.vocab) and ":" in url: prefix = url.split(":")[0] 
if prefix in loadingOptions.vocab: url = loadingOptions.vocab[prefix] + url[len(prefix) + 1 :] split = urlsplit(url) if ( (bool(split.scheme) and split.scheme in loadingOptions.fetcher.supported_schemes()) or url.startswith("$(") or url.startswith("${") ): pass elif scoped_id and not bool(split.fragment): splitbase = urlsplit(base_url) frg = "" if bool(splitbase.fragment): frg = splitbase.fragment + "/" + split.path else: frg = split.path pt = splitbase.path if splitbase.path != "" else "/" url = urlunsplit((splitbase.scheme, splitbase.netloc, pt, splitbase.query, frg)) elif scoped_ref is not None and not bool(split.fragment): splitbase = urlsplit(base_url) sp = splitbase.fragment.split("/") n = scoped_ref while n > 0 and len(sp) > 0: sp.pop() n -= 1 sp.append(url) url = urlunsplit( ( splitbase.scheme, splitbase.netloc, splitbase.path, splitbase.query, "/".join(sp), ) ) else: url = loadingOptions.fetcher.urljoin(base_url, url) if vocab_term: split = urlsplit(url) if bool(split.scheme): if url in loadingOptions.rvocab: return loadingOptions.rvocab[url] else: raise ValidationException(f"Term {url!r} not in vocabulary") return url class _Loader: def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: pass class _AnyLoader(_Loader): def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc is not None: return doc raise ValidationException("Expected non-null") class _PrimitiveLoader(_Loader): def __init__(self, tp: Union[type, tuple[type[str], type[str]]]) -> None: self.tp = tp def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, self.tp): raise ValidationException(f"Expected a {self.tp} but got {doc.__class__.__name__}") return doc def __repr__(self) -> str: return 
str(self.tp) class _ArrayLoader(_Loader): def __init__(self, items: _Loader) -> None: self.items = items def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableSequence): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an array." ) r: list[Any] = [] errors: list[SchemaSaladException] = [] fields: list[str] = [] for i in range(0, len(doc)): try: lf = load_field( doc[i], _UnionLoader([self, self.items]), baseuri, loadingOptions, lc=lc ) flatten = loadingOptions.container != "@list" if flatten and isinstance(lf, MutableSequence): r.extend(lf) else: r.append(lf) if isinstance(doc[i], CommentedMap): if doc[i].get("id") is not None: if doc[i].get("id") in fields: errors.append( ValidationException( f"Duplicate field {doc[i].get('id')!r}", SourceLine(doc[i], "id", str), [], ) ) else: fields.append(doc[i].get("id")) except ValidationException as e: e = ValidationException( "array item is invalid because", SourceLine(doc, i, str), [e] ) errors.append(e) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return f"array<{self.items}>" class _MapLoader(_Loader): def __init__( self, values: _Loader, name: Optional[str] = None, container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.values = values self.name = name self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException(f"Expected a map, was {type(doc)}") if self.container is not None or self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, container=self.container, no_link_check=self.no_link_check ) r: dict[str, Any] = {} 
errors: list[SchemaSaladException] = [] for k, v in doc.items(): try: lf = load_field(v, self.values, baseuri, loadingOptions, lc) r[k] = lf except ValidationException as e: errors.append(e.with_sourceline(SourceLine(doc, k, str))) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return self.name if self.name is not None else f"map<string, {self.values}>" class _EnumLoader(_Loader): def __init__(self, symbols: Sequence[str], name: str) -> None: self.symbols = symbols self.name = name def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc in self.symbols: return doc raise ValidationException(f"Expected one of {self.symbols}") def __repr__(self) -> str: return self.name class _SecondaryDSLLoader(_Loader): def __init__(self, inner: _Loader) -> None: self.inner = inner def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: r: list[dict[str, Any]] = [] if isinstance(doc, MutableSequence): for d in doc: if isinstance(d, str): if d.endswith("?"): r.append({"pattern": d[:-1], "required": False}) else: r.append({"pattern": d}) elif isinstance(d, dict): new_dict: dict[str, Any] = {} dict_copy = copy.deepcopy(d) if "pattern" in dict_copy: new_dict["pattern"] = dict_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {d}" ) new_dict["required"] = ( dict_copy.pop("required") if "required" in dict_copy else None ) if len(dict_copy): raise ValidationException( "Unallowed values in secondaryFiles specification entry: {}".format( dict_copy ) ) r.append(new_dict) else: raise ValidationException( "Expected a string or sequence of (strings or mappings)."
) elif isinstance(doc, MutableMapping): new_dict = {} doc_copy = copy.deepcopy(doc) if "pattern" in doc_copy: new_dict["pattern"] = doc_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {doc}" ) new_dict["required"] = doc_copy.pop("required") if "required" in doc_copy else None if len(doc_copy): raise ValidationException( f"Unallowed values in secondaryFiles specification entry: {doc_copy}" ) r.append(new_dict) elif isinstance(doc, str): if doc.endswith("?"): r.append({"pattern": doc[:-1], "required": False}) else: r.append({"pattern": doc}) else: raise ValidationException("Expected str or sequence of str") return self.inner.load(r, baseuri, loadingOptions, docRoot, lc=lc) class _RecordLoader(_Loader): def __init__( self, classtype: type[Saveable], container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.classtype = classtype self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an object." 
) if self.container is not None or self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, container=self.container, no_link_check=self.no_link_check ) return self.classtype.fromDoc(doc, baseuri, loadingOptions, docRoot=docRoot) def __repr__(self) -> str: return str(self.classtype.__name__) class _ExpressionLoader(_Loader): def __init__(self, items: type[str]) -> None: self.items = items def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, str): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is a str." ) return doc class _UnionLoader(_Loader): def __init__(self, alternates: Sequence[_Loader], name: Optional[str] = None) -> None: self.alternates = alternates self.name = name def add_loaders(self, loaders: Sequence[_Loader]) -> None: self.alternates = tuple(loader for loader in chain(self.alternates, loaders)) def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: errors = [] if lc is None: lc = [] for t in self.alternates: try: return t.load(doc, baseuri, loadingOptions, docRoot=docRoot, lc=lc) except ValidationException as e: if isinstance(t, _ArrayLoader) and len(self.alternates) > 1: continue if isinstance(doc, (CommentedMap, dict)): if "class" in doc: if str(doc.get("class")) == str(t): errors.append( ValidationException( f"Object `{baseuri.split('/')[-1]}` is not valid because:", SourceLine(doc, next(iter(doc)), str), [e], ) ) else: if "array" in str(t): continue else: if "id" in doc: id = baseuri.split("/")[-1] + "#" + str(doc.get("id")) if "id" in lc: errors.append( ValidationException( f"checking object `{id}` using `{t}`", SourceLine(lc, "id", str), [e], ) ) else: errors.append( ValidationException( f"checking object `{id}` using `{t}`", SourceLine(lc, 
doc.get("id"), str), [e], ) ) else: if not isinstance( t, (_PrimitiveLoader) ): # avoids 'tried was {x}' errors errors.append( ValidationException(f"tried `{t}` but", None, [e]) ) else: # avoids "tried but x" and instead returns the values for parsing errors.append(ValidationException("", None, [e])) if isinstance(doc, (CommentedMap, dict)) and "class" in doc: if str(doc.get("class")) not in str(self.alternates): errors.append( ValidationException( "Field `class` contains undefined reference to " + "`" + "/".join(baseuri.split("/")[0:-1]) + "/" + str(doc.get("class")) + "`", SourceLine(doc, "class", str), [], ) ) raise ValidationException("", None, errors, "*") def __repr__(self) -> str: return self.name if self.name is not None else " | ".join(str(a) for a in self.alternates) class _URILoader(_Loader): def __init__( self, inner: _Loader, scoped_id: bool, vocab_term: bool, scoped_ref: Optional[int], no_link_check: Optional[bool], ) -> None: self.inner = inner self.scoped_id = scoped_id self.vocab_term = vocab_term self.scoped_ref = scoped_ref self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, no_link_check=self.no_link_check ) if isinstance(doc, MutableSequence): newdoc = [] for i in doc: if isinstance(i, str): newdoc.append( expand_url( i, baseuri, loadingOptions, self.scoped_id, self.vocab_term, self.scoped_ref, ) ) else: newdoc.append(i) doc = newdoc elif isinstance(doc, str): doc = expand_url( doc, baseuri, loadingOptions, self.scoped_id, self.vocab_term, self.scoped_ref, ) if isinstance(doc, str): if not loadingOptions.no_link_check: errors = [] try: if not loadingOptions.fetcher.check_exists(doc): errors.append( ValidationException(f"contains undefined reference to `{doc}`") ) except ValidationException: pass if len(errors) > 0: raise 
ValidationException("", None, errors) return self.inner.load(doc, baseuri, loadingOptions, lc=lc) class _TypeDSLLoader(_Loader): def __init__(self, inner: _Loader, refScope: Optional[int], salad_version: str) -> None: self.inner = inner self.refScope = refScope self.salad_version = salad_version def resolve( self, doc: str, baseuri: str, loadingOptions: LoadingOptions, ) -> Union[list[Union[dict[str, Any], str]], dict[str, Any], str]: doc_ = doc optional = False if doc_.endswith("?"): optional = True doc_ = doc_[0:-1] if doc_.endswith("[]"): salad_versions = [int(v) for v in self.salad_version[1:].split(".")] items: Union[list[Union[dict[str, Any], str]], dict[str, Any], str] = "" rest = doc_[0:-2] if salad_versions < [1, 3]: if rest.endswith("[]"): # To show the error message with the original type return doc else: items = expand_url(rest, baseuri, loadingOptions, False, True, self.refScope) else: items = self.resolve(rest, baseuri, loadingOptions) if isinstance(items, str): items = expand_url(items, baseuri, loadingOptions, False, True, self.refScope) expanded: Union[dict[str, Any], str] = {"type": "array", "items": items} else: expanded = expand_url(doc_, baseuri, loadingOptions, False, True, self.refScope) if optional: return ["null", expanded] else: return expanded def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if isinstance(doc, MutableSequence): r: list[Any] = [] for d in doc: if isinstance(d, str): resolved = self.resolve(d, baseuri, loadingOptions) if isinstance(resolved, MutableSequence): for i in resolved: if i not in r: r.append(i) else: if resolved not in r: r.append(resolved) else: r.append(d) doc = r elif isinstance(doc, str): doc = self.resolve(doc, baseuri, loadingOptions) return self.inner.load(doc, baseuri, loadingOptions, lc=lc) class _IdMapLoader(_Loader): def __init__(self, inner: _Loader, mapSubject: str, mapPredicate: Optional[str]) -> None: 
self.inner = inner self.mapSubject = mapSubject self.mapPredicate = mapPredicate def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if isinstance(doc, MutableMapping): r: list[Any] = [] for k in doc.keys(): val = doc[k] if isinstance(val, CommentedMap): v = copy.copy(val) v.lc.data = val.lc.data v.lc.filename = val.lc.filename v[self.mapSubject] = k r.append(v) elif isinstance(val, MutableMapping): v2 = copy.copy(val) v2[self.mapSubject] = k r.append(v2) else: if self.mapPredicate: v3 = {self.mapPredicate: val} v3[self.mapSubject] = k r.append(v3) else: raise ValidationException("No mapPredicate") doc = r return self.inner.load(doc, baseuri, loadingOptions, lc=lc) def _document_load( loader: _Loader, doc: Union[str, MutableMapping[str, Any], MutableSequence[Any]], baseuri: str, loadingOptions: LoadingOptions, addl_metadata_fields: Optional[MutableSequence[str]] = None, ) -> tuple[Any, LoadingOptions]: if isinstance(doc, str): return _document_load_by_url( loader, loadingOptions.fetcher.urljoin(baseuri, doc), loadingOptions, addl_metadata_fields=addl_metadata_fields, ) if isinstance(doc, MutableMapping): addl_metadata = {} if addl_metadata_fields is not None: for mf in addl_metadata_fields: if mf in doc: addl_metadata[mf] = doc[mf] docuri = baseuri if "$base" in doc: baseuri = doc["$base"] loadingOptions = LoadingOptions( copyfrom=loadingOptions, namespaces=doc.get("$namespaces", None), schemas=doc.get("$schemas", None), baseuri=doc.get("$base", None), addl_metadata=addl_metadata, ) doc = copy.copy(doc) if "$namespaces" in doc: doc.pop("$namespaces") if "$schemas" in doc: doc.pop("$schemas") if "$base" in doc: doc.pop("$base") if "$graph" in doc: loadingOptions.idx[baseuri] = ( loader.load(doc["$graph"], baseuri, loadingOptions), loadingOptions, ) else: loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions, docRoot=baseuri), loadingOptions, ) if docuri != 
baseuri: loadingOptions.idx[docuri] = loadingOptions.idx[baseuri] return loadingOptions.idx[baseuri] if isinstance(doc, MutableSequence): loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions), loadingOptions, ) return loadingOptions.idx[baseuri] raise ValidationException( "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc) ) def _document_load_by_url( loader: _Loader, url: str, loadingOptions: LoadingOptions, addl_metadata_fields: Optional[MutableSequence[str]] = None, ) -> tuple[Any, LoadingOptions]: if url in loadingOptions.idx: return loadingOptions.idx[url] doc_url, frg = urldefrag(url) text = loadingOptions.fetcher.fetch_text(doc_url) textIO = StringIO(text) textIO.name = str(doc_url) yaml = yaml_no_ts() result = yaml.load(textIO) add_lc_filename(result, doc_url) loadingOptions = LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url) _document_load( loader, result, doc_url, loadingOptions, addl_metadata_fields=addl_metadata_fields, ) return loadingOptions.idx[url] def file_uri(path: str, split_frag: bool = False) -> str: """Transform a file path into a URL with file scheme.""" if path.startswith("file://"): return path if split_frag: pathsp = path.split("#", 2) frag = "#" + quote(str(pathsp[1])) if len(pathsp) == 2 else "" urlpath = pathname2url(str(pathsp[0])) else: urlpath = pathname2url(path) frag = "" if urlpath.startswith("//"): return f"file:{urlpath}{frag}" return f"file://{urlpath}{frag}" def prefix_url(url: str, namespaces: dict[str, str]) -> str: """Expand short forms into full URLs using the given namespace dictionary.""" for k, v in namespaces.items(): if url.startswith(v): return k + ":" + url[len(v) :] return url def save_relative_uri( uri: Any, base_url: str, scoped_id: bool, ref_scope: Optional[int], relative_uris: bool, ) -> Any: """Convert any URI to a relative one, obeying the scoping rules.""" if isinstance(uri, MutableSequence): return [save_relative_uri(u, base_url, scoped_id, ref_scope, 
relative_uris) for u in uri] elif isinstance(uri, str): if not relative_uris or uri == base_url: return uri urisplit = urlsplit(uri) basesplit = urlsplit(base_url) if urisplit.scheme == basesplit.scheme and urisplit.netloc == basesplit.netloc: if urisplit.path != basesplit.path: p = os.path.relpath(urisplit.path, os.path.dirname(basesplit.path)) if urisplit.fragment: p = p + "#" + urisplit.fragment return p basefrag = basesplit.fragment + "/" if ref_scope: sp = basefrag.split("/") i = 0 while i < ref_scope: sp.pop() i += 1 basefrag = "/".join(sp) if urisplit.fragment.startswith(basefrag): return urisplit.fragment[len(basefrag) :] return urisplit.fragment return uri else: return save(uri, top=False, base_url=base_url, relative_uris=relative_uris) def shortname(inputid: str) -> str: """ Compute the shortname of a fully qualified identifier. See https://w3id.org/cwl/v1.2/SchemaSalad.html#Short_names. """ parsed_id = urlparse(inputid) if parsed_id.fragment: return parsed_id.fragment.split("/")[-1] return parsed_id.path.split("/")[-1] def parser_info() -> str: return "org.w3id.cwl.v1_1" class Documented(Saveable): pass class RecordField(Documented): """ A field of a record. 
""" name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` 
field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) 
) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( 
self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class RecordSchema(Saveable): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, 
type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class EnumSchema(Saveable): """ Define an enumerated type. """ name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, EnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = 
doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type"]) class ArraySchema(Saveable): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ArraySchema": _doc = 
copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class MapSchema(Saveable): def __init__( self, type_: Any, values: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.type_ = type_ self.values = values def __eq__(self, other: Any) -> bool: if isinstance(other, MapSchema): return bool(self.type_ == other.type_ and self.values == other.values) return False def __hash__(self) -> int: return hash((self.type_, self.values)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MapSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Map_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], 
detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("values") is None: raise ValidationException("missing required field `values`", None, []) values = load_field( _doc.get("values"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("values") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `values`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("values") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [e], detailed_message=f"the `values` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `type`, `values`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") 
_constructed = cls( type_=type_, values=values, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) if self.values is not None: u = save_relative_uri(self.values, base_url, False, 2, relative_uris) r["values"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["type", "values"]) class UnionSchema(Saveable): def __init__( self, names: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.names = names self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, UnionSchema): return bool(self.names == other.names and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.names, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "UnionSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("names") is None: raise ValidationException("missing required field `names`", None, []) names = load_field( _doc.get("names"), 
uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("names") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `names`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("names") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [e], detailed_message=f"the `names` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Union_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `names`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( names=names, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.names is not None: u = save_relative_uri(self.names, base_url, False, 2, relative_uris) r["names"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["names", "type"]) class CWLArraySchema(ArraySchema): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if 
extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the 
`items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, 
self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class CWLRecordField(RecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( 
str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise 
ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, 
top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class CWLRecordSchema(RecordSchema): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader, 
baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with 
value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class File(Saveable): """ Represents a file (or group of files when `secondaryFiles` is provided) that will be accessible by tools using standard POSIX file system call API such as open(2) and read(2). Files are represented as objects with `class` of `File`. File objects have a number of properties that provide metadata about the file. The `location` property of a File is a URI that uniquely identifies the file. 
Implementations must support the file:// URI scheme and may support other schemes such as http://. The value of `location` may also be a relative reference, in which case it must be resolved relative to the URI of the document it appears in.

Alternately to `location`, implementations must also accept the `path` property on File, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs).

If no `location` or `path` is specified, a file object must specify `contents` with the UTF-8 text content of the file. This is a "file literal". File literals do not correspond to external resources, but are created on disk with `contents` when needed for executing a tool. Where appropriate, expressions can return file literals to define new files at runtime. The maximum size of `contents` is 64 kilobytes.

The `basename` property defines the filename on disk where the file is staged. This may differ from the resource name. If not provided, `basename` must be computed from the last path part of `location` and made available to expressions.

The `secondaryFiles` property is a list of File or Directory objects that must be staged in the same directory as the primary file. It is an error for file names to be duplicated in `secondaryFiles`.

The `size` property is the size in bytes of the File. It must be computed from the resource and made available to expressions.

The `checksum` field contains a cryptographic hash of the file content for use in verifying file contents. Implementations may, at user option, enable or disable computation of the `checksum` field for performance or other reasons. However, the ability to compute output checksums is required to pass the CWL conformance test suite.

When executing a CommandLineTool, the files and secondary files may be staged to an arbitrary directory, but must use the value of `basename` for the filename.
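A file literal as described above can be sketched as a plain mapping. This is an illustration only, not part of the generated parser; the `basename` and `contents` values are hypothetical:

```python
# Illustrative file literal: a File object carrying its data inline.
# No `location` or `path` is given; `contents` holds the UTF-8 text.
file_literal = {
    "class": "File",
    "basename": "greeting.txt",  # hypothetical name, for illustration
    "contents": "Hello, world!\n",
}

# The docstring caps `contents` at 64 kilobytes.
assert len(file_literal["contents"].encode("utf-8")) <= 64 * 1024
```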
The `path` property must be a file path in the context of the tool execution runtime (local to the compute node, or within the executing container). All computed properties should be available to expressions. File literals also must be staged and `path` must be set.

When collecting CommandLineTool outputs, `glob` matching returns file paths (with the `path` property) and the derived properties. This can all be modified by `outputEval`. Alternately, if the file `cwl.output.json` is present in the output, `outputBinding` is ignored.

File objects in the output must provide either a `location` URI or a `path` property in the context of the tool execution runtime (local to the compute node, or within the executing container).

When evaluating an ExpressionTool, file objects must be referenced via `location` (the expression tool does not have access to files on disk so `path` is meaningless) or as file literals. It is legal to return a file object with an existing `location` but a different `basename`. The `loadContents` field of ExpressionTool inputs behaves the same as on CommandLineTool inputs, however it is not meaningful on the outputs.

An ExpressionTool may forward file references from input to output by using the same value for `location`.
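The `size` and `checksum` properties above can be computed roughly as follows. CWL conventionally encodes `checksum` as `sha1$` followed by the SHA-1 hex digest; treat that encoding as an assumption to verify against the spec version in use, and the sample data as purely illustrative:

```python
import hashlib

data = b"Hello, world!\n"  # illustrative file content

# `size` is the size in bytes of the File.
size = len(data)

# Assumed convention: "sha1$" plus the SHA-1 hex digest of the content.
checksum = "sha1$" + hashlib.sha1(data).hexdigest()

assert size == 14
assert checksum.startswith("sha1$") and len(checksum) == len("sha1$") + 40
```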
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, dirname: Optional[Any] = None, nameroot: Optional[Any] = None, nameext: Optional[Any] = None, checksum: Optional[Any] = None, size: Optional[Any] = None, secondaryFiles: Optional[Any] = None, format: Optional[Any] = None, contents: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "File" self.location = location self.path = path self.basename = basename self.dirname = dirname self.nameroot = nameroot self.nameext = nameext self.checksum = checksum self.size = size self.secondaryFiles = secondaryFiles self.format = format self.contents = contents def __eq__(self, other: Any) -> bool: if isinstance(other, File): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.dirname == other.dirname and self.nameroot == other.nameroot and self.nameext == other.nameext and self.checksum == other.checksum and self.size == other.size and self.secondaryFiles == other.secondaryFiles and self.format == other.format and self.contents == other.contents ) return False def __hash__(self) -> int: return hash( ( self.class_, self.location, self.path, self.basename, self.dirname, self.nameroot, self.nameext, self.checksum, self.size, self.secondaryFiles, self.format, self.contents, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "File": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise 
ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_File_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) dirname = None if "dirname" in _doc: try: dirname = load_field( _doc.get("dirname"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dirname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dirname`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("dirname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [e], detailed_message=f"the `dirname` field with value `{val}` " "is not valid because:", ) ) nameroot = None if "nameroot" in _doc: try: nameroot = load_field( _doc.get("nameroot"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameroot") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameroot`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameroot") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [e], detailed_message=f"the `nameroot` field with value `{val}` " "is not valid because:", ) ) nameext = None if "nameext" in _doc: try: nameext = load_field( _doc.get("nameext"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameext") ) except ValidationException as e: error_message, to_print, 
verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameext`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameext") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [e], detailed_message=f"the `nameext` field with value `{val}` " "is not valid because:", ) ) checksum = None if "checksum" in _doc: try: checksum = load_field( _doc.get("checksum"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("checksum") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `checksum`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("checksum") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [e], detailed_message=f"the `checksum` field with value `{val}` " "is not valid because:", ) ) size = None if "size" in _doc: try: size = load_field( _doc.get("size"), 
union_of_None_type_or_inttype, baseuri, loadingOptions, lc=_doc.get("size") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `size`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("size") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [e], detailed_message=f"the `size` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` 
field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) contents = None if "contents" in _doc: try: contents = load_field( _doc.get("contents"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("contents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `contents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("contents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [e], detailed_message=f"the `contents` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `dirname`, `nameroot`, `nameext`, `checksum`, `size`, `secondaryFiles`, `format`, `contents`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, dirname=dirname, nameroot=nameroot, nameext=nameext, checksum=checksum, size=size, secondaryFiles=secondaryFiles, format=format, contents=contents, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if 
self.path is not None: u = save_relative_uri(self.path, base_url, False, None, relative_uris) r["path"] = u if self.basename is not None: r["basename"] = save( self.basename, top=False, base_url=base_url, relative_uris=relative_uris ) if self.dirname is not None: r["dirname"] = save( self.dirname, top=False, base_url=base_url, relative_uris=relative_uris ) if self.nameroot is not None: r["nameroot"] = save( self.nameroot, top=False, base_url=base_url, relative_uris=relative_uris ) if self.nameext is not None: r["nameext"] = save( self.nameext, top=False, base_url=base_url, relative_uris=relative_uris ) if self.checksum is not None: r["checksum"] = save( self.checksum, top=False, base_url=base_url, relative_uris=relative_uris ) if self.size is not None: r["size"] = save( self.size, top=False, base_url=base_url, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, base_url, True, None, relative_uris) r["format"] = u if self.contents is not None: r["contents"] = save( self.contents, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "location", "path", "basename", "dirname", "nameroot", "nameext", "checksum", "size", "secondaryFiles", "format", "contents", ] ) class Directory(Saveable): """ Represents a directory to present to a command line tool. Directories are represented as objects with `class` of `Directory`. Directory objects have a number of properties that provide metadata about the directory. The `location` property of a Directory is a URI that uniquely identifies the directory. 
Implementations must support the file:// URI scheme and may support other schemes such as http://. Alternately to `location`, implementations must also accept the `path` property on Directory, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs). A Directory object may have a `listing` field. This is a list of File and Directory objects that are contained in the Directory. For each entry in `listing`, the `basename` property defines the name of the File or Subdirectory when staged to disk. If `listing` is not provided, the implementation must have some way of fetching the Directory listing at runtime based on the `location` field. If a Directory does not have `location`, it is a Directory literal. A Directory literal must provide `listing`. Directory literals must be created on disk at runtime as needed. The resources in a Directory literal do not need to have any implied relationship in their `location`. For example, a Directory listing may contain two files located on different hosts. It is the responsibility of the runtime to ensure that those files are staged to disk appropriately. Secondary files associated with files in `listing` must also be staged to the same Directory. When executing a CommandLineTool, Directories must be recursively staged first and have local values of `path` assigned. Directory objects in CommandLineTool output must provide either a `location` URI or a `path` property in the context of the tool execution runtime (local to the compute node, or within the executing container). An ExpressionTool may forward file references from input to output by using the same value for `location`. Name conflicts (the same `basename` appearing multiple times in `listing` or in any entry in `secondaryFiles` in the listing) are a fatal error. 
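As an illustrative sketch only (not part of the generated parser), a runtime might detect the basename conflicts described above like this; the listing content is hypothetical:

```python
# Hypothetical example: scanning a Directory literal's `listing` for
# duplicate basenames, which the text above declares a fatal error.
listing = [
    {"class": "File", "basename": "reads.fastq", "contents": ""},
    {"class": "Directory", "basename": "sub", "listing": []},
    {"class": "File", "basename": "reads.fastq", "contents": ""},  # conflict
]
basenames = [entry["basename"] for entry in listing]
duplicates = {b for b in basenames if basenames.count(b) > 1}
assert duplicates == {"reads.fastq"}  # a conformant runtime must reject this
```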
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, listing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Directory" self.location = location self.path = path self.basename = basename self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, Directory): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.listing == other.listing ) return False def __hash__(self) -> int: return hash( (self.class_, self.location, self.path, self.basename, self.listing) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Directory": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Directory_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) listing = None if "listing" in _doc: try: listing = load_field( _doc.get("listing"), union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid 
because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if self.path is not None: u = save_relative_uri(self.path, base_url, False, None, relative_uris) r["path"] = u if self.basename is not None: r["basename"] = save( self.basename, top=False, base_url=base_url, relative_uris=relative_uris ) if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["class", "location", "path", "basename", "listing"]) class Labeled(Saveable): pass class Identified(Saveable): pass class LoadContents(Saveable): pass class FieldBase(Labeled): pass class InputFormat(Saveable): pass class OutputFormat(Saveable): pass class Parameter(FieldBase, Documented, Identified): """ Define an input or output parameter to a process. """ pass class InputBinding(Saveable): def __init__( self, loadContents: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents def __eq__(self, other: Any) -> bool: if isinstance(other, InputBinding): return bool(self.loadContents == other.loadContents) return False def __hash__(self) -> int: return hash((self.loadContents)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["loadContents"]) class IOSchema(Labeled, Documented): pass class InputSchema(IOSchema): pass class OutputSchema(IOSchema): pass class InputRecordField(CWLRecordField, FieldBase, InputFormat, LoadContents): name: str 
def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format self.loadContents = loadContents self.loadListing = loadListing def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, self.loadContents, self.loadListing, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not 
valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field 
" f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: 
val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, 
lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), 
[e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `loadContents`, `loadListing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, label=label, secondaryFiles=secondaryFiles, streamable=streamable, format=format, loadContents=loadContents, loadListing=loadListing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.streamable is not None: 
r["streamable"] = save( self.streamable, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.name, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "doc", "name", "type", "label", "secondaryFiles", "streamable", "format", "loadContents", "loadListing", ] ) class InputRecordSchema(CWLRecordSchema, InputSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> 
"InputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, 
"fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): 
val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( 
ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "doc", "name"]) class InputEnumSchema(EnumSchema, InputSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: 
self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc def __eq__(self, other: Any) -> bool: if isinstance(other, InputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_, self.label, self.doc)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if 
name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, 
base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", "label", "doc"]) class InputArraySchema(CWLArraySchema, InputSchema): name: str def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, InputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.items, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.items is not None: u = save_relative_uri(self.items, self.name, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type", "label", "doc", "name"]) class OutputRecordField(CWLRecordField, FieldBase, OutputFormat): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = 
loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field 
with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): 
val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, 
to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid 
because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, label=label, secondaryFiles=secondaryFiles, streamable=streamable, format=format, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, 
base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.name, True, None, relative_uris) r["format"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( ["doc", "name", "type", "label", "secondaryFiles", "streamable", "format"] ) class OutputRecordSchema(CWLRecordSchema, OutputSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() 
self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = 
cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) 
else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory 
level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "doc", "name"]) class OutputEnumSchema(EnumSchema, OutputSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc def __eq__(self, other: Any) -> bool: if isinstance(other, OutputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_, self.label, self.doc)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) 
_errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required 
field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the 
`label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.symbols is not None:
            u = save_relative_uri(self.symbols, self.name, True, None, relative_uris)
            r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "doc"])


class OutputArraySchema(CWLArraySchema, OutputSchema):
    name: str

    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, OutputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.name == other.name
            )
        return False

    def __hash__(self) -> int:
        return
hash((self.items, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, 
loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is 
not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.items is not None: u = save_relative_uri(self.items, self.name, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = 
frozenset(["items", "type", "label", "doc", "name"]) class InputParameter(Parameter, InputFormat, LoadContents): pass class OutputParameter(Parameter, OutputFormat): pass class ProcessRequirement(Saveable): """ A process requirement declares a prerequisite that may or must be fulfilled before executing a process. See [`Process.hints`](#process) and [`Process.requirements`](#process). Process requirements are the primary mechanism for specifying extensions to the CWL core specification. """ pass class Process(Identified, Labeled, Documented): """ The base executable type in CWL is the `Process` object defined by the document. Note that the `Process` object is abstract and cannot be directly executed. """ pass class InlineJavascriptRequirement(ProcessRequirement): """ Indicates that the workflow platform must support inline Javascript expressions. If this requirement is not present, the workflow platform must not perform expression interpolation. """ def __init__( self, expressionLib: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InlineJavascriptRequirement" self.expressionLib = expressionLib def __eq__(self, other: Any) -> bool: if isinstance(other, InlineJavascriptRequirement): return bool( self.class_ == other.class_ and self.expressionLib == other.expressionLib ) return False def __hash__(self) -> int: return hash((self.class_, self.expressionLib)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InlineJavascriptRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None:
raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InlineJavascriptRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e expressionLib = None if "expressionLib" in _doc: try: expressionLib = load_field( _doc.get("expressionLib"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("expressionLib") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expressionLib`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expressionLib") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [e], detailed_message=f"the `expressionLib` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `expressionLib`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: 
raise ValidationException("", None, _errors__, "*") _constructed = cls( expressionLib=expressionLib, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.expressionLib is not None: r["expressionLib"] = save( self.expressionLib, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "expressionLib"]) class CommandInputSchema(Saveable): pass class SchemaDefRequirement(ProcessRequirement): """ This field consists of an array of type definitions which must be used when interpreting the `inputs` and `outputs` fields. When a `type` field contains an IRI, the implementation must check if the type is defined in `schemaDefs` and use that definition. If the type is not found in `schemaDefs`, it is an error. The entries in `schemaDefs` must be processed in the order listed such that later schema definitions may refer to earlier schema definitions.
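The ordered-processing rule above can be sketched in plain Python. This is a hypothetical illustration only, not part of this parser: the `resolve_schema_defs` helper and the record dictionaries are invented for the example.

```python
# Sketch: process type definitions in the order listed, so that a later
# definition may refer (by name) to one defined earlier in the array.
def resolve_schema_defs(types: list) -> dict:
    defs: dict = {}
    for t in types:
        # Replace any string reference to an earlier definition with the
        # definition itself before registering this entry under its name.
        fields = [
            {**f, "type": defs.get(f["type"], f["type"])}
            for f in t.get("fields", [])
        ]
        defs[t["name"]] = {**t, "fields": fields}
    return defs

pair = {"name": "Pair", "type": "record",
        "fields": [{"name": "a", "type": "int"},
                   {"name": "b", "type": "int"}]}
wrapper = {"name": "Wrapper", "type": "record",
           "fields": [{"name": "p", "type": "Pair"}]}
# "Pair" is listed first, so "Wrapper" may refer to it.
resolved = resolve_schema_defs([pair, wrapper])
```

Reversing the list would leave the `Pair` reference unresolved, which is exactly the ordering error this requirement's wording rules out.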
""" def __init__( self, types: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SchemaDefRequirement" self.types = types def __eq__(self, other: Any) -> bool: if isinstance(other, SchemaDefRequirement): return bool(self.class_ == other.class_ and self.types == other.types) return False def __hash__(self) -> int: return hash((self.class_, self.types)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SchemaDefRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SchemaDefRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("types") is None: raise ValidationException("missing required field `types`", None, []) types = load_field( _doc.get("types"), array_of_union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader, baseuri, loadingOptions, lc=_doc.get("types") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `types`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("types") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `types` 
field is not valid because:", SourceLine(_doc, "types", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `types` field is not valid because:", SourceLine(_doc, "types", str), [e], detailed_message=f"the `types` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `types`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( types=types, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.types is not None: r["types"] = save( self.types, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "types"]) class SecondaryFileSchema(Saveable): def __init__( self, pattern: Any, required: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.pattern = pattern self.required = required def __eq__(self, other: Any) -> bool: if isinstance(other, SecondaryFileSchema): return bool( self.pattern == other.pattern and self.required == other.required ) return False def __hash__(self) -> int: return hash((self.pattern, self.required)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SecondaryFileSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("pattern") is None: raise ValidationException("missing required field `pattern`", None, []) pattern = load_field( _doc.get("pattern"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("pattern") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `pattern`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("pattern") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `pattern` field is not valid because:", SourceLine(_doc, "pattern", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " 
f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `pattern` field is not valid because:", SourceLine(_doc, "pattern", str), [e], detailed_message=f"the `pattern` field with value `{val}` " "is not valid because:", ) ) required = None if "required" in _doc: try: required = load_field( _doc.get("required"), union_of_None_type_or_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("required") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `required`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("required") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `required` field is not valid because:", SourceLine(_doc, "required", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `required` field is not valid because:", SourceLine(_doc, "required", str), [e], detailed_message=f"the `required` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `pattern`, `required`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( pattern=pattern, required=required, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return 
_constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.pattern is not None: r["pattern"] = save( self.pattern, top=False, base_url=base_url, relative_uris=relative_uris ) if self.required is not None: r["required"] = save( self.required, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["pattern", "required"]) class LoadListingRequirement(ProcessRequirement): """ Specify the desired behavior for loading the `listing` field of a Directory object for use by expressions. 
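As a rough illustration, the three `LoadListingEnum` modes (`no_listing`, `shallow_listing`, `deep_listing`) might be applied like this. This is a hypothetical sketch, not the reference implementation; `load_listing` is an invented helper.

```python
import os
from typing import Optional

def load_listing(path: str, mode: str = "no_listing") -> Optional[list]:
    # no_listing: leave the Directory's `listing` field unset entirely.
    if mode == "no_listing":
        return None
    entries = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            # deep_listing recurses into subdirectories;
            # shallow_listing leaves their listings unset.
            sub = load_listing(full, mode) if mode == "deep_listing" else None
            entries.append({"class": "Directory", "basename": name,
                            "listing": sub})
        else:
            entries.append({"class": "File", "basename": name})
    return entries
```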
""" def __init__( self, loadListing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "LoadListingRequirement" self.loadListing = loadListing def __eq__(self, other: Any) -> bool: if isinstance(other, LoadListingRequirement): return bool( self.class_ == other.class_ and self.loadListing == other.loadListing ) return False def __hash__(self) -> int: return hash((self.class_, self.loadListing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "LoadListingRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_LoadListingRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid 
because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `loadListing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadListing=loadListing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] 
= self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "loadListing"]) class EnvironmentDef(Saveable): """ Define an environment variable that will be set in the runtime environment by the workflow platform when executing the command line tool. May be the result of executing an expression, such as getting a parameter from input. """ def __init__( self, envName: Any, envValue: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.envName = envName self.envValue = envValue def __eq__(self, other: Any) -> bool: if isinstance(other, EnvironmentDef): return bool( self.envName == other.envName and self.envValue == other.envValue ) return False def __hash__(self) -> int: return hash((self.envName, self.envValue)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvironmentDef": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("envName") is None: raise ValidationException("missing required field `envName`", None, []) envName = load_field( _doc.get("envName"), strtype, baseuri, loadingOptions, lc=_doc.get("envName") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envName`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envName") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), [e], detailed_message=f"the `envName` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("envValue") is None: raise ValidationException("missing required field `envValue`", None, []) envValue = load_field( _doc.get("envValue"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("envValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [e], detailed_message=f"the `envValue` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: 
`envName`, `envValue`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envName=envName, envValue=envValue, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.envName is not None: r["envName"] = save( self.envName, top=False, base_url=base_url, relative_uris=relative_uris ) if self.envValue is not None: r["envValue"] = save( self.envValue, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["envName", "envValue"]) class CommandLineBinding(InputBinding): """ When listed under `inputBinding` in the input schema, the term "value" refers to the corresponding value in the input object. For binding objects listed in `CommandLineTool.arguments`, the term "value" refers to the effective value after evaluating `valueFrom`. The binding behavior when building the command line depends on the data type of the value. If there is a mismatch between the type described by the input schema and the effective value, such as resulting from an expression evaluation, an implementation must use the data type of the effective value. - **string**: Add `prefix` and the string to the command line. - **number**: Add `prefix` and the decimal representation to the command line. - **boolean**: If true, add `prefix` to the command line. If false, add nothing.
- **File**: Add `prefix` and the value of [`File.path`](#File) to the command line. - **Directory**: Add `prefix` and the value of [`Directory.path`](#Directory) to the command line. - **array**: If `itemSeparator` is specified, add `prefix` and join the array into a single string with `itemSeparator` separating the items. Otherwise first add `prefix`, then recursively process individual elements. If the array is empty, it does not add anything to the command line. - **object**: Add `prefix` only, and recursively add object fields for which `inputBinding` is specified. - **null**: Add nothing. """ def __init__( self, loadContents: Optional[Any] = None, position: Optional[Any] = None, prefix: Optional[Any] = None, separate: Optional[Any] = None, itemSeparator: Optional[Any] = None, valueFrom: Optional[Any] = None, shellQuote: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents self.position = position self.prefix = prefix self.separate = separate self.itemSeparator = itemSeparator self.valueFrom = valueFrom self.shellQuote = shellQuote def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineBinding): return bool( self.loadContents == other.loadContents and self.position == other.position and self.prefix == other.prefix and self.separate == other.separate and self.itemSeparator == other.itemSeparator and self.valueFrom == other.valueFrom and self.shellQuote == other.shellQuote ) return False def __hash__(self) -> int: return hash( ( self.loadContents, self.position, self.prefix, self.separate, self.itemSeparator, self.valueFrom, self.shellQuote, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, 
docRoot: Optional[str] = None ) -> "CommandLineBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) position = None if "position" in _doc: try: position = load_field( _doc.get("position"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("position") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `position`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("position") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [e], detailed_message=f"the `position` field with value `{val}` " "is not valid because:", ) ) prefix = None if "prefix" in _doc: try: prefix = load_field( _doc.get("prefix"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("prefix") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `prefix`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("prefix") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [e], detailed_message=f"the `prefix` field with value `{val}` " "is not valid because:", ) ) separate = None if "separate" in _doc: try: separate = load_field( _doc.get("separate"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("separate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `separate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("separate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( 
"the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [e], detailed_message=f"the `separate` field with value `{val}` " "is not valid because:", ) ) itemSeparator = None if "itemSeparator" in _doc: try: itemSeparator = load_field( _doc.get("itemSeparator"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("itemSeparator") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `itemSeparator`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("itemSeparator") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [e], detailed_message=f"the `itemSeparator` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) 
== "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) shellQuote = None if "shellQuote" in _doc: try: shellQuote = load_field( _doc.get("shellQuote"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("shellQuote") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shellQuote`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shellQuote") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [e], detailed_message=f"the `shellQuote` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: 
if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`, `position`, `prefix`, `separate`, `itemSeparator`, `valueFrom`, `shellQuote`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, position=position, prefix=prefix, separate=separate, itemSeparator=itemSeparator, valueFrom=valueFrom, shellQuote=shellQuote, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.position is not None: r["position"] = save( self.position, top=False, base_url=base_url, relative_uris=relative_uris ) if self.prefix is not None: r["prefix"] = save( self.prefix, top=False, base_url=base_url, relative_uris=relative_uris ) if self.separate is not None: r["separate"] = save( self.separate, top=False, base_url=base_url, relative_uris=relative_uris ) if self.itemSeparator is not None: r["itemSeparator"] = save( self.itemSeparator, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.shellQuote is not None: r["shellQuote"] = save( self.shellQuote, top=False, base_url=base_url, relative_uris=relative_uris, ) 
# top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "loadContents", "position", "prefix", "separate", "itemSeparator", "valueFrom", "shellQuote", ] ) class CommandOutputBinding(LoadContents): """ Describes how to generate an output parameter based on the files produced by a CommandLineTool. The output parameter value is generated by applying these operations in the following order: - glob - loadContents - outputEval - secondaryFiles """ def __init__( self, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, glob: Optional[Any] = None, outputEval: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents self.loadListing = loadListing self.glob = glob self.outputEval = outputEval def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputBinding): return bool( self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.glob == other.glob and self.outputEval == other.outputEval ) return False def __hash__(self) -> int: return hash((self.loadContents, self.loadListing, self.glob, self.outputEval)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, 
loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, 
"loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) glob = None if "glob" in _doc: try: glob = load_field( _doc.get("glob"), union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("glob") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `glob`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("glob") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `glob` field is not valid because:", SourceLine(_doc, "glob", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `glob` field is not valid because:", SourceLine(_doc, "glob", str), [e], detailed_message=f"the `glob` field with value `{val}` " "is not valid because:", ) ) outputEval = None if "outputEval" in _doc: try: outputEval = load_field( _doc.get("outputEval"), union_of_None_type_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outputEval") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputEval`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputEval") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputEval` field is not valid because:", SourceLine(_doc, "outputEval", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputEval` field is not valid because:", SourceLine(_doc, "outputEval", str), [e], detailed_message=f"the `outputEval` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`, `loadListing`, `glob`, `outputEval`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, loadListing=loadListing, glob=glob, outputEval=outputEval, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.glob is not None: r["glob"] = save( self.glob, top=False, base_url=base_url, relative_uris=relative_uris ) if self.outputEval is not None: r["outputEval"] = save( self.outputEval, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["loadContents", "loadListing", "glob", "outputEval"]) class CommandLineBindable(Saveable): def __init__( self, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineBindable): return bool(self.inputBinding == other.inputBinding) return False def __hash__(self) -> int: return hash((self.inputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineBindable": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["inputBinding"]) class CommandInputRecordField(InputRecordField, CommandLineBindable): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, inputBinding: Optional[Any] = None, 
extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format self.loadContents = loadContents self.loadListing = loadListing self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, self.loadContents, self.loadListing, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( 
_doc.get("type"), typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: 
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value 
`{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `loadContents`, `loadListing`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, label=label, secondaryFiles=secondaryFiles, streamable=streamable, format=format, loadContents=loadContents, loadListing=loadListing, inputBinding=inputBinding, 
extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
            r["streamable"] = save(
                self.streamable,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.format is not None:
            u = save_relative_uri(self.format, self.name, True, None, relative_uris)
            r["format"] = u
        if self.loadContents is not None:
            r["loadContents"] = save(
                self.loadContents,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.loadListing is not None:
            r["loadListing"] = save(
                self.loadListing,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
"doc", "name", "type", "label", "secondaryFiles", "streamable", "format", "loadContents", "loadListing", "inputBinding", ] ) class CommandInputRecordSchema( InputRecordSchema, CommandInputSchema, CommandLineBindable ): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.fields, self.type_, self.label, self.doc, self.name, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is 
None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", 
SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, 
relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "doc", "name", "inputBinding"]) class CommandInputEnumSchema(InputEnumSchema, CommandInputSchema, CommandLineBindable): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.name, self.symbols, self.type_, self.label, self.doc, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), 
uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) 
r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "doc", "inputBinding"])


class CommandInputArraySchema(
    InputArraySchema, CommandInputSchema, CommandLineBindable
):
    name: str

    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        inputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())
        self.inputBinding = inputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandInputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.name == other.name
                and self.inputBinding == other.inputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (self.items, self.type_, self.label, self.doc, self.name, self.inputBinding)
        )

    @classmethod
    def
fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) 
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `items`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("items")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `items` field is not valid because:",
                            SourceLine(_doc, "items", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `items` field is not valid because:",
                            SourceLine(_doc, "items", str),
                            [e],
                            detailed_message=f"the `items` field with value `{val}` is not valid because:",
                        )
                    )
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])

            type_ = load_field(
                _doc.get("type"),
                typedsl_Array_nameLoader_2,
                baseuri,
                loadingOptions,
                lc=_doc.get("type")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [e],
                            detailed_message=f"the `type` field with value `{val}` is not valid because:",
                        )
                    )
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [e],
                                detailed_message=f"the `label` field with value `{val}` is not valid because:",
                            )
                        )
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"),
                    union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("doc")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [e],
                                detailed_message=f"the `doc` field with value `{val}` is not valid because:",
                            )
                        )
        inputBinding = None
        if "inputBinding" in _doc:
            try:
                inputBinding = load_field(
                    _doc.get("inputBinding"),
                    union_of_None_type_or_CommandLineBindingLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("inputBinding")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `inputBinding`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("inputBinding")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `inputBinding` field is not valid because:",
                                SourceLine(_doc, "inputBinding", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `inputBinding` field is not valid because:",
                                SourceLine(_doc, "inputBinding", str),
                                [e],
                                detailed_message=f"the `inputBinding` field with value `{val}` is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`, `inputBinding`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            items=items,
            type_=type_,
            label=label,
            doc=doc,
            name=name,
            inputBinding=inputBinding,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.items is not None:
            u = save_relative_uri(self.items, self.name, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "doc", "name", "inputBinding"])


class CommandOutputRecordField(OutputRecordField):
    name: str

    def __init__(
        self,
        name: Any,
        type_: Any,
        doc: Optional[Any] = None,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        format: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())
        self.type_ = type_
        self.label = label
        self.secondaryFiles = secondaryFiles
        self.streamable = streamable
        self.format = format
        self.outputBinding = outputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputRecordField):
            return bool(
                self.doc == other.doc
                and self.name == other.name
                and self.type_ == other.type_
                and self.label == other.label
                and self.secondaryFiles == other.secondaryFiles
                and self.streamable == other.streamable
                and self.format == other.format
                and self.outputBinding == other.outputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.doc,
                self.name,
                self.type_,
                self.label,
                self.secondaryFiles,
                self.streamable,
                self.format,
                self.outputBinding,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandOutputRecordField":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
                    uri_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("name")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `name`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [e],
                                detailed_message=f"the `name` field with value `{val}` is not valid because:",
                            )
                        )
        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                _errors__.append(ValidationException("missing name"))
        if not __original_name_is_none:
            baseuri = cast(str, name)
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"),
                    union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("doc")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [e],
                                detailed_message=f"the `doc` field with value `{val}` is not valid because:",
                            )
                        )
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])

            type_ = load_field(
                _doc.get("type"),
                typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2,
                baseuri,
                loadingOptions,
                lc=_doc.get("type")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [e],
                            detailed_message=f"the `type` field with value `{val}` is not valid because:",
                        )
                    )
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [e],
                                detailed_message=f"the `label` field with value `{val}` is not valid because:",
                            )
                        )
        secondaryFiles = None
        if "secondaryFiles" in _doc:
            try:
                secondaryFiles = load_field(
                    _doc.get("secondaryFiles"),
                    secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("secondaryFiles")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `secondaryFiles`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("secondaryFiles")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `secondaryFiles` field is not valid because:",
                                SourceLine(_doc, "secondaryFiles", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `secondaryFiles` field is not valid because:",
                                SourceLine(_doc, "secondaryFiles", str),
                                [e],
                                detailed_message=f"the `secondaryFiles` field with value `{val}` is not valid because:",
                            )
                        )
        streamable = None
        if "streamable" in _doc:
            try:
                streamable = load_field(
                    _doc.get("streamable"),
                    union_of_None_type_or_booltype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("streamable")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `streamable`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("streamable")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `streamable` field is not valid because:",
                                SourceLine(_doc, "streamable", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `streamable` field is not valid because:",
                                SourceLine(_doc, "streamable", str),
                                [e],
                                detailed_message=f"the `streamable` field with value `{val}` is not valid because:",
                            )
                        )
        format = None
        if "format" in _doc:
            try:
                format = load_field(
                    _doc.get("format"),
                    uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("format")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `format`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("format")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `format` field is not valid because:",
                                SourceLine(_doc, "format", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `format` field is not valid because:",
                                SourceLine(_doc, "format", str),
                                [e],
                                detailed_message=f"the `format` field with value `{val}` is not valid because:",
                            )
                        )
        outputBinding = None
        if "outputBinding" in _doc:
            try:
                outputBinding = load_field(
                    _doc.get("outputBinding"),
                    union_of_None_type_or_CommandOutputBindingLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("outputBinding")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `outputBinding`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("outputBinding")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `outputBinding` field is not valid because:",
                                SourceLine(_doc, "outputBinding", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `outputBinding` field is not valid because:",
                                SourceLine(_doc, "outputBinding", str),
                                [e],
                                detailed_message=f"the `outputBinding` field with value `{val}` is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `outputBinding`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            doc=doc,
            name=name,
            type_=type_,
            label=label,
            secondaryFiles=secondaryFiles,
            streamable=streamable,
            format=format,
            outputBinding=outputBinding,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
            r["streamable"] = save(
                self.streamable,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.format is not None:
            u = save_relative_uri(self.format, self.name, True, None, relative_uris)
            r["format"] = u
        if self.outputBinding is not None:
            r["outputBinding"] = save(
                self.outputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "doc",
            "name",
            "type",
            "label",
            "secondaryFiles",
            "streamable",
            "format",
            "outputBinding",
        ]
    )


class CommandOutputRecordSchema(OutputRecordSchema):
    name: str

    def __init__(
        self,
        type_: Any,
        fields: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.fields = fields
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputRecordSchema):
            return bool(
                self.fields == other.fields
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.name == other.name
            )
        return False

    def __hash__(self) -> int:
        return hash((self.fields, self.type_, self.label, self.doc, self.name))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandOutputRecordSchema":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("name")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `name`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [e],
                                detailed_message=f"the `name` field with value `{val}` is not valid because:",
                            )
                        )
        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                name = "_:" + str(_uuid__.uuid4())
        if not __original_name_is_none:
            baseuri = cast(str, name)
        fields = None
        if "fields" in _doc:
            try:
                fields = load_field(
                    _doc.get("fields"),
                    idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("fields")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `fields`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("fields")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `fields` field is not valid because:",
                                SourceLine(_doc, "fields", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `fields` field is not valid because:",
                                SourceLine(_doc, "fields", str),
                                [e],
                                detailed_message=f"the `fields` field with value `{val}` is not valid because:",
                            )
                        )
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])

            type_ = load_field(
                _doc.get("type"),
                typedsl_Record_nameLoader_2,
                baseuri,
                loadingOptions,
                lc=_doc.get("type")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [e],
                            detailed_message=f"the `type` field with value `{val}` is not valid because:",
                        )
                    )
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [e],
                                detailed_message=f"the `label` field with value `{val}` is not valid because:",
                            )
                        )
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"),
                    union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("doc")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [e],
                                detailed_message=f"the `doc` field with value `{val}` is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            fields=fields,
            type_=type_,
            label=label,
            doc=doc,
            name=name,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.fields is not None:
            r["fields"] = save(
                self.fields, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["fields", "type", "label", "doc", "name"])


class CommandOutputEnumSchema(OutputEnumSchema):
    name: str

    def __init__(
        self,
        symbols: Any,
        type_: Any,
        name: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())
        self.symbols = symbols
        self.type_ = type_
        self.label = label
        self.doc = doc

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputEnumSchema):
            return bool(
                self.name == other.name
                and self.symbols == other.symbols
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
            )
        return False

    def __hash__(self) -> int:
        return hash((self.name, self.symbols, self.type_, self.label, self.doc))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandOutputEnumSchema":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("name")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `name`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [e],
                                detailed_message=f"the `name` field with value `{val}` is not valid because:",
                            )
                        )
        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                name = "_:" + str(_uuid__.uuid4())
        if not __original_name_is_none:
            baseuri = cast(str, name)
        try:
            if _doc.get("symbols") is None:
                raise ValidationException("missing required field `symbols`", None, [])

            symbols = load_field(
                _doc.get("symbols"),
                uri_array_of_strtype_True_False_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("symbols")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `symbols`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("symbols")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `symbols` field is not valid because:",
                            SourceLine(_doc, "symbols", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `symbols` field is not valid because:",
                            SourceLine(_doc, "symbols", str),
                            [e],
                            detailed_message=f"the `symbols` field with value `{val}` is not valid because:",
                        )
                    )
        try:
            if _doc.get("type") is None:
                raise ValidationException("missing required field `type`", None, [])

            type_ = load_field(
                _doc.get("type"),
                typedsl_Enum_nameLoader_2,
                baseuri,
                loadingOptions,
                lc=_doc.get("type")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `type`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("type")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [ValidationException(
                                f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `type` field is not valid because:",
                            SourceLine(_doc, "type", str),
                            [e],
                            detailed_message=f"the `type` field with value `{val}` is not valid because:",
                        )
                    )
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [e],
                                detailed_message=f"the `label` field with value `{val}` is not valid because:",
                            )
                        )
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"),
                    union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("doc")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
                                [e],
                                detailed_message=f"the `doc` field with value `{val}` is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(ValidationException("mapping with implicit null key"))
                elif ":" in k:
                    ex = expand_url(k, "", loadingOptions, scoped_id=False, vocab_term=False)
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            name=name,
            symbols=symbols,
            type_=type_,
            label=label,
            doc=doc,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.symbols is not None:
            u = save_relative_uri(self.symbols, self.name, True, None, relative_uris)
            r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "doc"])


class CommandOutputArraySchema(OutputArraySchema):
    name: str

    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.name == other.name
            )
        return False

    def __hash__(self) -> int:
        return hash((self.items, self.type_, self.label, self.doc, self.name))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandOutputArraySchema":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("name")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `name`":
                    _errors__.append(ValidationException(str(e), None))
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [ValidationException(
                                    f"Value is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, but valid {to_print} for this field {verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [e],
                                detailed_message=f"the `name` field with value `{val}` is not valid because:",
                            )
                        )
        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                name = "_:" + str(_uuid__.uuid4())
        if not __original_name_is_none:
            baseuri = cast(str, name)
        try:
            if _doc.get("items") is None:
                raise ValidationException("missing required field `items`", None, [])

            items = load_field(
                _doc.get("items"),
                uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("items")
            )
        except ValidationException as e:
            error_message, to_print, verb_tensage = parse_errors(str(e))
            if str(e) == "missing required field `items`":
                _errors__.append(ValidationException(str(e), None))
            else:
                val = _doc.get("items")
                if error_message != str(e):
                    val_type = convert_typing(extract_type(type(val)))
_errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, 
scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.items is not None: u = save_relative_uri(self.items, self.name, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type", "label", "doc", "name"]) class CommandInputParameter(InputParameter): """ An input parameter for a CommandLineTool. 
""" id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, default: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.loadContents = loadContents self.loadListing = loadListing self.default = default self.type_ = type_ self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.default == other.default and self.type_ == other.type_ and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.loadContents, self.loadListing, self.default, self.type_, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None 
if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " 
f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), 
union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, 
"default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `loadContents`, `loadListing`, `default`, `type`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, loadContents=loadContents, loadListing=loadListing, default=default, type_=type_, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u 
= save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "loadContents", "loadListing", "default", "type", "inputBinding", ] ) class CommandOutputParameter(OutputParameter): """ An output parameter for a CommandLineTool. 
""" id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, outputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.type_ = type_ self.outputBinding = outputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.type_ == other.type_ and self.outputBinding == other.outputBinding ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.type_, self.outputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) 
else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( 
_doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, 
lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `type`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, type_=type_, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "type", "outputBinding", ] ) class CommandLineTool(Process): """ This defines the schema of the CWL Command Line Tool Description document. 
""" id: str def __init__( self, inputs: Any, outputs: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, baseCommand: Optional[Any] = None, arguments: Optional[Any] = None, stdin: Optional[Any] = None, stderr: Optional[Any] = None, stdout: Optional[Any] = None, successCodes: Optional[Any] = None, temporaryFailCodes: Optional[Any] = None, permanentFailCodes: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.class_ = "CommandLineTool" self.baseCommand = baseCommand self.arguments = arguments self.stdin = stdin self.stderr = stderr self.stdout = stdout self.successCodes = successCodes self.temporaryFailCodes = temporaryFailCodes self.permanentFailCodes = permanentFailCodes def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineTool): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.baseCommand == other.baseCommand and self.arguments == other.arguments and self.stdin == other.stdin and self.stderr == other.stderr and self.stdout == other.stdout and self.successCodes == other.successCodes and self.temporaryFailCodes == other.temporaryFailCodes and 
self.permanentFailCodes == other.permanentFailCodes ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.class_, self.baseCommand, self.arguments, self.stdin, self.stderr, self.stdout, self.successCodes, self.temporaryFailCodes, self.permanentFailCodes, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), 
uri_CommandLineTool_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_CommandInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_CommandOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid 
because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) baseCommand = None if "baseCommand" in _doc: try: baseCommand = load_field( _doc.get("baseCommand"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("baseCommand") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `baseCommand`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("baseCommand") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [e], detailed_message=f"the `baseCommand` field with value `{val}` " "is not valid because:", ) ) arguments = None if "arguments" in _doc: try: arguments = load_field( _doc.get("arguments"), union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("arguments") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `arguments`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("arguments") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [e], detailed_message=f"the `arguments` field with value `{val}` " "is not valid because:", ) ) stdin = None if "stdin" in _doc: try: stdin = load_field( _doc.get("stdin"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stdin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdin") if error_message != str(e): 
val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [e], detailed_message=f"the `stdin` field with value `{val}` " "is not valid because:", ) ) stderr = None if "stderr" in _doc: try: stderr = load_field( _doc.get("stderr"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stderr") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stderr`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stderr") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stderr` field is not valid because:", SourceLine(_doc, "stderr", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stderr` field is not valid because:", SourceLine(_doc, "stderr", str), [e], detailed_message=f"the `stderr` field with value `{val}` " "is not valid because:", ) ) stdout = None if "stdout" in _doc: try: stdout = load_field( _doc.get("stdout"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdout") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == 
"missing required field `stdout`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdout") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [e], detailed_message=f"the `stdout` field with value `{val}` " "is not valid because:", ) ) successCodes = None if "successCodes" in _doc: try: successCodes = load_field( _doc.get("successCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("successCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `successCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("successCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [e], detailed_message=f"the `successCodes` field with value `{val}` " "is not valid because:", ) ) temporaryFailCodes = None if "temporaryFailCodes" in _doc: try: 
temporaryFailCodes = load_field( _doc.get("temporaryFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("temporaryFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `temporaryFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("temporaryFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [e], detailed_message=f"the `temporaryFailCodes` field with value `{val}` " "is not valid because:", ) ) permanentFailCodes = None if "permanentFailCodes" in _doc: try: permanentFailCodes = load_field( _doc.get("permanentFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("permanentFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `permanentFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("permanentFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [e], detailed_message=f"the `permanentFailCodes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `class`, `baseCommand`, `arguments`, `stdin`, `stderr`, `stdout`, `successCodes`, `temporaryFailCodes`, `permanentFailCodes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, baseCommand=baseCommand, arguments=arguments, stdin=stdin, stderr=stderr, stdout=stdout, successCodes=successCodes, temporaryFailCodes=temporaryFailCodes, permanentFailCodes=permanentFailCodes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri 
= self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.baseCommand is not None: r["baseCommand"] = save( self.baseCommand, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.arguments is not None: r["arguments"] = save( self.arguments, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stdin is not None: r["stdin"] = save( self.stdin, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stderr is not None: r["stderr"] = save( self.stderr, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stdout is not None: r["stdout"] = save( self.stdout, top=False, base_url=self.id, relative_uris=relative_uris ) if self.successCodes is not None: r["successCodes"] = save( self.successCodes, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.temporaryFailCodes is not None: r["temporaryFailCodes"] = save( self.temporaryFailCodes, top=False, base_url=self.id, 
relative_uris=relative_uris, ) if self.permanentFailCodes is not None: r["permanentFailCodes"] = save( self.permanentFailCodes, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "class", "baseCommand", "arguments", "stdin", "stderr", "stdout", "successCodes", "temporaryFailCodes", "permanentFailCodes", ] ) class DockerRequirement(ProcessRequirement): """ Indicates that a workflow component should be run in a [Docker](http://docker.com) or Docker-compatible (such as [Singularity](https://www.sylabs.io/) and [udocker](https://github.com/indigo-dc/udocker)) container environment and specifies how to fetch or build the image. If a CommandLineTool lists `DockerRequirement` under `hints` (or `requirements`), it may (or must) be run in the specified Docker container. The platform must first acquire or install the correct Docker image as specified by `dockerPull`, `dockerImport`, `dockerLoad` or `dockerFile`. The platform must execute the tool in the container using `docker run` with the appropriate Docker image and tool command line. The workflow platform may provide input files and the designated output directory through the use of volume bind mounts. The platform should rewrite file paths in the input object to correspond to the Docker bind mounted locations. That is, the platform should rewrite values in the parameter context such as `runtime.outdir`, `runtime.tmpdir` and others to be valid paths within the container. The platform must ensure that `runtime.outdir` and `runtime.tmpdir` are distinct directories. 
When running a tool contained in Docker, the workflow platform must not assume anything about the contents of the Docker container, such as the presence or absence of specific software, except to assume that the generated command line represents a valid command within the runtime environment of the container.

A container image may specify an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) and/or [CMD](https://docs.docker.com/engine/reference/builder/#cmd). Command line arguments will be appended after all elements of ENTRYPOINT, and will override all elements specified using CMD (in other words, CMD is only used when the CommandLineTool definition produces an empty command line). Use of an implicit ENTRYPOINT or CMD is discouraged due to reproducibility concerns about the implicit, hidden execution point (for further discussion, see [https://doi.org/10.12688/f1000research.15140.1](https://doi.org/10.12688/f1000research.15140.1)). Portable CommandLineTool wrappers in which use of a container is optional must not rely on ENTRYPOINT or CMD. CommandLineTools which do rely on ENTRYPOINT or CMD must list `DockerRequirement` in the `requirements` section.

## Interaction with other requirements

If [EnvVarRequirement](#EnvVarRequirement) is specified alongside a DockerRequirement, the environment variables must be provided to Docker using `--env` or `--env-file` and interact with the container's preexisting environment as defined by Docker.
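The ENTRYPOINT/CMD combination rules above can be sketched as a small, hypothetical helper (not part of this generated module; `effective_command` and its parameters are illustrative names only):

```python
def effective_command(entrypoint, cmd, tool_args):
    """Sketch of how a platform might build the container command line.

    Tool arguments are appended after all ENTRYPOINT elements and override
    CMD; CMD is only used when the tool produces an empty command line.
    """
    if tool_args:
        return list(entrypoint) + list(tool_args)
    return list(entrypoint) + list(cmd)

# Non-empty tool command line: CMD is ignored entirely.
assert effective_command(["/bin/sh", "-c"], ["echo default"], ["echo", "hi"]) == ["/bin/sh", "-c", "echo", "hi"]
```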
""" def __init__( self, dockerPull: Optional[Any] = None, dockerLoad: Optional[Any] = None, dockerFile: Optional[Any] = None, dockerImport: Optional[Any] = None, dockerImageId: Optional[Any] = None, dockerOutputDirectory: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "DockerRequirement" self.dockerPull = dockerPull self.dockerLoad = dockerLoad self.dockerFile = dockerFile self.dockerImport = dockerImport self.dockerImageId = dockerImageId self.dockerOutputDirectory = dockerOutputDirectory def __eq__(self, other: Any) -> bool: if isinstance(other, DockerRequirement): return bool( self.class_ == other.class_ and self.dockerPull == other.dockerPull and self.dockerLoad == other.dockerLoad and self.dockerFile == other.dockerFile and self.dockerImport == other.dockerImport and self.dockerImageId == other.dockerImageId and self.dockerOutputDirectory == other.dockerOutputDirectory ) return False def __hash__(self) -> int: return hash( ( self.class_, self.dockerPull, self.dockerLoad, self.dockerFile, self.dockerImport, self.dockerImageId, self.dockerOutputDirectory, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "DockerRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_DockerRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise 
ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e dockerPull = None if "dockerPull" in _doc: try: dockerPull = load_field( _doc.get("dockerPull"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerPull") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerPull`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerPull") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerPull` field is not valid because:", SourceLine(_doc, "dockerPull", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerPull` field is not valid because:", SourceLine(_doc, "dockerPull", str), [e], detailed_message=f"the `dockerPull` field with value `{val}` " "is not valid because:", ) ) dockerLoad = None if "dockerLoad" in _doc: try: dockerLoad = load_field( _doc.get("dockerLoad"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerLoad") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerLoad`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerLoad") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but 
valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [e], detailed_message=f"the `dockerLoad` field with value `{val}` " "is not valid because:", ) ) dockerFile = None if "dockerFile" in _doc: try: dockerFile = load_field( _doc.get("dockerFile"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerFile") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerFile`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerFile") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerFile` field is not valid because:", SourceLine(_doc, "dockerFile", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerFile` field is not valid because:", SourceLine(_doc, "dockerFile", str), [e], detailed_message=f"the `dockerFile` field with value `{val}` " "is not valid because:", ) ) dockerImport = None if "dockerImport" in _doc: try: dockerImport = load_field( _doc.get("dockerImport"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImport") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImport`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImport") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImport` field is not valid 
because:", SourceLine(_doc, "dockerImport", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImport` field is not valid because:", SourceLine(_doc, "dockerImport", str), [e], detailed_message=f"the `dockerImport` field with value `{val}` " "is not valid because:", ) ) dockerImageId = None if "dockerImageId" in _doc: try: dockerImageId = load_field( _doc.get("dockerImageId"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImageId") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImageId`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImageId") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [e], detailed_message=f"the `dockerImageId` field with value `{val}` " "is not valid because:", ) ) dockerOutputDirectory = None if "dockerOutputDirectory" in _doc: try: dockerOutputDirectory = load_field( _doc.get("dockerOutputDirectory"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerOutputDirectory") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `dockerOutputDirectory`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerOutputDirectory") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [e], detailed_message=f"the `dockerOutputDirectory` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `dockerPull`, `dockerLoad`, `dockerFile`, `dockerImport`, `dockerImageId`, `dockerOutputDirectory`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( dockerPull=dockerPull, dockerLoad=dockerLoad, dockerFile=dockerFile, dockerImport=dockerImport, dockerImageId=dockerImageId, dockerOutputDirectory=dockerOutputDirectory, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, 
self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.dockerPull is not None: r["dockerPull"] = save( self.dockerPull, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerLoad is not None: r["dockerLoad"] = save( self.dockerLoad, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerFile is not None: r["dockerFile"] = save( self.dockerFile, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImport is not None: r["dockerImport"] = save( self.dockerImport, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImageId is not None: r["dockerImageId"] = save( self.dockerImageId, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerOutputDirectory is not None: r["dockerOutputDirectory"] = save( self.dockerOutputDirectory, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "dockerPull", "dockerLoad", "dockerFile", "dockerImport", "dockerImageId", "dockerOutputDirectory", ] ) class SoftwareRequirement(ProcessRequirement): """ A list of software packages that should be configured in the environment of the defined process. 
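In CWL documents the `packages` field may be written either as a list of `SoftwarePackage` records or as a map keyed by package name (the "idmap" form handled by this loader). A minimal, hypothetical sketch of that normalization, operating on plain dicts rather than this module's classes:

```python
def normalize_packages(packages):
    """Normalize `packages` to list form (sketch; assumes map values are dicts)."""
    if isinstance(packages, dict):
        # Map form: each key becomes the `package` field of a record.
        return [{"package": name, **(spec or {})} for name, spec in packages.items()]
    return list(packages)

# Map form and list form normalize to the same structure.
assert normalize_packages({"samtools": {"version": ["1.19"]}}) == [{"package": "samtools", "version": ["1.19"]}]
```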
""" def __init__( self, packages: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SoftwareRequirement" self.packages = packages def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwareRequirement): return bool(self.class_ == other.class_ and self.packages == other.packages) return False def __hash__(self) -> int: return hash((self.class_, self.packages)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwareRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SoftwareRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("packages") is None: raise ValidationException("missing required field `packages`", None, []) packages = load_field( _doc.get("packages"), idmap_packages_array_of_SoftwarePackageLoader, baseuri, loadingOptions, lc=_doc.get("packages") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `packages`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("packages") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `packages` field is not valid because:", 
SourceLine(_doc, "packages", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `packages` field is not valid because:", SourceLine(_doc, "packages", str), [e], detailed_message=f"the `packages` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `packages`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( packages=packages, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.packages is not None: r["packages"] = save( self.packages, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "packages"]) class SoftwarePackage(Saveable): def __init__( self, package: Any, version: Optional[Any] = None, specs: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.package = package self.version = version self.specs = specs def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwarePackage): return bool( self.package == other.package and self.version == other.version and self.specs == other.specs ) return False def __hash__(self) -> int: return hash((self.package, self.version, self.specs)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwarePackage": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("package") is None: raise ValidationException("missing required field `package`", None, []) package = load_field( _doc.get("package"), strtype, baseuri, loadingOptions, lc=_doc.get("package") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `package`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("package") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but 
valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [e], detailed_message=f"the `package` field with value `{val}` " "is not valid because:", ) ) version = None if "version" in _doc: try: version = load_field( _doc.get("version"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("version") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `version`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("version") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [e], detailed_message=f"the `version` field with value `{val}` " "is not valid because:", ) ) specs = None if "specs" in _doc: try: specs = load_field( _doc.get("specs"), uri_union_of_None_type_or_array_of_strtype_False_False_None_True, baseuri, loadingOptions, lc=_doc.get("specs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `specs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("specs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), [e], detailed_message=f"the `specs` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `package`, `version`, `specs`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( package=package, version=version, specs=specs, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.package is not None: r["package"] = save( self.package, top=False, base_url=base_url, relative_uris=relative_uris ) if self.version is not None: r["version"] = save( self.version, top=False, base_url=base_url, relative_uris=relative_uris ) if self.specs is not None: u = save_relative_uri(self.specs, base_url, False, None, relative_uris) r["specs"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["package", "version", "specs"]) class Dirent(Saveable): """ Define a file or subdirectory that must be placed in the designated output directory prior to executing the command line tool. May be the result of executing an expression, such as building a configuration file from a template. """ def __init__( self, entry: Any, entryname: Optional[Any] = None, writable: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.entryname = entryname self.entry = entry self.writable = writable def __eq__(self, other: Any) -> bool: if isinstance(other, Dirent): return bool( self.entryname == other.entryname and self.entry == other.entry and self.writable == other.writable ) return False def __hash__(self) -> int: return hash((self.entryname, self.entry, self.writable)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Dirent": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] entryname = None if "entryname" in _doc: try: entryname = load_field( _doc.get("entryname"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entryname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entryname`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entryname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entryname` field is not valid because:", SourceLine(_doc, "entryname", 
str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entryname` field is not valid because:", SourceLine(_doc, "entryname", str), [e], detailed_message=f"the `entryname` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("entry") is None: raise ValidationException("missing required field `entry`", None, []) entry = load_field( _doc.get("entry"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entry") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entry`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entry") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entry` field is not valid because:", SourceLine(_doc, "entry", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entry` field is not valid because:", SourceLine(_doc, "entry", str), [e], detailed_message=f"the `entry` field with value `{val}` " "is not valid because:", ) ) writable = None if "writable" in _doc: try: writable = load_field( _doc.get("writable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("writable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `writable`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("writable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [e], detailed_message=f"the `writable` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `entryname`, `entry`, `writable`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( entryname=entryname, entry=entry, writable=writable, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.entryname is not None: r["entryname"] = save( self.entryname, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.entry is not None: r["entry"] = save( self.entry, top=False, base_url=base_url, relative_uris=relative_uris ) if self.writable is not None: r["writable"] = save( 
self.writable, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["entryname", "entry", "writable"]) class InitialWorkDirRequirement(ProcessRequirement): """ Define a list of files and subdirectories that must be created by the workflow platform in the designated output directory prior to executing the command line tool. """ def __init__( self, listing: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InitialWorkDirRequirement" self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, InitialWorkDirRequirement): return bool(self.class_ == other.class_ and self.listing == other.listing) return False def __hash__(self) -> int: return hash((self.class_, self.listing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InitialWorkDirRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InitialWorkDirRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("listing") is None: raise 
ValidationException("missing required field `listing`", None, []) listing = load_field( _doc.get("listing"), union_of_array_of_union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "listing"]) class EnvVarRequirement(ProcessRequirement): """ Define a list of environment variables which will be set in the execution environment of the tool. See `EnvironmentDef` for details. 
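As a rough, illustrative sketch (the names below are hypothetical and not part of this module), applying `envDef` entries amounts to overlaying `envName`/`envValue` pairs on the tool's base environment:

```python
# Hypothetical sketch: apply EnvironmentDef-style (envName, envValue)
# pairs on top of a base environment before launching the tool.
base_env = {"HOME": "/home/user"}
env_def = [
    {"envName": "TMPDIR", "envValue": "/tmp/work"},
    {"envName": "LC_ALL", "envValue": "C"},
]
tool_env = dict(base_env)
tool_env.update({d["envName"]: d["envValue"] for d in env_def})
```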
""" def __init__( self, envDef: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "EnvVarRequirement" self.envDef = envDef def __eq__(self, other: Any) -> bool: if isinstance(other, EnvVarRequirement): return bool(self.class_ == other.class_ and self.envDef == other.envDef) return False def __hash__(self) -> int: return hash((self.class_, self.envDef)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvVarRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_EnvVarRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("envDef") is None: raise ValidationException("missing required field `envDef`", None, []) envDef = load_field( _doc.get("envDef"), idmap_envDef_array_of_EnvironmentDefLoader, baseuri, loadingOptions, lc=_doc.get("envDef") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envDef`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envDef") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), [e], detailed_message=f"the `envDef` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `envDef`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envDef=envDef, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.envDef is not None: r["envDef"] = save( self.envDef, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["class", "envDef"]) class ShellCommandRequirement(ProcessRequirement): """ Modify the behavior of CommandLineTool to generate a single string containing a shell command line. Each item in the argument list must be joined into a string separated by single spaces and quoted to prevent interpretation by the shell, unless `CommandLineBinding` for that argument contains `shellQuote: false`. If `shellQuote: false` is specified, the argument is joined into the command string without quoting, which allows the use of shell metacharacters such as `|` for pipes. """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShellCommandRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ShellCommandRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShellCommandRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ShellCommandRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( 
ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class ResourceRequirement(ProcessRequirement): """ Specify basic hardware resource requirements. "min" is the minimum amount of a resource that must be reserved to schedule a job. If "min" cannot be satisfied, the job should not be run. "max" is the maximum amount of a resource that the job shall be permitted to use. If a node has sufficient resources, multiple jobs may be scheduled on a single node provided each job's "max" resource requirements are met. If a job attempts to exceed its "max" resource allocation, an implementation may deny additional resources, which may result in job failure. 
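A minimal sketch of the "min"/"max" semantics described above (hypothetical helper functions, not part of this module): "min" gates whether a job can be scheduled at all, while "max" caps what the job is actually allotted.

```python
def can_schedule(free_cores, cores_min):
    # A job may be scheduled only if its "min" can be reserved.
    return free_cores >= cores_min

def grant(requested, cores_max):
    # A job is never allotted more than its "max".
    return min(requested, cores_max)
```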
If "min" is specified but "max" is not, then "max" == "min". If "max" is specified but "min" is not, then "min" == "max". It is an error if max < min. It is an error if the value of any of these fields is negative. If neither "min" nor "max" is specified for a resource, use the default values below. """ def __init__( self, coresMin: Optional[Any] = None, coresMax: Optional[Any] = None, ramMin: Optional[Any] = None, ramMax: Optional[Any] = None, tmpdirMin: Optional[Any] = None, tmpdirMax: Optional[Any] = None, outdirMin: Optional[Any] = None, outdirMax: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ResourceRequirement" self.coresMin = coresMin self.coresMax = coresMax self.ramMin = ramMin self.ramMax = ramMax self.tmpdirMin = tmpdirMin self.tmpdirMax = tmpdirMax self.outdirMin = outdirMin self.outdirMax = outdirMax def __eq__(self, other: Any) -> bool: if isinstance(other, ResourceRequirement): return bool( self.class_ == other.class_ and self.coresMin == other.coresMin and self.coresMax == other.coresMax and self.ramMin == other.ramMin and self.ramMax == other.ramMax and self.tmpdirMin == other.tmpdirMin and self.tmpdirMax == other.tmpdirMax and self.outdirMin == other.outdirMin and self.outdirMax == other.outdirMax ) return False def __hash__(self) -> int: return hash( ( self.class_, self.coresMin, self.coresMax, self.ramMin, self.ramMax, self.tmpdirMin, self.tmpdirMax, self.outdirMin, self.outdirMax, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ResourceRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ 
= [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ResourceRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e coresMin = None if "coresMin" in _doc: try: coresMin = load_field( _doc.get("coresMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [e], detailed_message=f"the `coresMin` field with value `{val}` " "is not valid because:", ) ) coresMax = None if "coresMax" in _doc: try: coresMax = load_field( _doc.get("coresMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMax") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [e], detailed_message=f"the `coresMax` field with value `{val}` " "is not valid because:", ) ) ramMin = None if "ramMin" in _doc: try: ramMin = load_field( _doc.get("ramMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `ramMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [e], detailed_message=f"the `ramMin` field with value `{val}` " "is not valid because:", ) ) ramMax = None if "ramMax" in _doc: try: ramMax = load_field( _doc.get("ramMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) 
== "missing required field `ramMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [e], detailed_message=f"the `ramMax` field with value `{val}` " "is not valid because:", ) ) tmpdirMin = None if "tmpdirMin" in _doc: try: tmpdirMin = load_field( _doc.get("tmpdirMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `tmpdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [e], detailed_message=f"the `tmpdirMin` field with value `{val}` " "is not valid because:", ) ) tmpdirMax = None if "tmpdirMax" in _doc: try: tmpdirMax = load_field( _doc.get("tmpdirMax"), 
union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `tmpdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [e], detailed_message=f"the `tmpdirMax` field with value `{val}` " "is not valid because:", ) ) outdirMin = None if "outdirMin" in _doc: try: outdirMin = load_field( _doc.get("outdirMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", SourceLine(_doc, "outdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", 
SourceLine(_doc, "outdirMin", str), [e], detailed_message=f"the `outdirMin` field with value `{val}` " "is not valid because:", ) ) outdirMax = None if "outdirMax" in _doc: try: outdirMax = load_field( _doc.get("outdirMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [e], detailed_message=f"the `outdirMax` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `coresMin`, `coresMax`, `ramMin`, `ramMax`, `tmpdirMin`, `tmpdirMax`, `outdirMin`, `outdirMax`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( coresMin=coresMin, coresMax=coresMax, ramMin=ramMin, ramMax=ramMax, tmpdirMin=tmpdirMin, tmpdirMax=tmpdirMax, outdirMin=outdirMin, 
outdirMax=outdirMax, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.coresMin is not None: r["coresMin"] = save( self.coresMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.coresMax is not None: r["coresMax"] = save( self.coresMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMin is not None: r["ramMin"] = save( self.ramMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMax is not None: r["ramMax"] = save( self.ramMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.tmpdirMin is not None: r["tmpdirMin"] = save( self.tmpdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.tmpdirMax is not None: r["tmpdirMax"] = save( self.tmpdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMin is not None: r["outdirMin"] = save( self.outdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMax is not None: r["outdirMax"] = save( self.outdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "coresMin", "coresMax", "ramMin", 
"ramMax", "tmpdirMin", "tmpdirMax", "outdirMin", "outdirMax", ] ) class WorkReuse(ProcessRequirement): """ For implementations that support reusing output from past work (on the assumption that same code and same input produce same results), control whether to enable or disable the reuse behavior for a particular tool or step (to accommodate situations where that assumption is incorrect). A reused step is not executed but instead returns the same output as the original execution. If `enableReuse` is not specified, correct tools should assume it is enabled by default. """ def __init__( self, enableReuse: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "WorkReuse" self.enableReuse = enableReuse def __eq__(self, other: Any) -> bool: if isinstance(other, WorkReuse): return bool( self.class_ == other.class_ and self.enableReuse == other.enableReuse ) return False def __hash__(self) -> int: return hash((self.class_, self.enableReuse)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkReuse": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_WorkReuse_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("enableReuse") is None: raise ValidationException("missing required field 
`enableReuse`", None, []) enableReuse = load_field( _doc.get("enableReuse"), union_of_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("enableReuse") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `enableReuse`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("enableReuse") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, "enableReuse", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, "enableReuse", str), [e], detailed_message=f"the `enableReuse` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `enableReuse`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( enableReuse=enableReuse, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for 
ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.enableReuse is not None: r["enableReuse"] = save( self.enableReuse, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "enableReuse"]) class NetworkAccess(ProcessRequirement): """ Indicate whether a process requires outgoing IPv4/IPv6 network access. Choice of IPv4 or IPv6 is implementation and site specific; correct tools must support both. If `networkAccess` is false or not specified, tools must not assume network access, except for localhost (the loopback device). If `networkAccess` is true, the tool must be able to make outgoing connections to network resources. Resources may be on a private subnet or the public Internet. However, implementations and sites may apply their own security policies to restrict what is accessible by the tool. Enabling network access does not imply a publicly routable IP address or the ability to accept inbound connections. 
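For illustration only (the helper below is hypothetical, not part of this module), the access rule reduces to: loopback is always reachable, and any other host requires `networkAccess` to be true.

```python
def outgoing_allowed(network_access, host):
    # Loopback is always permitted, regardless of the networkAccess flag;
    # anything else requires networkAccess to be true.
    if host in ("localhost", "127.0.0.1", "::1"):
        return True
    return bool(network_access)
```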
""" def __init__( self, networkAccess: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "NetworkAccess" self.networkAccess = networkAccess def __eq__(self, other: Any) -> bool: if isinstance(other, NetworkAccess): return bool( self.class_ == other.class_ and self.networkAccess == other.networkAccess ) return False def __hash__(self) -> int: return hash((self.class_, self.networkAccess)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "NetworkAccess": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_NetworkAccess_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("networkAccess") is None: raise ValidationException("missing required field `networkAccess`", None, []) networkAccess = load_field( _doc.get("networkAccess"), union_of_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("networkAccess") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `networkAccess`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("networkAccess") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [e], detailed_message=f"the `networkAccess` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `networkAccess`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( networkAccess=networkAccess, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.networkAccess is not None: r["networkAccess"] = save( self.networkAccess, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if 
top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "networkAccess"]) class InplaceUpdateRequirement(ProcessRequirement): """ If `inplaceUpdate` is true, then an implementation supporting this feature may permit tools to directly update files with `writable: true` in InitialWorkDirRequirement. That is, as an optimization, files may be destructively modified in place as opposed to copied and updated. An implementation must ensure that only one workflow step may access a writable file at a time. It is an error if a file which is writable by one workflow step is accessed (for reading or writing) by any other workflow step running independently. However, a file which has been updated in a previous completed step may be used as input to multiple steps, provided it is read-only in every step. Workflow steps which modify a file must produce the modified file as output. Downstream steps which further process the file must use the output of previous steps, and not refer to a common input (this is necessary for both ordering and correctness). Workflow authors should provide this in the `hints` section. The intent of this feature is that workflows produce the same results whether or not InplaceUpdateRequirement is supported by the implementation, and this feature is primarily available as an optimization for particular environments. Users and implementers should be aware that workflows that destructively modify inputs may not be repeatable or reproducible. In particular, enabling this feature implies that WorkReuse should not be enabled. 
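Because this feature is an optional optimization, an author would normally request it as a hint rather than a hard requirement, e.g. (illustrative fragment):

```yaml
hints:
  - class: InplaceUpdateRequirement
    inplaceUpdate: true
```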
""" def __init__( self, inplaceUpdate: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InplaceUpdateRequirement" self.inplaceUpdate = inplaceUpdate def __eq__(self, other: Any) -> bool: if isinstance(other, InplaceUpdateRequirement): return bool( self.class_ == other.class_ and self.inplaceUpdate == other.inplaceUpdate ) return False def __hash__(self) -> int: return hash((self.class_, self.inplaceUpdate)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InplaceUpdateRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InplaceUpdateRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inplaceUpdate") is None: raise ValidationException("missing required field `inplaceUpdate`", None, []) inplaceUpdate = load_field( _doc.get("inplaceUpdate"), booltype, baseuri, loadingOptions, lc=_doc.get("inplaceUpdate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inplaceUpdate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inplaceUpdate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", str), [e], detailed_message=f"the `inplaceUpdate` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `inplaceUpdate`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( inplaceUpdate=inplaceUpdate, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.inplaceUpdate is not None: r["inplaceUpdate"] = save( self.inplaceUpdate, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to 
the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "inplaceUpdate"]) class ToolTimeLimit(ProcessRequirement): """ Set an upper limit on the execution time of a CommandLineTool. A CommandLineTool whose execution duration exceeds the time limit may be preemptively terminated and considered failed. It may also be used by batch systems to make scheduling decisions. The execution duration excludes external operations, such as staging of files, pulling a docker image, etc., and only counts wall-clock time for the execution of the command line itself. """ def __init__( self, timelimit: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ToolTimeLimit" self.timelimit = timelimit def __eq__(self, other: Any) -> bool: if isinstance(other, ToolTimeLimit): return bool( self.class_ == other.class_ and self.timelimit == other.timelimit ) return False def __hash__(self) -> int: return hash((self.class_, self.timelimit)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ToolTimeLimit": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ToolTimeLimit_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` 
but") except ValidationException as e: raise e try: if _doc.get("timelimit") is None: raise ValidationException("missing required field `timelimit`", None, []) timelimit = load_field( _doc.get("timelimit"), union_of_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("timelimit") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `timelimit`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("timelimit") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [e], detailed_message=f"the `timelimit` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `timelimit`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( timelimit=timelimit, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in 
self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.timelimit is not None: r["timelimit"] = save( self.timelimit, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "timelimit"]) class ExpressionToolOutputParameter(OutputParameter): id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ExpressionToolOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.type_ == other.type_ ) return False def __hash__(self) -> int: 
return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionToolOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), 
union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field 
with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, 
"type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, 
relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( ["label", "secondaryFiles", "streamable", "doc", "id", "format", "type"] ) class WorkflowInputParameter(InputParameter): id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, default: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.loadContents = loadContents self.loadListing = loadListing self.default = default self.type_ = type_ self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowInputParameter): return bool( self.label == other.label and 
self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.default == other.default and self.type_ == other.type_ and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.loadContents, self.loadListing, self.default, self.type_, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowInputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + 
str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), 
None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, 
baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_InputBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `loadContents`, `loadListing`, `default`, `type`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, loadContents=loadContents, loadListing=loadListing, default=default, type_=type_, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) if 
self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "label",
            "secondaryFiles",
            "streamable",
            "doc",
            "id",
            "format",
            "loadContents",
            "loadListing",
            "default",
            "type",
            "inputBinding",
        ]
    )


class ExpressionTool(Process):
    """
    An ExpressionTool is a type of Process object that can be run by itself
    or as a Workflow step. It executes a pure Javascript expression that has
    access to the same input parameters as a workflow. It is meant to be used
    sparingly as a way to isolate complex Javascript expressions that need to
    operate on input data and produce some result; perhaps just a
    rearrangement of the inputs. No Docker software container is required
    or allowed.
    """

    id: str

    def __init__(
        self,
        inputs: Any,
        outputs: Any,
        expression: Any,
        id: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        requirements: Optional[Any] = None,
        hints: Optional[Any] = None,
        cwlVersion: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.label = label
        self.doc = doc
        self.inputs = inputs
        self.outputs = outputs
        self.requirements = requirements
        self.hints = hints
        self.cwlVersion = cwlVersion
        self.class_ = "ExpressionTool"
        self.expression = expression

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, ExpressionTool):
            return bool(
                self.id == other.id
                and self.label == other.label
                and self.doc ==
other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.expression == other.expression ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.class_, self.expression, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise 
ValidationException("missing required field `class`", None, [])
            class_ = load_field(
                _doc.get("class"),
                uri_ExpressionTool_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class")
            )
            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}` but")
        except ValidationException as e:
            raise e
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(f"Value is a {val_type}, "
                                    f"but valid {to_print} for this field "
                                    f"{verb_tensage} {error_message}",
                                    detailed_message=f"Value `{val}` is a {val_type}, "
                                    f"but valid {to_print} for this field "
                                    f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [e],
                                detailed_message=f"the `label` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        doc = None
        if "doc" in _doc:
            try:
                doc = load_field(
                    _doc.get("doc"),
                    union_of_None_type_or_strtype_or_array_of_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("doc")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `doc`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("doc")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `doc` field is not valid because:",
                                SourceLine(_doc, "doc", str),
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_WorkflowInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_ExpressionToolOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not 
valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("expression") is None: raise ValidationException("missing required field `expression`", None, []) expression = load_field( _doc.get("expression"), ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("expression") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expression`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expression") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `expression` field is not 
valid because:", SourceLine(_doc, "expression", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expression` field is not valid because:", SourceLine(_doc, "expression", str), [e], detailed_message=f"the `expression` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `class`, `expression`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, expression=expression, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: 
-len(self.class_)]):
                uri = f"{p}:{self.class_}"
            else:
                uri = self.class_
            u = save_relative_uri(uri, self.id, False, None, relative_uris)
            r["class"] = u
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.inputs is not None:
            r["inputs"] = save(
                self.inputs, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.outputs is not None:
            r["outputs"] = save(
                self.outputs, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.requirements is not None:
            r["requirements"] = save(
                self.requirements,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.hints is not None:
            r["hints"] = save(
                self.hints, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.cwlVersion is not None:
            u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris)
            r["cwlVersion"] = u
        if self.expression is not None:
            r["expression"] = save(
                self.expression,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "id",
            "label",
            "doc",
            "inputs",
            "outputs",
            "requirements",
            "hints",
            "cwlVersion",
            "class",
            "expression",
        ]
    )


class WorkflowOutputParameter(OutputParameter):
    """
    Describe an output parameter of a workflow.  The parameter must be
    connected to one or more parameters defined in the workflow that will
    provide the value of the output parameter. It is legal to connect a
    WorkflowInputParameter to a WorkflowOutputParameter.
""" id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, outputSource: Optional[Any] = None, linkMerge: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.outputSource = outputSource self.linkMerge = linkMerge self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.outputSource == other.outputSource and self.linkMerge == other.linkMerge and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.outputSource, self.linkMerge, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not 
valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", 
SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) outputSource = None if "outputSource" in _doc: try: outputSource = load_field( _doc.get("outputSource"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None, baseuri, loadingOptions, lc=_doc.get("outputSource") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputSource`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputSource") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [e], detailed_message=f"the `outputSource` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", 
SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `outputSource`, `linkMerge`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, outputSource=outputSource, linkMerge=linkMerge, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: 
u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.outputSource is not None: u = save_relative_uri(self.outputSource, self.id, False, 1, relative_uris) r["outputSource"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "outputSource", "linkMerge", "type", ] ) class Sink(Saveable): pass class WorkflowStepInput(Identified, Sink, LoadContents, Labeled): """ The input of a workflow step connects an upstream parameter (from the workflow inputs, or the outputs of other workflow steps) with the input parameters of the process specified by the `run` field. Only input parameters declared by the target process will be passed through at runtime to the process, though additional parameters may be specified (for use within `valueFrom` expressions, for instance) - unconnected or unused parameters do not represent an error condition. ## Input object A WorkflowStepInput object must contain an `id` field in the form `#fieldname` or `#prefix/fieldname`. When the `id` field contains a slash `/` the field name consists of the characters following the final slash (the prefix portion may contain one or more slashes to indicate scope). This defines a field of the workflow step input object with the value of the `source` parameter(s). ## Merging To merge multiple inbound data links, [MultipleInputFeatureRequirement](#MultipleInputFeatureRequirement) must be specified in the workflow or workflow step requirements. 
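As a plain-Python sketch of the two link-merge methods (a simplified illustration only; the `merge_nested` and `merge_flattened` helper functions below are hypothetical and not part of this module):

```python
# Hypothetical helpers illustrating CWL `linkMerge` semantics.

def merge_nested(link_values):
    # One list entry per inbound link; even a single link's value
    # is wrapped in a single-item list.
    return list(link_values)

def merge_flattened(link_values):
    # Array-valued links are concatenated; single-element values
    # are appended as single elements.
    merged = []
    for value in link_values:
        if isinstance(value, list):
            merged.extend(value)
        else:
            merged.append(value)
    return merged

print(merge_nested([[1, 2], 3]))     # [[1, 2], 3]
print(merge_flattened([[1, 2], 3]))  # [1, 2, 3]
```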
If the sink parameter is an array, or named in a [workflow scatter](#WorkflowStep) operation, there may be multiple inbound data links listed in the `source` field. The values from the input links are merged depending on the method specified in the `linkMerge` field. If not specified, the default method is "merge_nested". * **merge_nested** The input must be an array consisting of exactly one entry for each input link. If "merge_nested" is specified with a single link, the value from the link must be wrapped in a single-item list. * **merge_flattened** 1. The source and sink parameters must be compatible types, or the source type must be compatible with single element from the "items" type of the destination array parameter. 2. Source parameters which are arrays are concatenated. Source parameters which are single element types are appended as single elements. """ id: str def __init__( self, id: Optional[Any] = None, source: Optional[Any] = None, linkMerge: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, label: Optional[Any] = None, default: Optional[Any] = None, valueFrom: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.source = source self.linkMerge = linkMerge self.loadContents = loadContents self.loadListing = loadListing self.label = label self.default = default self.valueFrom = valueFrom def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepInput): return bool( self.id == other.id and self.source == other.source and self.linkMerge == other.linkMerge and self.loadContents == other.loadContents and self.loadListing == other.loadListing and 
self.label == other.label and self.default == other.default and self.valueFrom == other.valueFrom ) return False def __hash__(self) -> int: return hash( ( self.id, self.source, self.linkMerge, self.loadContents, self.loadListing, self.label, self.default, self.valueFrom, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepInput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) source = None if "source" in _doc: try: source = load_field( _doc.get("source"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None, baseuri, loadingOptions, lc=_doc.get("source") ) except ValidationException as e: 
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `source`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("source") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [e], detailed_message=f"the `source` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) loadContents = None if 
"loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " 
f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `source`, `linkMerge`, `loadContents`, `loadListing`, `label`, `default`, `valueFrom`".format( k ), SourceLine(_doc, k, 
str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, source=source, linkMerge=linkMerge, loadContents=loadContents, loadListing=loadListing, label=label, default=default, valueFrom=valueFrom, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.source is not None: u = save_relative_uri(self.source, self.id, False, 2, relative_uris) r["source"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "source", "linkMerge", "loadContents", 
"loadListing", "label", "default", "valueFrom", ] ) class WorkflowStepOutput(Identified): """ Associate an output parameter of the underlying process with a workflow parameter. The workflow parameter (given in the `id` field) may be used as a `source` to connect with input parameters of other workflow steps, or with an output parameter of the process. A unique identifier for this workflow output parameter. This is the identifier to use in the `source` field of `WorkflowStepInput` to connect the output value to downstream parameters. """ id: str def __init__( self, id: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepOutput): return bool(self.id == other.id) return False def __hash__(self) -> int: return hash((self.id)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepOutput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["id"]) class 
WorkflowStep(Identified, Labeled, Documented): """ A workflow step is an executable element of a workflow. It specifies the underlying process implementation (such as `CommandLineTool` or another `Workflow`) in the `run` field and connects the input and output parameters of the underlying process to workflow parameters. # Scatter/gather To use scatter/gather, [ScatterFeatureRequirement](#ScatterFeatureRequirement) must be specified in the workflow or workflow step requirements. A "scatter" operation specifies that the associated workflow step or subworkflow should execute separately over a list of input elements. Each job making up a scatter operation is independent and may be executed concurrently. The `scatter` field specifies one or more input parameters which will be scattered. An input parameter may be listed more than once. The declared type of each input parameter implicitly becomes an array of items of the input parameter type. If a parameter is listed more than once, it becomes a nested array. As a result, upstream parameters which are connected to scattered parameters must be arrays. All output parameter types are also implicitly wrapped in arrays. Each job in the scatter results in an entry in the output array. If any scattered parameter runtime value is an empty array, all outputs are set to empty arrays and no work is done for the step, according to applicable scattering rules. If `scatter` declares more than one input parameter, `scatterMethod` describes how to decompose the input into a discrete set of jobs. * **dotproduct** specifies that each of the input arrays are aligned and one element taken from each array to construct each job. It is an error if all input arrays are not the same length. * **nested_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. 
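The three `scatterMethod` strategies can be sketched with standard-library tools over two scattered inputs (an illustration of the job decomposition only, not the executor itself):

```python
# Illustrative sketch of CWL scatterMethod job decomposition.
import itertools

xs = ["a", "b"]
ys = [1, 2]

# dotproduct: arrays are aligned element-wise; one element is taken
# from each array per job (arrays must be the same length).
dot = list(zip(xs, ys))  # [('a', 1), ('b', 2)]

# nested_crossproduct: Cartesian product, with one level of output
# nesting per scattered input, in `scatter` field order.
nested = [[(x, y) for y in ys] for x in xs]
# [[('a', 1), ('a', 2)], [('b', 1), ('b', 2)]]

# flat_crossproduct: the same set of jobs, flattened to a single level.
flat = list(itertools.product(xs, ys))
# [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```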
The output must be nested arrays for each level of scattering, in the order that the input arrays are listed in the `scatter` field. * **flat_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output arrays must be flattened to a single level, but otherwise listed in the order that the input arrays are listed in the `scatter` field. # Subworkflows To specify a nested workflow as part of a workflow step, [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) must be specified in the workflow or workflow step requirements. It is a fatal error if a workflow directly or indirectly invokes itself as a subworkflow (recursive workflows are not allowed). """ id: str def __init__( self, in_: Any, out: Any, run: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, scatter: Optional[Any] = None, scatterMethod: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.in_ = in_ self.out = out self.requirements = requirements self.hints = hints self.run = run self.scatter = scatter self.scatterMethod = scatterMethod def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStep): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.in_ == other.in_ and self.out == other.out and self.requirements == other.requirements and self.hints == other.hints and self.run == other.run and self.scatter == other.scatter and self.scatterMethod == other.scatterMethod ) return 
False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.in_, self.out, self.requirements, self.hints, self.run, self.scatter, self.scatterMethod, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStep": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None 
) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("in") is None: raise ValidationException("missing required field `in`", None, []) in_ = load_field( _doc.get("in"), idmap_in__array_of_WorkflowStepInputLoader, baseuri, loadingOptions, lc=_doc.get("in") ) except ValidationException as e: 
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `in`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("in") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [e], detailed_message=f"the `in` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("out") is None: raise ValidationException("missing required field `out`", None, []) out = load_field( _doc.get("out"), uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("out") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `out`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("out") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [e], detailed_message=f"the `out` field with value `{val}` " "is not valid because:", ) ) requirements = None 
if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage 
= parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) subscope_baseuri = expand_url('run', baseuri, loadingOptions, True) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None, subscope_baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), 
[e], detailed_message=f"the `run` field with value `{val}` " "is not valid because:", ) ) scatter = None if "scatter" in _doc: try: scatter = load_field( _doc.get("scatter"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("scatter") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatter`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatter") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [e], detailed_message=f"the `scatter` field with value `{val}` " "is not valid because:", ) ) scatterMethod = None if "scatterMethod" in _doc: try: scatterMethod = load_field( _doc.get("scatterMethod"), uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("scatterMethod") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatterMethod`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatterMethod") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [e], detailed_message=f"the `scatterMethod` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `in`, `out`, `requirements`, `hints`, `run`, `scatter`, `scatterMethod`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, in_=in_, out=out, requirements=requirements, hints=hints, run=run, scatter=scatter, scatterMethod=scatterMethod, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.in_ is not None: r["in"] = save( self.in_, top=False, 
base_url=self.id, relative_uris=relative_uris ) if self.out is not None: u = save_relative_uri(self.out, self.id, True, None, relative_uris) r["out"] = u if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u if self.scatter is not None: u = save_relative_uri(self.scatter, self.id, False, 0, relative_uris) r["scatter"] = u if self.scatterMethod is not None: u = save_relative_uri( self.scatterMethod, self.id, False, None, relative_uris ) r["scatterMethod"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "in", "out", "requirements", "hints", "run", "scatter", "scatterMethod", ] ) class Workflow(Process): """ A workflow describes a set of **steps** and the **dependencies** between those steps. When a step produces output that will be consumed by a second step, the first step is a dependency of the second step. When there is a dependency, the workflow engine must execute the preceding step and wait for it to successfully produce output before executing the dependent step. If two steps are defined in the workflow graph that are not directly or indirectly dependent, these steps are **independent**, and may execute in any order or execute concurrently. A workflow is complete when all steps have been executed. Dependencies between parameters are expressed using the `source` field on [workflow step input parameters](#WorkflowStepInput) and [workflow output parameters](#WorkflowOutputParameter). 
The `source` field expresses the dependency of one parameter on another
such that when a value is associated with the parameter specified by
`source`, that value is propagated to the destination parameter. When all
data links inbound to a given step are fulfilled, the step is ready to
execute.

## Workflow success and failure

A completed step must result in one of `success`, `temporaryFailure` or
`permanentFailure` states. An implementation may choose to retry a step
execution which resulted in `temporaryFailure`. An implementation may
choose to either continue running other steps of a workflow, or terminate
immediately upon `permanentFailure`.

* If any step of a workflow execution results in `permanentFailure`, then
the workflow status is `permanentFailure`.

* If one or more steps result in `temporaryFailure` and all other steps
complete `success` or are not executed, then the workflow status is
`temporaryFailure`.

* If all workflow steps are executed and complete with `success`, then the
workflow status is `success`.

# Extensions

[ScatterFeatureRequirement](#ScatterFeatureRequirement) and
[SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) are
available as standard [extensions](#Extensions_and_Metadata) to core
workflow semantics.
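The `source` data links described above can be sketched with a minimal
workflow fragment (hypothetical; the step names, tool files, and parameter
names are illustrative only, not part of this specification):

```yaml
steps:
  filter:
    run: filter.cwl                         # hypothetical tool
    in:
      input: {source: raw_reads}            # from a workflow-level input
    out: [filtered]
  sort:
    run: sort.cwl                           # hypothetical tool
    in:
      input: {source: filter/filtered}      # depends on the `filter` step
    out: [sorted]
```

Here `sort` is a dependent of `filter`: the engine must wait for `filter`
to successfully produce `filtered` before `sort` becomes ready to execute.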
""" id: str def __init__( self, inputs: Any, outputs: Any, steps: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.class_ = "Workflow" self.steps = steps def __eq__(self, other: Any) -> bool: if isinstance(other, Workflow): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.class_ == other.class_ and self.steps == other.steps ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.class_, self.steps, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Workflow": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Workflow_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " 
f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_WorkflowInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_WorkflowOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), 
idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("steps") is None: raise ValidationException("missing required field `steps`", None, []) steps = load_field( _doc.get("steps"), idmap_steps_union_of_array_of_WorkflowStepLoader, baseuri, loadingOptions, lc=_doc.get("steps") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `steps`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("steps") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [e], detailed_message=f"the `steps` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( 
ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `class`, `steps`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, steps=steps, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: 
r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.steps is not None: r["steps"] = save( self.steps, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "class", "steps", ] ) class SubworkflowFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support nested workflows in the `run` field of [WorkflowStep](#WorkflowStep). """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SubworkflowFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, SubworkflowFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SubworkflowFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), 
uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class ScatterFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support the `scatter` and `scatterMethod` fields of [WorkflowStep](#WorkflowStep). 
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ScatterFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ScatterFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ScatterFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ScatterFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = 
{} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class MultipleInputFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support multiple inbound data links listed in the `source` field of [WorkflowStepInput](#WorkflowStepInput). """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MultipleInputFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, MultipleInputFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MultipleInputFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), 
uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class StepInputExpressionRequirement(ProcessRequirement): """ Indicates that the workflow platform must support the `valueFrom` field of [WorkflowStepInput](#WorkflowStepInput). 
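
    The `fromDoc` loaders in this module all partition unknown document keys
    the same way: keys containing `:` are kept as prefixed extension fields,
    while unknown bare keys become validation errors. A standalone sketch
    (the `cwltool:custom` key is a hypothetical extension; the real code
    additionally expands the prefixed key to a full URI via `expand_url`):

    ```python
    # Hedged sketch of the extension-field partitioning in ``fromDoc``.
    attrs = frozenset(["class"])  # the only declared field of this class
    doc = {"class": "StepInputExpressionRequirement", "cwltool:custom": 1}

    extension_fields = {}
    errors = []
    for k, v in doc.items():
        if k in attrs:
            continue
        if ":" in k:          # prefixed key -> preserved extension field
            extension_fields[k] = v
        else:                 # unknown bare key -> validation error
            errors.append(f"invalid field `{k}`, expected one of: `class`")

    assert extension_fields == {"cwltool:custom": 1}
    assert errors == []
    ```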
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "StepInputExpressionRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, StepInputExpressionRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "StepInputExpressionRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_StepInputExpressionRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: 
r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class Secrets(ProcessRequirement): def __init__( self, secrets: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Secrets" self.secrets = secrets def __eq__(self, other: Any) -> bool: if isinstance(other, Secrets): return bool(self.class_ == other.class_ and self.secrets == other.secrets) return False def __hash__(self) -> int: return hash((self.class_, self.secrets)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Secrets": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` 
but") except ValidationException as e: raise e try: if _doc.get("secrets") is None: raise ValidationException("missing required field `secrets`", None, []) secrets = load_field( _doc.get("secrets"), uri_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("secrets") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secrets`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secrets") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [e], detailed_message=f"the `secrets` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `secrets`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( secrets=secrets, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: 
r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.secrets is not None: u = save_relative_uri(self.secrets, base_url, False, 0, relative_uris) r["secrets"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "secrets"]) class ProcessGenerator(Process): id: str def __init__( self, inputs: Any, outputs: Any, run: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.class_ = "ProcessGenerator" self.run = run def __eq__(self, other: Any) -> bool: if isinstance(other, ProcessGenerator): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == 
other.cwlVersion and self.class_ == other.class_ and self.run == other.run ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.class_, self.run, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ProcessGenerator": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in 
(cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " 
f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") 
if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid 
because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) subscope_baseuri = expand_url('run', baseuri, loadingOptions, True) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None, subscope_baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [e], detailed_message=f"the `run` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `class`, `run`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, run=run, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: 
-len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "class", "run", ] ) class MPIRequirement(ProcessRequirement): """ Indicates that a process requires an MPI runtime. 
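
    A standalone sketch of the required-field check that `fromDoc` applies to
    `processes` (mirroring the generated validation flow, without the full
    `ValidationException` machinery):

    ```python
    # Hedged sketch: a missing ``processes`` key is reported before any
    # type validation of the field's value runs.
    doc = {"class": "MPIRequirement"}
    errors = []
    if doc.get("processes") is None:
        errors.append("missing required field `processes`")
    assert errors == ["missing required field `processes`"]
    ```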
""" def __init__( self, processes: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MPIRequirement" self.processes = processes def __eq__(self, other: Any) -> bool: if isinstance(other, MPIRequirement): return bool( self.class_ == other.class_ and self.processes == other.processes ) return False def __hash__(self) -> int: return hash((self.class_, self.processes)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MPIRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("processes") is None: raise ValidationException("missing required field `processes`", None, []) processes = load_field( _doc.get("processes"), union_of_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("processes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `processes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("processes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", 
str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", str), [e], detailed_message=f"the `processes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `processes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( processes=processes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.processes is not None: r["processes"] = save( self.processes, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "processes"]) class CUDARequirement(ProcessRequirement): """ Require support for NVIDA CUDA (GPU hardware acceleration). """ def __init__( self, cudaComputeCapability: Any, cudaVersionMin: Any, cudaDeviceCountMax: Optional[Any] = None, cudaDeviceCountMin: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "CUDARequirement" self.cudaComputeCapability = cudaComputeCapability self.cudaDeviceCountMax = cudaDeviceCountMax self.cudaDeviceCountMin = cudaDeviceCountMin self.cudaVersionMin = cudaVersionMin def __eq__(self, other: Any) -> bool: if isinstance(other, CUDARequirement): return bool( self.class_ == other.class_ and self.cudaComputeCapability == other.cudaComputeCapability and self.cudaDeviceCountMax == other.cudaDeviceCountMax and self.cudaDeviceCountMin == other.cudaDeviceCountMin and self.cudaVersionMin == other.cudaVersionMin ) return False def __hash__(self) -> int: return hash( ( self.class_, self.cudaComputeCapability, self.cudaDeviceCountMax, self.cudaDeviceCountMin, self.cudaVersionMin, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CUDARequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise 
ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("cudaComputeCapability") is None: raise ValidationException("missing required field `cudaComputeCapability`", None, []) cudaComputeCapability = load_field( _doc.get("cudaComputeCapability"), union_of_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("cudaComputeCapability") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaComputeCapability`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaComputeCapability") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [e], detailed_message=f"the `cudaComputeCapability` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMax = None if "cudaDeviceCountMax" in _doc: try: cudaDeviceCountMax = load_field( _doc.get("cudaDeviceCountMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [e], detailed_message=f"the `cudaDeviceCountMax` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMin = None if "cudaDeviceCountMin" in _doc: try: cudaDeviceCountMin = load_field( _doc.get("cudaDeviceCountMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [e], detailed_message=f"the `cudaDeviceCountMin` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("cudaVersionMin") is None: raise ValidationException("missing required field `cudaVersionMin`", None, []) cudaVersionMin = 
load_field( _doc.get("cudaVersionMin"), strtype, baseuri, loadingOptions, lc=_doc.get("cudaVersionMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaVersionMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaVersionMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [e], detailed_message=f"the `cudaVersionMin` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `cudaComputeCapability`, `cudaDeviceCountMax`, `cudaDeviceCountMin`, `cudaVersionMin`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( cudaComputeCapability=cudaComputeCapability, cudaDeviceCountMax=cudaDeviceCountMax, cudaDeviceCountMin=cudaDeviceCountMin, cudaVersionMin=cudaVersionMin, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: 
r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.cudaComputeCapability is not None: r["cudaComputeCapability"] = save( self.cudaComputeCapability, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMax is not None: r["cudaDeviceCountMax"] = save( self.cudaDeviceCountMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMin is not None: r["cudaDeviceCountMin"] = save( self.cudaDeviceCountMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaVersionMin is not None: r["cudaVersionMin"] = save( self.cudaVersionMin, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "cudaComputeCapability", "cudaDeviceCountMax", "cudaDeviceCountMin", "cudaVersionMin", ] ) class ShmSize(ProcessRequirement): def __init__( self, shmSize: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShmSize" self.shmSize = shmSize def __eq__(self, other: Any) -> bool: if isinstance(other, ShmSize): return bool(self.class_ == other.class_ 
and self.shmSize == other.shmSize) return False def __hash__(self) -> int: return hash((self.class_, self.shmSize)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShmSize": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("shmSize") is None: raise ValidationException("missing required field `shmSize`", None, []) shmSize = load_field( _doc.get("shmSize"), strtype, baseuri, loadingOptions, lc=_doc.get("shmSize") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shmSize`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shmSize") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [e], detailed_message=f"the `shmSize` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( 
ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `shmSize`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( shmSize=shmSize, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.shmSize is not None: r["shmSize"] = save( self.shmSize, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "shmSize"]) _vocab = { "Any": "https://w3id.org/cwl/salad#Any", "ArraySchema": "https://w3id.org/cwl/salad#ArraySchema", "CUDARequirement": "http://commonwl.org/cwltool#CUDARequirement", "CWLArraySchema": "https://w3id.org/cwl/cwl#CWLArraySchema", "CWLInputFile": "https://w3id.org/cwl/cwl#CWLInputFile", "CWLObjectType": "https://w3id.org/cwl/cwl#CWLObjectType", "CWLRecordField": "https://w3id.org/cwl/cwl#CWLRecordField", "CWLRecordSchema": "https://w3id.org/cwl/cwl#CWLRecordSchema", "CWLType": 
"https://w3id.org/cwl/cwl#CWLType", "CWLVersion": "https://w3id.org/cwl/cwl#CWLVersion", "CommandInputArraySchema": "https://w3id.org/cwl/cwl#CommandInputArraySchema", "CommandInputEnumSchema": "https://w3id.org/cwl/cwl#CommandInputEnumSchema", "CommandInputParameter": "https://w3id.org/cwl/cwl#CommandInputParameter", "CommandInputRecordField": "https://w3id.org/cwl/cwl#CommandInputRecordField", "CommandInputRecordSchema": "https://w3id.org/cwl/cwl#CommandInputRecordSchema", "CommandInputSchema": "https://w3id.org/cwl/cwl#CommandInputSchema", "CommandLineBindable": "https://w3id.org/cwl/cwl#CommandLineBindable", "CommandLineBinding": "https://w3id.org/cwl/cwl#CommandLineBinding", "CommandLineTool": "https://w3id.org/cwl/cwl#CommandLineTool", "CommandOutputArraySchema": "https://w3id.org/cwl/cwl#CommandOutputArraySchema", "CommandOutputBinding": "https://w3id.org/cwl/cwl#CommandOutputBinding", "CommandOutputEnumSchema": "https://w3id.org/cwl/cwl#CommandOutputEnumSchema", "CommandOutputParameter": "https://w3id.org/cwl/cwl#CommandOutputParameter", "CommandOutputRecordField": "https://w3id.org/cwl/cwl#CommandOutputRecordField", "CommandOutputRecordSchema": "https://w3id.org/cwl/cwl#CommandOutputRecordSchema", "Directory": "https://w3id.org/cwl/cwl#Directory", "Dirent": "https://w3id.org/cwl/cwl#Dirent", "DockerRequirement": "https://w3id.org/cwl/cwl#DockerRequirement", "Documented": "https://w3id.org/cwl/salad#Documented", "EnumSchema": "https://w3id.org/cwl/salad#EnumSchema", "EnvVarRequirement": "https://w3id.org/cwl/cwl#EnvVarRequirement", "EnvironmentDef": "https://w3id.org/cwl/cwl#EnvironmentDef", "Expression": "https://w3id.org/cwl/cwl#Expression", "ExpressionPlaceholder": "https://w3id.org/cwl/cwl#ExpressionPlaceholder", "ExpressionTool": "https://w3id.org/cwl/cwl#ExpressionTool", "ExpressionToolOutputParameter": "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter", "FieldBase": "https://w3id.org/cwl/cwl#FieldBase", "File": "https://w3id.org/cwl/cwl#File", 
"IOSchema": "https://w3id.org/cwl/cwl#IOSchema", "Identified": "https://w3id.org/cwl/cwl#Identified", "InitialWorkDirRequirement": "https://w3id.org/cwl/cwl#InitialWorkDirRequirement", "InlineJavascriptRequirement": "https://w3id.org/cwl/cwl#InlineJavascriptRequirement", "InplaceUpdateRequirement": "https://w3id.org/cwl/cwl#InplaceUpdateRequirement", "InputArraySchema": "https://w3id.org/cwl/cwl#InputArraySchema", "InputBinding": "https://w3id.org/cwl/cwl#InputBinding", "InputEnumSchema": "https://w3id.org/cwl/cwl#InputEnumSchema", "InputFormat": "https://w3id.org/cwl/cwl#InputFormat", "InputParameter": "https://w3id.org/cwl/cwl#InputParameter", "InputRecordField": "https://w3id.org/cwl/cwl#InputRecordField", "InputRecordSchema": "https://w3id.org/cwl/cwl#InputRecordSchema", "InputSchema": "https://w3id.org/cwl/cwl#InputSchema", "Labeled": "https://w3id.org/cwl/cwl#Labeled", "LinkMergeMethod": "https://w3id.org/cwl/cwl#LinkMergeMethod", "LoadContents": "https://w3id.org/cwl/cwl#LoadContents", "LoadListingEnum": "https://w3id.org/cwl/cwl#LoadListingEnum", "LoadListingRequirement": "https://w3id.org/cwl/cwl#LoadListingRequirement", "MPIRequirement": "http://commonwl.org/cwltool#MPIRequirement", "MapSchema": "https://w3id.org/cwl/salad#MapSchema", "MultipleInputFeatureRequirement": "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement", "NetworkAccess": "https://w3id.org/cwl/cwl#NetworkAccess", "OutputArraySchema": "https://w3id.org/cwl/cwl#OutputArraySchema", "OutputEnumSchema": "https://w3id.org/cwl/cwl#OutputEnumSchema", "OutputFormat": "https://w3id.org/cwl/cwl#OutputFormat", "OutputParameter": "https://w3id.org/cwl/cwl#OutputParameter", "OutputRecordField": "https://w3id.org/cwl/cwl#OutputRecordField", "OutputRecordSchema": "https://w3id.org/cwl/cwl#OutputRecordSchema", "OutputSchema": "https://w3id.org/cwl/cwl#OutputSchema", "Parameter": "https://w3id.org/cwl/cwl#Parameter", "PrimitiveType": "https://w3id.org/cwl/salad#PrimitiveType", "Process": 
"https://w3id.org/cwl/cwl#Process", "ProcessGenerator": "http://commonwl.org/cwltool#ProcessGenerator", "ProcessRequirement": "https://w3id.org/cwl/cwl#ProcessRequirement", "RecordField": "https://w3id.org/cwl/salad#RecordField", "RecordSchema": "https://w3id.org/cwl/salad#RecordSchema", "ResourceRequirement": "https://w3id.org/cwl/cwl#ResourceRequirement", "ScatterFeatureRequirement": "https://w3id.org/cwl/cwl#ScatterFeatureRequirement", "ScatterMethod": "https://w3id.org/cwl/cwl#ScatterMethod", "SchemaDefRequirement": "https://w3id.org/cwl/cwl#SchemaDefRequirement", "SecondaryFileSchema": "https://w3id.org/cwl/cwl#SecondaryFileSchema", "Secrets": "http://commonwl.org/cwltool#Secrets", "ShellCommandRequirement": "https://w3id.org/cwl/cwl#ShellCommandRequirement", "ShmSize": "http://commonwl.org/cwltool#ShmSize", "Sink": "https://w3id.org/cwl/cwl#Sink", "SoftwarePackage": "https://w3id.org/cwl/cwl#SoftwarePackage", "SoftwareRequirement": "https://w3id.org/cwl/cwl#SoftwareRequirement", "StepInputExpressionRequirement": "https://w3id.org/cwl/cwl#StepInputExpressionRequirement", "SubworkflowFeatureRequirement": "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement", "ToolTimeLimit": "https://w3id.org/cwl/cwl#ToolTimeLimit", "UnionSchema": "https://w3id.org/cwl/salad#UnionSchema", "WorkReuse": "https://w3id.org/cwl/cwl#WorkReuse", "Workflow": "https://w3id.org/cwl/cwl#Workflow", "WorkflowInputParameter": "https://w3id.org/cwl/cwl#WorkflowInputParameter", "WorkflowOutputParameter": "https://w3id.org/cwl/cwl#WorkflowOutputParameter", "WorkflowStep": "https://w3id.org/cwl/cwl#WorkflowStep", "WorkflowStepInput": "https://w3id.org/cwl/cwl#WorkflowStepInput", "WorkflowStepOutput": "https://w3id.org/cwl/cwl#WorkflowStepOutput", "array": "https://w3id.org/cwl/salad#array", "boolean": "http://www.w3.org/2001/XMLSchema#boolean", "deep_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/deep_listing", "dotproduct": "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct", "double": 
"http://www.w3.org/2001/XMLSchema#double", "draft-2": "https://w3id.org/cwl/cwl#draft-2", "draft-3": "https://w3id.org/cwl/cwl#draft-3", "draft-3.dev1": "https://w3id.org/cwl/cwl#draft-3.dev1", "draft-3.dev2": "https://w3id.org/cwl/cwl#draft-3.dev2", "draft-3.dev3": "https://w3id.org/cwl/cwl#draft-3.dev3", "draft-3.dev4": "https://w3id.org/cwl/cwl#draft-3.dev4", "draft-3.dev5": "https://w3id.org/cwl/cwl#draft-3.dev5", "draft-4.dev1": "https://w3id.org/cwl/cwl#draft-4.dev1", "draft-4.dev2": "https://w3id.org/cwl/cwl#draft-4.dev2", "draft-4.dev3": "https://w3id.org/cwl/cwl#draft-4.dev3", "enum": "https://w3id.org/cwl/salad#enum", "flat_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct", "float": "http://www.w3.org/2001/XMLSchema#float", "int": "http://www.w3.org/2001/XMLSchema#int", "long": "http://www.w3.org/2001/XMLSchema#long", "map": "https://w3id.org/cwl/salad#map", "merge_flattened": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened", "merge_nested": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested", "nested_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct", "no_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/no_listing", "null": "https://w3id.org/cwl/salad#null", "record": "https://w3id.org/cwl/salad#record", "shallow_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/shallow_listing", "stderr": "https://w3id.org/cwl/cwl#stderr", "stdin": "https://w3id.org/cwl/cwl#stdin", "stdout": "https://w3id.org/cwl/cwl#stdout", "string": "http://www.w3.org/2001/XMLSchema#string", "union": "https://w3id.org/cwl/salad#union", "v1.0": "https://w3id.org/cwl/cwl#v1.0", "v1.0.dev4": "https://w3id.org/cwl/cwl#v1.0.dev4", "v1.1": "https://w3id.org/cwl/cwl#v1.1", "v1.1.0-dev1": "https://w3id.org/cwl/cwl#v1.1.0-dev1", } _rvocab = { "https://w3id.org/cwl/salad#Any": "Any", "https://w3id.org/cwl/salad#ArraySchema": "ArraySchema", "http://commonwl.org/cwltool#CUDARequirement": "CUDARequirement", 
"https://w3id.org/cwl/cwl#CWLArraySchema": "CWLArraySchema", "https://w3id.org/cwl/cwl#CWLInputFile": "CWLInputFile", "https://w3id.org/cwl/cwl#CWLObjectType": "CWLObjectType", "https://w3id.org/cwl/cwl#CWLRecordField": "CWLRecordField", "https://w3id.org/cwl/cwl#CWLRecordSchema": "CWLRecordSchema", "https://w3id.org/cwl/cwl#CWLType": "CWLType", "https://w3id.org/cwl/cwl#CWLVersion": "CWLVersion", "https://w3id.org/cwl/cwl#CommandInputArraySchema": "CommandInputArraySchema", "https://w3id.org/cwl/cwl#CommandInputEnumSchema": "CommandInputEnumSchema", "https://w3id.org/cwl/cwl#CommandInputParameter": "CommandInputParameter", "https://w3id.org/cwl/cwl#CommandInputRecordField": "CommandInputRecordField", "https://w3id.org/cwl/cwl#CommandInputRecordSchema": "CommandInputRecordSchema", "https://w3id.org/cwl/cwl#CommandInputSchema": "CommandInputSchema", "https://w3id.org/cwl/cwl#CommandLineBindable": "CommandLineBindable", "https://w3id.org/cwl/cwl#CommandLineBinding": "CommandLineBinding", "https://w3id.org/cwl/cwl#CommandLineTool": "CommandLineTool", "https://w3id.org/cwl/cwl#CommandOutputArraySchema": "CommandOutputArraySchema", "https://w3id.org/cwl/cwl#CommandOutputBinding": "CommandOutputBinding", "https://w3id.org/cwl/cwl#CommandOutputEnumSchema": "CommandOutputEnumSchema", "https://w3id.org/cwl/cwl#CommandOutputParameter": "CommandOutputParameter", "https://w3id.org/cwl/cwl#CommandOutputRecordField": "CommandOutputRecordField", "https://w3id.org/cwl/cwl#CommandOutputRecordSchema": "CommandOutputRecordSchema", "https://w3id.org/cwl/cwl#Directory": "Directory", "https://w3id.org/cwl/cwl#Dirent": "Dirent", "https://w3id.org/cwl/cwl#DockerRequirement": "DockerRequirement", "https://w3id.org/cwl/salad#Documented": "Documented", "https://w3id.org/cwl/salad#EnumSchema": "EnumSchema", "https://w3id.org/cwl/cwl#EnvVarRequirement": "EnvVarRequirement", "https://w3id.org/cwl/cwl#EnvironmentDef": "EnvironmentDef", "https://w3id.org/cwl/cwl#Expression": "Expression", 
"https://w3id.org/cwl/cwl#ExpressionPlaceholder": "ExpressionPlaceholder", "https://w3id.org/cwl/cwl#ExpressionTool": "ExpressionTool", "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter": "ExpressionToolOutputParameter", "https://w3id.org/cwl/cwl#FieldBase": "FieldBase", "https://w3id.org/cwl/cwl#File": "File", "https://w3id.org/cwl/cwl#IOSchema": "IOSchema", "https://w3id.org/cwl/cwl#Identified": "Identified", "https://w3id.org/cwl/cwl#InitialWorkDirRequirement": "InitialWorkDirRequirement", "https://w3id.org/cwl/cwl#InlineJavascriptRequirement": "InlineJavascriptRequirement", "https://w3id.org/cwl/cwl#InplaceUpdateRequirement": "InplaceUpdateRequirement", "https://w3id.org/cwl/cwl#InputArraySchema": "InputArraySchema", "https://w3id.org/cwl/cwl#InputBinding": "InputBinding", "https://w3id.org/cwl/cwl#InputEnumSchema": "InputEnumSchema", "https://w3id.org/cwl/cwl#InputFormat": "InputFormat", "https://w3id.org/cwl/cwl#InputParameter": "InputParameter", "https://w3id.org/cwl/cwl#InputRecordField": "InputRecordField", "https://w3id.org/cwl/cwl#InputRecordSchema": "InputRecordSchema", "https://w3id.org/cwl/cwl#InputSchema": "InputSchema", "https://w3id.org/cwl/cwl#Labeled": "Labeled", "https://w3id.org/cwl/cwl#LinkMergeMethod": "LinkMergeMethod", "https://w3id.org/cwl/cwl#LoadContents": "LoadContents", "https://w3id.org/cwl/cwl#LoadListingEnum": "LoadListingEnum", "https://w3id.org/cwl/cwl#LoadListingRequirement": "LoadListingRequirement", "http://commonwl.org/cwltool#MPIRequirement": "MPIRequirement", "https://w3id.org/cwl/salad#MapSchema": "MapSchema", "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement": "MultipleInputFeatureRequirement", "https://w3id.org/cwl/cwl#NetworkAccess": "NetworkAccess", "https://w3id.org/cwl/cwl#OutputArraySchema": "OutputArraySchema", "https://w3id.org/cwl/cwl#OutputEnumSchema": "OutputEnumSchema", "https://w3id.org/cwl/cwl#OutputFormat": "OutputFormat", "https://w3id.org/cwl/cwl#OutputParameter": "OutputParameter", 
"https://w3id.org/cwl/cwl#OutputRecordField": "OutputRecordField", "https://w3id.org/cwl/cwl#OutputRecordSchema": "OutputRecordSchema", "https://w3id.org/cwl/cwl#OutputSchema": "OutputSchema", "https://w3id.org/cwl/cwl#Parameter": "Parameter", "https://w3id.org/cwl/salad#PrimitiveType": "PrimitiveType", "https://w3id.org/cwl/cwl#Process": "Process", "http://commonwl.org/cwltool#ProcessGenerator": "ProcessGenerator", "https://w3id.org/cwl/cwl#ProcessRequirement": "ProcessRequirement", "https://w3id.org/cwl/salad#RecordField": "RecordField", "https://w3id.org/cwl/salad#RecordSchema": "RecordSchema", "https://w3id.org/cwl/cwl#ResourceRequirement": "ResourceRequirement", "https://w3id.org/cwl/cwl#ScatterFeatureRequirement": "ScatterFeatureRequirement", "https://w3id.org/cwl/cwl#ScatterMethod": "ScatterMethod", "https://w3id.org/cwl/cwl#SchemaDefRequirement": "SchemaDefRequirement", "https://w3id.org/cwl/cwl#SecondaryFileSchema": "SecondaryFileSchema", "http://commonwl.org/cwltool#Secrets": "Secrets", "https://w3id.org/cwl/cwl#ShellCommandRequirement": "ShellCommandRequirement", "http://commonwl.org/cwltool#ShmSize": "ShmSize", "https://w3id.org/cwl/cwl#Sink": "Sink", "https://w3id.org/cwl/cwl#SoftwarePackage": "SoftwarePackage", "https://w3id.org/cwl/cwl#SoftwareRequirement": "SoftwareRequirement", "https://w3id.org/cwl/cwl#StepInputExpressionRequirement": "StepInputExpressionRequirement", "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement": "SubworkflowFeatureRequirement", "https://w3id.org/cwl/cwl#ToolTimeLimit": "ToolTimeLimit", "https://w3id.org/cwl/salad#UnionSchema": "UnionSchema", "https://w3id.org/cwl/cwl#WorkReuse": "WorkReuse", "https://w3id.org/cwl/cwl#Workflow": "Workflow", "https://w3id.org/cwl/cwl#WorkflowInputParameter": "WorkflowInputParameter", "https://w3id.org/cwl/cwl#WorkflowOutputParameter": "WorkflowOutputParameter", "https://w3id.org/cwl/cwl#WorkflowStep": "WorkflowStep", "https://w3id.org/cwl/cwl#WorkflowStepInput": "WorkflowStepInput", 
"https://w3id.org/cwl/cwl#WorkflowStepOutput": "WorkflowStepOutput", "https://w3id.org/cwl/salad#array": "array", "http://www.w3.org/2001/XMLSchema#boolean": "boolean", "https://w3id.org/cwl/cwl#LoadListingEnum/deep_listing": "deep_listing", "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct": "dotproduct", "http://www.w3.org/2001/XMLSchema#double": "double", "https://w3id.org/cwl/cwl#draft-2": "draft-2", "https://w3id.org/cwl/cwl#draft-3": "draft-3", "https://w3id.org/cwl/cwl#draft-3.dev1": "draft-3.dev1", "https://w3id.org/cwl/cwl#draft-3.dev2": "draft-3.dev2", "https://w3id.org/cwl/cwl#draft-3.dev3": "draft-3.dev3", "https://w3id.org/cwl/cwl#draft-3.dev4": "draft-3.dev4", "https://w3id.org/cwl/cwl#draft-3.dev5": "draft-3.dev5", "https://w3id.org/cwl/cwl#draft-4.dev1": "draft-4.dev1", "https://w3id.org/cwl/cwl#draft-4.dev2": "draft-4.dev2", "https://w3id.org/cwl/cwl#draft-4.dev3": "draft-4.dev3", "https://w3id.org/cwl/salad#enum": "enum", "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct": "flat_crossproduct", "http://www.w3.org/2001/XMLSchema#float": "float", "http://www.w3.org/2001/XMLSchema#int": "int", "http://www.w3.org/2001/XMLSchema#long": "long", "https://w3id.org/cwl/salad#map": "map", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened": "merge_flattened", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested": "merge_nested", "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct": "nested_crossproduct", "https://w3id.org/cwl/cwl#LoadListingEnum/no_listing": "no_listing", "https://w3id.org/cwl/salad#null": "null", "https://w3id.org/cwl/salad#record": "record", "https://w3id.org/cwl/cwl#LoadListingEnum/shallow_listing": "shallow_listing", "https://w3id.org/cwl/cwl#stderr": "stderr", "https://w3id.org/cwl/cwl#stdin": "stdin", "https://w3id.org/cwl/cwl#stdout": "stdout", "http://www.w3.org/2001/XMLSchema#string": "string", "https://w3id.org/cwl/salad#union": "union", "https://w3id.org/cwl/cwl#v1.0": "v1.0", 
"https://w3id.org/cwl/cwl#v1.0.dev4": "v1.0.dev4", "https://w3id.org/cwl/cwl#v1.1": "v1.1", "https://w3id.org/cwl/cwl#v1.1.0-dev1": "v1.1.0-dev1", } strtype = _PrimitiveLoader(str) inttype = _PrimitiveLoader(int) floattype = _PrimitiveLoader(float) booltype = _PrimitiveLoader(bool) None_type = _PrimitiveLoader(type(None)) Any_type = _AnyLoader() PrimitiveTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", ), "PrimitiveType", ) """ Names of salad data types (based on Avro schema declarations). Refer to the [Avro schema declaration documentation](https://avro.apache.org/docs/current/spec.html#schemas) for detailed information. null: no value boolean: a binary value int: 32-bit signed integer long: 64-bit signed integer float: single precision (32-bit) IEEE 754 floating-point number double: double precision (64-bit) IEEE 754 floating-point number string: Unicode character sequence """ AnyLoader = _EnumLoader(("Any",), "Any") """ The **Any** type validates for any non-null value. """ RecordFieldLoader = _RecordLoader(RecordField, None, None) RecordSchemaLoader = _RecordLoader(RecordSchema, None, None) EnumSchemaLoader = _RecordLoader(EnumSchema, None, None) ArraySchemaLoader = _RecordLoader(ArraySchema, None, None) MapSchemaLoader = _RecordLoader(MapSchema, None, None) UnionSchemaLoader = _RecordLoader(UnionSchema, None, None) CWLTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", "File", "Directory", ), "CWLType", ) """ Extends primitive types with the concept of a file and directory as a builtin type. 
File: A File object Directory: A Directory object """ CWLArraySchemaLoader = _RecordLoader(CWLArraySchema, None, None) CWLRecordFieldLoader = _RecordLoader(CWLRecordField, None, None) CWLRecordSchemaLoader = _RecordLoader(CWLRecordSchema, None, None) FileLoader = _RecordLoader(File, None, None) DirectoryLoader = _RecordLoader(Directory, None, None) CWLObjectTypeLoader = _UnionLoader((), "CWLObjectTypeLoader") union_of_None_type_or_CWLObjectTypeLoader = _UnionLoader( ( None_type, CWLObjectTypeLoader, ) ) array_of_union_of_None_type_or_CWLObjectTypeLoader = _ArrayLoader( union_of_None_type_or_CWLObjectTypeLoader ) map_of_union_of_None_type_or_CWLObjectTypeLoader = _MapLoader( union_of_None_type_or_CWLObjectTypeLoader, "None", None, None ) InlineJavascriptRequirementLoader = _RecordLoader( InlineJavascriptRequirement, None, None ) SchemaDefRequirementLoader = _RecordLoader(SchemaDefRequirement, None, None) LoadListingRequirementLoader = _RecordLoader(LoadListingRequirement, None, None) DockerRequirementLoader = _RecordLoader(DockerRequirement, None, None) SoftwareRequirementLoader = _RecordLoader(SoftwareRequirement, None, None) InitialWorkDirRequirementLoader = _RecordLoader(InitialWorkDirRequirement, None, None) EnvVarRequirementLoader = _RecordLoader(EnvVarRequirement, None, None) ShellCommandRequirementLoader = _RecordLoader(ShellCommandRequirement, None, None) ResourceRequirementLoader = _RecordLoader(ResourceRequirement, None, None) WorkReuseLoader = _RecordLoader(WorkReuse, None, None) NetworkAccessLoader = _RecordLoader(NetworkAccess, None, None) InplaceUpdateRequirementLoader = _RecordLoader(InplaceUpdateRequirement, None, None) ToolTimeLimitLoader = _RecordLoader(ToolTimeLimit, None, None) SubworkflowFeatureRequirementLoader = _RecordLoader( SubworkflowFeatureRequirement, None, None ) ScatterFeatureRequirementLoader = _RecordLoader(ScatterFeatureRequirement, None, None) MultipleInputFeatureRequirementLoader = _RecordLoader( MultipleInputFeatureRequirement, 
None, None ) StepInputExpressionRequirementLoader = _RecordLoader( StepInputExpressionRequirement, None, None ) SecretsLoader = _RecordLoader(Secrets, None, None) MPIRequirementLoader = _RecordLoader(MPIRequirement, None, None) CUDARequirementLoader = _RecordLoader(CUDARequirement, None, None) ShmSizeLoader = _RecordLoader(ShmSize, None, None) union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _UnionLoader( ( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, LoadListingRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, WorkReuseLoader, NetworkAccessLoader, InplaceUpdateRequirementLoader, ToolTimeLimitLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, SecretsLoader, MPIRequirementLoader, CUDARequirementLoader, ShmSizeLoader, ) ) 
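The `_UnionLoader` instances above accept a document if any one of their alternatives accepts it. A minimal self-contained sketch of that first-match-wins semantics, using hypothetical `EnumLoader`/`UnionLoader` classes (not the generated ones, which also track source locations and richer error context):

```python
class ValidationException(Exception):
    pass


class EnumLoader:
    """Accept only one of a fixed set of symbols."""

    def __init__(self, symbols, name):
        self.symbols = symbols
        self.name = name

    def load(self, doc):
        if doc not in self.symbols:
            raise ValidationException(f"{doc!r} is not a valid {self.name}")
        return doc


class UnionLoader:
    """Try each alternative loader in order; the first success wins."""

    def __init__(self, alternates):
        self.alternates = alternates

    def load(self, doc):
        errors = []
        for alt in self.alternates:
            try:
                return alt.load(doc)
            except ValidationException as e:
                errors.append(str(e))
        # No alternative matched: report every failure.
        raise ValidationException("; ".join(errors))


primitive = EnumLoader(("null", "boolean", "int", "string"), "PrimitiveType")
cwl_type = UnionLoader((primitive, EnumLoader(("File", "Directory"), "CWLType")))
print(cwl_type.load("File"))  # accepted by the second alternative
```

This is why the generated union names enumerate every alternative: the order of the tuple is the order in which candidate loaders are tried.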
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _ArrayLoader( union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader = _UnionLoader( ( None_type, 
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, CWLObjectTypeLoader, ) ) map_of_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader = _MapLoader( 
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader, "CWLInputFile", "@list", True, ) CWLInputFileLoader = map_of_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader CWLVersionLoader = _EnumLoader( ( "draft-2", "draft-3.dev1", "draft-3.dev2", "draft-3.dev3", "draft-3.dev4", "draft-3.dev5", "draft-3", "draft-4.dev1", "draft-4.dev2", "draft-4.dev3", "v1.0.dev4", "v1.0", "v1.1.0-dev1", "v1.1", ), "CWLVersion", ) """ Version symbols for published CWL document versions. """ LoadListingEnumLoader = _EnumLoader( ( "no_listing", "shallow_listing", "deep_listing", ), "LoadListingEnum", ) """ Specify the desired behavior for loading the `listing` field of a Directory object for use by expressions. no_listing: Do not load the directory listing. 
shallow_listing: Only load the top level listing, do not recurse into subdirectories.
deep_listing: Load the directory listing and recursively load all subdirectories as well.
"""
ExpressionLoader = _ExpressionLoader(str)
InputBindingLoader = _RecordLoader(InputBinding, None, None)
InputRecordFieldLoader = _RecordLoader(InputRecordField, None, None)
InputRecordSchemaLoader = _RecordLoader(InputRecordSchema, None, None)
InputEnumSchemaLoader = _RecordLoader(InputEnumSchema, None, None)
InputArraySchemaLoader = _RecordLoader(InputArraySchema, None, None)
OutputRecordFieldLoader = _RecordLoader(OutputRecordField, None, None)
OutputRecordSchemaLoader = _RecordLoader(OutputRecordSchema, None, None)
OutputEnumSchemaLoader = _RecordLoader(OutputEnumSchema, None, None)
OutputArraySchemaLoader = _RecordLoader(OutputArraySchema, None, None)
SecondaryFileSchemaLoader = _RecordLoader(SecondaryFileSchema, None, None)
EnvironmentDefLoader = _RecordLoader(EnvironmentDef, None, None)
CommandLineBindingLoader = _RecordLoader(CommandLineBinding, None, None)
CommandOutputBindingLoader = _RecordLoader(CommandOutputBinding, None, None)
CommandLineBindableLoader = _RecordLoader(CommandLineBindable, None, None)
CommandInputRecordFieldLoader = _RecordLoader(CommandInputRecordField, None, None)
CommandInputRecordSchemaLoader = _RecordLoader(CommandInputRecordSchema, None, None)
CommandInputEnumSchemaLoader = _RecordLoader(CommandInputEnumSchema, None, None)
CommandInputArraySchemaLoader = _RecordLoader(CommandInputArraySchema, None, None)
CommandOutputRecordFieldLoader = _RecordLoader(CommandOutputRecordField, None, None)
CommandOutputRecordSchemaLoader = _RecordLoader(CommandOutputRecordSchema, None, None)
CommandOutputEnumSchemaLoader = _RecordLoader(CommandOutputEnumSchema, None, None)
CommandOutputArraySchemaLoader = _RecordLoader(CommandOutputArraySchema, None, None)
CommandInputParameterLoader = _RecordLoader(CommandInputParameter, None, None)
CommandOutputParameterLoader = _RecordLoader(CommandOutputParameter, None, None)
stdinLoader = _EnumLoader(("stdin",), "stdin")
"""
Only valid as a `type` for a `CommandLineTool` input with no `inputBinding` set.
`stdin` must not be specified at the `CommandLineTool` level.

The following

```
inputs:
  an_input_name:
    type: stdin
```

is equivalent to

```
inputs:
  an_input_name:
    type: File
    streamable: true

stdin: $(inputs.an_input_name.path)
```
"""
stdoutLoader = _EnumLoader(("stdout",), "stdout")
"""
Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set.

The following

```
outputs:
  an_output_name:
    type: stdout

stdout: a_stdout_file
```

is equivalent to

```
outputs:
  an_output_name:
    type: File
    streamable: true
    outputBinding:
      glob: a_stdout_file

stdout: a_stdout_file
```

If there is no `stdout` name provided, a random filename will be created.
For example, the following

```
outputs:
  an_output_name:
    type: stdout
```

is equivalent to

```
outputs:
  an_output_name:
    type: File
    streamable: true
    outputBinding:
      glob: random_stdout_filenameABCDEFG

stdout: random_stdout_filenameABCDEFG
```
"""
stderrLoader = _EnumLoader(("stderr",), "stderr")
"""
Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set.

The following

```
outputs:
  an_output_name:
    type: stderr

stderr: a_stderr_file
```

is equivalent to

```
outputs:
  an_output_name:
    type: File
    streamable: true
    outputBinding:
      glob: a_stderr_file

stderr: a_stderr_file
```

If there is no `stderr` name provided, a random filename will be created.
For example, the following

```
outputs:
  an_output_name:
    type: stderr
```

is equivalent to

```
outputs:
  an_output_name:
    type: File
    streamable: true
    outputBinding:
      glob: random_stderr_filenameABCDEFG

stderr: random_stderr_filenameABCDEFG
```
"""
CommandLineToolLoader = _RecordLoader(CommandLineTool, None, None)
SoftwarePackageLoader = _RecordLoader(SoftwarePackage, None, None)
DirentLoader = _RecordLoader(Dirent, None, None)
ExpressionToolOutputParameterLoader = _RecordLoader(
    ExpressionToolOutputParameter, None, None
)
WorkflowInputParameterLoader = _RecordLoader(WorkflowInputParameter, None, None)
ExpressionToolLoader = _RecordLoader(ExpressionTool, None, None)
LinkMergeMethodLoader = _EnumLoader(
    (
        "merge_nested",
        "merge_flattened",
    ),
    "LinkMergeMethod",
)
"""
The input link merge method, described in [WorkflowStepInput](#WorkflowStepInput).
"""
WorkflowOutputParameterLoader = _RecordLoader(WorkflowOutputParameter, None, None)
WorkflowStepInputLoader = _RecordLoader(WorkflowStepInput, None, None)
WorkflowStepOutputLoader = _RecordLoader(WorkflowStepOutput, None, None)
ScatterMethodLoader = _EnumLoader(
    (
        "dotproduct",
        "nested_crossproduct",
        "flat_crossproduct",
    ),
    "ScatterMethod",
)
"""
The scatter method, as described in [workflow step scatter](#WorkflowStep).
""" WorkflowStepLoader = _RecordLoader(WorkflowStep, None, None) WorkflowLoader = _RecordLoader(Workflow, None, None) ProcessGeneratorLoader = _RecordLoader(ProcessGenerator, None, None) array_of_strtype = _ArrayLoader(strtype) union_of_None_type_or_strtype_or_array_of_strtype = _UnionLoader( ( None_type, strtype, array_of_strtype, ) ) uri_strtype_True_False_None_None = _URILoader(strtype, True, False, None, None) union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, RecordSchemaLoader, EnumSchemaLoader, ArraySchemaLoader, MapSchemaLoader, UnionSchemaLoader, strtype, ) ) array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _ArrayLoader( union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype ) union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, RecordSchemaLoader, EnumSchemaLoader, ArraySchemaLoader, MapSchemaLoader, UnionSchemaLoader, strtype, array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, ) ) typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2 = _TypeDSLLoader( 
union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, 2, "v1.1", ) array_of_RecordFieldLoader = _ArrayLoader(RecordFieldLoader) union_of_None_type_or_array_of_RecordFieldLoader = _UnionLoader( ( None_type, array_of_RecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_RecordFieldLoader, "name", "type" ) Record_nameLoader = _EnumLoader(("record",), "Record_name") typedsl_Record_nameLoader_2 = _TypeDSLLoader(Record_nameLoader, 2, "v1.1") union_of_None_type_or_strtype = _UnionLoader( ( None_type, strtype, ) ) uri_union_of_None_type_or_strtype_True_False_None_None = _URILoader( union_of_None_type_or_strtype, True, False, None, None ) uri_array_of_strtype_True_False_None_None = _URILoader( array_of_strtype, True, False, None, None ) Enum_nameLoader = _EnumLoader(("enum",), "Enum_name") typedsl_Enum_nameLoader_2 = _TypeDSLLoader(Enum_nameLoader, 2, "v1.1") uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, False, True, 2, None, ) Array_nameLoader = _EnumLoader(("array",), "Array_name") typedsl_Array_nameLoader_2 = _TypeDSLLoader(Array_nameLoader, 2, "v1.1") Map_nameLoader = 
_EnumLoader(("map",), "Map_name") typedsl_Map_nameLoader_2 = _TypeDSLLoader(Map_nameLoader, 2, "v1.1") Union_nameLoader = _EnumLoader(("union",), "Union_name") typedsl_Union_nameLoader_2 = _TypeDSLLoader(Union_nameLoader, 2, "v1.1") union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, CWLRecordSchemaLoader, EnumSchemaLoader, CWLArraySchemaLoader, strtype, ) ) array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _ArrayLoader( union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype ) union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, CWLRecordSchemaLoader, EnumSchemaLoader, CWLArraySchemaLoader, strtype, array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, ) ) uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, False, True, 2, None, ) typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( 
union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CWLRecordFieldLoader = _ArrayLoader(CWLRecordFieldLoader) union_of_None_type_or_array_of_CWLRecordFieldLoader = _UnionLoader( ( None_type, array_of_CWLRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_CWLRecordFieldLoader, "name", "type" ) File_classLoader = _EnumLoader(("File",), "File_class") uri_File_classLoader_False_True_None_None = _URILoader( File_classLoader, False, True, None, None ) uri_union_of_None_type_or_strtype_False_False_None_None = _URILoader( union_of_None_type_or_strtype, False, False, None, None ) union_of_None_type_or_inttype = _UnionLoader( ( None_type, inttype, ) ) union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( FileLoader, DirectoryLoader, ) ) array_of_union_of_FileLoader_or_DirectoryLoader = _ArrayLoader( union_of_FileLoader_or_DirectoryLoader ) union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( None_type, array_of_union_of_FileLoader_or_DirectoryLoader, ) ) secondaryfilesdsl_union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( _SecondaryDSLLoader( union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader ), union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, ) ) uri_union_of_None_type_or_strtype_True_False_None_True = _URILoader( union_of_None_type_or_strtype, True, False, None, True ) Directory_classLoader = _EnumLoader(("Directory",), "Directory_class") uri_Directory_classLoader_False_True_None_None = _URILoader( Directory_classLoader, False, True, None, None ) union_of_None_type_or_booltype = _UnionLoader( ( None_type, booltype, ) ) union_of_None_type_or_LoadListingEnumLoader = _UnionLoader( ( 
None_type, LoadListingEnumLoader, ) ) array_of_SecondaryFileSchemaLoader = _ArrayLoader(SecondaryFileSchemaLoader) union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader = _UnionLoader( ( None_type, SecondaryFileSchemaLoader, array_of_SecondaryFileSchemaLoader, ) ) secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader = _UnionLoader( ( _SecondaryDSLLoader( union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader ), union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, ) ) union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader = _UnionLoader( ( None_type, strtype, array_of_strtype, ExpressionLoader, ) ) uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader, True, False, None, True, ) union_of_None_type_or_strtype_or_ExpressionLoader = _UnionLoader( ( None_type, strtype, ExpressionLoader, ) ) uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True = _URILoader( union_of_None_type_or_strtype_or_ExpressionLoader, True, False, None, True ) union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, InputRecordSchemaLoader, InputEnumSchemaLoader, InputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader( ( 
CWLTypeLoader, InputRecordSchemaLoader, InputEnumSchemaLoader, InputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_InputRecordFieldLoader = _ArrayLoader(InputRecordFieldLoader) union_of_None_type_or_array_of_InputRecordFieldLoader = _UnionLoader( ( None_type, array_of_InputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_InputRecordFieldLoader, "name", "type" ) uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, OutputRecordSchemaLoader, OutputEnumSchemaLoader, OutputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _ArrayLoader( 
union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, OutputRecordSchemaLoader, OutputEnumSchemaLoader, OutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_OutputRecordFieldLoader = _ArrayLoader(OutputRecordFieldLoader) union_of_None_type_or_array_of_OutputRecordFieldLoader = _UnionLoader( ( None_type, array_of_OutputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_OutputRecordFieldLoader, "name", "type" ) uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( 
union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader = _UnionLoader( ( CommandInputParameterLoader, WorkflowInputParameterLoader, ) ) array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader = ( _ArrayLoader(union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader) ) idmap_inputs_array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader = _IdMapLoader( array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader, "id", "type", ) union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader = _UnionLoader( ( CommandOutputParameterLoader, ExpressionToolOutputParameterLoader, WorkflowOutputParameterLoader, ) ) array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader = _ArrayLoader( union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader ) idmap_outputs_array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader = _IdMapLoader( array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader, "id", "type", ) 
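The `idmap_inputs_*` and `idmap_outputs_*` loaders just defined implement schema-salad's "identifier map" shorthand: a mapping keyed by `id` is rewritten into the equivalent list-of-records form before the inner loader runs. A simplified, self-contained sketch of that normalization (hypothetical helper; the generated `_IdMapLoader` also validates types and preserves source positions):

```python
def idmap_to_list(doc, map_subject, map_predicate):
    """Rewrite {key: value} shorthand into [{map_subject: key, ...}, ...]."""
    if not isinstance(doc, dict):
        return doc  # already a list (or a scalar): pass through unchanged
    out = []
    for key, value in doc.items():
        if isinstance(value, dict):
            # Full record form: fold the key in as the subject field.
            entry = dict(value)
            entry[map_subject] = key
        else:
            # Scalar shorthand: the value stands for the predicate field.
            entry = {map_subject: key, map_predicate: value}
        out.append(entry)
    return out


# "inputs" keyed by id; values are either a bare type or a full record:
inputs = {"message": "string", "data": {"type": "File", "streamable": True}}
print(idmap_to_list(inputs, "id", "type"))
```

This is why each `_IdMapLoader` above is constructed with a subject field name (`"id"` or `"name"`) and a predicate field name (`"type"` or `"class"`).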
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = _UnionLoader( ( None_type, array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, ) ) idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader = 
_IdMapLoader( union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader, "class", "None", ) union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( ( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, LoadListingRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, WorkReuseLoader, NetworkAccessLoader, InplaceUpdateRequirementLoader, ToolTimeLimitLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, SecretsLoader, MPIRequirementLoader, CUDARequirementLoader, ShmSizeLoader, Any_type, ) ) 
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _ArrayLoader( union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( ( None_type, 
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, ) ) idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type = _IdMapLoader( 
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_ShmSizeLoader_or_Any_type, "class", "None", ) union_of_None_type_or_CWLVersionLoader = _UnionLoader( ( None_type, CWLVersionLoader, ) ) uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None = _URILoader( union_of_None_type_or_CWLVersionLoader, False, True, None, None ) InlineJavascriptRequirement_classLoader = _EnumLoader( ("InlineJavascriptRequirement",), "InlineJavascriptRequirement_class" ) uri_InlineJavascriptRequirement_classLoader_False_True_None_None = _URILoader( InlineJavascriptRequirement_classLoader, False, True, None, None ) union_of_None_type_or_array_of_strtype = _UnionLoader( ( None_type, array_of_strtype, ) ) SchemaDefRequirement_classLoader = _EnumLoader( ("SchemaDefRequirement",), "SchemaDefRequirement_class" ) uri_SchemaDefRequirement_classLoader_False_True_None_None = _URILoader( SchemaDefRequirement_classLoader, False, True, None, None ) union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader = _UnionLoader( ( CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, ) ) array_of_union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader = _ArrayLoader( union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader ) 
union_of_strtype_or_ExpressionLoader = _UnionLoader( ( strtype, ExpressionLoader, ) ) union_of_None_type_or_booltype_or_ExpressionLoader = _UnionLoader( ( None_type, booltype, ExpressionLoader, ) ) LoadListingRequirement_classLoader = _EnumLoader( ("LoadListingRequirement",), "LoadListingRequirement_class" ) uri_LoadListingRequirement_classLoader_False_True_None_None = _URILoader( LoadListingRequirement_classLoader, False, True, None, None ) union_of_None_type_or_inttype_or_ExpressionLoader = _UnionLoader( ( None_type, inttype, ExpressionLoader, ) ) union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype = _UnionLoader( ( None_type, strtype, ExpressionLoader, array_of_strtype, ) ) union_of_None_type_or_ExpressionLoader = _UnionLoader( ( None_type, ExpressionLoader, ) ) union_of_None_type_or_CommandLineBindingLoader = _UnionLoader( ( None_type, CommandLineBindingLoader, ) ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, 
array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CommandInputRecordFieldLoader = _ArrayLoader(CommandInputRecordFieldLoader) union_of_None_type_or_array_of_CommandInputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandInputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader = ( _IdMapLoader( union_of_None_type_or_array_of_CommandInputRecordFieldLoader, "name", "type" ) ) uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, 
CommandOutputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) union_of_None_type_or_CommandOutputBindingLoader = _UnionLoader( ( None_type, CommandOutputBindingLoader, ) ) array_of_CommandOutputRecordFieldLoader = _ArrayLoader(CommandOutputRecordFieldLoader) union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandOutputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = ( _IdMapLoader( 
union_of_None_type_or_array_of_CommandOutputRecordFieldLoader, "name", "type" ) ) uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, stdinLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) 
union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, stdoutLoader, stderrLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) CommandLineTool_classLoader = _EnumLoader(("CommandLineTool",), "CommandLineTool_class") uri_CommandLineTool_classLoader_False_True_None_None = _URILoader( CommandLineTool_classLoader, False, True, None, None ) array_of_CommandInputParameterLoader = _ArrayLoader(CommandInputParameterLoader) idmap_inputs_array_of_CommandInputParameterLoader = _IdMapLoader( array_of_CommandInputParameterLoader, "id", "type" ) array_of_CommandOutputParameterLoader = _ArrayLoader(CommandOutputParameterLoader) idmap_outputs_array_of_CommandOutputParameterLoader = _IdMapLoader( array_of_CommandOutputParameterLoader, "id", "type" ) union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = 
_UnionLoader( ( strtype, ExpressionLoader, CommandLineBindingLoader, ) ) array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = ( _ArrayLoader(union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader) ) union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = _UnionLoader( ( None_type, array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, ) ) array_of_inttype = _ArrayLoader(inttype) union_of_None_type_or_array_of_inttype = _UnionLoader( ( None_type, array_of_inttype, ) ) DockerRequirement_classLoader = _EnumLoader( ("DockerRequirement",), "DockerRequirement_class" ) uri_DockerRequirement_classLoader_False_True_None_None = _URILoader( DockerRequirement_classLoader, False, True, None, None ) SoftwareRequirement_classLoader = _EnumLoader( ("SoftwareRequirement",), "SoftwareRequirement_class" ) uri_SoftwareRequirement_classLoader_False_True_None_None = _URILoader( SoftwareRequirement_classLoader, False, True, None, None ) array_of_SoftwarePackageLoader = _ArrayLoader(SoftwarePackageLoader) idmap_packages_array_of_SoftwarePackageLoader = _IdMapLoader( array_of_SoftwarePackageLoader, "package", "specs" ) uri_union_of_None_type_or_array_of_strtype_False_False_None_True = _URILoader( union_of_None_type_or_array_of_strtype, False, False, None, True ) InitialWorkDirRequirement_classLoader = _EnumLoader( ("InitialWorkDirRequirement",), "InitialWorkDirRequirement_class" ) uri_InitialWorkDirRequirement_classLoader_False_True_None_None = _URILoader( InitialWorkDirRequirement_classLoader, False, True, None, None ) union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader = _UnionLoader( ( None_type, FileLoader, array_of_union_of_FileLoader_or_DirectoryLoader, DirectoryLoader, DirentLoader, ExpressionLoader, ) ) 
array_of_union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader = _ArrayLoader( union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader ) union_of_array_of_union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader_or_ExpressionLoader = _UnionLoader( ( array_of_union_of_None_type_or_FileLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader_or_DirectoryLoader_or_DirentLoader_or_ExpressionLoader, ExpressionLoader, ) ) EnvVarRequirement_classLoader = _EnumLoader( ("EnvVarRequirement",), "EnvVarRequirement_class" ) uri_EnvVarRequirement_classLoader_False_True_None_None = _URILoader( EnvVarRequirement_classLoader, False, True, None, None ) array_of_EnvironmentDefLoader = _ArrayLoader(EnvironmentDefLoader) idmap_envDef_array_of_EnvironmentDefLoader = _IdMapLoader( array_of_EnvironmentDefLoader, "envName", "envValue" ) ShellCommandRequirement_classLoader = _EnumLoader( ("ShellCommandRequirement",), "ShellCommandRequirement_class" ) uri_ShellCommandRequirement_classLoader_False_True_None_None = _URILoader( ShellCommandRequirement_classLoader, False, True, None, None ) ResourceRequirement_classLoader = _EnumLoader( ("ResourceRequirement",), "ResourceRequirement_class" ) uri_ResourceRequirement_classLoader_False_True_None_None = _URILoader( ResourceRequirement_classLoader, False, True, None, None ) WorkReuse_classLoader = _EnumLoader(("WorkReuse",), "WorkReuse_class") uri_WorkReuse_classLoader_False_True_None_None = _URILoader( WorkReuse_classLoader, False, True, None, None ) union_of_booltype_or_ExpressionLoader = _UnionLoader( ( booltype, ExpressionLoader, ) ) NetworkAccess_classLoader = _EnumLoader(("NetworkAccess",), "NetworkAccess_class") uri_NetworkAccess_classLoader_False_True_None_None = _URILoader( 
NetworkAccess_classLoader, False, True, None, None ) InplaceUpdateRequirement_classLoader = _EnumLoader( ("InplaceUpdateRequirement",), "InplaceUpdateRequirement_class" ) uri_InplaceUpdateRequirement_classLoader_False_True_None_None = _URILoader( InplaceUpdateRequirement_classLoader, False, True, None, None ) ToolTimeLimit_classLoader = _EnumLoader(("ToolTimeLimit",), "ToolTimeLimit_class") uri_ToolTimeLimit_classLoader_False_True_None_None = _URILoader( ToolTimeLimit_classLoader, False, True, None, None ) union_of_inttype_or_ExpressionLoader = _UnionLoader( ( inttype, ExpressionLoader, ) ) union_of_None_type_or_InputBindingLoader = _UnionLoader( ( None_type, InputBindingLoader, ) ) ExpressionTool_classLoader = _EnumLoader(("ExpressionTool",), "ExpressionTool_class") uri_ExpressionTool_classLoader_False_True_None_None = _URILoader( ExpressionTool_classLoader, False, True, None, None ) array_of_WorkflowInputParameterLoader = _ArrayLoader(WorkflowInputParameterLoader) idmap_inputs_array_of_WorkflowInputParameterLoader = _IdMapLoader( array_of_WorkflowInputParameterLoader, "id", "type" ) array_of_ExpressionToolOutputParameterLoader = _ArrayLoader( ExpressionToolOutputParameterLoader ) idmap_outputs_array_of_ExpressionToolOutputParameterLoader = _IdMapLoader( array_of_ExpressionToolOutputParameterLoader, "id", "type" ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 1, None ) union_of_None_type_or_LinkMergeMethodLoader = _UnionLoader( ( None_type, LinkMergeMethodLoader, ) ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 2, None ) array_of_WorkflowStepInputLoader = _ArrayLoader(WorkflowStepInputLoader) idmap_in__array_of_WorkflowStepInputLoader = _IdMapLoader( array_of_WorkflowStepInputLoader, "id", "source" ) union_of_strtype_or_WorkflowStepOutputLoader = 
_UnionLoader( ( strtype, WorkflowStepOutputLoader, ) ) array_of_union_of_strtype_or_WorkflowStepOutputLoader = _ArrayLoader( union_of_strtype_or_WorkflowStepOutputLoader ) union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader = _UnionLoader( (array_of_union_of_strtype_or_WorkflowStepOutputLoader,) ) uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None = _URILoader( union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader, True, False, None, None, ) array_of_Any_type = _ArrayLoader(Any_type) union_of_None_type_or_array_of_Any_type = _UnionLoader( ( None_type, array_of_Any_type, ) ) idmap_hints_union_of_None_type_or_array_of_Any_type = _IdMapLoader( union_of_None_type_or_array_of_Any_type, "class", "None" ) union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader( ( strtype, CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, ProcessGeneratorLoader, ) ) uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_False_False_None_None = _URILoader( union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader, False, False, None, None, ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 0, None ) union_of_None_type_or_ScatterMethodLoader = _UnionLoader( ( None_type, ScatterMethodLoader, ) ) uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None = _URILoader( union_of_None_type_or_ScatterMethodLoader, False, True, None, None ) Workflow_classLoader = _EnumLoader(("Workflow",), "Workflow_class") uri_Workflow_classLoader_False_True_None_None = _URILoader( Workflow_classLoader, False, True, None, None ) array_of_WorkflowOutputParameterLoader = _ArrayLoader(WorkflowOutputParameterLoader) 
idmap_outputs_array_of_WorkflowOutputParameterLoader = _IdMapLoader( array_of_WorkflowOutputParameterLoader, "id", "type" ) array_of_WorkflowStepLoader = _ArrayLoader(WorkflowStepLoader) union_of_array_of_WorkflowStepLoader = _UnionLoader((array_of_WorkflowStepLoader,)) idmap_steps_union_of_array_of_WorkflowStepLoader = _IdMapLoader( union_of_array_of_WorkflowStepLoader, "id", "None" ) SubworkflowFeatureRequirement_classLoader = _EnumLoader( ("SubworkflowFeatureRequirement",), "SubworkflowFeatureRequirement_class" ) uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None = _URILoader( SubworkflowFeatureRequirement_classLoader, False, True, None, None ) ScatterFeatureRequirement_classLoader = _EnumLoader( ("ScatterFeatureRequirement",), "ScatterFeatureRequirement_class" ) uri_ScatterFeatureRequirement_classLoader_False_True_None_None = _URILoader( ScatterFeatureRequirement_classLoader, False, True, None, None ) MultipleInputFeatureRequirement_classLoader = _EnumLoader( ("MultipleInputFeatureRequirement",), "MultipleInputFeatureRequirement_class" ) uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None = _URILoader( MultipleInputFeatureRequirement_classLoader, False, True, None, None ) StepInputExpressionRequirement_classLoader = _EnumLoader( ("StepInputExpressionRequirement",), "StepInputExpressionRequirement_class" ) uri_StepInputExpressionRequirement_classLoader_False_True_None_None = _URILoader( StepInputExpressionRequirement_classLoader, False, True, None, None ) uri_strtype_False_True_None_None = _URILoader(strtype, False, True, None, None) uri_array_of_strtype_False_False_0_None = _URILoader( array_of_strtype, False, False, 0, None ) union_of_strtype_or_array_of_strtype = _UnionLoader( ( strtype, array_of_strtype, ) ) union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader( ( CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, ProcessGeneratorLoader, ) ) 
array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _ArrayLoader(
    union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader
)
union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader = _UnionLoader(
    (
        CommandLineToolLoader,
        ExpressionToolLoader,
        WorkflowLoader,
        ProcessGeneratorLoader,
        array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
    )
)
CWLObjectTypeLoader.add_loaders(
    (
        booltype,
        inttype,
        floattype,
        strtype,
        FileLoader,
        DirectoryLoader,
        array_of_union_of_None_type_or_CWLObjectTypeLoader,
        map_of_union_of_None_type_or_CWLObjectTypeLoader,
    )
)


def load_document(
    doc: Any,
    baseuri: Optional[str] = None,
    loadingOptions: Optional[LoadingOptions] = None,
) -> Any:
    if baseuri is None:
        baseuri = file_uri(os.getcwd()) + "/"
    if loadingOptions is None:
        loadingOptions = LoadingOptions()
    result, metadata = _document_load(
        union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
        doc,
        baseuri,
        loadingOptions,
    )
    return result


def load_document_with_metadata(
    doc: Any,
    baseuri: Optional[str] = None,
    loadingOptions: Optional[LoadingOptions] = None,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> Any:
    if baseuri is None:
        baseuri = file_uri(os.getcwd()) + "/"
    if loadingOptions is None:
        loadingOptions = LoadingOptions(fileuri=baseuri)
    return _document_load(
        union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
        doc,
        baseuri,
        loadingOptions,
        addl_metadata_fields=addl_metadata_fields,
    )


def load_document_by_string(
    string: Any,
    uri: str,
    loadingOptions: Optional[LoadingOptions] = None,
) -> Any:
    yaml = yaml_no_ts()
    result = yaml.load(string)
    add_lc_filename(result, uri)
    if loadingOptions is None:
        loadingOptions = LoadingOptions(fileuri=uri)
    result, metadata = _document_load(
        union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
        result,
        uri,
        loadingOptions,
    )
    return result


def load_document_by_yaml(
    yaml: Any,
    uri: str,
    loadingOptions: Optional[LoadingOptions] = None,
) -> Any:
    """
    Shortcut to load via a YAML object.

    yaml: must be from ruamel.yaml.main.YAML.load with preserve_quotes=True
    """
    add_lc_filename(yaml, uri)
    if loadingOptions is None:
        loadingOptions = LoadingOptions(fileuri=uri)
    result, metadata = _document_load(
        union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_ProcessGeneratorLoader,
        yaml,
        uri,
        loadingOptions,
    )
    return result

cwl_utils-0.37/cwl_utils/parser/cwl_v1_1_utils.py

# SPDX-License-Identifier: Apache-2.0
import hashlib
import logging
import os
from collections import namedtuple
from collections.abc import MutableMapping, MutableSequence
from io import StringIO
from typing import IO, Any, Optional, Union, cast
from urllib.parse import urldefrag

from schema_salad.exceptions import ValidationException
from schema_salad.sourceline import SourceLine, add_lc_filename
from schema_salad.utils import aslist, json_dumps, yaml_no_ts

import cwl_utils.parser
import cwl_utils.parser.cwl_v1_1 as cwl
import cwl_utils.parser.utils
from cwl_utils.errors \
import WorkflowException
from cwl_utils.utils import yaml_dumps

CONTENT_LIMIT: int = 64 * 1024

_logger = logging.getLogger("cwl_utils")

SrcSink = namedtuple("SrcSink", ["src", "sink", "linkMerge", "message"])


def _compare_records(
    src: cwl.RecordSchema, sink: cwl.RecordSchema, strict: bool = False
) -> bool:
    """
    Compare two records, ensuring they have compatible fields.

    This handles normalizing record names, which will be relative to the
    workflow step, so that they can be compared.
    """
    srcfields = {cwl.shortname(field.name): field.type_ for field in (src.fields or {})}
    sinkfields = {
        cwl.shortname(field.name): field.type_ for field in (sink.fields or {})
    }
    for key in sinkfields.keys():
        if (
            not can_assign_src_to_sink(
                srcfields.get(key, "null"), sinkfields.get(key, "null"), strict
            )
            and sinkfields.get(key) is not None
        ):
            _logger.info(
                "Record comparison failure for %s and %s\n"
                "Did not match fields for %s: %s and %s",
                cast(
                    Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], src
                ).name,
                cast(
                    Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], sink
                ).name,
                key,
                srcfields.get(key),
                sinkfields.get(key),
            )
            return False
    return True


def _compare_type(type1: Any, type2: Any) -> bool:
    if isinstance(type1, cwl.ArraySchema) and isinstance(type2, cwl.ArraySchema):
        return _compare_type(type1.items, type2.items)
    elif isinstance(type1, cwl.RecordSchema) and isinstance(type2, cwl.RecordSchema):
        fields1 = {
            cwl.shortname(field.name): field.type_ for field in (type1.fields or {})
        }
        fields2 = {
            cwl.shortname(field.name): field.type_ for field in (type2.fields or {})
        }
        if fields1.keys() != fields2.keys():
            return False
        return all(_compare_type(fields1[k], fields2[k]) for k in fields1.keys())
    elif isinstance(type1, MutableSequence) and isinstance(type2, MutableSequence):
        if len(type1) != len(type2):
            return False
        for t1 in type1:
            if not any(_compare_type(t1, t2) for t2 in type2):
                return False
        return True
    else:
        return bool(type1 == type2)


def _inputfile_load(
    doc:
        Union[str, MutableMapping[str, Any], MutableSequence[Any]],
    baseuri: str,
    loadingOptions: cwl.LoadingOptions,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> tuple[Any, cwl.LoadingOptions]:
    loader = cwl.CWLInputFileLoader
    if isinstance(doc, str):
        url = loadingOptions.fetcher.urljoin(baseuri, doc)
        if url in loadingOptions.idx:
            return loadingOptions.idx[url]
        doc_url, frg = urldefrag(url)
        text = loadingOptions.fetcher.fetch_text(doc_url)
        textIO = StringIO(text)
        textIO.name = str(doc_url)
        yaml = yaml_no_ts()
        result = yaml.load(textIO)
        add_lc_filename(result, doc_url)
        loadingOptions = cwl.LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url)
        _inputfile_load(
            result,
            doc_url,
            loadingOptions,
        )
        return loadingOptions.idx[url]

    if isinstance(doc, MutableMapping):
        addl_metadata = {}
        if addl_metadata_fields is not None:
            for mf in addl_metadata_fields:
                if mf in doc:
                    addl_metadata[mf] = doc[mf]
        loadingOptions = cwl.LoadingOptions(
            copyfrom=loadingOptions,
            baseuri=baseuri,
            addl_metadata=addl_metadata,
        )
        loadingOptions.idx[baseuri] = (
            loader.load(doc, baseuri, loadingOptions, docRoot=baseuri),
            loadingOptions,
        )
        return loadingOptions.idx[baseuri]

    if isinstance(doc, MutableSequence):
        loadingOptions.idx[baseuri] = (
            loader.load(doc, baseuri, loadingOptions),
            loadingOptions,
        )
        return loadingOptions.idx[baseuri]

    raise ValidationException(
        "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc)
    )


def can_assign_src_to_sink(src: Any, sink: Any, strict: bool = False) -> bool:
    """
    Check for identical type specifications, ignoring extra keys like inputBinding.

    src: admissible source types
    sink: admissible sink types

    In non-strict comparison, at least one source type must match one sink type,
    except for 'null'.
    In strict comparison, all source types must match at least one sink type.
""" if src == "Any" or sink == "Any": return True if isinstance(src, cwl.ArraySchema) and isinstance(sink, cwl.ArraySchema): return can_assign_src_to_sink(src.items, sink.items, strict) if isinstance(src, cwl.RecordSchema) and isinstance(sink, cwl.RecordSchema): return _compare_records(src, sink, strict) if isinstance(src, MutableSequence): if strict: for this_src in src: if not can_assign_src_to_sink(this_src, sink): return False return True for this_src in src: if this_src != "null" and can_assign_src_to_sink(this_src, sink): return True return False if isinstance(sink, MutableSequence): for this_sink in sink: if can_assign_src_to_sink(src, this_sink): return True return False return bool(src == sink) def check_all_types( src_dict: dict[str, Any], sinks: MutableSequence[Union[cwl.WorkflowStepInput, cwl.WorkflowOutputParameter]], type_dict: dict[str, Any], ) -> dict[str, list[SrcSink]]: """Given a list of sinks, check if their types match with the types of their sources.""" validation: dict[str, list[SrcSink]] = {"warning": [], "exception": []} for sink in sinks: if isinstance(sink, cwl.WorkflowOutputParameter): sourceName = "outputSource" sourceField = sink.outputSource elif isinstance(sink, cwl.WorkflowStepInput): sourceName = "source" sourceField = sink.source else: continue if sourceField is not None: if isinstance(sourceField, MutableSequence): linkMerge = sink.linkMerge or ( "merge_nested" if len(sourceField) > 1 else None ) srcs_of_sink = [] for parm_id in sourceField: srcs_of_sink += [src_dict[parm_id]] else: parm_id = cast(str, sourceField) if parm_id not in src_dict: raise SourceLine(sink, sourceName, ValidationException).makeError( f"{sourceName} not found: {parm_id}" ) srcs_of_sink = [src_dict[parm_id]] linkMerge = None for src in srcs_of_sink: check_result = check_types( type_dict[cast(str, src.id)], type_dict[sink.id], linkMerge, getattr(sink, "valueFrom", None), ) if check_result == "warning": validation["warning"].append(SrcSink(src, sink, 
linkMerge, None)) elif check_result == "exception": validation["exception"].append(SrcSink(src, sink, linkMerge, None)) return validation def check_types( srctype: Any, sinktype: Any, linkMerge: Optional[str], valueFrom: Optional[str] = None, ) -> str: """ Check if the source and sink types are correct. Acceptable types are "pass", "warning", or "exception". """ if valueFrom is not None: return "pass" if linkMerge is None: if can_assign_src_to_sink(srctype, sinktype, strict=True): return "pass" if can_assign_src_to_sink(srctype, sinktype, strict=False): return "warning" return "exception" if linkMerge == "merge_nested": return check_types( cwl.ArraySchema(items=srctype, type_="array"), sinktype, None, None ) if linkMerge == "merge_flattened": return check_types(merge_flatten_type(srctype), sinktype, None, None) raise ValidationException(f"Invalid value {linkMerge} for linkMerge field.") def content_limit_respected_read_bytes(f: IO[bytes]) -> bytes: """ Read file content up to 64 kB as a byte array. Truncate content for larger files. """ return f.read(CONTENT_LIMIT) def content_limit_respected_read(f: IO[bytes]) -> str: """ Read file content up to 64 kB as an utf-8 encoded string. Truncate content for larger files. """ return content_limit_respected_read_bytes(f).decode("utf-8") def convert_stdstreams_to_files(clt: cwl.CommandLineTool) -> None: """Convert stdin, stdout and stderr type shortcuts to files.""" for out in clt.outputs: if out.type_ == "stdout": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stdout shortcut." ) if clt.stdout is None: clt.stdout = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stdout) elif out.type_ == "stderr": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stderr shortcut." 
) if clt.stderr is None: clt.stderr = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stderr) for inp in clt.inputs: if inp.type_ == "stdin": if inp.inputBinding is not None: raise ValidationException( "Not allowed to specify unputBinding when using stdin shortcut." ) if clt.stdin is not None: raise ValidationException( "Not allowed to specify stdin path when using stdin type shortcut." ) else: clt.stdin = ( "$(inputs.%s.path)" % cast(str, inp.id).rpartition("#")[2].split("/")[-1] ) inp.type_ = "File" def load_inputfile( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.1 input file from a serialized YAML string or a YAML object.""" if baseuri is None: baseuri = cwl.file_uri(os.getcwd()) + "/" if loadingOptions is None: loadingOptions = cwl.LoadingOptions() result, metadata = _inputfile_load( doc, baseuri, loadingOptions, ) return result def load_inputfile_by_string( string: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.1 input file from a serialized YAML string.""" yaml = yaml_no_ts() result = yaml.load(string) add_lc_filename(result, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( result, uri, loadingOptions, ) return result def load_inputfile_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.1 input file from a YAML object.""" add_lc_filename(yaml, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( yaml, uri, loadingOptions, ) return result def merge_flatten_type(src: Any) -> Any: """Return the merge flattened type of the source type.""" if isinstance(src, MutableSequence): return [merge_flatten_type(t) for t in src] if 
isinstance(src, cwl.ArraySchema): return src return cwl.ArraySchema(type_="array", items=src) def type_for_step_input( step: cwl.WorkflowStep, in_: cwl.WorkflowStepInput, ) -> Any: """Determine the type for the given step input.""" if in_.valueFrom is not None: return "Any" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.inputs: for step_input in step_run.inputs: if cast(str, step_input.id).split("#")[-1] == in_.id.split("#")[-1]: input_type = step_input.type_ if step.scatter is not None and in_.id in aslist(step.scatter): input_type = cwl.ArraySchema(items=input_type, type_="array") return input_type return "Any" def type_for_step_output( step: cwl.WorkflowStep, sourcename: str, ) -> Any: """Determine the type for the given step output.""" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.outputs: for output in step_run.outputs: if ( output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): output_type = output.type_ if step.scatter is not None: if step.scatterMethod == "nested_crossproduct": for _ in range(len(aslist(step.scatter))): output_type = cwl.ArraySchema( items=output_type, type_="array" ) else: output_type = cwl.ArraySchema(items=output_type, type_="array") return output_type raise ValidationException( "param {} not found in {}.".format( sourcename, yaml_dumps(cwl.save(step)), ) ) def type_for_source( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, linkMerge: Optional[str] = None, ) -> Any: """Determine the type for the given sourcenames.""" scatter_context: list[Optional[tuple[int, str]]] = [] params = param_for_source_id(process, sourcenames, parent, scatter_context) if not isinstance(params, list): new_type = params.type_ if scatter_context[0] is not 
None: if scatter_context[0][1] == "nested_crossproduct": for _ in range(scatter_context[0][0]): new_type = cwl.ArraySchema(items=new_type, type_="array") else: new_type = cwl.ArraySchema(items=new_type, type_="array") if linkMerge == "merge_nested": new_type = cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": new_type = merge_flatten_type(new_type) return new_type new_type = [] for p, sc in zip(params, scatter_context): if isinstance(p, str) and not any(_compare_type(t, p) for t in new_type): cur_type = p elif hasattr(p, "type_") and not any( _compare_type(t, p.type_) for t in new_type ): cur_type = p.type_ else: cur_type = None if cur_type is not None: if sc is not None: if sc[1] == "nested_crossproduct": for _ in range(sc[0]): cur_type = cwl.ArraySchema(items=cur_type, type_="array") else: cur_type = cwl.ArraySchema(items=cur_type, type_="array") new_type.append(cur_type) if len(new_type) == 1: new_type = new_type[0] if linkMerge == "merge_nested": return cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": return merge_flatten_type(new_type) elif isinstance(sourcenames, list) and len(sourcenames) > 1: return cwl.ArraySchema(items=new_type, type_="array") else: return new_type def param_for_source_id( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, scatter_context: Optional[list[Optional[tuple[int, str]]]] = None, ) -> Union[list[cwl.WorkflowInputParameter], cwl.WorkflowInputParameter]: """Find the process input parameter that matches one of the given sourcenames.""" if isinstance(sourcenames, str): sourcenames = [sourcenames] params: list[cwl.WorkflowInputParameter] = [] for sourcename in sourcenames: if not isinstance(process, cwl.Workflow): for param in process.inputs: if param.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(param) if scatter_context is not None: 
scatter_context.append(None) targets = [process] if parent: targets.append(parent) for target in targets: if isinstance(target, cwl.Workflow): for inp in target.inputs: if inp.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(inp) if scatter_context is not None: scatter_context.append(None) for step in target.steps: if ( "/".join(sourcename.split("#")[-1].split("/")[:-1]) == step.id.split("#")[-1] and step.out ): step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) for outp in step.out: outp_id = outp if isinstance(outp, str) else outp.id if ( outp_id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): if step_run and step_run.outputs: for output in step_run.outputs: if ( output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): params.append(output) if scatter_context is not None: if scatter_context is not None: if isinstance(step.scatter, str): scatter_context.append( ( 1, step.scatterMethod or "dotproduct", ) ) elif isinstance( step.scatter, MutableSequence ): scatter_context.append( ( len(step.scatter), step.scatterMethod or "dotproduct", ) ) else: scatter_context.append(None) if len(params) == 1: return params[0] elif len(params) > 1: return params raise WorkflowException( "param {} not found in {}\n{}.".format( sourcename, yaml_dumps(cwl.save(process)), (f" or\n {yaml_dumps(cwl.save(parent))}" if parent is not None else ""), ) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/parser/cwl_v1_2.py0000644000175100001660000501746114753463342020373 0ustar00runnerdocker# # This file was autogenerated using schema-salad-tool --codegen=python # The code itself is released under the Apache 2.0 license and the help text is # subject to the license of the original schema. 
import copy import logging import os import pathlib import tempfile import uuid as _uuid__ # pylint: disable=unused-import # noqa: F401 import xml.sax # nosec from abc import ABC, abstractmethod from collections.abc import MutableMapping, MutableSequence, Sequence from io import StringIO from itertools import chain from typing import Any, Optional, Union, cast from urllib.parse import quote, urldefrag, urlparse, urlsplit, urlunsplit from urllib.request import pathname2url from rdflib import Graph from rdflib.plugins.parsers.notation3 import BadSyntax from ruamel.yaml.comments import CommentedMap from schema_salad.exceptions import SchemaSaladException, ValidationException from schema_salad.fetcher import DefaultFetcher, Fetcher, MemoryCachingFetcher from schema_salad.sourceline import SourceLine, add_lc_filename from schema_salad.utils import CacheType, yaml_no_ts # requires schema-salad v8.2+ _vocab: dict[str, str] = {} _rvocab: dict[str, str] = {} _logger = logging.getLogger("salad") IdxType = MutableMapping[str, tuple[Any, "LoadingOptions"]] class LoadingOptions: idx: IdxType fileuri: Optional[str] baseuri: str namespaces: MutableMapping[str, str] schemas: MutableSequence[str] original_doc: Optional[Any] addl_metadata: MutableMapping[str, Any] fetcher: Fetcher vocab: dict[str, str] rvocab: dict[str, str] cache: CacheType imports: list[str] includes: list[str] no_link_check: Optional[bool] container: Optional[str] def __init__( self, fetcher: Optional[Fetcher] = None, namespaces: Optional[dict[str, str]] = None, schemas: Optional[list[str]] = None, fileuri: Optional[str] = None, copyfrom: Optional["LoadingOptions"] = None, original_doc: Optional[Any] = None, addl_metadata: Optional[dict[str, str]] = None, baseuri: Optional[str] = None, idx: Optional[IdxType] = None, imports: Optional[list[str]] = None, includes: Optional[list[str]] = None, no_link_check: Optional[bool] = None, container: Optional[str] = None, ) -> None: """Create a LoadingOptions object.""" 
self.original_doc = original_doc if idx is not None: self.idx = idx else: self.idx = copyfrom.idx if copyfrom is not None else {} if fileuri is not None: self.fileuri = fileuri else: self.fileuri = copyfrom.fileuri if copyfrom is not None else None if baseuri is not None: self.baseuri = baseuri else: self.baseuri = copyfrom.baseuri if copyfrom is not None else "" if namespaces is not None: self.namespaces = namespaces else: self.namespaces = copyfrom.namespaces if copyfrom is not None else {} if schemas is not None: self.schemas = schemas else: self.schemas = copyfrom.schemas if copyfrom is not None else [] if addl_metadata is not None: self.addl_metadata = addl_metadata else: self.addl_metadata = copyfrom.addl_metadata if copyfrom is not None else {} if imports is not None: self.imports = imports else: self.imports = copyfrom.imports if copyfrom is not None else [] if includes is not None: self.includes = includes else: self.includes = copyfrom.includes if copyfrom is not None else [] if no_link_check is not None: self.no_link_check = no_link_check else: self.no_link_check = copyfrom.no_link_check if copyfrom is not None else False if container is not None: self.container = container else: self.container = copyfrom.container if copyfrom is not None else None if fetcher is not None: self.fetcher = fetcher elif copyfrom is not None: self.fetcher = copyfrom.fetcher else: import requests from cachecontrol.caches import SeparateBodyFileCache from cachecontrol.wrapper import CacheControl root = pathlib.Path(os.environ.get("HOME", tempfile.gettempdir())) session = CacheControl( requests.Session(), cache=SeparateBodyFileCache(root / ".cache" / "salad"), ) self.fetcher: Fetcher = DefaultFetcher({}, session) self.cache = self.fetcher.cache if isinstance(self.fetcher, MemoryCachingFetcher) else {} self.vocab = _vocab self.rvocab = _rvocab if self.namespaces is not None: self.vocab = self.vocab.copy() self.rvocab = self.rvocab.copy() for k, v in self.namespaces.items(): 
self.vocab[k] = v self.rvocab[v] = k @property def graph(self) -> Graph: """Generate a merged rdflib.Graph from all entries in self.schemas.""" graph = Graph() if not self.schemas: return graph key = str(hash(tuple(self.schemas))) if key in self.cache: return cast(Graph, self.cache[key]) for schema in self.schemas: fetchurl = ( self.fetcher.urljoin(self.fileuri, schema) if self.fileuri is not None else pathlib.Path(schema).resolve().as_uri() ) if fetchurl not in self.cache or self.cache[fetchurl] is True: _logger.debug("Getting external schema %s", fetchurl) try: content = self.fetcher.fetch_text(fetchurl) except Exception as e: _logger.warning("Could not load extension schema %s: %s", fetchurl, str(e)) continue newGraph = Graph() err_msg = "unknown error" for fmt in ["xml", "turtle"]: try: newGraph.parse(data=content, format=fmt, publicID=str(fetchurl)) self.cache[fetchurl] = newGraph graph += newGraph break except (xml.sax.SAXParseException, TypeError, BadSyntax) as e: err_msg = str(e) else: _logger.warning("Could not load extension schema %s: %s", fetchurl, err_msg) self.cache[key] = graph return graph class Saveable(ABC): """Mark classes than have a save() and fromDoc() function.""" @classmethod @abstractmethod def fromDoc( cls, _doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, ) -> "Saveable": """Construct this object from the result of yaml.load().""" @abstractmethod def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: """Convert this object to a JSON/YAML friendly dictionary.""" def load_field( val: Union[str, dict[str, str]], fieldtype: "_Loader", baseuri: str, loadingOptions: LoadingOptions, lc: Optional[list[Any]] = None, ) -> Any: """Load field.""" if isinstance(val, MutableMapping): if "$import" in val: if loadingOptions.fileuri is None: raise SchemaSaladException("Cannot load $import without fileuri") url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri, 
val["$import"]) result, metadata = _document_load_by_url( fieldtype, url, loadingOptions, ) loadingOptions.imports.append(url) return result if "$include" in val: if loadingOptions.fileuri is None: raise SchemaSaladException("Cannot load $import without fileuri") url = loadingOptions.fetcher.urljoin(loadingOptions.fileuri, val["$include"]) val = loadingOptions.fetcher.fetch_text(url) loadingOptions.includes.append(url) return fieldtype.load(val, baseuri, loadingOptions, lc=lc) save_type = Optional[Union[MutableMapping[str, Any], MutableSequence[Any], int, float, bool, str]] def extract_type(val_type: type[Any]) -> str: """Take a type of value, and extracts the value as a string.""" val_str = str(val_type) return val_str.split("'")[1] def convert_typing(val_type: str) -> str: """Normalize type names to schema-salad types.""" if "None" in val_type: return "null" if "CommentedSeq" in val_type or "list" in val_type: return "array" if "CommentedMap" in val_type or "dict" in val_type: return "object" if "False" in val_type or "True" in val_type: return "boolean" return val_type def parse_errors(error_message: str) -> tuple[str, str, str]: """Parse error messages from several loaders into one error message.""" if not error_message.startswith("Expected"): return error_message, "", "" vals = error_message.split("\n") if len(vals) == 1: return error_message, "", "" types = set() for val in vals: individual_vals = val.split(" ") if val == "": continue if individual_vals[1] == "one": individual_vals = val.split("(")[1].split(",") for t in individual_vals: types.add(t.strip(" ").strip(")\n")) elif individual_vals[2] == "").replace("'", "")) elif individual_vals[0] == "Value": types.add(individual_vals[-1].strip(".")) else: types.add(individual_vals[1].replace(",", "")) types = {val for val in types if val != "NoneType"} if "str" in types: types = {convert_typing(val) for val in types if "'" not in val} to_print = "" for val in types: if "'" in val: to_print = "value" if 
len(types) == 1 else "values" if to_print == "": to_print = "type" if len(types) == 1 else "types" verb_tensage = "is" if len(types) == 1 else "are" return str(types).replace("{", "(").replace("}", ")").replace("'", ""), to_print, verb_tensage def save( val: Any, top: bool = True, base_url: str = "", relative_uris: bool = True, ) -> save_type: if isinstance(val, Saveable): return val.save(top=top, base_url=base_url, relative_uris=relative_uris) if isinstance(val, MutableSequence): return [save(v, top=False, base_url=base_url, relative_uris=relative_uris) for v in val] if isinstance(val, MutableMapping): newdict = {} for key in val: newdict[key] = save(val[key], top=False, base_url=base_url, relative_uris=relative_uris) return newdict if val is None or isinstance(val, (int, float, bool, str)): return val raise Exception("Not Saveable: %s" % type(val)) def save_with_metadata( val: Any, valLoadingOpts: LoadingOptions, top: bool = True, base_url: str = "", relative_uris: bool = True, ) -> save_type: """Save and set $namespaces, $schemas, $base and any other metadata fields at the top level.""" saved_val = save(val, top, base_url, relative_uris) newdict: MutableMapping[str, Any] = {} if isinstance(saved_val, MutableSequence): newdict = {"$graph": saved_val} elif isinstance(saved_val, MutableMapping): newdict = saved_val if valLoadingOpts.namespaces: newdict["$namespaces"] = valLoadingOpts.namespaces if valLoadingOpts.schemas: newdict["$schemas"] = valLoadingOpts.schemas if valLoadingOpts.baseuri: newdict["$base"] = valLoadingOpts.baseuri for k, v in valLoadingOpts.addl_metadata.items(): if k not in newdict: newdict[k] = v return newdict def expand_url( url: str, base_url: str, loadingOptions: LoadingOptions, scoped_id: bool = False, vocab_term: bool = False, scoped_ref: Optional[int] = None, ) -> str: if url in ("@id", "@type"): return url if vocab_term and url in loadingOptions.vocab: return url if bool(loadingOptions.vocab) and ":" in url: prefix = url.split(":")[0] 
if prefix in loadingOptions.vocab: url = loadingOptions.vocab[prefix] + url[len(prefix) + 1 :] split = urlsplit(url) if ( (bool(split.scheme) and split.scheme in loadingOptions.fetcher.supported_schemes()) or url.startswith("$(") or url.startswith("${") ): pass elif scoped_id and not bool(split.fragment): splitbase = urlsplit(base_url) frg = "" if bool(splitbase.fragment): frg = splitbase.fragment + "/" + split.path else: frg = split.path pt = splitbase.path if splitbase.path != "" else "/" url = urlunsplit((splitbase.scheme, splitbase.netloc, pt, splitbase.query, frg)) elif scoped_ref is not None and not bool(split.fragment): splitbase = urlsplit(base_url) sp = splitbase.fragment.split("/") n = scoped_ref while n > 0 and len(sp) > 0: sp.pop() n -= 1 sp.append(url) url = urlunsplit( ( splitbase.scheme, splitbase.netloc, splitbase.path, splitbase.query, "/".join(sp), ) ) else: url = loadingOptions.fetcher.urljoin(base_url, url) if vocab_term: split = urlsplit(url) if bool(split.scheme): if url in loadingOptions.rvocab: return loadingOptions.rvocab[url] else: raise ValidationException(f"Term {url!r} not in vocabulary") return url class _Loader: def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: pass class _AnyLoader(_Loader): def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc is not None: return doc raise ValidationException("Expected non-null") class _PrimitiveLoader(_Loader): def __init__(self, tp: Union[type, tuple[type[str], type[str]]]) -> None: self.tp = tp def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, self.tp): raise ValidationException(f"Expected a {self.tp} but got {doc.__class__.__name__}") return doc def __repr__(self) -> str: return 
str(self.tp) class _ArrayLoader(_Loader): def __init__(self, items: _Loader) -> None: self.items = items def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableSequence): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an array." ) r: list[Any] = [] errors: list[SchemaSaladException] = [] fields: list[str] = [] for i in range(0, len(doc)): try: lf = load_field( doc[i], _UnionLoader([self, self.items]), baseuri, loadingOptions, lc=lc ) flatten = loadingOptions.container != "@list" if flatten and isinstance(lf, MutableSequence): r.extend(lf) else: r.append(lf) if isinstance(doc[i], CommentedMap): if doc[i].get("id") is not None: if doc[i].get("id") in fields: errors.append( ValidationException( f"Duplicate field {doc[i].get('id')!r}", SourceLine(doc[i], "id", str), [], ) ) else: fields.append(doc[i].get("id")) except ValidationException as e: e = ValidationException( "array item is invalid because", SourceLine(doc, i, str), [e] ) errors.append(e) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return f"array<{self.items}>" class _MapLoader(_Loader): def __init__( self, values: _Loader, name: Optional[str] = None, container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.values = values self.name = name self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException(f"Expected a map, was {type(doc)}") if self.container is not None or self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, container=self.container, no_link_check=self.no_link_check ) r: dict[str, Any] = {} 
errors: list[SchemaSaladException] = [] for k, v in doc.items(): try: lf = load_field(v, self.values, baseuri, loadingOptions, lc) r[k] = lf except ValidationException as e: errors.append(e.with_sourceline(SourceLine(doc, k, str))) if errors: raise ValidationException("", None, errors) return r def __repr__(self) -> str: return self.name if self.name is not None else f"map" class _EnumLoader(_Loader): def __init__(self, symbols: Sequence[str], name: str) -> None: self.symbols = symbols self.name = name def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if doc in self.symbols: return doc raise ValidationException(f"Expected one of {self.symbols}") def __repr__(self) -> str: return self.name class _SecondaryDSLLoader(_Loader): def __init__(self, inner: _Loader) -> None: self.inner = inner def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: r: list[dict[str, Any]] = [] if isinstance(doc, MutableSequence): for d in doc: if isinstance(d, str): if d.endswith("?"): r.append({"pattern": d[:-1], "required": False}) else: r.append({"pattern": d}) elif isinstance(d, dict): new_dict: dict[str, Any] = {} dict_copy = copy.deepcopy(d) if "pattern" in dict_copy: new_dict["pattern"] = dict_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {d}" ) new_dict["required"] = ( dict_copy.pop("required") if "required" in dict_copy else None ) if len(dict_copy): raise ValidationException( "Unallowed values in secondaryFiles specification entry: {}".format( dict_copy ) ) r.append(new_dict) else: raise ValidationException( "Expected a string or sequence of (strings or mappings)." 
) elif isinstance(doc, MutableMapping): new_dict = {} doc_copy = copy.deepcopy(doc) if "pattern" in doc_copy: new_dict["pattern"] = doc_copy.pop("pattern") else: raise ValidationException( f"Missing pattern in secondaryFiles specification entry: {doc}" ) new_dict["required"] = doc_copy.pop("required") if "required" in doc_copy else None if len(doc_copy): raise ValidationException( f"Unallowed values in secondaryFiles specification entry: {doc_copy}" ) r.append(new_dict) elif isinstance(doc, str): if doc.endswith("?"): r.append({"pattern": doc[:-1], "required": False}) else: r.append({"pattern": doc}) else: raise ValidationException("Expected str or sequence of str") return self.inner.load(r, baseuri, loadingOptions, docRoot, lc=lc) class _RecordLoader(_Loader): def __init__( self, classtype: type[Saveable], container: Optional[str] = None, no_link_check: Optional[bool] = None, ) -> None: self.classtype = classtype self.container = container self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, MutableMapping): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is an object." 
) if self.container is not None or self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, container=self.container, no_link_check=self.no_link_check ) return self.classtype.fromDoc(doc, baseuri, loadingOptions, docRoot=docRoot) def __repr__(self) -> str: return str(self.classtype.__name__) class _ExpressionLoader(_Loader): def __init__(self, items: type[str]) -> None: self.items = items def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if not isinstance(doc, str): raise ValidationException( f"Value is a {convert_typing(extract_type(type(doc)))}, " f"but valid type for this field is a str." ) return doc class _UnionLoader(_Loader): def __init__(self, alternates: Sequence[_Loader], name: Optional[str] = None) -> None: self.alternates = alternates self.name = name def add_loaders(self, loaders: Sequence[_Loader]) -> None: self.alternates = tuple(loader for loader in chain(self.alternates, loaders)) def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: errors = [] if lc is None: lc = [] for t in self.alternates: try: return t.load(doc, baseuri, loadingOptions, docRoot=docRoot, lc=lc) except ValidationException as e: if isinstance(t, _ArrayLoader) and len(self.alternates) > 1: continue if isinstance(doc, (CommentedMap, dict)): if "class" in doc: if str(doc.get("class")) == str(t): errors.append( ValidationException( f"Object `{baseuri.split('/')[-1]}` is not valid because:", SourceLine(doc, next(iter(doc)), str), [e], ) ) else: if "array" in str(t): continue else: if "id" in doc: id = baseuri.split("/")[-1] + "#" + str(doc.get("id")) if "id" in lc: errors.append( ValidationException( f"checking object `{id}` using `{t}`", SourceLine(lc, "id", str), [e], ) ) else: errors.append( ValidationException( f"checking object `{id}` using `{t}`", SourceLine(lc, 
doc.get("id"), str), [e], ) ) else: if not isinstance( t, (_PrimitiveLoader) ): # avoids 'tried was {x}' errors errors.append( ValidationException(f"tried `{t}` but", None, [e]) ) else: # avoids "tried but x" and instead returns the values for parsing errors.append(ValidationException("", None, [e])) if isinstance(doc, (CommentedMap, dict)) and "class" in doc: if str(doc.get("class")) not in str(self.alternates): errors.append( ValidationException( "Field `class` contains undefined reference to " + "`" + "/".join(baseuri.split("/")[0:-1]) + "/" + str(doc.get("class")) + "`", SourceLine(doc, "class", str), [], ) ) raise ValidationException("", None, errors, "*") def __repr__(self) -> str: return self.name if self.name is not None else " | ".join(str(a) for a in self.alternates) class _URILoader(_Loader): def __init__( self, inner: _Loader, scoped_id: bool, vocab_term: bool, scoped_ref: Optional[int], no_link_check: Optional[bool], ) -> None: self.inner = inner self.scoped_id = scoped_id self.vocab_term = vocab_term self.scoped_ref = scoped_ref self.no_link_check = no_link_check def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if self.no_link_check is not None: loadingOptions = LoadingOptions( copyfrom=loadingOptions, no_link_check=self.no_link_check ) if isinstance(doc, MutableSequence): newdoc = [] for i in doc: if isinstance(i, str): newdoc.append( expand_url( i, baseuri, loadingOptions, self.scoped_id, self.vocab_term, self.scoped_ref, ) ) else: newdoc.append(i) doc = newdoc elif isinstance(doc, str): doc = expand_url( doc, baseuri, loadingOptions, self.scoped_id, self.vocab_term, self.scoped_ref, ) if isinstance(doc, str): if not loadingOptions.no_link_check: errors = [] try: if not loadingOptions.fetcher.check_exists(doc): errors.append( ValidationException(f"contains undefined reference to `{doc}`") ) except ValidationException: pass if len(errors) > 0: raise 
ValidationException("", None, errors) return self.inner.load(doc, baseuri, loadingOptions, lc=lc) class _TypeDSLLoader(_Loader): def __init__(self, inner: _Loader, refScope: Optional[int], salad_version: str) -> None: self.inner = inner self.refScope = refScope self.salad_version = salad_version def resolve( self, doc: str, baseuri: str, loadingOptions: LoadingOptions, ) -> Union[list[Union[dict[str, Any], str]], dict[str, Any], str]: doc_ = doc optional = False if doc_.endswith("?"): optional = True doc_ = doc_[0:-1] if doc_.endswith("[]"): salad_versions = [int(v) for v in self.salad_version[1:].split(".")] items: Union[list[Union[dict[str, Any], str]], dict[str, Any], str] = "" rest = doc_[0:-2] if salad_versions < [1, 3]: if rest.endswith("[]"): # To show the error message with the original type return doc else: items = expand_url(rest, baseuri, loadingOptions, False, True, self.refScope) else: items = self.resolve(rest, baseuri, loadingOptions) if isinstance(items, str): items = expand_url(items, baseuri, loadingOptions, False, True, self.refScope) expanded: Union[dict[str, Any], str] = {"type": "array", "items": items} else: expanded = expand_url(doc_, baseuri, loadingOptions, False, True, self.refScope) if optional: return ["null", expanded] else: return expanded def load( self, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None, lc: Optional[list[Any]] = None, ) -> Any: if isinstance(doc, MutableSequence): r: list[Any] = [] for d in doc: if isinstance(d, str): resolved = self.resolve(d, baseuri, loadingOptions) if isinstance(resolved, MutableSequence): for i in resolved: if i not in r: r.append(i) else: if resolved not in r: r.append(resolved) else: r.append(d) doc = r elif isinstance(doc, str): doc = self.resolve(doc, baseuri, loadingOptions) return self.inner.load(doc, baseuri, loadingOptions, lc=lc) class _IdMapLoader(_Loader): def __init__(self, inner: _Loader, mapSubject: str, mapPredicate: Optional[str]) -> None: 
        self.inner = inner
        self.mapSubject = mapSubject
        self.mapPredicate = mapPredicate

    def load(
        self,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
        lc: Optional[list[Any]] = None,
    ) -> Any:
        if isinstance(doc, MutableMapping):
            r: list[Any] = []
            for k in doc.keys():
                val = doc[k]
                if isinstance(val, CommentedMap):
                    v = copy.copy(val)
                    v.lc.data = val.lc.data
                    v.lc.filename = val.lc.filename
                    v[self.mapSubject] = k
                    r.append(v)
                elif isinstance(val, MutableMapping):
                    v2 = copy.copy(val)
                    v2[self.mapSubject] = k
                    r.append(v2)
                else:
                    if self.mapPredicate:
                        v3 = {self.mapPredicate: val}
                        v3[self.mapSubject] = k
                        r.append(v3)
                    else:
                        raise ValidationException("No mapPredicate")
            doc = r
        return self.inner.load(doc, baseuri, loadingOptions, lc=lc)


def _document_load(
    loader: _Loader,
    doc: Union[str, MutableMapping[str, Any], MutableSequence[Any]],
    baseuri: str,
    loadingOptions: LoadingOptions,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> tuple[Any, LoadingOptions]:
    # Dispatches on the document form: a URI string is fetched and loaded,
    # a mapping has its $base/$namespaces/$schemas directives stripped and
    # applied to the LoadingOptions, and a sequence is loaded element-wise.
    if isinstance(doc, str):
        return _document_load_by_url(
            loader,
            loadingOptions.fetcher.urljoin(baseuri, doc),
            loadingOptions,
            addl_metadata_fields=addl_metadata_fields,
        )

    if isinstance(doc, MutableMapping):
        addl_metadata = {}
        if addl_metadata_fields is not None:
            for mf in addl_metadata_fields:
                if mf in doc:
                    addl_metadata[mf] = doc[mf]

        docuri = baseuri
        if "$base" in doc:
            baseuri = doc["$base"]

        loadingOptions = LoadingOptions(
            copyfrom=loadingOptions,
            namespaces=doc.get("$namespaces", None),
            schemas=doc.get("$schemas", None),
            baseuri=doc.get("$base", None),
            addl_metadata=addl_metadata,
        )

        doc = copy.copy(doc)
        if "$namespaces" in doc:
            doc.pop("$namespaces")
        if "$schemas" in doc:
            doc.pop("$schemas")
        if "$base" in doc:
            doc.pop("$base")

        if "$graph" in doc:
            loadingOptions.idx[baseuri] = (
                loader.load(doc["$graph"], baseuri, loadingOptions),
                loadingOptions,
            )
        else:
            loadingOptions.idx[baseuri] = (
                loader.load(doc, baseuri, loadingOptions, docRoot=baseuri),
                loadingOptions,
            )

        if docuri != baseuri:
            loadingOptions.idx[docuri] = loadingOptions.idx[baseuri]

        return loadingOptions.idx[baseuri]

    if isinstance(doc, MutableSequence):
        loadingOptions.idx[baseuri] = (
            loader.load(doc, baseuri, loadingOptions),
            loadingOptions,
        )
        return loadingOptions.idx[baseuri]

    raise ValidationException(
        "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc)
    )


def _document_load_by_url(
    loader: _Loader,
    url: str,
    loadingOptions: LoadingOptions,
    addl_metadata_fields: Optional[MutableSequence[str]] = None,
) -> tuple[Any, LoadingOptions]:
    if url in loadingOptions.idx:
        return loadingOptions.idx[url]

    doc_url, frg = urldefrag(url)

    text = loadingOptions.fetcher.fetch_text(doc_url)
    textIO = StringIO(text)
    textIO.name = str(doc_url)
    yaml = yaml_no_ts()
    result = yaml.load(textIO)
    add_lc_filename(result, doc_url)

    loadingOptions = LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url)

    _document_load(
        loader,
        result,
        doc_url,
        loadingOptions,
        addl_metadata_fields=addl_metadata_fields,
    )
    return loadingOptions.idx[url]


def file_uri(path: str, split_frag: bool = False) -> str:
    """Transform a file path into a URL with file scheme."""
    if path.startswith("file://"):
        return path
    if split_frag:
        pathsp = path.split("#", 2)
        frag = "#" + quote(str(pathsp[1])) if len(pathsp) == 2 else ""
        urlpath = pathname2url(str(pathsp[0]))
    else:
        urlpath = pathname2url(path)
        frag = ""
    if urlpath.startswith("//"):
        return f"file:{urlpath}{frag}"
    return f"file://{urlpath}{frag}"


def prefix_url(url: str, namespaces: dict[str, str]) -> str:
    """Compress a full URL into its prefixed short form using the given namespace dictionary."""
    for k, v in namespaces.items():
        if url.startswith(v):
            return k + ":" + url[len(v) :]
    return url


def save_relative_uri(
    uri: Any,
    base_url: str,
    scoped_id: bool,
    ref_scope: Optional[int],
    relative_uris: bool,
) -> Any:
    """Convert any URI to a relative one, obeying the scoping rules."""
    if isinstance(uri, MutableSequence):
        return [
            save_relative_uri(u, base_url, scoped_id, ref_scope, relative_uris)
            for u in uri
        ]
    elif isinstance(uri, str):
        if not relative_uris or uri == base_url:
            return uri
        urisplit = urlsplit(uri)
        basesplit = urlsplit(base_url)
        if urisplit.scheme == basesplit.scheme and urisplit.netloc == basesplit.netloc:
            if urisplit.path != basesplit.path:
                p = os.path.relpath(urisplit.path, os.path.dirname(basesplit.path))
                if urisplit.fragment:
                    p = p + "#" + urisplit.fragment
                return p

            basefrag = basesplit.fragment + "/"
            if ref_scope:
                sp = basefrag.split("/")
                i = 0
                while i < ref_scope:
                    sp.pop()
                    i += 1
                basefrag = "/".join(sp)

            if urisplit.fragment.startswith(basefrag):
                return urisplit.fragment[len(basefrag) :]
            return urisplit.fragment
        return uri
    else:
        return save(uri, top=False, base_url=base_url, relative_uris=relative_uris)


def shortname(inputid: str) -> str:
    """
    Compute the shortname of a fully qualified identifier.

    See https://w3id.org/cwl/v1.2/SchemaSalad.html#Short_names.
    """
    parsed_id = urlparse(inputid)
    if parsed_id.fragment:
        return parsed_id.fragment.split("/")[-1]
    return parsed_id.path.split("/")[-1]


def parser_info() -> str:
    return "org.w3id.cwl.v1_2"


class Documented(Saveable):
    pass


class RecordField(Documented):
    """
    A field of a record.
""" name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` 
field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) 
) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( 
self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class RecordSchema(Saveable): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, RecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "RecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, 
type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class EnumSchema(Saveable): """ Define an enumerated type. """ name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, EnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = 
doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type"]) class ArraySchema(Saveable): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ArraySchema": _doc = 
copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class MapSchema(Saveable): def __init__( self, type_: Any, values: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.type_ = type_ self.values = values def __eq__(self, other: Any) -> bool: if isinstance(other, MapSchema): return bool(self.type_ == other.type_ and self.values == other.values) return False def __hash__(self) -> int: return hash((self.type_, self.values)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MapSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Map_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], 
detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("values") is None: raise ValidationException("missing required field `values`", None, []) values = load_field( _doc.get("values"), uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("values") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `values`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("values") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `values` field is not valid because:", SourceLine(_doc, "values", str), [e], detailed_message=f"the `values` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `type`, `values`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") 
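The generated `fromDoc()` methods all follow the same error-accumulation pattern: each field is validated independently, failures are collected into a list, and one combined exception is raised at the end so a single parse reports every problem at once. A minimal sketch of that pattern; `ValidationError` and `from_doc` here are hypothetical stand-ins, not the schema-salad API:

```python
class ValidationError(Exception):
    """Hypothetical stand-in for schema-salad's ValidationException."""

    def __init__(self, msg, children=None):
        super().__init__(msg)
        self.msg = msg
        self.children = children or []

    def pretty(self, indent=0):
        # Render the error tree with two-space indentation per level.
        pad = "  " * indent
        lines = [pad + self.msg] if self.msg else []
        for child in self.children:
            lines.append(child.pretty(indent + 1))
        return "\n".join(lines)


def from_doc(doc):
    """Validate every field, collect all failures, raise once at the end."""
    errors = []
    if doc.get("type") is None:
        errors.append(ValidationError("missing required field `type`"))
    for k in doc:
        if k not in ("type", "values"):
            errors.append(
                ValidationError(f"invalid field `{k}`, expected one of: `type`, `values`")
            )
    if errors:
        # One combined exception instead of failing on the first problem.
        raise ValidationError("document is not valid:", errors)
    return {"type": doc["type"], "values": doc.get("values")}
```

The payoff is that a document with several mistakes produces one nested report listing all of them, rather than forcing the user through repeated fix-and-retry cycles.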
_constructed = cls( type_=type_, values=values, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) if self.values is not None: u = save_relative_uri(self.values, base_url, False, 2, relative_uris) r["values"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["type", "values"]) class UnionSchema(Saveable): def __init__( self, names: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.names = names self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, UnionSchema): return bool(self.names == other.names and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.names, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "UnionSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("names") is None: raise ValidationException("missing required field `names`", None, []) names = load_field( _doc.get("names"), 
uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("names") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `names`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("names") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `names` field is not valid because:", SourceLine(_doc, "names", str), [e], detailed_message=f"the `names` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Union_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `names`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( names=names, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.names is not None: u = save_relative_uri(self.names, base_url, False, 2, relative_uris) r["names"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["names", "type"]) class CWLArraySchema(ArraySchema): def __init__( self, items: Any, type_: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if 
extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLArraySchema): return bool(self.items == other.items and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.items, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the 
`items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, 
self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.items is not None: u = save_relative_uri(self.items, base_url, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type"]) class CWLRecordField(RecordField): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash((self.doc, self.name, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( 
str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise 
ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, 
top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["doc", "name", "type"]) class CWLRecordSchema(RecordSchema): def __init__( self, type_: Any, fields: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, CWLRecordSchema): return bool(self.fields == other.fields and self.type_ == other.type_) return False def __hash__(self) -> int: return hash((self.fields, self.type_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CWLRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader, 
baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with 
value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=base_url, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type"]) class File(Saveable): """ Represents a file (or group of files when `secondaryFiles` is provided) that will be accessible by tools using standard POSIX file system call API such as open(2) and read(2). Files are represented as objects with `class` of `File`. File objects have a number of properties that provide metadata about the file. The `location` property of a File is a IRI that uniquely identifies the file. 
Implementations must support the `file://` IRI scheme and may support other schemes such as `http://` and `https://`. The value of `location` may also be a relative reference, in which case it must be resolved relative to the IRI of the document it appears in. As an alternative to `location`, implementations must also accept the `path` property on File, which must be a filesystem path available on the same host as the CWL runner (for inputs) or the runtime environment of a command line tool execution (for command line tool outputs). If no `location` or `path` is specified, a file object must specify `contents` with the UTF-8 text content of the file. This is a "file literal". File literals do not correspond to external resources, but are created on disk from `contents` when needed for executing a tool. Where appropriate, expressions can return file literals to define new files at runtime. The maximum size of `contents` is 64 kilobytes. The `basename` property defines the filename on disk where the file is staged. This may differ from the resource name. If not provided, `basename` must be computed from the last path part of `location` and made available to expressions. The `secondaryFiles` property is a list of File or Directory objects that must be staged in the same directory as the primary file. It is an error for file names to be duplicated in `secondaryFiles`. The `size` property is the size in bytes of the File. It must be computed from the resource and made available to expressions. The `checksum` field contains a cryptographic hash of the file content for use in verifying file contents. Implementations may, at user option, enable or disable computation of the `checksum` field for performance or other reasons. However, the ability to compute output checksums is required to pass the CWL conformance test suite. 
When executing a CommandLineTool, the files and secondary files may be staged to an arbitrary directory, but must use the value of `basename` for the filename. The `path` property must be a file path in the context of the tool execution runtime (local to the compute node, or within the executing container). All computed properties should be available to expressions. File literals also must be staged, and `path` must be set. When collecting CommandLineTool outputs, `glob` matching returns file paths (with the `path` property) and the derived properties. This can all be modified by `outputEval`. Alternately, if the file `cwl.output.json` is present in the output, `outputBinding` is ignored. File objects in the output must provide either a `location` IRI or a `path` property in the context of the tool execution runtime (local to the compute node, or within the executing container). When evaluating an ExpressionTool, file objects must be referenced via `location` (the expression tool does not have access to files on disk, so `path` is meaningless) or as file literals. It is legal to return a file object with an existing `location` but a different `basename`. The `loadContents` field of ExpressionTool inputs behaves the same as on CommandLineTool inputs; however, it is not meaningful on the outputs. An ExpressionTool may forward file references from input to output by using the same value for `location`. 
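As a minimal sketch of the file-literal rules above (the mapping below is illustrative, not part of this module's API): a file literal carries `contents` instead of `location` or `path`, and `contents` must not exceed 64 kilobytes.

```python
# Hypothetical example of a "file literal": a File object expressed as a
# plain mapping, with `contents` but no `location` or `path`.
file_literal = {
    "class": "File",
    "basename": "greeting.txt",   # filename used when the file is staged
    "contents": "Hello, CWL!\n",  # UTF-8 text content, at most 64 kilobytes
}

# A file literal must not reference an external resource.
assert "location" not in file_literal and "path" not in file_literal
# The size limit applies to the encoded contents.
assert len(file_literal["contents"].encode("utf-8")) <= 64 * 1024
```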
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, dirname: Optional[Any] = None, nameroot: Optional[Any] = None, nameext: Optional[Any] = None, checksum: Optional[Any] = None, size: Optional[Any] = None, secondaryFiles: Optional[Any] = None, format: Optional[Any] = None, contents: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "File" self.location = location self.path = path self.basename = basename self.dirname = dirname self.nameroot = nameroot self.nameext = nameext self.checksum = checksum self.size = size self.secondaryFiles = secondaryFiles self.format = format self.contents = contents def __eq__(self, other: Any) -> bool: if isinstance(other, File): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.dirname == other.dirname and self.nameroot == other.nameroot and self.nameext == other.nameext and self.checksum == other.checksum and self.size == other.size and self.secondaryFiles == other.secondaryFiles and self.format == other.format and self.contents == other.contents ) return False def __hash__(self) -> int: return hash( ( self.class_, self.location, self.path, self.basename, self.dirname, self.nameroot, self.nameext, self.checksum, self.size, self.secondaryFiles, self.format, self.contents, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "File": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise 
ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_File_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) dirname = None if "dirname" in _doc: try: dirname = load_field( _doc.get("dirname"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dirname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dirname`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("dirname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dirname` field is not valid because:", SourceLine(_doc, "dirname", str), [e], detailed_message=f"the `dirname` field with value `{val}` " "is not valid because:", ) ) nameroot = None if "nameroot" in _doc: try: nameroot = load_field( _doc.get("nameroot"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameroot") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameroot`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameroot") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameroot` field is not valid because:", SourceLine(_doc, "nameroot", str), [e], detailed_message=f"the `nameroot` field with value `{val}` " "is not valid because:", ) ) nameext = None if "nameext" in _doc: try: nameext = load_field( _doc.get("nameext"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("nameext") ) except ValidationException as e: error_message, to_print, 
verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `nameext`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("nameext") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `nameext` field is not valid because:", SourceLine(_doc, "nameext", str), [e], detailed_message=f"the `nameext` field with value `{val}` " "is not valid because:", ) ) checksum = None if "checksum" in _doc: try: checksum = load_field( _doc.get("checksum"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("checksum") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `checksum`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("checksum") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `checksum` field is not valid because:", SourceLine(_doc, "checksum", str), [e], detailed_message=f"the `checksum` field with value `{val}` " "is not valid because:", ) ) size = None if "size" in _doc: try: size = load_field( _doc.get("size"), 
union_of_None_type_or_inttype, baseuri, loadingOptions, lc=_doc.get("size") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `size`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("size") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `size` field is not valid because:", SourceLine(_doc, "size", str), [e], detailed_message=f"the `size` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` 
field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) contents = None if "contents" in _doc: try: contents = load_field( _doc.get("contents"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("contents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `contents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("contents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `contents` field is not valid because:", SourceLine(_doc, "contents", str), [e], detailed_message=f"the `contents` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `dirname`, `nameroot`, `nameext`, `checksum`, `size`, `secondaryFiles`, `format`, `contents`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, dirname=dirname, nameroot=nameroot, nameext=nameext, checksum=checksum, size=size, secondaryFiles=secondaryFiles, format=format, contents=contents, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if 
self.path is not None:
            u = save_relative_uri(self.path, base_url, False, None, relative_uris)
            r["path"] = u
        if self.basename is not None:
            r["basename"] = save(
                self.basename, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.dirname is not None:
            r["dirname"] = save(
                self.dirname, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.nameroot is not None:
            r["nameroot"] = save(
                self.nameroot, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.nameext is not None:
            r["nameext"] = save(
                self.nameext, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.checksum is not None:
            r["checksum"] = save(
                self.checksum, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.size is not None:
            r["size"] = save(
                self.size, top=False, base_url=base_url, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=base_url,
                relative_uris=relative_uris,
            )
        if self.format is not None:
            u = save_relative_uri(self.format, base_url, True, None, relative_uris)
            r["format"] = u
        if self.contents is not None:
            r["contents"] = save(
                self.contents, top=False, base_url=base_url, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "class",
            "location",
            "path",
            "basename",
            "dirname",
            "nameroot",
            "nameext",
            "checksum",
            "size",
            "secondaryFiles",
            "format",
            "contents",
        ]
    )


class Directory(Saveable):
    """
    Represents a directory to present to a command line tool.

    Directories are represented as objects with `class` of `Directory`.
    Directory objects have a number of properties that provide metadata
    about the directory.

    The `location` property of a Directory is an IRI that uniquely
    identifies the directory.
    Implementations must support the file:// IRI scheme and may support
    other schemes such as http://.

    As an alternative to `location`, implementations must also accept the
    `path` property on Directory, which must be a filesystem path available
    on the same host as the CWL runner (for inputs) or the runtime
    environment of a command line tool execution (for command line tool
    outputs).

    A Directory object may have a `listing` field.  This is a list of File
    and Directory objects that are contained in the Directory.  For each
    entry in `listing`, the `basename` property defines the name of the
    File or Subdirectory when staged to disk.  If `listing` is not
    provided, the implementation must have some way of fetching the
    Directory listing at runtime based on the `location` field.

    If a Directory does not have `location`, it is a Directory literal.  A
    Directory literal must provide `listing`.  Directory literals must be
    created on disk at runtime as needed.  The resources in a Directory
    literal do not need to have any implied relationship in their
    `location`.  For example, a Directory listing may contain two files
    located on different hosts.  It is the responsibility of the runtime to
    ensure that those files are staged to disk appropriately.  Secondary
    files associated with files in `listing` must also be staged to the
    same Directory.

    When executing a CommandLineTool, Directories must be recursively
    staged first and have local values of `path` assigned.

    Directory objects in CommandLineTool output must provide either a
    `location` IRI or a `path` property in the context of the tool
    execution runtime (local to the compute node, or within the executing
    container).

    An ExpressionTool may forward file references from input to output by
    using the same value for `location`.

    Name conflicts (the same `basename` appearing multiple times in
    `listing` or in any entry in `secondaryFiles` in the listing) are a
    fatal error.
""" def __init__( self, location: Optional[Any] = None, path: Optional[Any] = None, basename: Optional[Any] = None, listing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Directory" self.location = location self.path = path self.basename = basename self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, Directory): return bool( self.class_ == other.class_ and self.location == other.location and self.path == other.path and self.basename == other.basename and self.listing == other.listing ) return False def __hash__(self) -> int: return hash( (self.class_, self.location, self.path, self.basename, self.listing) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Directory": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Directory_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e location = None if "location" in _doc: try: location = load_field( _doc.get("location"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("location") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `location`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("location") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `location` field is not valid because:", SourceLine(_doc, "location", str), [e], detailed_message=f"the `location` field with value `{val}` " "is not valid because:", ) ) path = None if "path" in _doc: try: path = load_field( _doc.get("path"), uri_union_of_None_type_or_strtype_False_False_None_None, baseuri, loadingOptions, lc=_doc.get("path") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `path`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("path") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `path` field is not valid because:", SourceLine(_doc, "path", str), [e], detailed_message=f"the `path` field with value `{val}` " "is not valid because:", ) ) basename = None if "basename" in _doc: try: basename = load_field( _doc.get("basename"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("basename") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `basename`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("basename") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `basename` field is not valid because:", SourceLine(_doc, "basename", str), [e], detailed_message=f"the `basename` field with value `{val}` " "is not valid because:", ) ) listing = None if "listing" in _doc: try: listing = load_field( _doc.get("listing"), union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid 
because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `location`, `path`, `basename`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( location=location, path=path, basename=basename, listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.location is not None: u = save_relative_uri(self.location, base_url, False, None, relative_uris) r["location"] = u if self.path is not None: u = save_relative_uri(self.path, base_url, False, None, relative_uris) r["path"] = u if self.basename is not None: r["basename"] = save( self.basename, top=False, base_url=base_url, relative_uris=relative_uris ) if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = 
self.loadingOptions.schemas return r attrs = frozenset(["class", "location", "path", "basename", "listing"]) class Labeled(Saveable): pass class Identified(Saveable): pass class LoadContents(Saveable): pass class FieldBase(Labeled): pass class InputFormat(Saveable): pass class OutputFormat(Saveable): pass class Parameter(FieldBase, Documented, Identified): """ Define an input or output parameter to a process. """ pass class InputBinding(Saveable): def __init__( self, loadContents: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents def __eq__(self, other: Any) -> bool: if isinstance(other, InputBinding): return bool(self.loadContents == other.loadContents) return False def __hash__(self) -> int: return hash((self.loadContents)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["loadContents"]) class IOSchema(Labeled, Documented): pass class InputSchema(IOSchema): pass class OutputSchema(IOSchema): pass class InputRecordField(CWLRecordField, FieldBase, InputFormat, LoadContents): name: str 
def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format self.loadContents = loadContents self.loadListing = loadListing def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, self.loadContents, self.loadListing, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not 
valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field 
" f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: 
val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, 
                    lc=_doc.get("loadContents")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `loadContents`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("loadContents")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `loadContents` field is not valid because:",
                                SourceLine(_doc, "loadContents", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `loadContents` field is not valid because:",
                                SourceLine(_doc, "loadContents", str),
                                [e],
                                detailed_message=f"the `loadContents` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        loadListing = None
        if "loadListing" in _doc:
            try:
                loadListing = load_field(
                    _doc.get("loadListing"),
                    union_of_None_type_or_LoadListingEnumLoader,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("loadListing")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `loadListing`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("loadListing")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `loadListing` field is not valid because:",
                                SourceLine(_doc, "loadListing", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `loadListing` field is not valid because:",
                                SourceLine(_doc, "loadListing", str),
                                [e],
                                detailed_message=f"the `loadListing` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `loadContents`, `loadListing`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            doc=doc,
            name=name,
            type_=type_,
            label=label,
            secondaryFiles=secondaryFiles,
            streamable=streamable,
            format=format,
            loadContents=loadContents,
            loadListing=loadListing,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
r["streamable"] = save( self.streamable, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.name, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "doc", "name", "type", "label", "secondaryFiles", "streamable", "format", "loadContents", "loadListing", ] ) class InputRecordSchema(CWLRecordSchema, InputSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, InputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> 
"InputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, 
"fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): 
val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( 
                    ValidationException(
                        "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format(
                            k
                        ),
                        SourceLine(_doc, k, str),
                    )
                )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            fields=fields,
            type_=type_,
            label=label,
            doc=doc,
            name=name,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.fields is not None:
            r["fields"] = save(
                self.fields, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["fields", "type", "label", "doc", "name"])


class InputEnumSchema(EnumSchema, InputSchema):
    name: str

    def __init__(
        self,
        symbols: Any,
        type_: Any,
        name: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc def __eq__(self, other: Any) -> bool: if isinstance(other, InputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_, self.label, self.doc)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if 
name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", 
                            SourceLine(_doc, "doc", str),
                            [ValidationException(f"Value is a {val_type}, "
                                                 f"but valid {to_print} for this field "
                                                 f"{verb_tensage} {error_message}",
                                                 detailed_message=f"Value `{val}` is a {val_type}, "
                                                 f"but valid {to_print} for this field "
                                                 f"{verb_tensage} {error_message}")],
                        )
                    )
                else:
                    _errors__.append(
                        ValidationException(
                            "the `doc` field is not valid because:",
                            SourceLine(_doc, "doc", str),
                            [e],
                            detailed_message=f"the `doc` field with value `{val}` "
                            "is not valid because:",
                        )
                    )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            name=name,
            symbols=symbols,
            type_=type_,
            label=label,
            doc=doc,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.symbols is not None:
            u = save_relative_uri(self.symbols, self.name, True, None, relative_uris)
            r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label,
                top=False,
base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", "label", "doc"]) class InputArraySchema(CWLArraySchema, InputSchema): name: str def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, InputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.items, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.items is not None: u = save_relative_uri(self.items, self.name, False, 2, relative_uris) r["items"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["items", "type", "label", "doc", "name"]) class OutputRecordField(CWLRecordField, FieldBase, OutputFormat): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = 
loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field 
with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): 
val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, 
to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid 
because:",
                            )
                        )
        format = None
        if "format" in _doc:
            try:
                format = load_field(
                    _doc.get("format"),
                    uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("format")
                )

            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))

                if str(e) == "missing required field `format`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("format")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `format` field is not valid because:",
                                SourceLine(_doc, "format", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `format` field is not valid because:",
                                SourceLine(_doc, "format", str),
                                [e],
                                detailed_message=f"the `format` field with value `{val}` "
                                                 "is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            doc=doc,
            name=name,
            type_=type_,
            label=label,
            secondaryFiles=secondaryFiles,
            streamable=streamable,
            format=format,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
            r["streamable"] = save(
                self.streamable,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )
        if self.format is not None:
            u = save_relative_uri(self.format, self.name, True, None, relative_uris)
            r["format"] = u

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        ["doc", "name", "type", "label", "secondaryFiles", "streamable", "format"]
    )


class OutputRecordSchema(CWLRecordSchema, OutputSchema):
    name: str

    def __init__(
        self,
        type_: Any,
        fields: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, OutputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = 
cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) 
else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a 
{val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory 
level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "doc", "name"]) class OutputEnumSchema(EnumSchema, OutputSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc def __eq__(self, other: Any) -> bool: if isinstance(other, OutputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_, self.label, self.doc)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OutputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) 
_errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required 
field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the 
`label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.symbols is not None:
            u = save_relative_uri(self.symbols, self.name, True, None, relative_uris)
            r["symbols"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["name", "symbols", "type", "label", "doc"])


class OutputArraySchema(CWLArraySchema, OutputSchema):
    name: str

    def __init__(
        self,
        items: Any,
        type_: Any,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        name: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.items = items
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, OutputArraySchema):
            return bool(
                self.items == other.items
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.name == other.name
            )
        return False

    def __hash__(self) -> int:
        return hash((self.items, self.type_, self.label, self.doc, self.name))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "OutputArraySchema":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("name")
                )

            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))

                if str(e) == "missing required field `name`":
                    _errors__.append(
                        ValidationException(
                            str(e),
                            None
                        )
                    )
                else:
                    val = _doc.get("name")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `name` field is not valid because:",
                                SourceLine(_doc, "name", str),
                                [e],
                                detailed_message=f"the `name` field with value `{val}` "
                                                 "is not valid because:",
                            )
                        )

        __original_name_is_none = name is None
        if name is None:
            if docRoot is not None:
                name = docRoot
            else:
                name = "_:" + str(_uuid__.uuid4())
        if not __original_name_is_none:
            baseuri = cast(str, name)
        try:
            if _doc.get("items") is None:
                raise ValidationException("missing required field `items`", None, [])

            items = load_field(
                _doc.get("items"),
                uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None,
                baseuri,
loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is 
not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str),
                                [e],
                                detailed_message=f"the `doc` field with value `{val}` "
                                                 "is not valid because:",
                            )
                        )
        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")

        _constructed = cls(
            items=items,
            type_=type_,
            label=label,
            doc=doc,
            name=name,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.items is not None:
            u = save_relative_uri(self.items, self.name, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "doc", "name"])


class InputParameter(Parameter, InputFormat, LoadContents):
    pass


class OutputParameter(Parameter, OutputFormat):
    pass


class ProcessRequirement(Saveable):
    """
    A process requirement declares a prerequisite that may or must be fulfilled
    before executing a process.  See [`Process.hints`](#process) and
    [`Process.requirements`](#process).

    Process requirements are the primary mechanism for specifying extensions to
    the CWL core specification.

    """

    pass


class Process(Identified, Labeled, Documented):
    """
    The base executable type in CWL is the `Process` object defined by the
    document.  Note that the `Process` object is abstract and cannot be
    directly executed.

    """

    pass


class InlineJavascriptRequirement(ProcessRequirement):
    """
    Indicates that the workflow platform must support inline Javascript expressions.
    If this requirement is not present, the workflow platform must not perform expression
    interpolation.

    """

    def __init__(
        self,
        expressionLib: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.class_ = "InlineJavascriptRequirement"
        self.expressionLib = expressionLib

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, InlineJavascriptRequirement):
            return bool(
                self.class_ == other.class_
                and self.expressionLib == other.expressionLib
            )
        return False

    def __hash__(self) -> int:
        return hash((self.class_, self.expressionLib))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "InlineJavascriptRequirement":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("class") is None:
                raise
ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InlineJavascriptRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e expressionLib = None if "expressionLib" in _doc: try: expressionLib = load_field( _doc.get("expressionLib"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("expressionLib") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expressionLib`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expressionLib") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expressionLib` field is not valid because:", SourceLine(_doc, "expressionLib", str), [e], detailed_message=f"the `expressionLib` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `expressionLib`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( expressionLib=expressionLib, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.expressionLib is not None: r["expressionLib"] = save( self.expressionLib, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "expressionLib"]) class CommandInputSchema(Saveable): pass class SchemaDefRequirement(ProcessRequirement): """ This field consists of an array of type definitions which must be used when interpreting the `inputs` and `outputs` fields. When a `type` field contains an IRI, the implementation must check if the type is defined in `schemaDefs` and use that definition. If the type is not found in `schemaDefs`, it is an error. The entries in `schemaDefs` must be processed in the order listed such that later schema definitions may refer to earlier schema definitions. - **Type definitions are allowed for `enum` and `record` types only.** - Type definitions may be shared by defining them in a file and then `$include`-ing them in the `types` field. 
- A file can contain a list of type definitions """ def __init__( self, types: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SchemaDefRequirement" self.types = types def __eq__(self, other: Any) -> bool: if isinstance(other, SchemaDefRequirement): return bool(self.class_ == other.class_ and self.types == other.types) return False def __hash__(self) -> int: return hash((self.class_, self.types)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SchemaDefRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SchemaDefRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("types") is None: raise ValidationException("missing required field `types`", None, []) types = load_field( _doc.get("types"), array_of_union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader, baseuri, loadingOptions, lc=_doc.get("types") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `types`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("types") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) 
_errors__.append( ValidationException( "the `types` field is not valid because:", SourceLine(_doc, "types", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `types` field is not valid because:", SourceLine(_doc, "types", str), [e], detailed_message=f"the `types` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `types`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( types=types, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.types is not None: r["types"] = save( self.types, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: 
r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "types"]) class SecondaryFileSchema(Saveable): """ Secondary files are specified using the following micro-DSL for secondary files: * If the value is a string, it is transformed to an object with two fields `pattern` and `required` * By default, the value of `required` is `null` (this indicates default behavior, which may be based on the context) * If the value ends with a question mark `?` the question mark is stripped off and the value of the field `required` is set to `False` * The remaining value is assigned to the field `pattern` For implementation details and examples, please see [this section](SchemaSalad.html#Domain_Specific_Language_for_secondary_files) in the Schema Salad specification. """ def __init__( self, pattern: Any, required: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.pattern = pattern self.required = required def __eq__(self, other: Any) -> bool: if isinstance(other, SecondaryFileSchema): return bool( self.pattern == other.pattern and self.required == other.required ) return False def __hash__(self) -> int: return hash((self.pattern, self.required)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SecondaryFileSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("pattern") is None: raise ValidationException("missing required field `pattern`", None, []) pattern = load_field( _doc.get("pattern"), 
union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("pattern") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `pattern`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("pattern") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `pattern` field is not valid because:", SourceLine(_doc, "pattern", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `pattern` field is not valid because:", SourceLine(_doc, "pattern", str), [e], detailed_message=f"the `pattern` field with value `{val}` " "is not valid because:", ) ) required = None if "required" in _doc: try: required = load_field( _doc.get("required"), union_of_None_type_or_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("required") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `required`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("required") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `required` field is not valid because:", SourceLine(_doc, "required", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `required` field is not valid because:", SourceLine(_doc, "required", str), 
[e], detailed_message=f"the `required` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `pattern`, `required`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( pattern=pattern, required=required, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.pattern is not None: r["pattern"] = save( self.pattern, top=False, base_url=base_url, relative_uris=relative_uris ) if self.required is not None: r["required"] = save( self.required, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["pattern", "required"]) class LoadListingRequirement(ProcessRequirement): """ Specify the desired behavior for loading the `listing` field of a Directory object for use by expressions. 
""" def __init__( self, loadListing: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "LoadListingRequirement" self.loadListing = loadListing def __eq__(self, other: Any) -> bool: if isinstance(other, LoadListingRequirement): return bool( self.class_ == other.class_ and self.loadListing == other.loadListing ) return False def __hash__(self) -> int: return hash((self.class_, self.loadListing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "LoadListingRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_LoadListingRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid 
because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `loadListing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadListing=loadListing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] 
= self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "loadListing"]) class EnvironmentDef(Saveable): """ Define an environment variable that will be set in the runtime environment by the workflow platform when executing the command line tool. May be the result of executing an expression, such as getting a parameter from input. """ def __init__( self, envName: Any, envValue: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.envName = envName self.envValue = envValue def __eq__(self, other: Any) -> bool: if isinstance(other, EnvironmentDef): return bool( self.envName == other.envName and self.envValue == other.envValue ) return False def __hash__(self) -> int: return hash((self.envName, self.envValue)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvironmentDef": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("envName") is None: raise ValidationException("missing required field `envName`", None, []) envName = load_field( _doc.get("envName"), strtype, baseuri, loadingOptions, lc=_doc.get("envName") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envName`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envName") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envName` field is not valid because:", SourceLine(_doc, "envName", str), [e], detailed_message=f"the `envName` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("envValue") is None: raise ValidationException("missing required field `envValue`", None, []) envValue = load_field( _doc.get("envValue"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("envValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envValue` field is not valid because:", SourceLine(_doc, "envValue", str), [e], detailed_message=f"the `envValue` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: 
`envName`, `envValue`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envName=envName, envValue=envValue, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.envName is not None: r["envName"] = save( self.envName, top=False, base_url=base_url, relative_uris=relative_uris ) if self.envValue is not None: r["envValue"] = save( self.envValue, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["envName", "envValue"]) class CommandLineBinding(InputBinding): """ When listed under `inputBinding` in the input schema, the term "value" refers to the corresponding value in the input object. For binding objects listed in `CommandLineTool.arguments`, the term "value" refers to the effective value after evaluating `valueFrom`. The binding behavior when building the command line depends on the data type of the value. If there is a mismatch between the type described by the input schema and the effective value, such as resulting from an expression evaluation, an implementation must use the data type of the effective value. - **string**: Add `prefix` and the string to the command line. - **number**: Add `prefix` and decimal representation to command line. - **boolean**: If true, add `prefix` to the command line. If false, add nothing. 
- **File**: Add `prefix` and the value of [`File.path`](#File) to the command line. - **Directory**: Add `prefix` and the value of [`Directory.path`](#Directory) to the command line. - **array**: If `itemSeparator` is specified, add `prefix` and join the array into a single string with `itemSeparator` separating the items. Otherwise, first add `prefix`, then recursively process individual elements. If the array is empty, it does not add anything to the command line. - **object**: Add `prefix` only, and recursively add object fields for which `inputBinding` is specified. - **null**: Add nothing. """ def __init__( self, loadContents: Optional[Any] = None, position: Optional[Any] = None, prefix: Optional[Any] = None, separate: Optional[Any] = None, itemSeparator: Optional[Any] = None, valueFrom: Optional[Any] = None, shellQuote: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.loadContents = loadContents self.position = position self.prefix = prefix self.separate = separate self.itemSeparator = itemSeparator self.valueFrom = valueFrom self.shellQuote = shellQuote def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineBinding): return bool( self.loadContents == other.loadContents and self.position == other.position and self.prefix == other.prefix and self.separate == other.separate and self.itemSeparator == other.itemSeparator and self.valueFrom == other.valueFrom and self.shellQuote == other.shellQuote ) return False def __hash__(self) -> int: return hash( ( self.loadContents, self.position, self.prefix, self.separate, self.itemSeparator, self.valueFrom, self.shellQuote, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, 
docRoot: Optional[str] = None ) -> "CommandLineBinding": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) position = None if "position" in _doc: try: position = load_field( _doc.get("position"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("position") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `position`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("position") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `position` field is not valid because:", SourceLine(_doc, "position", str), [e], detailed_message=f"the `position` field with value `{val}` " "is not valid because:", ) ) prefix = None if "prefix" in _doc: try: prefix = load_field( _doc.get("prefix"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("prefix") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `prefix`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("prefix") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `prefix` field is not valid because:", SourceLine(_doc, "prefix", str), [e], detailed_message=f"the `prefix` field with value `{val}` " "is not valid because:", ) ) separate = None if "separate" in _doc: try: separate = load_field( _doc.get("separate"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("separate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `separate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("separate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( 
"the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `separate` field is not valid because:", SourceLine(_doc, "separate", str), [e], detailed_message=f"the `separate` field with value `{val}` " "is not valid because:", ) ) itemSeparator = None if "itemSeparator" in _doc: try: itemSeparator = load_field( _doc.get("itemSeparator"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("itemSeparator") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `itemSeparator`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("itemSeparator") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `itemSeparator` field is not valid because:", SourceLine(_doc, "itemSeparator", str), [e], detailed_message=f"the `itemSeparator` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) 
== "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) shellQuote = None if "shellQuote" in _doc: try: shellQuote = load_field( _doc.get("shellQuote"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("shellQuote") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shellQuote`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shellQuote") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shellQuote` field is not valid because:", SourceLine(_doc, "shellQuote", str), [e], detailed_message=f"the `shellQuote` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: 
if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`, `position`, `prefix`, `separate`, `itemSeparator`, `valueFrom`, `shellQuote`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, position=position, prefix=prefix, separate=separate, itemSeparator=itemSeparator, valueFrom=valueFrom, shellQuote=shellQuote, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.position is not None: r["position"] = save( self.position, top=False, base_url=base_url, relative_uris=relative_uris ) if self.prefix is not None: r["prefix"] = save( self.prefix, top=False, base_url=base_url, relative_uris=relative_uris ) if self.separate is not None: r["separate"] = save( self.separate, top=False, base_url=base_url, relative_uris=relative_uris ) if self.itemSeparator is not None: r["itemSeparator"] = save( self.itemSeparator, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.shellQuote is not None: r["shellQuote"] = save( self.shellQuote, top=False, base_url=base_url, relative_uris=relative_uris, ) 
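As an aside on the semantics that the `CommandOutputBinding` docstring below describes: a minimal, self-contained sketch of the documented evaluation order (glob, then loadContents, then outputEval). This does not use this generated module at all; `collect_output` and its plain-dict binding argument are hypothetical stand-ins, `secondaryFiles` resolution is omitted, and the CWL expression in `outputEval` is modeled as a Python callable whose `self` parameter mirrors the CWL `self` (the glob result).

```python
from pathlib import Path
import tempfile

def collect_output(outdir: str, binding: dict):
    """Toy evaluation of a CommandOutputBinding-like mapping.

    Steps mirror the documented order: glob -> loadContents -> outputEval.
    (secondaryFiles resolution is omitted in this sketch.)
    """
    # 1. glob: match files relative to the tool's output directory
    matches = sorted(Path(outdir).glob(binding.get("glob", "*")))
    results = [{"class": "File", "path": str(p)} for p in matches]
    # 2. loadContents: attach each file's contents (CWL caps this at 64 KiB)
    if binding.get("loadContents"):
        for f in results:
            f["contents"] = Path(f["path"]).read_text()[:65536]
    # 3. outputEval: modeled here as a callable receiving the glob result
    if "outputEval" in binding:
        results = binding["outputEval"](results)
    return results

with tempfile.TemporaryDirectory() as d:
    Path(d, "out.txt").write_text("hello")
    val = collect_output(
        d,
        {
            "glob": "*.txt",
            "loadContents": True,
            "outputEval": lambda self: self[0]["contents"],
        },
    )
```

A runner built on this module would obtain the same three fields from a parsed `CommandOutputBinding` instance (`.glob`, `.loadContents`, `.outputEval`) rather than from a raw dict.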
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "loadContents",
            "position",
            "prefix",
            "separate",
            "itemSeparator",
            "valueFrom",
            "shellQuote",
        ]
    )


class CommandOutputBinding(LoadContents):
    """
    Describes how to generate an output parameter based on the files produced
    by a CommandLineTool.

    The output parameter value is generated by applying these operations in
    the following order:

      - glob
      - loadContents
      - outputEval
      - secondaryFiles
    """

    def __init__(
        self,
        loadContents: Optional[Any] = None,
        loadListing: Optional[Any] = None,
        glob: Optional[Any] = None,
        outputEval: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.loadContents = loadContents
        self.loadListing = loadListing
        self.glob = glob
        self.outputEval = outputEval

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputBinding):
            return bool(
                self.loadContents == other.loadContents
                and self.loadListing == other.loadListing
                and self.glob == other.glob
                and self.outputEval == other.outputEval
            )
        return False

    def __hash__(self) -> int:
        return hash((self.loadContents, self.loadListing, self.glob, self.outputEval))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None
    ) -> "CommandOutputBinding":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        loadContents = None
        if "loadContents" in _doc:
            try:
                loadContents = load_field(
                    _doc.get("loadContents"),
                    union_of_None_type_or_booltype,
                    baseuri,
loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, 
"loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) glob = None if "glob" in _doc: try: glob = load_field( _doc.get("glob"), union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("glob") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `glob`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("glob") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `glob` field is not valid because:", SourceLine(_doc, "glob", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `glob` field is not valid because:", SourceLine(_doc, "glob", str), [e], detailed_message=f"the `glob` field with value `{val}` " "is not valid because:", ) ) outputEval = None if "outputEval" in _doc: try: outputEval = load_field( _doc.get("outputEval"), union_of_None_type_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outputEval") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputEval`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputEval") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputEval` field is not valid because:", SourceLine(_doc, "outputEval", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputEval` field is not valid because:", SourceLine(_doc, "outputEval", str), [e], detailed_message=f"the `outputEval` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `loadContents`, `loadListing`, `glob`, `outputEval`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loadContents=loadContents, loadListing=loadListing, glob=glob, outputEval=outputEval, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.glob is not None: r["glob"] = save( self.glob, top=False, base_url=base_url, relative_uris=relative_uris ) if self.outputEval is not None: r["outputEval"] = save( self.outputEval, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["loadContents", "loadListing", "glob", "outputEval"]) class CommandLineBindable(Saveable): def __init__( self, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandLineBindable): return bool(self.inputBinding == other.inputBinding) return False def __hash__(self) -> int: return hash((self.inputBinding)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineBindable": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["inputBinding"]) class CommandInputRecordField(InputRecordField, CommandLineBindable): name: str def __init__( self, name: Any, type_: Any, doc: Optional[Any] = None, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, inputBinding: Optional[Any] = None, 
extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format self.loadContents = loadContents self.loadListing = loadListing self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, self.loadContents, self.loadListing, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( 
_doc.get("type"), typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: 
error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value 
`{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `loadContents`, `loadListing`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, label=label, secondaryFiles=secondaryFiles, streamable=streamable, format=format, loadContents=loadContents, loadListing=loadListing, inputBinding=inputBinding, 
extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.name, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ 
"doc", "name", "type", "label", "secondaryFiles", "streamable", "format", "loadContents", "loadListing", "inputBinding", ] ) class CommandInputRecordSchema( InputRecordSchema, CommandInputSchema, CommandLineBindable ): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.fields, self.type_, self.label, self.doc, self.name, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = 
_doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is 
None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", 
SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, 
                relative_uris=relative_uris
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["fields", "type", "label", "doc", "name", "inputBinding"])


class CommandInputEnumSchema(InputEnumSchema, CommandInputSchema, CommandLineBindable):
    name: str

    def __init__(
        self,
        symbols: Any,
        type_: Any,
        name: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        inputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.name = name if name is not None else "_:" + str(_uuid__.uuid4())
        self.symbols = symbols
        self.type_ = type_
        self.label = label
        self.doc = doc
        self.inputBinding = inputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandInputEnumSchema):
            return bool(
                self.name == other.name
                and self.symbols == other.symbols
                and self.type_ == other.type_
                and self.label == other.label
                and self.doc == other.doc
                and self.inputBinding == other.inputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.name,
                self.symbols,
                self.type_,
                self.label,
                self.doc,
                self.inputBinding,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "CommandInputEnumSchema":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        name = None
        if "name" in _doc:
            try:
                name = load_field(
                    _doc.get("name"),
uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) 
r["symbols"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", "label", "doc", "inputBinding"]) class CommandInputArraySchema( InputArraySchema, CommandInputSchema, CommandLineBindable ): name: str def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandInputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( (self.items, self.type_, self.label, self.doc, self.name, self.inputBinding) ) @classmethod def 
fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandInputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) 
except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None 
if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], 
detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( items=items, type_=type_, label=label, doc=doc, name=name, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, 
        loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.items is not None:
            u = save_relative_uri(self.items, self.name, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.name,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "doc", "name", "inputBinding"])


class CommandOutputRecordField(OutputRecordField):
    name: str

    def __init__(
        self,
        name: Any,
        type_: Any,
        doc: Optional[Any] = None,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        format: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.doc = doc
self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.type_ = type_ self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.format = format self.outputBinding = outputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputRecordField): return bool( self.doc == other.doc and self.name == other.name and self.type_ == other.type_ and self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.format == other.format and self.outputBinding == other.outputBinding ) return False def __hash__(self) -> int: return hash( ( self.doc, self.name, self.type_, self.label, self.secondaryFiles, self.streamable, self.format, self.outputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputRecordField": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), 
[e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: _errors__.append(ValidationException("missing name")) if not __original_name_is_none: baseuri = cast(str, name) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, 
loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, 
"streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `doc`, `name`, `type`, `label`, `secondaryFiles`, `streamable`, `format`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( doc=doc, name=name, type_=type_, label=label, secondaryFiles=secondaryFiles, streamable=streamable, format=format, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( 
self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.name, relative_uris=relative_uris, ) if self.format is not None: u = save_relative_uri(self.format, self.name, True, None, relative_uris) r["format"] = u if self.outputBinding is not None: r["outputBinding"] = save( self.outputBinding, top=False, base_url=self.name, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "doc", "name", "type", "label", "secondaryFiles", "streamable", "format", "outputBinding", ] ) class CommandOutputRecordSchema(OutputRecordSchema): name: str def __init__( self, type_: Any, fields: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.fields = fields self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputRecordSchema): return bool( self.fields == other.fields and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.fields, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( 
cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputRecordSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) fields = None if "fields" in _doc: try: fields = load_field( _doc.get("fields"), idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader, baseuri, loadingOptions, lc=_doc.get("fields") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `fields`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("fields") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `fields` field is not valid because:", SourceLine(_doc, "fields", str), [e], detailed_message=f"the `fields` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Record_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = 
expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `fields`, `type`, `label`, `doc`, `name`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( fields=fields, type_=type_, label=label, doc=doc, name=name, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.fields is not None: r["fields"] = save( self.fields, top=False, base_url=self.name, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["fields", "type", "label", "doc", "name"]) class CommandOutputEnumSchema(OutputEnumSchema): name: str def __init__( self, symbols: Any, type_: Any, name: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, 
loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) self.symbols = symbols self.type_ = type_ self.label = label self.doc = doc def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputEnumSchema): return bool( self.name == other.name and self.symbols == other.symbols and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc ) return False def __hash__(self) -> int: return hash((self.name, self.symbols, self.type_, self.label, self.doc)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputEnumSchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, 
"name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("symbols") is None: raise ValidationException("missing required field `symbols`", None, []) symbols = load_field( _doc.get("symbols"), uri_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("symbols") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `symbols`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("symbols") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `symbols` field is not valid because:", SourceLine(_doc, "symbols", str), [e], detailed_message=f"the `symbols` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Enum_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `name`, `symbols`, `type`, `label`, `doc`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( name=name, symbols=symbols, type_=type_, label=label, doc=doc, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.name is not None: u = save_relative_uri(self.name, base_url, True, None, relative_uris) r["name"] = u if self.symbols is not None: u = save_relative_uri(self.symbols, self.name, True, None, relative_uris) r["symbols"] = u if self.type_ is not None: 
r["type"] = save( self.type_, top=False, base_url=self.name, relative_uris=relative_uris ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.name, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.name, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["name", "symbols", "type", "label", "doc"]) class CommandOutputArraySchema(OutputArraySchema): name: str def __init__( self, items: Any, type_: Any, label: Optional[Any] = None, doc: Optional[Any] = None, name: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.items = items self.type_ = type_ self.label = label self.doc = doc self.name = name if name is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, CommandOutputArraySchema): return bool( self.items == other.items and self.type_ == other.type_ and self.label == other.label and self.doc == other.doc and self.name == other.name ) return False def __hash__(self) -> int: return hash((self.items, self.type_, self.label, self.doc, self.name)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandOutputArraySchema": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] name = None if "name" in _doc: try: name = load_field( _doc.get("name"), 
uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("name") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `name`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("name") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `name` field is not valid because:", SourceLine(_doc, "name", str), [e], detailed_message=f"the `name` field with value `{val}` " "is not valid because:", ) ) __original_name_is_none = name is None if name is None: if docRoot is not None: name = docRoot else: name = "_:" + str(_uuid__.uuid4()) if not __original_name_is_none: baseuri = cast(str, name) try: if _doc.get("items") is None: raise ValidationException("missing required field `items`", None, []) items = load_field( _doc.get("items"), uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None, baseuri, loadingOptions, lc=_doc.get("items") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `items`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("items") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) 
_errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `items` field is not valid because:", SourceLine(_doc, "items", str), [e], detailed_message=f"the `items` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_Array_nameLoader_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( 
ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, 
                        scoped_id=False,
                        vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `items`, `type`, `label`, `doc`, `name`".format(
                                k
                            ),
                            SourceLine(_doc, k, str),
                        )
                    )
        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            items=items,
            type_=type_,
            label=label,
            doc=doc,
            name=name,
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        loadingOptions.idx[cast(str, name)] = (_constructed, loadingOptions)
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}

        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.name is not None:
            u = save_relative_uri(self.name, base_url, True, None, relative_uris)
            r["name"] = u
        if self.items is not None:
            u = save_relative_uri(self.items, self.name, False, 2, relative_uris)
            r["items"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.name, relative_uris=relative_uris
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.name, relative_uris=relative_uris
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["items", "type", "label", "doc", "name"])


class CommandInputParameter(InputParameter):
    """
    An input parameter for a CommandLineTool.
    """

    id: str

    def __init__(
        self,
        type_: Any,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        doc: Optional[Any] = None,
        id: Optional[Any] = None,
        format: Optional[Any] = None,
        loadContents: Optional[Any] = None,
        loadListing: Optional[Any] = None,
        default: Optional[Any] = None,
        inputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.label = label
        self.secondaryFiles = secondaryFiles
        self.streamable = streamable
        self.doc = doc
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.format = format
        self.loadContents = loadContents
        self.loadListing = loadListing
        self.default = default
        self.type_ = type_
        self.inputBinding = inputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandInputParameter):
            return bool(
                self.label == other.label
                and self.secondaryFiles == other.secondaryFiles
                and self.streamable == other.streamable
                and self.doc == other.doc
                and self.id == other.id
                and self.format == other.format
                and self.loadContents == other.loadContents
                and self.loadListing == other.loadListing
                and self.default == other.default
                and self.type_ == other.type_
                and self.inputBinding == other.inputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.label,
                self.secondaryFiles,
                self.streamable,
                self.doc,
                self.id,
                self.format,
                self.loadContents,
                self.loadListing,
                self.default,
                self.type_,
                self.inputBinding,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None
    ) -> "CommandInputParameter":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        id = None
        if "id" in _doc:
            try:
                id = load_field(
                    _doc.get("id"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("id")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `id`":
                    _errors__.append(
                        ValidationException(
                            str(e), None
                        )
                    )
                else:
                    val = _doc.get("id")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `id` field is not valid because:",
                                SourceLine(_doc, "id", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}")],
                            )
                        )
                    else:
                        _errors__.append(
                            ValidationException(
                                "the `id` field is not valid because:",
                                SourceLine(_doc, "id", str),
                                [e],
                                detailed_message=f"the `id` field with value `{val}` "
                                "is not valid because:",
                            )
                        )
        __original_id_is_none = id is None
        if id is None:
            if docRoot is not None:
                id = docRoot
            else:
                id = "_:" + str(_uuid__.uuid4())
        if not __original_id_is_none:
            baseuri = cast(str, id)
        label = None
        if "label" in _doc:
            try:
                label = load_field(
                    _doc.get("label"),
                    union_of_None_type_or_strtype,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("label")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `label`":
                    _errors__.append(
                        ValidationException(
                            str(e), None
                        )
                    )
                else:
                    val = _doc.get("label")
                    if error_message != str(e):
                        val_type = convert_typing(extract_type(type(val)))
                        _errors__.append(
                            ValidationException(
                                "the `label` field is not valid because:",
                                SourceLine(_doc, "label", str),
                                [ValidationException(f"Value is a {val_type}, "
                                                     f"but valid {to_print} for this field "
                                                     f"{verb_tensage} {error_message}",
                                                     detailed_message=f"Value `{val}` is a {val_type}, "
                                                     f"but valid {to_print} for this field "
f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), 
union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, 
"default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `loadContents`, `loadListing`, `default`, `type`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, loadContents=loadContents, loadListing=loadListing, default=default, type_=type_, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u 
= save_relative_uri(self.id, base_url, True, None, relative_uris)
            r["id"] = u
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
            r["streamable"] = save(
                self.streamable,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.format is not None:
            u = save_relative_uri(self.format, self.id, True, None, relative_uris)
            r["format"] = u
        if self.loadContents is not None:
            r["loadContents"] = save(
                self.loadContents,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.loadListing is not None:
            r["loadListing"] = save(
                self.loadListing,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.default is not None:
            r["default"] = save(
                self.default, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.inputBinding is not None:
            r["inputBinding"] = save(
                self.inputBinding,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "label",
            "secondaryFiles",
            "streamable",
            "doc",
            "id",
            "format",
            "loadContents",
            "loadListing",
            "default",
            "type",
            "inputBinding",
        ]
    )


class CommandOutputParameter(OutputParameter):
    """
    An output parameter for a CommandLineTool.
    """

    id: str

    def __init__(
        self,
        type_: Any,
        label: Optional[Any] = None,
        secondaryFiles: Optional[Any] = None,
        streamable: Optional[Any] = None,
        doc: Optional[Any] = None,
        id: Optional[Any] = None,
        format: Optional[Any] = None,
        outputBinding: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.label = label
        self.secondaryFiles = secondaryFiles
        self.streamable = streamable
        self.doc = doc
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.format = format
        self.type_ = type_
        self.outputBinding = outputBinding

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandOutputParameter):
            return bool(
                self.label == other.label
                and self.secondaryFiles == other.secondaryFiles
                and self.streamable == other.streamable
                and self.doc == other.doc
                and self.id == other.id
                and self.format == other.format
                and self.type_ == other.type_
                and self.outputBinding == other.outputBinding
            )
        return False

    def __hash__(self) -> int:
        return hash(
            (
                self.label,
                self.secondaryFiles,
                self.streamable,
                self.doc,
                self.id,
                self.format,
                self.type_,
                self.outputBinding,
            )
        )

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None
    ) -> "CommandOutputParameter":
        _doc = copy.copy(doc)

        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        id = None
        if "id" in _doc:
            try:
                id = load_field(
                    _doc.get("id"),
                    uri_union_of_None_type_or_strtype_True_False_None_None,
                    baseuri,
                    loadingOptions,
                    lc=_doc.get("id")
                )
            except ValidationException as e:
                error_message, to_print, verb_tensage = parse_errors(str(e))
                if str(e) == "missing required field `id`":
                    _errors__.append(
                        ValidationException(
                            str(e), None
                        )
                    )
else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( 
_doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) outputBinding = None if "outputBinding" in _doc: try: outputBinding = load_field( _doc.get("outputBinding"), union_of_None_type_or_CommandOutputBindingLoader, baseuri, loadingOptions, 
lc=_doc.get("outputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputBinding` field is not valid because:", SourceLine(_doc, "outputBinding", str), [e], detailed_message=f"the `outputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `type`, `outputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, type_=type_, outputBinding=outputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.id is not None:
            u = save_relative_uri(self.id, base_url, True, None, relative_uris)
            r["id"] = u
        if self.label is not None:
            r["label"] = save(
                self.label, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.secondaryFiles is not None:
            r["secondaryFiles"] = save(
                self.secondaryFiles,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.streamable is not None:
            r["streamable"] = save(
                self.streamable,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )
        if self.doc is not None:
            r["doc"] = save(
                self.doc, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.format is not None:
            u = save_relative_uri(self.format, self.id, True, None, relative_uris)
            r["format"] = u
        if self.type_ is not None:
            r["type"] = save(
                self.type_, top=False, base_url=self.id, relative_uris=relative_uris
            )
        if self.outputBinding is not None:
            r["outputBinding"] = save(
                self.outputBinding,
                top=False,
                base_url=self.id,
                relative_uris=relative_uris,
            )

        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "label",
            "secondaryFiles",
            "streamable",
            "doc",
            "id",
            "format",
            "type",
            "outputBinding",
        ]
    )


class CommandLineTool(Process):
    """
    This defines the schema of the CWL Command Line Tool Description document.
    """

    id: str

    def __init__(
        self,
        inputs: Any,
        outputs: Any,
        id: Optional[Any] = None,
        label: Optional[Any] = None,
        doc: Optional[Any] = None,
        requirements: Optional[Any] = None,
        hints: Optional[Any] = None,
        cwlVersion: Optional[Any] = None,
        intent: Optional[Any] = None,
        baseCommand: Optional[Any] = None,
        arguments: Optional[Any] = None,
        stdin: Optional[Any] = None,
        stderr: Optional[Any] = None,
        stdout: Optional[Any] = None,
        successCodes: Optional[Any] = None,
        temporaryFailCodes: Optional[Any] = None,
        permanentFailCodes: Optional[Any] = None,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.id = id if id is not None else "_:" + str(_uuid__.uuid4())
        self.label = label
        self.doc = doc
        self.inputs = inputs
        self.outputs = outputs
        self.requirements = requirements
        self.hints = hints
        self.cwlVersion = cwlVersion
        self.intent = intent
        self.class_ = "CommandLineTool"
        self.baseCommand = baseCommand
        self.arguments = arguments
        self.stdin = stdin
        self.stderr = stderr
        self.stdout = stdout
        self.successCodes = successCodes
        self.temporaryFailCodes = temporaryFailCodes
        self.permanentFailCodes = permanentFailCodes

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, CommandLineTool):
            return bool(
                self.id == other.id
                and self.label == other.label
                and self.doc == other.doc
                and self.inputs == other.inputs
                and self.outputs == other.outputs
                and self.requirements == other.requirements
                and self.hints == other.hints
                and self.cwlVersion == other.cwlVersion
                and self.intent == other.intent
                and self.class_ == other.class_
                and self.baseCommand == other.baseCommand
                and self.arguments == other.arguments
                and self.stdin == other.stdin
                and self.stderr == other.stderr
                and self.stdout == other.stdout
                and self.successCodes ==
other.successCodes and self.temporaryFailCodes == other.temporaryFailCodes and self.permanentFailCodes == other.permanentFailCodes ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.intent, self.class_, self.baseCommand, self.arguments, self.stdin, self.stderr, self.stdout, self.successCodes, self.temporaryFailCodes, self.permanentFailCodes, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CommandLineTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise 
ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_CommandLineTool_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_CommandInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_CommandOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` 
field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for 
this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) intent = None if "intent" in _doc: try: intent = load_field( _doc.get("intent"), uri_union_of_None_type_or_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("intent") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `intent`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("intent") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `intent` field is not valid because:", 
SourceLine(_doc, "intent", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [e], detailed_message=f"the `intent` field with value `{val}` " "is not valid because:", ) ) baseCommand = None if "baseCommand" in _doc: try: baseCommand = load_field( _doc.get("baseCommand"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("baseCommand") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `baseCommand`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("baseCommand") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `baseCommand` field is not valid because:", SourceLine(_doc, "baseCommand", str), [e], detailed_message=f"the `baseCommand` field with value `{val}` " "is not valid because:", ) ) arguments = None if "arguments" in _doc: try: arguments = load_field( _doc.get("arguments"), union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, baseuri, loadingOptions, lc=_doc.get("arguments") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `arguments`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("arguments") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `arguments` field is not valid because:", SourceLine(_doc, "arguments", str), [e], detailed_message=f"the `arguments` field with value `{val}` " "is not valid because:", ) ) stdin = None if "stdin" in _doc: try: stdin = load_field( _doc.get("stdin"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stdin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdin` field is not valid because:", SourceLine(_doc, "stdin", str), [e], detailed_message=f"the `stdin` field with value `{val}` " "is not valid because:", ) ) stderr = None if "stderr" in _doc: try: stderr = load_field( _doc.get("stderr"), union_of_None_type_or_strtype_or_ExpressionLoader, 
baseuri, loadingOptions, lc=_doc.get("stderr") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stderr`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stderr") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stderr` field is not valid because:", SourceLine(_doc, "stderr", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stderr` field is not valid because:", SourceLine(_doc, "stderr", str), [e], detailed_message=f"the `stderr` field with value `{val}` " "is not valid because:", ) ) stdout = None if "stdout" in _doc: try: stdout = load_field( _doc.get("stdout"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("stdout") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `stdout`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("stdout") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `stdout` field is not valid because:", SourceLine(_doc, "stdout", str), [e], detailed_message=f"the `stdout` field with value `{val}` " "is not 
valid because:", ) ) successCodes = None if "successCodes" in _doc: try: successCodes = load_field( _doc.get("successCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("successCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `successCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("successCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `successCodes` field is not valid because:", SourceLine(_doc, "successCodes", str), [e], detailed_message=f"the `successCodes` field with value `{val}` " "is not valid because:", ) ) temporaryFailCodes = None if "temporaryFailCodes" in _doc: try: temporaryFailCodes = load_field( _doc.get("temporaryFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("temporaryFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `temporaryFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("temporaryFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `temporaryFailCodes` field is not valid because:", SourceLine(_doc, "temporaryFailCodes", str), [e], detailed_message=f"the `temporaryFailCodes` field with value `{val}` " "is not valid because:", ) ) permanentFailCodes = None if "permanentFailCodes" in _doc: try: permanentFailCodes = load_field( _doc.get("permanentFailCodes"), union_of_None_type_or_array_of_inttype, baseuri, loadingOptions, lc=_doc.get("permanentFailCodes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `permanentFailCodes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("permanentFailCodes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `permanentFailCodes` field is not valid because:", SourceLine(_doc, "permanentFailCodes", str), [e], detailed_message=f"the `permanentFailCodes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, 
`outputs`, `requirements`, `hints`, `cwlVersion`, `intent`, `class`, `baseCommand`, `arguments`, `stdin`, `stderr`, `stdout`, `successCodes`, `temporaryFailCodes`, `permanentFailCodes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, intent=intent, baseCommand=baseCommand, arguments=arguments, stdin=stdin, stderr=stderr, stdout=stdout, successCodes=successCodes, temporaryFailCodes=temporaryFailCodes, permanentFailCodes=permanentFailCodes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not 
None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.intent is not None: u = save_relative_uri(self.intent, self.id, True, None, relative_uris) r["intent"] = u if self.baseCommand is not None: r["baseCommand"] = save( self.baseCommand, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.arguments is not None: r["arguments"] = save( self.arguments, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stdin is not None: r["stdin"] = save( self.stdin, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stderr is not None: r["stderr"] = save( self.stderr, top=False, base_url=self.id, relative_uris=relative_uris ) if self.stdout is not None: r["stdout"] = save( self.stdout, top=False, base_url=self.id, relative_uris=relative_uris ) if self.successCodes is not None: r["successCodes"] = save( self.successCodes, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.temporaryFailCodes is not None: r["temporaryFailCodes"] = save( self.temporaryFailCodes, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.permanentFailCodes is not None: r["permanentFailCodes"] = save( self.permanentFailCodes, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "intent", "class", "baseCommand", "arguments", "stdin", "stderr", "stdout", "successCodes", "temporaryFailCodes", "permanentFailCodes", 
] ) class DockerRequirement(ProcessRequirement): """ Indicates that a workflow component should be run in a [Docker](https://docker.com) or Docker-compatible (such as [Singularity](https://www.sylabs.io/) and [udocker](https://github.com/indigo-dc/udocker)) container environment and specifies how to fetch or build the image. If a CommandLineTool lists `DockerRequirement` under `hints` (or `requirements`), it may (or must) be run in the specified Docker container. The platform must first acquire or install the correct Docker image as specified by `dockerPull`, `dockerImport`, `dockerLoad` or `dockerFile`. The platform must execute the tool in the container using `docker run` with the appropriate Docker image and tool command line. The workflow platform may provide input files and the designated output directory through the use of volume bind mounts. The platform should rewrite file paths in the input object to correspond to the Docker bind mounted locations. That is, the platform should rewrite values in the parameter context such as `runtime.outdir`, `runtime.tmpdir` and others to be valid paths within the container. The platform must ensure that `runtime.outdir` and `runtime.tmpdir` are distinct directories. When running a tool contained in Docker, the workflow platform must not assume anything about the contents of the Docker container, such as the presence or absence of specific software, except to assume that the generated command line represents a valid command within the runtime environment of the container. A container image may specify an [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) and/or [CMD](https://docs.docker.com/engine/reference/builder/#cmd). Command line arguments will be appended after all elements of ENTRYPOINT, and will override all elements specified using CMD (in other words, CMD is only used when the CommandLineTool definition produces an empty command line). 
Use of implicit ENTRYPOINT or CMD is discouraged due to reproducibility concerns of the implicit hidden execution point (For further discussion, see [https://doi.org/10.12688/f1000research.15140.1](https://doi.org/10.12688/f1000research.15140.1)). Portable CommandLineTool wrappers in which use of a container is optional must not rely on ENTRYPOINT or CMD. CommandLineTools which do rely on ENTRYPOINT or CMD must list `DockerRequirement` in the `requirements` section. ## Interaction with other requirements If [EnvVarRequirement](#EnvVarRequirement) is specified alongside a DockerRequirement, the environment variables must be provided to Docker using `--env` or `--env-file` and interact with the container's preexisting environment as defined by Docker. """ def __init__( self, dockerPull: Optional[Any] = None, dockerLoad: Optional[Any] = None, dockerFile: Optional[Any] = None, dockerImport: Optional[Any] = None, dockerImageId: Optional[Any] = None, dockerOutputDirectory: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "DockerRequirement" self.dockerPull = dockerPull self.dockerLoad = dockerLoad self.dockerFile = dockerFile self.dockerImport = dockerImport self.dockerImageId = dockerImageId self.dockerOutputDirectory = dockerOutputDirectory def __eq__(self, other: Any) -> bool: if isinstance(other, DockerRequirement): return bool( self.class_ == other.class_ and self.dockerPull == other.dockerPull and self.dockerLoad == other.dockerLoad and self.dockerFile == other.dockerFile and self.dockerImport == other.dockerImport and self.dockerImageId == other.dockerImageId and self.dockerOutputDirectory == other.dockerOutputDirectory ) return False def __hash__(self) -> int: 
return hash( ( self.class_, self.dockerPull, self.dockerLoad, self.dockerFile, self.dockerImport, self.dockerImageId, self.dockerOutputDirectory, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "DockerRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_DockerRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e dockerPull = None if "dockerPull" in _doc: try: dockerPull = load_field( _doc.get("dockerPull"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerPull") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerPull`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerPull") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerPull` field is not valid because:", SourceLine(_doc, "dockerPull", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerPull` field is not valid because:", SourceLine(_doc, "dockerPull", str), [e], detailed_message=f"the `dockerPull` field with value `{val}` " "is not valid because:", ) ) dockerLoad = None if "dockerLoad" in _doc: try: dockerLoad = 
load_field( _doc.get("dockerLoad"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerLoad") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerLoad`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerLoad") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerLoad` field is not valid because:", SourceLine(_doc, "dockerLoad", str), [e], detailed_message=f"the `dockerLoad` field with value `{val}` " "is not valid because:", ) ) dockerFile = None if "dockerFile" in _doc: try: dockerFile = load_field( _doc.get("dockerFile"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerFile") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerFile`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerFile") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerFile` field is not valid because:", SourceLine(_doc, "dockerFile", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerFile` field is not 
valid because:", SourceLine(_doc, "dockerFile", str), [e], detailed_message=f"the `dockerFile` field with value `{val}` " "is not valid because:", ) ) dockerImport = None if "dockerImport" in _doc: try: dockerImport = load_field( _doc.get("dockerImport"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImport") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImport`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImport") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImport` field is not valid because:", SourceLine(_doc, "dockerImport", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImport` field is not valid because:", SourceLine(_doc, "dockerImport", str), [e], detailed_message=f"the `dockerImport` field with value `{val}` " "is not valid because:", ) ) dockerImageId = None if "dockerImageId" in _doc: try: dockerImageId = load_field( _doc.get("dockerImageId"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerImageId") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerImageId`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerImageId") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} 
for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerImageId` field is not valid because:", SourceLine(_doc, "dockerImageId", str), [e], detailed_message=f"the `dockerImageId` field with value `{val}` " "is not valid because:", ) ) dockerOutputDirectory = None if "dockerOutputDirectory" in _doc: try: dockerOutputDirectory = load_field( _doc.get("dockerOutputDirectory"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("dockerOutputDirectory") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `dockerOutputDirectory`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("dockerOutputDirectory") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `dockerOutputDirectory` field is not valid because:", SourceLine(_doc, "dockerOutputDirectory", str), [e], detailed_message=f"the `dockerOutputDirectory` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid 
field `{}`, expected one of: `class`, `dockerPull`, `dockerLoad`, `dockerFile`, `dockerImport`, `dockerImageId`, `dockerOutputDirectory`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( dockerPull=dockerPull, dockerLoad=dockerLoad, dockerFile=dockerFile, dockerImport=dockerImport, dockerImageId=dockerImageId, dockerOutputDirectory=dockerOutputDirectory, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.dockerPull is not None: r["dockerPull"] = save( self.dockerPull, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerLoad is not None: r["dockerLoad"] = save( self.dockerLoad, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerFile is not None: r["dockerFile"] = save( self.dockerFile, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImport is not None: r["dockerImport"] = save( self.dockerImport, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerImageId is not None: r["dockerImageId"] = save( self.dockerImageId, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.dockerOutputDirectory is not None: r["dockerOutputDirectory"] = save( self.dockerOutputDirectory, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory 
level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "dockerPull", "dockerLoad", "dockerFile", "dockerImport", "dockerImageId", "dockerOutputDirectory", ] ) class SoftwareRequirement(ProcessRequirement): """ A list of software packages that should be configured in the environment of the defined process. """ def __init__( self, packages: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "SoftwareRequirement" self.packages = packages def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwareRequirement): return bool(self.class_ == other.class_ and self.packages == other.packages) return False def __hash__(self) -> int: return hash((self.class_, self.packages)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwareRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_SoftwareRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("packages") is None: raise ValidationException("missing required field `packages`", None, []) packages = load_field( _doc.get("packages"), 
idmap_packages_array_of_SoftwarePackageLoader, baseuri, loadingOptions, lc=_doc.get("packages") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `packages`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("packages") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `packages` field is not valid because:", SourceLine(_doc, "packages", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `packages` field is not valid because:", SourceLine(_doc, "packages", str), [e], detailed_message=f"the `packages` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `packages`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( packages=packages, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = 
self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.packages is not None: r["packages"] = save( self.packages, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "packages"]) class SoftwarePackage(Saveable): def __init__( self, package: Any, version: Optional[Any] = None, specs: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.package = package self.version = version self.specs = specs def __eq__(self, other: Any) -> bool: if isinstance(other, SoftwarePackage): return bool( self.package == other.package and self.version == other.version and self.specs == other.specs ) return False def __hash__(self) -> int: return hash((self.package, self.version, self.specs)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "SoftwarePackage": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("package") is None: raise ValidationException("missing required field `package`", None, []) package = load_field( _doc.get("package"), strtype, baseuri, loadingOptions, lc=_doc.get("package") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `package`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("package") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `package` field is not valid because:", SourceLine(_doc, "package", str), [e], detailed_message=f"the `package` field with value `{val}` " "is not valid because:", ) ) version = None if "version" in _doc: try: version = load_field( _doc.get("version"), union_of_None_type_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("version") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `version`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("version") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `version` field is not valid because:", SourceLine(_doc, "version", str), [e], detailed_message=f"the `version` field with value `{val}` " "is not valid because:", ) ) specs = None if "specs" in _doc: try: specs = load_field( _doc.get("specs"), 
uri_union_of_None_type_or_array_of_strtype_False_False_None_True, baseuri, loadingOptions, lc=_doc.get("specs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `specs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("specs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `specs` field is not valid because:", SourceLine(_doc, "specs", str), [e], detailed_message=f"the `specs` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `package`, `version`, `specs`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( package=package, version=version, specs=specs, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if 
self.package is not None: r["package"] = save( self.package, top=False, base_url=base_url, relative_uris=relative_uris ) if self.version is not None: r["version"] = save( self.version, top=False, base_url=base_url, relative_uris=relative_uris ) if self.specs is not None: u = save_relative_uri(self.specs, base_url, False, None, relative_uris) r["specs"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["package", "version", "specs"]) class Dirent(Saveable): """ Define a file or subdirectory that must be staged to a particular place prior to executing the command line tool. May be the result of executing an expression, such as building a configuration file from a template. Usually files are staged within the [designated output directory](#Runtime_environment). However, under certain circumstances, files may be staged at arbitrary locations, see discussion for `entryname`. 
""" def __init__( self, entry: Any, entryname: Optional[Any] = None, writable: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.entryname = entryname self.entry = entry self.writable = writable def __eq__(self, other: Any) -> bool: if isinstance(other, Dirent): return bool( self.entryname == other.entryname and self.entry == other.entry and self.writable == other.writable ) return False def __hash__(self) -> int: return hash((self.entryname, self.entry, self.writable)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Dirent": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] entryname = None if "entryname" in _doc: try: entryname = load_field( _doc.get("entryname"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entryname") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entryname`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entryname") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entryname` field is not valid because:", SourceLine(_doc, "entryname", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entryname` field is not valid 
because:", SourceLine(_doc, "entryname", str), [e], detailed_message=f"the `entryname` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("entry") is None: raise ValidationException("missing required field `entry`", None, []) entry = load_field( _doc.get("entry"), union_of_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("entry") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `entry`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("entry") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `entry` field is not valid because:", SourceLine(_doc, "entry", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `entry` field is not valid because:", SourceLine(_doc, "entry", str), [e], detailed_message=f"the `entry` field with value `{val}` " "is not valid because:", ) ) writable = None if "writable" in _doc: try: writable = load_field( _doc.get("writable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("writable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `writable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("writable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `writable` field is not valid because:", SourceLine(_doc, "writable", str), [e], detailed_message=f"the `writable` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `entryname`, `entry`, `writable`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( entryname=entryname, entry=entry, writable=writable, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.entryname is not None: r["entryname"] = save( self.entryname, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.entry is not None: r["entry"] = save( self.entry, top=False, base_url=base_url, relative_uris=relative_uris ) if self.writable is not None: r["writable"] = save( self.writable, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["entryname", "entry", "writable"]) 
class InitialWorkDirRequirement(ProcessRequirement): """ Define a list of files and subdirectories that must be staged by the workflow platform prior to executing the command line tool. Normally files are staged within the designated output directory. However, when running inside containers, files may be staged at arbitrary locations, see discussion for [`Dirent.entryname`](#Dirent). Together with `DockerRequirement.dockerOutputDirectory` it is possible to control the locations of both input and output files when running in containers. """ def __init__( self, listing: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InitialWorkDirRequirement" self.listing = listing def __eq__(self, other: Any) -> bool: if isinstance(other, InitialWorkDirRequirement): return bool(self.class_ == other.class_ and self.listing == other.listing) return False def __hash__(self) -> int: return hash((self.class_, self.listing)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InitialWorkDirRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InitialWorkDirRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("listing") is None: raise ValidationException("missing required field 
`listing`", None, []) listing = load_field( _doc.get("listing"), union_of_ExpressionLoader_or_array_of_union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader, baseuri, loadingOptions, lc=_doc.get("listing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `listing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("listing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `listing` field is not valid because:", SourceLine(_doc, "listing", str), [e], detailed_message=f"the `listing` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `listing`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( listing=listing, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: 
r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.listing is not None: r["listing"] = save( self.listing, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "listing"]) class EnvVarRequirement(ProcessRequirement): """ Define a list of environment variables which will be set in the execution environment of the tool. See `EnvironmentDef` for details. """ def __init__( self, envDef: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "EnvVarRequirement" self.envDef = envDef def __eq__(self, other: Any) -> bool: if isinstance(other, EnvVarRequirement): return bool(self.class_ == other.class_ and self.envDef == other.envDef) return False def __hash__(self) -> int: return hash((self.class_, self.envDef)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "EnvVarRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = 
load_field( _doc.get("class"), uri_EnvVarRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("envDef") is None: raise ValidationException("missing required field `envDef`", None, []) envDef = load_field( _doc.get("envDef"), idmap_envDef_array_of_EnvironmentDefLoader, baseuri, loadingOptions, lc=_doc.get("envDef") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `envDef`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("envDef") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `envDef` field is not valid because:", SourceLine(_doc, "envDef", str), [e], detailed_message=f"the `envDef` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `envDef`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( envDef=envDef, 
extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.envDef is not None: r["envDef"] = save( self.envDef, top=False, base_url=base_url, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "envDef"]) class ShellCommandRequirement(ProcessRequirement): """ Modify the behavior of CommandLineTool to generate a single string containing a shell command line. Each item in the `arguments` list must be joined into a string separated by single spaces and quoted to prevent interpretation by the shell, unless `CommandLineBinding` for that argument contains `shellQuote: false`. If `shellQuote: false` is specified, the argument is joined into the command string without quoting, which allows the use of shell metacharacters such as `|` for pipes. 
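
The joining rule above can be sketched in plain Python. This is a hypothetical helper for illustration, not part of this module; `shlex.quote` stands in for the shell-safe quoting the text requires, and the dict form with `valueFrom`/`shellQuote` keys is an assumed simplification of `CommandLineBinding`:

```python
import shlex

def join_arguments(arguments: list) -> str:
    # Sketch of the rule above: each argument is quoted unless its
    # binding carries shellQuote: false, in which case it is passed
    # through verbatim (allowing metacharacters such as `|`).
    parts = []
    for arg in arguments:
        if isinstance(arg, dict) and arg.get("shellQuote") is False:
            parts.append(str(arg.get("valueFrom", "")))
        else:
            value = arg.get("valueFrom", "") if isinstance(arg, dict) else arg
            parts.append(shlex.quote(str(value)))
    return " ".join(parts)
```

For example, `join_arguments(["cat", "a b.txt", {"valueFrom": "|", "shellQuote": False}, "wc"])` yields a single command string in which only the pipe is left unquoted.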
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShellCommandRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ShellCommandRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShellCommandRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ShellCommandRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if 
relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class ResourceRequirement(ProcessRequirement): """ Specify basic hardware resource requirements. "min" is the minimum amount of a resource that must be reserved to schedule a job. If "min" cannot be satisfied, the job should not be run. "max" is the maximum amount of a resource that the job shall be allocated. If a node has sufficient resources, multiple jobs may be scheduled on a single node provided each job's "max" resource requirements are met. If a job attempts to exceed its resource allocation, an implementation may deny additional resources, which may result in job failure. If both "min" and "max" are specified, an implementation may choose to allocate any amount between "min" and "max", with the actual allocation provided in the `runtime` object. If "min" is specified but "max" is not, then "max" == "min". If "max" is specified but "min" is not, then "min" == "max". It is an error if max < min. It is an error if the value of any of these fields is negative. If neither "min" nor "max" is specified for a resource, use the default values below. 
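
The min/max resolution rules above can be sketched as follows. `resolve_resource` is a hypothetical helper, not part of this parser, assuming plain numeric bounds (expressions would need evaluation first):

```python
def resolve_resource(min_=None, max_=None, default=None):
    # Apply the rules above: a missing bound copies the other bound,
    # and when neither is given both fall back to the default.
    if min_ is None and max_ is None:
        min_ = max_ = default
    elif max_ is None:
        max_ = min_
    elif min_ is None:
        min_ = max_
    if min_ < 0 or max_ < 0:
        raise ValueError("resource values must not be negative")
    if max_ < min_:
        raise ValueError("max must not be less than min")
    return min_, max_
```

So `resolve_resource(min_=2, default=1)` gives `(2, 2)` and `resolve_resource(default=1)` gives `(1, 1)`.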
""" def __init__( self, coresMin: Optional[Any] = None, coresMax: Optional[Any] = None, ramMin: Optional[Any] = None, ramMax: Optional[Any] = None, tmpdirMin: Optional[Any] = None, tmpdirMax: Optional[Any] = None, outdirMin: Optional[Any] = None, outdirMax: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ResourceRequirement" self.coresMin = coresMin self.coresMax = coresMax self.ramMin = ramMin self.ramMax = ramMax self.tmpdirMin = tmpdirMin self.tmpdirMax = tmpdirMax self.outdirMin = outdirMin self.outdirMax = outdirMax def __eq__(self, other: Any) -> bool: if isinstance(other, ResourceRequirement): return bool( self.class_ == other.class_ and self.coresMin == other.coresMin and self.coresMax == other.coresMax and self.ramMin == other.ramMin and self.ramMax == other.ramMax and self.tmpdirMin == other.tmpdirMin and self.tmpdirMax == other.tmpdirMax and self.outdirMin == other.outdirMin and self.outdirMax == other.outdirMax ) return False def __hash__(self) -> int: return hash( ( self.class_, self.coresMin, self.coresMax, self.ramMin, self.ramMax, self.tmpdirMin, self.tmpdirMax, self.outdirMin, self.outdirMax, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ResourceRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ResourceRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, 
loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e coresMin = None if "coresMin" in _doc: try: coresMin = load_field( _doc.get("coresMin"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMin` field is not valid because:", SourceLine(_doc, "coresMin", str), [e], detailed_message=f"the `coresMin` field with value `{val}` " "is not valid because:", ) ) coresMax = None if "coresMax" in _doc: try: coresMax = load_field( _doc.get("coresMax"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("coresMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `coresMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("coresMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `coresMax` field is not valid because:", SourceLine(_doc, "coresMax", str), [e], detailed_message=f"the `coresMax` field with value `{val}` " "is not valid because:", ) ) ramMin = None if "ramMin" in _doc: try: ramMin = load_field( _doc.get("ramMin"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `ramMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMin` field is not valid because:", SourceLine(_doc, "ramMin", str), [e], detailed_message=f"the `ramMin` field with value `{val}` " "is not valid because:", ) ) ramMax = None if "ramMax" in _doc: try: ramMax = load_field( _doc.get("ramMax"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("ramMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `ramMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("ramMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `ramMax` field is not valid because:", SourceLine(_doc, "ramMax", str), [e], detailed_message=f"the `ramMax` field with value `{val}` " "is not valid because:", ) ) tmpdirMin = None if "tmpdirMin" in _doc: try: tmpdirMin = load_field( _doc.get("tmpdirMin"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `tmpdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMin` field is not valid because:", SourceLine(_doc, "tmpdirMin", str), [e], detailed_message=f"the `tmpdirMin` field with value `{val}` " "is not valid because:", ) ) tmpdirMax = None if "tmpdirMax" in _doc: try: tmpdirMax = load_field( _doc.get("tmpdirMax"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("tmpdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if 
str(e) == "missing required field `tmpdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("tmpdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `tmpdirMax` field is not valid because:", SourceLine(_doc, "tmpdirMax", str), [e], detailed_message=f"the `tmpdirMax` field with value `{val}` " "is not valid because:", ) ) outdirMin = None if "outdirMin" in _doc: try: outdirMin = load_field( _doc.get("outdirMin"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", SourceLine(_doc, "outdirMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMin` field is not valid because:", SourceLine(_doc, "outdirMin", str), [e], detailed_message=f"the `outdirMin` field with value `{val}` " "is not valid because:", ) ) outdirMax = None if "outdirMax" in _doc: try: outdirMax 
= load_field( _doc.get("outdirMax"), union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("outdirMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outdirMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outdirMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outdirMax` field is not valid because:", SourceLine(_doc, "outdirMax", str), [e], detailed_message=f"the `outdirMax` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `coresMin`, `coresMax`, `ramMin`, `ramMax`, `tmpdirMin`, `tmpdirMax`, `outdirMin`, `outdirMax`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( coresMin=coresMin, coresMax=coresMax, ramMin=ramMin, ramMax=ramMax, tmpdirMin=tmpdirMin, tmpdirMax=tmpdirMax, outdirMin=outdirMin, outdirMax=outdirMax, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) 
-> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.coresMin is not None: r["coresMin"] = save( self.coresMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.coresMax is not None: r["coresMax"] = save( self.coresMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMin is not None: r["ramMin"] = save( self.ramMin, top=False, base_url=base_url, relative_uris=relative_uris ) if self.ramMax is not None: r["ramMax"] = save( self.ramMax, top=False, base_url=base_url, relative_uris=relative_uris ) if self.tmpdirMin is not None: r["tmpdirMin"] = save( self.tmpdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.tmpdirMax is not None: r["tmpdirMax"] = save( self.tmpdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMin is not None: r["outdirMin"] = save( self.outdirMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.outdirMax is not None: r["outdirMax"] = save( self.outdirMax, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "coresMin", "coresMax", "ramMin", "ramMax", "tmpdirMin", "tmpdirMax", "outdirMin", "outdirMax", ] ) class WorkReuse(ProcessRequirement): """ For implementations that support reusing output from past work (on the assumption 
that same code and same input produce same results), control whether to enable or disable the reuse behavior for a particular tool or step (to accommodate situations where that assumption is incorrect). A reused step is not executed but instead returns the same output as the original execution. If `WorkReuse` is not specified, correct tools should assume it is enabled by default. """ def __init__( self, enableReuse: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "WorkReuse" self.enableReuse = enableReuse def __eq__(self, other: Any) -> bool: if isinstance(other, WorkReuse): return bool( self.class_ == other.class_ and self.enableReuse == other.enableReuse ) return False def __hash__(self) -> int: return hash((self.class_, self.enableReuse)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkReuse": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_WorkReuse_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("enableReuse") is None: raise ValidationException("missing required field `enableReuse`", None, []) enableReuse = load_field( _doc.get("enableReuse"), union_of_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("enableReuse") ) except ValidationException 
as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `enableReuse`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("enableReuse") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, "enableReuse", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `enableReuse` field is not valid because:", SourceLine(_doc, "enableReuse", str), [e], detailed_message=f"the `enableReuse` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `enableReuse`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( enableReuse=enableReuse, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: 
-len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.enableReuse is not None: r["enableReuse"] = save( self.enableReuse, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "enableReuse"]) class NetworkAccess(ProcessRequirement): """ Indicate whether a process requires outgoing IPv4/IPv6 network access. The choice of IPv4 or IPv6 is implementation and site specific; correct tools must support both. If `networkAccess` is false or not specified, tools must not assume network access, except for localhost (the loopback device). If `networkAccess` is true, the tool must be able to make outgoing connections to network resources. Resources may be on a private subnet or the public Internet. However, implementations and sites may apply their own security policies to restrict what is accessible by the tool. Enabling network access does not imply a publicly routable IP address or the ability to accept inbound connections. 
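
The default loopback-only behavior described above can be sketched with a hypothetical policy check (for illustration only; real implementations enforce this at the sandbox or container level, not per host name):

```python
def host_allowed(host: str, network_access) -> bool:
    # Loopback is always reachable; any other host requires
    # networkAccess to be true (an unset value counts as false).
    loopback = {"localhost", "127.0.0.1", "::1"}
    if host in loopback:
        return True
    return bool(network_access)
```

For example, `host_allowed("example.org", None)` is false, while `host_allowed("localhost", None)` is true.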
""" def __init__( self, networkAccess: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "NetworkAccess" self.networkAccess = networkAccess def __eq__(self, other: Any) -> bool: if isinstance(other, NetworkAccess): return bool( self.class_ == other.class_ and self.networkAccess == other.networkAccess ) return False def __hash__(self) -> int: return hash((self.class_, self.networkAccess)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "NetworkAccess": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_NetworkAccess_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("networkAccess") is None: raise ValidationException("missing required field `networkAccess`", None, []) networkAccess = load_field( _doc.get("networkAccess"), union_of_booltype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("networkAccess") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `networkAccess`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("networkAccess") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `networkAccess` field is not valid because:", SourceLine(_doc, "networkAccess", str), [e], detailed_message=f"the `networkAccess` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `networkAccess`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( networkAccess=networkAccess, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.networkAccess is not None: r["networkAccess"] = save( self.networkAccess, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if 
top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "networkAccess"]) class InplaceUpdateRequirement(ProcessRequirement): """ If `inplaceUpdate` is true, then an implementation supporting this feature may permit tools to directly update files with `writable: true` in InitialWorkDirRequirement. That is, as an optimization, files may be destructively modified in place as opposed to copied and updated. An implementation must ensure that only one workflow step may access a writable file at a time. It is an error if a file which is writable by one workflow step is accessed (for reading or writing) by any other workflow step running independently. However, a file which has been updated in a previous completed step may be used as input to multiple steps, provided it is read-only in every step. Workflow steps which modify a file must produce the modified file as output. Downstream steps which further process the file must use the output of previous steps, and not refer to a common input (this is necessary for both ordering and correctness). Workflow authors should provide this in the `hints` section. The intent of this feature is that workflows produce the same results whether or not InplaceUpdateRequirement is supported by the implementation, and this feature is primarily available as an optimization for particular environments. Users and implementers should be aware that workflows that destructively modify inputs may not be repeatable or reproducible. In particular, enabling this feature implies that WorkReuse should not be enabled. 
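
The single-writer rule above can be sketched as a hypothetical validity check (for illustration; `accesses` and `validate_inplace_access` are assumed names, not part of this parser):

```python
def validate_inplace_access(accesses):
    # accesses: (step, path, writable) triples for steps that may run
    # independently. A file writable in one step must not be touched
    # (for reading or writing) by any other such step.
    writers: dict = {}
    for step, path, writable in accesses:
        if writable:
            writers.setdefault(path, set()).add(step)
    for step, path, _ in accesses:
        others = writers.get(path, set()) - {step}
        if others:
            raise ValueError(
                f"{path} is writable in step(s) {sorted(others)} "
                f"but is also accessed by {step}"
            )
```

A read-only file shared across completed steps passes the check; a file writable in one step and read by a concurrently runnable step does not.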
""" def __init__( self, inplaceUpdate: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "InplaceUpdateRequirement" self.inplaceUpdate = inplaceUpdate def __eq__(self, other: Any) -> bool: if isinstance(other, InplaceUpdateRequirement): return bool( self.class_ == other.class_ and self.inplaceUpdate == other.inplaceUpdate ) return False def __hash__(self) -> int: return hash((self.class_, self.inplaceUpdate)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "InplaceUpdateRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_InplaceUpdateRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("inplaceUpdate") is None: raise ValidationException("missing required field `inplaceUpdate`", None, []) inplaceUpdate = load_field( _doc.get("inplaceUpdate"), booltype, baseuri, loadingOptions, lc=_doc.get("inplaceUpdate") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inplaceUpdate`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inplaceUpdate") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inplaceUpdate` field is not valid because:", SourceLine(_doc, "inplaceUpdate", str), [e], detailed_message=f"the `inplaceUpdate` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `inplaceUpdate`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( inplaceUpdate=inplaceUpdate, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.inplaceUpdate is not None: r["inplaceUpdate"] = save( self.inplaceUpdate, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to 
the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["class", "inplaceUpdate"])


class ToolTimeLimit(ProcessRequirement):
    """
    Set an upper limit on the execution time of a CommandLineTool.
    A CommandLineTool whose execution duration exceeds the time
    limit may be preemptively terminated and considered failed.  May
    also be used by batch systems to make scheduling decisions.
    The execution duration excludes external operations, such as
    staging of files, pulling a docker image etc, and only counts
    wall-time for the execution of the command line itself.
    """

    def __init__(
        self,
        timelimit: Any,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.class_ = "ToolTimeLimit"
        self.timelimit = timelimit

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, ToolTimeLimit):
            return bool(
                self.class_ == other.class_
                and self.timelimit == other.timelimit
            )
        return False

    def __hash__(self) -> int:
        return hash((self.class_, self.timelimit))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "ToolTimeLimit":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("class") is None:
                raise ValidationException("missing required field `class`", None, [])

            class_ = load_field(
                _doc.get("class"),
                uri_ToolTimeLimit_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class"),
            )

            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}`
but") except ValidationException as e: raise e try: if _doc.get("timelimit") is None: raise ValidationException("missing required field `timelimit`", None, []) timelimit = load_field( _doc.get("timelimit"), union_of_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("timelimit") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `timelimit`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("timelimit") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `timelimit` field is not valid because:", SourceLine(_doc, "timelimit", str), [e], detailed_message=f"the `timelimit` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `timelimit`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( timelimit=timelimit, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in 
self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.timelimit is not None: r["timelimit"] = save( self.timelimit, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "timelimit"]) class ExpressionToolOutputParameter(OutputParameter): id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, ExpressionToolOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.type_ == other.type_ ) return False def __hash__(self) -> int: 
return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionToolOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if 
error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), 
union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field 
with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, 
"type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, 
relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( ["label", "secondaryFiles", "streamable", "doc", "id", "format", "type"] ) class WorkflowInputParameter(InputParameter): id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, default: Optional[Any] = None, inputBinding: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.loadContents = loadContents self.loadListing = loadListing self.default = default self.type_ = type_ self.inputBinding = inputBinding def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowInputParameter): return bool( self.label == other.label and 
self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.default == other.default and self.type_ == other.type_ and self.inputBinding == other.inputBinding ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.loadContents, self.loadListing, self.default, self.type_, self.inputBinding, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowInputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + 
str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value 
`{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), 
None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, 
baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) inputBinding = None if "inputBinding" in _doc: try: inputBinding = load_field( _doc.get("inputBinding"), union_of_None_type_or_InputBindingLoader, baseuri, loadingOptions, lc=_doc.get("inputBinding") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputBinding`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputBinding") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputBinding` field is not valid because:", SourceLine(_doc, "inputBinding", str), [e], detailed_message=f"the `inputBinding` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `loadContents`, `loadListing`, `default`, `type`, `inputBinding`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise 
ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, loadContents=loadContents, loadListing=loadListing, default=default, type_=type_, inputBinding=inputBinding, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) if 
self.inputBinding is not None: r["inputBinding"] = save( self.inputBinding, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "loadContents", "loadListing", "default", "type", "inputBinding", ] ) class ExpressionTool(Process): """ An ExpressionTool is a type of Process object that can be run by itself or as a Workflow step. It executes a pure Javascript expression that has access to the same input parameters as a workflow. It is meant to be used sparingly as a way to isolate complex Javascript expressions that need to operate on input data and produce some result; perhaps just a rearrangement of the inputs. No Docker software container is required or allowed. """ id: str def __init__( self, inputs: Any, outputs: Any, expression: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, intent: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.intent = intent self.class_ = "ExpressionTool" self.expression = expression def __eq__(self, other: Any) -> bool: if isinstance(other, ExpressionTool): return bool( self.id == other.id and 
self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.intent == other.intent and self.class_ == other.class_ and self.expression == other.expression ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.intent, self.class_, self.expression, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ExpressionTool": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: 
baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ExpressionTool_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_WorkflowInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_ExpressionToolOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, 
verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) intent = None if "intent" in _doc: try: intent = load_field( _doc.get("intent"), uri_union_of_None_type_or_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("intent") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `intent`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("intent") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [e], detailed_message=f"the `intent` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("expression") is None: raise ValidationException("missing required field `expression`", None, []) expression = load_field( _doc.get("expression"), ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("expression") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `expression`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("expression") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `expression` field is not valid because:", SourceLine(_doc, "expression", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `expression` field is not valid because:", SourceLine(_doc, "expression", str), [e], detailed_message=f"the `expression` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", 
loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `intent`, `class`, `expression`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, intent=intent, expression=expression, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, 
base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.intent is not None: u = save_relative_uri(self.intent, self.id, True, None, relative_uris) r["intent"] = u if self.expression is not None: r["expression"] = save( self.expression, top=False, base_url=self.id, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "intent", "class", "expression", ] ) class WorkflowOutputParameter(OutputParameter): """ Describe an output parameter of a workflow. The parameter must be connected to one or more parameters defined in the workflow that will provide the value of the output parameter. It is legal to connect a WorkflowInputParameter to a WorkflowOutputParameter. See [WorkflowStepInput](#WorkflowStepInput) for discussion of `linkMerge` and `pickValue`. 
""" id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, outputSource: Optional[Any] = None, linkMerge: Optional[Any] = None, pickValue: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.outputSource = outputSource self.linkMerge = linkMerge self.pickValue = pickValue self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.outputSource == other.outputSource and self.linkMerge == other.linkMerge and self.pickValue == other.pickValue and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.outputSource, self.linkMerge, self.pickValue, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, 
loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid 
because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), 
[ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) outputSource = None if "outputSource" in _doc: try: outputSource = load_field( _doc.get("outputSource"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None, baseuri, loadingOptions, lc=_doc.get("outputSource") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputSource`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputSource") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputSource` field is not valid because:", SourceLine(_doc, "outputSource", str), [e], detailed_message=f"the `outputSource` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) pickValue = None if "pickValue" in _doc: try: pickValue = load_field( _doc.get("pickValue"), union_of_None_type_or_PickValueMethodLoader, baseuri, loadingOptions, lc=_doc.get("pickValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `pickValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("pickValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [e], detailed_message=f"the `pickValue` field with value `{val}` " "is not valid 
because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `outputSource`, `linkMerge`, `pickValue`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, 
streamable=streamable, doc=doc, id=id, format=format, outputSource=outputSource, linkMerge=linkMerge, pickValue=pickValue, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.outputSource is not None: u = save_relative_uri(self.outputSource, self.id, False, 1, relative_uris) r["outputSource"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.pickValue is not None: r["pickValue"] = save( self.pickValue, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "outputSource", "linkMerge", "pickValue", "type", ] ) class Sink(Saveable): pass class WorkflowStepInput(Identified, Sink, LoadContents, Labeled): """ The input of a workflow step connects an upstream parameter (from the workflow inputs, or the outputs of other workflow steps) with the input parameters of the process specified by the `run` field. Only input parameters declared by the target process will be passed through at runtime to the process, though additional parameters may be specified (for use within `valueFrom` expressions for instance) - unconnected or unused parameters do not represent an error condition. # Input object A WorkflowStepInput object must contain an `id` field in the form `#fieldname` or `#prefix/fieldname`. When the `id` field contains a slash `/` the field name consists of the characters following the final slash (the prefix portion may contain one or more slashes to indicate scope). This defines a field of the workflow step input object with the value of the `source` parameter(s). # Merging multiple inbound data links To merge multiple inbound data links, [MultipleInputFeatureRequirement](#MultipleInputFeatureRequirement) must be specified in the workflow or workflow step requirements. If the sink parameter is an array, or named in a [workflow scatter](#WorkflowStep) operation, there may be multiple inbound data links listed in the `source` field. The values from the input links are merged depending on the method specified in the `linkMerge` field. If both `linkMerge` and `pickValue` are null or not specified, and there is more than one element in the `source` array, the default method is "merge_nested".
If both `linkMerge` and `pickValue` are null or not specified, and there is only a single element in the `source`, then the input parameter takes the scalar value from the single input link (it is *not* wrapped in a single-list). * **merge_nested** The input must be an array consisting of exactly one entry for each input link. If "merge_nested" is specified with a single link, the value from the link must be wrapped in a single-item list. * **merge_flattened** 1. The source and sink parameters must be compatible types, or the source type must be compatible with single element from the "items" type of the destination array parameter. 2. Source parameters which are arrays are concatenated. Source parameters which are single element types are appended as single elements. # Picking non-null values among inbound data links If present, `pickValue` specifies how to pick non-null values among inbound data links. `pickValue` is evaluated 1. Once all source values from upstream step or parameters are available. 2. After `linkMerge`. 3. Before `scatter` or `valueFrom`. This is specifically intended to be useful in combination with [conditional execution](#WorkflowStep), where several upstream steps may be connected to a single input (`source` is a list), and skipped steps produce null values. Static type checkers should check for type consistency after inferring what the type will be after `pickValue` is applied, just as they do currently for `linkMerge`. * **first_non_null** For the first level of a list input, pick the first non-null element. The result is a scalar. It is an error if there is no non-null element. Examples: * `[null, x, null, y] -> x` * `[null, [null], null, y] -> [null]` * `[null, null, null] -> Runtime Error` *Intended use case*: If-else pattern where the value comes either from a conditional step or from a default or fallback value. The conditional step(s) should be placed first in the list. 
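The `linkMerge` and `pickValue` rules described above can be sketched in plain Python. This is a minimal illustrative model, not part of the cwl_utils API: the helper names `link_merge` and `pick_value` are invented here, and the sketch assumes the source values have already been gathered (with `pick_value` applied after `link_merge`, e.g. to filter out nulls from skipped conditional steps):

```python
# Illustrative sketch only -- these helpers are NOT part of cwl_utils.
# They model the linkMerge / pickValue semantics from the docstring above.
from typing import Any


def link_merge(values: list[Any], method: str = "merge_nested") -> list[Any]:
    if method == "merge_nested":
        # One entry per inbound link; a single link stays wrapped in a list.
        return list(values)
    if method == "merge_flattened":
        # Array sources are concatenated; scalar sources appended as elements.
        out: list[Any] = []
        for v in values:
            if isinstance(v, list):
                out.extend(v)
            else:
                out.append(v)
        return out
    raise ValueError(f"unknown linkMerge method: {method}")


def pick_value(values: list[Any], method: str) -> Any:
    # Only the first level of the list is inspected, per the spec text above.
    non_null = [v for v in values if v is not None]
    if method == "first_non_null":
        if not non_null:
            raise ValueError("first_non_null: no non-null element")
        return non_null[0]
    if method == "the_only_non_null":
        if len(non_null) != 1:
            raise ValueError("the_only_non_null: expected exactly one non-null element")
        return non_null[0]
    if method == "all_non_null":
        return non_null  # may be an empty list
    raise ValueError(f"unknown pickValue method: {method}")
```

Under this model `pick_value([None, "x", None, "y"], "first_non_null")` yields `"x"` and `pick_value([None, [None], None, "y"], "first_non_null")` yields `[None]`, matching the examples in the docstring.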
* **the_only_non_null** For the first level of a list input, pick the single non-null element. The result is a scalar. It is an error if there is more than one non-null element. Examples: * `[null, x, null] -> x` * `[null, x, null, y] -> Runtime Error` * `[null, [null], null] -> [null]` * `[null, null, null] -> Runtime Error` *Intended use case*: Switch type patterns where developer considers more than one active code path as a workflow error (possibly indicating an error in writing `when` condition expressions). * **all_non_null** For the first level of a list input, pick all non-null values. The result is a list, which may be empty. Examples: * `[null, x, null] -> [x]` * `[x, null, y] -> [x, y]` * `[null, [x], [null]] -> [[x], [null]]` * `[null, null, null] -> []` *Intended use case*: It is valid to have more than one source, but sources are conditional, so null sources (from skipped steps) should be filtered out. """ id: str def __init__( self, id: Optional[Any] = None, source: Optional[Any] = None, linkMerge: Optional[Any] = None, pickValue: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, label: Optional[Any] = None, default: Optional[Any] = None, valueFrom: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.source = source self.linkMerge = linkMerge self.pickValue = pickValue self.loadContents = loadContents self.loadListing = loadListing self.label = label self.default = default self.valueFrom = valueFrom def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepInput): return bool( self.id == other.id and self.source == other.source and 
self.linkMerge == other.linkMerge and self.pickValue == other.pickValue and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.label == other.label and self.default == other.default and self.valueFrom == other.valueFrom ) return False def __hash__(self) -> int: return hash( ( self.id, self.source, self.linkMerge, self.pickValue, self.loadContents, self.loadListing, self.label, self.default, self.valueFrom, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepInput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) source = None if "source" in _doc: try: source = load_field( 
_doc.get("source"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None, baseuri, loadingOptions, lc=_doc.get("source") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `source`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("source") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `source` field is not valid because:", SourceLine(_doc, "source", str), [e], detailed_message=f"the `source` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not 
valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) pickValue = None if "pickValue" in _doc: try: pickValue = load_field( _doc.get("pickValue"), union_of_None_type_or_PickValueMethodLoader, baseuri, loadingOptions, lc=_doc.get("pickValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `pickValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("pickValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [e], detailed_message=f"the `pickValue` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} 
{error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `valueFrom`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `source`, `linkMerge`, `pickValue`, `loadContents`, `loadListing`, `label`, `default`, `valueFrom`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, source=source, linkMerge=linkMerge, pickValue=pickValue, loadContents=loadContents, loadListing=loadListing, label=label, default=default, valueFrom=valueFrom, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = 
self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.source is not None: u = save_relative_uri(self.source, self.id, False, 2, relative_uris) r["source"] = u if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.pickValue is not None: r["pickValue"] = save( self.pickValue, top=False, base_url=self.id, relative_uris=relative_uris ) if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "source", "linkMerge", "pickValue", "loadContents", "loadListing", "label", "default", "valueFrom", ] ) class WorkflowStepOutput(Identified): """ Associate an output parameter of the underlying process with a workflow parameter. The workflow parameter (given in the `id` field) may be used as a `source` to connect with input parameters of other workflow steps, or with an output parameter of the process. A unique identifier for this workflow output parameter. This is the identifier to use in the `source` field of `WorkflowStepInput` to connect the output value to downstream parameters.
""" id: str def __init__( self, id: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStepOutput): return bool(self.id == other.id) return False def __hash__(self) -> int: return hash((self.id)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStepOutput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = 
docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["id"]) class WorkflowStep(Identified, Labeled, Documented): """ A workflow step is an executable element of a workflow. It specifies the underlying process implementation (such as `CommandLineTool` or another `Workflow`) in the `run` field and connects the input and output parameters of the underlying process to workflow parameters. # Scatter/gather To use scatter/gather, [ScatterFeatureRequirement](#ScatterFeatureRequirement) must be specified in the workflow or workflow step requirements. 
A "scatter" operation specifies that the associated workflow step or subworkflow should execute separately over a list of input elements. Each job making up a scatter operation is independent and may be executed concurrently. The `scatter` field specifies one or more input parameters which will be scattered. An input parameter may be listed more than once. The declared type of each input parameter implicitly becomes an array of items of the input parameter type. If a parameter is listed more than once, it becomes a nested array. As a result, upstream parameters which are connected to scattered parameters must be arrays. All output parameter types are also implicitly wrapped in arrays. Each job in the scatter results in an entry in the output array. If any scattered parameter runtime value is an empty array, all outputs are set to empty arrays and no work is done for the step, according to applicable scattering rules. If `scatter` declares more than one input parameter, `scatterMethod` describes how to decompose the input into a discrete set of jobs. * **dotproduct** specifies that each of the input arrays are aligned and one element taken from each array to construct each job. It is an error if all input arrays are not the same length. * **nested_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output must be nested arrays for each level of scattering, in the order that the input arrays are listed in the `scatter` field. * **flat_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output arrays must be flattened to a single level, but otherwise listed in the order that the input arrays are listed in the `scatter` field. # Conditional execution (Optional) Conditional execution makes execution of a step conditional on an expression. A step that is not executed is "skipped". 
A skipped step produces `null` for all output parameters. The condition is evaluated after `scatter`, using the input object of each individual scatter job. This means over a set of scatter jobs, some may be executed and some may be skipped. When the results are gathered, skipped steps must be `null` in the output arrays. The `when` field controls conditional execution. This is an expression that must be evaluated with `inputs` bound to the step input object (or individual scatter job), and returns a boolean value. It is an error if this expression returns a value other than `true` or `false`. Conditionals in CWL are an optional feature and are not required to be implemented by all consumers of CWL documents. An implementation that does not support conditionals must return a fatal error when attempting to execute a workflow that uses conditional constructs the implementation does not support. # Subworkflows To specify a nested workflow as part of a workflow step, [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) must be specified in the workflow or workflow step requirements. It is a fatal error if a workflow directly or indirectly invokes itself as a subworkflow (recursive workflows are not allowed). 
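The three `scatterMethod` decompositions described above can be modeled with short Python helpers. This is an illustrative sketch only, not part of this module; each job is represented as a tuple of scattered input values:

```python
# Illustrative sketch of the three scatterMethod variants -- these helpers
# are NOT part of cwl_utils; a "job" is modeled as a tuple of input values.
import itertools
from typing import Any


def dotproduct(*arrays: list[Any]) -> list[tuple[Any, ...]]:
    # Arrays are aligned element-wise; unequal lengths are an error.
    if len({len(a) for a in arrays}) > 1:
        raise ValueError("dotproduct requires input arrays of the same length")
    return list(zip(*arrays))


def flat_crossproduct(*arrays: list[Any]) -> list[tuple[Any, ...]]:
    # Cartesian product of the inputs, flattened to a single output level.
    return list(itertools.product(*arrays))


def nested_crossproduct(*arrays: list[Any]) -> list[Any]:
    # Cartesian product with one level of output nesting per scattered input,
    # in the order the input arrays are listed.
    if len(arrays) == 1:
        return [(v,) for v in arrays[0]]
    inner = nested_crossproduct(*arrays[1:])

    def prepend(v: Any, node: Any) -> Any:
        if isinstance(node, list):
            return [prepend(v, n) for n in node]
        return (v, *node)

    return [prepend(v, inner) for v in arrays[0]]
```

Note that when every scattered array is empty each helper yields no jobs, consistent with the empty-scatter rule above (all outputs become empty arrays and no work is done).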
""" id: str def __init__( self, in_: Any, out: Any, run: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, when: Optional[Any] = None, scatter: Optional[Any] = None, scatterMethod: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.in_ = in_ self.out = out self.requirements = requirements self.hints = hints self.run = run self.when = when self.scatter = scatter self.scatterMethod = scatterMethod def __eq__(self, other: Any) -> bool: if isinstance(other, WorkflowStep): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.in_ == other.in_ and self.out == other.out and self.requirements == other.requirements and self.hints == other.hints and self.run == other.run and self.when == other.when and self.scatter == other.scatter and self.scatterMethod == other.scatterMethod ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.in_, self.out, self.requirements, self.hints, self.run, self.when, self.scatter, self.scatterMethod, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "WorkflowStep": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, 
verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value 
`{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("in") is None: raise ValidationException("missing required field `in`", None, []) in_ = load_field( _doc.get("in"), idmap_in__array_of_WorkflowStepInputLoader, baseuri, loadingOptions, lc=_doc.get("in") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `in`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("in") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( 
ValidationException( "the `in` field is not valid because:", SourceLine(_doc, "in", str), [e], detailed_message=f"the `in` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("out") is None: raise ValidationException("missing required field `out`", None, []) out = load_field( _doc.get("out"), uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("out") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `out`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("out") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `out` field is not valid because:", SourceLine(_doc, "out", str), [e], detailed_message=f"the `out` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": 
_errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) subscope_baseuri = expand_url('run', baseuri, loadingOptions, True) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_False_False_None_None, subscope_baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [e], detailed_message=f"the `run` field with value 
`{val}` " "is not valid because:", ) ) when = None if "when" in _doc: try: when = load_field( _doc.get("when"), union_of_None_type_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("when") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `when`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("when") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `when` field is not valid because:", SourceLine(_doc, "when", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `when` field is not valid because:", SourceLine(_doc, "when", str), [e], detailed_message=f"the `when` field with value `{val}` " "is not valid because:", ) ) scatter = None if "scatter" in _doc: try: scatter = load_field( _doc.get("scatter"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("scatter") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatter`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatter") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( 
ValidationException( "the `scatter` field is not valid because:", SourceLine(_doc, "scatter", str), [e], detailed_message=f"the `scatter` field with value `{val}` " "is not valid because:", ) ) scatterMethod = None if "scatterMethod" in _doc: try: scatterMethod = load_field( _doc.get("scatterMethod"), uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("scatterMethod") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `scatterMethod`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("scatterMethod") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `scatterMethod` field is not valid because:", SourceLine(_doc, "scatterMethod", str), [e], detailed_message=f"the `scatterMethod` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `in`, `out`, `requirements`, `hints`, `run`, `when`, `scatter`, `scatterMethod`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, 
doc=doc, in_=in_, out=out, requirements=requirements, hints=hints, run=run, when=when, scatter=scatter, scatterMethod=scatterMethod, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.in_ is not None: r["in"] = save( self.in_, top=False, base_url=self.id, relative_uris=relative_uris ) if self.out is not None: u = save_relative_uri(self.out, self.id, True, None, relative_uris) r["out"] = u if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u if self.when is not None: r["when"] = save( self.when, top=False, base_url=self.id, relative_uris=relative_uris ) if self.scatter is not None: u = save_relative_uri(self.scatter, self.id, False, 0, relative_uris) r["scatter"] = u if self.scatterMethod is not None: u = save_relative_uri( self.scatterMethod, self.id, False, None, relative_uris ) r["scatterMethod"] = u # emit $namespaces/$schemas only when saving the top-level document if top: if self.loadingOptions.namespaces: r["$namespaces"] = 
self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "in", "out", "requirements", "hints", "run", "when", "scatter", "scatterMethod", ] ) class Workflow(Process): """ A workflow describes a set of **steps** and the **dependencies** between those steps. When a step produces output that will be consumed by a second step, the first step is a dependency of the second step. When there is a dependency, the workflow engine must execute the preceding step and wait for it to successfully produce output before executing the dependent step. If two steps are defined in the workflow graph that are not directly or indirectly dependent, these steps are **independent**, and may execute in any order or execute concurrently. A workflow is complete when all steps have been executed. Dependencies between parameters are expressed using the `source` field on [workflow step input parameters](#WorkflowStepInput) and `outputSource` field on [workflow output parameters](#WorkflowOutputParameter). The `source` field on each workflow step input parameter expresses the data links that contribute to the value of the step input parameter (the "sink"). A workflow step can only begin execution when every data link connected to a step has been fulfilled. The `outputSource` field on each workflow output parameter expresses the data links that contribute to the value of the workflow output parameter (the "sink"). Workflow execution cannot complete successfully until every data link connected to an output parameter has been fulfilled. ## Workflow success and failure A completed step must result in one of `success`, `temporaryFailure` or `permanentFailure` states. An implementation may choose to retry a step execution which resulted in `temporaryFailure`. An implementation may choose to either continue running other steps of a workflow, or terminate immediately upon `permanentFailure`. 
* If any step of a workflow execution results in `permanentFailure`, then the workflow status is `permanentFailure`. * If one or more steps result in `temporaryFailure` and all other steps complete `success` or are not executed, then the workflow status is `temporaryFailure`. * If all workflow steps are executed and complete with `success`, then the workflow status is `success`. # Extensions [ScatterFeatureRequirement](#ScatterFeatureRequirement) and [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) are available as standard [extensions](#Extensions_and_Metadata) to core workflow semantics. """ id: str def __init__( self, inputs: Any, outputs: Any, steps: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, intent: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.intent = intent self.class_ = "Workflow" self.steps = steps def __eq__(self, other: Any) -> bool: if isinstance(other, Workflow): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.intent == other.intent and self.class_ == other.class_ and self.steps == other.steps ) return False def __hash__(self) -> int: return hash( ( self.id, 
self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.intent, self.class_, self.steps, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Workflow": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Workflow_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except 
ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", 
SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_WorkflowInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_WorkflowOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: 
_errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = 
load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) intent = None if "intent" in _doc: try: intent = load_field( _doc.get("intent"), uri_union_of_None_type_or_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("intent") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `intent`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("intent") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( 
"the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [e], detailed_message=f"the `intent` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("steps") is None: raise ValidationException("missing required field `steps`", None, []) steps = load_field( _doc.get("steps"), idmap_steps_union_of_array_of_WorkflowStepLoader, baseuri, loadingOptions, lc=_doc.get("steps") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `steps`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("steps") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `steps` field is not valid because:", SourceLine(_doc, "steps", str), [e], detailed_message=f"the `steps` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `intent`, `class`, `steps`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, 
hints=hints, cwlVersion=cwlVersion, intent=intent, steps=steps, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.intent is not None: u = save_relative_uri(self.intent, self.id, True, None, relative_uris) r["intent"] = u if self.steps is not None: r["steps"] = save( self.steps, top=False, base_url=self.id, relative_uris=relative_uris 
        )
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(
        [
            "id",
            "label",
            "doc",
            "inputs",
            "outputs",
            "requirements",
            "hints",
            "cwlVersion",
            "intent",
            "class",
            "steps",
        ]
    )


class SubworkflowFeatureRequirement(ProcessRequirement):
    """
    Indicates that the workflow platform must support nested workflows in
    the `run` field of [WorkflowStep](#WorkflowStep).
    """

    def __init__(
        self,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.class_ = "SubworkflowFeatureRequirement"

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, SubworkflowFeatureRequirement):
            return bool(self.class_ == other.class_)
        return False

    def __hash__(self) -> int:
        return hash((self.class_))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "SubworkflowFeatureRequirement":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("class") is None:
                raise ValidationException("missing required field `class`", None, [])

            class_ = load_field(
                _doc.get("class"),
                uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class"),
            )

            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}` but")
        except ValidationException as e:
            raise e

        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `class`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.class_ is not None:
            uri = self.loadingOptions.vocab[self.class_]
            if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]):
                uri = f"{p}:{self.class_}"
            else:
                uri = self.class_
            u = save_relative_uri(uri, base_url, False, None, relative_uris)
            r["class"] = u
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["class"])


class ScatterFeatureRequirement(ProcessRequirement):
    """
    Indicates that the workflow platform must support the `scatter` and
    `scatterMethod` fields of [WorkflowStep](#WorkflowStep).
""" def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ScatterFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, ScatterFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ScatterFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_ScatterFeatureRequirement_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`".format(k), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = 
{} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class MultipleInputFeatureRequirement(ProcessRequirement): """ Indicates that the workflow platform must support multiple inbound data links listed in the `source` field of [WorkflowStepInput](#WorkflowStepInput). """ def __init__( self, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MultipleInputFeatureRequirement" def __eq__(self, other: Any) -> bool: if isinstance(other, MultipleInputFeatureRequirement): return bool(self.class_ == other.class_) return False def __hash__(self) -> int: return hash((self.class_)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MultipleInputFeatureRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), 
                uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class"),
            )

            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}` but")
        except ValidationException as e:
            raise e

        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `class`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
        r: dict[str, Any] = {}
        if relative_uris:
            for ef in self.extension_fields:
                r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef]
        else:
            for ef in self.extension_fields:
                r[ef] = self.extension_fields[ef]
        if self.class_ is not None:
            uri = self.loadingOptions.vocab[self.class_]
            if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]):
                uri = f"{p}:{self.class_}"
            else:
                uri = self.class_
            u = save_relative_uri(uri, base_url, False, None, relative_uris)
            r["class"] = u
        # top refers to the directory level
        if top:
            if self.loadingOptions.namespaces:
                r["$namespaces"] = self.loadingOptions.namespaces
            if self.loadingOptions.schemas:
                r["$schemas"] = self.loadingOptions.schemas
        return r

    attrs = frozenset(["class"])


class StepInputExpressionRequirement(ProcessRequirement):
    """
    Indicate that the workflow platform must support the `valueFrom` field
    of [WorkflowStepInput](#WorkflowStepInput).
    """

    def __init__(
        self,
        extension_fields: Optional[dict[str, Any]] = None,
        loadingOptions: Optional[LoadingOptions] = None,
    ) -> None:
        if extension_fields:
            self.extension_fields = extension_fields
        else:
            self.extension_fields = CommentedMap()
        if loadingOptions:
            self.loadingOptions = loadingOptions
        else:
            self.loadingOptions = LoadingOptions()
        self.class_ = "StepInputExpressionRequirement"

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, StepInputExpressionRequirement):
            return bool(self.class_ == other.class_)
        return False

    def __hash__(self) -> int:
        return hash((self.class_))

    @classmethod
    def fromDoc(
        cls,
        doc: Any,
        baseuri: str,
        loadingOptions: LoadingOptions,
        docRoot: Optional[str] = None,
    ) -> "StepInputExpressionRequirement":
        _doc = copy.copy(doc)
        if hasattr(doc, "lc"):
            _doc.lc.data = doc.lc.data
            _doc.lc.filename = doc.lc.filename
        _errors__ = []
        try:
            if _doc.get("class") is None:
                raise ValidationException("missing required field `class`", None, [])

            class_ = load_field(
                _doc.get("class"),
                uri_StepInputExpressionRequirement_classLoader_False_True_None_None,
                baseuri,
                loadingOptions,
                lc=_doc.get("class"),
            )

            if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)):
                raise ValidationException(f"tried `{cls.__name__}` but")
        except ValidationException as e:
            raise e

        extension_fields: dict[str, Any] = {}
        for k in _doc.keys():
            if k not in cls.attrs:
                if not k:
                    _errors__.append(
                        ValidationException("mapping with implicit null key")
                    )
                elif ":" in k:
                    ex = expand_url(
                        k, "", loadingOptions, scoped_id=False, vocab_term=False
                    )
                    extension_fields[ex] = _doc[k]
                else:
                    _errors__.append(
                        ValidationException(
                            "invalid field `{}`, expected one of: `class`".format(k),
                            SourceLine(_doc, k, str),
                        )
                    )

        if _errors__:
            raise ValidationException("", None, _errors__, "*")
        _constructed = cls(
            extension_fields=extension_fields,
            loadingOptions=loadingOptions,
        )
        return _constructed

    def save(
        self, top: bool = False, base_url: str = "", relative_uris: bool = True
    ) -> dict[str, Any]:
r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class"]) class OperationInputParameter(InputParameter): """ Describe an input parameter of an operation. """ id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, loadContents: Optional[Any] = None, loadListing: Optional[Any] = None, default: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.loadContents = loadContents self.loadListing = loadListing self.default = default self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, OperationInputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == 
other.doc and self.id == other.id and self.format == other.format and self.loadContents == other.loadContents and self.loadListing == other.loadListing and self.default == other.default and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.loadContents, self.loadListing, self.default, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OperationInputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), 
union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( 
"the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", 
detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) loadContents = None if "loadContents" in _doc: try: loadContents = load_field( _doc.get("loadContents"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("loadContents") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadContents`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadContents") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadContents` field is not valid because:", SourceLine(_doc, "loadContents", str), [e], detailed_message=f"the `loadContents` field with value `{val}` " "is not valid because:", ) ) loadListing = None if "loadListing" in _doc: try: loadListing = load_field( _doc.get("loadListing"), union_of_None_type_or_LoadListingEnumLoader, baseuri, loadingOptions, lc=_doc.get("loadListing") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loadListing`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loadListing") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loadListing` field is not valid because:", SourceLine(_doc, "loadListing", str), [e], detailed_message=f"the `loadListing` field with value `{val}` " "is not valid because:", ) ) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_CWLObjectTypeLoader, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = 
parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field 
is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, `id`, `format`, `loadContents`, `loadListing`, `default`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, loadContents=loadContents, loadListing=loadListing, default=default, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, 
relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.loadContents is not None: r["loadContents"] = save( self.loadContents, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.loadListing is not None: r["loadListing"] = save( self.loadListing, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "label", "secondaryFiles", "streamable", "doc", "id", "format", "loadContents", "loadListing", "default", "type", ] ) class OperationOutputParameter(OutputParameter): """ Describe an output parameter of an operation. 
""" id: str def __init__( self, type_: Any, label: Optional[Any] = None, secondaryFiles: Optional[Any] = None, streamable: Optional[Any] = None, doc: Optional[Any] = None, id: Optional[Any] = None, format: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.label = label self.secondaryFiles = secondaryFiles self.streamable = streamable self.doc = doc self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.format = format self.type_ = type_ def __eq__(self, other: Any) -> bool: if isinstance(other, OperationOutputParameter): return bool( self.label == other.label and self.secondaryFiles == other.secondaryFiles and self.streamable == other.streamable and self.doc == other.doc and self.id == other.id and self.format == other.format and self.type_ == other.type_ ) return False def __hash__(self) -> int: return hash( ( self.label, self.secondaryFiles, self.streamable, self.doc, self.id, self.format, self.type_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "OperationOutputParameter": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) secondaryFiles = None if "secondaryFiles" in _doc: try: secondaryFiles = load_field( _doc.get("secondaryFiles"), secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, baseuri, 
loadingOptions, lc=_doc.get("secondaryFiles") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secondaryFiles`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secondaryFiles") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secondaryFiles` field is not valid because:", SourceLine(_doc, "secondaryFiles", str), [e], detailed_message=f"the `secondaryFiles` field with value `{val}` " "is not valid because:", ) ) streamable = None if "streamable" in _doc: try: streamable = load_field( _doc.get("streamable"), union_of_None_type_or_booltype, baseuri, loadingOptions, lc=_doc.get("streamable") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `streamable`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("streamable") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, "streamable", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `streamable` field is not valid because:", SourceLine(_doc, 
"streamable", str), [e], detailed_message=f"the `streamable` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) format = None if "format" in _doc: try: format = load_field( _doc.get("format"), uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True, baseuri, loadingOptions, lc=_doc.get("format") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `format`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("format") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field 
" f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `format` field is not valid because:", SourceLine(_doc, "format", str), [e], detailed_message=f"the `format` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("type") is None: raise ValidationException("missing required field `type`", None, []) type_ = load_field( _doc.get("type"), typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2, baseuri, loadingOptions, lc=_doc.get("type") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `type`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("type") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `type` field is not valid because:", SourceLine(_doc, "type", str), [e], detailed_message=f"the `type` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `label`, `secondaryFiles`, `streamable`, `doc`, 
`id`, `format`, `type`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( label=label, secondaryFiles=secondaryFiles, streamable=streamable, doc=doc, id=id, format=format, type_=type_, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.secondaryFiles is not None: r["secondaryFiles"] = save( self.secondaryFiles, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.streamable is not None: r["streamable"] = save( self.streamable, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.format is not None: u = save_relative_uri(self.format, self.id, True, None, relative_uris) r["format"] = u if self.type_ is not None: r["type"] = save( self.type_, top=False, base_url=self.id, relative_uris=relative_uris ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( ["label", "secondaryFiles", "streamable", "doc", "id", "format", "type"] ) class Operation(Process): """ This record describes an abstract operation. 
It is a potential step of a workflow that has not yet been bound to a concrete implementation. It specifies an input and output signature, but does not provide enough information to be executed. An implementation (or other tooling) may provide a means of binding an Operation to a concrete process (such as Workflow, CommandLineTool, or ExpressionTool) with a compatible signature. """ id: str def __init__( self, inputs: Any, outputs: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, intent: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.intent = intent self.class_ = "Operation" def __eq__(self, other: Any) -> bool: if isinstance(other, Operation): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.intent == other.intent and self.class_ == other.class_ ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.intent, self.class_, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Operation": _doc = copy.copy(doc) if 
hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_Operation_classLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing 
required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), 
idmap_inputs_array_of_OperationInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_OperationOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` 
field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( 
_doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field 
`cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) intent = None if "intent" in _doc: try: intent = load_field( _doc.get("intent"), uri_union_of_None_type_or_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("intent") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `intent`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("intent") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [e], detailed_message=f"the `intent` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( 
ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `intent`, `class`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, intent=intent, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.requirements is not None: 
r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.intent is not None: u = save_relative_uri(self.intent, self.id, True, None, relative_uris) r["intent"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "intent", "class", ] ) class Secrets(ProcessRequirement): def __init__( self, secrets: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Secrets" self.secrets = secrets def __eq__(self, other: Any) -> bool: if isinstance(other, Secrets): return bool(self.class_ == other.class_ and self.secrets == other.secrets) return False def __hash__(self) -> int: return hash((self.class_, self.secrets)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Secrets": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, 
loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("secrets") is None: raise ValidationException("missing required field `secrets`", None, []) secrets = load_field( _doc.get("secrets"), uri_array_of_strtype_False_False_0_None, baseuri, loadingOptions, lc=_doc.get("secrets") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `secrets`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("secrets") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `secrets` field is not valid because:", SourceLine(_doc, "secrets", str), [e], detailed_message=f"the `secrets` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `secrets`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( secrets=secrets, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> 
dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.secrets is not None: u = save_relative_uri(self.secrets, base_url, False, 0, relative_uris) r["secrets"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "secrets"]) class ProcessGenerator(Process): id: str def __init__( self, inputs: Any, outputs: Any, run: Any, id: Optional[Any] = None, label: Optional[Any] = None, doc: Optional[Any] = None, requirements: Optional[Any] = None, hints: Optional[Any] = None, cwlVersion: Optional[Any] = None, intent: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.label = label self.doc = doc self.inputs = inputs self.outputs = outputs self.requirements = requirements self.hints = hints self.cwlVersion = cwlVersion self.intent = intent self.class_ = "ProcessGenerator" self.run = run def __eq__(self, other: Any) -> bool: if isinstance(other, ProcessGenerator): return bool( self.id == other.id and self.label == other.label and self.doc == other.doc and self.inputs == other.inputs and 
self.outputs == other.outputs and self.requirements == other.requirements and self.hints == other.hints and self.cwlVersion == other.cwlVersion and self.intent == other.intent and self.class_ == other.class_ and self.run == other.run ) return False def __hash__(self) -> int: return hash( ( self.id, self.label, self.doc, self.inputs, self.outputs, self.requirements, self.hints, self.cwlVersion, self.intent, self.class_, self.run, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ProcessGenerator": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) try: if _doc.get("class") is None: raise ValidationException("missing required field 
`class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e label = None if "label" in _doc: try: label = load_field( _doc.get("label"), union_of_None_type_or_strtype, baseuri, loadingOptions, lc=_doc.get("label") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `label`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("label") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `label` field is not valid because:", SourceLine(_doc, "label", str), [e], detailed_message=f"the `label` field with value `{val}` " "is not valid because:", ) ) doc = None if "doc" in _doc: try: doc = load_field( _doc.get("doc"), union_of_None_type_or_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("doc") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `doc`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("doc") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [ValidationException(f"Value is a {val_type}, " f"but valid 
{to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `doc` field is not valid because:", SourceLine(_doc, "doc", str), [e], detailed_message=f"the `doc` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("inputs") is None: raise ValidationException("missing required field `inputs`", None, []) inputs = load_field( _doc.get("inputs"), idmap_inputs_array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader, baseuri, loadingOptions, lc=_doc.get("inputs") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `inputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("inputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `inputs` field is not valid because:", SourceLine(_doc, "inputs", str), [e], detailed_message=f"the `inputs` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputs") is None: raise ValidationException("missing required field `outputs`", None, []) outputs = load_field( _doc.get("outputs"), idmap_outputs_array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader, baseuri, loadingOptions, lc=_doc.get("outputs") ) except 
ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputs`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputs") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputs` field is not valid because:", SourceLine(_doc, "outputs", str), [e], detailed_message=f"the `outputs` field with value `{val}` " "is not valid because:", ) ) requirements = None if "requirements" in _doc: try: requirements = load_field( _doc.get("requirements"), idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, baseuri, loadingOptions, lc=_doc.get("requirements") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `requirements`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("requirements") if error_message 
!= str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `requirements` field is not valid because:", SourceLine(_doc, "requirements", str), [e], detailed_message=f"the `requirements` field with value `{val}` " "is not valid because:", ) ) hints = None if "hints" in _doc: try: hints = load_field( _doc.get("hints"), idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, baseuri, loadingOptions, lc=_doc.get("hints") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `hints`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("hints") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this 
field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `hints` field is not valid because:", SourceLine(_doc, "hints", str), [e], detailed_message=f"the `hints` field with value `{val}` " "is not valid because:", ) ) cwlVersion = None if "cwlVersion" in _doc: try: cwlVersion = load_field( _doc.get("cwlVersion"), uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("cwlVersion") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cwlVersion`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cwlVersion") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cwlVersion` field is not valid because:", SourceLine(_doc, "cwlVersion", str), [e], detailed_message=f"the `cwlVersion` field with value `{val}` " "is not valid because:", ) ) intent = None if "intent" in _doc: try: intent = load_field( _doc.get("intent"), uri_union_of_None_type_or_array_of_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("intent") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `intent`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("intent") if error_message != str(e): val_type = 
convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `intent` field is not valid because:", SourceLine(_doc, "intent", str), [e], detailed_message=f"the `intent` field with value `{val}` " "is not valid because:", ) ) subscope_baseuri = expand_url('run', baseuri, loadingOptions, True) try: if _doc.get("run") is None: raise ValidationException("missing required field `run`", None, []) run = load_field( _doc.get("run"), uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_False_False_None_None, subscope_baseuri, loadingOptions, lc=_doc.get("run") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `run`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("run") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `run` field is not valid because:", SourceLine(_doc, "run", str), [e], detailed_message=f"the `run` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: 
if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `id`, `label`, `doc`, `inputs`, `outputs`, `requirements`, `hints`, `cwlVersion`, `intent`, `class`, `run`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( id=id, label=label, doc=doc, inputs=inputs, outputs=outputs, requirements=requirements, hints=hints, cwlVersion=cwlVersion, intent=intent, run=run, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, self.id, False, None, relative_uris) r["class"] = u if self.label is not None: r["label"] = save( self.label, top=False, base_url=self.id, relative_uris=relative_uris ) if self.doc is not None: r["doc"] = save( self.doc, top=False, base_url=self.id, relative_uris=relative_uris ) if self.inputs is not None: r["inputs"] = save( self.inputs, top=False, base_url=self.id, relative_uris=relative_uris ) if self.outputs is not None: r["outputs"] = save( self.outputs, top=False, base_url=self.id, relative_uris=relative_uris ) if 
self.requirements is not None: r["requirements"] = save( self.requirements, top=False, base_url=self.id, relative_uris=relative_uris, ) if self.hints is not None: r["hints"] = save( self.hints, top=False, base_url=self.id, relative_uris=relative_uris ) if self.cwlVersion is not None: u = save_relative_uri(self.cwlVersion, self.id, False, None, relative_uris) r["cwlVersion"] = u if self.intent is not None: u = save_relative_uri(self.intent, self.id, True, None, relative_uris) r["intent"] = u if self.run is not None: u = save_relative_uri(self.run, self.id, False, None, relative_uris) r["run"] = u # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "id", "label", "doc", "inputs", "outputs", "requirements", "hints", "cwlVersion", "intent", "class", "run", ] ) class MPIRequirement(ProcessRequirement): """ Indicates that a process requires an MPI runtime. 
""" def __init__( self, processes: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "MPIRequirement" self.processes = processes def __eq__(self, other: Any) -> bool: if isinstance(other, MPIRequirement): return bool( self.class_ == other.class_ and self.processes == other.processes ) return False def __hash__(self) -> int: return hash((self.class_, self.processes)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "MPIRequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("processes") is None: raise ValidationException("missing required field `processes`", None, []) processes = load_field( _doc.get("processes"), union_of_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("processes") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `processes`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("processes") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", 
str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `processes` field is not valid because:", SourceLine(_doc, "processes", str), [e], detailed_message=f"the `processes` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `processes`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( processes=processes, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.processes is not None: r["processes"] = save( self.processes, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if 
self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "processes"]) class CUDARequirement(ProcessRequirement): """ Require support for NVIDIA CUDA (GPU hardware acceleration). """ def __init__( self, cudaComputeCapability: Any, cudaVersionMin: Any, cudaDeviceCountMax: Optional[Any] = None, cudaDeviceCountMin: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "CUDARequirement" self.cudaComputeCapability = cudaComputeCapability self.cudaDeviceCountMax = cudaDeviceCountMax self.cudaDeviceCountMin = cudaDeviceCountMin self.cudaVersionMin = cudaVersionMin def __eq__(self, other: Any) -> bool: if isinstance(other, CUDARequirement): return bool( self.class_ == other.class_ and self.cudaComputeCapability == other.cudaComputeCapability and self.cudaDeviceCountMax == other.cudaDeviceCountMax and self.cudaDeviceCountMin == other.cudaDeviceCountMin and self.cudaVersionMin == other.cudaVersionMin ) return False def __hash__(self) -> int: return hash( ( self.class_, self.cudaComputeCapability, self.cudaDeviceCountMax, self.cudaDeviceCountMin, self.cudaVersionMin, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "CUDARequirement": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise
ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("cudaComputeCapability") is None: raise ValidationException("missing required field `cudaComputeCapability`", None, []) cudaComputeCapability = load_field( _doc.get("cudaComputeCapability"), union_of_strtype_or_array_of_strtype, baseuri, loadingOptions, lc=_doc.get("cudaComputeCapability") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaComputeCapability`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaComputeCapability") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaComputeCapability` field is not valid because:", SourceLine(_doc, "cudaComputeCapability", str), [e], detailed_message=f"the `cudaComputeCapability` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMax = None if "cudaDeviceCountMax" in _doc: try: cudaDeviceCountMax = load_field( _doc.get("cudaDeviceCountMax"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMax") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMax`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMax") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( 
ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMax` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMax", str), [e], detailed_message=f"the `cudaDeviceCountMax` field with value `{val}` " "is not valid because:", ) ) cudaDeviceCountMin = None if "cudaDeviceCountMin" in _doc: try: cudaDeviceCountMin = load_field( _doc.get("cudaDeviceCountMin"), union_of_None_type_or_inttype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("cudaDeviceCountMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaDeviceCountMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaDeviceCountMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaDeviceCountMin` field is not valid because:", SourceLine(_doc, "cudaDeviceCountMin", str), [e], detailed_message=f"the `cudaDeviceCountMin` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("cudaVersionMin") is None: raise ValidationException("missing required field `cudaVersionMin`", None, []) cudaVersionMin = 
load_field( _doc.get("cudaVersionMin"), strtype, baseuri, loadingOptions, lc=_doc.get("cudaVersionMin") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `cudaVersionMin`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("cudaVersionMin") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `cudaVersionMin` field is not valid because:", SourceLine(_doc, "cudaVersionMin", str), [e], detailed_message=f"the `cudaVersionMin` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `cudaComputeCapability`, `cudaDeviceCountMax`, `cudaDeviceCountMin`, `cudaVersionMin`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( cudaComputeCapability=cudaComputeCapability, cudaDeviceCountMax=cudaDeviceCountMax, cudaDeviceCountMin=cudaDeviceCountMin, cudaVersionMin=cudaVersionMin, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: 
r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.cudaComputeCapability is not None: r["cudaComputeCapability"] = save( self.cudaComputeCapability, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMax is not None: r["cudaDeviceCountMax"] = save( self.cudaDeviceCountMax, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaDeviceCountMin is not None: r["cudaDeviceCountMin"] = save( self.cudaDeviceCountMin, top=False, base_url=base_url, relative_uris=relative_uris, ) if self.cudaVersionMin is not None: r["cudaVersionMin"] = save( self.cudaVersionMin, top=False, base_url=base_url, relative_uris=relative_uris, ) # top refers to the directory level if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( [ "class", "cudaComputeCapability", "cudaDeviceCountMax", "cudaDeviceCountMin", "cudaVersionMin", ] ) class LoopInput(Saveable): id: str def __init__( self, default: Optional[Any] = None, id: Optional[Any] = None, linkMerge: Optional[Any] = None, loopSource: Optional[Any] = None, pickValue: Optional[Any] = None, valueFrom: Optional[Any] = None, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions 
= LoadingOptions() self.default = default self.id = id if id is not None else "_:" + str(_uuid__.uuid4()) self.linkMerge = linkMerge self.loopSource = loopSource self.pickValue = pickValue self.valueFrom = valueFrom def __eq__(self, other: Any) -> bool: if isinstance(other, LoopInput): return bool( self.default == other.default and self.id == other.id and self.linkMerge == other.linkMerge and self.loopSource == other.loopSource and self.pickValue == other.pickValue and self.valueFrom == other.valueFrom ) return False def __hash__(self) -> int: return hash( ( self.default, self.id, self.linkMerge, self.loopSource, self.pickValue, self.valueFrom, ) ) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "LoopInput": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] id = None if "id" in _doc: try: id = load_field( _doc.get("id"), uri_union_of_None_type_or_strtype_True_False_None_None, baseuri, loadingOptions, lc=_doc.get("id") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `id`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("id") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `id` field is not valid because:", SourceLine(_doc, "id", str), [e], detailed_message=f"the `id` field with value `{val}` " "is not valid because:", ) ) __original_id_is_none = id is None if id is None: if 
docRoot is not None: id = docRoot else: id = "_:" + str(_uuid__.uuid4()) if not __original_id_is_none: baseuri = cast(str, id) default = None if "default" in _doc: try: default = load_field( _doc.get("default"), union_of_None_type_or_Any_type, baseuri, loadingOptions, lc=_doc.get("default") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `default`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("default") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `default` field is not valid because:", SourceLine(_doc, "default", str), [e], detailed_message=f"the `default` field with value `{val}` " "is not valid because:", ) ) linkMerge = None if "linkMerge" in _doc: try: linkMerge = load_field( _doc.get("linkMerge"), union_of_None_type_or_LinkMergeMethodLoader, baseuri, loadingOptions, lc=_doc.get("linkMerge") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `linkMerge`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("linkMerge") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " 
f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `linkMerge` field is not valid because:", SourceLine(_doc, "linkMerge", str), [e], detailed_message=f"the `linkMerge` field with value `{val}` " "is not valid because:", ) ) loopSource = None if "loopSource" in _doc: try: loopSource = load_field( _doc.get("loopSource"), uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None, baseuri, loadingOptions, lc=_doc.get("loopSource") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loopSource`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loopSource") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loopSource` field is not valid because:", SourceLine(_doc, "loopSource", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loopSource` field is not valid because:", SourceLine(_doc, "loopSource", str), [e], detailed_message=f"the `loopSource` field with value `{val}` " "is not valid because:", ) ) pickValue = None if "pickValue" in _doc: try: pickValue = load_field( _doc.get("pickValue"), union_of_None_type_or_PickValueMethodLoader, baseuri, loadingOptions, lc=_doc.get("pickValue") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `pickValue`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("pickValue") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the 
`pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `pickValue` field is not valid because:", SourceLine(_doc, "pickValue", str), [e], detailed_message=f"the `pickValue` field with value `{val}` " "is not valid because:", ) ) valueFrom = None if "valueFrom" in _doc: try: valueFrom = load_field( _doc.get("valueFrom"), union_of_None_type_or_strtype_or_ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("valueFrom") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `valueFrom`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("valueFrom") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `valueFrom` field is not valid because:", SourceLine(_doc, "valueFrom", str), [e], detailed_message=f"the `valueFrom` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( 
ValidationException( "invalid field `{}`, expected one of: `default`, `id`, `linkMerge`, `loopSource`, `pickValue`, `valueFrom`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( default=default, id=id, linkMerge=linkMerge, loopSource=loopSource, pickValue=pickValue, valueFrom=valueFrom, extension_fields=extension_fields, loadingOptions=loadingOptions, ) loadingOptions.idx[cast(str, id)] = (_constructed, loadingOptions) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.id is not None: u = save_relative_uri(self.id, base_url, True, None, relative_uris) r["id"] = u if self.default is not None: r["default"] = save( self.default, top=False, base_url=self.id, relative_uris=relative_uris ) if self.linkMerge is not None: r["linkMerge"] = save( self.linkMerge, top=False, base_url=self.id, relative_uris=relative_uris ) if self.loopSource is not None: u = save_relative_uri(self.loopSource, self.id, False, 1, relative_uris) r["loopSource"] = u if self.pickValue is not None: r["pickValue"] = save( self.pickValue, top=False, base_url=self.id, relative_uris=relative_uris ) if self.valueFrom is not None: r["valueFrom"] = save( self.valueFrom, top=False, base_url=self.id, relative_uris=relative_uris ) # when saving the top-level document, also emit $namespaces and $schemas if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset( ["default", "id", "linkMerge", "loopSource", "pickValue", "valueFrom"] ) class Loop(ProcessRequirement): """ Prototype to enable workflow-level looping of a step. 
Valid only under `requirements` of a [WorkflowStep](https://www.commonwl.org/v1.2/Workflow.html#WorkflowStep). Unlike other CWL requirements, the Loop requirement is not propagated to inner steps. `loopWhen` is an expansion of the CWL v1.2 `when` construct, which controls conditional execution. Using `loopWhen` and `when` for the same step will produce an error. `loopWhen` is not compatible with `scatter` at this time, and combining the two in the same step will produce an error. """ def __init__( self, loop: Any, loopWhen: Any, outputMethod: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "Loop" self.loop = loop self.loopWhen = loopWhen self.outputMethod = outputMethod def __eq__(self, other: Any) -> bool: if isinstance(other, Loop): return bool( self.class_ == other.class_ and self.loop == other.loop and self.loopWhen == other.loopWhen and self.outputMethod == other.outputMethod ) return False def __hash__(self) -> int: return hash((self.class_, self.loop, self.loopWhen, self.outputMethod)) @classmethod def fromDoc( cls, doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "Loop": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("loop") is None: raise ValidationException("missing 
required field `loop`", None, []) loop = load_field( _doc.get("loop"), idmap_loop_array_of_LoopInputLoader, baseuri, loadingOptions, lc=_doc.get("loop") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loop`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loop") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loop` field is not valid because:", SourceLine(_doc, "loop", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `loop` field is not valid because:", SourceLine(_doc, "loop", str), [e], detailed_message=f"the `loop` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("loopWhen") is None: raise ValidationException("missing required field `loopWhen`", None, []) loopWhen = load_field( _doc.get("loopWhen"), ExpressionLoader, baseuri, loadingOptions, lc=_doc.get("loopWhen") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `loopWhen`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("loopWhen") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `loopWhen` field is not valid because:", SourceLine(_doc, "loopWhen", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the 
`loopWhen` field is not valid because:", SourceLine(_doc, "loopWhen", str), [e], detailed_message=f"the `loopWhen` field with value `{val}` " "is not valid because:", ) ) try: if _doc.get("outputMethod") is None: raise ValidationException("missing required field `outputMethod`", None, []) outputMethod = load_field( _doc.get("outputMethod"), LoopOutputModesLoader, baseuri, loadingOptions, lc=_doc.get("outputMethod") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `outputMethod`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("outputMethod") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `outputMethod` field is not valid because:", SourceLine(_doc, "outputMethod", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `outputMethod` field is not valid because:", SourceLine(_doc, "outputMethod", str), [e], detailed_message=f"the `outputMethod` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `loop`, `loopWhen`, `outputMethod`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( loop=loop, loopWhen=loopWhen, outputMethod=outputMethod, extension_fields=extension_fields, 
loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.loop is not None: r["loop"] = save( self.loop, top=False, base_url=base_url, relative_uris=relative_uris ) if self.loopWhen is not None: r["loopWhen"] = save( self.loopWhen, top=False, base_url=base_url, relative_uris=relative_uris ) if self.outputMethod is not None: r["outputMethod"] = save( self.outputMethod, top=False, base_url=base_url, relative_uris=relative_uris, ) # when saving the top-level document, also emit $namespaces and $schemas if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "loop", "loopWhen", "outputMethod"]) class ShmSize(ProcessRequirement): def __init__( self, shmSize: Any, extension_fields: Optional[dict[str, Any]] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> None: if extension_fields: self.extension_fields = extension_fields else: self.extension_fields = CommentedMap() if loadingOptions: self.loadingOptions = loadingOptions else: self.loadingOptions = LoadingOptions() self.class_ = "ShmSize" self.shmSize = shmSize def __eq__(self, other: Any) -> bool: if isinstance(other, ShmSize): return bool(self.class_ == other.class_ and self.shmSize == other.shmSize) return False def __hash__(self) -> int: return hash((self.class_, self.shmSize)) @classmethod def fromDoc( cls, 
doc: Any, baseuri: str, loadingOptions: LoadingOptions, docRoot: Optional[str] = None ) -> "ShmSize": _doc = copy.copy(doc) if hasattr(doc, "lc"): _doc.lc.data = doc.lc.data _doc.lc.filename = doc.lc.filename _errors__ = [] try: if _doc.get("class") is None: raise ValidationException("missing required field `class`", None, []) class_ = load_field( _doc.get("class"), uri_strtype_False_True_None_None, baseuri, loadingOptions, lc=_doc.get("class") ) if class_ not in (cls.__name__, loadingOptions.vocab.get(cls.__name__)): raise ValidationException(f"tried `{cls.__name__}` but") except ValidationException as e: raise e try: if _doc.get("shmSize") is None: raise ValidationException("missing required field `shmSize`", None, []) shmSize = load_field( _doc.get("shmSize"), strtype, baseuri, loadingOptions, lc=_doc.get("shmSize") ) except ValidationException as e: error_message, to_print, verb_tensage = parse_errors(str(e)) if str(e) == "missing required field `shmSize`": _errors__.append( ValidationException( str(e), None ) ) else: val = _doc.get("shmSize") if error_message != str(e): val_type = convert_typing(extract_type(type(val))) _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [ValidationException(f"Value is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}", detailed_message=f"Value `{val}` is a {val_type}, " f"but valid {to_print} for this field " f"{verb_tensage} {error_message}")], ) ) else: _errors__.append( ValidationException( "the `shmSize` field is not valid because:", SourceLine(_doc, "shmSize", str), [e], detailed_message=f"the `shmSize` field with value `{val}` " "is not valid because:", ) ) extension_fields: dict[str, Any] = {} for k in _doc.keys(): if k not in cls.attrs: if not k: _errors__.append( ValidationException("mapping with implicit null key") ) elif ":" in k: ex = expand_url( k, "", loadingOptions, scoped_id=False, vocab_term=False ) 
extension_fields[ex] = _doc[k] else: _errors__.append( ValidationException( "invalid field `{}`, expected one of: `class`, `shmSize`".format( k ), SourceLine(_doc, k, str), ) ) if _errors__: raise ValidationException("", None, _errors__, "*") _constructed = cls( shmSize=shmSize, extension_fields=extension_fields, loadingOptions=loadingOptions, ) return _constructed def save( self, top: bool = False, base_url: str = "", relative_uris: bool = True ) -> dict[str, Any]: r: dict[str, Any] = {} if relative_uris: for ef in self.extension_fields: r[prefix_url(ef, self.loadingOptions.vocab)] = self.extension_fields[ef] else: for ef in self.extension_fields: r[ef] = self.extension_fields[ef] if self.class_ is not None: uri = self.loadingOptions.vocab[self.class_] if p := self.loadingOptions.rvocab.get(uri[: -len(self.class_)]): uri = f"{p}:{self.class_}" else: uri = self.class_ u = save_relative_uri(uri, base_url, False, None, relative_uris) r["class"] = u if self.shmSize is not None: r["shmSize"] = save( self.shmSize, top=False, base_url=base_url, relative_uris=relative_uris ) # when saving the top-level document, also emit $namespaces and $schemas if top: if self.loadingOptions.namespaces: r["$namespaces"] = self.loadingOptions.namespaces if self.loadingOptions.schemas: r["$schemas"] = self.loadingOptions.schemas return r attrs = frozenset(["class", "shmSize"]) _vocab = { "Any": "https://w3id.org/cwl/salad#Any", "ArraySchema": "https://w3id.org/cwl/salad#ArraySchema", "CUDARequirement": "http://commonwl.org/cwltool#CUDARequirement", "CWLArraySchema": "https://w3id.org/cwl/cwl#CWLArraySchema", "CWLInputFile": "https://w3id.org/cwl/cwl#CWLInputFile", "CWLObjectType": "https://w3id.org/cwl/cwl#CWLObjectType", "CWLRecordField": "https://w3id.org/cwl/cwl#CWLRecordField", "CWLRecordSchema": "https://w3id.org/cwl/cwl#CWLRecordSchema", "CWLType": "https://w3id.org/cwl/cwl#CWLType", "CWLVersion": "https://w3id.org/cwl/cwl#CWLVersion", "CommandInputArraySchema": "https://w3id.org/cwl/cwl#CommandInputArraySchema", 
"CommandInputEnumSchema": "https://w3id.org/cwl/cwl#CommandInputEnumSchema", "CommandInputParameter": "https://w3id.org/cwl/cwl#CommandInputParameter", "CommandInputRecordField": "https://w3id.org/cwl/cwl#CommandInputRecordField", "CommandInputRecordSchema": "https://w3id.org/cwl/cwl#CommandInputRecordSchema", "CommandInputSchema": "https://w3id.org/cwl/cwl#CommandInputSchema", "CommandLineBindable": "https://w3id.org/cwl/cwl#CommandLineBindable", "CommandLineBinding": "https://w3id.org/cwl/cwl#CommandLineBinding", "CommandLineTool": "https://w3id.org/cwl/cwl#CommandLineTool", "CommandOutputArraySchema": "https://w3id.org/cwl/cwl#CommandOutputArraySchema", "CommandOutputBinding": "https://w3id.org/cwl/cwl#CommandOutputBinding", "CommandOutputEnumSchema": "https://w3id.org/cwl/cwl#CommandOutputEnumSchema", "CommandOutputParameter": "https://w3id.org/cwl/cwl#CommandOutputParameter", "CommandOutputRecordField": "https://w3id.org/cwl/cwl#CommandOutputRecordField", "CommandOutputRecordSchema": "https://w3id.org/cwl/cwl#CommandOutputRecordSchema", "Directory": "https://w3id.org/cwl/cwl#Directory", "Dirent": "https://w3id.org/cwl/cwl#Dirent", "DockerRequirement": "https://w3id.org/cwl/cwl#DockerRequirement", "Documented": "https://w3id.org/cwl/salad#Documented", "EnumSchema": "https://w3id.org/cwl/salad#EnumSchema", "EnvVarRequirement": "https://w3id.org/cwl/cwl#EnvVarRequirement", "EnvironmentDef": "https://w3id.org/cwl/cwl#EnvironmentDef", "Expression": "https://w3id.org/cwl/cwl#Expression", "ExpressionPlaceholder": "https://w3id.org/cwl/cwl#ExpressionPlaceholder", "ExpressionTool": "https://w3id.org/cwl/cwl#ExpressionTool", "ExpressionToolOutputParameter": "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter", "FieldBase": "https://w3id.org/cwl/cwl#FieldBase", "File": "https://w3id.org/cwl/cwl#File", "IOSchema": "https://w3id.org/cwl/cwl#IOSchema", "Identified": "https://w3id.org/cwl/cwl#Identified", "InitialWorkDirRequirement": 
"https://w3id.org/cwl/cwl#InitialWorkDirRequirement", "InlineJavascriptRequirement": "https://w3id.org/cwl/cwl#InlineJavascriptRequirement", "InplaceUpdateRequirement": "https://w3id.org/cwl/cwl#InplaceUpdateRequirement", "InputArraySchema": "https://w3id.org/cwl/cwl#InputArraySchema", "InputBinding": "https://w3id.org/cwl/cwl#InputBinding", "InputEnumSchema": "https://w3id.org/cwl/cwl#InputEnumSchema", "InputFormat": "https://w3id.org/cwl/cwl#InputFormat", "InputParameter": "https://w3id.org/cwl/cwl#InputParameter", "InputRecordField": "https://w3id.org/cwl/cwl#InputRecordField", "InputRecordSchema": "https://w3id.org/cwl/cwl#InputRecordSchema", "InputSchema": "https://w3id.org/cwl/cwl#InputSchema", "Labeled": "https://w3id.org/cwl/cwl#Labeled", "LinkMergeMethod": "https://w3id.org/cwl/cwl#LinkMergeMethod", "LoadContents": "https://w3id.org/cwl/cwl#LoadContents", "LoadListingEnum": "https://w3id.org/cwl/cwl#LoadListingEnum", "LoadListingRequirement": "https://w3id.org/cwl/cwl#LoadListingRequirement", "Loop": "http://commonwl.org/cwltool#Loop", "LoopInput": "http://commonwl.org/cwltool#LoopInput", "MPIRequirement": "http://commonwl.org/cwltool#MPIRequirement", "MapSchema": "https://w3id.org/cwl/salad#MapSchema", "MultipleInputFeatureRequirement": "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement", "NetworkAccess": "https://w3id.org/cwl/cwl#NetworkAccess", "Operation": "https://w3id.org/cwl/cwl#Operation", "OperationInputParameter": "https://w3id.org/cwl/cwl#OperationInputParameter", "OperationOutputParameter": "https://w3id.org/cwl/cwl#OperationOutputParameter", "OutputArraySchema": "https://w3id.org/cwl/cwl#OutputArraySchema", "OutputEnumSchema": "https://w3id.org/cwl/cwl#OutputEnumSchema", "OutputFormat": "https://w3id.org/cwl/cwl#OutputFormat", "OutputParameter": "https://w3id.org/cwl/cwl#OutputParameter", "OutputRecordField": "https://w3id.org/cwl/cwl#OutputRecordField", "OutputRecordSchema": "https://w3id.org/cwl/cwl#OutputRecordSchema", 
"OutputSchema": "https://w3id.org/cwl/cwl#OutputSchema", "Parameter": "https://w3id.org/cwl/cwl#Parameter", "PickValueMethod": "https://w3id.org/cwl/cwl#PickValueMethod", "PrimitiveType": "https://w3id.org/cwl/salad#PrimitiveType", "Process": "https://w3id.org/cwl/cwl#Process", "ProcessGenerator": "http://commonwl.org/cwltool#ProcessGenerator", "ProcessRequirement": "https://w3id.org/cwl/cwl#ProcessRequirement", "RecordField": "https://w3id.org/cwl/salad#RecordField", "RecordSchema": "https://w3id.org/cwl/salad#RecordSchema", "ResourceRequirement": "https://w3id.org/cwl/cwl#ResourceRequirement", "ScatterFeatureRequirement": "https://w3id.org/cwl/cwl#ScatterFeatureRequirement", "ScatterMethod": "https://w3id.org/cwl/cwl#ScatterMethod", "SchemaDefRequirement": "https://w3id.org/cwl/cwl#SchemaDefRequirement", "SecondaryFileSchema": "https://w3id.org/cwl/cwl#SecondaryFileSchema", "Secrets": "http://commonwl.org/cwltool#Secrets", "ShellCommandRequirement": "https://w3id.org/cwl/cwl#ShellCommandRequirement", "ShmSize": "http://commonwl.org/cwltool#ShmSize", "Sink": "https://w3id.org/cwl/cwl#Sink", "SoftwarePackage": "https://w3id.org/cwl/cwl#SoftwarePackage", "SoftwareRequirement": "https://w3id.org/cwl/cwl#SoftwareRequirement", "StepInputExpressionRequirement": "https://w3id.org/cwl/cwl#StepInputExpressionRequirement", "SubworkflowFeatureRequirement": "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement", "ToolTimeLimit": "https://w3id.org/cwl/cwl#ToolTimeLimit", "UnionSchema": "https://w3id.org/cwl/salad#UnionSchema", "WorkReuse": "https://w3id.org/cwl/cwl#WorkReuse", "Workflow": "https://w3id.org/cwl/cwl#Workflow", "WorkflowInputParameter": "https://w3id.org/cwl/cwl#WorkflowInputParameter", "WorkflowOutputParameter": "https://w3id.org/cwl/cwl#WorkflowOutputParameter", "WorkflowStep": "https://w3id.org/cwl/cwl#WorkflowStep", "WorkflowStepInput": "https://w3id.org/cwl/cwl#WorkflowStepInput", "WorkflowStepOutput": "https://w3id.org/cwl/cwl#WorkflowStepOutput", "all": 
"http://commonwl.org/cwltool#Loop/outputMethod/LoopOutputModes/all", "all_non_null": "https://w3id.org/cwl/cwl#PickValueMethod/all_non_null", "array": "https://w3id.org/cwl/salad#array", "boolean": "http://www.w3.org/2001/XMLSchema#boolean", "deep_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/deep_listing", "dotproduct": "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct", "double": "http://www.w3.org/2001/XMLSchema#double", "draft-2": "https://w3id.org/cwl/cwl#draft-2", "draft-3": "https://w3id.org/cwl/cwl#draft-3", "draft-3.dev1": "https://w3id.org/cwl/cwl#draft-3.dev1", "draft-3.dev2": "https://w3id.org/cwl/cwl#draft-3.dev2", "draft-3.dev3": "https://w3id.org/cwl/cwl#draft-3.dev3", "draft-3.dev4": "https://w3id.org/cwl/cwl#draft-3.dev4", "draft-3.dev5": "https://w3id.org/cwl/cwl#draft-3.dev5", "draft-4.dev1": "https://w3id.org/cwl/cwl#draft-4.dev1", "draft-4.dev2": "https://w3id.org/cwl/cwl#draft-4.dev2", "draft-4.dev3": "https://w3id.org/cwl/cwl#draft-4.dev3", "enum": "https://w3id.org/cwl/salad#enum", "first_non_null": "https://w3id.org/cwl/cwl#PickValueMethod/first_non_null", "flat_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct", "float": "http://www.w3.org/2001/XMLSchema#float", "int": "http://www.w3.org/2001/XMLSchema#int", "last": "http://commonwl.org/cwltool#Loop/outputMethod/LoopOutputModes/last", "long": "http://www.w3.org/2001/XMLSchema#long", "map": "https://w3id.org/cwl/salad#map", "merge_flattened": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened", "merge_nested": "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested", "nested_crossproduct": "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct", "no_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/no_listing", "null": "https://w3id.org/cwl/salad#null", "record": "https://w3id.org/cwl/salad#record", "shallow_listing": "https://w3id.org/cwl/cwl#LoadListingEnum/shallow_listing", "stderr": "https://w3id.org/cwl/cwl#stderr", "stdin": 
"https://w3id.org/cwl/cwl#stdin", "stdout": "https://w3id.org/cwl/cwl#stdout", "string": "http://www.w3.org/2001/XMLSchema#string", "the_only_non_null": "https://w3id.org/cwl/cwl#PickValueMethod/the_only_non_null", "union": "https://w3id.org/cwl/salad#union", "v1.0": "https://w3id.org/cwl/cwl#v1.0", "v1.0.dev4": "https://w3id.org/cwl/cwl#v1.0.dev4", "v1.1": "https://w3id.org/cwl/cwl#v1.1", "v1.1.0-dev1": "https://w3id.org/cwl/cwl#v1.1.0-dev1", "v1.2": "https://w3id.org/cwl/cwl#v1.2", "v1.2.0-dev1": "https://w3id.org/cwl/cwl#v1.2.0-dev1", "v1.2.0-dev2": "https://w3id.org/cwl/cwl#v1.2.0-dev2", "v1.2.0-dev3": "https://w3id.org/cwl/cwl#v1.2.0-dev3", "v1.2.0-dev4": "https://w3id.org/cwl/cwl#v1.2.0-dev4", "v1.2.0-dev5": "https://w3id.org/cwl/cwl#v1.2.0-dev5", } _rvocab = { "https://w3id.org/cwl/salad#Any": "Any", "https://w3id.org/cwl/salad#ArraySchema": "ArraySchema", "http://commonwl.org/cwltool#CUDARequirement": "CUDARequirement", "https://w3id.org/cwl/cwl#CWLArraySchema": "CWLArraySchema", "https://w3id.org/cwl/cwl#CWLInputFile": "CWLInputFile", "https://w3id.org/cwl/cwl#CWLObjectType": "CWLObjectType", "https://w3id.org/cwl/cwl#CWLRecordField": "CWLRecordField", "https://w3id.org/cwl/cwl#CWLRecordSchema": "CWLRecordSchema", "https://w3id.org/cwl/cwl#CWLType": "CWLType", "https://w3id.org/cwl/cwl#CWLVersion": "CWLVersion", "https://w3id.org/cwl/cwl#CommandInputArraySchema": "CommandInputArraySchema", "https://w3id.org/cwl/cwl#CommandInputEnumSchema": "CommandInputEnumSchema", "https://w3id.org/cwl/cwl#CommandInputParameter": "CommandInputParameter", "https://w3id.org/cwl/cwl#CommandInputRecordField": "CommandInputRecordField", "https://w3id.org/cwl/cwl#CommandInputRecordSchema": "CommandInputRecordSchema", "https://w3id.org/cwl/cwl#CommandInputSchema": "CommandInputSchema", "https://w3id.org/cwl/cwl#CommandLineBindable": "CommandLineBindable", "https://w3id.org/cwl/cwl#CommandLineBinding": "CommandLineBinding", "https://w3id.org/cwl/cwl#CommandLineTool": 
"CommandLineTool", "https://w3id.org/cwl/cwl#CommandOutputArraySchema": "CommandOutputArraySchema", "https://w3id.org/cwl/cwl#CommandOutputBinding": "CommandOutputBinding", "https://w3id.org/cwl/cwl#CommandOutputEnumSchema": "CommandOutputEnumSchema", "https://w3id.org/cwl/cwl#CommandOutputParameter": "CommandOutputParameter", "https://w3id.org/cwl/cwl#CommandOutputRecordField": "CommandOutputRecordField", "https://w3id.org/cwl/cwl#CommandOutputRecordSchema": "CommandOutputRecordSchema", "https://w3id.org/cwl/cwl#Directory": "Directory", "https://w3id.org/cwl/cwl#Dirent": "Dirent", "https://w3id.org/cwl/cwl#DockerRequirement": "DockerRequirement", "https://w3id.org/cwl/salad#Documented": "Documented", "https://w3id.org/cwl/salad#EnumSchema": "EnumSchema", "https://w3id.org/cwl/cwl#EnvVarRequirement": "EnvVarRequirement", "https://w3id.org/cwl/cwl#EnvironmentDef": "EnvironmentDef", "https://w3id.org/cwl/cwl#Expression": "Expression", "https://w3id.org/cwl/cwl#ExpressionPlaceholder": "ExpressionPlaceholder", "https://w3id.org/cwl/cwl#ExpressionTool": "ExpressionTool", "https://w3id.org/cwl/cwl#ExpressionToolOutputParameter": "ExpressionToolOutputParameter", "https://w3id.org/cwl/cwl#FieldBase": "FieldBase", "https://w3id.org/cwl/cwl#File": "File", "https://w3id.org/cwl/cwl#IOSchema": "IOSchema", "https://w3id.org/cwl/cwl#Identified": "Identified", "https://w3id.org/cwl/cwl#InitialWorkDirRequirement": "InitialWorkDirRequirement", "https://w3id.org/cwl/cwl#InlineJavascriptRequirement": "InlineJavascriptRequirement", "https://w3id.org/cwl/cwl#InplaceUpdateRequirement": "InplaceUpdateRequirement", "https://w3id.org/cwl/cwl#InputArraySchema": "InputArraySchema", "https://w3id.org/cwl/cwl#InputBinding": "InputBinding", "https://w3id.org/cwl/cwl#InputEnumSchema": "InputEnumSchema", "https://w3id.org/cwl/cwl#InputFormat": "InputFormat", "https://w3id.org/cwl/cwl#InputParameter": "InputParameter", "https://w3id.org/cwl/cwl#InputRecordField": "InputRecordField", 
"https://w3id.org/cwl/cwl#InputRecordSchema": "InputRecordSchema", "https://w3id.org/cwl/cwl#InputSchema": "InputSchema", "https://w3id.org/cwl/cwl#Labeled": "Labeled", "https://w3id.org/cwl/cwl#LinkMergeMethod": "LinkMergeMethod", "https://w3id.org/cwl/cwl#LoadContents": "LoadContents", "https://w3id.org/cwl/cwl#LoadListingEnum": "LoadListingEnum", "https://w3id.org/cwl/cwl#LoadListingRequirement": "LoadListingRequirement", "http://commonwl.org/cwltool#Loop": "Loop", "http://commonwl.org/cwltool#LoopInput": "LoopInput", "http://commonwl.org/cwltool#MPIRequirement": "MPIRequirement", "https://w3id.org/cwl/salad#MapSchema": "MapSchema", "https://w3id.org/cwl/cwl#MultipleInputFeatureRequirement": "MultipleInputFeatureRequirement", "https://w3id.org/cwl/cwl#NetworkAccess": "NetworkAccess", "https://w3id.org/cwl/cwl#Operation": "Operation", "https://w3id.org/cwl/cwl#OperationInputParameter": "OperationInputParameter", "https://w3id.org/cwl/cwl#OperationOutputParameter": "OperationOutputParameter", "https://w3id.org/cwl/cwl#OutputArraySchema": "OutputArraySchema", "https://w3id.org/cwl/cwl#OutputEnumSchema": "OutputEnumSchema", "https://w3id.org/cwl/cwl#OutputFormat": "OutputFormat", "https://w3id.org/cwl/cwl#OutputParameter": "OutputParameter", "https://w3id.org/cwl/cwl#OutputRecordField": "OutputRecordField", "https://w3id.org/cwl/cwl#OutputRecordSchema": "OutputRecordSchema", "https://w3id.org/cwl/cwl#OutputSchema": "OutputSchema", "https://w3id.org/cwl/cwl#Parameter": "Parameter", "https://w3id.org/cwl/cwl#PickValueMethod": "PickValueMethod", "https://w3id.org/cwl/salad#PrimitiveType": "PrimitiveType", "https://w3id.org/cwl/cwl#Process": "Process", "http://commonwl.org/cwltool#ProcessGenerator": "ProcessGenerator", "https://w3id.org/cwl/cwl#ProcessRequirement": "ProcessRequirement", "https://w3id.org/cwl/salad#RecordField": "RecordField", "https://w3id.org/cwl/salad#RecordSchema": "RecordSchema", "https://w3id.org/cwl/cwl#ResourceRequirement": "ResourceRequirement", 
"https://w3id.org/cwl/cwl#ScatterFeatureRequirement": "ScatterFeatureRequirement", "https://w3id.org/cwl/cwl#ScatterMethod": "ScatterMethod", "https://w3id.org/cwl/cwl#SchemaDefRequirement": "SchemaDefRequirement", "https://w3id.org/cwl/cwl#SecondaryFileSchema": "SecondaryFileSchema", "http://commonwl.org/cwltool#Secrets": "Secrets", "https://w3id.org/cwl/cwl#ShellCommandRequirement": "ShellCommandRequirement", "http://commonwl.org/cwltool#ShmSize": "ShmSize", "https://w3id.org/cwl/cwl#Sink": "Sink", "https://w3id.org/cwl/cwl#SoftwarePackage": "SoftwarePackage", "https://w3id.org/cwl/cwl#SoftwareRequirement": "SoftwareRequirement", "https://w3id.org/cwl/cwl#StepInputExpressionRequirement": "StepInputExpressionRequirement", "https://w3id.org/cwl/cwl#SubworkflowFeatureRequirement": "SubworkflowFeatureRequirement", "https://w3id.org/cwl/cwl#ToolTimeLimit": "ToolTimeLimit", "https://w3id.org/cwl/salad#UnionSchema": "UnionSchema", "https://w3id.org/cwl/cwl#WorkReuse": "WorkReuse", "https://w3id.org/cwl/cwl#Workflow": "Workflow", "https://w3id.org/cwl/cwl#WorkflowInputParameter": "WorkflowInputParameter", "https://w3id.org/cwl/cwl#WorkflowOutputParameter": "WorkflowOutputParameter", "https://w3id.org/cwl/cwl#WorkflowStep": "WorkflowStep", "https://w3id.org/cwl/cwl#WorkflowStepInput": "WorkflowStepInput", "https://w3id.org/cwl/cwl#WorkflowStepOutput": "WorkflowStepOutput", "http://commonwl.org/cwltool#Loop/outputMethod/LoopOutputModes/all": "all", "https://w3id.org/cwl/cwl#PickValueMethod/all_non_null": "all_non_null", "https://w3id.org/cwl/salad#array": "array", "http://www.w3.org/2001/XMLSchema#boolean": "boolean", "https://w3id.org/cwl/cwl#LoadListingEnum/deep_listing": "deep_listing", "https://w3id.org/cwl/cwl#ScatterMethod/dotproduct": "dotproduct", "http://www.w3.org/2001/XMLSchema#double": "double", "https://w3id.org/cwl/cwl#draft-2": "draft-2", "https://w3id.org/cwl/cwl#draft-3": "draft-3", "https://w3id.org/cwl/cwl#draft-3.dev1": "draft-3.dev1", 
"https://w3id.org/cwl/cwl#draft-3.dev2": "draft-3.dev2", "https://w3id.org/cwl/cwl#draft-3.dev3": "draft-3.dev3", "https://w3id.org/cwl/cwl#draft-3.dev4": "draft-3.dev4", "https://w3id.org/cwl/cwl#draft-3.dev5": "draft-3.dev5", "https://w3id.org/cwl/cwl#draft-4.dev1": "draft-4.dev1", "https://w3id.org/cwl/cwl#draft-4.dev2": "draft-4.dev2", "https://w3id.org/cwl/cwl#draft-4.dev3": "draft-4.dev3", "https://w3id.org/cwl/salad#enum": "enum", "https://w3id.org/cwl/cwl#PickValueMethod/first_non_null": "first_non_null", "https://w3id.org/cwl/cwl#ScatterMethod/flat_crossproduct": "flat_crossproduct", "http://www.w3.org/2001/XMLSchema#float": "float", "http://www.w3.org/2001/XMLSchema#int": "int", "http://commonwl.org/cwltool#Loop/outputMethod/LoopOutputModes/last": "last", "http://www.w3.org/2001/XMLSchema#long": "long", "https://w3id.org/cwl/salad#map": "map", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_flattened": "merge_flattened", "https://w3id.org/cwl/cwl#LinkMergeMethod/merge_nested": "merge_nested", "https://w3id.org/cwl/cwl#ScatterMethod/nested_crossproduct": "nested_crossproduct", "https://w3id.org/cwl/cwl#LoadListingEnum/no_listing": "no_listing", "https://w3id.org/cwl/salad#null": "null", "https://w3id.org/cwl/salad#record": "record", "https://w3id.org/cwl/cwl#LoadListingEnum/shallow_listing": "shallow_listing", "https://w3id.org/cwl/cwl#stderr": "stderr", "https://w3id.org/cwl/cwl#stdin": "stdin", "https://w3id.org/cwl/cwl#stdout": "stdout", "http://www.w3.org/2001/XMLSchema#string": "string", "https://w3id.org/cwl/cwl#PickValueMethod/the_only_non_null": "the_only_non_null", "https://w3id.org/cwl/salad#union": "union", "https://w3id.org/cwl/cwl#v1.0": "v1.0", "https://w3id.org/cwl/cwl#v1.0.dev4": "v1.0.dev4", "https://w3id.org/cwl/cwl#v1.1": "v1.1", "https://w3id.org/cwl/cwl#v1.1.0-dev1": "v1.1.0-dev1", "https://w3id.org/cwl/cwl#v1.2": "v1.2", "https://w3id.org/cwl/cwl#v1.2.0-dev1": "v1.2.0-dev1", "https://w3id.org/cwl/cwl#v1.2.0-dev2": "v1.2.0-dev2", 
"https://w3id.org/cwl/cwl#v1.2.0-dev3": "v1.2.0-dev3", "https://w3id.org/cwl/cwl#v1.2.0-dev4": "v1.2.0-dev4", "https://w3id.org/cwl/cwl#v1.2.0-dev5": "v1.2.0-dev5", } strtype = _PrimitiveLoader(str) inttype = _PrimitiveLoader(int) floattype = _PrimitiveLoader(float) booltype = _PrimitiveLoader(bool) None_type = _PrimitiveLoader(type(None)) Any_type = _AnyLoader() PrimitiveTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", ), "PrimitiveType", ) """ Names of salad data types (based on Avro schema declarations). Refer to the [Avro schema declaration documentation](https://avro.apache.org/docs/current/spec.html#schemas) for detailed information. null: no value boolean: a binary value int: 32-bit signed integer long: 64-bit signed integer float: single precision (32-bit) IEEE 754 floating-point number double: double precision (64-bit) IEEE 754 floating-point number string: Unicode character sequence """ AnyLoader = _EnumLoader(("Any",), "Any") """ The **Any** type validates for any non-null value. """ RecordFieldLoader = _RecordLoader(RecordField, None, None) RecordSchemaLoader = _RecordLoader(RecordSchema, None, None) EnumSchemaLoader = _RecordLoader(EnumSchema, None, None) ArraySchemaLoader = _RecordLoader(ArraySchema, None, None) MapSchemaLoader = _RecordLoader(MapSchema, None, None) UnionSchemaLoader = _RecordLoader(UnionSchema, None, None) CWLTypeLoader = _EnumLoader( ( "null", "boolean", "int", "long", "float", "double", "string", "File", "Directory", ), "CWLType", ) """ Extends primitive types with the concept of a file and directory as a builtin type. 
File: A File object Directory: A Directory object """ CWLArraySchemaLoader = _RecordLoader(CWLArraySchema, None, None) CWLRecordFieldLoader = _RecordLoader(CWLRecordField, None, None) CWLRecordSchemaLoader = _RecordLoader(CWLRecordSchema, None, None) FileLoader = _RecordLoader(File, None, None) DirectoryLoader = _RecordLoader(Directory, None, None) CWLObjectTypeLoader = _UnionLoader((), "CWLObjectTypeLoader") union_of_None_type_or_CWLObjectTypeLoader = _UnionLoader( ( None_type, CWLObjectTypeLoader, ) ) array_of_union_of_None_type_or_CWLObjectTypeLoader = _ArrayLoader( union_of_None_type_or_CWLObjectTypeLoader ) map_of_union_of_None_type_or_CWLObjectTypeLoader = _MapLoader( union_of_None_type_or_CWLObjectTypeLoader, "None", None, None ) InlineJavascriptRequirementLoader = _RecordLoader( InlineJavascriptRequirement, None, None ) SchemaDefRequirementLoader = _RecordLoader(SchemaDefRequirement, None, None) LoadListingRequirementLoader = _RecordLoader(LoadListingRequirement, None, None) DockerRequirementLoader = _RecordLoader(DockerRequirement, None, None) SoftwareRequirementLoader = _RecordLoader(SoftwareRequirement, None, None) InitialWorkDirRequirementLoader = _RecordLoader(InitialWorkDirRequirement, None, None) EnvVarRequirementLoader = _RecordLoader(EnvVarRequirement, None, None) ShellCommandRequirementLoader = _RecordLoader(ShellCommandRequirement, None, None) ResourceRequirementLoader = _RecordLoader(ResourceRequirement, None, None) WorkReuseLoader = _RecordLoader(WorkReuse, None, None) NetworkAccessLoader = _RecordLoader(NetworkAccess, None, None) InplaceUpdateRequirementLoader = _RecordLoader(InplaceUpdateRequirement, None, None) ToolTimeLimitLoader = _RecordLoader(ToolTimeLimit, None, None) SubworkflowFeatureRequirementLoader = _RecordLoader( SubworkflowFeatureRequirement, None, None ) ScatterFeatureRequirementLoader = _RecordLoader(ScatterFeatureRequirement, None, None) MultipleInputFeatureRequirementLoader = _RecordLoader( MultipleInputFeatureRequirement, 
None, None ) StepInputExpressionRequirementLoader = _RecordLoader( StepInputExpressionRequirement, None, None ) SecretsLoader = _RecordLoader(Secrets, None, None) MPIRequirementLoader = _RecordLoader(MPIRequirement, None, None) CUDARequirementLoader = _RecordLoader(CUDARequirement, None, None) LoopLoader = _RecordLoader(Loop, None, None) ShmSizeLoader = _RecordLoader(ShmSize, None, None) union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader = _UnionLoader( ( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, LoadListingRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, WorkReuseLoader, NetworkAccessLoader, InplaceUpdateRequirementLoader, ToolTimeLimitLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, SecretsLoader, MPIRequirementLoader, CUDARequirementLoader, LoopLoader, ShmSizeLoader, ) ) 
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader = _ArrayLoader( union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader = 
_UnionLoader( ( None_type, array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, CWLObjectTypeLoader, ) ) map_of_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader = _MapLoader( 
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader, "CWLInputFile", "@list", True, ) CWLInputFileLoader = map_of_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_CWLObjectTypeLoader CWLVersionLoader = _EnumLoader( ( "draft-2", "draft-3.dev1", "draft-3.dev2", "draft-3.dev3", "draft-3.dev4", "draft-3.dev5", "draft-3", "draft-4.dev1", "draft-4.dev2", "draft-4.dev3", "v1.0.dev4", "v1.0", "v1.1.0-dev1", "v1.1", "v1.2.0-dev1", "v1.2.0-dev2", "v1.2.0-dev3", "v1.2.0-dev4", "v1.2.0-dev5", "v1.2", ), "CWLVersion", ) """ Version symbols for published CWL document versions. 
""" LoadListingEnumLoader = _EnumLoader( ( "no_listing", "shallow_listing", "deep_listing", ), "LoadListingEnum", ) """ Specify the desired behavior for loading the `listing` field of a Directory object for use by expressions. no_listing: Do not load the directory listing. shallow_listing: Only load the top level listing, do not recurse into subdirectories. deep_listing: Load the directory listing and recursively load all subdirectories as well. """ ExpressionLoader = _ExpressionLoader(str) InputBindingLoader = _RecordLoader(InputBinding, None, None) InputRecordFieldLoader = _RecordLoader(InputRecordField, None, None) InputRecordSchemaLoader = _RecordLoader(InputRecordSchema, None, None) InputEnumSchemaLoader = _RecordLoader(InputEnumSchema, None, None) InputArraySchemaLoader = _RecordLoader(InputArraySchema, None, None) OutputRecordFieldLoader = _RecordLoader(OutputRecordField, None, None) OutputRecordSchemaLoader = _RecordLoader(OutputRecordSchema, None, None) OutputEnumSchemaLoader = _RecordLoader(OutputEnumSchema, None, None) OutputArraySchemaLoader = _RecordLoader(OutputArraySchema, None, None) SecondaryFileSchemaLoader = _RecordLoader(SecondaryFileSchema, None, None) EnvironmentDefLoader = _RecordLoader(EnvironmentDef, None, None) CommandLineBindingLoader = _RecordLoader(CommandLineBinding, None, None) CommandOutputBindingLoader = _RecordLoader(CommandOutputBinding, None, None) CommandLineBindableLoader = _RecordLoader(CommandLineBindable, None, None) CommandInputRecordFieldLoader = _RecordLoader(CommandInputRecordField, None, None) CommandInputRecordSchemaLoader = _RecordLoader(CommandInputRecordSchema, None, None) CommandInputEnumSchemaLoader = _RecordLoader(CommandInputEnumSchema, None, None) CommandInputArraySchemaLoader = _RecordLoader(CommandInputArraySchema, None, None) CommandOutputRecordFieldLoader = _RecordLoader(CommandOutputRecordField, None, None) CommandOutputRecordSchemaLoader = _RecordLoader(CommandOutputRecordSchema, None, None) 
CommandOutputEnumSchemaLoader = _RecordLoader(CommandOutputEnumSchema, None, None) CommandOutputArraySchemaLoader = _RecordLoader(CommandOutputArraySchema, None, None) CommandInputParameterLoader = _RecordLoader(CommandInputParameter, None, None) CommandOutputParameterLoader = _RecordLoader(CommandOutputParameter, None, None) stdinLoader = _EnumLoader(("stdin",), "stdin") """ Only valid as a `type` for a `CommandLineTool` input with no `inputBinding` set. `stdin` must not be specified at the `CommandLineTool` level. The following ``` inputs: an_input_name: type: stdin ``` is equivalent to ``` inputs: an_input_name: type: File streamable: true stdin: $(inputs.an_input_name.path) ``` """ stdoutLoader = _EnumLoader(("stdout",), "stdout") """ Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set. The following ``` outputs: an_output_name: type: stdout stdout: a_stdout_file ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: a_stdout_file stdout: a_stdout_file ``` If there is no `stdout` name provided, a random filename will be created. For example, the following ``` outputs: an_output_name: type: stdout ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: random_stdout_filenameABCDEFG stdout: random_stdout_filenameABCDEFG ``` If the `CommandLineTool` contains logically chained commands (e.g. `echo a && echo b`) `stdout` must include the output of every command. """ stderrLoader = _EnumLoader(("stderr",), "stderr") """ Only valid as a `type` for a `CommandLineTool` output with no `outputBinding` set. The following ``` outputs: an_output_name: type: stderr stderr: a_stderr_file ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: a_stderr_file stderr: a_stderr_file ``` If there is no `stderr` name provided, a random filename will be created. 
For example, the following ``` outputs: an_output_name: type: stderr ``` is equivalent to ``` outputs: an_output_name: type: File streamable: true outputBinding: glob: random_stderr_filenameABCDEFG stderr: random_stderr_filenameABCDEFG ``` """ CommandLineToolLoader = _RecordLoader(CommandLineTool, None, None) SoftwarePackageLoader = _RecordLoader(SoftwarePackage, None, None) DirentLoader = _RecordLoader(Dirent, None, None) ExpressionToolOutputParameterLoader = _RecordLoader( ExpressionToolOutputParameter, None, None ) WorkflowInputParameterLoader = _RecordLoader(WorkflowInputParameter, None, None) ExpressionToolLoader = _RecordLoader(ExpressionTool, None, None) LinkMergeMethodLoader = _EnumLoader( ( "merge_nested", "merge_flattened", ), "LinkMergeMethod", ) """ The input link merge method, described in [WorkflowStepInput](#WorkflowStepInput). """ PickValueMethodLoader = _EnumLoader( ( "first_non_null", "the_only_non_null", "all_non_null", ), "PickValueMethod", ) """ Picking non-null values among inbound data links, described in [WorkflowStepInput](#WorkflowStepInput). """ WorkflowOutputParameterLoader = _RecordLoader(WorkflowOutputParameter, None, None) WorkflowStepInputLoader = _RecordLoader(WorkflowStepInput, None, None) WorkflowStepOutputLoader = _RecordLoader(WorkflowStepOutput, None, None) ScatterMethodLoader = _EnumLoader( ( "dotproduct", "nested_crossproduct", "flat_crossproduct", ), "ScatterMethod", ) """ The scatter method, as described in [workflow step scatter](#WorkflowStep). 
""" WorkflowStepLoader = _RecordLoader(WorkflowStep, None, None) WorkflowLoader = _RecordLoader(Workflow, None, None) OperationInputParameterLoader = _RecordLoader(OperationInputParameter, None, None) OperationOutputParameterLoader = _RecordLoader(OperationOutputParameter, None, None) OperationLoader = _RecordLoader(Operation, None, None) ProcessGeneratorLoader = _RecordLoader(ProcessGenerator, None, None) LoopInputLoader = _RecordLoader(LoopInput, None, None) array_of_strtype = _ArrayLoader(strtype) union_of_None_type_or_strtype_or_array_of_strtype = _UnionLoader( ( None_type, strtype, array_of_strtype, ) ) uri_strtype_True_False_None_None = _URILoader(strtype, True, False, None, None) union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, RecordSchemaLoader, EnumSchemaLoader, ArraySchemaLoader, MapSchemaLoader, UnionSchemaLoader, strtype, ) ) array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _ArrayLoader( union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype ) union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, RecordSchemaLoader, EnumSchemaLoader, ArraySchemaLoader, MapSchemaLoader, UnionSchemaLoader, strtype, array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, ) ) 
typedsl_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, 2, "v1.1", ) array_of_RecordFieldLoader = _ArrayLoader(RecordFieldLoader) union_of_None_type_or_array_of_RecordFieldLoader = _UnionLoader( ( None_type, array_of_RecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_RecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_RecordFieldLoader, "name", "type" ) Record_nameLoader = _EnumLoader(("record",), "Record_name") typedsl_Record_nameLoader_2 = _TypeDSLLoader(Record_nameLoader, 2, "v1.1") union_of_None_type_or_strtype = _UnionLoader( ( None_type, strtype, ) ) uri_union_of_None_type_or_strtype_True_False_None_None = _URILoader( union_of_None_type_or_strtype, True, False, None, None ) uri_array_of_strtype_True_False_None_None = _URILoader( array_of_strtype, True, False, None, None ) Enum_nameLoader = _EnumLoader(("enum",), "Enum_name") typedsl_Enum_nameLoader_2 = _TypeDSLLoader(Enum_nameLoader, 2, "v1.1") uri_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_False_True_2_None = _URILoader( 
union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_RecordSchemaLoader_or_EnumSchemaLoader_or_ArraySchemaLoader_or_MapSchemaLoader_or_UnionSchemaLoader_or_strtype, False, True, 2, None, ) Array_nameLoader = _EnumLoader(("array",), "Array_name") typedsl_Array_nameLoader_2 = _TypeDSLLoader(Array_nameLoader, 2, "v1.1") Map_nameLoader = _EnumLoader(("map",), "Map_name") typedsl_Map_nameLoader_2 = _TypeDSLLoader(Map_nameLoader, 2, "v1.1") Union_nameLoader = _EnumLoader(("union",), "Union_name") typedsl_Union_nameLoader_2 = _TypeDSLLoader(Union_nameLoader, 2, "v1.1") union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, CWLRecordSchemaLoader, EnumSchemaLoader, CWLArraySchemaLoader, strtype, ) ) array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _ArrayLoader( union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype ) union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype = _UnionLoader( ( PrimitiveTypeLoader, CWLRecordSchemaLoader, EnumSchemaLoader, CWLArraySchemaLoader, strtype, array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, ) ) uri_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( 
union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, False, True, 2, None, ) typedsl_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype_or_array_of_union_of_PrimitiveTypeLoader_or_CWLRecordSchemaLoader_or_EnumSchemaLoader_or_CWLArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CWLRecordFieldLoader = _ArrayLoader(CWLRecordFieldLoader) union_of_None_type_or_array_of_CWLRecordFieldLoader = _UnionLoader( ( None_type, array_of_CWLRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CWLRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_CWLRecordFieldLoader, "name", "type" ) File_classLoader = _EnumLoader(("File",), "File_class") uri_File_classLoader_False_True_None_None = _URILoader( File_classLoader, False, True, None, None ) uri_union_of_None_type_or_strtype_False_False_None_None = _URILoader( union_of_None_type_or_strtype, False, False, None, None ) union_of_None_type_or_inttype = _UnionLoader( ( None_type, inttype, ) ) union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( FileLoader, DirectoryLoader, ) ) array_of_union_of_FileLoader_or_DirectoryLoader = _ArrayLoader( union_of_FileLoader_or_DirectoryLoader ) union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( None_type, array_of_union_of_FileLoader_or_DirectoryLoader, ) ) secondaryfilesdsl_union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( _SecondaryDSLLoader( union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader ), 
union_of_None_type_or_array_of_union_of_FileLoader_or_DirectoryLoader, ) ) uri_union_of_None_type_or_strtype_True_False_None_True = _URILoader( union_of_None_type_or_strtype, True, False, None, True ) Directory_classLoader = _EnumLoader(("Directory",), "Directory_class") uri_Directory_classLoader_False_True_None_None = _URILoader( Directory_classLoader, False, True, None, None ) union_of_None_type_or_booltype = _UnionLoader( ( None_type, booltype, ) ) union_of_None_type_or_LoadListingEnumLoader = _UnionLoader( ( None_type, LoadListingEnumLoader, ) ) array_of_SecondaryFileSchemaLoader = _ArrayLoader(SecondaryFileSchemaLoader) union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader = _UnionLoader( ( None_type, SecondaryFileSchemaLoader, array_of_SecondaryFileSchemaLoader, ) ) secondaryfilesdsl_union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader = _UnionLoader( ( _SecondaryDSLLoader( union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader ), union_of_None_type_or_SecondaryFileSchemaLoader_or_array_of_SecondaryFileSchemaLoader, ) ) union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader = _UnionLoader( ( None_type, strtype, array_of_strtype, ExpressionLoader, ) ) uri_union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader_True_False_None_True = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype_or_ExpressionLoader, True, False, None, True, ) union_of_None_type_or_strtype_or_ExpressionLoader = _UnionLoader( ( None_type, strtype, ExpressionLoader, ) ) uri_union_of_None_type_or_strtype_or_ExpressionLoader_True_False_None_True = _URILoader( union_of_None_type_or_strtype_or_ExpressionLoader, True, False, None, True ) union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, InputRecordSchemaLoader, InputEnumSchemaLoader, InputArraySchemaLoader, strtype, ) ) 
array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, InputRecordSchemaLoader, InputEnumSchemaLoader, InputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_InputRecordFieldLoader = _ArrayLoader(InputRecordFieldLoader) union_of_None_type_or_array_of_InputRecordFieldLoader = _UnionLoader( ( None_type, array_of_InputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_InputRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_InputRecordFieldLoader, "name", "type" ) uri_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( 
union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_InputRecordSchemaLoader_or_InputEnumSchemaLoader_or_InputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, OutputRecordSchemaLoader, OutputEnumSchemaLoader, OutputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, OutputRecordSchemaLoader, OutputEnumSchemaLoader, OutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_OutputRecordFieldLoader = _ArrayLoader(OutputRecordFieldLoader) union_of_None_type_or_array_of_OutputRecordFieldLoader = _UnionLoader( ( None_type, array_of_OutputRecordFieldLoader, ) ) 
idmap_fields_union_of_None_type_or_array_of_OutputRecordFieldLoader = _IdMapLoader( union_of_None_type_or_array_of_OutputRecordFieldLoader, "name", "type" ) uri_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_OutputRecordSchemaLoader_or_OutputEnumSchemaLoader_or_OutputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader = _UnionLoader( ( CommandInputParameterLoader, WorkflowInputParameterLoader, OperationInputParameterLoader, ) ) array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader = _ArrayLoader( union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader ) idmap_inputs_array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader = _IdMapLoader( array_of_union_of_CommandInputParameterLoader_or_WorkflowInputParameterLoader_or_OperationInputParameterLoader, "id", "type", ) union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader = _UnionLoader( ( CommandOutputParameterLoader, ExpressionToolOutputParameterLoader, WorkflowOutputParameterLoader, OperationOutputParameterLoader, ) ) array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader = _ArrayLoader( 
union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader ) idmap_outputs_array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader = _IdMapLoader( array_of_union_of_CommandOutputParameterLoader_or_ExpressionToolOutputParameterLoader_or_WorkflowOutputParameterLoader_or_OperationOutputParameterLoader, "id", "type", ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader = _UnionLoader( ( None_type, array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, ) ) 
idmap_requirements_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader = _IdMapLoader( union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader, "class", "None", ) 
union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( ( InlineJavascriptRequirementLoader, SchemaDefRequirementLoader, LoadListingRequirementLoader, DockerRequirementLoader, SoftwareRequirementLoader, InitialWorkDirRequirementLoader, EnvVarRequirementLoader, ShellCommandRequirementLoader, ResourceRequirementLoader, WorkReuseLoader, NetworkAccessLoader, InplaceUpdateRequirementLoader, ToolTimeLimitLoader, SubworkflowFeatureRequirementLoader, ScatterFeatureRequirementLoader, MultipleInputFeatureRequirementLoader, StepInputExpressionRequirementLoader, SecretsLoader, MPIRequirementLoader, CUDARequirementLoader, LoopLoader, ShmSizeLoader, Any_type, ) ) array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type = _ArrayLoader( 
union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type ) union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type = _UnionLoader( ( None_type, 
array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, ) ) idmap_hints_union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type = _IdMapLoader( 
union_of_None_type_or_array_of_union_of_InlineJavascriptRequirementLoader_or_SchemaDefRequirementLoader_or_LoadListingRequirementLoader_or_DockerRequirementLoader_or_SoftwareRequirementLoader_or_InitialWorkDirRequirementLoader_or_EnvVarRequirementLoader_or_ShellCommandRequirementLoader_or_ResourceRequirementLoader_or_WorkReuseLoader_or_NetworkAccessLoader_or_InplaceUpdateRequirementLoader_or_ToolTimeLimitLoader_or_SubworkflowFeatureRequirementLoader_or_ScatterFeatureRequirementLoader_or_MultipleInputFeatureRequirementLoader_or_StepInputExpressionRequirementLoader_or_SecretsLoader_or_MPIRequirementLoader_or_CUDARequirementLoader_or_LoopLoader_or_ShmSizeLoader_or_Any_type, "class", "None", ) union_of_None_type_or_CWLVersionLoader = _UnionLoader( ( None_type, CWLVersionLoader, ) ) uri_union_of_None_type_or_CWLVersionLoader_False_True_None_None = _URILoader( union_of_None_type_or_CWLVersionLoader, False, True, None, None ) union_of_None_type_or_array_of_strtype = _UnionLoader( ( None_type, array_of_strtype, ) ) uri_union_of_None_type_or_array_of_strtype_True_False_None_None = _URILoader( union_of_None_type_or_array_of_strtype, True, False, None, None ) InlineJavascriptRequirement_classLoader = _EnumLoader( ("InlineJavascriptRequirement",), "InlineJavascriptRequirement_class" ) uri_InlineJavascriptRequirement_classLoader_False_True_None_None = _URILoader( InlineJavascriptRequirement_classLoader, False, True, None, None ) SchemaDefRequirement_classLoader = _EnumLoader( ("SchemaDefRequirement",), "SchemaDefRequirement_class" ) uri_SchemaDefRequirement_classLoader_False_True_None_None = _URILoader( SchemaDefRequirement_classLoader, False, True, None, None ) union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader = _UnionLoader( ( CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, ) ) 
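Each `_UnionLoader` above tries its alternative loaders in order and returns the first one that validates the document. A self-contained sketch of that dispatch pattern (simplified, hypothetical classes and loader functions, not the schema-salad originals):

```python
class SimpleUnionLoader:
    """Try each alternative loader in order; the first success wins."""

    def __init__(self, alternatives):
        self.alternatives = alternatives

    def load(self, doc):
        errors = []
        for alt in self.alternatives:
            try:
                return alt(doc)
            except TypeError as err:
                errors.append(str(err))  # remember why each alternative failed
        raise TypeError("no union alternative matched: " + "; ".join(errors))


def load_int(doc):
    if isinstance(doc, bool) or not isinstance(doc, int):
        raise TypeError(f"{doc!r} is not an int")
    return doc


def load_str(doc):
    if not isinstance(doc, str):
        raise TypeError(f"{doc!r} is not a str")
    return doc


int_or_str = SimpleUnionLoader((load_int, load_str))
```

The ordering of alternatives matters: earlier loaders get the first chance to claim a document, which is why the generated unions list the most specific loaders before `strtype` or `Any_type`.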
array_of_union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader = _ArrayLoader( union_of_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader ) union_of_strtype_or_ExpressionLoader = _UnionLoader( ( strtype, ExpressionLoader, ) ) union_of_None_type_or_booltype_or_ExpressionLoader = _UnionLoader( ( None_type, booltype, ExpressionLoader, ) ) LoadListingRequirement_classLoader = _EnumLoader( ("LoadListingRequirement",), "LoadListingRequirement_class" ) uri_LoadListingRequirement_classLoader_False_True_None_None = _URILoader( LoadListingRequirement_classLoader, False, True, None, None ) union_of_None_type_or_inttype_or_ExpressionLoader = _UnionLoader( ( None_type, inttype, ExpressionLoader, ) ) union_of_None_type_or_strtype_or_ExpressionLoader_or_array_of_strtype = _UnionLoader( ( None_type, strtype, ExpressionLoader, array_of_strtype, ) ) union_of_None_type_or_ExpressionLoader = _UnionLoader( ( None_type, ExpressionLoader, ) ) union_of_None_type_or_CommandLineBindingLoader = _UnionLoader( ( None_type, CommandLineBindingLoader, ) ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( 
CWLTypeLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) array_of_CommandInputRecordFieldLoader = _ArrayLoader(CommandInputRecordFieldLoader) union_of_None_type_or_array_of_CommandInputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandInputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandInputRecordFieldLoader = ( _IdMapLoader( union_of_None_type_or_array_of_CommandInputRecordFieldLoader, "name", "type" ) ) uri_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, False, True, 2, None, ) 
union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, ) ) array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _ArrayLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype ) union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) union_of_None_type_or_CommandOutputBindingLoader = _UnionLoader( ( None_type, CommandOutputBindingLoader, ) ) array_of_CommandOutputRecordFieldLoader = _ArrayLoader(CommandOutputRecordFieldLoader) 
union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = _UnionLoader( ( None_type, array_of_CommandOutputRecordFieldLoader, ) ) idmap_fields_union_of_None_type_or_array_of_CommandOutputRecordFieldLoader = ( _IdMapLoader( union_of_None_type_or_array_of_CommandOutputRecordFieldLoader, "name", "type" ) ) uri_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_False_True_2_None = _URILoader( union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, False, True, 2, None, ) union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, stdinLoader, CommandInputRecordSchemaLoader, CommandInputEnumSchemaLoader, CommandInputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( 
union_of_CWLTypeLoader_or_stdinLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandInputRecordSchemaLoader_or_CommandInputEnumSchemaLoader_or_CommandInputArraySchemaLoader_or_strtype, 2, "v1.1", ) union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype = _UnionLoader( ( CWLTypeLoader, stdoutLoader, stderrLoader, CommandOutputRecordSchemaLoader, CommandOutputEnumSchemaLoader, CommandOutputArraySchemaLoader, strtype, array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, ) ) typedsl_union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_2 = _TypeDSLLoader( union_of_CWLTypeLoader_or_stdoutLoader_or_stderrLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype_or_array_of_union_of_CWLTypeLoader_or_CommandOutputRecordSchemaLoader_or_CommandOutputEnumSchemaLoader_or_CommandOutputArraySchemaLoader_or_strtype, 2, "v1.1", ) CommandLineTool_classLoader = _EnumLoader(("CommandLineTool",), "CommandLineTool_class") uri_CommandLineTool_classLoader_False_True_None_None = _URILoader( CommandLineTool_classLoader, False, True, None, None ) array_of_CommandInputParameterLoader = _ArrayLoader(CommandInputParameterLoader) idmap_inputs_array_of_CommandInputParameterLoader = _IdMapLoader( 
array_of_CommandInputParameterLoader, "id", "type" ) array_of_CommandOutputParameterLoader = _ArrayLoader(CommandOutputParameterLoader) idmap_outputs_array_of_CommandOutputParameterLoader = _IdMapLoader( array_of_CommandOutputParameterLoader, "id", "type" ) union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = _UnionLoader( ( strtype, ExpressionLoader, CommandLineBindingLoader, ) ) array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = ( _ArrayLoader(union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader) ) union_of_None_type_or_array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader = _UnionLoader( ( None_type, array_of_union_of_strtype_or_ExpressionLoader_or_CommandLineBindingLoader, ) ) array_of_inttype = _ArrayLoader(inttype) union_of_None_type_or_array_of_inttype = _UnionLoader( ( None_type, array_of_inttype, ) ) DockerRequirement_classLoader = _EnumLoader( ("DockerRequirement",), "DockerRequirement_class" ) uri_DockerRequirement_classLoader_False_True_None_None = _URILoader( DockerRequirement_classLoader, False, True, None, None ) SoftwareRequirement_classLoader = _EnumLoader( ("SoftwareRequirement",), "SoftwareRequirement_class" ) uri_SoftwareRequirement_classLoader_False_True_None_None = _URILoader( SoftwareRequirement_classLoader, False, True, None, None ) array_of_SoftwarePackageLoader = _ArrayLoader(SoftwarePackageLoader) idmap_packages_array_of_SoftwarePackageLoader = _IdMapLoader( array_of_SoftwarePackageLoader, "package", "specs" ) uri_union_of_None_type_or_array_of_strtype_False_False_None_True = _URILoader( union_of_None_type_or_array_of_strtype, False, False, None, True ) InitialWorkDirRequirement_classLoader = _EnumLoader( ("InitialWorkDirRequirement",), "InitialWorkDirRequirement_class" ) uri_InitialWorkDirRequirement_classLoader_False_True_None_None = _URILoader( InitialWorkDirRequirement_classLoader, False, True, None, None ) 
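The `_IdMapLoader` instances above (for example the one wrapping `array_of_SoftwarePackageLoader` with map subject `"package"` and map predicate `"specs"`) let a mapping keyed by an identifier stand in for a list of records. A rough sketch of that expansion under those assumptions (a hypothetical `idmap_expand` helper, not the schema-salad code):

```python
def idmap_expand(doc, map_subject: str, map_predicate: str):
    """Rewrite {key: value} mappings into the equivalent list-of-records form."""
    if not isinstance(doc, dict):
        return doc  # already a list (or scalar); pass through unchanged
    records = []
    for key, value in doc.items():
        if isinstance(value, dict):
            record = dict(value)
            record[map_subject] = key  # the mapping key supplies the id field
        else:
            # Shorthand: a bare value fills the predicate field directly.
            record = {map_subject: key, map_predicate: value}
        records.append(record)
    return records
```

For instance, an `envDef` given as `{"PATH": "/usr/bin"}` would expand to `[{"envName": "PATH", "envValue": "/usr/bin"}]` before the inner array loader runs.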
union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( None_type, DirentLoader, ExpressionLoader, FileLoader, DirectoryLoader, array_of_union_of_FileLoader_or_DirectoryLoader, ) ) array_of_union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader = _ArrayLoader( union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader ) union_of_ExpressionLoader_or_array_of_union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader = _UnionLoader( ( ExpressionLoader, array_of_union_of_None_type_or_DirentLoader_or_ExpressionLoader_or_FileLoader_or_DirectoryLoader_or_array_of_union_of_FileLoader_or_DirectoryLoader, ) ) EnvVarRequirement_classLoader = _EnumLoader( ("EnvVarRequirement",), "EnvVarRequirement_class" ) uri_EnvVarRequirement_classLoader_False_True_None_None = _URILoader( EnvVarRequirement_classLoader, False, True, None, None ) array_of_EnvironmentDefLoader = _ArrayLoader(EnvironmentDefLoader) idmap_envDef_array_of_EnvironmentDefLoader = _IdMapLoader( array_of_EnvironmentDefLoader, "envName", "envValue" ) ShellCommandRequirement_classLoader = _EnumLoader( ("ShellCommandRequirement",), "ShellCommandRequirement_class" ) uri_ShellCommandRequirement_classLoader_False_True_None_None = _URILoader( ShellCommandRequirement_classLoader, False, True, None, None ) ResourceRequirement_classLoader = _EnumLoader( ("ResourceRequirement",), "ResourceRequirement_class" ) uri_ResourceRequirement_classLoader_False_True_None_None = _URILoader( ResourceRequirement_classLoader, False, True, None, None ) union_of_None_type_or_inttype_or_floattype_or_ExpressionLoader = _UnionLoader( ( None_type, inttype, floattype, ExpressionLoader, ) ) WorkReuse_classLoader 
= _EnumLoader(("WorkReuse",), "WorkReuse_class") uri_WorkReuse_classLoader_False_True_None_None = _URILoader( WorkReuse_classLoader, False, True, None, None ) union_of_booltype_or_ExpressionLoader = _UnionLoader( ( booltype, ExpressionLoader, ) ) NetworkAccess_classLoader = _EnumLoader(("NetworkAccess",), "NetworkAccess_class") uri_NetworkAccess_classLoader_False_True_None_None = _URILoader( NetworkAccess_classLoader, False, True, None, None ) InplaceUpdateRequirement_classLoader = _EnumLoader( ("InplaceUpdateRequirement",), "InplaceUpdateRequirement_class" ) uri_InplaceUpdateRequirement_classLoader_False_True_None_None = _URILoader( InplaceUpdateRequirement_classLoader, False, True, None, None ) ToolTimeLimit_classLoader = _EnumLoader(("ToolTimeLimit",), "ToolTimeLimit_class") uri_ToolTimeLimit_classLoader_False_True_None_None = _URILoader( ToolTimeLimit_classLoader, False, True, None, None ) union_of_inttype_or_ExpressionLoader = _UnionLoader( ( inttype, ExpressionLoader, ) ) union_of_None_type_or_InputBindingLoader = _UnionLoader( ( None_type, InputBindingLoader, ) ) ExpressionTool_classLoader = _EnumLoader(("ExpressionTool",), "ExpressionTool_class") uri_ExpressionTool_classLoader_False_True_None_None = _URILoader( ExpressionTool_classLoader, False, True, None, None ) array_of_WorkflowInputParameterLoader = _ArrayLoader(WorkflowInputParameterLoader) idmap_inputs_array_of_WorkflowInputParameterLoader = _IdMapLoader( array_of_WorkflowInputParameterLoader, "id", "type" ) array_of_ExpressionToolOutputParameterLoader = _ArrayLoader( ExpressionToolOutputParameterLoader ) idmap_outputs_array_of_ExpressionToolOutputParameterLoader = _IdMapLoader( array_of_ExpressionToolOutputParameterLoader, "id", "type" ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_1_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 1, None ) union_of_None_type_or_LinkMergeMethodLoader = _UnionLoader( ( None_type, LinkMergeMethodLoader, ) ) 
union_of_None_type_or_PickValueMethodLoader = _UnionLoader( ( None_type, PickValueMethodLoader, ) ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_2_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 2, None ) array_of_WorkflowStepInputLoader = _ArrayLoader(WorkflowStepInputLoader) idmap_in__array_of_WorkflowStepInputLoader = _IdMapLoader( array_of_WorkflowStepInputLoader, "id", "source" ) union_of_strtype_or_WorkflowStepOutputLoader = _UnionLoader( ( strtype, WorkflowStepOutputLoader, ) ) array_of_union_of_strtype_or_WorkflowStepOutputLoader = _ArrayLoader( union_of_strtype_or_WorkflowStepOutputLoader ) union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader = _UnionLoader( (array_of_union_of_strtype_or_WorkflowStepOutputLoader,) ) uri_union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader_True_False_None_None = _URILoader( union_of_array_of_union_of_strtype_or_WorkflowStepOutputLoader, True, False, None, None, ) array_of_Any_type = _ArrayLoader(Any_type) union_of_None_type_or_array_of_Any_type = _UnionLoader( ( None_type, array_of_Any_type, ) ) idmap_hints_union_of_None_type_or_array_of_Any_type = _IdMapLoader( union_of_None_type_or_array_of_Any_type, "class", "None" ) union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader = _UnionLoader( ( strtype, CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, OperationLoader, ProcessGeneratorLoader, ) ) uri_union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_False_False_None_None = _URILoader( union_of_strtype_or_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, False, False, None, None, ) uri_union_of_None_type_or_strtype_or_array_of_strtype_False_False_0_None = _URILoader( union_of_None_type_or_strtype_or_array_of_strtype, False, False, 
0, None ) union_of_None_type_or_ScatterMethodLoader = _UnionLoader( ( None_type, ScatterMethodLoader, ) ) uri_union_of_None_type_or_ScatterMethodLoader_False_True_None_None = _URILoader( union_of_None_type_or_ScatterMethodLoader, False, True, None, None ) Workflow_classLoader = _EnumLoader(("Workflow",), "Workflow_class") uri_Workflow_classLoader_False_True_None_None = _URILoader( Workflow_classLoader, False, True, None, None ) array_of_WorkflowOutputParameterLoader = _ArrayLoader(WorkflowOutputParameterLoader) idmap_outputs_array_of_WorkflowOutputParameterLoader = _IdMapLoader( array_of_WorkflowOutputParameterLoader, "id", "type" ) array_of_WorkflowStepLoader = _ArrayLoader(WorkflowStepLoader) union_of_array_of_WorkflowStepLoader = _UnionLoader((array_of_WorkflowStepLoader,)) idmap_steps_union_of_array_of_WorkflowStepLoader = _IdMapLoader( union_of_array_of_WorkflowStepLoader, "id", "None" ) SubworkflowFeatureRequirement_classLoader = _EnumLoader( ("SubworkflowFeatureRequirement",), "SubworkflowFeatureRequirement_class" ) uri_SubworkflowFeatureRequirement_classLoader_False_True_None_None = _URILoader( SubworkflowFeatureRequirement_classLoader, False, True, None, None ) ScatterFeatureRequirement_classLoader = _EnumLoader( ("ScatterFeatureRequirement",), "ScatterFeatureRequirement_class" ) uri_ScatterFeatureRequirement_classLoader_False_True_None_None = _URILoader( ScatterFeatureRequirement_classLoader, False, True, None, None ) MultipleInputFeatureRequirement_classLoader = _EnumLoader( ("MultipleInputFeatureRequirement",), "MultipleInputFeatureRequirement_class" ) uri_MultipleInputFeatureRequirement_classLoader_False_True_None_None = _URILoader( MultipleInputFeatureRequirement_classLoader, False, True, None, None ) StepInputExpressionRequirement_classLoader = _EnumLoader( ("StepInputExpressionRequirement",), "StepInputExpressionRequirement_class" ) uri_StepInputExpressionRequirement_classLoader_False_True_None_None = _URILoader( 
StepInputExpressionRequirement_classLoader, False, True, None, None ) Operation_classLoader = _EnumLoader(("Operation",), "Operation_class") uri_Operation_classLoader_False_True_None_None = _URILoader( Operation_classLoader, False, True, None, None ) array_of_OperationInputParameterLoader = _ArrayLoader(OperationInputParameterLoader) idmap_inputs_array_of_OperationInputParameterLoader = _IdMapLoader( array_of_OperationInputParameterLoader, "id", "type" ) array_of_OperationOutputParameterLoader = _ArrayLoader(OperationOutputParameterLoader) idmap_outputs_array_of_OperationOutputParameterLoader = _IdMapLoader( array_of_OperationOutputParameterLoader, "id", "type" ) uri_strtype_False_True_None_None = _URILoader(strtype, False, True, None, None) uri_array_of_strtype_False_False_0_None = _URILoader( array_of_strtype, False, False, 0, None ) union_of_strtype_or_array_of_strtype = _UnionLoader( ( strtype, array_of_strtype, ) ) union_of_None_type_or_Any_type = _UnionLoader( ( None_type, Any_type, ) ) array_of_LoopInputLoader = _ArrayLoader(LoopInputLoader) idmap_loop_array_of_LoopInputLoader = _IdMapLoader( array_of_LoopInputLoader, "id", "loopSource" ) LoopOutputModesLoader = _EnumLoader( ( "last", "all", ), "LoopOutputModes", ) union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader = _UnionLoader( ( CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, OperationLoader, ProcessGeneratorLoader, ) ) array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader = _ArrayLoader( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader ) union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader = 
_UnionLoader( ( CommandLineToolLoader, ExpressionToolLoader, WorkflowLoader, OperationLoader, ProcessGeneratorLoader, array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, ) ) CWLObjectTypeLoader.add_loaders( ( booltype, inttype, floattype, strtype, FileLoader, DirectoryLoader, array_of_union_of_None_type_or_CWLObjectTypeLoader, map_of_union_of_None_type_or_CWLObjectTypeLoader, ) ) def load_document( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: if baseuri is None: baseuri = file_uri(os.getcwd()) + "/" if loadingOptions is None: loadingOptions = LoadingOptions() result, metadata = _document_load( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, doc, baseuri, loadingOptions, ) return result def load_document_with_metadata( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[LoadingOptions] = None, addl_metadata_fields: Optional[MutableSequence[str]] = None, ) -> Any: if baseuri is None: baseuri = file_uri(os.getcwd()) + "/" if loadingOptions is None: loadingOptions = LoadingOptions(fileuri=baseuri) return _document_load( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, doc, baseuri, loadingOptions, addl_metadata_fields=addl_metadata_fields, ) def load_document_by_string( string: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: yaml = yaml_no_ts() result = yaml.load(string) add_lc_filename(result, uri) if loadingOptions is None: loadingOptions = LoadingOptions(fileuri=uri) result, metadata = _document_load( 
union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, result, uri, loadingOptions, ) return result def load_document_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """ Shortcut to load via a YAML object. yaml: must be from ruamel.yaml.main.YAML.load with preserve_quotes=True """ add_lc_filename(yaml, uri) if loadingOptions is None: loadingOptions = LoadingOptions(fileuri=uri) result, metadata = _document_load( union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader_or_array_of_union_of_CommandLineToolLoader_or_ExpressionToolLoader_or_WorkflowLoader_or_OperationLoader_or_ProcessGeneratorLoader, yaml, uri, loadingOptions, ) return result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/parser/cwl_v1_2_utils.py0000644000175100001660000006277214753463342021613 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 import hashlib import logging import os from collections import namedtuple from collections.abc import MutableMapping, MutableSequence from io import StringIO from typing import IO, Any, Optional, Union, cast from urllib.parse import urldefrag from schema_salad.exceptions import ValidationException from schema_salad.sourceline import SourceLine, add_lc_filename from schema_salad.utils import aslist, json_dumps, yaml_no_ts import cwl_utils.parser import cwl_utils.parser.cwl_v1_2 as cwl import cwl_utils.parser.utils from cwl_utils.errors import WorkflowException from cwl_utils.utils import yaml_dumps CONTENT_LIMIT: int = 64 * 1024 _logger = logging.getLogger("cwl_utils") SrcSink = namedtuple("SrcSink", ["src", "sink", "linkMerge", "message"]) def _compare_records( src: cwl.RecordSchema, sink: 
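The `_IdMapLoader` wrappers defined above implement schema-salad's id-map shorthand: a mapping keyed by identifier is normalised into a list of records (with scalar values promoted into the predicate field, e.g. "type") before the array loader runs. A simplified stand-alone sketch of that normalisation — `idmap_to_list` is an illustrative name, not the schema-salad implementation:

```python
def idmap_to_list(value, map_subject="id", map_predicate="type"):
    """Sketch of id-map expansion: {id: record} -> [record + {"id": id}]."""
    if not isinstance(value, dict):
        return value  # already a list (or scalar); loaders pass it through
    out = []
    for key, val in value.items():
        # A scalar value is shorthand for {map_predicate: value}
        entry = dict(val) if isinstance(val, dict) else {map_predicate: val}
        entry[map_subject] = key
        out.append(entry)
    return out

inputs = {"msg": "string", "count": {"type": "int", "default": 1}}
assert idmap_to_list(inputs) == [
    {"type": "string", "id": "msg"},
    {"type": "int", "default": 1, "id": "count"},
]
```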
cwl.RecordSchema, strict: bool = False ) -> bool: """ Compare two records, ensuring they have compatible fields. This handles normalizing record names, which will be relative to workflow step, so that they can be compared. """ srcfields = {cwl.shortname(field.name): field.type_ for field in (src.fields or {})} sinkfields = { cwl.shortname(field.name): field.type_ for field in (sink.fields or {}) } for key in sinkfields.keys(): if ( not can_assign_src_to_sink( srcfields.get(key, "null"), sinkfields.get(key, "null"), strict ) and sinkfields.get(key) is not None ): _logger.info( "Record comparison failure for %s and %s\n" "Did not match fields for %s: %s and %s", cast( Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], src ).name, cast( Union[cwl.InputRecordSchema, cwl.CommandOutputRecordSchema], sink ).name, key, srcfields.get(key), sinkfields.get(key), ) return False return True def _compare_type(type1: Any, type2: Any) -> bool: if isinstance(type1, cwl.ArraySchema) and isinstance(type2, cwl.ArraySchema): return _compare_type(type1.items, type2.items) elif isinstance(type1, cwl.RecordSchema) and isinstance(type2, cwl.RecordSchema): fields1 = { cwl.shortname(field.name): field.type_ for field in (type1.fields or {}) } fields2 = { cwl.shortname(field.name): field.type_ for field in (type2.fields or {}) } if fields1.keys() != fields2.keys(): return False return all(_compare_type(fields1[k], fields2[k]) for k in fields1.keys()) elif isinstance(type1, MutableSequence) and isinstance(type2, MutableSequence): if len(type1) != len(type2): return False for t1 in type1: if not any(_compare_type(t1, t2) for t2 in type2): return False return True else: return bool(type1 == type2) def _is_all_output_method_loop_step( param_to_step: dict[str, cwl.WorkflowStep], parm_id: str ) -> bool: if (source_step := param_to_step.get(parm_id)) is not None: for requirement in source_step.requirements or []: if isinstance(requirement, cwl.Loop) and requirement.outputMethod == "all": 
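The `MutableSequence` branch of `_compare_type` treats union types as order-insensitive: the unions must have the same length, and every member of one must match some member of the other. A minimal stand-alone sketch of just that branch (`compare_union` is an illustrative name; plain equality stands in for the recursive schema comparison):

```python
def compare_union(type1, type2):
    # Same length, and each member of type1 matches some member of type2,
    # regardless of ordering.
    if len(type1) != len(type2):
        return False
    return all(any(t1 == t2 for t2 in type2) for t1 in type1)

assert compare_union(["null", "File"], ["File", "null"])
assert not compare_union(["null", "File"], ["null", "string"])
assert not compare_union(["File"], ["File", "null"])
```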
return True return False def _is_conditional_step( param_to_step: dict[str, cwl.WorkflowStep], parm_id: str ) -> bool: if (source_step := param_to_step.get(parm_id)) is not None: if source_step.when is not None: return True return False def _inputfile_load( doc: Union[str, MutableMapping[str, Any], MutableSequence[Any]], baseuri: str, loadingOptions: cwl.LoadingOptions, addl_metadata_fields: Optional[MutableSequence[str]] = None, ) -> tuple[Any, cwl.LoadingOptions]: loader = cwl.CWLInputFileLoader if isinstance(doc, str): url = loadingOptions.fetcher.urljoin(baseuri, doc) if url in loadingOptions.idx: return loadingOptions.idx[url] doc_url, frg = urldefrag(url) text = loadingOptions.fetcher.fetch_text(doc_url) textIO = StringIO(text) textIO.name = str(doc_url) yaml = yaml_no_ts() result = yaml.load(textIO) add_lc_filename(result, doc_url) loadingOptions = cwl.LoadingOptions(copyfrom=loadingOptions, fileuri=doc_url) _inputfile_load( result, doc_url, loadingOptions, ) return loadingOptions.idx[url] if isinstance(doc, MutableMapping): addl_metadata = {} if addl_metadata_fields is not None: for mf in addl_metadata_fields: if mf in doc: addl_metadata[mf] = doc[mf] loadingOptions = cwl.LoadingOptions( copyfrom=loadingOptions, baseuri=baseuri, addl_metadata=addl_metadata, ) loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions, docRoot=baseuri), loadingOptions, ) return loadingOptions.idx[baseuri] if isinstance(doc, MutableSequence): loadingOptions.idx[baseuri] = ( loader.load(doc, baseuri, loadingOptions), loadingOptions, ) return loadingOptions.idx[baseuri] raise ValidationException( "Expected URI string, MutableMapping or MutableSequence, got %s" % type(doc) ) def can_assign_src_to_sink(src: Any, sink: Any, strict: bool = False) -> bool: """ Check for identical type specifications, ignoring extra keys like inputBinding. 
src: admissible source types sink: admissible sink types In non-strict comparison, at least one source type must match one sink type, except for 'null'. In strict comparison, all source types must match at least one sink type. """ if src == "Any" or sink == "Any": return True if isinstance(src, cwl.ArraySchema) and isinstance(sink, cwl.ArraySchema): return can_assign_src_to_sink(src.items, sink.items, strict) if isinstance(src, cwl.RecordSchema) and isinstance(sink, cwl.RecordSchema): return _compare_records(src, sink, strict) if isinstance(src, MutableSequence): if strict: for this_src in src: if not can_assign_src_to_sink(this_src, sink): return False return True for this_src in src: if this_src != "null" and can_assign_src_to_sink(this_src, sink): return True return False if isinstance(sink, MutableSequence): for this_sink in sink: if can_assign_src_to_sink(src, this_sink): return True return False return bool(src == sink) def check_all_types( src_dict: dict[str, Any], sinks: MutableSequence[Union[cwl.WorkflowStepInput, cwl.WorkflowOutputParameter]], param_to_step: dict[str, cwl.WorkflowStep], type_dict: dict[str, Any], ) -> dict[str, list[SrcSink]]: """Given a list of sinks, check if their types match with the types of their sources.""" validation: dict[str, list[SrcSink]] = {"warning": [], "exception": []} for sink in sinks: extra_message = ( "pickValue is %s" % sink.pickValue if sink.pickValue is not None else None ) if isinstance(sink, cwl.WorkflowOutputParameter): sourceName = "outputSource" sourceField = sink.outputSource elif isinstance(sink, cwl.WorkflowStepInput): sourceName = "source" sourceField = sink.source else: continue if sourceField is not None: if isinstance(sourceField, MutableSequence): linkMerge = sink.linkMerge or ( "merge_nested" if len(sourceField) > 1 else None ) if sink.pickValue in ["first_non_null", "the_only_non_null"]: linkMerge = None srcs_of_sink = [] for parm_id in sourceField: srcs_of_sink += [src_dict[parm_id]] if ( 
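The strict/non-strict distinction documented for `can_assign_src_to_sink` can be illustrated with a simplified re-implementation, in which plain strings and lists stand in for CWL schema objects (the record/array recursion of the real function is omitted):

```python
def can_assign(src, sink, strict=False):
    """Sketch: non-strict needs one non-null source type to fit; strict needs all."""
    if src == "Any" or sink == "Any":
        return True
    if isinstance(src, list):
        if strict:
            return all(can_assign(s, sink) for s in src)
        return any(s != "null" and can_assign(s, sink) for s in src)
    if isinstance(sink, list):
        return any(can_assign(src, s) for s in sink)
    return src == sink

assert can_assign(["null", "File"], "File")               # non-strict: "File" matches
assert not can_assign(["null", "File"], "File", strict=True)  # strict: "null" does not
assert can_assign("int", ["int", "string"])               # any sink member may match
```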
_is_conditional_step(param_to_step, parm_id) and sink.pickValue is not None ): validation["warning"].append( SrcSink( src_dict[parm_id], sink, linkMerge, message="Source is from conditional step, but pickValue is not used", ) ) if _is_all_output_method_loop_step(param_to_step, parm_id): src_typ = type_dict[src_dict[parm_id].id] type_dict[src_dict[parm_id].id] = cwl.ArraySchema( items=src_typ, type_="array" ) else: parm_id = cast(str, sourceField) if parm_id not in src_dict: raise SourceLine(sink, sourceName, ValidationException).makeError( f"{sourceName} not found: {parm_id}" ) srcs_of_sink = [src_dict[parm_id]] linkMerge = None if sink.pickValue is not None: validation["warning"].append( SrcSink( src_dict[parm_id], sink, linkMerge, message="pickValue is used but only a single input source is declared", ) ) if _is_conditional_step(param_to_step, parm_id): src_typ = aslist(type_dict[src_dict[parm_id].id]) snk_typ = type_dict[sink.id] if "null" not in src_typ: src_typ = ["null"] + cast(list[Any], src_typ) if ( not isinstance(snk_typ, MutableSequence) or "null" not in snk_typ ): validation["warning"].append( SrcSink( src_dict[parm_id], sink, linkMerge, message="Source is from conditional step and may produce `null`", ) ) type_dict[src_dict[parm_id].id] = src_typ if _is_all_output_method_loop_step(param_to_step, parm_id): src_typ = type_dict[src_dict[parm_id].id] type_dict[src_dict[parm_id].id] = cwl.ArraySchema( items=src_typ, type_="array" ) for src in srcs_of_sink: check_result = check_types( type_dict[cast(str, src.id)], type_dict[sink.id], linkMerge, getattr(sink, "valueFrom", None), ) if check_result == "warning": validation["warning"].append( SrcSink(src, sink, linkMerge, extra_message) ) elif check_result == "exception": validation["exception"].append( SrcSink(src, sink, linkMerge, extra_message) ) return validation def check_types( srctype: Any, sinktype: Any, linkMerge: Optional[str], valueFrom: Optional[str] = None, ) -> str: """ Check if the source and sink 
types are correct. Acceptable types are "pass", "warning", or "exception". """ if valueFrom is not None: return "pass" if linkMerge is None: if can_assign_src_to_sink(srctype, sinktype, strict=True): return "pass" if can_assign_src_to_sink(srctype, sinktype, strict=False): return "warning" return "exception" if linkMerge == "merge_nested": return check_types( cwl.ArraySchema(items=srctype, type_="array"), sinktype, None, None ) if linkMerge == "merge_flattened": return check_types(merge_flatten_type(srctype), sinktype, None, None) raise ValidationException(f"Invalid value {linkMerge} for linkMerge field.") def content_limit_respected_read_bytes(f: IO[bytes]) -> bytes: """ Read file content up to 64 kB as a byte array. Throw exception for larger files (see https://www.commonwl.org/v1.2/Workflow.html#Changelog). """ contents = f.read(CONTENT_LIMIT + 1) if len(contents) > CONTENT_LIMIT: raise WorkflowException( "file is too large, loadContents limited to %d bytes" % CONTENT_LIMIT ) return contents def content_limit_respected_read(f: IO[bytes]) -> str: """ Read file content up to 64 kB as an utf-8 encoded string. Throw exception for larger files (see https://www.commonwl.org/v1.2/Workflow.html#Changelog). """ return content_limit_respected_read_bytes(f).decode("utf-8") def convert_stdstreams_to_files(clt: cwl.CommandLineTool) -> None: """Convert stdin, stdout and stderr type shortcuts to files.""" for out in clt.outputs: if out.type_ == "stdout": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stdout shortcut." ) if clt.stdout is None: clt.stdout = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stdout) elif out.type_ == "stderr": if out.outputBinding is not None: raise ValidationException( "Not allowed to specify outputBinding when using stderr shortcut." 
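The `CONTENT_LIMIT` check above reads one byte past the cap so oversized files are detected without buffering them whole. A stand-alone sketch of that pattern (`RuntimeError` stands in for `WorkflowException`; `limited_read` is an illustrative name):

```python
import io

CONTENT_LIMIT = 64 * 1024  # 64 kB loadContents cap, as in cwl_v1_2_utils

def limited_read(f):
    # Request one byte beyond the limit: if we get it, the file is too big.
    contents = f.read(CONTENT_LIMIT + 1)
    if len(contents) > CONTENT_LIMIT:
        raise RuntimeError(
            "file is too large, loadContents limited to %d bytes" % CONTENT_LIMIT
        )
    return contents

assert limited_read(io.BytesIO(b"small")) == b"small"
```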
) if clt.stderr is None: clt.stderr = str( hashlib.sha1( # nosec json_dumps(clt.save(), sort_keys=True).encode("utf-8") ).hexdigest() ) out.type_ = "File" out.outputBinding = cwl.CommandOutputBinding(glob=clt.stderr) for inp in clt.inputs: if inp.type_ == "stdin": if inp.inputBinding is not None: raise ValidationException( "Not allowed to specify inputBinding when using stdin shortcut." ) if clt.stdin is not None: raise ValidationException( "Not allowed to specify stdin path when using stdin type shortcut." ) else: clt.stdin = ( "$(inputs.%s.path)" % cast(str, inp.id).rpartition("#")[2].split("/")[-1] ) inp.type_ = "File" def load_inputfile( doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.2 input file from a serialized YAML string or a YAML object.""" if baseuri is None: baseuri = cwl.file_uri(os.getcwd()) + "/" if loadingOptions is None: loadingOptions = cwl.LoadingOptions() result, metadata = _inputfile_load( doc, baseuri, loadingOptions, ) return result def load_inputfile_by_string( string: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.2 input file from a serialized YAML string.""" yaml = yaml_no_ts() result = yaml.load(string) add_lc_filename(result, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( result, uri, loadingOptions, ) return result def load_inputfile_by_yaml( yaml: Any, uri: str, loadingOptions: Optional[cwl.LoadingOptions] = None, ) -> Any: """Load a CWL v1.2 input file from a YAML object.""" add_lc_filename(yaml, uri) if loadingOptions is None: loadingOptions = cwl.LoadingOptions(fileuri=uri) result, metadata = _inputfile_load( yaml, uri, loadingOptions, ) return result def merge_flatten_type(src: Any) -> Any: """Return the merge flattened type of the source type.""" if isinstance(src, MutableSequence): return [merge_flatten_type(t) for t in src] if
isinstance(src, cwl.ArraySchema): return src return cwl.ArraySchema(type_="array", items=src) def type_for_step_input( step: cwl.WorkflowStep, in_: cwl.WorkflowStepInput, ) -> Any: """Determine the type for the given step input.""" if in_.valueFrom is not None: return "Any" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.inputs: for step_input in step_run.inputs: if cast(str, step_input.id).split("#")[-1] == in_.id.split("#")[-1]: input_type = step_input.type_ if step.scatter is not None and in_.id in aslist(step.scatter): input_type = cwl.ArraySchema(items=input_type, type_="array") return input_type return "Any" def type_for_step_output( step: cwl.WorkflowStep, sourcename: str, ) -> Any: """Determine the type for the given step output.""" step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) if step_run and step_run.outputs: for output in step_run.outputs: if ( output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): output_type = output.type_ if step.scatter is not None: if step.scatterMethod == "nested_crossproduct": for _ in range(len(aslist(step.scatter))): output_type = cwl.ArraySchema( items=output_type, type_="array" ) else: output_type = cwl.ArraySchema(items=output_type, type_="array") return output_type raise ValidationException( "param {} not found in {}.".format( sourcename, yaml_dumps(cwl.save(step)), ) ) def type_for_source( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, linkMerge: Optional[str] = None, pickValue: Optional[str] = None, ) -> Any: """Determine the type for the given sourcenames.""" scatter_context: list[Optional[tuple[int, str]]] = [] params = param_for_source_id(process, sourcenames, parent, scatter_context) if not isinstance(params, list): new_type = params.type_ 
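`merge_flatten_type` leaves array schemas untouched, maps over unions, and wraps every other type in an array. A sketch with a toy `ArraySchema` stand-in (the real code uses `cwl.ArraySchema`):

```python
class ArraySchema:
    """Toy stand-in for cwl.ArraySchema, holding only the item type."""
    def __init__(self, items):
        self.items = items

def merge_flatten(src):
    if isinstance(src, list):          # union: flatten each member
        return [merge_flatten(t) for t in src]
    if isinstance(src, ArraySchema):   # already an array: pass through
        return src
    return ArraySchema(src)            # scalar: promote to array-of-type

assert merge_flatten("File").items == "File"
arr = ArraySchema("int")
assert merge_flatten(arr) is arr
```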
if scatter_context[0] is not None: if scatter_context[0][1] == "nested_crossproduct": for _ in range(scatter_context[0][0]): new_type = cwl.ArraySchema(items=new_type, type_="array") else: new_type = cwl.ArraySchema(items=new_type, type_="array") if linkMerge == "merge_nested": new_type = cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": new_type = merge_flatten_type(new_type) if pickValue is not None: if isinstance(new_type, cwl.ArraySchema): if pickValue in ["first_non_null", "the_only_non_null"]: new_type = new_type.items return new_type new_type = [] for p, sc in zip(params, scatter_context): if isinstance(p, str) and not any(_compare_type(t, p) for t in new_type): cur_type = p elif hasattr(p, "type_") and not any( _compare_type(t, p.type_) for t in new_type ): cur_type = p.type_ else: cur_type = None if cur_type is not None: if sc is not None: if sc[1] == "nested_crossproduct": for _ in range(sc[0]): cur_type = cwl.ArraySchema(items=cur_type, type_="array") else: cur_type = cwl.ArraySchema(items=cur_type, type_="array") new_type.append(cur_type) if len(new_type) == 1: new_type = new_type[0] if linkMerge == "merge_nested": new_type = cwl.ArraySchema(items=new_type, type_="array") elif linkMerge == "merge_flattened": new_type = merge_flatten_type(new_type) elif isinstance(sourcenames, list) and len(sourcenames) > 1: new_type = cwl.ArraySchema(items=new_type, type_="array") if pickValue is not None: if isinstance(new_type, cwl.ArraySchema): if pickValue in ["first_non_null", "the_only_non_null"]: new_type = new_type.items return new_type def param_for_source_id( process: Union[cwl.CommandLineTool, cwl.Workflow, cwl.ExpressionTool], sourcenames: Union[str, list[str]], parent: Optional[cwl.Workflow] = None, scatter_context: Optional[list[Optional[tuple[int, str]]]] = None, ) -> Union[list[cwl.WorkflowInputParameter], cwl.WorkflowInputParameter]: """Find the process input parameter that matches one of the given sourcenames.""" if 
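The scatter handling in `type_for_source` (and `type_for_step_output`) wraps the source type in one array level per scattered input for `nested_crossproduct`, and in a single level for `dotproduct`/`flat_crossproduct`. A sketch with a toy `ArraySchema` stand-in and an illustrative helper name:

```python
class ArraySchema:
    def __init__(self, items):
        self.items = items

def wrap_scattered_output(output_type, n_scattered, scatter_method):
    # nested_crossproduct nests one array level per scattered input;
    # the other scatter methods add exactly one level.
    if scatter_method == "nested_crossproduct":
        for _ in range(n_scattered):
            output_type = ArraySchema(output_type)
    else:
        output_type = ArraySchema(output_type)
    return output_type

t = wrap_scattered_output("File", 2, "nested_crossproduct")
assert isinstance(t.items, ArraySchema) and t.items.items == "File"
t2 = wrap_scattered_output("File", 2, "dotproduct")
assert t2.items == "File"
```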
isinstance(sourcenames, str): sourcenames = [sourcenames] params: list[cwl.WorkflowInputParameter] = [] for sourcename in sourcenames: if not isinstance(process, cwl.Workflow): for param in process.inputs: if param.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(param) if scatter_context is not None: scatter_context.append(None) targets = [process] if parent: targets.append(parent) for target in targets: if isinstance(target, cwl.Workflow): for inp in target.inputs: if inp.id.split("#")[-1] == sourcename.split("#")[-1]: params.append(inp) if scatter_context is not None: scatter_context.append(None) for step in target.steps: if ( "/".join(sourcename.split("#")[-1].split("/")[:-1]) == step.id.split("#")[-1] and step.out ): step_run = cwl_utils.parser.utils.load_step(step) cwl_utils.parser.utils.convert_stdstreams_to_files(step_run) for outp in step.out: outp_id = outp if isinstance(outp, str) else outp.id if ( outp_id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): if step_run and step_run.outputs: for output in step_run.outputs: if ( output.id.split("#")[-1].split("/")[-1] == sourcename.split("#")[-1].split("/")[-1] ): params.append(output) if scatter_context is not None: if scatter_context is not None: if isinstance(step.scatter, str): scatter_context.append( ( 1, step.scatterMethod or "dotproduct", ) ) elif isinstance( step.scatter, MutableSequence ): scatter_context.append( ( len(step.scatter), step.scatterMethod or "dotproduct", ) ) else: scatter_context.append(None) if len(params) == 1: return params[0] elif len(params) > 1: return params raise WorkflowException( "param {} not found in {}\n{}.".format( sourcename, yaml_dumps(cwl.save(process)), (f" or\n {yaml_dumps(cwl.save(parent))}" if parent is not None else ""), ) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/parser/latest.py0000644000175100001660000000010614753463342020232 
"""Convenience module.""" from .cwl_v1_2 import * # noqa: F401,F403 cwl_utils-0.37/cwl_utils/parser/utils.py """CWL parser utility functions.""" import copy import logging import os from collections.abc import MutableSequence from pathlib import Path from types import ModuleType from typing import Any, Optional, Union, cast from urllib.parse import unquote_plus, urlparse from schema_salad.exceptions import ValidationException from schema_salad.sourceline import SourceLine, strip_dup_lineno from schema_salad.utils import json_dumps, yaml_no_ts import cwl_utils import cwl_utils.parser from . import ( LoadingOptions, Process, Workflow, WorkflowStep, WorkflowStepInput, cwl_v1_0, cwl_v1_0_utils, cwl_v1_1, cwl_v1_1_utils, cwl_v1_2, cwl_v1_2_utils, ) _logger = logging.getLogger("cwl_utils") def convert_stdstreams_to_files(process: Process) -> None: """Convert stdin, stdout and stderr type shortcuts to files.""" if isinstance(process, cwl_v1_0.CommandLineTool): cwl_v1_0_utils.convert_stdstreams_to_files(process) elif isinstance(process, cwl_v1_1.CommandLineTool): cwl_v1_1_utils.convert_stdstreams_to_files(process) elif isinstance(process, cwl_v1_2.CommandLineTool): cwl_v1_2_utils.convert_stdstreams_to_files(process) def load_inputfile_by_uri( version: str, path: Union[str, Path], loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """Load a CWL input file from a URI or a path.""" if isinstance(path, str): uri = urlparse(path) if not uri.scheme or uri.scheme == "file": real_path = Path(unquote_plus(uri.path)).resolve().as_uri() else: real_path = path else: real_path = path.resolve().as_uri() if version is None: raise ValidationException("could not get the cwlVersion") baseuri = str(real_path) if loadingOptions is None: if version == "v1.0": loadingOptions =
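The path handling at the top of `load_inputfile_by_uri` normalises bare paths and `file:` URLs into absolute `file://` URIs while passing other schemes through unchanged. A stand-alone sketch of just that step (`normalise` is an illustrative name):

```python
from pathlib import Path
from urllib.parse import unquote_plus, urlparse

def normalise(path):
    # Bare paths and file: URLs become absolute file:// URIs;
    # http/https/etc. pass through untouched.
    uri = urlparse(path)
    if not uri.scheme or uri.scheme == "file":
        return Path(unquote_plus(uri.path)).resolve().as_uri()
    return path

assert normalise("https://example.com/wf.cwl") == "https://example.com/wf.cwl"
local = normalise("/tmp/job.yml")
assert local.startswith("file://") and local.endswith("job.yml")
```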
cwl_v1_0.LoadingOptions(fileuri=baseuri) elif version == "v1.1": loadingOptions = cwl_v1_1.LoadingOptions(fileuri=baseuri) elif version == "v1.2": loadingOptions = cwl_v1_2.LoadingOptions(fileuri=baseuri) else: raise ValidationException( f"Version error. Did not recognise {version} as a CWL version" ) doc = loadingOptions.fetcher.fetch_text(real_path) return load_inputfile_by_string(version, doc, baseuri, loadingOptions) def load_inputfile( version: str, doc: Any, baseuri: Optional[str] = None, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """Load a CWL input file from a serialized YAML string or a YAML object.""" if baseuri is None: baseuri = cwl_v1_0.file_uri(os.getcwd()) + "/" if isinstance(doc, str): return load_inputfile_by_string(version, doc, baseuri, loadingOptions) return load_inputfile_by_yaml(version, doc, baseuri, loadingOptions) def load_inputfile_by_string( version: str, string: str, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """Load a CWL input file from a serialized YAML string.""" yaml = yaml_no_ts() result = yaml.load(string) return load_inputfile_by_yaml(version, result, uri, loadingOptions) def load_inputfile_by_yaml( version: str, yaml: Any, uri: str, loadingOptions: Optional[LoadingOptions] = None, ) -> Any: """Load a CWL input file from a YAML object.""" if version == "v1.0": result = cwl_v1_0_utils.load_inputfile_by_yaml( yaml, uri, cast(Optional[cwl_v1_0.LoadingOptions], loadingOptions) ) elif version == "v1.1": result = cwl_v1_1_utils.load_inputfile_by_yaml( yaml, uri, cast(Optional[cwl_v1_1.LoadingOptions], loadingOptions) ) elif version == "v1.2": result = cwl_v1_2_utils.load_inputfile_by_yaml( yaml, uri, cast(Optional[cwl_v1_2.LoadingOptions], loadingOptions) ) elif version is None: raise ValidationException("could not get the cwlVersion") else: raise ValidationException( f"Version error. 
Did not recognise {version} as a CWL version" ) return result def load_step( step: cwl_utils.parser.WorkflowStep, ) -> Process: if isinstance(step.run, str): step_run = cwl_utils.parser.load_document_by_uri( path=step.loadingOptions.fetcher.urljoin( base_url=cast(str, step.loadingOptions.fileuri), url=step.run, ), loadingOptions=step.loadingOptions, ) return cast(Process, step_run) else: return cast(Process, copy.deepcopy(step.run)) def static_checker(workflow: cwl_utils.parser.Workflow) -> None: """Check if all source and sink types of a workflow are compatible before run time.""" step_inputs = [] step_outputs = [] type_dict = {} param_to_step = {} for step in workflow.steps: if step.in_ is not None: step_inputs.extend(step.in_) param_to_step.update({s.id: step for s in step.in_}) type_dict.update( { cast(str, s.id): type_for_step_input( step, s, cast(str, workflow.cwlVersion) ) for s in step.in_ } ) if step.out is not None: # FIXME: the correct behaviour here would be to create WorkflowStepOutput directly at load time if workflow.cwlVersion == "v1.0": step_outs = [ cwl_v1_0.WorkflowStepOutput(s) if isinstance(s, str) else s for s in step.out ] elif workflow.cwlVersion == "v1.1": step_outs = [ cwl_v1_1.WorkflowStepOutput(s) if isinstance(s, str) else s for s in step.out ] elif workflow.cwlVersion == "v1.2": step_outs = [ cwl_v1_2.WorkflowStepOutput(s) if isinstance(s, str) else s for s in step.out ] else: raise Exception(f"Unsupported CWL version {workflow.cwlVersion}") step_outputs.extend(step_outs) param_to_step.update({s.id: step for s in step_outs}) type_dict.update( { s.id: type_for_step_output(step, s.id, workflow.cwlVersion) for s in step_outs } ) src_dict = { **{param.id: param for param in workflow.inputs}, **{param.id: param for param in step_outputs}, } type_dict = { **type_dict, **{param.id: param.type_ for param in workflow.inputs}, **{param.id: param.type_ for param in workflow.outputs}, } parser: ModuleType step_inputs_val: dict[str, Any] 
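The final pass of `static_checker` flags any step input whose type can never be null and that lacks a `source`, a `default`, and a `valueFrom` expression. A simplified stand-alone sketch of that rule (`missing_required` is an illustrative name; strings and lists stand in for CWL type objects):

```python
def missing_required(sink_type, source=None, default=None, value_from=None):
    # A step input is flagged when its type cannot be null and no
    # source/default/valueFrom is provided.
    nullable = sink_type == "null" or (
        isinstance(sink_type, list) and "null" in sink_type
    )
    return (
        not nullable and source is None and default is None and value_from is None
    )

assert missing_required("File")                       # required, nothing provided
assert not missing_required(["null", "File"])         # optional input is fine
assert not missing_required("File", source="w/step/out")  # fed by a source
```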
workflow_outputs_val: dict[str, Any] if workflow.cwlVersion == "v1.0": parser = cwl_v1_0 step_inputs_val = cwl_v1_0_utils.check_all_types( src_dict, step_inputs, type_dict ) workflow_outputs_val = cwl_v1_0_utils.check_all_types( src_dict, workflow.outputs, type_dict ) elif workflow.cwlVersion == "v1.1": parser = cwl_v1_1 step_inputs_val = cwl_v1_1_utils.check_all_types( src_dict, step_inputs, type_dict ) workflow_outputs_val = cwl_v1_1_utils.check_all_types( src_dict, workflow.outputs, type_dict ) elif workflow.cwlVersion == "v1.2": parser = cwl_v1_2 step_inputs_val = cwl_v1_2_utils.check_all_types( src_dict, step_inputs, param_to_step, type_dict ) workflow_outputs_val = cwl_v1_2_utils.check_all_types( src_dict, workflow.outputs, param_to_step, type_dict ) else: raise Exception(f"Unsupported CWL version {workflow.cwlVersion}") warnings = step_inputs_val["warning"] + workflow_outputs_val["warning"] exceptions = step_inputs_val["exception"] + workflow_outputs_val["exception"] warning_msgs = [] exception_msgs = [] for warning in warnings: src = warning.src sink = warning.sink linkMerge = warning.linkMerge msg = ( SourceLine(src, "type").makeError( "Source '%s' of type %s may be incompatible" % ( parser.shortname(src.id), json_dumps(parser.save(type_dict[src.id])), ) ) + "\n" + SourceLine(sink, "type").makeError( " with sink '%s' of type %s" % ( parser.shortname(sink.id), json_dumps(parser.save(type_dict[sink.id])), ) ) ) if linkMerge is not None: msg += "\n" + SourceLine(sink).makeError( " source has linkMerge method %s" % linkMerge ) if warning.message is not None: msg += "\n" + SourceLine(sink).makeError(" " + warning.message) if msg: warning_msgs.append(msg) for exception in exceptions: src = exception.src sink = exception.sink linkMerge = exception.linkMerge extra_message = exception.message msg = ( SourceLine(src, "type").makeError( "Source '%s' of type %s is incompatible" % (parser.shortname(src.id), json_dumps(parser.save(type_dict[src.id]))) ) + "\n" + 
SourceLine(sink, "type").makeError( " with sink '%s' of type %s" % ( parser.shortname(sink.id), json_dumps(parser.save(type_dict[sink.id])), ) ) ) if extra_message is not None: msg += "\n" + SourceLine(sink).makeError(" " + extra_message) if linkMerge is not None: msg += "\n" + SourceLine(sink).makeError( " source has linkMerge method %s" % linkMerge ) exception_msgs.append(msg) for sink in step_inputs: if ( "null" != type_dict[sink.id] and not ( isinstance(type_dict[sink.id], MutableSequence) and "null" in type_dict[sink.id] ) and getattr(sink, "source", None) is None and getattr(sink, "default", None) is None and getattr(sink, "valueFrom", None) is None ): msg = SourceLine(sink).makeError( "Required parameter '%s' does not have source, default, or valueFrom expression" % parser.shortname(sink.id) ) exception_msgs.append(msg) all_warning_msg = strip_dup_lineno("\n".join(warning_msgs)) all_exception_msg = strip_dup_lineno("\n" + "\n".join(exception_msgs)) if all_warning_msg: _logger.warning("Workflow checker warning:\n%s", all_warning_msg) if exceptions: raise ValidationException(all_exception_msg) def type_for_source( process: Process, sourcenames: Union[str, list[str]], parent: Optional[Workflow] = None, linkMerge: Optional[str] = None, pickValue: Optional[str] = None, ) -> Any: """Determine the type for the given sourcenames.""" if process.cwlVersion == "v1.0": return cwl_v1_0_utils.type_for_source( cast( Union[ cwl_v1_0.CommandLineTool, cwl_v1_0.Workflow, cwl_v1_0.ExpressionTool, ], process, ), sourcenames, cast(Optional[cwl_v1_0.Workflow], parent), linkMerge, ) elif process.cwlVersion == "v1.1": return cwl_v1_1_utils.type_for_source( cast( Union[ cwl_v1_1.CommandLineTool, cwl_v1_1.Workflow, cwl_v1_1.ExpressionTool, ], process, ), sourcenames, cast(Optional[cwl_v1_1.Workflow], parent), linkMerge, ) elif process.cwlVersion == "v1.2": return cwl_v1_2_utils.type_for_source( cast( Union[ cwl_v1_2.CommandLineTool, cwl_v1_2.Workflow, cwl_v1_2.ExpressionTool, ], 
process, ), sourcenames, cast(Optional[cwl_v1_2.Workflow], parent), linkMerge, pickValue, ) elif process.cwlVersion is None: raise ValidationException("could not get the cwlVersion") else: raise ValidationException( f"Version error. Did not recognise {process.cwlVersion} as a CWL version" ) def type_for_step_input( step: WorkflowStep, in_: WorkflowStepInput, cwlVersion: str ) -> Any: """Determine the type for the given step input.""" if cwlVersion == "v1.0": return cwl_v1_0_utils.type_for_step_input( cast(cwl_v1_0.WorkflowStep, step), cast(cwl_v1_0.WorkflowStepInput, in_) ) elif cwlVersion == "v1.1": return cwl_v1_1_utils.type_for_step_input( cast(cwl_v1_1.WorkflowStep, step), cast(cwl_v1_1.WorkflowStepInput, in_) ) elif cwlVersion == "v1.2": return cwl_v1_2_utils.type_for_step_input( cast(cwl_v1_2.WorkflowStep, step), cast(cwl_v1_2.WorkflowStepInput, in_) ) def type_for_step_output(step: WorkflowStep, sourcename: str, cwlVersion: str) -> Any: """Determine the type for the given step output.""" if cwlVersion == "v1.0": return cwl_v1_0_utils.type_for_step_output( cast(cwl_v1_0.WorkflowStep, step), sourcename ) elif cwlVersion == "v1.1": return cwl_v1_1_utils.type_for_step_output( cast(cwl_v1_1.WorkflowStep, step), sourcename ) elif cwlVersion == "v1.2": return cwl_v1_2_utils.type_for_step_output( cast(cwl_v1_2.WorkflowStep, step), sourcename ) def param_for_source_id( process: Union[ cwl_utils.parser.CommandLineTool, cwl_utils.parser.Workflow, cwl_utils.parser.ExpressionTool, ], sourcenames: Union[str, list[str]], parent: Optional[cwl_utils.parser.Workflow] = None, scatter_context: Optional[list[Optional[tuple[int, str]]]] = None, ) -> Union[ Union[ list[cwl_utils.parser.cwl_v1_0.InputParameter], cwl_utils.parser.cwl_v1_0.InputParameter, ], Union[ list[cwl_utils.parser.cwl_v1_1.WorkflowInputParameter], cwl_utils.parser.cwl_v1_1.WorkflowInputParameter, ], Union[ list[cwl_utils.parser.cwl_v1_2.WorkflowInputParameter], cwl_utils.parser.cwl_v1_2.WorkflowInputParameter,
], ]: if process.cwlVersion == "v1.0": return cwl_utils.parser.cwl_v1_0_utils.param_for_source_id( cast( Union[ cwl_utils.parser.cwl_v1_0.CommandLineTool, cwl_utils.parser.cwl_v1_0.Workflow, cwl_utils.parser.cwl_v1_0.ExpressionTool, ], process, ), sourcenames, cast(cwl_utils.parser.cwl_v1_0.Workflow, parent), scatter_context, ) elif process.cwlVersion == "v1.1": return cwl_utils.parser.cwl_v1_1_utils.param_for_source_id( cast( Union[ cwl_utils.parser.cwl_v1_1.CommandLineTool, cwl_utils.parser.cwl_v1_1.Workflow, cwl_utils.parser.cwl_v1_1.ExpressionTool, ], process, ), sourcenames, cast(cwl_utils.parser.cwl_v1_1.Workflow, parent), scatter_context, ) elif process.cwlVersion == "v1.2": return cwl_utils.parser.cwl_v1_2_utils.param_for_source_id( cast( Union[ cwl_utils.parser.cwl_v1_2.CommandLineTool, cwl_utils.parser.cwl_v1_2.Workflow, cwl_utils.parser.cwl_v1_2.ExpressionTool, ], process, ), sourcenames, cast(cwl_utils.parser.cwl_v1_2.Workflow, parent), scatter_context, ) elif process.cwlVersion is None: raise ValidationException("could not get the cwlVersion") else: raise ValidationException( f"Version error. 
Did not recognise {process.cwlVersion} as a CWL version" ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/py.typed0000644000175100001660000000000014753463342016560 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/sandboxjs.py0000644000175100001660000005723214753463342017451 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Safe execution of CWL Expressions in a NodeJS sandbox.""" import collections import errno import glob import json import os import re import select import subprocess # nosec import threading from abc import ABC, abstractmethod from collections.abc import Awaitable, Mapping, MutableMapping, MutableSequence from importlib.resources import files from io import BytesIO from typing import Any, Deque, Optional, Union, cast from schema_salad.utils import json_dumps from cwl_utils.errors import JavascriptException, WorkflowException from cwl_utils.loghandler import _logger from cwl_utils.types import CWLOutputType from cwl_utils.utils import singularity_supports_userns default_timeout = 20 """Default number of seconds to wait while running a javascript engine.""" seg_symbol = r"""\w+""" seg_single = r"""\['([^']|\\')+'\]""" seg_double = r"""\["([^"]|\\")+"\]""" seg_index = r"""\[[0-9]+\]""" segments = rf"(\.{seg_symbol}|{seg_single}|{seg_double}|{seg_index})" segment_re = re.compile(segments, flags=re.UNICODE) param_str = rf"\(({seg_symbol}){segments}*\)$" param_re = re.compile(param_str, flags=re.UNICODE) def code_fragment_to_js(jscript: str, jslib: str = "") -> str: if isinstance(jscript, str) and len(jscript) > 1 and jscript[0] == "{": inner_js = jscript else: inner_js = "{return (%s);}" % jscript return f'"use strict";\n{jslib}\n(function(){inner_js})()' def linenum(fn: str) -> str: lines = fn.splitlines() ofs = 0 maxlines = 99 if len(lines) > maxlines: ofs = len(lines) - maxlines 
lines = lines[-maxlines:] return "\n".join("%02i %s" % (i + ofs + 1, b) for i, b in enumerate(lines)) def stdfmt(data: str) -> str: if "\n" in data: return "\n" + data.strip() return data class JSEngine(ABC): @abstractmethod def eval( self, scan: str, jslib: str = "", **kwargs: Any ) -> Union[CWLOutputType, Awaitable[CWLOutputType]]: ... @abstractmethod def regex_eval( self, parsed_string: str, remaining_string: str, current_value: CWLOutputType, **kwargs: Any, ) -> Union[CWLOutputType, Awaitable[CWLOutputType]]: ... class NodeJSEngine(JSEngine): localdata = threading.local() def __init__( self, have_node_slim: bool = False, minimum_node_version_str: str = "0.10.26", process_finished_str: str = "r1cepzbhUTxtykz5XTC4\n", ): self.have_node_slim: bool = have_node_slim self.minimum_node_version_str: str = minimum_node_version_str self.process_finished_str: str = process_finished_str self.processes_to_kill: Deque[subprocess.Popen[str]] = collections.deque() def __del__(self) -> None: try: while self.processes_to_kill: process = self.processes_to_kill.popleft() if isinstance(process.args, MutableSequence): args = process.args else: args = [process.args] cidfile = [ str(arg).split("=")[1] for arg in args if "--cidfile" in str(arg) ] if cidfile: # Try to be nice try: with open(cidfile[0]) as inp_stream: p = subprocess.Popen( # nosec [args[0], "kill", inp_stream.read()], shell=False, # nosec ) try: p.wait(timeout=10) except subprocess.TimeoutExpired: p.kill() except FileNotFoundError: pass if process.stdin: process.stdin.close() try: process.wait(10) except subprocess.TimeoutExpired: pass process.kill() except TypeError: pass def check_js_threshold_version(self, working_alias: str) -> bool: """ Check whether the Node.js engine version on the system meets the allowed minimum version.
https://github.com/nodejs/node/blob/master/CHANGELOG.md#nodejs-changelog """ # parse nodejs version into int Tuple: 'v4.2.6\n' -> [4, 2, 6] current_version_str = subprocess.check_output( # nosec [working_alias, "-v"], text=True ) current_version = [ int(v) for v in current_version_str.strip().strip("v").split(".") ] minimum_node_version = [ int(v) for v in self.minimum_node_version_str.split(".") ] return current_version >= minimum_node_version def exec_js_process( self, js_text: str, timeout: float = default_timeout, js_console: bool = False, context: Optional[str] = None, force_docker_pull: bool = False, container_engine: str = "docker", ) -> tuple[int, str, str]: """ Run a javascript text. :param timeout: Max number of seconds to wait. :returns: A tuple of the return code, stdout, and stderr of the javascript engine invocation. """ if not hasattr(self.localdata, "procs"): self.localdata.procs = {} if js_console and context is not None: raise NotImplementedError("js_console=True and context not implemented") if js_console: js_engine = "cwlNodeEngineJSConsole.js" _logger.warning( "Running with support for javascript console in expressions (DO NOT USE IN PRODUCTION)" ) elif context is not None: js_engine = "cwlNodeEngineWithContext.js" else: js_engine = "cwlNodeEngine.js" created_new_process = False if context is not None: nodejs = self.localdata.procs.get((js_engine, context)) else: nodejs = self.localdata.procs.get(js_engine) if nodejs is None or nodejs.poll() is not None: js_engine_code = files("cwl_utils").joinpath(js_engine).read_text("utf-8") created_new_process = True new_proc = self.new_js_proc( js_engine_code, force_docker_pull=force_docker_pull, container_engine=container_engine, ) if context is None: self.localdata.procs[js_engine] = new_proc nodejs = new_proc else: self.localdata.procs[(js_engine, context)] = new_proc nodejs = new_proc killed = [] def terminate() -> None: """Kill the node process if it exceeds timeout limit.""" try: killed.append(True) 
nodejs.kill() except OSError: pass timer = threading.Timer(timeout, terminate) timer.daemon = True timer.start() stdin_text = "" if created_new_process and context is not None: stdin_text = json_dumps(context) + "\n" stdin_text += json_dumps(js_text) + "\n" stdin_buf = BytesIO(stdin_text.encode("utf-8")) stdout_buf = BytesIO() stderr_buf = BytesIO() rselect: list[BytesIO] = [nodejs.stdout, nodejs.stderr] wselect: list[BytesIO] = [nodejs.stdin] def process_finished() -> bool: return stdout_buf.getvalue().decode("utf-8").endswith( self.process_finished_str ) and stderr_buf.getvalue().decode("utf-8").endswith( self.process_finished_str ) while not process_finished() and timer.is_alive(): rready, wready, _ = select.select(rselect, wselect, []) try: if nodejs.stdin in wready: buf = stdin_buf.read(select.PIPE_BUF) if buf: os.write(nodejs.stdin.fileno(), buf) for pipes in ((nodejs.stdout, stdout_buf), (nodejs.stderr, stderr_buf)): if pipes[0] in rready: buf = os.read(pipes[0].fileno(), select.PIPE_BUF) if buf: pipes[1].write(buf) except OSError: break timer.cancel() stdin_buf.close() stdoutdata = stdout_buf.getvalue()[: -len(self.process_finished_str) - 1] stderrdata = stderr_buf.getvalue()[: -len(self.process_finished_str) - 1] nodejs.poll() if nodejs.poll() not in (None, 0): if killed: returncode = -1 else: returncode = nodejs.returncode else: returncode = 0 return returncode, stdoutdata.decode("utf-8"), stderrdata.decode("utf-8") def new_js_proc( self, js_text: str, force_docker_pull: bool = False, container_engine: str = "docker", ) -> "subprocess.Popen[str]": """Return a subprocess ready to submit javascript to.""" required_node_version, docker = (False,) * 2 nodejs = None # type: Optional[subprocess.Popen[str]] trynodes = ("nodejs", "node") for n in trynodes: try: if ( subprocess.check_output( # nosec [n, "--eval", "process.stdout.write('t')"], text=True, ) != "t" ): continue else: nodejs = subprocess.Popen( # nosec [n, "--eval", js_text], stdin=subprocess.PIPE, 
stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True, ) self.processes_to_kill.append(nodejs) required_node_version = self.check_js_threshold_version(n) break except (subprocess.CalledProcessError, OSError): pass if nodejs is None or nodejs is not None and required_node_version is False: try: nodeimg = "docker.io/node:alpine" if container_engine == "singularity": nodeimg = f"docker://{nodeimg}" elif container_engine in ("podman", "udocker"): nodeimg = "docker.io/library/node:alpine" if not self.have_node_slim: singularity_cache: Optional[str] = None if container_engine in ("docker", "podman"): dockerimgs = subprocess.check_output( # nosec [container_engine, "images", "-q", nodeimg], text=True, ) elif container_engine == "singularity": singularity_cache = os.environ.get("CWL_SINGULARITY_CACHE") if singularity_cache: singularityimgs = glob.glob( singularity_cache + "/node_alpine.sif" ) else: singularityimgs = glob.glob( os.getcwd() + "/node_alpine.sif" ) if singularityimgs: nodeimg = singularityimgs[0] elif container_engine == "udocker": matches = re.search( re.escape(nodeimg), subprocess.check_output( # nosec [container_engine, "images"], text=True, ), ) if matches: dockerimgs = matches[0] else: dockerimgs = "" else: raise Exception( f"Unknown container_engine: {container_engine}." 
) # if output is an empty string need_singularity = ( container_engine == "singularity" and not singularityimgs ) need_docker = container_engine != "singularity" and ( len(dockerimgs.split("\n")) <= 1 ) if need_singularity or need_docker or force_docker_pull: # pull node:alpine docker container nodejs_pull_commands = [container_engine, "pull"] if force_docker_pull: nodejs_pull_commands.append("--force") nodejs_pull_commands.append(nodeimg) cwd = singularity_cache if singularity_cache else os.getcwd() nodejsimg = subprocess.check_output( # nosec nodejs_pull_commands, text=True, cwd=cwd ) _logger.debug( "Pulled Docker image %s %s using %s", nodeimg, nodejsimg, container_engine, ) self.have_node_slim = True nodejs_commands = [container_engine] if ( container_engine != "singularity" and "udocker" not in container_engine ): nodejs_commands.extend( [ "run", "--attach=STDIN", "--attach=STDOUT", "--attach=STDERR", "--sig-proxy=true", "--interactive", "--rm", ] ) elif "singularity" in container_engine: nodejs_commands.extend( [ "exec", "--contain", "--ipc", "--cleanenv", "--userns" if singularity_supports_userns() else "--pid", ] ) elif "udocker" in container_engine: nodejs_commands.extend( [ "run", "--device=/dev/stdin", "--device=/dev/stdout", "--device=/dev/stderr", ] ) nodejs_commands.extend( [ nodeimg, "node", "--eval", js_text, ], ) _logger.debug("Running nodejs via %s", nodejs_commands[:-1]) nodejs = subprocess.Popen( # nosec nodejs_commands, universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) self.processes_to_kill.append(nodejs) docker = True except OSError as e: if e.errno == errno.ENOENT: pass else: raise except subprocess.CalledProcessError as e: _logger.debug("Error while attempting to run nodejs: %s", e) # docker failed and nodejs not on system if nodejs is None: raise JavascriptException( "NodeJSEngine requires Node.js engine to evaluate and validate " "Javascript expressions, but couldn't find it. 
Tried {trynodes}, " f"{container_engine} run node:alpine".format( trynodes=", ".join(trynodes), container_engine=container_engine ) ) # docker failed, but nodejs is installed on system but the version is below the required version if docker is False and required_node_version is False: raise JavascriptException( "NodeJSEngine requires minimum v{} version of Node.js engine.".format( self.minimum_node_version_str ), "Try updating: https://docs.npmjs.com/getting-started/installing-node", ) return nodejs def eval( self, scan: str, jslib: str = "", timeout: float = default_timeout, force_docker_pull: bool = False, debug: bool = False, js_console: bool = False, container_engine: str = "docker", **kwargs: Any, ) -> CWLOutputType: fn = code_fragment_to_js(scan, jslib) returncode, stdout, stderr = self.exec_js_process( fn, timeout, js_console=js_console, force_docker_pull=force_docker_pull, container_engine=container_engine, ) if js_console: if stderr is not None: _logger.info("Javascript console output:") _logger.info("----------------------------------------") _logger.info( "\n".join( re.findall(r"^[[](?:log|err)[]].*$", stderr, flags=re.MULTILINE) ) ) _logger.info("----------------------------------------") if returncode != 0: if debug: info = ( "returncode was: %s\nscript was:\n%s\nstdout was: %s\nstderr was: %s\n" % (returncode, linenum(fn), stdfmt(stdout), stdfmt(stderr)) ) else: info = "Javascript expression was: {}\nstdout was: {}\nstderr was: {}".format( scan, stdfmt(stdout), stdfmt(stderr) ) if returncode == -1: raise JavascriptException( f"Long-running script killed after {timeout} seconds: {info}" ) else: raise JavascriptException(info) try: return cast(CWLOutputType, json.loads(stdout)) except ValueError as err: raise JavascriptException( "{}\nscript was:\n{}\nstdout was: '{}'\nstderr was: '{}'\n".format( err, linenum(fn), stdout, stderr ) ) from err def regex_eval( self, parsed_string: str, remaining_string: str, current_value: CWLOutputType, **kwargs: Any, ) 
-> CWLOutputType: if remaining_string: m = segment_re.match(remaining_string) if not m: return current_value next_segment_str = m.group(1) key: Optional[Union[str, int]] = None if next_segment_str[0] == ".": key = next_segment_str[1:] elif next_segment_str[1] in ("'", '"'): key = next_segment_str[2:-2].replace("\\'", "'").replace('\\"', '"') if key is not None: if ( isinstance(current_value, MutableSequence) and key == "length" and not remaining_string[m.end(1) :] ): return len(current_value) if not isinstance(current_value, MutableMapping): raise WorkflowException( "%s is a %s, cannot index on string '%s'" % (parsed_string, type(current_value).__name__, key) ) if key not in current_value: raise WorkflowException( f"{parsed_string} does not contain key {key!r}." ) else: try: key = int(next_segment_str[1:-1]) except ValueError as v: raise WorkflowException(str(v)) from v if not isinstance(current_value, MutableSequence): raise WorkflowException( "%s is a %s, cannot index on int '%s'" % (parsed_string, type(current_value).__name__, key) ) if key and key >= len(current_value): raise WorkflowException( "%s list index %i out of range" % (parsed_string, key) ) if isinstance(current_value, Mapping): try: return self.regex_eval( parsed_string + remaining_string, remaining_string[m.end(1) :], cast(CWLOutputType, current_value[cast(str, key)]), ) except KeyError as exc: raise WorkflowException( f"{parsed_string!r} doesn't have property {key!r}." ) from exc elif isinstance(current_value, list) and isinstance(key, int): try: return self.regex_eval( parsed_string + remaining_string, remaining_string[m.end(1) :], current_value[key], ) except KeyError as exc: raise WorkflowException( f"{parsed_string!r} doesn't have property {key!r}." ) from exc else: raise WorkflowException( f"{parsed_string!r} doesn't have property {key!r}." 
) else: return current_value __js_engine: JSEngine = NodeJSEngine() def get_js_engine() -> JSEngine: return __js_engine def set_js_engine(js_engine: JSEngine) -> None: global __js_engine __js_engine = js_engine # The following functions are maintained for compatibility purposes def check_js_threshold_version(*args: Any, **kwargs: Any) -> bool: _check_js_threshold_version = getattr( get_js_engine(), "check_js_threshold_version", None ) if callable(_check_js_threshold_version): return cast(bool, _check_js_threshold_version(*args, **kwargs)) else: raise NotImplementedError( "Method check_js_threshold_version is not implemented in js engine {}".format( get_js_engine().__class__.__name__ ) ) def exec_js_process(*args: Any, **kwargs: Any) -> tuple[int, str, str]: """ Run a javascript text. :param timeout: Max number of seconds to wait. :returns: A tuple of the return code, stdout, and stderr of the javascript engine invocation. """ _exec_js_process = getattr(get_js_engine(), "exec_js_process", None) if callable(_exec_js_process): return cast(tuple[int, str, str], _exec_js_process(*args, **kwargs)) else: raise NotImplementedError( "Method exec_js_process is not implemented in js engine {}".format( get_js_engine().__class__.__name__ ) ) def new_js_proc(*args: Any, **kwargs: Any) -> "subprocess.Popen[str]": _new_js_proc = getattr(get_js_engine(), "new_js_proc", None) if callable(_new_js_proc): return cast("subprocess.Popen[str]", _new_js_proc(*args, **kwargs)) else: raise NotImplementedError( "Method new_js_proc is not implemented in js engine {}".format( get_js_engine().__class__.__name__ ) ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/schemadef.py0000644000175100001660000002027114753463342017366 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 # Copyright (c) 2023 Genomics plc # Copyright (c) 2021 Michael R. 
Crusoe # Copyright (c) 2020 Seven Bridges # See https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/LICENSE """ Valid forms of user defined types stored in external file. A single dictionary (tests/types/singletype.yml) A list of dictionaries (e.g. tests/types/recursive.yml) Types can refer to other types in the file Names can not clash across files (This seems arbitrary and we allow that for packing) Only records and arrays can be defined (https://github.com/common-workflow-language/cwl-v1.2/pull/14) From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py """ import sys import urllib.parse from copy import deepcopy from typing import Any, cast from cwl_utils import errors, types, utils def build_user_defined_type_dict( cwl: dict[str, Any], base_url: urllib.parse.ParseResult ) -> dict[str, Any]: user_defined_types = {} # Check for `$import` directly under `requirements` so we can specially handle # the new base_url for index, entry in enumerate(cwl.get("requirements", [])): if isinstance(entry, dict) and "$import" in entry: requirement, new_base_url = utils.load_linked_file( base_url, entry["$import"], is_import=True ) if requirement["class"] == "SchemaDefRequirement": cwl["requirements"][index] = requirement type_definition_list = requirement["types"] path_prefix = new_base_url.geturl() sys.stderr.write( f"Parsing {len(type_definition_list)} types from {path_prefix}\n" ) for v in type_definition_list: k = v.get("name") if k is None: raise RuntimeError(f"In file {path_prefix} type missing name") user_defined_types[f"{path_prefix}#{k}"] = v return _build_user_defined_type_dict(cwl, base_url, user_defined_types) def _build_user_defined_type_dict( cwl: dict[str, Any], base_url: urllib.parse.ParseResult, user_defined_types: dict[str, Any], ) -> dict[str, Any]: schemadef: dict[str, str] = next( ( req for req in cwl.get("requirements", []) if req.get("class") == "SchemaDefRequirement" ), {}, ) schema_list 
= cast(list[dict[str, Any]], schemadef.get("types", [])) if not isinstance(schema_list, list): raise RuntimeError( f"In file {base_url.geturl()}: " f"Schemadef types have to be a list\n" f"Instead, got: {schema_list}" ) for schema in schema_list: if not isinstance(schema, dict): raise RuntimeError( f"In file {base_url.geturl()}: " f"User type has to be a dict\n" f"Instead, got: {schema}" ) if len(schema.keys()) == 1 and list(schema.keys())[0] == "$import": type_definition_list, this_url = utils.load_linked_file( base_url, schema["$import"], is_import=True ) # This is always a list if isinstance(type_definition_list, dict): type_definition_list = [type_definition_list] # except when it isn't path_prefix = ( this_url.geturl() ) # sbpack.lib.normalized_path(schema["$import"], base_url).geturl() sys.stderr.write( f"Parsing {len(type_definition_list)} types from {path_prefix}\n" ) for v in type_definition_list: k = v.get("name") if k is None: raise RuntimeError(f"In file {path_prefix} type missing name") user_defined_types[f"{path_prefix}#{k}"] = v else: path_prefix = base_url.geturl() user_defined_types[f"{path_prefix}#{schema.get('name')}"] = schema # sys.stderr.write(str(user_defined_types)) # sys.stderr.write("\n") return user_defined_types # port = "input" or "output" def inline_types( cwl: dict[str, Any], port: str, base_url: urllib.parse.ParseResult, user_defined_types: dict[str, Any], ) -> dict[str, Any]: if ( len(cwl[port]) == 1 and isinstance(cwl[port][0], dict) and cwl[port][0]["id"] == "$import" ): defs, base_url = utils.load_linked_file( base_url, cwl[port][0]["type"], is_import=True ) else: defs = cwl[port] cwl[port] = [_inline_type(v, base_url, user_defined_types) for v in defs] return cwl _inline_type_name_uniq_id = 0 _inline_type_names: set[str] = set() def _inline_type( v: Any, base_url: urllib.parse.ParseResult, user_defined_types: dict[str, Any] ) -> Any: global _inline_type_name_uniq_id global _inline_type_names _inline_type_name_uniq_id += 1 if 
isinstance(v, str): # Handle syntactic sugar if v.endswith("[]"): return { "type": "array", "items": _inline_type(v[:-2], base_url, user_defined_types), } if v.endswith("?"): return ["null", _inline_type(v[:-1], base_url, user_defined_types)] if v in types.built_in_types: return v if "#" not in v: path_prefix = base_url path_suffix = v else: parts = v.split("#") path_prefix = utils.resolved_path(base_url, parts[0]) path_suffix = parts[1] path = f"{path_prefix.geturl()}#{path_suffix}" if path not in user_defined_types: raise RuntimeError( f"Could not find type {path!r} in {user_defined_types!r}." ) else: resolve_type = deepcopy(user_defined_types[path]) # resolve_type.pop("name", None) # Should work, but cwltool complains if "name" in resolve_type: user_type_name = resolve_type["name"] if user_type_name in _inline_type_names: resolve_type["name"] = ( f"{user_type_name}_{_inline_type_name_uniq_id}" ) else: _inline_type_names.add(user_type_name) else: resolve_type["name"] = f"user_type_{_inline_type_name_uniq_id}" return _inline_type(resolve_type, path_prefix, user_defined_types) elif isinstance(v, list): return [_inline_type(_v, base_url, user_defined_types) for _v in v] elif isinstance(v, dict): if v.get("$import") is not None: imported_type, import_base_url = utils.load_linked_file( base_url, v["$import"], is_import=True ) return _inline_type(imported_type, import_base_url, user_defined_types) _type = v.get("type") if _type is None: raise errors.MissingTypeName( f"In file {base_url.geturl()}, type {v.get('name')} is missing type name" ) elif _type == "enum": return v elif _type == "array": if "items" not in v: raise errors.ArrayMissingItems( f"In file {base_url.geturl()}, array type {v.get('name')} is missing 'items'" ) v["items"] = _inline_type(v["items"], base_url, user_defined_types) return v elif _type == "record": if "fields" not in v: raise errors.RecordMissingFields( f"In file {base_url.geturl()}, record type {v.get('name')} is missing 'fields'" )
fields = utils.normalize_to_list( v["fields"], key_field="name", value_field="type" ) v["fields"] = [ _inline_type(_f, base_url, user_defined_types) for _f in fields ] return v elif _type in types.built_in_types: return v else: v["type"] = _inline_type(_type, base_url, user_defined_types) return v else: raise RuntimeError("Found a type sbpack can not understand") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/singularity.py0000644000175100001660000000633614753463342020027 0ustar00runnerdocker"""Support Singularity{,-CE} {2,3}.x or Apptainer 1.x.""" import re from subprocess import check_output # nosec from typing import Optional from .loghandler import _logger # Cached version number of singularity # This is a list containing major and minor versions as integer. # (The number of minor version digits can vary among different distributions, # therefore we need a list here.) _SINGULARITY_VERSION: Optional[list[int]] = None # Cached flavor / distribution of singularity # Can be singularity, singularity-ce or apptainer _SINGULARITY_FLAVOR: str = "" def get_version() -> tuple[list[int], str]: """ Parse the output of 'singularity --version' to determine the flavor and version. Both pieces of information will be cached. :returns: A tuple containing: - A tuple with major and minor version numbers as integer. - A string with the name of the singularity flavor. 
""" global _SINGULARITY_VERSION # pylint: disable=global-statement global _SINGULARITY_FLAVOR # pylint: disable=global-statement if _SINGULARITY_VERSION is None: version_output = check_output( # nosec ["singularity", "--version"], text=True ).strip() version_match = re.match(r"(.+) version ([0-9\.]+)", version_output) if version_match is None: raise RuntimeError("Output of 'singularity --version' not recognized.") version_string = version_match.group(2) _SINGULARITY_VERSION = [int(i) for i in version_string.split(".")] _SINGULARITY_FLAVOR = version_match.group(1) _logger.debug( f"Singularity version: {version_string}" " ({_SINGULARITY_FLAVOR}." ) return (_SINGULARITY_VERSION, _SINGULARITY_FLAVOR) def is_apptainer_1_or_newer() -> bool: """ Check if apptainer singularity distribution is version 1.0 or higher. Apptainer v1.0.0 is compatible with SingularityCE 3.9.5. See: https://github.com/apptainer/apptainer/releases """ v = get_version() if v[1] != "apptainer": return False return v[0][0] >= 1 def is_version_2_6() -> bool: """ Check if this singularity version is exactly version 2.6. Also returns False if the flavor is not singularity or singularity-ce. 
""" v = get_version() if v[1] != "singularity" and v[1] != "singularity-ce": return False return v[0][0] == 2 and v[0][1] == 6 def is_version_3_or_newer() -> bool: """Check if this version is singularity version 3 or newer or equivalent.""" if is_apptainer_1_or_newer(): return True # this is equivalent to singularity-ce > 3.9.5 v = get_version() return v[0][0] >= 3 def is_version_3_1_or_newer() -> bool: """Check if this version is singularity version 3.1 or newer or equivalent.""" if is_apptainer_1_or_newer(): return True # this is equivalent to singularity-ce > 3.9.5 v = get_version() return v[0][0] >= 4 or (v[0][0] == 3 and v[0][1] >= 1) def is_version_3_4_or_newer() -> bool: """Detect if Singularity v3.4+ is available.""" if is_apptainer_1_or_newer(): return True # this is equivalent to singularity-ce > 3.9.5 v = get_version() return v[0][0] >= 4 or (v[0][0] == 3 and v[0][1] >= 4) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5335603 cwl_utils-0.37/cwl_utils/templates/0000755000175100001660000000000014753463357017077 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/templates/workflow_input_json_schema_template.json0000644000175100001660000011254414753463342027330 0ustar00runnerdocker{ "$schema": "http://json-schema.org/draft-07/schema#", "definitions": { "InputArraySchema": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#InputArraySchema", "properties": { "doc": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "string" } ], "description": "A documentation string for this object, or an array of strings which should be concatenated." 
}, "items": { "anyOf": [ { "$ref": "#/definitions/InputRecordSchema" }, { "$ref": "#/definitions/InputEnumSchema" }, { "$ref": "#/definitions/InputArraySchema" }, { "items": { "anyOf": [ { "$ref": "#/definitions/InputRecordSchema" }, { "$ref": "#/definitions/InputEnumSchema" }, { "$ref": "#/definitions/InputArraySchema" }, { "type": "string" } ] }, "type": "array" }, { "type": "string" } ], "description": "Defines the type of the array elements." }, "label": { "description": "A short, human-readable label of this object.", "type": "string" }, "name": { "description": "The identifier for this type", "type": "string" }, "type": { "const": "array", "description": "Must be `array`", "type": "string" } }, "required": [ "items", "type" ], "type": "object" }, "InputBinding": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#InputBinding", "properties": { "loadContents": { "description": "Use of `loadContents` in `InputBinding` is deprecated.\nPreserved for v1.0 backwards compatibility. Will be removed in\nCWL v2.0. Use `InputParameter.loadContents` instead.", "type": "boolean" } }, "required": [], "type": "object" }, "InputEnumSchema": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#InputEnumSchema", "properties": { "doc": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "string" } ], "description": "A documentation string for this object, or an array of strings which should be concatenated." 
}, "label": { "description": "A short, human-readable label of this object.", "type": "string" }, "name": { "description": "The identifier for this type", "type": "string" }, "symbols": { "description": "Defines the set of valid symbols.", "items": { "type": "string" }, "type": "array" }, "type": { "const": "enum", "description": "Must be `enum`", "type": "string" } }, "required": [ "symbols", "type" ], "type": "object" }, "InputRecordField": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#InputRecordField", "properties": { "doc": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "string" } ], "description": "A documentation string for this object, or an array of strings which should be concatenated." }, "format": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "string" } ], "description": "Only valid when `type: File` or is an array of `items: File`.\n\nThis must be one or more IRIs of concept nodes\nthat represents file formats which are allowed as input to this\nparameter, preferably defined within an ontology. If no ontology is\navailable, file formats may be tested by exact match." }, "label": { "description": "A short, human-readable label of this object.", "type": "string" }, "loadContents": { "description": "Only valid when `type: File` or is an array of `items: File`.\n\nIf true, the file (or each file in the array) must be a UTF-8\ntext file 64 KiB or smaller, and the implementation must read\nthe entire contents of the file (or file array) and place it\nin the `contents` field of the File object for use by\nexpressions. 
If the size of the file is greater than 64 KiB,\nthe implementation must raise a fatal error.", "type": "boolean" }, "loadListing": { "description": "Only valid when `type: Directory` or is an array of `items: Directory`.\n\nSpecify the desired behavior for loading the `listing` field of\na Directory object for use by expressions.\n\nThe order of precedence for loadListing is:\n\n 1. `loadListing` on an individual parameter\n 2. Inherited from `LoadListingRequirement`\n 3. By default: `no_listing`", "enum": [ "deep_listing", "no_listing", "shallow_listing" ], "type": "string" }, "name": { "description": "The name of the field", "type": "string" }, "secondaryFiles": { "anyOf": [ { "$ref": "#/definitions/SecondaryFileSchema" }, { "items": { "$ref": "#/definitions/SecondaryFileSchema" }, "type": "array" } ], "description": "Only valid when `type: File` or is an array of `items: File`.\n\nProvides a pattern or expression specifying files or\ndirectories that should be included alongside the primary\nfile. Secondary files may be required or optional. When not\nexplicitly specified, secondary files specified for `inputs`\nare required and `outputs` are optional. An implementation\nmust include matching Files and Directories in the\n`secondaryFiles` property of the primary file. These Files\nand Directories must be transferred and staged alongside the\nprimary file. An implementation may fail workflow execution\nif a required secondary file does not exist.\n\nIf the value is an expression, the value of `self` in the expression\nmust be the primary input or output File object to which this binding\napplies. The `basename`, `nameroot` and `nameext` fields must be\npresent in `self`. For `CommandLineTool` outputs the `path` field must\nalso be present. 
The expression must return a filename string relative\nto the path to the primary File, a File or Directory object with either\n`path` or `location` and `basename` fields set, or an array consisting\nof strings or File or Directory objects. It is legal to reference an\nunchanged File or Directory object taken from input as a secondaryFile.\nThe expression may return \"null\" in which case there is no secondaryFile\nfrom that expression.\n\nTo work on non-filename-preserving storage systems, portable tool\ndescriptions should avoid constructing new values from `location`, but\nshould construct relative references using `basename` or `nameroot`\ninstead.\n\nIf a value in `secondaryFiles` is a string that is not an expression,\nit specifies that the following pattern should be applied to the path\nof the primary file to yield a filename relative to the primary File:\n\n 1. If string ends with `?` character, remove the last `?` and mark\n the resulting secondary file as optional.\n 2. If string begins with one or more caret `^` characters, for each\n caret, remove the last file extension from the path (the last\n period `.` and all following characters). If there are no file\n extensions, the path is unchanged.\n 3. Append the remainder of the string to the end of the file path." }, "streamable": { "description": "Only valid when `type: File` or is an array of `items: File`.\n\nA value of `true` indicates that the file is read or written\nsequentially without seeking. An implementation may use this flag to\nindicate whether it is valid to stream file contents using a named\npipe. 
Default: `false`.", "type": "boolean" }, "type": { "anyOf": [ { "$ref": "#/definitions/InputRecordSchema" }, { "$ref": "#/definitions/InputEnumSchema" }, { "$ref": "#/definitions/InputArraySchema" }, { "items": { "anyOf": [ { "$ref": "#/definitions/InputRecordSchema" }, { "$ref": "#/definitions/InputEnumSchema" }, { "$ref": "#/definitions/InputArraySchema" }, { "type": "string" } ] }, "type": "array" }, { "type": "string" } ], "description": "The field type" } }, "required": [ "name", "type" ], "type": "object" }, "InputRecordSchema": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#InputRecordSchema", "properties": { "doc": { "anyOf": [ { "items": { "type": "string" }, "type": "array" }, { "type": "string" } ], "description": "A documentation string for this object, or an array of strings which should be concatenated." }, "fields": { "description": "Defines the fields of the record.", "items": { "$ref": "#/definitions/InputRecordField" }, "type": "array" }, "label": { "description": "A short, human-readable label of this object.", "type": "string" }, "name": { "description": "The identifier for this type", "type": "string" }, "type": { "const": "record", "description": "Must be `record`", "type": "string" } }, "required": [ "type" ], "type": "object" }, "SecondaryFileSchema": { "additionalProperties": false, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#SecondaryFileSchema\n\nSecondary files are specified using the following micro-DSL for secondary files:\n\n* If the value is a string, it is transformed to an object with two fields\n `pattern` and `required`\n* By default, the value of `required` is `null`\n (this indicates default behavior, which may be based on the context)\n* If the value ends with a question mark `?` the question mark is\n stripped off and the value of the field `required` is set to `False`\n* The remaining value is assigned to the field 
`pattern`\n\nFor implementation details and examples, please see\n[this section](SchemaSalad.html#Domain_Specific_Language_for_secondary_files)\nin the Schema Salad specification.", "properties": { "pattern": { "description": "Provides a pattern or expression specifying files or directories that\nshould be included alongside the primary file.\n\nIf the value is an expression, the value of `self` in the\nexpression must be the primary input or output File object to\nwhich this binding applies. The `basename`, `nameroot` and\n`nameext` fields must be present in `self`. For\n`CommandLineTool` inputs the `location` field must also be\npresent. For `CommandLineTool` outputs the `path` field must\nalso be present. If secondary files were included on an input\nFile object as part of the Process invocation, they must also\nbe present in `secondaryFiles` on `self`.\n\nThe expression must return either: a filename string relative\nto the path to the primary File, a File or Directory object\n(`class: File` or `class: Directory`) with either `location`\n(for inputs) or `path` (for outputs) and `basename` fields\nset, or an array consisting of strings or File or Directory\nobjects as previously described.\n\nIt is legal to use `location` from a File or Directory object\npassed in as input, including `location` from secondary files\non `self`. 
If an expression returns a File object with the\nsame `location` but a different `basename` as a secondary file\nthat was passed in, the expression result takes precedence.\nSetting the basename with an expression this way affects the\n`path` where the secondary file will be staged to in the\nCommandLineTool.\n\nThe expression may return \"null\" in which case there is no\nsecondary file from that expression.\n\nTo work on non-filename-preserving storage systems, portable\ntool descriptions should treat `location` as an\n[opaque identifier](#opaque-strings) and avoid constructing new\nvalues from `location`, but should construct relative references\nusing `basename` or `nameroot` instead, or propagate `location`\nfrom defined inputs.\n\nIf a value in `secondaryFiles` is a string that is not an expression,\nit specifies that the following pattern should be applied to the path\nof the primary file to yield a filename relative to the primary File:\n\n 1. If string ends with `?` character, remove the last `?` and mark\n the resulting secondary file as optional.\n 2. If string begins with one or more caret `^` characters, for each\n caret, remove the last file extension from the path (the last\n period `.` and all following characters). If there are no file\n extensions, the path is unchanged.\n 3. 
Append the remainder of the string to the end of the file path.", "type": "string" }, "required": { "description": "An implementation must not fail workflow execution if `required` is\nset to `false` and the expected secondary file does not exist.\nDefault value for `required` field is `true` for secondary files on\ninput and `false` for secondary files on output.", "type": [ "string", "boolean" ] } }, "required": [ "pattern" ], "type": "object" }, "File": { "additionalProperties": false, "description": "Represents a file (or group of files when `secondaryFiles` is provided) that\nwill be accessible by tools using standard POSIX file system call API such as\nopen(2) and read(2).\n\nFiles are represented as objects with `class` of `File`. File objects have\na number of properties that provide metadata about the file.\n\nThe `location` property of a File is a URI that uniquely identifies the\nfile. Implementations must support the `file://` URI scheme and may support\nother schemes such as `http://` and `https://`. The value of `location` may also be a\nrelative reference, in which case it must be resolved relative to the URI\nof the document it appears in. Alternately to `location`, implementations\nmust also accept the `path` property on File, which must be a filesystem\npath available on the same host as the CWL runner (for inputs) or the\nruntime environment of a command line tool execution (for command line tool\noutputs).\n\nIf no `location` or `path` is specified, a file object must specify\n`contents` with the UTF-8 text content of the file. This is a \"file\nliteral\". File literals do not correspond to external resources, but are\ncreated on disk with `contents` with when needed for executing a tool.\nWhere appropriate, expressions can return file literals to define new files\non a runtime. The maximum size of `contents` is 64 kilobytes.\n\nThe `basename` property defines the filename on disk where the file is\nstaged. 
This may differ from the resource name. If not provided,\n`basename` must be computed from the last path part of `location` and made\navailable to expressions.\n\nThe `secondaryFiles` property is a list of File or Directory objects that\nmust be staged in the same directory as the primary file. It is an error\nfor file names to be duplicated in `secondaryFiles`.\n\nThe `size` property is the size in bytes of the File. It must be computed\nfrom the resource and made available to expressions. The `checksum` field\ncontains a cryptographic hash of the file content for use it verifying file\ncontents. Implementations may, at user option, enable or disable\ncomputation of the `checksum` field for performance or other reasons.\nHowever, the ability to compute output checksums is required to pass the\nCWL conformance test suite.\n\nWhen executing a CommandLineTool, the files and secondary files may be\nstaged to an arbitrary directory, but must use the value of `basename` for\nthe filename. The `path` property must be file path in the context of the\ntool execution runtime (local to the compute node, or within the executing\ncontainer). All computed properties should be available to expressions.\nFile literals also must be staged and `path` must be set.\n\nWhen collecting CommandLineTool outputs, `glob` matching returns file paths\n(with the `path` property) and the derived properties. This can all be\nmodified by `outputEval`. Alternately, if the file `cwl.output.json` is\npresent in the output, `outputBinding` is ignored.\n\nFile objects in the output must provide either a `location` URI or a `path`\nproperty in the context of the tool execution runtime (local to the compute\nnode, or within the executing container).\n\nWhen evaluating an ExpressionTool, file objects must be referenced via\n`location` (the expression tool does not have access to files on disk so\n`path` is meaningless) or as file literals. 
It is legal to return a file\nobject with an existing `location` but a different `basename`. The\n`loadContents` field of ExpressionTool inputs behaves the same as on\nCommandLineTool inputs, however it is not meaningful on the outputs.\n\nAn ExpressionTool may forward file references from input to output by using\nthe same value for `location`.", "properties": { "basename": { "description": "The base name of the file, that is, the name of the file without any\nleading directory path. The base name must not contain a slash `/`.\n\nIf not provided, the implementation must set this field based on the\n`location` field by taking the final path component after parsing\n`location` as an IRI. If `basename` is provided, it is not required to\nmatch the value from `location`.\n\nWhen this file is made available to a CommandLineTool, it must be named\nwith `basename`, i.e. the final component of the `path` field must match\n`basename`.", "type": "string" }, "checksum": { "description": "Optional hash code for validating file integrity. Currently, must be in the form\n\"sha1$ + hexadecimal string\" using the SHA-1 algorithm.", "type": "string" }, "class": { "const": "File", "description": "Must be `File` to indicate this object describes a file.", "type": "string" }, "contents": { "description": "File contents literal.\n\nIf neither `location` nor `path` is provided, `contents` must be\nnon-null. The implementation must assign a unique identifier for the\n`location` field. When the file is staged as input to CommandLineTool,\nthe value of `contents` must be written to a file.\n\nIf `contents` is set as a result of a Javascript expression,\nan `entry` in `InitialWorkDirRequirement`, or read in from\n`cwl.output.json`, there is no specified upper limit on the\nsize of `contents`. 
Implementations may have practical limits\non the size of `contents` based on memory and storage\navailable to the workflow runner or other factors.\n\nIf the `loadContents` field of an `InputParameter` or\n`OutputParameter` is true, and the input or output File object\n`location` is valid, the file must be a UTF-8 text file 64 KiB\nor smaller, and the implementation must read the entire\ncontents of the file and place it in the `contents` field. If\nthe size of the file is greater than 64 KiB, the\nimplementation must raise a fatal error.", "type": "string" }, "dirname": { "description": "The name of the directory containing file, that is, the path leading up\nto the final slash in the path such that `dirname + '/' + basename ==\npath`.\n\nThe implementation must set this field based on the value of `path`\nprior to evaluating parameter references or expressions in a\nCommandLineTool document. This field must not be used in any other\ncontext.", "type": "string" }, "format": { "description": "The format of the file: this must be an IRI of a concept node that\nrepresents the file format, preferably defined within an ontology.\nIf no ontology is available, file formats may be tested by exact match.\n\nReasoning about format compatibility must be done by checking that an\ninput file format is the same, `owl:equivalentClass` or\n`rdfs:subClassOf` the format required by the input parameter.\n`owl:equivalentClass` is transitive with `rdfs:subClassOf`, e.g. if\n` owl:equivalentClass ` and ` owl:subclassOf ` then infer\n` owl:subclassOf `.\n\nFile format ontologies may be provided in the \"$schemas\" metadata at the\nroot of the document. If no ontologies are specified in `$schemas`, the\nruntime may perform exact file format matches.", "type": "string" }, "location": { "description": "An IRI that identifies the file resource. This may be a relative\nreference, in which case it must be resolved using the base IRI of the\ndocument. 
The location may refer to a local or remote resource; the\nimplementation must use the IRI to retrieve file content. If an\nimplementation is unable to retrieve the file content stored at a\nremote resource (due to unsupported protocol, access denied, or other\nissue) it must signal an error.\n\nIf the `location` field is not provided, the `contents` field must be\nprovided. The implementation must assign a unique identifier for\nthe `location` field.\n\nIf the `path` field is provided but the `location` field is not, an\nimplementation may assign the value of the `path` field to `location`,\nthen follow the rules above.", "type": "string" }, "nameext": { "description": "The basename extension such that `nameroot + nameext == basename`, and\n`nameext` is empty or begins with a period and contains at most one\nperiod. Leading periods on the basename are ignored; a basename of\n`.cshrc` will have an empty `nameext`.\n\nThe implementation must set this field automatically based on the value\nof `basename` prior to evaluating parameter references or expressions.", "type": "string" }, "nameroot": { "description": "The basename root such that `nameroot + nameext == basename`, and\n`nameext` is empty or begins with a period and contains at most one\nperiod. For the purposes of path splitting leading periods on the\nbasename are ignored; a basename of `.cshrc` will have a nameroot of\n`.cshrc`.\n\nThe implementation must set this field automatically based on the value\nof `basename` prior to evaluating parameter references or expressions.", "type": "string" }, "path": { "description": "The local host path where the File is available when a CommandLineTool is\nexecuted. This field must be set by the implementation. The final\npath component must match the value of `basename`. This field\nmust not be used in any other context. 
The command line tool being\nexecuted must be able to access the file at `path` using the POSIX\n`open(2)` syscall.\n\nAs a special case, if the `path` field is provided but the `location`\nfield is not, an implementation may assign the value of the `path`\nfield to `location`, and remove the `path` field.\n\nIf the `path` contains [POSIX shell metacharacters](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02)\n(`|`,`&`, `;`, `<`, `>`, `(`,`)`, `$`,`` ` ``, `\\`, `\"`, `'`,\n``, ``, and ``) or characters\n[not allowed](http://www.iana.org/assignments/idna-tables-6.3.0/idna-tables-6.3.0.xhtml)\nfor [Internationalized Domain Names for Applications](https://tools.ietf.org/html/rfc6452)\nthen implementations may terminate the process with a\n`permanentFailure`.", "type": "string" }, "secondaryFiles": { "description": "A list of additional files or directories that are associated with the\nprimary file and must be transferred alongside the primary file.\nExamples include indexes of the primary file, or external references\nwhich must be included when loading primary document. A file object\nlisted in `secondaryFiles` may itself include `secondaryFiles` for\nwhich the same rules apply.", "items": { "anyOf": [ { "$ref": "#/definitions/File" }, { "$ref": "#/definitions/Directory" } ] }, "type": "array" }, "size": { "description": "Optional file size (in bytes)", "type": "number" } }, "required": [ "class" ], "type": "object" }, "Directory": { "additionalProperties": false, "description": "Represents a directory to present to a command line tool.\n\nDirectories are represented as objects with `class` of `Directory`. Directory objects have\na number of properties that provide metadata about the directory.\n\nThe `location` property of a Directory is a URI that uniquely identifies\nthe directory. Implementations must support the file:// URI scheme and may\nsupport other schemes such as http://. 
Alternately to `location`,\nimplementations must also accept the `path` property on Directory, which\nmust be a filesystem path available on the same host as the CWL runner (for\ninputs) or the runtime environment of a command line tool execution (for\ncommand line tool outputs).\n\nA Directory object may have a `listing` field. This is a list of File and\nDirectory objects that are contained in the Directory. For each entry in\n`listing`, the `basename` property defines the name of the File or\nSubdirectory when staged to disk. If `listing` is not provided, the\nimplementation must have some way of fetching the Directory listing at\nruntime based on the `location` field.\n\nIf a Directory does not have `location`, it is a Directory literal. A\nDirectory literal must provide `listing`. Directory literals must be\ncreated on disk at runtime as needed.\n\nThe resources in a Directory literal do not need to have any implied\nrelationship in their `location`. For example, a Directory listing may\ncontain two files located on different hosts. It is the responsibility of\nthe runtime to ensure that those files are staged to disk appropriately.\nSecondary files associated with files in `listing` must also be staged to\nthe same Directory.\n\nWhen executing a CommandLineTool, Directories must be recursively staged\nfirst and have local values of `path` assigned.\n\nDirectory objects in CommandLineTool output must provide either a\n`location` URI or a `path` property in the context of the tool execution\nruntime (local to the compute node, or within the executing container).\n\nAn ExpressionTool may forward file references from input to output by using\nthe same value for `location`.\n\nName conflicts (the same `basename` appearing multiple times in `listing`\nor in any entry in `secondaryFiles` in the listing) is a fatal error.", "properties": { "basename": { "description": "The base name of the directory, that is, the name of the file without any\nleading directory path. 
The base name must not contain a slash `/`.\n\nIf not provided, the implementation must set this field based on the\n`location` field by taking the final path component after parsing\n`location` as an IRI. If `basename` is provided, it is not required to\nmatch the value from `location`.\n\nWhen this file is made available to a CommandLineTool, it must be named\nwith `basename`, i.e. the final component of the `path` field must match\n`basename`.", "type": "string" }, "class": { "const": "Directory", "description": "Must be `Directory` to indicate this object describes a Directory.", "type": "string" }, "listing": { "description": "List of files or subdirectories contained in this directory. The name\nof each file or subdirectory is determined by the `basename` field of\neach `File` or `Directory` object. It is an error if a `File` shares a\n`basename` with any other entry in `listing`. If two or more\n`Directory` object share the same `basename`, this must be treated as\nequivalent to a single subdirectory with the listings recursively\nmerged.", "items": { "anyOf": [ { "$ref": "#/definitions/File" }, { "$ref": "#/definitions/Directory" } ] }, "type": "array" }, "location": { "description": "An IRI that identifies the directory resource. This may be a relative\nreference, in which case it must be resolved using the base IRI of the\ndocument. The location may refer to a local or remote resource. If\nthe `listing` field is not set, the implementation must use the\nlocation IRI to retrieve directory listing. If an implementation is\nunable to retrieve the directory listing stored at a remote resource (due to\nunsupported protocol, access denied, or other issue) it must signal an\nerror.\n\nIf the `location` field is not provided, the `listing` field must be\nprovided. 
The implementation must assign a unique identifier for\nthe `location` field.\n\nIf the `path` field is provided but the `location` field is not, an\nimplementation may assign the value of the `path` field to `location`,\nthen follow the rules above.", "type": "string" }, "path": { "description": "The local path where the Directory is made available prior to executing a\nCommandLineTool. This must be set by the implementation. This field\nmust not be used in any other context. The command line tool being\nexecuted must be able to access the directory at `path` using the POSIX\n`opendir(2)` syscall.\n\nIf the `path` contains [POSIX shell metacharacters](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02)\n(`|`,`&`, `;`, `<`, `>`, `(`,`)`, `$`,`` ` ``, `\\`, `\"`, `'`,\n``, ``, and ``) or characters\n[not allowed](http://www.iana.org/assignments/idna-tables-6.3.0/idna-tables-6.3.0.xhtml)\nfor [Internationalized Domain Names for Applications](https://tools.ietf.org/html/rfc6452)\nthen implementations may terminate the process with a\n`permanentFailure`.", "type": "string" } }, "required": [ "class" ], "type": "object" }, "Any": { "description": "A placeholder for any type of CWL object.", "anyOf": [ { "type": "boolean" }, { "type": "integer" }, { "type": "number" }, { "type": "string" }, { "type": "array" }, { "type": "object" }, { "$ref": "#/definitions/File" }, { "$ref": "#/definitions/Directory" } ], "properties": { "format": true, "size": true, "listing": true, "checksum": true, "nameroot": true, "contents": true, "location": true, "path": true, "dirname": true, "nameext": true, "secondaryFiles": true, "class": true, "basename": true } } }, "description": "Auto-generated class implementation for https://w3id.org/cwl/cwl#WorkflowInputParameter", "type": "object" }././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
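The `secondaryFiles` string-pattern micro-DSL described in the schema definitions above (a trailing `?` marks the secondary file optional, each leading caret `^` strips one file extension, and the remainder of the string is appended) can be sketched in Python. This is an illustrative helper only — `resolve_secondary_pattern` is not part of cwl_utils, and for simplicity it applies the extension-stripping rule to the whole path rather than only the basename:

```python
def resolve_secondary_pattern(primary_path: str, pattern: str) -> tuple[str, bool]:
    """Apply the secondaryFiles pattern micro-DSL to a primary file path.

    Returns (secondary_path, optional) where `optional` is True when a
    trailing '?' marked the secondary file as optional.
    """
    optional = False
    # 1. If the string ends with '?', remove it and mark the file optional.
    if pattern.endswith("?"):
        optional = True
        pattern = pattern[:-1]
    path = primary_path
    # 2. For each leading '^', remove the last file extension from the path
    #    (the last '.' and everything after it); if there is no extension,
    #    the path is unchanged.
    while pattern.startswith("^"):
        pattern = pattern[1:]
        stem, dot, _ext = path.rpartition(".")
        if dot:
            path = stem
    # 3. Append the remainder of the string to the end of the file path.
    return path + pattern, optional
```

For example, `resolve_secondary_pattern("reads.fastq.gz", "^^.idx?")` strips both extensions, appends `.idx`, and reports the file as optional.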
cwl_utils-0.37/cwl_utils/types.py0000644000175100001660000000215614753463342016615 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 # From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py """Shared Python type definitions for commons JSON like CWL objects.""" from collections.abc import MutableMapping, MutableSequence from typing import Any, Optional, Union built_in_types = [ "null", "boolean", "int", "long", "float", "double", "string", "File", "Directory", "stdin", "stdout", "stderr", "Any", ] CWLOutputAtomType = Union[ None, bool, str, int, float, MutableSequence[ Union[ None, bool, str, int, float, MutableSequence[Any], MutableMapping[str, Any] ] ], MutableMapping[ str, Union[ None, bool, str, int, float, MutableSequence[Any], MutableMapping[str, Any] ], ], ] CWLOutputType = Union[ bool, str, int, float, MutableSequence[CWLOutputAtomType], MutableMapping[str, CWLOutputAtomType], ] CWLObjectType = MutableMapping[str, Optional[CWLOutputType]] SinkType = Union[CWLOutputType, CWLObjectType] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/cwl_utils/utils.py0000644000175100001660000003155314753463342016614 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Miscellaneous utility functions.""" import os import pathlib import subprocess # nosec import sys import urllib.error import urllib.parse import urllib.request from collections.abc import MutableMapping, MutableSequence from copy import deepcopy from io import StringIO from typing import Any, Dict, List, Optional, Union from urllib.parse import urlparse from ruamel.yaml.main import YAML from ruamel.yaml.parser import ParserError from ruamel.yaml.scanner import ScannerError from cwl_utils.errors import MissingKeyField from cwl_utils.loghandler import _logger # Type hinting from cwl_utils.parser import InputRecordSchemaTypes # Load as 1.2 files from cwl_utils.parser.cwl_v1_2 import 
InputArraySchema as InputArraySchemaV1_2 from cwl_utils.parser.cwl_v1_2 import InputEnumSchema as InputEnumSchemaV1_2 fast_yaml = YAML(typ="safe") _USERNS: Optional[bool] = None def _is_github_symbolic_link(base_url: urllib.parse.ParseResult, contents: str) -> bool: """ Test if link is a GitHub style symbolic link. Look for remote path with contents that is a single line with no new line with an extension. https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py """ if base_url.scheme in ["file://", ""]: return False idx = contents.find("\n") if idx > -1: return False if "." not in contents: return False return True def bytes2str_in_dicts( inp: Union[MutableMapping[str, Any], MutableSequence[Any], Any], ) -> Union[str, MutableSequence[Any], MutableMapping[str, Any]]: """ Convert any present byte string to unicode string, inplace. input is a dict of nested dicts and lists """ # if input is dict, recursively call for each value if isinstance(inp, MutableMapping): for k in inp: inp[k] = bytes2str_in_dicts(inp[k]) return inp # if list, iterate through list and fn call # for all its elements if isinstance(inp, MutableSequence): for idx, value in enumerate(inp): inp[idx] = bytes2str_in_dicts(value) return inp # if value is bytes, return decoded string, elif isinstance(inp, bytes): return inp.decode("utf-8") # simply return elements itself return inp def load_linked_file( base_url: urllib.parse.ParseResult, link: str, is_import: bool = False ) -> tuple[Any, urllib.parse.ParseResult]: """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" new_url = resolved_path(base_url, link) if new_url.scheme in ["file://", ""]: contents = pathlib.Path(new_url.path).open().read() else: try: contents = ( urllib.request.urlopen(new_url.geturl()).read().decode("utf-8") # nosec ) except urllib.error.HTTPError as e: _logger.error("Could not find linked file: %s", new_url.geturl()) raise SystemExit(e) from e 
if _is_github_symbolic_link(new_url, contents): # This is an exception for symbolic links on github sys.stderr.write( f"{new_url.geturl()}: found file-like string in contents.\n" f"Treating as github symbolic link to {contents}\n" ) return load_linked_file(new_url, contents, is_import=is_import) if is_import: try: _node = fast_yaml.load(contents) except ParserError as e: e.context = f"\n===\nMalformed file: {new_url.geturl()}\n===\n" + e.context raise SystemExit(e) from e except ScannerError as e: e.problem = f"\n===\nMalformed file: {new_url.geturl()}\n===\n" + e.problem raise SystemExit(e) from e else: _node = contents return _node, new_url def normalize_to_map( obj: Union[list[Any], dict[str, Any]], key_field: str ) -> dict[str, Any]: """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" if isinstance(obj, dict): return deepcopy(obj) elif isinstance(obj, list): map_obj = {} for v in obj: if not isinstance(v, dict): raise RuntimeError("Expecting a dict here") k = v.get(key_field) if k is None: raise MissingKeyField(key_field) v.pop(key_field, None) map_obj[k] = v return map_obj else: raise RuntimeError("Expecting a dictionary or a list here") def normalize_to_list( obj: Union[list[Any], dict[str, Any]], key_field: str, value_field: Optional[str] ) -> list[Any]: """From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py .""" if isinstance(obj, list): return deepcopy(obj) elif isinstance(obj, dict): map_list = [] for k, v in obj.items(): if not isinstance(v, dict): if value_field is None: raise RuntimeError(f"Expecting a dict here, got {v}") v = {value_field: v} v.update({key_field: k}) map_list += [v] return map_list else: raise RuntimeError("Expecting a dictionary or a list here") def resolved_path( base_url: urllib.parse.ParseResult, link: str ) -> urllib.parse.ParseResult: """ Derive a resolved path. This function will 1. 
Resolve the path, which means dot and double dot components are resolved 2. Use the OS appropriate path resolution for local paths, and network appropriate resolution for network paths From https://github.com/rabix/sbpack/blob/b8404a0859ffcbe1edae6d8f934e51847b003320/sbpack/lib.py :param base_url: "this document" :param link: "string in this document" :returns: new URL that allows us to retrieve the linked document """ link_url = urllib.parse.urlparse(link) # The link will always Posix if link_url.scheme == "file://": # Absolute local path return link_url elif link_url.scheme == "": # Relative path, can be local or remote if base_url.scheme in ["file://", ""]: # Local relative path if link == "": return base_url else: return urllib.parse.urlparse( urllib.parse.urljoin(base_url.geturl(), link_url.geturl()) ) else: # Remote relative path return urllib.parse.urlparse( urllib.parse.urljoin(base_url.geturl(), link_url.path) ) # We need urljoin because we need to resolve relative links in a # platform independent manner # Absolute remote path return link_url def singularity_supports_userns() -> bool: """Confirm if the version of Singularity install supports the --userns flag.""" global _USERNS # pylint: disable=global-statement if _USERNS is None: try: hello_image = os.path.join(os.path.dirname(__file__), "hello.simg") result = subprocess.Popen( # nosec ["singularity", "exec", "--userns", hello_image, "true"], stderr=subprocess.PIPE, stdout=subprocess.DEVNULL, universal_newlines=True, ).communicate(timeout=60)[1] _USERNS = ( "No valid /bin/sh" in result or "/bin/sh doesn't exist in container" in result or "executable file not found in" in result ) except subprocess.TimeoutExpired: _USERNS = False return _USERNS def yaml_dumps(obj: Any) -> str: """ Shortcut. Don't use if you have a file descriptor (like sys.stdout) available. 
""" yaml = YAML() stream = StringIO() yaml.dump(obj, stream) return stream.getvalue() def to_pascal_case(name: str) -> str: """ Convert a string to PascalCase. fastq-list-row to FastqListRow fastq_list_row to FastqListRow :param name: :return: """ return "".join( map(lambda word: word.capitalize(), name.replace("_", "-").split("-")) ) def sanitise_schema_field( schema_field_item: Union[Dict[str, Any], str], ) -> Union[Dict[str, Any], str]: """ Schemas need to be resolved before converted to JSON properties. Convert { 'type': 'Directory?' } To { 'type': ['null', 'Directory'] } Convert { 'type': 'string[]' } To InputArraySchema( type_=array, items=string ) Convert { 'type': 'File[]?' } To { 'type': [ 'null', InputArraySchema( type_=array, items=File ) ] } Convert { 'type': 'Enum', 'symbols': ['A', 'B', 'C'] } To { 'type': InputEnumSchema( type_=enum, symbols=['A', 'B', 'C'] ) } Convert { 'type': 'array', 'items': { '$import': '../../../schemas/fastq-list-row/1.0.0/fastq-list-row__1.0.0.yaml#fastq-list-row' } } To { 'type': InputArraySchema( type_=array, items={ '$import': '../../../schemas/fastq-list-row/1.0.0/fastq-list-row__1.0.0.yaml#fastq-list-row' } ) } :param schema_field_item: :return: """ # We might be just a string, in which case, just return # This happens in the case that type is a list of primitive types if isinstance(schema_field_item, str): return schema_field_item # Copy schema field schema_field_item = deepcopy(schema_field_item) required = True if isinstance(schema_field_item, InputRecordSchemaTypes): return schema_field_item if isinstance(schema_field_item.get("type"), List): if "null" in schema_field_item.get("type", []): required = False schema_field_item["type"] = list( filter( lambda type_item: type_item != "null", schema_field_item.get("type", []) ) ) if len(schema_field_item["type"]) == 1: schema_field_item["type"] = schema_field_item["type"][0] else: # Recursively get items schema_field_item["type"] = list( map( lambda field_subtypes: 
                    sanitise_schema_field(field_subtypes),
                    schema_field_item.get("type", []),
                )
            )

    if isinstance(schema_field_item.get("type"), str):
        if schema_field_item.get("type", "").endswith("?"):
            required = False
            schema_field_item["type"] = schema_field_item.get("type", "").replace(
                "?", ""
            )
        if schema_field_item.get("type", "").endswith("[]"):
            # Strip list
            schema_field_item["type"] = schema_field_item.get("type", "").replace(
                "[]", ""
            )
            # Convert to array
            schema_field_item["type"] = InputArraySchemaV1_2(
                type_="array", items=schema_field_item.get("type", "")
            )

    if isinstance(schema_field_item.get("type"), Dict):
        # Likely an enum
        if schema_field_item.get("type", {}).get("type", "") == "enum":
            schema_field_item["type"] = InputEnumSchemaV1_2(
                type_="enum",
                symbols=schema_field_item.get("type", {}).get("symbols", ""),
            )
        elif schema_field_item.get("type", {}).get("type", "") == "array":
            schema_field_item["type"] = InputArraySchemaV1_2(
                type_="array",
                items=schema_field_item.get("type", {}).get("items", ""),
            )
        elif "$import" in schema_field_item.get("type", {}).keys():
            # Leave import as is
            pass
        else:
            raise ValueError(f"Unknown type: {schema_field_item.get('type')}")

    if not required:
        if isinstance(schema_field_item.get("type"), List):
            schema_field_item["type"] = ["null"] + schema_field_item.get("type", [])
        else:
            schema_field_item["type"] = ["null", schema_field_item.get("type", "")]

    return schema_field_item


def is_uri(uri: str) -> bool:
    """
    Return True if the string is a URI (i.e. it has a scheme).

    :param uri:
    :return:
    """
    return urlparse(uri).scheme != ""


def is_local_uri(uri: str) -> bool:
    """Given a uri, first check if it is a uri, then check if it is a local uri."""
    return is_uri(uri) and urlparse(uri).scheme == "file"


def get_value_from_uri(uri: str) -> str:
    """
    Given a URI, return the value after #.
    file://path/to/imported/record#my_workflow_name/record_name
    Returns record_name

    :param uri:
    :return:
    """
    url_obj = urlparse(uri)
    return url_obj.fragment.rsplit("/")[-1]

cwl_utils-0.37/cwl_utils.egg-info/PKG-INFO

Metadata-Version: 2.2
Name: cwl-utils
Version: 0.37
Author-email: Common Workflow Language project contributors
License: Apache 2.0
Project-URL: Documentation, https://cwl-utils.readthedocs.io/
Project-URL: Homepage, https://www.commonwl.org
Project-URL: Repository, https://github.com/common-workflow-language/cwl-utils
Project-URL: Issues, https://github.com/common-workflow-language/cwl-utils/issues
Project-URL: Changelog, https://github.com/common-workflow-language/cwl-utils/releases
Project-URL: Related Tools, https://www.commonwl.org/tools/
Keywords: cwl,commonwl,common-workflow-language,sciworkflows
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: File Formats
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Distributed Computing
Classifier: Typing :: Typed
Requires-Python: <3.14,>=3.9
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: cwl-upgrader>=1.2.3
Requires-Dist: packaging
Requires-Dist: rdflib
Requires-Dist: requests
Requires-Dist: schema-salad<9,>=8.8.20250205075315
Requires-Dist: ruamel.yaml<0.19,>=0.17.6
Requires-Dist: typing_extensions; python_version < "3.10"
Provides-Extra: pretty
Requires-Dist: cwlformat; extra == "pretty"
Provides-Extra: testing
Requires-Dist: pytest<9; extra == "testing"
Requires-Dist: pytest-mock; extra == "testing"

|Linux Build Status| |Code coverage| |Documentation Status|

.. |Linux Build Status| image:: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml/badge.svg?branch=main
   :target: https://github.com/common-workflow-language/cwl-utils/actions/workflows/ci-tests.yml
.. |Code coverage| image:: https://codecov.io/gh/common-workflow-language/cwl-utils/branch/main/graph/badge.svg
   :target: https://codecov.io/gh/common-workflow-language/cwl-utils
.. |Documentation Status| image:: https://readthedocs.org/projects/cwl-utils/badge/?version=latest
   :target: https://cwl-utils.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

cwl-utils
---------

Python Utilities and Autogenerated Classes for loading and parsing
`CWL v1.0 <https://www.commonwl.org/v1.0/>`__,
`CWL v1.1 <https://www.commonwl.org/v1.1/>`__, and
`CWL v1.2 <https://www.commonwl.org/v1.2/>`__ documents.

Requires Python 3.9+

Installation
------------

::

   pip3 install cwl-utils

To install from source::

   git clone https://github.com/common-workflow-language/cwl-utils.git
   cd cwl-utils
   pip3 install .

Usage
-----

Pull all the referenced software container images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-docker-extract`` is useful to cache or pre-pull all software container
images referenced in a CWL CommandLineTool or CWL Workflow (including all
referenced CommandLineTools and sub-Workflows and so on).
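The recursive collection that cwl-docker-extract performs can be sketched with plain dictionaries. The following is a minimal illustration of the traversal idea only, not the tool's actual implementation; the ``collect_docker_pulls`` helper and the sample document are hypothetical:

```python
from typing import Any


def collect_docker_pulls(node: Any) -> "list[str]":
    """Recursively gather DockerRequirement dockerPull values from a CWL doc dict."""
    pulls = []
    if isinstance(node, dict):
        if node.get("class") == "DockerRequirement" and "dockerPull" in node:
            pulls.append(node["dockerPull"])
        for value in node.values():
            pulls.extend(collect_docker_pulls(value))
    elif isinstance(node, list):
        for item in node:
            pulls.extend(collect_docker_pulls(item))
    return pulls


# A tiny CWL Workflow, represented as a plain dict, with one nested tool.
doc = {
    "class": "Workflow",
    "steps": {
        "md5": {
            "run": {
                "class": "CommandLineTool",
                "requirements": [
                    {"class": "DockerRequirement", "dockerPull": "debian:stable-slim"}
                ],
            }
        }
    },
}

print(collect_docker_pulls(doc))  # ['debian:stable-slim']
```

The real tool works on parsed CWL objects and also follows ``run`` references to other files; the dict walk above only shows why nested sub-workflow images are found.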
The default behaviour is to use the Docker engine to download and save the
software container images in Docker format.

.. code:: bash

   cwl-docker-extract path_to_my_workflow.cwl

.. code:: bash

   cwl-docker-extract --dir DIRECTORY path_to_my_workflow.cwl

Or you can use the Singularity software container engine to download and save
the software container images and convert them to the Singularity format at
the same time.

.. code:: bash

   cwl-docker-extract --singularity --dir DIRECTORY path_to_my_workflow.cwl

Print all referenced software packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-cite-extract`` prints all software packages found (recursively) in the
specified CWL document.

Currently the package name and any listed specs and version field are printed
for all ``SoftwareRequirement``\ s found.

.. code:: bash

   cwl-cite-extract path_to_my_workflow.cwl

Replace CWL Expressions with concrete steps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-expression-refactor`` refactors CWL documents so that any CWL Expression
evaluations are separate steps (either CWL ExpressionTools or CWL
CommandLineTools). This allows execution by CWL engines that do not want to
support inline expression evaluation outside of concrete steps, or do not want
to directly support CWL's optional ``InlineJavascriptRequirement`` at all.

.. code:: bash

   cwl-expression-refactor directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Split a packed CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-graph-split`` splits a packed CWL document file into multiple files.
Packed CWL documents use the ``$graph`` construct to contain multiple CWL
Process objects (Workflow, CommandLineTool, ExpressionTool, Operation).
Typically packed CWL documents contain a CWL Workflow under the name "main"
and the workflow steps (including any sub-workflows).

.. code:: bash

   cwl-graph-split --outdir optional/directory/path/to/save/outputs path_to_my_workflow.cwl

Normalize a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-normalizer`` normalizes one or more CWL documents so that, for each
document, a JSON format CWL document is produced with it and all of its
dependencies packed together, upgrading to CWL v1.2 as needed. Can optionally
refactor CWL Expressions into separate steps in the manner of
``cwl-expression-refactor``.

.. code:: bash

   cwl-normalizer directory/path/to/save/outputs path_to_my_workflow.cwl [more_workflows.cwl]

Generate a JSON Schema for Workflow Parameters from a CWL document
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``cwl-inputs-schema-gen`` generates a JSON Schema for workflow input
parameters from a CWL document.

.. code:: bash

   cwl-inputs-schema-gen path_to_my_workflow.cwl

Using the CWL Parsers
~~~~~~~~~~~~~~~~~~~~~~

.. code:: python

   from pathlib import Path
   from ruamel import yaml
   import sys
   from cwl_utils.parser import load_document_by_uri, save

   # File Input - This is the only thing you will need to adjust or take in as an input to your function:
   cwl_file = Path("testdata/md5sum.cwl")  # or a plain string works as well

   # Import CWL Object
   cwl_obj = load_document_by_uri(cwl_file)

   # View CWL Object
   print("List of object attributes:\n{}".format("\n".join(map(str, dir(cwl_obj)))))

   # Export CWL Object into a built-in typed object
   saved_obj = save(cwl_obj)
   print(f"Export of the loaded CWL object: {saved_obj}.")

Development
-----------

Regenerate parsers
~~~~~~~~~~~~~~~~~~

To regenerate, install the ``schema_salad`` package and run:

``cwl_utils/parser/cwl_v1_0.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/common-workflow-language/raw/codegen/v1.0/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_0" > cwl_utils/parser/cwl_v1_0.py``

``cwl_utils/parser/cwl_v1_1.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.1/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_1" > cwl_utils/parser/cwl_v1_1.py``

``cwl_utils/parser/cwl_v1_2.py`` was created via
``schema-salad-tool --codegen python https://github.com/common-workflow-language/cwl-v1.2/raw/codegen/extensions.yml --codegen-parser-info "org.w3id.cwl.v1_2" > cwl_utils/parser/cwl_v1_2.py``

Release
~~~~~~~

To release CWLUtils, bump the version in ``cwl_utils/__meta__.py``, and tag
that commit with the new version. The
`gh-action-pypi-publish <https://github.com/pypa/gh-action-pypi-publish>`__
should release that tag.

cwl_utils-0.37/cwl_utils.egg-info/SOURCES.txt

LICENSE
MANIFEST.in
Makefile
README.rst
create_cwl_from_objects.py
lint-requirements.txt
load_cwl_by_path.py
mypy-requirements.txt
pyproject.toml
requirements.txt
test-requirements.txt
cwl_utils/__init__.py
cwl_utils/__meta__.py
cwl_utils/cite_extract.py
cwl_utils/cwlNodeEngine.js
cwl_utils/cwlNodeEngineJSConsole.js
cwl_utils/cwlNodeEngineWithContext.js
cwl_utils/cwl_v1_0_expression_refactor.py
cwl_utils/cwl_v1_1_expression_refactor.py
cwl_utils/cwl_v1_2_expression_refactor.py
cwl_utils/docker_extract.py
cwl_utils/errors.py
cwl_utils/expression.py
cwl_utils/expression_refactor.py
cwl_utils/file_formats.py
cwl_utils/graph_split.py
cwl_utils/image_puller.py
cwl_utils/inputs_schema_gen.py
cwl_utils/loghandler.py
cwl_utils/normalizer.py
cwl_utils/pack.py
cwl_utils/py.typed
cwl_utils/sandboxjs.py
cwl_utils/schemadef.py
cwl_utils/singularity.py
cwl_utils/types.py
cwl_utils/utils.py
cwl_utils.egg-info/PKG-INFO
cwl_utils.egg-info/SOURCES.txt
cwl_utils.egg-info/dependency_links.txt
cwl_utils.egg-info/entry_points.txt
cwl_utils.egg-info/requires.txt
cwl_utils.egg-info/top_level.txt
cwl_utils/parser/__init__.py
cwl_utils/parser/cwl_v1_0.py
cwl_utils/parser/cwl_v1_0_utils.py
cwl_utils/parser/cwl_v1_1.py
cwl_utils/parser/cwl_v1_1_utils.py
cwl_utils/parser/cwl_v1_2.py
cwl_utils/parser/cwl_v1_2_utils.py
cwl_utils/parser/latest.py
cwl_utils/parser/utils.py
cwl_utils/templates/workflow_input_json_schema_template.json
docs/Makefile
docs/conf.py
docs/index.rst
docs/requirements.txt
docs/_static/favicon.ico
testdata/EDAM_subset.owl
testdata/cond-single-source-wf-003.1.cwl
testdata/cond-single-source-wf-004.1.cwl
testdata/cond-single-source-wf-005.1.cwl
testdata/cond-wf-003.1.cwl
testdata/cond-wf-004.1.cwl
testdata/cond-wf-005.1.cwl
testdata/count-lines6-single-source-wf_v1_0.cwl
testdata/count-lines6-single-source-wf_v1_1.cwl
testdata/count-lines6-single-source-wf_v1_2.cwl
testdata/count-lines6-wf_v1_0.cwl
testdata/count-lines6-wf_v1_1.cwl
testdata/count-lines6-wf_v1_2.cwl
testdata/count-lines7-single-source-wf_v1_0.cwl
testdata/count-lines7-single-source-wf_v1_1.cwl
testdata/count-lines7-single-source-wf_v1_2.cwl
testdata/count-lines7-wf_v1_0.cwl
testdata/count-lines7-wf_v1_1.cwl
testdata/count-lines7-wf_v1_2.cwl
testdata/debian_image_id.cwl
testdata/dir4-job.yml
testdata/dockstore-tool-md5sum.cwl
testdata/dockstore-tool-md5sum_v11.cwl
testdata/dockstore-tool-md5sum_v12.cwl
testdata/echo-tool-packed.cwl
testdata/echo_v1_0.cwl
testdata/echo_v1_1.cwl
testdata/echo_v1_2.cwl
testdata/env-job3.yaml
testdata/foo-array.cwl
testdata/foo.cwl
testdata/formattest-job.json
testdata/formattest2.cwl
testdata/formattest2_v1_0.cwl
testdata/formattest2_v1_1.cwl
testdata/gx_edam.ttl
testdata/js-expr-req-wf.cwl
testdata/lib.js
testdata/map-ordering-v1_0.cwl
testdata/map-ordering-v1_1.cwl
testdata/map-ordering-v1_2.cwl
testdata/md5sum.cwl
testdata/md5sum.input
testdata/md5sum.json
testdata/md5sum_v11.cwl
testdata/md5sum_v12.cwl
testdata/nested-array-job.yml
testdata/record-output-wf_v1_0.cwl
testdata/record-output-wf_v1_1.cwl
testdata/record-output-wf_v1_2.cwl
testdata/record-output_v1_0.cwl
testdata/record-output_v1_1.cwl
testdata/record-output_v1_2.cwl
testdata/revsort-run-1-packed.cwl
testdata/scatter-wf1_v1_0.cwl
testdata/scatter-wf1_v1_1.cwl
testdata/scatter-wf1_v1_2.cwl
testdata/scatter-wf2_v1_0.cwl
testdata/scatter-wf2_v1_1.cwl
testdata/scatter-wf2_v1_2.cwl
testdata/scatter-wf3_v1_0.cwl
testdata/scatter-wf3_v1_1.cwl
testdata/scatter-wf3_v1_2.cwl
testdata/seqtk_seq.cwl
testdata/stdout-wf_v1_0.cwl
testdata/stdout-wf_v1_1.cwl
testdata/stdout-wf_v1_2.cwl
testdata/step-valuefrom2-wf_v1_0.cwl
testdata/step-valuefrom2-wf_v1_1.cwl
testdata/step-valuefrom2-wf_v1_2.cwl
testdata/step-valuefrom3-wf_v1_0.cwl
testdata/step-valuefrom3-wf_v1_1.cwl
testdata/step-valuefrom3-wf_v1_2.cwl
testdata/step_valuefrom5_wf_v1_0.cwl
testdata/step_valuefrom5_wf_v1_1.cwl
testdata/step_valuefrom5_wf_v1_2.cwl
testdata/step_valuefrom5_wf_with_id_v1_0.cwl
testdata/step_valuefrom5_wf_with_id_v1_1.cwl
testdata/step_valuefrom5_wf_with_id_v1_2.cwl
testdata/wc3-tool_v1_0.cwl
testdata/wc3-tool_v1_1.cwl
testdata/wc3-tool_v1_2.cwl
testdata/wf2.cwl
testdata/workflow_input_format_expr.cwl
testdata/workflow_input_format_expr_v1_1.cwl
testdata/workflow_input_format_expr_v1_2.cwl
testdata/workflow_input_sf_expr.cwl
testdata/workflow_input_sf_expr_array.cwl
testdata/workflow_input_sf_expr_array_v1_1.cwl
testdata/workflow_input_sf_expr_array_v1_2.cwl
testdata/workflow_input_sf_expr_v1_1.cwl
testdata/workflow_input_sf_expr_v1_2.cwl
testdata/checker_wf/broken-wf.cwl
testdata/checker_wf/broken-wf2.cwl
testdata/checker_wf/broken-wf3.cwl
testdata/checker_wf/cat.cwl
testdata/checker_wf/echo.cwl
testdata/checker_wf/functional-wf.cwl
testdata/extensions/all-output-loop_v1_2.cwl
testdata/extensions/cuda-requirement_v1_0.cwl
testdata/extensions/cuda-requirement_v1_1.cwl
testdata/extensions/cuda-requirement_v1_2.cwl
testdata/extensions/inplace-update-requirement_v1_0.cwl
testdata/extensions/load-listing-requirement_v1_0.cwl
testdata/extensions/mpi-requirement_v1_0.cwl
testdata/extensions/mpi-requirement_v1_1.cwl
testdata/extensions/mpi-requirement_v1_2.cwl
testdata/extensions/network-access_v1_0.cwl
testdata/extensions/process-generator_v1_0.cwl
testdata/extensions/process-generator_v1_1.cwl
testdata/extensions/process-generator_v1_2.cwl
testdata/extensions/secrets_v1_0.cwl
testdata/extensions/secrets_v1_1.cwl
testdata/extensions/secrets_v1_2.cwl
testdata/extensions/shm-size_v1_0.cwl
testdata/extensions/shm-size_v1_1.cwl
testdata/extensions/shm-size_v1_2.cwl
testdata/extensions/single-var-loop_v1_2.cwl
testdata/extensions/time-limit_v1_0.cwl
testdata/extensions/work-reuse_v1_0.cwl
testdata/remote-cwl/tool1.cwl
testdata/remote-cwl/tool2.cwl
testdata/remote-cwl/wf1.cwl
testdata/types/array.yml
testdata/types/recursive.yml
testdata/types/singletype.yml
testdata/types/singletype2.yml
testdata/types/testtypes.yml
testdata/workflows/count-lines16-wf.cwl
testdata/workflows/import-in-type.cwl
testdata/workflows/parseInt-tool.cwl
testdata/workflows/scatter-wf4.cwl
testdata/workflows/type-import.yaml
testdata/workflows/wc-tool.cwl
testdata/workflows/wf5.cwl
tests/__init__.py
tests/load_cwl_by_path.py
tests/test_cite_extract.py
tests/test_docker_extract.py
tests/test_etools_to_clt.py
tests/test_examples.py
tests/test_extensions.py
tests/test_format.py
tests/test_graph_split.py
tests/test_inputs_schema_gen.py
tests/test_js_sandbox.py
tests/test_meta.py
tests/test_packing.py
tests/test_parser.py
tests/test_parser_utils.py
tests/test_subscope.py
tests/test_utils.py
tests/util.py

cwl_utils-0.37/cwl_utils.egg-info/dependency_links.txt

cwl_utils-0.37/cwl_utils.egg-info/entry_points.txt

[console_scripts]
cwl-cite-extract = cwl_utils.cite_extract:main
cwl-docker-extract = cwl_utils.docker_extract:main
cwl-expression-refactor = cwl_utils.expression_refactor:main
cwl-graph-split = cwl_utils.graph_split:main
cwl-inputs-schema-gen = cwl_utils.inputs_schema_gen:main
cwl-normalizer = cwl_utils.normalizer:main

cwl_utils-0.37/cwl_utils.egg-info/requires.txt

cwl-upgrader>=1.2.3
packaging
rdflib
requests
schema-salad<9,>=8.8.20250205075315
ruamel.yaml<0.19,>=0.17.6

[:python_version < "3.10"]
typing_extensions

[pretty]
cwlformat

[testing]
pytest<9
pytest-mock

cwl_utils-0.37/cwl_utils.egg-info/top_level.txt

cwl_utils

cwl_utils-0.37/docs/Makefile

# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

cwl_utils-0.37/docs/_static/favicon.ico
0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/docs/conf.py0000644000175100001660000000660014753463342015317 0ustar00runnerdocker# Configuration file for the Sphinx documentation builder. # # This file only contains a selection of the most common options. For a full # list see the documentation: # https://www.sphinx-doc.org/en/master/usage/configuration.html # -- Path setup -------------------------------------------------------------- import importlib.metadata # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # import os import sys import time from datetime import datetime, timezone sys.path.insert(0, os.path.abspath("..")) # -- Project information ----------------------------------------------------- build_date = datetime.fromtimestamp( int(os.environ.get("SOURCE_DATE_EPOCH", time.time())), timezone.utc ) project = "cwl-utils: Python utilities for Common Workflow Language documents" copyright = f"2019 — {build_date.year} CWL Community" author = "Common Workflow Language Project contributors" # -- General configuration --------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. 
extensions = [ "sphinx.ext.autodoc", "sphinx.ext.autosummary", "sphinx.ext.intersphinx", "sphinx.ext.inheritance_diagram", "autoapi.extension", "sphinx_autodoc_typehints", "sphinx_rtd_theme", "sphinxcontrib.autoprogram", ] intersphinx_mapping = { "python": ("https://docs.python.org/3", None), "schema_salad": ("https://schema-salad.readthedocs.io/en/stable/", None), "rdflib": ("https://rdflib.readthedocs.io/en/6.2.0/", None), # "ruamel.yaml": ("https://yaml.readthedocs.io/en/stable/", None), } # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path. exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] # -- Options for HTML output ------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = "sphinx_rtd_theme" # html_logo = "_static/logo.png" html_favicon = "_static/favicon.ico" html_theme_options = { "collapse_navigation": False, } # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static"] release = importlib.metadata.version("cwl_utils") version = ".".join(release.split(".")[:2]) autoapi_dirs = ["../cwl_utils"] autodoc_typehints = "description" autoapi_keep_files = True autoapi_ignore = ["*migrations*", "*.pyi"] autoapi_options = [ "members", "undoc-members", "show-inheritance", "show-inheritance-diagram", "show-module-summary", "imported-members", "special-members", ] # sphinx-autodoc-typehints always_document_param_types = True # If False, do not add type info for undocumented parameters. 
# If True, add stub documentation for undocumented parameters to be able to add type info. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/docs/index.rst0000644000175100001660000000214614753463342015662 0ustar00runnerdocker====================================================== Python classes for loading, manipulating CWL documents ====================================================== This package provides Python classes for loading and manipulating CWL documents, as well as other tools related to working with CWL. Modules ======= .. toctree:: :maxdepth: 2 :caption: Contents: autoapi/index Included Utility Programs ========================= .. autoprogram:: cwl_utils.cite_extract:arg_parser() :prog: cwl-cite-extract .. autoprogram:: cwl_utils.docker_extract:arg_parser() :prog: cwl-docker-extract .. autoprogram:: cwl_utils.expression_refactor:arg_parser() :prog: cwl-expression-refactor .. autoprogram:: cwl_utils.graph_split:arg_parser() :prog: cwl-graph-split .. autoprogram:: cwl_utils.normalizer:arg_parser() :prog: cwl-normalizer .. 
autoprogram:: cwl_utils.inputs_schema_gen:arg_parser() :prog: cwl-inputs-schema-gen Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/docs/requirements.txt0000644000175100001660000000014614753463342017303 0ustar00runnerdockersphinx >= 2.2, < 9 sphinx-rtd-theme sphinx-autoapi sphinx-autodoc-typehints sphinxcontrib-autoprogram ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/lint-requirements.txt0000644000175100001660000000007214753463342017315 0ustar00runnerdockerflake8-bugbear < 24.13 black == 25.* codespell isort >= 5 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/load_cwl_by_path.py0000644000175100001660000000124314753463342016732 0ustar00runnerdocker""" Example from README.md. Please synchronize all changes between the two. 
""" # SPDX-License-Identifier: Apache-2.0 from pathlib import Path from cwl_utils.parser import load_document_by_uri, save # File Input - This is the only thing you will need to adjust or take in as an input to your function: cwl_file = Path("testdata/md5sum.cwl") # or a plain string works as well # Import CWL Object cwl_obj = load_document_by_uri(cwl_file) # View CWL Object print("List of object attributes:\n{}".format("\n".join(map(str, dir(cwl_obj))))) # Export CWL Object into a built-in typed object saved_obj = save(cwl_obj) print(f"Export of the loaded CWL object: {saved_obj}.") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/mypy-requirements.txt0000644000175100001660000000013014753463342017340 0ustar00runnerdockermypy==1.15.0 typing_extensions types-requests types-jsonschema types-setuptools>=57.4.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/pyproject.toml0000644000175100001660000000614714753463342016012 0ustar00runnerdocker[build-system] requires = ["setuptools>=61.2"] build-backend = "setuptools.build_meta" [project] name = "cwl-utils" authors = [{name = "Common Workflow Language project contributors", email = "common-workflow-language@googlegroups.com"}] license = {text = "Apache 2.0"} keywords = ["cwl", "commonwl", "common-workflow-language", "sciworkflows"] readme = "README.rst" classifiers = [ "Development Status :: 4 - Beta", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Operating System :: POSIX", "Operating System :: MacOS :: MacOS X", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Topic :: File Formats", "Topic :: 
Scientific/Engineering", "Topic :: Scientific/Engineering :: Information Analysis", "Topic :: Software Development :: Libraries", "Topic :: System :: Distributed Computing", "Typing :: Typed", ] requires-python = ">=3.9,<3.14" dynamic = ["version", "dependencies"] [project.urls] Documentation = "https://cwl-utils.readthedocs.io/" Homepage = "https://www.commonwl.org" Repository = "https://github.com/common-workflow-language/cwl-utils" Issues = "https://github.com/common-workflow-language/cwl-utils/issues" Changelog = "https://github.com/common-workflow-language/cwl-utils/releases" "Related Tools" = "https://www.commonwl.org/tools/" [project.optional-dependencies] pretty = ["cwlformat"] testing = [ "pytest<9", "pytest-mock", ] [project.scripts] cwl-cite-extract = "cwl_utils.cite_extract:main" cwl-docker-extract = "cwl_utils.docker_extract:main" cwl-expression-refactor = "cwl_utils.expression_refactor:main" cwl-graph-split = "cwl_utils.graph_split:main" cwl-normalizer = "cwl_utils.normalizer:main" cwl-inputs-schema-gen = "cwl_utils.inputs_schema_gen:main" [tool.aliases] test = "pytest" [tool.pytest.ini_options] testpaths = ["tests"] [tool.setuptools] packages = [ "cwl_utils", "cwl_utils.parser", "cwl_utils.tests", "cwl_utils.testdata", "cwl_utils.testdata.checker_wf", "cwl_utils.testdata.extensions", "cwl_utils.testdata.types", "cwl_utils.testdata.workflows", ] include-package-data = true [tool.setuptools.package-dir] "cwl_utils.parser" = "cwl_utils/parser" "cwl_utils.tests" = "tests" "cwl_utils.testdata" = "testdata" "cwl_utils.testdata.checker_wf" = "testdata/checker_wf" "cwl_utils.testdata.extensions" = "testdata/extensions" "cwl_utils.testdata.types" = "testdata/types" "cwl_utils.testdata.workflows" = "testdata/workflows" [tool.setuptools.package-data] "cwl_utils" = ["templates/*.json"] [tool.setuptools.dynamic] version = {attr = "cwl_utils.__meta__.__version__"} dependencies = {file = ["requirements.txt"]} [tool.isort] profile = "black" [tool.ruff.lint] select = 
[ "D", # pydocstyle ] ignore = [ "D100", "D101", "D102", "D103", "D104", "D105", "D107", "D203", "D212", ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/requirements.txt0000644000175100001660000000023414753463342016351 0ustar00runnerdockercwl-upgrader >= 1.2.3 packaging rdflib requests schema-salad >= 8.8.20250205075315,<9 ruamel.yaml >= 0.17.6, < 0.19 typing_extensions;python_version<'3.10' ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5625606 cwl_utils-0.37/setup.cfg0000644000175100001660000000004614753463357014715 0ustar00runnerdocker[egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/test-requirements.txt0000644000175100001660000000013014753463342017321 0ustar00runnerdockerpytest < 9 pytest-cov pytest-xdist cwlformat pytest-mock >= 1.10.0 jsonschema >= 4.21.1 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5505605 cwl_utils-0.37/testdata/0000755000175100001660000000000014753463357014705 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/EDAM_subset.owl0000644000175100001660000013642114753463342017524 0ustar00runnerdocker ]> EDAM_topic http://edamontology.org/topic_ "EDAM topics" EDAM_operation http://edamontology.org/operation_ "EDAM operations" 09:07:2015 3625 formats "EDAM data formats" EDAM An ontology of bioinformatics topics, operations, types of data including identifiers, and data formats identifiers "EDAM types of identifiers" data "EDAM types of data" relations "EDAM relations" edam "EDAM" EDAM editors: Jon Ison, Matus Kalas, and Herve Menager. Contributors: Inge Jonassen, Dan Bolser, Hamish McWilliam, Mahmut Uludag, James Malone, Rodrigo Lopez, Steve Pettifer, and Peter Rice. 
Contributions from these projects: EMBRACE, ELIXIR, and BioMedBridges (EU); EMBOSS (BBSRC, UK); eSysbio, FUGE Bioinformatics Platform, and ELIXIR.NO/Norwegian Bioinformatics Platform (Research Council of Norway). See http://edamontology.org for documentation and licence. operations "EDAM operations" EDAM http://edamontology.org/ "EDAM relations and concept properties" application/rdf+xml EDAM_data http://edamontology.org/data_ "EDAM types of data" concept_properties "EDAM concept properties" Jon Ison Matúš Kalaš Jon Ison, Matus Kalas, Hervé Ménager EDAM_format http://edamontology.org/format_ "EDAM data formats" topics "EDAM topics" 1.11 Hervé Ménager EDAM is an ontology of well established, familiar concepts that are prevalent within bioinformatics, including types of data and data identifiers, data formats, operations and topics. EDAM is a simple ontology - essentially a set of terms with synonyms and definitions - organised into an intuitive hierarchy for convenient use by curators, software developers and end-users. EDAM is suitable for large-scale semantic annotations and categorization of diverse bioinformatics resources. EDAM is also suitable for diverse applications including for example within workbenches and workflow-management systems, software distributions, and resource registries. Created in Version in which a concept was created. true concept_properties Documentation Specification 'Documentation' trailing modifier (qualifier, 'documentation') of 'xref' links of 'Format' concepts. When 'true', the link is pointing to a page with explanation, description, documentation, or specification of the given data format. true concept_properties Example 'Example' concept property ('example' metadata tag) lists examples of valid values of types of identifiers (accessions). Applicable to some other types of data, too. true concept_properties Obsolete since true concept_properties Version in which a concept was made obsolete. 
Regular expression 'Regular expression' concept property ('regex' metadata tag) specifies the allowed values of types of identifiers (accessions). Applicable to some other types of data, too. concept_properties true has format "http://purl.obolibrary.org/obo/OBI_0000298" Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that is (or is in a role of) 'Data', or an input, output, input or output argument of an 'Operation'. Object B can either be a concept that is a 'Format', or in unexpected cases an entity outside of an ontology that is a 'Format' or is in the role of a 'Format'. In EDAM, 'has_format' is not explicitly defined between EDAM concepts, only the inverse 'is_format_of'. false OBO_REL:is_a relations http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#has-quality" false false edam 'A has_format B' defines for the subject A, that it has the object B as its data format. false has function http://wsio.org/has_function false OBO_REL:is_a OBO_REL:bearer_of edam Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated). Object B can either be a concept that is (or is in a role of) a function, or an entity outside of an ontology that is (or is in a role of) a function specification. In the scope of EDAM, 'has_function' serves only for relating annotated entities outside of EDAM with 'Operation' concepts. false http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#has-quality" true 'A has_function B' defines for the subject A, that it has the object B as its function. "http://purl.obolibrary.org/obo/OBI_0000306" relations false true In very unusual cases. Is defined anywhere? Not in the 'unknown' version of RO. 
'OBO_REL:bearer_of' is narrower in the sense that it only relates ontological categories (concepts) that are an 'independent_continuant' (snap:IndependentContinuant) with ontological categories that are a 'specifically_dependent_continuant' (snap:SpecificallyDependentContinuant), and broader in the sense that it relates with any borne objects not just functions of the subject. OBO_REL:bearer_of has identifier false false relations OBO_REL:is_a edam 'A has_identifier B' defines for the subject A, that it has the object B as its identifier. Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated). Object B can either be a concept that is an 'Identifier', or an entity outside of an ontology that is an 'Identifier' or is in the role of an 'Identifier'. In EDAM, 'has_identifier' is not explicitly defined between EDAM concepts, only the inverse 'is_identifier_of'. false false has input OBO_REL:has_participant "http://purl.obolibrary.org/obo/OBI_0000293" false http://wsio.org/has_input Subject A can either be concept that is or has an 'Operation' function, or an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that has an 'Operation' function or is an 'Operation'. Object B can be any concept or entity. In EDAM, only 'has_input' is explicitly defined between EDAM concepts ('Operation' 'has_input' 'Data'). The inverse, 'is_input_of', is not explicitly defined. relations OBO_REL:is_a false 'A has_input B' defines for the subject A, that it has the object B as a necessary or actual input or input argument. 
false true edam OBO_REL:has_participant 'OBO_REL:has_participant' is narrower in the sense that it only relates ontological categories (concepts) that are a 'process' (span:Process) with ontological categories that are a 'continuant' (snap:Continuant), and broader in the sense that it relates with any participating objects not just inputs or input arguments of the subject. true In very unusual cases. has output http://wsio.org/has_output Subject A can either be concept that is or has an 'Operation' function, or an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that has an 'Operation' function or is an 'Operation'. Object B can be any concept or entity. In EDAM, only 'has_output' is explicitly defined between EDAM concepts ('Operation' 'has_output' 'Data'). The inverse, 'is_output_of', is not explicitly defined. edam "http://purl.obolibrary.org/obo/OBI_0000299" OBO_REL:is_a relations OBO_REL:has_participant true 'A has_output B' defines for the subject A, that it has the object B as a necessary or actual output or output argument. false false false 'OBO_REL:has_participant' is narrower in the sense that it only relates ontological categories (concepts) that are a 'process' (span:Process) with ontological categories that are a 'continuant' (snap:Continuant), and broader in the sense that it relates with any participating objects not just outputs or output arguments of the subject. It is also not clear whether an output (result) actually participates in the process that generates it. OBO_REL:has_participant In very unusual cases. true has topic relations true Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated). Object B can either be a concept that is a 'Topic', or in unexpected cases an entity outside of an ontology that is a 'Topic' or is in the role of a 'Topic'. 
In EDAM, only 'has_topic' is explicitly defined between EDAM concepts ('Operation' or 'Data' 'has_topic' 'Topic'). The inverse, 'is_topic_of', is not explicitly defined. false 'A has_topic B' defines for the subject A, that it has the object B as its topic (A is in the scope of a topic B). edam OBO_REL:is_a http://annotation-ontology.googlecode.com/svn/trunk/annotation-core.owl#hasTopic false "http://purl.obolibrary.org/obo/IAO_0000136" false http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#has-quality "http://purl.obolibrary.org/obo/OBI_0000298" In very unusual cases. true is format of false OBO_REL:is_a false false false 'A is_format_of B' defines for the subject A, that it is a data format of the object B. edam relations Subject A can either be a concept that is a 'Format', or in unexpected cases an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that is a 'Format' or is in the role of a 'Format'. Object B can be any concept or entity outside of an ontology that is (or is in a role of) 'Data', or an input, output, input or output argument of an 'Operation'. In EDAM, only 'is_format_of' is explicitly defined between EDAM concepts ('Format' 'is_format_of' 'Data'). The inverse, 'has_format', is not explicitly defined. OBO_REL:quality_of http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#inherent-in OBO_REL:quality_of Is defined anywhere? Not in the 'unknown' version of RO. 'OBO_REL:quality_of' might be seen narrower in the sense that it only relates subjects that are a 'quality' (snap:Quality) with objects that are an 'independent_continuant' (snap:IndependentContinuant), and is broader in the sense that it relates any qualities of the object. is function of Subject A can either be concept that is (or is in a role of) a function, or an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that is (or is in a role of) a function specification. 
Object B can be any concept or entity. Within EDAM itself, 'is_function_of' is not used. OBO_REL:inheres_in true OBO_REL:is_a false 'A is_function_of B' defines for the subject A, that it is a function of the object B. OBO_REL:function_of edam http://wsio.org/is_function_of relations http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#inherent-in false false OBO_REL:inheres_in Is defined anywhere? Not in the 'unknown' version of RO. 'OBO_REL:inheres_in' is narrower in the sense that it only relates ontological categories (concepts) that are a 'specifically_dependent_continuant' (snap:SpecificallyDependentContinuant) with ontological categories that are an 'independent_continuant' (snap:IndependentContinuant), and broader in the sense that it relates any borne subjects not just functions. true In very unusual cases. OBO_REL:function_of Is defined anywhere? Not in the 'unknown' version of RO. 'OBO_REL:function_of' only relates subjects that are a 'function' (snap:Function) with objects that are an 'independent_continuant' (snap:IndependentContinuant), so for example no processes. It does not define explicitly that the subject is a function of the object. is identifier of false false edam false relations Subject A can either be a concept that is an 'Identifier', or an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that is an 'Identifier' or is in the role of an 'Identifier'. Object B can be any concept or entity outside of an ontology. In EDAM, only 'is_identifier_of' is explicitly defined between EDAM concepts (only 'Identifier' 'is_identifier_of' 'Data'). The inverse, 'has_identifier', is not explicitly defined. 'A is_identifier_of B' defines for the subject A, that it is an identifier of the object B. 
OBO_REL:is_a false is input of false http://wsio.org/is_input_of relations true false OBO_REL:participates_in OBO_REL:is_a "http://purl.obolibrary.org/obo/OBI_0000295" edam Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated). Object B can either be a concept that is or has an 'Operation' function, or an entity outside of an ontology that has an 'Operation' function or is an 'Operation'. In EDAM, 'is_input_of' is not explicitly defined between EDAM concepts, only the inverse 'has_input'. false 'A is_input_of B' defines for the subject A, that it as a necessary or actual input or input argument of the object B. 'OBO_REL:participates_in' is narrower in the sense that it only relates ontological categories (concepts) that are a 'continuant' (snap:Continuant) with ontological categories that are a 'process' (span:Process), and broader in the sense that it relates any participating subjects not just inputs or input arguments. OBO_REL:participates_in In very unusual cases. true is output of OBO_REL:is_a false false Subject A can be any concept or entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated). Object B can either be a concept that is or has an 'Operation' function, or an entity outside of an ontology that has an 'Operation' function or is an 'Operation'. In EDAM, 'is_output_of' is not explicitly defined between EDAM concepts, only the inverse 'has_output'. edam false 'A is_output_of B' defines for the subject A, that it as a necessary or actual output or output argument of the object B. OBO_REL:participates_in http://wsio.org/is_output_of true relations "http://purl.obolibrary.org/obo/OBI_0000312" In very unusual cases. 
true OBO_REL:participates_in 'OBO_REL:participates_in' is narrower in the sense that it only relates ontological categories (concepts) that are a 'continuant' (snap:Continuant) with ontological categories that are a 'process' (span:Process), and broader in the sense that it relates any participating subjects not just outputs or output arguments. It is also not clear whether an output (result) actually participates in the process that generates it. is topic of 'A is_topic_of B' defines for the subject A, that it is a topic of the object B (a topic A is the scope of B). relations OBO_REL:quality_of false true false Subject A can either be a concept that is a 'Topic', or in unexpected cases an entity outside of an ontology (or an ontology concept in a role of an entity being semantically annotated) that is a 'Topic' or is in the role of a 'Topic'. Object B can be any concept or entity outside of an ontology. In EDAM, 'is_topic_of' is not explicitly defined between EDAM concepts, only the inverse 'has_topic'. http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl#inherent-in false OBO_REL:is_a edam OBO_REL:quality_of Is defined anywhere? Not in the 'unknown' version of RO. 'OBO_REL:quality_of' might be seen narrower in the sense that it only relates subjects that are a 'quality' (snap:Quality) with objects that are an 'independent_continuant' (snap:IndependentContinuant), and is broader in the sense that it relates any qualities of the object. In very unusual cases. true Resource type beta12orEarlier beta12orEarlier A type of computational resource used in bioinformatics. true FASTA beta12orEarlier FASTA format FASTA sequence format FASTA format including NCBI-style IDs. FASTA-like (text) This concept may also be used for the many non-standard FASTA-like formats. http://filext.com/file-extension/FASTA beta12orEarlier A text format resembling FASTA format. Textual format http://filext.com/file-extension/TSV http://www.iana.org/assignments/media-types/text/plain Textual format. 
Data in text format can be compressed into binary format, or can be a value of an XML element or attribute. Markup formats are not considered textual (or more precisely, not plain-textual). txt http://filext.com/file-extension/TXT Plain text http://www.iana.org/assignments/media-types/media-types.xhtml#text beta12orEarlier Alignment format (text) Text format for molecular sequence alignment information. beta12orEarlier Obsolete concept (EDAM) 1.2 Needed for conversion to the OBO format. An obsolete concept (redefined in EDAM). true ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5515604 cwl_utils-0.37/testdata/checker_wf/0000755000175100001660000000000014753463357017005 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/broken-wf.cwl0000755000175100001660000000251614753463342021407 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: ScatterFeatureRequirement: {} MultipleInputFeatureRequirement: {} StepInputExpressionRequirement: {} inputs: letters0: type: [string, int] default: "a0" letters1: type: string[] default: ["a1", "b1"] letters2: type: [string, int] default: "a2" letters3: type: string[] default: ["a3", "b3"] letters4: type: int default: 4 letters5: type: string[] default: ["a5", "b5", "c5"] outputs: all: type: File[] outputSource: cat/txt steps: echo_v: run: echo.cwl in: echo_in: {} out: [txt] echo_w: run: echo.cwl in: echo_in: letters0 out: [txt] echo_x: run: echo.cwl scatter: echo_in in: echo_in: source: [letters1, letters2] linkMerge: merge_nested out: [txt] echo_y: run: echo.cwl scatter: echo_in in: echo_in: source: [letters3, letters4] linkMerge: merge_flattened out: [txt] echo_z: run: echo.cwl in: echo_in: source: letters5 valueFrom: "special value parsed in valueFrom" out: [txt] cat: run: cat.cwl in: cat_in: source: [echo_w/txt, echo_x/txt, echo_y/txt, echo_z/txt, 
letters0] linkMerge: merge_flattened out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/broken-wf2.cwl0000755000175100001660000000250614753463342021470 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: ScatterFeatureRequirement: {} MultipleInputFeatureRequirement: {} StepInputExpressionRequirement: {} inputs: letters0: type: [string, int] default: "a0" letters1: type: string[] default: ["a1", "b1"] letters2: type: [string, int] default: "a2" letters3: type: string[] default: ["a3", "b3"] letters4: type: int default: 4 letters5: type: string[] default: ["a5", "b5", "c5"] outputs: all: type: File[] outputSource: cat/txt steps: echo_v: run: echo.cwl in: {} out: [txt] echo_w: run: echo.cwl in: echo_in: letters0 out: [txt, other] echo_x: run: echo.cwl scatter: echo_in in: echo_in: source: [letters1, letters2] linkMerge: merge_nested out: [txt] echo_y: run: echo.cwl scatter: echo_in in: echo_in: source: [letters3, letters4] linkMerge: merge_flattened out: [txt] echo_z: run: echo.cwl in: echo_in: source: letters5 valueFrom: "special value parsed in valueFrom" out: [txt] cat: run: cat.cwl in: cat_in: source: [echo_w/txt, echo_x/txt, echo_y/txt, echo_z/txt, letters0] linkMerge: merge_flattened out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/broken-wf3.cwl0000755000175100001660000000141014753463342021462 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: ScatterFeatureRequirement: {} MultipleInputFeatureRequirement: {} StepInputExpressionRequirement: {} inputs: letters0: type: [string, int] default: "a0" letters1: type: string[] default: ["a1", "b1"] letters2: type: [string, int] default: "a2" letters3: type: string[] default: ["a3", "b3"] letters4: type: string default: "a4" letters5: type: string[] default: 
["a5", "b5", "c5"] outputs: all: type: File outputSource: cat/txt steps: - id: embedded run: functional-wf.cwl in: [] out: - id: All - id: cat run: cat.cwl in: - id: cat_in source: - embedded/All out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/cat.cwl0000755000175100001660000000027414753463342020263 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool baseCommand: cat inputs: cat_in: type: File[] inputBinding: {} stdout: all.txt outputs: txt: type: stdout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/echo.cwl0000755000175100001660000000032714753463342020431 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool baseCommand: echo inputs: echo_in: type: - string - string[] inputBinding: {} stdout: out.txt outputs: txt: type: stdout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/checker_wf/functional-wf.cwl0000755000175100001660000000240314753463342022264 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: ScatterFeatureRequirement: {} MultipleInputFeatureRequirement: {} StepInputExpressionRequirement: {} inputs: letters0: type: [string, int] default: "a0" letters1: type: string[] default: ["a1", "b1"] letters2: type: [string, int] default: "a2" letters3: type: string[] default: ["a3", "b3"] letters4: type: string default: "a4" letters5: type: string[] default: ["a5", "b5", "c5"] outputs: all: type: File outputSource: cat/txt steps: echo_w: run: echo.cwl in: echo_in: letters0 out: [txt] echo_x: run: echo.cwl scatter: echo_in in: echo_in: source: [letters1, letters2] linkMerge: merge_nested out: [txt] echo_y: run: echo.cwl scatter: echo_in in: echo_in: source: [letters3, letters4] linkMerge: merge_flattened out: [txt] 
echo_z: run: echo.cwl in: echo_in: source: letters5 valueFrom: "special value parsed in valueFrom" out: [txt] cat: run: cat.cwl in: cat_in: source: [echo_w/txt, echo_x/txt, echo_y/txt, echo_z/txt] linkMerge: merge_flattened out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-single-source-wf-003.1.cwl0000755000175100001660000000053614753463342022166 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo-array.cwl when: $(inputs.in1 < 1) out: [out1] outputs: out1: type: string outputSource: step1/out1 pickValue: first_non_null requirements: InlineJavascriptRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-single-source-wf-004.1.cwl0000755000175100001660000000054114753463342022163 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo-array.cwl when: $(inputs.in1 < 1) out: [out1] outputs: out1: type: string outputSource: step1/out1 pickValue: the_only_non_null requirements: InlineJavascriptRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-single-source-wf-005.1.cwl0000755000175100001660000000053614753463342022170 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo-array.cwl when: $(inputs.in1 < 1) out: [out1] outputs: out1: type: string[] outputSource: step1/out1 pickValue: all_non_null requirements: InlineJavascriptRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-wf-003.1.cwl0000755000175100001660000000102214753463342017400 
0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.in1 < 1) out: [out1] step2: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.a_new_var > 2) out: [out1] outputs: out1: type: string outputSource: - step1/out1 - step2/out1 pickValue: first_non_null requirements: InlineJavascriptRequirement: {} MultipleInputFeatureRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-wf-004.1.cwl0000755000175100001660000000102514753463342017404 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.in1 < 1) out: [out1] step2: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.a_new_var > 2) out: [out1] outputs: out1: type: string outputSource: - step1/out1 - step2/out1 pickValue: the_only_non_null requirements: InlineJavascriptRequirement: {} MultipleInputFeatureRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/cond-wf-005.1.cwl0000755000175100001660000000102214753463342017402 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: val: int steps: step1: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.in1 < 1) out: [out1] step2: in: in1: val a_new_var: val run: foo.cwl when: $(inputs.a_new_var > 2) out: [out1] outputs: out1: type: string[] outputSource: - step1/out1 - step2/out1 pickValue: all_non_null requirements: InlineJavascriptRequirement: {} MultipleInputFeatureRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines6-single-source-wf_v1_0.cwl0000755000175100001660000000056314753463342023757 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow 
cwlVersion: v1.0 requirements: - class: ScatterFeatureRequirement inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_0.cwl scatter: file1 in: file1: source: file1 linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines6-single-source-wf_v1_1.cwl0000755000175100001660000000056214753463342023757 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: - class: ScatterFeatureRequirement inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_1.cwl scatter: file1 in: file1: source: file1 linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines6-single-source-wf_v1_2.cwl0000755000175100001660000000056214753463342023760 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: - class: ScatterFeatureRequirement inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_2.cwl scatter: file1 in: file1: source: file1 linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines6-wf_v1_0.cwl0000755000175100001660000000067014753463342021201 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: - class: ScatterFeatureRequirement - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_0.cwl scatter: file1 in: file1: source: [file1, file2] linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/count-lines6-wf_v1_1.cwl0000755000175100001660000000067014753463342021202 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: - class: ScatterFeatureRequirement - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_1.cwl scatter: file1 in: file1: source: [file1, file2] linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines6-wf_v1_2.cwl0000755000175100001660000000067014753463342021203 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: - class: ScatterFeatureRequirement - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_2.cwl scatter: file1 in: file1: source: [file1, file2] linkMerge: merge_nested out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-single-source-wf_v1_0.cwl0000755000175100001660000000045614753463342023761 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_0.cwl in: file1: source: file1 linkMerge: merge_flattened out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-single-source-wf_v1_1.cwl0000755000175100001660000000045614753463342023762 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_1.cwl in: file1: source: file1 linkMerge: merge_flattened out: 
[output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-single-source-wf_v1_2.cwl0000755000175100001660000000045614753463342023763 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: file1: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_2.cwl in: file1: source: file1 linkMerge: merge_flattened out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-wf_v1_0.cwl0000755000175100001660000000060314753463342021176 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_0.cwl in: file1: source: [file1, file2] linkMerge: merge_flattened out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-wf_v1_1.cwl0000755000175100001660000000060314753463342021177 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: wc3-tool_v1_1.cwl in: file1: source: [file1, file2] linkMerge: merge_flattened out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/count-lines7-wf_v1_2.cwl0000755000175100001660000000060314753463342021200 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: - class: MultipleInputFeatureRequirement inputs: file1: File[] file2: File[] outputs: count_output: type: int outputSource: step1/output steps: step1: run: 
wc3-tool_v1_2.cwl in: file1: source: [file1, file2] linkMerge: merge_flattened out: [output]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/debian_image_id.cwl0000755000175100001660000000031514753463342020450 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool requirements: DockerRequirement: dockerImageId: 'debian:stable-slim.img' inputs: message: string outputs: [] baseCommand: echo ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/dir4-job.yml0000644000175100001660000000025514753463342017036 0ustar00runnerdockerinf: class: File location: hello.tar secondaryFiles: - class: File location: index.py - class: Directory basename: xtestdir location: testdir././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/dockstore-tool-md5sum.cwl0000755000175100001660000000263414753463342021576 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool id: Md5sum label: Simple md5sum tool cwlVersion: v1.0 $namespaces: dct: http://purl.org/dc/terms/ foaf: http://xmlns.com/foaf/0.1/ doc: | [![Docker Repository on Quay.io](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum/status "Docker Repository on Quay.io")](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum) [![Build Status](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum.svg)](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum) A very, very simple Docker container for the md5sum command. See the [README](https://github.com/briandoconnor/dockstore-tool-md5sum/blob/master/README.md) for more information. 
#dct:creator: # '@id': http://orcid.org/0000-0002-7681-6415 # foaf:name: Brian O'Connor # foaf:mbox: briandoconnor@gmail.com requirements: - class: InlineJavascriptRequirement hints: DockerRequirement: dockerPull: quay.io/mr_c/dockstore-tool-md5sum:1.0.4 ResourceRequirement: # The command really requires very little resources. coresMin: 1 ramMin: 1024 outdirMin: 512 inputs: input_file: type: File inputBinding: position: 1 doc: The file that will have its md5sum calculated. outputs: output_file: type: File outputBinding: glob: md5sum.txt doc: A text file that contains a single line that is the md5sum of the input file. baseCommand: my_md5sum ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/dockstore-tool-md5sum_v11.cwl0000755000175100001660000000271214753463342022262 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool id: Md5sum label: Simple md5sum tool cwlVersion: v1.1 $namespaces: dct: http://purl.org/dc/terms/ foaf: http://xmlns.com/foaf/0.1/ doc: | [![Docker Repository on Quay.io](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum/status "Docker Repository on Quay.io")](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum) [![Build Status](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum.svg)](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum) A very, very simple Docker container for the md5sum command. See the [README](https://github.com/briandoconnor/dockstore-tool-md5sum/blob/master/README.md) for more information. #dct:creator: # '@id': http://orcid.org/0000-0002-7681-6415 # foaf:name: Brian O'Connor # foaf:mbox: briandoconnor@gmail.com requirements: - class: InlineJavascriptRequirement hints: DockerRequirement: dockerPull: quay.io/mr_c/dockstore-tool-md5sum:1.0.4 ResourceRequirement: # The command really requires very little resources. 
coresMin: 1 ramMin: 1024 outdirMin: 512 inputs: input_file: type: File inputBinding: position: 1 doc: The file that will have its md5sum calculated. outputs: output_file: type: File format: http://edamontology.org/data_3671 outputBinding: glob: md5sum.txt doc: A text file that contains a single line that is the md5sum of the input file. baseCommand: my_md5sum ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/dockstore-tool-md5sum_v12.cwl0000755000175100001660000000271214753463342022263 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool id: Md5sum label: Simple md5sum tool cwlVersion: v1.2 $namespaces: dct: http://purl.org/dc/terms/ foaf: http://xmlns.com/foaf/0.1/ doc: | [![Docker Repository on Quay.io](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum/status "Docker Repository on Quay.io")](https://quay.io/repository/briandoconnor/dockstore-tool-md5sum) [![Build Status](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum.svg)](https://travis-ci.org/briandoconnor/dockstore-tool-md5sum) A very, very simple Docker container for the md5sum command. See the [README](https://github.com/briandoconnor/dockstore-tool-md5sum/blob/master/README.md) for more information. #dct:creator: # '@id': http://orcid.org/0000-0002-7681-6415 # foaf:name: Brian O'Connor # foaf:mbox: briandoconnor@gmail.com requirements: - class: InlineJavascriptRequirement hints: DockerRequirement: dockerPull: quay.io/mr_c/dockstore-tool-md5sum:1.0.4 ResourceRequirement: # The command really requires very little resources. coresMin: 1 ramMin: 1024 outdirMin: 512 inputs: input_file: type: File inputBinding: position: 1 doc: The file that will have its md5sum calculated. outputs: output_file: type: File format: http://edamontology.org/data_3671 outputBinding: glob: md5sum.txt doc: A text file that contains a single line that is the md5sum of the input file. 
baseCommand: my_md5sum ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/echo-tool-packed.cwl0000755000175100001660000000122214753463342020524 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 $graph: - class: CommandLineTool id: first inputs: in: type: Any outputs: out: type: string outputBinding: glob: out.txt loadContents: true outputEval: $(self[0].contents) baseCommand: [ echo, first ] stdout: out.txt - class: CommandLineTool id: main inputs: in: type: Any inputBinding: {} outputs: out: type: string outputBinding: glob: out.txt loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: out.txt ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/echo_v1_0.cwl0000755000175100001660000000032714753463342017156 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool baseCommand: echo inputs: echo_in: type: - string - string[] inputBinding: {} stdout: out.txt outputs: txt: type: stdout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/echo_v1_1.cwl0000755000175100001660000000032714753463342017157 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: CommandLineTool baseCommand: echo inputs: echo_in: type: - string - string[] inputBinding: {} stdout: out.txt outputs: txt: type: stdout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/echo_v1_2.cwl0000755000175100001660000000032714753463342017160 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: CommandLineTool baseCommand: echo inputs: echo_in: type: - string - string[] inputBinding: {} stdout: out.txt outputs: txt: type: stdout ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/env-job3.yaml0000644000175100001660000000020514753463342017203 0ustar00runnerdockerin: "hello test env" cwl:requirements: - class: EnvVarRequirement envDef: - envName: TEST_ENV envValue: $(inputs.in) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5555606 cwl_utils-0.37/testdata/extensions/0000755000175100001660000000000014753463357017104 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/all-output-loop_v1_2.cwl0000644000175100001660000000111714753463342023511 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: InlineJavascriptRequirement: {} inputs: i1: int outputs: o1: type: int[] outputSource: subworkflow/o1 steps: subworkflow: run: class: ExpressionTool inputs: i1: int outputs: o1: int expression: > ${return {'o1': inputs.i1 + 1};} in: i1: i1 out: [o1] requirements: cwltool:Loop: loopWhen: $(inputs.i1 < 10) loop: i1: o1 outputMethod: all ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/cuda-requirement_v1_0.cwl0000644000175100001660000000052614753463342023707 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:CUDARequirement: cudaVersionMin: "1.0" cudaComputeCapability: "1.0" cudaDeviceCountMin: $(inputs.gpus) inputs: gpus: type: int default: 1 outputs: [] baseCommand: "nvidia-smi"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/cuda-requirement_v1_1.cwl0000644000175100001660000000052614753463342023710 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: CommandLineTool $namespaces: cwltool: "http://commonwl.org/cwltool#" 
requirements: cwltool:CUDARequirement: cudaVersionMin: "1.0" cudaComputeCapability: "1.0" cudaDeviceCountMin: $(inputs.gpus) inputs: gpus: type: int default: 1 outputs: [] baseCommand: "nvidia-smi"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/cuda-requirement_v1_2.cwl0000644000175100001660000000052614753463342023711 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: CommandLineTool $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:CUDARequirement: cudaVersionMin: "1.0" cudaComputeCapability: "1.0" cudaDeviceCountMin: $(inputs.gpus) inputs: gpus: type: int default: 1 outputs: [] baseCommand: "nvidia-smi"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/inplace-update-requirement_v1_0.cwl0000644000175100001660000000067014753463342025666 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:InplaceUpdateRequirement: inplaceUpdate: true inputs: r: File script: type: File default: class: File location: updateval.py outputs: out: type: File outputBinding: glob: $(inputs.r.basename) arguments: [python, $(inputs.script), $(inputs.r.basename)]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/load-listing-requirement_v1_0.cwl0000644000175100001660000000044414753463342025360 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: http://commonwl.org/cwltool# requirements: cwltool:LoadListingRequirement: loadListing: shallow_listing inputs: d: Directory outputs: [] arguments: [echo, "$(inputs.d.listing[0].listing[0])"] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/extensions/mpi-requirement_v1_0.cwl0000644000175100001660000000036714753463342023563 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" baseCommand: env requirements: cwltool:MPIRequirement: processes: 1 inputs: {} outputs: environment: type: stdout././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/mpi-requirement_v1_1.cwl0000644000175100001660000000036614753463342023563 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.1 $namespaces: cwltool: "http://commonwl.org/cwltool#" baseCommand: env requirements: cwltool:MPIRequirement: processes: 1 inputs: {} outputs: environment: type: stdout././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/mpi-requirement_v1_2.cwl0000644000175100001660000000036614753463342023564 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 $namespaces: cwltool: "http://commonwl.org/cwltool#" baseCommand: env requirements: cwltool:MPIRequirement: processes: 1 inputs: {} outputs: environment: type: stdout././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/network-access_v1_0.cwl0000644000175100001660000000055314753463342023365 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:NetworkAccess: networkAccess: true inputs: [] outputs: [] baseCommand: python arguments: - "-c" - valueFrom: | import urllib.request assert(urllib.request.urlopen("http://commonwl.org").code == 200)././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/extensions/process-generator_v1_0.cwl0000644000175100001660000000127514753463342024101 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" class: cwltool:ProcessGenerator inputs: script: string dir: Directory outputs: {} run: class: CommandLineTool inputs: script: string dir: Directory outputs: runProcess: type: File outputBinding: glob: main.cwl requirements: InlineJavascriptRequirement: {} LoadListingRequirement: loadListing: shallow_listing InitialWorkDirRequirement: listing: | ${ var v = inputs.dir.listing; v.push({entryname: "inp.py", entry: inputs.script}); return v; } arguments: [python3, inp.py] stdout: main.cwl././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/process-generator_v1_1.cwl0000644000175100001660000000127514753463342024102 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 $namespaces: cwltool: "http://commonwl.org/cwltool#" class: cwltool:ProcessGenerator inputs: script: string dir: Directory outputs: {} run: class: CommandLineTool inputs: script: string dir: Directory outputs: runProcess: type: File outputBinding: glob: main.cwl requirements: InlineJavascriptRequirement: {} LoadListingRequirement: loadListing: shallow_listing InitialWorkDirRequirement: listing: | ${ var v = inputs.dir.listing; v.push({entryname: "inp.py", entry: inputs.script}); return v; } arguments: [python3, inp.py] stdout: main.cwl././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/process-generator_v1_2.cwl0000644000175100001660000000127514753463342024103 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 $namespaces: cwltool: "http://commonwl.org/cwltool#" class: cwltool:ProcessGenerator inputs: script: string dir: Directory outputs: {} run: class: CommandLineTool inputs: script: string dir: Directory 
outputs: runProcess: type: File outputBinding: glob: main.cwl requirements: InlineJavascriptRequirement: {} LoadListingRequirement: loadListing: shallow_listing InitialWorkDirRequirement: listing: | ${ var v = inputs.dir.listing; v.push({entryname: "inp.py", entry: inputs.script}); return v; } arguments: [python3, inp.py] stdout: main.cwl././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/secrets_v1_0.cwl0000644000175100001660000000036514753463342022106 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool $namespaces: cwltool: http://commonwl.org/cwltool# requirements: cwltool:Secrets: secrets: [pw] inputs: pw: string outputs: out: stdout arguments: [cat, example.conf] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/secrets_v1_1.cwl0000644000175100001660000000036514753463342022107 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: CommandLineTool $namespaces: cwltool: http://commonwl.org/cwltool# requirements: cwltool:Secrets: secrets: [pw] inputs: pw: string outputs: out: stdout arguments: [cat, example.conf] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/secrets_v1_2.cwl0000644000175100001660000000036514753463342022110 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: CommandLineTool $namespaces: cwltool: http://commonwl.org/cwltool# requirements: cwltool:Secrets: secrets: [pw] inputs: pw: string outputs: out: stdout arguments: [cat, example.conf] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/shm-size_v1_0.cwl0000644000175100001660000000043414753463342022172 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: 
"http://commonwl.org/cwltool#" requirements: cwltool:ShmSize: shmSize: 128m inputs: [] outputs: output: type: stdout baseCommand: echo stdout: shm-size.txt arguments: [ $(runtime) ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/shm-size_v1_1.cwl0000644000175100001660000000043414753463342022173 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.1 $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:ShmSize: shmSize: 128m inputs: [] outputs: output: type: stdout baseCommand: echo stdout: shm-size.txt arguments: [ $(runtime) ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/shm-size_v1_2.cwl0000644000175100001660000000043414753463342022174 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:ShmSize: shmSize: 128m inputs: [] outputs: output: type: stdout baseCommand: echo stdout: shm-size.txt arguments: [ $(runtime) ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/single-var-loop_v1_2.cwl0000644000175100001660000000111614753463342023451 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: InlineJavascriptRequirement: {} inputs: i1: int outputs: o1: type: int outputSource: subworkflow/o1 steps: subworkflow: run: class: ExpressionTool inputs: i1: int outputs: o1: int expression: > ${return {'o1': inputs.i1 + 1};} in: i1: i1 out: [o1] requirements: cwltool:Loop: loopWhen: $(inputs.i1 < 10) loop: i1: o1 outputMethod: last ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/extensions/time-limit_v1_0.cwl0000644000175100001660000000042414753463342022504 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" inputs: sleep_time: type: int default: 3 inputBinding: {} outputs: [] requirements: cwltool:TimeLimit: timelimit: 20 baseCommand: sleep././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/extensions/work-reuse_v1_0.cwl0000644000175100001660000000051314753463342022534 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 $namespaces: cwltool: "http://commonwl.org/cwltool#" requirements: cwltool:WorkReuse: enableReuse: false inputs: [] outputs: page: stdout stdout: time.txt baseCommand: python arguments: - "-c" - valueFrom: | import time print(time.time()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/foo-array.cwl0000755000175100001660000000040114753463342017303 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 inputs: in1: int baseCommand: [echo] outputs: out1: type: string[] outputBinding: outputEval: ${ return [foo, inputs.in1]; } requirements: InlineJavascriptRequirement: {} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/foo.cwl0000755000175100001660000000030214753463342016167 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 inputs: in1: int baseCommand: [echo] outputs: out1: type: string outputBinding: outputEval: foo $(inputs.in1) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/formattest-job.json0000644000175100001660000000017014753463342020530 0ustar00runnerdocker{ "input": { "class": "File", "location": "whale.txt", "format": 
"edam:format_2330" } } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/formattest2.cwl0000755000175100001660000000076014753463342017666 0ustar00runnerdocker#!/usr/bin/env cwl-runner $namespaces: edam: http://edamontology.org/ $schemas: - EDAM_subset.owl class: CommandLineTool cwlVersion: v1.2 doc: "Reverse each line using the `rev` command" hints: DockerRequirement: dockerPull: docker.io/debian:stable-slim inputs: input: type: File inputBinding: {} format: edam:format_2330 outputs: output: type: File outputBinding: glob: output.txt format: $(inputs.input.format) baseCommand: rev stdout: output.txt ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/formattest2_v1_0.cwl0000755000175100001660000000076014753463342020513 0ustar00runnerdocker#!/usr/bin/env cwl-runner $namespaces: edam: http://edamontology.org/ $schemas: - EDAM_subset.owl class: CommandLineTool cwlVersion: v1.0 doc: "Reverse each line using the `rev` command" hints: DockerRequirement: dockerPull: docker.io/debian:stable-slim inputs: input: type: File inputBinding: {} format: edam:format_2330 outputs: output: type: File outputBinding: glob: output.txt format: $(inputs.input.format) baseCommand: rev stdout: output.txt ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/formattest2_v1_1.cwl0000755000175100001660000000076014753463342020514 0ustar00runnerdocker#!/usr/bin/env cwl-runner $namespaces: edam: http://edamontology.org/ $schemas: - EDAM_subset.owl class: CommandLineTool cwlVersion: v1.1 doc: "Reverse each line using the `rev` command" hints: DockerRequirement: dockerPull: docker.io/debian:stable-slim inputs: input: type: File inputBinding: {} format: edam:format_2330 outputs: output: type: File outputBinding: glob: output.txt format: $(inputs.input.format) baseCommand: rev stdout: output.txt 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/gx_edam.ttl0000644000175100001660000000041514753463342017030 0ustar00runnerdocker@prefix gx: . @prefix owl: . @prefix rdfs: . @prefix edam: . gx:fasta a owl:Class . gx:fasta owl:equivalentClass edam:format_1929 . ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/js-expr-req-wf.cwl0000755000175100001660000000116714753463342020205 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 $graph: - id: tool class: CommandLineTool requirements: InlineJavascriptRequirement: expressionLib: - "function foo() { return 2; }" inputs: [] outputs: out: stdout arguments: [echo, $(foo())] stdout: whatever.txt - id: wf class: Workflow requirements: InlineJavascriptRequirement: expressionLib: - "function bar() { return 1; }" inputs: [] outputs: out: type: File outputSource: tool/out steps: tool: run: "#tool" in: {} out: [out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/lib.js0000644000175100001660000000016314753463342016003 0ustar00runnerdockervar foo = function(x) { return 2 * x } var bar = function(n, x) { return `{n} engineers walk into a {x}` }././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/map-ordering-v1_0.cwl0000644000175100001660000000130514753463342020534 0ustar00runnerdockercwlVersion: v1.0 class: Workflow inputs: 09first_input: string 05second_input: int 01third_input: File steps: zz_step_one: run: class: ExpressionTool inputs: [] outputs: [] expression: ${return {}; } requirements: InlineJavascriptRequirement: {} in: [] out: [] 00_step_two: out: [] run: inputs: [] requirements: InlineJavascriptRequirement: {} outputs: [] expression: ${return {}; } class: ExpressionTool in: [] outputs: zz_first_output: type: File outputSource: 01third_input 
ll_second_output: type: string outputSource: 09first_input aa_third_output: type: int outputSource: 05second_input ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/map-ordering-v1_1.cwl0000644000175100001660000000130514753463342020535 0ustar00runnerdockercwlVersion: v1.1 class: Workflow inputs: 09first_input: string 05second_input: int 01third_input: File steps: zz_step_one: run: class: ExpressionTool inputs: [] outputs: [] expression: ${return {}; } requirements: InlineJavascriptRequirement: {} in: [] out: [] 00_step_two: out: [] run: inputs: [] requirements: InlineJavascriptRequirement: {} outputs: [] expression: ${return {}; } class: ExpressionTool in: [] outputs: zz_first_output: type: File outputSource: 01third_input ll_second_output: type: string outputSource: 09first_input aa_third_output: type: int outputSource: 05second_input ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/map-ordering-v1_2.cwl0000644000175100001660000000130514753463342020536 0ustar00runnerdockercwlVersion: v1.2 class: Workflow inputs: 09first_input: string 05second_input: int 01third_input: File steps: zz_step_one: run: class: ExpressionTool inputs: [] outputs: [] expression: ${return {}; } requirements: InlineJavascriptRequirement: {} in: [] out: [] 00_step_two: out: [] run: inputs: [] requirements: InlineJavascriptRequirement: {} outputs: [] expression: ${return {}; } class: ExpressionTool in: [] outputs: zz_first_output: type: File outputSource: 01third_input ll_second_output: type: string outputSource: 09first_input aa_third_output: type: int outputSource: 05second_input ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/md5sum.cwl0000755000175100001660000000042614753463342016625 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow inputs: input_file: File 
outputs: output_file: type: File outputSource: md5sum/output_file steps: md5sum: run: dockstore-tool-md5sum.cwl in: input_file: input_file out: [output_file] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/md5sum.input0000644000175100001660000000000614753463342017166 0ustar00runnerdockerhello ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/md5sum.json0000644000175100001660000000017014753463342017002 0ustar00runnerdocker{"output_file": {"path": "/tmp/md5sum.txt", "class": "File"}, "input_file": {"path": "md5sum.input", "class": "File"}} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/md5sum_v11.cwl0000755000175100001660000000043214753463342017311 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow inputs: input_file: File outputs: output_file: type: File outputSource: md5sum/output_file steps: md5sum: run: dockstore-tool-md5sum_v11.cwl in: input_file: input_file out: [output_file] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/md5sum_v12.cwl0000755000175100001660000000043214753463342017312 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow inputs: input_file: File outputs: output_file: type: File outputSource: md5sum/output_file steps: md5sum: run: dockstore-tool-md5sum_v12.cwl in: input_file: input_file out: [output_file] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/nested-array-job.yml0000644000175100001660000000001614753463342020565 0ustar00runnerdockerletters: [[a]]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output-wf_v1_0.cwl0000755000175100001660000000072314753463342021466 
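The `md5sum.cwl` workflow above delegates to a `dockstore-tool-md5sum` tool that writes the MD5 digest of `input_file` to `md5sum.txt`; `md5sum.input` holds the six bytes `hello\n`. The digest that tool computes can be reproduced with Python's standard library:

```python
import hashlib

def md5sum(data: bytes) -> str:
    """Compute the hex digest the md5sum tool in the workflow would emit."""
    return hashlib.md5(data).hexdigest()

# md5sum.input contains the six bytes b"hello\n"
digest = md5sum(b"hello\n")
```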
0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 inputs: irec: type: name: irec type: record fields: - name: ifoo type: File - name: ibar type: File outputs: orec: type: type: record fields: - name: ofoo type: File - name: obar type: File outputSource: step1/orec steps: step1: run: record-output_v1_0.cwl in: irec: irec out: [orec]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output-wf_v1_1.cwl0000755000175100001660000000074414753463342021472 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 inputs: irec: type: name: irec type: record fields: - name: ifoo type: File - name: ibar type: File outputs: orec: type: name: orec type: record fields: - name: ofoo type: File - name: obar type: File outputSource: step1/orec steps: step1: run: record-output_v1_1.cwl in: irec: irec out: [orec]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output-wf_v1_2.cwl0000755000175100001660000000074414753463342021473 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: irec: type: name: irec type: record fields: - name: ifoo type: File - name: ibar type: File outputs: orec: type: name: orec type: record fields: - name: ofoo type: File - name: obar type: File outputSource: step1/orec steps: step1: run: record-output_v1_2.cwl in: irec: irec out: [orec]././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output_v1_0.cwl0000755000175100001660000000157114753463342021056 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 hints: ResourceRequirement: ramMin: 8 requirements: - class: ShellCommandRequirement inputs: irec: type: name: irec type: record fields: - name: ifoo type: File inputBinding: position: 2 - name: ibar type: File inputBinding: 
position: 6 outputs: orec: type: name: orec type: record fields: - name: ofoo type: File outputBinding: glob: foo - name: obar type: File outputBinding: glob: bar arguments: - {valueFrom: "cat", position: 1} - {valueFrom: "> foo", position: 3, shellQuote: false} - {valueFrom: "&&", position: 4, shellQuote: false} - {valueFrom: "cat", position: 5} - {valueFrom: "> bar", position: 7, shellQuote: false} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output_v1_1.cwl0000755000175100001660000000151514753463342021055 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.1 requirements: - class: ShellCommandRequirement inputs: irec: type: name: irec type: record fields: - name: ifoo type: File inputBinding: position: 2 - name: ibar type: File inputBinding: position: 6 outputs: orec: type: name: orec type: record fields: - name: ofoo type: File outputBinding: glob: foo - name: obar type: File outputBinding: glob: bar arguments: - {valueFrom: "cat", position: 1} - {valueFrom: "> foo", position: 3, shellQuote: false} - {valueFrom: "&&", position: 4, shellQuote: false} - {valueFrom: "cat", position: 5} - {valueFrom: "> bar", position: 7, shellQuote: false} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/record-output_v1_2.cwl0000755000175100001660000000151514753463342021056 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 requirements: - class: ShellCommandRequirement inputs: irec: type: name: irec type: record fields: - name: ifoo type: File inputBinding: position: 2 - name: ibar type: File inputBinding: position: 6 outputs: orec: type: name: orec type: record fields: - name: ofoo type: File outputBinding: glob: foo - name: obar type: File outputBinding: glob: bar arguments: - {valueFrom: "cat", position: 1} - {valueFrom: "> foo", position: 3, shellQuote: false} - 
{valueFrom: "&&", position: 4, shellQuote: false} - {valueFrom: "cat", position: 5} - {valueFrom: "> bar", position: 7, shellQuote: false} ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5555606 cwl_utils-0.37/testdata/remote-cwl/0000755000175100001660000000000014753463357016763 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/remote-cwl/tool1.cwl0000755000175100001660000000107014753463342020523 0ustar00runnerdocker#!/usr/bin/env cwl-runner # We have this tool to test both local and remote packing class: CommandLineTool cwlVersion: v1.2 inputs: in1: type: string inputBinding: position: 1 valueFrom: A_$(inputs.in1)_B_${return inputs.in1}_C_$(inputs.in1) baseCommand: echo arguments: - valueFrom: $(runtime) outputs: out1: type: string outputBinding: glob: out.txt loadContents: true outputEval: $(self)_D_$(runtime) stdout: out.txt requirements: InlineJavascriptRequirement: expressionLib: - $include: ../lib.js ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/remote-cwl/tool2.cwl0000755000175100001660000000112514753463342020525 0ustar00runnerdocker#!/usr/bin/env cwl-runner # We have this tool to test both local and remote packing class: CommandLineTool cwlVersion: v1.2 inputs: in1: type: ../types/testtypes.yml#my_boolean_array inputBinding: position: 1 valueFrom: A_$(inputs.in1)_B_${return inputs.in1}_C_$(inputs.in1) baseCommand: echo arguments: - valueFrom: $(runtime) outputs: out1: type: string outputBinding: glob: out.txt loadContents: true outputEval: $(self)_D_$(runtime) stdout: out.txt requirements: SchemaDefRequirement: types: - $import: ../types/testtypes.yml ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/remote-cwl/wf1.cwl0000755000175100001660000000072514753463342020170 
0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: - id: in1 type: ../types/testtypes.yml#my_boolean_array steps: s1: run: ./tool2.cwl in: in1: "#in1" # This should be normalized out out: [out1] s2: run: tool1.cwl in: in1: s1/out1 out: [out1] outputs: - id: out1 type: string outputSource: "#s2/out1" requirements: SchemaDefRequirement: types: - $import: ../types/testtypes.yml ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/revsort-run-1-packed.cwl0000755000175100001660000001053614753463342021307 0ustar00runnerdocker#!/usr/bin/env cwl-runner { "$graph": [ { "class": "Workflow", "doc": "Reverse the lines in a document, then sort those lines.", "hints": [ { "class": "DockerRequirement", "dockerPull": "debian:8" } ], "inputs": [ { "type": "File", "doc": "The input file to be processed.", "format": "https://www.iana.org/assignments/media-types/text/plain", "default": { "class": "File", "location": "file:///home/stain/src/cwltool/tests/wf/hello.txt" }, "id": "#main/input" }, { "type": "boolean", "default": true, "doc": "If true, reverse (decending) sort", "id": "#main/reverse_sort" } ], "outputs": [ { "type": "File", "outputSource": "#main/sorted/output", "doc": "The output with the lines reversed and sorted.", "id": "#main/output" } ], "steps": [ { "in": [ { "source": "#main/input", "id": "#main/rev/input" } ], "out": [ "#main/rev/output" ], "run": "#revtool.cwl", "id": "#main/rev" }, { "in": [ { "source": "#main/rev/output", "id": "#main/sorted/input" }, { "source": "#main/reverse_sort", "id": "#main/sorted/reverse" } ], "out": [ "#main/sorted/output" ], "run": "#sorttool.cwl", "id": "#main/sorted" } ], "id": "#main", "$namespaces": { "iana": "https://www.iana.org/assignments/media-types/" } }, { "class": "CommandLineTool", "doc": "Reverse each line using the `rev` command", "inputs": [ { "type": "File", "inputBinding": {}, "id": "#revtool.cwl/input" } ], "outputs": 
[ { "type": "File", "outputBinding": { "glob": "output.txt" }, "id": "#revtool.cwl/output" } ], "baseCommand": "rev", "stdout": "output.txt", "id": "#revtool.cwl" }, { "class": "CommandLineTool", "doc": "Sort lines using the `sort` command", "inputs": [ { "id": "#sorttool.cwl/reverse", "type": "boolean", "inputBinding": { "position": 1, "prefix": "--reverse" } }, { "id": "#sorttool.cwl/input", "type": "File", "inputBinding": { "position": 2 } } ], "outputs": [ { "id": "#sorttool.cwl/output", "type": "File", "outputBinding": { "glob": "output.txt" } } ], "baseCommand": "sort", "stdout": "output.txt", "id": "#sorttool.cwl" } ], "cwlVersion": "v1.0", "$schemas": [ "file:///home/stain/src/cwltool/tests/wf/empty.ttl", "file:///home/stain/src/cwltool/tests/wf/empty2.ttl" ] } ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf1_v1_0.cwl0000755000175100001660000000123214753463342020374 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow inputs: inp: string[] outputs: out: type: string[] outputSource: step1/echo_out requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in: inp out: [echo_out] scatter: echo_in run: class: CommandLineTool inputs: echo_in: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf1_v1_1.cwl0000755000175100001660000000126414753463342020402 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow inputs: inp: string[] outputs: out: type: string[] outputSource: step1/echo_out requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in: inp out: [echo_out] scatter: echo_in run: class: CommandLineTool inputs: echo_in: type: string 
inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo" stdout: "step1_out"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf1_v1_2.cwl0000755000175100001660000000126414753463342020403 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow inputs: inp: string[] outputs: out: type: string[] outputSource: step1/echo_out requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in: inp out: [echo_out] scatter: echo_in run: class: CommandLineTool inputs: echo_in: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo" stdout: "step1_out"././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf2_v1_0.cwl0000755000175100001660000000165714753463342020410 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow inputs: inp1: string[] inp2: string[] outputs: out: outputSource: step1/echo_out type: type: array items: type: array items: string requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] scatter: [echo_in1, echo_in2] scatterMethod: nested_crossproduct run: class: CommandLineTool id: step1command inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo" stdout: step1_out ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf2_v1_1.cwl0000755000175100001660000000165614753463342020410 
0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow inputs: inp1: string[] inp2: string[] outputs: out: outputSource: step1/echo_out type: type: array items: type: array items: string requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] scatter: [echo_in1, echo_in2] scatterMethod: nested_crossproduct run: class: CommandLineTool id: step1command inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo" stdout: step1_out././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf2_v1_2.cwl0000755000175100001660000000165614753463342020411 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow inputs: inp1: string[] inp2: string[] outputs: out: outputSource: step1/echo_out type: type: array items: type: array items: string requirements: - class: ScatterFeatureRequirement steps: step1: in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] scatter: [echo_in1, echo_in2] scatterMethod: nested_crossproduct run: class: CommandLineTool id: step1command inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: - "-n" - "foo" stdout: step1_out././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf3_v1_0.cwl0000755000175100001660000000163214753463342020402 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 $graph: - id: echo class: CommandLineTool hints: ResourceRequirement: ramMin: 8 inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string 
inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: ["-n", "foo"] stdout: step1_out - id: main class: Workflow inputs: inp1: string[] inp2: string[] requirements: - class: ScatterFeatureRequirement steps: step1: scatter: [echo_in1, echo_in2] scatterMethod: flat_crossproduct in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] run: "#echo" outputs: out: outputSource: step1/echo_out type: type: array items: string././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf3_v1_1.cwl0000755000175100001660000000155014753463342020402 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 $graph: - id: echo class: CommandLineTool inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: ["-n", "foo"] stdout: step1_out - id: main class: Workflow inputs: inp1: string[] inp2: string[] requirements: - class: ScatterFeatureRequirement steps: step1: scatter: [echo_in1, echo_in2] scatterMethod: flat_crossproduct in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] run: "#echo" outputs: out: outputSource: step1/echo_out type: type: array items: string././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/scatter-wf3_v1_2.cwl0000755000175100001660000000155014753463342020403 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 $graph: - id: echo class: CommandLineTool inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: ["-n", "foo"] stdout: step1_out - id: main class: Workflow 
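The `scatter-wf2_*` and `scatter-wf3_*` fixtures contrast the two crossproduct scatter methods over the same pair of inputs: `nested_crossproduct` yields one nesting level per scattered input (an array of arrays), while `flat_crossproduct` yields the same pairs flattened into a single array. A sketch of the two output shapes (the sample values are arbitrary):

```python
from itertools import product

inp1 = ["a", "b"]
inp2 = ["x", "y", "z"]

# scatterMethod: nested_crossproduct -> string[][] (outer index follows inp1)
nested = [[f"{i}-{j}" for j in inp2] for i in inp1]

# scatterMethod: flat_crossproduct -> string[] (row-major flattening of the above)
flat = [f"{i}-{j}" for i, j in product(inp1, inp2)]
```

Flattening `nested` row by row reproduces `flat` exactly, which is why the two workflows declare `string[][]` and `string[]` output types respectively.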
inputs: inp1: string[] inp2: string[] requirements: - class: ScatterFeatureRequirement steps: step1: scatter: [echo_in1, echo_in2] scatterMethod: flat_crossproduct in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] run: "#echo" outputs: out: outputSource: step1/echo_out type: type: array items: string././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/seqtk_seq.cwl0000755000175100001660000000050414753463342017407 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: CommandLineTool doc: "Convert to FASTA (seqtk)" baseCommand: ["seqtk", "seq"] inputs: sequences: type: File inputBinding: prefix: "-a" outputs: result: stdout hints: SoftwareRequirement: packages: - package: seqtk version: - r93 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/stdout-wf_v1_0.cwl0000755000175100001660000000041014753463342020165 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 inputs: letters0: type: string default: "a0" outputs: all: type: File outputSource: echo_w/txt steps: echo_w: run: echo_v1_0.cwl in: echo_in: letters0 out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/stdout-wf_v1_1.cwl0000755000175100001660000000041014753463342020166 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 inputs: letters0: type: string default: "a0" outputs: all: type: File outputSource: echo_w/txt steps: echo_w: run: echo_v1_1.cwl in: echo_in: letters0 out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/stdout-wf_v1_2.cwl0000755000175100001660000000041014753463342020167 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: letters0: type: string default: "a0" outputs: all: type: File outputSource: echo_w/txt 
steps: echo_w: run: echo_v1_2.cwl in: echo_in: letters0 out: [txt] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom2-wf_v1_0.cwl0000755000175100001660000000140414753463342021702 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: - class: StepInputExpressionRequirement - class: InlineJavascriptRequirement - class: MultipleInputFeatureRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: c: source: [a, b] valueFrom: "$(self[0] + self[1])" out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom2-wf_v1_1.cwl0000755000175100001660000000140414753463342021703 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: - class: StepInputExpressionRequirement - class: InlineJavascriptRequirement - class: MultipleInputFeatureRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: c: source: [a, b] valueFrom: "$(self[0] + self[1])" out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom2-wf_v1_2.cwl0000755000175100001660000000140414753463342021704 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: - class: StepInputExpressionRequirement - 
class: InlineJavascriptRequirement - class: MultipleInputFeatureRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: c: source: [a, b] valueFrom: "$(self[0] + self[1])" out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom3-wf_v1_0.cwl0000755000175100001660000000133214753463342021703 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: - class: StepInputExpressionRequirement - class: InlineJavascriptRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: a: a b: b c: valueFrom: "$(inputs.a + inputs.b)" out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom3-wf_v1_1.cwl0000755000175100001660000000133214753463342021704 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: - class: StepInputExpressionRequirement - class: InlineJavascriptRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: a: a b: b c: valueFrom: "$(inputs.a + inputs.b)" out: [echo_out] 
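In the `step-valuefrom2-wf_*` fixtures, `MultipleInputFeatureRequirement` lets step input `c` take `source: [a, b]`, binding `self` to the two-element list before `valueFrom: "$(self[0] + self[1])"` runs; `echo` then receives the sum as its argument. A sketch of that binding (the function name is illustrative):

```python
def step_input_c(a: int, b: int) -> str:
    """Emulate source: [a, b] plus valueFrom: $(self[0] + self[1])."""
    self = [a, b]                 # MultipleInputFeatureRequirement merges both sources
    return str(self[0] + self[1]) # the value handed to echo on the command line
```

The `step-valuefrom3-wf_*` variants reach the same result differently: they pass `a` and `b` through unchanged and compute `c` from `$(inputs.a + inputs.b)` instead of `self`.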
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step-valuefrom3-wf_v1_2.cwl0000755000175100001660000000133214753463342021705 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: - class: StepInputExpressionRequirement - class: InlineJavascriptRequirement inputs: a: int b: int outputs: val: type: string outputSource: step1/echo_out steps: step1: run: id: echo class: CommandLineTool inputs: c: type: int inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" stdout: step1_out in: a: a b: b c: valueFrom: "$(inputs.a + inputs.b)" out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step_valuefrom5_wf_v1_0.cwl0000755000175100001660000000244114753463342022053 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/step_valuefrom5_wf_v1_1.cwl0000755000175100001660000000244114753463342022054 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step_valuefrom5_wf_v1_2.cwl0000755000175100001660000000244114753463342022055 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: 
step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step_valuefrom5_wf_with_id_v1_0.cwl0000755000175100001660000000246714753463342023572 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.0 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: id: explicit/id class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step_valuefrom5_wf_with_id_v1_1.cwl0000755000175100001660000000246714753463342023573 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.1 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: id: explicit/id class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: 
$(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/step_valuefrom5_wf_with_id_v1_2.cwl0000755000175100001660000000246714753463342023574 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 requirements: StepInputExpressionRequirement: {} inputs: file1: File outputs: val1: type: string outputSource: step1/echo_out val2: type: string outputSource: step2/echo_out steps: step1: run: id: explicit/id class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) echo_out_file: type: File outputBinding: glob: step1_out baseCommand: echo stdout: step1_out in: name: source: file1 valueFrom: $(self.basename) out: [echo_out, echo_out_file] step2: run: class: CommandLineTool inputs: name: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: step1_out loadContents: true outputEval: $(self[0].contents) baseCommand: echo stdout: step1_out in: name: source: step1/echo_out_file valueFrom: $(self.basename) out: [echo_out] ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5565605 cwl_utils-0.37/testdata/types/0000755000175100001660000000000014753463357016051 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/types/array.yml0000644000175100001660000000144514753463342017710 0ustar00runnerdocker# Contains arrays and intra-file type reference # class: SchemaDefRequirement # types: ... # This form does not work with cwltool, even though it can be found here # https://github.com/NCI-GDC/gdc-dnaseq-cwl/blob/master/tools/readgroup_path.yml # Notably, that .yml file is never used, so likely they tried it, failed and # forgot to take it out - name: sample_meta2 #duplicate names are not fine across files type: record fields: - name: prop type: string - name: study_meta type: array items: sample_meta2 # Apparently can't declare an array inside an array? # - name: study_meta_too # type: array # items: [string, sample_meta2, study_meta] - name: study_meta_too type: record fields: meta1: sample_meta2 meta2: study_meta ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/types/recursive.yml0000644000175100001660000000073514753463342020602 0ustar00runnerdocker# Contains records and intra-file type reference # You can't have an enum in a user type - name: sample_meta type: record fields: - name: sample type: ["null", string] - name: species type: string - name: file_with_sample_meta type: record fields: - name: file type: File - name: meta type: sample_meta - name: info_with_sample_meta type: record fields: comment: type: string meta: type: sample_meta ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/types/singletype.yml0000644000175100001660000000021714753463342020751 0ustar00runnerdocker# You can not use the dictionary format shown in # singletype2.yml. 
It has to be this name: simple_record type: record fields: prop: string ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/types/singletype2.yml0000644000175100001660000000014414753463342021032 0ustar00runnerdocker# You can not use this dictionary format simple_record2: type: record fields: prop: string ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/types/testtypes.yml0000644000175100001660000000024114753463342020627 0ustar00runnerdocker- name: my_boolean_array type: array items: boolean label: "A boolean array" - name: my_enum type: enum symbols: [a, b, c] label: "A required enum" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/wc3-tool_v1_0.cwl0000755000175100001660000000100214753463342017676 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.0 requirements: - class: InlineJavascriptRequirement hints: ResourceRequirement: ramMin: 8 inputs: file1: type: File[] inputBinding: {} outputs: output: type: int outputBinding: glob: output.txt loadContents: true outputEval: | ${ var s = self[0].contents.split(/\r?\n/); return parseInt(s[s.length-2]); } stdout: output.txt././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/wc3-tool_v1_1.cwl0000755000175100001660000000074614753463342017715 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.1 requirements: - class: InlineJavascriptRequirement inputs: file1: type: File[] inputBinding: {} outputs: output: type: int outputBinding: glob: output.txt loadContents: true outputEval: | ${ var s = self[0].contents.split(/\r?\n/); return parseInt(s[s.length-2]); } stdout: output.txt baseCommand: wc././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/wc3-tool_v1_2.cwl0000755000175100001660000000074614753463342017716 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 requirements: - class: InlineJavascriptRequirement inputs: file1: type: File[] inputBinding: {} outputs: output: type: int outputBinding: glob: output.txt loadContents: true outputEval: | ${ var s = self[0].contents.split(/\r?\n/); return parseInt(s[s.length-2]); } stdout: output.txt baseCommand: wc././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/wf2.cwl0000755000175100001660000000075614753463342016117 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: in1: types/testtypes.yml#my_boolean_array in2: type: types/testtypes.yml#my_enum steps: s1: run: remote-cwl/wf1.cwl in: - id: in1 source: "#in1" out: [out1] s2: run: remote-cwl/tool1.cwl in: in1: s1/out1 out: [out1] outputs: out1: type: string outputSource: "#s2/out1" requirements: SchemaDefRequirement: types: - $import: types/testtypes.yml ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_format_expr.cwl0000755000175100001660000000124514753463342023112 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File format: | ${ return "http://edamontology.org/format_3016"; } default: class: File basename: default format: http://edamontology.org/format_3016 contents: "42" steps: format_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.format}; } outputs: result: string out: [ result ] outputs: format_uri: type: string outputSource: format_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/workflow_input_format_expr_v1_1.cwl0000755000175100001660000000124514753463342023740 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File format: | ${ return "http://edamontology.org/format_3016"; } default: class: File basename: default format: http://edamontology.org/format_3016 contents: "42" steps: format_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.format}; } outputs: result: string out: [ result ] outputs: format_uri: type: string outputSource: format_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_format_expr_v1_2.cwl0000755000175100001660000000124514753463342023741 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File format: | ${ return "http://edamontology.org/format_3016"; } default: class: File basename: default format: http://edamontology.org/format_3016 contents: "42" steps: format_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.format}; } outputs: result: string out: [ result ] outputs: format_uri: type: string outputSource: format_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_sf_expr.cwl0000755000175100001660000000133114753463342022226 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: | ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File basename: default.txt.2 contents: "23" steps: sf_extract: in: 
target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_sf_expr_array.cwl0000755000175100001660000000133114753463342023424 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.0 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: - ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File basename: default.txt.2 contents: "23" steps: sf_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_sf_expr_array_v1_1.cwl0000755000175100001660000000133114753463342024252 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: - ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File basename: default.txt.2 contents: "23" steps: sf_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/workflow_input_sf_expr_array_v1_2.cwl0000755000175100001660000000134314753463342024256 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: - | ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File basename: default.txt.2 contents: "23" steps: sf_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_sf_expr_v1_1.cwl0000755000175100001660000000133114753463342023054 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.1 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: | ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File basename: default.txt.2 contents: "23" steps: sf_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflow_input_sf_expr_v1_2.cwl0000755000175100001660000000133114753463342023055 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 class: Workflow requirements: InlineJavascriptRequirement: {} inputs: first: type: File secondaryFiles: | ${ return self.basename+".2"; } default: class: File basename: default.txt contents: "42" secondaryFiles: - class: File 
basename: default.txt.2 contents: "23" steps: sf_extract: in: target: first run: class: ExpressionTool inputs: target: File expression: | ${ return {"result": inputs.target.secondaryFiles[0].basename}; } outputs: result: string out: [ result ] outputs: sf_name: type: string outputSource: sf_extract/result ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5575604 cwl_utils-0.37/testdata/workflows/0000755000175100001660000000000014753463357016742 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/count-lines16-wf.cwl0000755000175100001660000000152614753463342022473 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: Workflow cwlVersion: v1.2 inputs: file1: File outputs: count_output: {type: int, outputSource: step1/count_output} requirements: SubworkflowFeatureRequirement: {} steps: step1: in: {file1: file1} out: [count_output] run: class: Workflow inputs: file1: File outputs: count_output: {type: int, outputSource: step2/count_output} steps: step1: {run: wc-tool.cwl, in: {file1: file1}, out: [output]} step2: in: {file1: step1/output} out: [count_output] run: class: Workflow inputs: file1: File outputs: count_output: {type: int, outputSource: step1/output} steps: step1: {run: parseInt-tool.cwl, in: {file1: file1}, out: [output]} ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/import-in-type.cwl0000755000175100001660000000040414753463342022341 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 inputs: file1: type: $import: type-import.yaml outputs: output: type: File outputBinding: { glob: output } baseCommand: [wc, -l] stdin: $(inputs.file1.path) stdout: output ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 
cwl_utils-0.37/testdata/workflows/parseInt-tool.cwl0000755000175100001660000000040514753463342022205 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: ExpressionTool requirements: - class: InlineJavascriptRequirement cwlVersion: v1.2 inputs: file1: type: File loadContents: true outputs: output: int expression: "$({'output': parseInt(inputs.file1.contents)})" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/scatter-wf4.cwl0000755000175100001660000000154514753463342021616 0ustar00runnerdocker#!/usr/bin/env cwl-runner cwlVersion: v1.2 $graph: - id: echo class: CommandLineTool inputs: echo_in1: type: string inputBinding: {} echo_in2: type: string inputBinding: {} outputs: echo_out: type: string outputBinding: glob: "step1_out" loadContents: true outputEval: $(self[0].contents) baseCommand: "echo" arguments: ["-n", "foo"] stdout: step1_out - id: main class: Workflow inputs: inp1: string[] inp2: string[] requirements: - class: ScatterFeatureRequirement steps: step1: scatter: [echo_in1, echo_in2] scatterMethod: dotproduct in: echo_in1: inp1 echo_in2: inp2 out: [echo_out] run: "#echo" outputs: - id: out outputSource: step1/echo_out type: type: array items: string ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/type-import.yaml0000644000175100001660000000002314753463342022104 0ustar00runnerdocker- File - Directory ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/wc-tool.cwl0000755000175100001660000000045614753463342021037 0ustar00runnerdocker#!/usr/bin/env cwl-runner class: CommandLineTool cwlVersion: v1.2 hints: DockerRequirement: dockerPull: docker.io/debian:stable-slim inputs: file1: File outputs: output: type: File outputBinding: { glob: output } baseCommand: [wc, -l] stdin: $(inputs.file1.path) stdout: output 
././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/testdata/workflows/wf5.cwl0000755000175100001660000000156514753463342020156 0ustar00runnerdocker#!/usr/bin/env cwl-runner # Checks symbolic links on github class: Workflow cwlVersion: v1.2 inputs: in1: type: ../types/recursive.yml#file_with_sample_meta in2: type: ../types/array.yml#study_meta_too in3: type: ../types/singletype.yml#simple_record in4: type: [string, ../types/recursive.yml#sample_meta] steps: s1: run: ../tools/link-to-clt1.cwl in: in1: "#in1" # This should be normalized out in2: in2 in3: in3 in4: in4 out: [out2] outputs: - id: out1 type: ../types/array.yml#study_meta_too outputSource: "#s1/out2" requirements: SchemaDefRequirement: types: - $import: ../types/recursive.yml - $import: ../types/array.yml - $import: ../types/singletype.yml - name: user_type1 # For tools/clt2.cwl type: record fields: - name: prop type: string ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1739482862.5615606 cwl_utils-0.37/tests/0000755000175100001660000000000014753463357014236 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/__init__.py0000644000175100001660000000000014753463342016327 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/load_cwl_by_path.py0000644000175100001660000000124314753463342020074 0ustar00runnerdocker""" Example from README.md. Please synchronize all changes between the two. 
""" # SPDX-License-Identifier: Apache-2.0 from pathlib import Path from cwl_utils.parser import load_document_by_uri, save # File Input - This is the only thing you will need to adjust or take in as an input to your function: cwl_file = Path("testdata/md5sum.cwl") # or a plain string works as well # Import CWL Object cwl_obj = load_document_by_uri(cwl_file) # View CWL Object print("List of object attributes:\n{}".format("\n".join(map(str, dir(cwl_obj))))) # Export CWL Object into a built-in typed object saved_obj = save(cwl_obj) print(f"Export of the loaded CWL object: {saved_obj}.") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_cite_extract.py0000644000175100001660000000164414753463342020324 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Tests for cwl-cite-extract.""" import pytest from cwl_utils.cite_extract import arg_parser, run from .util import get_data def test_cite_extract_simple(capsys: pytest.CaptureFixture[str]) -> None: """Test the citation extraction, simply.""" assert run(arg_parser().parse_args([get_data("testdata/seqtk_seq.cwl")])) == 0 captured = capsys.readouterr() assert captured.out == "Package: seqtk, version: ['r93'], specs: None\n" assert captured.err == "" def test_cite_extract_workflow_no_results(capsys: pytest.CaptureFixture[str]) -> None: """Attempt to extract citations from a workflow without any SoftwareRequirements.""" assert ( run( arg_parser().parse_args([get_data("testdata/checker_wf/functional-wf.cwl")]) ) == 0 ) captured = capsys.readouterr() assert captured.out == "" assert captured.err == "" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_docker_extract.py0000644000175100001660000000750014753463342020644 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Tests for cwl-docker-extract.""" from pathlib import Path import pytest from cwl_utils.docker_extract import 
arg_parser, run from .util import get_data, needs_docker, needs_podman, needs_singularity @pytest.mark.parametrize( ("target", "engine"), [ pytest.param("testdata/md5sum.cwl", "docker", marks=needs_docker), pytest.param("testdata/md5sum_v11.cwl", "docker", marks=needs_docker), pytest.param("testdata/md5sum.cwl", "podman", marks=needs_podman), pytest.param("testdata/md5sum_v11.cwl", "podman", marks=needs_podman), pytest.param("testdata/md5sum.cwl", "singularity", marks=needs_singularity), pytest.param("testdata/md5sum_v11.cwl", "singularity", marks=needs_singularity), ], ) def test_container_extraction(target: str, engine: str, tmp_path: Path) -> None: """Test container extraction tool.""" args = ["--dir", str(tmp_path), get_data(target), "--container-engine", engine] if engine == "singularity": args.append("--singularity") reqs = run(arg_parser().parse_args(args)) assert len(reqs) == 1 assert len(list(tmp_path.iterdir())) == 1 @pytest.mark.parametrize( ("engine"), [ pytest.param("docker", marks=needs_docker), pytest.param("podman", marks=needs_podman), pytest.param("singularity", marks=needs_singularity), ], ) def test_container_extraction_force(engine: str, tmp_path: Path) -> None: """Test force pull container extraction.""" args = [ "--dir", str(tmp_path), get_data("testdata/md5sum.cwl"), "--container-engine", engine, ] if engine == "singularity": args.append("--singularity") reqs = run(arg_parser().parse_args(args)) assert len(reqs) == 1 assert len(list(tmp_path.iterdir())) == 1 args = [ "--dir", str(tmp_path), get_data("testdata/md5sum.cwl"), "--container-engine", engine, "--force-download", ] if engine == "singularity": args.append("--singularity") reqs = run(arg_parser().parse_args(args)) assert len(reqs) == 1 assert len(list(tmp_path.iterdir())) == 1 @pytest.mark.parametrize( ("engine"), [ pytest.param("docker", marks=needs_docker), pytest.param("podman", marks=needs_podman), pytest.param("singularity", marks=needs_singularity), ], ) def 
test_container_extraction_no_dockerPull( engine: str, tmp_path: Path, capsys: pytest.CaptureFixture[str] ) -> None: """Test container extraction tool when dockerPull is missing.""" args = [ "--dir", str(tmp_path), get_data("testdata/debian_image_id.cwl"), "--container-engine", engine, ] if engine == "singularity": args.append("--singularity") reqs = run(arg_parser().parse_args(args)) assert len(reqs) == 1 assert len(list(tmp_path.iterdir())) == 0 captured = capsys.readouterr() assert ( captured.err == """Unable to save image from due to lack of 'dockerPull': class: DockerRequirement dockerImageId: 'debian:stable-slim.img' """ ) @pytest.mark.parametrize( ("engine"), [ pytest.param("docker", marks=needs_docker), pytest.param("podman", marks=needs_podman), pytest.param("singularity", marks=needs_singularity), ], ) def test_container_extraction_embedded_step(engine: str, tmp_path: Path) -> None: """Test container extraction tool.""" args = [ "--dir", str(tmp_path), get_data("testdata/workflows/count-lines16-wf.cwl"), "--container-engine", engine, ] if engine == "singularity": args.append("--singularity") reqs = run(arg_parser().parse_args(args)) assert len(reqs) == 1 assert len(list(tmp_path.iterdir())) == 1 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_etools_to_clt.py0000644000175100001660000002124114753463342020512 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Test the CWL Expression refactoring tool.""" import os import shutil import sys import tarfile from collections.abc import Generator from pathlib import Path from typing import TYPE_CHECKING, cast import pytest import requests from _pytest.tmpdir import TempPathFactory from pytest import raises import cwl_utils.parser.cwl_v1_0 as parser import cwl_utils.parser.cwl_v1_1 as parser1 import cwl_utils.parser.cwl_v1_2 as parser2 from cwl_utils.cwl_v1_0_expression_refactor import traverse as traverse0 from 
cwl_utils.cwl_v1_1_expression_refactor import traverse as traverse1 from cwl_utils.cwl_v1_2_expression_refactor import traverse as traverse2 from cwl_utils.errors import WorkflowException from cwl_utils.expression_refactor import run as expression_refactor from .util import get_data if TYPE_CHECKING: from http.client import HTTPResponse def test_v1_0_workflow_top_level_format_expr() -> None: """Test for the correct error when converting a format expression in a workflow level input.""" with raises(WorkflowException, match=r".*format specification.*"): result, modified = traverse0( parser.load_document(get_data("testdata/workflow_input_format_expr.cwl")), False, False, False, False, ) def test_v1_0_workflow_top_level_sf_expr() -> None: """Test for the correct error when converting a secondaryFiles expression in a workflow level input.""" with raises(WorkflowException, match=r".*secondaryFiles.*"): result, modified = traverse0( parser.load_document(get_data("testdata/workflow_input_sf_expr.cwl")), False, False, False, False, ) def test_v1_0_workflow_top_level_sf_expr_array() -> None: """Test correct error when converting a secondaryFiles expression (array form) in a workflow level input.""" # noqa: B950 with raises(WorkflowException, match=r".*secondaryFiles.*"): result, modified = traverse0( parser.load_document(get_data("testdata/workflow_input_sf_expr_array.cwl")), False, False, False, False, ) def test_v1_1_workflow_top_level_format_expr() -> None: """Test for the correct error when converting a format expression in a workflow level input.""" with raises(WorkflowException, match=r".*format specification.*"): result, modified = traverse1( parser1.load_document( get_data("testdata/workflow_input_format_expr_v1_1.cwl") ), False, False, False, False, ) def test_v1_1_workflow_top_level_sf_expr() -> None: """Test for the correct error when converting a secondaryFiles expression in a workflow level input.""" with raises(WorkflowException, match=r".*secondaryFiles.*"): 
result, modified = traverse1( parser1.load_document(get_data("testdata/workflow_input_sf_expr_v1_1.cwl")), False, False, False, False, ) def test_v1_1_workflow_top_level_sf_expr_array() -> None: """Test for the correct error when converting a secondaryFiles expression (array form) in a workflow level input.""" # noqa: B950 with raises(WorkflowException, match=r".*secondaryFiles.*"): result, modified = traverse1( parser1.load_document( get_data("testdata/workflow_input_sf_expr_array_v1_1.cwl") ), False, False, False, False, ) def test_v1_2_workflow_top_level_format_expr() -> None: """Test for the correct error when converting a format expression in a workflow level input.""" with raises(WorkflowException, match=r".*format specification.*"): result, modified = traverse2( parser2.load_document( get_data("testdata/workflow_input_format_expr_v1_2.cwl") ), False, False, False, False, ) def test_v1_2_workflow_top_level_sf_expr() -> None: """Test for the correct error when converting a secondaryFiles expression in a workflow level input.""" with raises(WorkflowException, match=r".*secondaryFiles.*"): result, modified = traverse2( parser2.load_document(get_data("testdata/workflow_input_sf_expr_v1_2.cwl")), False, False, False, False, ) def test_v1_2_workflow_top_level_sf_expr_array() -> None: """Test for the correct error when converting a secondaryFiles expression (array form) in a workflow level input.""" # noqa: B950 with raises(WorkflowException, match=r".*secondaryFiles.*"): result, modified = traverse2( parser2.load_document( get_data("testdata/workflow_input_sf_expr_array_v1_2.cwl") ), False, False, False, False, ) def test_v1_0_step_valuefrom_expr_multisource() -> None: """Convert a valueFrom expression that has multiple sources.""" result, modified = traverse0( parser.load_document(get_data("testdata/step-valuefrom2-wf_v1_0.cwl")), False, False, False, False, ) def test_v1_1_step_valuefrom_expr_multisource() -> None: """Convert a valueFrom expression that has 
multiple sources.""" result, modified = traverse1( parser1.load_document(get_data("testdata/step-valuefrom2-wf_v1_1.cwl")), False, False, False, False, ) def test_v1_2_step_valuefrom_expr_multisource() -> None: """Convert a valueFrom expression that has multiple sources.""" result, modified = traverse2( parser2.load_document(get_data("testdata/step-valuefrom2-wf_v1_2.cwl")), False, False, False, False, ) def test_v1_0_step_valuefrom_expr_sibling_inputs() -> None: """Convert a valueFrom expression from a step input that has uninvolved sibling inputs.""" result, modified = traverse0( parser.load_document(get_data("testdata/step-valuefrom3-wf_v1_0.cwl")), False, False, False, False, ) def test_v1_1_step_valuefrom_expr_sibling_inputs() -> None: """Convert a valueFrom expression from a step input that has uninvolved sibling inputs.""" result, modified = traverse1( parser1.load_document(get_data("testdata/step-valuefrom3-wf_v1_1.cwl")), False, False, False, False, ) def test_v1_2_step_valuefrom_expr_sibling_inputs() -> None: """Convert a valueFrom expression from a step input that has uninvolved sibling inputs.""" result, modified = traverse2( parser2.load_document(get_data("testdata/step-valuefrom3-wf_v1_2.cwl")), False, False, False, False, ) def test_v1_2_workflow_output_pickvalue_expr() -> None: """Convert a workflow output pickValue expression.""" result, modified = traverse2( parser2.load_document(get_data("testdata/cond-wf-003.1.cwl")), False, False, False, False, ) def test_expression_refactor(tmp_path: Path) -> None: """Functional test.""" input_path = get_data("testdata/cond-wf-003.1.cwl") result = expression_refactor([str(tmp_path), input_path]) assert result == 0 def test_expression_refactor_noop_solo(tmp_path: Path) -> None: """Functional test.""" input_path = get_data("testdata/dockstore-tool-md5sum.cwl") result = expression_refactor([str(tmp_path), input_path]) assert result == 7 def test_expression_refactor_noop(tmp_path: Path) -> None: """Functional 
test.""" input_path1 = get_data("testdata/dockstore-tool-md5sum.cwl") input_path2 = get_data("testdata/echo-tool-packed.cwl") result = expression_refactor([str(tmp_path), input_path1, input_path2]) assert result == 0 @pytest.fixture(scope="session") def cwl_v1_0_dir( tmp_path_factory: TempPathFactory, ) -> Generator[str, None, None]: """Download the CWL 1.0.2 specs and return a path to the directory.""" tmp_path = tmp_path_factory.mktemp("cwl_v1_0_dir") with cast( "HTTPResponse", requests.get( "https://github.com/common-workflow-language/common-workflow-language/archive/v1.0.2.tar.gz", stream=True, ).raw, ) as specfileobj: tf = tarfile.open(fileobj=specfileobj) if sys.version_info > (3, 12): tf.extractall(path=tmp_path, filter="data") else: tf.extractall(path=tmp_path) yield str(tmp_path / "common-workflow-language-1.0.2") shutil.rmtree(os.path.join(tmp_path)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_examples.py0000644000175100001660000000072214753463342017460 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Tests of example Python scripts.""" import os import runpy from pathlib import Path def test_load_example() -> None: """Test the load_cwl_by_path.py script.""" cwd = Path.cwd() parent = Path(__file__).resolve().parent os.chdir(parent.parent) result_raw = runpy.run_path(str(parent / "load_cwl_by_path.py")) os.chdir(cwd) result = result_raw["saved_obj"] assert result["class"] == "Workflow" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_extensions.py0000644000175100001660000002315414753463342020045 0ustar00runnerdockerfrom pathlib import Path from cwl_utils.parser import cwl_v1_0, cwl_v1_1, cwl_v1_2, load_document_by_uri from .util import get_data def test_cuda_requirement_v1_0() -> None: """Test that CUDARequirement objects are correctly loaded for CWL v1.0.""" uri = ( 
Path(get_data("testdata/extensions/cuda-requirement_v1_0.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.CUDARequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:CUDARequirement" def test_cuda_requirement_v1_1() -> None: """Test that CUDARequirement objects are correctly loaded for CWL v1.1.""" uri = ( Path(get_data("testdata/extensions/cuda-requirement_v1_1.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_1.CUDARequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:CUDARequirement" def test_cuda_requirement_v1_2() -> None: """Test that CUDARequirement objects are correctly loaded for CWL v1.2.""" uri = ( Path(get_data("testdata/extensions/cuda-requirement_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_2.CUDARequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:CUDARequirement" def test_load_listing_requirement_v1_0() -> None: """Test that LoadListingRequirement objects are correctly loaded for CWL v1.0.""" uri = ( Path(get_data("testdata/extensions/load-listing-requirement_v1_0.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.LoadListingRequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:LoadListingRequirement" def test_loop_v1_2() -> None: """Test that Loop and LoopInput objects are correctly loaded for CWL v1.2.""" uri = ( Path(get_data("testdata/extensions/single-var-loop_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) cwl_step = next(iter(cwl_obj.steps)) loop_req = next(iter(cwl_step.requirements)) assert isinstance(loop_req, 
cwl_v1_2.Loop) assert isinstance(next(iter(loop_req.loop)), cwl_v1_2.LoopInput) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["steps"][0]["requirements"][0]["class"] == "cwltool:Loop" def test_inplace_update_requirement_v1_0() -> None: """Test that InplaceUpdateRequirement objects are correctly loaded for CWL v1.0.""" uri = ( Path(get_data("testdata/extensions/inplace-update-requirement_v1_0.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance( next(iter(cwl_obj.requirements)), cwl_v1_0.InplaceUpdateRequirement ) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:InplaceUpdateRequirement" def test_mpi_requirement_v1_0() -> None: """Test that MPIRequirement objects are correctly loaded for CWL v1.0.""" uri = ( Path(get_data("testdata/extensions/mpi-requirement_v1_0.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.MPIRequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:MPIRequirement" def test_mpi_requirement_v1_1() -> None: """Test that MPIRequirement objects are correctly loaded for CWL v1.1.""" uri = ( Path(get_data("testdata/extensions/mpi-requirement_v1_1.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_1.MPIRequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:MPIRequirement" def test_mpi_requirement_v1_2() -> None: """Test that MPIRequirement objects are correctly loaded for CWL v1.2.""" uri = ( Path(get_data("testdata/extensions/mpi-requirement_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_2.MPIRequirement) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:MPIRequirement" def test_network_access_v1_0() -> None: 
"""Test that NetworkAccess objects are correctly loaded for CWL v1.0.""" uri = ( Path(get_data("testdata/extensions/network-access_v1_0.cwl")).resolve().as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.NetworkAccess) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:NetworkAccess" def test_process_generator_v1_0() -> None: """Test that ProcessGenerator objects are correctly loaded for CWL v1.0.""" uri = ( Path(get_data("testdata/extensions/process-generator_v1_0.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(cwl_obj, cwl_v1_0.ProcessGenerator) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["class"] == "cwltool:ProcessGenerator" def test_process_generator_v1_1() -> None: """Test that ProcessGenerator objects are correctly loaded for CWL v1.1.""" uri = ( Path(get_data("testdata/extensions/process-generator_v1_1.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(cwl_obj, cwl_v1_1.ProcessGenerator) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["class"] == "cwltool:ProcessGenerator" def test_process_generator_v1_2() -> None: """Test that ProcessGenerator objects are correctly loaded for CWL v1.2.""" uri = ( Path(get_data("testdata/extensions/process-generator_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert isinstance(cwl_obj, cwl_v1_2.ProcessGenerator) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["class"] == "cwltool:ProcessGenerator" def test_secrets_v1_0() -> None: """Test that Secrets objects are correctly loaded for CWL v1.0.""" uri = Path(get_data("testdata/extensions/secrets_v1_0.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.Secrets) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:Secrets" def test_secrets_v1_1() -> None: """Test that 
Secrets objects are correctly loaded for CWL v1.1.""" uri = Path(get_data("testdata/extensions/secrets_v1_1.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_1.Secrets) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:Secrets" def test_secrets_v1_2() -> None: """Test that Secrets objects are correctly loaded for CWL v1.2.""" uri = Path(get_data("testdata/extensions/secrets_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_2.Secrets) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:Secrets" def test_shm_size_v1_0() -> None: """Test that ShmSize objects are correctly loaded for CWL v1.0.""" uri = Path(get_data("testdata/extensions/shm-size_v1_0.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.ShmSize) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:ShmSize" def test_shm_size_v1_1() -> None: """Test that ShmSize objects are correctly loaded for CWL v1.1.""" uri = Path(get_data("testdata/extensions/shm-size_v1_1.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_1.ShmSize) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:ShmSize" def test_shm_size_v1_2() -> None: """Test that ShmSize objects are correctly loaded for CWL v1.2.""" uri = Path(get_data("testdata/extensions/shm-size_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_2.ShmSize) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:ShmSize" def test_time_limit_v1_0() -> None: """Test that TimeLimit objects are correctly loaded for CWL v1.0.""" uri = 
Path(get_data("testdata/extensions/time-limit_v1_0.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.TimeLimit) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:TimeLimit" def test_work_reuse_v1_0() -> None: """Test that WorkReuse objects are correctly loaded for CWL v1.0.""" uri = Path(get_data("testdata/extensions/work-reuse_v1_0.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert isinstance(next(iter(cwl_obj.requirements)), cwl_v1_0.WorkReuse) cwl_dict = cwl_obj.save(top=True) assert cwl_dict["requirements"][0]["class"] == "cwltool:WorkReuse" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_format.py0000644000175100001660000001530214753463342017132 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Tests of cwl_utils.file_formats.""" import xml.sax from pathlib import Path from typing import Optional import requests from pytest import raises from rdflib import Graph from rdflib.compare import to_isomorphic from rdflib.plugins.parsers.notation3 import BadSyntax from schema_salad.exceptions import ValidationException from schema_salad.fetcher import DefaultFetcher from cwl_utils.file_formats import check_format from cwl_utils.parser import load_document_by_uri from cwl_utils.types import CWLObjectType from .util import get_data def _create_file(format_: Optional[str] = None) -> CWLObjectType: obj: CWLObjectType = { "class": "File", "basename": "example.txt", "size": 23, "contents": "hoopla", "nameroot": "example", "nameext": "txt", } if format_: obj["format"] = format_ return obj def _load_format(fetchurl: str) -> Graph: fetcher = DefaultFetcher({}, requests.Session()) content = fetcher.fetch_text(fetchurl) graph = Graph() for fmt in ["xml", "turtle", "rdfa"]: try: graph.parse(data=content, format=fmt, publicID=str(fetchurl)) break except 
(xml.sax.SAXParseException, TypeError, BadSyntax): pass return graph EDAM = _load_format(Path(get_data("testdata/EDAM_subset.owl")).as_uri()) GX = _load_format(Path(get_data("testdata/gx_edam.ttl")).as_uri()) def test_check_format() -> None: """Exact format equivalence test, with ontology.""" check_format( actual_file=_create_file(format_="http://edamontology.org/format_2330"), input_formats="http://edamontology.org/format_2330", ontology=EDAM, ) def test_check_format_subformat() -> None: """Test of check_format with a subformat.""" check_format( actual_file=_create_file(format_="http://edamontology.org/format_1929"), input_formats="http://edamontology.org/format_2330", ontology=EDAM, ) def test_check_format_equiv() -> None: """Test of check_format with an equivalent format.""" check_format( actual_file=_create_file(format_="http://edamontology.org/format_1929"), input_formats="http://galaxyproject.org/formats/fasta", ontology=EDAM + GX, ) def test_check_format_equiv2() -> None: """Test of check_format with an equivalent format, in the reverse.""" check_format( actual_file=_create_file(format_="http://galaxyproject.org/formats/fasta"), input_formats="http://edamontology.org/format_1929", ontology=EDAM + GX, ) def test_check_format_wrong_format() -> None: """Test of check_format with a non-match format with an ontology.""" with raises(ValidationException, match=r"File has an incompatible format: .*"): check_format( actual_file=_create_file(format_="http://edamontology.org/format_1929"), input_formats="http://edamontology.org/format_2334", ontology=EDAM, ) def test_check_format_wrong_format_no_ontology() -> None: """Test of check_format with a non-match format.""" with raises(ValidationException, match=r"File has an incompatible format: .*"): check_format( actual_file=_create_file(format_="http://edamontology.org/format_1929"), input_formats="http://edamontology.org/format_2334", ontology=None, ) def test_check_format_no_format() -> None: """Confirm that a missing 
format produces the expected exception.""" with raises(ValidationException, match=r"File has no 'format' defined: .*"): check_format( actual_file=_create_file(), input_formats="http://edamontology.org/format_2330", ontology=EDAM, ) def test_check_format_missing_file() -> None: """Confirm that a missing file produces no error.""" check_format( actual_file=[{}], input_formats="http://edamontology.org/format_2330", ontology=EDAM, ) def test_check_format_no_ontology() -> None: """Confirm that precisely matching formats without an ontology still match.""" check_format( actual_file=_create_file(format_="http://edamontology.org/format_2330"), input_formats="http://edamontology.org/format_2330", ontology=Graph(), ) def test_loading_options_graph_property_v1_0() -> None: """Test that RDFLib Graph representations of $schema properties are correctly loaded, CWL v1.0.""" uri = Path(get_data("testdata/formattest2_v1_0.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(EDAM) def test_loading_options_graph_property_v1_1() -> None: """Test that RDFLib Graph representations of $schema properties are correctly loaded, CWL v1.1.""" uri = Path(get_data("testdata/formattest2_v1_1.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(EDAM) def test_loading_options_graph_property_v1_2() -> None: """Test that RDFLib Graph representations of $schema properties are correctly loaded, CWL v1.2.""" uri = Path(get_data("testdata/formattest2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(EDAM) def test_loading_options_missing_graph_v1_0() -> None: """Affirm that v1.0 documents without $schema still produce an empty graph property.""" uri = Path(get_data("testdata/workflow_input_format_expr.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert 
to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(Graph()) def test_loading_options_missing_graph_v1_1() -> None: """Affirm that v1.1 documents without $schema still produce an empty graph property.""" uri = ( Path(get_data("testdata/workflow_input_format_expr_v1_1.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(Graph()) def test_loading_options_missing_graph_v1_2() -> None: """Affirm that v1.2 documents without $schema still produce an empty graph property.""" uri = ( Path(get_data("testdata/workflow_input_format_expr_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = load_document_by_uri(uri) assert to_isomorphic(cwl_obj.loadingOptions.graph) == to_isomorphic(Graph()) def test_loading_format_without_schema_v1_0() -> None: """Test that format fields without accompanying schemas are tolerated, CWL v1.0.""" uri = Path(get_data("testdata/revsort-run-1-packed.cwl")).resolve().as_uri() load_document_by_uri(uri) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_graph_split.py0000644000175100001660000000163014753463342020155 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Test the CWL $graph document splitter tool.""" from io import StringIO from pathlib import Path import requests from cwl_utils.graph_split import graph_split from .util import get_data URI = ( "https://gist.githubusercontent.com/altairwei/" "6a0097db95cad23de36f825ed3b9f4b0/raw/" "83f332931c3093ee73554cd7f60054ce17d03239/rhapsody_wta_1.8.packed.cwl" ) def test_graph_split(tmp_path: Path) -> None: """Confirm that a user provided example produces no exception.""" sourceIO = StringIO(requests.get(URI).text) sourceIO.name = URI graph_split(sourceIO, tmp_path, "yaml", "main.cwl", True) def test_graph_split_offline(tmp_path: Path) -> None: """Confirm that a local provided example produces no exception.""" with 
open(get_data("testdata/js-expr-req-wf.cwl")) as handle: graph_split(handle, tmp_path, "yaml", "main.cwl", True) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_inputs_schema_gen.py0000644000175100001660000000626614753463342021346 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Tests for cwl-inputs-schema-gen.""" from pathlib import Path from typing import Dict import pytest import requests from jsonschema.exceptions import SchemaError, ValidationError from jsonschema.validators import validate from ruamel.yaml import YAML from cwl_utils.inputs_schema_gen import cwl_to_jsonschema from cwl_utils.loghandler import _logger as _cwlutilslogger from cwl_utils.parser import load_document_by_uri TEST_ROOT_URL = ( "https://raw.githubusercontent.com/common-workflow-language/cwl-v1.2/main/tests" ) TEST_PARAMS = [ # Packed Case { "tool_url": f"{TEST_ROOT_URL}/revsort-packed.cwl", "input_url": f"{TEST_ROOT_URL}/revsort-job.json", }, # The number of parameters is a little large, and the definition itself is a straightforward case. 
{ "tool_url": f"{TEST_ROOT_URL}/bwa-mem-tool.cwl", "input_url": f"{TEST_ROOT_URL}/bwa-mem-job.json", }, # The case where CommandInputParameter is shortened (e.g., param: string) { "tool_url": f"{TEST_ROOT_URL}/env-tool1.cwl", "input_url": f"{TEST_ROOT_URL}/env-job.json", }, # Dir { "tool_url": f"{TEST_ROOT_URL}/dir.cwl", "input_url": f"{TEST_ROOT_URL}/dir-job.yml", }, # SecondaryFiles { "tool_url": f"{TEST_ROOT_URL}/secondaryfiles/rename-inputs.cwl", "input_url": f"{TEST_ROOT_URL}/secondaryfiles/rename-inputs.yml", }, # Stage array { "tool_url": f"{TEST_ROOT_URL}/stage-array.cwl", "input_url": f"{TEST_ROOT_URL}/stage-array-job.json", }, ] @pytest.mark.parametrize("test_param", TEST_PARAMS) def test_cwl_inputs_to_jsonschema(test_param: Dict[str, str]) -> None: tool_url = test_param["tool_url"] input_url = test_param["input_url"] cwl_obj = load_document_by_uri(tool_url) _cwlutilslogger.info(f"Generating schema for {Path(tool_url).name}") json_schema = cwl_to_jsonschema(cwl_obj) _cwlutilslogger.info( f"Testing {Path(input_url).name} against schema generated for input {Path(tool_url).name}" ) yaml = YAML() input_obj = yaml.load(requests.get(input_url).text) try: validate(input_obj, json_schema) except (ValidationError, SchemaError) as err: _cwlutilslogger.error( f"Validation failed for {Path(input_url).name} " f"against schema generated for input {Path(tool_url).name}" ) raise SchemaError(f"{Path(input_url).name} failed schema validation") from err def test_cwl_inputs_to_jsonschema_fails() -> None: """Compare tool schema of param 1 against input schema of param 2.""" tool_url = TEST_PARAMS[0]["tool_url"] input_url = TEST_PARAMS[3]["input_url"] cwl_obj = load_document_by_uri(tool_url) _cwlutilslogger.info(f"Generating schema for {Path(tool_url).name}") json_schema = cwl_to_jsonschema(cwl_obj) _cwlutilslogger.info( f"Testing {Path(input_url).name} against schema generated for input {Path(tool_url).name}" ) yaml = YAML() input_obj = yaml.load(requests.get(input_url).text) 
# We expect this to fail with pytest.raises(ValidationError): validate(input_obj, json_schema) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_js_sandbox.py0000644000175100001660000001402114753463342017771 0ustar00runnerdocker"""Test sandboxjs.py and related code.""" import os import shutil import threading from pathlib import Path from typing import Any import pytest from cwl_utils import expression, sandboxjs from .util import needs_podman, needs_singularity, needs_udocker node_versions = [ ("v0.8.26\n", False), ("v0.10.25\n", False), ("v0.10.26\n", True), ("v4.4.2\n", True), ("v7.7.3\n", True), ] @pytest.mark.parametrize("version,supported", node_versions) def test_node_version(version: str, supported: bool, mocker: Any) -> None: mocked_subprocess = mocker.patch("cwl_utils.sandboxjs.subprocess") mocked_subprocess.check_output = mocker.Mock(return_value=version) assert sandboxjs.check_js_threshold_version("node") == supported def test_value_from_two_concatenated_expressions() -> None: js_engine = sandboxjs.get_js_engine() js_engine.have_node_slim = False # type: ignore[attr-defined] js_engine.localdata = threading.local() # type: ignore[attr-defined] assert ( expression.do_eval( '$("a ")$("string")', {}, [{"class": "InlineJavascriptRequirement"}], None, None, {}, cwlVersion="v1.0", ) == "a string" ) def hide_nodejs(temp_dir: Path) -> str: """Generate a new PATH that hides node{js,}.""" paths: list[str] = os.environ.get("PATH", "").split(":") names: list[str] = [] if "/bin" in paths: bin_path = Path("/bin") if ( bin_path.is_symlink() and os.readlink(bin_path) == "usr/bin" and "/usr/bin" in paths ): paths.remove("/bin") for name in ("nodejs", "node"): path = shutil.which(name, path=":".join(paths)) if path: names.append(path) for name in names: dirname = os.path.dirname(name) if dirname in paths: paths.remove(dirname) new_dir = temp_dir / os.path.basename(dirname) new_dir.mkdir() for entry in 
os.listdir(dirname): if entry not in ("nodejs", "node"): os.symlink(os.path.join(dirname, entry), new_dir / entry) paths.append(str(new_dir)) return ":".join(paths) @needs_podman def test_value_from_two_concatenated_expressions_podman( tmp_path: Path, monkeypatch: pytest.MonkeyPatch ) -> None: """Javascript test using podman.""" new_paths = hide_nodejs(tmp_path) with monkeypatch.context() as m: m.setenv("PATH", new_paths) js_engine = sandboxjs.get_js_engine() js_engine.have_node_slim = False # type: ignore[attr-defined] js_engine.localdata = threading.local() # type: ignore[attr-defined] assert ( expression.do_eval( '$("a ")$("string")', {}, [{"class": "InlineJavascriptRequirement"}], None, None, {}, cwlVersion="v1.0", container_engine="podman", ) == "a string" ) @needs_udocker def test_value_from_two_concatenated_expressions_udocker( tmp_path: Path, monkeypatch: pytest.MonkeyPatch ) -> None: """Javascript test using udocker.""" new_paths = hide_nodejs(tmp_path) with monkeypatch.context() as m: m.setenv("PATH", new_paths) js_engine = sandboxjs.get_js_engine() js_engine.have_node_slim = False # type: ignore[attr-defined] js_engine.localdata = threading.local() # type: ignore[attr-defined] assert ( expression.do_eval( '$("a ")$("string")', {}, [{"class": "InlineJavascriptRequirement"}], None, None, {}, cwlVersion="v1.0", container_engine="udocker", ) == "a string" ) @needs_singularity def test_value_from_two_concatenated_expressions_singularity( tmp_path: Path, monkeypatch: pytest.MonkeyPatch ) -> None: """Javascript test using Singularity.""" new_paths = hide_nodejs(tmp_path) with monkeypatch.context() as m: m.setenv("PATH", new_paths) js_engine = sandboxjs.get_js_engine() js_engine.have_node_slim = False # type: ignore[attr-defined] js_engine.localdata = threading.local() # type: ignore[attr-defined] assert ( expression.do_eval( '$("a ")$("string")', {}, [{"class": "InlineJavascriptRequirement"}], None, None, {}, cwlVersion="v1.0", container_engine="singularity", ) 
== "a string" ) @needs_singularity def test_singularity_cache(tmp_path: Path, monkeypatch: pytest.MonkeyPatch) -> None: """Affirm that CWL_SINGULARIT_CACHE is respected.""" bin_path = tmp_path / "no_nodejs" bin_path.mkdir() new_paths = hide_nodejs(bin_path) cache_path = tmp_path / "singularity_cache" cache_path.mkdir() with monkeypatch.context() as m: m.setenv("PATH", new_paths) m.setenv("CWL_SINGULARITY_CACHE", str(cache_path)) js_engine = sandboxjs.get_js_engine() js_engine.localdata = threading.local() # type: ignore[attr-defined] js_engine.have_node_slim = False # type: ignore[attr-defined] assert ( expression.do_eval( "$(42*23)", {}, [{"class": "InlineJavascriptRequirement"}], None, None, {}, cwlVersion="v1.0", container_engine="singularity", ) == 42 * 23 ) assert (cache_path / "node_alpine.sif").exists() def test_caches_js_processes(mocker: Any) -> None: sandboxjs.exec_js_process("7", context="{}") mocked_new_js_proc = mocker.patch("cwl_utils.sandboxjs.new_js_proc") sandboxjs.exec_js_process("7", context="{}") mocked_new_js_proc.assert_not_called() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_meta.py0000644000175100001660000000041614753463342016570 0ustar00runnerdocker# SPDX-License-Identifier: Apache-2.0 """Test __meta__ properties.""" from cwl_utils.__meta__ import __version__ def test_graph_split() -> None: """Confirm that __version__ exists and is a string.""" assert __version__ assert isinstance(__version__, str) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1739482850.0 cwl_utils-0.37/tests/test_packing.py0000644000175100001660000000621414753463342017260 0ustar00runnerdockerfrom typing import Any from cwl_utils.pack import pack from .util import get_data def _find(l_item: list[Any], key: str, val: str) -> Any: return next(_x for _x in l_item if _x[key] == val) def test_port_normalization() -> None: cwl = 
pack(get_data("testdata/remote-cwl/wf1.cwl")) step_s1 = _find(cwl.get("steps", []), "id", "s1") step_in1 = _find(step_s1.get("in"), "id", "in1") assert step_in1["source"] == "in1" cwl = pack(get_data("testdata/wf2.cwl")) step_s1 = _find(cwl.get("steps", []), "id", "s1") step_in1 = _find(step_s1.get("in"), "id", "in1") assert step_in1["source"] == "in1" out1 = _find(cwl.get("outputs", []), "id", "out1") assert out1.get("outputSource") == "s2/out1" def test_include() -> None: cwl = pack(get_data("testdata/remote-cwl/tool1.cwl")) assert "arguments" in cwl assert isinstance(cwl.get("arguments"), list) inline_js_req = _find( cwl.get("requirements", []), "class", "InlineJavascriptRequirement" ) include_js = inline_js_req.get("expressionLib") assert isinstance(include_js, list) assert "engineers walk into a" in include_js[0] def test_schema_def1() -> None: cwl = pack(get_data("testdata/remote-cwl/tool2.cwl")) _type = _find(cwl.get("inputs", []), "id", "in1").get("type") assert isinstance(_type, dict) assert _type.get("type") == "array" def test_schema_def2() -> None: cwl = pack(get_data("testdata/wf2.cwl")) _type = _find(cwl.get("inputs", []), "id", "in2").get("type") assert isinstance(_type, dict) assert _type.get("type") == "enum" def test_step_packing() -> None: cwl = pack(get_data("testdata/remote-cwl/wf1.cwl")) s1 = _find(cwl.get("steps", []), "id", "s1") tool2 = s1.get("run") _type = _find(tool2.get("inputs"), "id", "in1").get("type") assert isinstance(_type, dict) assert _type.get("type") == "array" def test_embedded_packing() -> None: pack(get_data("testdata/workflows/count-lines16-wf.cwl")) def test_remote_packing() -> None: cwl = pack( "https://raw.githubusercontent.com/kaushik-work/sbpack/master/tests/wf2.cwl" ) s1 = _find(cwl.get("steps", []), "id", "s1") wf1 = s1.get("run") assert wf1.get("class") == "Workflow" tool2 = _find(wf1.get("steps"), "id", "s1").get("run") _type = _find(tool2.get("inputs"), "id", "in1").get("type") assert isinstance(_type, dict) 
    assert _type.get("type") == "array"


def test_remote_packing_github_soft_links() -> None:
    cwl = pack(
        "https://raw.githubusercontent.com/rabix/sbpack/master/tests/workflows/wf5.cwl"
    )
    s1 = _find(cwl.get("steps", []), "id", "s1")
    tool1 = s1.get("run")
    assert tool1.get("class") == "CommandLineTool"


def test_already_packed_graph() -> None:
    """Workflow already packed in a $graph."""
    cwl = pack(get_data("testdata/workflows/scatter-wf4.cwl"))
    assert "inputs" not in cwl
    assert "outputs" not in cwl
    assert "$graph" in cwl
    assert "requirements" not in cwl


def test_import_in_type() -> None:
    cwl = pack(get_data("testdata/workflows/import-in-type.cwl"))
    assert cwl["inputs"][0]["type"] == ["File", "Directory"]


# ==== cwl_utils-0.37/tests/test_parser.py ====

# SPDX-License-Identifier: Apache-2.0
"""Test the load and save functions for CWL."""
from pathlib import Path

from pytest import raises
from ruamel.yaml.main import YAML

import cwl_utils.parser.latest as latest
from cwl_utils.errors import GraphTargetMissingException
from cwl_utils.parser import (
    cwl_v1_2,
    cwl_version,
    load_document,
    load_document_by_uri,
    save,
)

from .util import get_data

TEST_v1_0_CWL = get_data("testdata/md5sum.cwl")
TEST_v1_0_CWL_REMOTE = (
    "https://raw.githubusercontent.com/"
    "common-workflow-language/cwl-utils/main/testdata/md5sum.cwl"
)
TEST_v1_2_CWL = get_data("testdata/workflow_input_format_expr_v1_2.cwl")

yaml = YAML(typ="rt")
yaml.preserve_quotes = True


def test_cwl_version() -> None:
    """Test cwl_version for a CommandLineTool."""
    with open(TEST_v1_0_CWL) as cwl_h:
        yaml_obj = yaml.load(cwl_h)
    ver = cwl_version(yaml_obj)
    assert ver == "v1.0"


def test_load_document() -> None:
    """Test load_document for a CommandLineTool."""
    with open(TEST_v1_0_CWL) as cwl_h:
        yaml_obj = yaml.load(cwl_h)
    cwl_obj = load_document(yaml_obj, Path(TEST_v1_0_CWL).resolve().as_uri())
    assert cwl_obj.cwlVersion == "v1.0"
    assert cwl_obj.inputs[0].id.endswith("input_file")


def test_load_document_with_local_uri() -> None:
    """Test load_document for a CommandLineTool in a local URI."""
    uri = Path(TEST_v1_0_CWL).resolve().as_uri()
    assert uri.startswith("file://")
    cwl_obj = load_document_by_uri(uri)
    assert cwl_obj.cwlVersion == "v1.0"
    assert cwl_obj.inputs[0].id.endswith("input_file")


def test_load_document_with_remote_uri() -> None:
    """Test load_document for a CommandLineTool in a remote URI."""
    cwl_obj = load_document_by_uri(TEST_v1_0_CWL_REMOTE)
    assert cwl_obj.cwlVersion == "v1.0"
    assert cwl_obj.inputs[0].id.endswith("input_file")


def test_save() -> None:
    """Test save for a list of Process objects with different cwlVersions."""
    with open(TEST_v1_0_CWL) as cwl_h:
        yaml_obj10 = yaml.load(cwl_h)
    cwl_obj10 = load_document(yaml_obj10, Path(TEST_v1_0_CWL).resolve().as_uri())
    assert cwl_obj10.cwlVersion == "v1.0"
    with open(TEST_v1_2_CWL) as cwl_h:
        yaml_obj12 = yaml.load(cwl_h)
    cwl_obj12 = load_document(yaml_obj12, Path(TEST_v1_2_CWL).resolve().as_uri())
    assert cwl_obj12.cwlVersion == "v1.2"
    saved_obj = save([cwl_obj10, cwl_obj12])
    ver = cwl_version(saved_obj)
    assert ver == "v1.2"


def test_latest_parser() -> None:
    """Test that the `latest` parser is the same as the cwl_v1_2 (current latest) parser."""
    uri = Path(TEST_v1_2_CWL).as_uri()
    with open(TEST_v1_2_CWL) as cwl_h:
        yaml_obj12 = yaml.load(cwl_h)
    latest_cwl_obj = latest.load_document_by_yaml(yaml_obj12, uri)
    assert latest_cwl_obj.cwlVersion == "v1.2"


def test_shortname() -> None:
    assert cwl_v1_2.shortname("http://example.com/foo") == "foo"
    assert cwl_v1_2.shortname("http://example.com/#bar") == "bar"
    assert cwl_v1_2.shortname("http://example.com/foo/bar") == "bar"
    assert cwl_v1_2.shortname("http://example.com/foo#bar") == "bar"
    assert cwl_v1_2.shortname("http://example.com/#foo/bar") == "bar"
    assert cwl_v1_2.shortname("http://example.com/foo#bar/baz") == "baz"


def test_get_id_from_graph() -> None:
    """Test loading an explicit id of a CWL document with a $graph property."""
    uri = Path(get_data("testdata/echo-tool-packed.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri + "#main")
    assert cwl_obj.id == uri + "#main"


def test_get_default_id_from_graph() -> None:
    """Test that loading the default id of a CWL document with a $graph property returns the `#main` id."""
    uri = Path(get_data("testdata/echo-tool-packed.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_obj.id == uri + "#main"


def test_get_default_id_from_graph_without_main() -> None:
    """Test that loading the default id of a CWL document with a $graph property and no `#main` id raises an error."""  # noqa: B950
    uri = Path(get_data("testdata/js-expr-req-wf.cwl")).resolve().as_uri()
    with raises(GraphTargetMissingException):
        load_document_by_uri(uri)


def test_graph_load_all() -> None:
    """Test that we can get all objects in a $graph file."""
    uri = Path(get_data("testdata/js-expr-req-wf.cwl")).resolve().as_uri()
    cwl_objs = load_document_by_uri(uri, load_all=True)
    assert len(cwl_objs) == 2


def test_map_ordering_v1_0() -> None:
    """Confirm that ID map entries are not sorted during parsing, CWL v1.0."""
    uri = Path(get_data("testdata/map-ordering-v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_obj.inputs[0].id == f"{uri}#09first_input"
    assert cwl_obj.inputs[1].id == f"{uri}#05second_input"
    assert cwl_obj.inputs[2].id == f"{uri}#01third_input"
    assert cwl_obj.steps[0].id == f"{uri}#zz_step_one"
    assert cwl_obj.steps[1].id == f"{uri}#00_step_two"
    assert cwl_obj.outputs[0].id == f"{uri}#zz_first_output"
    assert cwl_obj.outputs[1].id == f"{uri}#ll_second_output"
    assert cwl_obj.outputs[2].id == f"{uri}#aa_third_output"


def test_map_ordering_v1_1() -> None:
    """Confirm that ID map entries are not sorted during parsing, CWL v1.1."""
    uri = Path(get_data("testdata/map-ordering-v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_obj.inputs[0].id == f"{uri}#09first_input"
    assert cwl_obj.inputs[1].id == f"{uri}#05second_input"
    assert cwl_obj.inputs[2].id == f"{uri}#01third_input"
    assert cwl_obj.steps[0].id == f"{uri}#zz_step_one"
    assert cwl_obj.steps[1].id == f"{uri}#00_step_two"
    assert cwl_obj.outputs[0].id == f"{uri}#zz_first_output"
    assert cwl_obj.outputs[1].id == f"{uri}#ll_second_output"
    assert cwl_obj.outputs[2].id == f"{uri}#aa_third_output"


def test_map_ordering_v1_2() -> None:
    """Confirm that ID map entries are not sorted during parsing, CWL v1.2."""
    uri = Path(get_data("testdata/map-ordering-v1_2.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_obj.inputs[0].id == f"{uri}#09first_input"
    assert cwl_obj.inputs[1].id == f"{uri}#05second_input"
    assert cwl_obj.inputs[2].id == f"{uri}#01third_input"
    assert cwl_obj.steps[0].id == f"{uri}#zz_step_one"
    assert cwl_obj.steps[1].id == f"{uri}#00_step_two"
    assert cwl_obj.outputs[0].id == f"{uri}#zz_first_output"
    assert cwl_obj.outputs[1].id == f"{uri}#ll_second_output"
    assert cwl_obj.outputs[2].id == f"{uri}#aa_third_output"


# ==== cwl_utils-0.37/tests/test_parser_utils.py ====

# SPDX-License-Identifier: Apache-2.0
"""Test the CWL parsers utility functions."""
import tempfile
from collections.abc import MutableSequence
from pathlib import Path
from typing import cast

import pytest
from pytest import raises
from schema_salad.exceptions import ValidationException

import cwl_utils.parser.cwl_v1_0
import cwl_utils.parser.cwl_v1_0_utils
import cwl_utils.parser.cwl_v1_1
import cwl_utils.parser.cwl_v1_1_utils
import cwl_utils.parser.cwl_v1_2
import cwl_utils.parser.cwl_v1_2_utils
import cwl_utils.parser.utils
from cwl_utils.errors import WorkflowException
from cwl_utils.parser import load_document_by_uri

from .util import get_data
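The `test_shortname` assertions in test_parser.py above all follow one rule: take the URI fragment if present (otherwise the path), then keep only its last `/`-separated segment. A minimal standalone sketch of that rule — a hypothetical helper, not the actual `cwl_v1_2.shortname` implementation:

```python
def shortname_sketch(uri: str) -> str:
    """Return the last '/'-segment of the fragment (or of the path when no '#')."""
    # split("#")[-1] yields the fragment if one exists, else the whole URI;
    # split("/")[-1] then keeps only the final path segment.
    return uri.split("#")[-1].split("/")[-1]


# Mirrors the expectations exercised by test_shortname:
assert shortname_sketch("http://example.com/foo") == "foo"
assert shortname_sketch("http://example.com/#foo/bar") == "bar"
assert shortname_sketch("http://example.com/foo#bar/baz") == "baz"
```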
@pytest.mark.parametrize("cwlVersion", ["v1_0", "v1_1", "v1_2"])
def test_static_checker_fail(cwlVersion: str) -> None:
    """Confirm that the static type checker raises the expected exceptions."""
    if cwlVersion == "v1_0":
        uri = Path(get_data("testdata/checker_wf/broken-wf.cwl")).resolve().as_uri()
        cwl_obj = load_document_by_uri(uri)
        with pytest.raises(
            ValidationException,
            match="\nSource .* of type .* is incompatible\n.*with sink .* of type .*",
        ):
            cwl_utils.parser.utils.static_checker(cwl_obj)
        uri = Path(get_data("testdata/checker_wf/broken-wf2.cwl")).resolve().as_uri()
        cwl_obj = load_document_by_uri(uri)
        with pytest.raises(ValidationException, match="param .* not found in id: .*"):
            cwl_utils.parser.utils.static_checker(cwl_obj)
        uri = Path(get_data("testdata/checker_wf/broken-wf3.cwl")).resolve().as_uri()
        cwl_obj = load_document_by_uri(uri)
        with pytest.raises(ValidationException, match="param .* not found in id: .*"):
            cwl_utils.parser.utils.static_checker(cwl_obj)
    uri = (
        Path(get_data(f"testdata/count-lines6-wf_{cwlVersion}.cwl")).resolve().as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    with pytest.raises(
        ValidationException,
        match="\nSource .* of type .* is incompatible\n.*with sink .* of type .*",
    ):
        cwl_utils.parser.utils.static_checker(cwl_obj)
    uri = (
        Path(get_data(f"testdata/count-lines6-single-source-wf_{cwlVersion}.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    with pytest.raises(
        ValidationException,
        match="\nSource .* of type .* is incompatible\n.*with sink .* of type .*",
    ):
        cwl_utils.parser.utils.static_checker(cwl_obj)
    if cwlVersion == "v1_2":
        uri = (
            Path(get_data("testdata/cond-single-source-wf-003.1.cwl"))
            .resolve()
            .as_uri()
        )
        cwl_obj = load_document_by_uri(uri)
        with pytest.raises(ValidationException, match=".* pickValue is first_non_null"):
            cwl_utils.parser.utils.static_checker(cwl_obj)
        uri = (
            Path(get_data("testdata/cond-single-source-wf-004.1.cwl"))
            .resolve()
            .as_uri()
        )
        cwl_obj = load_document_by_uri(uri)
        with pytest.raises(
            ValidationException, match=".* pickValue is the_only_non_null"
        ):
            cwl_utils.parser.utils.static_checker(cwl_obj)


@pytest.mark.parametrize("cwlVersion", ["v1_0", "v1_1", "v1_2"])
def test_static_checker_success(cwlVersion: str) -> None:
    """Confirm that the static type checker correctly validates workflows."""
    test_files = [
        f"testdata/record-output-wf_{cwlVersion}.cwl",
        f"testdata/step_valuefrom5_wf_{cwlVersion}.cwl",
        f"testdata/step_valuefrom5_wf_with_id_{cwlVersion}.cwl",
        f"testdata/stdout-wf_{cwlVersion}.cwl",
        f"testdata/scatter-wf1_{cwlVersion}.cwl",
        f"testdata/scatter-wf2_{cwlVersion}.cwl",
        f"testdata/scatter-wf3_{cwlVersion}.cwl",
        f"testdata/count-lines7-wf_{cwlVersion}.cwl",
        f"testdata/count-lines7-single-source-wf_{cwlVersion}.cwl",
    ]
    if cwlVersion == "v1_2":
        test_files.extend(
            [
                "testdata/cond-wf-003.1.cwl",
                "testdata/cond-wf-004.1.cwl",
                "testdata/cond-wf-005.1.cwl",
                "testdata/cond-single-source-wf-005.1.cwl",
                "testdata/extensions/all-output-loop_v1_2.cwl",
                "testdata/extensions/single-var-loop_v1_2.cwl",
                "testdata/wf2.cwl",
            ]
        )
    for test_file in test_files:
        uri = Path(get_data(test_file)).resolve().as_uri()
        cwl_obj = load_document_by_uri(uri)
        cwl_utils.parser.utils.static_checker(cwl_obj)


def test_v1_0_file_content_64_kB() -> None:
    """Test that reading file content is allowed up to 64 kB in CWL v1.0."""
    text = "a" * cwl_utils.parser.cwl_v1_0_utils.CONTENT_LIMIT
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        content = cwl_utils.parser.cwl_v1_0_utils.content_limit_respected_read(f)
    assert content == text


def test_v1_0_file_content_larger_than_64_kB() -> None:
    """Test that reading file content is truncated to 64 kB for larger files in CWL v1.0."""
    text = "a" * (cwl_utils.parser.cwl_v1_0_utils.CONTENT_LIMIT + 1)
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        content = cwl_utils.parser.cwl_v1_0_utils.content_limit_respected_read(f)
    assert content == text[0 : cwl_utils.parser.cwl_v1_0_utils.CONTENT_LIMIT]


def test_v_1_0_load_inputfile_with_format() -> None:
    """Test that a File object with the `format` field is correctly loaded with CWL v1.0."""
    uri = Path(get_data("testdata/formattest2_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    uri = Path(get_data("testdata/formattest-job.json")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri(
        "v1.0", uri, cwl_obj.loadingOptions
    )
    assert jobfile["input"].format == cwl_obj.inputs[0].format


def test_v_1_0_load_inputfile_with_nested_array() -> None:
    """Test that nested arrays are preserved when loading an input file with CWL v1.0."""
    uri = Path(get_data("testdata/nested-array-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.0", uri)
    assert isinstance(jobfile["letters"], MutableSequence)
    assert isinstance(jobfile["letters"][0], MutableSequence)
    assert jobfile["letters"][0][0] == "a"


def test_v_1_0_load_inputfile_with_secondary_files() -> None:
    """Test that secondary files are treated as objects when loading an input file with CWL v1.0."""
    uri = Path(get_data("testdata/dir4-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.0", uri)
    assert isinstance(jobfile["inf"].secondaryFiles[0], cwl_utils.parser.cwl_v1_0.File)
    assert isinstance(
        jobfile["inf"].secondaryFiles[1], cwl_utils.parser.cwl_v1_0.Directory
    )


def test_v1_0_stdout_to_file() -> None:
    """Test that the stdout shortcut is converted to a stdout parameter with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(id="test", type_="stdout")
        ],
    )
    cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout is not None
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_0_stdout_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stdout shortcut with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(
                id="test",
                type_="stdout",
                outputBinding=cwl_utils.parser.cwl_v1_0.CommandOutputBinding(
                    glob="output.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)


def test_v1_0_stdout_to_file_preserve_original() -> None:
    """Test that an explicit stdout parameter takes precedence over the stdout shortcut with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(id="test", type_="stdout")
        ],
        stdout="original.txt",
    )
    cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout == "original.txt"
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_0_stderr_to_file() -> None:
    """Test that the stderr shortcut is converted to a stderr parameter with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(id="test", type_="stderr")
        ],
    )
    cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr is not None
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_0_stderr_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stderr shortcut with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(
                id="test",
                type_="stderr",
                outputBinding=cwl_utils.parser.cwl_v1_0.CommandOutputBinding(
                    glob="err.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)


def test_v1_0_stderr_to_file_preserve_original() -> None:
    """Test that an explicit stderr parameter takes precedence over the stderr shortcut with CWL v1.0."""
    clt = cwl_utils.parser.cwl_v1_0.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_0.CommandOutputParameter(id="test", type_="stderr")
        ],
        stderr="original.txt",
    )
    cwl_utils.parser.cwl_v1_0_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr == "original.txt"
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_0_type_compare_list() -> None:
    """Test that the type comparison works correctly for a list type with CWL v1.0."""
    uri = Path(get_data("testdata/echo_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_utils.parser.cwl_v1_0_utils._compare_type(
        cwl_obj.inputs[0].type_, cwl_obj.inputs[0].type_
    )


def test_v1_0_type_compare_record() -> None:
    """Test that the type comparison works correctly for a record type with CWL v1.0."""
    uri = Path(get_data("testdata/record-output-wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert cwl_utils.parser.cwl_v1_0_utils._compare_type(source_type, source_type)


def test_v1_0_type_for_source() -> None:
    """Test that the type is correctly inferred from a source id with CWL v1.0."""
    uri = Path(get_data("testdata/step_valuefrom5_wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file"
    )
    assert source_type == "File"


def test_v1_0_type_for_source_with_id() -> None:
    """Test that the type is correctly inferred from a source id with CWL v1.0."""
    uri = (
        Path(get_data("testdata/step_valuefrom5_wf_with_id_v1_0.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file"
    )
    assert source_type == "File"


def test_v1_0_type_for_stdout() -> None:
    """Test that the `stdout` type is correctly matched with the `File` type in CWL v1.0."""
    uri = Path(get_data("testdata/stdout-wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.outputs[0].outputSource
    )
    assert source_type == "File"


def test_v1_0_type_output_source_record() -> None:
    """Test that the type is correctly inferred from a record output source with CWL v1.0."""
    uri = Path(get_data("testdata/record-output-wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.RecordSchema)
    fields = cast(
        MutableSequence[cwl_utils.parser.cwl_v1_0.RecordField], source_type.fields
    )
    assert len(fields) == 2
    assert fields[0].type_ == "File"
    assert fields[1].type_ == "File"


def test_v1_0_type_for_output_source_with_single_scatter_step() -> None:
    """Test that the type is correctly inferred from a single scatter step with CWL v1.0."""
    uri = Path(get_data("testdata/scatter-wf1_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items == "string"


def test_v1_0_type_for_output_source_with_nested_crossproduct_scatter_step() -> None:
    """Test that the type is correctly inferred from a nested_crossproduct scatter step with CWL v1.0."""
    uri = Path(get_data("testdata/scatter-wf2_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items.items == "string"


def test_v1_0_type_for_output_source_with_flat_crossproduct_scatter_step() -> None:
    """Test that the type is correctly inferred from a flat_crossproduct scatter
    step with CWL v1.0."""
    uri = Path(get_data("testdata/scatter-wf3_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items == "string"


def test_v1_0_type_for_source_with_multiple_entries_merge_nested() -> None:
    """Test that the type is correctly inferred from a list of source ids and merge_nested with CWL v1.0."""
    uri = Path(get_data("testdata/count-lines6-wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items.items == "File"


def test_v1_0_type_for_source_with_multiple_entries_merge_flattened() -> None:
    """Test that the type is correctly inferred from a list of source ids and merge_flattened with CWL v1.0."""  # noqa: B950
    uri = Path(get_data("testdata/count-lines7-wf_v1_0.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items == "File"


def test_v1_0_type_for_source_with_single_entry_merge_nested() -> None:
    """Test that the type is correctly inferred from a single source id and merge_nested with CWL v1.0."""
    uri = (
        Path(get_data("testdata/count-lines6-single-source-wf_v1_0.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items.items == "File"


def test_v1_0_type_for_source_with_single_entry_merge_flattened() -> None:
    """Test that the type is correctly inferred from a single source id and merge_flattened with CWL v1.0."""
    uri = (
        Path(get_data("testdata/count-lines7-single-source-wf_v1_0.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_0.ArraySchema)
    assert source_type.items == "File"


def test_v1_1_file_content_64_kB() -> None:
    """Test that reading file content is allowed up to 64 kB in CWL v1.1."""
    text = "a" * cwl_utils.parser.cwl_v1_1_utils.CONTENT_LIMIT
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        content = cwl_utils.parser.cwl_v1_1_utils.content_limit_respected_read(f)
    assert content == text


def test_v1_1_file_content_larger_than_64_kB() -> None:
    """Test that reading file content is truncated to 64 kB for larger files in CWL v1.1."""
    text = "a" * (cwl_utils.parser.cwl_v1_1_utils.CONTENT_LIMIT + 1)
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        content = cwl_utils.parser.cwl_v1_1_utils.content_limit_respected_read(f)
    assert content == text[0 : cwl_utils.parser.cwl_v1_1_utils.CONTENT_LIMIT]


def test_v_1_1_load_inputfile_with_format() -> None:
    """Test that a File object with the `format` field is correctly loaded with CWL v1.1."""
    uri = Path(get_data("testdata/formattest2_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    uri = Path(get_data("testdata/formattest-job.json")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri(
        "v1.1", uri, cwl_obj.loadingOptions
    )
    assert jobfile["input"].format == cwl_obj.inputs[0].format


def test_v_1_1_load_inputfile_with_nested_array() -> None:
    """Test that nested arrays are preserved when loading an input file with CWL v1.1."""
    uri = Path(get_data("testdata/nested-array-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.1", uri)
    assert isinstance(jobfile["letters"], MutableSequence)
    assert isinstance(jobfile["letters"][0], MutableSequence)
    assert jobfile["letters"][0][0] == "a"


def test_v_1_1_load_inputfile_with_requirements() -> None:
    """Test that an input file with the cwl:requirements directive is correctly loaded with CWL v1.1."""
    uri = Path(get_data("testdata/env-job3.yaml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.1", uri)
    assert isinstance(
        jobfile["cwl:requirements"][0], cwl_utils.parser.cwl_v1_1.EnvVarRequirement
    )


def test_v_1_1_load_inputfile_with_secondary_files() -> None:
    """Test that secondary files are treated as objects when loading an input file with CWL v1.1."""
    uri = Path(get_data("testdata/dir4-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.1", uri)
    assert isinstance(jobfile["inf"].secondaryFiles[0], cwl_utils.parser.cwl_v1_1.File)
    assert isinstance(
        jobfile["inf"].secondaryFiles[1], cwl_utils.parser.cwl_v1_1.Directory
    )


def test_v1_1_stdout_to_file() -> None:
    """Test that the stdout shortcut is converted to a stdout parameter with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(id="test", type_="stdout")
        ],
    )
    cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout is not None
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_1_stdout_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stdout shortcut with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(
                id="test",
                type_="stdout",
                outputBinding=cwl_utils.parser.cwl_v1_1.CommandOutputBinding(
                    glob="output.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)


def test_v1_1_stdout_to_file_preserve_original() -> None:
    """Test that an explicit stdout parameter takes precedence over the stdout shortcut with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(id="test", type_="stdout")
        ],
        stdout="original.txt",
    )
    cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout == "original.txt"
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_1_stderr_to_file() -> None:
    """Test that the stderr shortcut is converted to a stderr parameter with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(id="test", type_="stderr")
        ],
    )
    cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr is not None
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_1_stderr_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stderr shortcut with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(
                id="test",
                type_="stderr",
                outputBinding=cwl_utils.parser.cwl_v1_1.CommandOutputBinding(
                    glob="err.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)


def test_v1_1_stderr_to_file_preserve_original() -> None:
    """Test that an explicit stderr parameter takes precedence over the stderr shortcut with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_1.CommandOutputParameter(id="test", type_="stderr")
        ],
        stderr="original.txt",
    )
    cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr == "original.txt"
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_1_stdin_to_file() -> None:
    """Test that the stdin shortcut is converted to a stdin parameter with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[
            cwl_utils.parser.cwl_v1_1.CommandInputParameter(id="test", type_="stdin")
        ],
        outputs=[],
    )
    cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)
    assert clt.stdin is not None


def test_v1_1_stdin_to_file_with_binding() -> None:
    """Test that inputBinding is not allowed with the stdin shortcut with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[
            cwl_utils.parser.cwl_v1_1.CommandInputParameter(
                id="test",
                type_="stdin",
                inputBinding=cwl_utils.parser.cwl_v1_1.CommandLineBinding(
                    prefix="--test"
                ),
            )
        ],
        outputs=[],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)


def test_v1_1_stdin_to_file_fail_with_original() -> None:
    """Test that the stdin shortcut fails when a stdin parameter is defined with CWL v1.1."""
    clt = cwl_utils.parser.cwl_v1_1.CommandLineTool(
        inputs=[
            cwl_utils.parser.cwl_v1_1.CommandInputParameter(id="test", type_="stdin")
        ],
        outputs=[],
        stdin="original.txt",
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_1_utils.convert_stdstreams_to_files(clt)


def test_v1_1_type_compare_list() -> None:
    """Test that the type comparison works correctly for a list type with CWL v1.1."""
    uri = Path(get_data("testdata/echo_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    assert cwl_utils.parser.cwl_v1_1_utils._compare_type(
        cwl_obj.inputs[0].type_, cwl_obj.inputs[0].type_
    )


def test_v1_1_type_compare_record() -> None:
    """Test that the type comparison works correctly for a record type with CWL v1.1."""
    uri = Path(get_data("testdata/record-output-wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert cwl_utils.parser.cwl_v1_1_utils._compare_type(source_type, source_type)


def test_v1_1_type_for_source() -> None:
    """Test that the type is correctly inferred from a source id with CWL v1.1."""
    uri = Path(get_data("testdata/step_valuefrom5_wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file"
    )
    assert source_type == "File"


def test_v1_1_type_for_source_with_id() -> None:
    """Test that the type is correctly inferred from a source id with CWL v1.1."""
    uri = (
        Path(get_data("testdata/step_valuefrom5_wf_with_id_v1_1.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file"
    )
    assert source_type == "File"


def test_v1_1_type_for_stdout() -> None:
    """Test that the `stdout` type is correctly matched with the `File` type in CWL v1.1."""
    uri = Path(get_data("testdata/stdout-wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        cwl_obj, cwl_obj.outputs[0].outputSource
    )
    assert source_type == "File"


def test_v1_1_type_output_source_record() -> None:
    """Test that the type is correctly inferred from a record output source with CWL v1.1."""
    uri = Path(get_data("testdata/record-output-wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.RecordSchema)
    fields = cast(
        MutableSequence[cwl_utils.parser.cwl_v1_1.RecordField], source_type.fields
    )
    assert len(fields) == 2
    assert fields[0].type_ == "File"
    assert fields[1].type_ == "File"


def test_v1_1_type_for_output_source_with_single_scatter_step() -> None:
    """Test that the type is correctly inferred from a single scatter step with CWL v1.1."""
    uri = Path(get_data("testdata/scatter-wf1_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items == "string"


def test_v1_1_type_for_output_source_with_nested_crossproduct_scatter_step() -> None:
    """Test that the type is correctly inferred from a nested_crossproduct scatter step with CWL v1.1."""
    uri = Path(get_data("testdata/scatter-wf2_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items.items == "string"


def test_v1_1_type_for_output_source_with_flat_crossproduct_scatter_step() -> None:
    """Test that the type is correctly inferred from a flat_crossproduct scatter step with CWL v1.1."""
    uri = Path(get_data("testdata/scatter-wf3_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items == "string"


def test_v1_1_type_for_source_with_multiple_entries_merge_nested() -> None:
    """Test that the type is correctly inferred from a list of source ids and merge_nested with CWL v1.1."""
    uri = Path(get_data("testdata/count-lines6-wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items.items == "File"


def test_v1_1_type_for_source_with_multiple_entries_merge_flattened() -> None:
    """Test that the type is correctly inferred from a list of source ids and merge_flattened with CWL v1.1."""  # noqa: B950
    uri = Path(get_data("testdata/count-lines7-wf_v1_1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items == "File"


def test_v1_1_type_for_source_with_single_entry_merge_nested() -> None:
    """Test that the type is correctly inferred from a single source id and merge_nested with CWL v1.1."""
    uri = (
        Path(get_data("testdata/count-lines6-single-source-wf_v1_1.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items.items == "File"


def test_v1_1_type_for_source_with_single_entry_merge_flattened() -> None:
    """Test that the type is correctly inferred from a single source id and merge_flattened with CWL v1.1."""
    uri = (
        Path(get_data("testdata/count-lines7-single-source-wf_v1_1.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_1.ArraySchema)
    assert source_type.items == "File"


def test_v1_2_file_content_64_kB() -> None:
    """Test that reading file content is allowed up to 64 kB in CWL v1.2."""
    text = "a" * cwl_utils.parser.cwl_v1_2_utils.CONTENT_LIMIT
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        content = cwl_utils.parser.cwl_v1_2_utils.content_limit_respected_read(f)
    assert content == text


def test_v1_2_file_content_larger_than_64_kB() -> None:
    """Test that reading file content fails for files larger than 64 kB in CWL v1.2."""
    text = "a" * (cwl_utils.parser.cwl_v1_2_utils.CONTENT_LIMIT + 1)
    with tempfile.TemporaryFile() as f:
        f.write(text.encode("utf-8"))
        f.seek(0)
        with raises(WorkflowException):
            cwl_utils.parser.cwl_v1_2_utils.content_limit_respected_read(f)


def test_v_1_2_load_inputfile_with_format() -> None:
    """Test that a File object with the `format` field is correctly loaded with CWL v1.2."""
    uri = Path(get_data("testdata/formattest2.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    uri = Path(get_data("testdata/formattest-job.json")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri(
        "v1.2", uri, cwl_obj.loadingOptions
    )
    assert jobfile["input"].format == cwl_obj.inputs[0].format


def test_v_1_2_load_inputfile_with_nested_array() -> None:
    """Test that nested arrays are preserved when loading an input file with CWL v1.2."""
    uri = Path(get_data("testdata/nested-array-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.2", uri)
    assert isinstance(jobfile["letters"], MutableSequence)
    assert isinstance(jobfile["letters"][0], MutableSequence)
    assert jobfile["letters"][0][0] == "a"


def test_v_1_2_load_inputfile_with_requirements() -> None:
    """Test that an input file with the cwl:requirements directive is correctly loaded with CWL v1.2."""
    uri = Path(get_data("testdata/env-job3.yaml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.2", uri)
    assert isinstance(
        jobfile["cwl:requirements"][0], cwl_utils.parser.cwl_v1_2.EnvVarRequirement
    )


def test_v_1_2_load_inputfile_with_secondary_files() -> None:
    """Test that secondary files are treated as objects when loading an input file with CWL v1.2."""
    uri = Path(get_data("testdata/dir4-job.yml")).resolve().as_uri()
    jobfile = cwl_utils.parser.utils.load_inputfile_by_uri("v1.2", uri)
    assert isinstance(jobfile["inf"].secondaryFiles[0], cwl_utils.parser.cwl_v1_2.File)
    assert isinstance(
        jobfile["inf"].secondaryFiles[1], cwl_utils.parser.cwl_v1_2.Directory
    )


def test_v1_2_stdout_to_file() -> None:
    """Test that the stdout shortcut is converted to a stdout parameter with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(id="test", type_="stdout")
        ],
    )
    cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout is not None
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_2_stdout_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stdout shortcut with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(
                id="test",
                type_="stdout",
                outputBinding=cwl_utils.parser.cwl_v1_2.CommandOutputBinding(
                    glob="output.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)


def test_v1_2_stdout_to_file_preserve_original() -> None:
    """Test that an explicit stdout parameter takes precedence over the stdout shortcut with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(id="test", type_="stdout")
        ],
        stdout="original.txt",
    )
    cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)
    assert clt.stdout == "original.txt"
    assert clt.stdout == clt.outputs[0].outputBinding.glob


def test_v1_2_stderr_to_file() -> None:
    """Test that the stderr shortcut is converted to a stderr parameter with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(id="test", type_="stderr")
        ],
    )
    cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr is not None
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_2_stderr_to_file_with_binding() -> None:
    """Test that outputBinding is not allowed with the stderr shortcut with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(
                id="test",
                type_="stderr",
                outputBinding=cwl_utils.parser.cwl_v1_2.CommandOutputBinding(
                    glob="err.txt"
                ),
            )
        ],
    )
    with raises(ValidationException):
        cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)


def test_v1_2_stderr_to_file_preserve_original() -> None:
    """Test that an explicit stderr parameter takes precedence over the stderr shortcut with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[],
        outputs=[
            cwl_utils.parser.cwl_v1_2.CommandOutputParameter(id="test", type_="stderr")
        ],
        stderr="original.txt",
    )
    cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)
    assert clt.stderr == "original.txt"
    assert clt.stderr == clt.outputs[0].outputBinding.glob


def test_v1_2_stdin_to_file() -> None:
    """Test that the stdin shortcut is converted to a stdin parameter with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[
            cwl_utils.parser.cwl_v1_2.CommandInputParameter(id="test", type_="stdin")
        ],
        outputs=[],
    )
    cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt)
    assert clt.stdin is not None


def test_v1_2_stdin_to_file_with_binding() -> None:
    """Test that inputBinding is not allowed with the stdin shortcut with CWL v1.2."""
    clt = cwl_utils.parser.cwl_v1_2.CommandLineTool(
        inputs=[
            cwl_utils.parser.cwl_v1_2.CommandInputParameter(
                id="test",
                type_="stdin",
inputBinding=cwl_utils.parser.cwl_v1_2.CommandLineBinding( prefix="--test" ), ) ], outputs=[], ) with raises(ValidationException): cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt) def test_v1_2_stdin_to_file_fail_with_original() -> None: """Test that stdin shortcut fails when stdin parameter is defined with CWL v1.2.""" clt = cwl_utils.parser.cwl_v1_2.CommandLineTool( inputs=[ cwl_utils.parser.cwl_v1_2.CommandInputParameter(id="test", type_="stdin") ], outputs=[], stdin="original.txt", ) with raises(ValidationException): cwl_utils.parser.cwl_v1_2_utils.convert_stdstreams_to_files(clt) def test_v1_2_type_compare_list() -> None: """Test that the type comparison works correctly a list type with CWL v1.2.""" uri = Path(get_data("testdata/echo_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) assert cwl_utils.parser.cwl_v1_2_utils._compare_type( cwl_obj.inputs[0].type_, cwl_obj.inputs[0].type_ ) def test_v1_2_type_compare_record() -> None: """Test that the type comparison works correctly a record type with CWL v1.2.""" uri = Path(get_data("testdata/record-output-wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource, ) assert cwl_utils.parser.cwl_v1_2_utils._compare_type(source_type, source_type) def test_v1_2_type_for_source() -> None: """Test that the type is correctly inferred from a source id with CWL v1.2.""" uri = Path(get_data("testdata/step_valuefrom5_wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file" ) assert source_type == "File" def test_v1_2_type_for_source_with_id() -> None: """Test that the type is correctly inferred from a source id with CWL v1.2.""" uri = ( Path(get_data("testdata/step_valuefrom5_wf_with_id_v1_2.cwl")) .resolve() .as_uri() ) cwl_obj = 
load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( cwl_obj, cwl_obj.loadingOptions.fileuri + "#step1/echo_out_file" ) assert source_type == "File" def test_v1_2_type_for_stdout() -> None: """Test that the `stdout` type is correctly matched with the `File` type in CWL v1.2.""" uri = Path(get_data("testdata/stdout-wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( cwl_obj, cwl_obj.outputs[0].outputSource ) assert source_type == "File" def test_v1_2_type_output_source_record() -> None: """Test that the type is correctly inferred from a record output source with CWL v1.2.""" uri = Path(get_data("testdata/record-output-wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.RecordSchema) fields = cast( MutableSequence[cwl_utils.parser.cwl_v1_2.RecordField], source_type.fields ) assert len(fields) == 2 assert fields[0].type_ == "File" assert fields[1].type_ == "File" def test_v1_2_type_for_output_source_with_single_scatter_step() -> None: """Test that the type is correctly inferred from a single scatter step with CWL v1.2.""" uri = Path(get_data("testdata/scatter-wf1_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema) assert source_type.items == "string" def test_v1_2_type_for_output_source_with_nested_crossproduct_scatter_step() -> None: """Test that the type is correctly inferred from a nested_crossproduct scatter step with CWL v1.2.""" uri = Path(get_data("testdata/scatter-wf2_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = 
cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema) assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_2.ArraySchema) assert source_type.items.items == "string" def test_v1_2_type_for_output_source_with_flat_crossproduct_scatter_step() -> None: """Test that the type is correctly inferred from a flat_crossproduct scatter step with CWL v1.2.""" uri = Path(get_data("testdata/scatter-wf3_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.outputs[0].outputSource, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema) assert source_type.items == "string" def test_v1_2_type_for_source_with_multiple_entries_merge_nested() -> None: """Test that the type is correctly inferred from a list of source ids and merge_nested with CWL v1.2.""" uri = Path(get_data("testdata/count-lines6-wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.steps[0].in_[0].source, linkMerge=cwl_obj.steps[0].in_[0].linkMerge, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema) assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_2.ArraySchema) assert source_type.items.items == "File" def test_v1_2_type_for_source_with_multiple_entries_merge_flattened() -> None: """Test that the type is correctly inferred from a list of source ids and merge_flattened with CWL v1.2.""" # noqa: B950 uri = Path(get_data("testdata/count-lines7-wf_v1_2.cwl")).resolve().as_uri() cwl_obj = load_document_by_uri(uri) source_type = cwl_utils.parser.utils.type_for_source( process=cwl_obj, sourcenames=cwl_obj.steps[0].in_[0].source, linkMerge=cwl_obj.steps[0].in_[0].linkMerge, ) assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema) 
    assert source_type.items == "File"


def test_v1_2_type_for_source_with_single_entry_merge_nested() -> None:
    """Test that the type is correctly inferred from a single source id and merge_nested with CWL v1.2."""
    uri = (
        Path(get_data("testdata/count-lines6-single-source-wf_v1_2.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema)
    assert isinstance(source_type.items, cwl_utils.parser.cwl_v1_2.ArraySchema)
    assert source_type.items.items == "File"


def test_v1_2_type_for_source_with_single_entry_merge_flattened() -> None:
    """Test that the type is correctly inferred from a single source id and merge_flattened with CWL v1.2."""
    uri = (
        Path(get_data("testdata/count-lines7-single-source-wf_v1_2.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.steps[0].in_[0].source,
        linkMerge=cwl_obj.steps[0].in_[0].linkMerge,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema)
    assert source_type.items == "File"


def test_v1_2_type_for_source_with_multiple_entries_first_non_null() -> None:
    """Test that the type is correctly inferred from a list of source ids and first_non_null with CWL v1.2."""
    uri = Path(get_data("testdata/cond-wf-003.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert source_type == "string"


def test_v1_2_type_for_source_with_multiple_entries_the_only_non_null() -> None:
    """Test that the type is correctly inferred from a list of source ids and the_only_non_null with CWL v1.2."""  # noqa: B950
    uri = Path(get_data("testdata/cond-wf-004.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert source_type == "string"


def test_v1_2_type_for_source_with_multiple_entries_all_non_null() -> None:
    """Test that the type is correctly inferred from a list of source ids and all_non_null with CWL v1.2."""
    uri = Path(get_data("testdata/cond-wf-005.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema)
    assert source_type.items == "string"


def test_v1_2_type_for_source_with_single_entry_first_non_null() -> None:
    """Test that the type is correctly inferred from a single source id and first_non_null with CWL v1.2."""
    uri = Path(get_data("testdata/cond-single-source-wf-003.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert source_type == "string"


def test_v1_2_type_for_source_with_single_entry_the_only_non_null() -> None:
    """Test that the type is correctly inferred from a single source id and the_only_non_null with CWL v1.2."""  # noqa: B950
    uri = Path(get_data("testdata/cond-single-source-wf-004.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert source_type == "string"


def test_v1_2_type_for_source_with_single_entry_all_non_null() -> None:
    """Test that the type is correctly inferred from a single source id and all_non_null with CWL v1.2."""
    uri = Path(get_data("testdata/cond-single-source-wf-005.1.cwl")).resolve().as_uri()
    cwl_obj = load_document_by_uri(uri)
    source_type = cwl_utils.parser.utils.type_for_source(
        process=cwl_obj,
        sourcenames=cwl_obj.outputs[0].outputSource,
        pickValue=cwl_obj.outputs[0].pickValue,
    )
    assert isinstance(source_type, cwl_utils.parser.cwl_v1_2.ArraySchema)
    assert source_type.items == "string"


# ===== cwl_utils-0.37/tests/test_subscope.py =====
# SPDX-License-Identifier: Apache-2.0
"""Test that scoping of identifiers in Workflow.steps[].run is correct."""
from pathlib import Path

from cwl_utils.parser import Workflow, load_document_by_uri

from .util import get_data


def test_workflow_step_process_scope_v1_0() -> None:
    """CWL v1.0 IDs under Workflow.steps[].run should not be scoped in the "run" scope."""
    uri = Path(get_data("testdata/workflow_input_format_expr.cwl")).resolve().as_uri()
    cwl_obj: Workflow = load_document_by_uri(uri)
    assert cwl_obj.steps[0].run.inputs[0].id.endswith("#format_extract/target")


def test_workflow_step_process_scope_v1_1() -> None:
    """CWL v1.1 IDs under Workflow.steps[].run should be scoped in the "run" scope."""
    uri = (
        Path(get_data("testdata/workflow_input_format_expr_v1_1.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj: Workflow = load_document_by_uri(uri)
    assert cwl_obj.steps[0].run.inputs[0].id.endswith("#format_extract/run/target")


def test_workflow_step_process_scope_v1_2() -> None:
    """CWL v1.2 IDs under Workflow.steps[].run should be scoped in the "run" scope."""
    uri = (
        Path(get_data("testdata/workflow_input_format_expr_v1_2.cwl"))
        .resolve()
        .as_uri()
    )
    cwl_obj: Workflow = load_document_by_uri(uri)
    assert cwl_obj.steps[0].run.inputs[0].id.endswith("#format_extract/run/target")


# ===== cwl_utils-0.37/tests/test_utils.py =====
from urllib.parse import urlparse

from cwl_utils.utils import resolved_path


def test_resolved_path() -> None:
    base_url = urlparse(
        "schemas/bclconvert-run-configuration/2.0.0--4.0.3/bclconvert-run-configuration__2.0.0--4.0.3.yaml"
    )
    link = "../../../schemas/samplesheet/2.0.0--4.0.3/samplesheet__2.0.0--4.0.3.yaml#samplesheet"
    rpath = resolved_path(base_url, link)
    assert rpath == urlparse(
        "schemas/samplesheet/2.0.0--4.0.3/samplesheet__2.0.0--4.0.3.yaml#samplesheet"
    )


# ===== cwl_utils-0.37/tests/util.py =====
import os
import shutil
from pathlib import Path

import pytest
from pkg_resources import Requirement, ResolutionError, resource_filename


def get_data(filename: str) -> str:
    # Normalize the path for the current OS; otherwise joining paths can fail.
    filename = os.path.normpath(filename)
    filepath = None
    try:
        filepath = resource_filename(Requirement.parse("schema-salad"), filename)
    except ResolutionError:
        pass
    if not filepath or not os.path.isfile(filepath):
        filepath = os.path.join(os.path.dirname(__file__), os.pardir, filename)
    return str(Path(filepath).resolve())


needs_docker = pytest.mark.skipif(
    not bool(shutil.which("docker")),
    reason="Requires the docker executable on the system path.",
)
needs_singularity = pytest.mark.skipif(
    not bool(shutil.which("singularity")),
    reason="Requires the singularity executable on the system path.",
)
needs_podman = pytest.mark.skipif(
    not bool(shutil.which("podman")),
    reason="Requires the podman executable on the system path.",
)
needs_udocker = pytest.mark.skipif(
    not bool(shutil.which("udocker")),
    reason="Requires the udocker executable on the system path.",
)
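For readers without `cwl_utils` at hand, the behaviour that `test_resolved_path` in `tests/test_utils.py` exercises can be approximated with the standard library alone. This is a hedged sketch, not the library's implementation: `resolved_path_sketch` is a hypothetical name, and it only covers the relative-link-with-fragment case the test checks.

```python
import posixpath
from urllib.parse import ParseResult, urlparse


def resolved_path_sketch(base_url: ParseResult, link: str) -> ParseResult:
    """Resolve `link` against the directory of `base_url`, preserving any fragment."""
    # Split off a fragment such as "#samplesheet" before doing path arithmetic.
    link_path, sep, fragment = link.partition("#")
    # Join the link onto the base's directory and collapse the "../" segments.
    merged = posixpath.normpath(
        posixpath.join(posixpath.dirname(base_url.path), link_path)
    )
    return urlparse(merged + sep + fragment)
```

Run against the same inputs as the test above, the sketch resolves the three `../` segments of the link against the three-deep directory of the base path and re-attaches the `#samplesheet` fragment, matching the asserted result.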