strictyaml-1.6.1/.gitattributes

*.story linguist-language=YAML

strictyaml-1.6.1/.github/ISSUE_TEMPLATE.md

Feel free to raise an issue for any of the following:

* Support request
* Feature request
* Argument
* Bug

strictyaml-1.6.1/.github/PULL_REQUEST_TEMPLATE.md

Qualities of an ideal pull request:

* Code changes come with a new story, or an amendment to an existing story, in order to exercise the code.
* Commit messages adhere to the following pattern:

  TYPEOFCOMMIT : Quick explanation of what changed and why.

  Where TYPEOFCOMMIT is one of the following:

  * FEATURE - for new features and accompanying stories.
  * BUG - for bugfixes and accompanying stories (or amendments to an existing story).
  * DOCS - for changes to README, etc. and non-functional changes to stories.
  * MISC - Anything else.
Ideal code qualities:

* Loosely coupled
* DRY
* Clear and straightforward is preferable to clever
* Docstrings that explain why rather than what
* Clearly disambiguated variable/method/class names
* Passes flake8 linting with < 100 character lines

strictyaml-1.6.1/.github/workflows/pythonapp.yml

name: Regression test

on: [push, pull_request, workflow_dispatch]

jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
    - uses: actions/checkout@v1
    #- name: Setup tmate session
    #  uses: mxschmitt/action-tmate@v3
    - name: Install dependencies
      run: |
        #sudo apt-get update
        #sudo apt-get upgrade -y
        sudo apt-get install python3-virtualenv build-essential
        sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
        sudo pip install hitchkey
    - name: Regression checks
      run: |
        hk checks

strictyaml-1.6.1/.gitignore

build/
dist/
.cache
strictyaml.egg-info/
tests/.hitch
state/
__pycache__
*.pyc
*.kate-swp
hitch/gen
hitch/personalsettings.yml
temp/
.vscode/

strictyaml-1.6.1/CHANGELOG.md

# Changelog

### Latest

* BUG: Don't check indentation consistency when allowing flow.

### 1.6.0

No relevant code changes.

### 1.5.0

* FEATURE: HexInt scalar validator. Parses hex integers case insensitively while preserving style for "round tripping". (#157)

### 1.4.4

No relevant code changes.

### 1.4.3

* FEATURE : Regression testing on python 2.7.14. This is probably the last release which will support Python 2.
* FEATURE : Bundle the last properly working ruamel.yaml version in with strictyaml.

### 1.4.2

* BUGFIX : Ensure strictyaml continues to work -- updated temporary fix.
### 1.4.1

* BUGFIX : Ensure strictyaml continues to work.
* FEATURE : NullNone - parse 'null' as None like YAML 1.2 does.

### 1.3.2

* BUGFIX : Fix for #135 hat tip @eulores.

### 1.3.1

* BUGFIX : Fix for faulty Url validator.

### 1.3.0

* BUGFIX : Fixes to ensure python 2.7.14 compatibility. This is probably the last time I will do this.
* FEATURE : Use dict instead of OrderedDict for Python 3.7+.
* BUGFIX : Require Regex to be a full match.
* FEATURE : Support underscores in int and decimal.

### 1.2.0

* FEATURE : #129 : Optional keys in mappings and set value to None - with thanks to @wwoods.

### 1.1.1

No relevant code changes.

### 1.1.0

* FEATURE : Add support for NaN and infinity representations. (#122)

### 1.0.7

* BUGFIX : Fix for #116 - handle Enum or integers errors.
* BUGFIX : Fix accidental python 2 breakage.
* BUGFIX : Accidental misrecognition of boolean values as numbers - cause of #85.
* BUGFIX : Fix for #86 - handle changing multiline strings.
* BUGFIX : Handle deprecated collections import in the parser. (#82)

### 1.0.5

* BUGFIX : Fixed python 2 bug introduced when fixing #72.
* BUG : Issue #72. Now __setitem__ uses schema. Before this commit, schemas could be violated when assigning to Map/Sequence members. Now, modifications to the data must fit the data's schema. Furthermore, if the node on which __setitem__ is called has a compound schema, the selected validator within the compound schema may change correctly.

### 1.0.4

* FEATURE : Include tests / stories in package.
* BUG : Issue #72. Now __setitem__ uses schema. Before this commit, schemas could be violated when assigning to Map/Sequence members. Now, modifications to the data must fit the data's schema.

### 1.0.3

* BUGFIX : Fix for #64 - integer value on YAML object being interpreted as string.

### 1.0.2

* BUGFIX : #63 CommaSeparated should recognize a blank string as a valid empty list.
* BUGFIX : #61 fix for exception when parsing MapPattern with a non-string (float) as a key - hat tip @dgg5503.
* BUGFIX : Fix deprecation warning on Iterable from collections raised by #60 - hat tip @dthonon.
* FEATURE : Raise exception if more than one sequence validator is combined.

### 1.0.1

* FEATURE : #51 - OrValidation with multiple Map validators does not work effectively - raise exception if the user tries to use more than one.

### 1.0.0

No relevant code changes.

### 0.15.4

* BUGFIX : If revalidating a subschema with an Or validator and then changing the contents of the parsed data, a cryptic error was raised.

### 0.15.3

* BUGFIX : Fix for #46 - prevent Enum parsed values from becoming YAML objects.

### 0.15.2

* BUGFIX : Modifying YAML via __getitem__ where the YAML was validated using an OR operator failed - or validators didn't have a key_validator.

### 0.15.1

* BUGFIX : Make optional values parsed using Optional appear at the *end* of the ordered dict.
* BUGFIX : Prevent YAML objects from appearing in .data.
* BUGFIX : Fix for bug caused by revalidation with Optional default values.

### 0.15.0

* FEATURE : Raise exception when invalid default data is used in Optionals.
* FEATURE : Do not output defaults from Optional() to YAML, and ignore parsed defaults where the parsed data is equal to the default specified in Optional().
* BUGFIX : Include README.md.

### 0.14.0

* BUGFIX : Made comma separated serializer serialize correctly.
* BUGFIX : Fixed the way serialization of empty lists, dicts, None and datetimes is handled.
* BUGFIX : Float - fixed transformation into YAML.
* BUGFIX : Proper serialization for floats.
* BUGFIX : Proper serialization for regexes.
* BUGFIX : Comma separated serialization.
* BUGFIX : Enum - fixed transformation into YAML.
* BUGFIX : Stricter validation on the serialization of dicts.
* BUGFIX : Stricter validation on the serialization of sequences.
* BUGFIX : Streamlined and fixed the way that serialization of data to YAML is performed, so that both complex changes to YAML documents can be made and complex YAML documents can be assembled from scratch.
* BUGFIX : Stop dumping empty dicts / lists.
* BUGFIX : Fixed content encoding on README.

### 0.13.0

* FEATURE : Allow flow style to be parsed with a dirty_load method.

### 0.12.0

* FEATURE : Implement mapping abstract base class as per #39.

### 0.11.10

* BUGFIX : #15 - fix for regression caused by the release of ruamel.yaml 0.15.48.
* FEATURE : Handle building documents a bit more sensibly - build the document from strings, floats, dicts, lists, etc. only.

### 0.11.8

* FEATURE : Include LICENSE.txt in package - #34 - hat tip @mcs07.
* BUGFIX : Fix for #32 hat tip @NoahTheDuke - render given sections on variations in documentation.

### 0.11.7

* FEATURE : Replaced read of README.rst with read of README.md in setup.py.

### 0.11.6

* FEATURE : Replaced link in setup.py to new website.

### 0.11.5

* MINOR BUGFIX : #31 hat tip @karantan - validator was being set incorrectly on YAML objects after parse.

### 0.11.4

* BUGFIX : Fix bug where changing items in the YAML after revalidation failed.

### 0.11.3

* PATCH BUGFIX : Ensure that revalidation keeps the validator.

### 0.11.2

* BUGFIX : When using YAML objects as keys on other YAML objects, it should just use the value.

### 0.11.1

* BUGFIX PATCH : Fix the behavior of getting and deleting items on a YAML object where the keys have gone through a validator.
* PATCH BUGFIX : Allow setting of properties using __setitem__ when a key validator was used.
* PATCH REFACTOR : Sequence and Mapping validators now inherit from SeqValidator and MapValidator.
* PATCH REFACTOR : Added some docstrings and removed an unnecessary method.
* PATCH REFACTOR : Added docstrings and clarified variable names.
* PATCH REFACTOR : Moved YAMLPointer class into its own file.
* PATCH REFACTOR : Refactored the processing code.
* BUGFIX PATCH : Fix revalidation when using mappings that process keys.
* MINOR : FEATURE : Throw exception on inconsistent indents.
* PATCH : REFACTOR : Cleaned up stories.
* PATCH : REFACTOR : Changes to enable documentation generation.
* PATCH : REFACTOR : Upgraded hitchstory and strictyaml in hitch environment.

### 0.10.0

* MINOR : FEATURE : Optionally specify minimum_keys and maximum_keys in MapPattern.

### 0.9.0

* MINOR : FEATURE : Revalidation of code parsed with Any (to make it work with scalar values).
* PATCH : REFACTOR : Renamed defer validation -> revalidation.
* MINOR : FEATURE : Revalidation of code parsed with Any.
* PATCH : REFACTOR : Added more useful YAMLChunk methods.
* PATCH : REFACTOR : Fixed type assertions.
* PATCH : FEATURE : Added assert for or validators.
* PATCH : FEATURE : Clearer error messages for non-matching scalars.
* PATCH : REFACTOR : Made linter happy.
* MINOR : FEATURE : Made clearer exception messages and prevented invalid validators from being used in compound validators.
* PATCH : REFACTOR : Reworked the way YAML objects are created.
* PATCH : REFACTOR : Reworked the way mapping, sequence, 'or' and scalar validators work.
* PATCH : REFACTOR : Add doctests to the regression suite.
* PATCH : REFACTOR : Clean up Map() class so it interacts only via chunk.process().
* PATCH : REFACTOR : Cleaned up some stories.
* PATCH : REFACTOR : Make linter happy.
* PATCH : REFACTOR : Moved more validation functionality into the chunk object.
* PATCH : REFACTOR : Clean up validator code.
* PATCH : REFACTOR : Move more core code away from validators into YAMLChunk.

### 0.8.0

* MINOR : FEATURE : Enum with item validators.
* MINOR : FEATURE : Mappings with key validators.
* MINOR : FEATURE : Key validation using Map().
* PATCH : PERFORMANCE IMPROVEMENTS : Avoided the use of deepcopy() every time a validator is used.
* MINOR BUGFIX : Roundtripping failure fixes.
* PATCH : REFACTOR : Only use chunk object in scalar validators.
* PATCH : REFACTOR : Removed dependency on two variables being fed in to the validate_scalar method.

### 0.7.3

* PERFORMANCE : Improved performance of dumping by restricting the number of times deepcopy() is called.
* FEATURE : Performance improvements.
* FEATURE : Create dumpable YAML documents from simple dicts/lists of python data.
* FEATURE : Create documents from scratch.
* FEATURE : Create YAML documents directly.
* FEATURE : Merging of YAML documents.
* BUGFIX : Handle roundtripping changing indexed elements in sequences.
* BUGFIX : Fix for scalar value roundtripping.
* FEATURE : Set sequences using lists.
* FEATURE : Use 'yes' and 'no' as default boolean values.
* FEATURE : Set nested mappings that get correctly validated using the __setitem__ interface on YAML() objects.
* BUGFIX : Don't futz with the ordering of keys when setting vars via __setitem__.

### 0.7.2

* BUGFIX : Decimal point representer does not work with decimal.Decimal() objects, so using float instead.
* BUGFIX : In python 2.x, the __ne__ magic method is called on != rather than negating the result of the __eq__ method. This caused undesired behavior in python 2.x.
* FEATURE : Parser errors will have the correct label attached.
* FEATURE : .data now returns ordereddicts.
* BUG : Boolean 'y' and 'n' values ought to resolve to True and False.
* BUG : Map keys were accidentally optional by default.
* BUG : Disallowed exceptions were previously not labeled.
* BUG : Duplicate key exceptions were previously not labeled.
* BUG : Fix for roundtripping multiline strings in python 2.
* FEATURE : Label parsed YAML and have the label appear in exceptions.
* BUG : Fix the way that data is roundtrip loaded into the yaml object via __setitem__.
* BUG : Fixed roundtripping without using load().

### 0.7.0

* FEATURE : Modified the way that roundtripping works.

### 0.6.2

* FEATURE : Added version number accessible from __init__.py.
* BUG : Fixed the miscounting of list lines when getting start_line/end_line.
### 0.6.0

* BUG : Fixed import issues.
* FEATURE : Added email and url validators.
* FEATURE : Added Regex validator.

### 0.5.9

No relevant code changes.

### 0.5.8

* BUG : Fixed boolean error.

### 0.5.7

* BUG : Fixed MapPattern errors caused when editing items.

### 0.5.6

* FEATURE : Strict type checking when setting values.

### 0.5.5

* BUG : Fix roundtripping when using non-ascii characters.
* BUG : Fixed unicode roundtripping error.

### 0.5.4

* BUG : Fix mishandling of special characters in mappings.
* BUG : Handle invalid keys which are non-ascii characters gracefully.
* BUG : Fix for character encoding issues that occur when non-string scalars see invalid input.

### 0.5.3

* BUG : Third fix for character encoding issues. (#18: hat tip @jpscaletti)
* BUG : Second fix for character encoding issues. (#18: hat tip Juan Pablo-Scaletti)
* BUG : Fix for #18. (hat tip Juan Pablo-Scaletti)

### 0.5.2

* BUG : Fix for #17. (hat tip @kshpytsya)

### 0.5.1

* FEATURE : YAML object importable directly from module.

### 0.5.0

* BUG : Fixed some broken elements of roundtripping.
* BUG : .data representations did not give keys as strings.
* BUG : Fixed bool(x) overriding in python 2.
* FEATURE : Greater than / less than implemented, along with better post-representation assignment to mappings.
* FEATURE : Better repr().
* BUG : Keys are now represented as YAML() objects with full access to the location of the key in the YAML document.
* FEATURE : Added is_[scalar|mapping|sequence] methods.
* FEATURE : .values() on YAML object.
* FEATURE : Added .value property to YAML object.
* FEATURE : Implemented __contains__ so the 'in' operator can be used.
* FEATURE : Added .get(val) and .keys() so mapping YAML objects can be treated like dicts.
* FEATURE : Added .items() to YAML object.
* FEATURE : Handle string casting for integers.
* FEATURE : Raise TypeError when trying to cast a string to bool.
* FEATURE : Raise TypeError when attempting to cast YAML(bool) to string.
* FEATURE : Get lines for a specific YAML value, lines before it and lines after.
* FEATURE : Print line numbers of YAML elements.
* FEATURE : Any validator.
* FEATURE : Fixed length sequence validation.
* BUG : Fixed roundtripping.
* FEATURE : Roundtripped YAML with modifications.
* BUG : Fixed ruamel.yaml version at a higher level.
* FEATURE : Parse YAML into roundtrippable YAML object.

### 0.4.2

No relevant code changes.

### 0.4.1

* BUG : Fixed comma separated validator - now removes trailing spaces when parsing "a, b, c" so it does not parse as "a", " b", " c".

### 0.4.0

* FEATURE : Comma separated values.

### 0.3.9

* FEATURE : Added import for CommentedYAML type.

### 0.3.8

* FEATURE : Empty property validation.

### 0.3.7

* BUG : Fixed ruamel.yaml importing so that it works with earlier versions too.

### 0.3.6

* BUG : Fixed 13.1 ruamel.yaml issue.

### 0.3.5

* BUG : Stray print statement.

### 0.3.3

* BUG : Disallow flow style was failing with sequences.

### 0.3.2

No relevant code changes.

### 0.3.1

* BUG : Fixed mis-parsing caused by 'null' and non-strings in dictionary keys.

### 0.3

* FEATURE : Datetime parsing.
* BUG : When loading a blank string using 'Any' it returned None by accident.

### 0.2

* FEATURE : YAMLValidationError now inherits from and uses the same mechanism as MarkedYAMLError.

### 0.1.6

No relevant code changes.

### 0.1.5

* FEATURE : Duplicate keys disallowed.

### 0.1.4

* FEATURE : Made the default to parse all scalars to string (Any validator) and added a validator that returns CommentedSeq/CommentedMap.

### 0.1.3

* FEATURE : Clearer exception messages.
* BUG : Fixed bug in sequence validator.

### 0.1.2

* BUG : Single value YAML documents now allowed.
* BUG : Raise a TypeError if it isn't a string passed through.
strictyaml-1.6.1/LICENSE.txt000066400000000000000000000020411415074265300155410ustar00rootroot00000000000000Copyright (c) 2014 Colm O'Connor Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. strictyaml-1.6.1/MANIFEST.in000066400000000000000000000001701415074265300154550ustar00rootroot00000000000000include VERSION include LICENSE.txt include README.md recursive-include hitch * prune hitch/__pycache__ prune hitch/gen strictyaml-1.6.1/README.md000066400000000000000000000243561415074265300152120ustar00rootroot00000000000000# StrictYAML StrictYAML is a [type-safe](https://en.wikipedia.org/wiki/Type_safety) YAML parser that parses and validates a [restricted subset](https://hitchdev.com/strictyaml/features-removed) of the [YAML](https://hitchdev.com/strictyaml/what-is-yaml) specification. 
Priorities:

- Beautiful API
- Refusing to parse [the ugly, hard to read and insecure features of YAML](https://hitchdev.com/strictyaml/features-removed) like [the Norway problem](https://hitchdev.com/strictyaml/why/implicit-typing-removed).
- Strict validation of markup and straightforward type casting.
- Clear, readable exceptions with **code snippets** and **line numbers**.
- Acting as a near-drop-in replacement for pyyaml, ruamel.yaml or poyo.
- Ability to read in YAML, make changes and write it out again with comments preserved.
- [Not speed](https://hitchdev.com/strictyaml/why/speed-not-a-priority), currently.

Simple example:

```yaml
# All about the character
name: Ford Prefect
age: 42
possessions:
- Towel
```

```python
from strictyaml import load, Map, Str, Int, Seq, YAMLError
```

Default parse result:

```python
>>> load(yaml_snippet)
YAML({'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']})
```

All data is string, list or OrderedDict:

```python
>>> load(yaml_snippet).data
{'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']}
```

Quickstart with schema:

```python
from strictyaml import load, Map, Str, Int, Seq, YAMLError

schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())})
```

42 is now parsed as an integer:

```python
>>> person = load(yaml_snippet, schema)
>>> person.data
{'name': 'Ford Prefect', 'age': 42, 'possessions': ['Towel']}
```

A YAMLError will be raised if there are syntactic problems, violations of your schema or use of disallowed YAML features:

```yaml
# All about the character
name: Ford Prefect
age: 42
```

For example, a schema violation:

```python
try:
    person = load(yaml_snippet, schema)
except YAMLError as error:
    print(error)
```

```yaml
while parsing a mapping
  in "<unicode string>", line 1, column 1:
    # All about the character
     ^ (line: 1)
required key(s) 'possessions' not found
  in "<unicode string>", line 3, column 1:
    age: '42'
    ^ (line: 3)
```

If parsed correctly:

```python
from strictyaml import load, Map, Str, Int, Seq, YAMLError, as_document

schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())})
```

You can modify values and write out the YAML with comments preserved:

```python
person = load(yaml_snippet, schema)
person['age'] = 43
print(person.as_yaml())
```

```yaml
# All about the character
name: Ford Prefect
age: 43
possessions:
- Towel
```

As well as look up line numbers:

```python
>>> person = load(yaml_snippet, schema)
>>> person['possessions'][0].start_line
5
```

And construct YAML documents from dicts or lists:

```python
print(as_document({"x": 1}).as_yaml())
```

```yaml
x: 1
```

## Install

```sh
$ pip install strictyaml
```

## Why StrictYAML?

There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML. I've tried to make it the best one. Below is a series of documented justifications:

- [Why avoid using environment variables as configuration?](https://hitchdev.com/strictyaml/why-not/environment-variables-as-config)
- [Why not use HJSON?](https://hitchdev.com/strictyaml/why-not/hjson)
- [Why not HOCON?](https://hitchdev.com/strictyaml/why-not/hocon)
- [Why not use INI files?](https://hitchdev.com/strictyaml/why-not/ini)
- [Why not use JSON Schema for validation?](https://hitchdev.com/strictyaml/why-not/json-schema)
- [Why not JSON for simple configuration files?](https://hitchdev.com/strictyaml/why-not/json)
- [Why not JSON5?](https://hitchdev.com/strictyaml/why-not/json5)
- [Why not use the YAML 1.2 standard? - we don't need a new standard!](https://hitchdev.com/strictyaml/why-not/ordinary-yaml)
- [Why not use kwalify with standard YAML to validate my YAML?](https://hitchdev.com/strictyaml/why-not/pykwalify)
- [Why not use Python's schema library (or similar) for validation?](https://hitchdev.com/strictyaml/why-not/python-schema)
- [Why not use SDLang?](https://hitchdev.com/strictyaml/why-not/sdlang)
- [What is wrong with TOML?](https://hitchdev.com/strictyaml/why-not/toml)
- [Why shouldn't I just use Python code for configuration?](https://hitchdev.com/strictyaml/why-not/turing-complete-code)
- [Why not use XML for configuration or DSLs?](https://hitchdev.com/strictyaml/why-not/xml)

## Using StrictYAML

How to:

- [Build a YAML document from scratch in code](https://hitchdev.com/strictyaml/using/alpha/howto/build-yaml-document)
- [Either/or schema validation of different, equally valid kinds of YAML](https://hitchdev.com/strictyaml/using/alpha/howto/either-or-validation)
- [Labeling exceptions](https://hitchdev.com/strictyaml/using/alpha/howto/label-exceptions)
- [Merge YAML documents](https://hitchdev.com/strictyaml/using/alpha/howto/merge-yaml-documents)
- [Revalidate an already validated document](https://hitchdev.com/strictyaml/using/alpha/howto/revalidation)
- [Reading in YAML, editing it and writing it back out](https://hitchdev.com/strictyaml/using/alpha/howto/roundtripping)
- [Get line numbers of YAML elements](https://hitchdev.com/strictyaml/using/alpha/howto/what-line)
- [Parsing YAML without a schema](https://hitchdev.com/strictyaml/using/alpha/howto/without-a-schema)

Compound validators:

- [Fixed length sequences (FixedSeq)](https://hitchdev.com/strictyaml/using/alpha/compound/fixed-length-sequences)
- [Mappings combining defined and undefined keys (MapCombined)](https://hitchdev.com/strictyaml/using/alpha/compound/map-combined)
- [Mappings with arbitrary key names (MapPattern)](https://hitchdev.com/strictyaml/using/alpha/compound/map-pattern)
- [Mapping with defined keys and a custom key validator (Map)](https://hitchdev.com/strictyaml/using/alpha/compound/mapping-with-slug-keys)
- [Using a YAML object of a parsed mapping](https://hitchdev.com/strictyaml/using/alpha/compound/mapping-yaml-object)
- [Mappings with defined keys (Map)](https://hitchdev.com/strictyaml/using/alpha/compound/mapping)
- [Optional keys with defaults (Map/Optional)](https://hitchdev.com/strictyaml/using/alpha/compound/optional-keys-with-defaults)
- [Validating optional keys in mappings (Map)](https://hitchdev.com/strictyaml/using/alpha/compound/optional-keys)
- [Sequences of unique items (UniqueSeq)](https://hitchdev.com/strictyaml/using/alpha/compound/sequences-of-unique-items)
- [Sequence/list validator (Seq)](https://hitchdev.com/strictyaml/using/alpha/compound/sequences)
- [Updating document with a schema](https://hitchdev.com/strictyaml/using/alpha/compound/update)

Scalar validators:

- [Boolean (Bool)](https://hitchdev.com/strictyaml/using/alpha/scalar/boolean)
- [Parsing comma separated items (CommaSeparated)](https://hitchdev.com/strictyaml/using/alpha/scalar/comma-separated)
- [Datetimes (Datetime)](https://hitchdev.com/strictyaml/using/alpha/scalar/datetime)
- [Decimal numbers (Decimal)](https://hitchdev.com/strictyaml/using/alpha/scalar/decimal)
- [Email and URL validators](https://hitchdev.com/strictyaml/using/alpha/scalar/email-and-url)
- [Empty key validation](https://hitchdev.com/strictyaml/using/alpha/scalar/empty)
- [Enumerated scalars (Enum)](https://hitchdev.com/strictyaml/using/alpha/scalar/enum)
- [Floating point numbers (Float)](https://hitchdev.com/strictyaml/using/alpha/scalar/float)
- [Hexadecimal Integers (HexInt)](https://hitchdev.com/strictyaml/using/alpha/scalar/hexadecimal-integer)
- [Integers (Int)](https://hitchdev.com/strictyaml/using/alpha/scalar/integer)
- [Validating strings with regexes (Regex)](https://hitchdev.com/strictyaml/using/alpha/scalar/regular-expressions)
- [Parsing strings (Str)](https://hitchdev.com/strictyaml/using/alpha/scalar/string)

Restrictions:

- [Disallowed YAML](https://hitchdev.com/strictyaml/using/alpha/restrictions/disallowed-yaml)
- [Duplicate keys](https://hitchdev.com/strictyaml/using/alpha/restrictions/duplicate-keys)
- [Dirty load](https://hitchdev.com/strictyaml/using/alpha/restrictions/loading-dirty-yaml)

## Design justifications

There are some design decisions in StrictYAML which are controversial and/or not obvious. Those are documented here:

- [What is wrong with duplicate keys?](https://hitchdev.com/strictyaml/why/duplicate-keys-disallowed)
- [What is wrong with explicit tags?](https://hitchdev.com/strictyaml/why/explicit-tags-removed)
- [What is wrong with flow-style YAML?](https://hitchdev.com/strictyaml/why/flow-style-removed)
- [The Norway Problem - why StrictYAML refuses to do implicit typing and so should you](https://hitchdev.com/strictyaml/why/implicit-typing-removed)
- [What is wrong with node anchors and references?](https://hitchdev.com/strictyaml/why/node-anchors-and-references-removed)
- [Why does StrictYAML not parse direct representations of Python objects?](https://hitchdev.com/strictyaml/why/not-parse-direct-representations-of-python-objects)
- [Why does StrictYAML only parse from strings and not files?](https://hitchdev.com/strictyaml/why/only-parse-strings-not-files)
- [Why is parsing speed not a high priority for StrictYAML?](https://hitchdev.com/strictyaml/why/speed-not-a-priority)
- [What is syntax typing?](https://hitchdev.com/strictyaml/why/syntax-typing-bad)
- [Why does StrictYAML make you define a schema in Python - a Turing-complete language?](https://hitchdev.com/strictyaml/why/turing-complete-schema)

## Star Contributors

- @wwoods
- @chrisburr
- @jnichols0

## Other Contributors

- @eulores
- @WaltWoods
- @ChristopherGS
- @gvx
- @AlexandreDecan
- @lots0logs
- @tobbez
- @jaredsampson
- @BoboTIG

StrictYAML also includes code from [ruamel.yaml](https://yaml.readthedocs.io/en/latest/), Copyright Anthon van der Neut.

## Contributing

- Before writing any code, please read the tutorial on [contributing to hitchdev libraries](https://hitchdev.com/approach/contributing-to-hitch-libraries/).
- Before writing any code, if you're proposing a new feature, please raise it on github. If it's an existing feature / bug, please comment and briefly describe how you're going to implement it.
- All code needs to come accompanied with a story that exercises it or a modification to an existing story. This is used both to test the code and build the documentation.

strictyaml-1.6.1/VERSION

1.6.1
* FEATURE : Create documents from scratch. * FEATURE : Create YAML documents directly. * FEATURE : Merging of YAML documents. * BUGFIX : Handle roundtripping changing indexed elements in sequences. * BUGFIX : Fix for scalar value roundtripping. * FEATURE : Set sequences using lists. * FEATURE : Use 'yes' and 'no' as default boolean values. * FEATURE : Set nested mappings that get correctly validated using __setitem__ interface on YAML() objects. * BUGFIX : Don't futz with the ordering of keys when setting vars via __setitem__. ### 0.7.2 * BUGFIX : Decimal point representer does not work with decimal.Decimal() object, so using float instead. * BUGFIX : In python 2.x, the __ne__ magic method is called on != rather than negating the result of the __eq__ method. This caused undesired behavior in python 2.x. * FEATURE : Parser errors will have the correct label attached. * FEATURE : .data now returns ordereddicts. * BUG : Boolean 'y' and 'n' values ought to resolve to True and False. * BUG : Map keys were accidentally optional by default. * BUG : Disallowed exceptions were previously not labeled. * BUG : Duplicate key exceptions were previously not labeled. * BUG : Fix for roundtripping multiline strings in python 2. * FEATURE : Label parsed YAML and have the label appear in exceptions. * BUG : Fix the way that data is roundtrip loaded into the yaml object via __setitem__. * BUG : Fixed roundtripping without using load() ### 0.7.0 * FEATURE : Modified the way that roundtripping works. ### 0.6.2 * FEATURE : Added version number accessible from __init__.py * BUG : Fixed the miscounting of list lines when getting start_line/end_line. ### 0.6.0 * BUG : Fixed import issues. * FEATURE : Added email and url validators. * FEATURE : Added Regex validator. ### 0.5.9 No relevant code changes. ### 0.5.8 * BUG : Fixed boolean error. ### 0.5.7 * BUG : Fixed MapPattern errors caused when editing items. ### 0.5.6 * FEATURE : Strict type checking when setting values. 
### 0.5.5

* BUG : Fix roundtripping when using non-ascii characters.
* BUG : Fixed unicode roundtripping error.

### 0.5.4

* BUG : Fix mishandling of special characters in mappings.
* BUG : Handle invalid keys which are non-ascii characters gracefully.
* BUG : Fix for character encoding issues that occur when non-string scalars see invalid input.

### 0.5.3

* BUG : Third fix for character encoding issues (#18: hat tip @jpscaletti)
* BUG : Second fix for character encoding issues (#18: hat tip Juan Pablo-Scaletti)
* BUG : Fix for #18 (hat tip Juan Pablo-Scaletti)

### 0.5.2

* BUG : Fix for #17 (hat tip @kshpytsya)

### 0.5.1

* FEATURE : YAML object importable directly from module.

### 0.5.0

* BUG : Fixed some broken elements of roundtripping.
* BUG : .data representations did not give keys as strings.
* BUG : Fixed bool(x) overriding in python 2.
* FEATURE : Greater than / less than implemented along with better post-representation assignment to mappings.
* FEATURE : Better repr()
* BUG : Keys are now represented as YAML() objects with full access to the location of the key in the YAML document.
* FEATURE : Added is_[scalar|mapping|sequence] methods.
* FEATURE : .values() on YAML object.
* FEATURE : Added .value property to YAML object.
* FEATURE : Implemented __contains__ so the 'in' method can be used.
* FEATURE : Added .get(val) and .keys() so mapping YAML objects can be treated like dicts.
* FEATURE : Added .items() to YAML object.
* FEATURE : Handle string casting for integers.
* FEATURE : Raise TypeError when trying to cast a string to bool.
* FEATURE : Raise TypeError when attempting to cast YAML(bool) to string.
* FEATURE : Get lines for a specific YAML value, lines before it and lines after.
* FEATURE : Print line numbers of YAML elements.
* FEATURE : Any validator.
* FEATURE : Fixed length sequence validation.
* BUG : Fixed roundtripping.
* FEATURE : Roundtripped YAML with modifications.
* BUG : Fixed ruamel.yaml version at a higher level.
* FEATURE : Parse YAML into roundtrippable YAML object. ### 0.4.2 No relevant code changes. ### 0.4.1 * BUG : Fixed comma separated validator - now removes trailing spaces when parsing "a, b, c" so it does not parse as "a", " b", " c". ### 0.4.0 * FEATURE: Comma separated values. ### 0.3.9 * FEATURE : Added import for CommentedYAML type. ### 0.3.8 * FEATURE : Empty property validation. ### 0.3.7 * BUG : Fixed ruamel.yaml importing so that it works with earlier versions too. ### 0.3.6 * BUG : Fixed 13.1 ruamel.yaml issue. ### 0.3.5 * BUG : Stray print statement. ### 0.3.3 * BUG : Disallow flow style was failing with sequences. ### 0.3.2 No relevant code changes. ### 0.3.1 * BUG : Fixed mis-parsing caused by 'null' and non-strings in dictionary keys. ### 0.3 * FEATURE : Datetime parsing. * BUG : When loading a blank string using 'Any' it returned None by accident. ### 0.2 * FEATURE : YAMLValidationError now inherits from and uses the same mechanism as MarkedYAMLError. ### 0.1.6 No relevant code changes. ### 0.1.5 * FEATURE : Duplicate keys disallowed. ### 0.1.4 * FEATURE : Made the default to parse all scalars to string (Any validator) and added validator that returns CommentedSeq/CommentedMap. ### 0.1.3 * FEATURE : Clearer exception messages. * BUG : Fixed bug in sequence validator. ### 0.1.2 * BUG : Single value YAML documents now allowed. * BUG : Raise type error if it isn't a string passed through. 
strictyaml-1.6.1/docs/public/comparison/000077500000000000000000000000001415074265300203015ustar00rootroot00000000000000strictyaml-1.6.1/docs/public/comparison/table.yml000066400000000000000000000022131415074265300221110ustar00rootroot00000000000000json: syntax to data ratio: % length of JSON: 100 syntax typing: yes well defined: yes comments: no hierarchical: yes duplicate keys: no multiline strings: no meaningful whitespace: no complex features: no strictyaml: syntax to data ratio: % length of JSON: syntax typing: no well defined: yes comments: yes syntactic noise ratio: hierarchical: yes duplicate keys: no multiline strings: yes meaningful whitespace: yes complex features: no yaml: syntax to data ratio: % length of JSON: syntax typing: yes well defined: yes comments: yes hierarchical: yes duplicate keys: yes multiline strings: yes meaningful whitespace: no complex features: yes ini: syntactic noise ratio: % length of JSON: syntax typing: no well defined: no comments: yes hierarchical: no duplicate keys: yes multiline strings: no meaningful whitespace: no complex features: no toml: syntax to data ratio: % length of JSON: syntax typing: no well defined: no comments: yes hierarchical: yes duplicate keys: no multiline strings: yes meaningful whitespace: no complex features: yes json5: strictyaml-1.6.1/docs/public/features-removed.md000066400000000000000000000042011415074265300217230ustar00rootroot00000000000000--- title: What YAML features does StrictYAML remove? --- StrictYAML restricts you from parsing a number of things which the YAML specification says should be parsed. An issue has been [raised](https://github.com/yaml/YAML2/issues/8) by [David Seaward](https://inkwell.za.net/) about this critique on the official YAML repository. 
This document lists those features:

## Implicit Typing ([Why?](../why/implicit-typing-removed))

```yaml
x: yes
y: null
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {"x": True, "y": None}
```

Example StrictYAML without schema:

```python
load(yaml) == {"x": "yes", "y": "null"}
```

Example StrictYAML with schema:

```python
load(yaml, Map({"x": Bool(), "y": Str()})) == {"x": True, "y": "null"}
```

## Direct representations of objects ([Why?](../why/not-parse-direct-representations-of-python-objects))

```yaml
--- !python/hash:UnsafeUserObject
email: evilhacker@hacker.com
password: passwordtoset
type: admin
```

Example pyyaml/ruamel:

```python
load(yaml) == UnsafeUserObject(email='evilhacker@hacker.com', password='passwordtoset', type='admin')
```

Example StrictYAML:

```python
raises TagTokenDisallowed
```

## Duplicate Keys Disallowed ([Why?](../why/duplicate-keys-disallowed))

```yaml
x: 1
x: 2
```

Example pyyaml/poyo:

```python
load(yaml) == {'x': 2}
```

Example StrictYAML:

```python
raises DuplicateKeysDisallowed
```

## Explicit tags ([Why?](../why/explicit-tags-removed))

```yaml
x: !!int 5
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {"x": 5}
```

Example StrictYAML:

```python
raises TagTokenDisallowed
```

## Node anchors and refs ([Why?](../why/node-anchors-and-references-removed))

```yaml
x: &id001
  a: 1
y: *id001
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {'x': {'a': 1}, 'y': {'a': 1}}
```

Example StrictYAML:

```python
raises NodeAnchorDisallowed
```

To parse the above YAML *literally* in StrictYAML do:

```yaml
x: '&id001'
a: 1
y: '*id001'
```

## Flow style ([Why?](../why/flow-style-removed))

```yaml
x: 1
b: {c: 3, d: 4}
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {'x': 1, 'b': {'c': 3, 'd': 4}}
```

Example StrictYAML:

```python
raises FlowStyleDisallowed
```

strictyaml-1.6.1/docs/public/index.md

---
title: StrictYAML
---

{{< github-stars user="crdoconnor" project="strictyaml" >}}
StrictYAML is a [type-safe](https://en.wikipedia.org/wiki/Type_safety) YAML parser that parses and validates a [restricted subset](features-removed) of the [YAML](what-is-yaml) specification. Priorities: - Beautiful API - Refusing to parse [the ugly, hard to read and insecure features of YAML](features-removed) like [the Norway problem](why/implicit-typing-removed). - Strict validation of markup and straightforward type casting. - Clear, readable exceptions with **code snippets** and **line numbers**. - Acting as a near-drop in replacement for pyyaml, ruamel.yaml or poyo. - Ability to read in YAML, make changes and write it out again with comments preserved. - [Not speed](why/speed-not-a-priority), currently. Simple example: ```yaml # All about the character name: Ford Prefect age: 42 possessions: - Towel ``` ```python from strictyaml import load, Map, Str, Int, Seq, YAMLError ``` Default parse result: ```python >>> load(yaml_snippet) YAML({'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']}) ``` All data is string, list or OrderedDict: ```python >>> load(yaml_snippet).data {'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']} ``` Quickstart with schema: ```python from strictyaml import load, Map, Str, Int, Seq, YAMLError schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())}) ``` 42 is now parsed as an integer: ```python >>> person = load(yaml_snippet, schema) >>> person.data {'name': 'Ford Prefect', 'age': 42, 'possessions': ['Towel']} ``` A YAMLError will be raised if there are syntactic problems, violations of your schema or use of disallowed YAML features: ```yaml # All about the character name: Ford Prefect age: 42 ``` For example, a schema violation: ```python try: person = load(yaml_snippet, schema) except YAMLError as error: print(error) ``` ```yaml while parsing a mapping in "", line 1, column 1: # All about the character ^ (line: 1) required key(s) 'possessions' not found in "", line 3, column 1: age: '42' ^ (line: 3) 
``` If parsed correctly: ```python from strictyaml import load, Map, Str, Int, Seq, YAMLError, as_document schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())}) ``` You can modify values and write out the YAML with comments preserved: ```python person = load(yaml_snippet, schema) person['age'] = 43 print(person.as_yaml()) ``` ```yaml # All about the character name: Ford Prefect age: 43 possessions: - Towel ``` As well as look up line numbers: ```python >>> person = load(yaml_snippet, schema) >>> person['possessions'][0].start_line 5 ``` And construct YAML documents from dicts or lists: ```python print(as_document({"x": 1}).as_yaml()) ``` ```yaml x: 1 ``` ## Install ```sh $ pip install strictyaml ``` ## Why StrictYAML? There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML. I've tried to make it the best one. Below is a series of documented justifications: - [Why avoid using environment variables as configuration?](why-not/environment-variables-as-config) - [Why not use HJSON?](why-not/hjson) - [Why not HOCON?](why-not/hocon) - [Why not use INI files?](why-not/ini) - [Why not use JSON Schema for validation?](why-not/json-schema) - [Why not JSON for simple configuration files?](why-not/json) - [Why not JSON5?](why-not/json5) - [Why not use the YAML 1.2 standard? 
- we don't need a new standard!](why-not/ordinary-yaml) - [Why not use kwalify with standard YAML to validate my YAML?](why-not/pykwalify) - [Why not use Python's schema library (or similar) for validation?](why-not/python-schema) - [Why not use SDLang?](why-not/sdlang) - [What is wrong with TOML?](why-not/toml) - [Why shouldn't I just use Python code for configuration?](why-not/turing-complete-code) - [Why not use XML for configuration or DSLs?](why-not/xml) ## Using StrictYAML How to: - [Build a YAML document from scratch in code](using/alpha/howto/build-yaml-document) - [Either/or schema validation of different, equally valid different kinds of YAML](using/alpha/howto/either-or-validation) - [Labeling exceptions](using/alpha/howto/label-exceptions) - [Merge YAML documents](using/alpha/howto/merge-yaml-documents) - [Revalidate an already validated document](using/alpha/howto/revalidation) - [Reading in YAML, editing it and writing it back out](using/alpha/howto/roundtripping) - [Get line numbers of YAML elements](using/alpha/howto/what-line) - [Parsing YAML without a schema](using/alpha/howto/without-a-schema) Compound validators: - [Fixed length sequences (FixedSeq)](using/alpha/compound/fixed-length-sequences) - [Mappings combining defined and undefined keys (MapCombined)](using/alpha/compound/map-combined) - [Mappings with arbitrary key names (MapPattern)](using/alpha/compound/map-pattern) - [Mapping with defined keys and a custom key validator (Map)](using/alpha/compound/mapping-with-slug-keys) - [Using a YAML object of a parsed mapping](using/alpha/compound/mapping-yaml-object) - [Mappings with defined keys (Map)](using/alpha/compound/mapping) - [Optional keys with defaults (Map/Optional)](using/alpha/compound/optional-keys-with-defaults) - [Validating optional keys in mappings (Map)](using/alpha/compound/optional-keys) - [Sequences of unique items (UniqueSeq)](using/alpha/compound/sequences-of-unique-items) - [Sequence/list validator 
(Seq)](using/alpha/compound/sequences) - [Updating document with a schema](using/alpha/compound/update) Scalar validators: - [Boolean (Bool)](using/alpha/scalar/boolean) - [Parsing comma separated items (CommaSeparated)](using/alpha/scalar/comma-separated) - [Datetimes (Datetime)](using/alpha/scalar/datetime) - [Decimal numbers (Decimal)](using/alpha/scalar/decimal) - [Email and URL validators](using/alpha/scalar/email-and-url) - [Empty key validation](using/alpha/scalar/empty) - [Enumerated scalars (Enum)](using/alpha/scalar/enum) - [Floating point numbers (Float)](using/alpha/scalar/float) - [Hexadecimal Integers (HexInt)](using/alpha/scalar/hexadecimal-integer) - [Integers (Int)](using/alpha/scalar/integer) - [Validating strings with regexes (Regex)](using/alpha/scalar/regular-expressions) - [Parsing strings (Str)](using/alpha/scalar/string) Restrictions: - [Disallowed YAML](using/alpha/restrictions/disallowed-yaml) - [Duplicate keys](using/alpha/restrictions/duplicate-keys) - [Dirty load](using/alpha/restrictions/loading-dirty-yaml) ## Design justifications There are some design decisions in StrictYAML which are controversial and/or not obvious. 
Those are documented here: - [What is wrong with duplicate keys?](why/duplicate-keys-disallowed) - [What is wrong with explicit tags?](why/explicit-tags-removed) - [What is wrong with flow-style YAML?](why/flow-style-removed) - [The Norway Problem - why StrictYAML refuses to do implicit typing and so should you](why/implicit-typing-removed) - [What is wrong with node anchors and references?](why/node-anchors-and-references-removed) - [Why does StrictYAML not parse direct representations of Python objects?](why/not-parse-direct-representations-of-python-objects) - [Why does StrictYAML only parse from strings and not files?](why/only-parse-strings-not-files) - [Why is parsing speed not a high priority for StrictYAML?](why/speed-not-a-priority) - [What is syntax typing?](why/syntax-typing-bad) - [Why does StrictYAML make you define a schema in Python - a Turing-complete language?](why/turing-complete-schema) ## Star Contributors - @wwoods - @chrisburr - @jnichols0 ## Other Contributors - @eulores - @WaltWoods - @ChristopherGS - @gvx - @AlexandreDecan - @lots0logs - @tobbez - @jaredsampson - @BoboTIG StrictYAML also includes code from [ruamel.yaml](https://yaml.readthedocs.io/en/latest/), Copyright Anthon van der Neut. ## Contributing - Before writing any code, please read the tutorial on [contributing to hitchdev libraries](https://hitchdev.com/approach/contributing-to-hitch-libraries/). - Before writing any code, if you're proposing a new feature, please raise it on github. If it's an existing feature / bug, please comment and briefly describe how you're going to implement it. - All code needs to come accompanied with a story that exercises it or a modification to an existing story. 
This is used both to test the code and build the documentation.strictyaml-1.6.1/docs/public/redirects.yml000066400000000000000000000003751415074265300206430ustar00rootroot00000000000000/why-not/syntax-typing: /why/syntax-typing-bad /new-faq: / /using/alpha/compound/either-or-validation: /using/alpha/howto/either-or-validation/ /why/care-about-yaml: /why /why/binary-data-removed: /why/not-parse-direct-representations-of-python-objects strictyaml-1.6.1/docs/public/using/000077500000000000000000000000001415074265300172545ustar00rootroot00000000000000strictyaml-1.6.1/docs/public/using/alpha/000077500000000000000000000000001415074265300203415ustar00rootroot00000000000000strictyaml-1.6.1/docs/public/using/alpha/compound/000077500000000000000000000000001415074265300221655ustar00rootroot00000000000000strictyaml-1.6.1/docs/public/using/alpha/compound/fixed-length-sequences.md000066400000000000000000000032011415074265300270520ustar00rootroot00000000000000--- title: Fixed length sequences (FixedSeq) type: using --- Sequences of fixed length can be validated with a series of different (or the same) types. 
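Conceptually, the i-th validator in the list is applied to the i-th element of the sequence, and the lengths must match. A minimal plain-Python sketch of that idea (the helper name `validate_fixed_seq` is illustrative, not part of strictyaml's API, and plain callables stand in for validators):

```python
def validate_fixed_seq(items, validators):
    """Sketch of FixedSeq-style semantics: one validator per position."""
    if len(items) != len(validators):
        raise ValueError(
            "expected a sequence of %d elements, found %d"
            % (len(validators), len(items))
        )
    # Apply the i-th validator (here: any callable) to the i-th element.
    return [validate(item) for validate, item in zip(validators, items)]
```

For example, `validate_fixed_seq(["1", "2.5"], [int, float])` succeeds, while a two-element input against three validators fails, mirroring the error cases shown below.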
```python from strictyaml import FixedSeq, Str, Map, Int, Float, YAMLValidationError, load from ensure import Ensure schema = FixedSeq([Int(), Map({"x": Str()}), Float()]) ``` Equivalent list: ```yaml - 1 - x: 5 - 2.5 ``` ```python Ensure(load(yaml_snippet, schema)).equals([1, {"x": "5"}, 2.5, ]) ``` Invalid list 1: ```yaml a: 1 b: 2 c: 3 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting a sequence of 3 elements in "", line 1, column 1: a: '1' ^ (line: 1) found a mapping in "", line 3, column 1: c: '3' ^ (line: 3) ``` Invalid list 2: ```yaml - 2 - a - a: - 1 - 2 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting a mapping found arbitrary text in "", line 2, column 1: - a ^ (line: 2) ``` Invalid list 3: ```yaml - 1 - a ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting a sequence of 3 elements in "", line 1, column 1: - '1' ^ (line: 1) found a sequence of 2 elements in "", line 2, column 1: - a ^ (line: 2) ``` {{< note title="Executable specification" >}} Page automatically generated from fixed-sequence.story. 
{{< /note >}}strictyaml-1.6.1/docs/public/using/alpha/compound/index.md000066400000000000000000000013511415074265300236160ustar00rootroot00000000000000--- title: Compound StrictYAML Validators --- - [Fixed length sequences (FixedSeq)](fixed-length-sequences) - [Mappings combining defined and undefined keys (MapCombined)](map-combined) - [Mappings with arbitrary key names (MapPattern)](map-pattern) - [Mapping with defined keys and a custom key validator (Map)](mapping-with-slug-keys) - [Using a YAML object of a parsed mapping](mapping-yaml-object) - [Mappings with defined keys (Map)](mapping) - [Optional keys with defaults (Map/Optional)](optional-keys-with-defaults) - [Validating optional keys in mappings (Map)](optional-keys) - [Sequences of unique items (UniqueSeq)](sequences-of-unique-items) - [Sequence/list validator (Seq)](sequences) - [Updating document with a schema](update) strictyaml-1.6.1/docs/public/using/alpha/compound/map-combined.md000066400000000000000000000046731415074265300250540ustar00rootroot00000000000000--- title: Mappings combining defined and undefined keys (MapCombined) type: using --- {{< warning title="Experimental" >}} This feature is in alpha. The API may change on a minor version increment. {{< /warning >}} When you wish to support arbitrary optional keys in some mappings (i.e. to specify some required keys in the schema, but allow any additional ones on top of that), you use a MapCombined. 
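The idea can be sketched in plain Python: keys named in the schema use their own validators, and any other key falls back to the default validator. This sketch ignores `Optional` keys and uses plain callables as validators; `validate_map_combined` is a hypothetical helper, not strictyaml API:

```python
def validate_map_combined(data, required, default_validator):
    """Sketch of MapCombined-style semantics: named keys get specific
    validators, everything else falls back to the default validator."""
    missing = set(required) - set(data)
    if missing:
        raise ValueError("required key(s) %r not found" % sorted(missing))
    return {
        key: required.get(key, default_validator)(value)
        for key, value in data.items()
    }
```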
See https://github.com/crdoconnor/strictyaml/issues/148#issuecomment-861007657 ```python from strictyaml import Any, Int, MapCombined, Optional, Str, load from ensure import Ensure schema = MapCombined( { "required": Str(), Optional("foo"): Int(), }, Str(), Any(), ) ``` Optional is present: ```yaml required: Hello World foo: 42 bar: 42 ``` ```python Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "foo": 42, "bar": "42", } ) ``` Optional is absent: ```yaml required: Hello World bar: 42 ``` ```python Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "bar": "42", } ) ``` Multiple undefined: ```yaml required: Hello World bar: 42 baz: forty two ``` ```python Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "bar": "42", "baz": "forty two", } ) ``` Required is absent: ```yaml bar: 42 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: while parsing a mapping required key(s) 'required' not found in "", line 1, column 1: bar: '42' ^ (line: 1) ``` Undefined of invalid type: ```yaml required: Hello World bar: forty two ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting an integer found arbitrary text in "", line 2, column 1: bar: forty two ^ (line: 2) ``` Invalid key type: ```yaml 1: Hello World not_an_integer: 42 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting an integer found arbitrary text in "", line 2, column 1: not_an_integer: '42' ^ (line: 2) ``` {{< note title="Executable specification" >}} Page automatically generated from map-combined.story. 
{{< /note >}}strictyaml-1.6.1/docs/public/using/alpha/compound/map-pattern.md000066400000000000000000000055231415074265300247440ustar00rootroot00000000000000--- title: Mappings with arbitrary key names (MapPattern) type: using --- When you do not wish to let the user define the key names in a mapping and and only specify what type the keys are, use a MapPattern. When you wish to specify the exact key name, use the 'Map' validator instead. ```python from strictyaml import MapPattern, Int, Float, Str, Any, Seq, YAMLValidationError, load from ensure import Ensure schema = MapPattern(Str(), Int()) ``` Equivalence 1: ```yaml â: 1 b: 2 ``` ```python Ensure(load(yaml_snippet, schema)).equals({u"â": 1, "b": 2}) ``` Equivalence 2: ```yaml a: 1 c: 3 ``` ```python Ensure(load(yaml_snippet, schema)).equals({"a": 1, "c": 3}) ``` Equivalence 3: ```yaml a: 1 ``` ```python Ensure(load(yaml_snippet, schema)).equals({"a": 1, }) ``` With floats and ints: ```yaml 10.25: 23 20.33: 76 ``` ```python Ensure(load(yaml_snippet, MapPattern(Float(), Int())).data).equals({10.25: 23, 20.33: 76}) ``` With Int and List: ```yaml 1: - ABC 2: - DEF ``` ```python Ensure(load(yaml_snippet, MapPattern(Int(), Seq(Str()))).data).equals({1: ["ABC"], 2: ["DEF"]}) ``` Invalid 1: ```yaml b: b ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting an integer found arbitrary text in "", line 1, column 1: b: b ^ (line: 1) ``` Invalid 2: ```yaml a: a b: 2 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting an integer found arbitrary text in "", line 1, column 1: a: a ^ (line: 1) ``` More than the maximum number of keys: ```yaml â: 1 b: 2 ``` ```python load(yaml_snippet, MapPattern(Str(), Int(), maximum_keys=1)) ``` ```python strictyaml.exceptions.YAMLValidationError: while parsing a mapping in "", line 1, column 1: "\xE2": '1' ^ (line: 1) expected a maximum of 1 key, found 2. 
in "", line 2, column 1: b: '2' ^ (line: 2) ``` Fewer than the minimum number of keys: ```yaml â: 1 ``` ```python load(yaml_snippet, MapPattern(Str(), Int(), minimum_keys=2)) ``` ```python strictyaml.exceptions.YAMLValidationError: while parsing a mapping expected a minimum of 2 keys, found 1. in "", line 1, column 1: "\xE2": '1' ^ (line: 1) ``` Invalid with non-ascii: ```yaml a: 1 b: yâs c: 3 ``` ```python load(yaml_snippet, schema) ``` ```python strictyaml.exceptions.YAMLValidationError: when expecting an integer found arbitrary text in "", line 2, column 1: b: "y\xE2s" ^ (line: 2) ``` {{< note title="Executable specification" >}} Page automatically generated from mappattern.story. {{< /note >}}strictyaml-1.6.1/docs/public/using/alpha/compound/mapping-with-slug-keys.md000066400000000000000000000026151415074265300270400ustar00rootroot00000000000000--- title: Mapping with defined keys and a custom key validator (Map) type: using --- {{< warning title="Experimental" >}} This feature is in alpha. The API may change on a minor version increment. {{< /warning >}} A typical mapping except that the key values are determined by the value provided by the validator. 
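The example below depends on the third-party `python-slugify` package. If it is not installed, a rough stdlib-only equivalent of its `slugify` function looks like this (a simplified sketch, not the real package's full behaviour; the name `simple_slugify` is illustrative):

```python
import re
import unicodedata

def simple_slugify(value):
    """Reduce a string to lowercase ASCII words joined by hyphens."""
    # Strip accents by decomposing, then dropping non-ASCII characters.
    value = unicodedata.normalize("NFKD", value)
    value = value.encode("ascii", "ignore").decode("ascii")
    # Lowercase, then collapse runs of non-alphanumerics into hyphens.
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")
```

With this, "DIAL CODE" becomes "dial-code" and "official languages" becomes "official-languages", matching the key names the schema below expects.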
Example yaml_snippet:

```yaml
Name: United Kingdom
country-code: GB
DIAL CODE: +44
official languages:
- English
- Welsh
```

```python
from strictyaml import Map, Optional, Str, Seq, load, ScalarValidator
from ensure import Ensure

# This example uses slugify from the "python-slugify" package
from slugify import slugify

class Slug(ScalarValidator):
    def validate_scalar(self, chunk):
        return slugify(chunk.contents)

schema = Map({
    "name": Str(),
    Optional("country-code"): Str(),
    "dial-code": Str(),
    "official-languages": Seq(Str())
}, key_validator=Slug())
```

```python
Ensure(load(yaml_snippet, schema).data).equals(
    {
        "name": "United Kingdom",
        "country-code": "GB",
        "dial-code": "+44",
        "official-languages": ["English", "Welsh"],
    }
)
```

{{< note title="Executable specification" >}}
Page automatically generated from map-with-key-validator.story.
{{< /note >}}

strictyaml-1.6.1/docs/public/using/alpha/compound/mapping-yaml-object.md

---
title: Using a YAML object of a parsed mapping
type: using
---

When a YAML document with mappings is parsed, it is not parsed as a dict but as a YAML object which behaves very similarly to a dict, but with some extra capabilities.

You can use .items(), .keys(), .values(), look up items with square bracket notation, .get(key, with_default_if_nonexistent) and use "x in y" notation to determine key membership.

To retrieve the equivalent dict (containing just other dicts, lists and strings/ints/etc.) use .data.
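This dict-like behaviour is what Python's mapping protocol provides. A minimal sketch of a comparable wrapper, using the standard library's `Mapping` ABC (illustrative only; strictyaml's actual YAML class does considerably more):

```python
from collections.abc import Mapping

class MappingWrapper(Mapping):
    """Sketch: implementing __getitem__, __iter__ and __len__ is enough
    for the ABC to supply .keys(), .values(), .items(), .get() and `in`."""

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```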
Example yaml_snippet: ```yaml a: 1 b: 2 c: 3 ``` ```python from strictyaml import Map, Int, load from ensure import Ensure schema = Map({"a": Int(), "b": Int(), "c": Int()}) ``` .is_mapping(): ```python Ensure(load(yaml_snippet, schema).is_mapping()).is_true() ``` Equivalence with equivalent plain dict: ```python Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": 2, "c": 3}) ``` .items(): ```python Ensure(load(yaml_snippet, schema).items()).equals([("a", 1), ("b", 2), ("c", 3)]) ``` Use in to detect presence of a key: ```python Ensure("a" in load(yaml_snippet, schema)).is_true() ``` .values(): ```python Ensure(load(yaml_snippet, schema).values()).equals([1, 2, 3]) ``` .keys(): ```python Ensure(load(yaml_snippet, schema).keys()).equals(["a", "b", "c"]) ``` Key lookup: ```python yaml = load(yaml_snippet, schema) Ensure(yaml[yaml.keys()[0]]).equals(1) ``` Dict lookup: ```python Ensure(load(yaml_snippet, schema)["a"]).equals(1) ``` .get(): ```python Ensure(load(yaml_snippet, schema).get("a")).equals(1) ``` .get() nonexistent: ```python Ensure(load(yaml_snippet, schema).get("nonexistent")).equals(None) ``` len(): ```python Ensure(len(load(yaml_snippet, schema))).equals(3) ``` {{< note title="Executable specification" >}} Page automatically generated from mapping-representation.story. {{< /note >}}strictyaml-1.6.1/docs/public/using/alpha/compound/mapping.md000066400000000000000000000064331415074265300241500ustar00rootroot00000000000000--- title: Mappings with defined keys (Map) type: using --- Mappings of one value to another are represented by : in YAML and parsed as python dicts. Using StrictYAML's 'Map' you can validate that a mapping contains the right keys and the right *type* of values. Note: for mappings where you don't know the exact names of the keys in advance but you do know the type, use MapPattern. 
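In outline, Map-style validation checks the key set exactly (rejecting unexpected keys and missing required keys) and then validates each value. A conceptual stdlib sketch, with plain callables standing in for validators (`validate_map` is not part of strictyaml):

```python
def validate_map(data, schema):
    """Sketch of Map-style semantics: keys must match the schema exactly."""
    unexpected = set(data) - set(schema)
    if unexpected:
        raise ValueError("unexpected key(s) not in schema %r" % sorted(unexpected))
    missing = set(schema) - set(data)
    if missing:
        raise ValueError("required key(s) %r not found" % sorted(missing))
    return {key: schema[key](value) for key, value in data.items()}
```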
Example yaml_snippet:

```yaml
â: 1
b: 2
c: 3
```

```python
from collections import OrderedDict
from strictyaml import Map, Int, load, as_document
from ensure import Ensure

schema = Map({"a": Int(), "b": Int(), "c": Int()})

schema_2 = Map({u"â": Int(), "b": Int(), "c": Int()})
```

one key mapping:

```yaml
x: 1
```

```python
Ensure(load(yaml_snippet, Map({"x": Int()})).data).equals(OrderedDict([('x', 1)]))
```

key value:

```python
Ensure(load(yaml_snippet, schema_2)[u'â']).equals(1)
```

get item key not found:

```python
load(yaml_snippet, schema_2)['keynotfound']
```

```python
builtins.KeyError: 'keynotfound'
```

cannot use .text:

```python
load(yaml_snippet, schema_2).text
```

```python
builtins.TypeError: YAML({'â': 1, 'b': 2, 'c': 3}) is a mapping, has no text value.
```

parse snippet where key is not found in schema:

```yaml
a: 1
b: 2
â: 3
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a mapping
unexpected key not in schema 'â'
  in "<unicode string>", line 3, column 1:
    "\xE2": '3'
    ^ (line: 3)
```

sequence not expected when parsing:

```yaml
- 1
- 2
- 3
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a mapping
  in "<unicode string>", line 1, column 1:
    - '1'
    ^ (line: 1)
found a sequence
  in "<unicode string>", line 3, column 1:
    - '3'
    ^ (line: 3)
```

List not expected when serializing:

```python
as_document([1, 2, 3], schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
Expected a dict, found '[1, 2, 3]'
```

Empty dict not valid when serializing:

```python
as_document({}, schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
Expected a non-empty dict, found an empty dict.
Use EmptyDict validator to serialize empty dicts.
```

Unexpected key:

```yaml
a: 1
b: 2
c: 3
d: 4
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a mapping
unexpected key not in schema 'd'
  in "<unicode string>", line 4, column 1:
    d: '4'
    ^ (line: 4)
```

required key not found:

```yaml
a: 1
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a mapping
required key(s) 'b', 'c' not found
  in "<unicode string>", line 1, column 1:
    a: '1'
    ^ (line: 1)
```

iterator:

```yaml
a: 1
b: 2
c: 3
```

```python
assert [item for item in load(yaml_snippet, schema)] == ["a", "b", "c"]
```

serialize:

```python
assert as_document(OrderedDict([(u"â", 1), ("b", 2), ("c", 3)]), schema_2).as_yaml() == yaml_snippet
```

{{< note title="Executable specification" >}}
Page automatically generated from map.story.
{{< /note >}}

---
title: Optional keys with defaults (Map/Optional)
type: using
---

{{< warning title="Experimental" >}}
This feature is in alpha. The API may change on a minor version increment.
{{< /warning >}}

Example yaml_snippet:

```yaml
a: 1
```

```python
from strictyaml import Map, Int, Str, Bool, EmptyNone, Optional, load, as_document
from collections import OrderedDict
from ensure import Ensure

schema = Map({
    "a": Int(),
    Optional("b", default=False): Bool(),
})
```

When parsed the result will include the optional value:

```python
Ensure(load(yaml_snippet, schema).data).equals(OrderedDict([("a", 1), ("b", False)]))
```

If parsed and then output to YAML again the default data won't be there:

```python
print(load(yaml_snippet, schema).as_yaml())
```

```yaml
a: 1
```

When default data is output to YAML it is removed:

```python
print(as_document({"a": 1, "b": False}, schema).as_yaml())
```

```yaml
a: 1
```

When you want a key to stay and default to None:

```python
schema = Map({
    "a": Int(),
    Optional("b", default=None, drop_if_none=False): EmptyNone() | Bool(),
})
Ensure(load(yaml_snippet, schema).data).equals(OrderedDict([("a", 1), ("b", None)]))
```

{{< note title="Executable specification" >}}
Page automatically generated from optional-with-defaults.story.
{{< /note >}}

---
title: Validating optional keys in mappings (Map)
type: using
---

Not every key in a YAML mapping will be required. If you use the "Optional('key')" validator with YAML, you can signal that a key/value pair is not required.
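The mechanics of optional keys with defaults can be sketched with plain Python dicts. This is a simplified illustration rather than strictyaml's implementation; `check_optional_map` is an invented name, and the optional schema maps each key to a (coercer, default) pair:

```python
def check_optional_map(data, required, optional):
    """Validate a dict where `required` keys must be present and
    `optional` keys fall back to a default when absent.

    required: {key: coercer}
    optional: {key: (coercer, default)}
    """
    out = {}
    for key, coerce in required.items():
        if key not in data:
            raise ValueError("required key %r not found" % (key,))
        out[key] = coerce(data[key])
    for key, (coerce, default) in optional.items():
        if key in data:
            out[key] = coerce(data[key])
        else:
            # The default is injected into the result, mirroring how
            # Optional("b", default=...) shows up in .data above.
            out[key] = default
    unexpected = set(data) - set(required) - set(optional)
    if unexpected:
        raise ValueError("unexpected key(s) not in schema: %s" % sorted(unexpected))
    return out


assert check_optional_map({"a": "1"}, {"a": int}, {"b": (int, None)}) == {"a": 1, "b": None}
```

Note how, exactly as in the page above, the default appears in the parsed result even though it was never in the input.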
```python
from strictyaml import Map, Int, Str, Bool, Optional, load
from ensure import Ensure

schema = Map({
    "a": Int(),
    Optional("b"): Bool(),
})
```

Valid example 1:

```yaml
a: 1
b: yes
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": True})
```

Valid example 2:

```yaml
a: 1
b: no
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": False})
```

Valid example missing key:

```yaml
a: 1
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": 1})
```

Invalid 1:

```yaml
a: 1
b: 2
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a boolean value (one of "yes", "true", "on", "1", "y", "no", "false", "off", "0", "n")
found an arbitrary integer
  in "<unicode string>", line 2, column 1:
    b: '2'
    ^ (line: 2)
```

Invalid 2:

```yaml
a: 1
b: yes
c: 3
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a mapping
unexpected key not in schema 'c'
  in "<unicode string>", line 3, column 1:
    c: '3'
    ^ (line: 3)
```

{{< note title="Executable specification" >}}
Page automatically generated from optional.story.
{{< /note >}}

---
title: Sequences of unique items (UniqueSeq)
type: using
---

UniqueSeq validates sequences which contain no duplicate values.
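The uniqueness check that UniqueSeq performs can be sketched in a few lines of plain Python. This is an illustrative sketch with an invented helper name, not strictyaml's own code:

```python
def first_duplicate(items):
    """Return the first repeated value in a sequence, or None if all
    items are unique - the check behind UniqueSeq's error messages."""
    seen = set()
    for item in items:
        if item in seen:
            return item
        seen.add(item)
    return None


assert first_duplicate(["A", "B", "C"]) is None
assert first_duplicate(["A", "B", "B"]) == "B"
```

As the error listings below show, the real validator also reports the line on which the duplicate was found.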
Example yaml_snippet:

```yaml
- A
- B
- C
```

```python
from strictyaml import UniqueSeq, Str, load, as_document
from ensure import Ensure

schema = UniqueSeq(Str())
```

Valid:

```python
Ensure(load(yaml_snippet, schema)).equals(["A", "B", "C"])
```

Parsing with one dupe raises an exception:

```yaml
- A
- B
- B
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a sequence
  in "<unicode string>", line 1, column 1:
    - A
    ^ (line: 1)
duplicate found
  in "<unicode string>", line 3, column 1:
    - B
    ^ (line: 3)
```

Parsing all dupes raises an exception:

```yaml
- 3
- 3
- 3
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a sequence
  in "<unicode string>", line 1, column 1:
    - '3'
    ^ (line: 1)
duplicate found
  in "<unicode string>", line 3, column 1:
    - '3'
    ^ (line: 3)
```

Serializing with dupes raises an exception:

```python
as_document(["A", "B", "B"], schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
Expecting all unique items, but duplicates were found in '['A', 'B', 'B']'.
```

{{< note title="Executable specification" >}}
Page automatically generated from unique-sequence.story.
{{< /note >}}

---
title: Sequence/list validator (Seq)
type: using
---

Sequences in YAML are denoted by a series of dashes ('-') and parsed as a list in Python.

Validating sequences of a particular type can be done with the Seq validator, specifying the expected type.

See also [UniqueSeq](../sequences-of-unique-items) and [FixedSeq](../fixed-length-sequences) for other types of sequence validation.
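At its core, Seq applies one item validator to every element of a list, and composes naturally for lists of lists. A simplified sketch (not strictyaml's implementation; `check_seq` is an invented name):

```python
def check_seq(data, check_item):
    """Validate that `data` is a list and coerce every item with
    `check_item`, mirroring Seq(validator)'s behaviour."""
    if not isinstance(data, list):
        raise ValueError("when expecting a sequence, found %r" % (data,))
    return [check_item(item) for item in data]


# Flat sequence of integers, like Seq(Int()):
assert check_seq(["1", "2", "3"], int) == [1, 2, 3]

# Nested sequences compose, like Seq(Seq(Str())):
assert check_seq([["a", "b"], ["c"]],
                 lambda inner: check_seq(inner, str)) == [["a", "b"], ["c"]]
```

A single invalid item fails the whole sequence, which is why the error listings below point at the offending element.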
Example yaml_snippet:

```yaml
- 1
- 2
- 3
```

```python
from strictyaml import Seq, Str, Int, load
from ensure import Ensure
```

Valid Parsed:

```python
Ensure(load(yaml_snippet, Seq(Str()))).equals(["1", "2", "3"])
```

Is sequence:

```python
assert load(yaml_snippet, Seq(Str())).is_sequence()
```

Iterator:

```python
assert [x for x in load(yaml_snippet, Seq(Str()))] == ["1", "2", "3"]
```

Lists of lists:

```yaml
-
  - a
  - b
  - c
-
  - d
  - e
  - f
```

```python
assert load(yaml_snippet, Seq(Seq(Str()))) == [["a", "b", "c"], ["d", "e", "f"]]
```

.text is nonsensical:

```yaml
- â
- 2
- 3
```

```python
load(yaml_snippet, Seq(Str())).text
```

```python
builtins.TypeError: YAML(['â', '2', '3']) is a sequence, has no text value.
```

Invalid mapping instead:

```yaml
a: 1
b: 2
c: 3
```

```python
load(yaml_snippet, Seq(Str()))
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a sequence
  in "<unicode string>", line 1, column 1:
    a: '1'
    ^ (line: 1)
found a mapping
  in "<unicode string>", line 3, column 1:
    c: '3'
    ^ (line: 3)
```

Invalid nested structure instead:

```yaml
- 2
- 3
- a:
  - 1
  - 2
```

```python
load(yaml_snippet, Seq(Str()))
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a str
  in "<unicode string>", line 3, column 1:
    - a:
    ^ (line: 3)
found a mapping
  in "<unicode string>", line 5, column 1:
    - '2'
    ^ (line: 5)
```

Invalid item in sequence:

```yaml
- 1.1
- 1.2
```

```python
load(yaml_snippet, Seq(Int()))
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an integer
found an arbitrary number
  in "<unicode string>", line 1, column 1:
    - '1.1'
    ^ (line: 1)
```

One invalid item in sequence:

```yaml
- 1
- 2
- 3.4
```

```python
load(yaml_snippet, Seq(Int()))
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an integer
found an arbitrary number
  in "<unicode string>", line 3, column 1:
    - '3.4'
    ^ (line: 3)
```

{{< note title="Executable specification" >}}
Page automatically generated from sequence.story.
{{< /note >}}

---
title: Updating document with a schema
type: using
---

When StrictYAML loads a document with a schema, it checks that future updates to that document follow the original schema.

```python
import strictyaml as s
from ensure import Ensure
```

GitHub \#72:

```python
doc = s.load('a: 9', s.Map({
    'a': s.Str(),
    s.Optional('b'): s.Int(),
}))
doc['b'] = 9
assert doc['b'] == 9
```

Works on empty mapping:

```python
doc = s.load('', s.EmptyDict() | s.Map({
    'a': s.Int(),
}))
doc['a'] = 9
assert doc['a'] == 9, doc.as_yaml()
```

Works on complex types:

```python
doc = s.load('a: 8', s.Map({'a': s.Int() | s.Float()}))
assert type(doc['a'].data) == int, repr(doc.data)
doc['a'] = '5.'
assert type(doc['a'].data) == float, repr(doc.data)
assert doc['a'] == 5.
```

Will not work on empty sequence:

```python
doc = s.load('', s.EmptyList() | s.Seq(s.Int()))
doc[0] = 9
```

```python
strictyaml.exceptions.YAMLSerializationError:
cannot extend list via __setitem__. Instead, replace whole list on parent node.
```

Works on map with setting, updating, and then setting multiple keys (regression):

```python
doc = s.load('', s.EmptyDict() | s.MapPattern(
    s.Str(),
    s.EmptyDict() | s.Map({
        s.Optional('b'): s.Seq(s.Int()),
    })
))
doc['a'] = {}
doc['a']['b'] = ['9']
assert doc.data == {'a': {'b': [9]}}, doc.data
assert doc.as_yaml() == 'a:\n b:\n - 9\n', doc.as_yaml()

# Second assignment doesn't occur...
doc['a']['b'] = ['9', '10']
assert doc.data == {'a': {'b': [9, 10]}}, doc.data
assert doc.as_yaml() == 'a:\n b:\n - 9\n - 10\n', doc.as_yaml()

# If and only if another node is overwritten. This was a bug due
# to mismatched _ruamelparsed objects.
doc['b'] = {'b': ['11']}
assert doc['a']['b'].data == [9, 10], doc.data
assert doc['b']['b'].data == [11], doc.data
assert doc.as_yaml() == 'a:\n b:\n - 9\n - 10\nb:\n b:\n - 11\n', doc.as_yaml()
```

For empty sequence, must instead assign whole sequence as key:

```python
doc = s.load('a:', s.Map({'a': s.EmptyList() | s.Seq(s.Int())}))
doc['a'] = [1, 2, 3]
assert doc['a'].data == [1, 2, 3], repr(doc.data)
```

Can assign from string:

```python
doc = s.load('a: 9', s.Map({
    'a': s.Str(),
    s.Optional('b'): s.Int(),
}))
doc['b'] = '9'
assert doc['b'] == 9
```

{{< note title="Executable specification" >}}
Page automatically generated from update-with-schema.story.
{{< /note >}}

---
title: Build a YAML document from scratch in code
type: using
---

YAML documents can be built from combinations of dicts, lists and strings if no schema is used.
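What "dumping a tree of dicts, lists and strings" amounts to can be sketched with a tiny emitter. This is purely illustrative (the function name `dump_sketch` is invented, and indentation details differ from strictyaml's actual output):

```python
def dump_sketch(node, indent=0):
    """Emit a very small YAML-ish rendering of a dict/list/str tree -
    a sketch of what as_document(...).as_yaml() produces, not
    strictyaml's own code."""
    pad = "  " * indent
    lines = []
    if isinstance(node, dict):
        for key, value in node.items():
            if isinstance(value, (dict, list)):
                lines.append("%s%s:" % (pad, key))
                lines.extend(dump_sketch(value, indent + 1))
            else:
                lines.append("%s%s: %s" % (pad, key, value))
    elif isinstance(node, list):
        for value in node:
            lines.append("%s- %s" % (pad, value))
    else:
        lines.append("%s%s" % (pad, node))
    # At the top level join into a document; recursive calls return lines.
    return lines if indent else "\n".join(lines) + "\n"
```

A real emitter must also handle quoting, multi-line strings and nested structures inside list items, which is exactly what strictyaml delegates to its bundled ruamel.yaml.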
```python
from ensure import Ensure
from strictyaml import as_document
from collections import OrderedDict

# Can also use a regular dict if an arbitrary ordering is ok
yaml = as_document(OrderedDict(
    [(u"â", 'yes'), ("b", "hâllo"), ("c", ["1", "2", "3"])]
))
```

Then dump:

```python
print(yaml.as_yaml())
```

```yaml
â: yes
b: hâllo
c:
- 1
- 2
- 3
```

However, any type that is not a string, dict or list cannot be serialized without a schema:

```python
class RandomClass(object):
    def __repr__(self):
        return 'some random object'

as_document({"x": RandomClass()})
```

```python
strictyaml.exceptions.YAMLSerializationError:
'some random object' is not a string
```

Empty dicts also cannot be serialized without a schema:

```python
as_document({'hello': {}})
```

```python
strictyaml.exceptions.YAMLSerializationError:
Empty dicts are not serializable to StrictYAML unless schema is used.
```

Neither can empty lists:

```python
as_document({'hello': []})
```

```python
strictyaml.exceptions.YAMLSerializationError:
Empty lists are not serializable to StrictYAML unless schema is used.
```

You can grab line numbers from the object that is serialized:

```python
Ensure(yaml.start_line).equals(1)
```

{{< note title="Executable specification" >}}
Page automatically generated from build-yaml-document-from-scratch.story.
{{< /note >}}

---
title: Either/or schema validation of different, equally valid kinds of YAML
type: using
---

StrictYAML can be directed to parse two different elements or blocks of YAML.

If the first does not validate correctly, it attempts to parse the second. If the second does not parse correctly either, it raises an exception.
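The try-first-then-fall-back behaviour of the `|` operator can be sketched with plain functions. This is a simplified illustration, not strictyaml's implementation; `or_validator`, `as_bool` and `as_int` are invented names:

```python
def or_validator(*validators):
    """Sketch of the '|' combinator: try each validator in turn and
    return the first successful result; raise if none succeed."""
    def validate(value):
        errors = []
        for validator in validators:
            try:
                return validator(value)
            except ValueError as err:
                errors.append(str(err))
        raise ValueError("no alternative matched: " + "; ".join(errors))
    return validate


def as_bool(text):
    lowered = text.lower()
    if lowered in ("yes", "true", "on", "1", "y"):
        return True
    if lowered in ("no", "false", "off", "0", "n"):
        return False
    raise ValueError("%r is not a boolean" % (text,))


def as_int(text):
    try:
        return int(text)
    except ValueError:
        raise ValueError("%r is not an integer" % (text,))


# Mirrors Bool() | Int(): booleans win, then integers are attempted.
bool_or_int = or_validator(as_bool, as_int)
assert bool_or_int("yes") is True
assert bool_or_int("5") == 5
```

Note that because the boolean alternative is tried first, "1" parses as True rather than the integer 1, which is why the order of alternatives in an either/or schema matters.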
```python
from strictyaml import Map, Seq, Bool, Int, Str, YAMLValidationError, load
from ensure import Ensure

schema = Str() | Map({"a": Bool() | Int()})
```

Boolean first choice true:

```yaml
a: yes
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": True})
```

Boolean first choice false:

```yaml
a: no
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": False})
```

Int second choice:

```yaml
a: 5
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": 5})
```

Invalid not bool or int:

```yaml
a: A
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an integer
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: A
    ^ (line: 1)
```

Invalid combinations of more than one map:

```yaml
a: x
```

```python
load(yaml_snippet, Map({"a": Str()}) | Map({"b": Str()}))
```

```python
strictyaml.exceptions.InvalidValidatorError:
You tried to Or ('|') together 2 Map validators. Try using revalidation instead.
```

Invalid combinations of more than one seq:

```yaml
- 1
- 2
```

```python
load(yaml_snippet, Seq(Int()) | Seq(Str()))
```

```python
strictyaml.exceptions.InvalidValidatorError:
You tried to Or ('|') together 2 Seq validators. Try using revalidation instead.
```

Change item after validated:

```yaml
a: yes
```

```python
yaml = load(yaml_snippet, schema)
yaml['a'] = 5
assert yaml['a'] == 5
```

{{< note title="Executable specification" >}}
Page automatically generated from or.story.
{{< /note >}}

---
title: How to...
---

- [Build a YAML document from scratch in code](build-yaml-document)
- [Either/or schema validation of different, equally valid kinds of YAML](either-or-validation)
- [Labeling exceptions](label-exceptions)
- [Merge YAML documents](merge-yaml-documents)
- [Revalidate an already validated document](revalidation)
- [Reading in YAML, editing it and writing it back out](roundtripping)
- [Get line numbers of YAML elements](what-line)
- [Parsing YAML without a schema](without-a-schema)

---
title: Labeling exceptions
type: using
---

When raising exceptions, you can add a label that will replace `<unknown string>` in error messages with whatever you want.

Example yaml_snippet:

```yaml
a: 1
b:
- 1
- 2
```

```python
from strictyaml import Map, Int, load, YAMLValidationError
```

Label myfilename:

```python
load(yaml_snippet, Map({"a": Int(), "b": Map({"x": Int(), "y": Int()})}), label="myfilename")
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a mapping
  in "myfilename", line 2, column 1:
    b:
    ^ (line: 2)
found a sequence
  in "myfilename", line 4, column 1:
    - '2'
    ^ (line: 4)
```

{{< note title="Executable specification" >}}
Page automatically generated from handle-exceptions.story.
{{< /note >}}

---
title: Merge YAML documents
type: using
---

Loaded YAML can be combined and dumped with the comments intact.
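The example below merges by *replacing* a whole subtree (`yaml_1['b'] = yaml_2`). On plain dicts the equivalent replacement is just item assignment; a recursive key-by-key merge (which the example below does not do, but which is a common variant) can be sketched like this - `deep_merge` is an invented helper, not part of strictyaml:

```python
def deep_merge(base, incoming):
    """Merge `incoming` into a copy of `base`: nested dicts merge key
    by key, anything else in `incoming` overwrites the base value."""
    merged = dict(base)
    for key, value in incoming.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Subtree replacement (what the strictyaml example below does):
replaced = dict({"a": 1, "b": {"x": 4, "y": 5}})
replaced["b"] = {"x": 8, "y": 9}
assert replaced == {"a": 1, "b": {"x": 8, "y": 9}}

# Recursive merge keeps keys the incoming dict doesn't mention:
assert deep_merge({"a": 1, "b": {"x": 4, "y": 5}},
                  {"b": {"x": 8}}) == {"a": 1, "b": {"x": 8, "y": 5}}
```

The advantage of doing the assignment on loaded YAML objects rather than plain dicts is that comments and formatting survive, as the output below shows.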
```python
from strictyaml import Map, MapPattern, Str, Seq, Int, load

schema_1 = Map({
    "a": Str(),
    "b": Map({"x": Int(), "y": Int()}),
    "c": Seq(MapPattern(Str(), Str())),
})

schema_2 = Map({"x": Int(), "y": Int()})

yaml_1 = load(yaml_snippet_1, schema_1)
yaml_2 = load(yaml_snippet_2, schema_2)

yaml_1['b'] = yaml_2
```

```python
print(yaml_1.as_yaml())
```

```yaml
# Some comment

a: â # value comment

# Another comment
b:
  x: 8
  # y is now 9
  y: 9
c:
- a: 1
- b: 2
```

{{< note title="Executable specification" >}}
Page automatically generated from merge-documents.story.
{{< /note >}}

---
title: Revalidate an already validated document
type: using
---

When parsing a YAML document you may wish to do more than one validation pass over a document. This is needed when:

* It simplifies your code to apply validation in stages.
* You want to validate recursively.
* One part of the document validation depends upon another (this is the example given below).
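The third case - one part of a document constraining another - can be sketched with plain dicts before looking at the executable examples. This is an illustration of the idea, not strictyaml's implementation; `second_pass` is an invented name:

```python
def second_pass(document):
    """Sketch of schema-dependent revalidation: the 'countries' list
    (accepted loosely in a first pass) determines which keys the
    'capitals' mapping may contain, and its values are then coerced."""
    countries = document["countries"]
    capitals = document["capitals"]
    unexpected = set(capitals) - set(countries)
    if unexpected:
        raise ValueError("unexpected key(s) not in schema: %s" % sorted(unexpected))
    return {name: int(value) for name, value in capitals.items()}


doc = {"capitals": {"UK": "1", "Germany": "2"}, "countries": ["Germany", "UK"]}
assert second_pass(doc) == {"UK": 1, "Germany": 2}
```

The strictyaml version of this pattern (below) uses `Any()` for the loose first pass and `.revalidate(...)` for the second, so that errors still point at the right line of the original document.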
Example yaml_snippet:

```yaml
capitals:
  UK: 1
  Germany: 2
countries:
- Germany
- UK
```

```python
from strictyaml import Str, Int, Map, Seq, Any, load
from ensure import Ensure

overall_schema = Map({"capitals": Any(), "countries": Seq(Str())})

parsed = load(yaml_snippet, overall_schema)
```

Reparse mapping:

```python
Ensure(parsed.data['capitals']['UK']).equals("1")
parsed['capitals'].revalidate(Map({capital: Int() for capital in parsed.data['countries']}))
Ensure(parsed.data['capitals']['UK']).equals(1)
```

Reparse scalar:

```python
Ensure(parsed.data['capitals']['UK']).equals("1")
parsed['capitals']['UK'].revalidate(Int())
Ensure(parsed.data['capitals']['UK']).equals(1)
Ensure(parsed['capitals']['UK'].data).is_an(int)
```

Parse error:

```yaml
capitals:
  UK: 1
  Germany: 2
  France: 3
countries:
- Germany
- UK
```

```python
parsed['capitals'].revalidate(Map({capital: Int() for capital in parsed.data['countries']}))
```

```python
strictyaml.exceptions.YAMLValidationError:
while parsing a mapping
unexpected key not in schema 'France'
  in "<unicode string>", line 4, column 1:
    France: '3'
    ^ (line: 4)
```

{{< note title="Executable specification" >}}
Page automatically generated from revalidation.story.
{{< /note >}}

---
title: Reading in YAML, editing it and writing it back out
type: using
---

Loaded YAML can be modified and dumped out again with comments preserved using .as_yaml().

Note that due to some bugs in the library underlying StrictYAML (ruamel.yaml), while the data parsed should be precisely the same, the exact syntax (newlines, comment locations, etc.) may not be identical.
Example yaml_snippet: ```yaml # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: 1 - b: 2 ``` ```python from strictyaml import Map, MapPattern, EmptyDict, Str, Seq, Int, load from ensure import Ensure schema = Map({ "a": Str(), "b": Map({"x": Int(), "y": Int()}), "c": EmptyDict() | Seq(MapPattern(Str(), Str())), }) ``` Commented: ```python Ensure(load(yaml_snippet, schema).as_yaml()).equals(yaml_snippet) ``` Modified with invalid variable: ```python to_modify = load(yaml_snippet, schema) to_modify['b']['x'] = 2 to_modify['c'][0]['a'] = '3' to_modify['b']['x'] = 'not an integer' ``` ```python strictyaml.exceptions.YAMLSerializationError: 'not an integer' not an integer. ``` Modified with float: ```python to_modify = load(yaml_snippet, schema) to_modify['c'][0]['a'] = "1.0001" print(to_modify.as_yaml()) ``` ```yaml # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: 1.0001 - b: 2 ``` Modify multi line string: ```yaml a: some b: | text ``` ```python schema = Map({"a": Str(), "b": Str()}) to_modify = load(yaml_snippet, schema) to_modify['a'] = 'changed' print(to_modify.as_yaml()) ``` ```yaml a: changed b: | text ``` Modified with one variable: ```python to_modify = load(yaml_snippet, schema) to_modify['b']['x'] = 2 to_modify['c'][0]['a'] = '3' print(to_modify.as_yaml()) ``` ```yaml # Some comment a: â # value comment # Another comment b: x: 2 y: 5 c: - a: 3 - b: 2 ``` Text across lines: ```python to_modify = load(yaml_snippet, schema) to_modify['c'][0]['a'] = "text\nacross\nlines" print(to_modify.as_yaml()) ``` ```yaml # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: |- text across lines - b: 2 ``` With empty dict: ```python to_modify = load(yaml_snippet, schema) to_modify['c'] = {} print(to_modify.as_yaml()) ``` ```yaml # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: ``` {{< note title="Executable specification" >}} Page automatically generated from roundtrip.story. 
{{< /note >}}

---
title: Get line numbers of YAML elements
type: using
---

Line numbers, the text of an item and the text of surrounding lines can be grabbed from returned YAML objects - using .start_line, .end_line, lines(), lines_before(x) and lines_after(x).

Example yaml_snippet:

```yaml
y: p

# Some comment
a: |
  x
# Another comment
b: y
c: a

d: b
```

```python
from strictyaml import Map, Str, YAMLValidationError, load
from ensure import Ensure

schema = Map({"y": Str(), "a": Str(), "b": Str(), "c": Str(), "d": Str()})

snippet = load(yaml_snippet, schema)
```

If there is a preceding comment for an item the start line includes it:

```python
Ensure(snippet["a"].start_line).equals(3)
Ensure(snippet["d"].start_line).equals(9)
```

If there is a trailing comment the end line includes it:

```python
Ensure(snippet["a"].end_line).equals(6)
Ensure(snippet["d"].end_line).equals(10)
```

You can grab the start line of a key:

```python
Ensure(snippet.keys()[1].start_line).equals(3)
```

Start line and end line of whole snippet:

```python
Ensure(snippet.start_line).equals(1)
Ensure(snippet.end_line).equals(10)
```

Grabbing a line before an item:

```python
Ensure(snippet['a'].lines_before(1)).equals("# Some comment")
```

Grabbing a line after an item:

```python
Ensure(snippet['a'].lines_after(4)).equals("b: y\nc: a\n\nd: b")
```

Grabbing the lines of an item including surrounding comments:

```python
print(load(yaml_snippet, schema)['a'].lines())
```

```yaml
a: |
  x
# Another comment
```

{{< note title="Executable specification" >}}
Page automatically generated from whatline.story.
{{< /note >}}

---
title: Parsing YAML without a schema
type: using
---

When using strictyaml you do not have to specify a schema.
If you do this, the validator "Any" is used, which will accept any mapping, any list and any scalar value (scalars will always be interpreted as strings, unlike in regular YAML).

This is the recommended approach when rapidly prototyping and the desired schema is fluid.

When your prototype code is parsing YAML that has a more fixed structure, we recommend that you 'lock it down' with a schema.

The Any validator can be used inside fixed structures as well.

Example yaml_snippet:

```yaml
a:
  x: 9
  y: 8
b: 2
c: 3
```

```python
from strictyaml import Str, Any, MapPattern, load
from ensure import Ensure
```

Parse without validator:

```python
Ensure(load(yaml_snippet)).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"})
```

Parse with any validator - equivalent:

```python
Ensure(load(yaml_snippet, Any())).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"})
```

Fix higher levels of schema:

```python
Ensure(load(yaml_snippet, MapPattern(Str(), Any()))).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"})
```

{{< note title="Executable specification" >}}
Page automatically generated from non-schema-validation.story.
{{< /note >}}

---
title: Using StrictYAML v0.x.x
---

---
title: Disallowed YAML
type: using
---

StrictYAML parses an opinionated subset of the YAML specification which refuses to parse features that are otherwise valid in standard YAML. For an explanation as to why these features are stripped out of StrictYAML, see the FAQ.

Disallowed YAML features raise Disallowed exceptions, while syntactically invalid YAML raises ScannerError or ComposerError.
Every error inherits from YAMLError. ```python from strictyaml import Map, Int, Any, load from strictyaml import TagTokenDisallowed, FlowMappingDisallowed, AnchorTokenDisallowed schema = Map({"x": Map({"a": Any(), "b": Any(), "c": Any()})}) ``` Tag tokens: ```yaml x: a: !!str yes b: !!str 3.5 c: !!str yes ``` ```python load(yaml_snippet, schema, label="disallowed") ``` ```python strictyaml.exceptions.TagTokenDisallowed: While scanning in "disallowed", line 2, column 11: a: !!str yes ^ (line: 2) Found disallowed tag tokens (do not specify types in markup) in "disallowed", line 2, column 6: a: !!str yes ^ (line: 2) ``` Flow style sequence: ```yaml [a, b]: [x, y] ``` ```python load(yaml_snippet, schema, label="disallowed") ``` ```python strictyaml.exceptions.FlowMappingDisallowed: While scanning in "disallowed", line 1, column 1: [a, b]: [x, y] ^ (line: 1) Found ugly disallowed JSONesque flow mapping (surround with ' and ' to make text appear literally) in "disallowed", line 1, column 2: [a, b]: [x, y] ^ (line: 1) ``` Flow style mapping: ```yaml x: { a: 1, b: 2, c: 3 } ``` ```python load(yaml_snippet, schema, label="disallowed") ``` ```python strictyaml.exceptions.FlowMappingDisallowed: While scanning in "disallowed", line 1, column 4: x: { a: 1, b: 2, c: 3 } ^ (line: 1) Found ugly disallowed JSONesque flow mapping (surround with ' and ' to make text appear literally) in "disallowed", line 1, column 5: x: { a: 1, b: 2, c: 3 } ^ (line: 1) ``` Node anchors and references: ```yaml x: a: &node1 3.5 b: 1 c: *node1 ``` ```python load(yaml_snippet, schema, label="disallowed") ``` ```python strictyaml.exceptions.AnchorTokenDisallowed: While scanning in "disallowed", line 2, column 6: a: &node1 3.5 ^ (line: 2) Found confusing disallowed anchor token (surround with ' and ' to make text appear literally) in "disallowed", line 2, column 12: a: &node1 3.5 ^ (line: 2) ``` Syntactically invalid YAML: ```yaml - invalid string ``` ```python load(yaml_snippet, schema, 
label="disallowed")
```

```python
strictyaml.ruamel.scanner.ScannerError:
while scanning a simple key
  in "disallowed", line 2, column 1:
    string
    ^ (line: 2)
could not find expected ':'
  in "disallowed", line 3, column 1:
    ^ (line: 3)
```

Mixed space indentation:

```yaml
item:
  two space indent: 2
item two:
    four space indent: 2
```

```python
load(yaml_snippet, label="disallowed")
```

```python
strictyaml.exceptions.InconsistentIndentationDisallowed:
While parsing
  in "disallowed", line 4, column 5:
        four space indent: 2
        ^ (line: 4)
Found mapping with indentation inconsistent with previous mapping
  in "disallowed", line 5, column 1:
    ^ (line: 5)
```

{{< note title="Executable specification" >}}
Page automatically generated from disallow.story.
{{< /note >}}

---
title: Duplicate keys
type: using
---

Duplicate keys are allowed in regular YAML - as parsed by pyyaml, ruamel.yaml and poyo.

Not only is it unclear whether 'a' should be "cow" or "bull" (the parser will decide 'bull', but did you know that?) - if there are 200 lines between a: cow and a: bull, a user might very likely change the *first* a and erroneously believe that the resulting value of a has been changed, when it has not.

In order to avoid all possible confusion, StrictYAML will simply refuse to parse this and will only accept associative arrays where all of the keys are unique. It will throw a DuplicateKeysDisallowed exception.
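The ambiguity described above is easy to demonstrate with plain Python: building a dict from repeated keys silently keeps the *last* value. The sketch below shows that behaviour and the refuse-to-parse alternative StrictYAML takes instead; `reject_duplicates` is an invented helper name, not strictyaml's code:

```python
# Plain dict construction silently keeps the last value - the same
# last-wins behaviour most YAML parsers apply to duplicate keys:
pairs = [("a", "cow"), ("a", "bull")]
assert dict(pairs) == {"a": "bull"}


def reject_duplicates(pairs):
    """Refuse to build a mapping when any key repeats, mirroring the
    check behind DuplicateKeysDisallowed."""
    mapping = {}
    for key, value in pairs:
        if key in mapping:
            raise ValueError("Duplicate key %r found" % (key,))
        mapping[key] = value
    return mapping


assert reject_duplicates([("a", "cow"), ("b", "bull")]) == {"a": "cow", "b": "bull"}
```

Raising early means the author of the document finds out about the conflict instead of one value silently winning.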
Example yaml_snippet:

```yaml
a: cow
a: bull
```

```python
from strictyaml import load, DuplicateKeysDisallowed
```

Nameless exception:

```python
load(yaml_snippet)
```

```python
strictyaml.exceptions.DuplicateKeysDisallowed:
While parsing
  in "<unicode string>", line 2, column 1:
    a: bull
    ^ (line: 2)
Duplicate key 'a' found
  in "<unicode string>", line 2, column 2:
    a: bull
     ^ (line: 2)
```

Named exception:

```python
load(yaml_snippet, label="mylabel")
```

```python
strictyaml.exceptions.DuplicateKeysDisallowed:
While parsing
  in "mylabel", line 2, column 1:
    a: bull
    ^ (line: 2)
Duplicate key 'a' found
  in "mylabel", line 2, column 2:
    a: bull
     ^ (line: 2)
```

{{< note title="Executable specification" >}}
Page automatically generated from duplicatekeys.story.
{{< /note >}}

---
title: Restrictions
---

- [Disallowed YAML](disallowed-yaml)
- [Duplicate keys](duplicate-keys)
- [Dirty load](loading-dirty-yaml)

---
title: Dirty load
type: using
---

StrictYAML refuses to parse flow style and node anchors by default, but since there have been [some requests](https://github.com/crdoconnor/strictyaml/issues/38) to parse flow style, this is now allowed with the "dirty_load" method.

If allow_flow_style is True, Map indentation is not checked for consistency, as the indentation level is dependent on the map key length.
```python
from strictyaml import Map, Int, MapPattern, Seq, Str, Any, dirty_load

schema = Map({
    "foo": Map({"a": Any(), "b": Any(), "c": Any()}),
    "y": MapPattern(Str(), Str()),
    "z": Seq(Str()),
})
```

Flow style mapping:

```yaml
foo: { a: 1, b: 2, c: 3 }
y: {}
z: []
```

```python
assert dirty_load(yaml_snippet, schema, allow_flow_style=True) == {"foo": {"a": "1", "b": "2", "c": "3"}, "y": {}, "z": []}
```

{{< note title="Executable specification" >}}
Page automatically generated from dirty-load.story.
{{< /note >}}

---
title: Boolean (Bool)
type: using
---

Boolean values can be parsed using a Bool validator. It case-insensitively interprets "yes", "y", "true", "on" and "1" as True, and "no", "n", "false", "off" and "0" as False. Any other value will trigger a validation error.

When updating boolean values on a YAML object with True or False, the roundtripped string version is set to "yes" or "no". To have your boolean values updated to a different yes/no string, update with a string instead - e.g. "on" or "off".
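The interpretation rule just described can be sketched in a few lines of plain Python. This is an illustrative sketch of the mapping the Bool validator applies, not strictyaml's actual code; `interpret_bool` is an invented name:

```python
TRUTHY = frozenset(["yes", "y", "true", "on", "1"])
FALSY = frozenset(["no", "n", "false", "off", "0"])


def interpret_bool(text):
    """Case-insensitively map a scalar's text to True or False,
    raising on anything outside the accepted vocabulary."""
    lowered = text.lower()
    if lowered in TRUTHY:
        return True
    if lowered in FALSY:
        return False
    raise ValueError("%r is not one of %s" % (text, sorted(TRUTHY | FALSY)))


assert interpret_bool("FALSE") is False
assert interpret_bool("On") is True
```

This is why, in the examples below, values as varied as "Y", "Off" and "FALSE" all parse cleanly while "yâs" raises a validation error.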
Example yaml_snippet:

```yaml
a: yes
b: true
c: on
d: 1
e: True
f: Y
u: n
v: False
w: 0
x: Off
y: FALSE
z: no
```

```python
from strictyaml import Bool, Str, MapPattern, load
from ensure import Ensure

schema = MapPattern(Str(), Bool())
```

Parse to YAML object:

```python
Ensure(load(yaml_snippet, schema)).equals({
    "a": True, "b": True, "c": True, "d": True, "e": True, "f": True,
    "u": False, "v": False, "w": False, "x": False, "y": False, "z": False,
})
```

YAML object should resolve to True or False:

```python
Ensure(load(yaml_snippet, schema)["w"]).equals(False)
```

Using .data you can get the actual boolean value parsed:

```python
assert load(yaml_snippet, schema)["a"].data is True
```

.text returns the text of the boolean YAML:

```python
Ensure(load(yaml_snippet, schema)["y"].text).equals("FALSE")
```

Update boolean values with string and bool type:

```python
yaml = load(yaml_snippet, schema)
yaml['a'] = 'no'
yaml['b'] = False
yaml['c'] = True
print(yaml.as_yaml())
```

```yaml
a: no
b: no
c: yes
d: 1
e: True
f: Y
u: n
v: False
w: 0
x: Off
y: FALSE
z: no
```

Cannot cast boolean to string:

```python
str(load(yaml_snippet, schema)["y"])
```

```python
builtins.TypeError: Cannot cast 'YAML(False)' to str.
Use str(yamlobj.data) or str(yamlobj.text) instead.
```

Different uninterpretable values raise a validation error:

```python
load('a: yâs', schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a boolean value (one of "yes", "true", "on", "1", "y", "no", "false", "off", "0", "n")
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: "y\xE2s"
    ^ (line: 1)
```

{{< note title="Executable specification" >}}
Page automatically generated from boolean.story.
{{< /note >}}

---
title: Parsing comma separated items (CommaSeparated)
type: using
---

Comma-separated values can be validated and parsed using the CommaSeparated validator.

Note that the space following the commas is stripped by default when parsed.

```python
from strictyaml import CommaSeparated, Int, Str, Map, load, as_document
from ensure import Ensure

int_schema = Map({"a": CommaSeparated(Int())})

str_schema = Map({"a": CommaSeparated(Str())})
```

Parse as int:

```yaml
a: 1, 2, 3
```

```python
Ensure(load(yaml_snippet, int_schema)).equals({"a": [1, 2, 3]})
```

Parse as string:

```yaml
a: 1, 2, 3
```

```python
Ensure(load(yaml_snippet, str_schema)).equals({"a": ["1", "2", "3"]})
```

Parse empty comma separated string:

```yaml
a:
```

```python
Ensure(load(yaml_snippet, str_schema)).equals({"a": []})
```

Invalid int comma separated sequence:

```yaml
a: 1, x, 3
```

```python
load(yaml_snippet, int_schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an integer
  in "<unicode string>", line 2, column 1:

    ^ (line: 2)
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: 1, x, 3
    ^ (line: 1)
```

Serialize list to comma separated sequence:

```python
print(as_document({"a": [1, 2, 3]}, int_schema).as_yaml())
```

```yaml
a: 1, 2, 3
```

Serialize valid string to comma separated sequence:

```python
print(as_document({"a": "1,2,3"}, int_schema).as_yaml())
```

```yaml
a: 1,2,3
```

Serialize empty list to comma separated sequence:

```python
print(as_document({"a": []}, int_schema).as_yaml())
```

```yaml
a:
```

Serialize invalid string to comma separated sequence:

```python
print(as_document({"a": "1,x,3"}, int_schema).as_yaml())
```

```python
strictyaml.exceptions.YAMLSerializationError:
'x' not an integer.
```

Attempting to serialize neither list nor string raises an exception:

```python
as_document({"a": 1}, int_schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
expected string or list, got '1' of type 'int'
```

{{< note title="Executable specification" >}}
Page automatically generated from commaseparated.story.
{{< /note >}}

---
title: Datetimes (Datetime)
type: using
---

The Datetime validator parses using the python-dateutil library and returns a python datetime object.

Example yaml_snippet:

```yaml
date: 2016-10-22
datetime1: 2016-10-22T14:23:12+00:00
datetime2: 2016-10-22T14:23:12Z
datetime3: 20161022T142312Z
```

```python
from strictyaml import Map, Datetime, YAMLValidationError, load, as_document
from collections import OrderedDict
from dateutil.tz.tz import tzutc
from datetime import datetime
from ensure import Ensure

schema = Map({
    "date": Datetime(),
    "datetime1": Datetime(),
    "datetime2": Datetime(),
    "datetime3": Datetime(),
})

equivalent_data = OrderedDict([
    ("date", datetime(2016, 10, 22, 0, 0)),
    ("datetime1", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())),
    ("datetime2", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())),
    ("datetime3", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())),
])
```

Each of the four datetimes is valid and parsed:

```python
Ensure(load(yaml_snippet, schema)).equals(equivalent_data)
```

.text still returns the original text:

```python
Ensure(load(yaml_snippet, schema)["date"].text).equals("2016-10-22")
```

Non-datetimes raise an exception:

```yaml
date: 1
datetime1: â
datetime2: b
datetime3: c
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a datetime
found arbitrary text
  in "<unicode string>", line 2, column 1:
    datetime1: "\xE2"
    ^ (line: 2)
```

Datetime objects serialize directly to ISO-8601 format YAML strings:

```python
print(as_document(equivalent_data, schema).as_yaml())
```

```yaml
date: 2016-10-22T00:00:00
datetime1: 2016-10-22T14:23:12+00:00
datetime2: 2016-10-22T14:23:12+00:00
datetime3: 2016-10-22T14:23:12+00:00
```

Valid datetime strings serialize to YAML:

```python
print(as_document({"a": "2016-10-22"}, Map({"a": Datetime()})).as_yaml())
```

```yaml
a: 2016-10-22
```

Serializing an invalid datetime string raises an exception:

```python
as_document({"a": "x"}, Map({"a": Datetime()}))
```

```python
strictyaml.exceptions.YAMLSerializationError:
expected a datetime, got 'x'
```

Serializing a non-string, non-datetime object raises an exception:

```python
as_document({"a": 55}, Map({"a": Datetime()}))
```

```python
strictyaml.exceptions.YAMLSerializationError:
expected a datetime, got '55' of type 'int'
```

{{< note title="Executable specification" >}}
Page automatically generated from datetime.story.
{{< /note >}}

---
title: Decimal numbers (Decimal)
type: using
---

StrictYAML parses to a YAML object representing a decimal, not the value directly, to give you more flexibility and control over what you can do with the YAML.

This is what that object can do - in many cases, if parsed as a decimal, it will behave in the same way.

To get a python decimal.Decimal object, use .data.

Parsing and validating as a Decimal is best for values which require precision, like prices.
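A plain-Python illustration of the precision difference referred to above (standard library only, no strictyaml required):

```python
from decimal import Decimal

price = "1.00000000000000000001"

# float silently rounds the trailing digits away...
assert float(price) == 1.0

# ...while Decimal preserves them exactly, which is why the
# Decimal validator suits values like prices.
assert Decimal(price) > Decimal("1.0")
assert str(Decimal(price)) == "1.00000000000000000001"
```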
Example yaml_snippet:

```yaml
a: 1.00000000000000000001
b: 5.4135
```

```python
from strictyaml import Map, Decimal, load
from decimal import Decimal as Dec
from ensure import Ensure

schema = Map({"a": Decimal(), "b": Decimal()})
```

.data to get Decimal object:

```python
Ensure(type(load(yaml_snippet, schema)["a"].data) is Dec).is_true()
```

Valid:

```python
Ensure(load(yaml_snippet, schema)).equals({"a": Dec('1.00000000000000000001'), "b": Dec('5.4135')})
```

Cast to str:

```python
Ensure(str(load(yaml_snippet, schema)['a'])).equals("1.00000000000000000001")
```

Cast to float:

```python
Ensure(float(load(yaml_snippet, schema)["a"])).equals(1.0)
```

Greater than:

```python
Ensure(load(yaml_snippet, schema)["a"] > Dec('1.0')).is_true()
```

Less than, which would not work for float:

```python
Ensure(load(yaml_snippet, schema)["a"] < Dec('1.00000000000000000002')).is_true()
```

Cannot cast to bool:

```python
bool(load(yaml_snippet, schema)['a'])
```

```python
builtins.TypeError: Cannot cast 'YAML(1.00000000000000000001)' to bool.
Use bool(yamlobj.data) or bool(yamlobj.text) instead.
```

Invalid:

```yaml
a: string
b: 2
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a decimal
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: string
    ^ (line: 1)
```

{{< note title="Executable specification" >}}
Page automatically generated from decimal.story.
{{< /note >}}

---
title: Email and URL validators
type: using
---

StrictYAML can validate emails (using a simplified regex) and URLs.

```python
from strictyaml import Email, Url, Map, load
from ensure import Ensure

schema = Map({"a": Email(), "b": Url()})
```

Parsed:

```yaml
a: billg@microsoft.com
b: http://www.twitter.com/@realDonaldTrump
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": "billg@microsoft.com", "b": "http://www.twitter.com/@realDonaldTrump"})
```

Exception:

```yaml
a: notanemail
b: notaurl
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an email address
found non-matching string
  in "<unicode string>", line 1, column 1:
    a: notanemail
    ^ (line: 1)
```

{{< note title="Executable specification" >}}
Page automatically generated from email-url.story.
{{< /note >}}

---
title: Empty key validation
type: using
---

Sometimes you may wish to not specify a value, or to specify that it does not exist.

Using StrictYAML you can accept this as a valid value and have it parsed to one of three things - None, {} (empty dict), or [] (empty list).
Example yaml_snippet:

```yaml
a:
```

```python
from strictyaml import Map, Str, Enum, EmptyNone, EmptyDict, EmptyList, NullNone, load, as_document
from ensure import Ensure
```

EmptyNone with empty value:

```python
Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Enum(["A", "B",])}))).equals({"a": None})
```

EmptyDict:

```python
Ensure(load(yaml_snippet, Map({"a": EmptyDict() | Enum(["A", "B",])}))).equals({"a": {}})
```

EmptyList:

```python
Ensure(load(yaml_snippet, Map({"a": EmptyList() | Enum(["A", "B",])}))).equals({"a": []})
```

NullNone:

```python
Ensure(load("a: null", Map({"a": NullNone() | Enum(["A", "B",])}))).equals({"a": None})
```

EmptyNone, no empty value:

```yaml
a: A
```

```python
Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Enum(["A", "B",])}))).equals({"a": "A"})
```

Combine Str with EmptyNone and Str is evaluated first:

```python
Ensure(load(yaml_snippet, Map({"a": Str() | EmptyNone()}))).equals({"a": ""})
```

Combine EmptyNone with Str and Str is evaluated last:

```python
Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Str()}))).equals({"a": None})
```

Non-empty value:

```yaml
a: C
```

```python
load(yaml_snippet, Map({"a": Enum(["A", "B",]) | EmptyNone()}))
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting an empty value
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: C
    ^ (line: 1)
```

Serialize empty dict:

```python
print(as_document({"a": {}}, Map({"a": EmptyDict() | Str()})).as_yaml())
```

```yaml
a:
```

Serialize empty list:

```python
print(as_document({"a": []}, Map({"a": EmptyList() | Str()})).as_yaml())
```

```yaml
a:
```

Serialize None:

```python
print(as_document({"a": None}, Map({"a": EmptyNone() | Str()})).as_yaml())
```

```yaml
a:
```

{{< note title="Executable specification" >}}
Page automatically generated from empty.story.
{{< /note >}}

---
title: Enumerated scalars (Enum)
type: using
---

StrictYAML allows you to ensure that a scalar value can only be one of a set number of items.

It will throw an exception if any strings not in the list are found.

```python
from strictyaml import Map, Enum, MapPattern, YAMLValidationError, load
from collections import OrderedDict
from ensure import Ensure

schema = Map({"a": Enum(["A", "B", "C"])})
```

Valid because it contains 'A':

```yaml
a: A
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": "A"})
```

Get .data from enum:

```yaml
a: A
```

```python
assert isinstance(load(yaml_snippet, schema)['a'].data, str)
```

Valid because it contains 'B':

```yaml
a: B
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": "B"})
```

Valid because it contains 'C':

```yaml
a: C
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": "C"})
```

Invalid because D is not in the enum:

```yaml
a: D
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting one of: A, B, C
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: D
    ^ (line: 1)
```

Invalid because a blank string is not in the enum:

```yaml
a:
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting one of: A, B, C
found a blank string
  in "<unicode string>", line 1, column 1:
    a: ''
    ^ (line: 1)
```

Successful serialization:

```yaml
a: A
```

```python
yaml = load(yaml_snippet, schema)
yaml['a'] = "B"
print(yaml.as_yaml())
```

```yaml
a: B
```

Invalid serialization:

```yaml
a: A
```

```python
yaml = load(yaml_snippet, schema)
yaml['a'] = "D"
print(yaml.as_yaml())
```

```python
strictyaml.exceptions.YAMLSerializationError:
Got 'D' when expecting one of: A, B, C
```

{{< note title="Executable specification" >}}
Page automatically generated from enum.story.
{{< /note >}}

---
title: Floating point numbers (Float)
type: using
---

StrictYAML parses to a YAML object representing a decimal - e.g. YAML(1.0000000000000001).

To get a python float literal, use .data.

Parsing and validating as a Decimal is best for values which require precision, but float is better for values for which precision is not required.

Example yaml_snippet:

```yaml
a: 1.00000000000000000001
b: 5.4135
```

```python
from math import isnan, isinf
from strictyaml import Map, MapPattern, Str, Float, Bool, load, as_document
from collections import OrderedDict
from ensure import Ensure

schema = Map({"a": Float(), "b": Float()})
```

Use .data to get float type:

```python
Ensure(type(load(yaml_snippet, schema)["a"].data)).equals(float)
```

Equal to the equivalent float (which is a different number):

```python
Ensure(load(yaml_snippet, schema)).equals({"a": 1.0, "b": 5.4135})
```

Cast to str:

```python
Ensure(str(load(yaml_snippet, schema)["a"])).equals("1.0")
```

Cast to float:

```python
Ensure(float(load(yaml_snippet, schema)["a"])).equals(1.0)
```

Greater than:

```python
Ensure(load(yaml_snippet, schema)["a"] > 0).is_true()
```

Less than:

```python
Ensure(load(yaml_snippet, schema)["a"] < 0).is_false()
```

Has NaN values:

```yaml
a: nan
b: .NaN
```

```python
Ensure(isnan(load(yaml_snippet, schema)["a"].data)).is_true()
Ensure(isnan(load(yaml_snippet, schema)["b"].data)).is_true()
```

Has infinity values:

```yaml
a: -.Inf
b: INF
```

```python
Ensure(isinf(load(yaml_snippet, schema)["a"].data)).is_true()
Ensure(isinf(load(yaml_snippet, schema)["b"].data)).is_true()
```

Has underscores:

```yaml
a: 10_000_000.5
b: 10_0_0.2_5
```

```python
Ensure(load(yaml_snippet, schema).data).equals({"a": 10000000.5, "b": 1000.25})
```

Cannot cast to bool:

```python
bool(load(yaml_snippet, schema)['a'])
```

```python
builtins.TypeError: Cannot cast 'YAML(1.0)' to bool.
Use bool(yamlobj.data) or bool(yamlobj.text) instead.
```

Cannot parse non-float:

```yaml
a: string
b: 2
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting a float
found arbitrary text
  in "<unicode string>", line 1, column 1:
    a: string
    ^ (line: 1)
```

Serialize successfully:

```python
print(as_document(OrderedDict([("a", 3.5), ("b", "2.1")]), schema).as_yaml())
```

```yaml
a: 3.5
b: 2.1
```

Serialize successfully with NaN:

```python
print(as_document(OrderedDict([("a", 3.5), ("b", float("nan"))]), schema).as_yaml())
```

```yaml
a: 3.5
b: nan
```

Serialize successfully with infinity:

```python
print(as_document(OrderedDict([("a", float("inf")), ("b", float("-inf"))]), schema).as_yaml())
```

```yaml
a: inf
b: -inf
```

Serialization failure:

```python
as_document(OrderedDict([("a", "x"), ("b", "2.1")]), schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
when expecting a float, got 'x'
```

Float as key:

```python
document = as_document(OrderedDict([("3.5", "a"), ("2.1", "c")]), MapPattern(Float(), Str()))
print(document.data[3.5])
print(document.data[2.1])
```

```yaml
a
c
```

Float or bool:

```python
document = as_document({"a": True}, Map({"a": Float() | Bool()}))
print(document.as_yaml())
```

```yaml
a: yes
```

{{< note title="Executable specification" >}}
Page automatically generated from float.story.
{{< /note >}}

---
title: Hexadecimal Integers (HexInt)
type: using
---

StrictYAML can interpret a hexadecimal integer, preserving its value and style.

Example yaml_snippet:

```yaml
x: 0x1a
```

```python
from strictyaml import Map, HexInt, load
from ensure import Ensure

schema = Map({"x": HexInt()})

parsed = load(yaml_snippet, schema)
```

Parsed correctly:

```python
Ensure(parsed).equals({"x": 26})
Ensure(parsed.as_yaml()).equals("x: 0x1a\n")
```

Uppercase:

```yaml
x: 0X1A
```

```python
Ensure(load(yaml_snippet, schema).data).equals({"x": 26})
Ensure(load(yaml_snippet, schema).as_yaml()).equals("x: 0X1A\n")
```

{{< note title="Executable specification" >}}
Page automatically generated from scalar-hexadecimal-integer.story.
{{< /note >}}

---
title: Scalar StrictYAML Validators
---

- [Boolean (Bool)](boolean)
- [Parsing comma separated items (CommaSeparated)](comma-separated)
- [Datetimes (Datetime)](datetime)
- [Decimal numbers (Decimal)](decimal)
- [Email and URL validators](email-and-url)
- [Empty key validation](empty)
- [Enumerated scalars (Enum)](enum)
- [Floating point numbers (Float)](float)
- [Hexadecimal Integers (HexInt)](hexadecimal-integer)
- [Integers (Int)](integer)
- [Validating strings with regexes (Regex)](regular-expressions)
- [Parsing strings (Str)](string)

---
title: Integers (Int)
type: using
---

StrictYAML parses to a YAML object, not the value directly, to give you more flexibility and control over what you can do with the YAML.

This is what that object can do - in many cases, if parsed as an integer, it will behave in the same way.
Example yaml_snippet:

```yaml
a: 1
b: 5
```

```python
from strictyaml import Map, Int, load
from ensure import Ensure

schema = Map({"a": Int(), "b": Int()})

parsed = load(yaml_snippet, schema)
```

Parsed correctly:

```python
Ensure(parsed).equals({"a": 1, "b": 5})
```

Has underscores:

```yaml
a: 10_000_000
b: 10_0_0
```

```python
Ensure(load(yaml_snippet, schema).data).equals({"a": 10000000, "b": 1000})
```

Cast with str:

```python
Ensure(str(parsed["a"])).equals("1")
```

Cast with float:

```python
Ensure(float(parsed["a"])).equals(1.0)
```

Greater than:

```python
Ensure(parsed["a"] > 0).equals(True)
```

Less than:

```python
Ensure(parsed["a"] < 2).equals(True)
```

To get the actual int, use .data:

```python
Ensure(type(load(yaml_snippet, schema)["a"].data) is int).equals(True)
```

Cannot cast to bool:

```python
bool(load(yaml_snippet, schema)['a'])
```

```python
builtins.TypeError: Cannot cast 'YAML(1)' to bool.
Use bool(yamlobj.data) or bool(yamlobj.text) instead.
```

{{< note title="Executable specification" >}}
Page automatically generated from scalar-integer.story.
{{< /note >}}

---
title: Validating strings with regexes (Regex)
type: using
---

StrictYAML can validate strings against regular expressions and return the string.

If the regular expression does not match, an exception is raised.
```python
from strictyaml import Regex, Map, load, as_document
from collections import OrderedDict
from ensure import Ensure

schema = Map({"a": Regex(u"[1-4]"), "b": Regex(u"[5-9]")})
```

Parsed correctly:

```yaml
a: 1
b: 5
```

```python
Ensure(load(yaml_snippet, schema)).equals({"a": "1", "b": "5"})
```

Non-matching:

```yaml
a: 5
b: 5
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting string matching [1-4]
found non-matching string
  in "<unicode string>", line 1, column 1:
    a: '5'
    ^ (line: 1)
```

Non-matching suffix:

```yaml
a: 1 Hello
b: 5
```

```python
load(yaml_snippet, schema)
```

```python
strictyaml.exceptions.YAMLValidationError:
when expecting string matching [1-4]
found non-matching string
  in "<unicode string>", line 1, column 1:
    a: 1 Hello
    ^ (line: 1)
```

Serialized successfully:

```python
print(as_document(OrderedDict([("a", "1"), ("b", "5")]), schema).as_yaml())
```

```yaml
a: 1
b: 5
```

Serialization failure - non-matching regex:

```python
as_document(OrderedDict([("a", "x"), ("b", "5")]), schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
when expecting string matching [1-4] found 'x'
```

Serialization failure - not a string:

```python
as_document(OrderedDict([("a", 1), ("b", "5")]), schema)
```

```python
strictyaml.exceptions.YAMLSerializationError:
when expecting string matching [1-4] got '1' of type int.
```

{{< note title="Executable specification" >}}
Page automatically generated from regexp.story.
{{< /note >}}

---
title: Parsing strings (Str)
type: using
---

StrictYAML parses to a YAML object, not the value directly, to give you more flexibility and control over what you can do with the YAML.

This is what that object can do - in most cases, if parsed as a string, it will behave in the same way.
Example yaml_snippet:

```yaml
a: 1
b: yes
c: â string
d: |
  multiline string
```

```python
from strictyaml import Str, Map, load
from ensure import Ensure

schema = Map({"a": Str(), "b": Str(), "c": Str(), "d": Str()})

parsed = load(yaml_snippet, schema)
```

Parses correctly:

```python
Ensure(parsed).equals(
    {"a": "1", "b": "yes", "c": u"â string", "d": "multiline string\n"}
)
```

Dict lookup cast to string:

```python
Ensure(str(parsed["a"])).equals("1")
```

Dict lookup cast to int:

```python
Ensure(int(parsed["a"])).equals(1)
```

Dict lookup cast to bool is impossible:

```python
bool(parsed["a"])
```

```python
builtins.TypeError: Cannot cast 'YAML(1)' to bool.
Use bool(yamlobj.data) or bool(yamlobj.text) instead.
```

{{< note title="Executable specification" >}}
Page automatically generated from scalar-string.story.
{{< /note >}}

---
title: Using StrictYAML
---

How to:

- [Build a YAML document from scratch in code](alpha/howto/build-yaml-document)
- [Either/or schema validation of different, equally valid kinds of YAML](alpha/howto/either-or-validation)
- [Labeling exceptions](alpha/howto/label-exceptions)
- [Merge YAML documents](alpha/howto/merge-yaml-documents)
- [Revalidate an already validated document](alpha/howto/revalidation)
- [Reading in YAML, editing it and writing it back out](alpha/howto/roundtripping)
- [Get line numbers of YAML elements](alpha/howto/what-line)
- [Parsing YAML without a schema](alpha/howto/without-a-schema)

Compound validators:

- [Fixed length sequences (FixedSeq)](alpha/compound/fixed-length-sequences)
- [Mappings combining defined and undefined keys (MapCombined)](alpha/compound/map-combined)
- [Mappings with arbitrary key names (MapPattern)](alpha/compound/map-pattern)
- [Mapping with defined keys and a custom key validator (Map)](alpha/compound/mapping-with-slug-keys)
- [Using a YAML object of a parsed mapping](alpha/compound/mapping-yaml-object)
- [Mappings with defined keys (Map)](alpha/compound/mapping)
- [Optional keys with defaults (Map/Optional)](alpha/compound/optional-keys-with-defaults)
- [Validating optional keys in mappings (Map)](alpha/compound/optional-keys)
- [Sequences of unique items (UniqueSeq)](alpha/compound/sequences-of-unique-items)
- [Sequence/list validator (Seq)](alpha/compound/sequences)
- [Updating document with a schema](alpha/compound/update)

Scalar validators:

- [Boolean (Bool)](alpha/scalar/boolean)
- [Parsing comma separated items (CommaSeparated)](alpha/scalar/comma-separated)
- [Datetimes (Datetime)](alpha/scalar/datetime)
- [Decimal numbers (Decimal)](alpha/scalar/decimal)
- [Email and URL validators](alpha/scalar/email-and-url)
- [Empty key validation](alpha/scalar/empty)
- [Enumerated scalars (Enum)](alpha/scalar/enum)
- [Floating point numbers (Float)](alpha/scalar/float)
- [Hexadecimal Integers (HexInt)](alpha/scalar/hexadecimal-integer)
- [Integers (Int)](alpha/scalar/integer)
- [Validating strings with regexes (Regex)](alpha/scalar/regular-expressions)
- [Parsing strings (Str)](alpha/scalar/string)

Restrictions:

- [Disallowed YAML](alpha/restrictions/disallowed-yaml)
- [Duplicate keys](alpha/restrictions/duplicate-keys)
- [Dirty load](alpha/restrictions/loading-dirty-yaml)

---
title: What is YAML?
---

YAML is a simple, human readable format for representing associative and hierarchical data.
Example from the wikipedia page on YAML:

```yaml
receipt: Oz-Ware Purchase Invoice
date: 2012-08-06
customer:
    first name: Harry
    family name: Potter

    address: |-
        4 Privet Drive,
        Little Whinging, England

items:
    - part_no: A4786
      description: Water Bucket (Filled)
      price: 1.47
      quantity: 4

    - part_no: E1628
      description: High Heeled "Ruby" Slippers
      size: 8
      price: 133.7
      quantity: 1
```

Key features:

- Things which are associated with other things - delimited by the colon (:).
- Ordered lists of things - delimited by the prepended dash (-).
- Multi-line strings - delimited by the bar (|) if there is another newline at the end of the string, or bar + dash (|-) if not.
- Indentation describing the hierarchy of data.
- Maps directly to data types common to most high level languages - lists, dicts, scalars.

You don't need to know much more than this.

---
title: When should I use a validator and when should I not?
---

When starting out on greenfield projects it's much quicker not to create a validator. In such cases it's often more prudent to just parse the YAML and convert the strings explicitly on the fly (e.g. int(yaml['key'])).

If the YAML is going to be largely under the control of the developer, it might not make sense to write a validator either.

If you have written software that is going to parse YAML from a source you do *not* control - especially by somebody who might make a mistake - then it probably does make sense to write a validator.

You can start off without using a validator and then add one later.

---
title: Why avoid using environment variables as configuration?
---

The use of "environment variables as configuration" is recommended by "[12 factor](https://12factor.net/config)". While this is a common practice and often leads to few or no problems, calling it a best practice is a bit much.

The reason cited by the 12 factor website for using them is:

>The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms such as Java System Properties, they are a language- and OS-agnostic standard.

Two of these are indeed good reasons:

- Easy to change between deploys.
- Language and OS agnostic.

However, neither of these things requires that config be stored in environment variables. It's easy enough to create language and OS agnostic configuration files (INI, YAML, etc.) and it's usually straightforward to make files easy to change between deployments too - e.g. if a deployment is containerized, by mounting the file.

It is also less true that environment variables are inherently "easier" to change between deployments - writing a file is not intrinsically difficult unless it is *made* difficult (e.g. a file is baked in to a container image rather than being mounted).

Moreover, there are several disadvantages to using environment variables that tend to exhibit themselves nastily when the size of the configuration grows beyond a certain point.

## Environment variables are global state

Environment variables are a form of global state. Every variable is associated only with the environment.
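A small standard-library sketch of what this global, stringly typed state looks like in practice (illustrative only - the variable names are made up):

```python
import os

# Every value is a string, no matter what it "really" is.
os.environ["WORKER_COUNT"] = str(4)
assert os.environ["WORKER_COUNT"] == "4"   # a string, not the int 4

# The namespace is shared with everything else in the process, so an
# unlucky name choice silently shadows an unrelated variable.
os.environ["PATH"] = "/opt/myapp/config"   # oops - clobbers the real PATH
assert os.environ["PATH"] == "/opt/myapp/config"
```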
The variables will be shared with many other variables which have a multitude of different uses:

- The chance of variable cross-contamination is high - accidentally naming one variable the same as another which is unknowingly used for a different purpose (e.g. PATH) can have weird, hard-to-debug and sometimes severe effects.
- If you need to inspect environment variables - e.g. to find one that you thought was there but is actually missing - tracking it down is a pain.

Global state in and of itself isn't a "bad thing" but *too much* global state is a very bad thing. A small amount of configuration (e.g. less than 10 variables) can often be placed into environment variables with very little harm, but as soon as the amount grows, the danger of the global state grows.

## Environment variable values cannot handle structures more complex than a string

Environment variables are a set of key-value pairs where the key is almost always an uppercase string and the value is always a string.

While this is more than sufficient for many purposes, there are many kinds of situations where the configuration data that needs to be stored requires something a bit more complicated than just a string.

Where developers run into this limitation, they have a tendency to create an ugly substructure with cryptic codes within the string.
The way that LS_COLORS is used is a good example:

```
rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
```

Clear as mud, right? (This isn't actually an example of 12 factor being followed assiduously, but I have seen many examples just like it where it was.)

In [StrictYAML](https://hitchdev.com/strictyaml) the following could be represented as:

```yaml
# Special
di: 01;34    # directory is blue

# Extensions
*.tz: 01;31  # red
*.flv: 01;35 # purple
```

While the codes are cryptic and probably should be changed (e.g. directory: blue), their cryptic nature can at least be easily explained with comments.

## Creating naming conventions to handle the inability to handle hierarchy

A common example:

> PERSONNEL_DATABASE_HOST, PERSONNEL_DATABASE_PORT, PERSONNEL_DATABASE_NAME, PERSONNEL_DATABASE_PASSWORD, FACTORY_DATABASE_HOST, FACTORY_DATABASE_PORT, FACTORY_DATABASE_NAME, FACTORY_DATABASE_PASSWORD, HOTEL_BACKUP_DATABASE_HOST, HOTEL_BACKUP_DATABASE_USERNAME, HOTEL_BACKUP_DATABASE_PASSWORD, HOTEL_DATABASE_HOST, HOTEL_DATABASE_PORT, HOTEL_DATABASE_NAME, HOTEL_DATABASE_PASSWORD

Did you spot the accidentally missed variable in the list above which caused the critical bug?

[StrictYAML](https://hitchdev.com/strictyaml) version:

```yaml
database:
  personnel:
    host: xxx
    port: xxx
    name: xxx
    password: xxx
  factory:
    host: xxx
    port: xxx
    name: xxx
    password: xxx
  hotel backup:
    host: xxx
    name: xxx
    password: xxx
  hotel:
    host: xxx
    name: xxx
    port: xxx
    password: xxx
```

What about now?

---
title: Why not use HJSON?
---

{{< note title="No longer supported" >}}
HJSON is no longer supported.
{{< /note >}}

[HJSON](http://hjson.org/) is an attempt at fixing the aforementioned lack of readability of JSON. It has the following criticisms of YAML:

- JSON is easier to explain (compare the JSON and YAML specs).
- JSON is not bloated (it does not have anchors, substitutions or concatenation).

As with TOML's criticism, these are spot on. However, strictyaml fixes this by *cutting out those parts of the spec*, leaving something that is actually simpler than HJSON.

It has another criticism:

- JSON does not suffer from significant whitespace.

This is not a valid criticism. Whitespace and indentation are meaningful to people parsing any kind of code and markup (why else would code which *doesn't* have meaningful whitespace use indentation as well?) so it *should* be meaningful to the computers parsing it.
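The point is easy to see side by side. Even in a braced format, humans still indent the document anyway, so the delimiters end up restating what the indentation already says. A hypothetical config shown both ways (the key names are invented for illustration):

```
# HJSON-style: braces carry the structure, yet the document is
# still indented for the benefit of human readers.
{
  database: {
    host: localhost
    port: 5432
  }
}

# StrictYAML: the indentation readers already rely on *is* the structure.
database:
  host: localhost
  port: 5432
```

In the braced version the structure is effectively stated twice - once for the parser and once for the reader - and the two can silently disagree.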
There is an initial 'usability hump' for first time users of languages with significant whitespace *who were previously not used to significant whitespace*, but this isn't especially hard to overcome - especially if you have a properly configured, decent editor which is explicit about the use of whitespace. Python users often report this being a problem, but after using the language for a while usually come to prefer it, since it keeps the code shorter and makes its intent clearer.

---
title: Why not HOCON?
---

[HOCON](https://github.com/typesafehub/config/blob/master/HOCON.md) is another "redesigned" JSON, ironically enough, taking JSON and making it even more complicated.

Along with JSON's [syntax typing](../../why/syntax-typing-bad) - a downside of most non-YAML alternatives - HOCON makes the following mistakes in its design:

- It does not fail loudly on duplicate keys.
- It has confusing rules for deciding on concatenations and substitutions.
- It has a mechanism for substitutions similar to [YAML's node anchor feature](../why/node-anchors-and-references-removed.md) - which, unless used extremely sparingly, can create confusing markup that, ironically, is *not* human optimized.

In addition, its attempt at using "less pedantic" syntax creates a system of rules which makes the behavior of the parser much less obvious and edge cases more frequent.

---
title: Why not X?
---

There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML.
Below is a series of comparisons with some of the more famous ones:

- [Why avoid using environment variables as configuration?](environment-variables-as-config)
- [Why not use HJSON?](hjson)
- [Why not HOCON?](hocon)
- [Why not use INI files?](ini)
- [Why not use JSON Schema for validation?](json-schema)
- [Why not JSON for simple configuration files?](json)
- [Why not JSON5?](json5)
- [Why not use the YAML 1.2 standard? - we don't need a new standard!](ordinary-yaml)
- [Why not use kwalify with standard YAML to validate my YAML?](pykwalify)
- [Why not use Python's schema library (or similar) for validation?](python-schema)
- [Why not use SDLang?](sdlang)
- [What is wrong with TOML?](toml)
- [Why shouldn't I just use Python code for configuration?](turing-complete-code)
- [Why not use XML for configuration or DSLs?](xml)

If you'd like to write or link to a rebuttal to any argument raised here, feel free to raise a ticket.

---
title: Why not use INI files?
---

INI is a very old and quite readable configuration format for small configuration files. It is still used by many programs today and it has some advantages because of this - e.g. Python has an inbuilt parser for it (`configparser`). Unfortunately it suffers from two major problems:

- Different parsers will operate in subtly different ways that can lead to often obscure edge case bugs regarding the way whitespace is used, case sensitivity, comments and escape characters.
- It doesn't let you represent hierarchical data.

[TOML](../toml) is a configuration format designed to address these two concerns, although it also suffers from obscure edge case bugs.

---
title: Why not use JSON Schema for validation?
---

JSON schema can also be used to validate YAML.
This presumes that you might want to use jsonschema and pyyaml/ruamel.yaml together.

[ TODO Flesh out ]

- Line numbers
- Simpler errors in StrictYAML
- StrictYAML is a more flexible schema
- Turing incompleteness / inflexibility

---
title: Why not JSON for simple configuration files?
---

JSON is an *ideal* format for REST APIs and other forms of data intended for machine exchange and it probably always will be because:

- It's a simple spec.
- It has all the basic types which map on to all programming languages - number, string, list, mapping, boolean - *and no more*.
- Its syntax contains a built in level of error detection - cut a JSON request in half and it is no longer valid, eliminating an entire class of obscure and problematic bugs.
- If pretty-printed correctly, it's more or less readable - for the purposes of debugging, anyway.

However, while it is eminently suitable for REST APIs, it is less suitable for configuration since:

- The same syntax which gives it decent error detection (commas, curly brackets) makes it tricky for humans to edit.
- It's not especially readable.
- It doesn't allow comments.

---
title: Why not JSON5?
---

[JSON5](http://json5.org/) is also a proposed extension to JSON to make it more readable. Its main criticism of YAML is:

> There are other formats that are human-friendlier, like YAML, but changing from JSON to a completely different format is undesirable in many cases.

This is, I believe, mistaken. If you are going to use a different language, it is better that it is not subtly different: subtle differences invite mistakes brought on by confusion.
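One concrete way that confusion bites: a document written with JSON5 conveniences (unquoted keys, single-quoted strings, trailing commas) is rejected by a strict JSON parser, which is easy to trip over if the two formats are treated as interchangeable. A minimal stdlib sketch:

```python
import json

# Legal JSON5 (unquoted key, single-quoted string, trailing comma),
# but not legal JSON - the kind of "subtle difference" mistake described above.
json5_text = "{foo: 'bar', while: true,}"

try:
    json.loads(json5_text)
except json.JSONDecodeError as err:
    print("Not valid JSON:", err)
```

The same text pasted into a tool that expects plain JSON fails at the very first unquoted key.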
JSON5 looks like a hybrid of YAML and JSON:

```json
{
    foo: 'bar',
    while: true,
}
```

It has weaknesses similar to TOML:

- The noisiness of the delimiters that supplant significant whitespace makes it less readable and editable.
- The use of [syntax typing](../../why/syntax-typing-bad) is neither necessary, nor an aid to stricter typing if you have a schema.

---
title: Why not use the YAML 1.2 standard? - we don't need a new standard!
---

![Standards](https://imgs.xkcd.com/comics/standards.png "Fortunately the configuration one has been solved now that we have Strict uh... wait, no it hasn't...")

StrictYAML is composed of two parts:

- A new YAML specification which parses a restricted subset of the [YAML 1.2 specification](https://github.com/yaml/yaml-spec/tree/spec-1.2) and *only* parses to ordered dict, list or string.
- An optional validator (which will, as requested, validate and cast some of those scalar string values to ints, floats, datetimes, etc.).

Note that StrictYAML is *not* a new standard. If you have a syntax highlighter or editor or anything else that recognizes or reads YAML, it will recognize StrictYAML in the same way. While not all YAML output by other programs will be readable by StrictYAML (it is, after all, stricter), a lot will be.
The features removed from the YAML spec, and their rationales, are as follows:

- [Implicit Typing](../../why/implicit-typing-removed)
- [Direct representations of objects](../../why/not-parse-direct-representations-of-python-objects)
- [Explicit tags](../../why/explicit-tags-removed)
- [Node anchors and refs](../../why/node-anchors-and-references-removed)
- [Flow style](../../why/flow-style-removed)
- [Duplicate Keys Disallowed](../../why/duplicate-keys-disallowed)

---
title: Why not use kwalify with standard YAML to validate my YAML?
---

Kwalify is a schema validation language that is written *in* YAML. It is a descriptive schema language suitable for validating simple YAML. Kwalify compiles to the strictyaml equivalent but is able to do less. You cannot, for example:

- Plug in generated lists that come from outside of the spec (e.g. a list of country codes from pycountry).
- Validate parts of the schema which can be either one thing *or* another - e.g. a list *or* a single string.
- Plug sub-validators of a document into larger validators.

If your schema is very simple and small, there is no point in using kwalify. If your schema needs to be shared with a third party - especially a third party using another language - it may be helpful to use it. If your schema validation requirements are more complicated - e.g. like what is described above - it's best *not* to use it.

---
title: Why not use Python's schema library (or similar) for validation?
---

Python's 'schema' (as well as similar libraries) can also be used to validate the structure of objects. Validating YAML is even [cited as a reason on their README](https://github.com/keleshev/schema).
Using a schema for validation requires running the YAML through a parser first and then taking the output (usually a data structure like a dict) and passing it through the schema. Unfortunately there are a number of problems with this approach:

## You still have [the Norway Problem](../../why/implicit-typing-removed)

If the standard YAML parser parses 'NO' as false or [empty string as None](https://github.com/Grokzen/pykwalify/issues/77) then it doesn't really matter if the schema says an empty string or the text 'NO' is okay - it will be seeing a 'None' or a 'False', which will cause a failure.

## You can't get line numbers and snippets for the validation errors

Assuming you've successfully circumvented the Norway problem, parsing and feeding the output to schema is still problematic. If you pass a parsed dict to schema, schema can't tell which line number the failure happened on and can't give you a code snippet highlighting where it happened.

## Roundtripping becomes very difficult, if not impossible

Because the metadata about the parsed YAML is lost when it is fed into a generic schema validator, it also becomes impossible to *change* the data and serialize it without losing critical details (i.e. mapping ordering, comments or validation structures).

---
title: Why not use SDLang?
---

[SDLang](http://sdlang.org/) or "simple declarative language" is a proposed configuration language with an XML-like structure inspired by C.
Example:

```
// This is a node with a single string value
title "Hello, World"

// Multiple values are supported, too
bookmarks 12 15 188 1234

// Nodes can have attributes
author "Peter Parker" email="peter@example.org" active=true

// Nodes can be arbitrarily nested
contents {
    section "First section" {
        paragraph "This is the first paragraph"
        paragraph "This is the second paragraph"
    }
}

// Anonymous nodes are supported
"This text is the value of an anonymous node!"

// This makes things like matrix definitions very convenient
matrix {
    1 0 0
    0 1 0
    0 0 1
}
```

Advantages:

- Relatively more straightforward than other serialization languages.

Disadvantages:

- Syntax typing - leading to noisy syntax.
- The distinction between properties and values is not entirely clear.
- Instead of having one obvious way to describe property:value mappings, it has two (attributes and nested values).
- Niche.
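For comparison, the data in the SDLang snippet above has one obvious representation in StrictYAML-compatible YAML. This is a sketch - the grouping key names are chosen for illustration, since SDLang attributes and anonymous nodes have no single canonical YAML mapping:

```yaml
title: Hello, World
bookmarks:
- 12
- 15
- 188
- 1234
author:
  name: Peter Parker
  email: peter@example.org
  active: yes
contents:
  first section:
    paragraphs:
    - This is the first paragraph
    - This is the second paragraph
```

Here every property:value relationship is expressed the same way - as an indented mapping - rather than split between attributes and nested nodes.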
ëü¼rð4Ïíí"_,qôô ¦ib³©|çÉW‰%³¸KÚëhm±ÿDùBY’ˆ'³|ý‘]>5@s]¯ÛÎøTœï<ù*—ŸÇÉ7ÛM4–âÕ#Ýì7“3I’™í!LÓâõ7ÏÍæÊÎ IíMÕ¤2yt½}/€$I´4„¨ö{غº—CÃ4-NôŽRðr× +ÈJüòÕãŒFb„ü:Zjø¿þøAÜNûUË5M“£g†Øºº“Û6-£T2xáõ“}nŽ&KÑ;!_(Qð²¼£‰þâu6®hçwmBQdìšJ Êŧ?²…ΖZöëåè™!tÃD–%Æ'ãŒFbì>ÔMÏP^…™D†Ú`ºaÒXëçå7Nãu;X±° ͦrôôgúDZÍf:›Ž§9;0"Ë8ì6¶ÖÒPë§PÔ™˜Šãtظ}Ërâ© /¼~ °k6n\¿ˆ³ƒl[³‘H Ý4 ù<œîc"š g0ŒiZ86V/n¡µ!ÄûJRÔ v:Ë·®cÇ–el[³Ó4‹’˸{ûJ¼n;÷Ÿ¦otͦ²euÉLŽõËÛÏÍYØVG}µ¯rn]œ†V®wsšO;‘Hbšáp¤’OÛç«"•LUZÜÙL–t:ßår½‘$Ã0*ǧX,Î.—¨…p8•D!çóS¿ñD‚]¯ì&›»pÏrû¶­¬\±üŠëË’ÊlÙJ9A‡h6…bIGUärâÓä'Ïî'è÷ðÙne|2þëÝo>ÿ»¨ŠŒ®_ašêl¾n]7+é3•«^ïI•:J³ÉCLË"‘Êòã_íç³÷n¥Æïeh|(çü~yÿiF"3üé§w\2ø+èóàu;+ÇÛ¦(´5Vó…ÏßËa'–ÌÐPãG³©Üsã*>uÏòù"¦Un·o]ÎÚ¥m8ÑÇ×ÙÉÿùû÷c³)l\±€/|þnLÓ"™Î1‰ÑÕ;ŠaZœî'—/âu;ðy]X¦…4[ÏóéA•ÙŒiŠ,Wr™G¢ }ñøñ[ø‡È‹ï»Ckr&‰MQðW¹‘P…Â츃t6O&W@–%L LÓÂ2- ÓĦª(²„ªÈåºÉr¥5ÿÖsK„ æ,h˲LSSÃìŒA,Ãï÷ãpØY¸°£Üý›É —J¸\W¾'y=ª«­!Oð‹/Ó?8„¦Ù‘(w)«6•‹›Hñx‚sýýôœ;Çøø‡eñ¢E„BÁËÊõû|ì¸í–KrAÞã…’rþ ø-ÿjšJ._äDï‰t–þÑIâÉ,g'8Ó?Ad:ÉÞ£½¬^Ü2;z9Ìødœ“½£„|nV.jÆi·a×T}á A¿‡›Ö/fãòv^Þš'_>B4–¢¥.HkC€þÑIþç_â¿ÿÉCÔW_­Ç6­\À·ßCcM€—Þ8…,ËȲŒMUÊArbšL®À¹áÓñ4ÿúÌ^¸u-§ûÆ‹ÄYÞÙˆ,Ih6å’ÖiGK >“ç÷váu;é óŸï>zÓj¾ö“—h¬ñÓ3aé‚nÛ¼”§wEU‚>7N͆ªÊl[³×Žô°óÀâ©,ºnpû–åX–…,I¤3y<.ÛÖtÒÖ"_,ñÌžr÷÷ùc6UA–¥ò¿’„MQf¹ÌL"ÃHx†L®ÀÀèUnÝCaƦHeò¼qüK4Ð3fhbšéDš}ÇzYÒÞÀòÎF¾óäk4×øÌG· ª2½q¿|Ý0ؼ­k:Ù¾vAŸ›§wÅã´c&‹ÛêØ{´Y–+xª*‹§EáÌi>mUU±ÙlØl*»¯×‹ª*³ËÊ/§Ó‰Ëå̳¼^íÄãq6¬_Ç’Å‹hl¬ÇétPS[CC}]åVB"™¢¯¯§ÃACC=ÅR‰ššjª¼ÞËÊUU•*¯Ÿ¯ªò²Ûµw]/I—ÓNK}æº -õAšê´5†Ø¾v¹|‘@•›‡vlÀ² 6XE÷`˜’n°¨µŽ¢®ò{ÈŠ ŒEio¬Æëv`˜&­õAü^7«7žNârØéh®¡6TÅæ•Œ„g¼<|ß xÝŽJ…¼n'‹ÚêÐlW>g%I¢&à%“/"IòyÉå‹Ü¹m+61“̰iånÙ¸Ý0©ö{¨r;1d¦<¢¹¥>ˆÓ®QôÑÚª¤u94¶¬ê$–Ì"Ë÷ݲ† ÏCc­ŸEmõ„£ –w4rãºE¸­ AR™<¹B‰n[ÇÂÖ:j^Ö-me|2N°ÊÍý·®£6è¥wd’Ú`KÔãÐ4–u4P_íÃãr’Heiª 00:Åý·­£¹.@GK-5~:šk¨¯ö±tA#Ë:ˆ%³l_»Í+`Yå;-}#SÔ‡ü|îr·}Ÿž¡é\žå”t§C£µ¾|q¤È2m!ê‚UȲL[c5­õ!ÂÓInX³íkáqÙY¿¼L®€ª*|ú#[i¨öá÷ºio ÑX ½±š†?íÕ¸ïþ¼„ëÈò%\×2Ù_üÖ3ä %2¹Ÿ¼{3·oYvM¡˜ˆÆ9ØÕO6_¤¹.ÈÚ%­Ø5•Ÿ=]ý•ÚöÙ;ÄÈkAøA[¸îå %ÒÙ<MÅãrÌÛ^Ã0H¤s€D•ÇY¹¿/‡‡˜fH¸î9í6œöùÿX‘¢(}žw^Q„yK\Š ‚ ÂД’óÝùD!ãáËÞËår|õ¾F8©,ÓuþÁKò“G£Óü¿_ù*Ÿüì羬œ’$IŒFføý¿ù‡Nð¹ÿúmº'®ºþûIùNï½÷ò{Å’Îóûºøâ7Ÿáoþéi~¾ë(¹Bñ}•}¥÷Æ'ãüßÿüóÙǶ 
_(ñÕ}{ÿäïùɳûß±~¿î±ËJüÉßþ€çöžàOþöì;ÖûŽÛáßÖœ§æ4M“©©)‡Ù²y#ªª22:†¦i8lªŠË媤ì¼Þe2Yžñ%º{zùÈÝwÑÔÔ@]m-±XœþÖ¯_‹Ûí Ÿ/püÄ ~ôÓŸñ{¿û055!Z[ZüáïÿoD¾ôåKr˜†ÁððùB–æ&¼W˜9í­ó™_|1p%–eÍÎ-]žJS’ʳhY–E6_¤odŸÇIQ7hª à´ÛŸŒ16£Ú省© 87<ÉT¬œãY‘%–w6âth ŽE© V1žÁ²,·×£*2ÑSqkÔÏ&(–tz‡#†IgK-n§I’˜œI2<1M°ÊM[c5Š"SÒ †'¦I¤Ë OUnNôŒÐ;æÞ›Vc³)<úü!ZꃬZÜÂÐx”|Q'Xåº0 j&G¡¨¨r3“H³°µU‘ Ï™NÐT ¡ÆeÁøTŒþ‘)n\¿ûì n»?øøÍä‹%J¥w¾p-• z†ËÉ?¶ÖâœÑ.<`4£.XEs]€L®@d:I±dPô™I²°¥›"c˜&N‡†n˜8>Á ‡͜fùH&“‹%ìšVYnU^/‡»]ûÀ¦Nï,Ë"ŸÏsæìYÆ'ÂôöÃívQW[K*¦»§—×ö½N{[+^¯‡R©ÄÙî&&ÂôœëŲLZ[Ze»Ý^I q¾ÜÇŠX<ŽÛåbpx˜¿üŸSUUuI$I¢P,òÊî=¤Ó™ÊòíílX¿ö²:K’„ÃncåÂfU.V.lÆãrHçøÚ_F7 DzŽFºzFø£OÞÆºemLÅR9=È›ÝìYÒŠÃnC’$ìš­25êÛ±d†¯ÿtH ë’,ñWÿéç¾Âê%-ô F¸cërjCUüÃ_Äçq¡ª2ÉtŽÏ~t+÷ܸŠ%íõ4VûYÒ^OÐçþÎZA>Hs +Ýq¹3±8 õuÄâñÊû²,“J¥H¥R”J%ÚÚZqÎæÛ¾žI’D(dÝÚ5TWWó©¬ò^kK3¿õà>rÓ,_Çͦè:ušÏ|òo[öäT”}û÷³fÕ*lšÁANœ<Å7l»l]Y’¨«­Åç+T–ü¾«–íó¸ø÷¿}3v›ZÎŽe·Ñ3f$2Ã_ÿ‡Pd‰×Žô`š»Ê–Õ•$‡O rû–åܼa ë—µñåï>Çï>°½’D–eª\þâswc·Û°,xjçQ %Y’Èä ¼ôÆIªýöíeûÚEØT…C'8;0AÈï!‘ÊQðpó†ÅÔ×øHgóüb÷1V,lFUdG§xíH7%Ý wx’?øø-üjÏqB~¥’Á¾7ÏqïÍk¸{ûJ¢³Á¼d˜,YÐ@]°Š±©‹ÛꚈ²¨­–-«:Ðl*Ù|‘Ó}ãlXÞκ¥mÔ‡|=3„ù>ºúÏL™Iò¥?ûªªp¢{„’nðÔΣܾu9Ÿ¸{3;÷ŸâÅ×OqÏöU4Õ¸uóR^=Üͦ "IôÉÛ°Ûlüñ§vTZü‚ \;æä¯R’$ à ™D–$’ÉÅb‰x"AÀï§½­ M+·4úúúI¥Ò"h_Á{I½ùvÝØ’$Q*•pØí¬\± ¿ßϲ¥Kèèè¸jY©TŠl._YVå½úL\²,U’@¸œöò8]G™íFvh*žÙå¯éáØ™!>q×F†ÆCf9ëX*“ç»O½FGs ÷Þ¼u¶+צʬ^ÚŠÓ¡UÒkæ %V.jfóªÖ.mÃïu’Íñºl\ÑŽÇå`Êv:[jq94¾ðù{8Þ=Ì×ÙÅ-—póÆ%¨ŠÂúe­´Ö‡X»¤€ÏÅÎgÈЍŠÌT,ÅÂÖZìšbIÇëv È2Á*Š\¾àp;54MÅ1›G7L^ØwŠîÁ0ÞºŽúj_eÿÞ˽õóiHKº]+ÿ góE욊:{<—w6b³)$³y‚>àq:*éG] ‡fîÙÐ4•t®|ær”ó»‘´C®Es6M’$?.—Ë2+ùŸ-Ëb|b‚D"I*•¦XÒÑ4ñr1»ÝÁÄx˜³Ý=ŒŽŽa&áp„á‘2™ ãããD"“8vb±§Ïœe`p]7ˆF£ Œ’NgŸ`"ÆïóðûÉd²¨ªBï¹>o“å«üY™½ÞÛ>ø½n Ãäx÷0ºú›ŠaY}£S4Õú<$ÒYÉ,™\Gž?@ïP˜­«;˜˜Œ“Éžoå—ÓMž¿Q™µK[‰L'°k6F#åÔ“å,Xò…`Ñ3F³©ŒEb OLsû–ål[ÓI÷`»M¥½©š™D›MáÜH»MESšêì9Ü$A"•cé‚ÚB:9@×¹QŽ®b¸p$QÎ'Ý;¦£¹†*ƒD*G,‘!—/2<1]N‰™Í386ÅtvÇn\¿x®«+‡Úœ0-‹‘‰Þ8~Ý0X¹°™5KZ®é´˜–eQ*LÅS8ì6^7²,‘ÊäyõH7Óñ4Mu¶®îÄí¼v/>A˜¿DÐæLù¾8@y@ݵ°ßŽeY”wãÂÀÀùº/‚ \ÛÄ3œ)j˜ïC¡$I*ïÔ‚ ü†‰Ñ]‚ ‚0Oˆ -‚ ó„Ú‚ ‚0Oˆ -‚ óÄ5´E^_AA¸Üœ·,‹l6Ëää­­-(ŠB©¤‹Å(–Џ\.ü>ŸHÍ)‚ \÷æ¼¥máÈ$£cc˜fyòHd’L6‹Óá$—ÍUr> ‚ Âõl΂v9S‘ÉÔT‡ÃŽËéª,ŸžŽ‚e1=3bÞqAA`Ž[Úñx‚b©DmM ’|abŠ|¡€Û㦹©‘x,A*•šÃZ ‚ µaÎîiF9Ed>_ 
“ÉIg¥©±‡ÝÓQž2Õn×(•ô¹ª¦ ‚ \3æ,h+ŠBGÇ,ËÂ0 r¹ õ hšF($™ÄãqS*•p»]sUMAA¸f\ CLÓdff†@ 0;z¼D"‘D7 Ü.'n·[Ü×A®{×Dо¸ ’$]öœ¶È˜$‚ ×ÀsÚpyPAZA.'úœAažA[Aæ ´AažA[Aæ ´Aaž¸&F_̲,1zümX–E©Tbj*ŠiYü~<7–e‘H$H¥Ò¨6•êP56Û5÷ñ^sÞËùfšéx‰RÁÀ]eÃá¾p|MÃ"(á®RQÔ÷v-lY™L†x<,Ë„BAìv;¦iNS(°;ìÔÖÔ¼§rAøð™Ó–¶eYX–…®ëÄã LÓ ‹1==S~ÍÌP*•沚×I’矾ùm^xéeÆÆÇò´°¿øÕs<òØã¼±ÿ ¹\vŽkúÎÞMÞô·[Dz T4™žÈýµH'J¤âÅw\Ó4,N&xå±QzŽÅ/y¯˜78ºk’lêÒ)wßZË‚T¬H&YºdѱqvíÞÃ7ÿå;ô  ë:GŽãg=ÁôÓ÷¹‚ |˜ÌySÌ4M&Âa†††Ù²yªªR(1 Ã0˜‰ÅèììÀf³ÍuU¯ –e‘ÏåQU•îû(Á@ ²<ްaý:n¸a+šÍF±`œ."IÏøBnŸ ðˆE óNНZC–%R±² Ù”N1oP×êBµÉĦ äÒ:6»L°Î$Át8e‚Ý©Ïêx.¯J6¥“œ.‚þj;NÏ•O1˲(ä âSLÃÂã×ðlX$¢²)U“ ÔÚQm2‰h¡¼ïÍ¡¬sP*ôOÒµ/Êö¨ Úñlè%“X¤@©hâòªøBv$¹| âS,Ó¢*X>ù¬ÎÁçà ÁÒM5Ž«ÖYQ%–m›ÌW‚îùýˆNähYâÁîT*ËsiD´HT5\U*¹´Î¾_ŒãöÙX¸ÚO ÎŽÝ©°°³ƒ¶Ö¾ôå¯Ig°Ùlì¸õÜn7/¾´ó>“A˜æ,hŸo$S)JÅš¦å–d]]-“SSø}>\Nç\Uóš¥( »U½ô#Ô4 ‡ÝŽeYÄ"9žùæ‹7øÉ¥u 9ƒ»n#:–cÿ³aB¢ã9¶ßßH}»›7_":šCQ%lv™l²tvýl„@ƒ™‰<«oª¦±ÃÍS_ÕU æÕN¶ÝÛÀž'ÆpzT,ËÂåQÙtWÝ»‹ 9ƒÃ/EÊ ŠD*Vä#¿×N.­³ûñQB 2 Ú'kn®æ•ÇF‘ ÏFx(Ë=ŸkÃîT8•`r4ǹã Ú—yñl ŸMsüÕ)ªBñ©7=Ô„¯ZãàóarË‚BÖàîßm#›Ò8•Ds*h…%ëå«mI’Pmò%û#IÙ”NÏÑ8gÇøÔŸ/"Ô褘7Øÿ\˰ ›CaÛ½ ¤ã%N%ñרQ ÍÄáRQUEQPäKËÖ4 »¦‰[F‚ sÜÒ. 
ÌLÏP__G<‘¸ä½b±|ß¶³c˜wü"ïõË»˜7ذ£E•yücz"‡7`cÕö6‡Lx0ÃäH–†n4»L:QâÁ?êÀåQQl2…¬ÁÊBØì2ùŒÎЙ$ n4‡ÂÊB¼úÔkn©§çH½d25šcãµÔ·»0 ë’”«‹E ŒõeØñ©ª‚ãUâìáÁ:7=ÔDx(ËÞŸ³hÓ°h_^ŲÍAžÿþ ±Hž…kýt®öQÌÜüPç¿VcŶH¦'r˜†ÅähŽ;?ۊí,o/Pç m™_µ ;j‘ÞÇ©¨µ³õ#õŒ÷g8ßnèÑñ«|´.­š½wÕMNZ—xiYìeå !D,὘³hhš&£cãX@"‘¤P(‹Ç1 €ééiÜ.‡C´2~ šSÁí³¡9dlv‰bÎàÌ¡ÇvO1)P*Zåà ȲDS§_ÈŽÍ® I0Ò“âà ¢ãyŠyC/‡%›]ÆéVP5‡k¶eíU¹çó­Œõ¥yù§#t훦T,Sxë½ÝBÞ@’ÀéVÐ2MnU&/QÒPT—GÅ2- ÝDµÉxªMFÕ £\ž$IH’„¬H •ïwí¦k_”ÄT½TÞ¿BÞ@Q$ì.»S¡±Ãƒj“ËASII¾pQt~¼ÅyWº·~Éœùryûç9Ý*;>ÙB"Zd×£#Ý9I©h–·sÉKœÛ‚ ¼{s´%I" Råõ ËråËÊ=™JáóùÄ—Ú¯)—Ò™çɦu²i·ßFx0Ã’ Vo¯Æ(™˜&•¢¬\8Þ–ƒšyXk ªMÂ4¹°2—Ä)Òñá¡,;>ÕÂíŸja´7M.U¾÷;9å;ßûA¥GÅ㳡ë‰h‘L¢Ä®GGI'ŠÔµ»=—¦T4˜žÈa³ËØ]Wï’‰BÎ@/šè%Ó°ˆ gY±5È’ò2³¼½BÎ +’ŠÙõè(™D¹n’T.Ã4,ô’Y ÆgÎtóÈ£“Íæf‡…Å[–YÖEË2©¶Ý×ÀŸnaj,G.=;Hí*ÛAx'sÖ=.Ë2€(|N§Óü~EA×u;Nq/û2ïõ ÞíS9ñZ”èXŽMwÖSÓäbÝmµy9ÂôDžwÖ1t&I>«ã«¶_ % Vmñê“ãìzt”ÎÕ>zŽÆ‰Oåiìpcw©4v¸qyTj[]8Ü*¥‚ÉÓÿÜHl¼³_È@©TdrrC/÷¤ëÜô`#_ŠPÊ,ZëÇ_mÇ»M#/òô?÷áòÚ¸õã͸¼6êZ]¸¼*²µ-NÜ>’$Ñ´ÐM×>™ŸþÝ,\ãgÓu¬¿­†®}Ó„‡²l¼£–‘žK6úÙv_{ŸG×M–mâ ”ËXºÑÏÞŸOðXO/kn®aɆ²,‘Ë瘊F1Írû»’Û=I&©#ѱ[î©§ûhŒÑÞ4ÅœÁ ?¢¡ÍÍͿ݄i³ß Vm¯Æ*ß›^¾9ÀþçÂôu%ذ£–ŽU>ÑM.»rͤæ<ÿ¼ìùÔœÿ,\êô™³üËw¿Ï§>ñ1-ì$ R*•øûÿõ464°yÓFÚZ[ˆGtžýî Ÿÿëe˜&(Š„$—¯aX•îYË,–²Ê=Ùoim—»—¡Ü}nšåûÔççü}kËY.78MÃiv{õž†ªª—tA›†…e]¨[yÝÙ.{éB]LäÙ®lË´@’g÷å|²,UîI—÷OB’Êå)çË9¿/Õí’2©r\LÓÄ4MevD¸Y.ëbåãf]ÜЮÔÛ²f×Ëþ]i{3±a¾ûýðÙO}’×cCÃÃ8x˜7ŸàË_úâ¯}î‚0¿Íù#_ÀeÁYë·WSSͺµkèï$  ‘e™oØÆ¹¾~z{ÏÑP_‡Çï`ýŽZdEFV.ü¾$I¨êEÇw6˜H —‘$ å¢u•óËfÿ½ø½òúåàùV²,_6 ð­e_XWº‡òÊ'ÃIDAT¬ åâú+—ž+W*ãâý»¸¬+­{õz\ZgI¹ô‚æÂŠW>W¯v,Þº=Ó4™™‰qêô6mÜ@cS#Pîê¤X,róMÛ¯¸ A®/×DK[A„w&ž¥A„yBmAA˜'DÐA„yBmAA˜'DÐþ™Ïuü[Õ}>#A®osþÈ×ùg²K¥ ICÊ?ëX–‰Íf«<'+\ÝèØ8{^}\.ÇÇ~û!~ÿ\W©b:žæ‰—sû–å¼øúI¼m=µ—×O’$NöŽÒ72Ń;Ö} u0M“—÷ŸF7 ìš|¾Ä½·¬ù@·!‚ð›vM´´ãñ8g»{*óŽG£ÓŒŽGŸ˜ù´/rñœØ·}U^–-]¡#GÉd2Wü½¹’+”xýÍsÄ’^=ÒM*›¿âz–e141Í®¾+¾w5—笾üøXÀ™þ º#t†930ñ>öDanÍiKÛ²,òùÓ312™L% E"“465à°;Âçó‰|Ú³ŠÅÏ>ÿ‡EQdî¿÷#lX¿¯×ËÒ%‹+½¥R‰ç_x‰×÷@’$>þ±ßbýÚ5ÌÌÌð“Ÿ=Æää55Õ<ôÀ}456räè1^xi'º®³lé>ñ±ßºb†5Ó4™ŠF+Y.§ ¿ßwÅ:Û5•¦Ú—ƒÆš.»‰©8ß~bšMåÜp„MüÇÏÞÀÀØÿíëO11çslgûÚ…œèá‘ç’HgYº ß{èF†Æ§yìÅCx]N÷sÿ­kyèöõŒOÆùÞÓ¯Ž&hª ð;÷m£­!D Ê…fS0-¨r‹ùA˜æ4h†A8! 
J¥€riU•—h4ЦÙQûEèzfY‡åð‘£üÕþs¢Ó3_ P(ðòÎWX·v Û¶l¢ëÔit]¿ä" Rn¡À3¿øñD²²lͪÜuçW¬w•ÛÉïÜ»•溟p;Õ/S±]=£|þ¡íüÞƒ7ò…¿{„³³­_Ã0ùãOÞÆ‰žžxé0+:ùÎS¯q× +¹qí"¾ø­gØw¬—Ïá“|ý¿þ.÷ߪó7ÿô47o\ÂO~õ¦iñû¿}3?å¾p/|înÙ¸UQ0-ë’‚ óÅœm˲ˆF£Ø5 ·Û]žÙ,g<Ê ¸ÝnìšF>Ÿ§T*a·Ûçªª× Ë²èë ½½ªª*ªªªhiiƦªWì".é:Ã##ŒOLPÈ*Fímm´··ñÕø ÚÛxèÁûÐ4Ûn»…_|™W_Û˦P”ËO˲p:<üÙO_²MU½ú©¤Ù–u”§æ\µ¨¹²Üçu²lA#AŸ›ºê*ÂSq:šk©¯ö19Ä0MÒÙ"‰T–ΖZªTOML is a bad file format. **It looks good at first glance**, and for really really >trivial things it is probably good. But once I started using it and the >configuration schema became more complex, I found the syntax ugly and hard to read. Despite this, PyPA still went ahead and used TOML for PEP-518. Fortunately pyproject.toml *is* fairly trivial and appears just once per project so the problems he alludes to aren't that pronounced. StrictYAML, by contrast, was designed to be a language to write [readable 'story' tests](../../../hitchstory) where there will be *many* files per project with more complex hierarchies, a use case where TOML starts to really suck. So what specifically *is* wrong with TOML when you scale it up? ## 1. It's very verbose. It's not DRY. It's syntactically noisy. In [this example of a StrictYAML story](https://github.com/crdoconnor/strictyaml/blob/master/hitch/story/map.story) and [its equivalent serialized TOML](https://github.com/crdoconnor/strictyaml/blob/master/hitch/story/map.toml) the latter ends up [spending](https://www.goodreads.com/quotes/775257-my-point-today-is-that-if-we-wish-to-count) **50% more** characters to represent the exact same data. This is largely due to the design decision to have the full name of every key being associated with every value which is **not** [DRY](../../../code-quality/least-code). It is also partly due to the large numbers of syntactic cruft - quotation marks and square brackets dominate TOML documents whereas in the StrictYAML example they are absent. 
Shortening program lengths (and DRYing code), all other things being equal, [reduces the number of bugs significantly](https://blog.codinghorror.com/diseconomies-of-scale-and-lines-of-code/) because maintenance becomes easier and deriving intent from the code becomes clearer. What goes for Turing-complete code also applies to configuration code.

## 2. TOML's hierarchies are difficult to infer from syntax alone

Mapping hierarchy in TOML is determined by dots. This is simple enough for parsers to read and understand, but this alone makes it difficult to perceive the relationships between data. This has been recognized by [many](https://github.com/leereilly/csi/blob/567e5b55f766847c9dcc7de482c0fd241fa7377a/lib/data/master.toml) TOML [writers](https://github.com/CzarSimon/simonlindgren.info/blob/a391a6345b16f2d8093f6d4c5f422399b4b901eb/simon-cv/config.toml) who have adopted a method that will be quite familiar to a lot of programmers - indentation that the parser ignores:

[![Non-meaningful indentation](toml-indentation-1.png)](https://github.com/gazreese/gazreese.com/blob/c4c3fa7d576a4c316f11f0f7a652ca11ab23586d/Hugo/config.toml)

This parallels the way indentation is added in *lots* of programming languages that have syntactic markers like brackets - e.g. JSON, Javascript or Java are all commonly rendered with non-parsed indentation to make it easier for humans to understand them.

But not Python. Python has long been a standout exception in how it was designed - syntactic markers are *not* necessary to infer program structure because indentation *is* the marker that determines program structure.
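StrictYAML inherits this property from YAML. A contrived snippet (not taken from the projects linked above) makes the contrast concrete - a TOML parser discards leading whitespace entirely, so the layout below lies about the structure without any complaint:

```toml
[server]
    [database]          # indented as though nested under [server]...
    host = "localhost"  # ...but parsed as a separate top-level table
```

In StrictYAML the indentation *is* the structure, so the equivalent nesting can only be written one way:

```yaml
server:
  database:
    host: localhost
```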
This argument over the merits of meaningful indentation in Python has been going on for decades, and [not everybody agrees with this](https://www.quora.com/Do-you-think-that-indentation-in-Python-is-annoying), but it's generally considered a good idea - usually for [the reasons argued in this stack exchange question](https://softwareengineering.stackexchange.com/questions/313034/why-should-a-language-prefer-indentation-over-explicit-markers-for-blocks):

1. Python inherited the significant indentation from the (now obsolete) predecessor language ABC. ABC is one of the very few programming languages which have used usability testing to direct the design. So while discussions about syntax usually come down to subjective opinions and personal preferences, the choice of significant indentation actually has a sounder foundation.

2. Guido van Rossum came across subtle bugs where the indentation disagreed with the syntactic grouping. Meaningful indentation fixed this class of bug. Since there are no begin/end brackets, there cannot be a disagreement between the grouping perceived by the parser and the human reader.

3. Having both symbols delimiting blocks and indentation violates the DRY principle.

4. It does away with the typical religious C debate of "where to put the curly braces" (although TOML is not popular enough to inspire such religious wars over indentation... yet).

## 3. Overcomplication: Like YAML, TOML has too many features

Somewhat ironically, TOML's creator quite rightly [criticizes YAML for not aiming for simplicity](https://github.com/toml-lang/toml#comparison-with-other-formats) and then falls into the same trap itself - albeit not quite as deeply. One way it does this is by trying to include date and time parsing, which imports *all* of the inherent complications associated with dates and times.
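For instance, a compliant TOML parser is obliged to turn a value like this one (the offset date-time form from TOML's own documentation) into a fully-fledged date-time object, time zone offset and all:

```toml
last_updated = 1979-05-27T07:32:00-08:00
```

StrictYAML would instead hand the same value over untouched, as the string "1979-05-27T07:32:00-08:00".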
Dates and times, as many experienced programmers are probably aware, are an unexpectedly deep rabbit hole of [complications and quirky, unexpected, headache and bug inducing edge cases](https://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time). TOML experiences [many](https://github.com/uiri/toml/issues/55) [of these](https://github.com/uiri/toml/issues/196) [edge cases](https://github.com/uiri/toml/issues/202) because of this.

The best way to deal with [essential complexity](https://simplicable.com/new/accidental-complexity-vs-essential-complexity) like this is to decouple, isolate the complexity and *delegate* it to a [specialist tool that is good at handling that specific problem](https://en.wikipedia.org/wiki/Unix_philosophy) which you can swap out later if required. This is the approach that JSON took (arguably a good decision) and it's the approach that StrictYAML takes too.

StrictYAML the library (as opposed to the format) has a validator that uses [Python's most popular date/time parsing library](https://dateutil.readthedocs.io/en/stable/), although developers are not obliged or even necessarily encouraged to use it. StrictYAML parses everything as a string by default, and whatever validation occurs later is considered to be outside of its purview.

## 4. Syntax typing

Like most other markup languages, TOML has [syntax typing](../../why/syntax-typing-bad) - the *writer* of the markup decides if, for example, something should be parsed as a number or a string:

```toml
flt2 = 3.1415
string = "hello"
```

Programmers will feel at home maintaining this, but non-programmers tend to find the difference between "1.5" and 1.5 needlessly confusing.
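The schema-driven alternative can be sketched with a toy converter map in plain Python (purely illustrative - this is not StrictYAML's actual API):

```python
# Every value arrives from the markup as a string; the schema
# alone decides what type each one should become.
schema = {"flt2": float, "string": str}
raw = {"flt2": "3.1415", "string": "hello"}

typed = {key: schema[key](value) for key, value in raw.items()}
print(typed)  # → {'flt2': 3.1415, 'string': 'hello'}
```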
StrictYAML does not require quotes around any value to infer a data type, because the schema is assumed to be the single source of truth for type information:

```yaml
flt2: 3.1415
string: hello
```

In the above example it just removes two characters, but in larger documents with more complex data, pushing the type parsing decision to the schema (or assuming strings) removes an enormous amount of syntactic noise.

The lack of syntax typing, combined with the use of indentation instead of square brackets to denote hierarchies, makes equivalent StrictYAML documents 10-20% shorter, cleaner and ultimately more readable.

## Advantages TOML still has over StrictYAML

There are currently still a few:

- StrictYAML does not currently have an "official spec". The spec is currently just "YAML 1.2 with [features removed](../../features-removed)". This has some advantages (e.g. YAML syntax highlighting in editors works just fine) but also some disadvantages (some documents will render differently).
- StrictYAML does not yet have parsers in languages other than Python. If you'd like to write one for your language (if you don't also do validation it actually wouldn't be very complicated), contact me - I'd love to help you in any way I can, including doing a test suite and documentation.
- Popularity.

strictyaml-1.6.1/docs/public/why-not/turing-complete-code.md000066400000000000000000000040131415074265300241020ustar00rootroot00000000000000---
title: Why shouldn't I just use Python code for configuration?
---

It is relatively common for many applications to avoid the use of a different markup language for configuration and simply allow configuration to be done using code. One famous and unapologetic example of this is [Django](https://docs.djangoproject.com/en/1.10/ref/settings), which requires all configuration to be in a "settings.py" or similar file.
This seems like a great idea from the outset - Python is more flexible than any configuration language, so, for instance, if you wanted to use a list comprehension or read a file or call an API to fill a value, you can.

However, with this flexibility come many traps and unsightly pitfalls. The Django pitfalls in particular are [cogently summed up by Ned Batchelder on his blog](http://nedbatchelder.com/blog/201112/duplicitous_django_settings.html) - pitfalls which have been the cause of countless bugs over the years.

The language expressiveness trade-off applies at every level in code:

- [We need less powerful languages](http://lukeplant.me.uk/blog/posts/less-powerful-languages/).
- [Rule of least power (wikipedia)](https://en.wikipedia.org/wiki/Rule_of_least_power).
- [Principle of least power by Tim Berners Lee](https://www.w3.org/DesignIssues/Principles.html#PLP).
- [Principle of least power by Jeff Atwood (coding horror blogger / stack overflow founder)](https://blog.codinghorror.com/the-principle-of-least-power/).

A good way of refactoring, in fact, is to take a large chunk of Turing-complete Python code that *can* be transformed directly into StrictYAML with no loss in expressiveness and to transform it - for example, a list of translation strings, countries or parameters. It also makes it easier to have the markup generated by another program or a templating language. While you technically *can* do this with Turing-complete code, it will often lead to a debugging nightmare - [just ask C++ programmers](https://stackoverflow.com/questions/622659/what-are-the-good-and-bad-points-of-c-templates)!

strictyaml-1.6.1/docs/public/why-not/xml.md000066400000000000000000000025261415074265300206630ustar00rootroot00000000000000---
title: Why not use XML for configuration or DSLs?
---

XML suffers from overcomplication much like vanilla YAML does - although to an even greater degree, thanks to its committee-driven design.
Doctypes and namespaces are horrendous additions to the language, for instance. XML is not only not really human readable (beyond a very basic subset of the language), it's often barely *programmer* readable, despite being less expressive than most Turing-complete languages. It's a flagrant violation of the [rule of least power](https://en.wikipedia.org/wiki/Rule_of_least_power).

The language was, in fact, *so* overcomplicated that it ended up increasing the attack surface of the parser itself, to the point that it led to parsers with [security vulnerabilities](https://en.wikipedia.org/wiki/Billion_laughs).

Unlike JSON and YAML, XML's structure also does not map well on to the default data types used by most languages, often requiring a *third* language to act as a go-between - e.g. either XQuery or XPath.

XML's decline in favor of JSON as a default API format is largely due to these complications and the lack of any real benefit drawn from them. The associated technologies (e.g. XSLT) also suffered from design by committee. Using it as a configuration language will all but ensure that you need to write extra boilerplate code to manage its quirks.

strictyaml-1.6.1/docs/public/why/000077500000000000000000000000001415074265300167365ustar00rootroot00000000000000strictyaml-1.6.1/docs/public/why/duplicate-keys-disallowed.md000066400000000000000000000012631415074265300243320ustar00rootroot00000000000000---
title: What is wrong with duplicate keys?
---

Duplicate keys are allowed in regular YAML - as parsed by pyyaml, ruamel.yaml and poyo:

```yaml
x: cow
y: dog
x: bull
```

Not only is it unclear whether x should be "cow" or "bull" (the parser will decide 'bull' - but did you know that?); if there are 200 lines between x: cow and x: bull, a user might very likely change the *first* x and erroneously believe that the resulting value of x has been changed - when it hasn't.
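The silent "last key wins" behavior is easy to reproduce with a plain Python dict - which is essentially what such parsers build internally (a standard library illustration, not strictyaml code):

```python
# Duplicate keys are silently collapsed: only the last value survives,
# mirroring what most YAML parsers do with the document above.
parsed = dict([("x", "cow"), ("y", "dog"), ("x", "bull")])
print(parsed["x"])  # → bull
```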
In order to avoid all possible confusion, StrictYAML will simply refuse to parse this and will *only* accept associative arrays where all of the keys are unique. It will throw a DuplicateKeysDisallowed exception.

strictyaml-1.6.1/docs/public/why/explicit-tags-removed.md000066400000000000000000000016021415074265300234730ustar00rootroot00000000000000---
title: What is wrong with explicit tags?
---

Explicit tags are tags that have an explicit type attached, which is used to determine what type to convert the data to when it is parsed. For example, if it were applied to "fix" the Godfather movie script parsing issue - where Clemenza's unquoted "Yes" would otherwise be implicitly parsed as a boolean - it would look like this:

```yaml
- Don Corleone: Do you have faith in my judgment?
- Clemenza: !!str Yes
- Don Corleone: Do I have your loyalty?
```

Explicit typecasts in YAML markup are slightly confusing for non-programmers, much like the concept of 'types' in general. StrictYAML's philosophy is that types should be kept strictly separated from data, so this 'feature' of YAML is switched off. If tags are seen in a YAML file, StrictYAML will raise a special TagTokenDisallowed exception.

## Counterpoints

- [Valid usage in AWS cloudformation syntax?](https://github.com/crdoconnor/strictyaml/issues/37)

strictyaml-1.6.1/docs/public/why/flow-style-removed.md000066400000000000000000000023601415074265300230250ustar00rootroot00000000000000---
title: What is wrong with flow-style YAML?
---

Flow style is essentially JSON embedded in YAML - making use of curly brackets { } and square brackets [ ] to denote lists and mappings. Example:

```yaml
a: 1
b: {c: 3, d: 4}
```
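For contrast, the same document written in indented block style - the only style StrictYAML accepts by default - reads:

```yaml
a: 1
b:
  c: 3
  d: 4
```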
The *first* question in the FAQ of pyyaml actually subtly indicates that this feature wasn't a good idea - see "[why does my YAML look wrong?](http://pyyaml.org/wiki/PyYAMLDocumentation#Dictionarieswithoutnestedcollectionsarenotdumpedcorrectly)". To take a real life example, use of flow style in [this saltstack YAML definition](https://github.com/saltstack-formulas/mysql-formula/blob/master/mysql/server.sls#L27) which blurs the distinction between flow style and jinja2, confusing the reader. ## Parsing 'dirty' YAML with flow style To parse YAML with flow style, you can use [dirty load](../../using/alpha/restrictions/loading-dirty-yaml). ## Counterarguments - - strictyaml-1.6.1/docs/public/why/implicit-typing-removed.md000066400000000000000000000104171415074265300240440ustar00rootroot00000000000000--- title: The Norway Problem - why StrictYAML refuses to do implicit typing and so should you --- A while back I met an old coworker and he started telling me about this interesting bug he faced: "So, we started internationalizing the website by creating a config file. We added the UK, Ireland, France and Germany at first." ```yaml countries: - GB - IE - FR - DE ``` "This was all fine. However, one day after a quick configuration change all hell broke loose. It turned out that while the UK, France and Germany were all fine, *Norway* was *not*..." "While the website went down and we were losing money we chased down a number of loose ends until finally finding the root cause." "If turned out that if feed this configuration file into [pyyaml](http://pyyaml.org):" ```yaml countries: - GB - IE - FR - DE - NO ``` "This is what you got in return:" ```python >>> from pyyaml import load >>> load(the_configuration) {'countries': ['GB', 'IE', 'FR', 'DE', False]} ``` It snows a *lot* in False. 
When this is fed to code that expects a string of the form 'NO', the code will usually break, often with a cryptic error - typically a KeyError from trying to use False as a dict key when no such key exists. It can be "quick fixed" by using quotes - a fix for sure, but kind of a hack - and by that time the damage is done:

```yaml
countries:
- GB
- IE
- FR
- DE
- 'NO'
```

The most tragic aspect of this bug, however, is that it is *intended* behavior according to the [YAML specification](https://github.com/yaml/yaml-spec/tree/spec-1.2). The real fix requires explicitly disregarding the spec - which is why most YAML parsers retain this behavior. StrictYAML sidesteps the problem by ignoring those parts of the spec, in an attempt to create a "zero surprises" parser.

*Everything* is a string by default:

```python
>>> from strictyaml import load
>>> load(the_configuration).data
{'countries': ['GB', 'IE', 'FR', 'DE', 'NO']}
```

## String or float?

Norway is just the tip of the iceberg. The first time this problem hit me, I was maintaining a configuration file of application versions. I had a file like this initially - which caused no issues:

```yaml
python: 3.5.3
postgres: 9.3.0
```

However, if I changed it *very* slightly:

```yaml
python: 3.5.3
postgres: 9.3
```

I started getting type errors because it was parsed like this:

```python
>>> from ruamel.yaml import load
>>> load(versions) == {"python": "3.5.3", "postgres": 9.3}  # oops those *both* should have been strings
```

Again, this led to type errors in my code. Again, I 'quick fixed' it with quotes. However, the solution I really wanted was:

```python
>>> from strictyaml import load
>>> load(versions).data == {"python": "3.5.3", "postgres": "9.3"}  # that's better
```

## The world's most buggy name

[Christopher Null](http://www.wired.com/2015/11/null) has a name that is notorious for breaking software code - airlines, banks, every bug caused by a programmer who didn't know a type from their elbow has hit him.
YAML, sadly, is no exception:

```yaml
first name: Christopher
surname: Null
```

```python
# Is it okay if we just call you Christopher None instead?
>>> from yaml import load
>>> load(name) == {"first name": "Christopher", "surname": None}
```

With StrictYAML:

```python
>>> from strictyaml import load
>>> load(name).data == {"first name": "Christopher", "surname": "Null"}
```

## Type theoretical concerns

Type theory is a popular topic with regards to programming languages, where a well-designed type system is regarded (rightly) as a yoke that can catch bugs at an early stage of development, while *poorly* designed type systems provide a fertile breeding ground for edge case bugs. (It's equally true that extremely strict type systems require a lot more work upfront, and the law of diminishing returns applies to type strictness - a cogent answer to the question "why is so little software written in Haskell?".)

A less popular, though equally true, idea is the notion that markup languages like YAML have the same issues with types - as demonstrated above.

## User Experience

In a way, type systems can be considered both a mathematical concern and a UX device. In the above, and in most cases, implicit typing represents a major violation of the UX [principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment).

strictyaml-1.6.1/docs/public/why/index.md000066400000000000000000000024631415074265300203740ustar00rootroot00000000000000---
title: Design Justifications
---

StrictYAML is the result of some carefully considered, though controversial, design decisions. These are justified here.
- [What is wrong with duplicate keys?](duplicate-keys-disallowed)
- [What is wrong with explicit tags?](explicit-tags-removed)
- [What is wrong with flow-style YAML?](flow-style-removed)
- [The Norway Problem - why StrictYAML refuses to do implicit typing and so should you](implicit-typing-removed)
- [What is wrong with node anchors and references?](node-anchors-and-references-removed)
- [Why does StrictYAML not parse direct representations of Python objects?](not-parse-direct-representations-of-python-objects)
- [Why does StrictYAML only parse from strings and not files?](only-parse-strings-not-files)
- [Why is parsing speed not a high priority for StrictYAML?](speed-not-a-priority)
- [What is syntax typing?](syntax-typing-bad)
- [Why does StrictYAML make you define a schema in Python - a Turing-complete language?](turing-complete-schema)

If you have seen a relevant counterargument you'd like to see addressed here, please create a pull request and link to it in the relevant document. If you'd like to write your own rebuttal to any argument raised here, raise a ticket and issue a pull request linking to it at the end of the document.

strictyaml-1.6.1/docs/public/why/node-anchors-and-references-removed.md000066400000000000000000000062441415074265300261640ustar00rootroot00000000000000---
title: What is wrong with node anchors and references?
---

An example of a snippet of YAML that uses node anchors and references is described on the [YAML wikipedia page](https://en.wikipedia.org/wiki/YAML#Advanced_components):

```yaml
# sequencer protocols for Laser eye surgery
---
- step: &id001             # defines anchor label &id001
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step: &id002
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step: *id001             # refers to the first step (with anchor &id001)
- step: *id002             # refers to the second step
- step:
    <<: *id001
    spotSize: 2mm          # redefines just this key, refers rest from &id001
- step: *id002
```

While the intent of the feature is obvious (it lets you deduplicate code), the effect is to make the markup more or less unreadable to non-programmers. The example above could be refactored more clearly as follows:

```yaml
# sequencer protocols for Laser eye surgery
---
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
```

The intent of this document is a lot clearer than the version above - *especially* for non-programmers. However, it comes at the cost of increased repetition. Between the node/anchor version and this, I would prefer this. However, it is still repetitive, and ideally it should be non-repetitive *and* clear. This can be done by refactoring the *structure* of the document and changing the way the application interprets it.
For example, instead of using the schema above, a schema that separates step definitions from actual steps could be used. For example:

```yaml
step definitions:
  large:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
  medium:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
steps:
- step: large
- step: medium
- step: large
- step: medium
- step:
    from: large
    except:
      spotSize: 2mm
- step: large
```

The above document has an entirely different and slightly more complex schema, but it fundamentally represents the same data as the node/anchor version above, in a clearer manner, without duplication.

strictyaml-1.6.1/docs/public/why/not-parse-direct-representations-of-python-objects.md

---
title: Why does StrictYAML not parse direct representations of Python objects?
---

Regular YAML allows the direct representation of Python objects. For example:

```yaml
--- !python/hash:UserObject
email: evilhacker@hacker.com
password: passwordtoset
type: admin
```

If we load this YAML formatted string in, we get a user object. This was how YAML was intended to work, since it allows the ability to write objects to and read them from, say, a database.

By itself, this behavior isn't necessarily capable of enacting a successful attack, so not all code that parses untrusted YAML is insecure, but it can be used, especially in conjunction with metaprogramming, to execute arbitrary code on your system.

This shares a lot in common with the pickle module's behavior, which is why its use with [untrusted input is strongly recommended against in the Python docs](https://docs.python.org/3/library/pickle.html).

This anti-feature led to Ruby on Rails' spectacular [security fail](https://codeclimate.com/blog/rails-remote-code-execution-vulnerability-explained/).
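The pickle comparison can be made concrete with a short, self-contained sketch. The `Payload` class and the harmless `1 + 1` expression below are invented for illustration - the point is that a deserializer which constructs arbitrary objects can be made to execute code chosen by whoever wrote the serialized data:

```python
import pickle


class Payload:
    def __reduce__(self):
        # On unpickling, pickle is instructed to call eval("1 + 1") -
        # the same class of risk as YAML tags that construct objects.
        return (eval, ("1 + 1",))


blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the embedded expression runs here
print(result)  # 2
```

Swap `eval("1 + 1")` for something nastier and the danger of parsing untrusted, object-constructing markup becomes obvious.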
strictyaml-1.6.1/docs/public/why/only-parse-strings-not-files.md

---
title: Why does StrictYAML only parse from strings and not files?
---

While other parsers will take strings, file handles and file names, StrictYAML will only parse YAML strings.

This is done deliberately to reduce the module's remit, with the intention of reducing both the potential bug surface and the number of exceptions that StrictYAML has to deal with - things like nonexistent files, file system errors, bad reads, unknown file extensions, etc. become the problem of some other module - ideally one more focused on handling those kinds of issues.

If you want a quick and easy one liner way to get text from a file, I recommend that you pip install path.py and use .text() on the Path object:

```python
>>> from path import Path
>>> from strictyaml import load
>>> parsed_data = load(Path("myfile.yaml").text()).data
>>> print(parsed_data)
[ parsed yaml ]
```

## Counterarguments

-

strictyaml-1.6.1/docs/public/why/speed-not-a-priority.md

---
title: Why is parsing speed not a high priority for StrictYAML?
---

JSON and StrictYAML are essentially complementary formats. They both allow a relatively loose representation of data that just contains mappings and sequences. They are serialization formats that are relatively straightforward for both humans and machines to read and write.

The main difference is simply one of degree: JSON is primarily optimized for *machine* readability and writability, while still maintaining human readability. YAML is optimized for *human* readability and writability, while maintaining machine readability and writability.

This means that the two formats are better suited to slightly different applications.
For instance, JSON is better suited as a format for use with REST APIs, while YAML is better suited as a format for use by configuration languages and DSLs.

If you are using YAML primarily as a readable medium to express a markup language or represent configuration in, this probably means that 1) what you are reading is probably relatively short (e.g. < 1,000 lines) and 2) it will be read/written infrequently (e.g. once, when a program starts).

For this reason, it is assumed that for most StrictYAML applications, parsing speed is of a lower importance than strictness, readability and ease of use. That being said, any requests that improve parsing or writing speed are welcome.

strictyaml-1.6.1/docs/public/why/syntax-typing-bad.md

---
title: What is syntax typing?
---

Explicit syntax typing is the use of syntax to designate the type of some data. It is a feature of the design of most programming languages:

```python
x = "11"  # this is a string
y = 11    # this isn't
```

It isn't limited to programming languages though. It's a common feature of serialization formats like JSON:

```json
{"x": "11", "y": 42}
```

But not others, like INI:

```ini
server=192.0.2.62
port=143
```

Or CSV:

```csv
server,port
192.0.2.62,143
```

Or StrictYAML:

```yaml
server: 192.0.2.62
port: 143
```

In those cases, it's up to the program - in another file - to decide which part of that data is a string and which part is an integer.

## What does it mean to attach syntax typing to data?

Syntax typing is the explicit prioritization of cohesion over terseness. It puts type information right next to the data, but this means that if a type of data appears a *lot*, so does the extra type syntax.

## When would you sacrifice terseness?
Let's return to the Python example for a second:

```python
x = "11"  # this is a string
y = 11    # this isn't
```

The quotation marks distinguish the types for the parser, which is useful for JSON, but the approach also comes with disadvantages:

- The distinction is subtle and not particularly clear to *non-programmers*, who will not necessarily understand that a directive needs to be given to the parser to avoid it being misinterpreted.
- It's not necessary if the type structure is maintained outside of the markup.
- Verbosity - two extra characters per string makes the markup longer and noisier.

In JSON when being used as a REST API, syntax typing is often an *advantage* - it is explicit to the machine reading the JSON that "x" is a string and "y" is an integer, and it can convert accordingly *in the absence of a schema*.

StrictYAML assumes all values are strings unless the schema explicitly indicates otherwise (e.g. Map(Int(), Int())). StrictYAML does not require quotation marks for strings that other parsers would implicitly convert to other types (e.g. yes or 1.5), but it does require quotation marks for strings that are syntactically confusing (e.g. "{ text in curly brackets }").

Standard YAML has explicit syntax typing to explicitly declare strings, although it's confusing as hell to know when it's required and when it is not. For example:

```yaml
a: text               # not necessary
b: "yes"              # necessary
c: "0"                # necessary
d: "3.5"              # necessary
e: in                 # not necessary
f: out                # not necessary
g: shake it all about # not necessary
h: "on"               # necessary
```

Most other configuration language formats also make use of syntax typing. For example:

- [TOML](../../why-not/toml)
- [JSON5](../../why-not/json5)
- [HJSON](../../why-not/hjson)
- [SDLang](../../why-not/sdlang)
- [HOCON](../../why-not/hocon)

[INI](../../why-not/ini) does not have explicit syntax typing, however.
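The "type structure maintained outside of the markup" point can be sketched in a few lines of plain Python. The schema dict and the keys below are invented for illustration; every parsed value starts out as a string, as a schemaless StrictYAML load would return it:

```python
# Type information lives in the program, not the markup.
schema = {"server": str, "port": int}

# What a schemaless parse returns: everything is a string.
parsed = {"server": "192.0.2.62", "port": "143"}

# Apply the external schema after parsing.
typed = {key: schema[key](value) for key, value in parsed.items()}
print(typed)  # {'server': '192.0.2.62', 'port': 143}
```

No quotation marks were needed in the markup itself - the string/integer distinction was made once, in one place.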
strictyaml-1.6.1/docs/public/why/turing-complete-schema.md

---
title: Why does StrictYAML make you define a schema in Python - a Turing-complete language?
---

StrictYAML defines schemas in Python (i.e. Turing-complete) code. For example:

```python
Map({"name": Str(), "email": Str()})
```

Instead of:

```yaml
type: map
mapping:
  "name":
    type: str
    required: yes
  "email":
    type: str
    required: yes
```

There are some trade-offs here:

Schema definition in a non-Turing-complete language like YAML makes the schema programming language independent and gives it more potential for being read and understood by non-programmers.

However, schema definition in a non-Turing-complete language also restricts you and makes certain use cases impossible or awkward. Some use cases I came across included:

- Being able to import pycountry's country list and restrict "country: " to valid country names.
- Being able to implement a schema that validated date/time scalar values against the specific date/time parser I wanted.
- Being able to revalidate sections of the document on a 'second pass' that used new data - e.g. a list in one part of the document is restricted to items which come from another part.
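The first use case above can be sketched without pycountry. Here a small hardcoded set stands in for its country list, and the validator is a plain function rather than StrictYAML's actual validator API - the point is only that a Turing-complete schema can draw on arbitrary data:

```python
# Stand-in for pycountry's country list.
VALID_COUNTRIES = {"France", "Germany", "Japan"}


def validate_country(value):
    """The kind of check a Turing-complete schema makes trivial."""
    if value not in VALID_COUNTRIES:
        raise ValueError("'%s' is not a recognised country" % value)
    return value


print(validate_country("Japan"))  # Japan
```

Expressing "must be a member of this externally imported list" in a YAML-based schema language would require the schema format itself to grow an import mechanism.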
## Counterarguments

-

strictyaml-1.6.1/docs/src/comparison/table.yml

json:
  syntax to data ratio:
  % length of JSON: 100
  syntax typing: yes
  well defined: yes
  comments: no
  hierarchical: yes
  duplicate keys: no
  multiline strings: no
  meaningful whitespace: no
  complex features: no
strictyaml:
  syntax to data ratio:
  % length of JSON:
  syntax typing: no
  well defined: yes
  comments: yes
  syntactic noise ratio:
  hierarchical: yes
  duplicate keys: no
  multiline strings: yes
  meaningful whitespace: yes
  complex features: no
yaml:
  syntax to data ratio:
  % length of JSON:
  syntax typing: yes
  well defined: yes
  comments: yes
  hierarchical: yes
  duplicate keys: yes
  multiline strings: yes
  meaningful whitespace: no
  complex features: yes
ini:
  syntactic noise ratio:
  % length of JSON:
  syntax typing: no
  well defined: no
  comments: yes
  hierarchical: no
  duplicate keys: yes
  multiline strings: no
  meaningful whitespace: no
  complex features: no
toml:
  syntax to data ratio:
  % length of JSON:
  syntax typing: no
  well defined: no
  comments: yes
  hierarchical: yes
  duplicate keys: no
  multiline strings: yes
  meaningful whitespace: no
  complex features: yes
json5:

strictyaml-1.6.1/docs/src/dirtemplate.yml

base templates: template
templated:
- template/story.jinja2:
    content: yes
    filename: yes
- why/index.md:
    content: yes
- why-not/index.md:
    content: yes
- using/alpha/compound/index.md:
    content: yes
- using/alpha/howto/index.md:
    content: yes
- using/alpha/restrictions/index.md:
    content: yes
- using/alpha/scalar/index.md:
    content: yes
- using/index.md:
    content: yes
- using/alpha/index.md:
    content: yes
- index.md:
    content: yes

strictyaml-1.6.1/docs/src/features-removed.md

---
title: What YAML features does StrictYAML remove?
---

StrictYAML restricts you from parsing a number of things which the YAML specification says should be parsed. An issue has been [raised](https://github.com/yaml/YAML2/issues/8) by [David Seaward](https://inkwell.za.net/) about this critique on the official YAML repository.

This document lists those features:

## Implicit Typing ([Why?](../why/implicit-typing-removed))

```yaml
x: yes
y: null
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {"x": True, "y": None}
```

Example StrictYAML without schema:

```python
load(yaml) == {"x": "yes", "y": "null"}
```

Example StrictYAML with schema:

```python
load(yaml, Map({"x": Bool(), "y": Str()})) == {"x": True, "y": "null"}
```

## Direct representations of objects ([Why?](../why/not-parse-direct-representations-of-python-objects))

```yaml
--- !python/hash:UnsafeUserObject
email: evilhacker@hacker.com
password: passwordtoset
type: admin
```

Example pyyaml/ruamel:

```python
load(yaml) == {'evil': b'z\xf8\xa5u\xabZ'}
```

Example StrictYAML:

```python
raises TagTokenDisallowed
```

## Duplicate Keys Disallowed ([Why?](../why/duplicate-keys-disallowed))

```yaml
x: 1
x: 2
```

Example pyyaml/poyo:

```python
load(yaml) == {'x': 2}
```

Example StrictYAML:

```python
raises DuplicateKeysDisallowed
```

## Explicit tags ([Why?](../why/explicit-tags-removed))

```yaml
x: !!int 5
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {"x": 5}
```

Example StrictYAML:

```python
raises TagTokenDisallowed
```

## Node anchors and refs ([Why?](../why/node-anchors-and-references-removed))

```yaml
x: &id001
  a: 1
y: *id001
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {'x': {'a': 1}, 'y': {'a': 1}}
```

Example StrictYAML:

```python
raises NodeAnchorDisallowed
```

To parse the above YAML *literally* in StrictYAML do:

```yaml
x: '&id001'
a: 1
y: '*id001'
```

## Flow style ([Why?](../why/flow-style-removed))

```yaml
x: 1
b: {c: 3, d: 4}
```

Example pyyaml/ruamel/poyo:

```python
load(yaml) == {'x': 1, 'b': {'c': 3, 'd': 4}}
```

Example StrictYAML:

```python
raises FlowStyleDisallowed
```

strictyaml-1.6.1/docs/src/index.md

{% if readme -%}
# StrictYAML
{%- else -%}
---
title: StrictYAML
---

{% raw %}{{< github-stars user="crdoconnor" project="strictyaml" >}}{% endraw %}
{% endif %}

StrictYAML is a [type-safe](https://en.wikipedia.org/wiki/Type_safety) YAML parser that parses and validates a [restricted subset](features-removed) of the [YAML](what-is-yaml) specification.

Priorities:

- Beautiful API
- Refusing to parse [the ugly, hard to read and insecure features of YAML](features-removed) like [the Norway problem](why/implicit-typing-removed).
- Strict validation of markup and straightforward type casting.
- Clear, readable exceptions with **code snippets** and **line numbers**.
- Acting as a near-drop in replacement for pyyaml, ruamel.yaml or poyo.
- Ability to read in YAML, make changes and write it out again with comments preserved.
- [Not speed](why/speed-not-a-priority), currently.

{% for story in quickstart %}
{{ story.name }}:

{% if 'yaml_snippet' in story.data['given'] %}
```yaml
{{ story.given['yaml_snippet'] }}
```
{% endif %}

{% if 'setup' in story.data['given'] %}
```python
{{ story.given['setup'] }}
```
{% endif %}

{% for variation in story.variations %}
{{ variation.child_name }}:

{% with step = variation.steps[0] %}{% include "step.jinja2" %}{% endwith %}
{% endfor %}
{% endfor %}

## Install

```sh
$ pip install strictyaml
```

## Why StrictYAML?

There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML. I've tried to make it the best one.
Below is a series of documented justifications:

{% for dirfile in (subdir("why-not").ext("md") - subdir("why-not").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](why-not/{{ dirfile.name.splitext()[0] }})
{% endfor %}

## Using StrictYAML

How to:

{% for dirfile in (subdir("using/alpha/howto/").ext("md") - subdir("using/alpha/howto/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](using/alpha/howto/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Compound validators:

{% for dirfile in (subdir("using/alpha/compound/").ext("md") - subdir("using/alpha/compound/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](using/alpha/compound/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Scalar validators:

{% for dirfile in (subdir("using/alpha/scalar/").ext("md") - subdir("using/alpha/scalar/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](using/alpha/scalar/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Restrictions:

{% for dirfile in (subdir("using/alpha/restrictions/").ext("md") - subdir("using/alpha/restrictions/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](using/alpha/restrictions/{{ dirfile.name.splitext()[0] }})
{% endfor %}

## Design justifications

There are some design decisions in StrictYAML which are controversial and/or not obvious. Those are documented here:

{% for dirfile in (subdir("why").ext("md") - subdir("why").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](why/{{ dirfile.name.splitext()[0] }})
{% endfor %}

## Star Contributors

- @wwoods
- @chrisburr
- @jnichols0

## Other Contributors

- @eulores
- @WaltWoods
- @ChristopherGS
- @gvx
- @AlexandreDecan
- @lots0logs
- @tobbez
- @jaredsampson
- @BoboTIG

StrictYAML also includes code from [ruamel.yaml](https://yaml.readthedocs.io/en/latest/), Copyright Anthon van der Neut.

## Contributing

- Before writing any code, please read the tutorial on [contributing to hitchdev libraries](https://hitchdev.com/approach/contributing-to-hitch-libraries/).
- Before writing any code, if you're proposing a new feature, please raise it on github. If it's an existing feature / bug, please comment and briefly describe how you're going to implement it.
- All code needs to come accompanied with a story that exercises it or a modification to an existing story. This is used both to test the code and build the documentation.

strictyaml-1.6.1/docs/src/redirects.yml

/why-not/syntax-typing: /why/syntax-typing-bad
/new-faq: /
/using/alpha/compound/either-or-validation: /using/alpha/howto/either-or-validation/
/why/care-about-yaml: /why
/why/binary-data-removed: /why/not-parse-direct-representations-of-python-objects

strictyaml-1.6.1/docs/src/template/step.jinja2

{% if step['in_interpreter'] %}
```python
{% for line in step['code'].rstrip('\n').split('\n') %}>>> {{ line }}
{% endfor -%}
{{ step['will_output'] }}
```
{% else %}
```python
{{ step['code'] }}
```
{% if 'will_output' in step %}
```yaml
{{ step['will_output'] }}
```
{% endif %}
{% if 'raises' in step %}
```python
{% if 'in python 3' in step['raises']['type'] -%}
{{ step['raises']['type']['in python 3'] }}:
{%- else %}{{ step['raises']['type'] }}:{% endif -%}
{%- if 'in python 3' in step['raises']['message'] -%}
{{ step['raises']['message']['in python 3'] }}:
```
{% else %}
{{ step['raises']['message'] }}
```
{% endif %}
{% endif %}
{% endif %}

strictyaml-1.6.1/docs/src/template/story.jinja2

---
title: {{ story.name }}
type: using
---

{% if story.info['experimental'] %}{% raw %}
{{< warning title="Experimental" >}}
This feature is in alpha. The API may change on a minor version increment.
{{< /warning >}}
{% endraw %}{% endif %}

{{ story.info['description'] }}

{% if 'yaml_snippet' in story.given %}Example yaml_snippet:

```yaml
{{ story.given['yaml_snippet'] }}
```
{% endif %}
{% if 'setup' in story.given -%}
```python
{{ story.given['setup'] }}
```
{%- endif %}

{% if story.variations %}
{% for variation in story.variations %}
{{ variation.child_name }}:

{% if 'yaml_snippet' in variation.data.get('given', {}).keys() %}
```yaml
{{ variation.given['yaml_snippet'] }}
```
{% endif %}
{% with step = variation.steps[0] %}{% include "step.jinja2" %}{% endwith %}
{% endfor %}
{% else %}
{% with step = story.steps[0] %}{% include "step.jinja2" %}{% endwith %}
{% endif %}

{% raw %}{{< note title="Executable specification" >}}{% endraw %}
Page automatically generated from {{ story.filename.basename() }}.
{% raw %}{{< /note >}}{% endraw %}

strictyaml-1.6.1/docs/src/using/alpha/compound/index.md

---
title: Compound StrictYAML Validators
---

{% for dirfile in (thisdir.is_not_dir() - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.name.splitext()[0] }})
{% endfor %}

strictyaml-1.6.1/docs/src/using/alpha/howto/index.md

---
title: How to...
---

{% for dirfile in (thisdir.is_not_dir() - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.name.splitext()[0] }})
{% endfor %}

strictyaml-1.6.1/docs/src/using/alpha/index.md

---
title: Using StrictYAML v0.x.x
---

strictyaml-1.6.1/docs/src/using/alpha/restrictions/index.md

---
title: Restrictions
---

{% for dirfile in (thisdir.is_not_dir() - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.name.splitext()[0] }})
{% endfor %}

strictyaml-1.6.1/docs/src/using/alpha/scalar/index.md

---
title: Scalar StrictYAML Validators
---

{% for dirfile in (thisdir.is_not_dir() - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.name.splitext()[0] }})
{% endfor %}

strictyaml-1.6.1/docs/src/using/index.md

---
title: Using StrictYAML
---

How to:

{% for dirfile in (subdir("using/alpha/howto/").is_not_dir() - subdir("using/alpha/howto/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](alpha/howto/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Compound validators:

{% for dirfile in (subdir("using/alpha/compound/").is_not_dir() - subdir("using/alpha/compound/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](alpha/compound/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Scalar validators:

{% for dirfile in (subdir("using/alpha/scalar/").is_not_dir() - subdir("using/alpha/scalar/").named("index.md"))|sort() -%}
- [{{ title(dirfile)
}}](alpha/scalar/{{ dirfile.name.splitext()[0] }})
{% endfor %}

Restrictions:

{% for dirfile in (subdir("using/alpha/restrictions/").is_not_dir() - subdir("using/alpha/restrictions/").named("index.md"))|sort() -%}
- [{{ title(dirfile) }}](alpha/restrictions/{{ dirfile.name.splitext()[0] }})
{% endfor %}

strictyaml-1.6.1/docs/src/what-is-yaml.md

---
title: What is YAML?
---

YAML is a simple, human readable format for representing associative and hierarchical data. Example from the wikipedia page on YAML:

```yaml
receipt: Oz-Ware Purchase Invoice
date: 2012-08-06
customer:
  first name: Harry
  family name: Potter
address: |-
  4 Privet Drive,
  Little Whinging,
  England
items:
- part_no: A4786
  description: Water Bucket (Filled)
  price: 1.47
  quantity: 4
- part_no: E1628
  description: High Heeled "Ruby" Slippers
  size: 8
  price: 133.7
  quantity: 1
```

Key features:

- Things which are associated with other things - delimited by the colon (:).
- Ordered lists of things - delimited by the prepended dash (-).
- Multi-line strings - delimited by the bar (|) if there is another newline at the end of the string, or bar + dash (|-) if not.
- Indentation describing the hierarchy of data.
- Maps directly to data types common to most high level languages - lists, dicts, scalars.

You don't need to know much more than this.

strictyaml-1.6.1/docs/src/when-to-use-validators.md

---
title: When should I use a validator and when should I not?
---

When starting out on greenfield projects, it's much quicker not to create a validator. In such cases it's often more prudent to just parse the YAML and convert the strings explicitly on the fly (e.g. int(yaml['key'])).

If the YAML is also going to be largely under the control of the developer, it might not make sense to write a validator either.
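The on-the-fly approach can be sketched like this - the keys and values below are invented, and the plain dict stands in for the result of a schemaless load:

```python
# Schemaless parse: every scalar comes back as a string.
parsed = {"port": "8080", "retries": "3"}

# Convert explicitly at the point of use instead of via a validator.
port = int(parsed["port"])
retries = int(parsed["retries"])

print(port, retries)  # 8080 3
```

This keeps the code short while the project is small; the conversions can later be consolidated into a validator as the YAML grows or starts coming from other people.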
If you have written software that is going to parse YAML from a source you do *not* control - especially by somebody who might make a mistake - then it probably does make sense to write a validator. You can start off without using a validator and then add one later.

strictyaml-1.6.1/docs/src/why-not/environment-variables-as-config.md

---
title: Why avoid using environment variables as configuration?
---

The use of "environment variables as configuration" is recommended by "[12 factor](https://12factor.net/config)". While this is a common practice and often leads to few or no problems, calling it a best practice is a bit much.

The reason cited by the 12 factor website for using them is:

>The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms such as Java System Properties, they are a language- and OS-agnostic standard.

Two of these are fine reasons:

- Easy to change between deploys.
- Language and OS agnostic.

However, neither of these things requires that config be stored in environment variables. It's easy enough to create language and OS agnostic configuration files (INI, YAML, etc.), and it's usually straightforward to make files easy to change between deployments too - e.g. if a deployment is containerized, by mounting the file. Nor are environment variables inherently "easier" to change between deployments - writing a file is not intrinsically difficult unless it is *made* difficult (e.g.
a file baked into a container image rather than being mounted).

Moreover, there are several disadvantages to using environment variables that tend to exhibit themselves nastily when the size of the configuration grows beyond a certain point.

## Environment variables are global state

Environment variables are a form of global state. Every variable is associated only with the environment, and your variables will be shared with many other variables which have a multitude of different uses:

- The chances of variable cross contamination are high - accidentally naming one variable the same as another which is unknowingly used for a different purpose (e.g. PATH) is elevated, and this can have weird, hard to debug and terrible effects.
- If you need to inspect environment variables - e.g. to find one that you thought was there but is actually missing - tracking it down is a pain.

Global state in and of itself isn't a "bad thing", but *too much* global state is a very bad thing. A small amount of configuration (e.g. fewer than 10 variables) can often be placed in environment variables with very little harm, but as soon as the amount grows, the danger of the global state grows with it.

## Environment variable values cannot handle structures more complex than a string

Environment variables are a set of key-value pairs where the key is almost always an uppercase string and the value is always a string. While this is more than sufficient for many purposes, there are many kinds of situations where the configuration data that needs to be stored requires something a bit more complicated than just a string.

Where developers run into this limitation, they have a tendency to create an ugly substructure with cryptic codes within the string.
The way that LS_COLORS is used is a good example:

```
rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
```

Clear as mud, right? (This isn't actually an example of 12 factor being followed assiduously, but I have seen many examples just like it where it was.)

In [StrictYAML](https://hitchdev.com/strictyaml) the same data could be represented as:

```yaml
# Special
di: 01;34     # directory is blue

# Extensions
*.tz: 01;31   # red
*.flv: 01;35  # purple
```

While the codes are cryptic and probably should be changed (e.g.
directory: blue), the cryptic nature can at least be easily explained with comments.

## Creating naming conventions to handle the inability to represent structure

A common example:

>PERSONNEL_DATABASE_HOST, PERSONNEL_DATABASE_PORT, PERSONNEL_DATABASE_NAME, PERSONNEL_DATABASE_PASSWORD, FACTORY_DATABASE_HOST, FACTORY_DATABASE_PORT, FACTORY_DATABASE_NAME, FACTORY_DATABASE_PASSWORD, HOTEL_BACKUP_DATABASE_HOST, HOTEL_BACKUP_DATABASE_USERNAME, HOTEL_BACKUP_DATABASE_PASSWORD, HOTEL_DATABASE_HOST, HOTEL_DATABASE_PORT, HOTEL_DATABASE_NAME, HOTEL_DATABASE_PASSWORD

Did you spot the accidentally missed variable in the list above which caused the critical bug?

[StrictYAML](https://hitchdev.com/strictyaml) version:

```yaml
database:
  personnel:
    host: xxx
    port: xxx
    name: xxx
    password: xxx
  factory:
    host: xxx
    port: xxx
    name: xxx
    password: xxx
  hotel backup:
    host: xxx
    name: xxx
    password: xxx
  hotel:
    host: xxx
    name: xxx
    port: xxx
    password: xxx
```

What about now?

strictyaml-1.6.1/docs/src/why-not/hjson.md

---
title: Why not use HJSON?
---

{{< note title="No longer supported" >}}
HJSON is no longer supported.
{{< /note >}}

[HJSON](http://hjson.org/) is an attempt at fixing the aforementioned lack of readability of JSON. It has the following criticisms of YAML:

- JSON is easier to explain (compare the JSON and YAML specs).
- JSON is not bloated (it does not have anchors, substitutions or concatenation).

As with TOML's criticism, these are spot on. However, strictyaml fixes this by *cutting out those parts of the spec*, leaving something that is actually simpler than HJSON.

It has another criticism:

- JSON does not suffer from significant whitespace.

This is not a valid criticism. Whitespace and indentation are meaningful to people parsing any kind of code and markup (why else would code which *doesn't* have meaningful whitespace use indentation as well?), so they *should* be meaningful to computers parsing it.
There is an initial 'usability hump' for first-time users of languages with significant whitespace *who were not previously used to it*, but this isn't especially hard to overcome - especially if you have a properly configured, decent editor which is explicit about the use of whitespace. Python users often report this being a problem at first, but after using the language for a while they usually come to prefer it, since it keeps the code shorter and makes its intent clearer.

---
title: Why not HOCON?
---

[HOCON](https://github.com/typesafehub/config/blob/master/HOCON.md) is another "redesigned" JSON - ironically enough, taking JSON and making it even more complicated.

Along with JSON's [syntax typing](../../why/syntax-typing-bad) - a downside of most non-YAML alternatives - HOCON makes the following mistakes in its design:

- It does not fail loudly on duplicate keys.
- It has confusing rules for deciding on concatenations and substitutions.
- It has a mechanism for substitutions similar to [YAML's node anchor feature](../../why/node-anchors-and-references-removed) - which, unless used extremely sparingly, can create confusing markup that, ironically, is *not* human optimized.

In addition, its attempt at using "less pedantic" syntax creates a system of rules which makes the behavior of the parser much less obvious and edge cases more frequent.

---
title: Why not X?
---

There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML.
Below is a series of comparisons with some of the more famous ones:

{% for dirfile in (thisdir.ext("md") - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.basename().splitext()[0] }})
{% endfor %}

If you'd like to write or link to a rebuttal to any argument raised here, feel free to raise a ticket.

---
title: Why not use INI files?
---

INI is a very old and quite readable configuration format for small configuration files. It is still used by many programs today, and it has some advantages as a result - e.g. Python has an inbuilt parser for it.

Unfortunately it suffers from two major problems:

- Different parsers will operate in subtly different ways that can lead to often obscure edge case bugs regarding the way whitespace is used, case sensitivity, comments and escape characters.
- It doesn't let you represent hierarchical data.

[TOML](../toml) is a configuration format designed to address these two concerns, although it also suffers from obscure edge case bugs.

---
title: Why not use JSON Schema for validation?
---

JSON schema can also be used to validate YAML. This presumes that you might want to use jsonschema and pyyaml/ruamel.yaml together.

[ TODO Flesh out ]

- Line numbers
- Simpler errors in StrictYAML
- StrictYAML is a more flexible schema
- Turing incompleteness / inflexibility

---
title: Why not JSON for simple configuration files?
---

JSON is an *ideal* format for REST APIs and other forms of data intended for machine exchange, and it probably always will be, because:

- It's a simple spec.
- It has all the basic types which map on to all programming languages - number, string, list, mapping, boolean - *and no more*.
- Its syntax contains a built-in level of error detection - cut a JSON request in half and it is no longer valid, eliminating an entire class of obscure and problematic bugs.
- If pretty-printed correctly, it's more or less readable - for the purposes of debugging, anyway.

However, while it is eminently suitable for REST APIs, it is less suitable for configuration, since:

- The same syntax which gives it decent error detection (commas, curly brackets) makes it tricky for humans to edit.
- It's not especially readable.
- It doesn't allow comments.

---
title: Why not JSON5?
---

[JSON5](http://json5.org/) is also a proposed extension to JSON to make it more readable. Its main criticism of YAML is:

>There are other formats that are human-friendlier, like YAML, but changing from JSON to a completely different format is undesirable in many cases.

This is, I believe, mistaken. If you are going to use a different language, it is better that it is not *subtly* different: subtle differences invite mistakes brought on by confusion.

JSON5 looks like a hybrid of YAML and JSON:

```json
{
    foo: 'bar',
    while: true,
}
```

It has weaknesses similar to TOML's:

- The noisiness of the delimiters that supplant significant whitespace makes it less readable and editable.
- The use of [syntax typing](../../why/syntax-typing-bad) is neither necessary, nor an aid to stricter typing if you have a schema.

---
title: Why not use the YAML 1.2 standard? - we don't need a new standard!
---

![Standards](https://imgs.xkcd.com/comics/standards.png "Fortunately the configuration one has been solved now that we have Strict uh... wait, no it hasn't...")

StrictYAML is composed of two parts:

- A new YAML specification which parses a restricted subset of the [YAML 1.2 specification](https://github.com/yaml/yaml-spec/tree/spec-1.2) and *only* parses to ordered dict, list or string.
- An optional validator (which will, as requested, validate and cast some of those scalar string values to ints, floats, datetimes, etc.).

Note that StrictYAML is *not* a new standard. If you have a syntax highlighter or editor or anything else that recognizes or reads YAML, it will recognize StrictYAML in the same way. While not all YAML output by other programs will be readable by StrictYAML (it is, after all, stricter), a lot will be.

The features removed from the YAML spec, and their rationales, are as follows:

- [Implicit Typing](../../why/implicit-typing-removed)
- [Direct representations of objects](../../why/not-parse-direct-representations-of-python-objects)
- [Explicit tags](../../why/explicit-tags-removed)
- [Node anchors and refs](../../why/node-anchors-and-references-removed)
- [Flow style](../../why/flow-style-removed)
- [Duplicate Keys Disallowed](../../why/duplicate-keys-disallowed)

---
title: Why not use kwalify with standard YAML to validate my YAML?
---

Kwalify is a schema validation language that is written *in* YAML. It is a descriptive schema language suitable for validating simple YAML. Kwalify compiles to the strictyaml equivalent but is able to do less. You cannot, for example:

- Plug in generated lists that come from outside of the spec (e.g. a list of country codes from pycountry).
- Validate parts of the schema which can be either one thing *or* another - e.g. a list *or* a single string.
- Plug sub-validators of a document into larger validators.

If your schema is very simple and small, there is no point to using kwalify.
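To make the "one thing *or* another" limitation above concrete, here is a minimal, hypothetical sketch of that kind of validator composition in plain Python. This is *not* strictyaml's actual implementation (strictyaml exposes composition through operators on its validator objects), but it shows the validator algebra that a purely descriptive format like kwalify cannot express:

```python
# Three tiny composable validators: a scalar check, a sequence check,
# and an "or" combinator that tries each alternative in turn.
class Str:
    def __call__(self, value):
        if not isinstance(value, str):
            raise ValueError(f"expected a string, got {value!r}")
        return value

class Seq:
    def __init__(self, item_validator):
        self.item = item_validator

    def __call__(self, value):
        if not isinstance(value, list):
            raise ValueError(f"expected a list, got {value!r}")
        return [self.item(v) for v in value]

class Or:
    def __init__(self, *validators):
        self.validators = validators

    def __call__(self, value):
        for validator in self.validators:
            try:
                return validator(value)
            except ValueError:
                continue
        raise ValueError(f"no validator matched {value!r}")

# Accepts either a single string or a list of strings:
str_or_list = Or(Str(), Seq(Str()))
print(str_or_list("hello"))
print(str_or_list(["a", "b"]))
```

Because the alternatives are ordinary objects, they can be generated at runtime or nested inside larger validators - exactly the flexibility a static kwalify document lacks.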
If your schema needs to be shared with a third party - especially a third party using another language - it may be helpful to use it. If your schema validation requirements are more complicated - e.g. like what is described above - it's best *not* to use it.

---
title: Why not use Python's schema library (or similar) for validation?
---

Python's 'schema' (like a number of similar libraries) can also be used to validate the structure of objects. Validating YAML is even [cited as a reason on their README](https://github.com/keleshev/schema).

Using a schema for validation requires running the YAML through a parser first and then taking the output (usually a data structure like a dict) and passing it through the schema. Unfortunately there are a number of problems with this approach:

## You still have [the Norway Problem](../../why/implicit-typing-removed)

If the standard YAML parser parses 'NO' as false or [empty string as None](https://github.com/Grokzen/pykwalify/issues/77) then it doesn't really matter if the schema says an empty string or the text 'NO' is okay - it will be seeing a 'None' or a 'False', which will cause a failure.

## You can't get line numbers and snippets for the validation errors

Assuming you've successfully circumvented the Norway problem, parsing and then feeding the output to schema is still problematic. If you pass a parsed dict to schema, schema can't tell which line number the failure happened on and can't give you a code snippet highlighting where it happened.

## Roundtripping becomes very, very difficult, if not impossible

Because the metadata about the parsed YAML is lost when it is fed into a generic schema validator, it also becomes impossible to *change* the data and serialize it without losing critical details (i.e. mapping ordering, comments or validation structures).
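To make the Norway problem above concrete, here is a minimal, hypothetical sketch of the implicit-typing rules a YAML 1.1 parser applies to every unquoted scalar *before* any downstream schema sees the value (the real resolution rules are case-sensitive and more extensive - this is a deliberately simplified fragment):

```python
# Mimic (a fragment of) a YAML 1.1 parser's plain-scalar resolution.
# A schema validator downstream never sees the original text "NO" --
# only the False that the parser already substituted for it.
def implicit_resolve(scalar: str):
    lowered = scalar.lower()
    if lowered in ("y", "yes", "true", "on"):
        return True
    if lowered in ("n", "no", "false", "off"):
        return False
    if lowered in ("", "~", "null"):
        return None
    return scalar

print(implicit_resolve("NO"))      # the country code is gone
print(implicit_resolve("Norway"))
```

StrictYAML sidesteps this entirely by parsing every scalar as a string and leaving any casting to the (optional) schema, which still has the original text and its location available.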
---
title: Why not use SDLang?
---

[SDLang](http://sdlang.org/), or "simple declarative language", is a proposed configuration language with an XML-like structure, inspired by C. Example:

```
// This is a node with a single string value
title "Hello, World"

// Multiple values are supported, too
bookmarks 12 15 188 1234

// Nodes can have attributes
author "Peter Parker" email="peter@example.org" active=true

// Nodes can be arbitrarily nested
contents {
    section "First section" {
        paragraph "This is the first paragraph"
        paragraph "This is the second paragraph"
    }
}

// Anonymous nodes are supported
"This text is the value of an anonymous node!"

// This makes things like matrix definitions very convenient
matrix {
    1 0 0
    0 1 0
    0 0 1
}
```

Advantages:

- Relatively more straightforward than other serialization languages.

Disadvantages:

- Syntax typing - leading to noisy syntax.
- The distinction between properties and values is not entirely clear - there is no single obvious way to describe property:value mappings.
- Niche.
_(ñÕ}{ÿäïùɳûß±~¿î±ËJüÉßþ€çöžàOþöì;ÖûŽÛáßÖœ§æ4M“©©)‡Ù²y#ªª22:†¦i8lªŠË媤ì¼Þe2Yžñ%º{zùÈÝwÑÔÔ@]m-±XœþÖ¯_‹Ûí Ÿ/püÄ ~ôÓŸñ{¿û055!Z[ZüáïÿoD¾ôåKr˜†ÁððùB–æ&¼W˜9í­ó™_|1p%–eÍÎ-]žJS’ʳhY–E6_¤odŸÇIQ7hª à´ÛŸŒ16£Ú省© 87<ÉT¬œãY‘%–w6âth ŽE© V1žÁ²,·×£*2ÑSqkÔÏ&(–tz‡#†IgK-n§I’˜œI2<1M°ÊM[c5Š"SÒ †'¦I¤Ë OUnNôŒÐ;æÞ›Vc³)<úü!ZꃬZÜÂÐx”|Q'Xåº0 j&G¡¨¨r3“H³°µU‘ Ï™NÐT ¡ÆeÁøTŒþ‘)n\¿ûì n»?øøÍä‹%J¥w¾p-• z†ËÉ?¶ÖâœÑ.<`4£.XEs]€L®@d:I±dPô™I²°¥›"c˜&N‡†n˜8>Á ‡͜fùH&“‹%ìšVYnU^/‡»]ûÀ¦Nï,Ë"ŸÏsæìYÆ'ÂôöÃívQW[K*¦»§—×ö½N{[+^¯‡R©ÄÙî&&ÂôœëŲLZ[Ze»Ý^I q¾ÜÇŠX<ŽÛåbpx˜¿üŸSUUuI$I¢P,òÊî=¤Ó™ÊòíílX¿ö²:K’„ÃncåÂfU.V.lÆãrHçøÚ_F7 DzŽFºzFø£OÞÆºemLÅR9=È›ÝìYÒŠÃnC’$ìš­25êÛ±d†¯ÿtH ë’,ñWÿéç¾Âê%-ô F¸cërjCUüÃ_Äçq¡ª2ÉtŽÏ~t+÷ܸŠ%íõ4VûYÒ^OÐçþÎZA>Hs +Ýq¹3±8 õuÄâñÊû²,“J¥H¥R”J%ÚÚZqÎæÛ¾žI’D(dÝÚ5TWWó©¬ò^kK3¿õà>rÓ,_Çͦè:ušÏ|òo[öäT”}û÷³fÕ*lšÁANœ<Å7l»l]Y’¨«­Åç+T–ü¾«–íó¸ø÷¿}3v›ZÎŽe·Ñ3f$2Ã_ÿ‡Pd‰×Žô`š»Ê–Õ•$‡O rû–åܼa ë—µñåï>Çï>°½’D–eª\þâswc·Û°,xjçQ %Y’Èä ¼ôÆIªýöíeûÚEØT…C'8;0AÈï!‘ÊQðpó†ÅÔ×øHgóüb÷1V,lFUdG§xíH7%Ý wx’?øø-üjÏqB~¥’Á¾7ÏqïÍk¸{ûJ¢³Á¼d˜,YÐ@]°Š±©‹ÛꚈ²¨­–-«:Ðl*Ù|‘Ó}ãlXÞκ¥mÔ‡|=3„ù>ºúÏL™Iò¥?ûªªp¢{„’nðÔΣܾu9Ÿ¸{3;÷ŸâÅ×OqÏöU4Õ¸uóR^=Üͦ "IôÉÛ°Ûlüñ§vTZü‚ \;æä¯R’$ à ™D–$’ÉÅb‰x"AÀï§½­ M+·4úúúI¥Ò"h_Á{I½ùvÝØ’$Q*•pØí¬\± ¿ßϲ¥Kèèè¸jY©TŠl._YVå½úL\²,U’@¸œöò8]G™íFvh*žÙå¯éáØ™!>q×F†ÆCf9ëX*“ç»O½FGs ÷Þ¼u¶+צʬ^ÚŠÓ¡UÒkæ %V.jfóªÖ.mÃïu’Íñºl\ÑŽÇå`Êv:[jq94¾ðù{8Þ=Ì×ÙÅ-—póÆ%¨ŠÂúe­´Ö‡X»¤€ÏÅÎgÈЍŠÌT,ÅÂÖZìšbIÇëv È2Á*Š\¾àp;54MÅ1›G7L^ØwŠîÁ0ÞºŽúj_eÿÞ˽õóiHKº]+ÿ góE욊:{<—w6b³)$³y‚>àq:*éG] ‡fîÙÐ4•t®|ær”ó»‘´C®Es6M’$?.—Ë2+ùŸ-Ëb|b‚D"I*•¦XÒÑ4ñr1»ÝÁÄx˜³Ý=ŒŽŽa&áp„á‘2™ ãããD"“8vb±§Ïœe`p]7ˆF£ Œ’NgŸ`"ÆïóðûÉd²¨ªBï¹>o“å«üY™½ÞÛ>ø½n Ãäx÷0ºú›ŠaY}£S4Õú<$ÒYÉ,™\Gž?@ïP˜­«;˜˜Œ“Éžoå—ÓMž¿Q™µK[‰L'°k6F#åÔ“å,Xò…`Ñ3F³©ŒEb OLsû–ål[ÓI÷`»M¥½©š™D›MáÜH»MESšêì9Ü$A"•cé‚ÚB:9@×¹QŽ®b¸p$QÎ'Ý;¦£¹†*ƒD*G,‘!—/2<1]N‰™Í386ÅtvÇn\¿x®«+‡Úœ0-‹‘‰Þ8~Ý0X¹°™5KZ®é´˜–eQ*LÅS8ì6^7²,‘ÊäyõH7Óñ4Mu¶®îÄí¼v/>A˜¿DÐæLù¾8@y@ݵ°ßŽeY”wãÂÀÀùº/‚ \ÛÄ3œ)j˜ïC¡$I*ïÔ‚ ü†‰Ñ]‚ ‚0Oˆ -‚ ó„Ú‚ ‚0Oˆ -‚ óÄ5´E^_AA¸Üœ·,‹l6Ëää­­-(ŠB©¤‹Å(–Џ\.ü>ŸHÍ)‚ \÷æ¼¥máÈ$£cc˜fyòHd’L6‹Óá$—ÍUr> ‚ Âõl΂v9S‘ÉÔT‡ÃŽËéª,ŸžŽ‚e1=3bÞqAA`Ž[Úñx‚b©DmM ’|abŠ|¡€Û㦹©‘x,A*•šÃZ ‚ µaÎîiF9Ed>_ 
“ÉIg¥©±‡ÝÓQž2Õn×(•ô¹ª¦ ‚ \3æ,h+ŠBGÇ,ËÂ0 r¹ õ hšF($™ÄãqS*•p»]sUMAA¸f\ CLÓdff†@ 0;z¼D"‘D7 Ü.'n·[Ü×A®{×Dо¸ ’$]öœ¶È˜$‚ ×ÀsÚpyPAZA.'úœAažA[Aæ ´AažA[Aæ ´Aaž¸&F_̲,1zümX–E©Tbj*ŠiYü~<7–e‘H$H¥Ò¨6•êP56Û5÷ñ^sÞËùfšéx‰RÁÀ]eÃá¾p|MÃ"(á®RQÔ÷v-lY™L†x<,Ë„BAìv;¦iNS(°;ìÔÖÔ¼§rAøð™Ó–¶eYX–…®ëÄã LÓ ‹1==S~ÍÌP*•沚×I’矾ùm^xéeÆÆÇò´°¿øÕs<òØã¼±ÿ ¹\vŽkúÎÞMÞô·[Dz T4™žÈýµH'J¤âÅw\Ó4,N&xå±QzŽÅ/y¯˜78ºk’lêÒ)wßZË‚T¬H&YºdѱqvíÞÃ7ÿå;ô  ë:GŽãg=ÁôÓ÷¹‚ |˜ÌySÌ4M&Âa†††Ù²yªªR(1 Ã0˜‰ÅèììÀf³ÍuU¯ –e‘ÏåQU•îû(Á@ ²<ްaý:n¸a+šÍF±`œ."IÏøBnŸ ðˆE óNНZC–%R±² Ù”N1oP×êBµÉĦ äÒ:6»L°Î$Át8e‚Ý©Ïêx.¯J6¥“œ.‚þj;NÏ•O1˲(ä âSLÃÂã×ðlX$¢²)U“ ÔÚQm2‰h¡¼ïÍ¡¬sP*ôOÒµ/Êö¨ Úñlè%“X¤@©hâòªøBv$¹| âS,Ó¢*X>ù¬ÎÁçà ÁÒM5Ž«ÖYQ%–m›ÌW‚îùýˆNähYâÁîT*ËsiD´HT5\U*¹´Î¾_ŒãöÙX¸ÚO ÎŽÝ©°°³ƒ¶Ö¾ôå¯Ig°Ùlì¸õÜn7/¾´ó>“A˜æ,hŸo$S)JÅš¦å–d]]-“SSø}>\Nç\Uóš¥( »U½ô#Ô4 ‡ÝŽeYÄ"9žùæ‹7øÉ¥u 9ƒ»n#:–cÿ³aB¢ã9¶ßßH}»›7_":šCQ%lv™l²tvýl„@ƒ™‰<«oª¦±ÃÍS_ÕU æÕN¶ÝÛÀž'ÆpzT,ËÂåQÙtWÝ»‹ 9ƒÃ/EÊ ŠD*Vä#¿×N.­³ûñQB 2 Ú'kn®æ•ÇF‘ ÏFx(Ë=ŸkÃîT8•`r4ǹã Ú—yñl ŸMsüÕ)ªBñ©7=Ô„¯ZãàóarË‚BÖàîßm#›Ò8•Ds*h…%ëå«mI’Pmò%û#IÙ”NÏÑ8gÇøÔŸ/"Ô褘7Øÿ\˰ ›CaÛ½ ¤ã%N%ñרQ ÍÄáRQUEQPäKËÖ4 »¦‰[F‚ sÜÒ. 
ÌLÏP__G<‘¸ä½b±|ß¶³c˜wü"ïõË»˜7ذ£E•yücz"‡7`cÕö6‡Lx0ÃäH–†n4»L:QâÁ?êÀåQQl2…¬ÁÊBØì2ùŒÎЙ$ n4‡ÂÊB¼úÔkn©§çH½d25šcãµÔ·»0 ë’”«‹E ŒõeØñ©ª‚ãUâìáÁ:7=ÔDx(ËÞŸ³hÓ°h_^ŲÍAžÿþ ±Hž…kýt®öQÌÜüPç¿VcŶH¦'r˜†ÅähŽ;?ۊí,o/Pç m™_µ ;j‘ÞÇ©¨µ³õ#õŒ÷g8ßnèÑñ«|´.­š½wÕMNZ—xiYìeå !D,὘³hhš&£cãX@"‘¤P(‹Ç1 €ééiÜ.‡C´2~ šSÁí³¡9dlv‰bÎàÌ¡ÇvO1)P*Zåà ȲDS§_ÈŽÍ® I0Ò“âà ¢ãyŠyC/‡%›]ÆéVP5‡k¶eíU¹çó­Œõ¥yù§#t훦T,Sxë½ÝBÞ@’ÀéVÐ2MnU&/QÒPT—GÅ2- ÝDµÉxªMFÕ £\ž$IH’„¬H •ïwí¦k_”ÄT½TÞ¿BÞ@Q$ì.»S¡±Ãƒj“ËASII¾pQt~¼ÅyWº·~Éœùryûç9Ý*;>ÙB"Zd×£#Ý9I©h–·sÉKœÛ‚ ¼{s´%I" Råõ ËråËÊ=™JáóùÄ—Ú¯)—Ò™çɦu²i·ßFx0Ã’ Vo¯Æ(™˜&•¢¬\8Þ–ƒšyXk ªMÂ4¹°2—Ä)Òñá¡,;>ÕÂíŸja´7M.U¾÷;9å;ßûA¥GÅ㳡ë‰h‘L¢Ä®GGI'ŠÔµ»=—¦T4˜žÈa³ËØ]Wï’‰BÎ@/šè%Ó°ˆ gY±5È’ò2³¼½BÎ +’ŠÙõè(™D¹n’T.Ã4,ô’Y ÆgÎtóÈ£“Íæf‡…Å[–YÖEË2©¶Ý×ÀŸnaj,G.=;Hí*ÛAx'sÖ=.Ë2€(|N§Óü~EA×u;Nq/û2ïõ ÞíS9ñZ”èXŽMwÖSÓäbÝmµy9ÂôDžwÖ1t&I>«ã«¶_ % Vmñê“ãìzt”ÎÕ>zŽÆ‰Oåiìpcw©4v¸qyTj[]8Ü*¥‚ÉÓÿÜHl¼³_È@©TdrrC/÷¤ëÜô`#_ŠPÊ,ZëÇ_mÇ»M#/òô?÷áòÚ¸õã͸¼6êZ]¸¼*²µ-NÜ>’$Ñ´ÐM×>™ŸþÝ,\ãgÓu¬¿­†®}Ó„‡²l¼£–‘žK6úÙv_{ŸG×M–mâ ”ËXºÑÏÞŸOðXO/kn®aɆ²,‘Ë瘊F1Írû»’Û=I&©#ѱ[î©§ûhŒÑÞ4ÅœÁ ?¢¡ÍÍͿ݄i³ß Vm¯Æ*ß›^¾9ÀþçÂôu%ذ£–ŽU>ÑM.»rͤæ<ÿ¼ìùÔœÿ,\êô™³üËw¿Ï§>ñ1-ì$ R*•øûÿõ464°yÓFÚZ[ˆGtžýî Ÿÿëe˜&(Š„$—¯aX•îYË,–²Ê=Ùoim—»—¡Ü}nšåûÔççü}kËY.78MÃiv{õž†ªª—tA›†…e]¨[yÝÙ.{éB]LäÙ®lË´@’g÷å|²,UîI—÷OB’Êå)çË9¿/Õí’2©r\LÓÄ4MevD¸Y.ëbåãf]ÜЮÔÛ²f×Ëþ]i{3±a¾ûýðÙO}’×cCÃÃ8x˜7ŸàË_úâ¯}î‚0¿Íù#_ÀeÁYë·WSSͺµkèï$  ‘e™oØÆ¹¾~z{ÏÑP_‡Çï`ýŽZdEFV.ü¾$I¨êEÇw6˜H —‘$ å¢u•óËfÿ½ø½òúåàùV²,_6 ð­e_XWº‡òÊ'ÃIDAT¬ åâú+—ž+W*ãâý»¸¬+­{õz\ZgI¹ô‚æÂŠW>W¯v,Þº=Ó4™™‰qêô6mÜ@cS#Pîê¤X,róMÛ¯¸ A®/×DK[A„w&ž¥A„yBmAA˜'DÐA„yBmAA˜'DÐþ™Ïuü[Õ}>#A®osþÈ×ùg²K¥ ICÊ?ëX–‰Íf«<'+\ÝèØ8{^}\.ÇÇ~û!~ÿ\W©b:žæ‰—sû–å¼øúI¼m=µ—×O’$NöŽÒ72Ń;Ö} u0M“—÷ŸF7 ìš|¾Ä½·¬ù@·!‚ð›vM´´ãñ8g»{*óŽG£ÓŒŽGŸ˜ù´/rñœØ·}U^–-]¡#GÉd2Wü½¹’+”xýÍsÄ’^=ÒM*›¿âz–e141Í®¾+¾w5—笾üøXÀ™þ º#t†930ñ>öDanÍiKÛ²,òùÓ312™L% E"“465à°;Âçó‰|Ú³ŠÅÏ>ÿ‡EQdî¿÷#lX¿¯×ËÒ%‹+½¥R‰ç_x‰×÷@’$>þ±ßbýÚ5ÌÌÌð“Ÿ=Æää55Õ<ôÀ}456räè1^xi'º®³lé>ñ±ßºb†5Ó4™ŠF+Y.§ ¿ßwÅ:Û5•¦Ú—ƒÆš.»‰©8ß~bšMåÜp„MüÇÏÞÀÀØÿíëO11çslgûÚ…œèá‘ç’HgYº ß{èF†Æ§yìÅCx]N÷sÿ­kyèöõŒOÆùÞÓ¯Ž&hª ð;÷m£­!D Ê…fS0-¨r‹ùA˜æ4h†A8! 
J¥€riU•—h4ЦÙQûEèzfY‡åð‘£üÕþs¢Ó3_ P(ðòÎWX·v Û¶l¢ëÔit]¿ä" Rn¡À3¿øñD²²lͪÜuçW¬w•ÛÉïÜ»•溟p;Õ/S±]=£|þ¡íüÞƒ7ò…¿{„³³­_Ã0ùãOÞÆ‰žžxé0+:ùÎS¯q× +¹qí"¾ø­gØw¬—Ïá“|ý¿þ.÷ߪó7ÿô47o\ÂO~õ¦iñû¿}3?å¾p/|înÙ¸UQ0-ë’‚ óÅœm˲ˆF£Ø5 ·Û]žÙ,g<Ê ¸ÝnìšF>Ÿ§T*a·Ûçªª× Ë²èë ½½ªª*ªªªhiiƦªWì".é:Ã##ŒOLPÈ*Fímm´··ñÕø ÚÛxèÁûÐ4Ûn»…_|™W_Û˦P”ËO˲p:<üÙO_²MU½ú©¤Ù–u”§æ\µ¨¹²Üçu²lA#AŸ›ºê*ÂSq:šk©¯ö19Ä0MÒÙ"‰T–ΖZªTOML is a bad file format. **It looks good at first glance**, and for really really >trivial things it is probably good. But once I started using it and the >configuration schema became more complex, I found the syntax ugly and hard to read. Despite this, PyPA still went ahead and used TOML for PEP-518. Fortunately pyproject.toml *is* fairly trivial and appears just once per project so the problems he alludes to aren't that pronounced. StrictYAML, by contrast, was designed to be a language to write [readable 'story' tests](../../../hitchstory) where there will be *many* files per project with more complex hierarchies, a use case where TOML starts to really suck. So what specifically *is* wrong with TOML when you scale it up? ## 1. It's very verbose. It's not DRY. It's syntactically noisy. In [this example of a StrictYAML story](https://github.com/crdoconnor/strictyaml/blob/master/hitch/story/map.story) and [its equivalent serialized TOML](https://github.com/crdoconnor/strictyaml/blob/master/hitch/story/map.toml) the latter ends up [spending](https://www.goodreads.com/quotes/775257-my-point-today-is-that-if-we-wish-to-count) **50% more** characters to represent the exact same data. This is largely due to the design decision to have the full name of every key being associated with every value which is **not** [DRY](../../../code-quality/least-code). It is also partly due to the large numbers of syntactic cruft - quotation marks and square brackets dominate TOML documents whereas in the StrictYAML example they are absent. 
Shortening program lengths (and DRYing code), all other things being equal, [reduces the number of bugs significantly](https://blog.codinghorror.com/diseconomies-of-scale-and-lines-of-code/) because maintenance becomes easier and deriving intent from the code becomes clearer. What goes for Turing-complete code also applies to configuration code.

## 2. TOML's hierarchies are difficult to infer from syntax alone

Mapping hierarchy in TOML is determined by dots. This is simple enough for parsers to read and understand, but on its own it makes it difficult for humans to perceive the relationships between data. This has been recognized by [many](https://github.com/leereilly/csi/blob/567e5b55f766847c9dcc7de482c0fd241fa7377a/lib/data/master.toml) TOML [writers](https://github.com/CzarSimon/simonlindgren.info/blob/a391a6345b16f2d8093f6d4c5f422399b4b901eb/simon-cv/config.toml) who have adopted a method that will be quite familiar to a lot of programmers - indentation that the parser ignores:

[![Non-meaningful indentation](toml-indentation-1.png)](https://github.com/gazreese/gazreese.com/blob/c4c3fa7d576a4c316f11f0f7a652ca11ab23586d/Hugo/config.toml)

This parallels the way indentation is added in *lots* of programming languages that have syntactic markers like brackets - e.g. JSON, JavaScript or Java are all commonly rendered with non-parsed indentation to make it easier for humans to understand them.

But not Python. Python has long been a standout exception in how it was designed - syntactic markers are *not* necessary to infer program structure because indentation *is* the marker that determines program structure.
This argument over the merits of meaningful indentation in Python has been going on for decades, and [not everybody agrees with this](https://www.quora.com/Do-you-think-that-indentation-in-Python-is-annoying), but it's generally considered a good idea - usually for [the reasons argued in this stack exchange question](https://softwareengineering.stackexchange.com/questions/313034/why-should-a-language-prefer-indentation-over-explicit-markers-for-blocks):

1. Python inherited the significant indentation from the (now obsolete) predecessor language ABC. ABC is one of the very few programming languages which have used usability testing to direct the design. So while discussions about syntax usually come down to subjective opinions and personal preferences, the choice of significant indentation actually has a sounder foundation.

2. Guido van Rossum came across subtle bugs where the indentation disagreed with the syntactic grouping. Meaningful indentation fixed this class of bug. Since there are no begin/end brackets, there cannot be a disagreement between the grouping perceived by the parser and the human reader.

3. Having symbols delimiting blocks as well as indentation violates the DRY principle.

4. It does away with the typical religious C debate of "where to put the curly braces" (although TOML is not yet popular enough to inspire such religious wars over indentation... yet).

## 3. Overcomplication: Like YAML, TOML has too many features

Somewhat ironically, TOML's creator quite rightly [criticizes YAML for not aiming for simplicity](https://github.com/toml-lang/toml#comparison-with-other-formats) and then falls into the same trap itself - albeit not quite as deeply. One way it does this is by trying to include date and time parsing, which imports *all* of the inherent complications associated with dates and times.
Dates and times, as many more experienced programmers are probably aware, are an unexpectedly deep rabbit hole of [complications and quirky, unexpected, headache- and bug-inducing edge cases](https://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time). TOML experiences [many](https://github.com/uiri/toml/issues/55) [of these](https://github.com/uiri/toml/issues/196) [edge cases](https://github.com/uiri/toml/issues/202) because of this.

The best way to deal with [essential complexity](https://simplicable.com/new/accidental-complexity-vs-essential-complexity) like this is to decouple, isolate the complexity and *delegate* it to a [specialist tool that is good at handling that specific problem](https://en.wikipedia.org/wiki/Unix_philosophy) which you can swap out later if required. This is the approach that JSON took (arguably a good decision) and it's the approach that StrictYAML takes too.

StrictYAML the library (as opposed to the format) has a validator that uses [Python's most popular date/time parsing library](https://dateutil.readthedocs.io/en/stable/), although developers are not obliged or even necessarily encouraged to use it. StrictYAML parses everything as a string by default, and whatever validation occurs later is considered to be outside of its purview.

## 4. Syntax typing

Like most other markup languages, TOML has [syntax typing](../../why/syntax-typing-bad) - the *writer* of the markup decides if, for example, something should be parsed as a number or a string:

```toml
flt2 = 3.1415
string = "hello"
```

Programmers will feel at home maintaining this, but non-programmers tend to find the difference between "1.5" and 1.5 needlessly confusing.
StrictYAML does not require quotes around any value to infer a data type, because the schema is assumed to be the single source of truth for type information:

```yaml
flt2: 3.1415
string: hello
```

In the above example it just removes two characters, but in larger documents with more complex data, pushing type parsing decisions to the schema (or assuming strings) removes an enormous amount of syntactic noise. The lack of syntax typing, combined with the use of indentation instead of square brackets to denote hierarchies, makes equivalent StrictYAML documents 10-20% shorter, cleaner and ultimately more readable.

## Advantages TOML still has over StrictYAML

There are currently still a few:

- StrictYAML does not currently have an "official spec". The spec is currently just "YAML 1.2 with [features removed](../../features-removed)". This has some advantages (e.g. YAML syntax highlighting in editors works just fine) but also some disadvantages (some documents will render differently).
- StrictYAML does not yet have parsers in languages other than Python. If you'd like to write one for your language (if you don't also do validation it actually wouldn't be very complicated), contact me, I'd love to help you in any way I can - including doing a test suite and documentation.
- Popularity.

strictyaml-1.6.1/docs/src/why-not/turing-complete-code.md000066400000000000000000000040131415074265300234130ustar00rootroot00000000000000
---
title: Why shouldn't I just use Python code for configuration?
---

It is relatively common for many applications to avoid the use of a different markup language for configuration and simply allow configuration to be done using code. One famous and unapologetic example of this is [Django](https://docs.djangoproject.com/en/1.10/ref/settings), which requires all configuration to be in a "settings.py" or similar file.
This seems like a great idea from the outset - Python is more flexible than any configuration language, so, for instance, if you wanted to use a list comprehension or read a file or call an API to fill a value, you can. However, with this flexibility come many traps and unsightly pitfalls. The Django pitfalls in particular are [cogently summed up by Ned Batchelder on his blog](http://nedbatchelder.com/blog/201112/duplicitous_django_settings.html) - pitfalls which have been the cause of countless bugs over the years.

The language expressiveness trade-off applies at every level in code:

- [We need less powerful languages](http://lukeplant.me.uk/blog/posts/less-powerful-languages/).
- [Rule of least power (wikipedia)](https://en.wikipedia.org/wiki/Rule_of_least_power).
- [Principle of least power by Tim Berners-Lee](https://www.w3.org/DesignIssues/Principles.html#PLP).
- [Principle of least power by Jeff Atwood (coding horror blogger / stack overflow founder)](https://blog.codinghorror.com/the-principle-of-least-power/).

A good way of refactoring, in fact, is to take a large chunk of Turing-complete Python code that *can* be transformed directly into StrictYAML with no loss in expressiveness and to transform it - for example, a list of translation strings, countries or parameters. It also makes it easier to have the markup generated by another program or a templating language. While you technically *can* do this with Turing-complete code, it will often lead to a debugging nightmare - [just ask C++ programmers](https://stackoverflow.com/questions/622659/what-are-the-good-and-bad-points-of-c-templates)!

strictyaml-1.6.1/docs/src/why-not/xml.md000066400000000000000000000025261415074265300201740ustar00rootroot00000000000000
---
title: Why not use XML for configuration or DSLs?
---

XML suffers from overcomplication much like vanilla YAML does - although to an even greater degree, thanks to its committee-driven design.
Doctypes and namespaces are horrendous additions to the language, for instance. XML is not only not really human readable (beyond a very basic subset of the language), it's often barely *programmer* readable, despite being less expressive than most Turing-complete languages. It's a flagrant violation of the [rule of least power](https://en.wikipedia.org/wiki/Rule_of_least_power).

The language was, in fact, *so* overcomplicated that it ended up increasing the attack surface of the parser itself, to the point that it led to parsers with [security vulnerabilities](https://en.wikipedia.org/wiki/Billion_laughs).

Unlike JSON and YAML, XML's structure also does not map well onto the default data types used by most languages, often requiring a *third* language to act as a go-between - e.g. either XQuery or XPath.

XML's decline in favor of JSON as a default API format is largely due to these complications and the lack of any real benefit drawn from them. The associated technologies (e.g. XSLT) also suffered from design by committee. Using it as a configuration language will all but ensure that you need to write extra boilerplate code to manage its quirks.

strictyaml-1.6.1/docs/src/why/000077500000000000000000000000001415074265300162475ustar00rootroot00000000000000
strictyaml-1.6.1/docs/src/why/duplicate-keys-disallowed.md000066400000000000000000000012631415074265300236430ustar00rootroot00000000000000
---
title: What is wrong with duplicate keys?
---

Duplicate keys are allowed in regular YAML - as parsed by pyyaml, ruamel.yaml and poyo:

```yaml
x: cow
y: dog
x: bull
```

Not only is it unclear whether x should be "cow" or "bull" (the parser will decide 'bull', but did you know that?), if there are 200 lines between x: cow and x: bull, a user might very likely change the *first* x and erroneously believe that the resulting value of x has been changed - when it hasn't.
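A strict parser can catch this cheaply while building the mapping. A minimal sketch of such a check (an illustration only, not StrictYAML's actual implementation - the helper function is made up, though the exception is named after StrictYAML's own):

```python
class DuplicateKeysDisallowed(Exception):
    """Raised when the same key appears twice in one mapping
    (named after StrictYAML's exception of the same name)."""


def build_mapping(pairs):
    """Build a dict from (key, value) pairs, refusing duplicates
    instead of silently letting the last value win."""
    mapping = {}
    for key, value in pairs:
        if key in mapping:
            # A permissive parser would overwrite here; a strict one refuses.
            raise DuplicateKeysDisallowed(
                "While parsing: duplicate key %r found" % key
            )
        mapping[key] = value
    return mapping


build_mapping([("x", "cow"), ("y", "dog")])  # fine
try:
    build_mapping([("x", "cow"), ("y", "dog"), ("x", "bull")])
except DuplicateKeysDisallowed as error:
    print(error)  # While parsing: duplicate key 'x' found
```

Refusing at parse time surfaces the mistake at the point where it was made, rather than 200 lines and several weeks later.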
In order to avoid all possible confusion, StrictYAML will simply refuse to parse this and will *only* accept associative arrays where all of the keys are unique. It will throw a DuplicateKeysDisallowed exception.

strictyaml-1.6.1/docs/src/why/explicit-tags-removed.md000066400000000000000000000016021415074265300230040ustar00rootroot00000000000000
---
title: What is wrong with explicit tags?
---

Explicit tags are tags that have an explicit type attached that is used to determine what type to convert the data to when it is parsed. For example, if it were to be applied to "fix" the Godfather movie script parsing issue described above, it would look like this:

```yaml
- Don Corleone: Do you have faith in my judgment?
- Clemenza: !!str Yes
- Don Corleone: Do I have your loyalty?
```

Explicit typecasts in YAML markup are slightly confusing for non-programmers, much like the concept of 'types' in general. StrictYAML's philosophy is that types should be kept strictly separated from data, so this 'feature' of YAML is switched off. If tags are seen in a YAML file, it will raise a special TagTokenDisallowed exception.

## Counterpoints

- [Valid usage in AWS cloudformation syntax?](https://github.com/crdoconnor/strictyaml/issues/37)

strictyaml-1.6.1/docs/src/why/flow-style-removed.md000066400000000000000000000023601415074265300223360ustar00rootroot00000000000000
---
title: What is wrong with flow-style YAML?
---

Flow style is essentially JSON embedded in YAML - making use of curly { } and square brackets to denote lists and mappings. Example:

```yaml
a: 1
b: {c: 3, d: 4}
```

This use of JSONesque { and } is also ugly and hampers readability - *especially* when { and } are used for other purposes (e.g. templating) and the human reader/writer of YAML has to give themselves a headache figuring out what *kind* of curly bracket it is.
The *first* question in the FAQ of pyyaml actually subtly indicates that this feature wasn't a good idea - see "[why does my YAML look wrong?](http://pyyaml.org/wiki/PyYAMLDocumentation#Dictionarieswithoutnestedcollectionsarenotdumpedcorrectly)". To take a real life example, consider the use of flow style in [this saltstack YAML definition](https://github.com/saltstack-formulas/mysql-formula/blob/master/mysql/server.sls#L27), which blurs the distinction between flow style and jinja2, confusing the reader.

## Parsing 'dirty' YAML with flow style

To parse YAML with flow style, you can use [dirty load](../../using/alpha/restrictions/loading-dirty-yaml).

## Counterarguments

-
-

strictyaml-1.6.1/docs/src/why/implicit-typing-removed.md000066400000000000000000000104171415074265300233550ustar00rootroot00000000000000
---
title: The Norway Problem - why StrictYAML refuses to do implicit typing and so should you
---

A while back I met an old coworker and he started telling me about this interesting bug he faced:

"So, we started internationalizing the website by creating a config file. We added the UK, Ireland, France and Germany at first."

```yaml
countries:
- GB
- IE
- FR
- DE
```

"This was all fine. However, one day after a quick configuration change all hell broke loose. It turned out that while the UK, France and Germany were all fine, *Norway* was *not*..."

"While the website went down and we were losing money, we chased down a number of loose ends until finally finding the root cause."

"It turned out that if you feed this configuration file into [pyyaml](http://pyyaml.org):"

```yaml
countries:
- GB
- IE
- FR
- DE
- NO
```

"This is what you got in return:"

```python
>>> from yaml import load
>>> load(the_configuration)
{'countries': ['GB', 'IE', 'FR', 'DE', False]}
```

It snows a *lot* in False.
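The mechanism behind this is YAML 1.1's implicit boolean resolution: certain unquoted scalars are silently promoted to booleans. A hand-rolled sketch of that rule (an illustration of the behavior only, not pyyaml's actual implementation):

```python
# Illustrative sketch of YAML 1.1 implicit boolean resolution.
# The spec's boolean table covers all of these unquoted scalars,
# matched case-insensitively.
YAML_1_1_BOOLS = {
    "yes": True, "no": False,
    "on": True, "off": False,
    "true": True, "false": False,
    "y": True, "n": False,
}


def resolve_scalar(token):
    """Return a bool if the token matches the boolean table,
    otherwise keep it as a plain string."""
    return YAML_1_1_BOOLS.get(token.lower(), token)


countries = [resolve_scalar(c) for c in ["GB", "IE", "FR", "DE", "NO"]]
print(countries)  # ['GB', 'IE', 'FR', 'DE', False]
```

Since 'Yes', 'On' and 'N' all hit the same table, quoting is the only way for the YAML writer to opt out - which is exactly the class of surprise StrictYAML removes by never consulting such a table.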
When this is fed to code that expects a string of the form 'NO', the code will usually break, often with a cryptic error. Typically it would be a KeyError from trying to use 'False' as a key in a dict when no such key exists.

It can be "quick fixed" by using quotes - a fix for sure, but kind of a hack - and by that time the damage is done:

```yaml
countries:
- GB
- IE
- FR
- DE
- 'NO'
```

The most tragic aspect of this bug, however, is that it is *intended* behavior according to the [YAML specification](https://github.com/yaml/yaml-spec/tree/spec-1.2). The real fix requires explicitly disregarding the spec - which is why most YAML parsers have this bug. StrictYAML sidesteps this problem by ignoring key parts of the spec, in an attempt to create a "zero surprises" parser. *Everything* is a string by default:

```python
>>> from strictyaml import load
>>> load(the_configuration).data
{'countries': ['GB', 'IE', 'FR', 'DE', 'NO']}
```

## String or float?

Norway is just the tip of the iceberg. The first time this problem hit me, I was maintaining a configuration file of application versions. I had a file like this initially - which caused no issues:

```yaml
python: 3.5.3
postgres: 9.3.0
```

However, if I changed it *very* slightly:

```yaml
python: 3.5.3
postgres: 9.3
```

I started getting type errors because it was parsed like this:

```python
>>> from ruamel.yaml import load
>>> load(versions) == {"python": "3.5.3", "postgres": 9.3}   # oops, those should *both* have been strings
```

Again, this led to type errors in my code. Again, I 'quick fixed' it with quotes. However, the solution I really wanted was:

```python
>>> from strictyaml import load
>>> load(versions).data == {"python": "3.5.3", "postgres": "9.3"}   # that's better
```

## The world's most buggy name

[Christopher Null](http://www.wired.com/2015/11/null) has a name that is notorious for breaking software code - airlines, banks, every bug caused by a programmer who didn't know a type from their elbow has hit him.
YAML, sadly, is no exception:

```yaml
first name: Christopher
surname: Null
```

```python
# Is it okay if we just call you Christopher None instead?
>>> load(name) == {"first name": "Christopher", "surname": None}
```

With StrictYAML:

```python
>>> from strictyaml import load
>>> load(name).data == {"first name": "Christopher", "surname": "Null"}
```

## Type theoretical concerns

Type theory is a popular topic with regards to programming languages, where a well designed type system is regarded (rightly) as a yoke that can catch bugs at an early stage of development, while *poorly* designed type systems provide fertile breeding ground for edge case bugs. (It's equally true that extremely strict type systems require a lot more work upfront, and the law of diminishing returns applies to type strictness - a cogent answer to the question "why is so little software written in Haskell?".)

A less popular, although equally true, idea is the notion that markup languages like YAML have the same issues with types - as demonstrated above.

## User Experience

In a way, type systems can be considered both a mathematical concern and a UX device. In the above, and in most cases, implicit typing represents a major violation of the UX [principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment).

strictyaml-1.6.1/docs/src/why/index.md000066400000000000000000000011671415074265300177050ustar00rootroot00000000000000
---
title: Design Justifications
---

StrictYAML is the result of some carefully considered, although controversial, design decisions. These are justified here.

{% for dirfile in (thisdir.is_not_dir() - thisdir.named("index.md"))|sort() -%}
- [{{ title(dirfile) }}]({{ dirfile.name.splitext()[0] }})
{% endfor %}

If you have seen a relevant counterargument you'd like to see addressed here, please create a pull request and link to it in the relevant document.
If you'd like to write your own rebuttal to any argument raised here, raise a ticket and issue a pull request linking to it at the end of the relevant document.

strictyaml-1.6.1/docs/src/why/node-anchors-and-references-removed.md000066400000000000000000000062441415074265300254750ustar00rootroot00000000000000
---
title: What is wrong with node anchors and references?
---

An example of a snippet of YAML that uses node anchors and references is described on the [YAML wikipedia page](https://en.wikipedia.org/wiki/YAML#Advanced_components):

```yaml
# sequencer protocols for Laser eye surgery
---
- step: &id001             # defines anchor label &id001
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step: &id002
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step: *id001             # refers to the first step (with anchor &id001)
- step: *id002             # refers to the second step
- step:
    <<: *id001
    spotSize: 2mm          # redefines just this key, refers rest from &id001
- step: *id002
```

While the intent of the feature is obvious (it lets you deduplicate code), the effect is to make the markup more or less unreadable to non-programmers. The example above could be refactored to be clearer as follows:

```yaml
# sequencer protocols for Laser eye surgery
---
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 2mm
- step:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm
```

The intent of this document is a lot clearer than the version above - *especially* for non-programmers.
However, it comes at the cost of increased repetition. Between the node/anchor version and this one, I would prefer this one. However, it is still repetitive, and ideally it should be non-repetitive and still clear.

This can be done by refactoring the *structure* of the document and changing the way the application interprets it. For example, instead of representing the data using the schema above, a schema that separates step definitions from actual steps could be used. For example:

```yaml
step definitions:
  large:
    instrument: Lasik 2000
    pulseEnergy: 5.4
    pulseDuration: 12
    repetition: 1000
    spotSize: 1mm
  medium:
    instrument: Lasik 2000
    pulseEnergy: 5.0
    pulseDuration: 10
    repetition: 500
    spotSize: 2mm

steps:
- step: large
- step: medium
- step: large
- step: medium
- step:
    from: large
    except:
      spotSize: 2mm
- step: medium
```

The above document has an entirely different and slightly more complex schema, but it fundamentally represents the same data as the node/anchor version above, in a clearer manner, without duplication.

strictyaml-1.6.1/docs/src/why/not-parse-direct-representations-of-python-objects.md000066400000000000000000000020601415074265300305420ustar00rootroot00000000000000
---
title: Why does StrictYAML not parse direct representations of Python objects?
---

Regular YAML allows the direct representation of Python objects. For example:

```yaml
--- !python/hash:UserObject
email: evilhacker@hacker.com
password: passwordtoset
type: admin
```

If we load this YAML formatted string in, we get a user object. This was how YAML was intended to work, since it allows the ability to write objects to and read them from, say, a database.

By itself, this behavior isn't necessarily capable of enacting a successful attack, so not all code that parses untrusted YAML is insecure, but it can be used, especially in conjunction with metaprogramming, to execute arbitrary code on your system.
This shares a lot in common with the pickle module's behavior, which is why its use with [untrusted input is strongly recommended against in the Python docs](https://docs.python.org/3/library/pickle.html).

This anti-feature led to Ruby on Rails' spectacular [security fail](https://codeclimate.com/blog/rails-remote-code-execution-vulnerability-explained/).
strictyaml-1.6.1/docs/src/why/only-parse-strings-not-files.md000066400000000000000000000016741415074265300242570ustar00rootroot00000000000000---
title: Why does StrictYAML only parse from strings and not files?
---

While other parsers will take strings, file handles and file names, StrictYAML will only parse YAML strings.

This is done deliberately to reduce the module's remit, with the intention of reducing both the potential bug surface and the number of exceptions that StrictYAML has to deal with - things like nonexistent files, file system errors, bad reads, unknown file extensions, etc. become the problem of some other module - ideally one more focused on handling those kinds of issues.

If you want a quick and easy one-liner to get text from a file, I recommend that you pip install path.py and use .text() on the Path object:

```python
>>> from path import Path
>>> from strictyaml import load
>>> parsed_data = load(Path("myfile.yaml").text()).data
>>> print(parsed_data)
[ parsed yaml ]
```

## Counterarguments

-
strictyaml-1.6.1/docs/src/why/speed-not-a-priority.md000066400000000000000000000025671415074265300225740ustar00rootroot00000000000000---
title: Why is parsing speed not a high priority for StrictYAML?
---

JSON and StrictYAML are essentially complementary formats. They both allow a relatively loose representation of data that just contains mappings and sequences. They are serialization formats that are relatively straightforward for both humans and machines to read and write.
The main difference is simply one of degree: JSON is primarily optimized for *machine* readability and writability, while still maintaining human readability. YAML is optimized for *human* readability and writability, while maintaining machine readability and writability.

This means that the two formats are better suited to slightly different applications. For instance, JSON is better suited as a format for use with REST APIs, while YAML is better suited as a format for use by configuration languages and DSLs.

If you are using YAML primarily as a readable medium to express a markup language or represent configuration in, this probably means that 1) what you are reading is probably relatively short (e.g. < 1,000 lines) and 2) it will be read/written infrequently (e.g. once, when a program starts).

For this reason, it is assumed that for most StrictYAML applications, parsing speed is of lower importance than strictness, readability and ease of use.

That being said, any requests that improve parsing or writing speed are welcome.
strictyaml-1.6.1/docs/src/why/syntax-typing-bad.md000066400000000000000000000055611415074265300221600ustar00rootroot00000000000000---
title: What is syntax typing?
---

Explicit syntax typing is the use of syntax to designate the type of some data. It is a feature of the design of most programming languages.

```python
x = "11"  # this is a string
y = 11    # this isn't
```

It isn't limited to programming languages, though. It's a common feature of serialization formats like JSON:

```json
{"x": "11", "y": 42}
```

But not others, like INI:

```ini
server=192.0.2.62
port=143
```

Or CSV:

```csv
server,port
192.0.2.62,143
```

Or StrictYAML:

```yaml
server: 192.0.2.62
port: 143
```

In those cases, it's up to the program - in another file - to decide which part of that data is a string and which part is an integer.

## What does it mean to attach syntax typing to data?

Syntax typing is the explicit prioritization of cohesion over terseness.
It puts type information right next to the data, but in the process this means that if the data appears a *lot*, the type syntax is repeated a lot too.

## When would you sacrifice terseness?

Let's return to Python for a second:

```python
x = "11"  # this is a string
y = 11    # this isn't
```

This helps distinguish the types for the parser, which is useful for JSON, but it also comes with three disadvantages:

- The distinction is subtle and not particularly clear to *non-programmers*, who will not necessarily understand that a directive needs to be given to the parser to avoid it being misinterpreted.
- It's not necessary if the type structure is maintained outside of the markup.
- Verbosity - two extra characters per string makes the markup longer and noisier.

In JSON when being used as a REST API, syntax typing is often an *advantage* - it is explicit to the machine reading the JSON that "11" is a string and 42 is an integer, and it can convert accordingly *in the absence of a schema*.

StrictYAML assumes all values are strings unless the schema explicitly indicates otherwise (e.g. Map(Int(), Int())).

StrictYAML does not require quotation marks for strings that standard YAML would implicitly convert to other types (e.g. yes or 1.5), but it does require quotation marks for strings that are syntactically confusing (e.g. "{ text in curly brackets }").

Standard YAML has explicit syntax typing to explicitly declare strings, although it's confusing as hell to know when it's required and when it is not. For example:

```yaml
a: text                # not necessary
b: "yes"               # necessary
c: "0"                 # necessary
d: "3.5"               # necessary
e: in                  # not necessary
f: out                 # not necessary
g: shake it all about  # not necessary
h: "on"                # necessary
```

Most other configuration language formats also make use of syntax typing. For example:

- [TOML](../../why-not/toml)
- [JSON5](../../why-not/json5)
- [HJSON](../../why-not/hjson)
- [SDLang](../../why-not/sdlang)
- [HOCON](../../why-not/hocon)

[INI](../../why-not/ini) does not have explicit syntax typing, however.
strictyaml-1.6.1/docs/src/why/turing-complete-schema.md000066400000000000000000000023431415074265300231470ustar00rootroot00000000000000---
title: Why does StrictYAML make you define a schema in Python - a Turing-complete language?
---

StrictYAML defines schemas in Python (i.e. Turing-complete) code. For example:

```python
Map({"name": Str(), "email": Str()})
```

Instead of:

```yaml
type: map
mapping:
  "name":
    type: str
    required: yes
  "email":
    type: str
    required: yes
```

There are some trade-offs here:

Schema definition in a non-Turing-complete language like YAML makes the schema programming language independent and gives it more potential for being read and understood by non-programmers.

However, schema definition in a non-Turing-complete language also restricts and makes certain use cases impossible or awkward. Some use cases I came across included:

- Being able to import pycountry's country list and restrict "country: " to valid country names.
- Being able to implement a schema that validated date/time scalar values against the specific date/time parser I wanted.
- Being able to revalidate sections of the document on a 'second pass' that used new data - e.g. a list in one part of the document is restricted to items which come from another part.
## Counterarguments - strictyaml-1.6.1/hitch/000077500000000000000000000000001415074265300150205ustar00rootroot00000000000000strictyaml-1.6.1/hitch/asroot.sh000066400000000000000000000000501415074265300166560ustar00rootroot00000000000000sudo apt-get install libreadline-dev -y strictyaml-1.6.1/hitch/debugrequirements.txt000066400000000000000000000001241415074265300213100ustar00rootroot00000000000000flake8 ipython #==1.2.1 pyzmq path.py q ipykernel sure ensure python-slugify pytest strictyaml-1.6.1/hitch/engine.py000066400000000000000000000152731415074265300166470ustar00rootroot00000000000000from hitchstory import StoryCollection, BaseEngine, exceptions, validate, no_stacktrace_for from hitchstory import GivenDefinition, GivenProperty, InfoDefinition, InfoProperty from templex import Templex from strictyaml import Optional, Str, Map, Int, Bool, Enum, load from path import Path import hitchpylibrarytoolkit from hitchrunpy import ( ExamplePythonCode, HitchRunPyException, ExpectedExceptionMessageWasDifferent, ) CODE_TYPE = Map({"in python 2": Str(), "in python 3": Str()}) | Str() class Engine(BaseEngine): """Python engine for running tests.""" given_definition = GivenDefinition( yaml_snippet=GivenProperty( Str(), document="yaml_snippet:\n```yaml\n{{ yaml_snippet }}\n```" ), yaml_snippet_1=GivenProperty( Str(), document="yaml_snippet_1:\n```yaml\n{{ yaml_snippet_1 }}\n```" ), yaml_snippet_2=GivenProperty( Str(), document="yaml_snippet_2:\n```yaml\n{{ yaml_snippet_2 }}\n```" ), modified_yaml_snippet=GivenProperty( Str(), document="modified_yaml_snippet:\n```yaml\n{{ modified_yaml_snippet }}\n```" ), python_version=GivenProperty(Str()), ruamel_version=GivenProperty(Str()), setup=GivenProperty( Str(), document="```python\n{{ setup }}\n```" ), ) info_definition = InfoDefinition( status=InfoProperty(schema=Enum(["experimental", "stable"])), docs=InfoProperty(schema=Str()), fails_on_python_2=InfoProperty(schema=Bool()), description=InfoProperty(schema=Str()), 
experimental=InfoProperty(schema=Bool()), ) def __init__(self, keypath, python_path=None, rewrite=False, cprofile=False): self.path = keypath self._python_path = python_path self._rewrite = rewrite self._cprofile = cprofile def set_up(self): """Set up your applications and the test environment.""" self.path.profile = self.path.gen.joinpath("profile") if not self.path.profile.exists(): self.path.profile.mkdir() if not self._python_path: self.pylibrary = hitchpylibrarytoolkit.PyLibraryBuild( "strictyaml", self.path ).with_python_version(self.given["python version"])\ .with_packages({"ruamel.yaml": self.given["ruamel version"]}) self.pylibrary.ensure_built() self.python = self.pylibrary.bin.python else: self.python = Path(self._python_path) assert self.python.exists() self.example_py_code = ( ExamplePythonCode(self.python, self.path.gen) .with_code(self.given.get("code", "")) .with_setup_code( self.given.get("setup", "") ) .with_terminal_size(160, 100) .with_strings( yaml_snippet_1=self.given.get("yaml_snippet_1"), yaml_snippet=self.given.get("yaml_snippet"), yaml_snippet_2=self.given.get("yaml_snippet_2"), modified_yaml_snippet=self.given.get("modified_yaml_snippet"), ) ) @no_stacktrace_for(AssertionError) @no_stacktrace_for(HitchRunPyException) @validate( code=Str(), will_output=Map({"in python 2": Str(), "in python 3": Str()}) | Str(), raises=Map({Optional("type"): CODE_TYPE, Optional("message"): CODE_TYPE}), in_interpreter=Bool(), ) def run( self, code, will_output=None, yaml_output=True, raises=None, in_interpreter=False, ): if in_interpreter: if self.given["python version"].startswith("3"): code = "{0}\nprint(repr({1}))".format( "\n".join(code.strip().split("\n")[:-1]), code.strip().split("\n")[-1] ) else: code = "{0}\nprint repr({1})".format( "\n".join(code.strip().split("\n")[:-1]), code.strip().split("\n")[-1] ) to_run = self.example_py_code.with_code(code) if self._cprofile: to_run = to_run.with_cprofile( 
self.path.profile.joinpath("{0}.dat".format(self.story.slug)) ) if raises is None: result = ( to_run.expect_exceptions().run() if raises is not None else to_run.run() ) if will_output is not None: actual_output = "\n".join( [line.rstrip() for line in result.output.split("\n")] ) try: Templex(will_output).assert_match(actual_output) except AssertionError: if self._rewrite: self.current_step.update(**{"will output": actual_output}) else: raise elif raises is not None: differential = False # Difference between Python 2 and Python 3 output? exception_type = raises.get("type") message = raises.get("message") if exception_type is not None: if not isinstance(exception_type, str): differential = True exception_type = ( exception_type["in python 2"] if self.given["python version"].startswith("2") else exception_type["in python 3"] ) if message is not None: if not isinstance(message, str): differential = True message = ( message["in python 2"] if self.given["python version"].startswith("2") else message["in python 3"] ) try: result = to_run.expect_exceptions().run() result.exception_was_raised(exception_type, message) except ExpectedExceptionMessageWasDifferent as error: if self._rewrite and not differential: new_raises = raises.copy() new_raises["message"] = result.exception.message self.current_step.update(raises=new_raises) else: raise def pause(self, message="Pause"): import IPython IPython.embed() def on_success(self): if self._rewrite: self.new_story.save() if self._cprofile: self.python( self.path.key.joinpath("printstats.py"), self.path.profile.joinpath("{0}.dat".format(self.story.slug)), ).run() strictyaml-1.6.1/hitch/glossary.yml000066400000000000000000000002561415074265300174110ustar00rootroot00000000000000ruamelparsed: The output of ruamel.yaml roundtrip parser. strictparsed: Processed ruamelparsed. 
map: undefined seq: undefined regularkey: undefined strictkey: undefined strictyaml-1.6.1/hitch/hitchreqs.in000066400000000000000000000000551415074265300173420ustar00rootroot00000000000000hitchrun>=0.4.0 hitchpylibrarytoolkit>=0.5.1 strictyaml-1.6.1/hitch/hitchreqs.txt000066400000000000000000000077321415074265300175640ustar00rootroot00000000000000appdirs==1.4.4 # via black argcomplete==1.12.2 # via hitchrun backcall==0.2.0 # via ipython black==20.8b1 # via hitchpylibrarytoolkit bleach==3.2.1 # via readme-renderer certifi==2020.11.8 # via requests cffi==1.14.4 # via cryptography chardet==3.0.4 # via requests click==7.1.2 # via # black # hitchrun # pip-tools colorama==0.4.4 # via # hitchrun # hitchstory # prettystack # twine commandlib==0.3.5 # via # hitchbuildpy # hitchrun # icommandlib cryptography==3.2.1 # via secretstorage decorator==4.4.2 # via ipython dirtemplate==0.4.0 # via hitchpylibrarytoolkit docutils==0.16 # via readme-renderer flake8==3.8.4 # via hitchpylibrarytoolkit gitdb==4.0.5 # via gitpython gitpython==3.1.11 # via hitchpylibrarytoolkit hitchbuild==0.6.3 # via # dirtemplate # hitchbuildpy hitchbuildpy==0.5.4 # via hitchpylibrarytoolkit hitchpylibrarytoolkit==0.5.1 # via -r hitchreqs.in hitchrun==0.4.0 # via # -r hitchreqs.in # hitchpylibrarytoolkit hitchrunpy==0.10.0 # via hitchpylibrarytoolkit hitchstory==0.12.1 # via hitchpylibrarytoolkit icommandlib==0.5.0 # via hitchrunpy idna==2.10 # via requests ipython==7.16.1 # via hitchpylibrarytoolkit ipython-genutils==0.2.0 # via traitlets jedi==0.17.2 # via ipython jeepney==0.6.0 # via # keyring # secretstorage jinja2==2.11.2 # via # dirtemplate # hitchrunpy # hitchstory # prettystack kaching==0.4.2 # via hitchpylibrarytoolkit keyring==21.5.0 # via twine markupsafe==1.1.1 # via jinja2 mccabe==0.6.1 # via flake8 mypy-extensions==0.4.3 # via black packaging==20.7 # via bleach parso==0.7.1 # via jedi path==15.0.0 # via path.py path.py==12.5.0 # via # hitchbuild # hitchrun # hitchrunpy # hitchstory # 
pathquery # prettystack pathquery==0.3.0 # via # dirtemplate # hitchbuild # hitchbuildpy # hitchstory pathspec==0.8.1 # via black peewee==3.14.0 # via hitchbuild pep517==0.12.0 # via pip-tools pexpect==4.8.0 # via ipython pickleshare==0.7.5 # via ipython pip-tools==6.4.0 # via hitchrun pkginfo==1.6.1 # via twine prettystack==0.3.0 # via # hitchrun # hitchrunpy # hitchstory prompt-toolkit==3.0.8 # via ipython psutil==5.7.3 # via icommandlib ptyprocess==0.6.0 # via pexpect pycodestyle==2.6.0 # via flake8 pycparser==2.20 # via cffi pyflakes==2.2.0 # via flake8 pygments==2.7.2 # via # ipython # readme-renderer pyparsing==2.4.7 # via packaging pyte==0.8.0 # via icommandlib python-dateutil==2.8.1 # via strictyaml python-slugify==4.0.1 # via # dirtemplate # hitchbuild # hitchstory pyuv==1.4.0 # via icommandlib q==2.6 # via hitchpylibrarytoolkit readme-renderer==28.0 # via twine regex==2020.11.13 # via black requests==2.25.0 # via # requests-toolbelt # twine requests-toolbelt==0.9.1 # via twine rfc3986==1.4.0 # via twine secretstorage==3.3.0 # via keyring six==1.15.0 # via # bleach # cryptography # python-dateutil # readme-renderer smmap==3.0.4 # via gitdb strictyaml==1.5.0 # via # dirtemplate # hitchstory templex==0.2.0 # via hitchpylibrarytoolkit text-unidecode==1.3 # via python-slugify toml==0.10.2 # via black tomli==1.2.2 # via pep517 tqdm==4.54.0 # via twine traitlets==5.0.4 # via ipython twine==3.2.0 # via hitchpylibrarytoolkit typed-ast==1.4.1 # via black typing-extensions==3.7.4.3 # via black urllib3==1.26.2 # via requests uvloop==0.14.0 # via icommandlib wcwidth==0.2.5 # via # prompt-toolkit # pyte webencodings==0.5.1 # via bleach wheel==0.37.0 # via pip-tools # The following packages are considered to be unsafe in a requirements file: # pip # setuptools strictyaml-1.6.1/hitch/key.py000066400000000000000000000136271415074265300161730ustar00rootroot00000000000000from hitchstory import HitchStoryException, StoryCollection from hitchrun import expected from 
commandlib import CommandError from strictyaml import Str, Map, Bool, load from pathquery import pathquery from hitchrun import DIR import dirtemplate import hitchpylibrarytoolkit from engine import Engine PROJECT_NAME = "strictyaml" toolkit = hitchpylibrarytoolkit.ProjectToolkit( "strictyaml", DIR, ) """ ---------------------------- Non-runnable utility methods --------------------------- """ def _storybook(**settings): return StoryCollection( pathquery(DIR.key / "story").ext("story"), Engine(DIR, **settings) ) def _current_version(): return DIR.project.joinpath("VERSION").bytes().decode("utf8").rstrip() def _personal_settings(): settings_file = DIR.key.joinpath("personalsettings.yml") if not settings_file.exists(): settings_file.write_text( ( "engine:\n" " rewrite: no\n" " cprofile: no\n" "params:\n" " python version: 3.7.0\n" ) ) return load( settings_file.bytes().decode("utf8"), Map( { "engine": Map({"rewrite": Bool(), "cprofile": Bool()}), "params": Map({"python version": Str()}), } ), ) def _default_python_version(): return _personal_settings().data["params"]["python version"] """ ----------------- RUNNABLE COMMANDS ----------------- """ @expected(HitchStoryException) def bdd(*keywords): """ Run story matching keywords. """ _storybook().with_params( **{"python version": _default_python_version()} ).only_uninherited().shortcut(*keywords).play() @expected(HitchStoryException) def tver(pyversion, *keywords): """ Run story against specific version of Python - e.g. tver 3.7.0 modify multi line """ _storybook().with_params( **{"python version": pyversion} ).only_uninherited().shortcut(*keywords).play() @expected(HitchStoryException) def rbdd(*keywords): """ Run story matching keywords and rewrite story if code changed. 
""" _storybook(rewrite=True).with_params( **{"python version": _default_python_version()} ).only_uninherited().shortcut(*keywords).play() @expected(HitchStoryException) def regressfile(filename): """ Run all stories in filename 'filename' in python 3.7. """ _storybook().with_params(**{"python version": "3.7.0"}).in_filename( filename ).ordered_by_name().play() @expected(HitchStoryException) def regression(): """ Run regression testing - lint and then run all tests. """ lint() doctests() storybook = _storybook().only_uninherited() storybook.with_params(**{"python version": "3.7.0"}).ordered_by_name().play() @expected(HitchStoryException) def regression_on_python_path(python_path, python_version): """ Run regression tests - e.g. hk regression_on_python_path /usr/bin/python 3.7.0 """ _storybook(python_path=python_path).with_params( **{"python version": python_version} ).only_uninherited().ordered_by_name().play() @expected(hitchpylibrarytoolkit.ToolkitError) def checks(): """ Run all checks ensure linter, code formatter, tests and docgen all run correctly. These checks should prevent code that doesn't have the proper checks run from being merged. """ toolkit.validate_reformatting() toolkit.lint(exclude=["__init__.py", "ruamel"]) toolkit.validate_readmegen(Engine(DIR)) toolkit.validate_docgen(Engine(DIR)) doctests() storybook = _storybook().only_uninherited() storybook.with_params(**{"python version": "3.7.0"}).ordered_by_name().play() def reformat(): """ Reformat using black and then relint. """ toolkit.reformat() def ipython(): """ Run ipython in strictyaml virtualenv. """ DIR.gen.joinpath("example.py").write_text( ("from strictyaml import *\n" "import IPython\n" "IPython.embed()\n") ) from commandlib import Command version = _personal_settings().data["params"]["python version"] Command(DIR.gen.joinpath("py{0}".format(version), "bin", "python"))( DIR.gen.joinpath("example.py") ).run() @expected(CommandError) def lint(): """ Lint project code and hitch code. 
""" toolkit.lint(exclude=["__init__.py", "ruamel"]) def deploy(version): """ Deploy to pypi as specified version. """ toolkit.deploy(version) @expected(dirtemplate.exceptions.DirTemplateException) def docgen(): """ Build documentation. """ toolkit.docgen(Engine(DIR)) @expected(dirtemplate.exceptions.DirTemplateException) def readmegen(): """ Build README.md and CHANGELOG.md. """ toolkit.readmegen(Engine(DIR)) @expected(CommandError) def doctests(): """ Run doctests in utils.py in python 2 and 3. """ for python_version in ["2.7.14", "3.7.0"]: pylibrary = hitchpylibrarytoolkit.PyLibraryBuild( "strictyaml", DIR, ) pylibrary.bin.python( "-m", "doctest", "-v", DIR.project.joinpath(PROJECT_NAME, "utils.py") ).in_dir(DIR.project.joinpath(PROJECT_NAME)).run() @expected(CommandError) def rerun(): """ Rerun last example code block with specified version of Python. """ from commandlib import Command version = _personal_settings().data["params"]["python version"] Command(DIR.gen.joinpath("py{0}".format(version), "bin", "python"))( DIR.gen.joinpath("working", "examplepythoncode.py") ).in_dir(DIR.gen.joinpath("working")).run() @expected(CommandError) def bash(): """ Run bash """ from commandlib import Command Command("bash").run() def build(): import hitchpylibrarytoolkit hitchpylibrarytoolkit.project_build( "strictyaml", DIR, "3.7.0", {"ruamel.yaml": "0.16.5"}, ) strictyaml-1.6.1/hitch/printstats.py000066400000000000000000000002311415074265300176010ustar00rootroot00000000000000import pstats import sys import os if os.path.exists(sys.argv[1]): p = pstats.Stats(sys.argv[1]) p.sort_stats("cumulative") p.print_stats() strictyaml-1.6.1/hitch/story/000077500000000000000000000000001415074265300162005ustar00rootroot00000000000000strictyaml-1.6.1/hitch/story/boolean.story000066400000000000000000000060611415074265300207240ustar00rootroot00000000000000Boolean (Bool): based on: strictyaml docs: scalar/boolean description: | Boolean values can be parsed using a Bool validator. 
It case-insensitively interprets "yes", "true", "1", "on" as "True", "y" and their opposites as False. Different values will trigger a validation error. When updating boolean values on a YAML object with True or False, the roundtripped string version is set to "yes" and "no". To have your boolean values updated to a different yes/no string, update with a string instead - e.g. "on" or "off". given: setup: | from strictyaml import Bool, Str, MapPattern, load from ensure import Ensure schema = MapPattern(Str(), Bool()) yaml_snippet: | a: yes b: true c: on d: 1 e: True f: Y u: n v: False w: 0 x: Off y: FALSE z: no variations: Parse to YAML object: steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({ "a": True, "b": True, "c": True, "d": True, "e": True, "f": True, "u": False, "v": False, "w": False, "x": False, "y": False, "z": False, }) YAML object should resolve to True or False: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["w"]).equals(False) Using .data you can get the actual boolean value parsed: steps: - Run: code: | assert load(yaml_snippet, schema)["a"].data is True .text returns the text of the boolean YAML: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["y"].text).equals("FALSE") Update boolean values with string and bool type: steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = 'no' yaml['b'] = False yaml['c'] = True print(yaml.as_yaml()) will output: |- a: no b: no c: yes d: 1 e: True f: Y u: n v: False w: 0 x: Off y: FALSE z: no Cannot cast boolean to string: steps: - Run: code: str(load(yaml_snippet, schema)["y"]) raises: type: in python 2: exceptions.TypeError in python 3: builtins.TypeError message: |- Cannot cast 'YAML(False)' to str. Use str(yamlobj.data) or str(yamlobj.text) instead. 
Different uninterpretable values raise validation error: steps: - Run: code: | load('a: yâs', schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a boolean value (one of "yes", "true", "on", "1", "y", "no", "false", "off", "0", "n") found arbitrary text in "", line 1, column 1: a: "y\xE2s" ^ (line: 1) strictyaml-1.6.1/hitch/story/bugs.story000066400000000000000000000014131415074265300202410ustar00rootroot00000000000000Revalidation with an or breaks lookup: based on: strictyaml given: yaml_snippet: | x: a: b setup: | from strictyaml import load, Any, Int, Str, Map loose_schema = Map({"x": Any()}) strict_schema = Str() | Map({"a": Str()}) steps: - run: | parsed = load(yaml_snippet, loose_schema) parsed['x'].revalidate(strict_schema) parsed['x']['a'] = "x" assert parsed['x']['a'] == "x" Parsing string with data shouldn't resolve to ruamel data structures: based on: strictyaml given: yaml_snippet: 'x: |\n x' setup: | from strictyaml import load from strictyaml.ruamel.scalarstring import ScalarString steps: - run: | assert not isinstance(load("- >\n hello").data[0], ScalarString) strictyaml-1.6.1/hitch/story/build-yaml-document-from-scratch.story000066400000000000000000000037271415074265300255540ustar00rootroot00000000000000Build a YAML document from scratch in code: docs: howto/build-yaml-document based on: strictyaml description: | YAML documents can be built from combinations of dicts, lists and strings if no schema is used. 
given: setup: | from ensure import Ensure from strictyaml import as_document from collections import OrderedDict # Can also use regular dict if an arbitrary ordering is ok yaml = as_document(OrderedDict( [(u"â", 'yes'), ("b", "hâllo"), ("c", ["1", "2", "3"])] )) variations: Then dump: steps: - Run: code: print(yaml.as_yaml()) will output: |- â: yes b: hâllo c: - 1 - 2 - 3 However, any type that is not a string, dict or list cannot be parsed without a schema: steps: - run: code: | class RandomClass(object): def __repr__(self): return 'some random object' as_document({"x": RandomClass()}) raises: type: strictyaml.exceptions.YAMLSerializationError message: |- 'some random object' is not a string Empty dicts also cannot be serialized without a schema: steps: - run: code: | as_document({'hello': {}}) raises: type: strictyaml.exceptions.YAMLSerializationError message: Empty dicts are not serializable to StrictYAML unless schema is used. Neither can lists: steps: - run: code: | as_document({'hello': []}) raises: type: strictyaml.exceptions.YAMLSerializationError message: Empty lists are not serializable to StrictYAML unless schema is used. You can grab line numbers from the object that is serialized: steps: - Run: code: | Ensure(yaml.start_line).equals(1) strictyaml-1.6.1/hitch/story/commaseparated.story000066400000000000000000000055651415074265300223020ustar00rootroot00000000000000Parsing comma separated items (CommaSeparated): docs: scalar/comma-separated based on: strictyaml description: | Comma-separated values can be validated and parsed using the CommaSeparated validator. Note that the space following the commas is stripped by default when parsed. 
given: setup: | from strictyaml import CommaSeparated, Int, Str, Map, load, as_document from ensure import Ensure int_schema = Map({"a": CommaSeparated(Int())}) str_schema = Map({"a": CommaSeparated(Str())}) variations: Parse as int: given: yaml_snippet: | a: 1, 2, 3 steps: - Run: code: | Ensure(load(yaml_snippet, int_schema)).equals({"a": [1, 2, 3]}) Parse as string: given: yaml_snippet: | a: 1, 2, 3 steps: - Run: code: | Ensure(load(yaml_snippet, str_schema)).equals({"a": ["1", "2", "3"]}) Parse empty comma separated string: given: yaml_snippet: | a: steps: - Run: code: | Ensure(load(yaml_snippet, str_schema)).equals({"a": []}) Invalid int comma separated sequence: given: yaml_snippet: | a: 1, x, 3 steps: - Run: code: load(yaml_snippet, int_schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer in "", line 2, column 1: ^ (line: 2) found arbitrary text in "", line 1, column 1: a: 1, x, 3 ^ (line: 1) Serialize list to comma separated sequence: steps: - Run: code: | print(as_document({"a": [1, 2, 3]}, int_schema).as_yaml()) will output: 'a: 1, 2, 3' Serialize valid string to comma separated sequence: steps: - Run: code: | print(as_document({"a": "1,2,3"}, int_schema).as_yaml()) will output: 'a: 1,2,3' Serialize empty list to comma separated sequence: steps: - Run: code: | print(as_document({"a": []}, int_schema).as_yaml()) will output: 'a:' Serialize invalid string to comma separated sequence: steps: - Run: code: | print(as_document({"a": "1,x,3"}, int_schema).as_yaml()) raises: type: strictyaml.exceptions.YAMLSerializationError message: "'x' not an integer." 
Attempt to serialize neither list nor string raises exception: steps: - Run: code: | as_document({"a": 1}, int_schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: expected string or list, got '1' of type 'int' strictyaml-1.6.1/hitch/story/datetime.story000066400000000000000000000062071415074265300211030ustar00rootroot00000000000000Datetimes (Datetime): docs: scalar/datetime based on: strictyaml description: | Datetime validator parses using the python-dateutil library and returns a python datetime object. given: setup: | from strictyaml import Map, Datetime, YAMLValidationError, load, as_document from collections import OrderedDict from dateutil.tz.tz import tzutc from datetime import datetime from ensure import Ensure schema = Map({ "date": Datetime(), "datetime1": Datetime(), "datetime2": Datetime(), "datetime3": Datetime(), }) equivalent_data = OrderedDict([ ("date", datetime(2016, 10, 22, 0, 0)), ("datetime1", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())), ("datetime2", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())), ("datetime3", datetime(2016, 10, 22, 14, 23, 12, tzinfo=tzutc())), ]) yaml_snippet: | date: 2016-10-22 datetime1: 2016-10-22T14:23:12+00:00 datetime2: 2016-10-22T14:23:12Z datetime3: 20161022T142312Z variations: Each of the four datetimes are valid and parsed: steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals(equivalent_data) .text still returns the original text: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["date"].text).equals("2016-10-22") Non datetimes raise an exception: given: yaml_snippet: | date: 1 datetime1: â datetime2: b datetime3: c steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a datetime found arbitrary text in "", line 2, column 1: datetime1: "\xE2" ^ (line: 2) Datetime objects serialize directly to ISO-8601 format YAML strings: steps: - Run: code: | print(as_document(equivalent_data, 
schema).as_yaml()) will output: |- date: 2016-10-22T00:00:00 datetime1: 2016-10-22T14:23:12+00:00 datetime2: 2016-10-22T14:23:12+00:00 datetime3: 2016-10-22T14:23:12+00:00 Valid datetime strings serialize to YAML: steps: - Run: code: | print(as_document({"a": "2016-10-22"}, Map({"a": Datetime()})).as_yaml()) will output: 'a: 2016-10-22' Serializing invalid datetime string raises exception: steps: - Run: code: | as_document({"a": "x"}, Map({"a": Datetime()})) raises: type: strictyaml.exceptions.YAMLSerializationError message: expected a datetime, got 'x' Serializing non-string and non-datetime object raises exception: steps: - Run: code: | as_document({"a": 55}, Map({"a": Datetime()})) raises: type: strictyaml.exceptions.YAMLSerializationError message: expected a datetime, got '55' of type 'int' strictyaml-1.6.1/hitch/story/decimal.story000066400000000000000000000044721415074265300207070ustar00rootroot00000000000000Decimal numbers (Decimal): docs: scalar/decimal based on: strictyaml description: | StrictYAML parses to a YAML object representing a decimal, not the value directly to give you more flexibility and control over what you can do with the YAML. This is what that can object can do - in many cases if parsed as a decimal, it will behave in the same way. To get a python decimal.Decimal object, use .data. Parsing and validating as a Decimal is best for values which require precision, like prices. 
given: setup: | from strictyaml import Map, Decimal, load from decimal import Decimal as Dec from ensure import Ensure schema = Map({"a": Decimal(), "b": Decimal()}) yaml_snippet: | a: 1.00000000000000000001 b: 5.4135 variations: .data to get Decimal object: steps: - Run: Ensure(type(load(yaml_snippet, schema)["a"].data) is Dec).is_true() Valid: steps: - Run: | Ensure(load(yaml_snippet, schema)).equals({"a": Dec('1.00000000000000000001'), "b": Dec('5.4135')}) Cast to str: steps: - Run: Ensure(str(load(yaml_snippet, schema)['a'])).equals("1.00000000000000000001") Cast to float: steps: - Run: Ensure(float(load(yaml_snippet, schema)["a"])).equals(1.0) Greater than: steps: - Run: Ensure(load(yaml_snippet, schema)["a"] > Dec('1.0')).is_true() Less than which would not work for float: steps: - Run: Ensure(load(yaml_snippet, schema)["a"] < Dec('1.00000000000000000002')).is_true() Cannot cast to bool: steps: - Run: code: bool(load(yaml_snippet, schema)['a']) raises: message: |- Cannot cast 'YAML(1.00000000000000000001)' to bool. Use bool(yamlobj.data) or bool(yamlobj.text) instead. Invalid: given: yaml_snippet: | a: string b: 2 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a decimal found arbitrary text in "", line 1, column 1: a: string ^ (line: 1) strictyaml-1.6.1/hitch/story/dirty-load.story000066400000000000000000000017661415074265300213600ustar00rootroot00000000000000Dirty load: docs: restrictions/loading-dirty-yaml based on: strictyaml description: | StrictYAML refuses to parse flow style and node anchors by default, but since there have been [some requests](https://github.com/crdoconnor/strictyaml/issues/38) to parse flow style, this is now allowed with the "dirty_load" method. If allow_flow_style is True, Map indentation is not checked for consistency, as the indentation level is dependent on the map key length.
given: setup: | from strictyaml import Map, Int, MapPattern, Seq, Str, Any, dirty_load schema = Map({"foo": Map({"a": Any(), "b": Any(), "c": Any()}), "y": MapPattern(Str(), Str()), "z": Seq(Str())}) variations: Flow style mapping: given: yaml_snippet: | foo: { a: 1, b: 2, c: 3 } y: {} z: [] steps: - Run: | assert dirty_load(yaml_snippet, schema, allow_flow_style=True) == {"foo": {"a": "1", "b": "2", "c": "3"}, "y": {}, "z": []} strictyaml-1.6.1/hitch/story/disallow.story000066400000000000000000000117731415074265300211310ustar00rootroot00000000000000Disallowed YAML: docs: restrictions/disallowed-yaml based on: strictyaml description: | StrictYAML parses an opinionated subset of the YAML specification which refuses to parse features which are otherwise valid in standard YAML. For an explanation as to why these features are stripped out of StrictYAML, see the FAQ. Disallowed YAML features raise Disallowed exceptions while syntactically invalid YAML raises ScannerError or ComposerError. Every error inherits from YAMLError. 
given: setup: | from strictyaml import Map, Int, Any, load from strictyaml import TagTokenDisallowed, FlowMappingDisallowed, AnchorTokenDisallowed schema = Map({"x": Map({"a": Any(), "b": Any(), "c": Any()})}) variations: Tag tokens: given: yaml_snippet: | x: a: !!str yes b: !!str 3.5 c: !!str yes steps: - Run: code: load(yaml_snippet, schema, label="disallowed") raises: type: strictyaml.exceptions.TagTokenDisallowed message: |- While scanning in "disallowed", line 2, column 11: a: !!str yes ^ (line: 2) Found disallowed tag tokens (do not specify types in markup) in "disallowed", line 2, column 6: a: !!str yes ^ (line: 2) Flow style sequence: given: yaml_snippet: | [a, b]: [x, y] steps: - Run: code: load(yaml_snippet, schema, label="disallowed") raises: type: strictyaml.exceptions.FlowMappingDisallowed message: |- While scanning in "disallowed", line 1, column 1: [a, b]: [x, y] ^ (line: 1) Found ugly disallowed JSONesque flow mapping (surround with ' and ' to make text appear literally) in "disallowed", line 1, column 2: [a, b]: [x, y] ^ (line: 1) Flow style mapping: description: | To use literally, surround with quotes, e.g. x: '{ a: 1, b: 2, c: 3 }' given: yaml_snippet: | x: { a: 1, b: 2, c: 3 } steps: - Run: code: load(yaml_snippet, schema, label="disallowed") raises: type: strictyaml.exceptions.FlowMappingDisallowed message: |- While scanning in "disallowed", line 1, column 4: x: { a: 1, b: 2, c: 3 } ^ (line: 1) Found ugly disallowed JSONesque flow mapping (surround with ' and ' to make text appear literally) in "disallowed", line 1, column 5: x: { a: 1, b: 2, c: 3 } ^ (line: 1) Node anchors and references: description: | To use literally, surround with quotes, e.g. 
x: '{ a: 1, b: 2, c: 3 }' given: yaml_snippet: | x: a: &node1 3.5 b: 1 c: *node1 steps: - Run: code: load(yaml_snippet, schema, label="disallowed") raises: type: strictyaml.exceptions.AnchorTokenDisallowed message: |- While scanning in "disallowed", line 2, column 6: a: &node1 3.5 ^ (line: 2) Found confusing disallowed anchor token (surround with ' and ' to make text appear literally) in "disallowed", line 2, column 12: a: &node1 3.5 ^ (line: 2) Syntactically invalid YAML: description: | To use literally, surround with quotes, e.g. x: '{ a: 1, b: 2, c: 3 }' given: yaml_snippet: | - invalid string steps: - Run: code: load(yaml_snippet, schema, label="disallowed") raises: type: strictyaml.ruamel.scanner.ScannerError message: |- while scanning a simple key in "disallowed", line 2, column 1: string ^ (line: 2) could not find expected ':' in "disallowed", line 3, column 1: ^ (line: 3) Mixed space indentation: description: | You must use consistent spacing given: yaml_snippet: | item: two space indent: 2 item two: four space indent: 2 steps: - Run: code: load(yaml_snippet, label="disallowed") raises: type: strictyaml.exceptions.InconsistentIndentationDisallowed message: "While parsing\n in \"disallowed\", line 4, column 5:\n \ \ four space indent: 2\n ^ (line: 4)\nFound mapping with indentation\ \ inconsistent with previous mapping\n in \"disallowed\", line 5, column\ \ 1:\n \n ^ (line: 5)" strictyaml-1.6.1/hitch/story/duplicatekeys.story000066400000000000000000000034441415074265300221550ustar00rootroot00000000000000Duplicate keys: docs: restrictions/duplicate-keys based on: strictyaml description: | Duplicate keys are allowed in regular YAML - as parsed by pyyaml, ruamel.yaml and poyo: Not only is it unclear whether x should be "cow" or "bull" (the parser will decide 'bull', but did you know that?), if there are 200 lines between x: cow and x: bull, a user might very likely change the *first* x and erroneously believe that the resulting value of x has been changed - 
when it has not. In order to avoid all possible confusion, StrictYAML will simply refuse to parse this and will only accept associative arrays where all of the keys are unique. It will throw a DuplicateKeysDisallowed exception. given: setup: | from strictyaml import load, DuplicateKeysDisallowed yaml_snippet: | a: cow a: bull variations: Nameless exception: steps: - Run: code: load(yaml_snippet) raises: type: strictyaml.exceptions.DuplicateKeysDisallowed message: |- While parsing in "", line 2, column 1: a: bull ^ (line: 2) Duplicate key 'a' found in "", line 2, column 2: a: bull ^ (line: 2) Named exception: steps: - Run: code: load(yaml_snippet, label="mylabel") raises: type: strictyaml.exceptions.DuplicateKeysDisallowed message: |- While parsing in "mylabel", line 2, column 1: a: bull ^ (line: 2) Duplicate key 'a' found in "mylabel", line 2, column 2: a: bull ^ (line: 2) strictyaml-1.6.1/hitch/story/email-url.story000066400000000000000000000021221415074265300211660ustar00rootroot00000000000000Email and URL validators: based on: strictyaml docs: scalar/email-and-url description: | StrictYAML can validate emails (using a simplified regex) and URLs. 
given: setup: | from strictyaml import Email, Url, Map, load from ensure import Ensure schema = Map({"a": Email(), "b": Url()}) variations: Parsed: given: yaml_snippet: | a: billg@microsoft.com b: http://www.twitter.com/@realDonaldTrump steps: - Run: | Ensure(load(yaml_snippet, schema)).equals({"a": "billg@microsoft.com", "b": "http://www.twitter.com/@realDonaldTrump"}) Exception: given: yaml_snippet: | a: notanemail b: notaurl steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an email address found non-matching string in "", line 1, column 1: a: notanemail ^ (line: 1) strictyaml-1.6.1/hitch/story/empty.story000066400000000000000000000052121415074265300204400ustar00rootroot00000000000000Empty key validation: docs: scalar/empty based on: strictyaml description: | Sometimes you may wish to not specify a value or specify that it does not exist. Using StrictYAML you can accept this as a valid value and have it parsed to one of three things - None, {} (empty dict), or [] (empty list). 
given: setup: | from strictyaml import Map, Str, Enum, EmptyNone, EmptyDict, EmptyList, NullNone, load, as_document from ensure import Ensure yaml_snippet: 'a:' variations: EmptyNone with empty value: steps: - Run: | Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Enum(["A", "B",])}))).equals({"a": None}) EmptyDict: steps: - Run: | Ensure(load(yaml_snippet, Map({"a": EmptyDict() | Enum(["A", "B",])}))).equals({"a": {}}) EmptyList: steps: - Run: | Ensure(load(yaml_snippet, Map({"a": EmptyList() | Enum(["A", "B",])}))).equals({"a": []}) NullNone: steps: - Run: | Ensure(load("a: null", Map({"a": NullNone() | Enum(["A", "B",])}))).equals({"a": None}) EmptyNone no empty value: given: yaml_snippet: 'a: A' steps: - Run: | Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Enum(["A", "B",])}))).equals({"a": "A"}) Combine Str with EmptyNone and Str is evaluated first: steps: - Run: | Ensure(load(yaml_snippet, Map({"a": Str() | EmptyNone()}))).equals({"a": ""}) Combine EmptyNone with Str and Str is evaluated last: steps: - Run: | Ensure(load(yaml_snippet, Map({"a": EmptyNone() | Str()}))).equals({"a": None}) Non-empty value: given: yaml_snippet: 'a: C' steps: - Run: code: | load(yaml_snippet, Map({"a": Enum(["A", "B",]) | EmptyNone()})) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an empty value found arbitrary text in "", line 1, column 1: a: C ^ (line: 1) Serialize empty dict: steps: - Run: code: | print(as_document({"a": {}}, Map({"a": EmptyDict() | Str()})).as_yaml()) will output: 'a:' Serialize empty list: steps: - Run: code: | print(as_document({"a": []}, Map({"a": EmptyList() | Str()})).as_yaml()) will output: 'a:' Serialize None: steps: - Run: code: | print(as_document({"a": None}, Map({"a": EmptyNone() | Str()})).as_yaml()) will output: 'a:' strictyaml-1.6.1/hitch/story/enum-with-item-validation.story000066400000000000000000000035361415074265300243120ustar00rootroot00000000000000Enum with item validation: docs: 
compound/map-pattern based on: strictyaml description: | See also: enum validation. Your enums can be a transformed string or something other than a string if you use an item validator. given: setup: | from strictyaml import Map, Enum, Int, MapPattern, YAMLValidationError, load from ensure import Ensure schema = Map({"a": Enum([1, 2, 3], item_validator=Int())}) variations: Parse correctly: given: yaml_snippet: 'a: 1' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1}) Invalid because D is not an integer: given: yaml_snippet: 'a: D' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 1, column 1: a: D ^ (line: 1) Invalid because 4 is not in enum: given: yaml_snippet: 'a: 4' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: "when expecting one of: 1, 2, 3\nfound an arbitrary integer\n\ \ in \"\", line 1, column 1:\n a: '4'\n ^ (line:\ \ 1)" Invalid because blank string is not in enum: given: yaml_snippet: 'a:' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: "when expecting an integer\nfound a blank string\n in \"\", line 1, column 1:\n a: ''\n ^ (line: 1)" strictyaml-1.6.1/hitch/story/enum.story000066400000000000000000000052561415074265300202560ustar00rootroot00000000000000Enumerated scalars (Enum): docs: scalar/enum based on: strictyaml description: | StrictYAML allows you to ensure that a scalar value can only be one of a set number of items. It will throw an exception if any strings not in the list are found. 
given: setup: | from strictyaml import Map, Enum, MapPattern, YAMLValidationError, load from collections import OrderedDict from ensure import Ensure schema = Map({"a": Enum(["A", "B", "C"])}) variations: Valid because it contains 'A': given: yaml_snippet: 'a: A' steps: - Run: | Ensure(load(yaml_snippet, schema)).equals({"a": "A"}) Get .data from enum: given: yaml_snippet: 'a: A' steps: - Run: | assert isinstance(load(yaml_snippet, schema)['a'].data, str) Valid because it contains 'B': given: yaml_snippet: 'a: B' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": "B"}) Valid because it contains 'C': given: yaml_snippet: 'a: C' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": "C"}) Invalid because D is not in enum: given: yaml_snippet: 'a: D' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting one of: A, B, C found arbitrary text in "", line 1, column 1: a: D ^ (line: 1) Invalid because blank string is not in enum: given: yaml_snippet: 'a:' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting one of: A, B, C found a blank string in "", line 1, column 1: a: '' ^ (line: 1) Successful serialization: given: yaml_snippet: 'a: A' steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = "B" print(yaml.as_yaml()) will output: 'a: B' Invalid serialization: given: yaml_snippet: 'a: A' steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = "D" print(yaml.as_yaml()) raises: type: strictyaml.exceptions.YAMLSerializationError message: "Got 'D' when expecting one of: A, B, C" strictyaml-1.6.1/hitch/story/fixed-sequence.story000066400000000000000000000044011415074265300222060ustar00rootroot00000000000000Fixed length sequences (FixedSeq): docs: compound/fixed-length-sequences based on: strictyaml description: | Sequences of fixed length can be validated with a series 
of different (or the same) types. given: setup: | from strictyaml import FixedSeq, Str, Map, Int, Float, YAMLValidationError, load from ensure import Ensure schema = FixedSeq([Int(), Map({"x": Str()}), Float()]) variations: Equivalent list: given: yaml_snippet: | - 1 - x: 5 - 2.5 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals([1, {"x": "5"}, 2.5, ]) Invalid list 1: given: yaml_snippet: | a: 1 b: 2 c: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a sequence of 3 elements in "", line 1, column 1: a: '1' ^ (line: 1) found a mapping in "", line 3, column 1: c: '3' ^ (line: 3) Invalid list 2: given: yaml_snippet: | - 2 - a - a: - 1 - 2 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a mapping found arbitrary text in "", line 2, column 1: - a ^ (line: 2) Invalid list 3: given: yaml_snippet: | - 1 - a steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a sequence of 3 elements in "", line 1, column 1: - '1' ^ (line: 1) found a sequence of 2 elements in "", line 2, column 1: - a ^ (line: 2) strictyaml-1.6.1/hitch/story/float.story000066400000000000000000000110621415074265300204070ustar00rootroot00000000000000Floating point numbers (Float): docs: scalar/float based on: strictyaml description: | StrictYAML parses to a YAML object representing a decimal - e.g. YAML(1.0000000000000001) To get a python float literal, use .data. Parsing and validating as a Decimal is best for values which require precision, but float is better for values for which precision is not required. 
given: setup: | from math import isnan, isinf from strictyaml import Map, MapPattern, Str, Float, Bool, load, as_document from collections import OrderedDict from ensure import Ensure schema = Map({"a": Float(), "b": Float()}) yaml_snippet: | a: 1.00000000000000000001 b: 5.4135 variations: Use .data to get float type: steps: - Run: code: | Ensure(type(load(yaml_snippet, schema)["a"].data)).equals(float) Equal to equivalent float which is different number: steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1.0, "b": 5.4135}) Cast to str: steps: - Run: code: | Ensure(str(load(yaml_snippet, schema)["a"])).equals("1.0") Cast to float: steps: - Run: code: | Ensure(float(load(yaml_snippet, schema)["a"])).equals(1.0) Greater than: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["a"] > 0).is_true() Less than: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["a"] < 0).is_false() Has NaN values: given: yaml_snippet: | a: nan b: .NaN steps: - Run: code: | Ensure(isnan(load(yaml_snippet, schema)["a"].data)).is_true() Ensure(isnan(load(yaml_snippet, schema)["b"].data)).is_true() Has infinity values: given: yaml_snippet: | a: -.Inf b: INF steps: - Run: code: | Ensure(isinf(load(yaml_snippet, schema)["a"].data)).is_true() Ensure(isinf(load(yaml_snippet, schema)["b"].data)).is_true() Has underscores: given: yaml_snippet: | a: 10_000_000.5 b: 10_0_0.2_5 steps: - Run: code: | Ensure(load(yaml_snippet, schema).data).equals({"a": 10000000.5, "b": 1000.25}) Cannot cast to bool: steps: - Run: code: bool(load(yaml_snippet, schema)['a']) raises: message: |- Cannot cast 'YAML(1.0)' to bool. Use bool(yamlobj.data) or bool(yamlobj.text) instead. 
Cannot parse non-float: given: yaml_snippet: | a: string b: 2 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a float found arbitrary text in "", line 1, column 1: a: string ^ (line: 1) Serialize successfully: steps: - Run: code: print(as_document(OrderedDict([("a", 3.5), ("b", "2.1")]), schema).as_yaml()) will output: |- a: 3.5 b: 2.1 Serialize successfully with NaN: steps: - Run: code: print(as_document(OrderedDict([("a", 3.5), ("b", float("nan"))]), schema).as_yaml()) will output: |- a: 3.5 b: nan Serialize successfully with infinity: steps: - Run: code: print(as_document(OrderedDict([("a", float("inf")), ("b", float("-inf"))]), schema).as_yaml()) will output: |- a: inf b: -inf Serialization failure: steps: - Run: code: as_document(OrderedDict([("a", "x"), ("b", "2.1")]), schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: when expecting a float, got 'x' Float as key: steps: - Run: code: | document = as_document(OrderedDict([("3.5", "a"), ("2.1", "c")]), MapPattern(Float(), Str())) print(document.data[3.5]) print(document.data[2.1]) will output: |- a c Float or bool: steps: - Run: code: | document = as_document({"a": True}, Map({"a": Float() | Bool()})) print(document.as_yaml()) will output: |- a: yes strictyaml-1.6.1/hitch/story/handle-exceptions.story000066400000000000000000000016301415074265300227140ustar00rootroot00000000000000Labeling exceptions: docs: howto/label-exceptions based on: strictyaml description: | When raising exceptions, you can add a label that will replace '<unknown>' with whatever you want.
given: setup: | from strictyaml import Map, Int, load, YAMLValidationError yaml_snippet: | a: 1 b: - 1 - 2 variations: Label myfilename: steps: - Run: code: | load(yaml_snippet, Map({"a": Int(), "b": Map({"x": Int(), "y": Int()})}), label="myfilename") raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a mapping in "myfilename", line 2, column 1: b: ^ (line: 2) found a sequence in "myfilename", line 4, column 1: - '2' ^ (line: 4) strictyaml-1.6.1/hitch/story/map-combined.story000066400000000000000000000073431415074265300216440ustar00rootroot00000000000000Mappings combining defined and undefined keys (MapCombined): docs: compound/map-combined based on: strictyaml experimental: yes description: | When you wish to support arbitrary optional keys in some mappings (i.e. to specify some required keys in the schema, but allow any additional ones on top of that), you use a MapCombined. See https://github.com/crdoconnor/strictyaml/issues/148#issuecomment-861007657 given: setup: | from strictyaml import Any, Int, MapCombined, Optional, Str, load from ensure import Ensure schema = MapCombined( { "required": Str(), Optional("foo"): Int(), }, Str(), Any(), ) variations: "Optional is present": given: yaml_snippet: | required: Hello World foo: 42 bar: 42 steps: - Run: | Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "foo": 42, "bar": "42", } ) "Optional is absent": given: yaml_snippet: | required: Hello World bar: 42 steps: - Run: | Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "bar": "42", } ) "Multiple undefined": given: yaml_snippet: | required: Hello World bar: 42 baz: forty two steps: - Run: | Ensure(load(yaml_snippet, schema).data).equals( { "required": "Hello World", "bar": "42", "baz": "forty two", } ) "Required is absent": given: yaml_snippet: | bar: 42 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing 
a mapping required key(s) 'required' not found in "", line 1, column 1: bar: '42' ^ (line: 1) "Undefined of invalid type": given: setup: | from strictyaml import Any, Int, MapCombined, Optional, Str, load from ensure import Ensure schema = MapCombined( { "required": Str(), }, Str(), Int(), ) yaml_snippet: | required: Hello World bar: forty two steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 2, column 1: bar: forty two ^ (line: 2) "Invalid key type": given: setup: | from strictyaml import Any, Int, MapCombined, Optional, Str, load from ensure import Ensure schema = MapCombined( { "1": Str(), }, Int(), Str(), ) yaml_snippet: | 1: Hello World not_an_integer: 42 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 2, column 1: not_an_integer: '42' ^ (line: 2) strictyaml-1.6.1/hitch/story/map-with-key-validator.story000066400000000000000000000053211415074265300236020ustar00rootroot00000000000000Mapping with defined keys and a custom key validator (Map): docs: compound/mapping-with-slug-keys based on: strictyaml experimental: yes description: | A typical mapping except that the key values are determined by the value provided by the validator. 
given: setup: | from collections import OrderedDict from strictyaml import Map, Optional, Str, Seq, load, ScalarValidator from ensure import Ensure # This example uses slugify from the "python-slugify" package from slugify import slugify class Slug(ScalarValidator): def validate_scalar(self, chunk): return slugify(unicode(chunk.contents)) schema = Map({ "name": Str(), Optional("country-code"): Str(), "dial-code": Str(), "official-languages": Seq(Str()) }, key_validator=Slug()) yaml_snippet: | Name: United Kingdom country-code: GB DIAL CODE: +44 official languages: - English - Welsh steps: - Run: | Ensure(load(yaml_snippet, schema).data).equals( { "name": "United Kingdom", "country-code": "GB", "dial-code": "+44", "official-languages": ["English", "Welsh"], } ) Slug key validator revalidation bug: based on: Mapping with defined keys and a custom key validator (Map) steps: - Run: | yaml = load(yaml_snippet, schema) yaml.revalidate(schema) Slug key validation getitem setitem and delitem: description: | You can set properties on slug key validated by using a key that turns into the same slug as the text key. E.g. DIAL CODE -> dial-code dial code -> dial-code Therefore treated as the same key. 
based on: Mapping with defined keys and a custom key validator (Map) variations: Getting: steps: - Run: code: | yaml = load(yaml_snippet, schema) Ensure(yaml['dial code']).equals("+44") Setting: steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml.revalidate(schema) yaml['dial-code'] = "+48" print(yaml.as_yaml()) will output: |- Name: United Kingdom country-code: GB DIAL CODE: +48 official languages: - English - Welsh Deleting: steps: - Run: code: | yaml = load(yaml_snippet, schema) del yaml['dial code'] print(yaml.as_yaml()) will output: |- Name: United Kingdom country-code: GB official languages: - English - Welsh strictyaml-1.6.1/hitch/story/map.story000066400000000000000000000111551415074265300200620ustar00rootroot00000000000000Mappings with defined keys (Map): docs: compound/mapping based on: strictyaml description: | Mappings of one value to another are represented by : in YAML and parsed as python dicts. Using StrictYAML's 'Map' you can validate that a mapping contains the right keys and the right *type* of values. Note: for mappings where you don't know the exact names of the keys in advance but you do know the type, use MapPattern. 
given: setup: | from collections import OrderedDict from strictyaml import Map, Int, load, as_document from collections import OrderedDict from ensure import Ensure schema = Map({"a": Int(), "b": Int(), "c": Int()}) schema_2 = Map({u"â": Int(), "b": Int(), "c": Int()}) yaml_snippet: | â: 1 b: 2 c: 3 variations: one key mapping: given: yaml_snippet: 'x: 1' steps: - Run: | Ensure(load(yaml_snippet, Map({"x": Int()})).data).equals(OrderedDict([('x', 1)])) key value: steps: - Run: | Ensure(load(yaml_snippet, schema_2)[u'â']).equals(1) get item key not found: steps: - Run: code: load(yaml_snippet, schema_2)['keynotfound'] raises: message: "'keynotfound'" cannot use .text: steps: - Run: code: load(yaml_snippet, schema_2).text raises: type: in python 3: builtins.TypeError in python 2: exceptions.TypeError message: in python 3: "YAML({'â': 1, 'b': 2, 'c': 3}) is a mapping, has no text value." in python 2: "YAML({u'\\xe2': 1, 'b': 2, 'c': 3}) is a mapping, has no text value." parse snippet where key is not found in schema: given: yaml_snippet: | a: 1 b: 2 â: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping unexpected key not in schema 'â' in "", line 3, column 1: "\xE2": '3' ^ (line: 3) sequence not expected when parsing: given: yaml_snippet: | - 1 - 2 - 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a mapping in "", line 1, column 1: - '1' ^ (line: 1) found a sequence in "", line 3, column 1: - '3' ^ (line: 3) List not expected when serializing: steps: - Run: code: as_document([1, 2, 3], schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: Expected a dict, found '[1, 2, 3]' Empty dict not valid when serializing: steps: - Run: code: as_document({}, schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: "Expected a non-empty dict, found an empty dict.\nUse 
EmptyDict\ \ validator to serialize empty dicts." Unexpected key: given: yaml_snippet: | a: 1 b: 2 c: 3 d: 4 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping unexpected key not in schema 'd' in "", line 4, column 1: d: '4' ^ (line: 4) required key not found: given: yaml_snippet: | a: 1 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping required key(s) 'b', 'c' not found in "", line 1, column 1: a: '1' ^ (line: 1) iterator: given: yaml_snippet: | a: 1 b: 2 c: 3 steps: - Run: | assert [item for item in load(yaml_snippet, schema)] == ["a", "b", "c"] serialize: steps: - Run: | assert as_document(OrderedDict([(u"â", 1), ("b", 2), ("c", 3)]), schema_2).as_yaml() == yaml_snippet strictyaml-1.6.1/hitch/story/map.toml000066400000000000000000000160351415074265300176570ustar00rootroot00000000000000["Mappings with defined keys (Map)"] docs = "compound/mapping" "based on" = "strictyaml" description = "Mappings of one value to another are represented by : in YAML\nand parsed as python dicts.\n\nUsing StrictYAML's 'Map' you can validate that a mapping\ncontains the right keys and the right *type* of values.\n\nNote: for mappings where you don't know the exact names of\nthe keys in advance but you do know the type, use MapPattern.\n" ["Mappings with defined keys (Map)".given] setup = "from collections import OrderedDict\nfrom strictyaml import Map, Int, load, as_document\nfrom collections import OrderedDict\nfrom ensure import Ensure\n\nschema = Map({\"a\": Int(), \"b\": Int(), \"c\": Int()})\n\nschema_2 = Map({u\"â\": Int(), \"b\": Int(), \"c\": Int()})\n" yaml_snippet = "â: 1\nb: 2\nc: 3\n" ["Mappings with defined keys (Map)".variations] ["Mappings with defined keys (Map)".variations."one key mapping"] ["Mappings with defined keys (Map)".variations."one key mapping".given] yaml_snippet = "x: 1" 
[["Mappings with defined keys (Map)".variations."one key mapping".steps]] Run = "Ensure(load(yaml_snippet, Map({\"x\": Int()})).data).equals(OrderedDict([('x', 1)]))\n" ["Mappings with defined keys (Map)".variations."key value"] [["Mappings with defined keys (Map)".variations."key value".steps]] Run = "Ensure(load(yaml_snippet, schema_2)[u'â']).equals(1)\n" ["Mappings with defined keys (Map)".variations."get item key not found"] [["Mappings with defined keys (Map)".variations."get item key not found".steps]] ["Mappings with defined keys (Map)".variations."get item key not found".steps.Run] code = "load(yaml_snippet, schema_2)['keynotfound']" ["Mappings with defined keys (Map)".variations."get item key not found".steps.Run.raises] message = "'keynotfound'" ["Mappings with defined keys (Map)".variations."cannot use .text"] [["Mappings with defined keys (Map)".variations."cannot use .text".steps]] ["Mappings with defined keys (Map)".variations."cannot use .text".steps.Run] code = "load(yaml_snippet, schema_2).text" ["Mappings with defined keys (Map)".variations."cannot use .text".steps.Run.raises] ["Mappings with defined keys (Map)".variations."cannot use .text".steps.Run.raises.type] "in python 3" = "builtins.TypeError" "in python 2" = "exceptions.TypeError" ["Mappings with defined keys (Map)".variations."cannot use .text".steps.Run.raises.message] "in python 3" = "YAML(OrderedDict([('â', 1), ('b', 2), ('c', 3)])) is a mapping, has no text value." "in python 2" = "YAML(OrderedDict([(u'\\xe2', 1), ('b', 2), ('c', 3)])) is a mapping, has no text value." 
["Mappings with defined keys (Map)".variations."parse snippet where key is not found in schema"] ["Mappings with defined keys (Map)".variations."parse snippet where key is not found in schema".given] yaml_snippet = "a: 1\nb: 2\nâ: 3 \n" [["Mappings with defined keys (Map)".variations."parse snippet where key is not found in schema".steps]] ["Mappings with defined keys (Map)".variations."parse snippet where key is not found in schema".steps.Run] code = "load(yaml_snippet, schema)" ["Mappings with defined keys (Map)".variations."parse snippet where key is not found in schema".steps.Run.raises] type = "strictyaml.exceptions.YAMLValidationError" message = "while parsing a mapping\nunexpected key not in schema 'â'\n in \"\", line 3, column 1:\n \"\\xE2\": '3'\n ^ (line: 3)" ["Mappings with defined keys (Map)".variations."sequence not expected when parsing"] ["Mappings with defined keys (Map)".variations."sequence not expected when parsing".given] yaml_snippet = "- 1\n- 2\n- 3 \n" [["Mappings with defined keys (Map)".variations."sequence not expected when parsing".steps]] ["Mappings with defined keys (Map)".variations."sequence not expected when parsing".steps.Run] code = "load(yaml_snippet, schema)" ["Mappings with defined keys (Map)".variations."sequence not expected when parsing".steps.Run.raises] type = "strictyaml.exceptions.YAMLValidationError" message = "when expecting a mapping\n in \"\", line 1, column 1:\n - '1'\n ^ (line: 1)\nfound a sequence\n in \"\", line 3, column 1:\n - '3'\n ^ (line: 3)" ["Mappings with defined keys (Map)".variations."List not expected when serializing"] [["Mappings with defined keys (Map)".variations."List not expected when serializing".steps]] ["Mappings with defined keys (Map)".variations."List not expected when serializing".steps.Run] code = "as_document([1, 2, 3], schema)" ["Mappings with defined keys (Map)".variations."List not expected when serializing".steps.Run.raises] type = "strictyaml.exceptions.YAMLSerializationError" 
message = "Expected a dict, found '[1, 2, 3]'" ["Mappings with defined keys (Map)".variations."Empty dict not valid when serializing"] [["Mappings with defined keys (Map)".variations."Empty dict not valid when serializing".steps]] ["Mappings with defined keys (Map)".variations."Empty dict not valid when serializing".steps.Run] code = "as_document({}, schema)" ["Mappings with defined keys (Map)".variations."Empty dict not valid when serializing".steps.Run.raises] type = "strictyaml.exceptions.YAMLSerializationError" message = "Expected a non-empty dict, found an empty dict.\nUse EmptyDict validator to serialize empty dicts." ["Mappings with defined keys (Map)".variations."Unexpected key"] ["Mappings with defined keys (Map)".variations."Unexpected key".given] yaml_snippet = "a: 1\nb: 2\nc: 3\nd: 4\n" [["Mappings with defined keys (Map)".variations."Unexpected key".steps]] ["Mappings with defined keys (Map)".variations."Unexpected key".steps.Run] code = "load(yaml_snippet, schema)" ["Mappings with defined keys (Map)".variations."Unexpected key".steps.Run.raises] type = "strictyaml.exceptions.YAMLValidationError" message = "while parsing a mapping\nunexpected key not in schema 'd'\n in \"\", line 4, column 1:\n d: '4'\n ^ (line: 4)" ["Mappings with defined keys (Map)".variations."required key not found"] ["Mappings with defined keys (Map)".variations."required key not found".given] yaml_snippet = "a: 1\n" [["Mappings with defined keys (Map)".variations."required key not found".steps]] ["Mappings with defined keys (Map)".variations."required key not found".steps.Run] code = "load(yaml_snippet, schema)" ["Mappings with defined keys (Map)".variations."required key not found".steps.Run.raises] type = "strictyaml.exceptions.YAMLValidationError" message = "while parsing a mapping\nrequired key(s) 'b', 'c' not found\n in \"\", line 1, column 1:\n a: '1'\n ^ (line: 1)" ["Mappings with defined keys (Map)".variations.iterator] ["Mappings with defined keys 
(Map)".variations.iterator.given] yaml_snippet = "a: 1\nb: 2\nc: 3\n" [["Mappings with defined keys (Map)".variations.iterator.steps]] Run = "assert [item for item in load(yaml_snippet, schema)] == [\"a\", \"b\", \"c\"]\n" ["Mappings with defined keys (Map)".variations.serialize] [["Mappings with defined keys (Map)".variations.serialize.steps]] Run = "assert as_document(OrderedDict([(u\"â\", 1), (\"b\", 2), (\"c\", 3)]), schema_2).as_yaml() == yaml_snippet\n" strictyaml-1.6.1/hitch/story/mappattern.story000066400000000000000000000101071415074265300214540ustar00rootroot00000000000000Mappings with arbitrary key names (MapPattern): docs: compound/map-pattern based on: strictyaml description: | When you wish to let the user define the key names in a mapping and only specify what type the keys are, use a MapPattern. When you wish to specify the exact key name, use the 'Map' validator instead. given: setup: | from strictyaml import MapPattern, Int, Float, Str, Any, Seq, YAMLValidationError, load from ensure import Ensure schema = MapPattern(Str(), Int()) variations: Equivalence 1: given: yaml_snippet: | â: 1 b: 2 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({u"â": 1, "b": 2}) Equivalence 2: given: yaml_snippet: | a: 1 c: 3 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1, "c": 3}) Equivalence 3: given: yaml_snippet: | a: 1 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1, }) With floats and ints: given: yaml_snippet: | 10.25: 23 20.33: 76 steps: - Run: code: | Ensure(load(yaml_snippet, MapPattern(Float(), Int())).data).equals({10.25: 23, 20.33: 76}) With Int and List: given: yaml_snippet: | 1: - ABC 2: - DEF steps: - Run: code: | Ensure(load(yaml_snippet, MapPattern(Int(), Seq(Str()))).data).equals({1: ["ABC"], 2: ["DEF"]}) Invalid 1: given: yaml_snippet: | b: b steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an
integer found arbitrary text in "", line 1, column 1: b: b ^ (line: 1) Invalid 2: given: yaml_snippet: | a: a b: 2 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 1, column 1: a: a ^ (line: 1) More than the maximum number of keys: given: yaml_snippet: | â: 1 b: 2 steps: - Run: code: load(yaml_snippet, MapPattern(Str(), Int(), maximum_keys=1)) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping in "", line 1, column 1: "\xE2": '1' ^ (line: 1) expected a maximum of 1 key, found 2. in "", line 2, column 1: b: '2' ^ (line: 2) Fewer than the minimum number of keys: given: yaml_snippet: | â: 1 steps: - Run: code: load(yaml_snippet, MapPattern(Str(), Int(), minimum_keys=2)) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping expected a minimum of 2 keys, found 1. in "", line 1, column 1: "\xE2": '1' ^ (line: 1) Invalid with non-ascii: given: yaml_snippet: | a: 1 b: yâs c: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 2, column 1: b: "y\xE2s" ^ (line: 2) strictyaml-1.6.1/hitch/story/mapping-representation.story000066400000000000000000000045161415074265300240030ustar00rootroot00000000000000Using a YAML object of a parsed mapping: docs: compound/mapping-yaml-object based on: strictyaml description: | When a YAML document with mappings is parsed, it is not parsed as a dict but as a YAML object which behaves very similarly to a dict, but with some extra capabilities. You can use .items(), .keys(), .values(), look up items with square bracket notation, .get(key, with_default_if_nonexistent) and use "x in y" notation to determine key membership. To retrieve the equivalent dict (containing just other dicts, lists and strings/ints/etc.) use .data. 
given: setup: | from strictyaml import Map, Int, load from ensure import Ensure schema = Map({"a": Int(), "b": Int(), "c": Int()}) yaml_snippet: | a: 1 b: 2 c: 3 variations: .is_mapping(): steps: - Run: code: | Ensure(load(yaml_snippet, schema).is_mapping()).is_true() Equivalence with equivalent plain dict: steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": 2, "c": 3}) .items(): steps: - Run: code: | Ensure(load(yaml_snippet, schema).items()).equals([("a", 1), ("b", 2), ("c", 3)]) Use in to detect presence of a key: steps: - Run: code: | Ensure("a" in load(yaml_snippet, schema)).is_true() .values(): steps: - Run: code: | Ensure(load(yaml_snippet, schema).values()).equals([1, 2, 3]) .keys(): steps: - Run: code: | Ensure(load(yaml_snippet, schema).keys()).equals(["a", "b", "c"]) Key lookup: steps: - Run: code: | yaml = load(yaml_snippet, schema) Ensure(yaml[yaml.keys()[0]]).equals(1) Dict lookup: steps: - Run: code: | Ensure(load(yaml_snippet, schema)["a"]).equals(1) .get(): steps: - Run: code: | Ensure(load(yaml_snippet, schema).get("a")).equals(1) .get() nonexistent: steps: - Run: code: | Ensure(load(yaml_snippet, schema).get("nonexistent")).equals(None) len(): steps: - Run: code: | Ensure(len(load(yaml_snippet, schema))).equals(3) strictyaml-1.6.1/hitch/story/merge-documents.story000066400000000000000000000020461415074265300224020ustar00rootroot00000000000000Merge YAML documents: docs: howto/merge-yaml-documents based on: strictyaml description: | Loaded YAML can be combined and dumped with the comments intact. 
given: yaml_snippet_1: | # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: 1 - b: 2 yaml_snippet_2: | x: 8 # y is now 9 y: 9 setup: | from strictyaml import Map, MapPattern, Str, Seq, Int, load schema_1 = Map({ "a": Str(), "b": Map({"x": Int(), "y": Int()}), "c": Seq(MapPattern(Str(), Str())), }) schema_2 = Map({"x": Int(), "y": Int()}) yaml_1 = load(yaml_snippet_1, schema_1) yaml_2 = load(yaml_snippet_2, schema_2) yaml_1['b'] = yaml_2 steps: - Run: code: print(yaml_1.as_yaml()) will output: |- # Some comment a: â # value comment # Another comment b: x: 8 # y is now 9 y: 9 c: - a: 1 - b: 2 strictyaml-1.6.1/hitch/story/nested-map.story000066400000000000000000000046741415074265300213520ustar00rootroot00000000000000Nested mapping validation: based on: strictyaml description: | Mappings can be nested within one another, which will be parsed as a dict within a dict. given: setup: | from strictyaml import Map, Int, load from ensure import Ensure schema = Map({"a": Map({"x": Int(), "y": Int()}), "b": Int(), "c": Int()}) variations: Valid nested mapping: given: yaml_snippet: | a: x: 9 y: 8 b: 2 c: 3 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": {"x": 9, "y": 8}, "b": 2, "c": 3}) Invalid nested mapping: given: yaml_snippet: | a: x: 9 z: 8 b: 2 d: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping unexpected key not in schema 'z' in "", line 3, column 1: z: '8' ^ (line: 3) No nested mapping where expected: given: yaml_snippet: | a: 11 b: 2 d: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a mapping found an arbitrary integer in "", line 1, column 1: a: '11' ^ (line: 1) Modify nested map: given: yaml_snippet: | a: x: 9 y: - 1 - 2 - 3 b: 2 c: 3 setup: | from strictyaml import Map, Int, load, Seq from collections import OrderedDict schema = Map({"a": 
Map({"x": Int(), "y": Seq(Int())}), "b": Int(), "c": Int()}) yaml = load(yaml_snippet, schema) # Non-ordered dict would also work, but would yield an indeterminate order of keys yaml['a'] = OrderedDict([("x", 5), ("y", [4, 5, 6])]) yaml['a']['x'] = 99 steps: - Run: code: print(yaml.as_yaml()) will output: |- a: x: 99 y: - 4 - 5 - 6 b: 2 c: 3 strictyaml-1.6.1/hitch/story/non-schema-validation.story000066400000000000000000000025221415074265300234630ustar00rootroot00000000000000Parsing YAML without a schema: docs: howto/without-a-schema based on: strictyaml description: | When using strictyaml you do not have to specify a schema. If you do this, the validator "Any" is used which will accept any mapping and any list and any scalar values (which will always be interpreted as a string, unlike regular YAML). This is the recommended approach when rapidly prototyping and the desired schema is fluid. When your prototype code is parsing YAML that has a more fixed structure, we recommend that you 'lock it down' with a schema. The Any validator can be used inside fixed structures as well. given: setup: | from strictyaml import Str, Any, MapPattern, load from ensure import Ensure yaml_snippet: | a: x: 9 y: 8 b: 2 c: 3 variations: Parse without validator: steps: - Run: | Ensure(load(yaml_snippet)).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"}) Parse with any validator - equivalent: steps: - Run: | Ensure(load(yaml_snippet, Any())).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"}) Fix higher levels of schema: steps: - Run: | Ensure(load(yaml_snippet, MapPattern(Str(), Any()))).equals({"a": {"x": "9", "y": "8"}, "b": "2", "c": "3"}) strictyaml-1.6.1/hitch/story/optional-with-defaults.story000066400000000000000000000050101415074265300237010ustar00rootroot00000000000000Optional keys with defaults (Map/Optional): docs: compound/optional-keys-with-defaults experimental: yes based on: strictyaml about: | Not every key in a YAML mapping will be required. 
If you use the "Optional('key')" validator with YAML, you can signal that a key/value pair is not required. given: yaml_snippet: | a: 1 setup: | from strictyaml import Map, Int, Str, Bool, EmptyNone, Optional, load, as_document from collections import OrderedDict from ensure import Ensure schema = Map({"a": Int(), Optional("b", default=False): Bool(), }) variations: When parsed the result will include the optional value: steps: - Run: | Ensure(load(yaml_snippet, schema).data).equals(OrderedDict([("a", 1), ("b", False)])) If parsed and then output to YAML again the default data won't be there: steps: - Run: code: print(load(yaml_snippet, schema).as_yaml()) will output: |- a: 1 When default data is output to YAML it is removed: steps: - Run: code: | print(as_document({"a": 1, "b": False}, schema).as_yaml()) will output: |- a: 1 When you want a key to stay and default to None: steps: - Run: code: | schema = Map({"a": Int(), Optional("b", default=None, drop_if_none=False): EmptyNone() | Bool(), }) Ensure(load(yaml_snippet, schema).data).equals(OrderedDict([("a", 1), ("b", None)])) Optional keys with bad defaults: based on: Optional keys with defaults (Map/Optional) steps: - Run: code: | Map({"a": Int(), Optional("b", default="nonsense"): Bool(), }) raises: type: strictyaml.exceptions.InvalidOptionalDefault message: "Optional default for 'b' failed validation:\n Not a boolean" Optional keys revalidation bug: based on: Optional keys with defaults (Map/Optional) given: yaml_snippet: | content: subitem: a: 1 steps: - Run: code: | from strictyaml import MapPattern, Any loose_schema = Map({"content": Any()}) strict_schema = Map({"subitem": Map({"a": Int(), Optional("b", default=False): Bool()})}) myyaml = load(yaml_snippet, loose_schema) myyaml['content'].revalidate(strict_schema) assert myyaml.data == {"content": {"subitem": {"a": 1, "b": False}}}, myyaml.data print(myyaml.data.__repr__()) will output: "{'content': {'subitem': {'a': 1, 'b': False}}}" 
strictyaml-1.6.1/hitch/story/optional.story000066400000000000000000000053531415074265300211350ustar00rootroot00000000000000Validating optional keys in mappings (Map): docs: compound/optional-keys based on: strictyaml description: | Not every key in a YAML mapping will be required. If you use the "Optional('key')" validator with YAML, you can signal that a key/value pair is not required. given: setup: | from strictyaml import Map, Int, Str, Bool, Optional, load from ensure import Ensure schema = Map({"a": Int(), Optional("b"): Bool(), }) variations: Valid example 1: given: yaml_snippet: | a: 1 b: yes steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": True}) Valid example 2: given: yaml_snippet: | a: 1 b: no steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1, "b": False}) Valid example missing key: given: yaml_snippet: 'a: 1' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1}) Invalid 1: given: yaml_snippet: | a: 1 b: 2 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a boolean value (one of "yes", "true", "on", "1", "y", "no", "false", "off", "0", "n") found an arbitrary integer in "", line 2, column 1: b: '2' ^ (line: 2) Invalid 2: given: yaml_snippet: | a: 1 b: yes c: 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping unexpected key not in schema 'c' in "", line 3, column 1: c: '3' ^ (line: 3) Nested optional validation: based on: strictyaml given: setup: | from strictyaml import Map, Int, Str, Bool, Optional, load from ensure import Ensure schema = Map({"a": Int(), Optional("b"): Map({Optional("x"): Str(), Optional("y"): Str()})}) variations: Valid 1: given: yaml_snippet: 'a: 1' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 1}) Valid 2: given: yaml_snippet: | a: 1 b: x: y y: z steps: - Run: code: | 
Ensure(load(yaml_snippet, schema)).equals( {"a": 1, "b": {"x": "y", "y": "z"}} ) strictyaml-1.6.1/hitch/story/or.story000066400000000000000000000050631415074265300177260ustar00rootroot00000000000000Either/or schema validation of different, equally valid different kinds of YAML: docs: howto/either-or-validation description: | StrictYAML can be directed to parse two different elements or blocks of YAML. If the first thing does not parse correctly, it attempts to parse the second. If the second does not parse correctly, it raises an exception. based on: strictyaml given: setup: | from strictyaml import Map, Seq, Bool, Int, Str, YAMLValidationError, load from ensure import Ensure schema = Str() | Map({"a": Bool() | Int()}) variations: Boolean first choice true: given: yaml_snippet: 'a: yes' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": True}) Boolean first choice false: given: yaml_snippet: 'a: no' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": False}) Int second choice: given: yaml_snippet: 'a: 5' steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": 5}) Invalid not bool or int: given: yaml_snippet: 'a: A' steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 1, column 1: a: A ^ (line: 1) Invalid combinations of more than one map: given: yaml_snippet: 'a: x' steps: - Run: code: | load(yaml_snippet, Map({"a": Str()}) | Map({"b": Str()})) raises: type: strictyaml.exceptions.InvalidValidatorError message: You tried to Or ('|') together 2 Map validators. Try using revalidation instead. Invalid combinations of more than one seq: given: yaml_snippet: | - 1 - 2 steps: - Run: code: | load(yaml_snippet, Seq(Int()) | Seq(Str())) raises: type: strictyaml.exceptions.InvalidValidatorError message: You tried to Or ('|') together 2 Seq validators. Try using revalidation instead. 
Change item after validated: given: yaml_snippet: 'a: yes' steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = 5 assert yaml['a'] == 5 strictyaml-1.6.1/hitch/story/quickstart.story000066400000000000000000000062751415074265300215060ustar00rootroot00000000000000Simple example: based on: strictyaml given: yaml_snippet: | # All about the character name: Ford Prefect age: 42 possessions: - Towel setup: | from strictyaml import load, Map, Str, Int, Seq, YAMLError variations: Default parse result: fails on python 2: yes steps: - Run: code: | load(yaml_snippet) will output: |- YAML({'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']}) in interpreter: yes All data is string, list or OrderedDict: fails on python 2: yes steps: - Run: code: | load(yaml_snippet).data will output: |- {'name': 'Ford Prefect', 'age': '42', 'possessions': ['Towel']} in interpreter: yes Quickstart with schema: based on: simple example given: setup: | from strictyaml import load, Map, Str, Int, Seq, YAMLError schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())}) variations: 42 is now parsed as an integer: steps: - Run: in interpreter: yes code: | person = load(yaml_snippet, schema) person.data will output: |- {'name': 'Ford Prefect', 'age': 42, 'possessions': ['Towel']} A YAMLError will be raised if there are syntactic problems, violations of your schema or use of disallowed YAML features: based on: quickstart with schema given: yaml_snippet: | # All about the character name: Ford Prefect age: 42 variations: For example, a schema violation: steps: - Run: code: | try: person = load(yaml_snippet, schema) except YAMLError as error: print(error) will output: |- while parsing a mapping in "", line 1, column 1: # All about the character ^ (line: 1) required key(s) 'possessions' not found in "", line 3, column 1: age: '42' ^ (line: 3) If parsed correctly: based on: simple example given: setup: | from strictyaml import load, Map, Str, Int, Seq, YAMLError, 
as_document schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())}) variations: You can modify values and write out the YAML with comments preserved: steps: - Run: code: | person = load(yaml_snippet, schema) person['age'] = 43 print(person.as_yaml()) will output: |- # All about the character name: Ford Prefect age: 43 possessions: - Towel As well as look up line numbers: steps: - Run: in interpreter: yes code: | person = load(yaml_snippet, schema) person['possessions'][0].start_line will output: 5 And construct YAML documents from dicts or lists: steps: - Run: in interpreter: no code: | print(as_document({"x": 1}).as_yaml()) will output: |- x: 1 strictyaml-1.6.1/hitch/story/regexp.story000066400000000000000000000046711415074265300206040ustar00rootroot00000000000000Validating strings with regexes (Regex): docs: scalar/regular-expressions based on: strictyaml description: | StrictYAML can validate regular expressions and return a string. If the regular expression does not match, an exception is raised. 
given: setup: | from strictyaml import Regex, Map, load, as_document from collections import OrderedDict from ensure import Ensure schema = Map({"a": Regex(u"[1-4]"), "b": Regex(u"[5-9]")}) variations: Parsed correctly: given: yaml_snippet: | a: 1 b: 5 steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals({"a": "1", "b": "5"}) Non-matching: given: yaml_snippet: | a: 5 b: 5 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting string matching [1-4] found non-matching string in "", line 1, column 1: a: '5' ^ (line: 1) Non-matching suffix: given: yaml_snippet: | a: 1 Hello b: 5 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting string matching [1-4] found non-matching string in "", line 1, column 1: a: 1 Hello ^ (line: 1) Serialized successfully: steps: - Run: code: | print(as_document(OrderedDict([("a", "1"), ("b", "5")]), schema).as_yaml()) will output: |- a: 1 b: 5 Serialization failure non matching regex: steps: - Run: code: | as_document(OrderedDict([("a", "x"), ("b", "5")]), schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: when expecting string matching [1-4] found 'x' Serialization failure not a string: steps: - Run: code: | as_document(OrderedDict([("a", 1), ("b", "5")]), schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: when expecting string matching [1-4] got '1' of type int. strictyaml-1.6.1/hitch/story/revalidation.story000066400000000000000000000040351415074265300217650ustar00rootroot00000000000000Revalidate an already validated document: docs: howto/revalidation based on: strictyaml description: | When parsing a YAML document you may wish to do more than one validation pass over a document. This is needed when: * It simplifies your code to apply validation in stages. * You want to validate recursively. 
* One part of the document validation depends upon another (this is the example given below). given: setup: | from strictyaml import Str, Int, Map, Seq, Any, load from ensure import Ensure overall_schema = Map({"capitals": Any(), "countries": Seq(Str())}) parsed = load(yaml_snippet, overall_schema) yaml_snippet: | capitals: UK: 1 Germany: 2 countries: - Germany - UK variations: Reparse mapping: steps: - Run: code: | Ensure(parsed.data['capitals']['UK']).equals("1") parsed['capitals'].revalidate(Map({capital: Int() for capital in parsed.data['countries']})) Ensure(parsed.data['capitals']['UK']).equals(1) Reparse scalar: steps: - Run: code: | Ensure(parsed.data['capitals']['UK']).equals("1") parsed['capitals']['UK'].revalidate(Int()) Ensure(parsed.data['capitals']['UK']).equals(1) Ensure(parsed['capitals']['UK'].data).is_an(int) Parse error: given: yaml_snippet: | capitals: UK: 1 Germany: 2 France: 3 countries: - Germany - UK steps: - Run: code: | parsed['capitals'].revalidate(Map({capital: Int() for capital in parsed.data['countries']})) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a mapping unexpected key not in schema 'France' in "", line 4, column 1: France: '3' ^ (line: 4) strictyaml-1.6.1/hitch/story/roundtrip.story000066400000000000000000000071211415074265300213310ustar00rootroot00000000000000Reading in YAML, editing it and writing it back out: docs: howto/roundtripping based on: strictyaml description: | Loaded YAML can be modified and dumped out again with comments preserved using .as_yaml(). Note that due to some bugs in the library (ruamel.yaml) underlying StrictYAML, while the data parsed should be precisely the same, the exact syntax (newlines, comment locations, etc.) may not be identical. 
given: yaml_snippet: | # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: 1 - b: 2 setup: | from strictyaml import Map, MapPattern, EmptyDict, Str, Seq, Int, load from ensure import Ensure schema = Map({ "a": Str(), "b": Map({"x": Int(), "y": Int()}), "c": EmptyDict() | Seq(MapPattern(Str(), Str())), }) variations: Commented: steps: - Run: code: | Ensure(load(yaml_snippet, schema).as_yaml()).equals(yaml_snippet) Modified with invalid variable: steps: - Run: code: | to_modify = load(yaml_snippet, schema) to_modify['b']['x'] = 2 to_modify['c'][0]['a'] = '3' to_modify['b']['x'] = 'not an integer' raises: type: strictyaml.exceptions.YAMLSerializationError message: "'not an integer' not an integer." Modified with float: steps: - run: code: | to_modify = load(yaml_snippet, schema) to_modify['c'][0]['a'] = "1.0001" print(to_modify.as_yaml()) will output: |- # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: 1.0001 - b: 2 Modify multi line string: given: yaml_snippet: | a: some b: | text steps: - run: code: | schema = Map({"a": Str(), "b": Str()}) to_modify = load(yaml_snippet, schema) to_modify['a'] = 'changed' print(to_modify.as_yaml()) will output: |- a: changed b: | text Modified with one variable: steps: - run: code: | to_modify = load(yaml_snippet, schema) to_modify['b']['x'] = 2 to_modify['c'][0]['a'] = '3' print(to_modify.as_yaml()) will output: |- # Some comment a: â # value comment # Another comment b: x: 2 y: 5 c: - a: 3 - b: 2 Text across lines: steps: - run: code: | to_modify = load(yaml_snippet, schema) to_modify['c'][0]['a'] = "text\nacross\nlines" print(to_modify.as_yaml()) will output: |- # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: - a: |- text across lines - b: 2 With empty dict: steps: - run: code: | to_modify = load(yaml_snippet, schema) to_modify['c'] = {} print(to_modify.as_yaml()) will output: |- # Some comment a: â # value comment # Another comment b: x: 4 y: 5 c: 
strictyaml-1.6.1/hitch/story/scalar-hexadecimal-integer.story000066400000000000000000000025341415074265300244500ustar00rootroot00000000000000Hexadecimal Integers (HexInt): docs: scalar/hexadecimal-integer based on: strictyaml description: | StrictYAML can interpret a hexadecimal integer preserving its value given: yaml_snippet: | x: 0x1a setup: | from strictyaml import Map, HexInt, load from ensure import Ensure schema = Map({"x": HexInt()}) parsed = load(yaml_snippet, schema) variations: Parsed correctly: steps: - Run: | Ensure(parsed).equals({"x": 26}) Ensure(parsed.as_yaml()).equals("x: 0x1a\n") Uppercase: given: yaml_snippet: | x: 0X1A steps: - Run: code: | Ensure(load(yaml_snippet, schema).data).equals({"x": 26}) Ensure(load(yaml_snippet, schema).as_yaml()).equals("x: 0X1A\n") Invalid scalar hexadecimal integer: based on: strictyaml given: yaml_snippet: | x: some_string setup: | from strictyaml import Map, HexInt, load schema = Map({"x": HexInt()}) steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a hexadecimal integer found arbitrary text in "", line 1, column 1: x: some_string ^ (line: 1) strictyaml-1.6.1/hitch/story/scalar-integer.story000066400000000000000000000042051415074265300222030ustar00rootroot00000000000000Integers (Int): docs: scalar/integer based on: strictyaml description: | StrictYAML parses to a YAML object, not the value directly, to give you more flexibility and control over what you can do with the YAML. This is what that object can do - in many cases if parsed as an integer, it will behave in the same way.
given: yaml_snippet: | a: 1 b: 5 setup: | from strictyaml import Map, Int, load from ensure import Ensure schema = Map({"a": Int(), "b": Int()}) parsed = load(yaml_snippet, schema) variations: Parsed correctly: steps: - Run: | Ensure(parsed).equals({"a": 1, "b": 5}) Has underscores: given: yaml_snippet: | a: 10_000_000 b: 10_0_0 steps: - Run: code: | Ensure(load(yaml_snippet, schema).data).equals({"a": 10000000, "b": 1000}) Cast with str: steps: - Run: Ensure(str(parsed["a"])).equals("1") Cast with float: steps: - Run: Ensure(float(parsed["a"])).equals(1.0) Greater than: steps: - Run: Ensure(parsed["a"] > 0).equals(True) Less than: steps: - Run: Ensure(parsed["a"] < 2).equals(True) To get actual int, use .data: steps: - Run: Ensure(type(load(yaml_snippet, schema)["a"].data) is int).equals(True) Cannot cast to bool: steps: - Run: code: bool(load(yaml_snippet, schema)['a']) raises: message: |- Cannot cast 'YAML(1)' to bool. Use bool(yamlobj.data) or bool(yamlobj.text) instead. Invalid scalar integer: based on: strictyaml given: yaml_snippet: | a: string b: 2 setup: | from strictyaml import Map, Int, load schema = Map({"a": Int(), "b": Int()}) steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found arbitrary text in "", line 1, column 1: a: string ^ (line: 1) strictyaml-1.6.1/hitch/story/scalar-string.story000066400000000000000000000025301415074265300220530ustar00rootroot00000000000000Parsing strings (Str): docs: scalar/string based on: strictyaml description: | StrictYAML parses to a YAML object, not the value directly, to give you more flexibility and control over what you can do with the YAML. This is what that object can do - in most cases if parsed as a string, it will behave in the same way.
given: setup: | from strictyaml import Str, Map, load from ensure import Ensure schema = Map({"a": Str(), "b": Str(), "c": Str(), "d": Str()}) parsed = load(yaml_snippet, schema) yaml_snippet: | a: 1 b: yes c: â string d: | multiline string variations: Parses correctly: steps: - Run: code: | Ensure(parsed).equals( {"a": "1", "b": "yes", "c": u"â string", "d": "multiline string\n"} ) Dict lookup cast to string: steps: - Run: code: Ensure(str(parsed["a"])).equals("1") Dict lookup cast to int: steps: - Run: code: | Ensure(int(parsed["a"])).equals(1) Dict lookup cast to bool impossible: steps: - Run: code: bool(parsed["a"]) raises: message: |- Cannot cast 'YAML(1)' to bool. Use bool(yamlobj.data) or bool(yamlobj.text) instead. strictyaml-1.6.1/hitch/story/sequence.story000066400000000000000000000121211415074265300211070ustar00rootroot00000000000000Sequence/list validator (Seq): docs: compound/sequences based on: strictyaml description: | Sequences in YAML are denoted by a series of dashes ('-') and parsed as a list in python. Validating sequences of a particular type can be done with the Seq validator, specifying the type. See also [UniqueSeq](../sequences-of-unique-items) and [FixedSeq](../fixed-length-sequences) for other types of sequence validation. 
given: yaml_snippet: | - 1 - 2 - 3 setup: | from strictyaml import Seq, Str, Int, load from ensure import Ensure variations: Valid Parsed: steps: - Run: | Ensure(load(yaml_snippet, Seq(Str()))).equals(["1", "2", "3", ]) Is sequence: steps: - Run: | assert load(yaml_snippet, Seq(Str())).is_sequence() Iterator: steps: - Run: | assert [x for x in load(yaml_snippet, Seq(Str()))] == ["1", "2", "3"] Lists of lists: given: yaml_snippet: | - - a - b - c - - d - e - f steps: - Run: | assert load(yaml_snippet, Seq(Seq(Str()))) == [["a", "b", "c"], ["d", "e", "f"]] .text is nonsensical: given: yaml_snippet: | - â - 2 - 3 steps: - Run: code: load(yaml_snippet, Seq(Str())).text raises: type: in python 2: exceptions.TypeError in python 3: builtins.TypeError message: in python 2: YAML([u'\xe2', '2', '3']) is a sequence, has no text value. in python 3: YAML(['â', '2', '3']) is a sequence, has no text value. Invalid mapping instead: given: yaml_snippet: | a: 1 b: 2 c: 3 steps: - Run: code: load(yaml_snippet, Seq(Str())) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a sequence in "", line 1, column 1: a: '1' ^ (line: 1) found a mapping in "", line 3, column 1: c: '3' ^ (line: 3) Invalid nested structure instead: given: yaml_snippet: | - 2 - 3 - a: - 1 - 2 steps: - Run: code: load(yaml_snippet, Seq(Str())) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting a str in "", line 3, column 1: - a: ^ (line: 3) found a mapping in "", line 5, column 1: - '2' ^ (line: 5) Invalid item in sequence: given: yaml_snippet: | - 1.1 - 1.2 steps: - Run: code: load(yaml_snippet, Seq(Int())) raises: type: strictyaml.exceptions.YAMLValidationError message: |- when expecting an integer found an arbitrary number in "", line 1, column 1: - '1.1' ^ (line: 1) One invalid item in sequence: given: yaml_snippet: | - 1 - 2 - 3.4 steps: - Run: code: load(yaml_snippet, Seq(Int())) raises: type: strictyaml.exceptions.YAMLValidationError message: 
"when expecting an integer\nfound an arbitrary number\n in \"\ \", line 3, column 1:\n - '3.4'\n ^ (line: 3)" Modify nested sequence: based on: strictyaml given: yaml_snippet: | a: - a - b b: 2 c: 3 setup: | from strictyaml import Map, Int, load, Seq, Str from collections import OrderedDict schema = Map({"a": Seq(Str()), "b": Int(), "c": Int()}) variations: successfully: steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = ['b', 'c', 'd'] yaml['a'][1] = 'x' print(yaml.as_yaml()) will output: |- a: - b - x - d b: 2 c: 3 with non sequence where sequence expected: steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = {'a': '1'} raises: type: strictyaml.exceptions.YAMLSerializationError message: "Expected a list, found '{'a': '1'}'" with empty list: steps: - Run: code: | yaml = load(yaml_snippet, schema) yaml['a'] = [] raises: type: strictyaml.exceptions.YAMLSerializationError message: "Expected a non-empty list, found an empty list.\nUse EmptyList\ \ validator to serialize empty lists." strictyaml-1.6.1/hitch/story/single-value.story000066400000000000000000000027041415074265300217000ustar00rootroot00000000000000Single value: based on: strictyaml description: | The minimal YAML document that is parsed by StrictYAML is a string of characters which parses by default to a string unless a scalar validator is used. Where standard YAML implicitly converts certain strings to other types, StrictYAML will only parse to strings unless otherwise directed. given: setup: | from strictyaml import Str, Int, load from ensure import Ensure variations: Raise exception on None: steps: - Run: code: load(None, Str()) raises: message: StrictYAML can only read a string of valid YAML. 
String of 1: steps: - Run: code: | Ensure(load("1", Str())).equals("1") Int of 1: steps: - Run: code: | Ensure(load("1", Int())).equals(1) Empty value parsed as blank string by default: steps: - Run: code: | Ensure(load("x:")).equals({"x": ""}) Empty document parsed as blank string by default: steps: - Run: code: | Ensure(load("", Str())).equals("") Null parsed as string null by default: steps: - Run: code: | Ensure(load("null: null")).equals({"null": "null"}) #Single value with comment: #steps: #- Run: #code: | #Ensure(load("# ought not to be parsed\nstring")).equals("string") strictyaml-1.6.1/hitch/story/strictyaml.story000066400000000000000000000002451415074265300214760ustar00rootroot00000000000000strictyaml: given: python version: (( python version )) ruamel version: (( ruamel version )) with: python version: 2.7.14 ruamel version: 0.17.5 strictyaml-1.6.1/hitch/story/unique-sequence.story000066400000000000000000000037731415074265300224300ustar00rootroot00000000000000Sequences of unique items (UniqueSeq): docs: compound/sequences-of-unique-items based on: strictyaml description: | UniqueSeq validates sequences which contain no duplicate values. 
given: yaml_snippet: | - A - B - C setup: | from strictyaml import UniqueSeq, Str, load, as_document from ensure import Ensure schema = UniqueSeq(Str()) variations: Valid: steps: - Run: code: | Ensure(load(yaml_snippet, schema)).equals(["A", "B", "C", ]) Parsing with one dupe raises an exception: given: yaml_snippet: | - A - B - B steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a sequence in "", line 1, column 1: - A ^ (line: 1) duplicate found in "", line 3, column 1: - B ^ (line: 3) Parsing all dupes raises an exception: given: yaml_snippet: | - 3 - 3 - 3 steps: - Run: code: load(yaml_snippet, schema) raises: type: strictyaml.exceptions.YAMLValidationError message: |- while parsing a sequence in "", line 1, column 1: - '3' ^ (line: 1) duplicate found in "", line 3, column 1: - '3' ^ (line: 3) Serializing with dupes raises an exception: steps: - Run: code: | as_document(["A", "B", "B"], schema) raises: type: strictyaml.exceptions.YAMLSerializationError message: Expecting all unique items, but duplicates were found in '['A', 'B', 'B']'. strictyaml-1.6.1/hitch/story/update-with-schema.story000066400000000000000000000057631415074265300230060ustar00rootroot00000000000000Updating document with a schema: docs: compound/update based on: strictyaml description: | When StrictYAML loads a document with a schema, it checks that future updates to that document follow the original schema. 
given: setup: | import strictyaml as s from ensure import Ensure variations: GitHub \#72: steps: - Run: |- doc = s.load('a: 9', s.Map({ 'a': s.Str(), s.Optional('b'): s.Int(), })) doc['b'] = 9 assert doc['b'] == 9 Works on empty mapping: steps: - Run: |- doc = s.load('', s.EmptyDict() | s.Map({ 'a': s.Int(), })) doc['a'] = 9 assert doc['a'] == 9, doc.as_yaml() Works on complex types: steps: - Run: |- doc = s.load('a: 8', s.Map({'a': s.Int() | s.Float()})) assert type(doc['a'].data) == int, repr(doc.data) doc['a'] = '5.' assert type(doc['a'].data) == float, repr(doc.data) assert doc['a'] == 5. Will not work on empty sequence: steps: - Run: code: | doc = s.load('', s.EmptyList() | s.Seq(s.Int())) doc[0] = 9 raises: type: strictyaml.exceptions.YAMLSerializationError message: |- cannot extend list via __setitem__. Instead, replace whole list on parent node. Works on map with setting, updating, and then setting multiple keys (regression): steps: - Run: code: | doc = s.load('', s.EmptyDict() | s.MapPattern( s.Str(), s.EmptyDict() | s.Map({ s.Optional('b'): s.Seq(s.Int()), }) )) doc['a'] = {} doc['a']['b'] = ['9'] assert doc.data == {'a': {'b': [9]}}, doc.data assert doc.as_yaml() == 'a:\n b:\n - 9\n', doc.as_yaml() # Second assignment doesn't occur... doc['a']['b'] = ['9', '10'] assert doc.data == {'a': {'b': [9, 10]}}, doc.data assert doc.as_yaml() == 'a:\n b:\n - 9\n - 10\n', doc.as_yaml() # If and only if another node is overwritten. This was a bug due # to mismatched _ruamelparsed objects. 
doc['b'] = {'b': ['11']} assert doc['a']['b'].data == [9, 10], doc.data assert doc['b']['b'].data == [11], doc.data assert doc.as_yaml() == 'a:\n b:\n - 9\n - 10\nb:\n b:\n - 11\n', doc.as_yaml() For empty sequence, must instead assign whole sequence as key: steps: - Run: |- doc = s.load('a:', s.Map({'a': s.EmptyList() | s.Seq(s.Int())})) doc['a'] = [1, 2, 3] assert doc['a'].data == [1, 2, 3], repr(doc.data) Can assign from string: steps: - Run: |- doc = s.load('a: 9', s.Map({ 'a': s.Str(), s.Optional('b'): s.Int(), })) doc['b'] = '9' assert doc['b'] == 9 strictyaml-1.6.1/hitch/story/validator-repr.story000066400000000000000000000031331415074265300222350ustar00rootroot00000000000000Validator repr: based on: strictyaml description: | When repr(x) is called on validators it should print an executable representation of the object. given: setup: | from ensure import Ensure import strictyaml as sy variations: Int: steps: - Run: code: | Ensure(repr(sy.Map({"a": sy.Int()}))).equals("""Map({'a': Int()})""") Optional: steps: - Run: code: | Ensure(repr(sy.Map({sy.Optional("a"): sy.Int()}))).equals("""Map({Optional("a"): Int()})""") Sequence: steps: - Run: code: | Ensure(repr(sy.Seq(sy.Str()))).equals("""Seq(Str())""") Empty: steps: - Run: code: | Ensure(repr(sy.FixedSeq([sy.EmptyNone(), sy.EmptyDict(), sy.EmptyList()]))).equals( """FixedSeq([EmptyNone(), EmptyDict(), EmptyList()])""" ) UniqueSeq Decimal: steps: - Run: code: | Ensure(repr(sy.UniqueSeq(sy.Decimal()))).equals("""UniqueSeq(Decimal())""") MapPattern Bool Enum: steps: - Run: code: | Ensure(repr(sy.MapPattern(sy.Bool(), sy.Enum(["x", "y"])))).equals("MapPattern(Bool(), Enum(['x', 'y']))") Seq Datetime Any Or: steps: - Run: code: | Ensure(repr(sy.Seq(sy.Datetime() | sy.Any()))).equals("""Seq(Datetime() | Any())""") Comma Separated Float: steps: - Run: code: | Ensure(repr(sy.Map({"x": sy.CommaSeparated(sy.Float())}))).equals("Map({'x': CommaSeparated(Float())})") 
strictyaml-1.6.1/hitch/story/whatline.story000066400000000000000000000051101415074265300211120ustar00rootroot00000000000000Get line numbers of YAML elements: based on: strictyaml docs: howto/what-line description: | Line numbers, the text of an item and text of surrounding lines can be grabbed from returned YAML objects - using .start_line, .end_line, lines(), lines_before(x) and lines_after(x). given: yaml_snippet: | y: p # Some comment a: | x # Another comment b: y c: a d: b setup: | from strictyaml import Map, Str, YAMLValidationError, load from ensure import Ensure schema = Map({"y": Str(), "a": Str(), "b": Str(), "c": Str(), "d": Str()}) snippet = load(yaml_snippet, schema) variations: If there is preceding comment for an item the start line includes it: steps: - Run: code: | Ensure(snippet["a"].start_line).equals(3) Ensure(snippet["d"].start_line).equals(9) If there is a trailing comment the end line includes it: steps: - Run: code: | Ensure(snippet["a"].end_line).equals(6) Ensure(snippet["d"].end_line).equals(10) You can grab the start line of a key: steps: - Run: code: | Ensure(snippet.keys()[1].start_line).equals(3) Start line and end line of whole snippet: steps: - Run: code: | Ensure(snippet.start_line).equals(1) Ensure(snippet.end_line).equals(10) Grabbing a line before an item: steps: - Run: code: | Ensure(snippet['a'].lines_before(1)).equals("# Some comment") Grabbing a line after an item: steps: - Run: code: | Ensure(snippet['a'].lines_after(4)).equals("b: y\nc: a\n\nd: b") Grabbing the lines of an item including surrounding comments: steps: - Run: code: | print(load(yaml_snippet, schema)['a'].lines()) will output: |- a: | x # Another comment Start line of YAML with list: based on: strictyaml description: | Actually, this should probably be 6, not 4. This is likely a bug in ruamel.yaml however. TODO: Come back to this test. 
given: yaml_snippet: | a: b: - 1 # comment # second comment - 2 - 3 - 4 setup: | from strictyaml import load from ensure import Ensure steps: - Run: code: |- Ensure(load(yaml_snippet)['a']['b'][1].start_line).equals(4) Ensure(load(yaml_snippet)['a']['b'][1].end_line).equals(4) strictyaml-1.6.1/hitch/todo.yml000066400000000000000000000007671415074265300165220ustar00rootroot00000000000000- Create rules such that only certain map key combinations are allowed. x and y, x alone, c and y, c alone, etc. - Using YAML objects as keys causes infinite loop. - Add why not section on jsonschema and yaml/ruamel.yaml - Add why not section on kwalify and ruamel.yaml - Add why not section on schema and yaml/ruamel.yaml - Add scalar types AbsoluteUrl(), RelativeUrl(), Domain() - Consider adding scalar types PositiveInt and Base64. - Refactor so that custom scalar validators can be written. strictyaml-1.6.1/setup.py000066400000000000000000000034651415074265300154430ustar00rootroot00000000000000# -*- coding: utf-8 -* from setuptools.command.install import install from setuptools import find_packages from setuptools import setup from sys import version_info, stderr, exit import codecs import sys import os def read(*parts): # intentionally *not* adding an encoding option to open # see here: https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690 with codecs.open(os.path.join(os.path.abspath(os.path.dirname(__file__)), *parts)) as f: return f.read() setup(name="strictyaml", version=read('VERSION').replace('\n', ''), description="Strict, typed YAML parser", long_description=read('README.md'), long_description_content_type="text/markdown", classifiers=[ 'Development Status :: 4 - Beta', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Topic :: Text Processing :: Markup', 'Topic :: Software Development :: Libraries', 'Natural Language :: English', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: 
Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.1', 'Programming Language :: Python :: 3.2', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', ], keywords='yaml', author='Colm O\'Connor', author_email='colm.oconnor.github@gmail.com', url='http://hitchdev.com/strictyaml', license='MIT', install_requires=["python-dateutil>=2.6.0", ], packages=find_packages(exclude=["tests", "docs", ]), package_data={}, zip_safe=False, include_package_data=True, ) strictyaml-1.6.1/strictyaml/000077500000000000000000000000001415074265300161145ustar00rootroot00000000000000strictyaml-1.6.1/strictyaml/__init__.py000066400000000000000000000035611415074265300202320ustar00rootroot00000000000000# The all important loader from strictyaml.parser import load from strictyaml.parser import dirty_load # Document builder from strictyaml.parser import as_document # YAML object from strictyaml.representation import YAML # Validators from strictyaml.validators import Validator from strictyaml.validators import OrValidator from strictyaml.any_validator import Any from strictyaml.scalar import ScalarValidator from strictyaml.scalar import Enum from strictyaml.scalar import Regex from strictyaml.scalar import Email from strictyaml.scalar import Url from strictyaml.scalar import Str from strictyaml.scalar import Int from strictyaml.scalar import HexInt from strictyaml.scalar import Bool from strictyaml.scalar import Float from strictyaml.scalar import Decimal from strictyaml.scalar import Datetime from strictyaml.scalar import CommaSeparated from strictyaml.scalar import NullNone from strictyaml.scalar import EmptyNone from strictyaml.scalar import EmptyDict from strictyaml.scalar import EmptyList from strictyaml.compound import Optional from strictyaml.compound import Map from strictyaml.compound import MapPattern from strictyaml.compound import MapCombined from strictyaml.compound import 
Seq from strictyaml.compound import UniqueSeq from strictyaml.compound import FixedSeq # Base exception from strictyaml.ruamel (all exceptions inherit from this) from strictyaml.ruamel import YAMLError # Exceptions from strictyaml.exceptions import StrictYAMLError from strictyaml.exceptions import YAMLValidationError # Disallowed token exceptions from strictyaml.exceptions import DisallowedToken from strictyaml.exceptions import TagTokenDisallowed from strictyaml.exceptions import FlowMappingDisallowed from strictyaml.exceptions import AnchorTokenDisallowed from strictyaml.exceptions import DuplicateKeysDisallowed from strictyaml import exceptions __version__ = "DEVELOPMENT_VERSION" strictyaml-1.6.1/strictyaml/any_validator.py000066400000000000000000000041521415074265300213240ustar00rootroot00000000000000from strictyaml.ruamel.comments import CommentedSeq, CommentedMap from strictyaml.compound import FixedSeq, Map from strictyaml.validators import Validator from strictyaml.exceptions import YAMLSerializationError from strictyaml.scalar import Bool, EmptyDict, EmptyList, Float, Int, Str def schema_from_document(document): if isinstance(document, CommentedMap): return Map( {key: schema_from_document(value) for key, value in document.items()} ) elif isinstance(document, CommentedSeq): return FixedSeq([schema_from_document(item) for item in document]) else: return Str() def schema_from_data(data, allow_empty): if isinstance(data, dict): if len(data) == 0: if allow_empty: return EmptyDict() raise YAMLSerializationError( "Empty dicts are not serializable to StrictYAML unless schema is used." ) return Map( {key: schema_from_data(value, allow_empty) for key, value in data.items()} ) elif isinstance(data, list): if len(data) == 0: if allow_empty: return EmptyList() raise YAMLSerializationError( "Empty lists are not serializable to StrictYAML unless schema is used." 
) return FixedSeq([schema_from_data(item, allow_empty) for item in data]) elif isinstance(data, bool): return Bool() elif isinstance(data, int): return Int() elif isinstance(data, float): return Float() else: return Str() class Any(Validator): """ Validates any YAML and returns simple dicts/lists of strings. """ def validate(self, chunk): return schema_from_document(chunk.contents)(chunk) def to_yaml(self, data, allow_empty=False): """ Args: allow_empty (bool): True to allow EmptyDict and EmptyList in the schema generated from the data. """ return schema_from_data(data, allow_empty=allow_empty).to_yaml(data) @property def key_validator(self): return Str() strictyaml-1.6.1/strictyaml/compound.py000066400000000000000000000264031415074265300203170ustar00rootroot00000000000000from strictyaml.exceptions import YAMLSerializationError, InvalidOptionalDefault from strictyaml.validators import Validator, MapValidator, SeqValidator from strictyaml.ruamel.comments import CommentedMap, CommentedSeq from strictyaml.representation import YAML from strictyaml.scalar import ScalarValidator, Str from strictyaml.yamllocation import YAMLChunk import sys if sys.version_info[0] == 3: unicode = str class Optional(object): def __init__(self, key, default=None, drop_if_none=True): self.key = key self.default = default self.drop_if_none = drop_if_none def __repr__(self): # TODO: Add default return u'Optional("{0}")'.format(self.key) class MapPattern(MapValidator): def __init__( self, key_validator, value_validator, minimum_keys=None, maximum_keys=None ): self._key_validator = key_validator self._value_validator = value_validator self._maximum_keys = maximum_keys self._minimum_keys = minimum_keys assert isinstance( self._key_validator, ScalarValidator ), "key_validator must be ScalarValidator" assert isinstance( self._value_validator, Validator ), "value_validator must be Validator" assert isinstance( maximum_keys, (type(None), int) ), "maximum_keys must be an integer" assert isinstance( 
minimum_keys, (type(None), int) ), "maximum_keys must be an integer" @property def key_validator(self): return self._key_validator def validate(self, chunk): items = chunk.expect_mapping() if self._maximum_keys is not None and len(items) > self._maximum_keys: chunk.expecting_but_found( u"while parsing a mapping", u"expected a maximum of {0} key{1}, found {2}.".format( self._maximum_keys, u"s" if self._maximum_keys > 1 else u"", len(items), ), ) if self._minimum_keys is not None and len(items) < self._minimum_keys: chunk.expecting_but_found( u"while parsing a mapping", u"expected a minimum of {0} key{1}, found {2}.".format( self._minimum_keys, u"s" if self._minimum_keys > 1 else u"", len(items), ), ) for key, value in items: yaml_key = self._key_validator(key) key.process(yaml_key) value.process(self._value_validator(value)) chunk.add_key_association(key.contents, yaml_key.data) def to_yaml(self, data): self._should_be_mapping(data) # TODO : Maximum minimum keys return CommentedMap( [ (self._key_validator.to_yaml(key), self._value_validator.to_yaml(value)) for key, value in data.items() ] ) def __repr__(self): return u"MapPattern({0}, {1})".format( repr(self._key_validator), repr(self._value_validator) ) class Map(MapValidator): def __init__(self, validator, key_validator=None): self._validator = validator self._key_validator = Str() if key_validator is None else key_validator assert isinstance( self._key_validator, ScalarValidator ), "key validator must be ScalarValidator" self._validator_dict = { key.key if isinstance(key, Optional) else key: value for key, value in validator.items() } self._required_keys = [ key for key in validator.keys() if not isinstance(key, Optional) ] for key_val, value_val in validator.items(): if isinstance(key_val, Optional): if key_val.default is not None and not key_val.drop_if_none: raise InvalidOptionalDefault( "If you have a default that isn't None, drop_if_none must be True." 
) if key_val.default is not None and key_val.drop_if_none: try: value_val.to_yaml(key_val.default) except YAMLSerializationError as error: raise InvalidOptionalDefault( "Optional default for '{}' failed validation:\n {}".format( key_val.key, error ) ) self._defaults = { key.key: key.default for key in validator.keys() if isinstance(key, Optional) and (key.default is not None or not key.drop_if_none) } @property def key_validator(self): return self._key_validator def __repr__(self): # TODO : repr key_validator return u"Map({{{0}}})".format( ", ".join( [ "{0}: {1}".format(repr(key), repr(value)) for key, value in self._validator.items() ] ) ) def get_validator(self, key): return self._validator_dict[key] def unexpected_key(self, key, yaml_key, value, chunk): key.expecting_but_found( u"while parsing a mapping", u"unexpected key not in schema '{0}'".format(unicode(yaml_key.scalar)), ) def validate(self, chunk): found_keys = set() items = chunk.expect_mapping() for key, value in items: yaml_key = self._key_validator(key) if yaml_key.scalar not in self._validator_dict.keys(): self.unexpected_key(key, yaml_key, value, chunk) value.process(self.get_validator(yaml_key.scalar)(value)) key.process(yaml_key) chunk.add_key_association(key.contents, yaml_key.data) found_keys.add(yaml_key.scalar) for default_key, default_data in self._defaults.items(): if default_key not in [key.contents for key, _ in items]: key_chunk = YAMLChunk(default_key) yaml_key = self._key_validator(key_chunk) strictindex = yaml_key.data value_validator = self.get_validator(default_key) new_value = value_validator( YAMLChunk(value_validator.to_yaml(default_data)) ) forked_chunk = chunk.fork(strictindex, new_value) forked_chunk.val(strictindex).process(new_value) updated_value = value_validator(forked_chunk.val(strictindex)) updated_value._chunk.make_child_of(chunk.val(strictindex)) # marked_up = new_value.as_marked_up() # chunk.contents[chunk.ruamelindex(strictindex)] = marked_up 
chunk.add_key_association(default_key, strictindex) sp = chunk.strictparsed() if isinstance(sp, YAML): # Do not trigger __setitem__ validation at this point, as # we just ran the validator, and # representation.py:revalidate() doesn't overwrite the # _validator property until after all values are checked, # which leads to an exception being raised if it is # re-checked. sp._value[yaml_key] = updated_value else: sp[yaml_key] = updated_value if not set(self._required_keys).issubset(found_keys): chunk.while_parsing_found( u"a mapping", u"required key(s) '{0}' not found".format( "', '".join( sorted(list(set(self._required_keys).difference(found_keys))) ) ), ) def to_yaml(self, data): self._should_be_mapping(data) # TODO : if keys not in list or required keys missing, raise exception. return CommentedMap( [ (key, self.get_validator(key).to_yaml(value)) for key, value in data.items() if key not in self._defaults.keys() or key in self._defaults.keys() and value != self._defaults[key] ] ) class MapCombined(Map): def __init__(self, map_validator, key_validator, value_validator): super(MapCombined, self).__init__(map_validator, key_validator) self._value_validator = value_validator def get_validator(self, key): return self._validator_dict.get(key, self._value_validator) def unexpected_key(self, key, yaml_key, value, chunk): pass class Seq(SeqValidator): def __init__(self, validator): self._validator = validator def __repr__(self): return "Seq({0})".format(repr(self._validator)) def validate(self, chunk): for item in chunk.expect_sequence(): item.process(self._validator(item)) def to_yaml(self, data): self._should_be_list(data) return CommentedSeq([self._validator.to_yaml(item) for item in data]) class FixedSeq(SeqValidator): def __init__(self, validators): self._validators = validators for item in validators: assert isinstance( item, Validator ), "all FixedSeq validators must be Validators" def __repr__(self): return "FixedSeq({0})".format(repr(self._validators)) def 
validate(self, chunk): sequence = chunk.expect_sequence( "when expecting a sequence of {0} elements".format(len(self._validators)) ) if len(self._validators) != len(sequence): chunk.expecting_but_found( "when expecting a sequence of {0} elements".format( len(self._validators) ), "found a sequence of {0} elements".format(len(chunk.contents)), ) for item, validator in zip(sequence, self._validators): item.process(validator(item)) def to_yaml(self, data): self._should_be_list(data) # TODO : Different length string return CommentedSeq( [validator.to_yaml(item) for item, validator in zip(data, self._validators)] ) class UniqueSeq(SeqValidator): def __init__(self, validator): self._validator = validator assert isinstance( self._validator, ScalarValidator ), "UniqueSeq validator must be ScalarValidator" def __repr__(self): return "UniqueSeq({0})".format(repr(self._validator)) def validate(self, chunk): existing_items = set() for item in chunk.expect_sequence("when expecting a unique sequence"): if item.contents in existing_items: chunk.while_parsing_found("a sequence", "duplicate found") else: existing_items.add(item.contents) item.process(self._validator(item)) def to_yaml(self, data): self._should_be_list(data) if len(set(data)) < len(data): raise YAMLSerializationError( ( "Expecting all unique items, " "but duplicates were found in '{}'.".format(data) ) ) return CommentedSeq([self._validator.to_yaml(item) for item in data]) strictyaml-1.6.1/strictyaml/constants.py000066400000000000000000000004541415074265300205050ustar00rootroot00000000000000TRUE_VALUES = ["yes", "true", "on", "1", "y"] FALSE_VALUES = ["no", "false", "off", "0", "n"] BOOL_VALUES = TRUE_VALUES + FALSE_VALUES REGEXES = { "email": r".+?\@.+?", # https://urlregex.com/ "url": r"http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+", } strictyaml-1.6.1/strictyaml/dumper.py000066400000000000000000000044261415074265300177700ustar00rootroot00000000000000# coding: utf-8 from __future__ 
import absolute_import from strictyaml.ruamel.representer import RoundTripRepresenter from strictyaml.ruamel.scalarstring import ScalarString from strictyaml.ruamel.emitter import Emitter from strictyaml.ruamel.serializer import Serializer from strictyaml.ruamel.resolver import BaseResolver import sys if sys.version_info[0] == 3: RoundTripRepresenter.add_representer( ScalarString, RoundTripRepresenter.represent_str ) else: RoundTripRepresenter.add_representer( ScalarString, RoundTripRepresenter.represent_unicode ) class StrictYAMLResolver(BaseResolver): def __init__(self, version=None, loader=None): BaseResolver.__init__(self, loader) class StrictYAMLDumper(Emitter, Serializer, RoundTripRepresenter, StrictYAMLResolver): def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (Any, StreamType, Any, bool, Union[None, int], Union[None, int], bool, Any, Any, Union[None, bool], Union[None, bool], Any, Any, Any, Any, Any) -> None # NOQA Emitter.__init__( self, stream, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, block_seq_indent=block_seq_indent, top_level_colon_align=top_level_colon_align, prefix_colon=prefix_colon, dumper=self, ) Serializer.__init__( self, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, dumper=self, ) RoundTripRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style, dumper=self, ) StrictYAMLResolver.__init__(self, loader=self) strictyaml-1.6.1/strictyaml/exceptions.py000066400000000000000000000047361415074265300206610ustar00rootroot00000000000000from strictyaml.ruamel.error import MarkedYAMLError from strictyaml.ruamel.dumper import RoundTripDumper 
from strictyaml.ruamel import dump try: from strictyaml.ruamel.error import Mark as StringMark except ImportError: from strictyaml.ruamel.error import StringMark class StrictYAMLError(MarkedYAMLError): pass class InvalidValidatorError(StrictYAMLError): pass class CannotBuildDocumentFromInvalidData(StrictYAMLError): pass class CannotBuildDocumentsFromEmptyDictOrList(StrictYAMLError): pass class YAMLSerializationError(StrictYAMLError): pass class InvalidOptionalDefault(YAMLSerializationError): pass class YAMLValidationError(StrictYAMLError): def __init__(self, context, problem, chunk): self.context = context self.problem = problem self._chunk = chunk self.note = None @property def context_mark(self): context_line = self._chunk.start_line() - 1 str_document = dump(self._chunk.whole_document, Dumper=RoundTripDumper) context_index = len(u"\n".join(str_document.split(u"\n")[:context_line])) return StringMark( self._chunk.label, context_index, context_line, 0, str_document, context_index + 1, ) @property def problem_mark(self): problem_line = self._chunk.end_line() - 1 str_document = dump(self._chunk.whole_document, Dumper=RoundTripDumper) problem_index = len(u"\n".join(str_document.split(u"\n")[:problem_line])) return StringMark( self._chunk.label, problem_index, problem_line, 0, str_document, problem_index + 1, ) class DisallowedToken(StrictYAMLError): MESSAGE = "Disallowed token" class TagTokenDisallowed(DisallowedToken): MESSAGE = "Tag tokens not allowed" class FlowMappingDisallowed(DisallowedToken): MESSAGE = "Flow mapping tokens not allowed" class AnchorTokenDisallowed(DisallowedToken): MESSAGE = "Anchor tokens not allowed" class DuplicateKeysDisallowed(DisallowedToken): MESSAGE = "Duplicate keys not allowed" class InconsistentIndentationDisallowed(DisallowedToken): MESSAGE = "Inconsistent indentation not allowed" def raise_type_error(yaml_object, to_type, alternatives): raise TypeError( ("Cannot cast {0} to {1}.\n" "Use {2} instead.").format( repr(yaml_object), 
to_type, alternatives ) ) strictyaml-1.6.1/strictyaml/parser.py000066400000000000000000000263421415074265300177710ustar00rootroot00000000000000""" Parsing code for strictyaml. """ import sys from strictyaml import ruamel as ruamelyaml from strictyaml import exceptions from strictyaml.ruamel.comments import CommentedSeq, CommentedMap from strictyaml.any_validator import Any from strictyaml.yamllocation import YAMLChunk from strictyaml import utils from strictyaml.ruamel.reader import Reader from strictyaml.ruamel.scanner import RoundTripScanner from strictyaml.ruamel.parser import RoundTripParser from strictyaml.ruamel.composer import Composer from strictyaml.ruamel.constructor import RoundTripConstructor from strictyaml.ruamel.resolver import VersionedResolver from strictyaml.ruamel.nodes import MappingNode from strictyaml.ruamel.compat import PY2 from strictyaml.ruamel.constructor import ConstructorError if sys.version_info[:2] > (3, 4): from collections.abc import Hashable else: from collections import Hashable # StrictYAMLConstructor is mostly taken from RoundTripConstructor ruamel/yaml/constructor.py # Differences: # * If a duplicate key is found, an exception is raised class StrictYAMLConstructor(RoundTripConstructor): yaml_constructors = {} def construct_mapping(self, node, maptyp, deep=False): if not isinstance(node, MappingNode): raise ConstructorError( None, None, "expected a mapping node, but found %s" % node.id, node.start_mark, ) merge_map = self.flatten_mapping(node) # mapping = {} if node.comment: maptyp._yaml_add_comment(node.comment[:2]) if len(node.comment) > 2: maptyp.yaml_end_comment_extend(node.comment[2], clear=True) if node.anchor: from strictyaml.ruamel.serializer import templated_id if not templated_id(node.anchor): maptyp.yaml_set_anchor(node.anchor) for key_node, value_node in node.value: # keys can be list -> deep key = self.construct_object(key_node, deep=True) # lists are not hashable, but tuples are if not isinstance(key, Hashable): if 
isinstance(key, list): key = tuple(key) if PY2: try: hash(key) except TypeError as exc: raise ConstructorError( "while constructing a mapping", node.start_mark, "found unacceptable key (%s)" % exc, key_node.start_mark, ) else: if not isinstance(key, Hashable): raise ConstructorError( "while constructing a mapping", node.start_mark, "found unhashable key", key_node.start_mark, ) value = self.construct_object(value_node, deep=deep) if key_node.comment: maptyp._yaml_add_comment(key_node.comment, key=key) if value_node.comment: maptyp._yaml_add_comment(value_node.comment, value=key) maptyp._yaml_set_kv_line_col( key, [ key_node.start_mark.line, key_node.start_mark.column, value_node.start_mark.line, value_node.start_mark.column, ], ) if key in maptyp: key_node.start_mark.name = self.label key_node.end_mark.name = self.label raise exceptions.DuplicateKeysDisallowed( "While parsing", key_node.start_mark, "Duplicate key '{0}' found".format(key), key_node.end_mark, ) maptyp[key] = value # do this last, or <<: before a key will prevent insertion in instances # of collections.OrderedDict (as they have no __contains__ if merge_map: maptyp.add_yaml_merge(merge_map) # Don't verify Mapping indentation when allowing flow, # as that disallows: # short_key: { x = 1 } # very_long_key: { x = 1 } if not self.allow_flow_style: previous_indentation = None for node in [ nodegroup[1] for nodegroup in node.value if isinstance(nodegroup[1], ruamelyaml.nodes.MappingNode) ]: if previous_indentation is None: previous_indentation = node.start_mark.column if node.start_mark.column != previous_indentation: raise exceptions.InconsistentIndentationDisallowed( "While parsing", node.start_mark, "Found mapping with indentation " "inconsistent with previous mapping", node.end_mark, ) StrictYAMLConstructor.add_constructor( u"tag:yaml.org,2002:null", RoundTripConstructor.construct_yaml_str ) StrictYAMLConstructor.add_constructor( u"tag:yaml.org,2002:bool", RoundTripConstructor.construct_yaml_str ) 
StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:int", RoundTripConstructor.construct_yaml_str
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:float", RoundTripConstructor.construct_yaml_str
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:binary", RoundTripConstructor.construct_yaml_str
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:timestamp", RoundTripConstructor.construct_yaml_str
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:omap", RoundTripConstructor.construct_yaml_omap
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:pairs", RoundTripConstructor.construct_yaml_pairs
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:set", RoundTripConstructor.construct_yaml_set
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:str", RoundTripConstructor.construct_yaml_str
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:seq", RoundTripConstructor.construct_yaml_seq
)

StrictYAMLConstructor.add_constructor(
    u"tag:yaml.org,2002:map", RoundTripConstructor.construct_yaml_map
)

StrictYAMLConstructor.add_constructor(None, RoundTripConstructor.construct_undefined)


# StrictYAMLScanner is mostly taken from RoundTripScanner in ruamel/yaml/scanner.py
# Differences:
# * Tokens are checked for disallowed tokens.


class StrictYAMLScanner(RoundTripScanner):
    def check_token(self, *choices):
        # Check if the next token is one of the given types.
        while self.need_more_tokens():
            self.fetch_more_tokens()
        self._gather_comments()
        if self.tokens:
            if not choices:
                return True
            for choice in choices:
                if isinstance(self.tokens[0], choice):
                    token = self.tokens[0]
                    token.start_mark.name = self.label
                    token.end_mark.name = self.label
                    if isinstance(token, ruamelyaml.tokens.TagToken):
                        raise exceptions.TagTokenDisallowed(
                            "While scanning",
                            token.end_mark,
                            "Found disallowed tag tokens "
                            "(do not specify types in markup)",
                            token.start_mark,
                        )
                    if not self.allow_flow_style:
                        if isinstance(
                            token, ruamelyaml.tokens.FlowMappingStartToken
                        ) or isinstance(
                            token, ruamelyaml.tokens.FlowSequenceStartToken
                        ):
                            raise exceptions.FlowMappingDisallowed(
                                "While scanning",
                                token.start_mark,
                                "Found ugly disallowed JSONesque flow mapping "
                                "(surround with ' and ' to make text appear literally)",
                                token.end_mark,
                            )
                    if isinstance(token, ruamelyaml.tokens.AnchorToken):
                        raise exceptions.AnchorTokenDisallowed(
                            "While scanning",
                            token.start_mark,
                            "Found confusing disallowed anchor token "
                            "(surround with ' and ' to make text appear literally)",
                            token.end_mark,
                        )
                    return True
        return False


class StrictYAMLLoader(
    Reader,
    StrictYAMLScanner,
    RoundTripParser,
    Composer,
    StrictYAMLConstructor,
    VersionedResolver,
):
    def __init__(self, stream, version=None, preserve_quotes=None):
        Reader.__init__(self, stream, loader=self)
        StrictYAMLScanner.__init__(self, loader=self)
        RoundTripParser.__init__(self, loader=self)
        Composer.__init__(self, loader=self)
        StrictYAMLConstructor.__init__(
            self, preserve_quotes=preserve_quotes, loader=self
        )
        VersionedResolver.__init__(self, version, loader=self)


def as_document(data, schema=None, label=u""):
    """
    Translate dicts/lists and scalar (string/bool/float/int/etc.) values into a
    YAML object which can be dumped out.
""" if schema is None: schema = Any() return schema(YAMLChunk(schema.to_yaml(data), label=label)) def generic_load( yaml_string, schema=None, label=u"", allow_flow_style=False ): if not utils.is_string(yaml_string): raise TypeError("StrictYAML can only read a string of valid YAML.") # We manufacture a class that has the label we want DynamicStrictYAMLLoader = type( "DynamicStrictYAMLLoader", (StrictYAMLLoader,), {"label": label, "allow_flow_style": allow_flow_style}, ) try: document = ruamelyaml.load(yaml_string, Loader=DynamicStrictYAMLLoader) except ruamelyaml.YAMLError as parse_error: if parse_error.context_mark is not None: parse_error.context_mark.name = label if parse_error.problem_mark is not None: parse_error.problem_mark.name = label raise parse_error # Document is just a (string, int, etc.) if type(document) not in (CommentedMap, CommentedSeq): document = yaml_string if schema is None: schema = Any() return schema(YAMLChunk(document, label=label)) def dirty_load( yaml_string, schema=None, label=u"", allow_flow_style=False ): """ Parse the first YAML document in a string and produce corresponding YAML object. If allow_flow_style is set to True, then flow style is allowed. """ return generic_load( yaml_string, schema=schema, label=label, allow_flow_style=allow_flow_style ) def load(yaml_string, schema=None, label=u""): """ Parse the first YAML document in a string and produce corresponding YAML object. 
""" return generic_load(yaml_string, schema=schema, label=label) strictyaml-1.6.1/strictyaml/representation.py000066400000000000000000000306631415074265300215400ustar00rootroot00000000000000from strictyaml.ruamel.comments import CommentedSeq, CommentedMap from strictyaml.exceptions import raise_type_error, YAMLSerializationError from strictyaml.yamllocation import YAMLChunk from strictyaml.dumper import StrictYAMLDumper from strictyaml.ruamel import dump, scalarstring from copy import copy import decimal import sys if sys.version_info[0] == 3: unicode = str if sys.version_info[:2] < (3, 7): from collections import OrderedDict as OrderedDictBase class OrderedDict(OrderedDictBase): def __repr__(self): return ( "{" + ", ".join("{}: {}".format(repr(k), repr(v)) for k, v in self.items()) + "}" ) else: OrderedDict = dict class YAMLIterator(object): def __init__(self, yaml_object): self._yaml_object = yaml_object self._index = 0 def __iter__(self): return self def next(self): return self.__next__() def __next__(self): if self._index >= len(self._yaml_object): raise StopIteration else: self._index = self._index + 1 return self._yaml_object[self._index - 1] class YAML(object): """ A YAML object represents a block of YAML which can be: * Used to extract parsed data from the YAML (.data). * Used to render to a string of YAML, with comments (.as_yaml()). * Revalidated with a stricter schema (.revalidate(schema)). 
""" def __init__(self, value, validator=None): if isinstance(value, YAMLChunk): self._chunk = value self._validator = validator if value.is_scalar(): self._value = validator.validate(value) if isinstance(self._value, YAML): self._value = self._value._value self._text = value.contents else: self._value = ( value.strictparsed()._value if isinstance(value.strictparsed(), YAML) else value.strictparsed() ) self._text = None elif isinstance(value, YAML): self._chunk = value._chunk self._validator = validator if validator is not None else value.validator self._value = value._value self._text = value._text else: self._chunk = YAMLChunk(value) self._validator = validator self._value = value self._text = unicode(value) self._selected_validator = None assert not isinstance(self._value, YAML) def __int__(self): # TODO: Raise more sensible exception if not int return int(self._value) def __str__(self): if not self.is_scalar(): raise TypeError( "Cannot cast mapping/sequence '{0}' to string".format(repr(self._value)) ) elif type(self._value) in (unicode, str, int, float, decimal.Decimal): return unicode(self._value) else: raise_type_error( repr(self), "str", "str(yamlobj.data) or str(yamlobj.text)" ) def __unicode__(self): return self.__str__() def revalidate(self, schema): if self.is_scalar(): self._value = schema(self._chunk)._value else: result = schema(self._chunk) self._selected_validator = result._selected_validator self._validator = schema @property def data(self): """ Returns raw data representation of the document or document segment. Mappings are rendered as ordered dicts, sequences as lists and scalar values as whatever the validator returns (int, string, etc.). If no validators are used, scalar values are always returned as strings. 
""" if isinstance(self._value, CommentedMap): mapping = OrderedDict() for key, value in self._value.items(): """ #if isinstance(key, YAML): #mapping[key.data] = value.data if isinstance(value, YAML) else value """ mapping[key.data] = value.data return mapping elif isinstance(self._value, CommentedSeq): return [item.data for item in self._value] else: if isinstance(self._value, scalarstring.ScalarString): return str(self._value) return self._value def as_marked_up(self): """ Returns strictyaml.ruamel CommentedSeq/CommentedMap objects with comments. This can be fed directly into a strictyaml.ruamel dumper. """ return self._chunk.contents @property def start_line(self): """ Return line number that the element starts on (including preceding comments). """ return self._chunk.start_line() @property def end_line(self): """ Return line number that the element ends on (including trailing comments). """ return self._chunk.end_line() def lines(self): """ Return a string of the lines which make up the selected line including preceding and trailing comments. 
""" return self._chunk.lines() def lines_before(self, how_many): return self._chunk.lines_before(how_many) def lines_after(self, how_many): return self._chunk.lines_after(how_many) def __float__(self): return float(self._value) def __repr__(self): return u"YAML({0})".format(self.data) def __bool__(self): if isinstance(self._value, bool): return self._value else: raise_type_error( repr(self), "bool", "bool(yamlobj.data) or bool(yamlobj.text)" ) def _strictindex(self, index): if isinstance(index, YAML): index = index.data if self.is_mapping(): key_validator = ( self._selected_validator.key_validator if self._selected_validator is not None else self._validator.key_validator ) return key_validator(YAMLChunk(index)).data else: return index def __nonzero__(self): return self.__bool__() def __getitem__(self, index): return self._value[self._strictindex(index)] def __setitem__(self, index, value): strictindex = self._strictindex(index) # Generate nice error messages - first, copy our whole node's data # and use ``to_yaml()`` to determine if the resulting data would # validate our schema. Must replace whole current node to support # complex types, e.g. ``EmptyList() | Seq(Str())``. if isinstance(value, YAML): yaml_value = self._chunk.fork(strictindex, value) new_value = self._validator(yaml_value) else: old_data = self.data if isinstance(old_data, dict): old_data[index] = value elif isinstance(old_data, list): if len(old_data) <= index: raise YAMLSerializationError( "cannot extend list via __setitem__. " "Instead, replace whole list on parent " "node." ) old_data[index] = value else: raise NotImplementedError(repr(old_data)) yaml_value = YAMLChunk(self._validator.to_yaml(old_data)) yaml_value_repr = self._validator(yaml_value) # Now that the new content is properly validated, create a valid # chunk with the new information. 
forked_chunk = self._chunk.fork(strictindex, yaml_value_repr[strictindex]) new_value = self._validator(forked_chunk) # Now, overwrite our chunk and value with the new information. old_chunk = self._chunk # Needed for reference to pre-fork ruamel self._chunk = new_value._chunk self._value = new_value._value self._text = new_value._text self._selected_validator = new_value._selected_validator # Update any parent ruamel links to point to our new chunk. self._chunk.pointer.set(old_chunk, "_ruamelparsed", new_value._chunk.contents) self._chunk.pointer.set(old_chunk, "_strictparsed", self, strictdoc=True) # forked chunk made a deep copy of the original document, but we just # updated pointers in the original document. So, restore our chunk to # pointing at the original document. self._chunk._ruamelparsed = old_chunk._ruamelparsed self._chunk._strictparsed = old_chunk._strictparsed def __delitem__(self, index): strictindex = self._strictindex(index) del self._value[strictindex] del self._chunk.contents[self._chunk.ruamelindex(strictindex)] def __hash__(self): return hash(self._value) def __len__(self): return len(self._value) def as_yaml(self): """ Render the YAML node and subnodes as string. 
""" dumped = dump(self.as_marked_up(), Dumper=StrictYAMLDumper, allow_unicode=True) return dumped if sys.version_info[0] == 3 else dumped.decode("utf8") def items(self): if not isinstance(self._value, CommentedMap): raise TypeError("{0} not a mapping, cannot use .items()".format(repr(self))) return [(key, self._value[key]) for key, value in self._value.items()] def keys(self): if not isinstance(self._value, CommentedMap): raise TypeError("{0} not a mapping, cannot use .keys()".format(repr(self))) return [key for key, _ in self._value.items()] def values(self): if not isinstance(self._value, CommentedMap): raise TypeError( "{0} not a mapping, cannot use .values()".format(repr(self)) ) return [self._value[key] for key, value in self._value.items()] def get(self, index, default=None): if not isinstance(self._value, CommentedMap): raise TypeError("{0} not a mapping, cannot use .get()".format(repr(self))) return self._value[index] if index in self._value.keys() else default def __contains__(self, item): if isinstance(self._value, CommentedSeq): return item in self._value elif isinstance(self._value, CommentedMap): return item in self.keys() else: return item in self._value def __iter__(self): if self.is_sequence(): return YAMLIterator(self) elif self.is_mapping(): return YAMLIterator(self.keys()) else: raise TypeError("{0} is a scalar value, cannot iterate.".format(repr(self))) @property def validator(self): return self._validator @property def text(self): """ Return string value of scalar, whatever value it was parsed as. 
""" if isinstance(self._value, CommentedMap): raise TypeError("{0} is a mapping, has no text value.".format(repr(self))) if isinstance(self._value, CommentedSeq): raise TypeError("{0} is a sequence, has no text value.".format(repr(self))) return self._text def copy(self): return copy(self) def __gt__(self, val): if isinstance(self._value, CommentedMap) or isinstance( self._value, CommentedSeq ): raise TypeError("{0} not an orderable type.".format(repr(self._value))) return self._value > val def __lt__(self, val): if isinstance(self._value, CommentedMap) or isinstance( self._value, CommentedSeq ): raise TypeError("{0} not an orderable type.".format(repr(self._value))) return self._value < val @property def value(self): return self._value def is_mapping(self): return isinstance(self._value, CommentedMap) def is_sequence(self): return isinstance(self._value, CommentedSeq) def is_scalar(self): return not self.is_mapping() and not self.is_sequence() @property def scalar(self): if isinstance(self._value, (CommentedMap, CommentedSeq)): raise TypeError("{0} has no scalar value.".format(repr(self))) return self._value def whole_document(self): return self._chunk.whole_document def __eq__(self, value): return self.data == value def __ne__(self, value): return self.data != value strictyaml-1.6.1/strictyaml/ruamel/000077500000000000000000000000001415074265300174015ustar00rootroot00000000000000strictyaml-1.6.1/strictyaml/ruamel/__init__.py000066400000000000000000000042721415074265300215170ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals if False: # MYPY from typing import Dict, Any # NOQA _package_data = dict( full_package_name="strictyaml.ruamel", version_info=(0, 16, 13), __version__="0.16.13", author="Anthon van der Neut", author_email="a.van.der.neut@ruamel.eu", description="strictyaml.ruamel is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map 
key order", # NOQA entry_points=None, since=2014, extras_require={ ':platform_python_implementation=="CPython" and python_version<="2.7"': [ "ruamel.ordereddict" ], # NOQA ':platform_python_implementation=="CPython" and python_version<"3.10"': [ "strictyaml.ruamel.clib>=0.1.2" ], # NOQA "jinja2": ["strictyaml.ruamel.jinja2>=0.2"], "docs": ["ryd"], }, classifiers=[ "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Programming Language :: Python :: Implementation :: Jython", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: Text Processing :: Markup", "Typing :: Typed", ], keywords="yaml 1.2 parser round-trip preserve quotes order config", read_the_docs="yaml", supported=[(2, 7), (3, 5)], # minimum tox=dict( env="*", # remove 'pn', no longer test narrow Python 2.7 for unicode patterns and PyPy deps="ruamel.std.pathlib", fl8excl="_test/lib", ), universal=True, rtfd="yaml", ) # type: Dict[Any, Any] version_info = _package_data["version_info"] __version__ = _package_data["__version__"] try: from .cyaml import * # NOQA __with_libyaml__ = True except (ImportError, ValueError): # for Jython __with_libyaml__ = False from strictyaml.ruamel.main import * # NOQA strictyaml-1.6.1/strictyaml/ruamel/anchor.py000066400000000000000000000007641415074265300212340ustar00rootroot00000000000000if False: # MYPY from typing import Any, Dict, Optional, List, Union, Optional, Iterator # NOQA anchor_attrib = "_yaml_anchor" class Anchor(object): __slots__ = "value", "always_dump" attrib = anchor_attrib def __init__(self): # type: () -> None self.value = None self.always_dump = False def __repr__(self): # type: () -> Any ad = ", (always dump)" if self.always_dump else "" return 
"Anchor({!r}{})".format(self.value, ad) strictyaml-1.6.1/strictyaml/ruamel/comments.py000066400000000000000000001047101415074265300216030ustar00rootroot00000000000000# coding: utf-8 from __future__ import absolute_import, print_function """ stuff to deal with comments and formatting on dict/list/ordereddict/set these are not really related, formatting could be factored out as a separate base """ import sys import copy from strictyaml.ruamel.compat import ordereddict # type: ignore from strictyaml.ruamel.compat import PY2, string_types, MutableSliceableSequence from strictyaml.ruamel.scalarstring import ScalarString from strictyaml.ruamel.anchor import Anchor if PY2: from collections import MutableSet, Sized, Set, Mapping else: from collections.abc import MutableSet, Sized, Set, Mapping if False: # MYPY from typing import Any, Dict, Optional, List, Union, Optional, Iterator # NOQA # fmt: off __all__ = ['CommentedSeq', 'CommentedKeySeq', 'CommentedMap', 'CommentedOrderedMap', 'CommentedSet', 'comment_attrib', 'merge_attrib'] # fmt: on comment_attrib = "_yaml_comment" format_attrib = "_yaml_format" line_col_attrib = "_yaml_line_col" merge_attrib = "_yaml_merge" tag_attrib = "_yaml_tag" class Comment(object): # sys.getsize tested the Comment objects, __slots__ makes them bigger # and adding self.end did not matter __slots__ = "comment", "_items", "_end", "_start" attrib = comment_attrib def __init__(self): # type: () -> None self.comment = None # [post, [pre]] # map key (mapping/omap/dict) or index (sequence/list) to a list of # dict: post_key, pre_key, post_value, pre_value # list: pre item, post item self._items = {} # type: Dict[Any, Any] # self._start = [] # should not put these on first item self._end = [] # type: List[Any] # end of document comments def __str__(self): # type: () -> str if bool(self._end): end = ",\n end=" + str(self._end) else: end = "" return "Comment(comment={0},\n items={1}{2})".format( self.comment, self._items, end ) @property def 
items(self): # type: () -> Any return self._items @property def end(self): # type: () -> Any return self._end @end.setter def end(self, value): # type: (Any) -> None self._end = value @property def start(self): # type: () -> Any return self._start @start.setter def start(self, value): # type: (Any) -> None self._start = value # to distinguish key from None def NoComment(): # type: () -> None pass class Format(object): __slots__ = ("_flow_style",) attrib = format_attrib def __init__(self): # type: () -> None self._flow_style = None # type: Any def set_flow_style(self): # type: () -> None self._flow_style = True def set_block_style(self): # type: () -> None self._flow_style = False def flow_style(self, default=None): # type: (Optional[Any]) -> Any """if default (the flow_style) is None, the flow style tacked on to the object explicitly will be taken. If that is None as well the default flow style rules the format down the line, or the type of the constituent values (simple -> flow, map/list -> block)""" if self._flow_style is None: return default return self._flow_style class LineCol(object): attrib = line_col_attrib def __init__(self): # type: () -> None self.line = None self.col = None self.data = None # type: Optional[Dict[Any, Any]] def add_kv_line_col(self, key, data): # type: (Any, Any) -> None if self.data is None: self.data = {} self.data[key] = data def key(self, k): # type: (Any) -> Any return self._kv(k, 0, 1) def value(self, k): # type: (Any) -> Any return self._kv(k, 2, 3) def _kv(self, k, x0, x1): # type: (Any, Any, Any) -> Any if self.data is None: return None data = self.data[k] return data[x0], data[x1] def item(self, idx): # type: (Any) -> Any if self.data is None: return None return self.data[idx][0], self.data[idx][1] def add_idx_line_col(self, key, data): # type: (Any, Any) -> None if self.data is None: self.data = {} self.data[key] = data class Tag(object): """store tag information for roundtripping""" __slots__ = ("value",) attrib = tag_attrib 
def __init__(self): # type: () -> None self.value = None def __repr__(self): # type: () -> Any return "{0.__class__.__name__}({0.value!r})".format(self) class CommentedBase(object): @property def ca(self): # type: () -> Any if not hasattr(self, Comment.attrib): setattr(self, Comment.attrib, Comment()) return getattr(self, Comment.attrib) def yaml_end_comment_extend(self, comment, clear=False): # type: (Any, bool) -> None if comment is None: return if clear or self.ca.end is None: self.ca.end = [] self.ca.end.extend(comment) def yaml_key_comment_extend(self, key, comment, clear=False): # type: (Any, Any, bool) -> None r = self.ca._items.setdefault(key, [None, None, None, None]) if clear or r[1] is None: if comment[1] is not None: assert isinstance(comment[1], list) r[1] = comment[1] else: r[1].extend(comment[0]) r[0] = comment[0] def yaml_value_comment_extend(self, key, comment, clear=False): # type: (Any, Any, bool) -> None r = self.ca._items.setdefault(key, [None, None, None, None]) if clear or r[3] is None: if comment[1] is not None: assert isinstance(comment[1], list) r[3] = comment[1] else: r[3].extend(comment[0]) r[2] = comment[0] def yaml_set_start_comment(self, comment, indent=0): # type: (Any, Any) -> None """overwrites any preceding comment lines on an object expects comment to be without `#` and possible have multiple lines """ from .error import CommentMark from .tokens import CommentToken pre_comments = self._yaml_get_pre_comment() if comment[-1] == "\n": comment = comment[:-1] # strip final newline if there start_mark = CommentMark(indent) for com in comment.split("\n"): c = com.strip() if len(c) > 0 and c[0] != "#": com = "# " + com pre_comments.append(CommentToken(com + "\n", start_mark, None)) def yaml_set_comment_before_after_key( self, key, before=None, indent=0, after=None, after_indent=None ): # type: (Any, Any, Any, Any, Any) -> None """ expects comment (before/after) to be without `#` and possible have multiple lines """ from 
strictyaml.ruamel.error import CommentMark from strictyaml.ruamel.tokens import CommentToken def comment_token(s, mark): # type: (Any, Any) -> Any # handle empty lines as having no comment return CommentToken(("# " if s else "") + s + "\n", mark, None) if after_indent is None: after_indent = indent + 2 if before and (len(before) > 1) and before[-1] == "\n": before = before[:-1] # strip final newline if there if after and after[-1] == "\n": after = after[:-1] # strip final newline if there start_mark = CommentMark(indent) c = self.ca.items.setdefault(key, [None, [], None, None]) if before == "\n": c[1].append(comment_token("", start_mark)) elif before: for com in before.split("\n"): c[1].append(comment_token(com, start_mark)) if after: start_mark = CommentMark(after_indent) if c[3] is None: c[3] = [] for com in after.split("\n"): c[3].append(comment_token(com, start_mark)) # type: ignore @property def fa(self): # type: () -> Any """format attribute set_flow_style()/set_block_style()""" if not hasattr(self, Format.attrib): setattr(self, Format.attrib, Format()) return getattr(self, Format.attrib) def yaml_add_eol_comment(self, comment, key=NoComment, column=None): # type: (Any, Optional[Any], Optional[Any]) -> None """ there is a problem as eol comments should start with ' #' (but at the beginning of the line the space doesn't have to be before the #. 
The column index is for the # mark """ from .tokens import CommentToken from .error import CommentMark if column is None: try: column = self._yaml_get_column(key) except AttributeError: column = 0 if comment[0] != "#": comment = "# " + comment if column is None: if comment[0] == "#": comment = " " + comment column = 0 start_mark = CommentMark(column) ct = [CommentToken(comment, start_mark, None), None] self._yaml_add_eol_comment(ct, key=key) @property def lc(self): # type: () -> Any if not hasattr(self, LineCol.attrib): setattr(self, LineCol.attrib, LineCol()) return getattr(self, LineCol.attrib) def _yaml_set_line_col(self, line, col): # type: (Any, Any) -> None self.lc.line = line self.lc.col = col def _yaml_set_kv_line_col(self, key, data): # type: (Any, Any) -> None self.lc.add_kv_line_col(key, data) def _yaml_set_idx_line_col(self, key, data): # type: (Any, Any) -> None self.lc.add_idx_line_col(key, data) @property def anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) def yaml_anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): return None return self.anchor def yaml_set_anchor(self, value, always_dump=False): # type: (Any, bool) -> None self.anchor.value = value self.anchor.always_dump = always_dump @property def tag(self): # type: () -> Any if not hasattr(self, Tag.attrib): setattr(self, Tag.attrib, Tag()) return getattr(self, Tag.attrib) def yaml_set_tag(self, value): # type: (Any) -> None self.tag.value = value def copy_attributes(self, t, memo=None): # type: (Any, Any) -> None # fmt: off for a in [Comment.attrib, Format.attrib, LineCol.attrib, Anchor.attrib, Tag.attrib, merge_attrib]: if hasattr(self, a): if memo is not None: setattr(t, a, copy.deepcopy(getattr(self, a, memo))) else: setattr(t, a, getattr(self, a)) # fmt: on def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None raise NotImplementedError def 
_yaml_get_pre_comment(self): # type: () -> Any raise NotImplementedError def _yaml_get_column(self, key): # type: (Any) -> Any raise NotImplementedError class CommentedSeq(MutableSliceableSequence, list, CommentedBase): # type: ignore __slots__ = (Comment.attrib, "_lst") def __init__(self, *args, **kw): # type: (Any, Any) -> None list.__init__(self, *args, **kw) def __getsingleitem__(self, idx): # type: (Any) -> Any return list.__getitem__(self, idx) def __setsingleitem__(self, idx, value): # type: (Any, Any) -> None # try to preserve the scalarstring type if setting an existing key to a new value if idx < len(self): if ( isinstance(value, string_types) and not isinstance(value, ScalarString) and isinstance(self[idx], ScalarString) ): value = type(self[idx])(value) list.__setitem__(self, idx, value) def __delsingleitem__(self, idx=None): # type: (Any) -> Any list.__delitem__(self, idx) self.ca.items.pop(idx, None) # might not be there -> default value for list_index in sorted(self.ca.items): if list_index < idx: continue self.ca.items[list_index - 1] = self.ca.items.pop(list_index) def __len__(self): # type: () -> int return list.__len__(self) def insert(self, idx, val): # type: (Any, Any) -> None """the comments after the insertion have to move forward""" list.insert(self, idx, val) for list_index in sorted(self.ca.items, reverse=True): if list_index < idx: break self.ca.items[list_index + 1] = self.ca.items.pop(list_index) def extend(self, val): # type: (Any) -> None list.extend(self, val) def __eq__(self, other): # type: (Any) -> bool return list.__eq__(self, other) def _yaml_add_comment(self, comment, key=NoComment): # type: (Any, Optional[Any]) -> None if key is not NoComment: self.yaml_key_comment_extend(key, comment) else: self.ca.comment = comment def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None self._yaml_add_comment(comment, key=key) def _yaml_get_columnX(self, key): # type: (Any) -> Any return 
self.ca.items[key][0].start_mark.column def _yaml_get_column(self, key): # type: (Any) -> Any column = None sel_idx = None pre, post = key - 1, key + 1 if pre in self.ca.items: sel_idx = pre elif post in self.ca.items: sel_idx = post else: # self.ca.items is not ordered for row_idx, _k1 in enumerate(self): if row_idx >= key: break if row_idx not in self.ca.items: continue sel_idx = row_idx if sel_idx is not None: column = self._yaml_get_columnX(sel_idx) return column def _yaml_get_pre_comment(self): # type: () -> Any pre_comments = [] # type: List[Any] if self.ca.comment is None: self.ca.comment = [None, pre_comments] else: self.ca.comment[1] = pre_comments return pre_comments def __deepcopy__(self, memo): # type: (Any) -> Any res = self.__class__() memo[id(self)] = res for k in self: res.append(copy.deepcopy(k, memo)) self.copy_attributes(res, memo=memo) return res def __add__(self, other): # type: (Any) -> Any return list.__add__(self, other) def sort(self, key=None, reverse=False): # type: ignore # type: (Any, bool) -> None if key is None: tmp_lst = sorted(zip(self, range(len(self))), reverse=reverse) list.__init__(self, [x[0] for x in tmp_lst]) else: tmp_lst = sorted( zip(map(key, list.__iter__(self)), range(len(self))), reverse=reverse ) list.__init__(self, [list.__getitem__(self, x[1]) for x in tmp_lst]) itm = self.ca.items self.ca._items = {} for idx, x in enumerate(tmp_lst): old_index = x[1] if old_index in itm: self.ca.items[idx] = itm[old_index] def __repr__(self): # type: () -> Any return list.__repr__(self) class CommentedKeySeq(tuple, CommentedBase): # type: ignore """This primarily exists to be able to roundtrip keys that are sequences""" def _yaml_add_comment(self, comment, key=NoComment): # type: (Any, Optional[Any]) -> None if key is not NoComment: self.yaml_key_comment_extend(key, comment) else: self.ca.comment = comment def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None self._yaml_add_comment(comment, key=key) def 
_yaml_get_columnX(self, key): # type: (Any) -> Any return self.ca.items[key][0].start_mark.column def _yaml_get_column(self, key): # type: (Any) -> Any column = None sel_idx = None pre, post = key - 1, key + 1 if pre in self.ca.items: sel_idx = pre elif post in self.ca.items: sel_idx = post else: # self.ca.items is not ordered for row_idx, _k1 in enumerate(self): if row_idx >= key: break if row_idx not in self.ca.items: continue sel_idx = row_idx if sel_idx is not None: column = self._yaml_get_columnX(sel_idx) return column def _yaml_get_pre_comment(self): # type: () -> Any pre_comments = [] # type: List[Any] if self.ca.comment is None: self.ca.comment = [None, pre_comments] else: self.ca.comment[1] = pre_comments return pre_comments class CommentedMapView(Sized): __slots__ = ("_mapping",) def __init__(self, mapping): # type: (Any) -> None self._mapping = mapping def __len__(self): # type: () -> int count = len(self._mapping) return count class CommentedMapKeysView(CommentedMapView, Set): # type: ignore __slots__ = () @classmethod def _from_iterable(self, it): # type: (Any) -> Any return set(it) def __contains__(self, key): # type: (Any) -> Any return key in self._mapping def __iter__(self): # type: () -> Any # yield from self._mapping # not in py27, pypy # for x in self._mapping._keys(): for x in self._mapping: yield x class CommentedMapItemsView(CommentedMapView, Set): # type: ignore __slots__ = () @classmethod def _from_iterable(self, it): # type: (Any) -> Any return set(it) def __contains__(self, item): # type: (Any) -> Any key, value = item try: v = self._mapping[key] except KeyError: return False else: return v == value def __iter__(self): # type: () -> Any for key in self._mapping._keys(): yield (key, self._mapping[key]) class CommentedMapValuesView(CommentedMapView): __slots__ = () def __contains__(self, value): # type: (Any) -> Any for key in self._mapping: if value == self._mapping[key]: return True return False def __iter__(self): # type: () -> Any for 
key in self._mapping._keys(): yield self._mapping[key] class CommentedMap(ordereddict, CommentedBase): # type: ignore __slots__ = (Comment.attrib, "_ok", "_ref") def __init__(self, *args, **kw): # type: (Any, Any) -> None self._ok = set() # type: MutableSet[Any] # own keys self._ref = [] # type: List[CommentedMap] ordereddict.__init__(self, *args, **kw) def _yaml_add_comment(self, comment, key=NoComment, value=NoComment): # type: (Any, Optional[Any], Optional[Any]) -> None """values is set to key to indicate a value attachment of comment""" if key is not NoComment: self.yaml_key_comment_extend(key, comment) return if value is not NoComment: self.yaml_value_comment_extend(value, comment) else: self.ca.comment = comment def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None """add on the value line, with value specified by the key""" self._yaml_add_comment(comment, value=key) def _yaml_get_columnX(self, key): # type: (Any) -> Any return self.ca.items[key][2].start_mark.column def _yaml_get_column(self, key): # type: (Any) -> Any column = None sel_idx = None pre, post, last = None, None, None for x in self: if pre is not None and x != key: post = x break if x == key: pre = last last = x if pre in self.ca.items: sel_idx = pre elif post in self.ca.items: sel_idx = post else: # self.ca.items is not ordered for k1 in self: if k1 >= key: break if k1 not in self.ca.items: continue sel_idx = k1 if sel_idx is not None: column = self._yaml_get_columnX(sel_idx) return column def _yaml_get_pre_comment(self): # type: () -> Any pre_comments = [] # type: List[Any] if self.ca.comment is None: self.ca.comment = [None, pre_comments] else: self.ca.comment[1] = pre_comments return pre_comments def update(self, *vals, **kw): # type: (Any, Any) -> None try: ordereddict.update(self, *vals, **kw) except TypeError: # probably a dict that is used for x in vals[0]: self[x] = vals[0][x] try: self._ok.update(vals.keys()) # type: ignore except AttributeError: # assume one 
argument that is a list/tuple of two element lists/tuples for x in vals[0]: self._ok.add(x[0]) if kw: self._ok.add(*kw.keys()) def insert(self, pos, key, value, comment=None): # type: (Any, Any, Any, Optional[Any]) -> None """insert key value into given position attach comment if provided """ ordereddict.insert(self, pos, key, value) self._ok.add(key) if comment is not None: self.yaml_add_eol_comment(comment, key=key) def mlget(self, key, default=None, list_ok=False): # type: (Any, Any, Any) -> Any """multi-level get that expects dicts within dicts""" if not isinstance(key, list): return self.get(key, default) # assume that the key is a list of recursively accessible dicts def get_one_level(key_list, level, d): # type: (Any, Any, Any) -> Any if not list_ok: assert isinstance(d, dict) if level >= len(key_list): if level > len(key_list): raise IndexError return d[key_list[level - 1]] return get_one_level(key_list, level + 1, d[key_list[level - 1]]) try: return get_one_level(key, 1, self) except KeyError: return default except (TypeError, IndexError): if not list_ok: raise return default def __getitem__(self, key): # type: (Any) -> Any try: return ordereddict.__getitem__(self, key) except KeyError: for merged in getattr(self, merge_attrib, []): if key in merged[1]: return merged[1][key] raise def __setitem__(self, key, value): # type: (Any, Any) -> None # try to preserve the scalarstring type if setting an existing key to a new value if key in self: if ( isinstance(value, string_types) and not isinstance(value, ScalarString) and isinstance(self[key], ScalarString) ): value = type(self[key])(value) ordereddict.__setitem__(self, key, value) self._ok.add(key) def _unmerged_contains(self, key): # type: (Any) -> Any if key in self._ok: return True return None def __contains__(self, key): # type: (Any) -> bool return bool(ordereddict.__contains__(self, key)) def get(self, key, default=None): # type: (Any, Any) -> Any try: return self.__getitem__(key) except: # NOQA return 
default def __repr__(self): # type: () -> Any return ordereddict.__repr__(self).replace("CommentedMap", "ordereddict") def non_merged_items(self): # type: () -> Any for x in ordereddict.__iter__(self): if x in self._ok: yield x, ordereddict.__getitem__(self, x) def __delitem__(self, key): # type: (Any) -> None # for merged in getattr(self, merge_attrib, []): # if key in merged[1]: # value = merged[1][key] # break # else: # # not found in merged in stuff # ordereddict.__delitem__(self, key) # for referer in self._ref: # referer.update_key_value(key) # return # # ordereddict.__setitem__(self, key, value) # merge might have different value # self._ok.discard(key) self._ok.discard(key) ordereddict.__delitem__(self, key) for referer in self._ref: referer.update_key_value(key) def __iter__(self): # type: () -> Any for x in ordereddict.__iter__(self): yield x def _keys(self): # type: () -> Any for x in ordereddict.__iter__(self): yield x def __len__(self): # type: () -> int return int(ordereddict.__len__(self)) def __eq__(self, other): # type: (Any) -> bool return bool(dict(self) == other) if PY2: def keys(self): # type: () -> Any return list(self._keys()) def iterkeys(self): # type: () -> Any return self._keys() def viewkeys(self): # type: () -> Any return CommentedMapKeysView(self) else: def keys(self): # type: () -> Any return CommentedMapKeysView(self) if PY2: def _values(self): # type: () -> Any for x in ordereddict.__iter__(self): yield ordereddict.__getitem__(self, x) def values(self): # type: () -> Any return list(self._values()) def itervalues(self): # type: () -> Any return self._values() def viewvalues(self): # type: () -> Any return CommentedMapValuesView(self) else: def values(self): # type: () -> Any return CommentedMapValuesView(self) def _items(self): # type: () -> Any for x in ordereddict.__iter__(self): yield x, ordereddict.__getitem__(self, x) if PY2: def items(self): # type: () -> Any return list(self._items()) def iteritems(self): # type: () -> Any 
return self._items() def viewitems(self): # type: () -> Any return CommentedMapItemsView(self) else: def items(self): # type: () -> Any return CommentedMapItemsView(self) @property def merge(self): # type: () -> Any if not hasattr(self, merge_attrib): setattr(self, merge_attrib, []) return getattr(self, merge_attrib) def copy(self): # type: () -> Any x = type(self)() # update doesn't work for k, v in self._items(): x[k] = v self.copy_attributes(x) return x def add_referent(self, cm): # type: (Any) -> None if cm not in self._ref: self._ref.append(cm) def add_yaml_merge(self, value): # type: (Any) -> None for v in value: v[1].add_referent(self) for k, v in v[1].items(): if ordereddict.__contains__(self, k): continue ordereddict.__setitem__(self, k, v) self.merge.extend(value) def update_key_value(self, key): # type: (Any) -> None if key in self._ok: return for v in self.merge: if key in v[1]: ordereddict.__setitem__(self, key, v[1][key]) return ordereddict.__delitem__(self, key) def __deepcopy__(self, memo): # type: (Any) -> Any res = self.__class__() memo[id(self)] = res for k in self: res[k] = copy.deepcopy(self[k], memo) self.copy_attributes(res, memo=memo) return res # based on brownie mappings @classmethod # type: ignore def raise_immutable(cls, *args, **kwargs): # type: (Any, *Any, **Any) -> None raise TypeError("{} objects are immutable".format(cls.__name__)) class CommentedKeyMap(CommentedBase, Mapping): # type: ignore __slots__ = Comment.attrib, "_od" """This primarily exists to be able to roundtrip keys that are mappings""" def __init__(self, *args, **kw): # type: (Any, Any) -> None if hasattr(self, "_od"): raise_immutable(self) try: self._od = ordereddict(*args, **kw) except TypeError: if PY2: self._od = ordereddict(args[0].items()) else: raise __delitem__ = ( __setitem__ ) = clear = pop = popitem = setdefault = update = raise_immutable # need to implement __getitem__, __iter__ and __len__ def __getitem__(self, index): # type: (Any) -> Any return 
self._od[index] def __iter__(self): # type: () -> Iterator[Any] for x in self._od.__iter__(): yield x def __len__(self): # type: () -> int return len(self._od) def __hash__(self): # type: () -> Any return hash(tuple(self.items())) def __repr__(self): # type: () -> Any if not hasattr(self, merge_attrib): return self._od.__repr__() return "ordereddict(" + repr(list(self._od.items())) + ")" @classmethod def fromkeys(keys, v=None): # type: (Any, Any) -> Any return CommentedKeyMap(dict.fromkeys(keys, v)) def _yaml_add_comment(self, comment, key=NoComment): # type: (Any, Optional[Any]) -> None if key is not NoComment: self.yaml_key_comment_extend(key, comment) else: self.ca.comment = comment def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None self._yaml_add_comment(comment, key=key) def _yaml_get_columnX(self, key): # type: (Any) -> Any return self.ca.items[key][0].start_mark.column def _yaml_get_column(self, key): # type: (Any) -> Any column = None sel_idx = None pre, post = key - 1, key + 1 if pre in self.ca.items: sel_idx = pre elif post in self.ca.items: sel_idx = post else: # self.ca.items is not ordered for row_idx, _k1 in enumerate(self): if row_idx >= key: break if row_idx not in self.ca.items: continue sel_idx = row_idx if sel_idx is not None: column = self._yaml_get_columnX(sel_idx) return column def _yaml_get_pre_comment(self): # type: () -> Any pre_comments = [] # type: List[Any] if self.ca.comment is None: self.ca.comment = [None, pre_comments] else: self.ca.comment[1] = pre_comments return pre_comments class CommentedOrderedMap(CommentedMap): __slots__ = (Comment.attrib,) class CommentedSet(MutableSet, CommentedBase): # type: ignore # NOQA __slots__ = Comment.attrib, "odict" def __init__(self, values=None): # type: (Any) -> None self.odict = ordereddict() MutableSet.__init__(self) if values is not None: self |= values # type: ignore def _yaml_add_comment(self, comment, key=NoComment, value=NoComment): # type: (Any, Optional[Any], 
Optional[Any]) -> None """values is set to key to indicate a value attachment of comment""" if key is not NoComment: self.yaml_key_comment_extend(key, comment) return if value is not NoComment: self.yaml_value_comment_extend(value, comment) else: self.ca.comment = comment def _yaml_add_eol_comment(self, comment, key): # type: (Any, Any) -> None """add on the value line, with value specified by the key""" self._yaml_add_comment(comment, value=key) def add(self, value): # type: (Any) -> None """Add an element.""" self.odict[value] = None def discard(self, value): # type: (Any) -> None """Remove an element. Do not raise an exception if absent.""" del self.odict[value] def __contains__(self, x): # type: (Any) -> Any return x in self.odict def __iter__(self): # type: () -> Any for x in self.odict: yield x def __len__(self): # type: () -> int return len(self.odict) def __repr__(self): # type: () -> str return "set({0!r})".format(self.odict.keys()) class TaggedScalar(CommentedBase): # the value and style attributes are set during roundtrip construction def __init__(self, value=None, style=None, tag=None): # type: (Any, Any, Any) -> None self.value = value self.style = style if tag is not None: self.yaml_set_tag(tag) def __str__(self): # type: () -> Any return self.value def dump_comments(d, name="", sep=".", out=sys.stdout): # type: (Any, str, str, Any) -> None """ recursively dump comments, all but the toplevel preceded by the path in dotted form x.0.a """ if isinstance(d, dict) and hasattr(d, "ca"): if name: sys.stdout.write("{}\n".format(name)) out.write("{}\n".format(d.ca)) # type: ignore for k in d: dump_comments(d[k], name=(name + sep + k) if name else k, sep=sep, out=out) elif isinstance(d, list) and hasattr(d, "ca"): if name: sys.stdout.write("{}\n".format(name)) out.write("{}\n".format(d.ca)) # type: ignore for idx, k in enumerate(d): dump_comments( k, name=(name + sep + str(idx)) if name else str(idx), sep=sep, out=out ) 
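The `CommentedSet` above keeps its members as the keys of an `ordereddict` (with `None` values), which is what gives it set semantics plus stable insertion order. The same technique can be sketched with only the standard library — the names here (`OrderedSet`, `_od`) are illustrative, not part of the vendored API, and `discard` is written non-raising per the `MutableSet` contract rather than with the bare `del` used above:

```python
from collections.abc import MutableSet


class OrderedSet(MutableSet):
    """Insertion-ordered set backed by a dict, mirroring how
    CommentedSet stores its members as ordereddict keys."""

    def __init__(self, values=None):
        self._od = {}  # members stored as keys; the values are ignored
        if values is not None:
            self |= values  # MutableSet.__ior__ calls self.add per element

    def add(self, value):
        # re-adding an existing member keeps its original position
        self._od[value] = None

    def discard(self, value):
        # non-raising removal, unlike the bare `del` in CommentedSet
        self._od.pop(value, None)

    def __contains__(self, x):
        return x in self._od

    def __iter__(self):
        return iter(self._od)

    def __len__(self):
        return len(self._od)


s = OrderedSet(["b", "a", "b", "c"])
s.add("a")
print(list(s))  # → ['b', 'a', 'c']  (duplicates collapsed, order kept)
```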
strictyaml-1.6.1/strictyaml/ruamel/compat.py000066400000000000000000000212271415074265300212420ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function # partially from package six by Benjamin Peterson import sys import os import types import traceback from abc import abstractmethod # fmt: off if False: # MYPY from typing import Any, Dict, Optional, List, Union, BinaryIO, IO, Text, Tuple # NOQA from typing import Optional # NOQA # fmt: on _DEFAULT_YAML_VERSION = (1, 2) try: from ruamel.ordereddict import ordereddict except: # NOQA try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict # type: ignore # to get the right name import ... as ordereddict doesn't do that class ordereddict(OrderedDict): # type: ignore if not hasattr(OrderedDict, "insert"): def insert(self, pos, key, value): # type: (int, Any, Any) -> None if pos >= len(self): self[key] = value return od = ordereddict() od.update(self) for k in od: del self[k] for index, old_key in enumerate(od): if pos == index: self[key] = value self[old_key] = od[old_key] PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 if PY3: def utf8(s): # type: (str) -> str return s def to_str(s): # type: (str) -> str return s def to_unicode(s): # type: (str) -> str return s else: if False: unicode = str def utf8(s): # type: (unicode) -> str return s.encode("utf-8") def to_str(s): # type: (str) -> str return str(s) def to_unicode(s): # type: (str) -> unicode return unicode(s) # NOQA if PY3: string_types = str integer_types = int class_types = type text_type = str binary_type = bytes MAXSIZE = sys.maxsize unichr = chr import io StringIO = io.StringIO BytesIO = io.BytesIO # have unlimited precision no_limit_int = int from collections.abc import ( Hashable, MutableSequence, MutableMapping, Mapping, ) # NOQA else: string_types = basestring # NOQA integer_types = (int, long) # NOQA class_types = (type, types.ClassType) text_type = unicode # NOQA binary_type = 
str # to allow importing unichr = unichr from StringIO import StringIO as _StringIO StringIO = _StringIO import cStringIO BytesIO = cStringIO.StringIO # have unlimited precision no_limit_int = long # NOQA not available on Python 3 from collections import Hashable, MutableSequence, MutableMapping, Mapping # NOQA if False: # MYPY # StreamType = Union[BinaryIO, IO[str], IO[unicode], StringIO] # StreamType = Union[BinaryIO, IO[str], StringIO] # type: ignore StreamType = Any StreamTextType = StreamType # Union[Text, StreamType] VersionType = Union[List[int], str, Tuple[int, int]] if PY3: builtins_module = "builtins" else: builtins_module = "__builtin__" UNICODE_SIZE = 4 if sys.maxunicode > 65535 else 2 def with_metaclass(meta, *bases): # type: (Any, Any) -> Any """Create a base class with a metaclass.""" return meta("NewBase", bases, {}) DBG_TOKEN = 1 DBG_EVENT = 2 DBG_NODE = 4 _debug = None # type: Optional[int] if "RUAMELDEBUG" in os.environ: _debugx = os.environ.get("RUAMELDEBUG") if _debugx is None: _debug = 0 else: _debug = int(_debugx) if bool(_debug): class ObjectCounter(object): def __init__(self): # type: () -> None self.map = {} # type: Dict[Any, Any] def __call__(self, k): # type: (Any) -> None self.map[k] = self.map.get(k, 0) + 1 def dump(self): # type: () -> None for k in sorted(self.map): sys.stdout.write("{} -> {}".format(k, self.map[k])) object_counter = ObjectCounter() # used from yaml util when testing def dbg(val=None): # type: (Any) -> Any global _debug if _debug is None: # set to true or false _debugx = os.environ.get("YAMLDEBUG") if _debugx is None: _debug = 0 else: _debug = int(_debugx) if val is None: return _debug return _debug & val class Nprint(object): def __init__(self, file_name=None): # type: (Any) -> None self._max_print = None # type: Any self._count = None # type: Any self._file_name = file_name def __call__(self, *args, **kw): # type: (Any, Any) -> None if not bool(_debug): return out = sys.stdout if self._file_name is None else 
open(self._file_name, "a") dbgprint = print # to fool checking for print statements by dv utility kw1 = kw.copy() kw1["file"] = out dbgprint(*args, **kw1) out.flush() if self._max_print is not None: if self._count is None: self._count = self._max_print self._count -= 1 if self._count == 0: dbgprint("forced exit\n") traceback.print_stack() out.flush() sys.exit(0) if self._file_name: out.close() def set_max_print(self, i): # type: (int) -> None self._max_print = i self._count = None nprint = Nprint() nprintf = Nprint("/var/tmp/strictyaml.ruamel.log") # char checkers following production rules def check_namespace_char(ch): # type: (Any) -> bool if u"\x21" <= ch <= u"\x7E": # ! to ~ return True if u"\xA0" <= ch <= u"\uD7FF": return True if (u"\uE000" <= ch <= u"\uFFFD") and ch != u"\uFEFF": # excl. byte order mark return True if u"\U00010000" <= ch <= u"\U0010FFFF": return True return False def check_anchorname_char(ch): # type: (Any) -> bool if ch in u",[]{}": return False return check_namespace_char(ch) def version_tnf(t1, t2=None): # type: (Any, Any) -> Any """ return True if strictyaml.ruamel version_info < t1, None if t2 is specified and bigger else False """ from strictyaml.ruamel import version_info # NOQA if version_info < t1: return True if t2 is not None and version_info < t2: return None return False class MutableSliceableSequence(MutableSequence): # type: ignore __slots__ = () def __getitem__(self, index): # type: (Any) -> Any if not isinstance(index, slice): return self.__getsingleitem__(index) return type(self)([self[i] for i in range(*index.indices(len(self)))]) # type: ignore def __setitem__(self, index, value): # type: (Any, Any) -> None if not isinstance(index, slice): return self.__setsingleitem__(index, value) assert iter(value) # nprint(index.start, index.stop, index.step, index.indices(len(self))) if index.step is None: del self[index.start : index.stop] for elem in reversed(value): self.insert(0 if index.start is None else index.start, elem) 
        else:
            range_parms = index.indices(len(self))
            nr_assigned_items = (range_parms[1] - range_parms[0] - 1) // range_parms[
                2
            ] + 1
            # need to test before changing, in case TypeError is caught
            if nr_assigned_items < len(value):
                raise TypeError(
                    "too many elements in value {} < {}".format(
                        nr_assigned_items, len(value)
                    )
                )
            elif nr_assigned_items > len(value):
                raise TypeError(
                    "not enough elements in value {} > {}".format(
                        nr_assigned_items, len(value)
                    )
                )
            for idx, i in enumerate(range(*range_parms)):
                self[i] = value[idx]

    def __delitem__(self, index):
        # type: (Any) -> None
        if not isinstance(index, slice):
            return self.__delsingleitem__(index)
        # nprint(index.start, index.stop, index.step, index.indices(len(self)))
        for i in reversed(range(*index.indices(len(self)))):
            del self[i]

    @abstractmethod
    def __getsingleitem__(self, index):
        # type: (Any) -> Any
        raise IndexError

    @abstractmethod
    def __setsingleitem__(self, index, value):
        # type: (Any, Any) -> None
        raise IndexError

    @abstractmethod
    def __delsingleitem__(self, index):
        # type: (Any) -> None
        raise IndexError

strictyaml-1.6.1/strictyaml/ruamel/composer.py
# coding: utf-8

from __future__ import absolute_import, print_function

import warnings

from strictyaml.ruamel.error import MarkedYAMLError, ReusedAnchorWarning
from strictyaml.ruamel.compat import utf8, nprint, nprintf  # NOQA
from strictyaml.ruamel.events import (
    StreamStartEvent,
    StreamEndEvent,
    MappingStartEvent,
    MappingEndEvent,
    SequenceStartEvent,
    SequenceEndEvent,
    AliasEvent,
    ScalarEvent,
)
from strictyaml.ruamel.nodes import MappingNode, ScalarNode, SequenceNode

if False:  # MYPY
    from typing import Any, Dict, Optional, List  # NOQA

__all__ = ["Composer", "ComposerError"]


class ComposerError(MarkedYAMLError):
    pass


class Composer(object):
    def __init__(self, loader=None):
        # type: (Any) -> None
        self.loader = loader
        if self.loader is not None and getattr(self.loader, "_composer", None) is None:
self.loader._composer = self self.anchors = {} # type: Dict[Any, Any] @property def parser(self): # type: () -> Any if hasattr(self.loader, "typ"): self.loader.parser return self.loader._parser @property def resolver(self): # type: () -> Any # assert self.loader._resolver is not None if hasattr(self.loader, "typ"): self.loader.resolver return self.loader._resolver def check_node(self): # type: () -> Any # Drop the STREAM-START event. if self.parser.check_event(StreamStartEvent): self.parser.get_event() # If there are more documents available? return not self.parser.check_event(StreamEndEvent) def get_node(self): # type: () -> Any # Get the root node of the next document. if not self.parser.check_event(StreamEndEvent): return self.compose_document() def get_single_node(self): # type: () -> Any # Drop the STREAM-START event. self.parser.get_event() # Compose a document if the stream is not empty. document = None # type: Any if not self.parser.check_event(StreamEndEvent): document = self.compose_document() # Ensure that the stream contains no more documents. if not self.parser.check_event(StreamEndEvent): event = self.parser.get_event() raise ComposerError( "expected a single document in the stream", document.start_mark, "but found another document", event.start_mark, ) # Drop the STREAM-END event. self.parser.get_event() return document def compose_document(self): # type: (Any) -> Any # Drop the DOCUMENT-START event. self.parser.get_event() # Compose the root node. node = self.compose_node(None, None) # Drop the DOCUMENT-END event. 
self.parser.get_event() self.anchors = {} return node def compose_node(self, parent, index): # type: (Any, Any) -> Any if self.parser.check_event(AliasEvent): event = self.parser.get_event() alias = event.anchor if alias not in self.anchors: raise ComposerError( None, None, "found undefined alias %r" % utf8(alias), event.start_mark, ) return self.anchors[alias] event = self.parser.peek_event() anchor = event.anchor if anchor is not None: # have an anchor if anchor in self.anchors: # raise ComposerError( # "found duplicate anchor %r; first occurrence" # % utf8(anchor), self.anchors[anchor].start_mark, # "second occurrence", event.start_mark) ws = ( "\nfound duplicate anchor {!r}\nfirst occurrence {}\nsecond occurrence " "{}".format( (anchor), self.anchors[anchor].start_mark, event.start_mark ) ) warnings.warn(ws, ReusedAnchorWarning) self.resolver.descend_resolver(parent, index) if self.parser.check_event(ScalarEvent): node = self.compose_scalar_node(anchor) elif self.parser.check_event(SequenceStartEvent): node = self.compose_sequence_node(anchor) elif self.parser.check_event(MappingStartEvent): node = self.compose_mapping_node(anchor) self.resolver.ascend_resolver() return node def compose_scalar_node(self, anchor): # type: (Any) -> Any event = self.parser.get_event() tag = event.tag if tag is None or tag == u"!": tag = self.resolver.resolve(ScalarNode, event.value, event.implicit) node = ScalarNode( tag, event.value, event.start_mark, event.end_mark, style=event.style, comment=event.comment, anchor=anchor, ) if anchor is not None: self.anchors[anchor] = node return node def compose_sequence_node(self, anchor): # type: (Any) -> Any start_event = self.parser.get_event() tag = start_event.tag if tag is None or tag == u"!": tag = self.resolver.resolve(SequenceNode, None, start_event.implicit) node = SequenceNode( tag, [], start_event.start_mark, None, flow_style=start_event.flow_style, comment=start_event.comment, anchor=anchor, ) if anchor is not None: 
            self.anchors[anchor] = node
        index = 0
        while not self.parser.check_event(SequenceEndEvent):
            node.value.append(self.compose_node(node, index))
            index += 1
        end_event = self.parser.get_event()
        if node.flow_style is True and end_event.comment is not None:
            if node.comment is not None:
                nprint(
                    "Warning: unexpected end_event comment in sequence "
                    "node {}".format(node.flow_style)
                )
            node.comment = end_event.comment
        node.end_mark = end_event.end_mark
        self.check_end_doc_comment(end_event, node)
        return node

    def compose_mapping_node(self, anchor):
        # type: (Any) -> Any
        start_event = self.parser.get_event()
        tag = start_event.tag
        if tag is None or tag == u"!":
            tag = self.resolver.resolve(MappingNode, None, start_event.implicit)
        node = MappingNode(
            tag,
            [],
            start_event.start_mark,
            None,
            flow_style=start_event.flow_style,
            comment=start_event.comment,
            anchor=anchor,
        )
        if anchor is not None:
            self.anchors[anchor] = node
        while not self.parser.check_event(MappingEndEvent):
            # key_event = self.parser.peek_event()
            item_key = self.compose_node(node, None)
            # if item_key in node.value:
            #     raise ComposerError("while composing a mapping",
            #         start_event.start_mark,
            #         "found duplicate key", key_event.start_mark)
            item_value = self.compose_node(node, item_key)
            # node.value[item_key] = item_value
            node.value.append((item_key, item_value))
        end_event = self.parser.get_event()
        if node.flow_style is True and end_event.comment is not None:
            node.comment = end_event.comment
        node.end_mark = end_event.end_mark
        self.check_end_doc_comment(end_event, node)
        return node

    def check_end_doc_comment(self, end_event, node):
        # type: (Any, Any) -> None
        if end_event.comment and end_event.comment[1]:
            # pre comments on an end_event, no following to move to
            if node.comment is None:
                node.comment = [None, None]
            assert not isinstance(node, ScalarEvent)
            # this is a post comment on a mapping node, add as third element
            # in the list
            node.comment.append(end_event.comment[1])
            end_event.comment[1] = None
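`Composer.compose_node` resolves every `*alias` through a single `anchors` dict that is filled in as `&anchor`ed nodes are composed: an alias to an anchor not yet seen raises `ComposerError`, while a re-used anchor only triggers a `ReusedAnchorWarning` and the later definition wins. A simplified stand-alone sketch of that bookkeeping (the function names `resolve_alias`/`register_anchor` are illustrative, not part of the vendored class):

```python
import warnings


class ComposerError(Exception):
    """Stands in for strictyaml.ruamel's MarkedYAMLError subclass."""


def resolve_alias(anchors, name):
    # mirror of the AliasEvent branch in compose_node: an alias must
    # refer to an anchor already defined earlier in the document
    if name not in anchors:
        raise ComposerError("found undefined alias %r" % name)
    return anchors[name]


def register_anchor(anchors, name, node):
    # mirror of the duplicate-anchor branch: re-use is only warned
    # about, and the later definition overwrites the earlier one
    if name in anchors:
        warnings.warn("found duplicate anchor %r" % name)
    anchors[name] = node


anchors = {}
register_anchor(anchors, "base", {"retries": 3})
print(resolve_alias(anchors, "base"))  # → {'retries': 3}
```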
strictyaml-1.6.1/strictyaml/ruamel/configobjwalker.py
# coding: utf-8

import warnings

from strictyaml.ruamel.util import configobj_walker as new_configobj_walker

if False:  # MYPY
    from typing import Any  # NOQA


def configobj_walker(cfg):
    # type: (Any) -> Any
    warnings.warn(
        "configobj_walker has moved to strictyaml.ruamel.util, please update your code"
    )
    return new_configobj_walker(cfg)

strictyaml-1.6.1/strictyaml/ruamel/constructor.py
# coding: utf-8

from __future__ import print_function, absolute_import, division

import datetime
import base64
import binascii
import re
import sys
import types
import warnings

# fmt: off
from strictyaml.ruamel.error import (MarkedYAMLError, MarkedYAMLFutureWarning,
                                     MantissaNoDotYAML1_1Warning)
from strictyaml.ruamel.nodes import *  # NOQA
from strictyaml.ruamel.nodes import (SequenceNode, MappingNode, ScalarNode)
from strictyaml.ruamel.compat import (utf8, builtins_module, to_str, PY2, PY3,  # NOQA
                                      text_type, nprint, nprintf, version_tnf)
from strictyaml.ruamel.compat import ordereddict, Hashable, MutableSequence  # type: ignore
from strictyaml.ruamel.compat import MutableMapping  # type: ignore
from strictyaml.ruamel.comments import *  # NOQA
from strictyaml.ruamel.comments import (CommentedMap, CommentedOrderedMap, CommentedSet,
                                        CommentedKeySeq, CommentedSeq, TaggedScalar,
                                        CommentedKeyMap)
from strictyaml.ruamel.scalarstring import (SingleQuotedScalarString, DoubleQuotedScalarString,
                                            LiteralScalarString, FoldedScalarString,
                                            PlainScalarString, ScalarString,)
from strictyaml.ruamel.scalarint import ScalarInt, BinaryInt, OctalInt, HexInt, HexCapsInt
from strictyaml.ruamel.scalarfloat import ScalarFloat
from strictyaml.ruamel.scalarbool import ScalarBoolean
from strictyaml.ruamel.timestamp import TimeStamp
from strictyaml.ruamel.util import RegExp

if False:  # MYPY
    from typing import Any,
Dict, List, Set, Generator, Union, Optional # NOQA __all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor', 'ConstructorError', 'RoundTripConstructor'] # fmt: on class ConstructorError(MarkedYAMLError): pass class DuplicateKeyFutureWarning(MarkedYAMLFutureWarning): pass class DuplicateKeyError(MarkedYAMLFutureWarning): pass class BaseConstructor(object): yaml_constructors = {} # type: Dict[Any, Any] yaml_multi_constructors = {} # type: Dict[Any, Any] def __init__(self, preserve_quotes=None, loader=None): # type: (Optional[bool], Any) -> None self.loader = loader if ( self.loader is not None and getattr(self.loader, "_constructor", None) is None ): self.loader._constructor = self self.loader = loader self.yaml_base_dict_type = dict self.yaml_base_list_type = list self.constructed_objects = {} # type: Dict[Any, Any] self.recursive_objects = {} # type: Dict[Any, Any] self.state_generators = [] # type: List[Any] self.deep_construct = False self._preserve_quotes = preserve_quotes self.allow_duplicate_keys = version_tnf((0, 15, 1), (0, 16)) @property def composer(self): # type: () -> Any if hasattr(self.loader, "typ"): return self.loader.composer try: return self.loader._composer except AttributeError: sys.stdout.write("slt {}\n".format(type(self))) sys.stdout.write("slc {}\n".format(self.loader._composer)) sys.stdout.write("{}\n".format(dir(self))) raise @property def resolver(self): # type: () -> Any if hasattr(self.loader, "typ"): return self.loader.resolver return self.loader._resolver def check_data(self): # type: () -> Any # If there are more documents available? return self.composer.check_node() def get_data(self): # type: () -> Any # Construct and return the next document. if self.composer.check_node(): return self.construct_document(self.composer.get_node()) def get_single_data(self): # type: () -> Any # Ensure that the stream contains a single document and construct it. 
node = self.composer.get_single_node() if node is not None: return self.construct_document(node) return None def construct_document(self, node): # type: (Any) -> Any data = self.construct_object(node) while bool(self.state_generators): state_generators = self.state_generators self.state_generators = [] for generator in state_generators: for _dummy in generator: pass self.constructed_objects = {} self.recursive_objects = {} self.deep_construct = False return data def construct_object(self, node, deep=False): # type: (Any, bool) -> Any """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ if node in self.constructed_objects: return self.constructed_objects[node] if deep: old_deep = self.deep_construct self.deep_construct = True if node in self.recursive_objects: return self.recursive_objects[node] # raise ConstructorError( # None, None, 'found unconstructable recursive node', node.start_mark # ) self.recursive_objects[node] = None data = self.construct_non_recursive_object(node) self.constructed_objects[node] = data del self.recursive_objects[node] if deep: self.deep_construct = old_deep return data def construct_non_recursive_object(self, node, tag=None): # type: (Any, Optional[str]) -> Any constructor = None # type: Any tag_suffix = None if tag is None: tag = node.tag if tag in self.yaml_constructors: constructor = self.yaml_constructors[tag] else: for tag_prefix in self.yaml_multi_constructors: if tag.startswith(tag_prefix): tag_suffix = tag[len(tag_prefix) :] constructor = self.yaml_multi_constructors[tag_prefix] break else: if None in self.yaml_multi_constructors: tag_suffix = tag constructor = self.yaml_multi_constructors[None] elif None in self.yaml_constructors: constructor = self.yaml_constructors[None] elif isinstance(node, ScalarNode): constructor = self.__class__.construct_scalar elif isinstance(node, SequenceNode): constructor = self.__class__.construct_sequence elif 
isinstance(node, MappingNode): constructor = self.__class__.construct_mapping if tag_suffix is None: data = constructor(self, node) else: data = constructor(self, tag_suffix, node) if isinstance(data, types.GeneratorType): generator = data data = next(generator) if self.deep_construct: for _dummy in generator: pass else: self.state_generators.append(generator) return data def construct_scalar(self, node): # type: (Any) -> Any if not isinstance(node, ScalarNode): raise ConstructorError( None, None, "expected a scalar node, but found %s" % node.id, node.start_mark, ) return node.value def construct_sequence(self, node, deep=False): # type: (Any, bool) -> Any """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ if not isinstance(node, SequenceNode): raise ConstructorError( None, None, "expected a sequence node, but found %s" % node.id, node.start_mark, ) return [self.construct_object(child, deep=deep) for child in node.value] def construct_mapping(self, node, deep=False): # type: (Any, bool) -> Any """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ if not isinstance(node, MappingNode): raise ConstructorError( None, None, "expected a mapping node, but found %s" % node.id, node.start_mark, ) total_mapping = self.yaml_base_dict_type() if getattr(node, "merge", None) is not None: todo = [(node.merge, False), (node.value, False)] else: todo = [(node.value, True)] for values, check in todo: mapping = self.yaml_base_dict_type() # type: Dict[Any, Any] for key_node, value_node in values: # keys can be list -> deep key = self.construct_object(key_node, deep=True) # lists are not hashable, but tuples are if not isinstance(key, Hashable): if isinstance(key, list): key = tuple(key) if PY2: try: hash(key) except TypeError as exc: raise ConstructorError( "while constructing a mapping", node.start_mark, "found 
unacceptable key (%s)" % exc, key_node.start_mark, ) else: if not isinstance(key, Hashable): raise ConstructorError( "while constructing a mapping", node.start_mark, "found unhashable key", key_node.start_mark, ) value = self.construct_object(value_node, deep=deep) if check: if self.check_mapping_key(node, key_node, mapping, key, value): mapping[key] = value else: mapping[key] = value total_mapping.update(mapping) return total_mapping def check_mapping_key(self, node, key_node, mapping, key, value): # type: (Any, Any, Any, Any, Any) -> bool """return True if key is unique""" if key in mapping: if not self.allow_duplicate_keys: mk = mapping.get(key) if PY2: if isinstance(key, unicode): key = key.encode("utf-8") if isinstance(value, unicode): value = value.encode("utf-8") if isinstance(mk, unicode): mk = mk.encode("utf-8") args = [ "while constructing a mapping", node.start_mark, 'found duplicate key "{}" with value "{}" ' '(original value: "{}")'.format(key, value, mk), key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. """, ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args)) else: raise DuplicateKeyError(*args) return False return True def check_set_key(self, node, key_node, setting, key): # type: (Any, Any, Any, Any, Any) -> None if key in setting: if not self.allow_duplicate_keys: if PY2: if isinstance(key, unicode): key = key.encode("utf-8") args = [ "while constructing a set", node.start_mark, 'found duplicate key "{}"'.format(key), key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. 
""", ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args)) else: raise DuplicateKeyError(*args) def construct_pairs(self, node, deep=False): # type: (Any, bool) -> Any if not isinstance(node, MappingNode): raise ConstructorError( None, None, "expected a mapping node, but found %s" % node.id, node.start_mark, ) pairs = [] for key_node, value_node in node.value: key = self.construct_object(key_node, deep=deep) value = self.construct_object(value_node, deep=deep) pairs.append((key, value)) return pairs @classmethod def add_constructor(cls, tag, constructor): # type: (Any, Any) -> None if "yaml_constructors" not in cls.__dict__: cls.yaml_constructors = cls.yaml_constructors.copy() cls.yaml_constructors[tag] = constructor @classmethod def add_multi_constructor(cls, tag_prefix, multi_constructor): # type: (Any, Any) -> None if "yaml_multi_constructors" not in cls.__dict__: cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy() cls.yaml_multi_constructors[tag_prefix] = multi_constructor class SafeConstructor(BaseConstructor): def construct_scalar(self, node): # type: (Any) -> Any if isinstance(node, MappingNode): for key_node, value_node in node.value: if key_node.tag == u"tag:yaml.org,2002:value": return self.construct_scalar(value_node) return BaseConstructor.construct_scalar(self, node) def flatten_mapping(self, node): # type: (Any) -> Any """ This implements the merge key feature http://yaml.org/type/merge.html by inserting keys from the merge dict/list of dicts if not yet available in this node """ merge = [] # type: List[Any] index = 0 while index < len(node.value): key_node, value_node = node.value[index] if key_node.tag == u"tag:yaml.org,2002:merge": if merge: # double << key if self.allow_duplicate_keys: del node.value[index] index += 1 continue args = [ "while constructing a mapping", node.start_mark, 'found duplicate key "{}"'.format(key_node.value), key_node.start_mark, """ To suppress this check see: 
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. """, ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args)) else: raise DuplicateKeyError(*args) del node.value[index] if isinstance(value_node, MappingNode): self.flatten_mapping(value_node) merge.extend(value_node.value) elif isinstance(value_node, SequenceNode): submerge = [] for subnode in value_node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( "while constructing a mapping", node.start_mark, "expected a mapping for merging, but found %s" % subnode.id, subnode.start_mark, ) self.flatten_mapping(subnode) submerge.append(subnode.value) submerge.reverse() for value in submerge: merge.extend(value) else: raise ConstructorError( "while constructing a mapping", node.start_mark, "expected a mapping or list of mappings for merging, " "but found %s" % value_node.id, value_node.start_mark, ) elif key_node.tag == u"tag:yaml.org,2002:value": key_node.tag = u"tag:yaml.org,2002:str" index += 1 else: index += 1 if bool(merge): node.merge = ( merge # separate merge keys to be able to update without duplicate ) node.value = merge + node.value def construct_mapping(self, node, deep=False): # type: (Any, bool) -> Any """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ if isinstance(node, MappingNode): self.flatten_mapping(node) return BaseConstructor.construct_mapping(self, node, deep=deep) def construct_yaml_null(self, node): # type: (Any) -> Any self.construct_scalar(node) return None # YAML 1.2 spec doesn't mention yes/no etc any more, 1.1 does bool_values = { u"yes": True, u"no": False, u"y": True, u"n": False, u"true": True, u"false": False, u"on": True, u"off": False, } def construct_yaml_bool(self, node): # type: (Any) -> bool value = 
self.construct_scalar(node) return self.bool_values[value.lower()] def construct_yaml_int(self, node): # type: (Any) -> int value_s = to_str(self.construct_scalar(node)) value_s = value_s.replace("_", "") sign = +1 if value_s[0] == "-": sign = -1 if value_s[0] in "+-": value_s = value_s[1:] if value_s == "0": return 0 elif value_s.startswith("0b"): return sign * int(value_s[2:], 2) elif value_s.startswith("0x"): return sign * int(value_s[2:], 16) elif value_s.startswith("0o"): return sign * int(value_s[2:], 8) elif self.resolver.processing_version == (1, 1) and value_s[0] == "0": return sign * int(value_s, 8) elif self.resolver.processing_version == (1, 1) and ":" in value_s: digits = [int(part) for part in value_s.split(":")] digits.reverse() base = 1 value = 0 for digit in digits: value += digit * base base *= 60 return sign * value else: return sign * int(value_s) inf_value = 1e300 while inf_value != inf_value * inf_value: inf_value *= inf_value nan_value = -inf_value / inf_value # Trying to make a quiet NaN (like C99). def construct_yaml_float(self, node): # type: (Any) -> float value_so = to_str(self.construct_scalar(node)) value_s = value_so.replace("_", "").lower() sign = +1 if value_s[0] == "-": sign = -1 if value_s[0] in "+-": value_s = value_s[1:] if value_s == ".inf": return sign * self.inf_value elif value_s == ".nan": return self.nan_value elif self.resolver.processing_version != (1, 2) and ":" in value_s: digits = [float(part) for part in value_s.split(":")] digits.reverse() base = 1 value = 0.0 for digit in digits: value += digit * base base *= 60 return sign * value else: if self.resolver.processing_version != (1, 2) and "e" in value_s: # value_s is lower case independent of input mantissa, exponent = value_s.split("e") if "." 
                not in mantissa:
                    warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so))
            return sign * float(value_s)

    if PY3:

        def construct_yaml_binary(self, node):
            # type: (Any) -> Any
            try:
                value = self.construct_scalar(node).encode("ascii")
            except UnicodeEncodeError as exc:
                raise ConstructorError(
                    None,
                    None,
                    "failed to convert base64 data into ascii: %s" % exc,
                    node.start_mark,
                )
            try:
                if hasattr(base64, "decodebytes"):
                    return base64.decodebytes(value)
                else:
                    return base64.decodestring(value)
            except binascii.Error as exc:
                raise ConstructorError(
                    None,
                    None,
                    "failed to decode base64 data: %s" % exc,
                    node.start_mark,
                )

    else:

        def construct_yaml_binary(self, node):
            # type: (Any) -> Any
            value = self.construct_scalar(node)
            try:
                return to_str(value).decode("base64")
            except (binascii.Error, UnicodeEncodeError) as exc:
                raise ConstructorError(
                    None,
                    None,
                    "failed to decode base64 data: %s" % exc,
                    node.start_mark,
                )

    timestamp_regexp = RegExp(
        u"""^(?P<year>[0-9][0-9][0-9][0-9])
          -(?P<month>[0-9][0-9]?)
          -(?P<day>[0-9][0-9]?)
          (?:((?P<t>[Tt])|[ \\t]+)    # explicitly not retaining extra spaces
          (?P<hour>[0-9][0-9]?)
          :(?P<minute>[0-9][0-9])
          :(?P<second>[0-9][0-9])
          (?:\\.(?P<fraction>[0-9]*))?
          (?:[ \\t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
          (?::(?P<tz_minute>[0-9][0-9]))?))?)?$""",
        re.X,
    )

    def construct_yaml_timestamp(self, node, values=None):
        # type: (Any, Any) -> Any
        if values is None:
            try:
                match = self.timestamp_regexp.match(node.value)
            except TypeError:
                match = None
            if match is None:
                raise ConstructorError(
                    None,
                    None,
                    'failed to construct timestamp from "{}"'.format(node.value),
                    node.start_mark,
                )
            values = match.groupdict()
        year = int(values["year"])
        month = int(values["month"])
        day = int(values["day"])
        if not values["hour"]:
            return datetime.date(year, month, day)
        hour = int(values["hour"])
        minute = int(values["minute"])
        second = int(values["second"])
        fraction = 0
        if values["fraction"]:
            fraction_s = values["fraction"][:6]
            while len(fraction_s) < 6:
                fraction_s += "0"
            fraction = int(fraction_s)
            if len(values["fraction"]) > 6 and int(values["fraction"][6]) > 4:
                fraction += 1
        delta = None
        if values["tz_sign"]:
            tz_hour = int(values["tz_hour"])
            minutes = values["tz_minute"]
            tz_minute = int(minutes) if minutes else 0
            delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
            if values["tz_sign"] == "-":
                delta = -delta
        # should do something else instead (or hook this up to the preceding if statement
        # in reverse
        #  if delta is None:
        #      return datetime.datetime(year, month, day, hour, minute, second, fraction)
        #  return datetime.datetime(year, month, day, hour, minute, second, fraction,
        #                           datetime.timezone.utc)
        # the above is not good enough though, should provide tzinfo.
        # In Python3 that is easily
        # doable drop that kind of support for Python2 as it has no native tzinfo
        data = datetime.datetime(year, month, day, hour, minute, second, fraction)
        if delta:
            data -= delta
        return data

    def construct_yaml_omap(self, node):
        # type: (Any) -> Any
        # Note: we do now check for duplicate keys
        omap = ordereddict()
        yield omap
        if not isinstance(node, SequenceNode):
            raise ConstructorError(
                "while constructing an ordered map",
                node.start_mark,
                "expected a sequence, but found %s" % node.id,
                node.start_mark,
            )
        for subnode in node.value:
            if not isinstance(subnode, MappingNode):
                raise ConstructorError(
                    "while constructing an ordered map",
                    node.start_mark,
                    "expected a mapping of length 1, but found %s" % subnode.id,
                    subnode.start_mark,
                )
            if len(subnode.value) != 1:
                raise ConstructorError(
                    "while constructing an ordered map",
                    node.start_mark,
                    "expected a single mapping item, but found %d items"
                    % len(subnode.value),
                    subnode.start_mark,
                )
            key_node, value_node = subnode.value[0]
            key = self.construct_object(key_node)
            assert key not in omap
            value = self.construct_object(value_node)
            omap[key] = value

    def construct_yaml_pairs(self, node):
        # type: (Any) -> Any
        # Note: the same code as `construct_yaml_omap`.
pairs = [] # type: List[Any] yield pairs if not isinstance(node, SequenceNode): raise ConstructorError( "while constructing pairs", node.start_mark, "expected a sequence, but found %s" % node.id, node.start_mark, ) for subnode in node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( "while constructing pairs", node.start_mark, "expected a mapping of length 1, but found %s" % subnode.id, subnode.start_mark, ) if len(subnode.value) != 1: raise ConstructorError( "while constructing pairs", node.start_mark, "expected a single mapping item, but found %d items" % len(subnode.value), subnode.start_mark, ) key_node, value_node = subnode.value[0] key = self.construct_object(key_node) value = self.construct_object(value_node) pairs.append((key, value)) def construct_yaml_set(self, node): # type: (Any) -> Any data = set() # type: Set[Any] yield data value = self.construct_mapping(node) data.update(value) def construct_yaml_str(self, node): # type: (Any) -> Any value = self.construct_scalar(node) if PY3: return value try: return value.encode("ascii") except UnicodeEncodeError: return value def construct_yaml_seq(self, node): # type: (Any) -> Any data = self.yaml_base_list_type() # type: List[Any] yield data data.extend(self.construct_sequence(node)) def construct_yaml_map(self, node): # type: (Any) -> Any data = self.yaml_base_dict_type() # type: Dict[Any, Any] yield data value = self.construct_mapping(node) data.update(value) def construct_yaml_object(self, node, cls): # type: (Any, Any) -> Any data = cls.__new__(cls) yield data if hasattr(data, "__setstate__"): state = self.construct_mapping(node, deep=True) data.__setstate__(state) else: state = self.construct_mapping(node) data.__dict__.update(state) def construct_undefined(self, node): # type: (Any) -> None raise ConstructorError( None, None, "could not determine a constructor for the tag %r" % utf8(node.tag), node.start_mark, ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:null", 
SafeConstructor.construct_yaml_null ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:bool", SafeConstructor.construct_yaml_bool ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:int", SafeConstructor.construct_yaml_int ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:float", SafeConstructor.construct_yaml_float ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:binary", SafeConstructor.construct_yaml_binary ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:timestamp", SafeConstructor.construct_yaml_timestamp ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:omap", SafeConstructor.construct_yaml_omap ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:pairs", SafeConstructor.construct_yaml_pairs ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:set", SafeConstructor.construct_yaml_set ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:str", SafeConstructor.construct_yaml_str ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:seq", SafeConstructor.construct_yaml_seq ) SafeConstructor.add_constructor( u"tag:yaml.org,2002:map", SafeConstructor.construct_yaml_map ) SafeConstructor.add_constructor(None, SafeConstructor.construct_undefined) if PY2: class classobj: pass class Constructor(SafeConstructor): def construct_python_str(self, node): # type: (Any) -> Any return utf8(self.construct_scalar(node)) def construct_python_unicode(self, node): # type: (Any) -> Any return self.construct_scalar(node) if PY3: def construct_python_bytes(self, node): # type: (Any) -> Any try: value = self.construct_scalar(node).encode("ascii") except UnicodeEncodeError as exc: raise ConstructorError( None, None, "failed to convert base64 data into ascii: %s" % exc, node.start_mark, ) try: if hasattr(base64, "decodebytes"): return base64.decodebytes(value) else: return base64.decodestring(value) except binascii.Error as exc: raise ConstructorError( None, None, "failed to decode base64 data: %s" % exc, node.start_mark, ) def 
construct_python_long(self, node): # type: (Any) -> int val = self.construct_yaml_int(node) if PY3: return val return int(val) def construct_python_complex(self, node): # type: (Any) -> Any return complex(self.construct_scalar(node)) def construct_python_tuple(self, node): # type: (Any) -> Any return tuple(self.construct_sequence(node)) def find_python_module(self, name, mark): # type: (Any, Any) -> Any if not name: raise ConstructorError( "while constructing a Python module", mark, "expected non-empty name appended to the tag", mark, ) try: __import__(name) except ImportError as exc: raise ConstructorError( "while constructing a Python module", mark, "cannot find module %r (%s)" % (utf8(name), exc), mark, ) return sys.modules[name] def find_python_name(self, name, mark): # type: (Any, Any) -> Any if not name: raise ConstructorError( "while constructing a Python object", mark, "expected non-empty name appended to the tag", mark, ) if u"." in name: lname = name.split(".") lmodule_name = lname lobject_name = [] # type: List[Any] while len(lmodule_name) > 1: lobject_name.insert(0, lmodule_name.pop()) module_name = ".".join(lmodule_name) try: __import__(module_name) # object_name = '.'.join(object_name) break except ImportError: continue else: module_name = builtins_module lobject_name = [name] try: __import__(module_name) except ImportError as exc: raise ConstructorError( "while constructing a Python object", mark, "cannot find module %r (%s)" % (utf8(module_name), exc), mark, ) module = sys.modules[module_name] object_name = ".".join(lobject_name) obj = module while lobject_name: if not hasattr(obj, lobject_name[0]): raise ConstructorError( "while constructing a Python object", mark, "cannot find %r in the module %r" % (utf8(object_name), module.__name__), mark, ) obj = getattr(obj, lobject_name.pop(0)) return obj def construct_python_name(self, suffix, node): # type: (Any, Any) -> Any value = self.construct_scalar(node) if value: raise ConstructorError( "while 
constructing a Python name", node.start_mark, "expected the empty value, but found %r" % utf8(value), node.start_mark, ) return self.find_python_name(suffix, node.start_mark) def construct_python_module(self, suffix, node): # type: (Any, Any) -> Any value = self.construct_scalar(node) if value: raise ConstructorError( "while constructing a Python module", node.start_mark, "expected the empty value, but found %r" % utf8(value), node.start_mark, ) return self.find_python_module(suffix, node.start_mark) def make_python_instance(self, suffix, node, args=None, kwds=None, newobj=False): # type: (Any, Any, Any, Any, bool) -> Any if not args: args = [] if not kwds: kwds = {} cls = self.find_python_name(suffix, node.start_mark) if PY3: if newobj and isinstance(cls, type): return cls.__new__(cls, *args, **kwds) else: return cls(*args, **kwds) else: if newobj and isinstance(cls, type(classobj)) and not args and not kwds: instance = classobj() instance.__class__ = cls return instance elif newobj and isinstance(cls, type): return cls.__new__(cls, *args, **kwds) else: return cls(*args, **kwds) def set_python_instance_state(self, instance, state): # type: (Any, Any) -> None if hasattr(instance, "__setstate__"): instance.__setstate__(state) else: slotstate = {} # type: Dict[Any, Any] if isinstance(state, tuple) and len(state) == 2: state, slotstate = state if hasattr(instance, "__dict__"): instance.__dict__.update(state) elif state: slotstate.update(state) for key, value in slotstate.items(): setattr(instance, key, value) def construct_python_object(self, suffix, node): # type: (Any, Any) -> Any # Format: # !!python/object:module.name { ... state ... 
} instance = self.make_python_instance(suffix, node, newobj=True) self.recursive_objects[node] = instance yield instance deep = hasattr(instance, "__setstate__") state = self.construct_mapping(node, deep=deep) self.set_python_instance_state(instance, state) def construct_python_object_apply(self, suffix, node, newobj=False): # type: (Any, Any, bool) -> Any # Format: # !!python/object/apply # (or !!python/object/new) # args: [ ... arguments ... ] # kwds: { ... keywords ... } # state: ... state ... # listitems: [ ... listitems ... ] # dictitems: { ... dictitems ... } # or short format: # !!python/object/apply [ ... arguments ... ] # The difference between !!python/object/apply and !!python/object/new # is how an object is created, check make_python_instance for details. if isinstance(node, SequenceNode): args = self.construct_sequence(node, deep=True) kwds = {} # type: Dict[Any, Any] state = {} # type: Dict[Any, Any] listitems = [] # type: List[Any] dictitems = {} # type: Dict[Any, Any] else: value = self.construct_mapping(node, deep=True) args = value.get("args", []) kwds = value.get("kwds", {}) state = value.get("state", {}) listitems = value.get("listitems", []) dictitems = value.get("dictitems", {}) instance = self.make_python_instance(suffix, node, args, kwds, newobj) if bool(state): self.set_python_instance_state(instance, state) if bool(listitems): instance.extend(listitems) if bool(dictitems): for key in dictitems: instance[key] = dictitems[key] return instance def construct_python_object_new(self, suffix, node): # type: (Any, Any) -> Any return self.construct_python_object_apply(suffix, node, newobj=True) Constructor.add_constructor( u"tag:yaml.org,2002:python/none", Constructor.construct_yaml_null ) Constructor.add_constructor( u"tag:yaml.org,2002:python/bool", Constructor.construct_yaml_bool ) Constructor.add_constructor( u"tag:yaml.org,2002:python/str", Constructor.construct_python_str ) Constructor.add_constructor( u"tag:yaml.org,2002:python/unicode", 
Constructor.construct_python_unicode ) if PY3: Constructor.add_constructor( u"tag:yaml.org,2002:python/bytes", Constructor.construct_python_bytes ) Constructor.add_constructor( u"tag:yaml.org,2002:python/int", Constructor.construct_yaml_int ) Constructor.add_constructor( u"tag:yaml.org,2002:python/long", Constructor.construct_python_long ) Constructor.add_constructor( u"tag:yaml.org,2002:python/float", Constructor.construct_yaml_float ) Constructor.add_constructor( u"tag:yaml.org,2002:python/complex", Constructor.construct_python_complex ) Constructor.add_constructor( u"tag:yaml.org,2002:python/list", Constructor.construct_yaml_seq ) Constructor.add_constructor( u"tag:yaml.org,2002:python/tuple", Constructor.construct_python_tuple ) Constructor.add_constructor( u"tag:yaml.org,2002:python/dict", Constructor.construct_yaml_map ) Constructor.add_multi_constructor( u"tag:yaml.org,2002:python/name:", Constructor.construct_python_name ) Constructor.add_multi_constructor( u"tag:yaml.org,2002:python/module:", Constructor.construct_python_module ) Constructor.add_multi_constructor( u"tag:yaml.org,2002:python/object:", Constructor.construct_python_object ) Constructor.add_multi_constructor( u"tag:yaml.org,2002:python/object/apply:", Constructor.construct_python_object_apply ) Constructor.add_multi_constructor( u"tag:yaml.org,2002:python/object/new:", Constructor.construct_python_object_new ) class RoundTripConstructor(SafeConstructor): """need to store the comments on the node itself, as well as on the items """ def construct_scalar(self, node): # type: (Any) -> Any if not isinstance(node, ScalarNode): raise ConstructorError( None, None, "expected a scalar node, but found %s" % node.id, node.start_mark, ) if node.style == "|" and isinstance(node.value, text_type): lss = LiteralScalarString(node.value, anchor=node.anchor) if node.comment and node.comment[1]: lss.comment = node.comment[1][0] # type: ignore return lss if node.style == ">" and isinstance(node.value, text_type): 
fold_positions = [] # type: List[int] idx = -1 while True: idx = node.value.find("\a", idx + 1) if idx < 0: break fold_positions.append(idx - len(fold_positions)) fss = FoldedScalarString(node.value.replace("\a", ""), anchor=node.anchor) if node.comment and node.comment[1]: fss.comment = node.comment[1][0] # type: ignore if fold_positions: fss.fold_pos = fold_positions # type: ignore return fss elif bool(self._preserve_quotes) and isinstance(node.value, text_type): if node.style == "'": return SingleQuotedScalarString(node.value, anchor=node.anchor) if node.style == '"': return DoubleQuotedScalarString(node.value, anchor=node.anchor) if node.anchor: return PlainScalarString(node.value, anchor=node.anchor) return node.value def construct_yaml_int(self, node): # type: (Any) -> Any width = None # type: Any value_su = to_str(self.construct_scalar(node)) try: sx = value_su.rstrip("_") underscore = [len(sx) - sx.rindex("_") - 1, False, False] # type: Any except ValueError: underscore = None except IndexError: underscore = None value_s = value_su.replace("_", "") sign = +1 if value_s[0] == "-": sign = -1 if value_s[0] in "+-": value_s = value_s[1:] if value_s == "0": return 0 elif value_s.startswith("0b"): if self.resolver.processing_version > (1, 1) and value_s[2] == "0": width = len(value_s[2:]) if underscore is not None: underscore[1] = value_su[2] == "_" underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_" return BinaryInt( sign * int(value_s[2:], 2), width=width, underscore=underscore, anchor=node.anchor, ) elif value_s.startswith("0x"): # default to lower-case if no a-fA-F in string if self.resolver.processing_version > (1, 1) and value_s[2] == "0": width = len(value_s[2:]) hex_fun = HexInt # type: Any for ch in value_s[2:]: if ch in "ABCDEF": # first non-digit is capital hex_fun = HexCapsInt break if ch in "abcdef": break if underscore is not None: underscore[1] = value_su[2] == "_" underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_" return 
hex_fun( sign * int(value_s[2:], 16), width=width, underscore=underscore, anchor=node.anchor, ) elif value_s.startswith("0o"): if self.resolver.processing_version > (1, 1) and value_s[2] == "0": width = len(value_s[2:]) if underscore is not None: underscore[1] = value_su[2] == "_" underscore[2] = len(value_su[2:]) > 1 and value_su[-1] == "_" return OctalInt( sign * int(value_s[2:], 8), width=width, underscore=underscore, anchor=node.anchor, ) elif self.resolver.processing_version != (1, 2) and value_s[0] == "0": return sign * int(value_s, 8) elif self.resolver.processing_version != (1, 2) and ":" in value_s: digits = [int(part) for part in value_s.split(":")] digits.reverse() base = 1 value = 0 for digit in digits: value += digit * base base *= 60 return sign * value elif self.resolver.processing_version > (1, 1) and value_s[0] == "0": # not an octal, an integer with leading zero(s) if underscore is not None: # cannot have a leading underscore underscore[2] = len(value_su) > 1 and value_su[-1] == "_" return ScalarInt( sign * int(value_s), width=len(value_s), underscore=underscore ) elif underscore: # cannot have a leading underscore underscore[2] = len(value_su) > 1 and value_su[-1] == "_" return ScalarInt( sign * int(value_s), width=None, underscore=underscore, anchor=node.anchor, ) elif node.anchor: return ScalarInt(sign * int(value_s), width=None, anchor=node.anchor) else: return sign * int(value_s) def construct_yaml_float(self, node): # type: (Any) -> Any def leading_zeros(v): # type: (Any) -> int lead0 = 0 idx = 0 while idx < len(v) and v[idx] in "0.": if v[idx] == "0": lead0 += 1 idx += 1 return lead0 # underscore = None m_sign = False # type: Any value_so = to_str(self.construct_scalar(node)) value_s = value_so.replace("_", "").lower() sign = +1 if value_s[0] == "-": sign = -1 if value_s[0] in "+-": m_sign = value_s[0] value_s = value_s[1:] if value_s == ".inf": return sign * self.inf_value if value_s == ".nan": return self.nan_value if 
self.resolver.processing_version != (1, 2) and ":" in value_s: digits = [float(part) for part in value_s.split(":")] digits.reverse() base = 1 value = 0.0 for digit in digits: value += digit * base base *= 60 return sign * value if "e" in value_s: try: mantissa, exponent = value_so.split("e") exp = "e" except ValueError: mantissa, exponent = value_so.split("E") exp = "E" if self.resolver.processing_version != (1, 2): # value_s is lower case independent of input if "." not in mantissa: warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so)) lead0 = leading_zeros(mantissa) width = len(mantissa) prec = mantissa.find(".") if m_sign: width -= 1 e_width = len(exponent) e_sign = exponent[0] in "+-" # nprint('sf', width, prec, m_sign, exp, e_width, e_sign) return ScalarFloat( sign * float(value_s), width=width, prec=prec, m_sign=m_sign, m_lead0=lead0, exp=exp, e_width=e_width, e_sign=e_sign, anchor=node.anchor, ) width = len(value_so) prec = value_so.index( "." ) # you can use index, this would not be float without dot lead0 = leading_zeros(value_so) return ScalarFloat( sign * float(value_s), width=width, prec=prec, m_sign=m_sign, m_lead0=lead0, anchor=node.anchor, ) def construct_yaml_str(self, node): # type: (Any) -> Any value = self.construct_scalar(node) if isinstance(value, ScalarString): return value if PY3: return value try: return value.encode("ascii") except AttributeError: # in case you replace the node dynamically e.g. 
with a dict return value except UnicodeEncodeError: return value def construct_rt_sequence(self, node, seqtyp, deep=False): # type: (Any, Any, bool) -> Any if not isinstance(node, SequenceNode): raise ConstructorError( None, None, "expected a sequence node, but found %s" % node.id, node.start_mark, ) ret_val = [] if node.comment: seqtyp._yaml_add_comment(node.comment[:2]) if len(node.comment) > 2: seqtyp.yaml_end_comment_extend(node.comment[2], clear=True) if node.anchor: from strictyaml.ruamel.serializer import templated_id if not templated_id(node.anchor): seqtyp.yaml_set_anchor(node.anchor) for idx, child in enumerate(node.value): if child.comment: seqtyp._yaml_add_comment(child.comment, key=idx) child.comment = None # if moved to sequence remove from child ret_val.append(self.construct_object(child, deep=deep)) seqtyp._yaml_set_idx_line_col( idx, [child.start_mark.line, child.start_mark.column] ) return ret_val def flatten_mapping(self, node): # type: (Any) -> Any """ This implements the merge key feature http://yaml.org/type/merge.html by inserting keys from the merge dict/list of dicts if not yet available in this node """ def constructed(value_node): # type: (Any) -> Any # If the contents of a merge are defined within the # merge marker, then they won't have been constructed # yet. But if they were already constructed, we need to use # the existing object. 
if value_node in self.constructed_objects: value = self.constructed_objects[value_node] else: value = self.construct_object(value_node, deep=False) return value # merge = [] merge_map_list = [] # type: List[Any] index = 0 while index < len(node.value): key_node, value_node = node.value[index] if key_node.tag == u"tag:yaml.org,2002:merge": if merge_map_list: # double << key if self.allow_duplicate_keys: del node.value[index] index += 1 continue args = [ "while constructing a mapping", node.start_mark, 'found duplicate key "{}"'.format(key_node.value), key_node.start_mark, """ To suppress this check see: http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys """, """\ Duplicate keys will become an error in future releases, and are errors by default when using the new API. """, ] if self.allow_duplicate_keys is None: warnings.warn(DuplicateKeyFutureWarning(*args)) else: raise DuplicateKeyError(*args) del node.value[index] if isinstance(value_node, MappingNode): merge_map_list.append((index, constructed(value_node))) # self.flatten_mapping(value_node) # merge.extend(value_node.value) elif isinstance(value_node, SequenceNode): # submerge = [] for subnode in value_node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( "while constructing a mapping", node.start_mark, "expected a mapping for merging, but found %s" % subnode.id, subnode.start_mark, ) merge_map_list.append((index, constructed(subnode))) # self.flatten_mapping(subnode) # submerge.append(subnode.value) # submerge.reverse() # for value in submerge: # merge.extend(value) else: raise ConstructorError( "while constructing a mapping", node.start_mark, "expected a mapping or list of mappings for merging, " "but found %s" % value_node.id, value_node.start_mark, ) elif key_node.tag == u"tag:yaml.org,2002:value": key_node.tag = u"tag:yaml.org,2002:str" index += 1 else: index += 1 return merge_map_list # if merge: # node.value = merge + node.value def _sentinel(self): # type: () -> None pass 
def construct_mapping(self, node, maptyp, deep=False): # type: ignore # type: (Any, Any, bool) -> Any if not isinstance(node, MappingNode): raise ConstructorError( None, None, "expected a mapping node, but found %s" % node.id, node.start_mark, ) merge_map = self.flatten_mapping(node) # mapping = {} if node.comment: maptyp._yaml_add_comment(node.comment[:2]) if len(node.comment) > 2: maptyp.yaml_end_comment_extend(node.comment[2], clear=True) if node.anchor: from strictyaml.ruamel.serializer import templated_id if not templated_id(node.anchor): maptyp.yaml_set_anchor(node.anchor) last_key, last_value = None, self._sentinel for key_node, value_node in node.value: # keys can be list -> deep key = self.construct_object(key_node, deep=True) # lists are not hashable, but tuples are if not isinstance(key, Hashable): if isinstance(key, MutableSequence): key_s = CommentedKeySeq(key) if key_node.flow_style is True: key_s.fa.set_flow_style() elif key_node.flow_style is False: key_s.fa.set_block_style() key = key_s elif isinstance(key, MutableMapping): key_m = CommentedKeyMap(key) if key_node.flow_style is True: key_m.fa.set_flow_style() elif key_node.flow_style is False: key_m.fa.set_block_style() key = key_m if PY2: try: hash(key) except TypeError as exc: raise ConstructorError( "while constructing a mapping", node.start_mark, "found unacceptable key (%s)" % exc, key_node.start_mark, ) else: if not isinstance(key, Hashable): raise ConstructorError( "while constructing a mapping", node.start_mark, "found unhashable key", key_node.start_mark, ) value = self.construct_object(value_node, deep=deep) if self.check_mapping_key(node, key_node, maptyp, key, value): if ( key_node.comment and len(key_node.comment) > 4 and key_node.comment[4] ): if last_value is None: key_node.comment[0] = key_node.comment.pop(4) maptyp._yaml_add_comment(key_node.comment, value=last_key) else: key_node.comment[2] = key_node.comment.pop(4) maptyp._yaml_add_comment(key_node.comment, key=key) 
key_node.comment = None if key_node.comment: maptyp._yaml_add_comment(key_node.comment, key=key) if value_node.comment: maptyp._yaml_add_comment(value_node.comment, value=key) maptyp._yaml_set_kv_line_col( key, [ key_node.start_mark.line, key_node.start_mark.column, value_node.start_mark.line, value_node.start_mark.column, ], ) maptyp[key] = value last_key, last_value = key, value # could use indexing # do this last, or <<: before a key will prevent insertion in instances # of collections.OrderedDict (as they have no __contains__ if merge_map: maptyp.add_yaml_merge(merge_map) def construct_setting(self, node, typ, deep=False): # type: (Any, Any, bool) -> Any if not isinstance(node, MappingNode): raise ConstructorError( None, None, "expected a mapping node, but found %s" % node.id, node.start_mark, ) if node.comment: typ._yaml_add_comment(node.comment[:2]) if len(node.comment) > 2: typ.yaml_end_comment_extend(node.comment[2], clear=True) if node.anchor: from strictyaml.ruamel.serializer import templated_id if not templated_id(node.anchor): typ.yaml_set_anchor(node.anchor) for key_node, value_node in node.value: # keys can be list -> deep key = self.construct_object(key_node, deep=True) # lists are not hashable, but tuples are if not isinstance(key, Hashable): if isinstance(key, list): key = tuple(key) if PY2: try: hash(key) except TypeError as exc: raise ConstructorError( "while constructing a mapping", node.start_mark, "found unacceptable key (%s)" % exc, key_node.start_mark, ) else: if not isinstance(key, Hashable): raise ConstructorError( "while constructing a mapping", node.start_mark, "found unhashable key", key_node.start_mark, ) # construct but should be null value = self.construct_object(value_node, deep=deep) # NOQA self.check_set_key(node, key_node, typ, key) if key_node.comment: typ._yaml_add_comment(key_node.comment, key=key) if value_node.comment: typ._yaml_add_comment(value_node.comment, value=key) typ.add(key) def construct_yaml_seq(self, node): # 
type: (Any) -> Any data = CommentedSeq() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) if node.comment: data._yaml_add_comment(node.comment) yield data data.extend(self.construct_rt_sequence(node, data)) self.set_collection_style(data, node) def construct_yaml_map(self, node): # type: (Any) -> Any data = CommentedMap() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) yield data self.construct_mapping(node, data, deep=True) self.set_collection_style(data, node) def set_collection_style(self, data, node): # type: (Any, Any) -> None if len(data) == 0: return if node.flow_style is True: data.fa.set_flow_style() elif node.flow_style is False: data.fa.set_block_style() def construct_yaml_object(self, node, cls): # type: (Any, Any) -> Any data = cls.__new__(cls) yield data if hasattr(data, "__setstate__"): state = SafeConstructor.construct_mapping(self, node, deep=True) data.__setstate__(state) else: state = SafeConstructor.construct_mapping(self, node) data.__dict__.update(state) def construct_yaml_omap(self, node): # type: (Any) -> Any # Note: we do now check for duplicate keys omap = CommentedOrderedMap() omap._yaml_set_line_col(node.start_mark.line, node.start_mark.column) if node.flow_style is True: omap.fa.set_flow_style() elif node.flow_style is False: omap.fa.set_block_style() yield omap if node.comment: omap._yaml_add_comment(node.comment[:2]) if len(node.comment) > 2: omap.yaml_end_comment_extend(node.comment[2], clear=True) if not isinstance(node, SequenceNode): raise ConstructorError( "while constructing an ordered map", node.start_mark, "expected a sequence, but found %s" % node.id, node.start_mark, ) for subnode in node.value: if not isinstance(subnode, MappingNode): raise ConstructorError( "while constructing an ordered map", node.start_mark, "expected a mapping of length 1, but found %s" % subnode.id, subnode.start_mark, ) if len(subnode.value) != 1: raise ConstructorError( "while constructing an ordered map", 
node.start_mark, "expected a single mapping item, but found %d items" % len(subnode.value), subnode.start_mark, ) key_node, value_node = subnode.value[0] key = self.construct_object(key_node) assert key not in omap value = self.construct_object(value_node) if key_node.comment: omap._yaml_add_comment(key_node.comment, key=key) if subnode.comment: omap._yaml_add_comment(subnode.comment, key=key) if value_node.comment: omap._yaml_add_comment(value_node.comment, value=key) omap[key] = value def construct_yaml_set(self, node): # type: (Any) -> Any data = CommentedSet() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) yield data self.construct_setting(node, data) def construct_undefined(self, node): # type: (Any) -> Any try: if isinstance(node, MappingNode): data = CommentedMap() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) if node.flow_style is True: data.fa.set_flow_style() elif node.flow_style is False: data.fa.set_block_style() data.yaml_set_tag(node.tag) yield data if node.anchor: data.yaml_set_anchor(node.anchor) self.construct_mapping(node, data) return elif isinstance(node, ScalarNode): data2 = TaggedScalar() data2.value = self.construct_scalar(node) data2.style = node.style data2.yaml_set_tag(node.tag) yield data2 if node.anchor: data2.yaml_set_anchor(node.anchor, always_dump=True) return elif isinstance(node, SequenceNode): data3 = CommentedSeq() data3._yaml_set_line_col(node.start_mark.line, node.start_mark.column) if node.flow_style is True: data3.fa.set_flow_style() elif node.flow_style is False: data3.fa.set_block_style() data3.yaml_set_tag(node.tag) yield data3 if node.anchor: data3.yaml_set_anchor(node.anchor) data3.extend(self.construct_sequence(node)) return except: # NOQA pass raise ConstructorError( None, None, "could not determine a constructor for the tag %r" % utf8(node.tag), node.start_mark, ) def construct_yaml_timestamp(self, node, values=None): # type: (Any, Any) -> Any try: match = 
self.timestamp_regexp.match(node.value) except TypeError: match = None if match is None: raise ConstructorError( None, None, 'failed to construct timestamp from "{}"'.format(node.value), node.start_mark, ) values = match.groupdict() if not values["hour"]: return SafeConstructor.construct_yaml_timestamp(self, node, values) for part in ["t", "tz_sign", "tz_hour", "tz_minute"]: if values[part]: break else: return SafeConstructor.construct_yaml_timestamp(self, node, values) year = int(values["year"]) month = int(values["month"]) day = int(values["day"]) hour = int(values["hour"]) minute = int(values["minute"]) second = int(values["second"]) fraction = 0 if values["fraction"]: fraction_s = values["fraction"][:6] while len(fraction_s) < 6: fraction_s += "0" fraction = int(fraction_s) if len(values["fraction"]) > 6 and int(values["fraction"][6]) > 4: fraction += 1 delta = None if values["tz_sign"]: tz_hour = int(values["tz_hour"]) minutes = values["tz_minute"] tz_minute = int(minutes) if minutes else 0 delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute) if values["tz_sign"] == "-": delta = -delta # shold check for NOne and solve issue 366 should be tzinfo=delta) if delta: dt = datetime.datetime(year, month, day, hour, minute) dt -= delta data = TimeStamp( dt.year, dt.month, dt.day, dt.hour, dt.minute, second, fraction ) data._yaml["delta"] = delta tz = values["tz_sign"] + values["tz_hour"] if values["tz_minute"]: tz += ":" + values["tz_minute"] data._yaml["tz"] = tz else: data = TimeStamp(year, month, day, hour, minute, second, fraction) if values["tz"]: # no delta data._yaml["tz"] = values["tz"] if values["t"]: data._yaml["t"] = True return data def construct_yaml_bool(self, node): # type: (Any) -> Any b = SafeConstructor.construct_yaml_bool(self, node) if node.anchor: return ScalarBoolean(b, anchor=node.anchor) return b RoundTripConstructor.add_constructor( u"tag:yaml.org,2002:null", RoundTripConstructor.construct_yaml_null ) 
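construct_yaml_timestamp above pads the fractional-seconds field to six digits (rounding up when the seventh digit is greater than 4) and subtracts the timezone offset to recover the instant. A standalone sketch of those two steps using only the stdlib (helper names are illustrative, not part of this module):

```python
import datetime


def normalize_fraction(fraction):
    # Pad/truncate to microsecond precision, rounding up when the
    # seventh digit is greater than 4, as the constructor above does.
    micros = int(fraction[:6].ljust(6, "0"))
    if len(fraction) > 6 and int(fraction[6]) > 4:
        micros += 1
    return micros


def to_utc(dt, tz_sign, tz_hour, tz_minute):
    # "2001-12-14 21:59:43 -5" denotes local time; subtracting the
    # signed offset yields the UTC instant.
    delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
    if tz_sign == "-":
        delta = -delta
    return dt - delta


utc = to_utc(datetime.datetime(2001, 12, 14, 21, 59, 43), "-", 5, 0)
# utc == datetime.datetime(2001, 12, 15, 2, 59, 43)
```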
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:bool", RoundTripConstructor.construct_yaml_bool
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:int", RoundTripConstructor.construct_yaml_int
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:float", RoundTripConstructor.construct_yaml_float
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:binary", RoundTripConstructor.construct_yaml_binary
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:timestamp", RoundTripConstructor.construct_yaml_timestamp
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:omap", RoundTripConstructor.construct_yaml_omap
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:pairs", RoundTripConstructor.construct_yaml_pairs
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:set", RoundTripConstructor.construct_yaml_set
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:str", RoundTripConstructor.construct_yaml_str
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:seq", RoundTripConstructor.construct_yaml_seq
)
RoundTripConstructor.add_constructor(
    u"tag:yaml.org,2002:map", RoundTripConstructor.construct_yaml_map
)
RoundTripConstructor.add_constructor(None, RoundTripConstructor.construct_undefined)


# ---- strictyaml-1.6.1/strictyaml/ruamel/cyaml.py ----
# coding: utf-8

from __future__ import absolute_import

from _ruamel_yaml import CParser, CEmitter  # type: ignore

from strictyaml.ruamel.constructor import Constructor, BaseConstructor, SafeConstructor
from strictyaml.ruamel.representer import Representer, SafeRepresenter, BaseRepresenter
from strictyaml.ruamel.resolver import Resolver, BaseResolver

if False:  # MYPY
    from typing import Any, Union, Optional  # NOQA
    from strictyaml.ruamel.compat import StreamTextType, StreamType, VersionType  # NOQA

__all__ = [
    "CBaseLoader",
    "CSafeLoader",
    "CLoader",
    "CBaseDumper",
    "CSafeDumper",
    "CDumper",
]
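The C-accelerated classes below pair `CParser`/`CEmitter` from the `_ruamel_yaml` extension with the pure-Python constructors, representers and resolvers. Callers typically probe for the extension before choosing a backend; a minimal sketch of such a probe (the helper is illustrative, not part of this module):

```python
import importlib.util


def pick_backend(c_module="_ruamel_yaml"):
    # Prefer the C-accelerated parser when the extension module is
    # importable; otherwise fall back to the pure-Python implementation.
    if importlib.util.find_spec(c_module) is not None:
        return "C"
    return "pure-Python"


backend = pick_backend()
```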
# this includes some hacks to solve the usage of resolver by lower level # parts of the parser class CBaseLoader(CParser, BaseConstructor, BaseResolver): # type: ignore def __init__(self, stream, version=None, preserve_quotes=None): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None CParser.__init__(self, stream) self._parser = self._composer = self BaseConstructor.__init__(self, loader=self) BaseResolver.__init__(self, loadumper=self) # self.descend_resolver = self._resolver.descend_resolver # self.ascend_resolver = self._resolver.ascend_resolver # self.resolve = self._resolver.resolve class CSafeLoader(CParser, SafeConstructor, Resolver): # type: ignore def __init__(self, stream, version=None, preserve_quotes=None): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None CParser.__init__(self, stream) self._parser = self._composer = self SafeConstructor.__init__(self, loader=self) Resolver.__init__(self, loadumper=self) # self.descend_resolver = self._resolver.descend_resolver # self.ascend_resolver = self._resolver.ascend_resolver # self.resolve = self._resolver.resolve class CLoader(CParser, Constructor, Resolver): # type: ignore def __init__(self, stream, version=None, preserve_quotes=None): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None CParser.__init__(self, stream) self._parser = self._composer = self Constructor.__init__(self, loader=self) Resolver.__init__(self, loadumper=self) # self.descend_resolver = self._resolver.descend_resolver # self.ascend_resolver = self._resolver.ascend_resolver # self.resolve = self._resolver.resolve class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver): # type: ignore def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, 
prefix_colon=None, ): # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA CEmitter.__init__( self, stream, canonical=canonical, indent=indent, width=width, encoding=encoding, allow_unicode=allow_unicode, line_break=line_break, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, ) self._emitter = self._serializer = self._representer = self BaseRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style, dumper=self, ) BaseResolver.__init__(self, loadumper=self) class CSafeDumper(CEmitter, SafeRepresenter, Resolver): # type: ignore def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA self._emitter = self._serializer = self._representer = self CEmitter.__init__( self, stream, canonical=canonical, indent=indent, width=width, encoding=encoding, allow_unicode=allow_unicode, line_break=line_break, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, ) self._emitter = self._serializer = self._representer = self SafeRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style ) Resolver.__init__(self) class CDumper(CEmitter, Representer, Resolver): # type: ignore def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, 
        block_seq_indent=None,
        top_level_colon_align=None,
        prefix_colon=None,
    ):
        # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None  # NOQA
        CEmitter.__init__(
            self,
            stream,
            canonical=canonical,
            indent=indent,
            width=width,
            encoding=encoding,
            allow_unicode=allow_unicode,
            line_break=line_break,
            explicit_start=explicit_start,
            explicit_end=explicit_end,
            version=version,
            tags=tags,
        )
        self._emitter = self._serializer = self._representer = self
        Representer.__init__(
            self, default_style=default_style, default_flow_style=default_flow_style
        )
        Resolver.__init__(self)


# ---- strictyaml-1.6.1/strictyaml/ruamel/dumper.py ----
# coding: utf-8

from __future__ import absolute_import

from strictyaml.ruamel.emitter import Emitter
from strictyaml.ruamel.serializer import Serializer
from strictyaml.ruamel.representer import (
    Representer,
    SafeRepresenter,
    BaseRepresenter,
    RoundTripRepresenter,
)
from strictyaml.ruamel.resolver import Resolver, BaseResolver, VersionedResolver

if False:  # MYPY
    from typing import Any, Dict, List, Union, Optional  # NOQA
    from strictyaml.ruamel.compat import StreamType, VersionType  # NOQA

__all__ = ["BaseDumper", "SafeDumper", "Dumper", "RoundTripDumper"]


class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
    def __init__(
        self,
        stream,
        default_style=None,
        default_flow_style=None,
        canonical=None,
        indent=None,
        width=None,
        allow_unicode=None,
        line_break=None,
        encoding=None,
        explicit_start=None,
        explicit_end=None,
        version=None,
        tags=None,
        block_seq_indent=None,
        top_level_colon_align=None,
        prefix_colon=None,
    ):
        # type: (Any, StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None  # NOQA
        Emitter.__init__(
            self,
            stream,
            canonical=canonical,
            indent=indent,
            width=width,
allow_unicode=allow_unicode, line_break=line_break, block_seq_indent=block_seq_indent, dumper=self, ) Serializer.__init__( self, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, dumper=self, ) BaseRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style, dumper=self, ) BaseResolver.__init__(self, loadumper=self) class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver): def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA Emitter.__init__( self, stream, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, block_seq_indent=block_seq_indent, dumper=self, ) Serializer.__init__( self, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, dumper=self, ) SafeRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style, dumper=self, ) Resolver.__init__(self, loadumper=self) class Dumper(Emitter, Serializer, Representer, Resolver): def __init__( self, stream, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA Emitter.__init__( self, 
            stream,
            canonical=canonical,
            indent=indent,
            width=width,
            allow_unicode=allow_unicode,
            line_break=line_break,
            block_seq_indent=block_seq_indent,
            dumper=self,
        )
        Serializer.__init__(
            self,
            encoding=encoding,
            explicit_start=explicit_start,
            explicit_end=explicit_end,
            version=version,
            tags=tags,
            dumper=self,
        )
        Representer.__init__(
            self,
            default_style=default_style,
            default_flow_style=default_flow_style,
            dumper=self,
        )
        Resolver.__init__(self, loadumper=self)


class RoundTripDumper(Emitter, Serializer, RoundTripRepresenter, VersionedResolver):
    def __init__(
        self,
        stream,
        default_style=None,
        default_flow_style=None,
        canonical=None,
        indent=None,
        width=None,
        allow_unicode=None,
        line_break=None,
        encoding=None,
        explicit_start=None,
        explicit_end=None,
        version=None,
        tags=None,
        block_seq_indent=None,
        top_level_colon_align=None,
        prefix_colon=None,
    ):
        # type: (StreamType, Any, Optional[bool], Optional[int], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None  # NOQA
        Emitter.__init__(
            self,
            stream,
            canonical=canonical,
            indent=indent,
            width=width,
            allow_unicode=allow_unicode,
            line_break=line_break,
            block_seq_indent=block_seq_indent,
            top_level_colon_align=top_level_colon_align,
            prefix_colon=prefix_colon,
            dumper=self,
        )
        Serializer.__init__(
            self,
            encoding=encoding,
            explicit_start=explicit_start,
            explicit_end=explicit_end,
            version=version,
            tags=tags,
            dumper=self,
        )
        RoundTripRepresenter.__init__(
            self,
            default_style=default_style,
            default_flow_style=default_flow_style,
            dumper=self,
        )
        VersionedResolver.__init__(self, loader=self)


# ---- strictyaml-1.6.1/strictyaml/ruamel/emitter.py ----
# coding: utf-8

from __future__ import absolute_import
from __future__ import print_function

# Emitter expects events obeying the following grammar:
# stream ::= STREAM-START document* STREAM-END
# document ::= DOCUMENT-START node DOCUMENT-END
# node ::= SCALAR | sequence | mapping
# sequence ::= SEQUENCE-START node* SEQUENCE-END
# mapping ::= MAPPING-START (node node)* MAPPING-END

import sys

from strictyaml.ruamel.error import YAMLError, YAMLStreamError
from strictyaml.ruamel.events import *  # NOQA

# fmt: off
from strictyaml.ruamel.compat import utf8, text_type, PY2, nprint, dbg, DBG_EVENT, \
    check_anchorname_char
# fmt: on

if False:  # MYPY
    from typing import Any, Dict, List, Union, Text, Tuple, Optional  # NOQA
    from strictyaml.ruamel.compat import StreamType  # NOQA

__all__ = ["Emitter", "EmitterError"]


class EmitterError(YAMLError):
    pass


class ScalarAnalysis(object):
    def __init__(
        self,
        scalar,
        empty,
        multiline,
        allow_flow_plain,
        allow_block_plain,
        allow_single_quoted,
        allow_double_quoted,
        allow_block,
    ):
        # type: (Any, Any, Any, bool, bool, bool, bool, bool) -> None
        self.scalar = scalar
        self.empty = empty
        self.multiline = multiline
        self.allow_flow_plain = allow_flow_plain
        self.allow_block_plain = allow_block_plain
        self.allow_single_quoted = allow_single_quoted
        self.allow_double_quoted = allow_double_quoted
        self.allow_block = allow_block


class Indents(object):
    # replacement for the list based stack of None/int
    def __init__(self):
        # type: () -> None
        self.values = []  # type: List[Tuple[int, bool]]

    def append(self, val, seq):
        # type: (Any, Any) -> None
        self.values.append((val, seq))

    def pop(self):
        # type: () -> Any
        return self.values.pop()[0]

    def last_seq(self):
        # type: () -> bool
        # return the seq(uence) value for the element added before the last one
        # in increase_indent()
        try:
            return self.values[-2][1]
        except IndexError:
            return False

    def seq_flow_align(self, seq_indent, column):
        # type: (int, int) -> int
        # extra spaces because of dash
        if len(self.values) < 2 or not self.values[-1][1]:
            return 0
        # -1 for the dash
        base = self.values[-1][0] if self.values[-1][0] is not None else 0
        return base + seq_indent - column - 1

    def __len__(self):
        # type: () -> int
        return len(self.values)


class Emitter(object):
    # fmt: off
    DEFAULT_TAG_PREFIXES = {
u'!': u'!', u'tag:yaml.org,2002:': u'!!', } # fmt: on MAX_SIMPLE_KEY_LENGTH = 128 def __init__( self, stream, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, brace_single_entry_mapping_in_flow_sequence=None, dumper=None, ): # type: (StreamType, Any, Optional[int], Optional[int], Optional[bool], Any, Optional[int], Optional[bool], Any, Optional[bool], Any) -> None # NOQA self.dumper = dumper if self.dumper is not None and getattr(self.dumper, "_emitter", None) is None: self.dumper._emitter = self self.stream = stream # Encoding can be overriden by STREAM-START. self.encoding = None # type: Optional[Text] self.allow_space_break = None # Emitter is a state machine with a stack of states to handle nested # structures. self.states = [] # type: List[Any] self.state = self.expect_stream_start # type: Any # Current event and the event queue. self.events = [] # type: List[Any] self.event = None # type: Any # The current indentation level and the stack of previous indents. self.indents = Indents() self.indent = None # type: Optional[int] # flow_context is an expanding/shrinking list consisting of '{' and '[' # for each unclosed flow context. If empty list that means block context self.flow_context = [] # type: List[Text] # Contexts. self.root_context = False self.sequence_context = False self.mapping_context = False self.simple_key_context = False # Characteristics of the last emitted character: # - current position. # - is it a whitespace? # - is it an indention character # (indentation space, '-', '?', or ':')? self.line = 0 self.column = 0 self.whitespace = True self.indention = True self.compact_seq_seq = True # dash after dash self.compact_seq_map = True # key after dash # self.compact_ms = False # dash after key, only when excplicit key with ? 
        self.no_newline = None  # type: Optional[bool]  # set if directly after `- `

        # Whether the document requires an explicit document end indicator
        self.open_ended = False

        # colon handling
        self.colon = u":"
        self.prefixed_colon = (
            self.colon if prefix_colon is None else prefix_colon + self.colon
        )
        # single entry mappings in flow sequence
        self.brace_single_entry_mapping_in_flow_sequence = (
            brace_single_entry_mapping_in_flow_sequence  # NOQA
        )

        # Formatting details.
        self.canonical = canonical
        self.allow_unicode = allow_unicode
        # set to False to get "\Uxxxxxxxx" for non-basic unicode like emojis
        self.unicode_supplementary = sys.maxunicode > 0xFFFF
        self.sequence_dash_offset = block_seq_indent if block_seq_indent else 0
        self.top_level_colon_align = top_level_colon_align
        self.best_sequence_indent = 2
        self.requested_indent = indent  # specific for literal zero indent
        if indent and 1 < indent < 10:
            self.best_sequence_indent = indent
        self.best_map_indent = self.best_sequence_indent
        # if self.best_sequence_indent < self.sequence_dash_offset + 1:
        #     self.best_sequence_indent = self.sequence_dash_offset + 1
        self.best_width = 80
        if width and width > self.best_sequence_indent * 2:
            self.best_width = width
        self.best_line_break = u"\n"  # type: Any
        if line_break in [u"\r", u"\n", u"\r\n"]:
            self.best_line_break = line_break

        # Tag prefixes.
        self.tag_prefixes = None  # type: Any

        # Prepared anchor and tag.
        self.prepared_anchor = None  # type: Any
        self.prepared_tag = None  # type: Any

        # Scalar analysis and style.
        self.analysis = None  # type: Any
        self.style = None  # type: Any

        self.scalar_after_indicator = True  # write a scalar on the same line as `---`

        self.alt_null = "null"

    @property
    def stream(self):
        # type: () -> Any
        try:
            return self._stream
        except AttributeError:
            raise YAMLStreamError("output stream needs to specified")

    @stream.setter
    def stream(self, val):
        # type: (Any) -> None
        if val is None:
            return
        if not hasattr(val, "write"):
            raise YAMLStreamError("stream argument needs to have a write() method")
        self._stream = val

    @property
    def serializer(self):
        # type: () -> Any
        try:
            if hasattr(self.dumper, "typ"):
                return self.dumper.serializer
            return self.dumper._serializer
        except AttributeError:
            return self  # cyaml

    @property
    def flow_level(self):
        # type: () -> int
        return len(self.flow_context)

    def dispose(self):
        # type: () -> None
        # Reset the state attributes (to clear self-references)
        self.states = []
        self.state = None

    def emit(self, event):
        # type: (Any) -> None
        if dbg(DBG_EVENT):
            nprint(event)
        self.events.append(event)
        while not self.need_more_events():
            self.event = self.events.pop(0)
            self.state()
            self.event = None

    # In some cases, we wait for a few next events before emitting.
    def need_more_events(self):
        # type: () -> bool
        if not self.events:
            return True
        event = self.events[0]
        if isinstance(event, DocumentStartEvent):
            return self.need_events(1)
        elif isinstance(event, SequenceStartEvent):
            return self.need_events(2)
        elif isinstance(event, MappingStartEvent):
            return self.need_events(3)
        else:
            return False

    def need_events(self, count):
        # type: (int) -> bool
        level = 0
        for event in self.events[1:]:
            if isinstance(event, (DocumentStartEvent, CollectionStartEvent)):
                level += 1
            elif isinstance(event, (DocumentEndEvent, CollectionEndEvent)):
                level -= 1
            elif isinstance(event, StreamEndEvent):
                level = -1
            if level < 0:
                return False
        return len(self.events) < count + 1

    def increase_indent(self, flow=False, sequence=None, indentless=False):
        # type: (bool, Optional[bool], bool) -> None
        self.indents.append(self.indent, sequence)
        if self.indent is None:  # top level
            if flow:
                # self.indent = self.best_sequence_indent if self.indents.last_seq() else \
                #              self.best_map_indent
                # self.indent = self.best_sequence_indent
                self.indent = self.requested_indent
            else:
                self.indent = 0
        elif not indentless:
            self.indent += (
                self.best_sequence_indent
                if self.indents.last_seq()
                else self.best_map_indent
            )
            # if self.indents.last_seq():
            #     if self.indent == 0: # top level block sequence
            #         self.indent = self.best_sequence_indent - self.sequence_dash_offset
            #     else:
            #         self.indent += self.best_sequence_indent
            # else:
            #     self.indent += self.best_map_indent

    # States.

    # Stream handlers.
def expect_stream_start(self): # type: () -> None if isinstance(self.event, StreamStartEvent): if PY2: if self.event.encoding and not getattr(self.stream, "encoding", None): self.encoding = self.event.encoding else: if self.event.encoding and not hasattr(self.stream, "encoding"): self.encoding = self.event.encoding self.write_stream_start() self.state = self.expect_first_document_start else: raise EmitterError("expected StreamStartEvent, but got %s" % (self.event,)) def expect_nothing(self): # type: () -> None raise EmitterError("expected nothing, but got %s" % (self.event,)) # Document handlers. def expect_first_document_start(self): # type: () -> Any return self.expect_document_start(first=True) def expect_document_start(self, first=False): # type: (bool) -> None if isinstance(self.event, DocumentStartEvent): if (self.event.version or self.event.tags) and self.open_ended: self.write_indicator(u"...", True) self.write_indent() if self.event.version: version_text = self.prepare_version(self.event.version) self.write_version_directive(version_text) self.tag_prefixes = self.DEFAULT_TAG_PREFIXES.copy() if self.event.tags: handles = sorted(self.event.tags.keys()) for handle in handles: prefix = self.event.tags[handle] self.tag_prefixes[prefix] = handle handle_text = self.prepare_tag_handle(handle) prefix_text = self.prepare_tag_prefix(prefix) self.write_tag_directive(handle_text, prefix_text) implicit = ( first and not self.event.explicit and not self.canonical and not self.event.version and not self.event.tags and not self.check_empty_document() ) if not implicit: self.write_indent() self.write_indicator(u"---", True) if self.canonical: self.write_indent() self.state = self.expect_document_root elif isinstance(self.event, StreamEndEvent): if self.open_ended: self.write_indicator(u"...", True) self.write_indent() self.write_stream_end() self.state = self.expect_nothing else: raise EmitterError( "expected DocumentStartEvent, but got %s" % (self.event,) ) def 
expect_document_end(self): # type: () -> None if isinstance(self.event, DocumentEndEvent): self.write_indent() if self.event.explicit: self.write_indicator(u"...", True) self.write_indent() self.flush_stream() self.state = self.expect_document_start else: raise EmitterError("expected DocumentEndEvent, but got %s" % (self.event,)) def expect_document_root(self): # type: () -> None self.states.append(self.expect_document_end) self.expect_node(root=True) # Node handlers. def expect_node(self, root=False, sequence=False, mapping=False, simple_key=False): # type: (bool, bool, bool, bool) -> None self.root_context = root self.sequence_context = sequence # not used in PyYAML self.mapping_context = mapping self.simple_key_context = simple_key if isinstance(self.event, AliasEvent): self.expect_alias() elif isinstance(self.event, (ScalarEvent, CollectionStartEvent)): if ( self.process_anchor(u"&") and isinstance(self.event, ScalarEvent) and self.sequence_context ): self.sequence_context = False if ( root and isinstance(self.event, ScalarEvent) and not self.scalar_after_indicator ): self.write_indent() self.process_tag() if isinstance(self.event, ScalarEvent): # nprint('@', self.indention, self.no_newline, self.column) self.expect_scalar() elif isinstance(self.event, SequenceStartEvent): # nprint('@', self.indention, self.no_newline, self.column) i2, n2 = self.indention, self.no_newline # NOQA if self.event.comment: if self.event.flow_style is False and self.event.comment: if self.write_post_comment(self.event): self.indention = False self.no_newline = True if self.write_pre_comment(self.event): self.indention = i2 self.no_newline = not self.indention if ( self.flow_level or self.canonical or self.event.flow_style or self.check_empty_sequence() ): self.expect_flow_sequence() else: self.expect_block_sequence() elif isinstance(self.event, MappingStartEvent): if self.event.flow_style is False and self.event.comment: self.write_post_comment(self.event) if self.event.comment and 
self.event.comment[1]: self.write_pre_comment(self.event) if ( self.flow_level or self.canonical or self.event.flow_style or self.check_empty_mapping() ): self.expect_flow_mapping(single=self.event.nr_items == 1) else: self.expect_block_mapping() else: raise EmitterError("expected NodeEvent, but got %s" % (self.event,)) def expect_alias(self): # type: () -> None if self.event.anchor is None: raise EmitterError("anchor is not specified for alias") self.process_anchor(u"*") self.state = self.states.pop() def expect_scalar(self): # type: () -> None self.increase_indent(flow=True) self.process_scalar() self.indent = self.indents.pop() self.state = self.states.pop() # Flow sequence handlers. def expect_flow_sequence(self): # type: () -> None ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column) self.write_indicator(u" " * ind + u"[", True, whitespace=True) self.increase_indent(flow=True, sequence=True) self.flow_context.append("[") self.state = self.expect_first_flow_sequence_item def expect_first_flow_sequence_item(self): # type: () -> None if isinstance(self.event, SequenceEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() assert popped == "[" self.write_indicator(u"]", False) if self.event.comment and self.event.comment[0]: # eol comment on empty flow sequence self.write_post_comment(self.event) elif self.flow_level == 0: self.write_line_break() self.state = self.states.pop() else: if self.canonical or self.column > self.best_width: self.write_indent() self.states.append(self.expect_flow_sequence_item) self.expect_node(sequence=True) def expect_flow_sequence_item(self): # type: () -> None if isinstance(self.event, SequenceEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() assert popped == "[" if self.canonical: self.write_indicator(u",", False) self.write_indent() self.write_indicator(u"]", False) if self.event.comment and self.event.comment[0]: # eol comment on flow sequence 
self.write_post_comment(self.event) else: self.no_newline = False self.state = self.states.pop() else: self.write_indicator(u",", False) if self.canonical or self.column > self.best_width: self.write_indent() self.states.append(self.expect_flow_sequence_item) self.expect_node(sequence=True) # Flow mapping handlers. def expect_flow_mapping(self, single=False): # type: (Optional[bool]) -> None ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column) map_init = u"{" if ( single and self.flow_level and self.flow_context[-1] == "[" and not self.canonical and not self.brace_single_entry_mapping_in_flow_sequence ): # single map item with flow context, no curly braces necessary map_init = u"" self.write_indicator(u" " * ind + map_init, True, whitespace=True) self.flow_context.append(map_init) self.increase_indent(flow=True, sequence=False) self.state = self.expect_first_flow_mapping_key def expect_first_flow_mapping_key(self): # type: () -> None if isinstance(self.event, MappingEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() assert popped == "{" # empty flow mapping self.write_indicator(u"}", False) if self.event.comment and self.event.comment[0]: # eol comment on empty mapping self.write_post_comment(self.event) elif self.flow_level == 0: self.write_line_break() self.state = self.states.pop() else: if self.canonical or self.column > self.best_width: self.write_indent() if not self.canonical and self.check_simple_key(): self.states.append(self.expect_flow_mapping_simple_value) self.expect_node(mapping=True, simple_key=True) else: self.write_indicator(u"?", True) self.states.append(self.expect_flow_mapping_value) self.expect_node(mapping=True) def expect_flow_mapping_key(self): # type: () -> None if isinstance(self.event, MappingEndEvent): # if self.event.comment and self.event.comment[1]: # self.write_pre_comment(self.event) self.indent = self.indents.pop() popped = self.flow_context.pop() assert popped in [u"{", u""] if 
self.canonical: self.write_indicator(u",", False) self.write_indent() if popped != u"": self.write_indicator(u"}", False) if self.event.comment and self.event.comment[0]: # eol comment on flow mapping, never reached on empty mappings self.write_post_comment(self.event) else: self.no_newline = False self.state = self.states.pop() else: self.write_indicator(u",", False) if self.canonical or self.column > self.best_width: self.write_indent() if not self.canonical and self.check_simple_key(): self.states.append(self.expect_flow_mapping_simple_value) self.expect_node(mapping=True, simple_key=True) else: self.write_indicator(u"?", True) self.states.append(self.expect_flow_mapping_value) self.expect_node(mapping=True) def expect_flow_mapping_simple_value(self): # type: () -> None self.write_indicator(self.prefixed_colon, False) self.states.append(self.expect_flow_mapping_key) self.expect_node(mapping=True) def expect_flow_mapping_value(self): # type: () -> None if self.canonical or self.column > self.best_width: self.write_indent() self.write_indicator(self.prefixed_colon, True) self.states.append(self.expect_flow_mapping_key) self.expect_node(mapping=True) # Block sequence handlers. def expect_block_sequence(self): # type: () -> None if self.mapping_context: indentless = not self.indention else: indentless = False if not self.compact_seq_seq and self.column != 0: self.write_line_break() self.increase_indent(flow=False, sequence=True, indentless=indentless) self.state = self.expect_first_block_sequence_item def expect_first_block_sequence_item(self): # type: () -> Any return self.expect_block_sequence_item(first=True) def expect_block_sequence_item(self, first=False): # type: (bool) -> None if not first and isinstance(self.event, SequenceEndEvent): if self.event.comment and self.event.comment[1]: # final comments on a block list e.g. 
empty line self.write_pre_comment(self.event) self.indent = self.indents.pop() self.state = self.states.pop() self.no_newline = False else: if self.event.comment and self.event.comment[1]: self.write_pre_comment(self.event) nonl = self.no_newline if self.column == 0 else False self.write_indent() ind = self.sequence_dash_offset # if len(self.indents) > 1 else 0 self.write_indicator(u" " * ind + u"-", True, indention=True) if nonl or self.sequence_dash_offset + 2 > self.best_sequence_indent: self.no_newline = True self.states.append(self.expect_block_sequence_item) self.expect_node(sequence=True) # Block mapping handlers. def expect_block_mapping(self): # type: () -> None if not self.mapping_context and not (self.compact_seq_map or self.column == 0): self.write_line_break() self.increase_indent(flow=False, sequence=False) self.state = self.expect_first_block_mapping_key def expect_first_block_mapping_key(self): # type: () -> None return self.expect_block_mapping_key(first=True) def expect_block_mapping_key(self, first=False): # type: (Any) -> None if not first and isinstance(self.event, MappingEndEvent): if self.event.comment and self.event.comment[1]: # final comments from a doc self.write_pre_comment(self.event) self.indent = self.indents.pop() self.state = self.states.pop() else: if self.event.comment and self.event.comment[1]: # final comments from a doc self.write_pre_comment(self.event) self.write_indent() if self.check_simple_key(): if not isinstance( self.event, (SequenceStartEvent, MappingStartEvent) ): # sequence keys try: if self.event.style == "?": self.write_indicator(u"?", True, indention=True) except AttributeError: # aliases have no style pass self.states.append(self.expect_block_mapping_simple_value) self.expect_node(mapping=True, simple_key=True) if isinstance(self.event, AliasEvent): self.stream.write(u" ") else: self.write_indicator(u"?", True, indention=True) self.states.append(self.expect_block_mapping_value) self.expect_node(mapping=True) def 
expect_block_mapping_simple_value(self): # type: () -> None if getattr(self.event, "style", None) != "?": # prefix = u'' if self.indent == 0 and self.top_level_colon_align is not None: # write non-prefixed colon c = u" " * (self.top_level_colon_align - self.column) + self.colon else: c = self.prefixed_colon self.write_indicator(c, False) self.states.append(self.expect_block_mapping_key) self.expect_node(mapping=True) def expect_block_mapping_value(self): # type: () -> None self.write_indent() self.write_indicator(self.prefixed_colon, True, indention=True) self.states.append(self.expect_block_mapping_key) self.expect_node(mapping=True) # Checkers. def check_empty_sequence(self): # type: () -> bool return ( isinstance(self.event, SequenceStartEvent) and bool(self.events) and isinstance(self.events[0], SequenceEndEvent) ) def check_empty_mapping(self): # type: () -> bool return ( isinstance(self.event, MappingStartEvent) and bool(self.events) and isinstance(self.events[0], MappingEndEvent) ) def check_empty_document(self): # type: () -> bool if not isinstance(self.event, DocumentStartEvent) or not self.events: return False event = self.events[0] return ( isinstance(event, ScalarEvent) and event.anchor is None and event.tag is None and event.implicit and event.value == "" ) def check_simple_key(self): # type: () -> bool length = 0 if isinstance(self.event, NodeEvent) and self.event.anchor is not None: if self.prepared_anchor is None: self.prepared_anchor = self.prepare_anchor(self.event.anchor) length += len(self.prepared_anchor) if ( isinstance(self.event, (ScalarEvent, CollectionStartEvent)) and self.event.tag is not None ): if self.prepared_tag is None: self.prepared_tag = self.prepare_tag(self.event.tag) length += len(self.prepared_tag) if isinstance(self.event, ScalarEvent): if self.analysis is None: self.analysis = self.analyze_scalar(self.event.value) length += len(self.analysis.scalar) return length < self.MAX_SIMPLE_KEY_LENGTH and ( isinstance(self.event, 
AliasEvent) or ( isinstance(self.event, SequenceStartEvent) and self.event.flow_style is True ) or ( isinstance(self.event, MappingStartEvent) and self.event.flow_style is True ) or ( isinstance(self.event, ScalarEvent) # if there is an explicit style for an empty string, it is a simple key and not (self.analysis.empty and self.style and self.style not in "'\"") and not self.analysis.multiline ) or self.check_empty_sequence() or self.check_empty_mapping() ) # Anchor, Tag, and Scalar processors. def process_anchor(self, indicator): # type: (Any) -> bool if self.event.anchor is None: self.prepared_anchor = None return False if self.prepared_anchor is None: self.prepared_anchor = self.prepare_anchor(self.event.anchor) if self.prepared_anchor: self.write_indicator(indicator + self.prepared_anchor, True) # issue 288 self.no_newline = False self.prepared_anchor = None return True def process_tag(self): # type: () -> None tag = self.event.tag if isinstance(self.event, ScalarEvent): if self.style is None: self.style = self.choose_scalar_style() if ( self.event.value == "" and self.style == "'" and tag == "tag:yaml.org,2002:null" and self.alt_null is not None ): self.event.value = self.alt_null self.analysis = None self.style = self.choose_scalar_style() if (not self.canonical or tag is None) and ( (self.style == "" and self.event.implicit[0]) or (self.style != "" and self.event.implicit[1]) ): self.prepared_tag = None return if self.event.implicit[0] and tag is None: tag = u"!" 
self.prepared_tag = None else: if (not self.canonical or tag is None) and self.event.implicit: self.prepared_tag = None return if tag is None: raise EmitterError("tag is not specified") if self.prepared_tag is None: self.prepared_tag = self.prepare_tag(tag) if self.prepared_tag: self.write_indicator(self.prepared_tag, True) if ( self.sequence_context and not self.flow_level and isinstance(self.event, ScalarEvent) ): self.no_newline = True self.prepared_tag = None def choose_scalar_style(self): # type: () -> Any if self.analysis is None: self.analysis = self.analyze_scalar(self.event.value) if self.event.style == '"' or self.canonical: return '"' if (not self.event.style or self.event.style == "?") and ( self.event.implicit[0] or not self.event.implicit[2] ): if not ( self.simple_key_context and (self.analysis.empty or self.analysis.multiline) ) and ( self.flow_level and self.analysis.allow_flow_plain or (not self.flow_level and self.analysis.allow_block_plain) ): return "" self.analysis.allow_block = True if self.event.style and self.event.style in "|>": if ( not self.flow_level and not self.simple_key_context and self.analysis.allow_block ): return self.event.style if not self.event.style and self.analysis.allow_double_quoted: if "'" in self.event.value or "\n" in self.event.value: return '"' if not self.event.style or self.event.style == "'": if self.analysis.allow_single_quoted and not ( self.simple_key_context and self.analysis.multiline ): return "'" return '"' def process_scalar(self): # type: () -> None if self.analysis is None: self.analysis = self.analyze_scalar(self.event.value) if self.style is None: self.style = self.choose_scalar_style() split = not self.simple_key_context # if self.analysis.multiline and split \ # and (not self.style or self.style in '\'\"'): # self.write_indent() # nprint('xx', self.sequence_context, self.flow_level) if self.sequence_context and not self.flow_level: self.write_indent() if self.style == '"': 
self.write_double_quoted(self.analysis.scalar, split) elif self.style == "'": self.write_single_quoted(self.analysis.scalar, split) elif self.style == ">": self.write_folded(self.analysis.scalar) elif self.style == "|": self.write_literal(self.analysis.scalar, self.event.comment) else: self.write_plain(self.analysis.scalar, split) self.analysis = None self.style = None if self.event.comment: self.write_post_comment(self.event) # Analyzers. def prepare_version(self, version): # type: (Any) -> Any major, minor = version if major != 1: raise EmitterError("unsupported YAML version: %d.%d" % (major, minor)) return u"%d.%d" % (major, minor) def prepare_tag_handle(self, handle): # type: (Any) -> Any if not handle: raise EmitterError("tag handle must not be empty") if handle[0] != u"!" or handle[-1] != u"!": raise EmitterError( "tag handle must start and end with '!': %r" % (utf8(handle)) ) for ch in handle[1:-1]: if not ( u"0" <= ch <= u"9" or u"A" <= ch <= u"Z" or u"a" <= ch <= u"z" or ch in u"-_" ): raise EmitterError( "invalid character %r in the tag handle: %r" % (utf8(ch), utf8(handle)) ) return handle def prepare_tag_prefix(self, prefix): # type: (Any) -> Any if not prefix: raise EmitterError("tag prefix must not be empty") chunks = [] # type: List[Any] start = end = 0 if prefix[0] == u"!": end = 1 ch_set = u"-;/?:@&=+$,_.~*'()[]" if self.dumper: version = getattr(self.dumper, "version", (1, 2)) if version is None or version >= (1, 2): ch_set += u"#" while end < len(prefix): ch = prefix[end] if ( u"0" <= ch <= u"9" or u"A" <= ch <= u"Z" or u"a" <= ch <= u"z" or ch in ch_set ): end += 1 else: if start < end: chunks.append(prefix[start:end]) start = end = end + 1 data = utf8(ch) for ch in data: chunks.append(u"%%%02X" % ord(ch)) if start < end: chunks.append(prefix[start:end]) return "".join(chunks) def prepare_tag(self, tag): # type: (Any) -> Any if not tag: raise EmitterError("tag must not be empty") if tag == u"!": return tag handle = None suffix = tag prefixes = 
sorted(self.tag_prefixes.keys()) for prefix in prefixes: if tag.startswith(prefix) and (prefix == u"!" or len(prefix) < len(tag)): handle = self.tag_prefixes[prefix] suffix = tag[len(prefix) :] chunks = [] # type: List[Any] start = end = 0 ch_set = u"-;/?:@&=+$,_.~*'()[]" if self.dumper: version = getattr(self.dumper, "version", (1, 2)) if version is None or version >= (1, 2): ch_set += u"#" while end < len(suffix): ch = suffix[end] if ( u"0" <= ch <= u"9" or u"A" <= ch <= u"Z" or u"a" <= ch <= u"z" or ch in ch_set or (ch == u"!" and handle != u"!") ): end += 1 else: if start < end: chunks.append(suffix[start:end]) start = end = end + 1 data = utf8(ch) for ch in data: chunks.append(u"%%%02X" % ord(ch)) if start < end: chunks.append(suffix[start:end]) suffix_text = "".join(chunks) if handle: return u"%s%s" % (handle, suffix_text) else: return u"!<%s>" % suffix_text def prepare_anchor(self, anchor): # type: (Any) -> Any if not anchor: raise EmitterError("anchor must not be empty") for ch in anchor: if not check_anchorname_char(ch): raise EmitterError( "invalid character %r in the anchor: %r" % (utf8(ch), utf8(anchor)) ) return anchor def analyze_scalar(self, scalar): # type: (Any) -> Any # Empty scalar is a special case. if not scalar: return ScalarAnalysis( scalar=scalar, empty=True, multiline=False, allow_flow_plain=False, allow_block_plain=True, allow_single_quoted=True, allow_double_quoted=True, allow_block=False, ) # Indicators and special characters. block_indicators = False flow_indicators = False line_breaks = False special_characters = False # Important whitespace combinations. leading_space = False leading_break = False trailing_space = False trailing_break = False break_space = False space_break = False # Check document indicators. if scalar.startswith(u"---") or scalar.startswith(u"..."): block_indicators = True flow_indicators = True # First character or preceded by a whitespace. 
preceeded_by_whitespace = True # Last character or followed by a whitespace. followed_by_whitespace = ( len(scalar) == 1 or scalar[1] in u"\0 \t\r\n\x85\u2028\u2029" ) # The previous character is a space. previous_space = False # The previous character is a break. previous_break = False index = 0 while index < len(scalar): ch = scalar[index] # Check for indicators. if index == 0: # Leading indicators are special characters. if ch in u"#,[]{}&*!|>'\"%@`": flow_indicators = True block_indicators = True if ch in u"?:": # ToDo if self.serializer.use_version == (1, 1): flow_indicators = True elif len(scalar) == 1: # single character flow_indicators = True if followed_by_whitespace: block_indicators = True if ch == u"-" and followed_by_whitespace: flow_indicators = True block_indicators = True else: # Some indicators cannot appear within a scalar as well. if ch in u",[]{}": # http://yaml.org/spec/1.2/spec.html#id2788859 flow_indicators = True if ch == u"?" and self.serializer.use_version == (1, 1): flow_indicators = True if ch == u":": if followed_by_whitespace: flow_indicators = True block_indicators = True if ch == u"#" and preceeded_by_whitespace: flow_indicators = True block_indicators = True # Check for line breaks, special, and unicode characters. if ch in u"\n\x85\u2028\u2029": line_breaks = True if not (ch == u"\n" or u"\x20" <= ch <= u"\x7E"): if ( ch == u"\x85" or u"\xA0" <= ch <= u"\uD7FF" or u"\uE000" <= ch <= u"\uFFFD" or ( self.unicode_supplementary and (u"\U00010000" <= ch <= u"\U0010FFFF") ) ) and ch != u"\uFEFF": # unicode_characters = True if not self.allow_unicode: special_characters = True else: special_characters = True # Detect important whitespace combinations. 
if ch == u" ": if index == 0: leading_space = True if index == len(scalar) - 1: trailing_space = True if previous_break: break_space = True previous_space = True previous_break = False elif ch in u"\n\x85\u2028\u2029": if index == 0: leading_break = True if index == len(scalar) - 1: trailing_break = True if previous_space: space_break = True previous_space = False previous_break = True else: previous_space = False previous_break = False # Prepare for the next character. index += 1 preceeded_by_whitespace = ch in u"\0 \t\r\n\x85\u2028\u2029" followed_by_whitespace = ( index + 1 >= len(scalar) or scalar[index + 1] in u"\0 \t\r\n\x85\u2028\u2029" ) # Let's decide what styles are allowed. allow_flow_plain = True allow_block_plain = True allow_single_quoted = True allow_double_quoted = True allow_block = True # Leading and trailing whitespaces are bad for plain scalars. if leading_space or leading_break or trailing_space or trailing_break: allow_flow_plain = allow_block_plain = False # We do not permit trailing spaces for block scalars. if trailing_space: allow_block = False # Spaces at the beginning of a new line are only acceptable for block # scalars. if break_space: allow_flow_plain = allow_block_plain = allow_single_quoted = False # Spaces followed by breaks, as well as special character are only # allowed for double quoted scalars. if special_characters: allow_flow_plain = ( allow_block_plain ) = allow_single_quoted = allow_block = False elif space_break: allow_flow_plain = allow_block_plain = allow_single_quoted = False if not self.allow_space_break: allow_block = False # Although the plain scalar writer supports breaks, we never emit # multiline plain scalars. if line_breaks: allow_flow_plain = allow_block_plain = False # Flow indicators are forbidden for flow plain scalars. if flow_indicators: allow_flow_plain = False # Block indicators are forbidden for block plain scalars. 
if block_indicators: allow_block_plain = False return ScalarAnalysis( scalar=scalar, empty=False, multiline=line_breaks, allow_flow_plain=allow_flow_plain, allow_block_plain=allow_block_plain, allow_single_quoted=allow_single_quoted, allow_double_quoted=allow_double_quoted, allow_block=allow_block, ) # Writers. def flush_stream(self): # type: () -> None if hasattr(self.stream, "flush"): self.stream.flush() def write_stream_start(self): # type: () -> None # Write BOM if needed. if self.encoding and self.encoding.startswith("utf-16"): self.stream.write(u"\uFEFF".encode(self.encoding)) def write_stream_end(self): # type: () -> None self.flush_stream() def write_indicator( self, indicator, need_whitespace, whitespace=False, indention=False ): # type: (Any, Any, bool, bool) -> None if self.whitespace or not need_whitespace: data = indicator else: data = u" " + indicator self.whitespace = whitespace self.indention = self.indention and indention self.column += len(data) self.open_ended = False if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) def write_indent(self): # type: () -> None indent = self.indent or 0 if ( not self.indention or self.column > indent or (self.column == indent and not self.whitespace) ): if bool(self.no_newline): self.no_newline = False else: self.write_line_break() if self.column < indent: self.whitespace = True data = u" " * (indent - self.column) self.column = indent if self.encoding: data = data.encode(self.encoding) self.stream.write(data) def write_line_break(self, data=None): # type: (Any) -> None if data is None: data = self.best_line_break self.whitespace = True self.indention = True self.line += 1 self.column = 0 if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) def write_version_directive(self, version_text): # type: (Any) -> None data = u"%%YAML %s" % version_text if self.encoding: data = data.encode(self.encoding) self.stream.write(data) self.write_line_break() def 
write_tag_directive(self, handle_text, prefix_text): # type: (Any, Any) -> None data = u"%%TAG %s %s" % (handle_text, prefix_text) if self.encoding: data = data.encode(self.encoding) self.stream.write(data) self.write_line_break() # Scalar streams. def write_single_quoted(self, text, split=True): # type: (Any, Any) -> None if self.root_context: if self.requested_indent is not None: self.write_line_break() if self.requested_indent != 0: self.write_indent() self.write_indicator(u"'", True) spaces = False breaks = False start = end = 0 while end <= len(text): ch = None if end < len(text): ch = text[end] if spaces: if ch is None or ch != u" ": if ( start + 1 == end and self.column > self.best_width and split and start != 0 and end != len(text) ): self.write_indent() else: data = text[start:end] self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end elif breaks: if ch is None or ch not in u"\n\x85\u2028\u2029": if text[start] == u"\n": self.write_line_break() for br in text[start:end]: if br == u"\n": self.write_line_break() else: self.write_line_break(br) self.write_indent() start = end else: if ch is None or ch in u" \n\x85\u2028\u2029" or ch == u"'": if start < end: data = text[start:end] self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end if ch == u"'": data = u"''" self.column += 2 if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end + 1 if ch is not None: spaces = ch == u" " breaks = ch in u"\n\x85\u2028\u2029" end += 1 self.write_indicator(u"'", False) ESCAPE_REPLACEMENTS = { u"\0": u"0", u"\x07": u"a", u"\x08": u"b", u"\x09": u"t", u"\x0A": u"n", u"\x0B": u"v", u"\x0C": u"f", u"\x0D": u"r", u"\x1B": u"e", u'"': u'"', u"\\": u"\\", u"\x85": u"N", u"\xA0": u"_", u"\u2028": u"L", u"\u2029": u"P", } def write_double_quoted(self, text, split=True): # type: (Any, Any) -> None if self.root_context: 
if self.requested_indent is not None: self.write_line_break() if self.requested_indent != 0: self.write_indent() self.write_indicator(u'"', True) start = end = 0 while end <= len(text): ch = None if end < len(text): ch = text[end] if ( ch is None or ch in u'"\\\x85\u2028\u2029\uFEFF' or not ( u"\x20" <= ch <= u"\x7E" or ( self.allow_unicode and (u"\xA0" <= ch <= u"\uD7FF" or u"\uE000" <= ch <= u"\uFFFD") ) ) ): if start < end: data = text[start:end] self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end if ch is not None: if ch in self.ESCAPE_REPLACEMENTS: data = u"\\" + self.ESCAPE_REPLACEMENTS[ch] elif ch <= u"\xFF": data = u"\\x%02X" % ord(ch) elif ch <= u"\uFFFF": data = u"\\u%04X" % ord(ch) else: data = u"\\U%08X" % ord(ch) self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end + 1 if ( 0 < end < len(text) - 1 and (ch == u" " or start >= end) and self.column + (end - start) > self.best_width and split ): data = text[start:end] + u"\\" if start < end: start = end self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) self.write_indent() self.whitespace = False self.indention = False if text[start] == u" ": data = u"\\" self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) end += 1 self.write_indicator(u'"', False) def determine_block_hints(self, text): # type: (Any) -> Any indent = 0 indicator = u"" hints = u"" if text: if text[0] in u" \n\x85\u2028\u2029": indent = self.best_sequence_indent hints += text_type(indent) elif self.root_context: for end in ["\n---", "\n..."]: pos = 0 while True: pos = text.find(end, pos) if pos == -1: break try: if text[pos + 4] in " \r\n": break except IndexError: pass pos += 1 if pos > -1: break if pos > 0: indent = self.best_sequence_indent if text[-1] not in u"\n\x85\u2028\u2029": indicator = u"-" 
elif len(text) == 1 or text[-2] in u"\n\x85\u2028\u2029": indicator = u"+" hints += indicator return hints, indent, indicator def write_folded(self, text): # type: (Any) -> None hints, _indent, _indicator = self.determine_block_hints(text) self.write_indicator(u">" + hints, True) if _indicator == u"+": self.open_ended = True self.write_line_break() leading_space = True spaces = False breaks = True start = end = 0 while end <= len(text): ch = None if end < len(text): ch = text[end] if breaks: if ch is None or ch not in u"\n\x85\u2028\u2029\a": if ( not leading_space and ch is not None and ch != u" " and text[start] == u"\n" ): self.write_line_break() leading_space = ch == u" " for br in text[start:end]: if br == u"\n": self.write_line_break() else: self.write_line_break(br) if ch is not None: self.write_indent() start = end elif spaces: if ch != u" ": if start + 1 == end and self.column > self.best_width: self.write_indent() else: data = text[start:end] self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) start = end else: if ch is None or ch in u" \n\x85\u2028\u2029\a": data = text[start:end] self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) self.stream.write(data) if ch == u"\a": if end < (len(text) - 1) and not text[end + 2].isspace(): self.write_line_break() self.write_indent() end += 2 # \a and the space that is inserted on the fold else: raise EmitterError( "unexcpected fold indicator \\a before space" ) if ch is None: self.write_line_break() start = end if ch is not None: breaks = ch in u"\n\x85\u2028\u2029" spaces = ch == u" " end += 1 def write_literal(self, text, comment=None): # type: (Any, Any) -> None hints, _indent, _indicator = self.determine_block_hints(text) self.write_indicator(u"|" + hints, True) try: comment = comment[1][0] if comment: self.stream.write(comment) except (TypeError, IndexError): pass if _indicator == u"+": self.open_ended = True 
        self.write_line_break()
        breaks = True
        start = end = 0
        while end <= len(text):
            ch = None
            if end < len(text):
                ch = text[end]
            if breaks:
                if ch is None or ch not in u"\n\x85\u2028\u2029":
                    for br in text[start:end]:
                        if br == u"\n":
                            self.write_line_break()
                        else:
                            self.write_line_break(br)
                    if ch is not None:
                        if self.root_context:
                            idnx = self.indent if self.indent is not None else 0
                            self.stream.write(u" " * (_indent + idnx))
                        else:
                            self.write_indent()
                    start = end
            else:
                if ch is None or ch in u"\n\x85\u2028\u2029":
                    data = text[start:end]
                    if bool(self.encoding):
                        data = data.encode(self.encoding)
                    self.stream.write(data)
                    if ch is None:
                        self.write_line_break()
                    start = end
            if ch is not None:
                breaks = ch in u"\n\x85\u2028\u2029"
            end += 1

    def write_plain(self, text, split=True):
        # type: (Any, Any) -> None
        if self.root_context:
            if self.requested_indent is not None:
                self.write_line_break()
                if self.requested_indent != 0:
                    self.write_indent()
            else:
                self.open_ended = True
        if not text:
            return
        if not self.whitespace:
            data = u" "
            self.column += len(data)
            if self.encoding:
                data = data.encode(self.encoding)
            self.stream.write(data)
        self.whitespace = False
        self.indention = False
        spaces = False
        breaks = False
        start = end = 0
        while end <= len(text):
            ch = None
            if end < len(text):
                ch = text[end]
            if spaces:
                if ch != u" ":
                    if start + 1 == end and self.column > self.best_width and split:
                        self.write_indent()
                        self.whitespace = False
                        self.indention = False
                    else:
                        data = text[start:end]
                        self.column += len(data)
                        if self.encoding:
                            data = data.encode(self.encoding)
                        self.stream.write(data)
                    start = end
            elif breaks:
                if ch not in u"\n\x85\u2028\u2029":  # type: ignore
                    if text[start] == u"\n":
                        self.write_line_break()
                    for br in text[start:end]:
                        if br == u"\n":
                            self.write_line_break()
                        else:
                            self.write_line_break(br)
                    self.write_indent()
                    self.whitespace = False
                    self.indention = False
                    start = end
            else:
                if ch is None or ch in u" \n\x85\u2028\u2029":
                    data = text[start:end]
                    self.column += len(data)
                    if self.encoding:
                        data = data.encode(self.encoding)
                    try:
                        self.stream.write(data)
                    except:  # NOQA
                        sys.stdout.write(repr(data) + "\n")
                        raise
                    start = end
            if ch is not None:
                spaces = ch == u" "
                breaks = ch in u"\n\x85\u2028\u2029"
            end += 1

    def write_comment(self, comment, pre=False):
        # type: (Any, bool) -> None
        value = comment.value
        # nprintf('{:02d} {:02d} {!r}'.format(self.column, comment.start_mark.column, value))
        if not pre and value[-1] == "\n":
            value = value[:-1]
        try:
            # get original column position
            col = comment.start_mark.column
            if comment.value and comment.value.startswith("\n"):
                # never inject extra spaces if the comment starts with a newline
                # and not a real comment (e.g. if you have an empty line following a key-value
                col = self.column
            elif col < self.column + 1:
                ValueError
        except ValueError:
            col = self.column + 1
        # nprint('post_comment', self.line, self.column, value)
        try:
            # at least one space if the current column >= the start column of the comment
            # but not at the start of a line
            nr_spaces = col - self.column
            if self.column and value.strip() and nr_spaces < 1 and value[0] != "\n":
                nr_spaces = 1
            value = " " * nr_spaces + value
            try:
                if bool(self.encoding):
                    value = value.encode(self.encoding)
            except UnicodeDecodeError:
                pass
            self.stream.write(value)
        except TypeError:
            raise
        if not pre:
            self.write_line_break()

    def write_pre_comment(self, event):
        # type: (Any) -> bool
        comments = event.comment[1]
        if comments is None:
            return False
        try:
            start_events = (MappingStartEvent, SequenceStartEvent)
            for comment in comments:
                if isinstance(event, start_events) and getattr(
                    comment, "pre_done", None
                ):
                    continue
                if self.column != 0:
                    self.write_line_break()
                self.write_comment(comment, pre=True)
                if isinstance(event, start_events):
                    comment.pre_done = True
        except TypeError:
            sys.stdout.write("eventtt {} {}".format(type(event), event))
            raise
        return True

    def write_post_comment(self, event):
        # type: (Any) -> bool
        if self.event.comment[0] is None:
            return False
        comment = event.comment[0]
        self.write_comment(comment)
        return True
strictyaml-1.6.1/strictyaml/ruamel/error.py
# coding: utf-8

from __future__ import absolute_import

import warnings
import textwrap

from strictyaml.ruamel.compat import utf8

if False:  # MYPY
    from typing import Any, Dict, Optional, List, Text  # NOQA


__all__ = [
    "FileMark",
    "StringMark",
    "CommentMark",
    "YAMLError",
    "MarkedYAMLError",
    "ReusedAnchorWarning",
    "UnsafeLoaderWarning",
    "MarkedYAMLWarning",
    "MarkedYAMLFutureWarning",
]


class StreamMark(object):
    __slots__ = "name", "index", "line", "column"

    def __init__(self, name, index, line, column):
        # type: (Any, int, int, int) -> None
        self.name = name
        self.index = index
        self.line = line
        self.column = column

    def __str__(self):
        # type: () -> Any
        where = ' in "%s", line %d, column %d' % (
            self.name,
            self.line + 1,
            self.column + 1,
        )
        return where

    def __eq__(self, other):
        # type: (Any) -> bool
        if self.line != other.line or self.column != other.column:
            return False
        if self.name != other.name or self.index != other.index:
            return False
        return True

    def __ne__(self, other):
        # type: (Any) -> bool
        return not self.__eq__(other)


class FileMark(StreamMark):
    __slots__ = ()


class StringMark(StreamMark):
    __slots__ = "name", "index", "line", "column", "buffer", "pointer"

    def __init__(self, name, index, line, column, buffer, pointer):
        # type: (Any, int, int, int, Any, Any) -> None
        StreamMark.__init__(self, name, index, line, column)
        self.buffer = buffer
        self.pointer = pointer

    def get_snippet(self, indent=4, max_length=75):
        # type: (int, int) -> Any
        if self.buffer is None:  # always False
            return None
        head = ""
        start = self.pointer
        while start > 0 and self.buffer[start - 1] not in u"\0\r\n\x85\u2028\u2029":
            start -= 1
            if self.pointer - start > max_length / 2 - 1:
                head = " ... "
                start += 5
                break
        tail = ""
        end = self.pointer
        while (
            end < len(self.buffer)
            and self.buffer[end] not in u"\0\r\n\x85\u2028\u2029"
        ):
            end += 1
            if end - self.pointer > max_length / 2 - 1:
                tail = " ... "
                end -= 5
                break
        snippet = utf8(self.buffer[start:end])
        caret = "^"
        caret = "^ (line: {})".format(self.line + 1)
        return (
            " " * indent
            + head
            + snippet
            + tail
            + "\n"
            + " " * (indent + self.pointer - start + len(head))
            + caret
        )

    def __str__(self):
        # type: () -> Any
        snippet = self.get_snippet()
        where = ' in "%s", line %d, column %d' % (
            self.name,
            self.line + 1,
            self.column + 1,
        )
        if snippet is not None:
            where += ":\n" + snippet
        return where


class CommentMark(object):
    __slots__ = ("column",)

    def __init__(self, column):
        # type: (Any) -> None
        self.column = column


class YAMLError(Exception):
    pass


class MarkedYAMLError(YAMLError):
    def __init__(
        self,
        context=None,
        context_mark=None,
        problem=None,
        problem_mark=None,
        note=None,
        warn=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any) -> None
        self.context = context
        self.context_mark = context_mark
        self.problem = problem
        self.problem_mark = problem_mark
        self.note = note
        # warn is ignored

    def __str__(self):
        # type: () -> Any
        lines = []  # type: List[str]
        if self.context is not None:
            lines.append(self.context)
        if self.context_mark is not None and (
            self.problem is None
            or self.problem_mark is None
            or self.context_mark.name != self.problem_mark.name
            or self.context_mark.line != self.problem_mark.line
            or self.context_mark.column != self.problem_mark.column
        ):
            lines.append(str(self.context_mark))
        if self.problem is not None:
            lines.append(self.problem)
        if self.problem_mark is not None:
            lines.append(str(self.problem_mark))
        if self.note is not None and self.note:
            note = textwrap.dedent(self.note)
            lines.append(note)
        return "\n".join(lines)


class YAMLStreamError(Exception):
    pass


class YAMLWarning(Warning):
    pass


class MarkedYAMLWarning(YAMLWarning):
    def __init__(
        self,
        context=None,
        context_mark=None,
        problem=None,
        problem_mark=None,
        note=None,
        warn=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any) -> None
        self.context = context
        self.context_mark = context_mark
        self.problem = problem
        self.problem_mark = problem_mark
        self.note = note
        self.warn = warn

    def __str__(self):
        # type: () -> Any
        lines = []  # type: List[str]
        if self.context is not None:
            lines.append(self.context)
        if self.context_mark is not None and (
            self.problem is None
            or self.problem_mark is None
            or self.context_mark.name != self.problem_mark.name
            or self.context_mark.line != self.problem_mark.line
            or self.context_mark.column != self.problem_mark.column
        ):
            lines.append(str(self.context_mark))
        if self.problem is not None:
            lines.append(self.problem)
        if self.problem_mark is not None:
            lines.append(str(self.problem_mark))
        if self.note is not None and self.note:
            note = textwrap.dedent(self.note)
            lines.append(note)
        if self.warn is not None and self.warn:
            warn = textwrap.dedent(self.warn)
            lines.append(warn)
        return "\n".join(lines)


class ReusedAnchorWarning(YAMLWarning):
    pass


class UnsafeLoaderWarning(YAMLWarning):
    text = """
The default 'Loader' for 'load(stream)' without further arguments can be unsafe.
Use 'load(stream, Loader=strictyaml.ruamel.Loader)' explicitly if that is OK.
Alternatively include the following in your code:

  import warnings
  warnings.simplefilter('ignore', strictyaml.ruamel.error.UnsafeLoaderWarning)

In most other cases you should consider using 'safe_load(stream)'"""
    pass


warnings.simplefilter("once", UnsafeLoaderWarning)


class MantissaNoDotYAML1_1Warning(YAMLWarning):
    def __init__(self, node, flt_str):
        # type: (Any, Any) -> None
        self.node = node
        self.flt = flt_str

    def __str__(self):
        # type: () -> Any
        line = self.node.start_mark.line
        col = self.node.start_mark.column
        return """
In YAML 1.1 floating point values should have a dot ('.') in their mantissa.
See the Floating-Point Language-Independent Type for YAML™ Version 1.1 specification
( http://yaml.org/type/float.html ).
This dot is not required for JSON nor for YAML 1.2 Correct your float: "{}" on line: {}, column: {} or alternatively include the following in your code: import warnings warnings.simplefilter('ignore', strictyaml.ruamel.error.MantissaNoDotYAML1_1Warning) """.format( self.flt, line, col ) warnings.simplefilter("once", MantissaNoDotYAML1_1Warning) class YAMLFutureWarning(Warning): pass class MarkedYAMLFutureWarning(YAMLFutureWarning): def __init__( self, context=None, context_mark=None, problem=None, problem_mark=None, note=None, warn=None, ): # type: (Any, Any, Any, Any, Any, Any) -> None self.context = context self.context_mark = context_mark self.problem = problem self.problem_mark = problem_mark self.note = note self.warn = warn def __str__(self): # type: () -> Any lines = [] # type: List[str] if self.context is not None: lines.append(self.context) if self.context_mark is not None and ( self.problem is None or self.problem_mark is None or self.context_mark.name != self.problem_mark.name or self.context_mark.line != self.problem_mark.line or self.context_mark.column != self.problem_mark.column ): lines.append(str(self.context_mark)) if self.problem is not None: lines.append(self.problem) if self.problem_mark is not None: lines.append(str(self.problem_mark)) if self.note is not None and self.note: note = textwrap.dedent(self.note) lines.append(note) if self.warn is not None and self.warn: warn = textwrap.dedent(self.warn) lines.append(warn) return "\n".join(lines) strictyaml-1.6.1/strictyaml/ruamel/events.py000066400000000000000000000075241415074265300212670ustar00rootroot00000000000000# coding: utf-8 # Abstract classes. 
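The `StringMark.get_snippet` method in `error.py` above builds the two-line error context ("snippet plus caret") that marked YAML errors print. A standalone sketch of that logic (not an import of the vendored module; `get_snippet` here is a free function rewritten from the method, with the plain `"^"` caret rather than the `"^ (line: N)"` variant the vendored copy substitutes):

```python
def get_snippet(buffer, pointer, indent=4, max_length=75):
    # Walk left from the error pointer to the start of the line (or until
    # the line would exceed half of max_length, in which case truncate
    # with ' ... ' and back off 5 characters), then do the same rightward.
    line_ends = "\0\r\n\x85\u2028\u2029"
    head = ""
    start = pointer
    while start > 0 and buffer[start - 1] not in line_ends:
        start -= 1
        if pointer - start > max_length / 2 - 1:
            head = " ... "
            start += 5
            break
    tail = ""
    end = pointer
    while end < len(buffer) and buffer[end] not in line_ends:
        end += 1
        if end - pointer > max_length / 2 - 1:
            tail = " ... "
            end -= 5
            break
    snippet = buffer[start:end]
    # Second line: spaces up to the error column, then a caret.
    return (
        " " * indent + head + snippet + tail + "\n"
        + " " * (indent + pointer - start + len(head)) + "^"
    )


print(get_snippet("key: [1, 2\n", 5))
```

For a short line this prints the line indented four spaces with the caret under the character at the pointer; for lines longer than `max_length` the `' ... '` head/tail truncation kicks in on both sides.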
if False:  # MYPY
    from typing import Any, Dict, Optional, List  # NOQA


def CommentCheck():
    # type: () -> None
    pass


class Event(object):
    __slots__ = "start_mark", "end_mark", "comment"

    def __init__(self, start_mark=None, end_mark=None, comment=CommentCheck):
        # type: (Any, Any, Any) -> None
        self.start_mark = start_mark
        self.end_mark = end_mark
        # assert comment is not CommentCheck
        if comment is CommentCheck:
            comment = None
        self.comment = comment

    def __repr__(self):
        # type: () -> Any
        attributes = [
            key
            for key in ["anchor", "tag", "implicit", "value", "flow_style", "style"]
            if hasattr(self, key)
        ]
        arguments = ", ".join(
            ["%s=%r" % (key, getattr(self, key)) for key in attributes]
        )
        if self.comment not in [None, CommentCheck]:
            arguments += ", comment={!r}".format(self.comment)
        return "%s(%s)" % (self.__class__.__name__, arguments)


class NodeEvent(Event):
    __slots__ = ("anchor",)

    def __init__(self, anchor, start_mark=None, end_mark=None, comment=None):
        # type: (Any, Any, Any, Any) -> None
        Event.__init__(self, start_mark, end_mark, comment)
        self.anchor = anchor


class CollectionStartEvent(NodeEvent):
    __slots__ = "tag", "implicit", "flow_style", "nr_items"

    def __init__(
        self,
        anchor,
        tag,
        implicit,
        start_mark=None,
        end_mark=None,
        flow_style=None,
        comment=None,
        nr_items=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any, Any, Optional[int]) -> None
        NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
        self.tag = tag
        self.implicit = implicit
        self.flow_style = flow_style
        self.nr_items = nr_items


class CollectionEndEvent(Event):
    __slots__ = ()


# Implementations.
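The `Event.__repr__` above only prints the attributes a given subclass actually defines (it probes a fixed candidate list with `hasattr`, which works because each subclass declares its own `__slots__`). A minimal standalone sketch of that pattern, using a `ScalarEvent`-shaped subclass as the illustration (simplified: no marks, comments, or `CommentCheck` sentinel):

```python
class Event:
    __slots__ = ()

    def __repr__(self):
        # Only attributes present on this particular subclass appear.
        attributes = [
            key
            for key in ["anchor", "tag", "implicit", "value", "flow_style", "style"]
            if hasattr(self, key)
        ]
        arguments = ", ".join("%s=%r" % (key, getattr(self, key)) for key in attributes)
        return "%s(%s)" % (self.__class__.__name__, arguments)


class NodeEvent(Event):
    __slots__ = ("anchor",)

    def __init__(self, anchor):
        self.anchor = anchor


class ScalarEvent(NodeEvent):
    __slots__ = "tag", "implicit", "value", "style"

    def __init__(self, anchor, tag, implicit, value, style=None):
        NodeEvent.__init__(self, anchor)
        self.tag = tag
        self.implicit = implicit
        self.value = value
        self.style = style


print(repr(ScalarEvent("a", "!str", True, "hello")))
```

A bare `Event()` reprs as `Event()` with no arguments, while the scalar subclass picks up `anchor`, `tag`, `implicit`, `value`, and `style` automatically — no per-subclass `__repr__` needed.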
class StreamStartEvent(Event):
    __slots__ = ("encoding",)

    def __init__(self, start_mark=None, end_mark=None, encoding=None, comment=None):
        # type: (Any, Any, Any, Any) -> None
        Event.__init__(self, start_mark, end_mark, comment)
        self.encoding = encoding


class StreamEndEvent(Event):
    __slots__ = ()


class DocumentStartEvent(Event):
    __slots__ = "explicit", "version", "tags"

    def __init__(
        self,
        start_mark=None,
        end_mark=None,
        explicit=None,
        version=None,
        tags=None,
        comment=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any) -> None
        Event.__init__(self, start_mark, end_mark, comment)
        self.explicit = explicit
        self.version = version
        self.tags = tags


class DocumentEndEvent(Event):
    __slots__ = ("explicit",)

    def __init__(self, start_mark=None, end_mark=None, explicit=None, comment=None):
        # type: (Any, Any, Any, Any) -> None
        Event.__init__(self, start_mark, end_mark, comment)
        self.explicit = explicit


class AliasEvent(NodeEvent):
    __slots__ = ()


class ScalarEvent(NodeEvent):
    __slots__ = "tag", "implicit", "value", "style"

    def __init__(
        self,
        anchor,
        tag,
        implicit,
        value,
        start_mark=None,
        end_mark=None,
        style=None,
        comment=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any, Any, Any) -> None
        NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
        self.tag = tag
        self.implicit = implicit
        self.value = value
        self.style = style


class SequenceStartEvent(CollectionStartEvent):
    __slots__ = ()


class SequenceEndEvent(CollectionEndEvent):
    __slots__ = ()


class MappingStartEvent(CollectionStartEvent):
    __slots__ = ()


class MappingEndEvent(CollectionEndEvent):
    __slots__ = ()


strictyaml-1.6.1/strictyaml/ruamel/loader.py
# coding: utf-8

from __future__ import absolute_import

from strictyaml.ruamel.reader import Reader
from strictyaml.ruamel.scanner import Scanner, RoundTripScanner
from strictyaml.ruamel.parser import Parser, RoundTripParser
from strictyaml.ruamel.composer import Composer
from strictyaml.ruamel.constructor import (
    BaseConstructor,
    SafeConstructor,
    Constructor,
    RoundTripConstructor,
)
from strictyaml.ruamel.resolver import VersionedResolver

if False:  # MYPY
    from typing import Any, Dict, List, Union, Optional  # NOQA
    from strictyaml.ruamel.compat import StreamTextType, VersionType  # NOQA

__all__ = ["BaseLoader", "SafeLoader", "Loader", "RoundTripLoader"]


class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, VersionedResolver):
    def __init__(self, stream, version=None, preserve_quotes=None):
        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
        Parser.__init__(self, loader=self)
        Composer.__init__(self, loader=self)
        BaseConstructor.__init__(self, loader=self)
        VersionedResolver.__init__(self, version, loader=self)


class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, VersionedResolver):
    def __init__(self, stream, version=None, preserve_quotes=None):
        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
        Parser.__init__(self, loader=self)
        Composer.__init__(self, loader=self)
        SafeConstructor.__init__(self, loader=self)
        VersionedResolver.__init__(self, version, loader=self)


class Loader(Reader, Scanner, Parser, Composer, Constructor, VersionedResolver):
    def __init__(self, stream, version=None, preserve_quotes=None):
        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
        Parser.__init__(self, loader=self)
        Composer.__init__(self, loader=self)
        Constructor.__init__(self, loader=self)
        VersionedResolver.__init__(self, version, loader=self)


class RoundTripLoader(
    Reader,
    RoundTripScanner,
    RoundTripParser,
    Composer,
    RoundTripConstructor,
    VersionedResolver,
):
    def __init__(self, stream, version=None, preserve_quotes=None):
        # type:
(StreamTextType, Optional[VersionType], Optional[bool]) -> None # self.reader = Reader.__init__(self, stream) Reader.__init__(self, stream, loader=self) RoundTripScanner.__init__(self, loader=self) RoundTripParser.__init__(self, loader=self) Composer.__init__(self, loader=self) RoundTripConstructor.__init__( self, preserve_quotes=preserve_quotes, loader=self ) VersionedResolver.__init__(self, version, loader=self) strictyaml-1.6.1/strictyaml/ruamel/main.py000066400000000000000000001533011415074265300207020ustar00rootroot00000000000000# coding: utf-8 from __future__ import absolute_import, unicode_literals, print_function import sys import os import warnings import glob from importlib import import_module import strictyaml.ruamel from strictyaml.ruamel.error import UnsafeLoaderWarning, YAMLError # NOQA from strictyaml.ruamel.tokens import * # NOQA from strictyaml.ruamel.events import * # NOQA from strictyaml.ruamel.nodes import * # NOQA from strictyaml.ruamel.loader import ( BaseLoader, SafeLoader, Loader, RoundTripLoader, ) # NOQA from strictyaml.ruamel.dumper import ( BaseDumper, SafeDumper, Dumper, RoundTripDumper, ) # NOQA from strictyaml.ruamel.compat import StringIO, BytesIO, with_metaclass, PY3, nprint from strictyaml.ruamel.resolver import VersionedResolver, Resolver # NOQA from strictyaml.ruamel.representer import ( BaseRepresenter, SafeRepresenter, Representer, RoundTripRepresenter, ) from strictyaml.ruamel.constructor import ( BaseConstructor, SafeConstructor, Constructor, RoundTripConstructor, ) from strictyaml.ruamel.loader import Loader as UnsafeLoader if False: # MYPY from typing import List, Set, Dict, Union, Any, Callable, Optional, Text # NOQA from strictyaml.ruamel.compat import StreamType, StreamTextType, VersionType # NOQA if PY3: from pathlib import Path else: Path = Any try: from _ruamel_yaml import CParser, CEmitter # type: ignore except: # NOQA CParser = CEmitter = None # import io enforce = object() # YAML is an acronym, i.e. 
spoken: rhymes with "camel". And thus a # subset of abbreviations, which should be all caps according to PEP8 class YAML(object): def __init__( self, _kw=enforce, typ=None, pure=False, output=None, plug_ins=None, # input=None, ): # type: (Any, Optional[Text], Any, Any, Any) -> None """ _kw: not used, forces keyword arguments in 2.7 (in 3 you can do (*, safe_load=..) typ: 'rt'/None -> RoundTripLoader/RoundTripDumper, (default) 'safe' -> SafeLoader/SafeDumper, 'unsafe' -> normal/unsafe Loader/Dumper 'base' -> baseloader pure: if True only use Python modules input/output: needed to work as context manager plug_ins: a list of plug-in files """ if _kw is not enforce: raise TypeError( "{}.__init__() takes no positional argument but at least " "one was given ({!r})".format(self.__class__.__name__, _kw) ) self.typ = ["rt"] if typ is None else (typ if isinstance(typ, list) else [typ]) self.pure = pure # self._input = input self._output = output self._context_manager = None # type: Any self.plug_ins = [] # type: List[Any] for pu in ([] if plug_ins is None else plug_ins) + self.official_plug_ins(): file_name = pu.replace(os.sep, ".") self.plug_ins.append(import_module(file_name)) self.Resolver = strictyaml.ruamel.resolver.VersionedResolver # type: Any self.allow_unicode = True self.Reader = None # type: Any self.Representer = None # type: Any self.Constructor = None # type: Any self.Scanner = None # type: Any self.Serializer = None # type: Any self.default_flow_style = None # type: Any typ_found = 1 setup_rt = False if "rt" in self.typ: setup_rt = True elif "safe" in self.typ: self.Emitter = ( strictyaml.ruamel.emitter.Emitter if pure or CEmitter is None else CEmitter ) self.Representer = strictyaml.ruamel.representer.SafeRepresenter self.Parser = ( strictyaml.ruamel.parser.Parser if pure or CParser is None else CParser ) self.Composer = strictyaml.ruamel.composer.Composer self.Constructor = strictyaml.ruamel.constructor.SafeConstructor elif "base" in self.typ: self.Emitter = 
strictyaml.ruamel.emitter.Emitter self.Representer = strictyaml.ruamel.representer.BaseRepresenter self.Parser = ( strictyaml.ruamel.parser.Parser if pure or CParser is None else CParser ) self.Composer = strictyaml.ruamel.composer.Composer self.Constructor = strictyaml.ruamel.constructor.BaseConstructor elif "unsafe" in self.typ: self.Emitter = ( strictyaml.ruamel.emitter.Emitter if pure or CEmitter is None else CEmitter ) self.Representer = strictyaml.ruamel.representer.Representer self.Parser = ( strictyaml.ruamel.parser.Parser if pure or CParser is None else CParser ) self.Composer = strictyaml.ruamel.composer.Composer self.Constructor = strictyaml.ruamel.constructor.Constructor else: setup_rt = True typ_found = 0 if setup_rt: self.default_flow_style = False # no optimized rt-dumper yet self.Emitter = strictyaml.ruamel.emitter.Emitter self.Serializer = strictyaml.ruamel.serializer.Serializer self.Representer = strictyaml.ruamel.representer.RoundTripRepresenter self.Scanner = strictyaml.ruamel.scanner.RoundTripScanner # no optimized rt-parser yet self.Parser = strictyaml.ruamel.parser.RoundTripParser self.Composer = strictyaml.ruamel.composer.Composer self.Constructor = strictyaml.ruamel.constructor.RoundTripConstructor del setup_rt self.stream = None self.canonical = None self.old_indent = None self.width = None self.line_break = None self.map_indent = None self.sequence_indent = None self.sequence_dash_offset = 0 self.compact_seq_seq = None self.compact_seq_map = None self.sort_base_mapping_type_on_output = None # default: sort self.top_level_colon_align = None self.prefix_colon = None self.version = None self.preserve_quotes = None self.allow_duplicate_keys = False # duplicate keys in map, set self.encoding = "utf-8" self.explicit_start = None self.explicit_end = None self.tags = None self.default_style = None self.top_level_block_style_scalar_no_indent_error_1_1 = False # directives end indicator with single scalar document self.scalar_after_indicator = None 
# [a, b: 1, c: {d: 2}] vs. [a, {b: 1}, {c: {d: 2}}] self.brace_single_entry_mapping_in_flow_sequence = False for module in self.plug_ins: if getattr(module, "typ", None) in self.typ: typ_found += 1 module.init_typ(self) break if typ_found == 0: raise NotImplementedError( 'typ "{}"not recognised (need to install plug-in?)'.format(self.typ) ) @property def reader(self): # type: () -> Any try: return self._reader # type: ignore except AttributeError: self._reader = self.Reader(None, loader=self) return self._reader @property def scanner(self): # type: () -> Any try: return self._scanner # type: ignore except AttributeError: self._scanner = self.Scanner(loader=self) return self._scanner @property def parser(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): if self.Parser is not CParser: setattr(self, attr, self.Parser(loader=self)) else: if getattr(self, "_stream", None) is None: # wait for the stream return None else: # if not hasattr(self._stream, 'read') and hasattr(self._stream, 'open'): # # pathlib.Path() instance # setattr(self, attr, CParser(self._stream)) # else: setattr(self, attr, CParser(self._stream)) # self._parser = self._composer = self # nprint('scanner', self.loader.scanner) return getattr(self, attr) @property def composer(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr(self, attr, self.Composer(loader=self)) return getattr(self, attr) @property def constructor(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): cnst = self.Constructor(preserve_quotes=self.preserve_quotes, loader=self) cnst.allow_duplicate_keys = self.allow_duplicate_keys setattr(self, attr, cnst) return getattr(self, attr) @property def resolver(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr(self, attr, self.Resolver(version=self.version, loader=self)) return getattr(self, attr) 
@property def emitter(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): if self.Emitter is not CEmitter: _emitter = self.Emitter( None, canonical=self.canonical, indent=self.old_indent, width=self.width, allow_unicode=self.allow_unicode, line_break=self.line_break, prefix_colon=self.prefix_colon, brace_single_entry_mapping_in_flow_sequence=self.brace_single_entry_mapping_in_flow_sequence, # NOQA dumper=self, ) setattr(self, attr, _emitter) if self.map_indent is not None: _emitter.best_map_indent = self.map_indent if self.sequence_indent is not None: _emitter.best_sequence_indent = self.sequence_indent if self.sequence_dash_offset is not None: _emitter.sequence_dash_offset = self.sequence_dash_offset # _emitter.block_seq_indent = self.sequence_dash_offset if self.compact_seq_seq is not None: _emitter.compact_seq_seq = self.compact_seq_seq if self.compact_seq_map is not None: _emitter.compact_seq_map = self.compact_seq_map else: if getattr(self, "_stream", None) is None: # wait for the stream return None return None return getattr(self, attr) @property def serializer(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr( self, attr, self.Serializer( encoding=self.encoding, explicit_start=self.explicit_start, explicit_end=self.explicit_end, version=self.version, tags=self.tags, dumper=self, ), ) return getattr(self, attr) @property def representer(self): # type: () -> Any attr = "_" + sys._getframe().f_code.co_name if not hasattr(self, attr): repres = self.Representer( default_style=self.default_style, default_flow_style=self.default_flow_style, dumper=self, ) if self.sort_base_mapping_type_on_output is not None: repres.sort_base_mapping_type_on_output = ( self.sort_base_mapping_type_on_output ) setattr(self, attr, repres) return getattr(self, attr) # separate output resolver? 
# def load(self, stream=None): # if self._context_manager: # if not self._input: # raise TypeError("Missing input stream while dumping from context manager") # for data in self._context_manager.load(): # yield data # return # if stream is None: # raise TypeError("Need a stream argument when not loading from context manager") # return self.load_one(stream) def load(self, stream): # type: (Union[Path, StreamTextType]) -> Any """ at this point you either have the non-pure Parser (which has its own reader and scanner) or you have the pure Parser. If the pure Parser is set, then set the Reader and Scanner, if not already set. If either the Scanner or Reader are set, you cannot use the non-pure Parser, so reset it to the pure parser and set the Reader resp. Scanner if necessary """ if not hasattr(stream, "read") and hasattr(stream, "open"): # pathlib.Path() instance with stream.open("rb") as fp: return self.load(fp) constructor, parser = self.get_constructor_parser(stream) try: return constructor.get_single_data() finally: parser.dispose() try: self._reader.reset_reader() except AttributeError: pass try: self._scanner.reset_scanner() except AttributeError: pass def load_all(self, stream, _kw=enforce): # , skip=None): # type: (Union[Path, StreamTextType], Any) -> Any if _kw is not enforce: raise TypeError( "{}.__init__() takes no positional argument but at least " "one was given ({!r})".format(self.__class__.__name__, _kw) ) if not hasattr(stream, "read") and hasattr(stream, "open"): # pathlib.Path() instance with stream.open("r") as fp: for d in self.load_all(fp, _kw=enforce): yield d return # if skip is None: # skip = [] # elif isinstance(skip, int): # skip = [skip] constructor, parser = self.get_constructor_parser(stream) try: while constructor.check_data(): yield constructor.get_data() finally: parser.dispose() try: self._reader.reset_reader() except AttributeError: pass try: self._scanner.reset_scanner() except AttributeError: pass def get_constructor_parser(self, 
stream): # type: (StreamTextType) -> Any """ the old cyaml needs special setup, and therefore the stream """ if self.Parser is not CParser: if self.Reader is None: self.Reader = strictyaml.ruamel.reader.Reader if self.Scanner is None: self.Scanner = strictyaml.ruamel.scanner.Scanner self.reader.stream = stream else: if self.Reader is not None: if self.Scanner is None: self.Scanner = strictyaml.ruamel.scanner.Scanner self.Parser = strictyaml.ruamel.parser.Parser self.reader.stream = stream elif self.Scanner is not None: if self.Reader is None: self.Reader = strictyaml.ruamel.reader.Reader self.Parser = strictyaml.ruamel.parser.Parser self.reader.stream = stream else: # combined C level reader>scanner>parser # does some calls to the resolver, e.g. BaseResolver.descend_resolver # if you just initialise the CParser, to much of resolver.py # is actually used rslvr = self.Resolver # if rslvr is strictyaml.ruamel.resolver.VersionedResolver: # rslvr = strictyaml.ruamel.resolver.Resolver class XLoader(self.Parser, self.Constructor, rslvr): # type: ignore def __init__( selfx, stream, version=self.version, preserve_quotes=None ): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None # NOQA CParser.__init__(selfx, stream) selfx._parser = selfx._composer = selfx self.Constructor.__init__(selfx, loader=selfx) selfx.allow_duplicate_keys = self.allow_duplicate_keys rslvr.__init__(selfx, version=version, loadumper=selfx) self._stream = stream loader = XLoader(stream) return loader, loader return self.constructor, self.parser def dump(self, data, stream=None, _kw=enforce, transform=None): # type: (Any, Union[Path, StreamType], Any, Any) -> Any if self._context_manager: if not self._output: raise TypeError( "Missing output stream while dumping from context manager" ) if _kw is not enforce: raise TypeError( "{}.dump() takes one positional argument but at least " "two were given ({!r})".format(self.__class__.__name__, _kw) ) if transform is not None: raise TypeError( 
"{}.dump() in the context manager cannot have transform keyword " "".format(self.__class__.__name__) ) self._context_manager.dump(data) else: # old style if stream is None: raise TypeError( "Need a stream argument when not dumping from context manager" ) return self.dump_all([data], stream, _kw, transform=transform) def dump_all(self, documents, stream, _kw=enforce, transform=None): # type: (Any, Union[Path, StreamType], Any, Any) -> Any if self._context_manager: raise NotImplementedError if _kw is not enforce: raise TypeError( "{}.dump(_all) takes two positional argument but at least " "three were given ({!r})".format(self.__class__.__name__, _kw) ) self._output = stream self._context_manager = YAMLContextManager(self, transform=transform) for data in documents: self._context_manager.dump(data) self._context_manager.teardown_output() self._output = None self._context_manager = None def Xdump_all(self, documents, stream, _kw=enforce, transform=None): # type: (Any, Union[Path, StreamType], Any, Any) -> Any """ Serialize a sequence of Python objects into a YAML stream. """ if not hasattr(stream, "write") and hasattr(stream, "open"): # pathlib.Path() instance with stream.open("w") as fp: return self.dump_all(documents, fp, _kw, transform=transform) if _kw is not enforce: raise TypeError( "{}.dump(_all) takes two positional argument but at least " "three were given ({!r})".format(self.__class__.__name__, _kw) ) # The stream should have the methods `write` and possibly `flush`. 
if self.top_level_colon_align is True: tlca = max([len(str(x)) for x in documents[0]]) # type: Any else: tlca = self.top_level_colon_align if transform is not None: fstream = stream if self.encoding is None: stream = StringIO() else: stream = BytesIO() serializer, representer, emitter = self.get_serializer_representer_emitter( stream, tlca ) try: self.serializer.open() for data in documents: try: self.representer.represent(data) except AttributeError: # nprint(dir(dumper._representer)) raise self.serializer.close() finally: try: self.emitter.dispose() except AttributeError: raise # self.dumper.dispose() # cyaml delattr(self, "_serializer") delattr(self, "_emitter") if transform: val = stream.getvalue() if self.encoding: val = val.decode(self.encoding) if fstream is None: transform(val) else: fstream.write(transform(val)) return None def get_serializer_representer_emitter(self, stream, tlca): # type: (StreamType, Any) -> Any # we have only .Serializer to deal with (vs .Reader & .Scanner), much simpler if self.Emitter is not CEmitter: if self.Serializer is None: self.Serializer = strictyaml.ruamel.serializer.Serializer self.emitter.stream = stream self.emitter.top_level_colon_align = tlca if self.scalar_after_indicator is not None: self.emitter.scalar_after_indicator = self.scalar_after_indicator return self.serializer, self.representer, self.emitter if self.Serializer is not None: # cannot set serializer with CEmitter self.Emitter = strictyaml.ruamel.emitter.Emitter self.emitter.stream = stream self.emitter.top_level_colon_align = tlca if self.scalar_after_indicator is not None: self.emitter.scalar_after_indicator = self.scalar_after_indicator return self.serializer, self.representer, self.emitter # C routines rslvr = ( strictyaml.ruamel.resolver.BaseResolver if "base" in self.typ else strictyaml.ruamel.resolver.Resolver ) class XDumper(CEmitter, self.Representer, rslvr): # type: ignore def __init__( selfx, stream, default_style=None, default_flow_style=None, 
canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=None, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA CEmitter.__init__( selfx, stream, canonical=canonical, indent=indent, width=width, encoding=encoding, allow_unicode=allow_unicode, line_break=line_break, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, ) selfx._emitter = selfx._serializer = selfx._representer = selfx self.Representer.__init__( selfx, default_style=default_style, default_flow_style=default_flow_style, ) rslvr.__init__(selfx) self._stream = stream dumper = XDumper( stream, default_style=self.default_style, default_flow_style=self.default_flow_style, canonical=self.canonical, indent=self.old_indent, width=self.width, allow_unicode=self.allow_unicode, line_break=self.line_break, explicit_start=self.explicit_start, explicit_end=self.explicit_end, version=self.version, tags=self.tags, ) self._emitter = self._serializer = dumper return dumper, dumper, dumper # basic types def map(self, **kw): # type: (Any) -> Any if "rt" in self.typ: from strictyaml.ruamel.comments import CommentedMap return CommentedMap(**kw) else: return dict(**kw) def seq(self, *args): # type: (Any) -> Any if "rt" in self.typ: from strictyaml.ruamel.comments import CommentedSeq return CommentedSeq(*args) else: return list(*args) # helpers def official_plug_ins(self): # type: () -> Any bd = os.path.dirname(__file__) gpbd = os.path.dirname(os.path.dirname(bd)) res = [x.replace(gpbd, "")[1:-3] for x in glob.glob(bd + "/*/__plug_in__.py")] return res def register_class(self, cls): # type:(Any) -> Any """ register a class for dumping loading - if it has attribute yaml_tag use that to register, else 
use class name - if it has methods to_yaml/from_yaml use those to dump/load else dump attributes as mapping """ tag = getattr(cls, "yaml_tag", "!" + cls.__name__) try: self.representer.add_representer(cls, cls.to_yaml) except AttributeError: def t_y(representer, data): # type: (Any, Any) -> Any return representer.represent_yaml_object( tag, data, cls, flow_style=representer.default_flow_style ) self.representer.add_representer(cls, t_y) try: self.constructor.add_constructor(tag, cls.from_yaml) except AttributeError: def f_y(constructor, node): # type: (Any, Any) -> Any return constructor.construct_yaml_object(node, cls) self.constructor.add_constructor(tag, f_y) return cls def parse(self, stream): # type: (StreamTextType) -> Any """ Parse a YAML stream and produce parsing events. """ _, parser = self.get_constructor_parser(stream) try: while parser.check_event(): yield parser.get_event() finally: parser.dispose() try: self._reader.reset_reader() except AttributeError: pass try: self._scanner.reset_scanner() except AttributeError: pass # ### context manager def __enter__(self): # type: () -> Any self._context_manager = YAMLContextManager(self) return self def __exit__(self, typ, value, traceback): # type: (Any, Any, Any) -> None if typ: nprint("typ", typ) self._context_manager.teardown_output() # self._context_manager.teardown_input() self._context_manager = None # ### backwards compatibility def _indent(self, mapping=None, sequence=None, offset=None): # type: (Any, Any, Any) -> None if mapping is not None: self.map_indent = mapping if sequence is not None: self.sequence_indent = sequence if offset is not None: self.sequence_dash_offset = offset @property def indent(self): # type: () -> Any return self._indent @indent.setter def indent(self, val): # type: (Any) -> None self.old_indent = val @property def block_seq_indent(self): # type: () -> Any return self.sequence_dash_offset @block_seq_indent.setter def block_seq_indent(self, val): # type: (Any) -> None 
self.sequence_dash_offset = val def compact(self, seq_seq=None, seq_map=None): # type: (Any, Any) -> None self.compact_seq_seq = seq_seq self.compact_seq_map = seq_map class YAMLContextManager(object): def __init__(self, yaml, transform=None): # type: (Any, Any) -> None # used to be: (Any, Optional[Callable]) -> None self._yaml = yaml self._output_inited = False self._output_path = None self._output = self._yaml._output self._transform = transform # self._input_inited = False # self._input = input # self._input_path = None # self._transform = yaml.transform # self._fstream = None if not hasattr(self._output, "write") and hasattr(self._output, "open"): # pathlib.Path() instance, open with the same mode self._output_path = self._output self._output = self._output_path.open("w") # if not hasattr(self._stream, 'write') and hasattr(stream, 'open'): # if not hasattr(self._input, 'read') and hasattr(self._input, 'open'): # # pathlib.Path() instance, open with the same mode # self._input_path = self._input # self._input = self._input_path.open('r') if self._transform is not None: self._fstream = self._output if self._yaml.encoding is None: self._output = StringIO() else: self._output = BytesIO() def teardown_output(self): # type: () -> None if self._output_inited: self._yaml.serializer.close() else: return try: self._yaml.emitter.dispose() except AttributeError: raise # self.dumper.dispose() # cyaml try: delattr(self._yaml, "_serializer") delattr(self._yaml, "_emitter") except AttributeError: raise if self._transform: val = self._output.getvalue() if self._yaml.encoding: val = val.decode(self._yaml.encoding) if self._fstream is None: self._transform(val) else: self._fstream.write(self._transform(val)) self._fstream.flush() self._output = self._fstream # maybe not necessary if self._output_path is not None: self._output.close() def init_output(self, first_data): # type: (Any) -> None if self._yaml.top_level_colon_align is True: tlca = max([len(str(x)) for x in first_data]) 
# type: Any else: tlca = self._yaml.top_level_colon_align self._yaml.get_serializer_representer_emitter(self._output, tlca) self._yaml.serializer.open() self._output_inited = True def dump(self, data): # type: (Any) -> None if not self._output_inited: self.init_output(data) try: self._yaml.representer.represent(data) except AttributeError: # nprint(dir(dumper._representer)) raise # def teardown_input(self): # pass # # def init_input(self): # # set the constructor and parser on YAML() instance # self._yaml.get_constructor_parser(stream) # # def load(self): # if not self._input_inited: # self.init_input() # try: # while self._yaml.constructor.check_data(): # yield self._yaml.constructor.get_data() # finally: # parser.dispose() # try: # self._reader.reset_reader() # type: ignore # except AttributeError: # pass # try: # self._scanner.reset_scanner() # type: ignore # except AttributeError: # pass def yaml_object(yml): # type: (Any) -> Any """decorator for classes that needs to dump/load objects The tag for such objects is taken from the class attribute yaml_tag (or the class name in lowercase in case unavailable) If methods to_yaml and/or from_yaml are available, these are called for dumping resp. loading, default routines (dumping a mapping of the attributes) used otherwise. """ def yo_deco(cls): # type: (Any) -> Any tag = getattr(cls, "yaml_tag", "!" 
+ cls.__name__) try: yml.representer.add_representer(cls, cls.to_yaml) except AttributeError: def t_y(representer, data): # type: (Any, Any) -> Any return representer.represent_yaml_object( tag, data, cls, flow_style=representer.default_flow_style ) yml.representer.add_representer(cls, t_y) try: yml.constructor.add_constructor(tag, cls.from_yaml) except AttributeError: def f_y(constructor, node): # type: (Any, Any) -> Any return constructor.construct_yaml_object(node, cls) yml.constructor.add_constructor(tag, f_y) return cls return yo_deco ######################################################################################## def scan(stream, Loader=Loader): # type: (StreamTextType, Any) -> Any """ Scan a YAML stream and produce scanning tokens. """ loader = Loader(stream) try: while loader.scanner.check_token(): yield loader.scanner.get_token() finally: loader._parser.dispose() def parse(stream, Loader=Loader): # type: (StreamTextType, Any) -> Any """ Parse a YAML stream and produce parsing events. """ loader = Loader(stream) try: while loader._parser.check_event(): yield loader._parser.get_event() finally: loader._parser.dispose() def compose(stream, Loader=Loader): # type: (StreamTextType, Any) -> Any """ Parse the first YAML document in a stream and produce the corresponding representation tree. """ loader = Loader(stream) try: return loader.get_single_node() finally: loader.dispose() def compose_all(stream, Loader=Loader): # type: (StreamTextType, Any) -> Any """ Parse all YAML documents in a stream and produce corresponding representation trees. """ loader = Loader(stream) try: while loader.check_node(): yield loader._composer.get_node() finally: loader._parser.dispose() def load(stream, Loader=None, version=None, preserve_quotes=None): # type: (StreamTextType, Any, Optional[VersionType], Any) -> Any """ Parse the first YAML document in a stream and produce the corresponding Python object. 
""" if Loader is None: warnings.warn(UnsafeLoaderWarning.text, UnsafeLoaderWarning, stacklevel=2) Loader = UnsafeLoader loader = Loader(stream, version, preserve_quotes=preserve_quotes) try: return loader._constructor.get_single_data() finally: loader._parser.dispose() try: loader._reader.reset_reader() except AttributeError: pass try: loader._scanner.reset_scanner() except AttributeError: pass def load_all(stream, Loader=None, version=None, preserve_quotes=None): # type: (Optional[StreamTextType], Any, Optional[VersionType], Optional[bool]) -> Any # NOQA """ Parse all YAML documents in a stream and produce corresponding Python objects. """ if Loader is None: warnings.warn(UnsafeLoaderWarning.text, UnsafeLoaderWarning, stacklevel=2) Loader = UnsafeLoader loader = Loader(stream, version, preserve_quotes=preserve_quotes) try: while loader._constructor.check_data(): yield loader._constructor.get_data() finally: loader._parser.dispose() try: loader._reader.reset_reader() except AttributeError: pass try: loader._scanner.reset_scanner() except AttributeError: pass def safe_load(stream, version=None): # type: (StreamTextType, Optional[VersionType]) -> Any """ Parse the first YAML document in a stream and produce the corresponding Python object. Resolve only basic YAML tags. """ return load(stream, SafeLoader, version) def safe_load_all(stream, version=None): # type: (StreamTextType, Optional[VersionType]) -> Any """ Parse all YAML documents in a stream and produce corresponding Python objects. Resolve only basic YAML tags. """ return load_all(stream, SafeLoader, version) def round_trip_load(stream, version=None, preserve_quotes=None): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> Any """ Parse the first YAML document in a stream and produce the corresponding Python object. Resolve only basic YAML tags. 
""" return load(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes) def round_trip_load_all(stream, version=None, preserve_quotes=None): # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> Any """ Parse all YAML documents in a stream and produce corresponding Python objects. Resolve only basic YAML tags. """ return load_all(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes) def emit( events, stream=None, Dumper=Dumper, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, ): # type: (Any, Optional[StreamType], Any, Optional[bool], Union[int, None], Optional[int], Optional[bool], Any) -> Any # NOQA """ Emit YAML parsing events into a stream. If stream is None, return the produced string instead. """ getvalue = None if stream is None: stream = StringIO() getvalue = stream.getvalue dumper = Dumper( stream, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, ) try: for event in events: dumper.emit(event) finally: try: dumper._emitter.dispose() except AttributeError: raise dumper.dispose() # cyaml if getvalue is not None: return getvalue() enc = None if PY3 else "utf-8" def serialize_all( nodes, stream=None, Dumper=Dumper, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=enc, explicit_start=None, explicit_end=None, version=None, tags=None, ): # type: (Any, Optional[StreamType], Any, Any, Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any) -> Any # NOQA """ Serialize a sequence of representation trees into a YAML stream. If stream is None, return the produced string instead. 
""" getvalue = None if stream is None: if encoding is None: stream = StringIO() else: stream = BytesIO() getvalue = stream.getvalue dumper = Dumper( stream, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, encoding=encoding, version=version, tags=tags, explicit_start=explicit_start, explicit_end=explicit_end, ) try: dumper._serializer.open() for node in nodes: dumper.serialize(node) dumper._serializer.close() finally: try: dumper._emitter.dispose() except AttributeError: raise dumper.dispose() # cyaml if getvalue is not None: return getvalue() def serialize(node, stream=None, Dumper=Dumper, **kwds): # type: (Any, Optional[StreamType], Any, Any) -> Any """ Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead. """ return serialize_all([node], stream, Dumper=Dumper, **kwds) def dump_all( documents, stream=None, Dumper=Dumper, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=enc, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> Optional[str] # NOQA """ Serialize a sequence of Python objects into a YAML stream. If stream is None, return the produced string instead. 
""" getvalue = None if top_level_colon_align is True: top_level_colon_align = max([len(str(x)) for x in documents[0]]) if stream is None: if encoding is None: stream = StringIO() else: stream = BytesIO() getvalue = stream.getvalue dumper = Dumper( stream, default_style=default_style, default_flow_style=default_flow_style, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, block_seq_indent=block_seq_indent, top_level_colon_align=top_level_colon_align, prefix_colon=prefix_colon, ) try: dumper._serializer.open() for data in documents: try: dumper._representer.represent(data) except AttributeError: # nprint(dir(dumper._representer)) raise dumper._serializer.close() finally: try: dumper._emitter.dispose() except AttributeError: raise dumper.dispose() # cyaml if getvalue is not None: return getvalue() return None def dump( data, stream=None, Dumper=Dumper, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=enc, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, ): # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any) -> Optional[str] # NOQA """ Serialize a Python object into a YAML stream. If stream is None, return the produced string instead. 
default_style ∈ None, '', '"', "'", '|', '>' """ return dump_all( [data], stream, Dumper=Dumper, default_style=default_style, default_flow_style=default_flow_style, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, block_seq_indent=block_seq_indent, ) def safe_dump_all(documents, stream=None, **kwds): # type: (Any, Optional[StreamType], Any) -> Optional[str] """ Serialize a sequence of Python objects into a YAML stream. Produce only basic YAML tags. If stream is None, return the produced string instead. """ return dump_all(documents, stream, Dumper=SafeDumper, **kwds) def safe_dump(data, stream=None, **kwds): # type: (Any, Optional[StreamType], Any) -> Optional[str] """ Serialize a Python object into a YAML stream. Produce only basic YAML tags. If stream is None, return the produced string instead. """ return dump_all([data], stream, Dumper=SafeDumper, **kwds) def round_trip_dump( data, stream=None, Dumper=RoundTripDumper, default_style=None, default_flow_style=None, canonical=None, indent=None, width=None, allow_unicode=None, line_break=None, encoding=enc, explicit_start=None, explicit_end=None, version=None, tags=None, block_seq_indent=None, top_level_colon_align=None, prefix_colon=None, ): # type: (Any, Optional[StreamType], Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any, Any, Any) -> Optional[str] # NOQA allow_unicode = True if allow_unicode is None else allow_unicode return dump_all( [data], stream, Dumper=Dumper, default_style=default_style, default_flow_style=default_flow_style, canonical=canonical, indent=indent, width=width, allow_unicode=allow_unicode, line_break=line_break, encoding=encoding, explicit_start=explicit_start, explicit_end=explicit_end, version=version, tags=tags, 
block_seq_indent=block_seq_indent, top_level_colon_align=top_level_colon_align, prefix_colon=prefix_colon, ) # Loader/Dumper are no longer composites, to get to the associated # Resolver()/Representer(), etc., you need to instantiate the class def add_implicit_resolver( tag, regexp, first=None, Loader=None, Dumper=None, resolver=Resolver ): # type: (Any, Any, Any, Any, Any, Any) -> None """ Add an implicit scalar detector. If an implicit scalar value matches the given regexp, the corresponding tag is assigned to the scalar. first is a sequence of possible initial characters or None. """ if Loader is None and Dumper is None: resolver.add_implicit_resolver(tag, regexp, first) return if Loader: if hasattr(Loader, "add_implicit_resolver"): Loader.add_implicit_resolver(tag, regexp, first) elif issubclass( Loader, (BaseLoader, SafeLoader, strictyaml.ruamel.loader.Loader, RoundTripLoader), ): Resolver.add_implicit_resolver(tag, regexp, first) else: raise NotImplementedError if Dumper: if hasattr(Dumper, "add_implicit_resolver"): Dumper.add_implicit_resolver(tag, regexp, first) elif issubclass( Dumper, (BaseDumper, SafeDumper, strictyaml.ruamel.dumper.Dumper, RoundTripDumper), ): Resolver.add_implicit_resolver(tag, regexp, first) else: raise NotImplementedError # this code currently not tested def add_path_resolver( tag, path, kind=None, Loader=None, Dumper=None, resolver=Resolver ): # type: (Any, Any, Any, Any, Any, Any) -> None """ Add a path based resolver for the given tag. A path is a list of keys that forms a path to a node in the representation tree. Keys can be string values, integers, or None. 
""" if Loader is None and Dumper is None: resolver.add_path_resolver(tag, path, kind) return if Loader: if hasattr(Loader, "add_path_resolver"): Loader.add_path_resolver(tag, path, kind) elif issubclass( Loader, (BaseLoader, SafeLoader, strictyaml.ruamel.loader.Loader, RoundTripLoader), ): Resolver.add_path_resolver(tag, path, kind) else: raise NotImplementedError if Dumper: if hasattr(Dumper, "add_path_resolver"): Dumper.add_path_resolver(tag, path, kind) elif issubclass( Dumper, (BaseDumper, SafeDumper, strictyaml.ruamel.dumper.Dumper, RoundTripDumper), ): Resolver.add_path_resolver(tag, path, kind) else: raise NotImplementedError def add_constructor(tag, object_constructor, Loader=None, constructor=Constructor): # type: (Any, Any, Any, Any) -> None """ Add an object constructor for the given tag. object_onstructor is a function that accepts a Loader instance and a node object and produces the corresponding Python object. """ if Loader is None: constructor.add_constructor(tag, object_constructor) else: if hasattr(Loader, "add_constructor"): Loader.add_constructor(tag, object_constructor) return if issubclass(Loader, BaseLoader): BaseConstructor.add_constructor(tag, object_constructor) elif issubclass(Loader, SafeLoader): SafeConstructor.add_constructor(tag, object_constructor) elif issubclass(Loader, Loader): Constructor.add_constructor(tag, object_constructor) elif issubclass(Loader, RoundTripLoader): RoundTripConstructor.add_constructor(tag, object_constructor) else: raise NotImplementedError def add_multi_constructor( tag_prefix, multi_constructor, Loader=None, constructor=Constructor ): # type: (Any, Any, Any, Any) -> None """ Add a multi-constructor for the given tag prefix. Multi-constructor is called for a node if its tag starts with tag_prefix. Multi-constructor accepts a Loader instance, a tag suffix, and a node object and produces the corresponding Python object. 
""" if Loader is None: constructor.add_multi_constructor(tag_prefix, multi_constructor) else: if False and hasattr(Loader, "add_multi_constructor"): Loader.add_multi_constructor(tag_prefix, constructor) return if issubclass(Loader, BaseLoader): BaseConstructor.add_multi_constructor(tag_prefix, multi_constructor) elif issubclass(Loader, SafeLoader): SafeConstructor.add_multi_constructor(tag_prefix, multi_constructor) elif issubclass(Loader, strictyaml.ruamel.loader.Loader): Constructor.add_multi_constructor(tag_prefix, multi_constructor) elif issubclass(Loader, RoundTripLoader): RoundTripConstructor.add_multi_constructor(tag_prefix, multi_constructor) else: raise NotImplementedError def add_representer( data_type, object_representer, Dumper=None, representer=Representer ): # type: (Any, Any, Any, Any) -> None """ Add a representer for the given type. object_representer is a function accepting a Dumper instance and an instance of the given data type and producing the corresponding representation node. """ if Dumper is None: representer.add_representer(data_type, object_representer) else: if hasattr(Dumper, "add_representer"): Dumper.add_representer(data_type, object_representer) return if issubclass(Dumper, BaseDumper): BaseRepresenter.add_representer(data_type, object_representer) elif issubclass(Dumper, SafeDumper): SafeRepresenter.add_representer(data_type, object_representer) elif issubclass(Dumper, Dumper): Representer.add_representer(data_type, object_representer) elif issubclass(Dumper, RoundTripDumper): RoundTripRepresenter.add_representer(data_type, object_representer) else: raise NotImplementedError # this code currently not tested def add_multi_representer( data_type, multi_representer, Dumper=None, representer=Representer ): # type: (Any, Any, Any, Any) -> None """ Add a representer for the given type. 
multi_representer is a function accepting a Dumper instance and an instance of the given data type or subtype and producing the corresponding representation node. """ if Dumper is None: representer.add_multi_representer(data_type, multi_representer) else: if hasattr(Dumper, "add_multi_representer"): Dumper.add_multi_representer(data_type, multi_representer) return if issubclass(Dumper, BaseDumper): BaseRepresenter.add_multi_representer(data_type, multi_representer) elif issubclass(Dumper, SafeDumper): SafeRepresenter.add_multi_representer(data_type, multi_representer) elif issubclass(Dumper, Dumper): Representer.add_multi_representer(data_type, multi_representer) elif issubclass(Dumper, RoundTripDumper): RoundTripRepresenter.add_multi_representer(data_type, multi_representer) else: raise NotImplementedError class YAMLObjectMetaclass(type): """ The metaclass for YAMLObject. """ def __init__(cls, name, bases, kwds): # type: (Any, Any, Any) -> None super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds) if "yaml_tag" in kwds and kwds["yaml_tag"] is not None: cls.yaml_constructor.add_constructor(cls.yaml_tag, cls.from_yaml) # type: ignore cls.yaml_representer.add_representer(cls, cls.to_yaml) # type: ignore class YAMLObject(with_metaclass(YAMLObjectMetaclass)): # type: ignore """ An object that can dump itself to a YAML stream and load itself from a YAML stream. """ __slots__ = () # no direct instantiation, so allow immutable subclasses yaml_constructor = Constructor yaml_representer = Representer yaml_tag = None # type: Any yaml_flow_style = None # type: Any @classmethod def from_yaml(cls, constructor, node): # type: (Any, Any) -> Any """ Convert a representation node to a Python object. """ return constructor.construct_yaml_object(node, cls) @classmethod def to_yaml(cls, representer, data): # type: (Any, Any) -> Any """ Convert a Python object to a representation node. 
""" return representer.represent_yaml_object( cls.yaml_tag, data, cls, flow_style=cls.yaml_flow_style ) strictyaml-1.6.1/strictyaml/ruamel/nodes.py000066400000000000000000000073711415074265300210730ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function import sys from .compat import string_types if False: # MYPY from typing import Dict, Any, Text # NOQA class Node(object): __slots__ = "tag", "value", "start_mark", "end_mark", "comment", "anchor" def __init__(self, tag, value, start_mark, end_mark, comment=None, anchor=None): # type: (Any, Any, Any, Any, Any, Any) -> None self.tag = tag self.value = value self.start_mark = start_mark self.end_mark = end_mark self.comment = comment self.anchor = anchor def __repr__(self): # type: () -> str value = self.value # if isinstance(value, list): # if len(value) == 0: # value = '' # elif len(value) == 1: # value = '<1 item>' # else: # value = '<%d items>' % len(value) # else: # if len(value) > 75: # value = repr(value[:70]+u' ... ') # else: # value = repr(value) value = repr(value) return "%s(tag=%r, value=%s)" % (self.__class__.__name__, self.tag, value) def dump(self, indent=0): # type: (int) -> None if isinstance(self.value, string_types): sys.stdout.write( "{}{}(tag={!r}, value={!r})\n".format( " " * indent, self.__class__.__name__, self.tag, self.value ) ) if self.comment: sys.stdout.write( " {}comment: {})\n".format(" " * indent, self.comment) ) return sys.stdout.write( "{}{}(tag={!r})\n".format(" " * indent, self.__class__.__name__, self.tag) ) if self.comment: sys.stdout.write(" {}comment: {})\n".format(" " * indent, self.comment)) for v in self.value: if isinstance(v, tuple): for v1 in v: v1.dump(indent + 1) elif isinstance(v, Node): v.dump(indent + 1) else: sys.stdout.write("Node value type? {}\n".format(type(v))) class ScalarNode(Node): """ styles: ? -> set() ? 
      key, no value
      " -> double quoted
      ' -> single quoted
      | -> literal style
      > -> folding style
    """

    __slots__ = ("style",)
    id = "scalar"

    def __init__(
        self,
        tag,
        value,
        start_mark=None,
        end_mark=None,
        style=None,
        comment=None,
        anchor=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
        Node.__init__(
            self, tag, value, start_mark, end_mark, comment=comment, anchor=anchor
        )
        self.style = style


class CollectionNode(Node):
    __slots__ = ("flow_style",)

    def __init__(
        self,
        tag,
        value,
        start_mark=None,
        end_mark=None,
        flow_style=None,
        comment=None,
        anchor=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
        Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
        self.flow_style = flow_style
        self.anchor = anchor


class SequenceNode(CollectionNode):
    __slots__ = ()
    id = "sequence"


class MappingNode(CollectionNode):
    __slots__ = ("merge",)
    id = "mapping"

    def __init__(
        self,
        tag,
        value,
        start_mark=None,
        end_mark=None,
        flow_style=None,
        comment=None,
        anchor=None,
    ):
        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
        CollectionNode.__init__(
            self, tag, value, start_mark, end_mark, flow_style, comment, anchor
        )
        self.merge = None


# ==== strictyaml-1.6.1/strictyaml/ruamel/parser.py ====

# coding: utf-8

from __future__ import absolute_import

# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
#
# stream            ::= STREAM-START implicit_document? explicit_document*
#                       STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
#                       ALIAS
#                       | properties (block_content |
#                                     indentless_block_sequence)?
#                       | block_content
#                       | indentless_block_sequence
# block_node        ::= ALIAS
#                       | properties block_content?
#                       | block_content
# flow_node         ::= ALIAS
#                       | properties flow_content?
#                       | flow_content
# properties        ::= TAG ANCHOR? | ANCHOR TAG?
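# --- Illustrative aside (not part of this module): a stdlib-only sketch of
# the LL(1) peek/get token discipline the grammar above implies.  A recursive
# descent parser for an LL(1) grammar only ever inspects the next token
# before committing to a production; the class name below is hypothetical,
# standing in for the Scanner interface the Parser consumes.


class _SketchTokenStream(object):
    def __init__(self, tokens):
        self._tokens = list(tokens)
        self._index = 0

    def check_token(self, *choices):
        # True if a next token exists and (no kinds were requested, or it is
        # one of the requested kinds) -- mirrors how check_token() is used.
        if self._index >= len(self._tokens):
            return False
        return not choices or self._tokens[self._index] in choices

    def peek_token(self):
        # Look at the next token without consuming it (repeatable).
        return self._tokens[self._index]

    def get_token(self):
        # Consume and return the next token.
        token = self._tokens[self._index]
        self._index += 1
        return token

# With tokens ["STREAM-START", "SCALAR", "STREAM-END"], check_token()
# answers kind queries, peek_token() is repeatable, and get_token()
# advances -- the three operations the parser states below rely on.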
# block_content ::= block_collection | flow_collection | SCALAR # flow_content ::= flow_collection | SCALAR # block_collection ::= block_sequence | block_mapping # flow_collection ::= flow_sequence | flow_mapping # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* # BLOCK-END # indentless_sequence ::= (BLOCK-ENTRY block_node?)+ # block_mapping ::= BLOCK-MAPPING_START # ((KEY block_node_or_indentless_sequence?)? # (VALUE block_node_or_indentless_sequence?)?)* # BLOCK-END # flow_sequence ::= FLOW-SEQUENCE-START # (flow_sequence_entry FLOW-ENTRY)* # flow_sequence_entry? # FLOW-SEQUENCE-END # flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? # flow_mapping ::= FLOW-MAPPING-START # (flow_mapping_entry FLOW-ENTRY)* # flow_mapping_entry? # FLOW-MAPPING-END # flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? # # FIRST sets: # # stream: { STREAM-START } # explicit_document: { DIRECTIVE DOCUMENT-START } # implicit_document: FIRST(block_node) # block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START # BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START } # flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START } # block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START # FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR } # flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR } # block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START } # flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START } # block_sequence: { BLOCK-SEQUENCE-START } # block_mapping: { BLOCK-MAPPING-START } # block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR # BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START # FLOW-MAPPING-START BLOCK-ENTRY } # indentless_sequence: { ENTRY } # flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START } # flow_sequence: { FLOW-SEQUENCE-START } # flow_mapping: { FLOW-MAPPING-START } # flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START 
# FLOW-MAPPING-START KEY } # flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START # FLOW-MAPPING-START KEY } # need to have full path with import, as pkg_resources tries to load parser.py in __init__.py # only to not do anything with the package afterwards # and for Jython too from strictyaml.ruamel.error import MarkedYAMLError from strictyaml.ruamel.tokens import * # NOQA from strictyaml.ruamel.events import * # NOQA from strictyaml.ruamel.scanner import Scanner, RoundTripScanner, ScannerError # NOQA from strictyaml.ruamel.compat import utf8, nprint, nprintf # NOQA if False: # MYPY from typing import Any, Dict, Optional, List # NOQA __all__ = ["Parser", "RoundTripParser", "ParserError"] class ParserError(MarkedYAMLError): pass class Parser(object): # Since writing a recursive-descendant parser is a straightforward task, we # do not give many comments here. DEFAULT_TAGS = {u"!": u"!", u"!!": u"tag:yaml.org,2002:"} def __init__(self, loader): # type: (Any) -> None self.loader = loader if self.loader is not None and getattr(self.loader, "_parser", None) is None: self.loader._parser = self self.reset_parser() def reset_parser(self): # type: () -> None # Reset the state attributes (to clear self-references) self.current_event = None self.tag_handles = {} # type: Dict[Any, Any] self.states = [] # type: List[Any] self.marks = [] # type: List[Any] self.state = self.parse_stream_start # type: Any def dispose(self): # type: () -> None self.reset_parser() @property def scanner(self): # type: () -> Any if hasattr(self.loader, "typ"): return self.loader.scanner return self.loader._scanner @property def resolver(self): # type: () -> Any if hasattr(self.loader, "typ"): return self.loader.resolver return self.loader._resolver def check_event(self, *choices): # type: (Any) -> bool # Check the type of the next event. 
if self.current_event is None: if self.state: self.current_event = self.state() if self.current_event is not None: if not choices: return True for choice in choices: if isinstance(self.current_event, choice): return True return False def peek_event(self): # type: () -> Any # Get the next event. if self.current_event is None: if self.state: self.current_event = self.state() return self.current_event def get_event(self): # type: () -> Any # Get the next event and proceed further. if self.current_event is None: if self.state: self.current_event = self.state() value = self.current_event self.current_event = None return value # stream ::= STREAM-START implicit_document? explicit_document* # STREAM-END # implicit_document ::= block_node DOCUMENT-END* # explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END* def parse_stream_start(self): # type: () -> Any # Parse the stream start. token = self.scanner.get_token() token.move_comment(self.scanner.peek_token()) event = StreamStartEvent( token.start_mark, token.end_mark, encoding=token.encoding ) # Prepare the next state. self.state = self.parse_implicit_document_start return event def parse_implicit_document_start(self): # type: () -> Any # Parse an implicit document. if not self.scanner.check_token( DirectiveToken, DocumentStartToken, StreamEndToken ): self.tag_handles = self.DEFAULT_TAGS token = self.scanner.peek_token() start_mark = end_mark = token.start_mark event = DocumentStartEvent(start_mark, end_mark, explicit=False) # Prepare the next state. self.states.append(self.parse_document_end) self.state = self.parse_block_node return event else: return self.parse_document_start() def parse_document_start(self): # type: () -> Any # Parse any extra document end indicators. while self.scanner.check_token(DocumentEndToken): self.scanner.get_token() # Parse an explicit document. 
        if not self.scanner.check_token(StreamEndToken):
            token = self.scanner.peek_token()
            start_mark = token.start_mark
            version, tags = self.process_directives()
            if not self.scanner.check_token(DocumentStartToken):
                raise ParserError(
                    None,
                    None,
                    "expected '<document start>', but found %r"
                    % self.scanner.peek_token().id,
                    self.scanner.peek_token().start_mark,
                )
            token = self.scanner.get_token()
            end_mark = token.end_mark
            # if self.loader is not None and \
            #    end_mark.line != self.scanner.peek_token().start_mark.line:
            #     self.loader.scalar_after_indicator = False
            event = DocumentStartEvent(
                start_mark, end_mark, explicit=True, version=version, tags=tags
            )  # type: Any
            self.states.append(self.parse_document_end)
            self.state = self.parse_document_content
        else:
            # Parse the end of the stream.
            token = self.scanner.get_token()
            event = StreamEndEvent(
                token.start_mark, token.end_mark, comment=token.comment
            )
            assert not self.states
            assert not self.marks
            self.state = None
        return event

    def parse_document_end(self):
        # type: () -> Any
        # Parse the document end.
        token = self.scanner.peek_token()
        start_mark = end_mark = token.start_mark
        explicit = False
        if self.scanner.check_token(DocumentEndToken):
            token = self.scanner.get_token()
            end_mark = token.end_mark
            explicit = True
        event = DocumentEndEvent(start_mark, end_mark, explicit=explicit)
        # Prepare the next state.
if self.resolver.processing_version == (1, 1): self.state = self.parse_document_start else: self.state = self.parse_implicit_document_start return event def parse_document_content(self): # type: () -> Any if self.scanner.check_token( DirectiveToken, DocumentStartToken, DocumentEndToken, StreamEndToken ): event = self.process_empty_scalar(self.scanner.peek_token().start_mark) self.state = self.states.pop() return event else: return self.parse_block_node() def process_directives(self): # type: () -> Any yaml_version = None self.tag_handles = {} while self.scanner.check_token(DirectiveToken): token = self.scanner.get_token() if token.name == u"YAML": if yaml_version is not None: raise ParserError( None, None, "found duplicate YAML directive", token.start_mark ) major, minor = token.value if major != 1: raise ParserError( None, None, "found incompatible YAML document (version 1.* is " "required)", token.start_mark, ) yaml_version = token.value elif token.name == u"TAG": handle, prefix = token.value if handle in self.tag_handles: raise ParserError( None, None, "duplicate tag handle %r" % utf8(handle), token.start_mark, ) self.tag_handles[handle] = prefix if bool(self.tag_handles): value = yaml_version, self.tag_handles.copy() # type: Any else: value = yaml_version, None if self.loader is not None and hasattr(self.loader, "tags"): self.loader.version = yaml_version if self.loader.tags is None: self.loader.tags = {} for k in self.tag_handles: self.loader.tags[k] = self.tag_handles[k] for key in self.DEFAULT_TAGS: if key not in self.tag_handles: self.tag_handles[key] = self.DEFAULT_TAGS[key] return value # block_node_or_indentless_sequence ::= ALIAS # | properties (block_content | indentless_block_sequence)? # | block_content # | indentless_block_sequence # block_node ::= ALIAS # | properties block_content? # | block_content # flow_node ::= ALIAS # | properties flow_content? # | flow_content # properties ::= TAG ANCHOR? | ANCHOR TAG? 
# block_content ::= block_collection | flow_collection | SCALAR # flow_content ::= flow_collection | SCALAR # block_collection ::= block_sequence | block_mapping # flow_collection ::= flow_sequence | flow_mapping def parse_block_node(self): # type: () -> Any return self.parse_node(block=True) def parse_flow_node(self): # type: () -> Any return self.parse_node() def parse_block_node_or_indentless_sequence(self): # type: () -> Any return self.parse_node(block=True, indentless_sequence=True) def transform_tag(self, handle, suffix): # type: (Any, Any) -> Any return self.tag_handles[handle] + suffix def parse_node(self, block=False, indentless_sequence=False): # type: (bool, bool) -> Any if self.scanner.check_token(AliasToken): token = self.scanner.get_token() event = AliasEvent( token.value, token.start_mark, token.end_mark ) # type: Any self.state = self.states.pop() return event anchor = None tag = None start_mark = end_mark = tag_mark = None if self.scanner.check_token(AnchorToken): token = self.scanner.get_token() start_mark = token.start_mark end_mark = token.end_mark anchor = token.value if self.scanner.check_token(TagToken): token = self.scanner.get_token() tag_mark = token.start_mark end_mark = token.end_mark tag = token.value elif self.scanner.check_token(TagToken): token = self.scanner.get_token() start_mark = tag_mark = token.start_mark end_mark = token.end_mark tag = token.value if self.scanner.check_token(AnchorToken): token = self.scanner.get_token() start_mark = tag_mark = token.start_mark end_mark = token.end_mark anchor = token.value if tag is not None: handle, suffix = tag if handle is not None: if handle not in self.tag_handles: raise ParserError( "while parsing a node", start_mark, "found undefined tag handle %r" % utf8(handle), tag_mark, ) tag = self.transform_tag(handle, suffix) else: tag = suffix # if tag == u'!': # raise ParserError("while parsing a node", start_mark, # "found non-specific tag '!'", tag_mark, # "Please check 
'http://pyyaml.org/wiki/YAMLNonSpecificTag' # and share your opinion.") if start_mark is None: start_mark = end_mark = self.scanner.peek_token().start_mark event = None implicit = tag is None or tag == u"!" if indentless_sequence and self.scanner.check_token(BlockEntryToken): comment = None pt = self.scanner.peek_token() if pt.comment and pt.comment[0]: comment = [pt.comment[0], []] pt.comment[0] = None end_mark = self.scanner.peek_token().end_mark event = SequenceStartEvent( anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment, ) self.state = self.parse_indentless_sequence_entry return event if self.scanner.check_token(ScalarToken): token = self.scanner.get_token() # self.scanner.peek_token_same_line_comment(token) end_mark = token.end_mark if (token.plain and tag is None) or tag == u"!": implicit = (True, False) elif tag is None: implicit = (False, True) else: implicit = (False, False) # nprint('se', token.value, token.comment) event = ScalarEvent( anchor, tag, implicit, token.value, start_mark, end_mark, style=token.style, comment=token.comment, ) self.state = self.states.pop() elif self.scanner.check_token(FlowSequenceStartToken): pt = self.scanner.peek_token() end_mark = pt.end_mark event = SequenceStartEvent( anchor, tag, implicit, start_mark, end_mark, flow_style=True, comment=pt.comment, ) self.state = self.parse_flow_sequence_first_entry elif self.scanner.check_token(FlowMappingStartToken): pt = self.scanner.peek_token() end_mark = pt.end_mark event = MappingStartEvent( anchor, tag, implicit, start_mark, end_mark, flow_style=True, comment=pt.comment, ) self.state = self.parse_flow_mapping_first_key elif block and self.scanner.check_token(BlockSequenceStartToken): end_mark = self.scanner.peek_token().start_mark # should inserting the comment be dependent on the # indentation? 
pt = self.scanner.peek_token() comment = pt.comment # nprint('pt0', type(pt)) if comment is None or comment[1] is None: comment = pt.split_comment() # nprint('pt1', comment) event = SequenceStartEvent( anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment, ) self.state = self.parse_block_sequence_first_entry elif block and self.scanner.check_token(BlockMappingStartToken): end_mark = self.scanner.peek_token().start_mark comment = self.scanner.peek_token().comment event = MappingStartEvent( anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment, ) self.state = self.parse_block_mapping_first_key elif anchor is not None or tag is not None: # Empty scalars are allowed even if a tag or an anchor is # specified. event = ScalarEvent( anchor, tag, (implicit, False), "", start_mark, end_mark ) self.state = self.states.pop() else: if block: node = "block" else: node = "flow" token = self.scanner.peek_token() raise ParserError( "while parsing a %s node" % node, start_mark, "expected the node content, but found %r" % token.id, token.start_mark, ) return event # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* # BLOCK-END def parse_block_sequence_first_entry(self): # type: () -> Any token = self.scanner.get_token() # move any comment from start token # token.move_comment(self.scanner.peek_token()) self.marks.append(token.start_mark) return self.parse_block_sequence_entry() def parse_block_sequence_entry(self): # type: () -> Any if self.scanner.check_token(BlockEntryToken): token = self.scanner.get_token() token.move_comment(self.scanner.peek_token()) if not self.scanner.check_token(BlockEntryToken, BlockEndToken): self.states.append(self.parse_block_sequence_entry) return self.parse_block_node() else: self.state = self.parse_block_sequence_entry return self.process_empty_scalar(token.end_mark) if not self.scanner.check_token(BlockEndToken): token = self.scanner.peek_token() raise ParserError( "while parsing a 
block collection", self.marks[-1], "expected <block end>, but found %r" % token.id, token.start_mark, ) token = self.scanner.get_token() # BlockEndToken event = SequenceEndEvent( token.start_mark, token.end_mark, comment=token.comment ) self.state = self.states.pop() self.marks.pop() return event # indentless_sequence ::= (BLOCK-ENTRY block_node?)+ # indentless_sequence? # sequence: # - entry # - nested def parse_indentless_sequence_entry(self): # type: () -> Any if self.scanner.check_token(BlockEntryToken): token = self.scanner.get_token() token.move_comment(self.scanner.peek_token()) if not self.scanner.check_token( BlockEntryToken, KeyToken, ValueToken, BlockEndToken ): self.states.append(self.parse_indentless_sequence_entry) return self.parse_block_node() else: self.state = self.parse_indentless_sequence_entry return self.process_empty_scalar(token.end_mark) token = self.scanner.peek_token() event = SequenceEndEvent( token.start_mark, token.start_mark, comment=token.comment ) self.state = self.states.pop() return event # block_mapping ::= BLOCK-MAPPING_START # ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)* # BLOCK-END def parse_block_mapping_first_key(self): # type: () -> Any token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_block_mapping_key() def parse_block_mapping_key(self): # type: () -> Any if self.scanner.check_token(KeyToken): token = self.scanner.get_token() token.move_comment(self.scanner.peek_token()) if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken): self.states.append(self.parse_block_mapping_value) return self.parse_block_node_or_indentless_sequence() else: self.state = self.parse_block_mapping_value return self.process_empty_scalar(token.end_mark) if self.resolver.processing_version > (1, 1) and self.scanner.check_token( ValueToken ): self.state = self.parse_block_mapping_value return self.process_empty_scalar(self.scanner.peek_token().start_mark) if not self.scanner.check_token(BlockEndToken): token = self.scanner.peek_token() raise ParserError( "while parsing a block mapping", self.marks[-1], "expected <block end>, but found %r" % token.id, token.start_mark, ) token = self.scanner.get_token() token.move_comment(self.scanner.peek_token()) event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment) self.state = self.states.pop() self.marks.pop() return event def parse_block_mapping_value(self): # type: () -> Any if self.scanner.check_token(ValueToken): token = self.scanner.get_token() # value token might have post comment move it to e.g.
block if self.scanner.check_token(ValueToken): token.move_comment(self.scanner.peek_token()) else: if not self.scanner.check_token(KeyToken): token.move_comment(self.scanner.peek_token(), empty=True) # else: empty value for this key cannot move token.comment if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken): self.states.append(self.parse_block_mapping_key) return self.parse_block_node_or_indentless_sequence() else: self.state = self.parse_block_mapping_key comment = token.comment if comment is None: token = self.scanner.peek_token() comment = token.comment if comment: token._comment = [None, comment[1]] comment = [comment[0], None] return self.process_empty_scalar(token.end_mark, comment=comment) else: self.state = self.parse_block_mapping_key token = self.scanner.peek_token() return self.process_empty_scalar(token.start_mark) # flow_sequence ::= FLOW-SEQUENCE-START # (flow_sequence_entry FLOW-ENTRY)* # flow_sequence_entry? # FLOW-SEQUENCE-END # flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? # # Note that while production rules for both flow_sequence_entry and # flow_mapping_entry are equal, their interpretations are different. # For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?` # generate an inline mapping (set syntax). 
def parse_flow_sequence_first_entry(self): # type: () -> Any token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_flow_sequence_entry(first=True) def parse_flow_sequence_entry(self, first=False): # type: (bool) -> Any if not self.scanner.check_token(FlowSequenceEndToken): if not first: if self.scanner.check_token(FlowEntryToken): self.scanner.get_token() else: token = self.scanner.peek_token() raise ParserError( "while parsing a flow sequence", self.marks[-1], "expected ',' or ']', but got %r" % token.id, token.start_mark, ) if self.scanner.check_token(KeyToken): token = self.scanner.peek_token() event = MappingStartEvent( None, None, True, token.start_mark, token.end_mark, flow_style=True ) # type: Any self.state = self.parse_flow_sequence_entry_mapping_key return event elif not self.scanner.check_token(FlowSequenceEndToken): self.states.append(self.parse_flow_sequence_entry) return self.parse_flow_node() token = self.scanner.get_token() event = SequenceEndEvent( token.start_mark, token.end_mark, comment=token.comment ) self.state = self.states.pop() self.marks.pop() return event def parse_flow_sequence_entry_mapping_key(self): # type: () -> Any token = self.scanner.get_token() if not self.scanner.check_token( ValueToken, FlowEntryToken, FlowSequenceEndToken ): self.states.append(self.parse_flow_sequence_entry_mapping_value) return self.parse_flow_node() else: self.state = self.parse_flow_sequence_entry_mapping_value return self.process_empty_scalar(token.end_mark) def parse_flow_sequence_entry_mapping_value(self): # type: () -> Any if self.scanner.check_token(ValueToken): token = self.scanner.get_token() if not self.scanner.check_token(FlowEntryToken, FlowSequenceEndToken): self.states.append(self.parse_flow_sequence_entry_mapping_end) return self.parse_flow_node() else: self.state = self.parse_flow_sequence_entry_mapping_end return self.process_empty_scalar(token.end_mark) else: self.state = 
self.parse_flow_sequence_entry_mapping_end token = self.scanner.peek_token() return self.process_empty_scalar(token.start_mark) def parse_flow_sequence_entry_mapping_end(self): # type: () -> Any self.state = self.parse_flow_sequence_entry token = self.scanner.peek_token() return MappingEndEvent(token.start_mark, token.start_mark) # flow_mapping ::= FLOW-MAPPING-START # (flow_mapping_entry FLOW-ENTRY)* # flow_mapping_entry? # FLOW-MAPPING-END # flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? def parse_flow_mapping_first_key(self): # type: () -> Any token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_flow_mapping_key(first=True) def parse_flow_mapping_key(self, first=False): # type: (Any) -> Any if not self.scanner.check_token(FlowMappingEndToken): if not first: if self.scanner.check_token(FlowEntryToken): self.scanner.get_token() else: token = self.scanner.peek_token() raise ParserError( "while parsing a flow mapping", self.marks[-1], "expected ',' or '}', but got %r" % token.id, token.start_mark, ) if self.scanner.check_token(KeyToken): token = self.scanner.get_token() if not self.scanner.check_token( ValueToken, FlowEntryToken, FlowMappingEndToken ): self.states.append(self.parse_flow_mapping_value) return self.parse_flow_node() else: self.state = self.parse_flow_mapping_value return self.process_empty_scalar(token.end_mark) elif self.resolver.processing_version > (1, 1) and self.scanner.check_token( ValueToken ): self.state = self.parse_flow_mapping_value return self.process_empty_scalar(self.scanner.peek_token().end_mark) elif not self.scanner.check_token(FlowMappingEndToken): self.states.append(self.parse_flow_mapping_empty_value) return self.parse_flow_node() token = self.scanner.get_token() event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment) self.state = self.states.pop() self.marks.pop() return event def parse_flow_mapping_value(self): # type: () -> Any if 
self.scanner.check_token(ValueToken): token = self.scanner.get_token() if not self.scanner.check_token(FlowEntryToken, FlowMappingEndToken): self.states.append(self.parse_flow_mapping_key) return self.parse_flow_node() else: self.state = self.parse_flow_mapping_key return self.process_empty_scalar(token.end_mark) else: self.state = self.parse_flow_mapping_key token = self.scanner.peek_token() return self.process_empty_scalar(token.start_mark) def parse_flow_mapping_empty_value(self): # type: () -> Any self.state = self.parse_flow_mapping_key return self.process_empty_scalar(self.scanner.peek_token().start_mark) def process_empty_scalar(self, mark, comment=None): # type: (Any, Any) -> Any return ScalarEvent(None, None, (True, False), "", mark, mark, comment=comment) class RoundTripParser(Parser): """roundtrip is a safe loader, that wants to see the unmangled tag""" def transform_tag(self, handle, suffix): # type: (Any, Any) -> Any # return self.tag_handles[handle]+suffix if handle == "!!" and suffix in ( u"null", u"bool", u"int", u"float", u"binary", u"timestamp", u"omap", u"pairs", u"set", u"str", u"seq", u"map", ): return Parser.transform_tag(self, handle, suffix) return handle + suffix
strictyaml-1.6.1/strictyaml/ruamel/py.typed
strictyaml-1.6.1/strictyaml/ruamel/reader.py
# coding: utf-8 from __future__ import absolute_import # This module contains abstractions for the input stream. You don't have to # look further; there is no pretty code. # # We define two classes here. # # Mark(source, line, column) # It's just a record and its only use is producing nice error messages. # Parser does not use it for any other purposes. # # Reader(source, data) # Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes: # reader.peek(length=1) - return the next `length` characters # reader.forward(length=1) - move the current position to `length` # characters. # reader.index - the number of the current character. # reader.line, stream.column - the line and the column of the current # character. import codecs from strictyaml.ruamel.error import YAMLError, FileMark, StringMark, YAMLStreamError from strictyaml.ruamel.compat import text_type, binary_type, PY3, UNICODE_SIZE from strictyaml.ruamel.util import RegExp if False: # MYPY from typing import Any, Dict, Optional, List, Union, Text, Tuple, Optional # NOQA # from strictyaml.ruamel.compat import StreamTextType # NOQA __all__ = ["Reader", "ReaderError"] class ReaderError(YAMLError): def __init__(self, name, position, character, encoding, reason): # type: (Any, Any, Any, Any, Any) -> None self.name = name self.character = character self.position = position self.encoding = encoding self.reason = reason def __str__(self): # type: () -> str if isinstance(self.character, binary_type): return ( "'%s' codec can't decode byte #x%02x: %s\n" ' in "%s", position %d' % ( self.encoding, ord(self.character), self.reason, self.name, self.position, ) ) else: return "unacceptable character #x%04x: %s\n" ' in "%s", position %d' % ( self.character, self.reason, self.name, self.position, ) class Reader(object): # Reader: # - determines the data encoding and converts it to a unicode string, # - checks if characters are in allowed range, # - adds '\0' to the end. # Reader accepts # - a `str` object (PY2) / a `bytes` object (PY3), # - a `unicode` object (PY2) / a `str` object (PY3), # - a file-like object with its `read` method returning `str`, # - a file-like object with its `read` method returning `unicode`. # Yeah, it's ugly and slow. 
def __init__(self, stream, loader=None): # type: (Any, Any) -> None self.loader = loader if self.loader is not None and getattr(self.loader, "_reader", None) is None: self.loader._reader = self self.reset_reader() self.stream = stream # type: Any # as .read is called def reset_reader(self): # type: () -> None self.name = None # type: Any self.stream_pointer = 0 self.eof = True self.buffer = "" self.pointer = 0 self.raw_buffer = None # type: Any self.raw_decode = None self.encoding = None # type: Optional[Text] self.index = 0 self.line = 0 self.column = 0 @property def stream(self): # type: () -> Any try: return self._stream except AttributeError: raise YAMLStreamError("input stream needs to be specified") @stream.setter def stream(self, val): # type: (Any) -> None if val is None: return self._stream = None if isinstance(val, text_type): self.name = "<unicode string>" self.check_printable(val) self.buffer = val + u"\0" # type: ignore elif isinstance(val, binary_type): self.name = "<byte string>" self.raw_buffer = val self.determine_encoding() else: if not hasattr(val, "read"): raise YAMLStreamError("stream argument needs to have a read() method") self._stream = val self.name = getattr(self.stream, "name", "<file>") self.eof = False self.raw_buffer = None self.determine_encoding() def peek(self, index=0): # type: (int) -> Text try: return self.buffer[self.pointer + index] except IndexError: self.update(index + 1) return self.buffer[self.pointer + index] def prefix(self, length=1): # type: (int) -> Any if self.pointer + length >= len(self.buffer): self.update(length) return self.buffer[self.pointer : self.pointer + length] def forward_1_1(self, length=1): # type: (int) -> None if self.pointer + length + 1 >= len(self.buffer): self.update(length + 1) while length != 0: ch = self.buffer[self.pointer] self.pointer += 1 self.index += 1 if ch in u"\n\x85\u2028\u2029" or ( ch == u"\r" and self.buffer[self.pointer] != u"\n" ): self.line += 1 self.column = 0 elif ch != u"\uFEFF": self.column += 1 length -= 1 def
forward(self, length=1): # type: (int) -> None if self.pointer + length + 1 >= len(self.buffer): self.update(length + 1) while length != 0: ch = self.buffer[self.pointer] self.pointer += 1 self.index += 1 if ch == u"\n" or (ch == u"\r" and self.buffer[self.pointer] != u"\n"): self.line += 1 self.column = 0 elif ch != u"\uFEFF": self.column += 1 length -= 1 def get_mark(self): # type: () -> Any if self.stream is None: return StringMark( self.name, self.index, self.line, self.column, self.buffer, self.pointer ) else: return FileMark(self.name, self.index, self.line, self.column) def determine_encoding(self): # type: () -> None while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2): self.update_raw() if isinstance(self.raw_buffer, binary_type): if self.raw_buffer.startswith(codecs.BOM_UTF16_LE): self.raw_decode = codecs.utf_16_le_decode # type: ignore self.encoding = "utf-16-le" elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE): self.raw_decode = codecs.utf_16_be_decode # type: ignore self.encoding = "utf-16-be" else: self.raw_decode = codecs.utf_8_decode # type: ignore self.encoding = "utf-8" self.update(1) if UNICODE_SIZE == 2: NON_PRINTABLE = RegExp( u"[^\x09\x0A\x0D\x20-\x7E\x85" u"\xA0-\uD7FF" u"\uE000-\uFFFD" u"]" ) else: NON_PRINTABLE = RegExp( u"[^\x09\x0A\x0D\x20-\x7E\x85" u"\xA0-\uD7FF" u"\uE000-\uFFFD" u"\U00010000-\U0010FFFF" u"]" ) _printable_ascii = ("\x09\x0A\x0D" + "".join(map(chr, range(0x20, 0x7F)))).encode( "ascii" ) @classmethod def _get_non_printable_ascii(cls, data): # type: ignore # type: (Text, bytes) -> Optional[Tuple[int, Text]] ascii_bytes = data.encode("ascii") non_printables = ascii_bytes.translate(None, cls._printable_ascii) # type: ignore if not non_printables: return None non_printable = non_printables[:1] return ascii_bytes.index(non_printable), non_printable.decode("ascii") @classmethod def _get_non_printable_regex(cls, data): # type: (Text) -> Optional[Tuple[int, Text]] match = cls.NON_PRINTABLE.search(data) 
if not bool(match): return None return match.start(), match.group() @classmethod def _get_non_printable(cls, data): # type: (Text) -> Optional[Tuple[int, Text]] try: return cls._get_non_printable_ascii(data) # type: ignore except UnicodeEncodeError: return cls._get_non_printable_regex(data) def check_printable(self, data): # type: (Any) -> None non_printable_match = self._get_non_printable(data) if non_printable_match is not None: start, character = non_printable_match position = self.index + (len(self.buffer) - self.pointer) + start raise ReaderError( self.name, position, ord(character), "unicode", "special characters are not allowed", ) def update(self, length): # type: (int) -> None if self.raw_buffer is None: return self.buffer = self.buffer[self.pointer :] self.pointer = 0 while len(self.buffer) < length: if not self.eof: self.update_raw() if self.raw_decode is not None: try: data, converted = self.raw_decode( self.raw_buffer, "strict", self.eof ) except UnicodeDecodeError as exc: if PY3: character = self.raw_buffer[exc.start] else: character = exc.object[exc.start] if self.stream is not None: position = ( self.stream_pointer - len(self.raw_buffer) + exc.start ) elif self.stream is not None: position = ( self.stream_pointer - len(self.raw_buffer) + exc.start ) else: position = exc.start raise ReaderError( self.name, position, character, exc.encoding, exc.reason ) else: data = self.raw_buffer converted = len(data) self.check_printable(data) self.buffer += data self.raw_buffer = self.raw_buffer[converted:] if self.eof: self.buffer += "\0" self.raw_buffer = None break def update_raw(self, size=None): # type: (Optional[int]) -> None if size is None: size = 4096 if PY3 else 1024 data = self.stream.read(size) if self.raw_buffer is None: self.raw_buffer = data else: self.raw_buffer += data self.stream_pointer += len(data) if not data: self.eof = True # try: # import psyco # psyco.bind(Reader) # except ImportError: # pass 
strictyaml-1.6.1/strictyaml/ruamel/representer.py
# coding: utf-8 from __future__ import print_function, absolute_import, division from strictyaml.ruamel.error import * # NOQA from strictyaml.ruamel.nodes import * # NOQA from strictyaml.ruamel.compat import text_type, binary_type, to_unicode, PY2, PY3 from strictyaml.ruamel.compat import ordereddict # type: ignore from strictyaml.ruamel.compat import nprint, nprintf # NOQA from strictyaml.ruamel.scalarstring import ( LiteralScalarString, FoldedScalarString, SingleQuotedScalarString, DoubleQuotedScalarString, PlainScalarString, ) from strictyaml.ruamel.comments import ( CommentedMap, CommentedOrderedMap, CommentedSeq, CommentedKeySeq, CommentedKeyMap, CommentedSet, comment_attrib, merge_attrib, TaggedScalar, ) from strictyaml.ruamel.scalarint import ( ScalarInt, BinaryInt, OctalInt, HexInt, HexCapsInt, ) from strictyaml.ruamel.scalarfloat import ScalarFloat from strictyaml.ruamel.scalarbool import ScalarBoolean from strictyaml.ruamel.timestamp import TimeStamp import datetime import sys import types if PY3: import copyreg import base64 else: import copy_reg as copyreg # type: ignore if False: # MYPY from typing import Dict, List, Any, Union, Text, Optional # NOQA # fmt: off __all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer', 'RepresenterError', 'RoundTripRepresenter'] # fmt: on class RepresenterError(YAMLError): pass if PY2: def get_classobj_bases(cls): # type: (Any) -> Any bases = [cls] for base in cls.__bases__: bases.extend(get_classobj_bases(base)) return bases class BaseRepresenter(object): yaml_representers = {} # type: Dict[Any, Any] yaml_multi_representers = {} # type: Dict[Any, Any] def __init__(self, default_style=None, default_flow_style=None, dumper=None): # type: (Any, Any, Any, Any) -> None self.dumper = dumper if self.dumper is not None: self.dumper._representer = self self.default_style = default_style
self.default_flow_style = default_flow_style self.represented_objects = {} # type: Dict[Any, Any] self.object_keeper = [] # type: List[Any] self.alias_key = None # type: Optional[int] self.sort_base_mapping_type_on_output = True @property def serializer(self): # type: () -> Any try: if hasattr(self.dumper, "typ"): return self.dumper.serializer return self.dumper._serializer except AttributeError: return self # cyaml def represent(self, data): # type: (Any) -> None node = self.represent_data(data) self.serializer.serialize(node) self.represented_objects = {} self.object_keeper = [] self.alias_key = None def represent_data(self, data): # type: (Any) -> Any if self.ignore_aliases(data): self.alias_key = None else: self.alias_key = id(data) if self.alias_key is not None: if self.alias_key in self.represented_objects: node = self.represented_objects[self.alias_key] # if node is None: # raise RepresenterError( # "recursive objects are not allowed: %r" % data) return node # self.represented_objects[alias_key] = None self.object_keeper.append(data) data_types = type(data).__mro__ if PY2: # if type(data) is types.InstanceType: if isinstance(data, types.InstanceType): data_types = get_classobj_bases(data.__class__) + list(data_types) if data_types[0] in self.yaml_representers: node = self.yaml_representers[data_types[0]](self, data) else: for data_type in data_types: if data_type in self.yaml_multi_representers: node = self.yaml_multi_representers[data_type](self, data) break else: if None in self.yaml_multi_representers: node = self.yaml_multi_representers[None](self, data) elif None in self.yaml_representers: node = self.yaml_representers[None](self, data) else: node = ScalarNode(None, text_type(data)) # if alias_key is not None: # self.represented_objects[alias_key] = node return node def represent_key(self, data): # type: (Any) -> Any """ David Fraser: Extract a method to represent keys in mappings, so that a subclass can choose not to quote them (for example) used in 
represent_mapping https://bitbucket.org/davidfraser/pyyaml/commits/d81df6eb95f20cac4a79eed95ae553b5c6f77b8c """ return self.represent_data(data) @classmethod def add_representer(cls, data_type, representer): # type: (Any, Any) -> None if "yaml_representers" not in cls.__dict__: cls.yaml_representers = cls.yaml_representers.copy() cls.yaml_representers[data_type] = representer @classmethod def add_multi_representer(cls, data_type, representer): # type: (Any, Any) -> None if "yaml_multi_representers" not in cls.__dict__: cls.yaml_multi_representers = cls.yaml_multi_representers.copy() cls.yaml_multi_representers[data_type] = representer def represent_scalar(self, tag, value, style=None, anchor=None): # type: (Any, Any, Any, Any) -> Any if style is None: style = self.default_style comment = None if style and style[0] in "|>": comment = getattr(value, "comment", None) if comment: comment = [None, [comment]] node = ScalarNode(tag, value, style=style, comment=comment, anchor=anchor) if self.alias_key is not None: self.represented_objects[self.alias_key] = node return node def represent_sequence(self, tag, sequence, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] node = SequenceNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True for item in sequence: node_item = self.represent_data(item) if not (isinstance(node_item, ScalarNode) and not node_item.style): best_style = False value.append(node_item) if flow_style is None: if self.default_flow_style is not None: node.flow_style = self.default_flow_style else: node.flow_style = best_style return node def represent_omap(self, tag, omap, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] node = SequenceNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True for item_key in omap: item_val = omap[item_key] 
node_item = self.represent_data({item_key: item_val}) # if not (isinstance(node_item, ScalarNode) \ # and not node_item.style): # best_style = False value.append(node_item) if flow_style is None: if self.default_flow_style is not None: node.flow_style = self.default_flow_style else: node.flow_style = best_style return node def represent_mapping(self, tag, mapping, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] node = MappingNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True if hasattr(mapping, "items"): mapping = list(mapping.items()) if self.sort_base_mapping_type_on_output: try: mapping = sorted(mapping) except TypeError: pass for item_key, item_value in mapping: node_key = self.represent_key(item_key) node_value = self.represent_data(item_value) if not (isinstance(node_key, ScalarNode) and not node_key.style): best_style = False if not (isinstance(node_value, ScalarNode) and not node_value.style): best_style = False value.append((node_key, node_value)) if flow_style is None: if self.default_flow_style is not None: node.flow_style = self.default_flow_style else: node.flow_style = best_style return node def ignore_aliases(self, data): # type: (Any) -> bool return False class SafeRepresenter(BaseRepresenter): def ignore_aliases(self, data): # type: (Any) -> bool # https://docs.python.org/3/reference/expressions.html#parenthesized-forms : # "i.e. 
two occurrences of the empty tuple may or may not yield the same object" # so "data is ()" should not be used if data is None or (isinstance(data, tuple) and data == ()): return True if isinstance(data, (binary_type, text_type, bool, int, float)): return True return False def represent_none(self, data): # type: (Any) -> Any return self.represent_scalar(u"tag:yaml.org,2002:null", u"null") if PY3: def represent_str(self, data): # type: (Any) -> Any return self.represent_scalar(u"tag:yaml.org,2002:str", data) def represent_binary(self, data): # type: (Any) -> Any if hasattr(base64, "encodebytes"): data = base64.encodebytes(data).decode("ascii") else: data = base64.encodestring(data).decode("ascii") return self.represent_scalar(u"tag:yaml.org,2002:binary", data, style="|") else: def represent_str(self, data): # type: (Any) -> Any tag = None style = None try: data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" except UnicodeDecodeError: try: data = unicode(data, "utf-8") tag = u"tag:yaml.org,2002:str" except UnicodeDecodeError: data = data.encode("base64") tag = u"tag:yaml.org,2002:binary" style = "|" return self.represent_scalar(tag, data, style=style) def represent_unicode(self, data): # type: (Any) -> Any return self.represent_scalar(u"tag:yaml.org,2002:str", data) def represent_bool(self, data, anchor=None): # type: (Any, Optional[Any]) -> Any try: value = self.dumper.boolean_representation[bool(data)] except AttributeError: if data: value = u"true" else: value = u"false" return self.represent_scalar(u"tag:yaml.org,2002:bool", value, anchor=anchor) def represent_int(self, data): # type: (Any) -> Any return self.represent_scalar(u"tag:yaml.org,2002:int", text_type(data)) if PY2: def represent_long(self, data): # type: (Any) -> Any return self.represent_scalar(u"tag:yaml.org,2002:int", text_type(data)) inf_value = 1e300 while repr(inf_value) != repr(inf_value * inf_value): inf_value *= inf_value def represent_float(self, data): # type: (Any) -> Any if data != 
data or (data == 0.0 and data == 1.0): value = u".nan" elif data == self.inf_value: value = u".inf" elif data == -self.inf_value: value = u"-.inf" else: value = to_unicode(repr(data)).lower() if getattr(self.serializer, "use_version", None) == (1, 1): if u"." not in value and u"e" in value: # Note that in some cases `repr(data)` represents a float number # without the decimal parts. For instance: # >>> repr(1e17) # '1e17' # Unfortunately, this is not a valid float representation according # to the definition of the `!!float` tag in YAML 1.1. We fix # this by adding '.0' before the 'e' symbol. value = value.replace(u"e", u".0e", 1) return self.represent_scalar(u"tag:yaml.org,2002:float", value) def represent_list(self, data): # type: (Any) -> Any # pairs = (len(data) > 0 and isinstance(data, list)) # if pairs: # for item in data: # if not isinstance(item, tuple) or len(item) != 2: # pairs = False # break # if not pairs: return self.represent_sequence(u"tag:yaml.org,2002:seq", data) # value = [] # for item_key, item_value in data: # value.append(self.represent_mapping(u'tag:yaml.org,2002:map', # [(item_key, item_value)])) # return SequenceNode(u'tag:yaml.org,2002:pairs', value) def represent_dict(self, data): # type: (Any) -> Any return self.represent_mapping(u"tag:yaml.org,2002:map", data) def represent_ordereddict(self, data): # type: (Any) -> Any return self.represent_omap(u"tag:yaml.org,2002:omap", data) def represent_set(self, data): # type: (Any) -> Any value = {} # type: Dict[Any, None] for key in data: value[key] = None return self.represent_mapping(u"tag:yaml.org,2002:set", value) def represent_date(self, data): # type: (Any) -> Any value = to_unicode(data.isoformat()) return self.represent_scalar(u"tag:yaml.org,2002:timestamp", value) def represent_datetime(self, data): # type: (Any) -> Any value = to_unicode(data.isoformat(" ")) return self.represent_scalar(u"tag:yaml.org,2002:timestamp", value) def represent_yaml_object(self, tag, data, cls, 
flow_style=None): # type: (Any, Any, Any, Any) -> Any if hasattr(data, "__getstate__"): state = data.__getstate__() else: state = data.__dict__.copy() return self.represent_mapping(tag, state, flow_style=flow_style) def represent_undefined(self, data): # type: (Any) -> None raise RepresenterError("cannot represent an object: %s" % (data,)) SafeRepresenter.add_representer(type(None), SafeRepresenter.represent_none) SafeRepresenter.add_representer(str, SafeRepresenter.represent_str) if PY2: SafeRepresenter.add_representer(unicode, SafeRepresenter.represent_unicode) else: SafeRepresenter.add_representer(bytes, SafeRepresenter.represent_binary) SafeRepresenter.add_representer(bool, SafeRepresenter.represent_bool) SafeRepresenter.add_representer(int, SafeRepresenter.represent_int) if PY2: SafeRepresenter.add_representer(long, SafeRepresenter.represent_long) SafeRepresenter.add_representer(float, SafeRepresenter.represent_float) SafeRepresenter.add_representer(list, SafeRepresenter.represent_list) SafeRepresenter.add_representer(tuple, SafeRepresenter.represent_list) SafeRepresenter.add_representer(dict, SafeRepresenter.represent_dict) SafeRepresenter.add_representer(set, SafeRepresenter.represent_set) SafeRepresenter.add_representer(ordereddict, SafeRepresenter.represent_ordereddict) if sys.version_info >= (2, 7): import collections SafeRepresenter.add_representer( collections.OrderedDict, SafeRepresenter.represent_ordereddict ) SafeRepresenter.add_representer(datetime.date, SafeRepresenter.represent_date) SafeRepresenter.add_representer(datetime.datetime, SafeRepresenter.represent_datetime) SafeRepresenter.add_representer(None, SafeRepresenter.represent_undefined) class Representer(SafeRepresenter): if PY2: def represent_str(self, data): # type: (Any) -> Any tag = None style = None try: data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" except UnicodeDecodeError: try: data = unicode(data, "utf-8") tag = u"tag:yaml.org,2002:python/str" except 
UnicodeDecodeError: data = data.encode("base64") tag = u"tag:yaml.org,2002:binary" style = "|" return self.represent_scalar(tag, data, style=style) def represent_unicode(self, data): # type: (Any) -> Any tag = None try: data.encode("ascii") tag = u"tag:yaml.org,2002:python/unicode" except UnicodeEncodeError: tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data) def represent_long(self, data): # type: (Any) -> Any tag = u"tag:yaml.org,2002:int" if int(data) is not data: tag = u"tag:yaml.org,2002:python/long" return self.represent_scalar(tag, to_unicode(data)) def represent_complex(self, data): # type: (Any) -> Any if data.imag == 0.0: data = u"%r" % data.real elif data.real == 0.0: data = u"%rj" % data.imag elif data.imag > 0: data = u"%r+%rj" % (data.real, data.imag) else: data = u"%r%rj" % (data.real, data.imag) return self.represent_scalar(u"tag:yaml.org,2002:python/complex", data) def represent_tuple(self, data): # type: (Any) -> Any return self.represent_sequence(u"tag:yaml.org,2002:python/tuple", data) def represent_name(self, data): # type: (Any) -> Any try: name = u"%s.%s" % (data.__module__, data.__qualname__) except AttributeError: # probably PY2 name = u"%s.%s" % (data.__module__, data.__name__) return self.represent_scalar(u"tag:yaml.org,2002:python/name:" + name, "") def represent_module(self, data): # type: (Any) -> Any return self.represent_scalar( u"tag:yaml.org,2002:python/module:" + data.__name__, "" ) if PY2: def represent_instance(self, data): # type: (Any) -> Any # For instances of classic classes, we use __getinitargs__ and # __getstate__ to serialize the data. # If data.__getinitargs__ exists, the object must be reconstructed # by calling cls(**args), where args is a tuple returned by # __getinitargs__. Otherwise, the cls.__init__ method should never # be called and the class instance is created by instantiating a # trivial class and assigning to the instance's __class__ variable. 
# If data.__getstate__ exists, it returns the state of the object. # Otherwise, the state of the object is data.__dict__. # We produce either a !!python/object or !!python/object/new node. # If data.__getinitargs__ does not exist and state is a dictionary, # we produce a !!python/object node . Otherwise we produce a # !!python/object/new node. cls = data.__class__ class_name = u"%s.%s" % (cls.__module__, cls.__name__) args = None state = None if hasattr(data, "__getinitargs__"): args = list(data.__getinitargs__()) if hasattr(data, "__getstate__"): state = data.__getstate__() else: state = data.__dict__ if args is None and isinstance(state, dict): return self.represent_mapping( u"tag:yaml.org,2002:python/object:" + class_name, state ) if isinstance(state, dict) and not state: return self.represent_sequence( u"tag:yaml.org,2002:python/object/new:" + class_name, args ) value = {} if bool(args): value["args"] = args value["state"] = state # type: ignore return self.represent_mapping( u"tag:yaml.org,2002:python/object/new:" + class_name, value ) def represent_object(self, data): # type: (Any) -> Any # We use __reduce__ API to save the data. data.__reduce__ returns # a tuple of length 2-5: # (function, args, state, listitems, dictitems) # For reconstructing, we calls function(*args), then set its state, # listitems, and dictitems if they are not None. # A special case is when function.__name__ == '__newobj__'. In this # case we create the object with args[0].__new__(*args). # Another special case is when __reduce__ returns a string - we don't # support it. # We produce a !!python/object, !!python/object/new or # !!python/object/apply node. 
        cls = type(data)
        if cls in copyreg.dispatch_table:
            reduce = copyreg.dispatch_table[cls](data)
        elif hasattr(data, "__reduce_ex__"):
            reduce = data.__reduce_ex__(2)
        elif hasattr(data, "__reduce__"):
            reduce = data.__reduce__()
        else:
            raise RepresenterError("cannot represent object: %r" % (data,))
        reduce = (list(reduce) + [None] * 5)[:5]
        function, args, state, listitems, dictitems = reduce
        args = list(args)
        if state is None:
            state = {}
        if listitems is not None:
            listitems = list(listitems)
        if dictitems is not None:
            dictitems = dict(dictitems)
        if function.__name__ == "__newobj__":
            function = args[0]
            args = args[1:]
            tag = u"tag:yaml.org,2002:python/object/new:"
            newobj = True
        else:
            tag = u"tag:yaml.org,2002:python/object/apply:"
            newobj = False
        try:
            function_name = u"%s.%s" % (function.__module__, function.__qualname__)
        except AttributeError:
            # probably PY2
            function_name = u"%s.%s" % (function.__module__, function.__name__)
        if (
            not args
            and not listitems
            and not dictitems
            and isinstance(state, dict)
            and newobj
        ):
            return self.represent_mapping(
                u"tag:yaml.org,2002:python/object:" + function_name, state
            )
        if not listitems and not dictitems and isinstance(state, dict) and not state:
            return self.represent_sequence(tag + function_name, args)
        value = {}
        if args:
            value["args"] = args
        if state or not isinstance(state, dict):
            value["state"] = state
        if listitems:
            value["listitems"] = listitems
        if dictitems:
            value["dictitems"] = dictitems
        return self.represent_mapping(tag + function_name, value)


if PY2:
    Representer.add_representer(str, Representer.represent_str)
    Representer.add_representer(unicode, Representer.represent_unicode)
    Representer.add_representer(long, Representer.represent_long)

Representer.add_representer(complex, Representer.represent_complex)

Representer.add_representer(tuple, Representer.represent_tuple)

Representer.add_representer(type, Representer.represent_name)

if PY2:
    Representer.add_representer(types.ClassType, Representer.represent_name)
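# A minimal standalone sketch (toy names, not part of ruamel) of the
# copy-on-write registry pattern that add_representer / add_multi_representer
# use above: the shared class-level dict is copied into the subclass's own
# __dict__ the first time that subclass registers anything, so registrations
# never leak into the parent class or into sibling subclasses.

```python
class Base(object):
    registry = {}

    @classmethod
    def register(cls, key, value):
        # Copy-on-write: only copy when this exact class has no own registry yet.
        if "registry" not in cls.__dict__:
            cls.registry = cls.registry.copy()
        cls.registry[key] = value


class Child(Base):
    pass


Child.register(int, "child-int")
# Child now has its own registry containing int; Base.registry is untouched.
```

The same trick appears again in `BaseResolver.add_implicit_resolver` and `add_path_resolver` below, which is why each checks `cls.__dict__` before mutating.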
Representer.add_representer(types.FunctionType, Representer.represent_name) Representer.add_representer(types.BuiltinFunctionType, Representer.represent_name) Representer.add_representer(types.ModuleType, Representer.represent_module) if PY2: Representer.add_multi_representer( types.InstanceType, Representer.represent_instance ) Representer.add_multi_representer(object, Representer.represent_object) Representer.add_multi_representer(type, Representer.represent_name) class RoundTripRepresenter(SafeRepresenter): # need to add type here and write out the .comment # in serializer and emitter def __init__(self, default_style=None, default_flow_style=None, dumper=None): # type: (Any, Any, Any) -> None if not hasattr(dumper, "typ") and default_flow_style is None: default_flow_style = False SafeRepresenter.__init__( self, default_style=default_style, default_flow_style=default_flow_style, dumper=dumper, ) def ignore_aliases(self, data): # type: (Any) -> bool try: if data.anchor is not None and data.anchor.value is not None: return False except AttributeError: pass return SafeRepresenter.ignore_aliases(self, data) def represent_none(self, data): # type: (Any) -> Any if ( len(self.represented_objects) == 0 and not self.serializer.use_explicit_start ): # this will be open ended (although it is not yet) return self.represent_scalar(u"tag:yaml.org,2002:null", u"null") return self.represent_scalar(u"tag:yaml.org,2002:null", "") def represent_literal_scalarstring(self, data): # type: (Any) -> Any tag = None style = "|" anchor = data.yaml_anchor(any=True) if PY2 and not isinstance(data, unicode): data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data, style=style, anchor=anchor) represent_preserved_scalarstring = represent_literal_scalarstring def represent_folded_scalarstring(self, data): # type: (Any) -> Any tag = None style = ">" anchor = data.yaml_anchor(any=True) for fold_pos in reversed(getattr(data, "fold_pos", [])): if ( 
data[fold_pos] == " " and (fold_pos > 0 and not data[fold_pos - 1].isspace()) and (fold_pos < len(data) and not data[fold_pos + 1].isspace()) ): data = data[:fold_pos] + "\a" + data[fold_pos:] if PY2 and not isinstance(data, unicode): data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data, style=style, anchor=anchor) def represent_single_quoted_scalarstring(self, data): # type: (Any) -> Any tag = None style = "'" anchor = data.yaml_anchor(any=True) if PY2 and not isinstance(data, unicode): data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data, style=style, anchor=anchor) def represent_double_quoted_scalarstring(self, data): # type: (Any) -> Any tag = None style = '"' anchor = data.yaml_anchor(any=True) if PY2 and not isinstance(data, unicode): data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data, style=style, anchor=anchor) def represent_plain_scalarstring(self, data): # type: (Any) -> Any tag = None style = "" anchor = data.yaml_anchor(any=True) if PY2 and not isinstance(data, unicode): data = unicode(data, "ascii") tag = u"tag:yaml.org,2002:str" return self.represent_scalar(tag, data, style=style, anchor=anchor) def insert_underscore(self, prefix, s, underscore, anchor=None): # type: (Any, Any, Any, Any) -> Any if underscore is None: return self.represent_scalar( u"tag:yaml.org,2002:int", prefix + s, anchor=anchor ) if underscore[0]: sl = list(s) pos = len(s) - underscore[0] while pos > 0: sl.insert(pos, "_") pos -= underscore[0] s = "".join(sl) if underscore[1]: s = "_" + s if underscore[2]: s += "_" return self.represent_scalar( u"tag:yaml.org,2002:int", prefix + s, anchor=anchor ) def represent_scalar_int(self, data): # type: (Any) -> Any if data._width is not None: s = "{:0{}d}".format(data, data._width) else: s = format(data, "d") anchor = data.yaml_anchor(any=True) return self.insert_underscore("", s, data._underscore, 
anchor=anchor) def represent_binary_int(self, data): # type: (Any) -> Any if data._width is not None: # cannot use '{:#0{}b}', that strips the zeros s = "{:0{}b}".format(data, data._width) else: s = format(data, "b") anchor = data.yaml_anchor(any=True) return self.insert_underscore("0b", s, data._underscore, anchor=anchor) def represent_octal_int(self, data): # type: (Any) -> Any if data._width is not None: # cannot use '{:#0{}o}', that strips the zeros s = "{:0{}o}".format(data, data._width) else: s = format(data, "o") anchor = data.yaml_anchor(any=True) return self.insert_underscore("0o", s, data._underscore, anchor=anchor) def represent_hex_int(self, data): # type: (Any) -> Any if data._width is not None: # cannot use '{:#0{}x}', that strips the zeros s = "{:0{}x}".format(data, data._width) else: s = format(data, "x") anchor = data.yaml_anchor(any=True) return self.insert_underscore("0x", s, data._underscore, anchor=anchor) def represent_hex_caps_int(self, data): # type: (Any) -> Any if data._width is not None: # cannot use '{:#0{}X}', that strips the zeros s = "{:0{}X}".format(data, data._width) else: s = format(data, "X") anchor = data.yaml_anchor(any=True) return self.insert_underscore("0x", s, data._underscore, anchor=anchor) def represent_scalar_float(self, data): # type: (Any) -> Any """ this is way more complicated """ value = None anchor = data.yaml_anchor(any=True) if data != data or (data == 0.0 and data == 1.0): value = u".nan" elif data == self.inf_value: value = u".inf" elif data == -self.inf_value: value = u"-.inf" if value: return self.represent_scalar( u"tag:yaml.org,2002:float", value, anchor=anchor ) if data._exp is None and data._prec > 0 and data._prec == data._width - 1: # no exponent, but trailing dot value = u"{}{:d}.".format( data._m_sign if data._m_sign else "", abs(int(data)) ) elif data._exp is None: # no exponent, "normal" dot prec = data._prec ms = data._m_sign if data._m_sign else "" # -1 for the dot value = u"{}{:0{}.{}f}".format( 
ms, abs(data), data._width - len(ms), data._width - prec - 1 ) if prec == 0 or (prec == 1 and ms != ""): value = value.replace(u"0.", u".") while len(value) < data._width: value += u"0" else: # exponent m, es = u"{:{}.{}e}".format( # data, data._width, data._width - data._prec + (1 if data._m_sign else 0) data, data._width, data._width + (1 if data._m_sign else 0), ).split("e") w = data._width if data._prec > 0 else (data._width + 1) if data < 0: w += 1 m = m[:w] e = int(es) m1, m2 = m.split(".") # always second? while len(m1) + len(m2) < data._width - (1 if data._prec >= 0 else 0): m2 += u"0" if data._m_sign and data > 0: m1 = "+" + m1 esgn = u"+" if data._e_sign else "" if data._prec < 0: # mantissa without dot if m2 != u"0": e -= len(m2) else: m2 = "" while (len(m1) + len(m2) - (1 if data._m_sign else 0)) < data._width: m2 += u"0" e -= 1 value = ( m1 + m2 + data._exp + u"{:{}0{}d}".format(e, esgn, data._e_width) ) elif data._prec == 0: # mantissa with trailing dot e -= len(m2) value = ( m1 + m2 + u"." + data._exp + u"{:{}0{}d}".format(e, esgn, data._e_width) ) else: if data._m_lead0 > 0: m2 = u"0" * (data._m_lead0 - 1) + m1 + m2 m1 = u"0" m2 = m2[: -data._m_lead0] # these should be zeros e += data._m_lead0 while len(m1) < data._prec: m1 += m2[0] m2 = m2[1:] e -= 1 value = ( m1 + u"." + m2 + data._exp + u"{:{}0{}d}".format(e, esgn, data._e_width) ) if value is None: value = to_unicode(repr(data)).lower() return self.represent_scalar(u"tag:yaml.org,2002:float", value, anchor=anchor) def represent_sequence(self, tag, sequence, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] # if the flow_style is None, the flow style tacked on to the object # explicitly will be taken. 
If that is None as well the default flow # style rules try: flow_style = sequence.fa.flow_style(flow_style) except AttributeError: flow_style = flow_style try: anchor = sequence.yaml_anchor() except AttributeError: anchor = None node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True try: comment = getattr(sequence, comment_attrib) node.comment = comment.comment # reset any comment already printed information if node.comment and node.comment[1]: for ct in node.comment[1]: ct.reset() item_comments = comment.items for v in item_comments.values(): if v and v[1]: for ct in v[1]: ct.reset() item_comments = comment.items node.comment = comment.comment try: node.comment.append(comment.end) except AttributeError: pass except AttributeError: item_comments = {} for idx, item in enumerate(sequence): node_item = self.represent_data(item) self.merge_comments(node_item, item_comments.get(idx)) if not (isinstance(node_item, ScalarNode) and not node_item.style): best_style = False value.append(node_item) if flow_style is None: if len(sequence) != 0 and self.default_flow_style is not None: node.flow_style = self.default_flow_style else: node.flow_style = best_style return node def merge_comments(self, node, comments): # type: (Any, Any) -> Any if comments is None: assert hasattr(node, "comment") return node if getattr(node, "comment", None) is not None: for idx, val in enumerate(comments): if idx >= len(node.comment): continue nc = node.comment[idx] if nc is not None: assert val is None or val == nc comments[idx] = nc node.comment = comments return node def represent_key(self, data): # type: (Any) -> Any if isinstance(data, CommentedKeySeq): self.alias_key = None return self.represent_sequence( u"tag:yaml.org,2002:seq", data, flow_style=True ) if isinstance(data, CommentedKeyMap): self.alias_key = None return self.represent_mapping( u"tag:yaml.org,2002:map", data, 
flow_style=True ) return SafeRepresenter.represent_key(self, data) def represent_mapping(self, tag, mapping, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] try: flow_style = mapping.fa.flow_style(flow_style) except AttributeError: flow_style = flow_style try: anchor = mapping.yaml_anchor() except AttributeError: anchor = None node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True # no sorting! !! try: comment = getattr(mapping, comment_attrib) node.comment = comment.comment if node.comment and node.comment[1]: for ct in node.comment[1]: ct.reset() item_comments = comment.items for v in item_comments.values(): if v and v[1]: for ct in v[1]: ct.reset() try: node.comment.append(comment.end) except AttributeError: pass except AttributeError: item_comments = {} merge_list = [m[1] for m in getattr(mapping, merge_attrib, [])] try: merge_pos = getattr(mapping, merge_attrib, [[0]])[0][0] except IndexError: merge_pos = 0 item_count = 0 if bool(merge_list): items = mapping.non_merged_items() else: items = mapping.items() for item_key, item_value in items: item_count += 1 node_key = self.represent_key(item_key) node_value = self.represent_data(item_value) item_comment = item_comments.get(item_key) if item_comment: assert getattr(node_key, "comment", None) is None node_key.comment = item_comment[:2] nvc = getattr(node_value, "comment", None) if nvc is not None: # end comment already there nvc[0] = item_comment[2] nvc[1] = item_comment[3] else: node_value.comment = item_comment[2:] if not (isinstance(node_key, ScalarNode) and not node_key.style): best_style = False if not (isinstance(node_value, ScalarNode) and not node_value.style): best_style = False value.append((node_key, node_value)) if flow_style is None: if ( (item_count != 0) or bool(merge_list) ) and self.default_flow_style is not None: node.flow_style = self.default_flow_style 
else: node.flow_style = best_style if bool(merge_list): # because of the call to represent_data here, the anchors # are marked as being used and thereby created if len(merge_list) == 1: arg = self.represent_data(merge_list[0]) else: arg = self.represent_data(merge_list) arg.flow_style = True value.insert(merge_pos, (ScalarNode(u"tag:yaml.org,2002:merge", "<<"), arg)) return node def represent_omap(self, tag, omap, flow_style=None): # type: (Any, Any, Any) -> Any value = [] # type: List[Any] try: flow_style = omap.fa.flow_style(flow_style) except AttributeError: flow_style = flow_style try: anchor = omap.yaml_anchor() except AttributeError: anchor = None node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True try: comment = getattr(omap, comment_attrib) node.comment = comment.comment if node.comment and node.comment[1]: for ct in node.comment[1]: ct.reset() item_comments = comment.items for v in item_comments.values(): if v and v[1]: for ct in v[1]: ct.reset() try: node.comment.append(comment.end) except AttributeError: pass except AttributeError: item_comments = {} for item_key in omap: item_val = omap[item_key] node_item = self.represent_data({item_key: item_val}) # node_item.flow_style = False # node item has two scalars in value: node_key and node_value item_comment = item_comments.get(item_key) if item_comment: if item_comment[1]: node_item.comment = [None, item_comment[1]] assert getattr(node_item.value[0][0], "comment", None) is None node_item.value[0][0].comment = [item_comment[0], None] nvc = getattr(node_item.value[0][1], "comment", None) if nvc is not None: # end comment already there nvc[0] = item_comment[2] nvc[1] = item_comment[3] else: node_item.value[0][1].comment = item_comment[2:] # if not (isinstance(node_item, ScalarNode) \ # and not node_item.style): # best_style = False value.append(node_item) if flow_style is None: if 
self.default_flow_style is not None: node.flow_style = self.default_flow_style else: node.flow_style = best_style return node def represent_set(self, setting): # type: (Any) -> Any flow_style = False tag = u"tag:yaml.org,2002:set" # return self.represent_mapping(tag, value) value = [] # type: List[Any] flow_style = setting.fa.flow_style(flow_style) try: anchor = setting.yaml_anchor() except AttributeError: anchor = None node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objects[self.alias_key] = node best_style = True # no sorting! !! try: comment = getattr(setting, comment_attrib) node.comment = comment.comment if node.comment and node.comment[1]: for ct in node.comment[1]: ct.reset() item_comments = comment.items for v in item_comments.values(): if v and v[1]: for ct in v[1]: ct.reset() try: node.comment.append(comment.end) except AttributeError: pass except AttributeError: item_comments = {} for item_key in setting.odict: node_key = self.represent_key(item_key) node_value = self.represent_data(None) item_comment = item_comments.get(item_key) if item_comment: assert getattr(node_key, "comment", None) is None node_key.comment = item_comment[:2] node_key.style = node_value.style = "?" 
if not (isinstance(node_key, ScalarNode) and not node_key.style): best_style = False if not (isinstance(node_value, ScalarNode) and not node_value.style): best_style = False value.append((node_key, node_value)) best_style = best_style return node def represent_dict(self, data): # type: (Any) -> Any """write out tag if saved on loading""" try: t = data.tag.value except AttributeError: t = None if t: if t.startswith("!!"): tag = "tag:yaml.org,2002:" + t[2:] else: tag = t else: tag = u"tag:yaml.org,2002:map" return self.represent_mapping(tag, data) def represent_list(self, data): # type: (Any) -> Any try: t = data.tag.value except AttributeError: t = None if t: if t.startswith("!!"): tag = "tag:yaml.org,2002:" + t[2:] else: tag = t else: tag = u"tag:yaml.org,2002:seq" return self.represent_sequence(tag, data) def represent_datetime(self, data): # type: (Any) -> Any inter = "T" if data._yaml["t"] else " " _yaml = data._yaml if _yaml["delta"]: data += _yaml["delta"] value = data.isoformat(inter) else: value = data.isoformat(inter) if _yaml["tz"]: value += _yaml["tz"] return self.represent_scalar(u"tag:yaml.org,2002:timestamp", to_unicode(value)) def represent_tagged_scalar(self, data): # type: (Any) -> Any try: tag = data.tag.value except AttributeError: tag = None try: anchor = data.yaml_anchor() except AttributeError: anchor = None return self.represent_scalar(tag, data.value, style=data.style, anchor=anchor) def represent_scalar_bool(self, data): # type: (Any) -> Any try: anchor = data.yaml_anchor() except AttributeError: anchor = None return SafeRepresenter.represent_bool(self, data, anchor=anchor) RoundTripRepresenter.add_representer(type(None), RoundTripRepresenter.represent_none) RoundTripRepresenter.add_representer( LiteralScalarString, RoundTripRepresenter.represent_literal_scalarstring ) RoundTripRepresenter.add_representer( FoldedScalarString, RoundTripRepresenter.represent_folded_scalarstring ) RoundTripRepresenter.add_representer( SingleQuotedScalarString, 
RoundTripRepresenter.represent_single_quoted_scalarstring ) RoundTripRepresenter.add_representer( DoubleQuotedScalarString, RoundTripRepresenter.represent_double_quoted_scalarstring ) RoundTripRepresenter.add_representer( PlainScalarString, RoundTripRepresenter.represent_plain_scalarstring ) # RoundTripRepresenter.add_representer(tuple, Representer.represent_tuple) RoundTripRepresenter.add_representer( ScalarInt, RoundTripRepresenter.represent_scalar_int ) RoundTripRepresenter.add_representer( BinaryInt, RoundTripRepresenter.represent_binary_int ) RoundTripRepresenter.add_representer(OctalInt, RoundTripRepresenter.represent_octal_int) RoundTripRepresenter.add_representer(HexInt, RoundTripRepresenter.represent_hex_int) RoundTripRepresenter.add_representer( HexCapsInt, RoundTripRepresenter.represent_hex_caps_int ) RoundTripRepresenter.add_representer( ScalarFloat, RoundTripRepresenter.represent_scalar_float ) RoundTripRepresenter.add_representer( ScalarBoolean, RoundTripRepresenter.represent_scalar_bool ) RoundTripRepresenter.add_representer(CommentedSeq, RoundTripRepresenter.represent_list) RoundTripRepresenter.add_representer(CommentedMap, RoundTripRepresenter.represent_dict) RoundTripRepresenter.add_representer( CommentedOrderedMap, RoundTripRepresenter.represent_ordereddict ) if sys.version_info >= (2, 7): import collections RoundTripRepresenter.add_representer( collections.OrderedDict, RoundTripRepresenter.represent_ordereddict ) RoundTripRepresenter.add_representer(CommentedSet, RoundTripRepresenter.represent_set) RoundTripRepresenter.add_representer( TaggedScalar, RoundTripRepresenter.represent_tagged_scalar ) RoundTripRepresenter.add_representer(TimeStamp, RoundTripRepresenter.represent_datetime) strictyaml-1.6.1/strictyaml/ruamel/resolver.py000066400000000000000000000362651415074265300216300ustar00rootroot00000000000000# coding: utf-8 from __future__ import absolute_import import re if False: # MYPY from typing import Any, Dict, List, Union, Text, Optional # 
NOQA from strictyaml.ruamel.compat import VersionType # NOQA from strictyaml.ruamel.compat import string_types, _DEFAULT_YAML_VERSION # NOQA from strictyaml.ruamel.error import * # NOQA from strictyaml.ruamel.nodes import MappingNode, ScalarNode, SequenceNode # NOQA from strictyaml.ruamel.util import RegExp # NOQA __all__ = ["BaseResolver", "Resolver", "VersionedResolver"] # fmt: off # resolvers consist of # - a list of applicable version # - a tag # - a regexp # - a list of first characters to match implicit_resolvers = [ ([(1, 2)], u'tag:yaml.org,2002:bool', RegExp(u'''^(?:true|True|TRUE|false|False|FALSE)$''', re.X), list(u'tTfF')), ([(1, 1)], u'tag:yaml.org,2002:bool', RegExp(u'''^(?:y|Y|yes|Yes|YES|n|N|no|No|NO |true|True|TRUE|false|False|FALSE |on|On|ON|off|Off|OFF)$''', re.X), list(u'yYnNtTfFoO')), ([(1, 2)], u'tag:yaml.org,2002:float', RegExp(u'''^(?: [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)? |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+) |[-+]?\\.[0-9_]+(?:[eE][-+][0-9]+)? |[-+]?\\.(?:inf|Inf|INF) |\\.(?:nan|NaN|NAN))$''', re.X), list(u'-+0123456789.')), ([(1, 1)], u'tag:yaml.org,2002:float', RegExp(u'''^(?: [-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)? |[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+) |\\.[0-9_]+(?:[eE][-+][0-9]+)? 
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]* # sexagesimal float |[-+]?\\.(?:inf|Inf|INF) |\\.(?:nan|NaN|NAN))$''', re.X), list(u'-+0123456789.')), ([(1, 2)], u'tag:yaml.org,2002:int', RegExp(u'''^(?:[-+]?0b[0-1_]+ |[-+]?0o?[0-7_]+ |[-+]?[0-9_]+ |[-+]?0x[0-9a-fA-F_]+)$''', re.X), list(u'-+0123456789')), ([(1, 1)], u'tag:yaml.org,2002:int', RegExp(u'''^(?:[-+]?0b[0-1_]+ |[-+]?0?[0-7_]+ |[-+]?(?:0|[1-9][0-9_]*) |[-+]?0x[0-9a-fA-F_]+ |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X), # sexagesimal int list(u'-+0123456789')), ([(1, 2), (1, 1)], u'tag:yaml.org,2002:merge', RegExp(u'^(?:<<)$'), [u'<']), ([(1, 2), (1, 1)], u'tag:yaml.org,2002:null', RegExp(u'''^(?: ~ |null|Null|NULL | )$''', re.X), [u'~', u'n', u'N', u'']), ([(1, 2), (1, 1)], u'tag:yaml.org,2002:timestamp', RegExp(u'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]? (?:[Tt]|[ \\t]+)[0-9][0-9]? :[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)? (?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X), list(u'0123456789')), ([(1, 2), (1, 1)], u'tag:yaml.org,2002:value', RegExp(u'^(?:=)$'), [u'=']), # The following resolver is only for documentation purposes. It cannot work # because plain scalars cannot start with '!', '&', or '*'. 
([(1, 2), (1, 1)], u'tag:yaml.org,2002:yaml', RegExp(u'^(?:!|&|\\*)$'), list(u'!&*')), ] # fmt: on class ResolverError(YAMLError): pass class BaseResolver(object): DEFAULT_SCALAR_TAG = u"tag:yaml.org,2002:str" DEFAULT_SEQUENCE_TAG = u"tag:yaml.org,2002:seq" DEFAULT_MAPPING_TAG = u"tag:yaml.org,2002:map" yaml_implicit_resolvers = {} # type: Dict[Any, Any] yaml_path_resolvers = {} # type: Dict[Any, Any] def __init__(self, loadumper=None): # type: (Any, Any) -> None self.loadumper = loadumper if ( self.loadumper is not None and getattr(self.loadumper, "_resolver", None) is None ): self.loadumper._resolver = self.loadumper self._loader_version = None # type: Any self.resolver_exact_paths = [] # type: List[Any] self.resolver_prefix_paths = [] # type: List[Any] @property def parser(self): # type: () -> Any if self.loadumper is not None: if hasattr(self.loadumper, "typ"): return self.loadumper.parser return self.loadumper._parser return None @classmethod def add_implicit_resolver_base(cls, tag, regexp, first): # type: (Any, Any, Any) -> None if "yaml_implicit_resolvers" not in cls.__dict__: # deepcopy doesn't work here cls.yaml_implicit_resolvers = dict( (k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers ) if first is None: first = [None] for ch in first: cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp)) @classmethod def add_implicit_resolver(cls, tag, regexp, first): # type: (Any, Any, Any) -> None if "yaml_implicit_resolvers" not in cls.__dict__: # deepcopy doesn't work here cls.yaml_implicit_resolvers = dict( (k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers ) if first is None: first = [None] for ch in first: cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp)) implicit_resolvers.append(([(1, 2), (1, 1)], tag, regexp, first)) # @classmethod # def add_implicit_resolver(cls, tag, regexp, first): @classmethod def add_path_resolver(cls, tag, path, kind=None): # type: (Any, Any, Any) -> 
None # Note: `add_path_resolver` is experimental. The API could be changed. # `new_path` is a pattern that is matched against the path from the # root to the node that is being considered. `node_path` elements are # tuples `(node_check, index_check)`. `node_check` is a node class: # `ScalarNode`, `SequenceNode`, `MappingNode` or `None`. `None` # matches any kind of a node. `index_check` could be `None`, a boolean # value, a string value, or a number. `None` and `False` match against # any _value_ of sequence and mapping nodes. `True` matches against # any _key_ of a mapping node. A string `index_check` matches against # a mapping value that corresponds to a scalar key which content is # equal to the `index_check` value. An integer `index_check` matches # against a sequence value with the index equal to `index_check`. if "yaml_path_resolvers" not in cls.__dict__: cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy() new_path = [] # type: List[Any] for element in path: if isinstance(element, (list, tuple)): if len(element) == 2: node_check, index_check = element elif len(element) == 1: node_check = element[0] index_check = True else: raise ResolverError("Invalid path element: %s" % (element,)) else: node_check = None index_check = element if node_check is str: node_check = ScalarNode elif node_check is list: node_check = SequenceNode elif node_check is dict: node_check = MappingNode elif ( node_check not in [ScalarNode, SequenceNode, MappingNode] and not isinstance(node_check, string_types) and node_check is not None ): raise ResolverError("Invalid node checker: %s" % (node_check,)) if ( not isinstance(index_check, (string_types, int)) and index_check is not None ): raise ResolverError("Invalid index checker: %s" % (index_check,)) new_path.append((node_check, index_check)) if kind is str: kind = ScalarNode elif kind is list: kind = SequenceNode elif kind is dict: kind = MappingNode elif kind not in [ScalarNode, SequenceNode, MappingNode] and kind is not None: 
raise ResolverError("Invalid node kind: %s" % (kind,)) cls.yaml_path_resolvers[tuple(new_path), kind] = tag def descend_resolver(self, current_node, current_index): # type: (Any, Any) -> None if not self.yaml_path_resolvers: return exact_paths = {} prefix_paths = [] if current_node: depth = len(self.resolver_prefix_paths) for path, kind in self.resolver_prefix_paths[-1]: if self.check_resolver_prefix( depth, path, kind, current_node, current_index ): if len(path) > depth: prefix_paths.append((path, kind)) else: exact_paths[kind] = self.yaml_path_resolvers[path, kind] else: for path, kind in self.yaml_path_resolvers: if not path: exact_paths[kind] = self.yaml_path_resolvers[path, kind] else: prefix_paths.append((path, kind)) self.resolver_exact_paths.append(exact_paths) self.resolver_prefix_paths.append(prefix_paths) def ascend_resolver(self): # type: () -> None if not self.yaml_path_resolvers: return self.resolver_exact_paths.pop() self.resolver_prefix_paths.pop() def check_resolver_prefix(self, depth, path, kind, current_node, current_index): # type: (int, Text, Any, Any, Any) -> bool node_check, index_check = path[depth - 1] if isinstance(node_check, string_types): if current_node.tag != node_check: return False elif node_check is not None: if not isinstance(current_node, node_check): return False if index_check is True and current_index is not None: return False if (index_check is False or index_check is None) and current_index is None: return False if isinstance(index_check, string_types): if not ( isinstance(current_index, ScalarNode) and index_check == current_index.value ): return False elif isinstance(index_check, int) and not isinstance(index_check, bool): if index_check != current_index: return False return True def resolve(self, kind, value, implicit): # type: (Any, Any, Any) -> Any if kind is ScalarNode and implicit[0]: if value == "": resolvers = self.yaml_implicit_resolvers.get("", []) else: resolvers = self.yaml_implicit_resolvers.get(value[0], []) 
resolvers += self.yaml_implicit_resolvers.get(None, []) for tag, regexp in resolvers: if regexp.match(value): return tag implicit = implicit[1] if bool(self.yaml_path_resolvers): exact_paths = self.resolver_exact_paths[-1] if kind in exact_paths: return exact_paths[kind] if None in exact_paths: return exact_paths[None] if kind is ScalarNode: return self.DEFAULT_SCALAR_TAG elif kind is SequenceNode: return self.DEFAULT_SEQUENCE_TAG elif kind is MappingNode: return self.DEFAULT_MAPPING_TAG @property def processing_version(self): # type: () -> Any return None class Resolver(BaseResolver): pass for ir in implicit_resolvers: if (1, 2) in ir[0]: Resolver.add_implicit_resolver_base(*ir[1:]) class VersionedResolver(BaseResolver): """ contrary to the "normal" resolver, the smart resolver delays loading the pattern matching rules. That way it can decide to load 1.1 rules or the (default) 1.2 rules, that no longer support octal without 0o, sexagesimals and Yes/No/On/Off booleans. """ def __init__(self, version=None, loader=None, loadumper=None): # type: (Optional[VersionType], Any, Any) -> None if loader is None and loadumper is not None: loader = loadumper BaseResolver.__init__(self, loader) self._loader_version = self.get_loader_version(version) self._version_implicit_resolver = {} # type: Dict[Any, Any] def add_version_implicit_resolver(self, version, tag, regexp, first): # type: (VersionType, Any, Any, Any) -> None if first is None: first = [None] impl_resolver = self._version_implicit_resolver.setdefault(version, {}) for ch in first: impl_resolver.setdefault(ch, []).append((tag, regexp)) def get_loader_version(self, version): # type: (Optional[VersionType]) -> Any if version is None or isinstance(version, tuple): return version if isinstance(version, list): return tuple(version) # assume string return tuple(map(int, version.split(u"."))) @property def versioned_resolver(self): # type: () -> Any """ select the resolver based on the version we are parsing """ version = 
self.processing_version if version not in self._version_implicit_resolver: for x in implicit_resolvers: if version in x[0]: self.add_version_implicit_resolver(version, x[1], x[2], x[3]) return self._version_implicit_resolver[version] def resolve(self, kind, value, implicit): # type: (Any, Any, Any) -> Any if kind is ScalarNode and implicit[0]: if value == "": resolvers = self.versioned_resolver.get("", []) else: resolvers = self.versioned_resolver.get(value[0], []) resolvers += self.versioned_resolver.get(None, []) for tag, regexp in resolvers: if regexp.match(value): return tag implicit = implicit[1] if bool(self.yaml_path_resolvers): exact_paths = self.resolver_exact_paths[-1] if kind in exact_paths: return exact_paths[kind] if None in exact_paths: return exact_paths[None] if kind is ScalarNode: return self.DEFAULT_SCALAR_TAG elif kind is SequenceNode: return self.DEFAULT_SEQUENCE_TAG elif kind is MappingNode: return self.DEFAULT_MAPPING_TAG @property def processing_version(self): # type: () -> Any try: version = self.loadumper._scanner.yaml_version except AttributeError: try: if hasattr(self.loadumper, "typ"): version = self.loadumper.version else: version = self.loadumper._serializer.use_version # dumping except AttributeError: version = None if version is None: version = self._loader_version if version is None: version = _DEFAULT_YAML_VERSION return version strictyaml-1.6.1/strictyaml/ruamel/scalarbool.py000066400000000000000000000030011415074265300220660ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals """ You cannot subclass bool, and this is necessary for round-tripping anchored bool values (and also if you want to preserve the original way of writing) bool.__bases__ is type 'int', so that is what is used as the basis for ScalarBoolean as well. 
You can use these in an if statement, but not when testing equivalence """ from strictyaml.ruamel.anchor import Anchor if False: # MYPY from typing import Text, Any, Dict, List # NOQA __all__ = ["ScalarBoolean"] # no need for no_limit_int -> int class ScalarBoolean(int): def __new__(cls, *args, **kw): # type: (Any, Any, Any) -> Any anchor = kw.pop("anchor", None) # type: ignore b = int.__new__(cls, *args, **kw) # type: ignore if anchor is not None: b.yaml_set_anchor(anchor, always_dump=True) return b @property def anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) def yaml_anchor(self, any=False): # type: (bool) -> Any if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None def yaml_set_anchor(self, value, always_dump=False): # type: (Any, bool) -> None self.anchor.value = value self.anchor.always_dump = always_dump strictyaml-1.6.1/strictyaml/ruamel/scalarfloat.py000066400000000000000000000106641415074265300222550ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals import sys from .compat import no_limit_int # NOQA from strictyaml.ruamel.anchor import Anchor if False: # MYPY from typing import Text, Any, Dict, List # NOQA __all__ = ["ScalarFloat", "ExponentialFloat", "ExponentialCapsFloat"] class ScalarFloat(float): def __new__(cls, *args, **kw): # type: (Any, Any, Any) -> Any width = kw.pop("width", None) # type: ignore prec = kw.pop("prec", None) # type: ignore m_sign = kw.pop("m_sign", None) # type: ignore m_lead0 = kw.pop("m_lead0", 0) # type: ignore exp = kw.pop("exp", None) # type: ignore e_width = kw.pop("e_width", None) # type: ignore e_sign = kw.pop("e_sign", None) # type: ignore underscore = kw.pop("underscore", None) # type: ignore anchor = kw.pop("anchor", None) # type: ignore v = float.__new__(cls, *args, **kw) # type: ignore 
v._width = width v._prec = prec v._m_sign = m_sign v._m_lead0 = m_lead0 v._exp = exp v._e_width = e_width v._e_sign = e_sign v._underscore = underscore if anchor is not None: v.yaml_set_anchor(anchor, always_dump=True) return v def __iadd__(self, a): # type: ignore # type: (Any) -> Any return float(self) + a x = type(self)(self + a) x._width = self._width x._underscore = ( self._underscore[:] if self._underscore is not None else None ) # NOQA return x def __ifloordiv__(self, a): # type: ignore # type: (Any) -> Any return float(self) // a x = type(self)(self // a) x._width = self._width x._underscore = ( self._underscore[:] if self._underscore is not None else None ) # NOQA return x def __imul__(self, a): # type: ignore # type: (Any) -> Any return float(self) * a x = type(self)(self * a) x._width = self._width x._underscore = ( self._underscore[:] if self._underscore is not None else None ) # NOQA x._prec = self._prec # check for others return x def __ipow__(self, a): # type: ignore # type: (Any) -> Any return float(self) ** a x = type(self)(self ** a) x._width = self._width x._underscore = ( self._underscore[:] if self._underscore is not None else None ) # NOQA return x def __isub__(self, a): # type: ignore # type: (Any) -> Any return float(self) - a x = type(self)(self - a) x._width = self._width x._underscore = ( self._underscore[:] if self._underscore is not None else None ) # NOQA return x @property def anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) def yaml_anchor(self, any=False): # type: (bool) -> Any if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None def yaml_set_anchor(self, value, always_dump=False): # type: (Any, bool) -> None self.anchor.value = value self.anchor.always_dump = always_dump def dump(self, out=sys.stdout): # type: (Any) -> Any out.write( "ScalarFloat({}| w:{}, p:{}, s:{}, lz:{}, 
_:{}|{}, w:{}, s:{})\n".format( self, self._width, # type: ignore self._prec, # type: ignore self._m_sign, # type: ignore self._m_lead0, # type: ignore self._underscore, # type: ignore self._exp, # type: ignore self._e_width, # type: ignore self._e_sign, # type: ignore ) ) class ExponentialFloat(ScalarFloat): def __new__(cls, value, width=None, underscore=None): # type: (Any, Any, Any) -> Any return ScalarFloat.__new__(cls, value, width=width, underscore=underscore) class ExponentialCapsFloat(ScalarFloat): def __new__(cls, value, width=None, underscore=None): # type: (Any, Any, Any) -> Any return ScalarFloat.__new__(cls, value, width=width, underscore=underscore) strictyaml-1.6.1/strictyaml/ruamel/scalarint.py000066400000000000000000000107421415074265300217370ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals from .compat import no_limit_int # NOQA from strictyaml.ruamel.anchor import Anchor if False: # MYPY from typing import Text, Any, Dict, List # NOQA __all__ = ["ScalarInt", "BinaryInt", "OctalInt", "HexInt", "HexCapsInt", "DecimalInt"] class ScalarInt(no_limit_int): def __new__(cls, *args, **kw): # type: (Any, Any, Any) -> Any width = kw.pop("width", None) # type: ignore underscore = kw.pop("underscore", None) # type: ignore anchor = kw.pop("anchor", None) # type: ignore v = no_limit_int.__new__(cls, *args, **kw) # type: ignore v._width = width v._underscore = underscore if anchor is not None: v.yaml_set_anchor(anchor, always_dump=True) return v def __iadd__(self, a): # type: ignore # type: (Any) -> Any x = type(self)(self + a) x._width = self._width # type: ignore x._underscore = ( # type: ignore self._underscore[:] if self._underscore is not None else None # type: ignore ) # NOQA return x def __ifloordiv__(self, a): # type: ignore # type: (Any) -> Any x = type(self)(self // a) x._width = self._width # type: ignore x._underscore = ( # type: ignore self._underscore[:] if 
self._underscore is not None else None # type: ignore ) # NOQA return x def __imul__(self, a): # type: ignore # type: (Any) -> Any x = type(self)(self * a) x._width = self._width # type: ignore x._underscore = ( # type: ignore self._underscore[:] if self._underscore is not None else None # type: ignore ) # NOQA return x def __ipow__(self, a): # type: ignore # type: (Any) -> Any x = type(self)(self ** a) x._width = self._width # type: ignore x._underscore = ( # type: ignore self._underscore[:] if self._underscore is not None else None # type: ignore ) # NOQA return x def __isub__(self, a): # type: ignore # type: (Any) -> Any x = type(self)(self - a) x._width = self._width # type: ignore x._underscore = ( # type: ignore self._underscore[:] if self._underscore is not None else None # type: ignore ) # NOQA return x @property def anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) def yaml_anchor(self, any=False): # type: (bool) -> Any if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None def yaml_set_anchor(self, value, always_dump=False): # type: (Any, bool) -> None self.anchor.value = value self.anchor.always_dump = always_dump class BinaryInt(ScalarInt): def __new__(cls, value, width=None, underscore=None, anchor=None): # type: (Any, Any, Any, Any) -> Any return ScalarInt.__new__( cls, value, width=width, underscore=underscore, anchor=anchor ) class OctalInt(ScalarInt): def __new__(cls, value, width=None, underscore=None, anchor=None): # type: (Any, Any, Any, Any) -> Any return ScalarInt.__new__( cls, value, width=width, underscore=underscore, anchor=anchor ) # mixed casing of A-F is not supported, when loading the first non digit # determines the case class HexInt(ScalarInt): """uses lower case (a-f)""" def __new__(cls, value, width=None, underscore=None, anchor=None): # type: (Any, Any, Any, Any) -> Any return 
ScalarInt.__new__( cls, value, width=width, underscore=underscore, anchor=anchor ) class HexCapsInt(ScalarInt): """uses upper case (A-F)""" def __new__(cls, value, width=None, underscore=None, anchor=None): # type: (Any, Any, Any, Any) -> Any return ScalarInt.__new__( cls, value, width=width, underscore=underscore, anchor=anchor ) class DecimalInt(ScalarInt): """needed if anchor""" def __new__(cls, value, width=None, underscore=None, anchor=None): # type: (Any, Any, Any, Any) -> Any return ScalarInt.__new__( cls, value, width=width, underscore=underscore, anchor=anchor ) strictyaml-1.6.1/strictyaml/ruamel/scalarstring.py000066400000000000000000000107261415074265300224550ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals from strictyaml.ruamel.compat import text_type from strictyaml.ruamel.anchor import Anchor if False: # MYPY from typing import Text, Any, Dict, List # NOQA __all__ = [ "ScalarString", "LiteralScalarString", "FoldedScalarString", "SingleQuotedScalarString", "DoubleQuotedScalarString", "PlainScalarString", # PreservedScalarString is the old name, as it was the first to be preserved on rt, # use LiteralScalarString instead "PreservedScalarString", ] class ScalarString(text_type): __slots__ = Anchor.attrib def __new__(cls, *args, **kw): # type: (Any, Any) -> Any anchor = kw.pop("anchor", None) # type: ignore ret_val = text_type.__new__(cls, *args, **kw) # type: ignore if anchor is not None: ret_val.yaml_set_anchor(anchor, always_dump=True) return ret_val def replace(self, old, new, maxreplace=-1): # type: (Any, Any, int) -> Any return type(self)((text_type.replace(self, old, new, maxreplace))) @property def anchor(self): # type: () -> Any if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) def yaml_anchor(self, any=False): # type: (bool) -> Any if not hasattr(self, Anchor.attrib): return None if any or 
self.anchor.always_dump: return self.anchor return None def yaml_set_anchor(self, value, always_dump=False): # type: (Any, bool) -> None self.anchor.value = value self.anchor.always_dump = always_dump class LiteralScalarString(ScalarString): __slots__ = "comment" # the comment after the | on the first line style = "|" def __new__(cls, value, anchor=None): # type: (Text, Any) -> Any return ScalarString.__new__(cls, value, anchor=anchor) PreservedScalarString = LiteralScalarString class FoldedScalarString(ScalarString): __slots__ = ("fold_pos", "comment") # the comment after the > on the first line style = ">" def __new__(cls, value, anchor=None): # type: (Text, Any) -> Any return ScalarString.__new__(cls, value, anchor=anchor) class SingleQuotedScalarString(ScalarString): __slots__ = () style = "'" def __new__(cls, value, anchor=None): # type: (Text, Any) -> Any return ScalarString.__new__(cls, value, anchor=anchor) class DoubleQuotedScalarString(ScalarString): __slots__ = () style = '"' def __new__(cls, value, anchor=None): # type: (Text, Any) -> Any return ScalarString.__new__(cls, value, anchor=anchor) class PlainScalarString(ScalarString): __slots__ = () style = "" def __new__(cls, value, anchor=None): # type: (Text, Any) -> Any return ScalarString.__new__(cls, value, anchor=anchor) def preserve_literal(s): # type: (Text) -> Text return LiteralScalarString(s.replace("\r\n", "\n").replace("\r", "\n")) def walk_tree(base, map=None): # type: (Any, Any) -> None """ the routine here walks over a simple yaml tree (recursing in dict values and list items) and converts strings that have multiple lines to literal scalars You can also provide an explicit (ordered) mapping for multiple transforms (first of which is executed): map = strictyaml.ruamel.compat.ordereddict map['\n'] = preserve_literal map[':'] = SingleQuotedScalarString walk_tree(data, map=map) """ from strictyaml.ruamel.compat import string_types from strictyaml.ruamel.compat import MutableMapping, 
MutableSequence # type: ignore if map is None: map = {"\n": preserve_literal} if isinstance(base, MutableMapping): for k in base: v = base[k] # type: Text if isinstance(v, string_types): for ch in map: if ch in v: base[k] = map[ch](v) break else: walk_tree(v, map=map) elif isinstance(base, MutableSequence): for idx, elem in enumerate(base): if isinstance(elem, string_types): for ch in map: if ch in elem: # type: ignore base[idx] = map[ch](elem) break else: walk_tree(elem, map=map) strictyaml-1.6.1/strictyaml/ruamel/scanner.py000066400000000000000000002162231415074265300214120ustar00rootroot00000000000000# coding: utf-8 from __future__ import print_function, absolute_import, division, unicode_literals # Scanner produces tokens of the following types: # STREAM-START # STREAM-END # DIRECTIVE(name, value) # DOCUMENT-START # DOCUMENT-END # BLOCK-SEQUENCE-START # BLOCK-MAPPING-START # BLOCK-END # FLOW-SEQUENCE-START # FLOW-MAPPING-START # FLOW-SEQUENCE-END # FLOW-MAPPING-END # BLOCK-ENTRY # FLOW-ENTRY # KEY # VALUE # ALIAS(value) # ANCHOR(value) # TAG(value) # SCALAR(value, plain, style) # # RoundTripScanner # COMMENT(value) # # Read comments in the Scanner code for more details. # from strictyaml.ruamel.error import MarkedYAMLError from strictyaml.ruamel.tokens import * # NOQA from strictyaml.ruamel.compat import ( utf8, unichr, PY3, check_anchorname_char, nprint, ) # NOQA if False: # MYPY from typing import Any, Dict, Optional, List, Union, Text # NOQA from strictyaml.ruamel.compat import VersionType # NOQA __all__ = ["Scanner", "RoundTripScanner", "ScannerError"] _THE_END = "\n\0\r\x85\u2028\u2029" _THE_END_SPACE_TAB = " \n\0\t\r\x85\u2028\u2029" _SPACE_TAB = " \t" class ScannerError(MarkedYAMLError): pass class SimpleKey(object): # See below simple keys treatment. 
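The `walk_tree` routine above recursively visits mapping values and sequence items, wrapping any string that contains a trigger character (by default `"\n"`) in a transform class such as `LiteralScalarString`. A self-contained sketch of that traversal, using a plain `str` subclass as a stand-in so nothing from strictyaml needs importing:

```python
# Stand-in for LiteralScalarString: a str subclass tagged with block style.
class FakeLiteral(str):
    style = "|"


def walk_tree_sketch(base, transforms=None):
    """Recursively wrap matching strings in a dict/list tree in place."""
    if transforms is None:
        transforms = {"\n": FakeLiteral}
    if isinstance(base, dict):
        for key, value in base.items():
            if isinstance(value, str):
                for ch, func in transforms.items():
                    if ch in value:
                        base[key] = func(value)
                        break
            else:
                walk_tree_sketch(value, transforms)
    elif isinstance(base, list):
        for idx, elem in enumerate(base):
            if isinstance(elem, str):
                for ch, func in transforms.items():
                    if ch in elem:
                        base[idx] = func(elem)
                        break
            else:
                walk_tree_sketch(elem, transforms)
```

As in the original, the first matching transform wins (`break`), and the tree is mutated in place rather than copied.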
def __init__(self, token_number, required, index, line, column, mark): # type: (Any, Any, int, int, int, Any) -> None self.token_number = token_number self.required = required self.index = index self.line = line self.column = column self.mark = mark class Scanner(object): def __init__(self, loader=None): # type: (Any) -> None """Initialize the scanner.""" # It is assumed that Scanner and Reader will have a common descendant. # Reader do the dirty work of checking for BOM and converting the # input data to Unicode. It also adds NUL to the end. # # Reader supports the following methods # self.peek(i=0) # peek the next i-th character # self.prefix(l=1) # peek the next l characters # self.forward(l=1) # read the next l characters and move the pointer self.loader = loader if self.loader is not None and getattr(self.loader, "_scanner", None) is None: self.loader._scanner = self self.reset_scanner() self.first_time = False self.yaml_version = None # type: Any @property def flow_level(self): # type: () -> int return len(self.flow_context) def reset_scanner(self): # type: () -> None # Had we reached the end of the stream? self.done = False # flow_context is an expanding/shrinking list consisting of '{' and '[' # for each unclosed flow context. If empty list that means block context self.flow_context = [] # type: List[Text] # List of processed tokens that are not yet emitted. self.tokens = [] # type: List[Any] # Add the STREAM-START token. self.fetch_stream_start() # Number of tokens that were emitted through the `get_token` method. self.tokens_taken = 0 # The current indentation level. self.indent = -1 # Past indentation levels. self.indents = [] # type: List[int] # Variables related to simple keys treatment. # A simple key is a key that is not denoted by the '?' indicator. # Example of simple keys: # --- # block simple key: value # ? 
not a simple key: # : { flow simple key: value } # We emit the KEY token before all keys, so when we find a potential # simple key, we try to locate the corresponding ':' indicator. # Simple keys should be limited to a single line and 1024 characters. # Can a simple key start at the current position? A simple key may # start: # - at the beginning of the line, not counting indentation spaces # (in block context), # - after '{', '[', ',' (in the flow context), # - after '?', ':', '-' (in the block context). # In the block context, this flag also signifies if a block collection # may start at the current position. self.allow_simple_key = True # Keep track of possible simple keys. This is a dictionary. The key # is `flow_level`; there can be no more that one possible simple key # for each level. The value is a SimpleKey record: # (token_number, required, index, line, column, mark) # A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow), # '[', or '{' tokens. self.possible_simple_keys = {} # type: Dict[Any, Any] @property def reader(self): # type: () -> Any try: return self._scanner_reader # type: ignore except AttributeError: if hasattr(self.loader, "typ"): self._scanner_reader = self.loader.reader else: self._scanner_reader = self.loader._reader return self._scanner_reader @property def scanner_processing_version(self): # prefix until un-composited # type: () -> Any if hasattr(self.loader, "typ"): return self.loader.resolver.processing_version return self.loader.processing_version # Public methods. def check_token(self, *choices): # type: (Any) -> bool # Check if the next token is one of the given types. while self.need_more_tokens(): self.fetch_more_tokens() if bool(self.tokens): if not choices: return True for choice in choices: if isinstance(self.tokens[0], choice): return True return False def peek_token(self): # type: () -> Any # Return the next token, but do not delete if from the queue. 
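The Scanner's public surface is a small token-queue protocol: `check_token(*choices)` tests the type of the next token, `peek_token()` looks at it without consuming, and `get_token()` pops it and bumps `tokens_taken`. A reduced sketch of just that protocol (the token classes here are stand-ins, not strictyaml's real token types, and the lazy `fetch_more_tokens` refill is omitted):

```python
class KeyToken:
    pass


class ScalarToken:
    def __init__(self, value):
        self.value = value


class TokenQueue:
    """Sketch of the Scanner's check/peek/get token protocol."""

    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.tokens_taken = 0

    def check_token(self, *choices):
        # With no choices, just report whether any token is queued.
        if not self.tokens:
            return False
        return not choices or isinstance(self.tokens[0], tuple(choices))

    def peek_token(self):
        # Look at the next token without removing it from the queue.
        return self.tokens[0] if self.tokens else None

    def get_token(self):
        # Consume the next token, counting how many were emitted.
        self.tokens_taken += 1
        return self.tokens.pop(0)
```

The parser layers on top of exactly this interface, which is why `need_more_tokens` must conservatively keep fetching while a potential simple key is still open.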
while self.need_more_tokens(): self.fetch_more_tokens() if bool(self.tokens): return self.tokens[0] def get_token(self): # type: () -> Any # Return the next token. while self.need_more_tokens(): self.fetch_more_tokens() if bool(self.tokens): self.tokens_taken += 1 return self.tokens.pop(0) # Private methods. def need_more_tokens(self): # type: () -> bool if self.done: return False if not self.tokens: return True # The current token may be a potential simple key, so we # need to look further. self.stale_possible_simple_keys() if self.next_possible_simple_key() == self.tokens_taken: return True return False def fetch_comment(self, comment): # type: (Any) -> None raise NotImplementedError def fetch_more_tokens(self): # type: () -> Any # Eat whitespaces and comments until we reach the next token. comment = self.scan_to_next_token() if comment is not None: # never happens for base scanner return self.fetch_comment(comment) # Remove obsolete possible simple keys. self.stale_possible_simple_keys() # Compare the current indentation and column. It may add some tokens # and decrease the current indentation level. self.unwind_indent(self.reader.column) # Peek the next character. ch = self.reader.peek() # Is it the end of stream? if ch == "\0": return self.fetch_stream_end() # Is it a directive? if ch == "%" and self.check_directive(): return self.fetch_directive() # Is it the document start? if ch == "-" and self.check_document_start(): return self.fetch_document_start() # Is it the document end? if ch == "." and self.check_document_end(): return self.fetch_document_end() # TODO: support for BOM within a stream. # if ch == u'\uFEFF': # return self.fetch_bom() <-- issue BOMToken # Note: the order of the following checks is NOT significant. # Is it the flow sequence start indicator? if ch == "[": return self.fetch_flow_sequence_start() # Is it the flow mapping start indicator? if ch == "{": return self.fetch_flow_mapping_start() # Is it the flow sequence end indicator? 
if ch == "]": return self.fetch_flow_sequence_end() # Is it the flow mapping end indicator? if ch == "}": return self.fetch_flow_mapping_end() # Is it the flow entry indicator? if ch == ",": return self.fetch_flow_entry() # Is it the block entry indicator? if ch == "-" and self.check_block_entry(): return self.fetch_block_entry() # Is it the key indicator? if ch == "?" and self.check_key(): return self.fetch_key() # Is it the value indicator? if ch == ":" and self.check_value(): return self.fetch_value() # Is it an alias? if ch == "*": return self.fetch_alias() # Is it an anchor? if ch == "&": return self.fetch_anchor() # Is it a tag? if ch == "!": return self.fetch_tag() # Is it a literal scalar? if ch == "|" and not self.flow_level: return self.fetch_literal() # Is it a folded scalar? if ch == ">" and not self.flow_level: return self.fetch_folded() # Is it a single quoted scalar? if ch == "'": return self.fetch_single() # Is it a double quoted scalar? if ch == '"': return self.fetch_double() # It must be a plain scalar then. if self.check_plain(): return self.fetch_plain() # No? It's an error. Let's produce a nice error message. raise ScannerError( "while scanning for the next token", None, "found character %r that cannot start any token" % utf8(ch), self.reader.get_mark(), ) # Simple keys treatment. def next_possible_simple_key(self): # type: () -> Any # Return the number of the nearest possible simple key. Actually we # don't need to loop through the whole dictionary. 
We may replace it # with the following code: # if not self.possible_simple_keys: # return None # return self.possible_simple_keys[ # min(self.possible_simple_keys.keys())].token_number min_token_number = None for level in self.possible_simple_keys: key = self.possible_simple_keys[level] if min_token_number is None or key.token_number < min_token_number: min_token_number = key.token_number return min_token_number def stale_possible_simple_keys(self): # type: () -> None # Remove entries that are no longer possible simple keys. According to # the YAML specification, simple keys # - should be limited to a single line, # - should be no longer than 1024 characters. # Disabling this procedure will allow simple keys of any length and # height (may cause problems if indentation is broken though). for level in list(self.possible_simple_keys): key = self.possible_simple_keys[level] if key.line != self.reader.line or self.reader.index - key.index > 1024: if key.required: raise ScannerError( "while scanning a simple key", key.mark, "could not find expected ':'", self.reader.get_mark(), ) del self.possible_simple_keys[level] def save_possible_simple_key(self): # type: () -> None # The next token may start a simple key. We check if it's possible # and save its position. This function is called for # ALIAS, ANCHOR, TAG, SCALAR(flow), '[', and '{'. # Check if a simple key is required at the current position. required = not self.flow_level and self.indent == self.reader.column # The next token might be a simple key. Let's save it's number and # position. if self.allow_simple_key: self.remove_possible_simple_key() token_number = self.tokens_taken + len(self.tokens) key = SimpleKey( token_number, required, self.reader.index, self.reader.line, self.reader.column, self.reader.get_mark(), ) self.possible_simple_keys[self.flow_level] = key def remove_possible_simple_key(self): # type: () -> None # Remove the saved possible key position at the current flow level. 
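The staleness rule enforced by `stale_possible_simple_keys` above is simple: a candidate simple key is discarded once the scanner has moved past the line it started on, or more than 1024 characters beyond its start index. A sketch of just that predicate (names are illustrative):

```python
class PossibleKey:
    """Sketch of the bookkeeping a saved simple-key candidate carries."""

    def __init__(self, index, line, required=False):
        self.index = index      # stream offset where the key started
        self.line = line        # line where the key started
        self.required = required


def is_stale(key, current_index, current_line):
    # A simple key must resolve on its own line and within 1024 chars.
    return key.line != current_line or current_index - key.index > 1024
```

When a stale key was `required` (e.g. at the start of a block mapping line), the real scanner raises a `ScannerError` instead of silently dropping it.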
        if self.flow_level in self.possible_simple_keys:
            key = self.possible_simple_keys[self.flow_level]
            if key.required:
                raise ScannerError(
                    "while scanning a simple key",
                    key.mark,
                    "could not find expected ':'",
                    self.reader.get_mark(),
                )
            del self.possible_simple_keys[self.flow_level]

    # Indentation functions.

    def unwind_indent(self, column):
        # type: (Any) -> None
        # In flow context, tokens should respect indentation.
        # Actually the condition should be `self.indent >= column` according to
        # the spec. But this condition will prohibit intuitively correct
        # constructions such as
        #   key : {
        #   }
        # ####
        # if self.flow_level and self.indent > column:
        #     raise ScannerError(None, None,
        #             "invalid indentation or unclosed '[' or '{'",
        #             self.reader.get_mark())

        # In the flow context, indentation is ignored. We make the scanner less
        # restrictive than the specification requires.
        if bool(self.flow_level):
            return

        # In block context, we may need to issue the BLOCK-END tokens.
        while self.indent > column:
            mark = self.reader.get_mark()
            self.indent = self.indents.pop()
            self.tokens.append(BlockEndToken(mark, mark))

    def add_indent(self, column):
        # type: (int) -> bool
        # Check if we need to increase indentation.
        if self.indent < column:
            self.indents.append(self.indent)
            self.indent = column
            return True
        return False

    # Fetchers.

    def fetch_stream_start(self):
        # type: () -> None
        # We always add STREAM-START as the first token and STREAM-END as the
        # last token.
        # Read the token.
        mark = self.reader.get_mark()
        # Add STREAM-START.
        self.tokens.append(StreamStartToken(mark, mark, encoding=self.reader.encoding))

    def fetch_stream_end(self):
        # type: () -> None
        # Set the current indentation to -1.
        self.unwind_indent(-1)
        # Reset simple keys.
        self.remove_possible_simple_key()
        self.allow_simple_key = False
        self.possible_simple_keys = {}
        # Read the token.
        mark = self.reader.get_mark()
        # Add STREAM-END.
        self.tokens.append(StreamEndToken(mark, mark))
        # The stream is finished.
        self.done = True

    def fetch_directive(self):
        # type: () -> None
        # Set the current indentation to -1.
        self.unwind_indent(-1)

        # Reset simple keys.
        self.remove_possible_simple_key()
        self.allow_simple_key = False

        # Scan and add DIRECTIVE.
        self.tokens.append(self.scan_directive())

    def fetch_document_start(self):
        # type: () -> None
        self.fetch_document_indicator(DocumentStartToken)

    def fetch_document_end(self):
        # type: () -> None
        self.fetch_document_indicator(DocumentEndToken)

    def fetch_document_indicator(self, TokenClass):
        # type: (Any) -> None
        # Set the current indentation to -1.
        self.unwind_indent(-1)

        # Reset simple keys. Note that there could not be a block collection
        # after '---'.
        self.remove_possible_simple_key()
        self.allow_simple_key = False

        # Add DOCUMENT-START or DOCUMENT-END.
        start_mark = self.reader.get_mark()
        self.reader.forward(3)
        end_mark = self.reader.get_mark()
        self.tokens.append(TokenClass(start_mark, end_mark))

    def fetch_flow_sequence_start(self):
        # type: () -> None
        self.fetch_flow_collection_start(FlowSequenceStartToken, to_push="[")

    def fetch_flow_mapping_start(self):
        # type: () -> None
        self.fetch_flow_collection_start(FlowMappingStartToken, to_push="{")

    def fetch_flow_collection_start(self, TokenClass, to_push):
        # type: (Any, Text) -> None
        # '[' and '{' may start a simple key.
        self.save_possible_simple_key()
        # Increase the flow level.
        self.flow_context.append(to_push)
        # Simple keys are allowed after '[' and '{'.
        self.allow_simple_key = True
        # Add FLOW-SEQUENCE-START or FLOW-MAPPING-START.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(TokenClass(start_mark, end_mark))

    def fetch_flow_sequence_end(self):
        # type: () -> None
        self.fetch_flow_collection_end(FlowSequenceEndToken)

    def fetch_flow_mapping_end(self):
        # type: () -> None
        self.fetch_flow_collection_end(FlowMappingEndToken)

    def fetch_flow_collection_end(self, TokenClass):
        # type: (Any) -> None
        # Reset possible simple key on the current level.
        self.remove_possible_simple_key()
        # Decrease the flow level.
        try:
            popped = self.flow_context.pop()  # NOQA
        except IndexError:
            # We must not be in a list or object.
            # Defer error handling to the parser.
            pass
        # No simple keys after ']' or '}'.
        self.allow_simple_key = False
        # Add FLOW-SEQUENCE-END or FLOW-MAPPING-END.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(TokenClass(start_mark, end_mark))

    def fetch_flow_entry(self):
        # type: () -> None
        # Simple keys are allowed after ','.
        self.allow_simple_key = True
        # Reset possible simple key on the current level.
        self.remove_possible_simple_key()
        # Add FLOW-ENTRY.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(FlowEntryToken(start_mark, end_mark))

    def fetch_block_entry(self):
        # type: () -> None
        # Block context needs additional checks.
        if not self.flow_level:
            # Are we allowed to start a new entry?
            if not self.allow_simple_key:
                raise ScannerError(
                    None,
                    None,
                    "sequence entries are not allowed here",
                    self.reader.get_mark(),
                )
            # We may need to add BLOCK-SEQUENCE-START.
            if self.add_indent(self.reader.column):
                mark = self.reader.get_mark()
                self.tokens.append(BlockSequenceStartToken(mark, mark))
        # It's an error for the block entry to occur in the flow context,
        # but we let the parser detect this.
        else:
            pass
        # Simple keys are allowed after '-'.
        self.allow_simple_key = True
        # Reset possible simple key on the current level.
        self.remove_possible_simple_key()

        # Add BLOCK-ENTRY.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(BlockEntryToken(start_mark, end_mark))

    def fetch_key(self):
        # type: () -> None
        # Block context needs additional checks.
        if not self.flow_level:
            # Are we allowed to start a key (not necessarily a simple one)?
            if not self.allow_simple_key:
                raise ScannerError(
                    None,
                    None,
                    "mapping keys are not allowed here",
                    self.reader.get_mark(),
                )
            # We may need to add BLOCK-MAPPING-START.
            if self.add_indent(self.reader.column):
                mark = self.reader.get_mark()
                self.tokens.append(BlockMappingStartToken(mark, mark))
        # Simple keys are allowed after '?' in the block context.
        self.allow_simple_key = not self.flow_level
        # Reset possible simple key on the current level.
        self.remove_possible_simple_key()
        # Add KEY.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(KeyToken(start_mark, end_mark))

    def fetch_value(self):
        # type: () -> None
        # Do we determine a simple key?
        if self.flow_level in self.possible_simple_keys:
            # Add KEY.
            key = self.possible_simple_keys[self.flow_level]
            del self.possible_simple_keys[self.flow_level]
            self.tokens.insert(
                key.token_number - self.tokens_taken, KeyToken(key.mark, key.mark)
            )
            # If this key starts a new block mapping, we need to add
            # BLOCK-MAPPING-START.
            if not self.flow_level:
                if self.add_indent(key.column):
                    self.tokens.insert(
                        key.token_number - self.tokens_taken,
                        BlockMappingStartToken(key.mark, key.mark),
                    )
            # There cannot be two simple keys one after another.
            self.allow_simple_key = False
        # It must be a part of a complex key.
        else:
            # Block context needs additional checks.
            # (Do we really need them? They will be caught by the parser
            # anyway.)
            if not self.flow_level:
                # We are allowed to start a complex value if and only if
                # we can start a simple key.
                if not self.allow_simple_key:
                    raise ScannerError(
                        None,
                        None,
                        "mapping values are not allowed here",
                        self.reader.get_mark(),
                    )

            # If this value starts a new block mapping, we need to add
            # BLOCK-MAPPING-START. It will be detected as an error later by
            # the parser.
            if not self.flow_level:
                if self.add_indent(self.reader.column):
                    mark = self.reader.get_mark()
                    self.tokens.append(BlockMappingStartToken(mark, mark))

            # Simple keys are allowed after ':' in the block context.
            self.allow_simple_key = not self.flow_level

            # Reset possible simple key on the current level.
            self.remove_possible_simple_key()

        # Add VALUE.
        start_mark = self.reader.get_mark()
        self.reader.forward()
        end_mark = self.reader.get_mark()
        self.tokens.append(ValueToken(start_mark, end_mark))

    def fetch_alias(self):
        # type: () -> None
        # ALIAS could be a simple key.
        self.save_possible_simple_key()
        # No simple keys after ALIAS.
        self.allow_simple_key = False
        # Scan and add ALIAS.
        self.tokens.append(self.scan_anchor(AliasToken))

    def fetch_anchor(self):
        # type: () -> None
        # ANCHOR could start a simple key.
        self.save_possible_simple_key()
        # No simple keys after ANCHOR.
        self.allow_simple_key = False
        # Scan and add ANCHOR.
        self.tokens.append(self.scan_anchor(AnchorToken))

    def fetch_tag(self):
        # type: () -> None
        # TAG could start a simple key.
        self.save_possible_simple_key()
        # No simple keys after TAG.
        self.allow_simple_key = False
        # Scan and add TAG.
        self.tokens.append(self.scan_tag())

    def fetch_literal(self):
        # type: () -> None
        self.fetch_block_scalar(style="|")

    def fetch_folded(self):
        # type: () -> None
        self.fetch_block_scalar(style=">")

    def fetch_block_scalar(self, style):
        # type: (Any) -> None
        # A simple key may follow a block scalar.
        self.allow_simple_key = True
        # Reset possible simple key on the current level.
        self.remove_possible_simple_key()
        # Scan and add SCALAR.
        self.tokens.append(self.scan_block_scalar(style))

    def fetch_single(self):
        # type: () -> None
        self.fetch_flow_scalar(style="'")

    def fetch_double(self):
        # type: () -> None
        self.fetch_flow_scalar(style='"')

    def fetch_flow_scalar(self, style):
        # type: (Any) -> None
        # A flow scalar could be a simple key.
        self.save_possible_simple_key()
        # No simple keys after flow scalars.
        self.allow_simple_key = False
        # Scan and add SCALAR.
        self.tokens.append(self.scan_flow_scalar(style))

    def fetch_plain(self):
        # type: () -> None
        # A plain scalar could be a simple key.
        self.save_possible_simple_key()
        # No simple keys after plain scalars. But note that `scan_plain` will
        # change this flag if the scan is finished at the beginning of the
        # line.
        self.allow_simple_key = False
        # Scan and add SCALAR. May change `allow_simple_key`.
        self.tokens.append(self.scan_plain())

    # Checkers.

    def check_directive(self):
        # type: () -> Any
        # DIRECTIVE: ^ '%' ...
        # The '%' indicator is already checked.
        if self.reader.column == 0:
            return True
        return None

    def check_document_start(self):
        # type: () -> Any
        # DOCUMENT-START: ^ '---' (' '|'\n')
        if self.reader.column == 0:
            if (
                self.reader.prefix(3) == "---"
                and self.reader.peek(3) in _THE_END_SPACE_TAB
            ):
                return True
        return None

    def check_document_end(self):
        # type: () -> Any
        # DOCUMENT-END: ^ '...' (' '|'\n')
        if self.reader.column == 0:
            if (
                self.reader.prefix(3) == "..."
                and self.reader.peek(3) in _THE_END_SPACE_TAB
            ):
                return True
        return None

    def check_block_entry(self):
        # type: () -> Any
        # BLOCK-ENTRY: '-' (' '|'\n')
        return self.reader.peek(1) in _THE_END_SPACE_TAB

    def check_key(self):
        # type: () -> Any
        # KEY(flow context): '?'
        if bool(self.flow_level):
            return True
        # KEY(block context): '?' (' '|'\n')
        return self.reader.peek(1) in _THE_END_SPACE_TAB

    def check_value(self):
        # type: () -> Any
        # VALUE(flow context): ':'
        if self.scanner_processing_version == (1, 1):
            if bool(self.flow_level):
                return True
        else:
            if bool(self.flow_level):
                if self.flow_context[-1] == "[":
                    if self.reader.peek(1) not in _THE_END_SPACE_TAB:
                        return False
                elif self.tokens and isinstance(self.tokens[-1], ValueToken):
                    # mapping flow context scanning a value token
                    if self.reader.peek(1) not in _THE_END_SPACE_TAB:
                        return False
                return True
        # VALUE(block context): ':' (' '|'\n')
        return self.reader.peek(1) in _THE_END_SPACE_TAB

    def check_plain(self):
        # type: () -> Any
        # A plain scalar may start with any non-space character except:
        #   '-', '?', ':', ',', '[', ']', '{', '}',
        #   '#', '&', '*', '!', '|', '>', '\'', '\"',
        #   '%', '@', '`'.
        #
        # It may also start with
        #   '-', '?', ':'
        # if it is followed by a non-space character.
        #
        # Note that we limit the last rule to the block context (except the
        # '-' character) because we want the flow context to be space
        # independent.
        srp = self.reader.peek
        ch = srp()
        if self.scanner_processing_version == (1, 1):
            return ch not in "\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>'\"%@`" or (
                srp(1) not in _THE_END_SPACE_TAB
                and (ch == "-" or (not self.flow_level and ch in "?:"))
            )
        # YAML 1.2
        if ch not in "\0 \t\r\n\x85\u2028\u2029-?:,[]{}#&*!|>'\"%@`":
            # ###################             ^ ???
            return True
        ch1 = srp(1)
        if ch == "-" and ch1 not in _THE_END_SPACE_TAB:
            return True
        if ch == ":" and bool(self.flow_level) and ch1 not in _SPACE_TAB:
            return True
        return srp(1) not in _THE_END_SPACE_TAB and (
            ch == "-" or (not self.flow_level and ch in "?:")
        )

    # Scanners.

    def scan_to_next_token(self):
        # type: () -> Any
        # We ignore spaces, line breaks and comments.
        # If we find a line break in the block context, we set the flag
        # `allow_simple_key` on.
        # The byte order mark is stripped if it's the first character in the
        # stream.
        # We do not yet support BOM inside the stream as the
        # specification requires. Any such mark will be considered as a part
        # of the document.
        #
        # TODO: We need to make tab handling rules more sane. A good rule is:
        #   Tabs cannot precede tokens
        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
        #   KEY(block), VALUE(block), BLOCK-ENTRY
        # So the checking code is
        #   if <TAB>:
        #       self.allow_simple_keys = False
        # We also need to add the check for `allow_simple_keys == True` to
        # `unwind_indent` before issuing BLOCK-END.
        # Scanners for block, flow, and plain scalars need to be modified.
        srp = self.reader.peek
        srf = self.reader.forward
        if self.reader.index == 0 and srp() == "\uFEFF":
            srf()
        found = False
        _the_end = _THE_END
        while not found:
            while srp() == " ":
                srf()
            if srp() == "#":
                while srp() not in _the_end:
                    srf()
            if self.scan_line_break():
                if not self.flow_level:
                    self.allow_simple_key = True
            else:
                found = True
        return None

    def scan_directive(self):
        # type: () -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        start_mark = self.reader.get_mark()
        srf()
        name = self.scan_directive_name(start_mark)
        value = None
        if name == "YAML":
            value = self.scan_yaml_directive_value(start_mark)
            end_mark = self.reader.get_mark()
        elif name == "TAG":
            value = self.scan_tag_directive_value(start_mark)
            end_mark = self.reader.get_mark()
        else:
            end_mark = self.reader.get_mark()
            while srp() not in _THE_END:
                srf()
        self.scan_directive_ignored_line(start_mark)
        return DirectiveToken(name, value, start_mark, end_mark)

    def scan_directive_name(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        length = 0
        srp = self.reader.peek
        ch = srp(length)
        while "0" <= ch <= "9" or "A" <= ch <= "Z" or "a" <= ch <= "z" or ch in "-_:.":
            length += 1
            ch = srp(length)
        if not length:
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected alphabetic or numeric character, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        value = self.reader.prefix(length)
        self.reader.forward(length)
        ch = srp()
        if ch not in "\0 \r\n\x85\u2028\u2029":
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected alphabetic or numeric character, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        return value

    def scan_yaml_directive_value(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        while srp() == " ":
            srf()
        major = self.scan_yaml_directive_number(start_mark)
        if srp() != ".":
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected a digit or '.', but found %r" % utf8(srp()),
                self.reader.get_mark(),
            )
        srf()
        minor = self.scan_yaml_directive_number(start_mark)
        if srp() not in "\0 \r\n\x85\u2028\u2029":
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected a digit or ' ', but found %r" % utf8(srp()),
                self.reader.get_mark(),
            )
        self.yaml_version = (major, minor)
        return self.yaml_version

    def scan_yaml_directive_number(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        ch = srp()
        if not ("0" <= ch <= "9"):
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected a digit, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        length = 0
        while "0" <= srp(length) <= "9":
            length += 1
        value = int(self.reader.prefix(length))
        srf(length)
        return value

    def scan_tag_directive_value(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        while srp() == " ":
            srf()
        handle = self.scan_tag_directive_handle(start_mark)
        while srp() == " ":
            srf()
        prefix = self.scan_tag_directive_prefix(start_mark)
        return (handle, prefix)

    def scan_tag_directive_handle(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        value = self.scan_tag_handle("directive", start_mark)
        ch = self.reader.peek()
        if ch != " ":
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected ' ', but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        return value

    def scan_tag_directive_prefix(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        value = self.scan_tag_uri("directive", start_mark)
        ch = self.reader.peek()
        if ch not in "\0 \r\n\x85\u2028\u2029":
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected ' ', but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        return value

    def scan_directive_ignored_line(self, start_mark):
        # type: (Any) -> None
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        while srp() == " ":
            srf()
        if srp() == "#":
            while srp() not in _THE_END:
                srf()
        ch = srp()
        if ch not in _THE_END:
            raise ScannerError(
                "while scanning a directive",
                start_mark,
                "expected a comment or a line break, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        self.scan_line_break()

    def scan_anchor(self, TokenClass):
        # type: (Any) -> Any
        # The specification does not restrict characters for anchors and
        # aliases. This may lead to problems, for instance, the document:
        #   [ *alias, value ]
        # can be interpreted in two ways, as
        #   [ "value" ]
        # and
        #   [ *alias , "value" ]
        # Therefore we restrict aliases to numbers and ASCII letters.
        srp = self.reader.peek
        start_mark = self.reader.get_mark()
        indicator = srp()
        if indicator == "*":
            name = "alias"
        else:
            name = "anchor"
        self.reader.forward()
        length = 0
        ch = srp(length)
        # while u'0' <= ch <= u'9' or u'A' <= ch <= u'Z' or u'a' <= ch <= u'z' \
        #         or ch in u'-_':
        while check_anchorname_char(ch):
            length += 1
            ch = srp(length)
        if not length:
            raise ScannerError(
                "while scanning an %s" % (name,),
                start_mark,
                "expected alphabetic or numeric character, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        value = self.reader.prefix(length)
        self.reader.forward(length)
        # ch1 = ch
        # ch = srp()  # no need to peek, ch is already set
        # assert ch1 == ch
        if ch not in "\0 \t\r\n\x85\u2028\u2029?:,[]{}%@`":
            raise ScannerError(
                "while scanning an %s" % (name,),
                start_mark,
                "expected alphabetic or numeric character, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        end_mark = self.reader.get_mark()
        return TokenClass(value, start_mark, end_mark)

    def scan_tag(self):
        # type: () -> Any
        # See the specification for details.
        srp = self.reader.peek
        start_mark = self.reader.get_mark()
        ch = srp(1)
        if ch == "<":
            handle = None
            self.reader.forward(2)
            suffix = self.scan_tag_uri("tag", start_mark)
            if srp() != ">":
                raise ScannerError(
                    "while parsing a tag",
                    start_mark,
                    "expected '>', but found %r" % utf8(srp()),
                    self.reader.get_mark(),
                )
            self.reader.forward()
        elif ch in _THE_END_SPACE_TAB:
            handle = None
            suffix = "!"
            self.reader.forward()
        else:
            length = 1
            use_handle = False
            while ch not in "\0 \r\n\x85\u2028\u2029":
                if ch == "!":
                    use_handle = True
                    break
                length += 1
                ch = srp(length)
            handle = "!"
            if use_handle:
                handle = self.scan_tag_handle("tag", start_mark)
            else:
                handle = "!"
                self.reader.forward()
            suffix = self.scan_tag_uri("tag", start_mark)
        ch = srp()
        if ch not in "\0 \r\n\x85\u2028\u2029":
            raise ScannerError(
                "while scanning a tag",
                start_mark,
                "expected ' ', but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        value = (handle, suffix)
        end_mark = self.reader.get_mark()
        return TagToken(value, start_mark, end_mark)

    def scan_block_scalar(self, style, rt=False):
        # type: (Any, Optional[bool]) -> Any
        # See the specification for details.
        srp = self.reader.peek
        if style == ">":
            folded = True
        else:
            folded = False
        chunks = []  # type: List[Any]
        start_mark = self.reader.get_mark()
        # Scan the header.
        self.reader.forward()
        chomping, increment = self.scan_block_scalar_indicators(start_mark)
        # block scalar comment e.g. : |+  # comment text
        block_scalar_comment = self.scan_block_scalar_ignored_line(start_mark)
        # Determine the indentation level and go to the first non-empty line.
        min_indent = self.indent + 1
        if increment is None:
            # no increment and top level, min_indent could be 0
            if min_indent < 1 and (
                style not in "|>"
                or (self.scanner_processing_version == (1, 1))
                and getattr(
                    self.loader,
                    "top_level_block_style_scalar_no_indent_error_1_1",
                    False,
                )
            ):
                min_indent = 1
            breaks, max_indent, end_mark = self.scan_block_scalar_indentation()
            indent = max(min_indent, max_indent)
        else:
            if min_indent < 1:
                min_indent = 1
            indent = min_indent + increment - 1
            breaks, end_mark = self.scan_block_scalar_breaks(indent)
        line_break = ""
        # Scan the inner part of the block scalar.
        while self.reader.column == indent and srp() != "\0":
            chunks.extend(breaks)
            leading_non_space = srp() not in " \t"
            length = 0
            while srp(length) not in _THE_END:
                length += 1
            chunks.append(self.reader.prefix(length))
            self.reader.forward(length)
            line_break = self.scan_line_break()
            breaks, end_mark = self.scan_block_scalar_breaks(indent)
            if style in "|>" and min_indent == 0:
                # at the beginning of a line, if in block style see if
                # end of document/start_new_document
                if self.check_document_start() or self.check_document_end():
                    break
            if self.reader.column == indent and srp() != "\0":
                # Unfortunately, folding rules are ambiguous.
                #
                # This is the folding according to the specification:
                if rt and folded and line_break == "\n":
                    chunks.append("\a")
                if (
                    folded
                    and line_break == "\n"
                    and leading_non_space
                    and srp() not in " \t"
                ):
                    if not breaks:
                        chunks.append(" ")
                else:
                    chunks.append(line_break)
                # This is Clark Evans's interpretation (also in the spec
                # examples):
                #
                # if folded and line_break == u'\n':
                #     if not breaks:
                #         if srp() not in ' \t':
                #             chunks.append(u' ')
                #         else:
                #             chunks.append(line_break)
                # else:
                #     chunks.append(line_break)
            else:
                break
        # Process trailing line breaks. The 'chomping' setting determines
        # whether they are included in the value.
        trailing = []  # type: List[Any]
        if chomping in [None, True]:
            chunks.append(line_break)
        if chomping is True:
            chunks.extend(breaks)
        elif chomping in [None, False]:
            trailing.extend(breaks)
        # We are done.
        token = ScalarToken("".join(chunks), False, start_mark, end_mark, style)
        if block_scalar_comment is not None:
            token.add_pre_comments([block_scalar_comment])
        if len(trailing) > 0:
            # nprint('trailing 1', trailing)  # XXXXX
            # Eat whitespaces and comments until we reach the next token.
            comment = self.scan_to_next_token()
            while comment:
                trailing.append(" " * comment[1].column + comment[0])
                comment = self.scan_to_next_token()
            # Keep track of the trailing whitespace and following comments
            # as a comment token, if it isn't all included in the actual value.
            comment_end_mark = self.reader.get_mark()
            comment = CommentToken("".join(trailing), end_mark, comment_end_mark)
            token.add_post_comment(comment)
        return token

    def scan_block_scalar_indicators(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        chomping = None
        increment = None
        ch = srp()
        if ch in "+-":
            if ch == "+":
                chomping = True
            else:
                chomping = False
            self.reader.forward()
            ch = srp()
            if ch in "0123456789":
                increment = int(ch)
                if increment == 0:
                    raise ScannerError(
                        "while scanning a block scalar",
                        start_mark,
                        "expected indentation indicator in the range 1-9, "
                        "but found 0",
                        self.reader.get_mark(),
                    )
                self.reader.forward()
        elif ch in "0123456789":
            increment = int(ch)
            if increment == 0:
                raise ScannerError(
                    "while scanning a block scalar",
                    start_mark,
                    "expected indentation indicator in the range 1-9, "
                    "but found 0",
                    self.reader.get_mark(),
                )
            self.reader.forward()
            ch = srp()
            if ch in "+-":
                if ch == "+":
                    chomping = True
                else:
                    chomping = False
                self.reader.forward()
        ch = srp()
        if ch not in "\0 \r\n\x85\u2028\u2029":
            raise ScannerError(
                "while scanning a block scalar",
                start_mark,
                "expected chomping or indentation indicators, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        return chomping, increment

    def scan_block_scalar_ignored_line(self, start_mark):
        # type: (Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        prefix = ""
        comment = None
        while srp() == " ":
            prefix += srp()
            srf()
        if srp() == "#":
            comment = prefix
            while srp() not in _THE_END:
                comment += srp()
                srf()
        ch = srp()
        if ch not in _THE_END:
            raise ScannerError(
                "while scanning a block scalar",
                start_mark,
                "expected a comment or a line break, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        self.scan_line_break()
        return comment

    def scan_block_scalar_indentation(self):
        # type: () -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        chunks = []
        max_indent = 0
        end_mark = self.reader.get_mark()
        while srp() in " \r\n\x85\u2028\u2029":
            if srp() != " ":
                chunks.append(self.scan_line_break())
                end_mark = self.reader.get_mark()
            else:
                srf()
                if self.reader.column > max_indent:
                    max_indent = self.reader.column
        return chunks, max_indent, end_mark

    def scan_block_scalar_breaks(self, indent):
        # type: (int) -> Any
        # See the specification for details.
        chunks = []
        srp = self.reader.peek
        srf = self.reader.forward
        end_mark = self.reader.get_mark()
        while self.reader.column < indent and srp() == " ":
            srf()
        while srp() in "\r\n\x85\u2028\u2029":
            chunks.append(self.scan_line_break())
            end_mark = self.reader.get_mark()
            while self.reader.column < indent and srp() == " ":
                srf()
        return chunks, end_mark

    def scan_flow_scalar(self, style):
        # type: (Any) -> Any
        # See the specification for details.
        # Note that we loosen indentation rules for quoted scalars. Quoted
        # scalars don't need to adhere to indentation because " and ' clearly
        # mark the beginning and the end of them. Therefore we are less
        # restrictive than the specification requires. We only need to check
        # that document separators are not included in scalars.
        if style == '"':
            double = True
        else:
            double = False
        srp = self.reader.peek
        chunks = []  # type: List[Any]
        start_mark = self.reader.get_mark()
        quote = srp()
        self.reader.forward()
        chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
        while srp() != quote:
            chunks.extend(self.scan_flow_scalar_spaces(double, start_mark))
            chunks.extend(self.scan_flow_scalar_non_spaces(double, start_mark))
        self.reader.forward()
        end_mark = self.reader.get_mark()
        return ScalarToken("".join(chunks), False, start_mark, end_mark, style)

    ESCAPE_REPLACEMENTS = {
        "0": "\0",
        "a": "\x07",
        "b": "\x08",
        "t": "\x09",
        "\t": "\x09",
        "n": "\x0A",
        "v": "\x0B",
        "f": "\x0C",
        "r": "\x0D",
        "e": "\x1B",
        " ": "\x20",
        '"': '"',
        "/": "/",  # as per http://www.json.org/
        "\\": "\\",
        "N": "\x85",
        "_": "\xA0",
        "L": "\u2028",
        "P": "\u2029",
    }

    ESCAPE_CODES = {"x": 2, "u": 4, "U": 8}

    def scan_flow_scalar_non_spaces(self, double, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        chunks = []  # type: List[Any]
        srp = self.reader.peek
        srf = self.reader.forward
        while True:
            length = 0
            while srp(length) not in " \n'\"\\\0\t\r\x85\u2028\u2029":
                length += 1
            if length != 0:
                chunks.append(self.reader.prefix(length))
                srf(length)
            ch = srp()
            if not double and ch == "'" and srp(1) == "'":
                chunks.append("'")
                srf(2)
            elif (double and ch == "'") or (not double and ch in '"\\'):
                chunks.append(ch)
                srf()
            elif double and ch == "\\":
                srf()
                ch = srp()
                if ch in self.ESCAPE_REPLACEMENTS:
                    chunks.append(self.ESCAPE_REPLACEMENTS[ch])
                    srf()
                elif ch in self.ESCAPE_CODES:
                    length = self.ESCAPE_CODES[ch]
                    srf()
                    for k in range(length):
                        if srp(k) not in "0123456789ABCDEFabcdef":
                            raise ScannerError(
                                "while scanning a double-quoted scalar",
                                start_mark,
                                "expected escape sequence of %d hexdecimal "
                                "numbers, but found %r" % (length, utf8(srp(k))),
                                self.reader.get_mark(),
                            )
                    code = int(self.reader.prefix(length), 16)
                    chunks.append(unichr(code))
                    srf(length)
                elif ch in "\n\r\x85\u2028\u2029":
                    self.scan_line_break()
                    chunks.extend(self.scan_flow_scalar_breaks(double, start_mark))
                else:
                    raise ScannerError(
                        "while scanning a double-quoted scalar",
                        start_mark,
                        "found unknown escape character %r" % utf8(ch),
                        self.reader.get_mark(),
                    )
            else:
                return chunks

    def scan_flow_scalar_spaces(self, double, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        chunks = []
        length = 0
        while srp(length) in " \t":
            length += 1
        whitespaces = self.reader.prefix(length)
        self.reader.forward(length)
        ch = srp()
        if ch == "\0":
            raise ScannerError(
                "while scanning a quoted scalar",
                start_mark,
                "found unexpected end of stream",
                self.reader.get_mark(),
            )
        elif ch in "\r\n\x85\u2028\u2029":
            line_break = self.scan_line_break()
            breaks = self.scan_flow_scalar_breaks(double, start_mark)
            if line_break != "\n":
                chunks.append(line_break)
            elif not breaks:
                chunks.append(" ")
            chunks.extend(breaks)
        else:
            chunks.append(whitespaces)
        return chunks

    def scan_flow_scalar_breaks(self, double, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        chunks = []  # type: List[Any]
        srp = self.reader.peek
        srf = self.reader.forward
        while True:
            # Instead of checking indentation, we check for document
            # separators.
            prefix = self.reader.prefix(3)
            if (prefix == "---" or prefix == "...") and srp(3) in _THE_END_SPACE_TAB:
                raise ScannerError(
                    "while scanning a quoted scalar",
                    start_mark,
                    "found unexpected document separator",
                    self.reader.get_mark(),
                )
            while srp() in " \t":
                srf()
            if srp() in "\r\n\x85\u2028\u2029":
                chunks.append(self.scan_line_break())
            else:
                return chunks

    def scan_plain(self):
        # type: () -> Any
        # See the specification for details.
        # We add an additional restriction for the flow context:
        #   plain scalars in the flow context cannot contain ',', ': ' and '?'.
        # We also keep track of the `allow_simple_key` flag here.
        # Indentation rules are loosened for the flow context.
        srp = self.reader.peek
        srf = self.reader.forward
        chunks = []  # type: List[Any]
        start_mark = self.reader.get_mark()
        end_mark = start_mark
        indent = self.indent + 1
        # We allow zero indentation for scalars, but then we need to check for
        # document separators at the beginning of the line.
        # if indent == 0:
        #     indent = 1
        spaces = []  # type: List[Any]
        while True:
            length = 0
            if srp() == "#":
                break
            while True:
                ch = srp(length)
                if ch == ":" and srp(length + 1) not in _THE_END_SPACE_TAB:
                    pass
                elif ch == "?" and self.scanner_processing_version != (1, 1):
                    pass
                elif (
                    ch in _THE_END_SPACE_TAB
                    or (
                        not self.flow_level
                        and ch == ":"
                        and srp(length + 1) in _THE_END_SPACE_TAB
                    )
                    or (self.flow_level and ch in ",:?[]{}")
                ):
                    break
                length += 1
            # It's not clear what we should do with ':' in the flow context.
            if (
                self.flow_level
                and ch == ":"
                and srp(length + 1) not in "\0 \t\r\n\x85\u2028\u2029,[]{}"
            ):
                srf(length)
                raise ScannerError(
                    "while scanning a plain scalar",
                    start_mark,
                    "found unexpected ':'",
                    self.reader.get_mark(),
                    "Please check "
                    "http://pyyaml.org/wiki/YAMLColonInFlowContext "
                    "for details.",
                )
            if length == 0:
                break
            self.allow_simple_key = False
            chunks.extend(spaces)
            chunks.append(self.reader.prefix(length))
            srf(length)
            end_mark = self.reader.get_mark()
            spaces = self.scan_plain_spaces(indent, start_mark)
            if (
                not spaces
                or srp() == "#"
                or (not self.flow_level and self.reader.column < indent)
            ):
                break
        token = ScalarToken("".join(chunks), True, start_mark, end_mark)
        if spaces and spaces[0] == "\n":
            # Create a comment token to preserve the trailing line breaks.
            comment = CommentToken("".join(spaces) + "\n", start_mark, end_mark)
            token.add_post_comment(comment)
        return token

    def scan_plain_spaces(self, indent, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        # The specification is really confusing about tabs in plain scalars.
        # We just forbid them completely. Do not use tabs in YAML!
        srp = self.reader.peek
        srf = self.reader.forward
        chunks = []
        length = 0
        while srp(length) in " ":
            length += 1
        whitespaces = self.reader.prefix(length)
        self.reader.forward(length)
        ch = srp()
        if ch in "\r\n\x85\u2028\u2029":
            line_break = self.scan_line_break()
            self.allow_simple_key = True
            prefix = self.reader.prefix(3)
            if (prefix == "---" or prefix == "...") and srp(3) in _THE_END_SPACE_TAB:
                return
            breaks = []
            while srp() in " \r\n\x85\u2028\u2029":
                if srp() == " ":
                    srf()
                else:
                    breaks.append(self.scan_line_break())
                    prefix = self.reader.prefix(3)
                    if (prefix == "---" or prefix == "...") and srp(
                        3
                    ) in _THE_END_SPACE_TAB:
                        return
            if line_break != "\n":
                chunks.append(line_break)
            elif not breaks:
                chunks.append(" ")
            chunks.extend(breaks)
        elif whitespaces:
            chunks.append(whitespaces)
        return chunks

    def scan_tag_handle(self, name, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        # For some strange reasons, the specification does not allow '_' in
        # tag handles. I have allowed it anyway.
        srp = self.reader.peek
        ch = srp()
        if ch != "!":
            raise ScannerError(
                "while scanning a %s" % (name,),
                start_mark,
                "expected '!', but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        length = 1
        ch = srp(length)
        if ch != " ":
            while (
                "0" <= ch <= "9" or "A" <= ch <= "Z" or "a" <= ch <= "z" or ch in "-_"
            ):
                length += 1
                ch = srp(length)
            if ch != "!":
                self.reader.forward(length)
                raise ScannerError(
                    "while scanning a %s" % (name,),
                    start_mark,
                    "expected '!', but found %r" % utf8(ch),
                    self.reader.get_mark(),
                )
            length += 1
        value = self.reader.prefix(length)
        self.reader.forward(length)
        return value

    def scan_tag_uri(self, name, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        # Note: we do not check if URI is well-formed.
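As an aside: `scan_tag_handle` accepts `!`, `!!`, or `!word!`, where the word part is alphanumerics plus `-` and `_` (underscore being deliberately more lenient than the spec, per the comment above). A minimal standalone sketch of that acceptance rule — the regex and its name are mine, not part of the library:

```python
import re

# A tag handle is "!", optionally followed by handle characters and a
# closing "!". Underscore is allowed here even though the YAML spec omits
# it, matching the behaviour of scan_tag_handle above.
TAG_HANDLE = re.compile(r"!(?:[0-9A-Za-z_-]*!)?$")

assert TAG_HANDLE.match("!")          # primary handle
assert TAG_HANDLE.match("!!")         # secondary handle
assert TAG_HANDLE.match("!my-tag!")   # named handle
assert not TAG_HANDLE.match("e!")     # must start with '!'
assert not TAG_HANDLE.match("!my tag!")  # spaces are not handle characters
```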
        srp = self.reader.peek
        chunks = []
        length = 0
        ch = srp(length)
        while (
            "0" <= ch <= "9"
            or "A" <= ch <= "Z"
            or "a" <= ch <= "z"
            or ch in "-;/?:@&=+$,_.!~*'()[]%"
            or ((self.scanner_processing_version > (1, 1)) and ch == "#")
        ):
            if ch == "%":
                chunks.append(self.reader.prefix(length))
                self.reader.forward(length)
                length = 0
                chunks.append(self.scan_uri_escapes(name, start_mark))
            else:
                length += 1
            ch = srp(length)
        if length != 0:
            chunks.append(self.reader.prefix(length))
            self.reader.forward(length)
            length = 0
        if not chunks:
            raise ScannerError(
                "while parsing a %s" % (name,),
                start_mark,
                "expected URI, but found %r" % utf8(ch),
                self.reader.get_mark(),
            )
        return "".join(chunks)

    def scan_uri_escapes(self, name, start_mark):
        # type: (Any, Any) -> Any
        # See the specification for details.
        srp = self.reader.peek
        srf = self.reader.forward
        code_bytes = []  # type: List[Any]
        mark = self.reader.get_mark()
        while srp() == "%":
            srf()
            for k in range(2):
                if srp(k) not in "0123456789ABCDEFabcdef":
                    raise ScannerError(
                        "while scanning a %s" % (name,),
                        start_mark,
                        "expected URI escape sequence of 2 hexdecimal numbers,"
                        " but found %r" % utf8(srp(k)),
                        self.reader.get_mark(),
                    )
            if PY3:
                code_bytes.append(int(self.reader.prefix(2), 16))
            else:
                code_bytes.append(chr(int(self.reader.prefix(2), 16)))
            srf(2)
        try:
            if PY3:
                value = bytes(code_bytes).decode("utf-8")
            else:
                value = unicode(b"".join(code_bytes), "utf-8")
        except UnicodeDecodeError as exc:
            raise ScannerError(
                "while scanning a %s" % (name,), start_mark, str(exc), mark
            )
        return value

    def scan_line_break(self):
        # type: () -> Any
        # Transforms:
        #   '\r\n'      :   '\n'
        #   '\r'        :   '\n'
        #   '\n'        :   '\n'
        #   '\x85'      :   '\n'
        #   '\u2028'    :   '\u2028'
        #   '\u2029     :   '\u2029'
        #   default     :   ''
        ch = self.reader.peek()
        if ch in "\r\n\x85":
            if self.reader.prefix(2) == "\r\n":
                self.reader.forward(2)
            else:
                self.reader.forward()
            return "\n"
        elif ch in "\u2028\u2029":
            self.reader.forward()
            return ch
        return ""


class RoundTripScanner(Scanner):
    def check_token(self, *choices):
        # type: (Any) -> bool
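The `scan_line_break` transform table above folds every YAML line-break style into `'\n'`, while the Unicode LS/PS separators are passed through unchanged. The same rule can be sketched as a standalone function (the name and the `(result, consumed)` return shape are mine, not the library's):

```python
def normalize_line_break(text, pos=0):
    # Mirror of Scanner.scan_line_break's transform table:
    #   '\r\n', '\r', '\n', '\x85' -> '\n'
    #   '\u2028', '\u2029'         -> kept as-is
    #   anything else              -> '' (no line break consumed)
    ch = text[pos] if pos < len(text) else ""
    if ch in "\r\n\x85":
        consumed = 2 if text[pos:pos + 2] == "\r\n" else 1
        return "\n", consumed
    if ch in "\u2028\u2029":
        return ch, 1
    return "", 0


assert normalize_line_break("\r\nx") == ("\n", 2)  # CRLF eaten as one break
assert normalize_line_break("\rx") == ("\n", 1)
assert normalize_line_break("\u2028x") == ("\u2028", 1)
assert normalize_line_break("x") == ("", 0)
```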
        # Check if the next token is one of the given types.
        while self.need_more_tokens():
            self.fetch_more_tokens()
        self._gather_comments()
        if bool(self.tokens):
            if not choices:
                return True
            for choice in choices:
                if isinstance(self.tokens[0], choice):
                    return True
        return False

    def peek_token(self):
        # type: () -> Any
        # Return the next token, but do not delete it from the queue.
        while self.need_more_tokens():
            self.fetch_more_tokens()
        self._gather_comments()
        if bool(self.tokens):
            return self.tokens[0]
        return None

    def _gather_comments(self):
        # type: () -> Any
        """combine multiple comment lines"""
        comments = []  # type: List[Any]
        if not self.tokens:
            return comments
        if isinstance(self.tokens[0], CommentToken):
            comment = self.tokens.pop(0)
            self.tokens_taken += 1
            comments.append(comment)
        while self.need_more_tokens():
            self.fetch_more_tokens()
            if not self.tokens:
                return comments
            if isinstance(self.tokens[0], CommentToken):
                self.tokens_taken += 1
                comment = self.tokens.pop(0)
                # nprint('dropping2', comment)
                comments.append(comment)
        if len(comments) >= 1:
            self.tokens[0].add_pre_comments(comments)
        # pull in post comment on e.g. ':'
        if not self.done and len(self.tokens) < 2:
            self.fetch_more_tokens()

    def get_token(self):
        # type: () -> Any
        # Return the next token.
        while self.need_more_tokens():
            self.fetch_more_tokens()
        self._gather_comments()
        if bool(self.tokens):
            # nprint('tk', self.tokens)
            # only add post comment to single line tokens:
            # scalar, value token.
            # FlowXEndToken, otherwise
            # hidden streamtokens could get them (leave them and they will be
            # pre comments for the next map/seq
            if (
                len(self.tokens) > 1
                and isinstance(
                    self.tokens[0],
                    (
                        ScalarToken,
                        ValueToken,
                        FlowSequenceEndToken,
                        FlowMappingEndToken,
                    ),
                )
                and isinstance(self.tokens[1], CommentToken)
                and self.tokens[0].end_mark.line == self.tokens[1].start_mark.line
            ):
                self.tokens_taken += 1
                c = self.tokens.pop(1)
                self.fetch_more_tokens()
                while len(self.tokens) > 1 and isinstance(self.tokens[1], CommentToken):
                    self.tokens_taken += 1
                    c1 = self.tokens.pop(1)
                    c.value = c.value + (" " * c1.start_mark.column) + c1.value
                    self.fetch_more_tokens()
                self.tokens[0].add_post_comment(c)
            elif (
                len(self.tokens) > 1
                and isinstance(self.tokens[0], ScalarToken)
                and isinstance(self.tokens[1], CommentToken)
                and self.tokens[0].end_mark.line != self.tokens[1].start_mark.line
            ):
                self.tokens_taken += 1
                c = self.tokens.pop(1)
                c.value = (
                    "\n" * (c.start_mark.line - self.tokens[0].end_mark.line)
                    + (" " * c.start_mark.column)
                    + c.value
                )
                self.tokens[0].add_post_comment(c)
                self.fetch_more_tokens()
                while len(self.tokens) > 1 and isinstance(self.tokens[1], CommentToken):
                    self.tokens_taken += 1
                    c1 = self.tokens.pop(1)
                    c.value = c.value + (" " * c1.start_mark.column) + c1.value
                    self.fetch_more_tokens()
            self.tokens_taken += 1
            return self.tokens.pop(0)
        return None

    def fetch_comment(self, comment):
        # type: (Any) -> None
        value, start_mark, end_mark = comment
        while value and value[-1] == " ":
            # empty line within indented key context
            # no need to update end-mark, that is not used
            value = value[:-1]
        self.tokens.append(CommentToken(value, start_mark, end_mark))

    # scanner

    def scan_to_next_token(self):
        # type: () -> Any
        # We ignore spaces, line breaks and comments.
        # If we find a line break in the block context, we set the flag
        # `allow_simple_key` on.
        # The byte order mark is stripped if it's the first character in the
        # stream.
        # We do not yet support BOM inside the stream as the
        # specification requires. Any such mark will be considered as a part
        # of the document.
        #
        # TODO: We need to make tab handling rules more sane. A good rule is:
        #   Tabs cannot precede tokens
        #   BLOCK-SEQUENCE-START, BLOCK-MAPPING-START, BLOCK-END,
        #   KEY(block), VALUE(block), BLOCK-ENTRY
        # So the checking code is
        #   if <TAB>:
        #       self.allow_simple_keys = False
        # We also need to add the check for `allow_simple_keys == True` to
        # `unwind_indent` before issuing BLOCK-END.
        # Scanners for block, flow, and plain scalars need to be modified.
        srp = self.reader.peek
        srf = self.reader.forward
        if self.reader.index == 0 and srp() == "\uFEFF":
            srf()
        found = False
        while not found:
            while srp() == " ":
                srf()
            ch = srp()
            if ch == "#":
                start_mark = self.reader.get_mark()
                comment = ch
                srf()
                while ch not in _THE_END:
                    ch = srp()
                    if ch == "\0":  # don't gobble the end-of-stream character
                        # but add an explicit newline as "YAML processors should terminate
                        # the stream with an explicit line break
                        # https://yaml.org/spec/1.2/spec.html#id2780069
                        comment += "\n"
                        break
                    comment += ch
                    srf()
                # gather any blank lines following the comment too
                ch = self.scan_line_break()
                while len(ch) > 0:
                    comment += ch
                    ch = self.scan_line_break()
                end_mark = self.reader.get_mark()
                if not self.flow_level:
                    self.allow_simple_key = True
                return comment, start_mark, end_mark
            if bool(self.scan_line_break()):
                start_mark = self.reader.get_mark()
                if not self.flow_level:
                    self.allow_simple_key = True
                ch = srp()
                if ch == "\n":  # empty toplevel lines
                    start_mark = self.reader.get_mark()
                    comment = ""
                    while ch:
                        ch = self.scan_line_break(empty_line=True)
                        comment += ch
                    if srp() == "#":
                        # empty line followed by indented real comment
                        comment = comment.rsplit("\n", 1)[0] + "\n"
                    end_mark = self.reader.get_mark()
                    return comment, start_mark, end_mark
            else:
                found = True
        return None

    def scan_line_break(self, empty_line=False):
        # type: (bool) -> Text
        # Transforms:
        #   '\r\n'      :   '\n'
        #   '\r'        :   '\n'
        #   '\n'        :   '\n'
        #   '\x85'      :   '\n'
        #   '\u2028'    :   '\u2028'
        #   '\u2029     :   '\u2029'
        #   default     :   ''
        ch = self.reader.peek()  # type: Text
        if ch in "\r\n\x85":
            if self.reader.prefix(2) == "\r\n":
                self.reader.forward(2)
            else:
                self.reader.forward()
            return "\n"
        elif ch in "\u2028\u2029":
            self.reader.forward()
            return ch
        elif empty_line and ch in "\t ":
            self.reader.forward()
            return ch
        return ""

    def scan_block_scalar(self, style, rt=True):
        # type: (Any, Optional[bool]) -> Any
        return Scanner.scan_block_scalar(self, style, rt=rt)


# try:
#     import psyco
#     psyco.bind(Scanner)
# except ImportError:
#     pass

strictyaml-1.6.1/strictyaml/ruamel/serializer.py

# coding: utf-8

from __future__ import absolute_import

from strictyaml.ruamel.error import YAMLError
from strictyaml.ruamel.compat import (
    nprint,
    DBG_NODE,
    dbg,
    string_types,
    nprintf,
)  # NOQA
from strictyaml.ruamel.util import RegExp
from strictyaml.ruamel.events import (
    StreamStartEvent,
    StreamEndEvent,
    MappingStartEvent,
    MappingEndEvent,
    SequenceStartEvent,
    SequenceEndEvent,
    AliasEvent,
    ScalarEvent,
    DocumentStartEvent,
    DocumentEndEvent,
)
from strictyaml.ruamel.nodes import MappingNode, ScalarNode, SequenceNode

if False:  # MYPY
    from typing import Any, Dict, Union, Text, Optional  # NOQA
    from strictyaml.ruamel.compat import VersionType  # NOQA

__all__ = ["Serializer", "SerializerError"]


class SerializerError(YAMLError):
    pass


class Serializer(object):

    # 'id' and 3+ numbers, but not 000
    ANCHOR_TEMPLATE = u"id%03d"
    ANCHOR_RE = RegExp(u"id(?!000$)\\d{3,}")

    def __init__(
        self,
        encoding=None,
        explicit_start=None,
        explicit_end=None,
        version=None,
        tags=None,
        dumper=None,
    ):
        # type: (Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any) -> None  # NOQA
        self.dumper = dumper
        if self.dumper is not None:
            self.dumper._serializer = self
        self.use_encoding = encoding
        self.use_explicit_start = explicit_start
        self.use_explicit_end = explicit_end
        if isinstance(version,
                      string_types):
            self.use_version = tuple(map(int, version.split(".")))
        else:
            self.use_version = version  # type: ignore
        self.use_tags = tags
        self.serialized_nodes = {}  # type: Dict[Any, Any]
        self.anchors = {}  # type: Dict[Any, Any]
        self.last_anchor_id = 0
        self.closed = None  # type: Optional[bool]
        self._templated_id = None

    @property
    def emitter(self):
        # type: () -> Any
        if hasattr(self.dumper, "typ"):
            return self.dumper.emitter
        return self.dumper._emitter

    @property
    def resolver(self):
        # type: () -> Any
        if hasattr(self.dumper, "typ"):
            self.dumper.resolver
        return self.dumper._resolver

    def open(self):
        # type: () -> None
        if self.closed is None:
            self.emitter.emit(StreamStartEvent(encoding=self.use_encoding))
            self.closed = False
        elif self.closed:
            raise SerializerError("serializer is closed")
        else:
            raise SerializerError("serializer is already opened")

    def close(self):
        # type: () -> None
        if self.closed is None:
            raise SerializerError("serializer is not opened")
        elif not self.closed:
            self.emitter.emit(StreamEndEvent())
            self.closed = True

    # def __del__(self):
    #     self.close()

    def serialize(self, node):
        # type: (Any) -> None
        if dbg(DBG_NODE):
            nprint("Serializing nodes")
            node.dump()
        if self.closed is None:
            raise SerializerError("serializer is not opened")
        elif self.closed:
            raise SerializerError("serializer is closed")
        self.emitter.emit(
            DocumentStartEvent(
                explicit=self.use_explicit_start,
                version=self.use_version,
                tags=self.use_tags,
            )
        )
        self.anchor_node(node)
        self.serialize_node(node, None, None)
        self.emitter.emit(DocumentEndEvent(explicit=self.use_explicit_end))
        self.serialized_nodes = {}
        self.anchors = {}
        self.last_anchor_id = 0

    def anchor_node(self, node):
        # type: (Any) -> None
        if node in self.anchors:
            if self.anchors[node] is None:
                self.anchors[node] = self.generate_anchor(node)
        else:
            anchor = None
            try:
                if node.anchor.always_dump:
                    anchor = node.anchor.value
            except:  # NOQA
                pass
            self.anchors[node] = anchor
            if isinstance(node, SequenceNode):
                for item in node.value:
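The `ANCHOR_TEMPLATE`/`ANCHOR_RE` pair in the `Serializer` defines how auto-generated anchors are named and later recognized: `id` followed by three or more digits, with `id000` excluded by a negative lookahead. A quick standalone check of that pattern (plain `re.compile` stands in for the library's lazy `RegExp` wrapper):

```python
import re

ANCHOR_TEMPLATE = u"id%03d"
ANCHOR_RE = re.compile(u"id(?!000$)\\d{3,}")

# The first generated anchor is "id001"; "id000" is deliberately rejected.
assert ANCHOR_TEMPLATE % 1 == "id001"
assert ANCHOR_RE.match("id001") is not None
assert ANCHOR_RE.match("id000") is None
assert ANCHOR_RE.match("id1234") is not None  # more than three digits is fine
assert ANCHOR_RE.match("id12") is None        # fewer than three is not
```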
                    self.anchor_node(item)
            elif isinstance(node, MappingNode):
                for key, value in node.value:
                    self.anchor_node(key)
                    self.anchor_node(value)

    def generate_anchor(self, node):
        # type: (Any) -> Any
        try:
            anchor = node.anchor.value
        except:  # NOQA
            anchor = None
        if anchor is None:
            self.last_anchor_id += 1
            return self.ANCHOR_TEMPLATE % self.last_anchor_id
        return anchor

    def serialize_node(self, node, parent, index):
        # type: (Any, Any, Any) -> None
        alias = self.anchors[node]
        if node in self.serialized_nodes:
            self.emitter.emit(AliasEvent(alias))
        else:
            self.serialized_nodes[node] = True
            self.resolver.descend_resolver(parent, index)
            if isinstance(node, ScalarNode):
                # here check if the node.tag equals the one that would result from parsing
                # if not equal quoting is necessary for strings
                detected_tag = self.resolver.resolve(
                    ScalarNode, node.value, (True, False)
                )
                default_tag = self.resolver.resolve(
                    ScalarNode, node.value, (False, True)
                )
                implicit = (
                    (node.tag == detected_tag),
                    (node.tag == default_tag),
                    node.tag.startswith("tag:yaml.org,2002:"),
                )
                self.emitter.emit(
                    ScalarEvent(
                        alias,
                        node.tag,
                        implicit,
                        node.value,
                        style=node.style,
                        comment=node.comment,
                    )
                )
            elif isinstance(node, SequenceNode):
                implicit = node.tag == self.resolver.resolve(
                    SequenceNode, node.value, True
                )
                comment = node.comment
                end_comment = None
                seq_comment = None
                if node.flow_style is True:
                    if comment:  # eol comment on flow style sequence
                        seq_comment = comment[0]
                        # comment[0] = None
                if comment and len(comment) > 2:
                    end_comment = comment[2]
                else:
                    end_comment = None
                self.emitter.emit(
                    SequenceStartEvent(
                        alias,
                        node.tag,
                        implicit,
                        flow_style=node.flow_style,
                        comment=node.comment,
                    )
                )
                index = 0
                for item in node.value:
                    self.serialize_node(item, node, index)
                    index += 1
                self.emitter.emit(SequenceEndEvent(comment=[seq_comment, end_comment]))
            elif isinstance(node, MappingNode):
                implicit = node.tag == self.resolver.resolve(
                    MappingNode, node.value, True
                )
                comment = node.comment
                end_comment = None
                map_comment = None
                if node.flow_style is True:
                    if comment:  # eol comment on flow style sequence
                        map_comment = comment[0]
                        # comment[0] = None
                if comment and len(comment) > 2:
                    end_comment = comment[2]
                self.emitter.emit(
                    MappingStartEvent(
                        alias,
                        node.tag,
                        implicit,
                        flow_style=node.flow_style,
                        comment=node.comment,
                        nr_items=len(node.value),
                    )
                )
                for key, value in node.value:
                    self.serialize_node(key, node, None)
                    self.serialize_node(value, node, key)
                self.emitter.emit(MappingEndEvent(comment=[map_comment, end_comment]))
            self.resolver.ascend_resolver()


def templated_id(s):
    # type: (Text) -> Any
    return Serializer.ANCHOR_RE.match(s)

strictyaml-1.6.1/strictyaml/ruamel/timestamp.py

# coding: utf-8

from __future__ import print_function, absolute_import, division, unicode_literals

import datetime
import copy

# ToDo: at least on PY3 you could probably attach the tzinfo correctly to the object

# a more complete datetime might be used by safe loading as well

if False:  # MYPY
    from typing import Any, Dict, Optional, List  # NOQA


class TimeStamp(datetime.datetime):
    def __init__(self, *args, **kw):
        # type: (Any, Any) -> None
        self._yaml = dict(t=False, tz=None, delta=0)  # type: Dict[Any, Any]

    def __new__(cls, *args, **kw):  # datetime is immutable
        # type: (Any, Any) -> Any
        return datetime.datetime.__new__(cls, *args, **kw)  # type: ignore

    def __deepcopy__(self, memo):
        # type: (Any) -> Any
        ts = TimeStamp(
            self.year, self.month, self.day, self.hour, self.minute, self.second
        )
        ts._yaml = copy.deepcopy(self._yaml)
        return ts

    def replace(
        self,
        year=None,
        month=None,
        day=None,
        hour=None,
        minute=None,
        second=None,
        microsecond=None,
        tzinfo=True,
        fold=None,
    ):
        if year is None:
            year = self.year
        if month is None:
            month = self.month
        if day is None:
            day = self.day
        if hour is None:
            hour = self.hour
        if minute is None:
            minute = self.minute
        if second is None:
            second = self.second
        if microsecond is None:
            microsecond = self.microsecond
        if tzinfo is True:
            tzinfo = self.tzinfo
        if fold is None:
            fold = self.fold
        ts = type(self)(
            year, month, day, hour, minute, second, microsecond, tzinfo, fold=fold
        )
        ts._yaml = copy.deepcopy(self._yaml)
        return ts

strictyaml-1.6.1/strictyaml/ruamel/tokens.py

# # header
# coding: utf-8

from __future__ import unicode_literals

if False:  # MYPY
    from typing import Text, Any, Dict, Optional, List  # NOQA
    from .error import StreamMark  # NOQA

SHOWLINES = True


class Token(object):
    __slots__ = "start_mark", "end_mark", "_comment"

    def __init__(self, start_mark, end_mark):
        # type: (StreamMark, StreamMark) -> None
        self.start_mark = start_mark
        self.end_mark = end_mark

    def __repr__(self):
        # type: () -> Any
        # attributes = [key for key in self.__slots__ if not key.endswith('_mark') and
        #               hasattr('self', key)]
        attributes = [key for key in self.__slots__ if not key.endswith("_mark")]
        attributes.sort()
        arguments = ", ".join(
            ["%s=%r" % (key, getattr(self, key)) for key in attributes]
        )
        if SHOWLINES:
            try:
                arguments += ", line: " + str(self.start_mark.line)
            except:  # NOQA
                pass
            try:
                arguments += ", comment: " + str(self._comment)
            except:  # NOQA
                pass
        return "{}({})".format(self.__class__.__name__, arguments)

    def add_post_comment(self, comment):
        # type: (Any) -> None
        if not hasattr(self, "_comment"):
            self._comment = [None, None]
        self._comment[0] = comment

    def add_pre_comments(self, comments):
        # type: (Any) -> None
        if not hasattr(self, "_comment"):
            self._comment = [None, None]
        assert self._comment[1] is None
        self._comment[1] = comments

    def get_comment(self):
        # type: () -> Any
        return getattr(self, "_comment", None)

    @property
    def comment(self):
        # type: () -> Any
        return getattr(self, "_comment", None)

    def move_comment(self, target, empty=False):
        # type: (Any, bool) -> Any
        """move a comment from this token to target (normally next token)

        used to combine e.g.
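`TimeStamp` above exists so that YAML-specific formatting details (stored in `_yaml`) survive operations like `replace`, which on a plain `datetime` would hand back a fresh object without them: `__new__` is overridden because `datetime` is immutable, and `replace` rebuilds the instance and deep-copies the metadata across. A stripped-down sketch of the same pattern using only the stdlib (the class and field names here are illustrative, not the library's):

```python
import copy
import datetime


class MetaDateTime(datetime.datetime):
    """datetime subclass that carries extra metadata through .replace()."""

    def __new__(cls, *args, **kw):
        # datetime is immutable, so attach the metadata dict in __new__.
        obj = datetime.datetime.__new__(cls, *args, **kw)
        obj.meta = {}
        return obj

    def replace(self, **kw):
        # Let the parent compute the new field values, then rebuild an
        # instance of our own type and copy the metadata over.
        plain = datetime.datetime.replace(self, **kw)
        ts = type(self)(
            plain.year, plain.month, plain.day,
            plain.hour, plain.minute, plain.second,
            plain.microsecond, plain.tzinfo,
        )
        ts.meta = copy.deepcopy(self.meta)
        return ts


dt = MetaDateTime(2021, 12, 1, 10, 30)
dt.meta["style"] = "iso8601"
dt2 = dt.replace(hour=11)
assert dt2.hour == 11
assert dt2.meta == {"style": "iso8601"}  # metadata survived the replace
```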
        comments before a BlockEntryToken to the
        ScalarToken that follows it
        empty is a special for empty values -> comment after key
        """
        c = self.comment
        if c is None:
            return
        # don't push beyond last element
        if isinstance(target, (StreamEndToken, DocumentStartToken)):
            return
        delattr(self, "_comment")
        tc = target.comment
        if not tc:  # target comment, just insert
            # special for empty value in key: value issue 25
            if empty:
                c = [c[0], c[1], None, None, c[0]]
            target._comment = c
            # nprint('mco2:', self, target, target.comment, empty)
            return self
        if c[0] and tc[0] or c[1] and tc[1]:
            raise NotImplementedError("overlap in comment %r %r" % (c, tc))
        if c[0]:
            tc[0] = c[0]
        if c[1]:
            tc[1] = c[1]
        return self

    def split_comment(self):
        # type: () -> Any
        """split the post part of a comment, and return it as comment to be
        added. Delete second part if [None, None]
         abc:  # this goes to sequence
           # this goes to first element
           - first element
        """
        comment = self.comment
        if comment is None or comment[0] is None:
            return None  # nothing to do
        ret_val = [comment[0], None]
        if comment[1] is None:
            delattr(self, "_comment")
        return ret_val


# class BOMToken(Token):
#     id = '<byte order mark>'


class DirectiveToken(Token):
    __slots__ = "name", "value"
    id = "<directive>"

    def __init__(self, name, value, start_mark, end_mark):
        # type: (Any, Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.name = name
        self.value = value


class DocumentStartToken(Token):
    __slots__ = ()
    id = "<document start>"


class DocumentEndToken(Token):
    __slots__ = ()
    id = "<document end>"


class StreamStartToken(Token):
    __slots__ = ("encoding",)
    id = "<stream start>"

    def __init__(self, start_mark=None, end_mark=None, encoding=None):
        # type: (Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.encoding = encoding


class StreamEndToken(Token):
    __slots__ = ()
    id = "<stream end>"


class BlockSequenceStartToken(Token):
    __slots__ = ()
    id = "<block sequence start>"


class BlockMappingStartToken(Token):
    __slots__ = ()
    id = "<block mapping start>"


class BlockEndToken(Token):
    __slots__ = ()
    id = "<block end>"


class FlowSequenceStartToken(Token):
    __slots__ = ()
    id = "["


class FlowMappingStartToken(Token):
    __slots__ = ()
    id = "{"


class FlowSequenceEndToken(Token):
    __slots__ = ()
    id = "]"


class FlowMappingEndToken(Token):
    __slots__ = ()
    id = "}"


class KeyToken(Token):
    __slots__ = ()
    id = "?"

    # def x__repr__(self):
    #     return 'KeyToken({})'.format(
    #         self.start_mark.buffer[self.start_mark.index:].split(None, 1)[0])


class ValueToken(Token):
    __slots__ = ()
    id = ":"


class BlockEntryToken(Token):
    __slots__ = ()
    id = "-"


class FlowEntryToken(Token):
    __slots__ = ()
    id = ","


class AliasToken(Token):
    __slots__ = ("value",)
    id = "<alias>"

    def __init__(self, value, start_mark, end_mark):
        # type: (Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.value = value


class AnchorToken(Token):
    __slots__ = ("value",)
    id = "<anchor>"

    def __init__(self, value, start_mark, end_mark):
        # type: (Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.value = value


class TagToken(Token):
    __slots__ = ("value",)
    id = "<tag>"

    def __init__(self, value, start_mark, end_mark):
        # type: (Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.value = value


class ScalarToken(Token):
    __slots__ = "value", "plain", "style"
    id = "<scalar>"

    def __init__(self, value, plain, start_mark, end_mark, style=None):
        # type: (Any, Any, Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.value = value
        self.plain = plain
        self.style = style


class CommentToken(Token):
    __slots__ = "value", "pre_done"
    id = "<comment>"

    def __init__(self, value, start_mark, end_mark):
        # type: (Any, Any, Any) -> None
        Token.__init__(self, start_mark, end_mark)
        self.value = value

    def reset(self):
        # type: () -> None
        if hasattr(self, "pre_done"):
            delattr(self, "pre_done")

    def __repr__(self):
        # type: () -> Any
        v = "{!r}".format(self.value)
        if SHOWLINES:
            try:
                v += ", line: " + str(self.start_mark.line)
                v += ", col: " + str(self.start_mark.column)
            except:  # NOQA
                pass
        return "CommentToken({})".format(v)

    def __eq__(self, other):
        # type: (Any) -> bool
        if self.start_mark != other.start_mark:
            return False
        if self.end_mark != other.end_mark:
            return False
        if self.value != other.value:
            return False
        return True

    def __ne__(self, other):
        # type: (Any) -> bool
        return not self.__eq__(other)

strictyaml-1.6.1/strictyaml/ruamel/util.py

# coding: utf-8

"""
some helper functions that might be generally useful
"""

from __future__ import absolute_import, print_function

from functools import partial
import re

from .compat import text_type, binary_type

if False:  # MYPY
    from typing import Any, Dict, Optional, List, Text  # NOQA
    from .compat import StreamTextType  # NOQA


class LazyEval(object):
    """
    Lightweight wrapper around lazily evaluated func(*args, **kwargs).

    func is only evaluated when any attribute of its return value is accessed.
    Every attribute access is passed through to the wrapped value.
    (This only excludes special cases like method-wrappers, e.g., __hash__.)
    The sole additional attribute is the lazy_self function which holds the
    return value (or, prior to evaluation, func and arguments), in its closure.
    """

    def __init__(self, func, *args, **kwargs):
        # type: (Any, Any, Any) -> None
        def lazy_self():
            # type: () -> Any
            return_value = func(*args, **kwargs)
            object.__setattr__(self, "lazy_self", lambda: return_value)
            return return_value

        object.__setattr__(self, "lazy_self", lazy_self)

    def __getattribute__(self, name):
        # type: (Any) -> Any
        lazy_self = object.__getattribute__(self, "lazy_self")
        if name == "lazy_self":
            return lazy_self
        return getattr(lazy_self(), name)

    def __setattr__(self, name, value):
        # type: (Any, Any) -> None
        setattr(self.lazy_self(), name, value)


RegExp = partial(LazyEval, re.compile)

# originally as comment
# https://github.com/pre-commit/pre-commit/pull/211#issuecomment-186466605
# if you use this in your code, I suggest adding a test in your test suite
# that check this routines output against a known piece of your YAML
# before upgrades to this code break your round-tripped YAML


def load_yaml_guess_indent(stream, **kw):
    # type: (StreamTextType, Any) -> Any
    """guess the indent and block sequence indent of yaml stream/string

    returns round_trip_loaded stream, indent level, block sequence indent
    - block sequence indent is the number of spaces before a dash relative to previous indent
    - if there are no block sequences, indent is taken from nested mappings, block sequence
      indent is unset (None) in that case
    """
    from .main import round_trip_load

    # load a YAML document, guess the indentation, if you use TABs you're on your own
    def leading_spaces(line):
        # type: (Any) -> int
        idx = 0
        while idx < len(line) and line[idx] == " ":
            idx += 1
        return idx

    if isinstance(stream, text_type):
        yaml_str = stream  # type: Any
    elif isinstance(stream, binary_type):
        # most likely, but the Reader checks BOM for this
        yaml_str = stream.decode("utf-8")
    else:
        yaml_str = stream.read()
    map_indent = None
    indent = None  # default if not found for some reason
    block_seq_indent = None
    prev_line_key_only = None
    key_indent = 0
    for line in yaml_str.splitlines():
        rline = line.rstrip()
        lline = rline.lstrip()
        if lline.startswith("- "):
            l_s = leading_spaces(line)
            block_seq_indent = l_s - key_indent
            idx = l_s + 1
            while line[idx] == " ":  # this will end as we rstripped
                idx += 1
            if line[idx] == "#":  # comment after -
                continue
            indent = idx - key_indent
            break
        if map_indent is None and prev_line_key_only is not None and rline:
            idx = 0
            while line[idx] in " -":
                idx += 1
            if idx > prev_line_key_only:
                map_indent = idx - prev_line_key_only
        if rline.endswith(":"):
            key_indent = leading_spaces(line)
            idx = 0
            while line[idx] == " ":  # this will end on ':'
                idx += 1
            prev_line_key_only = idx
            continue
        prev_line_key_only = None
    if indent is None and map_indent is not None:
        indent = map_indent
    return round_trip_load(yaml_str, **kw), indent, block_seq_indent


def configobj_walker(cfg):
    # type: (Any) -> Any
    """
    walks over a ConfigObj (INI file with comments) generating
    corresponding YAML output (including comments
    """
    from configobj import ConfigObj  # type: ignore

    assert isinstance(cfg, ConfigObj)
    for c in cfg.initial_comment:
        if c.strip():
            yield c
    for s in _walk_section(cfg):
        if s.strip():
            yield s
    for c in cfg.final_comment:
        if c.strip():
            yield c


def _walk_section(s, level=0):
    # type: (Any, int) -> Any
    from configobj import Section

    assert isinstance(s, Section)
    indent = u"  " * level
    for name in s.scalars:
        for c in s.comments[name]:
            yield indent + c.strip()
        x = s[name]
        if u"\n" in x:
            i = indent + u"  "
            x = u"|\n" + i + x.strip().replace(u"\n", u"\n" + i)
        elif ":" in x:
            x = u"'" + x.replace(u"'", u"''") + u"'"
        line = u"{0}{1}: {2}".format(indent, name, x)
        c = s.inline_comments[name]
        if c:
            line += u" " + c
        yield line
    for name in s.sections:
        for c in s.comments[name]:
            yield indent + c.strip()
        line = u"{0}{1}:".format(indent, name)
        c = s.inline_comments[name]
        if c:
            line += u" " + c
        yield line
        for val in _walk_section(s[name], level=level + 1):
            yield val


# def config_obj_2_rt_yaml(cfg):
#     from .comments import CommentedMap, CommentedSeq
#     from configobj import ConfigObj
#     assert
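`load_yaml_guess_indent` infers the mapping indent and the block-sequence indent purely from the text, by comparing the column of the first `- ` item (and of its content) with the column of the key that precedes it. The core of that heuristic can be sketched without any YAML parsing at all (the function name and `(indent, block_seq_indent)` return shape are mine):

```python
def guess_seq_indent(yaml_str):
    """Return (indent, block_seq_indent) guessed from the first block sequence.

    indent           -- spaces from the key's column to the item's content
    block_seq_indent -- spaces from the key's column to the dash
    """
    def leading_spaces(line):
        idx = 0
        while idx < len(line) and line[idx] == " ":
            idx += 1
        return idx

    key_indent = 0
    for line in yaml_str.splitlines():
        rline = line.rstrip()
        if rline.lstrip().startswith("- "):
            dash_col = leading_spaces(line)
            content_col = dash_col + 1
            while line[content_col] == " ":  # skip spaces after the dash
                content_col += 1
            return content_col - key_indent, dash_col - key_indent
        if rline.endswith(":"):  # remember the column of the last key seen
            key_indent = leading_spaces(line)
    return None, None


assert guess_seq_indent("a:\n  - x\n  - y\n") == (4, 2)
assert guess_seq_indent("a:\n- x\n") == (2, 0)
```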
#     isinstance(cfg, ConfigObj)
#     #for c in cfg.initial_comment:
#     #    if c.strip():
#     #        pass
#     cm = CommentedMap()
#     for name in s.sections:
#         cm[name] = d = CommentedMap()
#
#
#     #for c in cfg.final_comment:
#     #    if c.strip():
#     #        yield c
#     return cm

strictyaml-1.6.1/strictyaml/scalar.py

import math

from strictyaml.exceptions import YAMLSerializationError
from strictyaml.validators import Validator
from strictyaml.representation import YAML
from strictyaml import constants
from strictyaml import utils
from datetime import datetime
import dateutil.parser
import decimal
import sys
import re

from strictyaml.ruamel.scalarstring import PreservedScalarString

if sys.version_info[0] == 3:
    unicode = str


class ScalarValidator(Validator):
    @property
    def rule_description(self):
        return "a {0}".format(self.__class__.__name__.lower())

    def __call__(self, chunk):
        chunk.expect_scalar(self.rule_description)
        return YAML(chunk, validator=self)

    def validate(self, chunk):
        return self.validate_scalar(chunk)

    def should_be_string(self, data, message):
        if not utils.is_string(data):
            raise YAMLSerializationError(
                "{0} got '{1}' of type {2}.".format(message, data, type(data).__name__)
            )

    def validate_scalar(self, chunk):
        raise NotImplementedError("validate_scalar(self, chunk) must be implemented")


class Enum(ScalarValidator):
    def __init__(self, restricted_to, item_validator=None):
        self._item_validator = Str() if item_validator is None else item_validator
        assert isinstance(
            self._item_validator, ScalarValidator
        ), "item validator must be scalar too"
        self._restricted_to = restricted_to

    def validate_scalar(self, chunk):
        val = self._item_validator(chunk)
        val._validator = self
        if val.scalar not in self._restricted_to:
            chunk.expecting_but_found(
                "when expecting one of: {0}".format(
                    ", ".join(map(str, self._restricted_to))
                )
            )
        else:
            return val

    def to_yaml(self, data):
        if data not in self._restricted_to:
            raise YAMLSerializationError(
                "Got '{0}' when expecting one of: {1}".format(
                    data, ", ".join(map(str, self._restricted_to))
                )
            )
        return self._item_validator.to_yaml(data)

    def __repr__(self):
        # TODO : item_validator
        return u"Enum({0})".format(repr(self._restricted_to))


class CommaSeparated(ScalarValidator):
    def __init__(self, item_validator):
        self._item_validator = item_validator
        assert isinstance(
            self._item_validator, ScalarValidator
        ), "item validator must be scalar too"

    def validate_scalar(self, chunk):
        if chunk.contents == "":
            return []
        return [
            self._item_validator.validate_scalar(
                chunk.textslice(positions[0], positions[1])
            )
            for positions in utils.comma_separated_positions(chunk.contents)
        ]

    def to_yaml(self, data):
        if isinstance(data, list):
            return ", ".join([self._item_validator.to_yaml(item) for item in data])
        elif utils.is_string(data):
            for item in data.split(","):
                self._item_validator.to_yaml(item)
            return data
        else:
            raise YAMLSerializationError(
                "expected string or list, got '{}' of type '{}'".format(
                    data, type(data).__name__
                )
            )

    def __repr__(self):
        return "CommaSeparated({0})".format(self._item_validator)


class Regex(ScalarValidator):
    def __init__(self, regular_expression):
        """
        Give regular expression, e.g.
        u'[0-9]'
        """
        self._regex = regular_expression
        # re.fullmatch is only available in Python 3.4+ so append "$" if needed
        if not regular_expression.endswith(r"$"):
            regular_expression += r"$"
        self._fullmatch = re.compile(regular_expression).match
        self._matching_message = "when expecting string matching {0}".format(
            self._regex
        )

    def validate_scalar(self, chunk):
        if self._fullmatch(chunk.contents) is None:
            chunk.expecting_but_found(
                self._matching_message, "found non-matching string"
            )
        return chunk.contents

    def to_yaml(self, data):
        self.should_be_string(data, self._matching_message)
        if self._fullmatch(data) is None:
            raise YAMLSerializationError(
                "{} found '{}'".format(self._matching_message, data)
            )
        return data


class Email(Regex):
    def __init__(self):
        super(Email, self).__init__(constants.REGEXES["email"])
        self._matching_message = "when expecting an email address"


class Url(Regex):
    def __init__(self):
        super(Url, self).__init__(constants.REGEXES["url"])
        self._matching_message = "when expecting a url"


class Str(ScalarValidator):
    def validate_scalar(self, chunk):
        return chunk.contents

    def to_yaml(self, data):
        if not utils.is_string(data):
            raise YAMLSerializationError("'{}' is not a string".format(data))
        if "\n" in data:
            return PreservedScalarString(data)
        return data


class Int(ScalarValidator):
    def validate_scalar(self, chunk):
        val = chunk.contents
        if not utils.is_integer(val):
            chunk.expecting_but_found("when expecting an integer")
        else:
            # Only Python 3.6+ supports underscores in numeric literals
            return int(val.replace("_", ""))

    def to_yaml(self, data):
        if utils.is_string(data) or isinstance(data, int):
            if utils.is_integer(str(data)):
                return str(data)
        raise YAMLSerializationError("'{}' not an integer.".format(data))


class HexInt(ScalarValidator):
    def validate_scalar(self, chunk):
        val = chunk.contents
        if not utils.is_hexadecimal(val):
            chunk.expecting_but_found("when expecting a hexadecimal integer")
        return int(val, 16)

    def to_yaml(self, data):
        if utils.is_hexadecimal(data):
            if isinstance(data, int):
                return hex(data)
            else:
                return data
        raise YAMLSerializationError("'{}' not a hexadecimal integer.".format(data))


class Bool(ScalarValidator):
    def validate_scalar(self, chunk):
        val = chunk.contents
        if unicode(val).lower() not in constants.BOOL_VALUES:
            chunk.expecting_but_found(
                """when expecting a boolean value (one of "{0}")""".format(
                    '", "'.join(constants.BOOL_VALUES)
                )
            )
        else:
            if val.lower() in constants.TRUE_VALUES:
                return True
            else:
                return False

    def to_yaml(self, data):
        if not isinstance(data, bool):
            if str(data).lower() in constants.BOOL_VALUES:
                return data
            else:
                raise YAMLSerializationError("Not a boolean")
        else:
            return u"yes" if data else u"no"


class Float(ScalarValidator):
    def validate_scalar(self, chunk):
        val = chunk.contents
        if utils.is_infinity(val) or utils.is_not_a_number(val):
            val = val.replace(".", "")
        elif not utils.is_decimal(val):
            chunk.expecting_but_found("when expecting a float")
        # Only Python 3.6+ supports underscores in numeric literals
        return float(val.replace("_", ""))

    def to_yaml(self, data):
        if utils.has_number_type(data):
            if math.isnan(data):
                return "nan"
            if data == float("inf"):
                return "inf"
            if data == float("-inf"):
                return "-inf"
            return str(data)
        if utils.is_string(data) and utils.is_decimal(data):
            return data
        raise YAMLSerializationError("when expecting a float, got '{}'".format(data))


class Decimal(ScalarValidator):
    def validate_scalar(self, chunk):
        val = chunk.contents
        if not utils.is_decimal(val):
            chunk.expecting_but_found("when expecting a decimal")
        else:
            return decimal.Decimal(val)


class Datetime(ScalarValidator):
    def validate_scalar(self, chunk):
        try:
            return dateutil.parser.parse(chunk.contents)
        except ValueError:
            chunk.expecting_but_found("when expecting a datetime")

    def to_yaml(self, data):
        if isinstance(data, datetime):
            return data.isoformat()
        if utils.is_string(data):
            try:
                dateutil.parser.parse(data)
                return data
            except ValueError:
                raise YAMLSerializationError(
                    "expected a datetime, got '{}'".format(data)
) raise YAMLSerializationError( "expected a datetime, got '{}' of type '{}'".format( data, type(data).__name__ ) ) class NullNone(ScalarValidator): def validate_scalar(self, chunk): val = chunk.contents if val.lower() != "null": chunk.expecting_but_found( "when expecting a 'null', got '{}' instead.".format(val) ) else: return self.empty(chunk) def empty(self, chunk): return None def to_yaml(self, data): if data is None: return u"null" raise YAMLSerializationError("expected None, got '{}'") class EmptyNone(ScalarValidator): def validate_scalar(self, chunk): val = chunk.contents if val != "": chunk.expecting_but_found("when expecting an empty value") else: return self.empty(chunk) def empty(self, chunk): return None def to_yaml(self, data): if data is None: return u"" raise YAMLSerializationError("expected None, got '{}'") class EmptyDict(EmptyNone): def empty(self, chunk): return {} def to_yaml(self, data): if data == {}: return u"" raise YAMLSerializationError("Not an empty dict") class EmptyList(EmptyNone): def empty(self, chunk): return [] def to_yaml(self, data): if data == []: return u"" raise YAMLSerializationError("expected empty list, got '{}'") strictyaml-1.6.1/strictyaml/utils.py000066400000000000000000000130161415074265300176270ustar00rootroot00000000000000from strictyaml.ruamel.comments import CommentedSeq, CommentedMap from strictyaml import exceptions from re import compile import decimal import sys if sys.version_info[:2] > (3, 4): from collections.abc import Iterable else: from collections import Iterable if sys.version_info[0] == 3: unicode = str def flatten(items): """ Yield items from any nested iterable. >>> list(flatten([[1, 2, 3], [[4, 5], 6, 7]])) [1, 2, 3, 4, 5, 6, 7] """ for x in items: if isinstance(x, Iterable) and not isinstance(x, (str, bytes)): for sub_x in flatten(x): yield sub_x else: yield x def has_number_type(value): """ Is a value a number or a non-number? 
    >>> has_number_type(3.5)
    True

    >>> has_number_type(3)
    True

    >>> has_number_type(decimal.Decimal("3.5"))
    True

    >>> has_number_type("3.5")
    False

    >>> has_number_type(True)
    False
    """
    return isinstance(value, (int, float, decimal.Decimal)) and not isinstance(
        value, bool
    )


def is_string(value):
    """
    Python 2/3 compatible way of checking if a value is a string.
    """
    return isinstance(value, unicode) or str(type(value)) in (
        "<type 'str'>",
        "<type 'unicode'>",
        "<class 'str'>",
    )


def is_integer(value):
    """
    Is a string a string of an integer?

    >>> is_integer("4")
    True

    >>> is_integer("4_000")
    True

    >>> is_integer("3.4")
    False
    """
    return compile(r"^[-+]?[0-9_]+$").match(value) is not None


def is_hexadecimal(value):
    """
    Is a string a string of a hexadecimal integer?

    >>> is_hexadecimal("0xa1")
    True

    >>> is_hexadecimal("0XA1")
    True

    >>> is_hexadecimal("0xa1x")
    False

    >>> is_hexadecimal("xa1")
    False

    >>> is_hexadecimal("a1")
    False

    >>> is_hexadecimal("1")
    False
    """
    if isinstance(value, int) and not isinstance(value, bool):
        # HexInt.to_yaml calls this with already-parsed ints as well as strings.
        return True
    return compile(r"^0[xX][a-fA-F0-9]+$").match(value) is not None


def is_decimal(value):
    """
    Is a string a decimal?

    >>> is_decimal("4")
    True

    >>> is_decimal("4_000")
    True

    >>> is_decimal("3.5")
    True

    >>> is_decimal("4.")
    True

    >>> is_decimal("4.000_001")
    True

    >>> is_decimal("blah")
    False
    """
    return (
        compile(r"^[-+]?[0-9_]*(\.[0-9_]*)?([eE][-+]?[0-9_]+)?$").match(value)
        is not None
    )


def is_infinity(value):
    """
    Is string a valid representation for positive or negative infinity?

    Valid formats are: [+/-]inf, [+/-]INF, [+/-]Inf, [+/-].inf, [+/-].INF and [+/-].Inf

    >>> is_infinity(".inf")
    True

    >>> is_infinity("+.INF")
    True

    >>> is_infinity("-.Inf")
    True

    >>> is_infinity("Inf")
    True

    >>> is_infinity("INF")
    True

    >>> is_infinity("-INF")
    True

    >>> is_infinity("infinitesimal")
    False
    """
    return compile(r"^[-+]?\.?(?:inf|Inf|INF)$").match(value) is not None


def is_not_a_number(value):
    """
    Is string a valid representation for 'not a number'?

    Valid formats are: nan, NaN, NAN, .nan, .NaN, .NAN.

    >>> is_not_a_number(".nan")
    True

    >>> is_not_a_number(".NaN")
    True

    >>> is_not_a_number("NAN")
    True

    >>> is_not_a_number("nan")
    True

    >>> is_not_a_number("nanan")
    False

    >>> is_not_a_number("1e5")
    False
    """
    return compile(r"^\.?(?:nan|NaN|NAN)$").match(value) is not None


def comma_separated_positions(text):
    """
    Start and end positions of comma separated text items.

    Commas and trailing spaces should not be included.

    >>> comma_separated_positions("ABC, 2,3")
    [(0, 3), (5, 6), (7, 8)]
    """
    chunks = []
    start = 0
    end = 0
    for item in text.split(","):
        space_increment = 1 if item[0] == " " else 0
        start += space_increment  # Is there a space after the comma to ignore? ", "
        end += len(item.lstrip()) + space_increment
        chunks.append((start, end))
        start += len(item.lstrip()) + 1  # Plus comma
        end = start
    return chunks


def ruamel_structure(data, validator=None):
    """
    Take dicts and lists and return a strictyaml.ruamel style structure of
    CommentedMaps, CommentedSeqs and data.

    If a validator is presented and the type is unknown, it is checked against
    the validator to see if it will turn it back in to YAML.
    """
    if isinstance(data, dict):
        if len(data) == 0:
            raise exceptions.CannotBuildDocumentsFromEmptyDictOrList(
                "Document must be built with non-empty dicts and lists"
            )
        return CommentedMap(
            [
                (ruamel_structure(key), ruamel_structure(value))
                for key, value in data.items()
            ]
        )
    elif isinstance(data, list):
        if len(data) == 0:
            raise exceptions.CannotBuildDocumentsFromEmptyDictOrList(
                "Document must be built with non-empty dicts and lists"
            )
        return CommentedSeq([ruamel_structure(item) for item in data])
    elif isinstance(data, bool):
        return u"yes" if data else u"no"
    elif isinstance(data, (int, float)):
        return str(data)
    else:
        if not is_string(data):
            raise exceptions.CannotBuildDocumentFromInvalidData(
                (
                    "Document must be built from a combination of:\n"
                    "string, int, float, bool or nonempty list/dict\n\n"
                    "Instead, found variable with type '{}': '{}'"
                ).format(type(data).__name__, data)
            )
        return data


strictyaml-1.6.1/strictyaml/validators.py

from strictyaml.exceptions import YAMLValidationError, YAMLSerializationError
from strictyaml.exceptions import InvalidValidatorError
from strictyaml.representation import YAML
from strictyaml import utils
import sys

if sys.version_info[0] == 3:
    unicode = str


class Validator(object):
    def __or__(self, other):
        return OrValidator(self, other)

    def __call__(self, chunk):
        self.validate(chunk)
        return YAML(chunk, validator=self)

    def __repr__(self):
        return u"{0}()".format(self.__class__.__name__)


class MapValidator(Validator):
    def _should_be_mapping(self, data):
        if not isinstance(data, dict):
            raise YAMLSerializationError("Expected a dict, found '{}'".format(data))
        if len(data) == 0:
            raise YAMLSerializationError(
                (
                    "Expected a non-empty dict, found an empty dict.\n"
                    "Use EmptyDict validator to serialize empty dicts."
                )
            )


class SeqValidator(Validator):
    def _should_be_list(self, data):
        if not isinstance(data, list):
            raise YAMLSerializationError("Expected a list, found '{}'".format(data))
        if len(data) == 0:
            raise YAMLSerializationError(
                (
                    "Expected a non-empty list, found an empty list.\n"
                    "Use EmptyList validator to serialize empty lists."
                )
            )


class OrValidator(Validator):
    def __init__(self, validator_a, validator_b):
        assert isinstance(validator_a, Validator), "validator_a must be a Validator"
        assert isinstance(validator_b, Validator), "validator_b must be a Validator"
        self._validator_a = validator_a
        self._validator_b = validator_b

        def unpacked(validator):
            if isinstance(validator, OrValidator):
                return [
                    unpacked(validator._validator_a),
                    unpacked(validator._validator_b),
                ]
            else:
                return [validator]

        map_validator_count = len(
            [
                validator
                for validator in list(utils.flatten(unpacked(self)))
                if isinstance(validator, MapValidator)
            ]
        )

        if map_validator_count > 1:
            raise InvalidValidatorError(
                (
                    "You tried to Or ('|') together {} Map validators. "
                    "Try using revalidation instead."
                ).format(map_validator_count)
            )

        seq_validator_count = len(
            [
                validator
                for validator in list(utils.flatten(unpacked(self)))
                if isinstance(validator, SeqValidator)
            ]
        )

        if seq_validator_count > 1:
            raise InvalidValidatorError(
                (
                    "You tried to Or ('|') together {} Seq validators. "
                    "Try using revalidation instead."
                ).format(seq_validator_count)
            )

    def to_yaml(self, value):
        try:
            return self._validator_a.to_yaml(value)
        except YAMLSerializationError:
            return self._validator_b.to_yaml(value)

    def __call__(self, chunk):
        try:
            result = self._validator_a(chunk)
            result._selected_validator = result._validator
            result._validator = self
            return result
        except YAMLValidationError:
            result = self._validator_b(chunk)
            result._selected_validator = result._validator
            result._validator = self
            return result

    def __repr__(self):
        return u"{0} | {1}".format(repr(self._validator_a), repr(self._validator_b))


strictyaml-1.6.1/strictyaml/yamllocation.py

from strictyaml.ruamel.comments import CommentedSeq, CommentedMap
from strictyaml.exceptions import YAMLValidationError
from strictyaml.yamlpointer import YAMLPointer
from strictyaml import utils
from copy import deepcopy, copy
import sys

if sys.version_info[0] == 3:
    unicode = str


class YAMLChunk(object):
    """
    Represents a section of the document with references to the ruamel parsed
    document and the strictparsed document.

    Most operations done by validators on the document are done using this object.

    Before validation the strictparsed document will be identical to the
    ruamelparsed document. After it will contain CommentedMaps, CommentedSeqs
    and YAML objects.
    """

    def __init__(
        self,
        ruamelparsed,
        pointer=None,
        label=None,
        strictparsed=None,
        key_association=None,
    ):
        self._ruamelparsed = ruamelparsed
        self._strictparsed = (
            deepcopy(ruamelparsed) if strictparsed is None else strictparsed
        )
        self._pointer = pointer if pointer is not None else YAMLPointer()
        self._label = label
        # Associates strictparsed key names with ruamelparsed key names
        # E.g.
        # "my-key-name" -> "My Key name"
        self._key_association = {} if key_association is None else key_association

    def expecting_but_found(self, expecting, found=None):
        raise YAMLValidationError(
            expecting,
            found if found is not None else "found {0}".format(self.found()),
            self,
        )

    def while_parsing_found(self, what, found=None):
        self.expecting_but_found("while parsing {0}".format(what), found=found)

    def process(self, new_item):
        strictparsed = self.pointer.parent().get(self._strictparsed, strictdoc=True)
        current_parsed = (
            strictparsed._value if hasattr(strictparsed, "_value") else strictparsed
        )

        def actual_key_from_string_key(string_key):
            if string_key in current_parsed.keys():
                return string_key
            else:
                for key in current_parsed.keys():
                    if hasattr(key, "_value"):
                        if key.text == string_key:
                            return key

        if self.pointer.is_index():
            current_parsed[self.pointer.last_index] = new_item
        elif self.pointer.is_val():
            current_parsed[
                actual_key_from_string_key(self.pointer.last_regularkey)
            ] = new_item
        elif self.pointer.is_key():
            key = actual_key_from_string_key(self.pointer.last_regularkey)
            existing_val = current_parsed[key]
            del current_parsed[key]
            current_parsed[new_item] = existing_val

    def is_sequence(self):
        return isinstance(self.contents, CommentedSeq)

    def is_mapping(self):
        return isinstance(self.contents, CommentedMap)

    def is_scalar(self):
        return not isinstance(self.contents, (CommentedMap, CommentedSeq))

    def found(self):
        if self.is_sequence():
            return u"a sequence"
        elif self.is_mapping():
            return u"a mapping"
        elif self.contents == u"":
            return u"a blank string"
        elif utils.is_integer(self.contents):
            return u"an arbitrary integer"
        elif utils.is_decimal(self.contents):
            return u"an arbitrary number"
        else:
            return u"arbitrary text"

    def expect_sequence(self, expecting="when expecting a sequence"):
        if not self.is_sequence():
            self.expecting_but_found(expecting, "found {0}".format(self.found()))
        return [self.index(i) for i in range(len(self.contents))]

    def expect_mapping(self):
        if not self.is_mapping():
            self.expecting_but_found(
                "when expecting a mapping", "found {0}".format(self.found())
            )
        return [
            (
                self.key(regular_key, unicode(validated_key)),
                self.val(unicode(validated_key)),
            )
            for (regular_key, validated_key) in zip(
                self.contents.keys(), self.strictparsed().keys()
            )
        ]

    def expect_scalar(self, what):
        if not self.is_scalar():
            self.expecting_but_found(
                "when expecting {0}".format(what), "found {0}".format(self.found())
            )

    @property
    def label(self):
        return self._label

    @property
    def whole_document(self):
        return self._ruamelparsed

    @property
    def pointer(self):
        return self._pointer

    def fork(self, strictindex, new_value):
        """
        Return a chunk referring to the same location in a duplicated document.

        Used when modifying a YAML chunk so that the modification can be
        validated before changing it.
        """
        forked_chunk = YAMLChunk(
            deepcopy(self._ruamelparsed),
            pointer=self.pointer,
            label=self.label,
            key_association=copy(self._key_association),
        )
        if self.is_scalar():
            # Necessary for e.g. EmptyDict, which reports as a scalar.
            forked_chunk.pointer.set(forked_chunk, "_ruamelparsed", CommentedMap())
            forked_chunk.pointer.set(
                forked_chunk, "_strictparsed", CommentedMap(), strictdoc=True
            )
        forked_chunk.contents[self.ruamelindex(strictindex)] = new_value.as_marked_up()
        forked_chunk.strictparsed()[strictindex] = deepcopy(new_value.as_marked_up())
        return forked_chunk

    def add_key_association(self, unprocessed_key, processed_key):
        self._key_association[processed_key] = unprocessed_key

    @property
    def key_association(self):
        return self._key_association

    def make_child_of(self, chunk):
        """
        Link one YAML chunk to another.

        Used when inserting a chunk of YAML into another chunk.
        """
        if self.is_mapping():
            for key, value in self.contents.items():
                self.key(key, key).pointer.make_child_of(chunk.pointer)
                self.val(key).make_child_of(chunk)
        elif self.is_sequence():
            for index, item in enumerate(self.contents):
                self.index(index).make_child_of(chunk)
        else:
            self.pointer.make_child_of(chunk.pointer)

    def _select(self, pointer):
        """
        Get a YAMLChunk referenced by a pointer.
        """
        return YAMLChunk(
            self._ruamelparsed,
            pointer=pointer,
            label=self._label,
            strictparsed=self._strictparsed,
            key_association=copy(self._key_association),
        )

    def index(self, strictindex):
        """
        Return a chunk in a sequence referenced by index.
        """
        return self._select(self._pointer.index(self.ruamelindex(strictindex)))

    def ruamelindex(self, strictindex):
        """
        Get the ruamel equivalent of a strict parsed index.

        E.g. 0 -> 0, 1 -> 2, parsed-via-slugify -> Parsed via slugify
        """
        return (
            self.key_association.get(strictindex, strictindex)
            if self.is_mapping()
            else strictindex
        )

    def val(self, strictkey):
        """
        Return a chunk referencing a value in a mapping with the key 'key'.
        """
        ruamelkey = self.ruamelindex(strictkey)
        return self._select(self._pointer.val(ruamelkey, strictkey))

    def key(self, key, strictkey=None):
        """
        Return a chunk referencing a key in a mapping with the name 'key'.
        """
        return self._select(self._pointer.key(key, strictkey))

    def textslice(self, start, end):
        """
        Return a chunk referencing a slice of a scalar text value.
        """
        return self._select(self._pointer.textslice(start, end))

    def start_line(self):
        return self._pointer.start_line(self._ruamelparsed)

    def end_line(self):
        return self._pointer.end_line(self._ruamelparsed)

    def lines(self):
        return self._pointer.lines(self._ruamelparsed)

    def lines_before(self, how_many):
        return self._pointer.lines_before(self._ruamelparsed, how_many)

    def lines_after(self, how_many):
        return self._pointer.lines_after(self._ruamelparsed, how_many)

    @property
    def contents(self):
        return self._pointer.get(self._ruamelparsed)

    def strictparsed(self):
        return self._pointer.get(self._strictparsed, strictdoc=True)


strictyaml-1.6.1/strictyaml/yamlpointer.py

from strictyaml.ruamel.comments import CommentedSeq, CommentedMap
from strictyaml.ruamel import dump, RoundTripDumper
from copy import deepcopy
import sys

if sys.version_info[0] == 3:
    unicode = str


class YAMLPointer(object):
    """
    A sequence of indexes/keys that look up a specific chunk of a YAML document.

    A YAML pointer can point to a key, value, item in a sequence or
    part of a string in a value or item.
    """

    def __init__(self):
        self._indices = []

    @property
    def last_index(self):
        assert self.is_index()
        return self._indices[-1][1]

    @property
    def last_val(self):
        assert self.is_val()
        return self._indices[-1][1]

    @property
    def last_strictkey(self):
        assert self.is_key() or self.is_val()
        return self._indices[-1][1][1]

    @property
    def last_regularkey(self):
        assert self.is_key() or self.is_val()
        return self._indices[-1][1][0]

    def val(self, regularkey, strictkey):
        assert isinstance(regularkey, (str, unicode)), type(regularkey)
        assert isinstance(strictkey, (str, unicode)), type(strictkey)
        new_location = deepcopy(self)
        new_location._indices.append(("val", (regularkey, strictkey)))
        return new_location

    def is_val(self):
        return self._indices[-1][0] == "val"

    def key(self, regularkey, strictkey):
        assert isinstance(regularkey, (str, unicode)), type(regularkey)
        assert isinstance(strictkey, (str, unicode)), type(strictkey)
        new_location = deepcopy(self)
        new_location._indices.append(("key", (regularkey, strictkey)))
        return new_location

    def is_key(self):
        return self._indices[-1][0] == "key"

    def index(self, index):
        new_location = deepcopy(self)
        new_location._indices.append(("index", index))
        return new_location

    def is_index(self):
        return self._indices[-1][0] == "index"

    def textslice(self, start, end):
        new_location = deepcopy(self)
        new_location._indices.append(("textslice", (start, end)))
        return new_location

    def is_textslice(self):
        return self._indices[-1][0] == "textslice"

    def parent(self):
        new_location = deepcopy(self)
        new_location._indices = new_location._indices[:-1]
        return new_location

    def make_child_of(self, pointer):
        new_indices = deepcopy(pointer._indices)
        new_indices.extend(self._indices)
        self._indices = new_indices

    def _slice_segment(self, indices, segment, include_selected):
        slicedpart = deepcopy(segment)

        if len(indices) == 0 and not include_selected:
            slicedpart = None
        else:
            if len(indices) > 0:
                if indices[0][0] in ("val", "key"):
                    index = indices[0][1][0]
                else:
                    index = indices[0][1]
                start_popping = False

                if isinstance(segment, CommentedMap):
                    for key in segment.keys():
                        if start_popping:
                            slicedpart.pop(key)
                        if index == key:
                            start_popping = True

                            if isinstance(segment[index], (CommentedSeq, CommentedMap)):
                                slicedpart[index] = self._slice_segment(
                                    indices[1:],
                                    segment[index],
                                    include_selected=include_selected,
                                )

                            if not include_selected and len(indices) == 1:
                                slicedpart.pop(key)

                if isinstance(segment, CommentedSeq):
                    for i, value in enumerate(segment):
                        if start_popping:
                            del slicedpart[-1]
                        if i == index:
                            start_popping = True

                            if isinstance(segment[index], (CommentedSeq, CommentedMap)):
                                slicedpart[index] = self._slice_segment(
                                    indices[1:],
                                    segment[index],
                                    include_selected=include_selected,
                                )

                            if not include_selected and len(indices) == 1:
                                slicedpart.pop(index)
        return slicedpart

    def start_line(self, document):
        slicedpart = self._slice_segment(
            self._indices, document, include_selected=False
        )
        if slicedpart is None or slicedpart == {} or slicedpart == []:
            return 1
        else:
            return (
                len(dump(slicedpart, Dumper=RoundTripDumper).rstrip().split("\n")) + 1
            )

    def end_line(self, document):
        slicedpart = self._slice_segment(self._indices, document, include_selected=True)
        return len(dump(slicedpart, Dumper=RoundTripDumper).rstrip().split("\n"))

    def lines(self, document):
        return "\n".join(
            dump(document, Dumper=RoundTripDumper).split("\n")[
                self.start_line(document) - 1 : self.end_line(document)
            ]
        )

    def lines_before(self, document, how_many):
        return "\n".join(
            dump(document, Dumper=RoundTripDumper).split("\n")[
                self.start_line(document) - 1 - how_many : self.start_line(document) - 1
            ]
        )

    def lines_after(self, document, how_many):
        return "\n".join(
            dump(document, Dumper=RoundTripDumper).split("\n")[
                self.end_line(document) : self.end_line(document) + how_many
            ]
        )

    def _individual_get(self, segment, index_type, index, strictdoc):
        if index_type == "val":
            for key, value in segment.items():
                if key == index[0]:
                    return value
                if hasattr(key, "text"):
                    if key.text == index[0]:
                        return value
            raise Exception("Invalid state")
        elif index_type == "index":
            return segment[index]
        elif index_type == "textslice":
            return segment[index[0] : index[1]]
        elif index_type == "key":
            return index[1] if strictdoc else index[0]
        else:
            raise Exception("Invalid state")

    def get(self, document, strictdoc=False):
        segment = document
        for index_type, index in self._indices:
            segment = self._individual_get(segment, index_type, index, strictdoc)
        return segment

    def set(self, src_obj, src_attr, new_ruamel, strictdoc=False):
        """Since set() needs to overwrite what this pointer points to, it
        affects the parent object. Therefore, rather than taking "document"
        as get(), it takes the object which holds the document and the
        name of the property which is the document.
        """
        obj_last = src_obj
        key_last = src_attr
        r = getattr(src_obj, src_attr)
        for index_type, index in self._indices:
            obj_last = r
            if index_type == "val":
                key_last = index[1] if strictdoc else index[0]
                r = r[key_last]
            elif index_type == "index":
                key_last = index
                r = r[key_last]
            elif index_type == "textslice":
                key_last = None
                r = r[index[0] : index[1]]
            elif index_type == "key":
                key_last = None
                r = index[1] if strictdoc else index[0]
            else:
                raise RuntimeError("Invalid state")
        if obj_last is src_obj:
            # Starts with an attribute set
            setattr(src_obj, src_attr, new_ruamel)
        elif key_last is not None:
            # Others are item set
            if hasattr(obj_last, "_value"):
                # Only want to overwrite value, do NOT re-validate schema...
                obj_last._value[key_last] = new_ruamel
            else:
                obj_last[key_last] = new_ruamel
        else:
            raise NotImplementedError("invalid key, cannot set")

    def __repr__(self):
        return "<YAMLPointer: {0}>".format(self._indices)
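The scalar validators above lean on small regular expressions in `utils.py` rather than Python's own `int()`/`float()` parsing, so that only YAML-friendly spellings are accepted. The following is a minimal stand-alone sketch of those checks, runnable without strictyaml; the regexes mirror `is_integer`, `is_hexadecimal` and `is_decimal`, while `parse_scalar` is a hypothetical helper, not a library function:

```python
import re

# Stand-alone re-creations of the utils.py predicates.
INTEGER_RE = re.compile(r"^[-+]?[0-9_]+$")
HEX_RE = re.compile(r"^0[xX][a-fA-F0-9]+$")
DECIMAL_RE = re.compile(r"^[-+]?[0-9_]*(\.[0-9_]*)?([eE][-+]?[0-9_]+)?$")


def parse_scalar(text):
    """Classify a scalar roughly the way Int/HexInt/Float would,
    returning a (kind, value) pair or raising ValueError."""
    if INTEGER_RE.match(text):
        # Underscore separators are stripped before conversion, as in Int.
        return ("int", int(text.replace("_", "")))
    if HEX_RE.match(text):
        # HexInt parses case-insensitively via int(text, 16).
        return ("hexint", int(text, 16))
    if DECIMAL_RE.match(text):
        return ("float", float(text.replace("_", "")))
    raise ValueError("not a recognised numeric scalar: {!r}".format(text))
```

Note that the order of the checks matters: `"4"` matches both the integer and decimal patterns, so the integer branch must come first to preserve the type.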
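`comma_separated_positions` computes the character spans of items inside a comma-separated scalar so that validation errors can point at the offending item. The same bookkeeping can be sketched independently; this is a simplified re-implementation (`comma_positions` is an illustrative name, not the library function), which also happens to tolerate multiple leading spaces:

```python
def comma_positions(text):
    """Return (start, end) spans of each comma-separated item,
    excluding leading spaces and the commas themselves."""
    spans = []
    cursor = 0
    for item in text.split(","):
        stripped = item.lstrip()
        start = cursor + (len(item) - len(stripped))  # skip leading spaces
        spans.append((start, start + len(stripped)))
        cursor += len(item) + 1  # move past this item and the comma
    return spans
```

The spans line up with the docstring example in the module above: `comma_positions("ABC, 2,3")` yields `[(0, 3), (5, 6), (7, 8)]`.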
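`YAMLPointer.get` resolves a chain of `(index_type, index)` steps against the parsed document. Stripped of ruamel key objects and the strict/non-strict distinction, the core idea is just a fold over the index list; the sketch below uses plain dicts and lists and an assumed helper name:

```python
def pointer_get(document, indices):
    """Resolve a chain of ('index', i) / ('val', key) / ('textslice', (a, b))
    steps against nested dicts and lists, like a simplified YAMLPointer.get."""
    segment = document
    for index_type, index in indices:
        if index_type == "index":
            segment = segment[index]          # sequence item
        elif index_type == "val":
            segment = segment[index]          # mapping value
        elif index_type == "textslice":
            segment = segment[index[0]:index[1]]  # slice of a scalar
        else:
            raise ValueError("unknown index type: " + index_type)
    return segment
```

Because each step narrows the current segment, a pointer built while validating a nested schema can later re-locate exactly the same chunk, which is how strictyaml reports line-accurate errors.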
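`OrValidator.to_yaml` tries the left-hand validator first and falls back to the right-hand one only when serialization raises. The same first-success-wins pattern can be shown in isolation; the helper and the two toy serializers below are illustrative stand-ins, not strictyaml APIs:

```python
def first_that_serializes(serializers, value):
    """Try each serializer in order and return the first result;
    mirrors OrValidator.to_yaml's try/except chaining."""
    errors = []
    for serialize in serializers:
        try:
            return serialize(value)
        except ValueError as err:
            errors.append(str(err))
    raise ValueError("no serializer accepted {!r}: {}".format(value, "; ".join(errors)))


def as_int(value):
    # bool is a subclass of int in Python, so exclude it explicitly.
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError("not an int")
    return str(value)


def as_bool(value):
    if not isinstance(value, bool):
        raise ValueError("not a bool")
    return "yes" if value else "no"
```

The `isinstance(value, bool)` guard matters: without it, `Int | Bool`-style fallback would serialize `True` as `"1"` instead of `"yes"`.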
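`Float.to_yaml` special-cases `nan` and the two infinities so they round-trip as the YAML-style scalars the `is_infinity`/`is_not_a_number` predicates accept, instead of Python's `repr` output. A self-contained sketch of that branch (the function name is illustrative):

```python
import math


def float_to_yaml(value):
    """Serialize a number the way Float.to_yaml does, with
    nan/inf emitted as YAML-friendly scalars."""
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise ValueError("expected a number, got {!r}".format(value))
    if isinstance(value, float) and math.isnan(value):
        return "nan"
    if value == float("inf"):
        return "inf"
    if value == float("-inf"):
        return "-inf"
    return str(value)
```

The `isnan` check has to come before any equality comparison, since `nan` compares unequal to everything including itself.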