Cerberus-1.3.2/0000755000076500000240000000000013556067066013675 5ustar nicolastaff00000000000000Cerberus-1.3.2/AUTHORS0000644000076500000240000000214613556066427014750 0ustar nicolastaff00000000000000Cerberus is developed and maintained by the Cerberus community. It was created by Nicola Iarocci. Core maintainers ~~~~~~~~~~~~~~~~ - Nicola Iarocci (nicolaiarocci) - Frank Sachsenheim (funkyfuture) Contributors ~~~~~~~~~~~~ - Antoine Lubineau - Arsh Singh - Audric Schiltknecht - Brandon Aubie - Brett - Bruno Oliveira - Bryan W. Weber - C.D. Clark III - Christian Hogan - Connor Zapfel - Damián Nohales - Danielle Pizzolli - Davis Kirkendall - Denis Carriere - Dominik Kellner - Eelke Hermens - Evgeny Odegov - Florian Rathgeber - Gabriel Wainer - Harro van der Klauw - Jaroslav Semančík - Jonathan Huot - Kaleb Pomeroy - Kirill Pavlov - Kornelijus Survila - Lujeni - Luke Bechtel - Luo Peng - Martijn Vermaat - Martin Ortbauer - Matthew Ellison - Michael Klich - Nik Haldimann - Nikita Melentev - Nikita Vlaznev - Paul Weaver - Peter Demin - Riccardo - Roman Redkovich - Scott Crunkleton - Sebastian Heid - Sebastian Rajo - Sergey Leshchenko - Tobias Betz - Trong Hieu HA - Vipul Gupta - Waldir Pimenta - calve - gilbsgilbs A full, up-to-date list of contributors is available from git with: git shortlog -sne Cerberus-1.3.2/CONTRIBUTING.rst0000644000076500000240000000621113461521741016324 0ustar nicolastaff00000000000000How to Contribute ================= Contributions are welcome! Not familiar with the codebase yet? No problem! There are many ways to contribute to open source projects: reporting bugs, helping with the documentation, spreading the word and of course, adding new features and patches. .. note:: There's currently a feature freeze until the basic code modernization for the 2.0 release is finished. Have a look at the ``ROADMAP.md`` for a status on its progress. Getting Started --------------- #. Make sure you have a GitHub account. #. 
Open a `new issue`_, assuming one does not already exist. #. Clearly describe the issue including steps to reproduce when it is a bug. Making Changes -------------- * Fork_ the repository on GitHub. * Create a topic branch from where you want to base your work. * This is usually the ``master`` branch. * Please avoid working directly on the ``master`` branch. * Make commits of logical units (if needed rebase your feature branch before submitting it). * Make sure your commit messages are in the `proper format`_. * If your commit fixes an open issue, reference it in the commit message (#15). * Make sure you have added the necessary tests for your changes. * Run all the tests to ensure nothing else was accidentally broken. * Install and enable pre-commit_ (``pip install pre-commit``, then ``pre-commit install``) to ensure style guides and code checks are followed. CI will reject a change that does not conform to the guidelines. * Don't forget to add yourself to AUTHORS_. These guidelines also apply when helping with documentation (actually, for typos and minor additions you might choose to `fork and edit`_). .. _pre-commit: https://pre-commit.com/ Submitting Changes ------------------ * Push your changes to a topic branch in your fork of the repository. * Submit a `Pull Request`_. * Wait for maintainer feedback. First-time contributor? ----------------------- It's alright. We've all been there. Don't know where to start? -------------------------- There are usually several TODO comments scattered around the codebase, maybe check them out and see if you have ideas, or can help with them. Also, check the `open issues`_ in case there's something that sparks your interest. What about documentation? I suck at English, so if you're fluent in it (or notice any error), why not help with that? In any case, other than GitHub help_ pages, you might want to check this excellent `Effective Guide to Pull Requests`_. .. 
_AUTHORS: https://github.com/pyeve/cerberus/blob/master/AUTHORS .. _`open issues`: https://github.com/pyeve/cerberus/issues .. _`new issue`: https://github.com/pyeve/cerberus/issues/new .. _Fork: https://help.github.com/articles/fork-a-repo .. _`proper format`: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html .. _help: https://help.github.com/ .. _`Effective Guide to Pull Requests`: http://codeinthehole.com/writing/pull-requests-and-other-good-practices-for-teams-using-github/ .. _`fork and edit`: https://github.com/blog/844-forking-with-the-edit-button .. _`Pull Request`: https://help.github.com/articles/creating-a-pull-request Cerberus-1.3.2/Cerberus.egg-info/0000755000076500000240000000000013556067066017141 5ustar nicolastaff00000000000000Cerberus-1.3.2/Cerberus.egg-info/PKG-INFO0000644000076500000240000001461713556067066020247 0ustar nicolastaff00000000000000Metadata-Version: 1.2 Name: Cerberus Version: 1.3.2 Summary: Lightweight, extensible schema and data validation tool for Python dictionaries. Home-page: http://docs.python-cerberus.org Author: Nicola Iarocci Author-email: nicola@nicolaiarocci.com Maintainer: Frank Sachsenheim Maintainer-email: funkyfuture@riseup.net License: ISC Project-URL: Documentation, http://python-cerberus.org Project-URL: Code, https://github.com/pyeve/cerberus Project-URL: Issue tracker, https://github.com/pyeve/cerberus/issues Description: Cerberus |latest-version| ========================= |build-status| |python-support| |black| Cerberus is a lightweight and extensible data validation library for Python. .. code-block:: python >>> v = Validator({'name': {'type': 'string'}}) >>> v.validate({'name': 'john doe'}) True Features -------- Cerberus provides type checking and other base functionality out of the box and is designed to be non-blocking and easily and widely extensible, allowing for custom validation. It has no dependencies, but has the potential to become yours. 
Versioning & Interpreter support -------------------------------- The Cerberus `1.x` versions can be used with Python 2 while version `2.0` and later rely on Python 3 features. Starting with Cerberus 1.2, it is maintained according to `semantic versioning`_. So, a major release sheds off the old and defines a space for the new, minor releases ship further new features and improvements (you know the drill, new bugs are inevitable too), and micro releases polish a definite amount of features to glory. We intend to test Cerberus against all CPython interpreters at least until half a year after their `end of life`_ and against the most recent PyPy interpreter as a requirement for a release. If you still need to use it with a potential security hole in your setup, it should most probably work with the latest minor version branch from the time when the interpreter was still tested. Subsequent minor versions have good chances as well. In any case, you are advised to run the contributed test suite on your target system. Funding ------- Cerberus is an open source, collaboratively funded project. If you run a business and are using Cerberus in a revenue-generating product, it would make business sense to sponsor its development: it ensures the project that your product relies on stays healthy and actively maintained. Individual users are also welcome to make a recurring pledge or a one-time donation if Cerberus has helped you in your work or personal projects. Every single sign-up makes a significant impact towards making Cerberus possible. To learn more, check out our `funding page`_. Documentation ------------- Complete documentation is available at http://docs.python-cerberus.org Installation ------------ Cerberus is on PyPI_, so all you need to do is: .. code-block:: console $ pip install cerberus Testing ------- Just run: .. code-block:: console $ python setup.py test Or you can use tox to run the tests under all supported Python versions. 
Make sure the required python versions are installed and run: .. code-block:: console $ pip install tox # first time only $ tox Contributing ------------ Please see the `Contribution Guidelines`_. Copyright --------- Cerberus is an open source project by `Nicola Iarocci`_. See the license_ file for more information. .. _Contribution Guidelines: https://github.com/pyeve/cerberus/blob/master/CONTRIBUTING.rst .. _end of life: https://devguide.python.org/#status-of-python-branches .. _funding page: http://docs.python-cerberus.org/en/latest/funding.html .. _license: https://github.com/pyeve/cerberus/blob/master/LICENSE .. _Nicola Iarocci: http://nicolaiarocci.com/ .. _PyPI: https://pypi.python.org/ .. _semantic versioning: https://semver.org/ .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg :alt: Black code style :target: https://black.readthedocs.io/ .. |build-status| image:: https://travis-ci.org/pyeve/cerberus.svg?branch=master :alt: Build status :target: https://travis-ci.org/pyeve/cerberus .. |latest-version| image:: https://img.shields.io/pypi/v/cerberus.svg :alt: Latest version on PyPI :target: https://pypi.org/project/cerberus .. |license| image:: https://img.shields.io/pypi/l/cerberus.svg :alt: Software license :target: https://github.com/pyeve/cerberus/blob/master/LICENSE .. 
|python-support| image:: https://img.shields.io/pypi/pyversions/cerberus.svg :target: https://pypi.python.org/pypi/cerberus :alt: Python versions Keywords: validation,schema,dictionaries,documents,normalization Platform: any Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: License :: OSI Approved :: ISC License (ISCL) Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Requires-Python: >=2.7 Cerberus-1.3.2/Cerberus.egg-info/SOURCES.txt0000644000076500000240000000136013556067066021025 0ustar nicolastaff00000000000000AUTHORS CONTRIBUTING.rst LICENSE MANIFEST.in README.rst ROADMAP.md UPGRADING.rst setup.cfg setup.py Cerberus.egg-info/PKG-INFO Cerberus.egg-info/SOURCES.txt Cerberus.egg-info/dependency_links.txt Cerberus.egg-info/pbr.json Cerberus.egg-info/requires.txt Cerberus.egg-info/top_level.txt cerberus/__init__.py cerberus/errors.py cerberus/platform.py cerberus/schema.py cerberus/utils.py cerberus/validator.py cerberus/tests/__init__.py cerberus/tests/conftest.py cerberus/tests/test_assorted.py cerberus/tests/test_customization.py cerberus/tests/test_errors.py cerberus/tests/test_legacy.py cerberus/tests/test_normalization.py cerberus/tests/test_registries.py cerberus/tests/test_schema.py cerberus/tests/test_utils.py 
cerberus/tests/test_validation.pyCerberus-1.3.2/Cerberus.egg-info/dependency_links.txt0000644000076500000240000000000113556067066023207 0ustar nicolastaff00000000000000 Cerberus-1.3.2/Cerberus.egg-info/pbr.json0000644000076500000240000000005713205476473020616 0ustar nicolastaff00000000000000{"is_release": false, "git_version": "b8b26f9"}Cerberus-1.3.2/Cerberus.egg-info/requires.txt0000644000076500000240000000001313556067066021533 0ustar nicolastaff00000000000000setuptools Cerberus-1.3.2/Cerberus.egg-info/top_level.txt0000644000076500000240000000001113556067066021663 0ustar nicolastaff00000000000000cerberus Cerberus-1.3.2/LICENSE0000644000076500000240000000135713323621536014676 0ustar nicolastaff00000000000000ISC License Copyright (c) 2012-2016 Nicola Iarocci. Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Cerberus-1.3.2/MANIFEST.in0000644000076500000240000000020513523220261015406 0ustar nicolastaff00000000000000include AUTHORS include CHANGES include CONTRIBUTING.rst include LICENSE include README.rst include ROADMAP.md include UPGRADING.rst Cerberus-1.3.2/PKG-INFO0000644000076500000240000001461713556067066015003 0ustar nicolastaff00000000000000Metadata-Version: 1.2 Name: Cerberus Version: 1.3.2 Summary: Lightweight, extensible schema and data validation tool for Python dictionaries. 
Home-page: http://docs.python-cerberus.org Author: Nicola Iarocci Author-email: nicola@nicolaiarocci.com Maintainer: Frank Sachsenheim Maintainer-email: funkyfuture@riseup.net License: ISC Project-URL: Documentation, http://python-cerberus.org Project-URL: Code, https://github.com/pyeve/cerberus Project-URL: Issue tracker, https://github.com/pyeve/cerberus/issues Description: Cerberus |latest-version| ========================= |build-status| |python-support| |black| Cerberus is a lightweight and extensible data validation library for Python. .. code-block:: python >>> v = Validator({'name': {'type': 'string'}}) >>> v.validate({'name': 'john doe'}) True Features -------- Cerberus provides type checking and other base functionality out of the box and is designed to be non-blocking and easily and widely extensible, allowing for custom validation. It has no dependencies, but has the potential to become yours. Versioning & Interpreter support -------------------------------- The Cerberus `1.x` versions can be used with Python 2 while version `2.0` and later rely on Python 3 features. Starting with Cerberus 1.2, it is maintained according to `semantic versioning`_. So, a major release sheds off the old and defines a space for the new, minor releases ship further new features and improvements (you know the drill, new bugs are inevitable too), and micro releases polish a definite amount of features to glory. We intend to test Cerberus against all CPython interpreters at least until half a year after their `end of life`_ and against the most recent PyPy interpreter as a requirement for a release. If you still need to use it with a potential security hole in your setup, it should most probably work with the latest minor version branch from the time when the interpreter was still tested. Subsequent minor versions have good chances as well. In any case, you are advised to run the contributed test suite on your target system. 
Funding ------- Cerberus is an open source, collaboratively funded project. If you run a business and are using Cerberus in a revenue-generating product, it would make business sense to sponsor its development: it ensures the project that your product relies on stays healthy and actively maintained. Individual users are also welcome to make a recurring pledge or a one-time donation if Cerberus has helped you in your work or personal projects. Every single sign-up makes a significant impact towards making Cerberus possible. To learn more, check out our `funding page`_. Documentation ------------- Complete documentation is available at http://docs.python-cerberus.org Installation ------------ Cerberus is on PyPI_, so all you need to do is: .. code-block:: console $ pip install cerberus Testing ------- Just run: .. code-block:: console $ python setup.py test Or you can use tox to run the tests under all supported Python versions. Make sure the required Python versions are installed and run: .. code-block:: console $ pip install tox # first time only $ tox Contributing ------------ Please see the `Contribution Guidelines`_. Copyright --------- Cerberus is an open source project by `Nicola Iarocci`_. See the license_ file for more information. .. _Contribution Guidelines: https://github.com/pyeve/cerberus/blob/master/CONTRIBUTING.rst .. _end of life: https://devguide.python.org/#status-of-python-branches .. _funding page: http://docs.python-cerberus.org/en/latest/funding.html .. _license: https://github.com/pyeve/cerberus/blob/master/LICENSE .. _Nicola Iarocci: http://nicolaiarocci.com/ .. _PyPI: https://pypi.python.org/ .. _semantic versioning: https://semver.org/ .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg :alt: Black code style :target: https://black.readthedocs.io/ .. |build-status| image:: https://travis-ci.org/pyeve/cerberus.svg?branch=master :alt: Build status :target: https://travis-ci.org/pyeve/cerberus .. 
|latest-version| image:: https://img.shields.io/pypi/v/cerberus.svg :alt: Latest version on PyPI :target: https://pypi.org/project/cerberus .. |license| image:: https://img.shields.io/pypi/l/cerberus.svg :alt: Software license :target: https://github.com/pyeve/cerberus/blob/master/LICENSE .. |python-support| image:: https://img.shields.io/pypi/pyversions/cerberus.svg :target: https://pypi.python.org/pypi/cerberus :alt: Python versions Keywords: validation,schema,dictionaries,documents,normalization Platform: any Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Natural Language :: English Classifier: License :: OSI Approved :: ISC License (ISCL) Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Requires-Python: >=2.7 Cerberus-1.3.2/README.rst0000644000076500000240000000775613461521741015371 0ustar nicolastaff00000000000000Cerberus |latest-version| ========================= |build-status| |python-support| |black| Cerberus is a lightweight and extensible data validation library for Python. .. code-block:: python >>> v = Validator({'name': {'type': 'string'}}) >>> v.validate({'name': 'john doe'}) True Features -------- Cerberus provides type checking and other base functionality out of the box and is designed to be non-blocking and easily and widely extensible, allowing for custom validation. 
It has no dependencies, but has the potential to become yours. Versioning & Interpreter support -------------------------------- The Cerberus `1.x` versions can be used with Python 2 while version `2.0` and later rely on Python 3 features. Starting with Cerberus 1.2, it is maintained according to `semantic versioning`_. So, a major release sheds off the old and defines a space for the new, minor releases ship further new features and improvements (you know the drill, new bugs are inevitable too), and micro releases polish a definite amount of features to glory. We intend to test Cerberus against all CPython interpreters at least until half a year after their `end of life`_ and against the most recent PyPy interpreter as a requirement for a release. If you still need to use it with a potential security hole in your setup, it should most probably work with the latest minor version branch from the time when the interpreter was still tested. Subsequent minor versions have good chances as well. In any case, you are advised to run the contributed test suite on your target system. Funding ------- Cerberus is an open source, collaboratively funded project. If you run a business and are using Cerberus in a revenue-generating product, it would make business sense to sponsor its development: it ensures the project that your product relies on stays healthy and actively maintained. Individual users are also welcome to make a recurring pledge or a one-time donation if Cerberus has helped you in your work or personal projects. Every single sign-up makes a significant impact towards making Cerberus possible. To learn more, check out our `funding page`_. Documentation ------------- Complete documentation is available at http://docs.python-cerberus.org Installation ------------ Cerberus is on PyPI_, so all you need to do is: .. code-block:: console $ pip install cerberus Testing ------- Just run: .. 
code-block:: console $ python setup.py test Or you can use tox to run the tests under all supported Python versions. Make sure the required python versions are installed and run: .. code-block:: console $ pip install tox # first time only $ tox Contributing ------------ Please see the `Contribution Guidelines`_. Copyright --------- Cerberus is an open source project by `Nicola Iarocci`_. See the license_ file for more information. .. _Contribution Guidelines: https://github.com/pyeve/cerberus/blob/master/CONTRIBUTING.rst .. _end of life: https://devguide.python.org/#status-of-python-branches .. _funding page: http://docs.python-cerberus.org/en/latest/funding.html .. _license: https://github.com/pyeve/cerberus/blob/master/LICENSE .. _Nicola Iarocci: http://nicolaiarocci.com/ .. _PyPI: https://pypi.python.org/ .. _semantic versioning: https://semver.org/ .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg :alt: Black code style :target: https://black.readthedocs.io/ .. |build-status| image:: https://travis-ci.org/pyeve/cerberus.svg?branch=master :alt: Build status :target: https://travis-ci.org/pyeve/cerberus .. |latest-version| image:: https://img.shields.io/pypi/v/cerberus.svg :alt: Latest version on PyPI :target: https://pypi.org/project/cerberus .. |license| image:: https://img.shields.io/pypi/l/cerberus.svg :alt: Software license :target: https://github.com/pyeve/cerberus/blob/master/LICENSE .. |python-support| image:: https://img.shields.io/pypi/pyversions/cerberus.svg :target: https://pypi.python.org/pypi/cerberus :alt: Python versions Cerberus-1.3.2/ROADMAP.md0000644000076500000240000000631313523220261015263 0ustar nicolastaff00000000000000# Cerberus development and support roadmap This document lays out a roadmap for the further development of Cerberus in the next few years, particularly in anticipation of the decay of Python 2. 
## Assumptions There are some assumptions that guide the following: - The support of CPython 2.7 will end on January 1st, 2020. (See [Python Developer’s Guide](https://devguide.python.org/#status-of-python-branches)) - Supporting Python 2 and 3 comes with trade-offs. - Everything is an object. ## Roadmap ### 1.3 release The release is estimated to be ready in mid or late 2018. The planned fixes and features are listed [here](https://github.com/pyeve/cerberus/milestone/6). It will contain a finalized version of this document. ### Branching off 1.3.x After that release, a new branch `1.3.x` is created. This one will continue to support Python 2 and receive bug fixes *at least* until December 31, 2019. A *feature freeze* for functionality of the public API is declared. #### Checklist - [x] The `README.rst` and `CONTRIBUTING.rst` are updated accordingly. - [ ] 1.3 is released. - [ ] 1.3.x branch is created. ### Modernization and consolidation This phase is designated to update the codebase with fundamental implications. #### Checklist - [ ] All Python 2 related code is removed. - [ ] Python 3 features that allow simpler code are applied where feasible. - [ ] A Python 3-style metaclass. - [ ] Using `super()` to call overridden methods. - [ ] Usage of dictionary comprehensions. - [ ] All *public* functions and methods are type annotated. MyPy is added to the test suite to validate these. - [ ] A wider choice of type names that are closer oriented on the builtin names are available. (#374) - [ ] Objects from the `typing` module can be used as constraints for the `type` rule. (#374) - [ ] The `schema` rule only handles mappings, a new `itemrules` replaces the part where `schema` tested items in sequences so far. There will be no backward-compatibility for schemas. (#385) - [ ] The rules `keyschema` and `valueschema` are renamed to `keyrules` and `valuerules`, backward-compatibility for schemas will be provided. (#385) - [ ] Implementations of rules, coercers etc. 
can be, and contributed ones should be, qualified as such by metadata-annotating decorators. (With the intent to clean up the code and make extensions simpler.) (#372) - [ ] Dependency injection for all kinds of handlers. (#279,#314) - [ ] The feature freeze gets lifted and the `CONTRIBUTING.rst` is updated accordingly. - [ ] The module `dataclasses` is implemented. This may get postponed until a following minor release. (#397) #### Undecided issues - Which Python version will be the minimum to support? - CPython 3.4 will be EOL before 2.7, and 3.5 brings some extensions to the `inspect` module that would ease implementing dependency injection. - The name `itemrules`. - Should the result be released as 2.0.a1? ### 2.0 release After a series of release candidates, a final 2.0 with new features might be available by the end of 2018. #### Checklist - [ ] The `DocumentError` exception is replaced with an error. (#141) - [ ] Include a guide on upgrading from 1.x. - [ ] Remove this document. Cerberus-1.3.2/UPGRADING.rst0000644000076500000240000001004513461521741015735 0ustar nicolastaff00000000000000Upgrading to Cerberus 1.0 ========================= Major Additions --------------- Error Handling .............. The inspection and representation of errors has been thoroughly overhauled and allows more detailed and flexible handling. Make sure you have a look at :doc:`errors`. Also, :attr:`~cerberus.Validator.errors` (as provided by the default :class:`~cerberus.errors.BasicErrorHandler`) values are lists containing error messages, and possibly a ``dict`` as the last item containing nested errors. Previously, they were strings if single errors per field occurred; lists otherwise. Deprecations ------------ ``Validator`` class ................... transparent_schema_rules ~~~~~~~~~~~~~~~~~~~~~~~~ In the past you could override the schema validation by setting ``transparent_schema_rules`` to ``True``. 
Now all rules whose implementing method's docstring contains a schema to validate that rule's arguments in the validation schema are validated. To omit the schema validation for a particular rule, just omit that definition, but consider it a bad practice. The :class:`~cerberus.Validator`-attribute and -initialization-argument ``transparent_schema_rules`` are removed without replacement. validate_update ~~~~~~~~~~~~~~~ The method ``validate_update`` has been removed from :class:`~cerberus.Validator`. Instead use :meth:`~cerberus.Validator.validate` with the keyword-argument ``update`` set to ``True``. Rules ..... items (for mappings) ~~~~~~~~~~~~~~~~~~~~ The usage of the ``items``-rule is restricted to sequences. If you still had schemas that used that rule to validate :term:`mappings `, just rename these instances to ``schema`` (:ref:`docs `). keyschema & valueschema ~~~~~~~~~~~~~~~~~~~~~~~ To reflect the common terms in the Pythoniverse [#]_, the rule for validating all *values* of a :term:`mapping` was renamed from ``keyschema`` to ``valueschema``. Furthermore, a rule was implemented to validate all *keys*, introduced as ``propertyschema``, now renamed to ``keyschema``. This means code using prior versions of cerberus would not break, but would produce wrong results! To update your code you may follow cerberus' own iteration: 1. Rename ``keyschema`` to ``valueschema`` in your schemas. (``0.9``) 2. Rename ``propertyschema`` to ``keyschema`` in your schemas. (``1.0``) Note that ``propertyschema`` will *not* be handled as an alias like ``keyschema`` was in the ``0.9``-branch. Custom validators ................. Data types ~~~~~~~~~~ Since the ``type``-rule allowed multiple arguments, cerberus' type validation code was somewhat cumbersome, as it had to deal with the circumstance that each type checking method could file an error though another one may not - and thus positively validate the constraint as a whole. 
The refactoring of the error handling allows cerberus' type validation to be much more lightweight and to formulate the corresponding methods in a simpler way. Previously such a method would test what a value *is not* and submit an error. Now a method tests what a value *is* to be expected and returns ``True`` in that case. This is the most critical part of updating your code, but still easy when your head is clear. Of course your code is well tested. It's essentially these three steps. Search, Replace and Regex may come at your service. 1. Remove the second method's argument (probably named ``field``). 2. Invert the logic of the conditional clauses where is tested what a value is not / has not. 3. Replace calls to ``self._error`` below such clauses with ``return True``. A method doesn't need to return ``False`` or any value when expected criteria are not met. Here's the change from the :ref:`documentation ` example. pre-1.0: .. code-block:: python def _validate_type_objectid(self, field, value): if not re.match('[a-f0-9]{24}', value): self._error(field, errors.BAD_TYPE) 1.0: .. code-block:: python def _validate_type_objectid(self, value): if re.match('[a-f0-9]{24}', value): return True .. [#] compare :term:`dictionary` Cerberus-1.3.2/cerberus/0000755000076500000240000000000013556067066015507 5ustar nicolastaff00000000000000Cerberus-1.3.2/cerberus/__init__.py0000644000076500000240000000143313462011151017575 0ustar nicolastaff00000000000000""" Extensible validation for Python dictionaries. :copyright: 2012-2016 by Nicola Iarocci. :license: ISC, see LICENSE for more details. 
Full documentation is available at http://python-cerberus.org/ """ from __future__ import absolute_import from pkg_resources import get_distribution, DistributionNotFound from cerberus.validator import DocumentError, Validator from cerberus.schema import rules_set_registry, schema_registry, SchemaError from cerberus.utils import TypeDefinition try: __version__ = get_distribution("Cerberus").version except DistributionNotFound: __version__ = "unknown" __all__ = [ DocumentError.__name__, SchemaError.__name__, TypeDefinition.__name__, Validator.__name__, "schema_registry", "rules_set_registry", ] Cerberus-1.3.2/cerberus/errors.py0000644000076500000240000005130013523220247017360 0ustar nicolastaff00000000000000# -*- coding: utf-8 -*- """ This module contains the error-related constants and classes. """ from __future__ import absolute_import from collections import defaultdict, namedtuple from copy import copy, deepcopy from functools import wraps from pprint import pformat from cerberus.platform import PYTHON_VERSION, MutableMapping from cerberus.utils import compare_paths_lt, quote_string ErrorDefinition = namedtuple('ErrorDefinition', 'code, rule') """ This class is used to define possible errors. Each distinguishable error is defined by a *unique* error ``code`` as integer and the ``rule`` that can cause it as string. The instances' names do not contain a common prefix as they are supposed to be referenced within the module namespace, e.g. ``errors.CUSTOM``. 
""" # custom CUSTOM = ErrorDefinition(0x00, None) # existence DOCUMENT_MISSING = ErrorDefinition(0x01, None) # issues/141 DOCUMENT_MISSING = "document is missing" REQUIRED_FIELD = ErrorDefinition(0x02, 'required') UNKNOWN_FIELD = ErrorDefinition(0x03, None) DEPENDENCIES_FIELD = ErrorDefinition(0x04, 'dependencies') DEPENDENCIES_FIELD_VALUE = ErrorDefinition(0x05, 'dependencies') EXCLUDES_FIELD = ErrorDefinition(0x06, 'excludes') # shape DOCUMENT_FORMAT = ErrorDefinition(0x21, None) # issues/141 DOCUMENT_FORMAT = "'{0}' is not a document, must be a dict" EMPTY_NOT_ALLOWED = ErrorDefinition(0x22, 'empty') NOT_NULLABLE = ErrorDefinition(0x23, 'nullable') BAD_TYPE = ErrorDefinition(0x24, 'type') BAD_TYPE_FOR_SCHEMA = ErrorDefinition(0x25, 'schema') ITEMS_LENGTH = ErrorDefinition(0x26, 'items') MIN_LENGTH = ErrorDefinition(0x27, 'minlength') MAX_LENGTH = ErrorDefinition(0x28, 'maxlength') # color REGEX_MISMATCH = ErrorDefinition(0x41, 'regex') MIN_VALUE = ErrorDefinition(0x42, 'min') MAX_VALUE = ErrorDefinition(0x43, 'max') UNALLOWED_VALUE = ErrorDefinition(0x44, 'allowed') UNALLOWED_VALUES = ErrorDefinition(0x45, 'allowed') FORBIDDEN_VALUE = ErrorDefinition(0x46, 'forbidden') FORBIDDEN_VALUES = ErrorDefinition(0x47, 'forbidden') MISSING_MEMBERS = ErrorDefinition(0x48, 'contains') # other NORMALIZATION = ErrorDefinition(0x60, None) COERCION_FAILED = ErrorDefinition(0x61, 'coerce') RENAMING_FAILED = ErrorDefinition(0x62, 'rename_handler') READONLY_FIELD = ErrorDefinition(0x63, 'readonly') SETTING_DEFAULT_FAILED = ErrorDefinition(0x64, 'default_setter') # groups ERROR_GROUP = ErrorDefinition(0x80, None) MAPPING_SCHEMA = ErrorDefinition(0x81, 'schema') SEQUENCE_SCHEMA = ErrorDefinition(0x82, 'schema') # TODO remove KEYSCHEMA AND VALUESCHEMA with next major release KEYSRULES = KEYSCHEMA = ErrorDefinition(0x83, 'keysrules') VALUESRULES = VALUESCHEMA = ErrorDefinition(0x84, 'valuesrules') BAD_ITEMS = ErrorDefinition(0x8F, 'items') LOGICAL = ErrorDefinition(0x90, None) NONEOF 
= ErrorDefinition(0x91, 'noneof') ONEOF = ErrorDefinition(0x92, 'oneof') ANYOF = ErrorDefinition(0x93, 'anyof') ALLOF = ErrorDefinition(0x94, 'allof') """ SchemaError messages """ SCHEMA_ERROR_DEFINITION_TYPE = "schema definition for field '{0}' must be a dict" SCHEMA_ERROR_MISSING = "validation schema missing" """ Error representations """ class ValidationError(object): """ A simple class to store and query basic error information. """ def __init__(self, document_path, schema_path, code, rule, constraint, value, info): self.document_path = document_path """ The path to the field within the document that caused the error. Type: :class:`tuple` """ self.schema_path = schema_path """ The path to the rule within the schema that caused the error. Type: :class:`tuple` """ self.code = code """ The error's identifier code. Type: :class:`int` """ self.rule = rule """ The rule that failed. Type: `string` """ self.constraint = constraint """ The constraint that failed. """ self.value = value """ The value that failed. """ self.info = info """ May hold additional information about the error. Type: :class:`tuple` """ def __eq__(self, other): """ Assumes the errors relate to the same document and schema. """ return hash(self) == hash(other) def __hash__(self): """ Expects that all other properties are transitively determined. 
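The `is_group_error` / `is_logic_error` predicates defined further down reduce to bit tests on the error code. A stdlib-only illustration (the constants are copied from `ERROR_GROUP` and `LOGICAL` above; the explicit parentheses mirror Python's precedence in the original expression, where `-` binds tighter than `&`):

```python
# Constants mirroring ERROR_GROUP.code (0x80) and LOGICAL.code (0x90) above.
ERROR_GROUP_CODE = 0x80
LOGICAL_CODE = 0x90


def is_group_error(code):
    # Any code with the 0x80 bit set belongs to a bulk/group validation.
    return bool(code & ERROR_GROUP_CODE)


def is_logic_error(code):
    # LOGICAL_CODE - ERROR_GROUP_CODE == 0x10: the bit that marks *of-rules.
    return bool(code & (LOGICAL_CODE - ERROR_GROUP_CODE))


assert is_group_error(0x92)       # ONEOF is a group error...
assert is_logic_error(0x92)       # ...and a logic error
assert is_group_error(0x81)       # MAPPING_SCHEMA is a group error...
assert not is_logic_error(0x81)   # ...but not a logic error
assert not is_group_error(0x42)   # MIN_VALUE is a plain error
```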
""" return hash(self.document_path) ^ hash(self.schema_path) ^ hash(self.code) def __lt__(self, other): if self.document_path != other.document_path: return compare_paths_lt(self.document_path, other.document_path) else: return compare_paths_lt(self.schema_path, other.schema_path) def __repr__(self): return ( "{class_name} @ {memptr} ( " "document_path={document_path}," "schema_path={schema_path}," "code={code}," "constraint={constraint}," "value={value}," "info={info} )".format( class_name=self.__class__.__name__, memptr=hex(id(self)), # noqa: E501 document_path=self.document_path, schema_path=self.schema_path, code=hex(self.code), constraint=quote_string(self.constraint), value=quote_string(self.value), info=self.info, ) ) @property def child_errors(self): """ A list that contains the individual errors of a bulk validation error. """ return self.info[0] if self.is_group_error else None @property def definitions_errors(self): """ Dictionary with errors of an *of-rule mapped to the index of the definition it occurred in. Returns :obj:`None` if not applicable. """ if not self.is_logic_error: return None result = defaultdict(list) for error in self.child_errors: i = error.schema_path[len(self.schema_path)] result[i].append(error) return result @property def field(self): """ Field of the contextual mapping, possibly :obj:`None`. """ if self.document_path: return self.document_path[-1] else: return None @property def is_group_error(self): """ ``True`` for errors of bulk validations. """ return bool(self.code & ERROR_GROUP.code) @property def is_logic_error(self): """ ``True`` for validation errors against different schemas with *of-rules. """ return bool(self.code & LOGICAL.code - ERROR_GROUP.code) @property def is_normalization_error(self): """ ``True`` for normalization errors. 
""" return bool(self.code & NORMALIZATION.code) class ErrorList(list): """ A list for :class:`~cerberus.errors.ValidationError` instances that can be queried with the ``in`` keyword for a particular :class:`~cerberus.errors.ErrorDefinition`. """ def __contains__(self, error_definition): if not isinstance(error_definition, ErrorDefinition): raise TypeError wanted_code = error_definition.code return any(x.code == wanted_code for x in self) class ErrorTreeNode(MutableMapping): __slots__ = ('descendants', 'errors', 'parent_node', 'path', 'tree_root') def __init__(self, path, parent_node): self.parent_node = parent_node self.tree_root = self.parent_node.tree_root self.path = path[: self.parent_node.depth + 1] self.errors = ErrorList() self.descendants = {} def __contains__(self, item): if isinstance(item, ErrorDefinition): return item in self.errors else: return item in self.descendants def __delitem__(self, key): del self.descendants[key] def __iter__(self): return iter(self.errors) def __getitem__(self, item): if isinstance(item, ErrorDefinition): for error in self.errors: if item.code == error.code: return error return None else: return self.descendants.get(item) def __len__(self): return len(self.errors) def __repr__(self): return self.__str__() def __setitem__(self, key, value): self.descendants[key] = value def __str__(self): return str(self.errors) + ',' + str(self.descendants) @property def depth(self): return len(self.path) @property def tree_type(self): return self.tree_root.tree_type def add(self, error): error_path = self._path_of_(error) key = error_path[self.depth] if key not in self.descendants: self[key] = ErrorTreeNode(error_path, self) node = self[key] if len(error_path) == self.depth + 1: node.errors.append(error) node.errors.sort() if error.is_group_error: for child_error in error.child_errors: self.tree_root.add(child_error) else: node.add(error) def _path_of_(self, error): return getattr(error, self.tree_type + '_path') class 
ErrorTree(ErrorTreeNode): """ Base class for :class:`~cerberus.errors.DocumentErrorTree` and :class:`~cerberus.errors.SchemaErrorTree`. """ def __init__(self, errors=()): self.parent_node = None self.tree_root = self self.path = () self.errors = ErrorList() self.descendants = {} for error in errors: self.add(error) def add(self, error): """ Add an error to the tree. :param error: :class:`~cerberus.errors.ValidationError` """ if not self._path_of_(error): self.errors.append(error) self.errors.sort() else: super(ErrorTree, self).add(error) def fetch_errors_from(self, path): """ Returns all errors for a particular path. :param path: :class:`tuple` of :term:`hashable` s. :rtype: :class:`~cerberus.errors.ErrorList` """ node = self.fetch_node_from(path) if node is not None: return node.errors else: return ErrorList() def fetch_node_from(self, path): """ Returns a node for a path. :param path: Tuple of :term:`hashable` s. :rtype: :class:`~cerberus.errors.ErrorTreeNode` or :obj:`None` """ context = self for key in path: context = context[key] if context is None: break return context class DocumentErrorTree(ErrorTree): """ Implements a dict-like class to query errors by indexes following the structure of a validated document. """ tree_type = 'document' class SchemaErrorTree(ErrorTree): """ Implements a dict-like class to query errors by indexes following the structure of the used schema. """ tree_type = 'schema' class BaseErrorHandler(object): """ Base class for all error handlers. Subclasses are identified as error-handlers with an instance-test. """ def __init__(self, *args, **kwargs): """ Optionally initialize a new instance. """ pass def __call__(self, errors): """ Returns errors in a handler-specific format. :param errors: An object containing the errors. 
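The traversal that `ErrorTree.fetch_node_from` performs can be sketched with plain dicts: walk a nested mapping along a path tuple and return `None` at the first dead end. This is a stdlib-only sketch of the lookup logic, not the `ErrorTreeNode` implementation itself:

```python
def fetch_node_from(tree, path):
    """Walk `tree` (nested dicts) along `path`; None on a missing key."""
    node = tree
    for key in path:
        node = node.get(key) if isinstance(node, dict) else None
        if node is None:
            break
    return node


tree = {'a': {0: {'b': 'error here'}}}
assert fetch_node_from(tree, ('a', 0, 'b')) == 'error here'
assert fetch_node_from(tree, ('a', 1)) is None
assert fetch_node_from(tree, ()) is tree
```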
:type errors: :term:`iterable` of :class:`~cerberus.errors.ValidationError` instances or a :class:`~cerberus.Validator` instance """ raise NotImplementedError def __iter__(self): """ Be a superhero and implement an iterator over errors. """ raise NotImplementedError def add(self, error): """ Add an error to the errors' container object of a handler. :param error: The error to add. :type error: :class:`~cerberus.errors.ValidationError` """ raise NotImplementedError def emit(self, error): """ Optionally emits an error in the handler's format to a stream. Or light a LED, or even shut down a power plant. :param error: The error to emit. :type error: :class:`~cerberus.errors.ValidationError` """ pass def end(self, validator): """ Gets called when a validation ends. :param validator: The calling validator. :type validator: :class:`~cerberus.Validator` """ pass def extend(self, errors): """ Adds all errors to the handler's container object. :param errors: The errors to add. :type errors: :term:`iterable` of :class:`~cerberus.errors.ValidationError` instances """ for error in errors: self.add(error) def start(self, validator): """ Gets called when a validation starts. :param validator: The calling validator. :type validator: :class:`~cerberus.Validator` """ pass class ToyErrorHandler(BaseErrorHandler): def __call__(self, *args, **kwargs): raise RuntimeError('This is not supposed to happen.') def clear(self): pass def encode_unicode(f): """Cerberus error messages expect regular binary strings. If unicode is used in a ValidationError message can't be printed. This decorator ensures that if legacy Python is used unicode strings are encoded before passing to a function. 
""" @wraps(f) def wrapped(obj, error): def _encode(value): """Helper encoding unicode strings into binary utf-8""" if isinstance(value, unicode): # noqa: F821 return value.encode('utf-8') return value error = copy(error) error.document_path = _encode(error.document_path) error.schema_path = _encode(error.schema_path) error.constraint = _encode(error.constraint) error.value = _encode(error.value) error.info = _encode(error.info) return f(obj, error) return wrapped if PYTHON_VERSION < 3 else f class BasicErrorHandler(BaseErrorHandler): """ Models cerberus' legacy. Returns a :class:`dict`. When mangled through :class:`str` a pretty-formatted representation of that tree is returned. """ messages = { 0x00: "{0}", 0x01: "document is missing", 0x02: "required field", 0x03: "unknown field", 0x04: "field '{0}' is required", 0x05: "depends on these values: {constraint}", 0x06: "{0} must not be present with '{field}'", 0x21: "'{0}' is not a document, must be a dict", 0x22: "empty values not allowed", 0x23: "null value not allowed", 0x24: "must be of {constraint} type", 0x25: "must be of dict type", 0x26: "length of list should be {0}, it is {1}", 0x27: "min length is {constraint}", 0x28: "max length is {constraint}", 0x41: "value does not match regex '{constraint}'", 0x42: "min value is {constraint}", 0x43: "max value is {constraint}", 0x44: "unallowed value {value}", 0x45: "unallowed values {0}", 0x46: "unallowed value {value}", 0x47: "unallowed values {0}", 0x48: "missing members {0}", 0x61: "field '{field}' cannot be coerced: {0}", 0x62: "field '{field}' cannot be renamed: {0}", 0x63: "field is read-only", 0x64: "default value for '{field}' cannot be set: {0}", 0x81: "mapping doesn't validate subschema: {0}", 0x82: "one or more sequence-items don't validate: {0}", 0x83: "one or more keys of a mapping don't validate: {0}", 0x84: "one or more values in a mapping don't validate: {0}", 0x85: "one or more sequence-items don't validate: {0}", 0x91: "one or more definitions 
validate", 0x92: "none or more than one rule validate", 0x93: "no definitions validate", 0x94: "one or more definitions don't validate", } def __init__(self, tree=None): self.tree = {} if tree is None else tree def __call__(self, errors): self.clear() self.extend(errors) return self.pretty_tree def __str__(self): return pformat(self.pretty_tree) @property def pretty_tree(self): pretty = deepcopy(self.tree) for field in pretty: self._purge_empty_dicts(pretty[field]) return pretty @encode_unicode def add(self, error): # Make sure the original error is not altered with # error paths specific to the handler. error = deepcopy(error) self._rewrite_error_path(error) if error.is_logic_error: self._insert_logic_error(error) elif error.is_group_error: self._insert_group_error(error) elif error.code in self.messages: self._insert_error( error.document_path, self._format_message(error.field, error) ) def clear(self): self.tree = {} def start(self, validator): self.clear() def _format_message(self, field, error): return self.messages[error.code].format( *error.info, constraint=error.constraint, field=field, value=error.value ) def _insert_error(self, path, node): """ Adds an error or sub-tree to :attr:tree. :param path: Path to the error. :type path: Tuple of strings and integers. :param node: An error message or a sub-tree. :type node: String or dictionary. 
""" field = path[0] if len(path) == 1: if field in self.tree: subtree = self.tree[field].pop() self.tree[field] += [node, subtree] else: self.tree[field] = [node, {}] elif len(path) >= 1: if field not in self.tree: self.tree[field] = [{}] subtree = self.tree[field][-1] if subtree: new = self.__class__(tree=copy(subtree)) else: new = self.__class__() new._insert_error(path[1:], node) subtree.update(new.tree) def _insert_group_error(self, error): for child_error in error.child_errors: if child_error.is_logic_error: self._insert_logic_error(child_error) elif child_error.is_group_error: self._insert_group_error(child_error) else: self._insert_error( child_error.document_path, self._format_message(child_error.field, child_error), ) def _insert_logic_error(self, error): field = error.field self._insert_error(error.document_path, self._format_message(field, error)) for definition_errors in error.definitions_errors.values(): for child_error in definition_errors: if child_error.is_logic_error: self._insert_logic_error(child_error) elif child_error.is_group_error: self._insert_group_error(child_error) else: self._insert_error( child_error.document_path, self._format_message(field, child_error), ) def _purge_empty_dicts(self, error_list): subtree = error_list[-1] if not error_list[-1]: error_list.pop() else: for key in subtree: self._purge_empty_dicts(subtree[key]) def _rewrite_error_path(self, error, offset=0): """ Recursively rewrites the error path to correctly represent logic errors """ if error.is_logic_error: self._rewrite_logic_error_path(error, offset) elif error.is_group_error: self._rewrite_group_error_path(error, offset) def _rewrite_group_error_path(self, error, offset=0): child_start = len(error.document_path) - offset for child_error in error.child_errors: relative_path = child_error.document_path[child_start:] child_error.document_path = error.document_path + relative_path self._rewrite_error_path(child_error, offset) def _rewrite_logic_error_path(self, error, 
offset=0):
        child_start = len(error.document_path) - offset
        for i, definition_errors in error.definitions_errors.items():
            if not definition_errors:
                continue
            nodename = '%s definition %s' % (error.rule, i)
            path = error.document_path + (nodename,)
            for child_error in definition_errors:
                rel_path = child_error.document_path[child_start:]
                child_error.document_path = path + rel_path
                self._rewrite_error_path(child_error, offset + 1)


class SchemaErrorHandler(BasicErrorHandler):
    messages = BasicErrorHandler.messages.copy()
    messages[0x03] = "unknown rule"


# ---- Cerberus-1.3.2/cerberus/platform.py ----

""" Platform-dependent objects """

import sys


PYTHON_VERSION = float(sys.version_info[0]) + float(sys.version_info[1]) / 10


if PYTHON_VERSION < 3:
    _str_type = basestring  # noqa: F821
    _int_types = (int, long)  # noqa: F821
else:
    _str_type = str
    _int_types = (int,)


if PYTHON_VERSION < 3.3:
    from collections import (  # noqa: F401
        Callable,
        Container,
        Hashable,
        Iterable,
        Mapping,
        MutableMapping,
        Sequence,
        Set,
        Sized,
    )
else:
    from collections.abc import (  # noqa: F401
        Callable,
        Container,
        Hashable,
        Iterable,
        Mapping,
        MutableMapping,
        Sequence,
        Set,
        Sized,
    )


# ---- Cerberus-1.3.2/cerberus/schema.py ----

from __future__ import absolute_import

from copy import copy
from warnings import warn

from cerberus import errors
from cerberus.platform import (
    _str_type,
    Callable,
    Hashable,
    Mapping,
    MutableMapping,
    Sequence,
)
from cerberus.utils import (
    get_Validator_class,
    validator_factory,
    mapping_hash,
    TypeDefinition,
)


class _Abort(Exception):
    pass


class SchemaError(Exception):
    """ Raised when the validation schema is missing, has the wrong format or
        contains errors. """

    pass


class DefinitionSchema(MutableMapping):
    """ A dict-subclass for caching of validated schemas.
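`platform.py` (embedded above) gates its ABC imports on the interpreter version so the same names work on Python 2 and 3. A runnable sketch of that version-gated import pattern (on any modern Python only the `else` branch executes):

```python
import sys

if sys.version_info < (3, 3):
    # legacy location, removed in Python 3.10
    from collections import Mapping, MutableMapping  # noqa: F401
else:
    from collections.abc import Mapping, MutableMapping  # noqa: F401

# The ABCs answer instance/subclass checks for ordinary dicts.
assert isinstance({}, Mapping)
assert issubclass(dict, MutableMapping)
```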
""" def __new__(cls, *args, **kwargs): if 'SchemaValidator' not in globals(): global SchemaValidator SchemaValidator = validator_factory('SchemaValidator', SchemaValidatorMixin) types_mapping = SchemaValidator.types_mapping.copy() types_mapping.update( { 'callable': TypeDefinition('callable', (Callable,), ()), 'hashable': TypeDefinition('hashable', (Hashable,), ()), } ) SchemaValidator.types_mapping = types_mapping return super(DefinitionSchema, cls).__new__(cls) def __init__(self, validator, schema): """ :param validator: An instance of Validator-(sub-)class that uses this schema. :param schema: A definition-schema as ``dict``. Defaults to an empty one. """ if not isinstance(validator, get_Validator_class()): raise RuntimeError('validator argument must be a Validator-' 'instance.') self.validator = validator if isinstance(schema, _str_type): schema = validator.schema_registry.get(schema, schema) if not isinstance(schema, Mapping): try: schema = dict(schema) except Exception: raise SchemaError(errors.SCHEMA_ERROR_DEFINITION_TYPE.format(schema)) self.validation_schema = SchemaValidationSchema(validator) self.schema_validator = SchemaValidator( None, allow_unknown=self.validation_schema, error_handler=errors.SchemaErrorHandler, target_schema=schema, target_validator=validator, ) schema = self.expand(schema) self.validate(schema) self.schema = schema def __delitem__(self, key): _new_schema = self.schema.copy() try: del _new_schema[key] except ValueError: raise SchemaError("Schema has no field '%s' defined" % key) except Exception as e: raise e else: del self.schema[key] def __getitem__(self, item): return self.schema[item] def __iter__(self): return iter(self.schema) def __len__(self): return len(self.schema) def __repr__(self): return str(self) def __setitem__(self, key, value): value = self.expand({0: value})[0] self.validate({key: value}) self.schema[key] = value def __str__(self): if hasattr(self, "schema"): return str(self.schema) else: return "No schema data is 
set yet." def copy(self): return self.__class__(self.validator, self.schema.copy()) @classmethod def expand(cls, schema): try: schema = cls._expand_logical_shortcuts(schema) schema = cls._expand_subschemas(schema) except Exception: pass # TODO remove this with the next major release schema = cls._rename_deprecated_rulenames(schema) return schema @classmethod def _expand_logical_shortcuts(cls, schema): """ Expand agglutinated rules in a definition-schema. :param schema: The schema-definition to expand. :return: The expanded schema-definition. """ def is_of_rule(x): return isinstance(x, _str_type) and x.startswith( ('allof_', 'anyof_', 'noneof_', 'oneof_') ) for field, rules in schema.items(): for of_rule in [x for x in rules if is_of_rule(x)]: operator, rule = of_rule.split('_', 1) rules.update({operator: []}) for value in rules[of_rule]: rules[operator].append({rule: value}) del rules[of_rule] return schema @classmethod def _expand_subschemas(cls, schema): def has_schema_rule(): return isinstance(schema[field], Mapping) and 'schema' in schema[field] def has_mapping_schema(): """ Tries to determine heuristically if the schema-constraints are aimed to mappings. 
""" try: return all( isinstance(x, Mapping) for x in schema[field]['schema'].values() ) except TypeError: return False for field in schema: if not has_schema_rule(): pass elif has_mapping_schema(): schema[field]['schema'] = cls.expand(schema[field]['schema']) else: # assumes schema-constraints for a sequence schema[field]['schema'] = cls.expand({0: schema[field]['schema']})[0] # TODO remove the last two values in the tuple with the next major release for rule in ('keysrules', 'valuesrules', 'keyschema', 'valueschema'): if rule in schema[field]: schema[field][rule] = cls.expand({0: schema[field][rule]})[0] for rule in ('allof', 'anyof', 'items', 'noneof', 'oneof'): if rule in schema[field]: if not isinstance(schema[field][rule], Sequence): continue new_rules_definition = [] for item in schema[field][rule]: new_rules_definition.append(cls.expand({0: item})[0]) schema[field][rule] = new_rules_definition return schema def get(self, item, default=None): return self.schema.get(item, default) def items(self): return self.schema.items() def update(self, schema): try: schema = self.expand(schema) _new_schema = self.schema.copy() _new_schema.update(schema) self.validate(_new_schema) except ValueError: raise SchemaError(errors.SCHEMA_ERROR_DEFINITION_TYPE.format(schema)) except Exception as e: raise e else: self.schema = _new_schema # TODO remove with next major release @staticmethod def _rename_deprecated_rulenames(schema): for field, rules in schema.items(): if isinstance(rules, str): # registry reference continue for old, new in ( ('keyschema', 'keysrules'), ('validator', 'check_with'), ('valueschema', 'valuesrules'), ): if old not in rules: continue if new in rules: raise RuntimeError( "The rule '{new}' is also present with its old " "name '{old}' in the same set of rules." ) warn( "The rule '{old}' was renamed to '{new}'. 
The old name will " "not be available in the next major release of " "Cerberus.".format(old=old, new=new), DeprecationWarning, ) schema[field][new] = schema[field][old] schema[field].pop(old) return schema def regenerate_validation_schema(self): self.validation_schema = SchemaValidationSchema(self.validator) def validate(self, schema=None): """ Validates a schema that defines rules against supported rules. :param schema: The schema to be validated as a legal cerberus schema according to the rules of the assigned Validator object. Raises a :class:`~cerberus.base.SchemaError` when an invalid schema is encountered. """ if schema is None: schema = self.schema _hash = (mapping_hash(schema), mapping_hash(self.validator.types_mapping)) if _hash not in self.validator._valid_schemas: self._validate(schema) self.validator._valid_schemas.add(_hash) def _validate(self, schema): if isinstance(schema, _str_type): schema = self.validator.schema_registry.get(schema, schema) if schema is None: raise SchemaError(errors.SCHEMA_ERROR_MISSING) schema = copy(schema) for field in schema: if isinstance(schema[field], _str_type): schema[field] = rules_set_registry.get(schema[field], schema[field]) if not self.schema_validator(schema, normalize=False): raise SchemaError(self.schema_validator.errors) class UnvalidatedSchema(DefinitionSchema): def __init__(self, schema={}): if not isinstance(schema, Mapping): schema = dict(schema) self.schema = schema def validate(self, schema): pass def copy(self): # Override ancestor's copy, because # UnvalidatedSchema does not have .validator: return self.__class__(self.schema.copy()) class SchemaValidationSchema(UnvalidatedSchema): def __init__(self, validator): self.schema = { 'allow_unknown': False, 'schema': validator.rules, 'type': 'dict', } class SchemaValidatorMixin(object): """ This validator mixin provides mechanics to validate schemas passed to a Cerberus validator. 
""" def __init__(self, *args, **kwargs): kwargs.setdefault('known_rules_set_refs', set()) kwargs.setdefault('known_schema_refs', set()) super(SchemaValidatorMixin, self).__init__(*args, **kwargs) @property def known_rules_set_refs(self): """ The encountered references to rules set registry items. """ return self._config['known_rules_set_refs'] @property def known_schema_refs(self): """ The encountered references to schema registry items. """ return self._config['known_schema_refs'] @property def target_schema(self): """ The schema that is being validated. """ return self._config['target_schema'] @property def target_validator(self): """ The validator whose schema is being validated. """ return self._config['target_validator'] def _check_with_bulk_schema(self, field, value): # resolve schema registry reference if isinstance(value, _str_type): if value in self.known_rules_set_refs: return else: self.known_rules_set_refs.add(value) definition = self.target_validator.rules_set_registry.get(value) if definition is None: self._error(field, 'Rules set definition %s not found.' 
% value) return else: value = definition _hash = ( mapping_hash({'turing': value}), mapping_hash(self.target_validator.types_mapping), ) if _hash in self.target_validator._valid_schemas: return validator = self._get_child_validator( document_crumb=field, allow_unknown=False, schema=self.target_validator.rules, ) validator(value, normalize=False) if validator._errors: self._error(validator._errors) else: self.target_validator._valid_schemas.add(_hash) def _check_with_dependencies(self, field, value): if isinstance(value, _str_type): pass elif isinstance(value, Mapping): validator = self._get_child_validator( document_crumb=field, schema={'valuesrules': {'type': 'list'}}, allow_unknown=True, ) if not validator(value, normalize=False): self._error(validator._errors) elif isinstance(value, Sequence): if not all(isinstance(x, Hashable) for x in value): path = self.document_path + (field,) self._error(path, 'All dependencies must be a hashable type.') def _check_with_items(self, field, value): for i, schema in enumerate(value): self._check_with_bulk_schema((field, i), schema) def _check_with_schema(self, field, value): try: value = self._handle_schema_reference_for_validator(field, value) except _Abort: return _hash = (mapping_hash(value), mapping_hash(self.target_validator.types_mapping)) if _hash in self.target_validator._valid_schemas: return validator = self._get_child_validator( document_crumb=field, schema=None, allow_unknown=self.root_allow_unknown ) validator(self._expand_rules_set_refs(value), normalize=False) if validator._errors: self._error(validator._errors) else: self.target_validator._valid_schemas.add(_hash) def _check_with_type(self, field, value): value = set((value,)) if isinstance(value, _str_type) else set(value) invalid_constraints = value - set(self.target_validator.types) if invalid_constraints: self._error( field, 'Unsupported types: {}'.format(', '.join(invalid_constraints)) ) def _expand_rules_set_refs(self, schema): result = {} for k, v in 
schema.items(): if isinstance(v, _str_type): result[k] = self.target_validator.rules_set_registry.get(v) else: result[k] = v return result def _handle_schema_reference_for_validator(self, field, value): if not isinstance(value, _str_type): return value if value in self.known_schema_refs: raise _Abort self.known_schema_refs.add(value) definition = self.target_validator.schema_registry.get(value) if definition is None: path = self.document_path + (field,) self._error(path, 'Schema definition {} not found.'.format(value)) raise _Abort return definition def _validate_logical(self, rule, field, value): """ {'allowed': ('allof', 'anyof', 'noneof', 'oneof')} """ if not isinstance(value, Sequence): self._error(field, errors.BAD_TYPE) return validator = self._get_child_validator( document_crumb=rule, allow_unknown=False, schema=self.target_validator.validation_rules, ) for constraints in value: _hash = ( mapping_hash({'turing': constraints}), mapping_hash(self.target_validator.types_mapping), ) if _hash in self.target_validator._valid_schemas: continue validator(constraints, normalize=False) if validator._errors: self._error(validator._errors) else: self.target_validator._valid_schemas.add(_hash) #### class Registry(object): """ A registry to store and retrieve schemas and parts of it by a name that can be used in validation schemas. :param definitions: Optional, initial definitions. :type definitions: any :term:`mapping` """ def __init__(self, definitions={}): self._storage = {} self.extend(definitions) def add(self, name, definition): """ Register a definition to the registry. Existing definitions are replaced silently. :param name: The name which can be used as reference in a validation schema. :type name: :class:`str` :param definition: The definition. :type definition: any :term:`mapping` """ self._storage[name] = self._expand_definition(definition) def all(self): """ Returns a :class:`dict` with all registered definitions mapped to their name. 
""" return self._storage def clear(self): """ Purge all definitions in the registry. """ self._storage.clear() def extend(self, definitions): """ Add several definitions at once. Existing definitions are replaced silently. :param definitions: The names and definitions. :type definitions: a :term:`mapping` or an :term:`iterable` with two-value :class:`tuple` s """ for name, definition in dict(definitions).items(): self.add(name, definition) def get(self, name, default=None): """ Retrieve a definition from the registry. :param name: The reference that points to the definition. :type name: :class:`str` :param default: Return value if the reference isn't registered. """ return self._storage.get(name, default) def remove(self, *names): """ Unregister definitions from the registry. :param names: The names of the definitions that are to be unregistered. """ for name in names: self._storage.pop(name, None) class SchemaRegistry(Registry): @classmethod def _expand_definition(cls, definition): return DefinitionSchema.expand(definition) class RulesSetRegistry(Registry): @classmethod def _expand_definition(cls, definition): return DefinitionSchema.expand({0: definition})[0] schema_registry, rules_set_registry = SchemaRegistry(), RulesSetRegistry() Cerberus-1.3.2/cerberus/tests/0000755000076500000240000000000013556067066016651 5ustar nicolastaff00000000000000Cerberus-1.3.2/cerberus/tests/__init__.py0000644000076500000240000001114313464227306020753 0ustar nicolastaff00000000000000# -*- coding: utf-8 -*- import re import pytest from cerberus import errors, Validator, SchemaError, DocumentError from cerberus.tests.conftest import sample_schema def assert_exception(exception, document={}, schema=None, validator=None, msg=None): """ Tests whether a specific exception is raised. Optionally also tests whether the exception message is as expected. 
""" if validator is None: validator = Validator() if msg is None: with pytest.raises(exception): validator(document, schema) else: with pytest.raises(exception, match=re.escape(msg)): validator(document, schema) def assert_schema_error(*args): """ Tests whether a validation raises an exception due to a malformed schema. """ assert_exception(SchemaError, *args) def assert_document_error(*args): """ Tests whether a validation raises an exception due to a malformed document. """ assert_exception(DocumentError, *args) def assert_fail( document, schema=None, validator=None, update=False, error=None, errors=None, child_errors=None, ): """ Tests whether a validation fails. """ if validator is None: validator = Validator(sample_schema) result = validator(document, schema, update) assert isinstance(result, bool) assert not result actual_errors = validator._errors assert not (error is not None and errors is not None) assert not (errors is not None and child_errors is not None), ( 'child_errors can only be tested in ' 'conjunction with the error parameter' ) assert not (child_errors is not None and error is None) if error is not None: assert len(actual_errors) == 1 assert_has_error(actual_errors, *error) if child_errors is not None: assert len(actual_errors[0].child_errors) == len(child_errors) assert_has_errors(actual_errors[0].child_errors, child_errors) elif errors is not None: assert len(actual_errors) == len(errors) assert_has_errors(actual_errors, errors) return actual_errors def assert_success(document, schema=None, validator=None, update=False): """ Tests whether a validation succeeds. 
""" if validator is None: validator = Validator(sample_schema) result = validator(document, schema, update) assert isinstance(result, bool) if not result: raise AssertionError(validator.errors) def assert_has_error(_errors, d_path, s_path, error_def, constraint, info=()): if not isinstance(d_path, tuple): d_path = (d_path,) if not isinstance(info, tuple): info = (info,) assert isinstance(_errors, errors.ErrorList) for i, error in enumerate(_errors): assert isinstance(error, errors.ValidationError) try: assert error.document_path == d_path assert error.schema_path == s_path assert error.code == error_def.code assert error.rule == error_def.rule assert error.constraint == constraint if not error.is_group_error: assert error.info == info except AssertionError: pass except Exception: raise else: break else: raise AssertionError( """ Error with properties: document_path={doc_path} schema_path={schema_path} code={code} constraint={constraint} info={info} not found in errors: {errors} """.format( doc_path=d_path, schema_path=s_path, code=hex(error.code), info=info, constraint=constraint, errors=_errors, ) ) return i def assert_has_errors(_errors, _exp_errors): assert isinstance(_exp_errors, list) for error in _exp_errors: assert isinstance(error, tuple) assert_has_error(_errors, *error) def assert_not_has_error(_errors, *args, **kwargs): try: assert_has_error(_errors, *args, **kwargs) except AssertionError: pass except Exception as e: raise e else: raise AssertionError('An unexpected error occurred.') def assert_bad_type(field, data_type, value): assert_fail( {field: value}, error=(field, (field, 'type'), errors.BAD_TYPE, data_type) ) def assert_normalized(document, expected, schema=None, validator=None): if validator is None: validator = Validator(sample_schema) assert_success(document, schema, validator) assert validator.document == expected Cerberus-1.3.2/cerberus/tests/conftest.py0000644000076500000240000000472513461521741021046 0ustar nicolastaff00000000000000# -*- 
coding: utf-8 -*- from copy import deepcopy import pytest from cerberus import Validator @pytest.fixture def document(): return deepcopy(sample_document) @pytest.fixture def schema(): return deepcopy(sample_schema) @pytest.fixture def validator(): return Validator(sample_schema) sample_schema = { 'a_string': {'type': 'string', 'minlength': 2, 'maxlength': 10}, 'a_binary': {'type': 'binary', 'minlength': 2, 'maxlength': 10}, 'a_nullable_integer': {'type': 'integer', 'nullable': True}, 'an_integer': {'type': 'integer', 'min': 1, 'max': 100}, 'a_restricted_integer': {'type': 'integer', 'allowed': [-1, 0, 1]}, 'a_boolean': {'type': 'boolean', 'meta': 'can haz two distinct states'}, 'a_datetime': {'type': 'datetime', 'meta': {'format': '%a, %d. %b %Y'}}, 'a_float': {'type': 'float', 'min': 1, 'max': 100}, 'a_number': {'type': 'number', 'min': 1, 'max': 100}, 'a_set': {'type': 'set'}, 'one_or_more_strings': {'type': ['string', 'list'], 'schema': {'type': 'string'}}, 'a_regex_email': { 'type': 'string', 'regex': r'^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$', }, 'a_readonly_string': {'type': 'string', 'readonly': True}, 'a_restricted_string': {'type': 'string', 'allowed': ['agent', 'client', 'vendor']}, 'an_array': {'type': 'list', 'allowed': ['agent', 'client', 'vendor']}, 'an_array_from_set': { 'type': 'list', 'allowed': set(['agent', 'client', 'vendor']), }, 'a_list_of_dicts': { 'type': 'list', 'schema': { 'type': 'dict', 'schema': { 'sku': {'type': 'string'}, 'price': {'type': 'integer', 'required': True}, }, }, }, 'a_list_of_values': { 'type': 'list', 'items': [{'type': 'string'}, {'type': 'integer'}], }, 'a_list_of_integers': {'type': 'list', 'schema': {'type': 'integer'}}, 'a_dict': { 'type': 'dict', 'schema': { 'address': {'type': 'string'}, 'city': {'type': 'string', 'required': True}, }, }, 'a_dict_with_valuesrules': {'type': 'dict', 'valuesrules': {'type': 'integer'}}, 'a_list_length': { 'type': 'list', 'schema': {'type': 'integer'}, 'minlength': 2, 
'maxlength': 5, }, 'a_nullable_field_without_type': {'nullable': True}, 'a_not_nullable_field_without_type': {}, } sample_document = {'name': 'john doe'} Cerberus-1.3.2/cerberus/tests/test_assorted.py0000644000076500000240000000614113462011151022064 0ustar nicolastaff00000000000000# -*- coding: utf-8 -*- from decimal import Decimal from pkg_resources import Distribution, DistributionNotFound from pytest import mark from cerberus import TypeDefinition, Validator from cerberus.tests import assert_fail, assert_success from cerberus.utils import validator_factory from cerberus.validator import BareValidator from cerberus.platform import PYTHON_VERSION if PYTHON_VERSION > 3 and PYTHON_VERSION < 3.4: from imp import reload elif PYTHON_VERSION >= 3.4: from importlib import reload else: pass # Python 2.x def test_pkgresources_version(monkeypatch): def create_fake_distribution(name): return Distribution(project_name="cerberus", version="1.2.3") with monkeypatch.context() as m: cerberus = __import__("cerberus") m.setattr("pkg_resources.get_distribution", create_fake_distribution) reload(cerberus) assert cerberus.__version__ == "1.2.3" def test_version_not_found(monkeypatch): def raise_distribution_not_found(name): raise DistributionNotFound("pkg_resources cannot get distribution") with monkeypatch.context() as m: cerberus = __import__("cerberus") m.setattr("pkg_resources.get_distribution", raise_distribution_not_found) reload(cerberus) assert cerberus.__version__ == "unknown" def test_clear_cache(validator): assert len(validator._valid_schemas) > 0 validator.clear_caches() assert len(validator._valid_schemas) == 0 def test_docstring(validator): assert validator.__doc__ # Test that testing with the sample schema works as expected # as there might be rules with side-effects in it @mark.parametrize( "test,document", ((assert_fail, {"an_integer": 60}), (assert_success, {"an_integer": 110})), ) def test_that_test_fails(test, document): try: test(document) except AssertionError: 
        pass
    else:
        raise AssertionError("test didn't fail")


def test_dynamic_types():
    decimal_type = TypeDefinition("decimal", (Decimal,), ())
    document = {"measurement": Decimal(0)}
    schema = {"measurement": {"type": "decimal"}}

    validator = Validator()
    validator.types_mapping["decimal"] = decimal_type
    assert_success(document, schema, validator)

    class MyValidator(Validator):
        types_mapping = Validator.types_mapping.copy()
        types_mapping["decimal"] = decimal_type

    validator = MyValidator()
    assert_success(document, schema, validator)


def test_mro():
    assert Validator.__mro__ == (Validator, BareValidator, object), Validator.__mro__


def test_mixin_init():
    class Mixin(object):
        def __init__(self, *args, **kwargs):
            kwargs["test"] = True
            super(Mixin, self).__init__(*args, **kwargs)

    MyValidator = validator_factory("MyValidator", Mixin)
    validator = MyValidator()
    assert validator._config["test"]


def test_sub_init():
    class MyValidator(Validator):
        def __init__(self, *args, **kwargs):
            kwargs["test"] = True
            super(MyValidator, self).__init__(*args, **kwargs)

    validator = MyValidator()
    assert validator._config["test"]


Cerberus-1.3.2/cerberus/tests/test_customization.py

# -*- coding: utf-8 -*-

from pytest import mark

import cerberus
from cerberus.tests import assert_fail, assert_success
from cerberus.tests.conftest import sample_schema


def test_contextual_data_preservation():
    class InheritedValidator(cerberus.Validator):
        def __init__(self, *args, **kwargs):
            if 'working_dir' in kwargs:
                self.working_dir = kwargs['working_dir']
            super(InheritedValidator, self).__init__(*args, **kwargs)

        def _validate_type_test(self, value):
            if self.working_dir:
                return True

    assert 'test' in InheritedValidator.types
    v = InheritedValidator(
        {'test': {'type': 'list', 'schema': {'type': 'test'}}}, working_dir='/tmp'
    )
    assert_success({'test': ['foo']}, validator=v)


def test_docstring_parsing():
    class CustomValidator(cerberus.Validator):
        def _validate_foo(self, argument, field, value):
            """ {'type': 'zap'} """
            pass

        def _validate_bar(self, value):
            """ Test the barreness of a value.

            The rule's arguments are validated against this schema:
            {'type': 'boolean'}
            """
            pass

    assert 'foo' in CustomValidator.validation_rules
    assert 'bar' in CustomValidator.validation_rules


# TODO remove 'validator' as rule parameter with the next major release
@mark.parametrize('rule', ('check_with', 'validator'))
def test_check_with_method(rule):
    # https://github.com/pyeve/cerberus/issues/265
    class MyValidator(cerberus.Validator):
        def _check_with_oddity(self, field, value):
            if not value & 1:
                self._error(field, "Must be an odd number")

    v = MyValidator(schema={'amount': {rule: 'oddity'}})
    assert_success(document={'amount': 1}, validator=v)
    assert_fail(
        document={'amount': 2},
        validator=v,
        error=('amount', (), cerberus.errors.CUSTOM, None, ('Must be an odd number',)),
    )


# TODO remove test with the next major release
@mark.parametrize('rule', ('check_with', 'validator'))
def test_validator_method(rule):
    class MyValidator(cerberus.Validator):
        def _validator_oddity(self, field, value):
            if not value & 1:
                self._error(field, "Must be an odd number")

    v = MyValidator(schema={'amount': {rule: 'oddity'}})
    assert_success(document={'amount': 1}, validator=v)
    assert_fail(
        document={'amount': 2},
        validator=v,
        error=('amount', (), cerberus.errors.CUSTOM, None, ('Must be an odd number',)),
    )


def test_schema_validation_can_be_disabled_in_schema_setter():
    class NonvalidatingValidator(cerberus.Validator):
        """ Skips schema validation to speed up initialization """

        @cerberus.Validator.schema.setter
        def schema(self, schema):
            if schema is None:
                self._schema = None
            elif self.is_child:
                self._schema = schema
            elif isinstance(schema, cerberus.schema.DefinitionSchema):
                self._schema = schema
            else:
                self._schema = cerberus.schema.UnvalidatedSchema(schema)

    v = NonvalidatingValidator(schema=sample_schema)
    assert v.validate(document={'an_integer': 1})
    assert not v.validate(document={'an_integer': 'a'})


Cerberus-1.3.2/cerberus/tests/test_errors.py

# -*- coding: utf-8 -*-

from cerberus import Validator, errors
from cerberus.tests import assert_fail


ValidationError = errors.ValidationError


def test__error_1():
    v = Validator(schema={'foo': {'type': 'string'}})
    v.document = {'foo': 42}
    v._error('foo', errors.BAD_TYPE, 'string')
    error = v._errors[0]
    assert error.document_path == ('foo',)
    assert error.schema_path == ('foo', 'type')
    assert error.code == 0x24
    assert error.rule == 'type'
    assert error.constraint == 'string'
    assert error.value == 42
    assert error.info == ('string',)
    assert not error.is_group_error
    assert not error.is_logic_error


def test__error_2():
    v = Validator(schema={'foo': {'keysrules': {'type': 'integer'}}})
    v.document = {'foo': {'0': 'bar'}}
    v._error('foo', errors.KEYSRULES, ())
    error = v._errors[0]
    assert error.document_path == ('foo',)
    assert error.schema_path == ('foo', 'keysrules')
    assert error.code == 0x83
    assert error.rule == 'keysrules'
    assert error.constraint == {'type': 'integer'}
    assert error.value == {'0': 'bar'}
    assert error.info == ((),)
    assert error.is_group_error
    assert not error.is_logic_error


def test__error_3():
    valids = [
        {'type': 'string', 'regex': '0x[0-9a-f]{2}'},
        {'type': 'integer', 'min': 0, 'max': 255},
    ]
    v = Validator(schema={'foo': {'oneof': valids}})
    v.document = {'foo': '0x100'}
    v._error('foo', errors.ONEOF, (), 0, 2)
    error = v._errors[0]
    assert error.document_path == ('foo',)
    assert error.schema_path == ('foo', 'oneof')
    assert error.code == 0x92
    assert error.rule == 'oneof'
    assert error.constraint == valids
    assert error.value == '0x100'
    assert error.info == ((), 0, 2)
    assert error.is_group_error
    assert error.is_logic_error


def test_error_tree_from_subschema(validator):
    schema = {'foo': {'schema': {'bar': {'type': 'string'}}}}
    document = {'foo': {'bar': 0}}
    assert_fail(document, schema, validator=validator)
    d_error_tree = validator.document_error_tree
    s_error_tree = validator.schema_error_tree

    assert 'foo' in d_error_tree
    assert len(d_error_tree['foo'].errors) == 1, d_error_tree['foo']
    assert d_error_tree['foo'].errors[0].code == errors.MAPPING_SCHEMA.code
    assert 'bar' in d_error_tree['foo']
    assert d_error_tree['foo']['bar'].errors[0].value == 0
    assert d_error_tree.fetch_errors_from(('foo', 'bar'))[0].value == 0

    assert 'foo' in s_error_tree
    assert 'schema' in s_error_tree['foo']
    assert 'bar' in s_error_tree['foo']['schema']
    assert 'type' in s_error_tree['foo']['schema']['bar']
    assert s_error_tree['foo']['schema']['bar']['type'].errors[0].value == 0
    assert (
        s_error_tree.fetch_errors_from(('foo', 'schema', 'bar', 'type'))[0].value == 0
    )


def test_error_tree_from_anyof(validator):
    schema = {'foo': {'anyof': [{'type': 'string'}, {'type': 'integer'}]}}
    document = {'foo': []}
    assert_fail(document, schema, validator=validator)
    d_error_tree = validator.document_error_tree
    s_error_tree = validator.schema_error_tree
    assert 'foo' in d_error_tree
    assert d_error_tree['foo'].errors[0].value == []
    assert 'foo' in s_error_tree
    assert 'anyof' in s_error_tree['foo']
    assert 0 in s_error_tree['foo']['anyof']
    assert 1 in s_error_tree['foo']['anyof']
    assert 'type' in s_error_tree['foo']['anyof'][0]
    assert s_error_tree['foo']['anyof'][0]['type'].errors[0].value == []


def test_nested_error_paths(validator):
    schema = {
        'a_dict': {
            'keysrules': {'type': 'integer'},
            'valuesrules': {'regex': '[a-z]*'},
        },
        'a_list': {'schema': {'type': 'string', 'oneof_regex': ['[a-z]*$', '[A-Z]*']}},
    }
    document = {
        'a_dict': {0: 'abc', 'one': 'abc', 2: 'aBc', 'three': 'abC'},
        'a_list': [0, 'abc', 'abC'],
    }
    assert_fail(document, schema, validator=validator)

    _det = validator.document_error_tree
    _set = validator.schema_error_tree

    assert len(_det.errors) == 0
    assert len(_set.errors) == 0

    assert len(_det['a_dict'].errors) == 2
    assert len(_set['a_dict'].errors) == 0

    assert _det['a_dict'][0] is None
    assert len(_det['a_dict']['one'].errors) == 1
    assert len(_det['a_dict'][2].errors) == 1
    assert len(_det['a_dict']['three'].errors) == 2

    assert len(_set['a_dict']['keysrules'].errors) == 1
    assert len(_set['a_dict']['valuesrules'].errors) == 1

    assert len(_set['a_dict']['keysrules']['type'].errors) == 2
    assert len(_set['a_dict']['valuesrules']['regex'].errors) == 2

    _ref_err = ValidationError(
        ('a_dict', 'one'),
        ('a_dict', 'keysrules', 'type'),
        errors.BAD_TYPE.code,
        'type',
        'integer',
        'one',
        (),
    )
    assert _det['a_dict']['one'].errors[0] == _ref_err
    assert _set['a_dict']['keysrules']['type'].errors[0] == _ref_err

    _ref_err = ValidationError(
        ('a_dict', 2),
        ('a_dict', 'valuesrules', 'regex'),
        errors.REGEX_MISMATCH.code,
        'regex',
        '[a-z]*$',
        'aBc',
        (),
    )
    assert _det['a_dict'][2].errors[0] == _ref_err
    assert _set['a_dict']['valuesrules']['regex'].errors[0] == _ref_err

    _ref_err = ValidationError(
        ('a_dict', 'three'),
        ('a_dict', 'keysrules', 'type'),
        errors.BAD_TYPE.code,
        'type',
        'integer',
        'three',
        (),
    )
    assert _det['a_dict']['three'].errors[0] == _ref_err
    assert _set['a_dict']['keysrules']['type'].errors[1] == _ref_err

    _ref_err = ValidationError(
        ('a_dict', 'three'),
        ('a_dict', 'valuesrules', 'regex'),
        errors.REGEX_MISMATCH.code,
        'regex',
        '[a-z]*$',
        'abC',
        (),
    )
    assert _det['a_dict']['three'].errors[1] == _ref_err
    assert _set['a_dict']['valuesrules']['regex'].errors[1] == _ref_err

    assert len(_det['a_list'].errors) == 1
    assert len(_det['a_list'][0].errors) == 1
    assert _det['a_list'][1] is None
    assert len(_det['a_list'][2].errors) == 3
    assert len(_set['a_list'].errors) == 0
    assert len(_set['a_list']['schema'].errors) == 1
    assert len(_set['a_list']['schema']['type'].errors) == 1
    assert len(_set['a_list']['schema']['oneof'][0]['regex'].errors) == 1
    assert len(_set['a_list']['schema']['oneof'][1]['regex'].errors) == 1

    _ref_err = ValidationError(
        ('a_list', 0),
        ('a_list', 'schema', 'type'),
        errors.BAD_TYPE.code,
        'type',
        'string',
        0,
        (),
    )
    assert _det['a_list'][0].errors[0] == _ref_err
    assert _set['a_list']['schema']['type'].errors[0] == _ref_err

    _ref_err = ValidationError(
        ('a_list', 2),
        ('a_list', 'schema', 'oneof'),
        errors.ONEOF.code,
        'oneof',
        'irrelevant_at_this_point',
        'abC',
        (),
    )
    assert _det['a_list'][2].errors[0] == _ref_err
    assert _set['a_list']['schema']['oneof'].errors[0] == _ref_err

    _ref_err = ValidationError(
        ('a_list', 2),
        ('a_list', 'schema', 'oneof', 0, 'regex'),
        errors.REGEX_MISMATCH.code,
        'regex',
        '[a-z]*$',
        'abC',
        (),
    )
    assert _det['a_list'][2].errors[1] == _ref_err
    assert _set['a_list']['schema']['oneof'][0]['regex'].errors[0] == _ref_err

    _ref_err = ValidationError(
        ('a_list', 2),
        ('a_list', 'schema', 'oneof', 1, 'regex'),
        errors.REGEX_MISMATCH.code,
        'regex',
        '[a-z]*$',
        'abC',
        (),
    )
    assert _det['a_list'][2].errors[2] == _ref_err
    assert _set['a_list']['schema']['oneof'][1]['regex'].errors[0] == _ref_err


def test_queries():
    schema = {'foo': {'type': 'dict', 'schema': {'bar': {'type': 'number'}}}}
    document = {'foo': {'bar': 'zero'}}
    validator = Validator(schema)
    validator(document)

    assert 'foo' in validator.document_error_tree
    assert 'bar' in validator.document_error_tree['foo']
    assert 'foo' in validator.schema_error_tree
    assert 'schema' in validator.schema_error_tree['foo']

    assert errors.MAPPING_SCHEMA in validator.document_error_tree['foo'].errors
    assert errors.MAPPING_SCHEMA in validator.document_error_tree['foo']
    assert errors.BAD_TYPE in validator.document_error_tree['foo']['bar']
    assert errors.MAPPING_SCHEMA in validator.schema_error_tree['foo']['schema']
    assert (
        errors.BAD_TYPE in validator.schema_error_tree['foo']['schema']['bar']['type']
    )

    assert (
        validator.document_error_tree['foo'][errors.MAPPING_SCHEMA].child_errors[0].code
        == errors.BAD_TYPE.code
    )


def test_basic_error_handler():
    handler = errors.BasicErrorHandler()
    _errors, ref = [], {}

    _errors.append(ValidationError(['foo'], ['foo'], 0x63, 'readonly', True, None, ()))
    ref.update({'foo': [handler.messages[0x63]]})
    assert handler(_errors) == ref
    _errors.append(ValidationError(['bar'], ['foo'], 0x42, 'min', 1, 2, ()))
    ref.update({'bar': [handler.messages[0x42].format(constraint=1)]})
    assert handler(_errors) == ref

    _errors.append(
        ValidationError(
            ['zap', 'foo'], ['zap', 'schema', 'foo'], 0x24, 'type', 'string', True, ()
        )
    )
    ref.update({'zap': [{'foo': [handler.messages[0x24].format(constraint='string')]}]})
    assert handler(_errors) == ref

    _errors.append(
        ValidationError(
            ['zap', 'foo'],
            ['zap', 'schema', 'foo'],
            0x41,
            'regex',
            '^p[äe]ng$',
            'boom',
            (),
        )
    )
    ref['zap'][0]['foo'].append(handler.messages[0x41].format(constraint='^p[äe]ng$'))
    assert handler(_errors) == ref


def test_basic_error_of_errors(validator):
    schema = {'foo': {'oneof': [{'type': 'integer'}, {'type': 'string'}]}}
    document = {'foo': 23.42}
    error = ('foo', ('foo', 'oneof'), errors.ONEOF, schema['foo']['oneof'], ())
    child_errors = [
        (error[0], error[1] + (0, 'type'), errors.BAD_TYPE, 'integer'),
        (error[0], error[1] + (1, 'type'), errors.BAD_TYPE, 'string'),
    ]
    assert_fail(
        document, schema, validator=validator, error=error, child_errors=child_errors
    )
    assert validator.errors == {
        'foo': [
            errors.BasicErrorHandler.messages[0x92],
            {
                'oneof definition 0': ['must be of integer type'],
                'oneof definition 1': ['must be of string type'],
            },
        ]
    }


def test_wrong_amount_of_items(validator):
    # https://github.com/pyeve/cerberus/issues/505
    validator.schema = {
        'test_list': {
            'type': 'list',
            'required': True,
            'items': [{'type': 'string'}, {'type': 'string'}],
        }
    }
    validator({'test_list': ['test']})
    assert validator.errors == {'test_list': ["length of list should be 2, it is 1"]}


Cerberus-1.3.2/cerberus/tests/test_legacy.py

# -*- coding: utf-8 -*-

pass


Cerberus-1.3.2/cerberus/tests/test_normalization.py

# -*- coding: utf-8 -*-

from copy import deepcopy
from tempfile import NamedTemporaryFile

from pytest import mark

from cerberus import Validator, errors
from cerberus.tests import (
    assert_fail,
    assert_has_error,
    assert_normalized,
    assert_success,
)


def must_not_be_called(*args, **kwargs):
    raise RuntimeError('This shall not be called.')


def test_coerce():
    schema = {'amount': {'coerce': int}}
    document = {'amount': '1'}
    expected = {'amount': 1}
    assert_normalized(document, expected, schema)


def test_coerce_in_dictschema():
    schema = {'thing': {'type': 'dict', 'schema': {'amount': {'coerce': int}}}}
    document = {'thing': {'amount': '2'}}
    expected = {'thing': {'amount': 2}}
    assert_normalized(document, expected, schema)


def test_coerce_in_listschema():
    schema = {'things': {'type': 'list', 'schema': {'coerce': int}}}
    document = {'things': ['1', '2', '3']}
    expected = {'things': [1, 2, 3]}
    assert_normalized(document, expected, schema)


def test_coerce_in_listitems():
    schema = {'things': {'type': 'list', 'items': [{'coerce': int}, {'coerce': str}]}}
    document = {'things': ['1', 2]}
    expected = {'things': [1, '2']}
    assert_normalized(document, expected, schema)

    validator = Validator(schema)
    document['things'].append(3)
    assert not validator(document)
    assert validator.document['things'] == document['things']


def test_coerce_in_dictschema_in_listschema():
    item_schema = {'type': 'dict', 'schema': {'amount': {'coerce': int}}}
    schema = {'things': {'type': 'list', 'schema': item_schema}}
    document = {'things': [{'amount': '2'}]}
    expected = {'things': [{'amount': 2}]}
    assert_normalized(document, expected, schema)


def test_coerce_not_destructive():
    schema = {'amount': {'coerce': int}}
    v = Validator(schema)
    doc = {'amount': '1'}
    v.validate(doc)
    assert v.document is not doc


def test_coerce_catches_ValueError():
    schema = {'amount': {'coerce': int}}
    _errors = assert_fail({'amount': 'not_a_number'}, schema)
    _errors[0].info = ()  # ignore exception message here
    assert_has_error(
        _errors, 'amount', ('amount', 'coerce'), errors.COERCION_FAILED, int
    )


def test_coerce_in_listitems_catches_ValueError():
    schema = {'things': {'type': 'list', 'items': [{'coerce': int}, {'coerce': str}]}}
    document = {'things': ['not_a_number', 2]}
    _errors = assert_fail(document, schema)
    _errors[0].info = ()  # ignore exception message here
    assert_has_error(
        _errors,
        ('things', 0),
        ('things', 'items', 'coerce'),
        errors.COERCION_FAILED,
        int,
    )


def test_coerce_catches_TypeError():
    schema = {'name': {'coerce': str.lower}}
    _errors = assert_fail({'name': 1234}, schema)
    _errors[0].info = ()  # ignore exception message here
    assert_has_error(
        _errors, 'name', ('name', 'coerce'), errors.COERCION_FAILED, str.lower
    )


def test_coerce_in_listitems_catches_TypeError():
    schema = {
        'things': {'type': 'list', 'items': [{'coerce': int}, {'coerce': str.lower}]}
    }
    document = {'things': ['1', 2]}
    _errors = assert_fail(document, schema)
    _errors[0].info = ()  # ignore exception message here
    assert_has_error(
        _errors,
        ('things', 1),
        ('things', 'items', 'coerce'),
        errors.COERCION_FAILED,
        str.lower,
    )


def test_coerce_unknown():
    schema = {'foo': {'schema': {}, 'allow_unknown': {'coerce': int}}}
    document = {'foo': {'bar': '0'}}
    expected = {'foo': {'bar': 0}}
    assert_normalized(document, expected, schema)


def test_custom_coerce_and_rename():
    class MyNormalizer(Validator):
        def __init__(self, multiplier, *args, **kwargs):
            super(MyNormalizer, self).__init__(*args, **kwargs)
            self.multiplier = multiplier

        def _normalize_coerce_multiply(self, value):
            return value * self.multiplier

    v = MyNormalizer(2, {'foo': {'coerce': 'multiply'}})
    assert v.normalized({'foo': 2})['foo'] == 4

    v = MyNormalizer(3, allow_unknown={'rename_handler': 'multiply'})
    assert v.normalized({3: None}) == {9: None}


def test_coerce_chain():
    drop_prefix = lambda x: x[2:]  # noqa: E731
    upper = lambda x: x.upper()  # noqa: E731
    schema = {'foo': {'coerce': [hex, drop_prefix, upper]}}
    assert_normalized({'foo': 15}, {'foo': 'F'}, schema)


def test_coerce_chain_aborts(validator):
    def dont_do_me(value):
        raise AssertionError('The coercion chain did not abort after an error.')

    schema = {'foo': {'coerce': [hex, dont_do_me]}}
    validator({'foo': '0'}, schema)
    assert errors.COERCION_FAILED in validator._errors


def test_coerce_non_digit_in_sequence(validator):
    # https://github.com/pyeve/cerberus/issues/211
    schema = {'data': {'type': 'list', 'schema': {'type': 'integer', 'coerce': int}}}
    document = {'data': ['q']}
    assert validator.validated(document, schema) is None
    assert (
        validator.validated(document, schema, always_return_document=True) == document
    )  # noqa: W503


def test_nullables_dont_fail_coerce():
    schema = {'foo': {'coerce': int, 'nullable': True, 'type': 'integer'}}
    document = {'foo': None}
    assert_normalized(document, document, schema)


def test_nullables_fail_coerce_on_non_null_values(validator):
    def failing_coercion(value):
        raise Exception("expected to fail")

    schema = {'foo': {'coerce': failing_coercion, 'nullable': True, 'type': 'integer'}}
    document = {'foo': None}
    assert_normalized(document, document, schema)

    validator({'foo': 2}, schema)
    assert errors.COERCION_FAILED in validator._errors


def test_normalized():
    schema = {'amount': {'coerce': int}}
    document = {'amount': '2'}
    expected = {'amount': 2}
    assert_normalized(document, expected, schema)


def test_rename(validator):
    schema = {'foo': {'rename': 'bar'}}
    document = {'foo': 0}
    expected = {'bar': 0}
    # We cannot use assertNormalized here since there is bug where
    # Cerberus says that the renamed field is an unknown field:
    # {'bar': 'unknown field'}
    validator(document, schema, False)
    assert validator.document == expected


def test_rename_handler():
    validator = Validator(allow_unknown={'rename_handler': int})
    schema = {}
    document = {'0': 'foo'}
    expected = {0: 'foo'}
    assert_normalized(document, expected, schema, validator)


def test_purge_unknown():
    validator = Validator(purge_unknown=True)
    schema = {'foo': {'type': 'string'}}
    document = {'bar': 'foo'}
    expected = {}
    assert_normalized(document, expected, schema, validator)


def test_purge_unknown_in_subschema():
    schema = {
        'foo': {
            'type': 'dict',
            'schema': {'foo': {'type': 'string'}},
            'purge_unknown': True,
        }
    }
    document = {'foo': {'bar': ''}}
    expected = {'foo': {}}
    assert_normalized(document, expected, schema)


def test_issue_147_complex():
    schema = {'revision': {'coerce': int}}
    document = {'revision': '5', 'file': NamedTemporaryFile(mode='w+')}
    document['file'].write(r'foobar')
    document['file'].seek(0)
    normalized = Validator(schema, allow_unknown=True).normalized(document)
    assert normalized['revision'] == 5
    assert normalized['file'].read() == 'foobar'
    document['file'].close()
    normalized['file'].close()


def test_issue_147_nested_dict():
    schema = {'thing': {'type': 'dict', 'schema': {'amount': {'coerce': int}}}}
    ref_obj = '2'
    document = {'thing': {'amount': ref_obj}}
    normalized = Validator(schema).normalized(document)
    assert document is not normalized
    assert normalized['thing']['amount'] == 2
    assert ref_obj == '2'
    assert document['thing']['amount'] is ref_obj


def test_coerce_in_valuesrules():
    # https://github.com/pyeve/cerberus/issues/155
    schema = {
        'thing': {'type': 'dict', 'valuesrules': {'coerce': int, 'type': 'integer'}}
    }
    document = {'thing': {'amount': '2'}}
    expected = {'thing': {'amount': 2}}
    assert_normalized(document, expected, schema)


def test_coerce_in_keysrules():
    # https://github.com/pyeve/cerberus/issues/155
    schema = {
        'thing': {'type': 'dict', 'keysrules': {'coerce': int, 'type': 'integer'}}
    }
    document = {'thing': {'5': 'foo'}}
    expected = {'thing': {5: 'foo'}}
    assert_normalized(document, expected, schema)


def test_coercion_of_sequence_items(validator):
    # https://github.com/pyeve/cerberus/issues/161
    schema = {'a_list': {'type': 'list', 'schema': {'type': 'float', 'coerce': float}}}
    document = {'a_list': [3, 4, 5]}
    expected = {'a_list': [3.0, 4.0, 5.0]}
    assert_normalized(document, expected, schema, validator)
    for x in validator.document['a_list']:
        assert isinstance(x, float)


@mark.parametrize(
    'default', ({'default': 'bar_value'}, {'default_setter': lambda doc: 'bar_value'})
)
def test_default_missing(default):
    bar_schema = {'type': 'string'}
    bar_schema.update(default)
    schema = {'foo': {'type': 'string'}, 'bar': bar_schema}
    document = {'foo': 'foo_value'}
    expected = {'foo': 'foo_value', 'bar': 'bar_value'}
    assert_normalized(document, expected, schema)


@mark.parametrize(
    'default', ({'default': 'bar_value'}, {'default_setter': must_not_be_called})
)
def test_default_existent(default):
    bar_schema = {'type': 'string'}
    bar_schema.update(default)
    schema = {'foo': {'type': 'string'}, 'bar': bar_schema}
    document = {'foo': 'foo_value', 'bar': 'non_default'}
    assert_normalized(document, document.copy(), schema)


@mark.parametrize(
    'default', ({'default': 'bar_value'}, {'default_setter': must_not_be_called})
)
def test_default_none_nullable(default):
    bar_schema = {'type': 'string', 'nullable': True}
    bar_schema.update(default)
    schema = {'foo': {'type': 'string'}, 'bar': bar_schema}
    document = {'foo': 'foo_value', 'bar': None}
    assert_normalized(document, document.copy(), schema)


@mark.parametrize(
    'default', ({'default': 'bar_value'}, {'default_setter': lambda doc: 'bar_value'})
)
def test_default_none_nonnullable(default):
    bar_schema = {'type': 'string', 'nullable': False}
    bar_schema.update(default)
    schema = {'foo': {'type': 'string'}, 'bar': bar_schema}
    document = {'foo': 'foo_value', 'bar': None}
    expected = {'foo': 'foo_value', 'bar': 'bar_value'}
    assert_normalized(document, expected, schema)


def test_default_none_default_value():
    schema = {
        'foo': {'type': 'string'},
        'bar': {'type': 'string', 'nullable': True, 'default': None},
    }
    document = {'foo': 'foo_value'}
    expected = {'foo': 'foo_value', 'bar': None}
    assert_normalized(document, expected, schema)


@mark.parametrize(
    'default', ({'default': 'bar_value'}, {'default_setter': lambda doc: 'bar_value'})
)
def test_default_missing_in_subschema(default):
    bar_schema = {'type': 'string'}
    bar_schema.update(default)
    schema = {
        'thing': {
            'type': 'dict',
            'schema': {'foo': {'type': 'string'}, 'bar': bar_schema},
        }
    }
    document = {'thing': {'foo': 'foo_value'}}
    expected = {'thing': {'foo': 'foo_value', 'bar': 'bar_value'}}
    assert_normalized(document, expected, schema)


def test_depending_default_setters():
    schema = {
        'a': {'type': 'integer'},
        'b': {'type': 'integer', 'default_setter': lambda d: d['a'] + 1},
        'c': {'type': 'integer', 'default_setter': lambda d: d['b'] * 2},
        'd': {'type': 'integer', 'default_setter': lambda d: d['b'] + d['c']},
    }
    document = {'a': 1}
    expected = {'a': 1, 'b': 2, 'c': 4, 'd': 6}
    assert_normalized(document, expected, schema)


def test_circular_depending_default_setters(validator):
    schema = {
        'a': {'type': 'integer', 'default_setter': lambda d: d['b'] + 1},
        'b': {'type': 'integer', 'default_setter': lambda d: d['a'] + 1},
    }
    validator({}, schema)
    assert errors.SETTING_DEFAULT_FAILED in validator._errors


def test_issue_250():
    # https://github.com/pyeve/cerberus/issues/250
    schema = {
        'list': {
            'type': 'list',
            'schema': {
                'type': 'dict',
                'allow_unknown': True,
                'schema': {'a': {'type': 'string'}},
            },
        }
    }
    document = {'list': {'is_a': 'mapping'}}
    assert_fail(
        document,
        schema,
        error=('list', ('list', 'type'), errors.BAD_TYPE, schema['list']['type']),
    )


def test_issue_250_no_type_pass_on_list():
    # https://github.com/pyeve/cerberus/issues/250
    schema = {
        'list': {
            'schema': {
                'allow_unknown': True,
                'type': 'dict',
                'schema': {'a': {'type': 'string'}},
            }
        }
    }
    document = {'list': [{'a': 'known', 'b': 'unknown'}]}
    assert_normalized(document, document, schema)


def test_issue_250_no_type_fail_on_dict():
    # https://github.com/pyeve/cerberus/issues/250
    schema = {
        'list': {'schema': {'allow_unknown': True, 'schema': {'a': {'type': 'string'}}}}
    }
    document = {'list': {'a': {'a': 'known'}}}
    assert_fail(
        document,
        schema,
        error=(
            'list',
            ('list', 'schema'),
            errors.BAD_TYPE_FOR_SCHEMA,
            schema['list']['schema'],
        ),
    )


def test_issue_250_no_type_fail_pass_on_other():
    # https://github.com/pyeve/cerberus/issues/250
    schema = {
        'list': {'schema': {'allow_unknown': True, 'schema': {'a': {'type': 'string'}}}}
    }
    document = {'list': 1}
    assert_normalized(document, document, schema)


def test_allow_unknown_with_of_rules():
    # https://github.com/pyeve/cerberus/issues/251
    schema = {
        'test': {
            'oneof': [
                {
                    'type': 'dict',
                    'allow_unknown': True,
                    'schema': {'known': {'type': 'string'}},
                },
                {'type': 'dict', 'schema': {'known': {'type': 'string'}}},
            ]
        }
    }
    # check regression and that allow unknown does not cause any different
    # than expected behaviour for one-of.
    document = {'test': {'known': 's'}}
    assert_fail(
        document,
        schema,
        error=('test', ('test', 'oneof'), errors.ONEOF, schema['test']['oneof']),
    )
    # check that allow_unknown is actually applied
    document = {'test': {'known': 's', 'unknown': 'asd'}}
    assert_success(document, schema)


def test_271_normalising_tuples():
    # https://github.com/pyeve/cerberus/issues/271
    schema = {
        'my_field': {'type': 'list', 'schema': {'type': ('string', 'number', 'dict')}}
    }
    document = {'my_field': ('foo', 'bar', 42, 'albert', 'kandinsky', {'items': 23})}
    assert_success(document, schema)

    normalized = Validator(schema).normalized(document)
    assert normalized['my_field'] == (
        'foo',
        'bar',
        42,
        'albert',
        'kandinsky',
        {'items': 23},
    )


def test_allow_unknown_wo_schema():
    # https://github.com/pyeve/cerberus/issues/302
    v = Validator({'a': {'type': 'dict', 'allow_unknown': True}})
    v({'a': {}})


def test_allow_unknown_with_purge_unknown():
    validator = Validator(purge_unknown=True)
    schema = {'foo': {'type': 'dict', 'allow_unknown': True}}
    document = {'foo': {'bar': True}, 'bar': 'foo'}
    expected = {'foo': {'bar': True}}
    assert_normalized(document, expected, schema, validator)


def test_allow_unknown_with_purge_unknown_subdocument():
    validator = Validator(purge_unknown=True)
    schema = {
        'foo': {
            'type': 'dict',
            'schema': {'bar': {'type': 'string'}},
            'allow_unknown': True,
        }
    }
    document = {'foo': {'bar': 'baz', 'corge': False}, 'thud': 'xyzzy'}
    expected = {'foo': {'bar': 'baz', 'corge': False}}
    assert_normalized(document, expected, schema, validator)


def test_purge_readonly():
    schema = {
        'description': {'type': 'string', 'maxlength': 500},
        'last_updated': {'readonly': True},
    }
    validator = Validator(schema=schema, purge_readonly=True)
    document = {'description': 'it is a thing'}
    expected = deepcopy(document)
    document['last_updated'] = 'future'
    assert_normalized(document, expected, validator=validator)


def test_defaults_in_allow_unknown_schema():
    schema = {'meta': {'type': 'dict'}, 'version': {'type': 'string'}}
    allow_unknown = {
        'type': 'dict',
        'schema': {
            'cfg_path': {'type': 'string', 'default': 'cfg.yaml'},
            'package': {'type': 'string'},
        },
    }
    validator = Validator(schema=schema, allow_unknown=allow_unknown)

    document = {'version': '1.2.3', 'plugin_foo': {'package': 'foo'}}
    expected = {
        'version': '1.2.3',
        'plugin_foo': {'package': 'foo', 'cfg_path': 'cfg.yaml'},
    }
    assert_normalized(document, expected, schema, validator)


Cerberus-1.3.2/cerberus/tests/test_registries.py

# -*- coding: utf-8 -*-

from cerberus import schema_registry, rules_set_registry, Validator
from cerberus.tests import (
    assert_fail,
    assert_normalized,
    assert_schema_error,
    assert_success,
)


def test_schema_registry_simple():
    schema_registry.add('foo', {'bar': {'type': 'string'}})
    schema = {'a': {'schema': 'foo'}, 'b': {'schema': 'foo'}}
    document = {'a': {'bar': 'a'}, 'b': {'bar': 'b'}}
    assert_success(document, schema)


def test_top_level_reference():
    schema_registry.add('peng', {'foo': {'type': 'integer'}})
    document = {'foo': 42}
    assert_success(document, 'peng')


def test_rules_set_simple():
    rules_set_registry.add('foo', {'type': 'integer'})
    assert_success({'bar': 1}, {'bar': 'foo'})
    assert_fail({'bar': 'one'}, {'bar': 'foo'})


def test_allow_unknown_as_reference():
    rules_set_registry.add('foo', {'type': 'number'})
    v = Validator(allow_unknown='foo')
    assert_success({0: 1}, {}, v)
    assert_fail({0: 'one'}, {}, v)


def test_recursion():
    rules_set_registry.add('self', {'type': 'dict', 'allow_unknown': 'self'})
    v = Validator(allow_unknown='self')
    assert_success({0: {1: {2: {}}}}, {}, v)


def test_references_remain_unresolved(validator):
    rules_set_registry.extend(
        (('boolean', {'type': 'boolean'}), ('booleans', {'valuesrules': 'boolean'}))
    )
    validator.schema = {'foo': 'booleans'}
    assert 'booleans' == validator.schema['foo']
    assert 'boolean' == rules_set_registry._storage['booleans']['valuesrules']


def test_rules_registry_with_anyof_type():
    rules_set_registry.add('string_or_integer', {'anyof_type': ['string', 'integer']})
    schema = {'soi': 'string_or_integer'}
    assert_success({'soi': 'hello'}, schema)


def test_schema_registry_with_anyof_type():
    schema_registry.add('soi_id', {'id': {'anyof_type': ['string', 'integer']}})
    schema = {'soi': {'schema': 'soi_id'}}
    assert_success({'soi': {'id': 'hello'}}, schema)


def test_normalization_with_rules_set():
    # https://github.com/pyeve/cerberus/issues/283
    rules_set_registry.add('foo', {'default': 42})
    assert_normalized({}, {'bar': 42}, {'bar': 'foo'})
    rules_set_registry.add('foo', {'default_setter': lambda _: 42})
    assert_normalized({}, {'bar': 42}, {'bar': 'foo'})
    rules_set_registry.add('foo', {'type': 'integer', 'nullable': True})
    assert_success({'bar': None}, {'bar': 'foo'})


def test_rules_set_with_dict_field():
    document = {'a_dict': {'foo': 1}}
    schema = {'a_dict': {'type': 'dict', 'schema': {'foo': 'rule'}}}

    # the schema's not yet added to the valid ones, so test the faulty first
    rules_set_registry.add('rule', {'tüpe': 'integer'})
    assert_schema_error(document, schema)

    rules_set_registry.add('rule', {'type': 'integer'})
    assert_success(document, schema)


Cerberus-1.3.2/cerberus/tests/test_schema.py
# -*- coding: utf-8 -*-
import re

import pytest

from cerberus import Validator, errors, SchemaError
from cerberus.schema import UnvalidatedSchema
from cerberus.tests import assert_schema_error
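# Illustrative sketch, not part of the upstream suite (the helper name below
# is hypothetical): the definition schema is validated eagerly, so an unknown
# rule raises SchemaError already at Validator construction time, which is
# the behaviour the SchemaError tests in this module build on.
def _example_schema_error_on_construction():
    from cerberus import SchemaError, Validator

    try:
        # 'no_such_rule' is not a known validation rule
        Validator({'foo': {'no_such_rule': True}})
    except SchemaError:
        return True
    return False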
def test_empty_schema():
    validator = Validator()
    with pytest.raises(SchemaError, match=errors.SCHEMA_ERROR_MISSING):
        validator({}, schema=None)


def test_bad_schema_type(validator):
    schema = "this string should really be dict"
    msg = errors.SCHEMA_ERROR_DEFINITION_TYPE.format(schema)
    with pytest.raises(SchemaError, match=msg):
        validator.schema = schema


def test_bad_schema_type_field(validator):
    field = 'foo'
    schema = {field: {'schema': {'bar': {'type': 'strong'}}}}
    with pytest.raises(SchemaError):
        validator.schema = schema


def test_unknown_rule(validator):
    msg = "{'foo': [{'unknown': ['unknown rule']}]}"
    with pytest.raises(SchemaError, match=re.escape(msg)):
        validator.schema = {'foo': {'unknown': 'rule'}}


def test_unknown_type(validator):
    msg = str({'foo': [{'type': ['Unsupported types: unknown']}]})
    with pytest.raises(SchemaError, match=re.escape(msg)):
        validator.schema = {'foo': {'type': 'unknown'}}


def test_bad_schema_definition(validator):
    field = 'name'
    msg = str({field: ['must be of dict type']})
    with pytest.raises(SchemaError, match=re.escape(msg)):
        validator.schema = {field: 'this should really be a dict'}


def test_bad_of_rules():
    schema = {'foo': {'anyof': {'type': 'string'}}}
    assert_schema_error({}, schema)


def test_normalization_rules_are_invalid_in_of_rules():
    schema = {0: {'anyof': [{'coerce': lambda x: x}]}}
    assert_schema_error({}, schema)


def test_anyof_allof_schema_validate():
    # make sure schema with 'anyof' and 'allof' constraints are checked
    # correctly
    schema = {
        'doc': {'type': 'dict', 'anyof': [{'schema': [{'param': {'type': 'number'}}]}]}
    }
    assert_schema_error({'doc': 'this is my document'}, schema)

    schema = {
        'doc': {'type': 'dict', 'allof': [{'schema': [{'param': {'type': 'number'}}]}]}
    }
    assert_schema_error({'doc': 'this is my document'}, schema)


def test_repr():
    v = Validator({'foo': {'type': 'string'}})
    assert repr(v.schema) == "{'foo': {'type': 'string'}}"


def test_validated_schema_cache():
    v = Validator({'foozifix': {'coerce': int}})
    cache_size = len(v._valid_schemas)

    v = Validator({'foozifix': {'type': 'integer'}})
    cache_size += 1
    assert len(v._valid_schemas) == cache_size

    v = Validator({'foozifix': {'coerce': int}})
    assert len(v._valid_schemas) == cache_size

    max_cache_size = 161
    assert cache_size <= max_cache_size, (
        "There's an unexpectedly high amount (%s) of cached valid "
        "definition schemas. Unless you added further tests, "
        "there are good chances that something is wrong. "
        "If you added tests with new schemas, you can try to "
        "adjust the variable `max_cache_size` according to "
        "the added schemas." % cache_size
    )


def test_expansion_in_nested_schema():
    schema = {'detroit': {'schema': {'anyof_regex': ['^Aladdin', 'Sane$']}}}
    v = Validator(schema)
    assert v.schema['detroit']['schema'] == {
        'anyof': [{'regex': '^Aladdin'}, {'regex': 'Sane$'}]
    }


def test_unvalidated_schema_can_be_copied():
    schema = UnvalidatedSchema()
    schema_copy = schema.copy()
    assert schema_copy == schema


# TODO remove with next major release
def test_deprecated_rule_names_in_valueschema():
    def check_with(field, value, error):
        pass

    schema = {
        "field_1": {
            "type": "dict",
            "valueschema": {
                "type": "dict",
                "keyschema": {"type": "string"},
                "valueschema": {"type": "string"},
            },
        },
        "field_2": {
            "type": "list",
            "items": [
                {"keyschema": {}},
                {"validator": check_with},
                {"valueschema": {}},
            ],
        },
    }

    validator = Validator(schema)

    assert validator.schema == {
        "field_1": {
            "type": "dict",
            "valuesrules": {
                "type": "dict",
                "keysrules": {"type": "string"},
                "valuesrules": {"type": "string"},
            },
        },
        "field_2": {
            "type": "list",
            "items": [
                {"keysrules": {}},
                {"check_with": check_with},
                {"valuesrules": {}},
            ],
        },
    }


def test_anyof_check_with():
    def foo(field, value, error):
        pass

    def bar(field, value, error):
        pass

    schema = {'field': {'anyof_check_with': [foo, bar]}}
    validator = Validator(schema)

    assert validator.schema == {
        'field': {'anyof': [{'check_with': foo}, {'check_with': bar}]}
    }
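# Illustrative sketch, not part of the upstream suite (the helper name below
# is hypothetical): normalized() returns a copy of the document with
# 'default' values filled in, the mechanism exercised by the normalization
# tests elsewhere in this suite.
def _example_default_normalization():
    from cerberus import Validator

    # 'amount' is absent from the document, so its default is applied
    v = Validator({'amount': {'type': 'integer', 'default': 1}})
    return v.normalized({})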
Cerberus-1.3.2/cerberus/tests/test_utils.py
from cerberus.utils import compare_paths_lt


def test_compare_paths():
    lesser = ('a_dict', 'keysrules')
    greater = ('a_dict', 'valuesrules')
    assert compare_paths_lt(lesser, greater)

    lesser += ('type',)
    greater += ('regex',)
    assert compare_paths_lt(lesser, greater)


Cerberus-1.3.2/cerberus/tests/test_validation.py
# -*- coding: utf-8 -*-
import itertools
import re
import sys
from datetime import datetime, date
from random import choice
from string import ascii_lowercase

from pytest import mark

from cerberus import errors, Validator
from cerberus.tests import (
    assert_bad_type,
    assert_document_error,
    assert_fail,
    assert_has_error,
    assert_not_has_error,
    assert_success,
)
from cerberus.tests.conftest import sample_schema


def test_empty_document():
    assert_document_error(None, sample_schema, None, errors.DOCUMENT_MISSING)


def test_bad_document_type():
    document = "not a dict"
    assert_document_error(
        document, sample_schema, None, errors.DOCUMENT_FORMAT.format(document)
    )


def test_unknown_field(validator):
    field = 'surname'
    assert_fail(
        {field: 'doe'},
        validator=validator,
        error=(field, (), errors.UNKNOWN_FIELD, None),
    )
    assert validator.errors == {field: ['unknown field']}


def test_empty_field_definition(document):
    field = 'name'
    schema = {field: {}}
    assert_success(document, schema)


def test_required_field(schema):
    field = 'a_required_string'
    required_string_extension = {
        'a_required_string': {
            'type': 'string',
            'minlength': 2,
            'maxlength': 10,
            'required': True,
        }
    }
    schema.update(required_string_extension)
    assert_fail(
        {'an_integer': 1},
        schema,
        error=(field, (field, 'required'), errors.REQUIRED_FIELD, True),
    )


def test_nullable_field():
    assert_success({'a_nullable_integer': None})
    assert_success({'a_nullable_integer': 3})
    assert_success({'a_nullable_field_without_type':
        None})
    assert_fail({'a_nullable_integer': "foo"})
    assert_fail({'an_integer': None})
    assert_fail({'a_not_nullable_field_without_type': None})


def test_nullable_skips_allowed():
    schema = {'role': {'allowed': ['agent', 'client', 'supplier'], 'nullable': True}}
    assert_success({'role': None}, schema)


def test_readonly_field():
    field = 'a_readonly_string'
    assert_fail(
        {field: 'update me if you can'},
        error=(field, (field, 'readonly'), errors.READONLY_FIELD, True),
    )


def test_readonly_field_first_rule():
    # test that readonly rule is checked before any other rule, and blocks.
    # See #63.
    schema = {'a_readonly_number': {'type': 'integer', 'readonly': True, 'max': 1}}
    v = Validator(schema)
    v.validate({'a_readonly_number': 2})
    # it would be a list if there's more than one error; we get a dict
    # instead.
    assert 'read-only' in v.errors['a_readonly_number'][0]


def test_readonly_field_with_default_value():
    schema = {
        'created': {'type': 'string', 'readonly': True, 'default': 'today'},
        'modified': {
            'type': 'string',
            'readonly': True,
            'default_setter': lambda d: d['created'],
        },
    }
    assert_success({}, schema)
    expected_errors = [
        (
            'created',
            ('created', 'readonly'),
            errors.READONLY_FIELD,
            schema['created']['readonly'],
        ),
        (
            'modified',
            ('modified', 'readonly'),
            errors.READONLY_FIELD,
            schema['modified']['readonly'],
        ),
    ]
    assert_fail(
        {'created': 'tomorrow', 'modified': 'today'}, schema, errors=expected_errors
    )
    assert_fail(
        {'created': 'today', 'modified': 'today'}, schema, errors=expected_errors
    )


def test_nested_readonly_field_with_default_value():
    schema = {
        'some_field': {
            'type': 'dict',
            'schema': {
                'created': {'type': 'string', 'readonly': True, 'default': 'today'},
                'modified': {
                    'type': 'string',
                    'readonly': True,
                    'default_setter': lambda d: d['created'],
                },
            },
        }
    }
    assert_success({'some_field': {}}, schema)
    expected_errors = [
        (
            ('some_field', 'created'),
            ('some_field', 'schema', 'created', 'readonly'),
            errors.READONLY_FIELD,
            schema['some_field']['schema']['created']['readonly'],
        ),
        (
            ('some_field', 'modified'),
            ('some_field', 'schema', 'modified', 'readonly'),
            errors.READONLY_FIELD,
            schema['some_field']['schema']['modified']['readonly'],
        ),
    ]
    assert_fail(
        {'some_field': {'created': 'tomorrow', 'modified': 'now'}},
        schema,
        errors=expected_errors,
    )
    assert_fail(
        {'some_field': {'created': 'today', 'modified': 'today'}},
        schema,
        errors=expected_errors,
    )


def test_repeated_readonly(validator):
    # https://github.com/pyeve/cerberus/issues/311
    validator.schema = {'id': {'readonly': True}}
    assert_fail({'id': 0}, validator=validator)
    assert_fail({'id': 0}, validator=validator)


def test_not_a_string():
    assert_bad_type('a_string', 'string', 1)


def test_not_a_binary():
    # 'u' literal prefix produces type `str` in Python 3
    assert_bad_type('a_binary', 'binary', u"i'm not a binary")


def test_not_a_integer():
    assert_bad_type('an_integer', 'integer', "i'm not an integer")


def test_not_a_boolean():
    assert_bad_type('a_boolean', 'boolean', "i'm not a boolean")


def test_not_a_datetime():
    assert_bad_type('a_datetime', 'datetime', "i'm not a datetime")


def test_not_a_float():
    assert_bad_type('a_float', 'float', "i'm not a float")


def test_not_a_number():
    assert_bad_type('a_number', 'number', "i'm not a number")


def test_not_a_list():
    assert_bad_type('a_list_of_values', 'list', "i'm not a list")


def test_not_a_dict():
    assert_bad_type('a_dict', 'dict', "i'm not a dict")


def test_bad_max_length(schema):
    field = 'a_string'
    max_length = schema[field]['maxlength']
    value = "".join(choice(ascii_lowercase) for i in range(max_length + 1))
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'maxlength'),
            errors.MAX_LENGTH,
            max_length,
            (len(value),),
        ),
    )


def test_bad_max_length_binary(schema):
    field = 'a_binary'
    max_length = schema[field]['maxlength']
    value = b'\x00' * (max_length + 1)
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'maxlength'),
            errors.MAX_LENGTH,
            max_length,
            (len(value),),
        ),
    )


def test_bad_min_length(schema):
    field = 'a_string'
    min_length = schema[field]['minlength']
    value = "".join(choice(ascii_lowercase) for i in range(min_length - 1))
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'minlength'),
            errors.MIN_LENGTH,
            min_length,
            (len(value),),
        ),
    )


def test_bad_min_length_binary(schema):
    field = 'a_binary'
    min_length = schema[field]['minlength']
    value = b'\x00' * (min_length - 1)
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'minlength'),
            errors.MIN_LENGTH,
            min_length,
            (len(value),),
        ),
    )


def test_bad_max_value(schema):
    def assert_bad_max_value(field, inc):
        max_value = schema[field]['max']
        value = max_value + inc
        assert_fail(
            {field: value}, error=(field, (field, 'max'), errors.MAX_VALUE, max_value)
        )

    field = 'an_integer'
    assert_bad_max_value(field, 1)
    field = 'a_float'
    assert_bad_max_value(field, 1.0)
    field = 'a_number'
    assert_bad_max_value(field, 1)


def test_bad_min_value(schema):
    def assert_bad_min_value(field, inc):
        min_value = schema[field]['min']
        value = min_value - inc
        assert_fail(
            {field: value}, error=(field, (field, 'min'), errors.MIN_VALUE, min_value)
        )

    field = 'an_integer'
    assert_bad_min_value(field, 1)
    field = 'a_float'
    assert_bad_min_value(field, 1.0)
    field = 'a_number'
    assert_bad_min_value(field, 1)


def test_bad_schema():
    field = 'a_dict'
    subschema_field = 'address'
    schema = {
        field: {
            'type': 'dict',
            'schema': {
                subschema_field: {'type': 'string'},
                'city': {'type': 'string', 'required': True},
            },
        }
    }
    document = {field: {subschema_field: 34}}
    validator = Validator(schema)
    assert_fail(
        document,
        validator=validator,
        error=(
            field,
            (field, 'schema'),
            errors.MAPPING_SCHEMA,
            validator.schema['a_dict']['schema'],
        ),
        child_errors=[
            (
                (field, subschema_field),
                (field, 'schema', subschema_field, 'type'),
                errors.BAD_TYPE,
                'string',
            ),
            (
                (field, 'city'),
                (field, 'schema', 'city', 'required'),
                errors.REQUIRED_FIELD,
                True,
            ),
        ],
    )

    handler = errors.BasicErrorHandler
    assert field in validator.errors
    assert subschema_field in validator.errors[field][-1]
    assert (
        handler.messages[errors.BAD_TYPE.code].format(constraint='string')
        in validator.errors[field][-1][subschema_field]
    )
    assert 'city' in validator.errors[field][-1]
    assert (
        handler.messages[errors.REQUIRED_FIELD.code]
        in validator.errors[field][-1]['city']
    )


def test_bad_valuesrules():
    field = 'a_dict_with_valuesrules'
    schema_field = 'a_string'
    value = {schema_field: 'not an integer'}

    exp_child_errors = [
        (
            (field, schema_field),
            (field, 'valuesrules', 'type'),
            errors.BAD_TYPE,
            'integer',
        )
    ]
    assert_fail(
        {field: value},
        error=(field, (field, 'valuesrules'), errors.VALUESRULES, {'type': 'integer'}),
        child_errors=exp_child_errors,
    )


def test_bad_list_of_values(validator):
    field = 'a_list_of_values'
    value = ['a string', 'not an integer']
    assert_fail(
        {field: value},
        validator=validator,
        error=(
            field,
            (field, 'items'),
            errors.BAD_ITEMS,
            [{'type': 'string'}, {'type': 'integer'}],
        ),
        child_errors=[
            ((field, 1), (field, 'items', 1, 'type'), errors.BAD_TYPE, 'integer')
        ],
    )
    assert (
        errors.BasicErrorHandler.messages[errors.BAD_TYPE.code].format(
            constraint='integer'
        )
        in validator.errors[field][-1][1]
    )

    value = ['a string', 10, 'an extra item']
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'items'),
            errors.ITEMS_LENGTH,
            [{'type': 'string'}, {'type': 'integer'}],
            (2, 3),
        ),
    )


def test_bad_list_of_integers():
    field = 'a_list_of_integers'
    value = [34, 'not an integer']
    assert_fail({field: value})


def test_bad_list_of_dicts():
    field = 'a_list_of_dicts'
    map_schema = {
        'sku': {'type': 'string'},
        'price': {'type': 'integer', 'required': True},
    }
    seq_schema = {'type': 'dict', 'schema': map_schema}
    schema = {field: {'type': 'list', 'schema': seq_schema}}
    validator = Validator(schema)
    value = [{'sku': 'KT123', 'price': '100'}]
    document = {field: value}
    assert_fail(
        document,
        validator=validator,
        error=(field, (field, 'schema'), errors.SEQUENCE_SCHEMA, seq_schema),
        child_errors=[
            ((field, 0), (field, 'schema', 'schema'),
             errors.MAPPING_SCHEMA, map_schema)
        ],
    )
    assert field in validator.errors
    assert 0 in validator.errors[field][-1]
    assert 'price' in validator.errors[field][-1][0][-1]
    exp_msg = errors.BasicErrorHandler.messages[errors.BAD_TYPE.code].format(
        constraint='integer'
    )
    assert exp_msg in validator.errors[field][-1][0][-1]['price']

    value = ["not a dict"]
    exp_child_errors = [
        ((field, 0), (field, 'schema', 'type'), errors.BAD_TYPE, 'dict', ())
    ]
    assert_fail(
        {field: value},
        error=(field, (field, 'schema'), errors.SEQUENCE_SCHEMA, seq_schema),
        child_errors=exp_child_errors,
    )


def test_array_unallowed():
    field = 'an_array'
    value = ['agent', 'client', 'profit']
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'allowed'),
            errors.UNALLOWED_VALUES,
            ['agent', 'client', 'vendor'],
            ['profit'],
        ),
    )


def test_string_unallowed():
    field = 'a_restricted_string'
    value = 'profit'
    assert_fail(
        {field: value},
        error=(
            field,
            (field, 'allowed'),
            errors.UNALLOWED_VALUE,
            ['agent', 'client', 'vendor'],
            value,
        ),
    )


def test_integer_unallowed():
    field = 'a_restricted_integer'
    value = 2
    assert_fail(
        {field: value},
        error=(field, (field, 'allowed'), errors.UNALLOWED_VALUE, [-1, 0, 1], value),
    )


def test_integer_allowed():
    assert_success({'a_restricted_integer': -1})


def test_validate_update():
    assert_success(
        {
            'an_integer': 100,
            'a_dict': {'address': 'adr'},
            'a_list_of_dicts': [{'sku': 'let'}],
        },
        update=True,
    )


def test_string():
    assert_success({'a_string': 'john doe'})


def test_string_allowed():
    assert_success({'a_restricted_string': 'client'})


def test_integer():
    assert_success({'an_integer': 50})


def test_boolean():
    assert_success({'a_boolean': True})


def test_datetime():
    assert_success({'a_datetime': datetime.now()})


def test_float():
    assert_success({'a_float': 3.5})
    assert_success({'a_float': 1})


def test_number():
    assert_success({'a_number': 3.5})
    assert_success({'a_number': 3})


def test_array():
    assert_success({'an_array': ['agent', 'client']})


def test_set():
    assert_success({'a_set': set(['hello', 1])})


def test_one_of_two_types(validator):
    field = 'one_or_more_strings'
    assert_success({field: 'foo'})
    assert_success({field: ['foo', 'bar']})
    exp_child_errors = [
        ((field, 1), (field, 'schema', 'type'), errors.BAD_TYPE, 'string')
    ]
    assert_fail(
        {field: ['foo', 23]},
        validator=validator,
        error=(field, (field, 'schema'), errors.SEQUENCE_SCHEMA, {'type': 'string'}),
        child_errors=exp_child_errors,
    )
    assert_fail(
        {field: 23},
        error=((field,), (field, 'type'), errors.BAD_TYPE, ['string', 'list']),
    )
    assert validator.errors == {field: [{1: ['must be of string type']}]}


def test_regex(validator):
    field = 'a_regex_email'
    assert_success({field: 'valid.email@gmail.com'}, validator=validator)
    assert_fail(
        {field: 'invalid'},
        update=True,
        error=(
            field,
            (field, 'regex'),
            errors.REGEX_MISMATCH,
            r'^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$',
        ),
    )


def test_a_list_of_dicts():
    assert_success(
        {
            'a_list_of_dicts': [
                {'sku': 'AK345', 'price': 100},
                {'sku': 'YZ069', 'price': 25},
            ]
        }
    )


def test_a_list_of_values():
    assert_success({'a_list_of_values': ['hello', 100]})


def test_an_array_from_set():
    assert_success({'an_array_from_set': ['agent', 'client']})


def test_a_list_of_integers():
    assert_success({'a_list_of_integers': [99, 100]})


def test_a_dict(schema):
    assert_success({'a_dict': {'address': 'i live here', 'city': 'in my own town'}})
    assert_fail(
        {'a_dict': {'address': 8545}},
        error=(
            'a_dict',
            ('a_dict', 'schema'),
            errors.MAPPING_SCHEMA,
            schema['a_dict']['schema'],
        ),
        child_errors=[
            (
                ('a_dict', 'address'),
                ('a_dict', 'schema', 'address', 'type'),
                errors.BAD_TYPE,
                'string',
            ),
            (
                ('a_dict', 'city'),
                ('a_dict', 'schema', 'city', 'required'),
                errors.REQUIRED_FIELD,
                True,
            ),
        ],
    )


def test_a_dict_with_valuesrules(validator):
    assert_success(
        {'a_dict_with_valuesrules': {'an integer': 99, 'another integer': 100}}
    )

    error = (
        'a_dict_with_valuesrules',
        ('a_dict_with_valuesrules', 'valuesrules'),
        errors.VALUESRULES,
        {'type': 'integer'},
    )
    child_errors = [
        (
            ('a_dict_with_valuesrules', 'a string'),
            ('a_dict_with_valuesrules', 'valuesrules', 'type'),
            errors.BAD_TYPE,
            'integer',
        )
    ]
    assert_fail(
        {'a_dict_with_valuesrules': {'a string': '99'}},
        validator=validator,
        error=error,
        child_errors=child_errors,
    )

    assert 'valuesrules' in validator.schema_error_tree['a_dict_with_valuesrules']
    v = validator.schema_error_tree
    assert len(v['a_dict_with_valuesrules']['valuesrules'].descendants) == 1


# TODO remove 'keyschema' as rule with the next major release
@mark.parametrize('rule', ('keysrules', 'keyschema'))
def test_keysrules(rule):
    schema = {
        'a_dict_with_keysrules': {
            'type': 'dict',
            rule: {'type': 'string', 'regex': '[a-z]+'},
        }
    }
    assert_success({'a_dict_with_keysrules': {'key': 'value'}}, schema=schema)
    assert_fail({'a_dict_with_keysrules': {'KEY': 'value'}}, schema=schema)


def test_a_list_length(schema):
    field = 'a_list_length'
    min_length = schema[field]['minlength']
    max_length = schema[field]['maxlength']

    assert_fail(
        {field: [1] * (min_length - 1)},
        error=(
            field,
            (field, 'minlength'),
            errors.MIN_LENGTH,
            min_length,
            (min_length - 1,),
        ),
    )

    for i in range(min_length, max_length):
        value = [1] * i
        assert_success({field: value})

    assert_fail(
        {field: [1] * (max_length + 1)},
        error=(
            field,
            (field, 'maxlength'),
            errors.MAX_LENGTH,
            max_length,
            (max_length + 1,),
        ),
    )


def test_custom_datatype():
    class MyValidator(Validator):
        def _validate_type_objectid(self, value):
            if re.match('[a-f0-9]{24}', value):
                return True

    schema = {'test_field': {'type': 'objectid'}}
    validator = MyValidator(schema)
    assert_success({'test_field': '50ad188438345b1049c88a28'}, validator=validator)
    assert_fail(
        {'test_field': 'hello'},
        validator=validator,
        error=('test_field', ('test_field', 'type'), errors.BAD_TYPE, 'objectid'),
    )


def test_custom_datatype_rule():
    class MyValidator(Validator):
        def _validate_min_number(self, min_number, field, value):
            """ {'type': 'number'} """
            if value < min_number:
                self._error(field, 'Below the min')

        # TODO replace with TypeDefinition in next major release
        def _validate_type_number(self, value):
            if isinstance(value, int):
                return True

    schema = {'test_field': {'min_number': 1, 'type': 'number'}}
    validator = MyValidator(schema)
    assert_fail(
        {'test_field': '0'},
        validator=validator,
        error=('test_field', ('test_field', 'type'), errors.BAD_TYPE, 'number'),
    )
    assert_fail(
        {'test_field': 0},
        validator=validator,
        error=('test_field', (), errors.CUSTOM, None, ('Below the min',)),
    )
    assert validator.errors == {'test_field': ['Below the min']}


def test_custom_validator():
    class MyValidator(Validator):
        def _validate_isodd(self, isodd, field, value):
            """ {'type': 'boolean'} """
            if isodd and not bool(value & 1):
                self._error(field, 'Not an odd number')

    schema = {'test_field': {'isodd': True}}
    validator = MyValidator(schema)
    assert_success({'test_field': 7}, validator=validator)
    assert_fail(
        {'test_field': 6},
        validator=validator,
        error=('test_field', (), errors.CUSTOM, None, ('Not an odd number',)),
    )
    assert validator.errors == {'test_field': ['Not an odd number']}


@mark.parametrize(
    'value, _type', (('', 'string'), ((), 'list'), ({}, 'dict'), ([], 'list'))
)
def test_empty_values(value, _type):
    field = 'test'
    schema = {field: {'type': _type}}
    document = {field: value}

    assert_success(document, schema)

    schema[field]['empty'] = False
    assert_fail(
        document,
        schema,
        error=(field, (field, 'empty'), errors.EMPTY_NOT_ALLOWED, False),
    )

    schema[field]['empty'] = True
    assert_success(document, schema)


def test_empty_skips_regex(validator):
    schema = {'foo': {'empty': True, 'regex': r'\d?\d\.\d\d', 'type': 'string'}}
    assert validator({'foo': ''}, schema)


def test_ignore_none_values():
    field = 'test'
    schema = {field: {'type': 'string', 'empty': False, 'required': False}}
    document = {field: None}

    # Test normal behaviour
    validator = Validator(schema, ignore_none_values=False)
    assert_fail(document, validator=validator)
    validator.schema[field]['required'] = True
    validator.schema.validate()
    _errors = assert_fail(document, validator=validator)
    assert_not_has_error(
        _errors, field, (field, 'required'), errors.REQUIRED_FIELD, True
    )

    # Test ignore None behaviour
    validator = Validator(schema, ignore_none_values=True)
    validator.schema[field]['required'] = False
    validator.schema.validate()
    assert_success(document, validator=validator)
    validator.schema[field]['required'] = True
    _errors = assert_fail(schema=schema, document=document, validator=validator)
    assert_has_error(_errors, field, (field, 'required'), errors.REQUIRED_FIELD, True)
    assert_not_has_error(_errors, field, (field, 'type'), errors.BAD_TYPE, 'string')


def test_unknown_keys():
    schema = {}

    # test that unknown fields are allowed when allow_unknown is True.
    v = Validator(allow_unknown=True, schema=schema)
    assert_success({"unknown1": True, "unknown2": "yes"}, validator=v)

    # test that unknown fields are allowed only if they meet the
    # allow_unknown schema when provided.
    v.allow_unknown = {'type': 'string'}
    assert_success(document={'name': 'mark'}, validator=v)
    assert_fail({"name": 1}, validator=v)

    # test that unknown fields are not allowed if allow_unknown is False
    v.allow_unknown = False
    assert_fail({'name': 'mark'}, validator=v)


def test_unknown_key_dict(validator):
    # https://github.com/pyeve/cerberus/issues/177
    validator.allow_unknown = True
    document = {'a_dict': {'foo': 'foo_value', 'bar': 25}}
    assert_success(document, {}, validator=validator)


def test_unknown_key_list(validator):
    # https://github.com/pyeve/cerberus/issues/177
    validator.allow_unknown = True
    document = {'a_dict': ['foo', 'bar']}
    assert_success(document, {}, validator=validator)


def test_unknown_keys_list_of_dicts(validator):
    # test that allow_unknown is honored even for subdicts in lists.
    # https://github.com/pyeve/cerberus/issues/67.
    validator.allow_unknown = True
    document = {'a_list_of_dicts': [{'sku': 'YZ069', 'price': 25, 'extra': True}]}
    assert_success(document, validator=validator)


def test_unknown_keys_retain_custom_rules():
    # test that the allow_unknown schema respects custom validation rules.
    # https://github.com/pyeve/cerberus/issues/66.
    class CustomValidator(Validator):
        def _validate_type_foo(self, value):
            if value == "foo":
                return True

    validator = CustomValidator({})
    validator.allow_unknown = {"type": "foo"}
    assert_success(document={"fred": "foo", "barney": "foo"}, validator=validator)


def test_nested_unknown_keys():
    schema = {
        'field1': {
            'type': 'dict',
            'allow_unknown': True,
            'schema': {'nested1': {'type': 'string'}},
        }
    }
    document = {'field1': {'nested1': 'foo', 'arb1': 'bar', 'arb2': 42}}
    assert_success(document=document, schema=schema)

    schema['field1']['allow_unknown'] = {'type': 'string'}
    assert_fail(document=document, schema=schema)


def test_novalidate_noerrors(validator):
    """
    In v0.1.0 and below `self.errors` raised an exception if no validation
    had been performed yet.
    """
    assert validator.errors == {}


def test_callable_validator():
    """
    Validator instance is callable, functions as a shorthand
    passthrough to validate()
    """
    schema = {'test_field': {'type': 'string'}}
    v = Validator(schema)
    assert v.validate({'test_field': 'foo'})
    assert v({'test_field': 'foo'})
    assert not v.validate({'test_field': 1})
    assert not v({'test_field': 1})


def test_dependencies_field():
    schema = {'test_field': {'dependencies': 'foo'}, 'foo': {'type': 'string'}}
    assert_success({'test_field': 'foobar', 'foo': 'bar'}, schema)
    assert_fail({'test_field': 'foobar'}, schema)


def test_dependencies_list():
    schema = {
        'test_field': {'dependencies': ['foo', 'bar']},
        'foo': {'type': 'string'},
        'bar': {'type': 'string'},
    }
    assert_success({'test_field': 'foobar', 'foo': 'bar', 'bar': 'foo'}, schema)
    assert_fail({'test_field': 'foobar', 'foo': 'bar'}, schema)


def test_dependencies_list_with_required_field():
    schema = {
        'test_field': {'required': True, 'dependencies': ['foo', 'bar']},
        'foo': {'type': 'string'},
        'bar': {'type': 'string'},
    }
    # False: all dependencies missing
    assert_fail({'test_field': 'foobar'}, schema)
    # False: one of dependencies missing
    assert_fail({'test_field': 'foobar', 'foo': 'bar'}, schema)
    # False: one of dependencies missing
    assert_fail({'test_field': 'foobar', 'bar': 'foo'}, schema)
    # False: dependencies are validated and field is required
    assert_fail({'foo': 'bar', 'bar': 'foo'}, schema)
    # False: All dependencies are optional but field is still required
    assert_fail({}, schema)
    # False: dependency missing
    assert_fail({'foo': 'bar'}, schema)
    # True: dependencies are validated but field is not required
    schema['test_field']['required'] = False
    assert_success({'foo': 'bar', 'bar': 'foo'}, schema)


def test_dependencies_list_with_subdocuments_fields():
    schema = {
        'test_field': {'dependencies': ['a_dict.foo', 'a_dict.bar']},
        'a_dict': {
            'type': 'dict',
            'schema': {'foo': {'type': 'string'}, 'bar': {'type': 'string'}},
        },
    }
    assert_success(
        {'test_field':
         'foobar', 'a_dict': {'foo': 'foo', 'bar': 'bar'}}, schema
    )
    assert_fail({'test_field': 'foobar', 'a_dict': {}}, schema)
    assert_fail({'test_field': 'foobar', 'a_dict': {'foo': 'foo'}}, schema)


def test_dependencies_dict():
    schema = {
        'test_field': {'dependencies': {'foo': 'foo', 'bar': 'bar'}},
        'foo': {'type': 'string'},
        'bar': {'type': 'string'},
    }
    assert_success({'test_field': 'foobar', 'foo': 'foo', 'bar': 'bar'}, schema)
    assert_fail({'test_field': 'foobar', 'foo': 'foo'}, schema)
    assert_fail({'test_field': 'foobar', 'foo': 'bar'}, schema)
    assert_fail({'test_field': 'foobar', 'bar': 'bar'}, schema)
    assert_fail({'test_field': 'foobar', 'bar': 'foo'}, schema)
    assert_fail({'test_field': 'foobar'}, schema)


def test_dependencies_dict_with_required_field():
    schema = {
        'test_field': {'required': True, 'dependencies': {'foo': 'foo', 'bar': 'bar'}},
        'foo': {'type': 'string'},
        'bar': {'type': 'string'},
    }
    # False: all dependencies missing
    assert_fail({'test_field': 'foobar'}, schema)
    # False: one of dependencies missing
    assert_fail({'test_field': 'foobar', 'foo': 'foo'}, schema)
    assert_fail({'test_field': 'foobar', 'bar': 'bar'}, schema)
    # False: dependencies are validated and field is required
    assert_fail({'foo': 'foo', 'bar': 'bar'}, schema)
    # False: All dependencies are optional, but field is still required
    assert_fail({}, schema)
    # False: dependency missing
    assert_fail({'foo': 'bar'}, schema)

    assert_success({'test_field': 'foobar', 'foo': 'foo', 'bar': 'bar'}, schema)

    # True: dependencies are validated but field is not required
    schema['test_field']['required'] = False
    assert_success({'foo': 'bar', 'bar': 'foo'}, schema)


def test_dependencies_field_satisfy_nullable_field():
    # https://github.com/pyeve/cerberus/issues/305
    schema = {'foo': {'nullable': True}, 'bar': {'dependencies': 'foo'}}

    assert_success({'foo': None, 'bar': 1}, schema)
    assert_success({'foo': None}, schema)
    assert_fail({'bar': 1}, schema)


def test_dependencies_field_with_mutually_dependent_nullable_fields():
    # https://github.com/pyeve/cerberus/pull/306
    schema = {
        'foo': {'dependencies': 'bar', 'nullable': True},
        'bar': {'dependencies': 'foo', 'nullable': True},
    }
    assert_success({'foo': None, 'bar': None}, schema)
    assert_success({'foo': 1, 'bar': 1}, schema)
    assert_success({'foo': None, 'bar': 1}, schema)
    assert_fail({'foo': None}, schema)
    assert_fail({'foo': 1}, schema)


def test_dependencies_dict_with_subdocuments_fields():
    schema = {
        'test_field': {
            'dependencies': {'a_dict.foo': ['foo', 'bar'], 'a_dict.bar': 'bar'}
        },
        'a_dict': {
            'type': 'dict',
            'schema': {'foo': {'type': 'string'}, 'bar': {'type': 'string'}},
        },
    }
    assert_success(
        {'test_field': 'foobar', 'a_dict': {'foo': 'foo', 'bar': 'bar'}}, schema
    )
    assert_success(
        {'test_field': 'foobar', 'a_dict': {'foo': 'bar', 'bar': 'bar'}}, schema
    )
    assert_fail({'test_field': 'foobar', 'a_dict': {}}, schema)
    assert_fail(
        {'test_field': 'foobar', 'a_dict': {'foo': 'foo', 'bar': 'foo'}}, schema
    )
    assert_fail({'test_field': 'foobar', 'a_dict': {'bar': 'foo'}}, schema)
    assert_fail({'test_field': 'foobar', 'a_dict': {'bar': 'bar'}}, schema)


def test_root_relative_dependencies():
    # https://github.com/pyeve/cerberus/issues/288
    subschema = {'version': {'dependencies': '^repo'}}
    schema = {'package': {'allow_unknown': True, 'schema': subschema}, 'repo': {}}
    assert_fail(
        {'package': {'repo': 'somewhere', 'version': 0}},
        schema,
        error=('package', ('package', 'schema'), errors.MAPPING_SCHEMA, subschema),
        child_errors=[
            (
                ('package', 'version'),
                ('package', 'schema', 'version', 'dependencies'),
                errors.DEPENDENCIES_FIELD,
                '^repo',
                ('^repo',),
            )
        ],
    )
    assert_success({'repo': 'somewhere', 'package': {'version': 1}}, schema)


def test_dependencies_errors():
    v = Validator(
        {
            'field1': {'required': False},
            'field2': {'required': True, 'dependencies': {'field1': ['one', 'two']}},
        }
    )
    assert_fail(
        {'field1': 'three', 'field2': 7},
        validator=v,
        error=(
            'field2',
            ('field2',
'dependencies'), errors.DEPENDENCIES_FIELD_VALUE, {'field1': ['one', 'two']}, ({'field1': 'three'},), ), ) def test_options_passed_to_nested_validators(validator): validator.schema = { 'sub_dict': {'type': 'dict', 'schema': {'foo': {'type': 'string'}}} } validator.allow_unknown = True assert_success({'sub_dict': {'foo': 'bar', 'unknown': True}}, validator=validator) def test_self_root_document(): """ Make sure self.root_document is always the root document. See: * https://github.com/pyeve/cerberus/pull/42 * https://github.com/pyeve/eve/issues/295 """ class MyValidator(Validator): def _validate_root_doc(self, root_doc, field, value): """ {'type': 'boolean'} """ if 'sub' not in self.root_document or len(self.root_document['sub']) != 2: self._error(field, 'self.context is not the root doc!') schema = { 'sub': { 'type': 'list', 'root_doc': True, 'schema': { 'type': 'dict', 'schema': {'foo': {'type': 'string', 'root_doc': True}}, }, } } assert_success( {'sub': [{'foo': 'bar'}, {'foo': 'baz'}]}, validator=MyValidator(schema) ) def test_validator_rule(validator): def validate_name(field, value, error): if not value.islower(): error(field, 'must be lowercase') validator.schema = { 'name': {'validator': validate_name}, 'age': {'type': 'integer'}, } assert_fail( {'name': 'ItsMe', 'age': 2}, validator=validator, error=('name', (), errors.CUSTOM, None, ('must be lowercase',)), ) assert validator.errors == {'name': ['must be lowercase']} assert_success({'name': 'itsme', 'age': 2}, validator=validator) def test_validated(validator): validator.schema = {'property': {'type': 'string'}} document = {'property': 'string'} assert validator.validated(document) == document document = {'property': 0} assert validator.validated(document) is None def test_anyof(): # prop1 must be either a number between 0 and 10 schema = {'prop1': {'min': 0, 'max': 10}} doc = {'prop1': 5} assert_success(doc, schema) # prop1 must be either a number between 0 and 10 or 100 and 110 schema = {'prop1': 
{'anyof': [{'min': 0, 'max': 10}, {'min': 100, 'max': 110}]}} doc = {'prop1': 105} assert_success(doc, schema) # prop1 must be either a number between 0 and 10 or 100 and 110 schema = {'prop1': {'anyof': [{'min': 0, 'max': 10}, {'min': 100, 'max': 110}]}} doc = {'prop1': 50} assert_fail(doc, schema) # prop1 must be an integer that is either be # greater than or equal to 0, or greater than or equal to 10 schema = {'prop1': {'type': 'integer', 'anyof': [{'min': 0}, {'min': 10}]}} assert_success({'prop1': 10}, schema) # test that intermediate schemas do not sustain assert 'type' not in schema['prop1']['anyof'][0] assert 'type' not in schema['prop1']['anyof'][1] assert 'allow_unknown' not in schema['prop1']['anyof'][0] assert 'allow_unknown' not in schema['prop1']['anyof'][1] assert_success({'prop1': 5}, schema) exp_child_errors = [ (('prop1',), ('prop1', 'anyof', 0, 'min'), errors.MIN_VALUE, 0), (('prop1',), ('prop1', 'anyof', 1, 'min'), errors.MIN_VALUE, 10), ] assert_fail( {'prop1': -1}, schema, error=(('prop1',), ('prop1', 'anyof'), errors.ANYOF, [{'min': 0}, {'min': 10}]), child_errors=exp_child_errors, ) doc = {'prop1': 5.5} assert_fail(doc, schema) doc = {'prop1': '5.5'} assert_fail(doc, schema) def test_allof(): # prop1 has to be a float between 0 and 10 schema = {'prop1': {'allof': [{'type': 'float'}, {'min': 0}, {'max': 10}]}} doc = {'prop1': -1} assert_fail(doc, schema) doc = {'prop1': 5} assert_success(doc, schema) doc = {'prop1': 11} assert_fail(doc, schema) # prop1 has to be a float and an integer schema = {'prop1': {'allof': [{'type': 'float'}, {'type': 'integer'}]}} doc = {'prop1': 11} assert_success(doc, schema) doc = {'prop1': 11.5} assert_fail(doc, schema) doc = {'prop1': '11'} assert_fail(doc, schema) def test_unicode_allowed(): # issue 280 doc = {'letters': u'♄εℓł☺'} schema = {'letters': {'type': 'string', 'allowed': ['a', 'b', 'c']}} assert_fail(doc, schema) schema = {'letters': {'type': 'string', 'allowed': [u'♄εℓł☺']}} assert_success(doc, 
schema)

    schema = {'letters': {'type': 'string', 'allowed': ['♄εℓł☺']}}
    doc = {'letters': '♄εℓł☺'}
    assert_success(doc, schema)


@mark.skipif(sys.version_info[0] < 3, reason='requires python 3.x')
def test_unicode_allowed_py3():
    """ All strings are unicode in Python 3.x. Input doc and schema have
        equal strings and validation yields success."""
    # issue 280
    doc = {'letters': u'♄εℓł☺'}
    schema = {'letters': {'type': 'string', 'allowed': ['♄εℓł☺']}}
    assert_success(doc, schema)


@mark.skipif(sys.version_info[0] > 2, reason='requires python 2.x')
def test_unicode_allowed_py2():
    """ Python 2.x encodes the values of allowed using the default encoding
        if the string includes characters outside the ASCII range. The
        produced string does not match the input, which is a unicode
        string."""
    # issue 280
    doc = {'letters': u'♄εℓł☺'}
    schema = {'letters': {'type': 'string', 'allowed': ['♄εℓł☺']}}
    assert_fail(doc, schema)


def test_oneof():
    # prop1 must match exactly one of:
    # - greater than 10
    # - greater than 0
    # - equal to -5, 5, or 15
    schema = {
        'prop1': {
            'type': 'integer',
            'oneof': [{'min': 0}, {'min': 10}, {'allowed': [-5, 5, 15]}],
        }
    }

    # document is not valid
    # prop1 not greater than 0, 10 or equal to -5
    doc = {'prop1': -1}
    assert_fail(doc, schema)

    # document is valid
    # prop1 is less than 0, but is -5
    doc = {'prop1': -5}
    assert_success(doc, schema)

    # document is valid
    # prop1 greater than 0
    doc = {'prop1': 1}
    assert_success(doc, schema)

    # document is not valid
    # prop1 is greater than 0
    # and equal to 5
    doc = {'prop1': 5}
    assert_fail(doc, schema)

    # document is not valid
    # prop1 is greater than 0
    # and greater than 10
    doc = {'prop1': 11}
    assert_fail(doc, schema)

    # document is not valid
    # prop1 is greater than 0
    # and greater than 10
    # and equal to 15
    doc = {'prop1': 15}
    assert_fail(doc, schema)


def test_noneof():
    # prop1 must match none of:
    # - greater than 10
    # - greater than 0
    # - equal to -5, 5, or 15
    schema = {
        'prop1': {
            'type': 'integer',
            'noneof': [{'min': 0}, {'min': 10}, {'allowed': [-5, 5, 15]}],
        }
    }

    # document is
valid doc = {'prop1': -1} assert_success(doc, schema) # document is not valid # prop1 is equal to -5 doc = {'prop1': -5} assert_fail(doc, schema) # document is not valid # prop1 greater than 0 doc = {'prop1': 1} assert_fail(doc, schema) # document is not valid doc = {'prop1': 5} assert_fail(doc, schema) # document is not valid doc = {'prop1': 11} assert_fail(doc, schema) # document is not valid # and equal to 15 doc = {'prop1': 15} assert_fail(doc, schema) def test_anyof_allof(): # prop1 can be any number outside of [0-10] schema = { 'prop1': { 'allof': [ {'anyof': [{'type': 'float'}, {'type': 'integer'}]}, {'anyof': [{'min': 10}, {'max': 0}]}, ] } } doc = {'prop1': 11} assert_success(doc, schema) doc = {'prop1': -1} assert_success(doc, schema) doc = {'prop1': 5} assert_fail(doc, schema) doc = {'prop1': 11.5} assert_success(doc, schema) doc = {'prop1': -1.5} assert_success(doc, schema) doc = {'prop1': 5.5} assert_fail(doc, schema) doc = {'prop1': '5.5'} assert_fail(doc, schema) def test_anyof_schema(validator): # test that a list of schemas can be specified. valid_parts = [ {'schema': {'model number': {'type': 'string'}, 'count': {'type': 'integer'}}}, {'schema': {'serial number': {'type': 'string'}, 'count': {'type': 'integer'}}}, ] valid_item = {'type': ['dict', 'string'], 'anyof': valid_parts} schema = {'parts': {'type': 'list', 'schema': valid_item}} document = { 'parts': [ {'model number': 'MX-009', 'count': 100}, {'serial number': '898-001'}, 'misc', ] } # document is valid. each entry in 'parts' matches a type or schema assert_success(document, schema, validator=validator) document['parts'].append({'product name': "Monitors", 'count': 18}) # document is invalid. 
'product name' does not match any valid schemas assert_fail(document, schema, validator=validator) document['parts'].pop() # document is valid again assert_success(document, schema, validator=validator) document['parts'].append({'product name': "Monitors", 'count': 18}) document['parts'].append(10) # and invalid. numbers are not allowed. exp_child_errors = [ (('parts', 3), ('parts', 'schema', 'anyof'), errors.ANYOF, valid_parts), ( ('parts', 4), ('parts', 'schema', 'type'), errors.BAD_TYPE, ['dict', 'string'], ), ] _errors = assert_fail( document, schema, validator=validator, error=('parts', ('parts', 'schema'), errors.SEQUENCE_SCHEMA, valid_item), child_errors=exp_child_errors, ) assert_not_has_error( _errors, ('parts', 4), ('parts', 'schema', 'anyof'), errors.ANYOF, valid_parts ) # tests errors.BasicErrorHandler's tree representation v_errors = validator.errors assert 'parts' in v_errors assert 3 in v_errors['parts'][-1] assert v_errors['parts'][-1][3][0] == "no definitions validate" scope = v_errors['parts'][-1][3][-1] assert 'anyof definition 0' in scope assert 'anyof definition 1' in scope assert scope['anyof definition 0'] == [{"product name": ["unknown field"]}] assert scope['anyof definition 1'] == [{"product name": ["unknown field"]}] assert v_errors['parts'][-1][4] == ["must be of ['dict', 'string'] type"] def test_anyof_2(): # these two schema should be the same schema1 = { 'prop': { 'anyof': [ {'type': 'dict', 'schema': {'val': {'type': 'integer'}}}, {'type': 'dict', 'schema': {'val': {'type': 'string'}}}, ] } } schema2 = { 'prop': { 'type': 'dict', 'anyof': [ {'schema': {'val': {'type': 'integer'}}}, {'schema': {'val': {'type': 'string'}}}, ], } } doc = {'prop': {'val': 0}} assert_success(doc, schema1) assert_success(doc, schema2) doc = {'prop': {'val': '0'}} assert_success(doc, schema1) assert_success(doc, schema2) doc = {'prop': {'val': 1.1}} assert_fail(doc, schema1) assert_fail(doc, schema2) def test_anyof_type(): schema = {'anyof_type': 
{'anyof_type': ['string', 'integer']}} assert_success({'anyof_type': 'bar'}, schema) assert_success({'anyof_type': 23}, schema) def test_oneof_schema(): schema = { 'oneof_schema': { 'type': 'dict', 'oneof_schema': [ {'digits': {'type': 'integer', 'min': 0, 'max': 99}}, {'text': {'type': 'string', 'regex': '^[0-9]{2}$'}}, ], } } assert_success({'oneof_schema': {'digits': 19}}, schema) assert_success({'oneof_schema': {'text': '84'}}, schema) assert_fail({'oneof_schema': {'digits': 19, 'text': '84'}}, schema) def test_nested_oneof_type(): schema = { 'nested_oneof_type': {'valuesrules': {'oneof_type': ['string', 'integer']}} } assert_success({'nested_oneof_type': {'foo': 'a'}}, schema) assert_success({'nested_oneof_type': {'bar': 3}}, schema) def test_nested_oneofs(validator): validator.schema = { 'abc': { 'type': 'dict', 'oneof_schema': [ { 'foo': { 'type': 'dict', 'schema': {'bar': {'oneof_type': ['integer', 'float']}}, } }, {'baz': {'type': 'string'}}, ], } } document = {'abc': {'foo': {'bar': 'bad'}}} expected_errors = { 'abc': [ 'none or more than one rule validate', { 'oneof definition 0': [ { 'foo': [ { 'bar': [ 'none or more than one rule validate', { 'oneof definition 0': [ 'must be of integer type' ], 'oneof definition 1': ['must be of float type'], }, ] } ] } ], 'oneof definition 1': [{'foo': ['unknown field']}], }, ] } assert_fail(document, validator=validator) assert validator.errors == expected_errors def test_no_of_validation_if_type_fails(validator): valid_parts = [ {'schema': {'model number': {'type': 'string'}, 'count': {'type': 'integer'}}}, {'schema': {'serial number': {'type': 'string'}, 'count': {'type': 'integer'}}}, ] validator.schema = {'part': {'type': ['dict', 'string'], 'anyof': valid_parts}} document = {'part': 10} _errors = assert_fail(document, validator=validator) assert len(_errors) == 1 def test_issue_107(validator): schema = { 'info': { 'type': 'dict', 'schema': {'name': {'type': 'string', 'required': True}}, } } document = {'info': 
{'name': 'my name'}} assert_success(document, schema, validator=validator) v = Validator(schema) assert_success(document, schema, v) # it once was observed that this behaves other than the previous line assert v.validate(document) def test_dont_type_validate_nulled_values(validator): assert_fail({'an_integer': None}, validator=validator) assert validator.errors == {'an_integer': ['null value not allowed']} def test_dependencies_error(validator): schema = { 'field1': {'required': False}, 'field2': {'required': True, 'dependencies': {'field1': ['one', 'two']}}, } validator.validate({'field2': 7}, schema) exp_msg = errors.BasicErrorHandler.messages[ errors.DEPENDENCIES_FIELD_VALUE.code ].format(field='field2', constraint={'field1': ['one', 'two']}) assert validator.errors == {'field2': [exp_msg]} def test_dependencies_on_boolean_field_with_one_value(): # https://github.com/pyeve/cerberus/issues/138 schema = { 'deleted': {'type': 'boolean'}, 'text': {'dependencies': {'deleted': False}}, } try: assert_success({'text': 'foo', 'deleted': False}, schema) assert_fail({'text': 'foo', 'deleted': True}, schema) assert_fail({'text': 'foo'}, schema) except TypeError as e: if str(e) == "argument of type 'bool' is not iterable": raise AssertionError( "Bug #138 still exists, couldn't use boolean in dependency " "without putting it in a list.\n" "'some_field': True vs 'some_field: [True]" ) else: raise def test_dependencies_on_boolean_field_with_value_in_list(): # https://github.com/pyeve/cerberus/issues/138 schema = { 'deleted': {'type': 'boolean'}, 'text': {'dependencies': {'deleted': [False]}}, } assert_success({'text': 'foo', 'deleted': False}, schema) assert_fail({'text': 'foo', 'deleted': True}, schema) assert_fail({'text': 'foo'}, schema) def test_document_path(): class DocumentPathTester(Validator): def _validate_trail(self, constraint, field, value): """ {'type': 'boolean'} """ test_doc = self.root_document for crumb in self.document_path: test_doc = test_doc[crumb] assert 
test_doc == self.document v = DocumentPathTester() schema = {'foo': {'schema': {'bar': {'trail': True}}}} document = {'foo': {'bar': {}}} assert_success(document, schema, validator=v) def test_excludes(): schema = { 'this_field': {'type': 'dict', 'excludes': 'that_field'}, 'that_field': {'type': 'dict'}, } assert_success({'this_field': {}}, schema) assert_success({'that_field': {}}, schema) assert_success({}, schema) assert_fail({'that_field': {}, 'this_field': {}}, schema) def test_mutual_excludes(): schema = { 'this_field': {'type': 'dict', 'excludes': 'that_field'}, 'that_field': {'type': 'dict', 'excludes': 'this_field'}, } assert_success({'this_field': {}}, schema) assert_success({'that_field': {}}, schema) assert_success({}, schema) assert_fail({'that_field': {}, 'this_field': {}}, schema) def test_required_excludes(): schema = { 'this_field': {'type': 'dict', 'excludes': 'that_field', 'required': True}, 'that_field': {'type': 'dict', 'excludes': 'this_field', 'required': True}, } assert_success({'this_field': {}}, schema, update=False) assert_success({'that_field': {}}, schema, update=False) assert_fail({}, schema) assert_fail({'that_field': {}, 'this_field': {}}, schema) def test_multiples_exclusions(): schema = { 'this_field': {'type': 'dict', 'excludes': ['that_field', 'bazo_field']}, 'that_field': {'type': 'dict', 'excludes': 'this_field'}, 'bazo_field': {'type': 'dict'}, } assert_success({'this_field': {}}, schema) assert_success({'that_field': {}}, schema) assert_fail({'this_field': {}, 'that_field': {}}, schema) assert_fail({'this_field': {}, 'bazo_field': {}}, schema) assert_fail({'that_field': {}, 'this_field': {}, 'bazo_field': {}}, schema) assert_success({'that_field': {}, 'bazo_field': {}}, schema) def test_bad_excludes_fields(validator): validator.schema = { 'this_field': { 'type': 'dict', 'excludes': ['that_field', 'bazo_field'], 'required': True, }, 'that_field': {'type': 'dict', 'excludes': 'this_field', 'required': True}, } 
assert_fail({'that_field': {}, 'this_field': {}}, validator=validator) handler = errors.BasicErrorHandler assert validator.errors == { 'that_field': [ handler.messages[errors.EXCLUDES_FIELD.code].format( "'this_field'", field="that_field" ) ], 'this_field': [ handler.messages[errors.EXCLUDES_FIELD.code].format( "'that_field', 'bazo_field'", field="this_field" ) ], } def test_boolean_is_not_a_number(): # https://github.com/pyeve/cerberus/issues/144 assert_fail({'value': True}, {'value': {'type': 'number'}}) def test_min_max_date(): schema = {'date': {'min': date(1900, 1, 1), 'max': date(1999, 12, 31)}} assert_success({'date': date(1945, 5, 8)}, schema) assert_fail({'date': date(1871, 5, 10)}, schema) def test_dict_length(): schema = {'dict': {'minlength': 1}} assert_fail({'dict': {}}, schema) assert_success({'dict': {'foo': 'bar'}}, schema) def test_forbidden(): schema = {'user': {'forbidden': ['root', 'admin']}} assert_fail({'user': 'admin'}, schema) assert_success({'user': 'alice'}, schema) def test_forbidden_number(): schema = {'amount': {'forbidden': (0, 0.0)}} assert_fail({'amount': 0}, schema) assert_fail({'amount': 0.0}, schema) def test_mapping_with_sequence_schema(): schema = {'list': {'schema': {'allowed': ['a', 'b', 'c']}}} document = {'list': {'is_a': 'mapping'}} assert_fail( document, schema, error=( 'list', ('list', 'schema'), errors.BAD_TYPE_FOR_SCHEMA, schema['list']['schema'], ), ) def test_sequence_with_mapping_schema(): schema = {'list': {'schema': {'foo': {'allowed': ['a', 'b', 'c']}}, 'type': 'dict'}} document = {'list': ['a', 'b', 'c']} assert_fail(document, schema) def test_type_error_aborts_validation(): schema = {'foo': {'type': 'string', 'allowed': ['a']}} document = {'foo': 0} assert_fail( document, schema, error=('foo', ('foo', 'type'), errors.BAD_TYPE, 'string') ) def test_dependencies_in_oneof(): # https://github.com/pyeve/cerberus/issues/241 schema = { 'a': { 'type': 'integer', 'oneof': [ {'allowed': [1], 'dependencies': 'b'}, 
{'allowed': [2], 'dependencies': 'c'}, ], }, 'b': {}, 'c': {}, } assert_success({'a': 1, 'b': 'foo'}, schema) assert_success({'a': 2, 'c': 'bar'}, schema) assert_fail({'a': 1, 'c': 'foo'}, schema) assert_fail({'a': 2, 'b': 'bar'}, schema) def test_allow_unknown_with_oneof_rules(validator): # https://github.com/pyeve/cerberus/issues/251 schema = { 'test': { 'oneof': [ { 'type': 'dict', 'allow_unknown': True, 'schema': {'known': {'type': 'string'}}, }, {'type': 'dict', 'schema': {'known': {'type': 'string'}}}, ] } } # check regression and that allow unknown does not cause any different # than expected behaviour for one-of. document = {'test': {'known': 's'}} validator(document, schema) _errors = validator._errors assert len(_errors) == 1 assert_has_error( _errors, 'test', ('test', 'oneof'), errors.ONEOF, schema['test']['oneof'] ) assert len(_errors[0].child_errors) == 0 # check that allow_unknown is actually applied document = {'test': {'known': 's', 'unknown': 'asd'}} assert_success(document, validator=validator) @mark.parametrize('constraint', (('Graham Chapman', 'Eric Idle'), 'Terry Gilliam')) def test_contains(constraint): validator = Validator({'actors': {'contains': constraint}}) document = {'actors': ('Graham Chapman', 'Eric Idle', 'Terry Gilliam')} assert validator(document) document = {'actors': ('Eric idle', 'Terry Jones', 'John Cleese', 'Michael Palin')} assert not validator(document) assert errors.MISSING_MEMBERS in validator.document_error_tree['actors'] missing_actors = validator.document_error_tree['actors'][ errors.MISSING_MEMBERS ].info[0] assert any(x in missing_actors for x in ('Eric Idle', 'Terry Gilliam')) def test_require_all_simple(): schema = {'foo': {'type': 'string'}} validator = Validator(require_all=True) assert_fail( {}, schema, validator, error=('foo', '__require_all__', errors.REQUIRED_FIELD, True), ) assert_success({'foo': 'bar'}, schema, validator) validator.require_all = False assert_success({}, schema, validator) 
assert_success({'foo': 'bar'}, schema, validator) def test_require_all_override_by_required(): schema = {'foo': {'type': 'string', 'required': False}} validator = Validator(require_all=True) assert_success({}, schema, validator) assert_success({'foo': 'bar'}, schema, validator) validator.require_all = False assert_success({}, schema, validator) assert_success({'foo': 'bar'}, schema, validator) schema = {'foo': {'type': 'string', 'required': True}} validator.require_all = True assert_fail( {}, schema, validator, error=('foo', ('foo', 'required'), errors.REQUIRED_FIELD, True), ) assert_success({'foo': 'bar'}, schema, validator) validator.require_all = False assert_fail( {}, schema, validator, error=('foo', ('foo', 'required'), errors.REQUIRED_FIELD, True), ) assert_success({'foo': 'bar'}, schema, validator) @mark.parametrize( "validator_require_all, sub_doc_require_all", list(itertools.product([True, False], repeat=2)), ) def test_require_all_override_by_subdoc_require_all( validator_require_all, sub_doc_require_all ): sub_schema = {"bar": {"type": "string"}} schema = { "foo": { "type": "dict", "require_all": sub_doc_require_all, "schema": sub_schema, } } validator = Validator(require_all=validator_require_all) assert_success({"foo": {"bar": "baz"}}, schema, validator) if validator_require_all: assert_fail({}, schema, validator) else: assert_success({}, schema, validator) if sub_doc_require_all: assert_fail({"foo": {}}, schema, validator) else: assert_success({"foo": {}}, schema, validator) def test_require_all_and_exclude(): schema = { 'foo': {'type': 'string', 'excludes': 'bar'}, 'bar': {'type': 'string', 'excludes': 'foo'}, } validator = Validator(require_all=True) assert_fail( {}, schema, validator, errors=[ ('foo', '__require_all__', errors.REQUIRED_FIELD, True), ('bar', '__require_all__', errors.REQUIRED_FIELD, True), ], ) assert_success({'foo': 'value'}, schema, validator) assert_success({'bar': 'value'}, schema, validator) assert_fail({'foo': 'value', 'bar': 
'value'}, schema, validator)

    validator.require_all = False
    assert_success({}, schema, validator)
    assert_success({'foo': 'value'}, schema, validator)
    assert_success({'bar': 'value'}, schema, validator)
    assert_fail({'foo': 'value', 'bar': 'value'}, schema, validator)
Cerberus-1.3.2/cerberus/utils.py0000644000076500000240000000745713461521741017224 0ustar nicolastaff00000000000000
from __future__ import absolute_import

from collections import namedtuple

from cerberus.platform import _int_types, _str_type, Mapping, Sequence, Set


TypeDefinition = namedtuple('TypeDefinition', 'name,included_types,excluded_types')
"""
This class is used to define types that can be used as value in the
:attr:`~cerberus.Validator.types_mapping` property.
The ``name`` should be descriptive and match the key it is going to be
assigned to.
A value that is validated against such a definition must be an instance of any
of the types contained in ``included_types`` and must not match any of the
types contained in ``excluded_types``.
"""


def compare_paths_lt(x, y):
    min_length = min(len(x), len(y))

    if x[:min_length] == y[:min_length]:
        return len(x) == min_length

    for i in range(min_length):
        a, b = x[i], y[i]

        for _type in (_int_types, _str_type, tuple):
            if isinstance(a, _type):
                if isinstance(b, _type):
                    break
                else:
                    return True

        if a == b:
            continue
        elif a < b:
            return True
        else:
            return False

    raise RuntimeError


def drop_item_from_tuple(t, i):
    return t[:i] + t[i + 1 :]


def get_Validator_class():
    global Validator
    if 'Validator' not in globals():
        from cerberus.validator import Validator
    return Validator


def mapping_hash(schema):
    return hash(mapping_to_frozenset(schema))


def mapping_to_frozenset(mapping):
    """ Be aware that this treats any sequence type with equal members as
        equal. As it is used to identify equality of schemas, this can be
        considered okay as definitions are semantically equal regardless of
        the container type.
    """
    aggregation = {}

    for key, value in mapping.items():
        if isinstance(value, Mapping):
            aggregation[key] = mapping_to_frozenset(value)
        elif isinstance(value, Sequence):
            value = list(value)
            for i, item in enumerate(value):
                if isinstance(item, Mapping):
                    value[i] = mapping_to_frozenset(item)
            aggregation[key] = tuple(value)
        elif isinstance(value, Set):
            aggregation[key] = frozenset(value)
        else:
            aggregation[key] = value

    return frozenset(aggregation.items())


def quote_string(value):
    if isinstance(value, _str_type):
        return '"%s"' % value
    else:
        return value


class readonly_classproperty(property):
    def __get__(self, instance, owner):
        return super(readonly_classproperty, self).__get__(owner)

    def __set__(self, instance, value):
        raise RuntimeError('This is a readonly class property.')

    def __delete__(self, instance):
        raise RuntimeError('This is a readonly class property.')


def validator_factory(name, bases=None, namespace={}):
    """ Dynamically create a :class:`~cerberus.Validator` subclass.
        Docstrings of mixin-classes will be added to the resulting class' one
        if ``__doc__`` is not in :obj:`namespace`.

    :param name: The name of the new class.
    :type name: :class:`str`
    :param bases: Class(es) with additional and overriding attributes.
    :type bases: :class:`tuple` of or a single :term:`class`
    :param namespace: Attributes for the new class.
    :type namespace: :class:`dict`
    :return: The created class.
    """
    Validator = get_Validator_class()

    if bases is None:
        bases = (Validator,)
    elif isinstance(bases, tuple):
        bases += (Validator,)
    else:
        bases = (bases, Validator)

    docstrings = [x.__doc__ for x in bases if x.__doc__]
    if len(docstrings) > 1 and '__doc__' not in namespace:
        namespace.update({'__doc__': '\n'.join(docstrings)})

    return type(name, bases, namespace)
Cerberus-1.3.2/cerberus/validator.py0000644000076500000240000017640713464226612020055 0ustar nicolastaff00000000000000
""" Extensible validation for Python dictionaries.
This module implements the Cerberus Validator class.

:copyright: 2012-2016 by Nicola Iarocci.
:license: ISC, see LICENSE for more details.

Full documentation is available at http://python-cerberus.org
"""

from __future__ import absolute_import

from ast import literal_eval
from copy import copy
from datetime import date, datetime
import re
from warnings import warn

from cerberus import errors
from cerberus.platform import (
    _int_types,
    _str_type,
    Container,
    Hashable,
    Iterable,
    Mapping,
    Sequence,
    Sized,
)
from cerberus.schema import (
    schema_registry,
    rules_set_registry,
    DefinitionSchema,
    SchemaError,
)
from cerberus.utils import drop_item_from_tuple, readonly_classproperty, TypeDefinition


toy_error_handler = errors.ToyErrorHandler()


def dummy_for_rule_validation(rule_constraints):
    def dummy(self, constraint, field, value):
        raise RuntimeError(
            'Dummy method called. Its purpose is to hold just '
            'validation constraints for a rule in its '
            'docstring.'
        )

    f = dummy
    f.__doc__ = rule_constraints
    return f


class DocumentError(Exception):
    """ Raised when the target document is missing or has the wrong format """

    pass


class _SchemaRuleTypeError(Exception):
    """ Raised when a schema (list) validation encounters a mapping.
        Not supposed to be used outside this module. """

    pass


class BareValidator(object):
    """ Validator class. Normalizes and/or validates any mapping against a
    validation-schema which is provided as an argument at class instantiation
    or upon calling the :meth:`~cerberus.Validator.validate`,
    :meth:`~cerberus.Validator.validated` or
    :meth:`~cerberus.Validator.normalized` method. An instance itself is
    callable and executes a validation.

    All instantiation parameters are optional.

    There are the introspective properties :attr:`types`, :attr:`validators`,
    :attr:`coercers`, :attr:`default_setters`, :attr:`rules`,
    :attr:`normalization_rules` and :attr:`validation_rules`.
    The attributes reflecting the available rules are assembled considering
    constraints that are defined in the docstrings of the rules' methods and
    are effectively used as the validation schema for :attr:`schema`.

    :param schema: See :attr:`~cerberus.Validator.schema`.
                   Defaults to :obj:`None`.
    :type schema: any :term:`mapping`
    :param ignore_none_values: See :attr:`~cerberus.Validator.ignore_none_values`.
                               Defaults to ``False``.
    :type ignore_none_values: :class:`bool`
    :param allow_unknown: See :attr:`~cerberus.Validator.allow_unknown`.
                          Defaults to ``False``.
    :type allow_unknown: :class:`bool` or any :term:`mapping`
    :param require_all: See :attr:`~cerberus.Validator.require_all`.
                        Defaults to ``False``.
    :type require_all: :class:`bool`
    :param purge_unknown: See :attr:`~cerberus.Validator.purge_unknown`.
                          Defaults to ``False``.
    :type purge_unknown: :class:`bool`
    :param purge_readonly: Removes all fields that are defined as ``readonly``
                           in the normalization phase.
    :type purge_readonly: :class:`bool`
    :param error_handler: The error handler that formats the result of
                          :attr:`~cerberus.Validator.errors`.
                          When given as two-value tuple with an error-handler
                          class and a dictionary, the latter is passed to the
                          initialization of the error handler.
                          Default: :class:`~cerberus.errors.BasicErrorHandler`.
    :type error_handler: class or instance based on
                         :class:`~cerberus.errors.BaseErrorHandler` or
                         :class:`tuple`
    """  # noqa: E501

    mandatory_validations = ('nullable',)
    """ Rules that are evaluated on any field, regardless of whether they are
        defined in the schema or not.
        Type: :class:`tuple` """

    priority_validations = ('nullable', 'readonly', 'type', 'empty')
    """ Rules that will be processed in that order before any other.
        Type: :class:`tuple` """

    types_mapping = {
        'binary': TypeDefinition('binary', (bytes, bytearray), ()),
        'boolean': TypeDefinition('boolean', (bool,), ()),
        'container': TypeDefinition('container', (Container,), (_str_type,)),
        'date': TypeDefinition('date', (date,), ()),
        'datetime': TypeDefinition('datetime', (datetime,), ()),
        'dict': TypeDefinition('dict', (Mapping,), ()),
        'float': TypeDefinition('float', (float, _int_types), ()),
        'integer': TypeDefinition('integer', (_int_types,), ()),
        'list': TypeDefinition('list', (Sequence,), (_str_type,)),
        'number': TypeDefinition('number', (_int_types, float), (bool,)),
        'set': TypeDefinition('set', (set,), ()),
        'string': TypeDefinition('string', (_str_type,), ()),
    }
    """ This mapping holds all available constraints for the type rule and
        their assigned :class:`~cerberus.TypeDefinition`. """

    _valid_schemas = set()
    """ A :class:`set` of hashes derived from validation schemas that are
        legit for a particular ``Validator`` class. """

    def __init__(self, *args, **kwargs):
        """ The arguments will be treated as with this signature:

        __init__(self, schema=None, ignore_none_values=False,
                 allow_unknown=False, require_all=False,
                 purge_unknown=False, purge_readonly=False,
                 error_handler=errors.BasicErrorHandler)
        """

        self.document = None
        """ The document that is or was recently processed.
            Type: any :term:`mapping` """
        self._errors = errors.ErrorList()
        """ The list of errors that were encountered since the last document
            processing was invoked.
            Type: :class:`~cerberus.errors.ErrorList` """
        self.recent_error = None
        """ The last individual error that was submitted.
            Type: :class:`~cerberus.errors.ValidationError` """
        self.document_error_tree = errors.DocumentErrorTree()
        """ A tree representation of encountered errors following the
            structure of the document.
            Type: :class:`~cerberus.errors.DocumentErrorTree` """
        self.schema_error_tree = errors.SchemaErrorTree()
        """ A tree representation of encountered errors following the
            structure of the schema.
            Type: :class:`~cerberus.errors.SchemaErrorTree` """
        self.document_path = ()
        """ The path within the document to the current sub-document.
            Type: :class:`tuple` """
        self.schema_path = ()
        """ The path within the schema to the current sub-schema.
            Type: :class:`tuple` """
        self.update = False
        self.error_handler = self.__init_error_handler(kwargs)
        """ The error handler used to format :attr:`~cerberus.Validator.errors`
            and process submitted errors with
            :meth:`~cerberus.Validator._error`.
            Type: :class:`~cerberus.errors.BaseErrorHandler` """
        self.__store_config(args, kwargs)
        self.schema = kwargs.get('schema', None)
        self.allow_unknown = kwargs.get('allow_unknown', False)
        self.require_all = kwargs.get('require_all', False)
        self._remaining_rules = []
        """ Keeps track of the rules that are next in line to be evaluated
            during the validation of a field.
            Type: :class:`list` """

        super(BareValidator, self).__init__()

    @staticmethod
    def __init_error_handler(kwargs):
        error_handler = kwargs.pop('error_handler', errors.BasicErrorHandler)
        if isinstance(error_handler, tuple):
            error_handler, eh_config = error_handler
        else:
            eh_config = {}
        if isinstance(error_handler, type) and issubclass(
            error_handler, errors.BaseErrorHandler
        ):
            return error_handler(**eh_config)
        elif isinstance(error_handler, errors.BaseErrorHandler):
            return error_handler
        else:
            raise RuntimeError('Invalid error_handler.')

    def __store_config(self, args, kwargs):
        """ Assign args to kwargs and store configuration. """
        signature = (
            'schema',
            'ignore_none_values',
            'allow_unknown',
            'require_all',
            'purge_unknown',
            'purge_readonly',
        )
        for i, p in enumerate(signature[: len(args)]):
            if p in kwargs:
                raise TypeError("__init__ got multiple values for argument " "'%s'" % p)
            else:
                kwargs[p] = args[i]
        self._config = kwargs
        """ This dictionary holds the configuration arguments that were used to
            initialize the :class:`Validator` instance except the
            ``error_handler``.
""" @classmethod def clear_caches(cls): """ Purge the cache of known valid schemas. """ cls._valid_schemas.clear() def _error(self, *args): """ Creates and adds one or multiple errors. :param args: Accepts different argument's signatures. *1. Bulk addition of errors:* - :term:`iterable` of :class:`~cerberus.errors.ValidationError`-instances The errors will be added to :attr:`~cerberus.Validator._errors`. *2. Custom error:* - the invalid field's name - the error message A custom error containing the message will be created and added to :attr:`~cerberus.Validator._errors`. There will however be fewer information contained in the error (no reference to the violated rule and its constraint). *3. Defined error:* - the invalid field's name - the error-reference, see :mod:`cerberus.errors` - arbitrary, supplemental information about the error A :class:`~cerberus.errors.ValidationError` instance will be created and added to :attr:`~cerberus.Validator._errors`. """ if len(args) == 1: self._errors.extend(args[0]) self._errors.sort() for error in args[0]: self.document_error_tree.add(error) self.schema_error_tree.add(error) self.error_handler.emit(error) elif len(args) == 2 and isinstance(args[1], _str_type): self._error(args[0], errors.CUSTOM, args[1]) elif len(args) >= 2: field = args[0] code = args[1].code rule = args[1].rule info = args[2:] document_path = self.document_path + (field,) schema_path = self.schema_path if code != errors.UNKNOWN_FIELD.code and rule is not None: schema_path += (field, rule) if not rule: constraint = None else: field_definitions = self._resolve_rules_set(self.schema[field]) if rule == 'nullable': constraint = field_definitions.get(rule, False) elif rule == 'required': constraint = field_definitions.get(rule, self.require_all) if rule not in field_definitions: schema_path = "__require_all__" else: constraint = field_definitions[rule] value = self.document.get(field) self.recent_error = errors.ValidationError( document_path, schema_path, code, 
rule, constraint, value, info ) self._error([self.recent_error]) def _get_child_validator(self, document_crumb=None, schema_crumb=None, **kwargs): """ Creates a new instance of Validator-(sub-)class. All initial parameters of the parent are passed to the initialization, unless a parameter is given as an explicit *keyword*-parameter. :param document_crumb: Extends the :attr:`~cerberus.Validator.document_path` of the child-validator. :type document_crumb: :class:`tuple` or :term:`hashable` :param schema_crumb: Extends the :attr:`~cerberus.Validator.schema_path` of the child-validator. :type schema_crumb: :class:`tuple` or hashable :param kwargs: Overriding keyword-arguments for initialization. :type kwargs: :class:`dict` :return: an instance of ``self.__class__`` """ child_config = self._config.copy() child_config.update(kwargs) if not self.is_child: child_config['is_child'] = True child_config['error_handler'] = toy_error_handler child_config['root_allow_unknown'] = self.allow_unknown child_config['root_require_all'] = self.require_all child_config['root_document'] = self.document child_config['root_schema'] = self.schema child_validator = self.__class__(**child_config) if document_crumb is None: child_validator.document_path = self.document_path else: if not isinstance(document_crumb, tuple): document_crumb = (document_crumb,) child_validator.document_path = self.document_path + document_crumb if schema_crumb is None: child_validator.schema_path = self.schema_path else: if not isinstance(schema_crumb, tuple): schema_crumb = (schema_crumb,) child_validator.schema_path = self.schema_path + schema_crumb return child_validator def __get_rule_handler(self, domain, rule): methodname = '_{0}_{1}'.format(domain, rule.replace(' ', '_')) result = getattr(self, methodname, None) if result is None: raise RuntimeError( "There's no handler for '{}' in the '{}' " "domain.".format(rule, domain) ) return result def _drop_nodes_from_errorpaths(self, _errors, dp_items, sp_items): """ 
        Removes nodes by index from an errorpath, relative to the base paths
        of self.

        :param _errors: A list of :class:`errors.ValidationError` instances.
        :param dp_items: A list of integers, pointing at the nodes to drop
                         from the :attr:`document_path`.
        :param sp_items: Like ``dp_items``, but for :attr:`schema_path`.
        """
        dp_basedepth = len(self.document_path)
        sp_basedepth = len(self.schema_path)
        for error in _errors:
            for i in sorted(dp_items, reverse=True):
                error.document_path = drop_item_from_tuple(
                    error.document_path, dp_basedepth + i
                )
            for i in sorted(sp_items, reverse=True):
                error.schema_path = drop_item_from_tuple(
                    error.schema_path, sp_basedepth + i
                )
            if error.child_errors:
                self._drop_nodes_from_errorpaths(error.child_errors, dp_items, sp_items)

    def _lookup_field(self, path):
        """ Searches for a field as defined by path. This method is used by
        the ``dependency`` evaluation logic.

        :param path: Path elements are separated by a ``.``. A leading ``^``
                     indicates that the path relates to the document root,
                     otherwise it relates to the currently evaluated document,
                     which is possibly a subdocument. The sequence ``^^`` at
                     the start will be interpreted as a literal ``^``.
        :type path: :class:`str`
        :returns: Either the found field name and its value or :obj:`None` for
                  both.
        :rtype: A two-value :class:`tuple`.
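The path syntax documented above (``^`` for the document root, ``^^`` for a literal ``^``, ``.`` as separator) can be exercised with a standalone re-implementation of the traversal; ``lookup`` here is a hypothetical helper mirroring the described behavior, not the Cerberus internals:

```python
def lookup(path, document, root):
    # A leading '^' switches context to the root document; '^^' escapes
    # a literal '^' and stays in the current (sub)document.
    if path.startswith('^'):
        path = path[1:]
        context = document if path.startswith('^') else root
    else:
        context = document
    parts = path.split('.')
    for part in parts:
        if part not in context:
            return None, None
        context = context.get(part, {})
    return parts[-1], context

root = {'a': {'b': 1}, 'c': 2}
sub = {'b': 1}
print(lookup('b', sub, root))   # relative to the current subdocument
print(lookup('^c', sub, root))  # anchored at the document root
```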
""" if path.startswith('^'): path = path[1:] context = self.document if path.startswith('^') else self.root_document else: context = self.document parts = path.split('.') for part in parts: if part not in context: return None, None context = context.get(part, {}) return parts[-1], context def _resolve_rules_set(self, rules_set): if isinstance(rules_set, Mapping): return rules_set elif isinstance(rules_set, _str_type): return self.rules_set_registry.get(rules_set) return None def _resolve_schema(self, schema): if isinstance(schema, Mapping): return schema elif isinstance(schema, _str_type): return self.schema_registry.get(schema) return None # Properties @property def allow_unknown(self): """ If ``True`` unknown fields that are not defined in the schema will be ignored. If a mapping with a validation schema is given, any undefined field will be validated against its rules. Also see :ref:`allowing-the-unknown`. Type: :class:`bool` or any :term:`mapping` """ return self._config.get('allow_unknown', False) @allow_unknown.setter def allow_unknown(self, value): if not (self.is_child or isinstance(value, (bool, DefinitionSchema))): DefinitionSchema(self, {'allow_unknown': value}) self._config['allow_unknown'] = value @property def require_all(self): """ If ``True`` known fields that are defined in the schema will be required. Type: :class:`bool` """ return self._config.get('require_all', False) @require_all.setter def require_all(self, value): self._config['require_all'] = value @property def errors(self): """ The errors of the last processing formatted by the handler that is bound to :attr:`~cerberus.Validator.error_handler`. """ return self.error_handler(self._errors) @property def ignore_none_values(self): """ Whether to not process :obj:`None`-values in a document or not. 
Type: :class:`bool` """ return self._config.get('ignore_none_values', False) @ignore_none_values.setter def ignore_none_values(self, value): self._config['ignore_none_values'] = value @property def is_child(self): """ ``True`` for child-validators obtained with :meth:`~cerberus.Validator._get_child_validator`. Type: :class:`bool` """ return self._config.get('is_child', False) @property def _is_normalized(self): """ ``True`` if the document is already normalized. """ return self._config.get('_is_normalized', False) @_is_normalized.setter def _is_normalized(self, value): self._config['_is_normalized'] = value @property def purge_unknown(self): """ If ``True``, unknown fields will be deleted from the document unless a validation is called with disabled normalization. Also see :ref:`purging-unknown-fields`. Type: :class:`bool` """ return self._config.get('purge_unknown', False) @purge_unknown.setter def purge_unknown(self, value): self._config['purge_unknown'] = value @property def purge_readonly(self): """ If ``True``, fields declared as readonly will be deleted from the document unless a validation is called with disabled normalization. Type: :class:`bool` """ return self._config.get('purge_readonly', False) @purge_readonly.setter def purge_readonly(self, value): self._config['purge_readonly'] = value @property def root_allow_unknown(self): """ The :attr:`~cerberus.Validator.allow_unknown` attribute of the first level ancestor of a child validator. """ return self._config.get('root_allow_unknown', self.allow_unknown) @property def root_require_all(self): """ The :attr:`~cerberus.Validator.require_all` attribute of the first level ancestor of a child validator. """ return self._config.get('root_require_all', self.require_all) @property def root_document(self): """ The :attr:`~cerberus.Validator.document` attribute of the first level ancestor of a child validator. 
""" return self._config.get('root_document', self.document) @property def rules_set_registry(self): """ The registry that holds referenced rules sets. Type: :class:`~cerberus.Registry` """ return self._config.get('rules_set_registry', rules_set_registry) @rules_set_registry.setter def rules_set_registry(self, registry): self._config['rules_set_registry'] = registry @property def root_schema(self): """ The :attr:`~cerberus.Validator.schema` attribute of the first level ancestor of a child validator. """ return self._config.get('root_schema', self.schema) @property def schema(self): """ The validation schema of a validator. When a schema is passed to a method, it replaces this attribute. Type: any :term:`mapping` or :obj:`None` """ return self._schema @schema.setter def schema(self, schema): if schema is None: self._schema = None elif self.is_child or isinstance(schema, DefinitionSchema): self._schema = schema else: self._schema = DefinitionSchema(self, schema) @property def schema_registry(self): """ The registry that holds referenced schemas. Type: :class:`~cerberus.Registry` """ return self._config.get('schema_registry', schema_registry) @schema_registry.setter def schema_registry(self, registry): self._config['schema_registry'] = registry # FIXME the returned method has the correct docstring, but doesn't appear # in the API docs @readonly_classproperty def types(cls): """ The constraints that can be used for the 'type' rule. Type: A tuple of strings. 
""" redundant_types = set(cls.types_mapping) & set(cls._types_from_methods) if redundant_types: warn( "These types are defined both with a method and in the" "'types_mapping' property of this validator: %s" % redundant_types ) return tuple(cls.types_mapping) + cls._types_from_methods # Document processing def __init_processing(self, document, schema=None): self._errors = errors.ErrorList() self.recent_error = None self.document_error_tree = errors.DocumentErrorTree() self.schema_error_tree = errors.SchemaErrorTree() self.document = copy(document) if not self.is_child: self._is_normalized = False if schema is not None: self.schema = DefinitionSchema(self, schema) elif self.schema is None: if isinstance(self.allow_unknown, Mapping): self._schema = {} else: raise SchemaError(errors.SCHEMA_ERROR_MISSING) if document is None: raise DocumentError(errors.DOCUMENT_MISSING) if not isinstance(document, Mapping): raise DocumentError(errors.DOCUMENT_FORMAT.format(document)) self.error_handler.start(self) def _drop_remaining_rules(self, *rules): """ Drops rules from the queue of the rules that still need to be evaluated for the currently processed field. If no arguments are given, the whole queue is emptied. """ if rules: for rule in rules: try: self._remaining_rules.remove(rule) except ValueError: pass else: self._remaining_rules = [] # # Normalizing def normalized(self, document, schema=None, always_return_document=False): """ Returns the document normalized according to the specified rules of a schema. :param document: The document to normalize. :type document: any :term:`mapping` :param schema: The validation schema. Defaults to :obj:`None`. If not provided here, the schema must have been provided at class instantiation. :type schema: any :term:`mapping` :param always_return_document: Return the document, even if an error occurred. Defaults to: ``False``. 
:type always_return_document: :class:`bool` :return: A normalized copy of the provided mapping or :obj:`None` if an error occurred during normalization. """ self.__init_processing(document, schema) self.__normalize_mapping(self.document, self.schema) self.error_handler.end(self) if self._errors and not always_return_document: return None else: return self.document def __normalize_mapping(self, mapping, schema): if isinstance(schema, _str_type): schema = self._resolve_schema(schema) schema = schema.copy() for field in schema: schema[field] = self._resolve_rules_set(schema[field]) self.__normalize_rename_fields(mapping, schema) if self.purge_unknown and not self.allow_unknown: self._normalize_purge_unknown(mapping, schema) if self.purge_readonly: self.__normalize_purge_readonly(mapping, schema) # Check `readonly` fields before applying default values because # a field's schema definition might contain both `readonly` and # `default`. self.__validate_readonly_fields(mapping, schema) self.__normalize_default_fields(mapping, schema) self._normalize_coerce(mapping, schema) self.__normalize_containers(mapping, schema) self._is_normalized = True return mapping def _normalize_coerce(self, mapping, schema): """ {'oneof': [ {'type': 'callable'}, {'type': 'list', 'schema': {'oneof': [{'type': 'callable'}, {'type': 'string'}]}}, {'type': 'string'} ]} """ error = errors.COERCION_FAILED for field in mapping: if field in schema and 'coerce' in schema[field]: mapping[field] = self.__normalize_coerce( schema[field]['coerce'], field, mapping[field], schema[field].get('nullable', False), error, ) elif ( isinstance(self.allow_unknown, Mapping) and 'coerce' in self.allow_unknown ): mapping[field] = self.__normalize_coerce( self.allow_unknown['coerce'], field, mapping[field], self.allow_unknown.get('nullable', False), error, ) def __normalize_coerce(self, processor, field, value, nullable, error): if isinstance(processor, _str_type): processor = 
self.__get_rule_handler('normalize_coerce', processor) elif isinstance(processor, Iterable): result = value for p in processor: result = self.__normalize_coerce(p, field, result, nullable, error) if ( errors.COERCION_FAILED in self.document_error_tree.fetch_errors_from( self.document_path + (field,) ) ): break return result try: return processor(value) except Exception as e: if not (nullable and value is None): self._error(field, error, str(e)) return value def __normalize_containers(self, mapping, schema): for field in mapping: rules = set(schema.get(field, ())) # TODO: This check conflates validation and normalization if isinstance(mapping[field], Mapping): if 'keysrules' in rules: self.__normalize_mapping_per_keysrules( field, mapping, schema[field]['keysrules'] ) if 'valuesrules' in rules: self.__normalize_mapping_per_valuesrules( field, mapping, schema[field]['valuesrules'] ) if rules & set( ('allow_unknown', 'purge_unknown', 'schema') ) or isinstance(self.allow_unknown, Mapping): try: self.__normalize_mapping_per_schema(field, mapping, schema) except _SchemaRuleTypeError: pass elif isinstance(mapping[field], _str_type): continue elif isinstance(mapping[field], Sequence): if 'schema' in rules: self.__normalize_sequence_per_schema(field, mapping, schema) elif 'items' in rules: self.__normalize_sequence_per_items(field, mapping, schema) def __normalize_mapping_per_keysrules(self, field, mapping, property_rules): schema = dict(((k, property_rules) for k in mapping[field])) document = dict(((k, k) for k in mapping[field])) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'keysrules'), schema=schema ) result = validator.normalized(document, always_return_document=True) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2, 4]) self._error(validator._errors) for k in result: if k == result[k]: continue if result[k] in mapping[field]: warn( "Normalizing keys of {path}: {key} already exists, " "its value is 
replaced.".format( path='.'.join(str(x) for x in self.document_path + (field,)), key=k, ) ) mapping[field][result[k]] = mapping[field][k] else: mapping[field][result[k]] = mapping[field][k] del mapping[field][k] def __normalize_mapping_per_valuesrules(self, field, mapping, value_rules): schema = dict(((k, value_rules) for k in mapping[field])) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'valuesrules'), schema=schema ) mapping[field] = validator.normalized( mapping[field], always_return_document=True ) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2]) self._error(validator._errors) def __normalize_mapping_per_schema(self, field, mapping, schema): rules = schema.get(field, {}) if not rules and isinstance(self.allow_unknown, Mapping): rules = self.allow_unknown validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'schema'), schema=rules.get('schema', {}), allow_unknown=rules.get('allow_unknown', self.allow_unknown), # noqa: E501 purge_unknown=rules.get('purge_unknown', self.purge_unknown), require_all=rules.get('require_all', self.require_all), ) # noqa: E501 value_type = type(mapping[field]) result_value = validator.normalized(mapping[field], always_return_document=True) mapping[field] = value_type(result_value) if validator._errors: self._error(validator._errors) def __normalize_sequence_per_schema(self, field, mapping, schema): schema = dict( ((k, schema[field]['schema']) for k in range(len(mapping[field]))) ) document = dict((k, v) for k, v in enumerate(mapping[field])) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'schema'), schema=schema ) value_type = type(mapping[field]) result = validator.normalized(document, always_return_document=True) mapping[field] = value_type(result.values()) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2]) self._error(validator._errors) def 
__normalize_sequence_per_items(self, field, mapping, schema): rules, values = schema[field]['items'], mapping[field] if len(rules) != len(values): return schema = dict(((k, v) for k, v in enumerate(rules))) document = dict((k, v) for k, v in enumerate(values)) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'items'), schema=schema ) value_type = type(mapping[field]) result = validator.normalized(document, always_return_document=True) mapping[field] = value_type(result.values()) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2]) self._error(validator._errors) @staticmethod def __normalize_purge_readonly(mapping, schema): for field in [x for x in mapping if schema.get(x, {}).get('readonly', False)]: mapping.pop(field) return mapping @staticmethod def _normalize_purge_unknown(mapping, schema): """ {'type': 'boolean'} """ for field in [x for x in mapping if x not in schema]: mapping.pop(field) return mapping def __normalize_rename_fields(self, mapping, schema): for field in tuple(mapping): if field in schema: self._normalize_rename(mapping, schema, field) self._normalize_rename_handler(mapping, schema, field) elif ( isinstance(self.allow_unknown, Mapping) and 'rename_handler' in self.allow_unknown ): self._normalize_rename_handler( mapping, {field: self.allow_unknown}, field ) return mapping def _normalize_rename(self, mapping, schema, field): """ {'type': 'hashable'} """ if 'rename' in schema[field]: mapping[schema[field]['rename']] = mapping[field] del mapping[field] def _normalize_rename_handler(self, mapping, schema, field): """ {'oneof': [ {'type': 'callable'}, {'type': 'list', 'schema': {'oneof': [{'type': 'callable'}, {'type': 'string'}]}}, {'type': 'string'} ]} """ if 'rename_handler' not in schema[field]: return new_name = self.__normalize_coerce( schema[field]['rename_handler'], field, field, False, errors.RENAMING_FAILED ) if new_name != field: mapping[new_name] = mapping[field] del 
mapping[field] def __validate_readonly_fields(self, mapping, schema): for field in ( x for x in schema if x in mapping and self._resolve_rules_set(schema[x]).get('readonly') ): self._validate_readonly(schema[field]['readonly'], field, mapping[field]) def __normalize_default_fields(self, mapping, schema): empty_fields = [ x for x in schema if x not in mapping or ( mapping[x] is None # noqa: W503 and not schema[x].get('nullable', False) ) # noqa: W503 ] try: fields_with_default = [x for x in empty_fields if 'default' in schema[x]] except TypeError: raise _SchemaRuleTypeError for field in fields_with_default: self._normalize_default(mapping, schema, field) known_fields_states = set() fields_with_default_setter = [ x for x in empty_fields if 'default_setter' in schema[x] ] while fields_with_default_setter: field = fields_with_default_setter.pop(0) try: self._normalize_default_setter(mapping, schema, field) except KeyError: fields_with_default_setter.append(field) except Exception as e: self._error(field, errors.SETTING_DEFAULT_FAILED, str(e)) fields_processing_state = hash(tuple(fields_with_default_setter)) if fields_processing_state in known_fields_states: for field in fields_with_default_setter: self._error( field, errors.SETTING_DEFAULT_FAILED, 'Circular dependencies of default setters.', ) break else: known_fields_states.add(fields_processing_state) def _normalize_default(self, mapping, schema, field): """ {'nullable': True} """ mapping[field] = schema[field]['default'] def _normalize_default_setter(self, mapping, schema, field): """ {'oneof': [ {'type': 'callable'}, {'type': 'string'} ]} """ if 'default_setter' in schema[field]: setter = schema[field]['default_setter'] if isinstance(setter, _str_type): setter = self.__get_rule_handler('normalize_default_setter', setter) mapping[field] = setter(mapping) # # Validating def validate(self, document, schema=None, update=False, normalize=True): """ Normalizes and validates a mapping against a validation-schema of 
        defined rules.

        :param document: The document to validate.
        :type document: any :term:`mapping`
        :param schema: The validation schema. Defaults to :obj:`None`. If not
                       provided here, the schema must have been provided at
                       class instantiation.
        :type schema: any :term:`mapping`
        :param update: If ``True``, required fields won't be checked.
        :type update: :class:`bool`
        :param normalize: If ``True``, normalize the document before
                          validation.
        :type normalize: :class:`bool`

        :return: ``True`` if validation succeeds, otherwise ``False``. Check
                 the :attr:`~cerberus.Validator.errors` property for a list of
                 processing errors.
        :rtype: :class:`bool`
        """
        self.update = update
        self._unrequired_by_excludes = set()

        self.__init_processing(document, schema)
        if normalize:
            self.__normalize_mapping(self.document, self.schema)

        for field in self.document:
            if self.ignore_none_values and self.document[field] is None:
                continue
            definitions = self.schema.get(field)
            if definitions is not None:
                self.__validate_definitions(definitions, field)
            else:
                self.__validate_unknown_fields(field)

        if not self.update:
            self.__validate_required_fields(self.document)

        self.error_handler.end(self)

        return not bool(self._errors)

    __call__ = validate

    def validated(self, *args, **kwargs):
        """ Wrapper around :meth:`~cerberus.Validator.validate` that returns
        the normalized and validated document or :obj:`None` if validation
        failed.
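``validated`` returns the processed document on success and ``None`` on failure; the same return contract in miniature, with a deliberately simplistic required-fields check standing in for real validation:

```python
def validated(document, schema):
    # Toy stand-in: the only "rule" checked is 'required'.
    missing = {
        field for field, rules in schema.items()
        if rules.get('required') and field not in document
    }
    # None on failure, a copy of the document on success -- the same
    # contract as cerberus.Validator.validated().
    return None if missing else dict(document)

schema = {'name': {'required': True}, 'age': {}}
assert validated({'name': 'ada'}, schema) == {'name': 'ada'}
assert validated({'age': 36}, schema) is None
```

Returning ``None`` rather than raising keeps the happy path a one-liner: ``doc = v.validated(payload) or fallback``.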
""" always_return_document = kwargs.pop('always_return_document', False) self.validate(*args, **kwargs) if self._errors and not always_return_document: return None else: return self.document def __validate_unknown_fields(self, field): if self.allow_unknown: value = self.document[field] if isinstance(self.allow_unknown, (Mapping, _str_type)): # validate that unknown fields matches the schema # for unknown_fields schema_crumb = 'allow_unknown' if self.is_child else '__allow_unknown__' validator = self._get_child_validator( schema_crumb=schema_crumb, schema={field: self.allow_unknown} ) if not validator({field: value}, normalize=False): self._error(validator._errors) else: self._error(field, errors.UNKNOWN_FIELD) def __validate_definitions(self, definitions, field): """ Validate a field's value against its defined rules. """ def validate_rule(rule): validator = self.__get_rule_handler('validate', rule) return validator(definitions.get(rule, None), field, value) definitions = self._resolve_rules_set(definitions) value = self.document[field] rules_queue = [ x for x in self.priority_validations if x in definitions or x in self.mandatory_validations ] rules_queue.extend( x for x in self.mandatory_validations if x not in rules_queue ) rules_queue.extend( x for x in definitions if x not in rules_queue and x not in self.normalization_rules and x not in ('allow_unknown', 'require_all', 'meta', 'required') ) self._remaining_rules = rules_queue while self._remaining_rules: rule = self._remaining_rules.pop(0) try: result = validate_rule(rule) # TODO remove on next breaking release if result: break except _SchemaRuleTypeError: break self._drop_remaining_rules() # Remember to keep the validation methods below this line # sorted alphabetically _validate_allow_unknown = dummy_for_rule_validation( """ {'oneof': [{'type': 'boolean'}, {'type': ['dict', 'string'], 'check_with': 'bulk_schema'}]} """ ) def _validate_allowed(self, allowed_values, field, value): """ {'type': 'container'} 
""" if isinstance(value, Iterable) and not isinstance(value, _str_type): unallowed = set(value) - set(allowed_values) if unallowed: self._error(field, errors.UNALLOWED_VALUES, list(unallowed)) else: if value not in allowed_values: self._error(field, errors.UNALLOWED_VALUE, value) def _validate_check_with(self, checks, field, value): """ {'oneof': [ {'type': 'callable'}, {'type': 'list', 'schema': {'oneof': [{'type': 'callable'}, {'type': 'string'}]}}, {'type': 'string'} ]} """ if isinstance(checks, _str_type): try: value_checker = self.__get_rule_handler('check_with', checks) # TODO remove on next major release except RuntimeError: value_checker = self.__get_rule_handler('validator', checks) warn( "The 'validator' rule was renamed to 'check_with'. Please update " "your schema and method names accordingly.", DeprecationWarning, ) value_checker(field, value) elif isinstance(checks, Iterable): for v in checks: self._validate_check_with(v, field, value) else: checks(field, value, self._error) def _validate_contains(self, expected_values, field, value): """ {'empty': False } """ if not isinstance(value, Iterable): return if not isinstance(expected_values, Iterable) or isinstance( expected_values, _str_type ): expected_values = set((expected_values,)) else: expected_values = set(expected_values) missing_values = expected_values - set(value) if missing_values: self._error(field, errors.MISSING_MEMBERS, missing_values) def _validate_dependencies(self, dependencies, field, value): """ {'type': ('dict', 'hashable', 'list'), 'check_with': 'dependencies'} """ if isinstance(dependencies, _str_type) or not isinstance( dependencies, (Iterable, Mapping) ): dependencies = (dependencies,) if isinstance(dependencies, Sequence): self.__validate_dependencies_sequence(dependencies, field) elif isinstance(dependencies, Mapping): self.__validate_dependencies_mapping(dependencies, field) if ( self.document_error_tree.fetch_node_from( self.schema_path + (field, 'dependencies') ) is not None 
): return True def __validate_dependencies_mapping(self, dependencies, field): validated_dependencies_counter = 0 error_info = {} for dependency_name, dependency_values in dependencies.items(): if not isinstance(dependency_values, Sequence) or isinstance( dependency_values, _str_type ): dependency_values = [dependency_values] wanted_field, wanted_field_value = self._lookup_field(dependency_name) if wanted_field_value in dependency_values: validated_dependencies_counter += 1 else: error_info.update({dependency_name: wanted_field_value}) if validated_dependencies_counter != len(dependencies): self._error(field, errors.DEPENDENCIES_FIELD_VALUE, error_info) def __validate_dependencies_sequence(self, dependencies, field): for dependency in dependencies: if self._lookup_field(dependency)[0] is None: self._error(field, errors.DEPENDENCIES_FIELD, dependency) def _validate_empty(self, empty, field, value): """ {'type': 'boolean'} """ if isinstance(value, Sized) and len(value) == 0: self._drop_remaining_rules( 'allowed', 'forbidden', 'items', 'minlength', 'maxlength', 'regex', 'check_with', ) if not empty: self._error(field, errors.EMPTY_NOT_ALLOWED) def _validate_excludes(self, excluded_fields, field, value): """ {'type': ('hashable', 'list'), 'schema': {'type': 'hashable'}} """ if isinstance(excluded_fields, Hashable): excluded_fields = [excluded_fields] # Mark the currently evaluated field as not required for now if it actually is. # One of the so marked will be needed to pass when required fields are checked. 
if self.schema[field].get('required', self.require_all): self._unrequired_by_excludes.add(field) for excluded_field in excluded_fields: if excluded_field in self.schema and self.schema[field].get( 'required', self.require_all ): self._unrequired_by_excludes.add(excluded_field) if any(excluded_field in self.document for excluded_field in excluded_fields): exclusion_str = ', '.join( "'{0}'".format(field) for field in excluded_fields ) self._error(field, errors.EXCLUDES_FIELD, exclusion_str) def _validate_forbidden(self, forbidden_values, field, value): """ {'type': 'list'} """ if isinstance(value, Sequence) and not isinstance(value, _str_type): forbidden = set(value) & set(forbidden_values) if forbidden: self._error(field, errors.FORBIDDEN_VALUES, list(forbidden)) else: if value in forbidden_values: self._error(field, errors.FORBIDDEN_VALUE, value) def _validate_items(self, items, field, values): """ {'type': 'list', 'check_with': 'items'} """ if len(items) != len(values): self._error(field, errors.ITEMS_LENGTH, len(items), len(values)) else: schema = dict( (i, definition) for i, definition in enumerate(items) ) # noqa: E501 validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'items'), # noqa: E501 schema=schema, ) if not validator( dict((i, value) for i, value in enumerate(values)), update=self.update, normalize=False, ): self._error(field, errors.BAD_ITEMS, validator._errors) def __validate_logical(self, operator, definitions, field, value): """ Validates value against all definitions and logs errors according to the operator. 
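The four logical rules handled by ``__validate_logical`` (``anyof``, ``allof``, ``noneof``, ``oneof``) differ only in how they compare the count of passing definitions against the total; that decision table, extracted into a hypothetical helper:

```python
def logical_ok(operator, valids, total):
    # Each operator is a different predicate over the number of
    # definitions that validated successfully.
    return {
        'anyof': valids >= 1,
        'allof': valids == total,
        'noneof': valids == 0,
        'oneof': valids == 1,
    }[operator]

assert logical_ok('anyof', 2, 3)
assert not logical_ok('oneof', 2, 3)
assert logical_ok('noneof', 0, 3)
assert not logical_ok('allof', 2, 3)
```

This is why the four ``_validate_*of`` methods below are nearly identical: each delegates to the shared counter and applies only its own comparison.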
""" valid_counter = 0 _errors = errors.ErrorList() for i, definition in enumerate(definitions): schema = {field: definition.copy()} for rule in ('allow_unknown', 'type'): if rule not in schema[field] and rule in self.schema[field]: schema[field][rule] = self.schema[field][rule] if 'allow_unknown' not in schema[field]: schema[field]['allow_unknown'] = self.allow_unknown validator = self._get_child_validator( schema_crumb=(field, operator, i), schema=schema, allow_unknown=True ) if validator(self.document, update=self.update, normalize=False): valid_counter += 1 else: self._drop_nodes_from_errorpaths(validator._errors, [], [3]) _errors.extend(validator._errors) return valid_counter, _errors def _validate_anyof(self, definitions, field, value): """ {'type': 'list', 'logical': 'anyof'} """ valids, _errors = self.__validate_logical('anyof', definitions, field, value) if valids < 1: self._error(field, errors.ANYOF, _errors, valids, len(definitions)) def _validate_allof(self, definitions, field, value): """ {'type': 'list', 'logical': 'allof'} """ valids, _errors = self.__validate_logical('allof', definitions, field, value) if valids < len(definitions): self._error(field, errors.ALLOF, _errors, valids, len(definitions)) def _validate_noneof(self, definitions, field, value): """ {'type': 'list', 'logical': 'noneof'} """ valids, _errors = self.__validate_logical('noneof', definitions, field, value) if valids > 0: self._error(field, errors.NONEOF, _errors, valids, len(definitions)) def _validate_oneof(self, definitions, field, value): """ {'type': 'list', 'logical': 'oneof'} """ valids, _errors = self.__validate_logical('oneof', definitions, field, value) if valids != 1: self._error(field, errors.ONEOF, _errors, valids, len(definitions)) def _validate_max(self, max_value, field, value): """ {'nullable': False } """ try: if value > max_value: self._error(field, errors.MAX_VALUE) except TypeError: pass def _validate_min(self, min_value, field, value): """ {'nullable': False } 
""" try: if value < min_value: self._error(field, errors.MIN_VALUE) except TypeError: pass def _validate_maxlength(self, max_length, field, value): """ {'type': 'integer'} """ if isinstance(value, Iterable) and len(value) > max_length: self._error(field, errors.MAX_LENGTH, len(value)) _validate_meta = dummy_for_rule_validation('') def _validate_minlength(self, min_length, field, value): """ {'type': 'integer'} """ if isinstance(value, Iterable) and len(value) < min_length: self._error(field, errors.MIN_LENGTH, len(value)) def _validate_nullable(self, nullable, field, value): """ {'type': 'boolean'} """ if value is None: if not nullable: self._error(field, errors.NOT_NULLABLE) self._drop_remaining_rules( 'allowed', 'empty', 'forbidden', 'items', 'keysrules', 'min', 'max', 'minlength', 'maxlength', 'regex', 'schema', 'type', 'valuesrules', ) def _validate_keysrules(self, schema, field, value): """ {'type': ['dict', 'string'], 'check_with': 'bulk_schema', 'forbidden': ['rename', 'rename_handler']} """ if isinstance(value, Mapping): validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'keysrules'), schema=dict(((k, schema) for k in value.keys())), ) if not validator(dict(((k, k) for k in value.keys())), normalize=False): self._drop_nodes_from_errorpaths(validator._errors, [], [2, 4]) self._error(field, errors.KEYSRULES, validator._errors) def _validate_readonly(self, readonly, field, value): """ {'type': 'boolean'} """ if readonly: if not self._is_normalized: self._error(field, errors.READONLY_FIELD) # If the document was normalized (and therefore already been # checked for readonly fields), we still have to return True # if an error was filed. 
has_error = ( errors.READONLY_FIELD in self.document_error_tree.fetch_errors_from( self.document_path + (field,) ) ) if self._is_normalized and has_error: self._drop_remaining_rules() def _validate_regex(self, pattern, field, value): """ {'type': 'string'} """ if not isinstance(value, _str_type): return if not pattern.endswith('$'): pattern += '$' re_obj = re.compile(pattern) if not re_obj.match(value): self._error(field, errors.REGEX_MISMATCH) _validate_required = dummy_for_rule_validation(""" {'type': 'boolean'} """) _validate_require_all = dummy_for_rule_validation(""" {'type': 'boolean'} """) def __validate_required_fields(self, document): """ Validates that required fields are not missing. :param document: The document being validated. """ try: required = set( field for field, definition in self.schema.items() if self._resolve_rules_set(definition).get('required', self.require_all) is True ) except AttributeError: if self.is_child and self.schema_path[-1] == 'schema': raise _SchemaRuleTypeError else: raise required -= self._unrequired_by_excludes missing = required - set( field for field in document if document.get(field) is not None or not self.ignore_none_values ) for field in missing: self._error(field, errors.REQUIRED_FIELD) # At least one field from self._unrequired_by_excludes should be present in # document. 
if self._unrequired_by_excludes: fields = set(field for field in document if document.get(field) is not None) if self._unrequired_by_excludes.isdisjoint(fields): for field in self._unrequired_by_excludes - fields: self._error(field, errors.REQUIRED_FIELD) def _validate_schema(self, schema, field, value): """ {'type': ['dict', 'string'], 'anyof': [{'check_with': 'schema'}, {'check_with': 'bulk_schema'}]} """ if schema is None: return if isinstance(value, Sequence) and not isinstance(value, _str_type): self.__validate_schema_sequence(field, schema, value) elif isinstance(value, Mapping): self.__validate_schema_mapping(field, schema, value) def __validate_schema_mapping(self, field, schema, value): schema = self._resolve_schema(schema) allow_unknown = self.schema[field].get('allow_unknown', self.allow_unknown) require_all = self.schema[field].get('require_all', self.require_all) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'schema'), schema=schema, allow_unknown=allow_unknown, require_all=require_all, ) try: if not validator(value, update=self.update, normalize=False): self._error(field, errors.MAPPING_SCHEMA, validator._errors) except _SchemaRuleTypeError: self._error(field, errors.BAD_TYPE_FOR_SCHEMA) raise def __validate_schema_sequence(self, field, schema, value): schema = dict(((i, schema) for i in range(len(value)))) validator = self._get_child_validator( document_crumb=field, schema_crumb=(field, 'schema'), schema=schema, allow_unknown=self.allow_unknown, ) validator( dict(((i, v) for i, v in enumerate(value))), update=self.update, normalize=False, ) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2]) self._error(field, errors.SEQUENCE_SCHEMA, validator._errors) def _validate_type(self, data_type, field, value): """ {'type': ['string', 'list'], 'check_with': 'type'} """ if not data_type: return types = (data_type,) if isinstance(data_type, _str_type) else data_type for _type in types: # TODO 
remove this block on next major release # this implementation still supports custom type validation methods type_definition = self.types_mapping.get(_type) if type_definition is not None: matched = isinstance( value, type_definition.included_types ) and not isinstance(value, type_definition.excluded_types) else: type_handler = self.__get_rule_handler('validate_type', _type) matched = type_handler(value) if matched: return # TODO uncomment this block on next major release # when _validate_type_* methods were deprecated: # type_definition = self.types_mapping[_type] # if isinstance(value, type_definition.included_types) \ # and not isinstance(value, type_definition.excluded_types): # noqa 501 # return self._error(field, errors.BAD_TYPE) self._drop_remaining_rules() def _validate_valuesrules(self, schema, field, value): """ {'type': ['dict', 'string'], 'check_with': 'bulk_schema', 'forbidden': ['rename', 'rename_handler']} """ schema_crumb = (field, 'valuesrules') if isinstance(value, Mapping): validator = self._get_child_validator( document_crumb=field, schema_crumb=schema_crumb, schema=dict((k, schema) for k in value), ) validator(value, update=self.update, normalize=False) if validator._errors: self._drop_nodes_from_errorpaths(validator._errors, [], [2]) self._error(field, errors.VALUESRULES, validator._errors) RULE_SCHEMA_SEPARATOR = "The rule's arguments are validated against this schema:" class InspectedValidator(type): """ Metaclass for all validators """ def __new__(cls, *args): if '__doc__' not in args[2]: args[2].update({'__doc__': args[1][0].__doc__}) return super(InspectedValidator, cls).__new__(cls, *args) def __init__(cls, *args): def attributes_with_prefix(prefix): return tuple( x[len(prefix) + 2 :] for x in dir(cls) if x.startswith('_' + prefix + '_') ) super(InspectedValidator, cls).__init__(*args) cls._types_from_methods, cls.validation_rules = (), {} for attribute in attributes_with_prefix('validate'): # TODO remove inspection of type test methods 
in next major release if attribute.startswith('type_'): cls._types_from_methods += (attribute[len('type_') :],) else: cls.validation_rules[attribute] = cls.__get_rule_schema( '_validate_' + attribute ) # TODO remove on next major release if cls._types_from_methods: warn( "Methods for type testing are deprecated, use TypeDefinition " "and the 'types_mapping'-property of a Validator-instance " "instead.", DeprecationWarning, ) # TODO remove second summand on next major release cls.checkers = tuple(x for x in attributes_with_prefix('check_with')) + tuple( x for x in attributes_with_prefix('validator') ) x = cls.validation_rules['check_with']['oneof'] x[1]['schema']['oneof'][1]['allowed'] = x[2]['allowed'] = cls.checkers for rule in (x for x in cls.mandatory_validations if x != 'nullable'): cls.validation_rules[rule]['required'] = True cls.coercers, cls.default_setters, cls.normalization_rules = (), (), {} for attribute in attributes_with_prefix('normalize'): if attribute.startswith('coerce_'): cls.coercers += (attribute[len('coerce_') :],) elif attribute.startswith('default_setter_'): cls.default_setters += (attribute[len('default_setter_') :],) else: cls.normalization_rules[attribute] = cls.__get_rule_schema( '_normalize_' + attribute ) for rule in ('coerce', 'rename_handler'): x = cls.normalization_rules[rule]['oneof'] x[1]['schema']['oneof'][1]['allowed'] = x[2]['allowed'] = cls.coercers cls.normalization_rules['default_setter']['oneof'][1][ 'allowed' ] = cls.default_setters cls.rules = {} cls.rules.update(cls.validation_rules) cls.rules.update(cls.normalization_rules) def __get_rule_schema(cls, method_name): docstring = getattr(cls, method_name).__doc__ if docstring is None: result = {} else: if RULE_SCHEMA_SEPARATOR in docstring: docstring = docstring.split(RULE_SCHEMA_SEPARATOR)[1] try: result = literal_eval(docstring.strip()) except Exception: result = {} if not result and method_name != '_validate_meta': warn( "No validation schema is defined for the arguments 
of rule " "'%s'" % method_name.split('_', 2)[-1] ) return result Validator = InspectedValidator('Validator', (BareValidator,), {}) Cerberus-1.3.2/setup.cfg0000644000076500000240000000007713556067066015522 0ustar nicolastaff00000000000000[aliases] test = pytest [egg_info] tag_build = tag_date = 0 Cerberus-1.3.2/setup.py0000755000076500000240000000415613556066427015420 0ustar nicolastaff00000000000000#!/usr/bin/env python from setuptools import setup, find_packages import sys from collections import OrderedDict DESCRIPTION = ( "Lightweight, extensible schema and data validation tool for " "Python dictionaries." ) LONG_DESCRIPTION = open("README.rst").read() VERSION = "1.3.2" setup_requires = ( ["pytest-runner"] if any(x in sys.argv for x in ("pytest", "test", "ptr")) else [] ) setup( name="Cerberus", version=VERSION, description=DESCRIPTION, long_description=LONG_DESCRIPTION, author="Nicola Iarocci", author_email="nicola@nicolaiarocci.com", maintainer="Frank Sachsenheim", maintainer_email="funkyfuture@riseup.net", url="http://docs.python-cerberus.org", project_urls=OrderedDict( ( ("Documentation", "http://python-cerberus.org"), ("Code", "https://github.com/pyeve/cerberus"), ("Issue tracker", "https://github.com/pyeve/cerberus/issues"), ) ), license="ISC", platforms=["any"], packages=find_packages(), include_package_data=True, setup_requires=setup_requires, tests_require=["pytest"], test_suite="cerberus.tests", install_requires=["setuptools"], keywords=["validation", "schema", "dictionaries", "documents", "normalization"], python_requires=">=2.7", classifiers=[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Natural Language :: English", "License :: OSI Approved :: ISC License (ISCL)", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.4", "Programming 
Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", ], )
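A subtle behavior worth noting in `cerberus/validator.py` above: `_validate_regex` appends `'$'` to patterns that are not already end-anchored and then uses `re.match`, which anchors at the start, so the `regex` rule effectively requires a full match. A minimal stdlib-only sketch of that behavior (the helper name `regex_matches` is hypothetical, not part of Cerberus's API):

```python
import re


def regex_matches(pattern, value):
    # Mirrors BareValidator._validate_regex: non-string values are ignored,
    # the pattern is end-anchored if needed, and re.match anchors the start.
    if not isinstance(value, str):
        return True  # the rule only applies to strings
    if not pattern.endswith('$'):
        pattern += '$'
    return re.compile(pattern).match(value) is not None


assert regex_matches('[A-Z]{3}', 'ABC')       # full match succeeds
assert not regex_matches('[A-Z]{3}', 'ABCD')  # trailing characters rejected
assert not regex_matches('[A-Z]{3}', 'xABC')  # re.match anchors the start
assert regex_matches('[A-Z]{3}', 42)          # non-strings pass through
```

In Cerberus itself a failed match files `errors.REGEX_MISMATCH` via `self._error` rather than returning a boolean; the sketch only isolates the anchoring logic.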