pax_global_header00006660000000000000000000000064147147130270014520gustar00rootroot0000000000000052 comment=8d58cfe196a9f8136f8eea453805e6e3c9e6b263 sybil-9.0.0/000077500000000000000000000000001471471302700126505ustar00rootroot00000000000000sybil-9.0.0/.carthorse.yml000066400000000000000000000004141471471302700154420ustar00rootroot00000000000000carthorse: version-from: setup.py tag-format: "{version}" when: - version-not-tagged actions: - run: "pip install -e .[build]" - run: "python setup.py sdist bdist_wheel" - run: "twine upload -u __token__ -p $PYPI_TOKEN dist/*" - create-tag sybil-9.0.0/.circleci/000077500000000000000000000000001471471302700145035ustar00rootroot00000000000000sybil-9.0.0/.circleci/config.yml000066400000000000000000000023711471471302700164760ustar00rootroot00000000000000version: 2.1 orbs: python: cjw296/python-ci@4 common: &common jobs: - python/pip-run-tests: matrix: alias: "python-versions" parameters: image: - cimg/python:3.9 - cimg/python:3.11 - cimg/python:3.12 - python/pip-run-tests: matrix: alias: "pytest-versions" parameters: image: - cimg/python:3.12 extras: - "[test,pytest]" extra_packages: # No known problems around 8.3, these are here to provide # syntax examples and ensure the matrix still works: - "'pytest>=8,<8.3'" - "'pytest>=8.3'" - python/typing: packages: sybil - python/coverage: name: coverage requires: - "python-versions" - "pytest-versions" - python/release: name: release config: .carthorse.yml requires: - coverage filters: branches: only: master workflows: push: <<: *common periodic: <<: *common triggers: - schedule: cron: "0 0 * * 2" filters: branches: only: master sybil-9.0.0/.gitignore000066400000000000000000000001301471471302700146320ustar00rootroot00000000000000/bin /*.egg-info /include /lib .coverage* _build/ .*cache/ pytestdebug.log __pycache__/ sybil-9.0.0/.readthedocs.yml000066400000000000000000000003341471471302700157360ustar00rootroot00000000000000version: 2 build: os: ubuntu-22.04 tools: python: "3" python: 
install: - method: pip path: . extra_requirements: - build sphinx: fail_on_warning: true configuration: docs/conf.py sybil-9.0.0/CHANGELOG.rst000066400000000000000000000226301471471302700146740ustar00rootroot00000000000000Changes ======= 9.0.0 (12 Nov 2024) ------------------- - Retire ``Document.find_region_sources()`` in favour of using a :class:`~sybil.parsers.abstract.lexers.BlockLexer`. See the :ref:`updated example `. - Better error messages when lexing fails to find the end of a block. - Improved documentation. 8.0.1 (30 Oct 2024) ------------------- - Better error message when skip arguments are malformed. - Remove unused constant that caused problems with development releases of pytest. 8.0.0 (20 Sep 2024) ------------------- - Drop Python 3.8 support. - Internal code tidying. Thanks to Adam Dangoor for the work on these! 7.1.1 (16 Sep 2024) ------------------- - Fix bug that broke docstring collection where a method had an :any:`ellipsis ` in place of the docstring. 7.1.0 (16 Sep 2024) ------------------- - Introduce a ``pytest`` extra, such that you can install Sybil in a way that ensures compatible versions of Sybil and pytest are used. - Fix a :class:`DeprecationWarning` on Python 3.13. Thanks to Adam Dangoor for this fix. 7.0.0 (12 Sep 2024) ------------------- - Drop Python 3.7 support. - Drop support for pytest versions less than 8. - :class:`Sybil` now takes a name which is used in any test identifiers it produces. - Add support for :rst:dir:`code` and :rst:dir:`sourcecode` directives in both ReST and MyST. - Fix bug in the pytest integration that prevented multiple :class:`Sybil` instances from parsing the same file. - Fix bug where escaped quotes were not correctly unescaped in regions extracted from docstrings. - Restructure usage documentation, splitting out :doc:`integration` and :doc:`parsers` documents and introducing a :doc:`concepts` glossary. 
6.1.1 (9 May 2024) ------------------ - Fix lexing of indented blocks where embedded blank lines would be erroneously removed. 6.1.0 (22 Apr 2024) ------------------- - Add support for lexing nested fenced codeblocks in markdown. - Add support for tilde-delimited codeblocks in markdown. - Fix bug with the end offset of codeblocks in markdown. 6.0.3 (31 Jan 2024) ------------------- - Support pytest 8 and above, due to yet another breaking change in an API Sybil relies on. Thanks to Adam Dangoor for the fix. 6.0.2 (23 Nov 2023) ------------------- - Remove use of deprecated ``py.path.local``. 6.0.1 (22 Nov 2023) ------------------- - Fix lexing of ReST directives and directives-in-comments where the directives were not separated by at least one newline. 6.0.0 (21 Nov 2023) ------------------- - The public interface is now fully typed and checked with ``mypy``. - Official support for Python 3.12 with Python 3.7 now being the minimum supported version. - :doc:`Markdown ` is now supported separately to :doc:`MyST `. - :any:`ReST ` and :any:`MyST ` directives now have their options extracted as part of the lexing process. - Added support for MyST single-line html-style comment directives. - Fixed parsing of MyST directive options with no leading whitespace. - Fixed parsing of Markdown and MyST fenced codeblocks at the end of documents with no trailing newline. - Rework document evaluators to be more flexible and structured. - :ref:`skip ` has been reworked to check validity of operations and allow a reason to be provided for an unconditional skip so it can be highlighted as a skipped test in test runner output. The skip parsers are also now lexer-based. - Make :attr:`Region.evaluator` optional, removing the need for the separate ``LexedRegion`` class. Huge thanks to Adam Dangoor for all his work on typing! 5.0.3 (14 Jul 2023) ------------------- - Fix bug in traceback trimming on the latest release of pytest. 
5.0.2 (19 May 2023) ------------------- - Fixed bug in the :func:`repr` of ``LexedRegion`` instances where a lexeme was ``None``. - Fixed lexing of ReST directives, such as :rst:dir:`code-block`, where they occurred at the end of a docstring. - Ensure the :class:`~sybil.Document.namespace` in which doctests are evaluated always has a ``__name__``. This is required by an implementation detail of :any:`typing.runtime_checkable`. 5.0.1 (9 May 2023) ------------------ - Fix a bug that prevented r-prefixed docstrings from being correctly parsed from ``.py`` files. 5.0.0 (26 Mar 2023) ------------------- - By default, on Python 3.8 and above, when parsing ``.py`` files, only examples in docstrings will be parsed. - The :attr:`~sybil.Document.namespace` can now be cleared in both :ref:`ReST ` and :ref:`MyST `. - Support for Python 3.6 has been dropped. - Support for pytest versions earlier than 7.1 has been dropped. 4.0.1 (8 Feb 2023) ------------------ - Switch :func:`sybil.parsers.myst.SkipParser` to use the correct comment character. - Note that the :external+sphinx:doc:`doctest extension ` needs to be enabled to render :rst:dir:`doctest` directives. - Warn about :ref:`ReST ` and :ref:`MyST ` doctest parsers and overlapping blocks. 4.0.0 (25 Dec 2022) ------------------- - Restructure to support lexing source languages such as ReST and MyST while testing examples in target languages such as Python, doctest and bash. - Add support for :doc:`MyST examples `. - Include a :ref:`plan for migrating ` from ``sphinx.ext.doctest``. 3.0.1 (25 Feb 2022) ------------------- - Continue with the ever-shifting sands of pytest APIs, this time appeasing warnings from pytest 7 that, when fixed, break compatibility with pytest 6. 3.0.0 (26 Oct 2021) ------------------- - Require pytest 6.2.0. - Drop Python 2 support. - Add support for Python 3.10. - Remove the ``encoding`` parameter to :class:`~sybil.parsers.rest.DocTestParser` as it is no longer used. 
- :class:`~sybil.parsers.rest.CodeBlockParser` has been renamed to :class:`~sybil.parsers.rest.PythonCodeBlockParser`; see the :ref:`codeblock-parser` documentation for details. - Support has been added to check examples in Python source code in addition to documentation source files. - ``FIX_BYTE_UNICODE_REPR`` has been removed as it should no longer be needed. Thanks to Stefan Behnel for his work on :ref:`codeblock-parser` parsing! 2.0.1 (29 Nov 2020) ------------------- - Make :class:`~sybil.parsers.rest.DocTestParser` more permissive with respect to tabs in documents. Tabs that aren't in the doctest block no longer cause parsing of the document to fail. 2.0.0 (17 Nov 2020) ------------------- - Drop support for nose. - Handle encoded data returned by doctest execution on Python 2. 1.4.0 (5 Aug 2020) ------------------ - Support nested directories of source files rather than just one directory. - Support multiple patterns of files to include. 1.3.1 (29 Jul 2020) ------------------- - Support pytest 6. 1.3.0 (28 Mar 2020) ------------------- - Treat all documentation source files as being ``utf-8`` encoded. This can be overridden by passing an encoding when instantiating a :class:`~sybil.Sybil`. 1.2.2 (20 Feb 2020) ------------------- - Improvements to ``FIX_BYTE_UNICODE_REPR`` for multiple strings on a single line. - Better handling of files with Windows line endings on Linux under Python 2. 1.2.1 (21 Jan 2020) ------------------- - Fixes for pytest 3.1.0. 1.2.0 (28 Apr 2019) ------------------- - Only compile code in :ref:`codeblocks ` at evaluation time, giving :ref:`skip ` a chance to skip code blocks that won't compile on a particular version of Python. 1.1.0 (25 Apr 2019) ------------------- - Move to CircleCI__ and Carthorse__. __ https://circleci.com/gh/simplistix/sybil __ https://github.com/cjw296/carthorse - Add warning about the limitations of ``FIX_BYTE_UNICODE_REPR``. 
- Support explicit filenames to include and patterns to exclude when instantiating a :class:`~sybil.Sybil`. - Add the :ref:`skip ` parser. 1.0.9 (1 Aug 2018) ------------------ - Fix for pytest 3.7+. 1.0.8 (6 Apr 2018) ------------------ - Changes only to unit tests to support fixes in the latest release of pytest. 1.0.7 (25 January 2018) ----------------------- - Literal tabs may no longer be included in text that is parsed by the :class:`~sybil.parsers.rest.DocTestParser`. Previously, tabs were expanded which could cause unpleasant problems. 1.0.6 (30 November 2017) ------------------------ - Fix compatibility with pytest 3.3+. Thanks to Bruno Oliveira for this fix! 1.0.5 (6 June 2017) ------------------- - Fix ordering issue that would cause some tests to fail when run on systems using tmpfs. 1.0.4 (5 June 2017) ------------------- - Fix another bug in :class:`~sybil.parsers.rest.CodeBlockParser` where a :rst:dir:`code-block` followed by a less-indented block would be incorrectly indented, resulting in a :class:`SyntaxError`. 1.0.3 (2 June 2017) ------------------- - Fix bug in :class:`~sybil.parsers.rest.CodeBlockParser` where it would incorrectly parse indented code blocks. 1.0.2 (1 June 2017) ------------------- - Fix bug in :class:`~sybil.parsers.rest.CodeBlockParser` where it would not find indented code blocks. 1.0.1 (30 May 2017) ------------------- - Fix bug where unicode and byte literals weren't corrected in doctest tracebacks, even when ``FIX_BYTE_UNICODE_REPR`` was specified. 
1.0.0 (26 May 2017) ------------------- - Initial release sybil-9.0.0/LICENSE.txt000066400000000000000000000021201471471302700144660ustar00rootroot00000000000000======= License ======= Copyright (c) 2017 onwards Chris Withers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. sybil-9.0.0/README.rst000066400000000000000000000012051471471302700143350ustar00rootroot00000000000000Sybil ===== |CircleCI|_ |Docs|_ .. |CircleCI| image:: https://circleci.com/gh/simplistix/sybil/tree/master.svg?style=shield .. _CircleCI: https://circleci.com/gh/simplistix/sybil/tree/master .. |Docs| image:: https://readthedocs.org/projects/sybil/badge/?version=latest .. _Docs: http://sybil.readthedocs.org/en/latest/ This library provides a way to check examples in your code and documentation by parsing them from their source and evaluating the parsed examples as part of your normal test run. Integration is provided for the main Python test runners. The `documentation `__ is the best place to start. 
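The test runner wiring the README refers to typically lives in a ``conftest.py``; the following minimal sketch mirrors the configuration this repository itself uses (adapt the parsers and patterns to your own project):

```python
from doctest import ELLIPSIS

from sybil import Sybil
from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser

# Collect doctest and code-block examples from ReST files and Python
# docstrings, handing them to pytest as ordinary collected test items.
pytest_collect_file = Sybil(
    parsers=[
        DocTestParser(optionflags=ELLIPSIS),
        PythonCodeBlockParser(),
    ],
    patterns=['*.rst', '*.py'],
).pytest()
```

With this in place, ``pytest`` discovers and runs the documentation examples alongside the normal test suite.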
sybil-9.0.0/conftest.py000066400000000000000000000017621471471302700150550ustar00rootroot00000000000000from doctest import ELLIPSIS from pathlib import Path from typing import Tuple, List import pytest from sybil import Sybil from sybil.parsers.rest import ( CaptureParser, DocTestParser, PythonCodeBlockParser, SkipParser, ) pytest_collect_file = Sybil( parsers=[ CaptureParser(), DocTestParser(optionflags=ELLIPSIS), PythonCodeBlockParser(), SkipParser(), ], patterns=['*.rst', '*.py'], excludes=['tests/samples/*', 'tests/*'] ).pytest() def _find_python_files() -> List[Tuple[Path, str]]: paths = [] for path in Path(__file__).parent.rglob('*.py'): source = Path(path).read_text() paths.append((path, source)) return paths @pytest.fixture def all_python_files(): return _find_python_files() def pytest_generate_tests(metafunc): files = _find_python_files() ids = [str(f[0]) for f in files] if "python_file" in metafunc.fixturenames: metafunc.parametrize("python_file", files, ids=ids) sybil-9.0.0/docs/000077500000000000000000000000001471471302700136005ustar00rootroot00000000000000sybil-9.0.0/docs/Makefile000066400000000000000000000050171471471302700152430ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d _build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf _build/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) _build/html @echo @echo "Build finished. The HTML pages are in _build/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) _build/dirhtml @echo @echo "Build finished. The HTML pages are in _build/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) _build/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) _build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) _build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in _build/htmlhelp." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo @echo "Build finished; the LaTeX files are in _build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes @echo @echo "The overview file is in _build/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) _build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in _build/linkcheck/output.txt." 
doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in _build/doctest/output.txt." sybil-9.0.0/docs/api.rst000066400000000000000000000071361471471302700151120ustar00rootroot00000000000000.. currentmodule:: sybil API Reference ============= This reference is organised into sections relating to the different :doc:`concepts ` you're likely to encounter when using Sybil. Sybils ------ See the :term:`Sybil` concept definition for information. .. autoclass:: Sybil :members: :special-members: __add__ .. autoclass:: sybil.sybil.SybilCollection :members: Documents --------- See the :term:`Document` concept definition for information. .. autoclass:: sybil.Document :members: .. autoclass:: sybil.document.PythonDocument :members: .. autoclass:: sybil.document.PythonDocStringDocument :members: Regions ------- See the :term:`Region` concept definition for information. .. autoclass:: sybil.Region :members: .. autoclass:: sybil.Lexeme :members: :show-inheritance: Lexing ------ See the :term:`Lexer` concept definition for information. .. autodata:: sybil.typing.Lexer .. autodata:: sybil.typing.LexemeMapping .. autoclass:: sybil.parsers.abstract.lexers.BlockLexer .. autoclass:: sybil.parsers.abstract.lexers.LexingException Parsing ------- See the :term:`Parser` concept definition for information. .. autodata:: sybil.typing.Parser .. autoclass:: sybil.parsers.abstract.codeblock.AbstractCodeBlockParser :members: .. autoclass:: sybil.parsers.abstract.doctest.DocTestStringParser :members: __call__, evaluator .. autoclass:: sybil.parsers.abstract.skip.AbstractSkipParser :members: .. autoclass:: sybil.parsers.abstract.clear.AbstractClearNamespaceParser ReST Parsing and Lexing ----------------------- See the :term:`Lexer` and :term:`Parser` concept definitions for information. .. autoclass:: sybil.parsers.rest.lexers.DirectiveLexer .. autoclass:: sybil.parsers.rest.lexers.DirectiveInCommentLexer .. 
autoclass:: sybil.parsers.rest.CaptureParser .. autoclass:: sybil.parsers.rest.CodeBlockParser :inherited-members: :exclude-members: pad .. autoclass:: sybil.parsers.rest.PythonCodeBlockParser .. autoclass:: sybil.parsers.rest.DocTestParser .. autoclass:: sybil.parsers.rest.DocTestDirectiveParser .. autoclass:: sybil.parsers.rest.SkipParser .. autoclass:: sybil.parsers.rest.ClearNamespaceParser Markdown Parsing and Lexing --------------------------- See the :term:`Lexer` and :term:`Parser` concept definitions for information. .. autoclass:: sybil.parsers.markdown.lexers.RawFencedCodeBlockLexer .. autoclass:: sybil.parsers.markdown.lexers.FencedCodeBlockLexer .. autoclass:: sybil.parsers.markdown.lexers.DirectiveInHTMLCommentLexer .. autoclass:: sybil.parsers.markdown.CodeBlockParser :inherited-members: .. autoclass:: sybil.parsers.markdown.PythonCodeBlockParser .. autoclass:: sybil.parsers.markdown.SkipParser .. autoclass:: sybil.parsers.markdown.ClearNamespaceParser MyST Parsing and Lexing ----------------------- See the :term:`Lexer` and :term:`Parser` concept definitions for information. .. autoclass:: sybil.parsers.myst.lexers.DirectiveLexer .. autoclass:: sybil.parsers.myst.lexers.DirectiveInPercentCommentLexer .. autoclass:: sybil.parsers.myst.CodeBlockParser :inherited-members: .. autoclass:: sybil.parsers.myst.PythonCodeBlockParser .. autoclass:: sybil.parsers.myst.DocTestDirectiveParser .. autoclass:: sybil.parsers.myst.SkipParser .. autoclass:: sybil.parsers.myst.ClearNamespaceParser Evaluation ---------- See the :term:`Evaluator` concept definition for information. .. automodule:: sybil :members: Example .. autoclass:: sybil.example.NotEvaluated .. autoclass:: sybil.typing.Evaluator .. autoclass:: sybil.evaluators.doctest.DocTestEvaluator .. autoclass:: sybil.evaluators.python.PythonEvaluator sybil-9.0.0/docs/changes.rst000066400000000000000000000000711471471302700157400ustar00rootroot00000000000000 .. currentmodule:: sybil .. 
include:: ../CHANGELOG.rst sybil-9.0.0/docs/concepts.rst000066400000000000000000000113051471471302700161500ustar00rootroot00000000000000Concepts ======== The following concepts are ones you'll encounter when :doc:`using ` Sybil or writing :doc:`parsers ` for it: .. glossary:: Sybil As well as being the name of the project, this is an object that contains configuration of :term:`parsers ` and provides :term:`test runner` integration for discovering :term:`examples ` in :term:`documents ` and :term:`evaluating ` them in the document's :term:`namespace`. Its API is defined by :class:`sybil.Sybil`. See :doc:`use` and :doc:`integration` for more information. Sybil Collection When more than one set of configuration is required, :term:`Sybils ` can be combined using the addition operator to form a single object with a :term:`test runner` integration. The API is defined by :class:`sybil.sybil.SybilCollection` and :any:`sybil.Sybil.__add__`. See :doc:`patterns`. Document An object representing a file in the project containing :term:`examples ` that need to be :term:`evaluated `. Its API is defined by :class:`sybil.Document`. See :doc:`use` for more information. Region An object representing a region in a :term:`document` containing exactly one :term:`example` that needs to be :term:`evaluated `. Within a single :term:`Sybil`, no region may overlap with another. This is to ensure that :term:`examples ` can be executed in a consistent order, and also helps to highlight :doc:`parsing ` errors, which often result in overlapping regions. The API is defined by :class:`sybil.Region`. See :doc:`use` and :doc:`parsers` for more information. Test Runner Sybil works by integrating with the test runner used by your project. This is done by using the appropriate integration method on the :term:`Sybil` or :term:`Sybil Collection`. See :doc:`test runner integration ` for more information. Lexer Different documentation and source formats often result in the same type of :term:`examples `. 
However, they have their own concepts such as :class:`directives ` in ReST and :class:`fenced code blocks ` in Markdown. A lexer is a callable that takes a :term:`document` and yields a sequence of :term:`regions ` that do not have the :attr:`~sybil.Region.parsed` or :attr:`~sybil.Region.evaluator` attributes set. These allow :term:`parsers ` and :term:`evaluators ` to have simpler implementations that are common across different source formats and can make life easier when writing new :term:`parsers ` for an existing source format. Its API is defined by :class:`sybil.typing.Lexer`. See :doc:`parsers` for more information. Parser A callable that takes a :term:`document` and yields a sequence of :term:`regions `. Parsers may use :term:`lexers ` to turn text in specific text formats into abstract primitives such as :class:`blocks `. Its API is defined by :any:`sybil.typing.Parser`. See :doc:`use` and :doc:`parsers` for more information. Example An object representing an example in a :term:`document`. It collects information from both the :term:`document` and the :term:`region` and knows how to evaluate itself using any applicable :term:`evaluators ` from either the :term:`region` or :term:`document` it came from. Its API is defined by :any:`sybil.example.Example`. See :doc:`use` and :doc:`parsers` for more information. Namespace This is a :class:`dict` in which all :term:`examples ` in a :term:`document` will be :term:`evaluated `. Namespaces are not shared between :term:`documents `. For :any:`python ` or :any:`doctest ` evaluation, this is used for :any:`globals`, and for other :term:`evaluators ` it can be used to store state or provide named objects for use in the evaluation of other examples. Its API is defined by :class:`sybil.Document.namespace`. See :doc:`use` and :doc:`parsers` for more information. 
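The parser, evaluator and namespace protocols defined above can be sketched without Sybil itself; the classes and the assignment parser below are hypothetical, stdlib-only stand-ins, not Sybil's real implementations:

```python
import re
from typing import Iterable, Optional


# Cut-down stand-ins for Sybil's Document, Region and Example classes,
# reduced to plain data so the protocol shapes are visible.
class Document:
    def __init__(self, text: str) -> None:
        self.text = text
        self.namespace: dict = {}


class Region:
    def __init__(self, start: int, end: int, parsed, evaluator) -> None:
        self.start, self.end = start, end
        self.parsed = parsed
        self.evaluator = evaluator


class Example:
    def __init__(self, document: Document, region: Region) -> None:
        self.document = document
        self.parsed = region.parsed


# An evaluator: takes an example, uses the document's shared namespace,
# and returns None on success or a string describing the failure.
def evaluate_assignment(example: Example) -> Optional[str]:
    name, _, value = example.parsed.partition('=')
    example.document.namespace[name.strip()] = int(value)
    return None


# A parser: takes a document and yields non-overlapping regions, here
# one per line that looks like "name = 123".
def parse_assignments(document: Document) -> Iterable[Region]:
    for match in re.finditer(r'^\w+ *= *\d+ *$', document.text, re.MULTILINE):
        yield Region(match.start(), match.end(), match.group(0), evaluate_assignment)


document = Document('some prose\nanswer = 42\nmore prose\n')
for region in parse_assignments(document):
    failure = region.evaluator(Example(document, region))
```

A real parser would yield ``sybil.Region`` instances and leave evaluation to the test runner integration, but the call shapes are the same.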
Evaluator A callable that takes an :term:`example` and can raise an :class:`Exception` or return a :class:`str` to indicate that the example was not successfully evaluated. It will often use or modify the :term:`namespace`. Its API is defined by :any:`sybil.typing.Evaluator`. See :doc:`use` and :doc:`parsers` for more information. sybil-9.0.0/docs/conf.py000066400000000000000000000023271471471302700151030ustar00rootroot00000000000000# -*- coding: utf-8 -*- import os, pkg_resources, datetime, time intersphinx_mapping = { 'python': ('https://docs.python.org/3/', None), 'sphinx': ('https://www.sphinx-doc.org/en/stable/', None), 'myst': ('https://myst-parser.readthedocs.io/en/latest', None), } extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.intersphinx' ] # General source_suffix = '.rst' master_doc = 'index' project = 'sybil' build_date = datetime.datetime.utcfromtimestamp(int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))) copyright = '2017 - %s Chris Withers' % build_date.year version = release = pkg_resources.get_distribution(project).version exclude_patterns = [ '_build', 'example*', ] pygments_style = 'sphinx' # Options for HTML output html_theme = 'furo' html_title = 'Sybil' htmlhelp_basename = project+'doc' # Options for LaTeX output latex_documents = [ ('index',project+'.tex', project+u' Documentation', 'Chris Withers', 'manual'), ] autodoc_member_order = 'bysource' nitpicky = True nitpick_ignore = [ ('py:class', 'Evaluator'), # https://github.com/sphinx-doc/sphinx/issues/10785 ('py:class', 'LexemeMapping'), # https://github.com/sphinx-doc/sphinx/issues/10785 ] toc_object_entries = False sybil-9.0.0/docs/development.rst000066400000000000000000000031311471471302700166520ustar00rootroot00000000000000Development =========== .. 
highlight:: bash If you wish to contribute to this project, then you should fork the repository found here: https://github.com/simplistix/sybil/ Once that has been done and you have a checkout, you can follow these instructions to perform various development tasks: Setting up a virtualenv ----------------------- The recommended way to set up a development environment is to create a virtualenv and then install the package in editable form as follows:: $ python3 -m venv ~/virtualenvs/sybil $ source ~/virtualenvs/sybil/bin/activate $ pip install -U pip setuptools $ pip install -U -e .[test,build] Running the tests ----------------- Once you've set up a virtualenv, the tests can be run in the activated virtualenv as follows:: $ pytest Building the documentation -------------------------- The Sphinx documentation is built by doing the following from the directory containing setup.py:: $ cd docs $ make html To check that the description that will be used on PyPI renders properly, do the following:: $ python setup.py --long-description | rst2html.py > desc.html The resulting ``desc.html`` should be checked by opening in a browser. To check that the README that will be used on GitHub renders properly, do the following:: $ cat README.rst | rst2html.py > readme.html The resulting ``readme.html`` should be checked by opening in a browser. Making a release ---------------- To make a release, just update the version in ``setup.py``, update the change log, and push to https://github.com/simplistix/sybil and Carthorse should take care of the rest. 
sybil-9.0.0/docs/examples/000077500000000000000000000000001471471302700154165ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/000077500000000000000000000000001471471302700177415ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/__init__.py000066400000000000000000000000001471471302700220400ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/docs/000077500000000000000000000000001471471302700206715ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/docs/conftest.py000066400000000000000000000012671471471302700230760ustar00rootroot00000000000000from os import chdir, getcwd from shutil import rmtree from tempfile import mkdtemp import pytest from sybil import Sybil from sybil.parsers.codeblock import PythonCodeBlockParser from sybil.parsers.doctest import DocTestParser @pytest.fixture(scope="module") def tempdir(): # there are better ways to do temp directories, but it's a simple example: path = mkdtemp() cwd = getcwd() try: chdir(path) yield path finally: chdir(cwd) rmtree(path) pytest_collect_file = Sybil( parsers=[ DocTestParser(), PythonCodeBlockParser(future_imports=['print_function']), ], pattern='*.rst', fixtures=['tempdir'] ).pytest() sybil-9.0.0/docs/examples/integration/docs/example.rst000066400000000000000000000005631471471302700230620ustar00rootroot00000000000000A fairly pointless function: .. code-block:: python import sys def write_message(filename, message): with open(filename, 'w') as target: target.write(message) Now we can use a doctest REPL to show it in action: >>> write_message('test.txt', 'is this thing on?') >>> with open('test.txt') as source: ... print(source.read()) is this thing on? 
sybil-9.0.0/docs/examples/integration/pytest.ini000066400000000000000000000000411471471302700217650ustar00rootroot00000000000000[pytest] addopts = -p no:doctest sybil-9.0.0/docs/examples/integration/unittest/000077500000000000000000000000001471471302700216205ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/unittest/__init__.py000066400000000000000000000000001471471302700237170ustar00rootroot00000000000000sybil-9.0.0/docs/examples/integration/unittest/test_docs.py000066400000000000000000000013331471471302700241610ustar00rootroot00000000000000from os import chdir, getcwd from shutil import rmtree from tempfile import mkdtemp from sybil import Sybil from sybil.parsers.codeblock import PythonCodeBlockParser from sybil.parsers.doctest import DocTestParser def sybil_setup(namespace): # there are better ways to do temp directories, but it's a simple example: namespace['path'] = path = mkdtemp() namespace['cwd'] = getcwd() chdir(path) def sybil_teardown(namespace): chdir(namespace['cwd']) rmtree(namespace['path']) load_tests = Sybil( parsers=[ DocTestParser(), PythonCodeBlockParser(future_imports=['print_function']), ], path='../docs', pattern='*.rst', setup=sybil_setup, teardown=sybil_teardown ).unittest() sybil-9.0.0/docs/examples/linting_and_checking/000077500000000000000000000000001471471302700215375ustar00rootroot00000000000000sybil-9.0.0/docs/examples/linting_and_checking/conftest.py000066400000000000000000000013011471471302700237310ustar00rootroot00000000000000from typing import Optional from sybil import Sybil, Example from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser, CodeBlockParser def lint_python_source(example: Example) -> Optional[str]: # here you'd feed example.parsed, which contains the python source of the # .. 
code-block:: python, to your linting tool of choice pass linting = Sybil( name='linting', parsers=[ CodeBlockParser(language='python', evaluator=lint_python_source), ], patterns=['*.rst'], ) tests = Sybil( name='tests', parsers=[ DocTestParser(), PythonCodeBlockParser(), ], patterns=['*.rst'], ) pytest_collect_file = (linting + tests).pytest() sybil-9.0.0/docs/examples/linting_and_checking/index.rst000066400000000000000000000003561471471302700234040ustar00rootroot00000000000000My Project ========== Here is a function: .. code-block: python def a_function(text: str) -> str: return f'a function called with {text}' Let's see this function in use: >>> a_function('baz') 'a function called with baz' sybil-9.0.0/docs/examples/linting_and_checking/pytest.ini000066400000000000000000000000411471471302700235630ustar00rootroot00000000000000[pytest] addopts = -p no:doctest sybil-9.0.0/docs/examples/markdown/000077500000000000000000000000001471471302700172405ustar00rootroot00000000000000sybil-9.0.0/docs/examples/markdown/clear.md000066400000000000000000000002661471471302700206540ustar00rootroot00000000000000```python >>> x = 1 >>> x 1 ``` Now let's start a new test: ```python >>> x Traceback (most recent call last): ... 
NameError: name 'x' is not defined ``` sybil-9.0.0/docs/examples/markdown/codeblock-bash.md000066400000000000000000000000451471471302700224210ustar00rootroot00000000000000```bash $ echo hi there hi there ``` sybil-9.0.0/docs/examples/markdown/codeblock-python.md000066400000000000000000000003561471471302700230320ustar00rootroot00000000000000<!-- invisible-code-block: python # This could be some state setup needed to demonstrate things initialized = True --> This fenced code block defines a function: ```python def prefix(text: str) -> str: return 'prefix: '+text ``` sybil-9.0.0/docs/examples/markdown/doctest.md000066400000000000000000000000701471471302700212240ustar00rootroot00000000000000A fenced code block: ```python >>> x = 1+1 >>> x 2 ``` sybil-9.0.0/docs/examples/markdown/skip.md000066400000000000000000000014571471471302700205370ustar00rootroot00000000000000<!-- skip: next --> This would be wrong: ```python >>> 1 == 2 True ``` This is pseudo-code: <!-- skip: start --> ```python def foo(...) -> bool: ... ``` When you want to foo, you could do it like this: ```python foo('baz', 'bob', ...) ``` <!-- skip: end --> <!-- invisible-code-block: python import sys --> This will only work on Python 3: <!-- skip: next if(sys.version_info < (3, 0), reason="python 3 only") --> ```python >>> repr(b'foo') "b'foo'" ``` This example is not yet working, but I wanted to be reminded: <!-- skip: next "not yet working" --> ```python >>> 1.1 == 1.11 True ``` And here we can see some pseudo-code that will work in a future release: <!-- skip: start "Fix in v5" --> ```python >>> helper = Framework().make_helper() >>> helper.describe(...) ``` <!-- skip: end --> sybil-9.0.0/docs/examples/myst/000077500000000000000000000000001471471302700164125ustar00rootroot00000000000000sybil-9.0.0/docs/examples/myst/clear-html-comment.md000066400000000000000000000002661471471302700224300ustar00rootroot00000000000000```python >>> x = 1 >>> x 1 ``` Now let's start a new test: <!-- clear-namespace --> ```python >>> x Traceback (most recent call last): ... 
NameError: name 'x' is not defined ``` sybil-9.0.0/docs/examples/myst/codeblock-bash.md000066400000000000000000000000471471471302700215750ustar00rootroot00000000000000```bash $ echo hi there hi there ``` sybil-9.0.0/docs/examples/myst/codeblock-python.md000066400000000000000000000006751471471302700222100ustar00rootroot00000000000000% invisible-code-block: python % % # This could be some state setup needed to demonstrate things % initialized = True This fenced code block defines a function: ```python def prefix(text: str) -> str: return 'prefix: '+text ``` This MyST `code-block` directive then uses it: ```{code-block} python prefixed = prefix('some text') ``` sybil-9.0.0/docs/examples/myst/doctest-directive.md000066400000000000000000000000771471471302700223610ustar00rootroot00000000000000A MyST `doctest` directive: ```{doctest} >>> x = 3 >>> x 3 ``` sybil-9.0.0/docs/examples/myst/doctest-eval-rst.md000066400000000000000000000001221471471302700221270ustar00rootroot00000000000000A MyST `eval-rst` directive: ```{eval-rst} .. doctest:: >>> 1 + 1 2 ``` sybil-9.0.0/docs/examples/myst/doctest.md000066400000000000000000000002141471471302700203760ustar00rootroot00000000000000A fenced code block: ```python >>> x = 1+1 >>> x 2 ``` A doctest in a MyST `code-block` directive: ```{code-block} python >>> x += 1 ``` sybil-9.0.0/docs/examples/myst/skip.md000066400000000000000000000015151471471302700177040ustar00rootroot00000000000000% skip: next This would be wrong: ```python >>> 1 == 2 True ``` This is pseudo-code: % skip: start ```python def foo(...) -> bool: ... ``` When you want to foo, you could do it like this: ```python foo('baz', 'bob', ...) 
 ``` % skip: end % invisible-code-block: python % % import sys This will only work on Python 3: % skip: next if(sys.version_info < (3, 0), reason="python 3 only") ```python >>> repr(b'foo') "b'foo'" ``` This example is not yet working, but I wanted to be reminded: % skip: next "not yet working" ```python >>> 1.1 == 1.11 True ``` And here we can see some pseudo-code that will work in a future release: % skip: start "Fix in v5" ```python >>> helper = Framework().make_helper() >>> helper.describe(...) ``` % skip: end <!-- skip: next --> This would still be wrong: ```python >>> 1 == 1 True ``` sybil-9.0.0/docs/examples/myst_text_rest_src/000077500000000000000000000000001471471302700213625ustar00rootroot00000000000000sybil-9.0.0/docs/examples/myst_text_rest_src/conftest.py000066400000000000000000000015161471471302700235640ustar00rootroot00000000000000import pytest from sybil import Sybil from sybil.parsers.myst import ( DocTestDirectiveParser as MarkdownDocTestParser, PythonCodeBlockParser as MarkdownPythonCodeBlockParser ) from sybil.parsers.rest import ( DocTestParser as ReSTDocTestParser, PythonCodeBlockParser as ReSTPythonCodeBlockParser ) @pytest.fixture(scope='session') def keep_seed(): import myproj seed = myproj.SEED yield myproj.SEED = seed markdown_examples = Sybil( parsers=[ MarkdownDocTestParser(), MarkdownPythonCodeBlockParser(), ], patterns=['*.md'], fixtures=['keep_seed'] ) rest_examples = Sybil( parsers=[ ReSTDocTestParser(), ReSTPythonCodeBlockParser(), ], patterns=['*.py'], fixtures=['keep_seed'] ) pytest_collect_file = (markdown_examples+rest_examples).pytest() sybil-9.0.0/docs/examples/myst_text_rest_src/docs/000077500000000000000000000000001471471302700223125ustar00rootroot00000000000000sybil-9.0.0/docs/examples/myst_text_rest_src/docs/conf.py000066400000000000000000000001251471471302700236070ustar00rootroot00000000000000project = 'My Project' extensions = [ "myst_parser", "sphinx.ext.doctest", ] 
sybil-9.0.0/docs/examples/myst_text_rest_src/docs/index.md000066400000000000000000000003201471471302700237360ustar00rootroot00000000000000# My Project % invisible-code-block: python % % import myproj % myproj.SEED = 0.3 This project helps you bar your foo! ```{doctest} >>> from myproj import foo >>> foo('baz') 'baz bar bar bar' ``` sybil-9.0.0/docs/examples/myst_text_rest_src/pytest.ini000066400000000000000000000000411471471302700234060ustar00rootroot00000000000000[pytest] addopts = -p no:doctest sybil-9.0.0/docs/examples/myst_text_rest_src/setup.py000066400000000000000000000006301471471302700230730ustar00rootroot00000000000000from setuptools import setup, find_packages setup( name='myproj', version='1.2.3', description="An example project...", packages=find_packages(exclude=['tests']), python_requires=">=3.6", extras_require=dict( test=[ 'pytest', 'pytest-cov', 'sybil', ], docs=[ 'furo', 'sphinx' ] ), ) sybil-9.0.0/docs/examples/myst_text_rest_src/src/000077500000000000000000000000001471471302700221515ustar00rootroot00000000000000sybil-9.0.0/docs/examples/myst_text_rest_src/src/myproj/000077500000000000000000000000001471471302700234715ustar00rootroot00000000000000sybil-9.0.0/docs/examples/myst_text_rest_src/src/myproj/__init__.py000066400000000000000000000006021471471302700256000ustar00rootroot00000000000000""" This package foo's stuff so it bars. :data:`SEED` is normally random, but if you like you can set it as follows: .. code-block:: python import myproj myproj.SEED = 0.2 """ import random SEED: float = random.random() def foo(text: str): """ Put some ``bar`` on your ``foo``! 
>>> foo('stuff') 'stuff bar bar' """ return text+int(SEED*10)*' bar' sybil-9.0.0/docs/examples/quickstart/000077500000000000000000000000001471471302700176105ustar00rootroot00000000000000sybil-9.0.0/docs/examples/quickstart/conftest.py000066400000000000000000000004461471471302700220130ustar00rootroot00000000000000from doctest import ELLIPSIS from sybil import Sybil from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser pytest_collect_file = Sybil( parsers=[ DocTestParser(optionflags=ELLIPSIS), PythonCodeBlockParser(), ], patterns=['*.rst', '*.py'], ).pytest() sybil-9.0.0/docs/examples/quickstart/example.rst000066400000000000000000000011541471471302700217760ustar00rootroot00000000000000Sample Documentation ==================== Let's put something in the Sybil document's namespace: .. invisible-code-block: python remember_me = b'see how namespaces work?' Suppose we define a function, convoluted and pointless but shows stuff nicely: .. code-block:: python import sys def prefix_and_print(message): print('prefix:', message.decode('ascii')) Now we can use a doctest REPL to show it in action: >>> prefix_and_print(remember_me) prefix: see how namespaces work? The namespace carries across from example to example, no matter what parser: >>> remember_me b'see how namespaces work?' sybil-9.0.0/docs/examples/quickstart/pytest.ini000066400000000000000000000000411471471302700216340ustar00rootroot00000000000000[pytest] addopts = -p no:doctest sybil-9.0.0/docs/examples/rest/000077500000000000000000000000001471471302700163735ustar00rootroot00000000000000sybil-9.0.0/docs/examples/rest/clear.rst000066400000000000000000000002241471471302700202110ustar00rootroot00000000000000>>> x = 1 >>> x 1 Now let's start a new test: .. clear-namespace >>> x Traceback (most recent call last): ... NameError: name 'x' is not defined sybil-9.0.0/docs/examples/rest/codeblock-python.rst000066400000000000000000000003471471471302700223750ustar00rootroot00000000000000Here's a function: .. 
code-block:: python import sys def prefix(message): return 'prefix:'+message This won't show up but will verify it works: .. invisible-code-block: python assert prefix('foo') == 'prefix:foo' sybil-9.0.0/docs/examples/rest/doctest-directive.rst000066400000000000000000000001471471471302700225500ustar00rootroot00000000000000This example will never work: >>> 1 + 1 3 However, this one will: .. doctest:: >>> 1 + 1 2 sybil-9.0.0/docs/examples/rest/doctest.rst000066400000000000000000000000641471471302700205720ustar00rootroot00000000000000>>> context = 'This' >>> context 'This' >>> 1 + 1 2 sybil-9.0.0/docs/examples/rest/skip.rst000066400000000000000000000011021471471302700200650ustar00rootroot00000000000000.. skip: next This would be wrong: >>> 1 == 2 True This is pseudo-code: .. skip: start >>> foo = ... >>> foo(..) .. skip: end >>> import sys This will only work on Python 3: .. skip: next if(sys.version_info < (3, 0), reason="python 3 only") >>> repr(b'foo') "b'foo'" This example is not yet working, but I wanted to be reminded: .. skip: next "not yet working" >>> 1.1 == 1.11 True And here we can see some pseudo-code that will work in a future release: .. skip: start "Fix in v5" >>> helper = Framework().make_helper() >>> helper.describe(...) .. 
skip: end sybil-9.0.0/docs/examples/rest_text_rest_src/000077500000000000000000000000001471471302700213435ustar00rootroot00000000000000sybil-9.0.0/docs/examples/rest_text_rest_src/conftest.py000066400000000000000000000006331471471302700235440ustar00rootroot00000000000000import pytest from sybil import Sybil from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser @pytest.fixture(scope='session') def keep_seed(): import myproj seed = myproj.SEED yield myproj.SEED = seed pytest_collect_file = Sybil( parsers=[ DocTestParser(), PythonCodeBlockParser(), ], patterns=['*.rst', '*.py'], fixtures=['keep_seed'] ).pytest() sybil-9.0.0/docs/examples/rest_text_rest_src/docs/000077500000000000000000000000001471471302700222735ustar00rootroot00000000000000sybil-9.0.0/docs/examples/rest_text_rest_src/docs/conf.py000066400000000000000000000001371471471302700235730ustar00rootroot00000000000000project = 'My Project' extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.intersphinx' ] sybil-9.0.0/docs/examples/rest_text_rest_src/docs/index.rst000066400000000000000000000003031471471302700241300ustar00rootroot00000000000000My Project ========== .. invisible-code-block: python import myproj myproj.SEED = 0.3 This project helps you bar your foo! 
>>> from myproj import foo >>> foo('baz') 'baz bar bar bar' sybil-9.0.0/docs/examples/rest_text_rest_src/pytest.ini000066400000000000000000000000411471471302700233670ustar00rootroot00000000000000[pytest] addopts = -p no:doctest sybil-9.0.0/docs/examples/rest_text_rest_src/setup.py000066400000000000000000000006301471471302700230540ustar00rootroot00000000000000from setuptools import setup, find_packages setup( name='myproj', version='1.2.3', description="An example project...", packages=find_packages(exclude=['tests']), python_requires=">=3.6", extras_require=dict( test=[ 'pytest', 'pytest-cov', 'sybil', ], docs=[ 'furo', 'sphinx' ] ), ) sybil-9.0.0/docs/examples/rest_text_rest_src/src/000077500000000000000000000000001471471302700221325ustar00rootroot00000000000000sybil-9.0.0/docs/examples/rest_text_rest_src/src/myproj/000077500000000000000000000000001471471302700234525ustar00rootroot00000000000000sybil-9.0.0/docs/examples/rest_text_rest_src/src/myproj/__init__.py000066400000000000000000000006021471471302700255610ustar00rootroot00000000000000""" This package foo's stuff so it bars. :data:`SEED` is normally random, but if you like you can set it as follows: .. code-block:: python import myproj myproj.SEED = 0.2 """ import random SEED: float = random.random() def foo(text: str): """ Put some ``bar`` on your ``foo``! >>> foo('stuff') 'stuff bar bar' """ return text+int(SEED*10)*' bar' sybil-9.0.0/docs/index.rst000066400000000000000000000017051471471302700154440ustar00rootroot00000000000000.. include:: ../README.rst :end-before: The `documentation Sybil comes with an array of included :ref:`parsers `, including ones for :mod:`doctest` and :rst:dir:`code-block` examples, and is designed so that it's easy to provide your own additional parsers. .. toctree:: :maxdepth: 3 quickstart.rst use.rst patterns.rst Test Runners ReST Parsers markdown.rst myst.rst Developing Parsers .. toctree:: :maxdepth: 1 concepts.rst api.rst development.rst changes.rst license.rst Why Sybil? 
========== Sybil is heavily inspired by `Manuel`__, which was named after a `Fawlty Towers`__ character, and so it seemed only right to pick another Fawlty Towers character name for this library. __ https://pypi.org/project/manuel/ __ https://en.wikipedia.org/wiki/Fawlty_Towers Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` sybil-9.0.0/docs/integration.rst000066400000000000000000000057541471471302700166700ustar00rootroot00000000000000Test runner integration ======================= Sybil aims to integrate with all major Python test runners. Those currently catered for explicitly are listed below, but you may find that one of these integration methods will work as required with other test runners. If not, please file an issue on GitHub. To show how the integration options work, the following documentation examples will be tested. They use :ref:`doctests `, :rst:dir:`code blocks ` and require a temporary directory: .. literalinclude:: examples/integration/docs/example.rst :language: rest .. _pytest_integration: pytest ~~~~~~ You should install Sybil with the ``pytest`` extra, to ensure you have a compatible version of `pytest`__: __ https://docs.pytest.org .. code-block:: bash pip install sybil[pytest] To have `pytest`__ check the examples, Sybil makes use of the ``pytest_collect_file`` hook. To use this, configuration is placed in a ``conftest.py`` in your documentation source directory, as shown below. ``pytest`` should be invoked from a location that has the opportunity to recurse into that directory: __ https://docs.pytest.org .. literalinclude:: examples/integration/docs/conftest.py The file glob passed as ``pattern`` should match any documentation source files that contain examples which you would like to be checked. As you can see, if your examples require any fixtures, these can be requested by passing their names to the ``fixtures`` argument of the :class:`~sybil.Sybil` class. 
These will be available in the :class:`~sybil.Document` :class:`~sybil.Document.namespace` in a way that should feel natural to ``pytest`` users. The ``setup`` and ``teardown`` parameters can still be used to pass :class:`~sybil.Document` setup and teardown callables. The ``path`` parameter, however, is ignored. .. note:: pytest provides its own doctest plugin, which can cause problems. It should be disabled by including the following in your pytest configuration file: .. literalinclude:: examples/quickstart/pytest.ini :language: ini .. _unitttest_integration: unittest ~~~~~~~~ To have :ref:`unittest-test-discovery` check the example, Sybil makes use of the `load_tests protocol`__. As such, the following should be placed in a test module, say ``test_docs.py``, where the unit test discovery process can find it: __ https://docs.python.org/3/library/unittest.html#load-tests-protocol .. literalinclude:: examples/integration/unittest/test_docs.py The ``path`` parameter gives the path, relative to the file containing this code, that contains the documentation source files. The file glob passed as ``pattern`` should match any documentation source files that contain examples which you would like to be checked. Any setup or teardown necessary for your tests can be carried out in callables passed to the ``setup`` and ``teardown`` parameters, which are both called with the :class:`~sybil.Document` :class:`~sybil.Document.namespace`. The ``fixtures`` parameter is ignored. sybil-9.0.0/docs/license.rst000066400000000000000000000000741471471302700157550ustar00rootroot00000000000000======= License ======= .. literalinclude:: ../LICENSE.txt sybil-9.0.0/docs/markdown.rst000066400000000000000000000155771471471302700161730ustar00rootroot00000000000000Markdown Parsers ================ Sybil supports `Markdown`__, including `GitHub Flavored Markdown`__ and :external+myst:doc:`MyST `. If you are using MyST, then you should use the :doc:`MyST parsers `. 
For other flavors of markdown, the parsers described below support extracting and checking examples from Markdown source, including the ability to :ref:`skip ` the evaluation of examples where necessary. __ https://commonmark.org/ __ https://github.github.com/markdown/ .. _markdown-doctest-parser: doctest ------- Doctest examples in ``python`` `fenced code blocks`__ can be checked with the :class:`~sybil.parsers.markdown.PythonCodeBlockParser`. __ https://spec.commonmark.org/0.30/#fenced-code-blocks For example: .. literalinclude:: examples/markdown/doctest.md :language: markdown Both examples in the single block above can be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.markdown import PythonCodeBlockParser sybil = Sybil(parsers=[PythonCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/markdown/doctest.md', sybil, expected=2) .. _markdown-codeblock-parser: Code blocks ----------- The codeblock parsers extract examples from `fenced code blocks`__ and "invisible" code blocks in HTML-style Markdown multi-line comments. __ https://spec.commonmark.org/0.30/#fenced-code-blocks Python ~~~~~~ Python examples can be checked in ``python`` `fenced code blocks`__ using the :class:`~sybil.parsers.markdown.PythonCodeBlockParser`. __ https://spec.commonmark.org/0.30/#fenced-code-blocks Including all the boilerplate necessary for examples to successfully evaluate and be checked can hinder writing documentation. To help with this, "invisible" code blocks are also supported. These take advantage of HTML-style Markdown block comments. For example: .. literalinclude:: examples/markdown/codeblock-python.md :language: markdown These examples can be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.markdown import PythonCodeBlockParser sybil = Sybil(parsers=[PythonCodeBlockParser()]) .. 
invisible-code-block: python from tests.helpers import check_path check_path('examples/markdown/codeblock-python.md', sybil, expected=2) .. _markdown-codeblock-other: Other languages ~~~~~~~~~~~~~~~ :class:`~sybil.parsers.markdown.CodeBlockParser` can be used to check examples in any language you require, either by instantiating with a specified language and evaluator, or by subclassing to create your own parser. As an example, let's look at evaluating bash commands in a subprocess and checking the output is as expected: .. literalinclude:: examples/markdown/codeblock-bash.md :language: markdown .. -> bash_document_text We can do this using :class:`~sybil.parsers.markdown.CodeBlockParser` as follows: .. code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.markdown import CodeBlockParser def evaluate_bash(example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) parser = CodeBlockParser(language='bash', evaluator=evaluate_bash) sybil = Sybil(parsers=[parser]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/markdown/codeblock-bash.md', sybil, expected=1) Alternatively, we can create our own parser class and use it as follows: .. code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.markdown import CodeBlockParser class BashCodeBlockParser(CodeBlockParser): language = 'bash' def evaluate(self, example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) sybil = Sybil([BashCodeBlockParser()]) .. 
invisible-code-block: python from tests.helpers import check_path check_path('examples/markdown/codeblock-bash.md', sybil, expected=1) .. _markdown-skip-parser: Skipping examples ----------------- :class:`~sybil.parsers.markdown.SkipParser` takes advantage of Markdown comments to allow checking of specified examples to be skipped. For example: .. literalinclude:: examples/markdown/skip.md :language: markdown :lines: 1-8 If you need to skip a collection of examples, this can be done as follows: .. literalinclude:: examples/markdown/skip.md :language: markdown :lines: 10-25 You can also add conditions to either ``next`` or ``start`` as shown below: .. literalinclude:: examples/markdown/skip.md :language: markdown :lines: 27-38 As you can see, any names used in the expression passed to ``if`` must be present in the document's :attr:`~sybil.Document.namespace`. :ref:`invisible code blocks `, :class:`setup ` methods or :ref:`fixtures ` are good ways to provide these. When a condition is used to skip one or more following example, it will be reported as a skipped test in your test runner. If you wish to have unconditional skips show up as skipped tests, this can be done as follows: .. literalinclude:: examples/markdown/skip.md :language: markdown :lines: 40-47 This can also be done when skipping collections of examples: .. literalinclude:: examples/markdown/skip.md :language: markdown :lines: 49-58 The above examples could be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.markdown import PythonCodeBlockParser, SkipParser sybil = Sybil(parsers=[PythonCodeBlockParser(), SkipParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path( 'examples/markdown/skip.md', sybil, expected=15, expected_skips=('not yet working', 'Fix in v5', 'Fix in v5'), ) .. 
_markdown-clear-namespace: Clearing the namespace ---------------------- If you want to isolate the testing of your examples within a single source file, you may want to clear the :class:`~sybil.Document.namespace`. This can be done as follows: .. literalinclude:: examples/markdown/clear.md :language: rest The following configuration is required: .. code-block:: python from sybil import Sybil from sybil.parsers.markdown import PythonCodeBlockParser, ClearNamespaceParser sybil = Sybil(parsers=[PythonCodeBlockParser(), ClearNamespaceParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/markdown/clear.md', sybil, expected=4) sybil-9.0.0/docs/myst.rst000066400000000000000000000224031471471302700153270ustar00rootroot00000000000000MyST Parsers ============ Sybil includes a range of parsers for extracting and checking examples from :external+myst:doc:`MyST ` including the ability to :ref:`skip ` the evaluation of examples where necessary. .. _myst-doctest-parser: doctest ------- A selection of parsers are included that can extract and check doctest examples in ``python`` `fenced code blocks`__, MyST ``code-block`` :ref:`directives ` and MyST ``doctest`` :ref:`directives `. __ https://spec.commonmark.org/0.30/#fenced-code-blocks Most cases can be covered using a :class:`sybil.parsers.myst.PythonCodeBlockParser`. For example: .. literalinclude:: examples/myst/doctest.md :language: markdown All three examples in the two blocks above can be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.myst import PythonCodeBlockParser sybil = Sybil(parsers=[PythonCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/doctest.md', sybil, expected=3) Alternatively, the ReST :ref:`doctest parser ` will find all doctest examples in a Markdown file. If any should not be checked, you can make use of the :ref:`skip ` parser. .. 
note:: You can only use the ReST :ref:`doctest parser ` if no doctest examples are contained in examples parsed by the other parsers listed here. If you do, :class:`ValueError` exceptions relating to overlapping regions will be raised. ``doctest`` directive ~~~~~~~~~~~~~~~~~~~~~ If you have made use of MyST ``doctest`` :ref:`directives ` such as this: .. literalinclude:: examples/myst/doctest-directive.md :language: markdown You can use the :class:`sybil.parsers.myst.DocTestDirectiveParser` as follows: .. code-block:: python from sybil import Sybil from sybil.parsers.myst import DocTestDirectiveParser sybil = Sybil(parsers=[DocTestDirectiveParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/doctest-directive.md', sybil, expected=2) .. note:: You will have to enable :external+sphinx:doc:`sphinx.ext.doctest ` in your ``conf.py`` for Sphinx to render :rst:dir:`doctest` directives. ``eval-rst`` directive ~~~~~~~~~~~~~~~~~~~~~~ If you have used a ReST :rst:dir:`doctest` directive inside a MyST ``eval-rst`` :ref:`directive ` such as this: .. literalinclude:: examples/myst/doctest-eval-rst.md :language: markdown Then you would use the normal :class:`sybil.parsers.rest.DocTestDirectiveParser` as follows: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import DocTestDirectiveParser as ReSTDocTestDirectiveParser sybil = Sybil(parsers=[ReSTDocTestDirectiveParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/doctest-eval-rst.md', sybil, expected=1) .. note:: You will have to enable :external+sphinx:doc:`sphinx.ext.doctest ` in your ``conf.py`` for Sphinx to render :rst:dir:`doctest` directives. .. _myst-codeblock-parser: Code blocks ----------- The codeblock parsers extract examples from `fenced code blocks`__, MyST ``code-block`` :ref:`directives ` and "invisible" code blocks in both styles of Markdown multi-line comments. 
__ https://spec.commonmark.org/0.30/#fenced-code-blocks Python ~~~~~~ Python examples can be checked in either ``python`` `fenced code blocks`__ or MyST ``code-block`` :ref:`directives ` using the :class:`sybil.parsers.myst.PythonCodeBlockParser`. __ https://spec.commonmark.org/0.30/#fenced-code-blocks Including all the boilerplate necessary for examples to successfully evaluate and be checked can hinder writing documentation. To help with this, "invisible" code blocks are also supported. These take advantage of either style of Markdown block comments. For example: .. literalinclude:: examples/myst/codeblock-python.md :language: markdown These examples can be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.myst import PythonCodeBlockParser sybil = Sybil(parsers=[PythonCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/codeblock-python.md', sybil, expected=4) .. _myst-codeblock-other: Other languages ~~~~~~~~~~~~~~~ :class:`sybil.parsers.myst.CodeBlockParser` can be used to check examples in any language you require, either by instantiating with a specified language and evaluator, or by subclassing to create your own parser. As an example, let's look at evaluating bash commands in a subprocess and checking the output is as expected: .. literalinclude:: examples/myst/codeblock-bash.md :language: markdown .. -> bash_document_text We can do this using :class:`~sybil.parsers.myst.CodeBlockParser` as follows: .. 
code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.myst import CodeBlockParser def evaluate_bash(example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) parser = CodeBlockParser(language='bash', evaluator=evaluate_bash) sybil = Sybil(parsers=[parser]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/codeblock-bash.md', sybil, expected=1) Alternatively, we can create our own parser class and use it as follows: .. code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.myst import CodeBlockParser class BashCodeBlockParser(CodeBlockParser): language = 'bash' def evaluate(self, example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) sybil = Sybil([BashCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/codeblock-bash.md', sybil, expected=1) .. _myst-skip-parser: Skipping examples ----------------- :class:`sybil.parsers.myst.SkipParser` takes advantage of Markdown comments to allow checking of specified examples to be skipped. For example: .. literalinclude:: examples/myst/skip.md :language: markdown :lines: 1-8 You can also use HTML-style comments: .. literalinclude:: examples/myst/skip.md :language: markdown :lines: 60-67 If you need to skip a collection of examples, this can be done as follows: .. literalinclude:: examples/myst/skip.md :language: markdown :lines: 10-25 You can also add conditions to either ``next`` or ``start`` as shown below: .. 
literalinclude:: examples/myst/skip.md :language: markdown :lines: 27-38 As you can see, any names used in the expression passed to ``if`` must be present in the document's :attr:`~sybil.Document.namespace`. :ref:`invisible code blocks `, :class:`setup ` methods or :ref:`fixtures ` are good ways to provide these. When a condition is used to skip one or more following example, it will be reported as a skipped test in your test runner. If you wish to have unconditional skips show up as skipped tests, this can be done as follows: .. literalinclude:: examples/myst/skip.md :language: markdown :lines: 40-47 This can also be done when skipping collections of examples: .. literalinclude:: examples/myst/skip.md :language: markdown :lines: 49-58 The above examples could be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.myst import PythonCodeBlockParser, SkipParser sybil = Sybil(parsers=[PythonCodeBlockParser(), SkipParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path( 'examples/myst/skip.md', sybil, expected=17, expected_skips=('not yet working', 'Fix in v5', 'Fix in v5'), ) .. _myst-clear-namespace: Clearing the namespace ---------------------- If you want to isolate the testing of your examples within a single source file, you may want to clear the :class:`~sybil.Document.namespace`. This can be done as follows: .. literalinclude:: examples/myst/clear.md :language: rest The following configuration is required: .. code-block:: python from sybil import Sybil from sybil.parsers.myst import PythonCodeBlockParser, ClearNamespaceParser sybil = Sybil(parsers=[PythonCodeBlockParser(), ClearNamespaceParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/clear.md', sybil, expected=4) You can also used HTML-style comments as follows: .. literalinclude:: examples/myst/clear-html-comment.md :language: rest .. 
invisible-code-block: python from tests.helpers import check_path check_path('examples/myst/clear-html-comment.md', sybil, expected=4) sybil-9.0.0/docs/parsers.rst000066400000000000000000000130551471471302700160150ustar00rootroot00000000000000Developing your own parsers =========================== Sybil :term:`parsers ` are callables that take a :term:`document` and yield a sequence of :term:`regions `. A :term:`region` contains the character position of the :attr:`~sybil.Region.start` and :attr:`~sybil.Region.end` of the example in the document's :attr:`~sybil.Document.text`, along with a :attr:`~sybil.Region.parsed` version of the example and a callable :attr:`~sybil.Region.evaluator`. Parsers are free to access any documented attribute of the :class:`~sybil.Document` although will most likely only need to work with :attr:`~sybil.Document.text`. The :attr:`~sybil.Document.namespace` attribute should **not** be modified. The :attr:`~sybil.Region.parsed` version can take any form and only needs to be understood by the :attr:`~sybil.Region.evaluator`. That :term:`evaluator` will be called with an :term:`example` constructed from the :term:`document` and the :term:`region` and should return a :ref:`false value ` if the example is as expected. Otherwise, it should either raise an exception or return a textual description in the event of the example not being as expected. Evaluators may also modify the document's :attr:`~sybil.Document.namespace` or :any:`push ` and :any:`pop ` evaluators. :class:`~sybil.Example` instances are used to wrap up all the attributes you're likely to need when writing an evaluator and all documented attributes are fine to use. In particular, :attr:`~sybil.Example.parsed` is the parsed value provided by the parser when instantiating the :class:`~sybil.Region` and :attr:`~sybil.Example.namespace` is a reference to the document's namespace. Evaluators **are** free to modify the :attr:`~sybil.Document.namespace` if they need to. 
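To make the parser / region / evaluator relationship concrete, here is a minimal standalone sketch. It uses a simplified stand-in for Sybil's :class:`~sybil.Region` and plain functions, so the names and signatures here are illustrative rather than Sybil's actual API:

```python
import re
from dataclasses import dataclass
from typing import Any, Callable, Iterator, Tuple

# Simplified stand-in for sybil.Region: start/end character positions,
# a parsed value, and the evaluator that will check the example.
@dataclass
class Region:
    start: int
    end: int
    parsed: Any
    evaluator: Callable[[Any, dict], None]

# A hypothetical evaluator: store the parsed name/value in the namespace.
def evaluate_assignment(parsed: Tuple[str, str], namespace: dict) -> None:
    name, value = parsed
    namespace[name] = value

# A hypothetical "parser": find lines such as "colour: blue" in the text.
ASSIGNMENT = re.compile(r'^(\w+): (\w+)$', re.MULTILINE)

def parse_assignments(text: str) -> Iterator[Region]:
    for match in ASSIGNMENT.finditer(text):
        yield Region(match.start(), match.end(), match.groups(), evaluate_assignment)

# Evaluating each region in document order builds up the namespace.
namespace: dict = {}
for region in parse_assignments("colour: blue\nshape: square\n"):
    region.evaluator(region.parsed, namespace)
print(namespace)  # {'colour': 'blue', 'shape': 'square'}
```

In the real API, the evaluator receives a single :class:`~sybil.Example` giving access to both the parsed value and the document's namespace.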
If you need to write your own parser, you should consult the :doc:`api` to see if suitable :term:`lexers ` already exist for the source language containing your examples. Worked example ~~~~~~~~~~~~~~ As an example, let's look at a parser suitable for evaluating bash commands in a subprocess and checking the output is as expected:: .. code-block:: bash $ echo hi there hi there .. -> bash_document_text Since this is a ReStructured Text code block, the simplest thing we could do would be to use the existing support for :ref:`other languages `: .. code-block:: python from subprocess import check_output from sybil import Sybil from sybil.parsers.rest import CodeBlockParser def evaluate_bash_block(example): command, expected = example.parsed.strip().split('\n') assert command.startswith('$ ') command = command[2:].split() actual = check_output(command).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) parser = CodeBlockParser(language='bash', evaluator=evaluate_bash_block) sybil = Sybil(parsers=[parser], pattern='*.rst') .. invisible-code-block: python from tests.helpers import check_text check_text(bash_document_text, sybil) Another alternative would be to start with the :class:`lexer for ReST directives `. Here, the parsed version consists of a tuple of the command to run and the expected output: ..
code-block:: python from subprocess import check_output from typing import Iterable from sybil import Sybil, Document, Region, Example from sybil.parsers.rest.lexers import DirectiveLexer def evaluate_bash_block(example: Example): command, expected = example.parsed actual = check_output(command).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) def parse_bash_blocks(document: Document) -> Iterable[Region]: lexer = DirectiveLexer(directive='code-block', arguments='bash') for lexed in lexer(document): command, output = lexed.lexemes['source'].strip().split('\n') assert command.startswith('$ ') parsed = command[2:].split(), output yield Region(lexed.start, lexed.end, parsed, evaluate_bash_block) sybil = Sybil(parsers=[parse_bash_blocks], pattern='*.rst') .. invisible-code-block: python from tests.helpers import check_text check_text(bash_document_text, sybil) .. _parser-from-scratch: Finally, the parser could be implemented from scratch, with the parsed version again consisting of a tuple of the command to run and the expected output: .. code-block:: python from subprocess import check_output import re from sybil import Sybil, Region from sybil.parsers.abstract.lexers import BlockLexer BASHBLOCK_START = re.compile(r'^\.\.\s*code-block::\s*bash') BASHBLOCK_END = r'(\n\Z|\n(?=\S))' def evaluate_bash_block(example): command, expected = example.parsed actual = check_output(command).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) def parse_bash_blocks(document): lexer = BlockLexer(BASHBLOCK_START, BASHBLOCK_END) for region in lexer(document): command, output = region.lexemes['source'].strip().split('\n') assert command.startswith('$ ') region.parsed = command[2:].split(), output region.evaluator = evaluate_bash_block yield region sybil = Sybil(parsers=[parse_bash_blocks], pattern='*.rst') ..
invisible-code-block: python from tests.helpers import check_text check_text(bash_document_text, sybil) sybil-9.0.0/docs/patterns.rst000066400000000000000000000074771471471302700162110ustar00rootroot00000000000000.. currentmodule:: sybil Patterns of Use =============== .. invisible-code-block: python from tests.helpers import check_tree Several commons patterns of use for Sybil are covered here. Documentation and source examples in Restructured Text ------------------------------------------------------ If your project looks like this:: ├─docs/ │ ├─conf.py │ └─index.rst ├─src/ │ └─myproj/ │ └─__init__.py ├─conftest.py ├─pytest.ini └─setup.py .. --> tree .. invisible-code-block: python check_tree(tree, 'examples/rest_text_rest_src') And if your documentation looks like this: .. literalinclude:: examples/rest_text_rest_src/docs/index.rst :language: rest With your examples in source code looking like this: .. literalinclude:: examples/rest_text_rest_src/src/myproj/__init__.py Then the following configuration in a ``conftest.py`` will ensure all your examples are correct: .. literalinclude:: examples/rest_text_rest_src/conftest.py Documentation in MyST and source examples in Restructured Text -------------------------------------------------------------- If your project looks like this:: ├─docs/ │ ├─conf.py │ └─index.md ├─src/ │ └─myproj/ │ └─__init__.py ├─conftest.py ├─pytest.ini └─setup.py .. --> tree .. invisible-code-block: python check_tree(tree, 'examples/myst_text_rest_src') And if your documentation looks like this: .. literalinclude:: examples/myst_text_rest_src/docs/index.md :language: markdown With your examples in source code looking like this: .. literalinclude:: examples/myst_text_rest_src/src/myproj/__init__.py Then the following configuration in a ``conftest.py`` will ensure all your examples are correct: .. 
literalinclude:: examples/myst_text_rest_src/conftest.py Linting and checking examples ----------------------------- If you wish to perform linting of examples in addition to checking that they are correct, you will need to parse each documentation file once for linting and once for checking. This can be done by having one :class:`Sybil` to do the linting and another :class:`Sybil` to do the checking. Given documentation that looks like this: .. literalinclude:: examples/linting_and_checking/index.rst Then the following configuration in a ``conftest.py`` could be used to ensure all your examples are both correct and lint-free: .. literalinclude:: examples/linting_and_checking/conftest.py .. _migrating-from-sphinx.ext.doctest: Migrating from sphinx.ext.doctest --------------------------------- Sybil currently has partial support for :external+sphinx:doc:`sphinx.ext.doctest `. The list below shows how to approach migrating or supporting the various directives from ``sphinx.ext.doctest``. Adding further support won't be hard, so if anything is missing that's holding you back, please open an issue on `GitHub`__. After that, it's mainly left to stop running ``make doctest``! __ https://github.com/simplistix/sybil/issues - :rst:dir:`testsetup` can be replaced with :ref:`invisible-code-block `. - :rst:dir:`testcleanup` can be replaced with :ref:`invisible-code-block `. - :rst:dir:`doctest` is supported using the :class:`~sybil.parsers.rest.DocTestDirectiveParser` as described in the :ref:`doctest-parser` section. Some of the options aren't supported, but their behaviour can be replaced by preceding the ``doctest`` with a :ref:`skip ` directive. - :rst:dir:`testcode` and :rst:dir:`testoutput` would need parsers and evaluators to be written, however, they could probably just be replaced with a :ref:`doctest-parser` block. - ``groups`` aren't supported, but you can achieve test isolation using :ref:`clear-namespace `. 
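As a sketch of the first bullet above, using a hypothetical ``myproj`` module, a ``testsetup`` block migrates to an invisible code block, which Sybil evaluates but Sphinx never renders:

```rest
.. Before: evaluated only by "make doctest" via sphinx.ext.doctest

.. testsetup::

   import myproj

.. After: evaluated by Sybil and never rendered by Sphinx

.. invisible-code-block: python

   import myproj
```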
sybil-9.0.0/docs/quickstart.rst000066400000000000000000000020351471471302700165240ustar00rootroot00000000000000Quickstart ========== Sybil is installed as a standard Python package in whatever way works best for you. If you're using it with `pytest`__, you should install it with the ``pytest`` extra, to ensure you have compatible versions: __ https://docs.pytest.org .. code-block:: bash pip install sybil[pytest] Here's how you would set up a ``conftest.py`` in the root of your project such that running `pytest`__ would check examples in your project's source code and `Sphinx`__ source. Python :rst:dir:`code-block` and :ref:`doctest ` examples will be checked: __ https://docs.pytest.org __ https://www.sphinx-doc.org/ .. literalinclude:: examples/quickstart/conftest.py You'll also want to disable pytest's own doctest plugin by putting this in your pytest config: .. literalinclude:: examples/quickstart/pytest.ini :language: ini An example of a documentation source file that could be checked using the above configuration is shown below: .. literalinclude:: examples/quickstart/example.rst :language: rest sybil-9.0.0/docs/rest.rst000066400000000000000000000243571471471302700153220ustar00rootroot00000000000000Restructured Text Parsers ========================= Sybil includes a range of parsers for extracting and checking examples from Restructured Text including the ability to :ref:`capture ` previous blocks into a variable in the :attr:`~sybil.Document.namespace`, and :ref:`skip ` the evaluation of examples where necessary. .. _doctest-parser: doctest ------- Parsers are included for both classic :ref:`doctest ` examples along with those in :rst:dir:`doctest` directives. They are evaluated in the document's :attr:`~sybil.Document.namespace`. The parsers can optionally be instantiated with :ref:`doctest option flags`. Here are some classic :ref:`doctest ` examples: .. 
literalinclude:: examples/rest/doctest.rst :language: rest These could be parsed with a :class:`sybil.parsers.rest.DocTestParser` in the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import DocTestParser sybil = Sybil(parsers=[DocTestParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/rest/doctest.rst', sybil, expected=3) If you want to ensure that only examples within a :rst:dir:`doctest` directive are checked, and any other doctest examples are ignored, then you can use the :class:`sybil.parsers.rest.DocTestDirectiveParser` instead: .. literalinclude:: examples/rest/doctest-directive.rst :language: rest These could be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import DocTestDirectiveParser sybil = Sybil(parsers=[DocTestDirectiveParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/rest/doctest-directive.rst', sybil, expected=1) .. note:: You will have to enable :external+sphinx:doc:`sphinx.ext.doctest ` in your ``conf.py`` for Sphinx to render :rst:dir:`doctest` directives. .. _codeblock-parser: Code blocks ----------- The codeblock parsers extract examples from Sphinx :rst:dir:`code-block` directives and evaluate them in the document's :attr:`~sybil.Document.namespace`. The boilerplate necessary for examples to successfully evaluate and be checked can hinder the quality of documentation. To help with this, these parsers also evaluate "invisible" code blocks such as this one: .. literalinclude:: examples/quickstart/example.rst :language: rest :lines: 5-9 These take advantage of Sphinx `comment`__ syntax so that the code block will not be rendered in your documentation but can be used to set up the document's namespace or make assertions about what the evaluation of other examples has put in that namespace.
__ http://www.sphinx-doc.org/en/stable/rest.html#comments Python ~~~~~~ Python code blocks can be checked using the :class:`sybil.parsers.rest.PythonCodeBlockParser`. Here's a Python code block and an invisible Python code block that checks it: .. literalinclude:: examples/rest/codeblock-python.rst :language: rest These could be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import PythonCodeBlockParser sybil = Sybil(parsers=[PythonCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path('examples/rest/codeblock-python.rst', sybil, expected=2) .. note:: You should not wrap doctest examples in a ``python`` :rst:dir:`code-block`, they will render correctly without that and you should use the :ref:`doctest-parser` parser to check them. .. _codeblock-other: Other languages ~~~~~~~~~~~~~~~ .. note:: If your :rst:dir:`code-block` examples define content, such as JSON or YAML, rather than executable code, you may find the :ref:`capture ` parser is more useful. :class:`sybil.parsers.rest.CodeBlockParser` can be used to check examples in any language you require, either by instantiating with a specified language and evaluator, or by subclassing to create your own parser. As an example, let's look at evaluating bash commands in a subprocess and checking the output is as expected:: .. code-block:: bash $ echo hi there hi there .. -> bash_document_text We can do this using :class:`~sybil.parsers.rest.CodeBlockParser` as follows: .. 
code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.rest import CodeBlockParser def evaluate_bash(example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) parser = CodeBlockParser(language='bash', evaluator=evaluate_bash) sybil = Sybil(parsers=[parser]) .. invisible-code-block: python from tests.helpers import check_text check_text(bash_document_text, sybil) Alternatively, we can create our own parser class and use it as follows: .. code-block:: python from subprocess import check_output from textwrap import dedent from sybil import Sybil from sybil.parsers.rest import CodeBlockParser class BashCodeBlockParser(CodeBlockParser): language = 'bash' def evaluate(self, example): command, expected = dedent(example.parsed).strip().split('\n') actual = check_output(command[2:].split()).strip().decode('ascii') assert actual == expected, repr(actual) + ' != ' + repr(expected) sybil = Sybil([BashCodeBlockParser()]) .. invisible-code-block: python from tests.helpers import check_text check_text(bash_document_text, sybil) .. _capture-parser: Capturing blocks ---------------- :func:`sybil.parsers.rest.CaptureParser` takes advantage of Sphinx `comment`__ syntax to introduce a special comment that takes the preceding ReST `block`__ and inserts its raw content into the document's :attr:`~sybil.Document.namespace` using the name specified. __ http://www.sphinx-doc.org/en/stable/rest.html#comments __ http://www.sphinx-doc.org/en/stable/rest.html?highlight=block#source-code For example:: A simple example:: root.txt subdir/ subdir/file.txt subdir/logs/ .. -> expected_listing .. --> capture_example This listing could be captured into the :attr:`~sybil.Document.namespace` using the following configuration: ..
code-block:: python from sybil import Sybil from sybil.parsers.rest import CaptureParser sybil = Sybil(parsers=[CaptureParser()]) .. invisible-code-block: python document = check_text(capture_example, sybil) expected_listing = document.namespace['expected_listing'] The above documentation source, when parsed by this parser and then evaluated, would mean that ``expected_listing`` could be used in other examples in the document: >>> expected_listing.split() ['root.txt', 'subdir/', 'subdir/file.txt', 'subdir/logs/'] It can also be used with :rst:dir:`code-block` examples that define content rather than executable code, for example: :: .. code-block:: json { "a key": "value", "b key": 42 } .. -> json_source .. --> capture_example .. invisible-code-block: python document = check_text(capture_example, sybil) json_source = document.namespace['json_source'] The JSON source can now be used as follows: >>> import json >>> json.loads(json_source) {'a key': 'value', 'b key': 42} .. note:: It's important that the capture directive, ``.. -> json_source`` in this case, has identical indentation to the code block above it for this to work. .. _skip-parser: Skipping examples ----------------- :class:`sybil.parsers.rest.SkipParser` takes advantage of Sphinx `comment`__ syntax to introduce special comments that allow other examples in the document to be skipped. This can be useful if they include pseudo code or examples that can only be evaluated on a particular version of Python. __ https://www.sphinx-doc.org/en/stable/rest.html#comments For example: .. literalinclude:: examples/rest/skip.rst :language: rest :lines: 1-6 If you need to skip a collection of examples, this can be done as follows: .. literalinclude:: examples/rest/skip.rst :language: rest :lines: 8-15 You can also add conditions to either ``next`` or ``start`` as shown below: ..
literalinclude:: examples/rest/skip.rst :language: rest :lines: 17-24 As you can see, any names used in the expression passed to ``if`` must be present in the document's :attr:`~sybil.Document.namespace`. :ref:`invisible code blocks `, :class:`setup ` methods or :ref:`fixtures ` are good ways to provide these. When a condition is used to skip one or more following examples, it will be reported as a skipped test in your test runner. If you wish to have unconditional skips show up as skipped tests, this can be done as follows: .. literalinclude:: examples/rest/skip.rst :language: rest :lines: 26-31 This can also be done when skipping collections of examples: .. literalinclude:: examples/rest/skip.rst :language: rest :lines: 33-40 The above examples could be checked with the following configuration: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import DocTestParser, SkipParser sybil = Sybil(parsers=[DocTestParser(), SkipParser()]) .. invisible-code-block: python from tests.helpers import check_path check_path( 'examples/rest/skip.rst', sybil, expected=15, expected_skips=('not yet working', 'Fix in v5', 'Fix in v5'), ) .. _clear-namespace: Clearing the namespace ---------------------- If you want to isolate the testing of your examples within a single source file, you may want to clear the :class:`~sybil.Document.namespace`. This can be done as follows: .. literalinclude:: examples/rest/clear.rst :language: rest The following configuration is required: .. code-block:: python from sybil import Sybil from sybil.parsers.rest import DocTestParser, ClearNamespaceParser sybil = Sybil(parsers=[DocTestParser(), ClearNamespaceParser()]) ..
invisible-code-block: python from tests.helpers import check_path check_path('examples/rest/clear.rst', sybil, expected=4) sybil-9.0.0/docs/use.rst000066400000000000000000000051111471471302700151240ustar00rootroot00000000000000Usage ===== :term:`Sybil` works by discovering a series of :term:`documents ` as part of the :term:`test runner` :doc:`integration `. These documents are then :term:`parsed ` into a set of non-overlapping :term:`regions `. When the tests are run, each :term:`region` is turned into an :term:`example` that is :term:`evaluated ` in the document's :term:`namespace`. The examples are evaluated in the order in which they appear in the document. If an example does not evaluate as expected, a test failure is reported and :term:`Sybil` continues on to evaluate the remaining :term:`examples ` in the :term:`document`. To use Sybil, you need to pick the :ref:`integration ` for your project's test runner and then configure appropriate :ref:`parsers ` for the examples in your project's documentation and source code. It's worth checking the :doc:`patterns` to see if the pattern required for your project is covered there. .. _integrations: Test runner integration ----------------------- Sybil is used by driving it through a test runner, with each example being presented as a test. The following test runners are currently supported: `pytest`__ Please use the :ref:`pytest integration `. :ref:`unittest ` Please use the :ref:`unittest integration `. `Twisted's trial`__ Please use the :ref:`unittest integration `. __ https://docs.pytest.org __ https://docs.twistedmatrix.com/en/stable/core/howto/trial.html .. _parsers: Parsers ------- Sybil parsers are what extract examples from source files and turn them into parsed examples with evaluators that can check if they are correct.
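The regions these parsers produce must be non-overlapping and are kept in document order. Here is a standalone sketch of that bookkeeping, mirroring what Sybil's :class:`~sybil.Document` does internally (a simplified illustration, not the actual API):

```python
from bisect import bisect

# Simplified sketch: keep (start, end) regions sorted by start position
# and reject any new region that overlaps a neighbour.
def add_region(regions: list, start: int, end: int) -> None:
    entry = (start, end)
    index = bisect(regions, entry)
    if index > 0 and regions[index - 1][1] > start:
        raise ValueError('overlaps previous region')
    if index < len(regions) and regions[index][0] < end:
        raise ValueError('overlaps next region')
    regions.insert(index, entry)

regions: list = []
add_region(regions, 10, 20)
add_region(regions, 0, 5)
print(regions)  # [(0, 5), (10, 20)]
```

Because the list stays sorted, each example can later be evaluated in the order it appears in the document.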
The parsers available depend on the source language of the files containing the examples you wish to check: - For ReStructured Text, typically ``.rst`` or ``.txt`` files, see :doc:`ReST Parsers `. - For Markdown, typically ``.md`` files, :doc:`CommonMark `, :doc:`GitHub Flavored Markdown ` and :doc:`MyST `, along with other flavours, are supported. - For Python source code, typically ``.py`` files, it depends on the markup used in the docstrings; both the :doc:`ReST parsers ` and :doc:`MyST parsers ` will work. The source files are presented as :any:`PythonDocument` instances that import the document's source file as a Python module, making names within it available in the document's :attr:`~sybil.Document.namespace`. It's also relatively easy to :doc:`develop your own parsers `. sybil-9.0.0/mypy.ini000066400000000000000000000002751471471302700143530ustar00rootroot00000000000000[mypy] strict = True disable_error_code = union-attr [mypy-sybil.integration.pytest] ; This file is mainly pytest internals where unit tests catch things, not typing ignore_errors = True sybil-9.0.0/setup.cfg000066400000000000000000000004711471471302700144730ustar00rootroot00000000000000[tool:pytest] addopts = --verbose --strict-markers -p no:doctest norecursedirs=functional .git docs/examples filterwarnings = ignore::DeprecationWarning [coverage:run] source = sybil, tests omit = /the/path /tmp/* [coverage:report] exclude_lines = pragma: no cover if TYPE_CHECKING: \.\.\. sybil-9.0.0/setup.py000066400000000000000000000024271471471302700143670ustar00rootroot00000000000000# See docs/license.rst for license details. 
# Copyright (c) 2017 onwards Chris Withers import os from setuptools import setup, find_packages base_dir = os.path.dirname(__file__) PYTEST_VERSION_SPEC = 'pytest>=8' setup( name='sybil', version='9.0.0', author='Chris Withers', author_email='chris@withers.org', license='MIT', description="Automated testing for the examples in your code and documentation.", long_description=open('README.rst').read(), url='https://github.com/simplistix/sybil', classifiers=[ 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 3', ], packages=find_packages(exclude=['tests', 'functional_tests']), package_data={"sybil": ["py.typed"]}, python_requires=">=3.9", extras_require=dict( pytest=[PYTEST_VERSION_SPEC], test=[ 'mypy', 'myst_parser', PYTEST_VERSION_SPEC, 'pytest-cov', 'seedir', 'testfixtures', 'types-PyYAML', ], build=[ 'furo', 'sphinx', 'twine', 'urllib3<2', # workaround for RTD builds failing with old SSL 'wheel', ] ), ) sybil-9.0.0/sybil/000077500000000000000000000000001471471302700137725ustar00rootroot00000000000000sybil-9.0.0/sybil/__init__.py000066400000000000000000000003171471471302700161040ustar00rootroot00000000000000from .sybil import Sybil from .document import Document from .region import Region, Lexeme from .example import Example __all__ = [ 'Sybil', 'Document', 'Region', 'Lexeme', 'Example', ] sybil-9.0.0/sybil/document.py000066400000000000000000000211521471471302700161630ustar00rootroot00000000000000import ast import re from ast import AsyncFunctionDef, FunctionDef, ClassDef, Constant, Module, Expr from bisect import bisect from collections.abc import Iterator from io import open from itertools import chain from pathlib import Path from typing import Any, Dict from typing import List, Tuple from .example import Example, SybilFailure, NotEvaluated from .python import import_path from .region import Region from .text import LineNumberOffsets from .typing import Parser, Evaluator class Document: """ This is 
Sybil's representation of a documentation source file. It will be instantiated by Sybil and provided to each parser in turn. Different types of document can be handled by subclassing to provide the required :any:`evaluation `. The required file extensions, such as ``'.py'``, can then be mapped to these subclasses using :class:`Sybil's ` ``document_types`` parameter. """ def __init__(self, text: str, path: str) -> None: #: This is the text of the documentation source file. self.text: str = text #: This is the absolute path of the documentation source file. self.path: str = path self.end: int = len(text) self.regions: List[Tuple[int, Region]] = [] #: This dictionary is the namespace in which all examples parsed from #: this document will be evaluated. self.namespace: Dict[str, Any] = {} self.evaluators: list[Evaluator] = [] @classmethod def parse(cls, path: str, *parsers: Parser, encoding: str = 'utf-8') -> 'Document': """ Read the text from the supplied path and parse it into a document using the supplied parsers. """ with open(path, encoding=encoding) as source: text = source.read() document = cls(text, path) for parser in parsers: for region in parser(document): document.add(region) return document def line_column(self, position: int) -> str: """ Return a line and column location in this document based on a character position. 
""" line = self.text.count('\n', 0, position)+1 col = position - self.text.rfind('\n', 0, position) return 'line {}, column {}'.format(line, col) def region_details(self, region: Region) -> str: return '{!r} from {} to {}'.format( region, self.line_column(region.start), self.line_column(region.end) ) def raise_overlap(self, *regions: Region) -> None: reprs = [] for region in regions: reprs.append(self.region_details(region)) raise ValueError('{} overlaps {}'.format(*reprs)) def add(self, region: Region) -> None: if region.start < 0: raise ValueError('{} is before start of document'.format( self.region_details(region) )) if region.end > self.end: raise ValueError('{} goes beyond end of document'.format( self.region_details(region) )) entry = (region.start, region) index = bisect(self.regions, entry) if index > 0: previous = self.regions[index-1][1] if previous.end > region.start: self.raise_overlap(previous, region) if index < len(self.regions): next = self.regions[index][1] if next.start < region.end: self.raise_overlap(region, next) self.regions.insert(index, entry) def examples(self) -> Iterator[Example]: """ Return the :term:`examples ` contained within this document. """ line = 1 place = 0 for _, region in self.regions: line += self.text.count('\n', place, region.start) line_start = self.text.rfind('\n', place, region.start) place = region.start yield Example(self, line, region.start-line_start, region, self.namespace) def __iter__(self) -> Iterator[Example]: return self.examples() def push_evaluator(self, evaluator: Evaluator) -> None: """ Push an :any:`Evaluator` onto this document's stack of evaluators if it is not already in that stack. When evaluating an :any:`Example`, any evaluators in the stack will be tried in order, starting with the most recently pushed. If an evaluator raises a :any:`NotEvaluated` exception, then the next evaluator in the stack will be attempted. 
If the stack is empty or all evaluators present raise :any:`NotEvaluated`, then the example's evaluator will be used. This is the most common case! """ if evaluator not in self.evaluators: self.evaluators.append(evaluator) def pop_evaluator(self, evaluator: Evaluator) -> None: """ Pop an :any:`Evaluator` off this document's stack of evaluators. If it is not present in that stack, the method does nothing. """ if evaluator in self.evaluators: self.evaluators.remove(evaluator) def evaluate(self, example: Example, evaluator: Evaluator) -> None: __tracebackhide__ = True for current_evaluator in chain(reversed(self.evaluators), (evaluator,)): try: result = current_evaluator(example) except NotEvaluated: continue else: if result: raise SybilFailure(example, result if isinstance(result, str) else repr(result)) else: break else: raise SybilFailure(example, f'{evaluator!r} should not raise NotEvaluated()') DOCSTRING_PUNCTUATION = re.compile('[rf]?(["\']{3}|["\'])') class PythonDocument(Document): """ A :class:`~sybil.Document` type that imports the document's source file as a Python module, making names within it available in the document's :attr:`~sybil.Document.namespace`. """ def __init__(self, text: str, path: str) -> None: super().__init__(text, path) self.push_evaluator(self.import_document) def import_document(self, example: Example) -> None: """ Imports the document's source file as a Python module when the first :class:`~sybil.example.Example` from it is evaluated. """ module = import_path(Path(self.path)) self.namespace.update(module.__dict__) self.pop_evaluator(self.import_document) raise NotEvaluated() class PythonDocStringDocument(PythonDocument): """ A :class:`~sybil.document.PythonDocument` subclass that only considers the text of docstrings in the document's source. 
""" @staticmethod def extract_docstrings(python_source_code: str) -> Iterator[Tuple[int, int, str]]: line_offsets = LineNumberOffsets(python_source_code) for node in ast.walk(ast.parse(python_source_code)): if not isinstance(node, (AsyncFunctionDef, FunctionDef, ClassDef, Module)): continue if not (node.body and isinstance(node.body[0], Expr)): continue docstring = node.body[0].value if isinstance(docstring, Constant): text = docstring.value else: continue if text is Ellipsis: continue node_start = line_offsets.get(docstring.lineno-1, docstring.col_offset) end_lineno = docstring.end_lineno or 1 end_col_offset = docstring.end_col_offset or 0 node_end = line_offsets.get(end_lineno-1, end_col_offset) punc = DOCSTRING_PUNCTUATION.match(python_source_code, node_start, node_end) punc_size = len(punc.group(1)) start = punc.end() end = node_end - punc_size yield start, end, text @classmethod def parse(cls, path: str, *parsers: Parser, encoding: str = 'utf-8') -> 'Document': """ Read the text from the supplied path to a Python source file and parse any docstrings it contains into a document using the supplied parsers. 
""" with open(path, encoding=encoding) as source: document = cls(source.read(), path) for start, end, text in cls.extract_docstrings(document.text): docstring_document = cls(text, path) for parser in parsers: for region in parser(docstring_document): region.start += start region.end += start document.add(region) return document sybil-9.0.0/sybil/evaluators/000077500000000000000000000000001471471302700161575ustar00rootroot00000000000000sybil-9.0.0/sybil/evaluators/__init__.py000066400000000000000000000000001471471302700202560ustar00rootroot00000000000000sybil-9.0.0/sybil/evaluators/capture.py000066400000000000000000000002171471471302700201740ustar00rootroot00000000000000from sybil import Example def evaluate_capture(example: Example) -> None: name, text = example.parsed example.namespace[name] = text sybil-9.0.0/sybil/evaluators/doctest.py000066400000000000000000000044601471471302700202020ustar00rootroot00000000000000from doctest import ( DocTest as BaseDocTest, DocTestRunner as BaseDocTestRunner, Example as BaseDocTestExample, set_unittest_reportflags, ) from typing import Any, Dict, List, Optional from sybil import Example class DocTest(BaseDocTest): def __init__( self, examples: List[BaseDocTestExample], globs: Dict[str, Any], name: str, filename: Optional[str], lineno: Optional[int], docstring: Optional[str], ) -> None: # do everything like regular doctests, but don't make a copy of globs BaseDocTest.__init__(self, examples, globs, name, filename, lineno, docstring) self.globs = globs class DocTestRunner(BaseDocTestRunner): def __init__(self, optionflags: int) -> None: _unittest_reportflags = set_unittest_reportflags(0) set_unittest_reportflags(_unittest_reportflags) optionflags |= _unittest_reportflags BaseDocTestRunner.__init__( self, verbose=False, optionflags=optionflags, ) def _failure_header(self, test: DocTest, example: BaseDocTestExample) -> str: return '' class DocTestEvaluator: """ The :any:`Evaluator` to use for :class:`Regions ` yielded by a 
:class:`~sybil.parsers.abstract.doctest.DocTestStringParser`. :param optionflags: :ref:`doctest option flags` to use when evaluating examples. """ def __init__(self, optionflags: int = 0) -> None: self.runner = DocTestRunner(optionflags) def __call__(self, sybil_example: Example) -> str: example = sybil_example.parsed namespace = sybil_example.namespace output: List[str] = [] remove_name = False try: if '__name__' not in namespace: remove_name = True namespace['__name__'] = '__test__' self.runner.run( DocTest([example], namespace, name=sybil_example.path, filename=None, lineno=example.lineno, docstring=None), clear_globs=False, out=output.append ) finally: if remove_name: del namespace['__name__'] return ''.join(output) sybil-9.0.0/sybil/evaluators/python.py000066400000000000000000000025121471471302700200520ustar00rootroot00000000000000from collections.abc import Sequence import __future__ from sybil import Example def pad(source: str, line: int) -> str: """ Pad the supplied source such that line numbers will be based on the one provided when the source is evaluated. """ # There must be a nicer way to get line numbers to be correct... return line * '\n' + source class PythonEvaluator: """ The :any:`Evaluator` to use for :class:`Regions ` containing Python source code. :param future_imports: A sequence of strings naming future import options, for example ``'annotations'``, that should be used at the top of each :class:`~sybil.Example` being evaluated. """ def __init__(self, future_imports: Sequence[str] = ()) -> None: self.flags = 0 for future_import in future_imports: self.flags |= getattr(__future__, future_import).compiler_flag def __call__(self, example: Example) -> None: # There must be a nicer way to get line numbers to be correct... 
source = pad(example.parsed, example.line + example.parsed.line_offset) code = compile(source, example.path, 'exec', flags=self.flags, dont_inherit=True) exec(code, example.namespace) # exec adds __builtins__, we don't want it: del example.namespace['__builtins__'] sybil-9.0.0/sybil/evaluators/skip.py000066400000000000000000000062121471471302700175000ustar00rootroot00000000000000from dataclasses import dataclass from typing import Any, Optional, Dict from unittest import SkipTest from sybil import Example, Document from sybil.example import NotEvaluated class If: def __init__(self, default_reason: str) -> None: self.default_reason = default_reason def __call__(self, condition: Any, reason: Optional[str] = None) -> Optional[str]: if condition: return reason or self.default_reason return None @dataclass class SkipState: active: bool = True remove: bool = False exception: Optional[Exception] = None last_action: Optional[str] = None class Skipper: def __init__(self) -> None: self.document_state: Dict[Document, SkipState] = {} def state_for(self, example: Example) -> SkipState: document = example.document if document not in self.document_state: self.document_state[document] = SkipState() return self.document_state[example.document] def install(self, example: Example, state: SkipState, reason: Optional[str]) -> None: document = example.document document.push_evaluator(self) if reason: namespace = document.namespace.copy() reason = reason.lstrip() if reason.startswith('if'): condition = reason[2:] reason = 'if_' + condition namespace['if_'] = If(condition) reason = eval(reason, namespace) if reason: state.exception = SkipTest(reason) else: state.active = False def remove(self, example: Example) -> None: document = example.document document.pop_evaluator(self) del self.document_state[document] def evaluate_skip_example(self, example: Example) -> None: state = self.state_for(example) action, reason = example.parsed if action not in ('start', 'next', 'end'): raise 
ValueError('Bad skip action: ' + action) if state.last_action is None and action not in ('start', 'next'): raise ValueError(f"'skip: {action}' must follow 'skip: start'") elif state.last_action and action != 'end': raise ValueError(f"'skip: {action}' cannot follow 'skip: {state.last_action}'") state.last_action = action if action == 'start': self.install(example, state, reason) elif action == 'next': self.install(example, state, reason) state.remove = True elif action == 'end': self.remove(example) if reason: raise ValueError("Cannot have condition on 'skip: end'") def evaluate_other_example(self, example: Example) -> None: state = self.state_for(example) if state.remove: self.remove(example) if not state.active: raise NotEvaluated() if state.exception is not None: raise state.exception def __call__(self, example: Example) -> None: if example.region.evaluator is self: self.evaluate_skip_example(example) else: self.evaluate_other_example(example) sybil-9.0.0/sybil/example.py000066400000000000000000000055301471471302700160020ustar00rootroot00000000000000from typing import TYPE_CHECKING, Any, Dict from .region import Region if TYPE_CHECKING: from .document import Document class SybilFailure(AssertionError): def __init__(self, example: 'Example', result: str) -> None: super(SybilFailure, self).__init__(( 'Example at {}, line {}, column {} did not evaluate as expected:\n' '{}' ).format(example.path, example.line, example.column, result)) self.example = example self.result = result class NotEvaluated(Exception): """ An exception that can be raised by an :any:`Evaluator` previously :meth:`pushed ` onto the document to indicate that it is not evaluating the current example and that a previously pushed evaluator, or the :any:`Region` evaluator if no others have been pushed, should be used to evaluate the :any:`Example` instead. """ class Example: """ This represents a particular example from a documentation source file. 
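The ``NotEvaluated`` exception above lets an evaluator that has been pushed onto a document decline an example so that an earlier evaluator handles it instead. A minimal sketch of that fallback-stack pattern, with hypothetical names that are not Sybil's own:

```python
class NotEvaluated(Exception):
    """Raised by an evaluator to decline the current example."""


def evaluate(example, evaluators):
    # Try the most recently pushed evaluator first; on NotEvaluated,
    # fall back to the one pushed before it:
    for evaluator in reversed(evaluators):
        try:
            return evaluator(example)
        except NotEvaluated:
            continue
    raise RuntimeError('no evaluator handled the example')


def base(example):
    return 'ran ' + example


def skipper(example):
    if example == 'skipped':
        return 'skipped!'
    raise NotEvaluated()  # defer to the evaluator below on the stack


stack = [base, skipper]
results = [evaluate('skipped', stack), evaluate('normal', stack)]
print(results)
```

This mirrors how ``Skipper`` intercepts examples while skipping is active and defers to the region's own evaluator otherwise.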
It is assembled from the :class:`~sybil.document.Document` and :class:`~sybil.Region` the example comes from and is passed to the region's evaluator. """ def __init__( self, document: 'Document', line: int, column: int, region: Region, namespace: Dict[str, Any] ) -> None: #: The :class:`~sybil.document.Document` from which this example came. self.document: 'Document' = document #: The absolute path of the :class:`~sybil.document.Document`. self.path: str = document.path #: The line number at which this example occurs in the #: :class:`~sybil.document.Document`. self.line: int = line #: The column number at which this example occurs in the #: :class:`~sybil.document.Document`. self.column: int = column #: The :class:`~sybil.Region` from which this example came. self.region: Region = region #: The character position at which this example starts in the #: :class:`~sybil.document.Document`. self.start: int = region.start #: The character position at which this example ends in the #: :class:`~sybil.document.Document`. self.end: int = region.end #: The version of this example provided by the parser that yielded #: the :class:`~sybil.Region` containing it. self.parsed: Any = region.parsed #: The :attr:`~sybil.Document.namespace` of the document from #: which this example came. 
self.namespace: Dict[str, Any] = namespace def __repr__(self) -> str: return '<Example path={} line={} column={} using {!r}>'.format( self.path, self.line, self.column, self.region.evaluator ) def evaluate(self) -> None: if self.region.evaluator is not None: self.document.evaluate(self, self.region.evaluator) sybil-9.0.0/sybil/integration/000077500000000000000000000000001471471302700163155ustar00rootroot00000000000000sybil-9.0.0/sybil/integration/__init__.py000066400000000000000000000000001471471302700204140ustar00rootroot00000000000000sybil-9.0.0/sybil/integration/pytest.py000066400000000000000000000110461471471302700202210ustar00rootroot00000000000000from __future__ import absolute_import import os from collections.abc import Callable, Sequence from inspect import getsourcefile from os.path import abspath from pathlib import Path from typing import Union, Tuple, Optional, List import pytest from pytest import Collector, ExceptionInfo, Module, Session from _pytest import fixtures from _pytest._code.code import TerminalRepr, Traceback from _pytest._io import TerminalWriter from _pytest.fixtures import FuncFixtureInfo from sybil import example as example_module, Sybil, Document from sybil.example import Example from sybil.example import SybilFailure example_module_path = abspath(getsourcefile(example_module)) class SybilFailureRepr(TerminalRepr): def __init__(self, item: 'SybilItem', message: str) -> None: self.item = item self.message = message def toterminal(self, tw: TerminalWriter) -> None: tw.line() for line in self.message.splitlines(): tw.line(line) tw.line() tw.write(self.item.parent.name, bold=True, red=True) tw.line(":%s: SybilFailure" % self.item.example.line) class SybilItem(pytest.Item): obj = None def __init__(self, parent, sybil, example: Example) -> None: super(SybilItem, self).__init__(sybil.identify(example), parent) self.example = example self.request_fixtures(sybil.fixtures) def request_fixtures(self, names): # pytest fixtures dance: fm = self.session._fixturemanager closure =
fm.getfixtureclosure(initialnames=names, parentnode=self, ignore_args=set()) names_closure, arg2fixturedefs = closure fixtureinfo = FuncFixtureInfo(argnames=names, initialnames=names, names_closure=names_closure, name2fixturedefs=arg2fixturedefs) self._fixtureinfo = fixtureinfo self.funcargs = {} self._request = fixtures.TopRequest(pyfuncitem=self, _ispytest=True) self.fixturenames = names_closure def reportinfo(self) -> Tuple[Union["os.PathLike[str]", str], Optional[int], str]: info = '%s line=%i column=%i' % ( self.path.name, self.example.line, self.example.column ) return self.example.path, self.example.line, info def getparent(self, cls): if cls is Module: return self.parent if cls is Session: return self.session def setup(self) -> None: self._request._fillfixtures() for name, fixture in self.funcargs.items(): self.example.namespace[name] = fixture def runtest(self) -> None: self.example.evaluate() def _traceback_filter(self, excinfo: ExceptionInfo[BaseException]) -> Traceback: traceback = excinfo.traceback tb = traceback.cut(path=example_module_path) tb_entry = tb[1] if getattr(tb_entry, '_rawentry', None) is not None: traceback = Traceback(tb_entry._rawentry) return traceback def repr_failure( self, excinfo: ExceptionInfo[BaseException], style = None, ) -> Union[str, TerminalRepr]: if isinstance(excinfo.value, SybilFailure): return SybilFailureRepr(self, str(excinfo.value)) return super().repr_failure(excinfo, style) class SybilFile(pytest.File): def __init__(self, *, sybils: Sequence[Sybil], **kwargs) -> None: super(SybilFile, self).__init__(**kwargs) self.sybils: Sequence[Sybil] = sybils self.documents: List[Document] = [] def collect(self): for sybil in self.sybils: document = sybil.parse(self.path) self.documents.append(document) for example in document.examples(): yield SybilItem.from_parent( self, sybil=sybil, example=example, ) def setup(self) -> None: for sybil, document in zip(self.sybils, self.documents): if sybil.setup: 
sybil.setup(document.namespace) def teardown(self) -> None: for sybil, document in zip(self.sybils, self.documents): if sybil.teardown: sybil.teardown(document.namespace) def pytest_integration(*sybils: Sybil) -> Callable[[Path, Collector], Optional[SybilFile]]: def pytest_collect_file(file_path: Path, parent: Collector) -> Optional[SybilFile]: active_sybils = [sybil for sybil in sybils if sybil.should_parse(file_path)] if active_sybils: return SybilFile.from_parent(parent, path=file_path, sybils=active_sybils) return None return pytest_collect_file sybil-9.0.0/sybil/integration/unittest.py000066400000000000000000000033151471471302700205500ustar00rootroot00000000000000from collections.abc import Callable from typing import Any, Dict, Optional from unittest import TestCase as BaseTestCase, TestSuite from unittest.loader import TestLoader from sybil import Sybil from sybil.example import Example class TestCase(BaseTestCase): sybil: Sybil namespace: Dict[str, Any] def __init__(self, example: Example) -> None: BaseTestCase.__init__(self) self.example = example def runTest(self) -> None: self.example.evaluate() def id(self) -> str: return f'{self.example.path},{self.sybil.identify(self.example)}' __str__ = __repr__ = id @classmethod def setUpClass(cls) -> None: if cls.sybil.setup is not None: cls.sybil.setup(cls.namespace) @classmethod def tearDownClass(cls) -> None: if cls.sybil.teardown is not None: cls.sybil.teardown(cls.namespace) def unittest_integration( *sybils: Sybil, ) -> Callable[[Optional[TestLoader], Optional[TestSuite], Optional[str]], TestSuite]: def load_tests( loader: Optional[TestLoader] = None, tests: Optional[TestSuite] = None, pattern: Optional[str] = None, ) -> TestSuite: suite = TestSuite() for sybil in sybils: for path in sorted(sybil.path.glob('**/*')): if path.is_file() and sybil.should_parse(path): document = sybil.parse(path) case = type(document.path, (TestCase, ), dict( sybil=sybil, namespace=document.namespace, )) for example in 
document.examples(): suite.addTest(case(example)) return suite return load_tests sybil-9.0.0/sybil/parsers/000077500000000000000000000000001471471302700154515ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/__init__.py000066400000000000000000000000001471471302700175500ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/abstract/000077500000000000000000000000001471471302700172545ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/abstract/__init__.py000066400000000000000000000004641471471302700213710ustar00rootroot00000000000000from .clear import AbstractClearNamespaceParser from .codeblock import AbstractCodeBlockParser from .skip import AbstractSkipParser from .doctest import DocTestStringParser __all__ = [ 'AbstractClearNamespaceParser', 'AbstractCodeBlockParser', 'AbstractSkipParser', 'DocTestStringParser', ] sybil-9.0.0/sybil/parsers/abstract/clear.py000066400000000000000000000013171471471302700207160ustar00rootroot00000000000000from collections.abc import Iterable, Sequence from sybil import Document, Region, Example from sybil.parsers.abstract.lexers import LexerCollection from sybil.typing import Lexer class AbstractClearNamespaceParser: """ An abstract parser for clearing the :class:`~sybil.Document.namespace`. 
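The ``unittest_integration`` function above relies on the standard ``load_tests`` protocol: when a module defines a ``load_tests`` callable, :mod:`unittest` uses it to build the suite. A minimal stand-alone sketch of that hook, unrelated to Sybil's examples:

```python
import io
import unittest


class ExampleCase(unittest.TestCase):
    def runTest(self):
        self.assertEqual(1 + 1, 2)


def load_tests(loader, tests, pattern):
    # unittest invokes this hook during loading; return the suite by hand:
    suite = unittest.TestSuite()
    suite.addTest(ExampleCase())
    return suite


suite = load_tests(unittest.TestLoader(), unittest.TestSuite(), None)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.testsRun, result.wasSuccessful())
```

Sybil's version builds one ``TestCase`` subclass per document so that ``setUpClass`` and ``tearDownClass`` can drive the per-document setup and teardown callbacks.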
""" def __init__(self, lexers: Sequence[Lexer]) -> None: self.lexers = LexerCollection(lexers) @staticmethod def evaluate(example: Example) -> None: example.document.namespace.clear() def __call__(self, document: Document) -> Iterable[Region]: for lexed in self.lexers(document): yield Region(lexed.start, lexed.end, lexed.lexemes['source'], self.evaluate) sybil-9.0.0/sybil/parsers/abstract/codeblock.py000066400000000000000000000060251471471302700215560ustar00rootroot00000000000000from collections.abc import Iterable, Sequence, Callable from typing import Optional from sybil import Region, Document, Example from sybil.typing import Evaluator, Lexer, Parser from .doctest import DocTestStringParser from .lexers import LexerCollection from ...evaluators.doctest import DocTestEvaluator from ...evaluators.python import PythonEvaluator class AbstractCodeBlockParser: """ An abstract parser for use when evaluating blocks of code. :param lexers: A sequence of :any:`Lexer` objects that will be applied in turn to each :class:`~sybil.Document` that is parsed. The :class:`~sybil.Region` objects returned by these lexers must have both an ``arguments`` string, containing the language of the lexed region, and a ``source`` :class:`~sybil.Lexeme` containing the source code of the lexed region. :param language: The language that this parser should look for. Lexed regions which don't have this language in their ``arguments`` lexeme will be ignored. :param evaluator: The evaluator to use for evaluating code blocks in the specified language. You can also override the :meth:`evaluate` method below. """ language: str def __init__( self, lexers: Sequence[Lexer], language: Optional[str] = None, evaluator: Optional[Evaluator] = None, ) -> None: self.lexers = LexerCollection(lexers) if language is not None: self.language = language assert self.language, 'language must be specified!' 
self._evaluator: Optional[Evaluator] = evaluator def evaluate(self, example: Example) -> Optional[str]: """ The :any:`Evaluator` used for regions yields by this parser can be provided by implementing this method. """ raise NotImplementedError def __call__(self, document: Document) -> Iterable[Region]: for region in self.lexers(document): if region.lexemes['arguments'] == self.language: region.parsed = region.lexemes['source'] region.evaluator = self._evaluator or self.evaluate yield region class PythonDocTestOrCodeBlockParser: codeblock_parser_class: Callable[[str, Evaluator], Parser] def __init__(self, future_imports: Sequence[str] = (), doctest_optionflags: int = 0) -> None: self.doctest_parser = DocTestStringParser( DocTestEvaluator(doctest_optionflags) ) self.codeblock_parser = self.codeblock_parser_class( 'python', PythonEvaluator(future_imports) ) def __call__(self, document: Document) -> Iterable[Region]: for region in self.codeblock_parser(document): source = region.parsed if region.parsed.startswith('>>>'): for doctest_region in self.doctest_parser(source, document.path): doctest_region.adjust(region, source) yield doctest_region else: yield region sybil-9.0.0/sybil/parsers/abstract/doctest.py000066400000000000000000000042401471471302700212730ustar00rootroot00000000000000from collections.abc import Iterable from doctest import ( DocTestParser as BaseDocTestParser, Example as DocTestExample, ) from sybil.evaluators.doctest import DocTestEvaluator from sybil.region import Region class DocTestStringParser(BaseDocTestParser): """ This isn't a true :any:`Parser` in that it must be called with a :class:`str` containing the doctest example's source and the file name that the example came from. """ def __init__(self, evaluator: DocTestEvaluator = DocTestEvaluator()) -> None: #: The evaluator to use for any doctests found in the supplied source string. 
self.evaluator: DocTestEvaluator = evaluator def __call__(self, string: str, name: str) -> Iterable[Region]: """ This will yield :class:`sybil.Region` objects for any doctest examples found in the supplied ``string`` with the :attr:`evaluator` supplied to its constructor and the file ``name`` supplied. Each section starting with a ``>>>`` will form a separate region. """ # a cut down version of doctest.DocTestParser.parse: charno, lineno = 0, 0 # Find all doctest examples in the string: for m in self._EXAMPLE_RE.finditer(string): # type: ignore # Update lineno (lines before this example) lineno += string.count('\n', charno, m.start()) # Extract info from the regexp match. source, options, want, exc_msg = self._parse_example(m, name, lineno) # type: ignore # Create an Example, and add it to the list. if not self._IS_BLANK_OR_COMMENT(source): # type: ignore yield Region( m.start(), m.end(), DocTestExample(source, want, exc_msg, lineno=lineno, indent=len(m.group('indent')), options=options), self.evaluator ) # Update lineno (lines inside this example) lineno += string.count('\n', m.start(), m.end()) # Update charno. charno = m.end() sybil-9.0.0/sybil/parsers/abstract/lexers.py000066400000000000000000000075021471471302700211340ustar00rootroot00000000000000import re import textwrap from itertools import chain from collections.abc import Iterable from typing import Optional, Dict, Pattern, List from sybil import Document from sybil.region import Lexeme, Region from sybil.typing import Lexer class LexingException(Exception): """ An exception when :term:`lexing ` a document. This may indicate invalid source text, or valid source text but where the elements found cannot be handled successfully. 
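The ``DocTestStringParser`` above is a thin layer over the stdlib doctest machinery. How the underlying parser splits a string into examples can be seen with :class:`doctest.DocTestParser` alone:

```python
from doctest import DocTestParser

text = (
    "Some prose.\n"
    "\n"
    ">>> 1 + 1\n"
    "2\n"
)

# get_examples returns one Example per '>>>' section, with its
# source, expected output and zero-based line number:
examples = DocTestParser().get_examples(text)
for example in examples:
    print(example.source.strip(), '->', example.want.strip(), 'at line', example.lineno)
```

Sybil wraps each such example in a :class:`~sybil.Region` so that it carries character offsets and an evaluator.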
""" class LexerCollection(List[Lexer]): def __call__(self, document: Document) -> Iterable[Region]: return chain(*(lexer(document) for lexer in self)) class BlockLexer: """ This is a base class useful for any :any:`Lexer` that must handle block-style languages such as ReStructured Text or MarkDown. It yields a sequence of :class:`~sybil.Region` objects for each case where the ``start_pattern`` matches. A ``source`` :class:`~sybil.Lexeme` is created from the text between the end of the start pattern and the start of the end pattern. :param start_pattern: This is used to match the start of the block. Any named groups will be returned in the :attr:`~sybil.Region.lexemes` :class:`dict` of resulting :class:`~sybil.Region` objects. If a ``prefix`` named group forms part of the match, this will be template substituted into the ``end_pattern_template`` before it is compiled. :param end_pattern_template: This is used to match the end of any block found by the ``start_pattern``. It is templated with any ``prefix`` group from the ``start_pattern`` :class:`~typing.Match` and ``len_prefix``, the length of that prefix, before being compiled into a :class:`~typing.Pattern`. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. 
""" def __init__( self, start_pattern: Pattern[str], end_pattern_template: str, mapping: Optional[Dict[str, str]] = None, ) -> None: self.start_pattern = start_pattern self.end_pattern_template = end_pattern_template self.mapping = mapping def __call__(self, document: Document) -> Iterable[Region]: for start_match in re.finditer(self.start_pattern, document.text): source_start = start_match.end() lexemes = start_match.groupdict() prefix = lexemes.pop('prefix', '') end_pattern = re.compile(self.end_pattern_template.format( prefix=prefix, len_prefix=len(prefix) )) end_match = end_pattern.search(document.text, source_start) if end_match is None: raise LexingException( f'Could not find end of {start_match.group(0)!r}, ' f'starting at {document.line_column(start_match.start())}, ' f'looking for {end_pattern.pattern!r} in {document.path}:\n' f'{document.text[source_start:]!r}' ) source_end = end_match.start() source = document.text[source_start:source_end] lexemes['source'] = Lexeme( strip_prefix(source, prefix), offset=source_start-start_match.start(), line_offset=start_match.group(0).count('\n')-1 ) if self.mapping: lexemes = {dest: lexemes[source] for source, dest in self.mapping.items()} yield Region(start_match.start(), source_end, lexemes=lexemes) def strip_prefix(text: str, prefix: str) -> str: lines = text.splitlines(keepends=True) prefix_length = len(prefix) return textwrap.dedent(''.join(line[prefix_length:] or line[-1] for line in lines)) sybil-9.0.0/sybil/parsers/abstract/skip.py000066400000000000000000000021721471471302700205760ustar00rootroot00000000000000import re from collections.abc import Iterable, Sequence from sybil import Document, Region from sybil.evaluators.skip import Skipper from sybil.parsers.abstract.lexers import LexerCollection from sybil.typing import Lexer SKIP_ARGUMENTS_PATTERN = re.compile(r'(\w+)(?:\s+(.+)$)?') class AbstractSkipParser: """ An abstract parser for skipping subsequent examples. 
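The ``strip_prefix`` helper above slices a fixed prefix off every line and then dedents whatever indentation remains. The same two-step technique with just the stdlib, on an invented quoted block:

```python
import textwrap

prefix = '> '
quoted = (
    "> code line one\n"
    "> code line two\n"
)

# Mirror strip_prefix: drop the prefix from each line (falling back to
# the line's newline when the line is shorter), then dedent the rest:
unquoted = ''.join(
    line[len(prefix):] or line[-1]
    for line in quoted.splitlines(keepends=True)
)
print(textwrap.dedent(unquoted))
```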
:param lexers: A sequence of :any:`Lexer` objects that will be applied in turn to each :class:`~sybil.Document` that is parsed. """ def __init__(self, lexers: Sequence[Lexer]): self.lexers = LexerCollection(lexers) self.skipper = Skipper() def __call__(self, document: Document) -> Iterable[Region]: for lexed in self.lexers(document): arguments = lexed.lexemes['arguments'] match = SKIP_ARGUMENTS_PATTERN.match(arguments) if match is None: directive = lexed.lexemes.get('directive', 'skip') raise ValueError(f'malformed arguments to {directive}: {arguments!r}') yield Region(lexed.start, lexed.end, match.groups(), self.skipper) sybil-9.0.0/sybil/parsers/capture.py000066400000000000000000000006121471471302700174650ustar00rootroot00000000000000# THIS MODULE IS FOR BACKWARDS COMPATIBILITY ONLY! from collections.abc import Iterable from sybil import Region, Document from sybil.parsers.rest import CaptureParser def parse_captures(document: Document) -> Iterable[Region]: """ A parser function to be included when your documentation makes use of :ref:`capture-parser` examples. """ return CaptureParser()(document) sybil-9.0.0/sybil/parsers/codeblock.py000066400000000000000000000002451471471302700177510ustar00rootroot00000000000000# THIS MODULE IS FOR BACKWARDS COMPATIBILITY ONLY! from .rest import CodeBlockParser, PythonCodeBlockParser __all__ = ['CodeBlockParser', 'PythonCodeBlockParser'] sybil-9.0.0/sybil/parsers/doctest.py000066400000000000000000000001601471471302700174650ustar00rootroot00000000000000# THIS MODULE IS FOR BACKWARDS COMPATIBILITY ONLY! 
from .rest import DocTestParser __all__ = ['DocTestParser'] sybil-9.0.0/sybil/parsers/markdown/000077500000000000000000000000001471471302700172735ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/markdown/__init__.py000066400000000000000000000003641471471302700214070ustar00rootroot00000000000000from .clear import ClearNamespaceParser from .codeblock import CodeBlockParser, PythonCodeBlockParser from .skip import SkipParser __all__ = [ 'ClearNamespaceParser', 'CodeBlockParser', 'PythonCodeBlockParser', 'SkipParser', ] sybil-9.0.0/sybil/parsers/markdown/clear.py000066400000000000000000000006411471471302700207340ustar00rootroot00000000000000from sybil.parsers.abstract import AbstractClearNamespaceParser from ..markdown.lexers import DirectiveInHTMLCommentLexer class ClearNamespaceParser(AbstractClearNamespaceParser): """ A :any:`Parser` for :ref:`clear-namespace ` instructions. """ def __init__(self) -> None: super().__init__([ DirectiveInHTMLCommentLexer('clear-namespace'), ]) sybil-9.0.0/sybil/parsers/markdown/codeblock.py000066400000000000000000000033721471471302700215770ustar00rootroot00000000000000from typing import Optional from sybil.parsers.abstract import AbstractCodeBlockParser from sybil.typing import Evaluator from ..abstract.codeblock import PythonDocTestOrCodeBlockParser from ..markdown.lexers import FencedCodeBlockLexer, DirectiveInHTMLCommentLexer class CodeBlockParser(AbstractCodeBlockParser): """ A :any:`Parser` for :ref:`markdown-codeblock-parser` examples. :param language: The language that this parser should look for. :param evaluator: The evaluator to use for evaluating code blocks in the specified language. You can also override the :meth:`evaluate` method below. 
""" def __init__( self, language: Optional[str] = None, evaluator: Optional[Evaluator] = None ) -> None: super().__init__( [ FencedCodeBlockLexer( language=r'.+', mapping={'language': 'arguments', 'source': 'source'}, ), DirectiveInHTMLCommentLexer( directive=r'(invisible-)?code(-block)?', arguments='.+', ), ], language, evaluator ) class PythonCodeBlockParser(PythonDocTestOrCodeBlockParser): """ A :any:`Parser` for Python :ref:`markdown-codeblock-parser` examples. :param future_imports: An optional list of strings that will be turned into ``from __future__ import ...`` statements and prepended to the code in each of the examples found by this parser. :param doctest_optionflags: :ref:`doctest option flags` to use when evaluating the doctest examples found by this parser. """ codeblock_parser_class = CodeBlockParser sybil-9.0.0/sybil/parsers/markdown/lexers.py000066400000000000000000000140361471471302700211530ustar00rootroot00000000000000import re from collections.abc import Iterable from typing import Optional, Dict, Pattern, Match, List from sybil import Document, Region, Lexeme from sybil.parsers.abstract.lexers import BlockLexer, strip_prefix FENCE = re.compile(r"^(?P<prefix>[ \t]*)(?P<fence>`{3,}|~{3,})", re.MULTILINE) class RawFencedCodeBlockLexer: """ A :class:`~sybil.typing.Lexer` for Markdown fenced code blocks allowing flexible lexing of the whole `info` line along with more complicated prefixes. The following lexemes are extracted: - ``source`` as a :class:`~sybil.Lexeme`. - any other named groups specified in ``info_pattern`` as :class:`strings <str>`. :param info_pattern: a :class:`re.Pattern` to match the `info` line and any required prefix that follows it. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects.
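Fence matching of this kind can be sketched with plain :mod:`re` and named groups. The patterns below are illustrative rather than Sybil's exact ones (tilde fences are used so the sample text stays self-contained):

```python
import re

text = (
    "~~~python\n"
    "print('hi')\n"
    "~~~\n"
)

# Capture the fence itself so the closing fence can be required to match it:
fence = re.compile(r"^(?P<fence>~{3,})(?P<language>\w+)$\n", re.MULTILINE)
opening = fence.search(text)
closing = re.compile(
    '^' + opening.group('fence') + '$', re.MULTILINE
).search(text, opening.end())
source = text[opening.end():closing.start()]
print(opening.group('language'), repr(source))
```

The real lexer also tracks nesting, since a longer fence of the same character is allowed to enclose a shorter one.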
""" def __init__( self, info_pattern: Pattern[str] = re.compile(r'$\n', re.MULTILINE), mapping: Optional[Dict[str, str]] = None, ) -> None: self.info_pattern = info_pattern self.mapping = mapping @staticmethod def match_closes_existing(current: Match[str], existing: Match[str]) -> bool: current_fence = current.group('fence') existing_fence = existing.group('fence') same_type = current_fence[0] == existing_fence[0] okay_length = len(current_fence) >= len(existing_fence) same_prefix = len(current.group('prefix')) == len(existing.group('prefix')) return same_type and okay_length and same_prefix def make_region( self, opening: Match[str], document: Document, closing: Optional[Match[str]] ) -> Optional[Region]: if closing is None: content_end = region_end = len(document.text) else: content_end = closing.start() region_end = closing.end() content = document.text[opening.end(): content_end] info = self.info_pattern.match(content) if info is None: return None lexemes = info.groupdict() lexemes['source'] = Lexeme( strip_prefix(content[info.end():], opening.group('prefix')), offset=len(opening.group(0))+info.end(), line_offset=0, ) if self.mapping: lexemes = {dest: lexemes[source] for source, dest in self.mapping.items()} return Region(opening.start(), region_end, lexemes=lexemes) def __call__(self, document: Document) -> Iterable[Region]: open_blocks: List[Match[str]] = [] index = 0 while True: match = FENCE.search(document.text, index) if match is None: break else: index = match.end() # does this fence close any open block? 
for i in range(len(open_blocks)): existing = open_blocks[i] if self.match_closes_existing(match, existing): maybe_region = self.make_region(existing, document, match) if maybe_region is not None: yield maybe_region open_blocks = open_blocks[:i] break else: open_blocks.append(match) if open_blocks: maybe_region = self.make_region(open_blocks[0], document, closing=None) if maybe_region is not None: yield maybe_region class FencedCodeBlockLexer(RawFencedCodeBlockLexer): """ A :class:`~sybil.typing.Lexer` for Markdown fenced code blocks where a language is specified. :class:`RawFencedCodeBlockLexer` can be used if the whole `info` line, or a more complicated prefix, is required. The following lexemes are extracted: - ``language`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param language: a :class:`str` containing a regular expression pattern to match language names. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. """ def __init__(self, language: str, mapping: Optional[Dict[str, str]] = None) -> None: super().__init__( info_pattern=re.compile(f'(?P<language>{language})$\n', re.MULTILINE), mapping=mapping, ) DIRECTIVE_IN_HTML_COMMENT_START = ( r"^(?P<prefix>[ \t]*)<!--+\s*(?P<directive>{directive})" r"(:\s*(?P<arguments>{arguments}))?" r"\s*\n" ) DIRECTIVE_IN_HTML_COMMENT_END = "{prefix}-->" class DirectiveInHTMLCommentLexer(BlockLexer): """ A :class:`~sybil.parsers.abstract.lexers.BlockLexer` for directives in HTML-style comments such as: .. code-block:: markdown <!--- directive: arguments source --> It extracts the following lexemes: - ``directive`` as a :class:`str`. - ``arguments`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param directive: a :class:`str` containing a regular expression pattern to match directive names. :param arguments: a :class:`str` containing a regular expression pattern to match directive arguments. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects.
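Matching a directive wrapped in an HTML comment can be sketched with a self-contained pair of patterns. These are illustrative patterns, not Sybil's own:

```python
import re

text = (
    "<!--- invisible-code-block: python\n"
    "x = 1\n"
    "-->\n"
)

# Start: an HTML comment opener followed by 'directive: arguments':
start = re.compile(r"^<!--+\s*(?P<directive>[\w-]+):\s*(?P<arguments>\S+)\n", re.MULTILINE)
# End: a line beginning with the comment closer:
end = re.compile(r"^-+->", re.MULTILINE)

opening = start.search(text)
closing = end.search(text, opening.end())
source = text[opening.end():closing.start()]
print(opening.group('directive'), opening.group('arguments'), repr(source))
```

Because the whole construct is an HTML comment, the enclosed source is invisible in rendered Markdown while still being checkable.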
""" def __init__( self, directive: str, arguments: str = '.*?', mapping: Optional[Dict[str, str]] = None ) -> None: super().__init__( start_pattern=re.compile( DIRECTIVE_IN_HTML_COMMENT_START.format(directive=directive, arguments=arguments), re.MULTILINE ), end_pattern_template=DIRECTIVE_IN_HTML_COMMENT_END, mapping=mapping, ) sybil-9.0.0/sybil/parsers/markdown/skip.py000066400000000000000000000005351471471302700206160ustar00rootroot00000000000000from ..abstract import AbstractSkipParser from ..markdown.lexers import DirectiveInHTMLCommentLexer class SkipParser(AbstractSkipParser): """ A :any:`Parser` for :ref:`skip ` instructions. """ def __init__(self) -> None: super().__init__([ DirectiveInHTMLCommentLexer('skip'), ]) sybil-9.0.0/sybil/parsers/myst/000077500000000000000000000000001471471302700164455ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/myst/__init__.py000066400000000000000000000004761471471302700205650ustar00rootroot00000000000000from .codeblock import CodeBlockParser, PythonCodeBlockParser from .doctest import DocTestDirectiveParser from .skip import SkipParser from .clear import ClearNamespaceParser __all__ = [ 'CodeBlockParser', 'PythonCodeBlockParser', 'DocTestDirectiveParser', 'SkipParser', 'ClearNamespaceParser', ] sybil-9.0.0/sybil/parsers/myst/clear.py000066400000000000000000000010201471471302700200760ustar00rootroot00000000000000 from sybil.parsers.abstract import AbstractClearNamespaceParser from .lexers import DirectiveInPercentCommentLexer from ..markdown.lexers import DirectiveInHTMLCommentLexer class ClearNamespaceParser(AbstractClearNamespaceParser): """ A :any:`Parser` for :ref:`clear-namespace ` instructions. 
""" def __init__(self) -> None: super().__init__([ DirectiveInPercentCommentLexer('clear-namespace'), DirectiveInHTMLCommentLexer('clear-namespace'), ]) sybil-9.0.0/sybil/parsers/myst/codeblock.py000066400000000000000000000041671471471302700207540ustar00rootroot00000000000000from typing import Optional from sybil.parsers.abstract import AbstractCodeBlockParser from sybil.typing import Evaluator from .lexers import ( DirectiveLexer, DirectiveInPercentCommentLexer ) from ..markdown.lexers import FencedCodeBlockLexer, DirectiveInHTMLCommentLexer from ..abstract.codeblock import PythonDocTestOrCodeBlockParser class CodeBlockParser(AbstractCodeBlockParser): """ A :any:`Parser` for :ref:`myst-codeblock-parser` examples. :param language: The language that this parser should look for. :param evaluator: The evaluator to use for evaluating code blocks in the specified language. You can also override the :meth:`evaluate` method below. """ def __init__( self, language: Optional[str] = None, evaluator: Optional[Evaluator] = None ) -> None: super().__init__( [ FencedCodeBlockLexer( language=r'.+', mapping={'language': 'arguments', 'source': 'source'}, ), DirectiveLexer( directive=r'(sourcecode|code-block|code)', arguments='.+', ), DirectiveInPercentCommentLexer( directive=r'(invisible-)?code(-block)?', arguments='.+', ), DirectiveInHTMLCommentLexer( directive=r'(invisible-)?code(-block)?', arguments='.+', ), ], language, evaluator ) class PythonCodeBlockParser(PythonDocTestOrCodeBlockParser): """ A :any:`Parser` for Python :ref:`myst-codeblock-parser` examples. :param future_imports: An optional list of strings that will be turned into ``from __future__ import ...`` statements and prepended to the code in each of the examples found by this parser. :param doctest_optionflags: :ref:`doctest option flags` to use when evaluating the doctest examples found by this parser. 
""" codeblock_parser_class = CodeBlockParser sybil-9.0.0/sybil/parsers/myst/doctest.py000066400000000000000000000017511471471302700204700ustar00rootroot00000000000000from collections.abc import Iterable from sybil import Document, Region from sybil.evaluators.doctest import DocTestEvaluator from sybil.parsers.abstract import DocTestStringParser from .lexers import DirectiveLexer class DocTestDirectiveParser: """ A :any:`Parser` for :ref:`doctest directive ` examples. :param optionflags: :ref:`doctest option flags` to use when evaluating the examples found by this parser. """ def __init__(self, optionflags: int = 0) -> None: self.lexer = DirectiveLexer('doctest') self.string_parser = DocTestStringParser(DocTestEvaluator(optionflags)) def __call__(self, document: Document) -> Iterable[Region]: for lexed_region in self.lexer(document): source = lexed_region.lexemes['source'] for region in self.string_parser(source, document.path): region.adjust(lexed_region, source) yield region sybil-9.0.0/sybil/parsers/myst/lexers.py000066400000000000000000000075101471471302700203240ustar00rootroot00000000000000import re from collections.abc import Iterable from typing import Optional, Dict from sybil import Document, Region from sybil.parsers.abstract.lexers import BlockLexer from sybil.parsers.markdown.lexers import RawFencedCodeBlockLexer from sybil.parsers.rest.lexers import parse_options_and_source INFO_PATTERN = ( r'\{{(?P{directive})}} ?(?P{arguments})$\n' r'(?P(?:[ \t]*:[\w-]*:[^\n]*\n)+)?' r"([ \t]*---\n(?P(?:.+\n)*)[ \t]*---\n)?" ) def parse_yaml_options(lexed: Region) -> None: lexemes = lexed.lexemes yaml_options = lexemes.pop('yaml_options', None) if yaml_options is not None: # import here to avoid a dependency on PyYAML except where it's really needed: from yaml import safe_load options = safe_load(yaml_options) lexemes['options'].update(options) class DirectiveLexer(RawFencedCodeBlockLexer): """ A :class:`~sybil.typing.Lexer` for MyST directives such as: .. 
code-block:: markdown ```{directivename} arguments --- key1: val1 key2: val2 --- This is directive content ``` The following lexemes are extracted: - ``directive`` as a :class:`str`. - ``arguments`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param directive: a :class:`str` containing a regular expression pattern to match directive names. :param arguments: a :class:`str` containing a regular expression pattern to match directive arguments. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. """ def __init__( self, directive: str, arguments: str = '.*', mapping: Optional[Dict[str, str]] = None ) -> None: super().__init__( info_pattern=re.compile( INFO_PATTERN.format(directive=directive, arguments=arguments), re.MULTILINE, ), mapping=mapping, ) def __call__(self, document: Document) -> Iterable[Region]: for lexed in super().__call__(document): parse_options_and_source(lexed) parse_yaml_options(lexed) yield lexed DIRECTIVE_IN_PERCENT_COMMENT_START = ( r"^(?P<prefix>[ \t]*%)[ \t]*(?P<directive>{directive})(:[ \t]*(?P<arguments>{arguments}))?$\n" ) DIRECTIVE_IN_PERCENT_COMMENT_END = '(?<=\n)(?!{prefix})' class DirectiveInPercentCommentLexer(BlockLexer): """ A :class:`~sybil.parsers.abstract.lexers.BlockLexer` for faux MyST directives in %-style Markdown comments such as: .. code-block:: markdown % not-really-a-directive: some-argument % % Source here... It extracts the following lexemes: - ``directive`` as a :class:`str`. - ``arguments`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param directive: a :class:`str` containing a regular expression pattern to match directive names. :param arguments: a :class:`str` containing a regular expression pattern to match directive arguments. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. 
Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. """ def __init__( self, directive: str, arguments: str = '.*', mapping: Optional[Dict[str, str]] = None ) -> None: super().__init__( start_pattern=re.compile( DIRECTIVE_IN_PERCENT_COMMENT_START.format(directive=directive, arguments=arguments), re.MULTILINE ), end_pattern_template=DIRECTIVE_IN_PERCENT_COMMENT_END, mapping=mapping, ) sybil-9.0.0/sybil/parsers/myst/skip.py000066400000000000000000000007001471471302700177620ustar00rootroot00000000000000from ..abstract import AbstractSkipParser from .lexers import DirectiveInPercentCommentLexer from ..markdown.lexers import DirectiveInHTMLCommentLexer class SkipParser(AbstractSkipParser): """ A :any:`Parser` for :ref:`skip ` instructions. """ def __init__(self) -> None: super().__init__([ DirectiveInPercentCommentLexer('skip'), DirectiveInHTMLCommentLexer('skip'), ]) sybil-9.0.0/sybil/parsers/rest/000077500000000000000000000000001471471302700164265ustar00rootroot00000000000000sybil-9.0.0/sybil/parsers/rest/__init__.py000066400000000000000000000006321471471302700205400ustar00rootroot00000000000000from .capture import CaptureParser from .codeblock import CodeBlockParser, PythonCodeBlockParser from .clear import ClearNamespaceParser from .doctest import DocTestParser, DocTestDirectiveParser from .skip import SkipParser __all__ = [ 'CaptureParser', 'CodeBlockParser', 'PythonCodeBlockParser', 'ClearNamespaceParser', 'DocTestParser', 'DocTestDirectiveParser', 'SkipParser', ] sybil-9.0.0/sybil/parsers/rest/capture.py000066400000000000000000000057711471471302700204550ustar00rootroot00000000000000import re import string from collections.abc import Iterable from typing import List, Tuple from textwrap import dedent from sybil import Region, Document from sybil.evaluators.capture import evaluate_capture CAPTURE_DIRECTIVE = re.compile( r'^(?P<indent>(\t| )*)\.\.\s*-+>\s*(?P<name>\S+).*$' ) def indent_matches(line: str, indent: str) -> bool: # Does the indentation of a line 
match what we're looking for? if not line.strip(): # the line consists entirely of whitespace (or nothing at all), # so is not considered to be of the appropriate indentation return False if line.startswith(indent): if line[len(indent)] not in string.whitespace: return True # if none of the above found the indentation to be a match, it is # not a match return False class DocumentReversedLines(List[str]): def __init__(self, document: Document) -> None: super().__init__() self[:] = document.text.splitlines(keepends=True) self.current_line = len(self) self.current_line_end_position = len(document.text) def iterate_with_line_number(self) -> Iterable[Tuple[int, str]]: while self.current_line > 0: self.current_line -= 1 line = self[self.current_line] self.current_line_end_position -= len(line) yield self.current_line, line class CaptureParser: """ A :any:`Parser` for :ref:`captures `. """ def __call__(self, document: Document) -> Iterable[Region]: lines = DocumentReversedLines(document) for end_index, line in lines.iterate_with_line_number(): directive = CAPTURE_DIRECTIVE.match(line) if directive: region_end = lines.current_line_end_position indent = directive.group('indent') for start_index, line in lines.iterate_with_line_number(): if indent_matches(line, indent): # don't include the preceding line in the capture start_index += 1 break else: # make it blow up start_index = end_index if end_index - start_index < 2: raise ValueError(( "couldn't find the start of the block to match " "%r on line %i of %s" ) % (directive.group(), end_index+1, document.path)) # after dedenting, we need to remove excess leading and trailing # newlines, before adding back the final newline that's strippped # off text = dedent(''.join(lines[start_index:end_index])).strip()+'\n' name = directive.group('name') parsed = name, text yield Region( lines.current_line_end_position, region_end, parsed, evaluate_capture ) 
sybil-9.0.0/sybil/parsers/rest/clear.py000066400000000000000000000005601471471302700200670ustar00rootroot00000000000000 from sybil.parsers.abstract import AbstractClearNamespaceParser from .lexers import DirectiveInCommentLexer class ClearNamespaceParser(AbstractClearNamespaceParser): """ A :any:`Parser` for :ref:`clear-namespace ` instructions. """ def __init__(self) -> None: super().__init__([DirectiveInCommentLexer('clear-namespace')]) sybil-9.0.0/sybil/parsers/rest/codeblock.py000066400000000000000000000030321471471302700207230ustar00rootroot00000000000000from collections.abc import Sequence from typing import Optional from sybil.evaluators.python import pad, PythonEvaluator from sybil.parsers.abstract import AbstractCodeBlockParser from sybil.parsers.rest.lexers import DirectiveLexer, DirectiveInCommentLexer from sybil.typing import Evaluator class CodeBlockParser(AbstractCodeBlockParser): """ A :any:`Parser` for :ref:`codeblock-parser` examples. :param language: The language that this parser should look for. :param evaluator: The evaluator to use for evaluating code blocks in the specified language. You can also override the :meth:`evaluate` method below. """ def __init__(self, language: Optional[str] = None, evaluator: Optional[Evaluator] = None) -> None: super().__init__( [ DirectiveLexer(directive=r'(sourcecode|code-block|code)'), DirectiveInCommentLexer(directive=r'(invisible-)?code(-block)?'), ], language, evaluator ) pad = staticmethod(pad) class PythonCodeBlockParser(CodeBlockParser): """ A :any:`Parser` for Python :ref:`codeblock-parser` examples. :param future_imports: An optional sequence of strings that will be turned into ``from __future__ import ...`` statements and prepended to the code in each of the examples found by this parser. 
""" def __init__(self, future_imports: Sequence[str] = ()) -> None: super().__init__(language='python', evaluator=PythonEvaluator(future_imports)) sybil-9.0.0/sybil/parsers/rest/doctest.py000066400000000000000000000027401471471302700204500ustar00rootroot00000000000000from collections.abc import Iterable from sybil import Document, Region from sybil.evaluators.doctest import DocTestEvaluator from sybil.parsers.abstract import DocTestStringParser from .lexers import DirectiveLexer class DocTestParser: """ A :any:`Parser` for :ref:`doctest-parser` examples. :param optionflags: :ref:`doctest option flags` to use when evaluating the examples found by this parser. """ def __init__(self, optionflags: int = 0) -> None: self.string_parser = DocTestStringParser(DocTestEvaluator(optionflags)) def __call__(self, document: Document) -> Iterable[Region]: return self.string_parser(document.text, document.path) class DocTestDirectiveParser: """ A :any:`Parser` for :rst:dir:`doctest` directives. :param optionflags: :ref:`doctest option flags` to use when evaluating the examples found by this parser. """ def __init__(self, optionflags: int = 0) -> None: self.lexer = DirectiveLexer(directive='doctest') self.string_parser = DocTestStringParser(DocTestEvaluator(optionflags)) def __call__(self, document: Document) -> Iterable[Region]: for lexed in self.lexer(document): source = lexed.lexemes['source'] for doctest_region in self.string_parser(source, document.path): doctest_region.adjust(lexed, source) yield doctest_region sybil-9.0.0/sybil/parsers/rest/lexers.py000066400000000000000000000067071471471302700203140ustar00rootroot00000000000000import re from collections.abc import Iterable from typing import Optional, Dict from sybil import Document, Region from sybil.parsers.abstract.lexers import BlockLexer START_PATTERN_TEMPLATE =( r'^(?P[ \t]*)\.\.\s*(?P{directive})' r'{delimiter}[ \t]*' r'(?P[^\n]+)?\n' r'(?P(?:\1[ \t]+:[\w-]*:[^\n]*\n)+)?' 
) OPTIONS_PATTERN = re.compile(r'[^:]*:(?P<name>[^:]+):[ \t]*(?P<value>[^\n]*)\n') END_PATTERN_TEMPLATE = r'((?<=\n)(?=\.\.)|\n?\Z|\n[ \t]{{0,{len_prefix}}}(?=\S|\Z))' def parse_options_and_source(lexed: Region) -> None: lexemes = lexed.lexemes raw_options = lexemes.pop('options', None) options = lexemes['options'] = {} if raw_options: for match in OPTIONS_PATTERN.finditer(raw_options): options[match['name']] = match['value'] source = lexemes.get('source') if source: lexemes['source'] = source.strip_leading_newlines() class DirectiveLexer(BlockLexer): """ A :class:`~sybil.parsers.abstract.lexers.BlockLexer` for ReST directives that extracts the following lexemes: - ``directive`` as a :class:`str`. - ``arguments`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param directive: a :class:`str` containing a regular expression pattern to match directive names. :param arguments: a :class:`str` containing a regular expression pattern to match directive arguments. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. """ delimiter = '::' def __init__( self, directive: str, arguments: str = '', mapping: Optional[Dict[str, str]] = None, ) -> None: """ A lexer for ReST directives. Both ``directive`` and ``arguments`` are regex patterns. """ super().__init__( start_pattern=re.compile( START_PATTERN_TEMPLATE.format( directive=directive, delimiter=self.delimiter, arguments=arguments ), re.MULTILINE ), end_pattern_template=END_PATTERN_TEMPLATE, mapping=mapping, ) def __call__(self, document: Document) -> Iterable[Region]: for lexed in super().__call__(document): parse_options_and_source(lexed) yield lexed class DirectiveInCommentLexer(DirectiveLexer): """ A :class:`~sybil.parsers.abstract.lexers.BlockLexer` for faux ReST directives in comments such as: .. code-block:: rest .. not-really-a-directive: some-argument Source here... 
It extracts the following lexemes: - ``directive`` as a :class:`str`. - ``arguments`` as a :class:`str`. - ``source`` as a :class:`~sybil.Lexeme`. :param directive: a :class:`str` containing a regular expression pattern to match directive names. :param arguments: a :class:`str` containing a regular expression pattern to match directive arguments. :param mapping: If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any :class:`~sybil.Region` objects. """ # This is the pattern used for invisible code blocks and the like. delimiter = ':?' sybil-9.0.0/sybil/parsers/rest/skip.py000066400000000000000000000004531471471302700177500ustar00rootroot00000000000000from ..abstract import AbstractSkipParser from .lexers import DirectiveInCommentLexer class SkipParser(AbstractSkipParser): """ A :any:`Parser` for :ref:`skip ` instructions. """ def __init__(self) -> None: super().__init__([DirectiveInCommentLexer('skip')]) sybil-9.0.0/sybil/parsers/skip.py000066400000000000000000000006031471471302700167700ustar00rootroot00000000000000# THIS MODULE IS FOR BACKWARDS COMPATIBILITY ONLY! from collections.abc import Iterable from sybil import Region, Document from .rest import SkipParser def skip(document: Document) -> Iterable[Region]: """ A parser function to be included when your documentation makes use of :ref:`skipping ` examples in a document. """ return SkipParser()(document) sybil-9.0.0/sybil/py.typed000066400000000000000000000000001471471302700154570ustar00rootroot00000000000000sybil-9.0.0/sybil/python.py000066400000000000000000000022751471471302700156730ustar00rootroot00000000000000import importlib import sys from collections.abc import Iterator from contextlib import contextmanager from types import ModuleType from pathlib import Path @contextmanager def import_cleanup() -> Iterator[None]: """ Clean up the results of importing modules, including the modification of :attr:`sys.path` necessary to do so. 
""" modules = set(sys.modules) path = sys.path.copy() yield for added_module in set(sys.modules) - modules: sys.modules.pop(added_module) sys.path[:] = path importlib.invalidate_caches() INIT_FILE = '__init__.py' def import_path(path: Path) -> ModuleType: container = path while True: container = container.parent if not (container / INIT_FILE).exists(): break relative = path.relative_to(container) if relative.name == INIT_FILE: parts = tuple(relative.parts)[:-1] else: parts = tuple(relative.parts)[:-1]+(relative.stem,) module = '.'.join(parts) try: return importlib.import_module(module) except ImportError as e: raise ImportError( f'{module!r} not importable from {path} as:\n{type(e).__name__}: {e}' ) from None sybil-9.0.0/sybil/region.py000066400000000000000000000071541471471302700156360ustar00rootroot00000000000000from typing import Any, Optional from sybil.typing import Evaluator, LexemeMapping class Lexeme(str): """ Where needed, this can store both the text of the lexeme and it's line offset relative to the line number of the example that contains it. """ def __new__(cls, text: str, offset: int, line_offset: int) -> 'Lexeme': return str.__new__(cls, text) def __init__(self, text: str, offset: int, line_offset: int) -> None: self.text = text self.offset = offset self.line_offset = line_offset def strip_leading_newlines(self) -> 'Lexeme': stripped = self.lstrip('\n') removed = len(self) - len(stripped) return Lexeme(stripped, self.offset + removed, self.line_offset + removed) MAX_REPR_PART_LENGTH = 40 CONTRACTED = '...' class Region: """ Parsers should yield instances of this class for each example they discover in a documentation source file. :param start: The character position at which the example starts in the :class:`~sybil.document.Document`. :param end: The character position at which the example ends in the :class:`~sybil.document.Document`. :param parsed: The parsed version of the example. 
:param evaluator: The callable to use to evaluate this example and check if it is as it should be. """ def __init__( self, start: int, end: int, parsed: Any = None, evaluator: Optional[Evaluator] = None, lexemes: Optional[LexemeMapping] = None, ) -> None: #: The start of this region within the document's :attr:`~sybil.Document.text`. self.start: int = start #: The end of this region within the document's :attr:`~sybil.Document.text`. self.end: int = end #: The parsed version of this region. This only needs to have meaning to #: the :attr:`evaluator`. self.parsed: Any = parsed #: The :any:`Evaluator` for this region. self.evaluator: Optional[Evaluator] = evaluator #: The lexemes extracted from the region. self.lexemes: LexemeMapping = lexemes or {} @staticmethod def trim(text: str) -> str: if len(text) > MAX_REPR_PART_LENGTH: half = int((MAX_REPR_PART_LENGTH + len(CONTRACTED)) / 2) text = text[:half] + CONTRACTED + text[-half:] return text def __repr__(self) -> str: evaluator_text = f' evaluator={self.evaluator!r}' if self.evaluator else '' text = f'<Region start={self.start} end={self.end}{evaluator_text}>' if self.lexemes: text += '\n' for name, lexeme in self.lexemes.items(): if isinstance(lexeme, str): lexeme = self.trim(lexeme) text += f'{name}: {lexeme!r}\n' if self.parsed: parsed_text = self.trim(repr(self.parsed)) text += f'{parsed_text}' if self.parsed or self.lexemes: text += '</Region>' return text def __lt__(self, other: 'Region') -> bool: assert isinstance(other, type(self)), f"{type(other)} not supported for <" assert self.start == other.start # This is where this may happen; if not, something weird is going on return True def adjust(self, lexed: 'Region', lexeme: Lexeme) -> None: """ Adjust the start and end of this region based on the provided :class:`Lexeme` and :class:`Region` that lexeme came from. 
""" self.start += (lexed.start + lexeme.offset) self.end += lexed.start sybil-9.0.0/sybil/sybil.py000066400000000000000000000161761471471302700155010ustar00rootroot00000000000000import inspect from pathlib import Path from collections.abc import Callable, Collection, Mapping, Sequence from typing import Any, Dict, Optional, Type, List, Tuple from .document import Document, PythonDocStringDocument from .example import Example from .typing import Parser DEFAULT_DOCUMENT_TYPES = { None: Document, '.py': PythonDocStringDocument, } class Sybil: """ An object to provide test runner integration for discovering examples in documentation and ensuring they are correct. :param parsers: A sequence of callable :term:`parsers `. :param path: The path in which source files are found, relative to the path of the Python source file in which this class is instantiated. Absolute paths can also be passed. .. note:: This is ignored when using the :ref:`pytest integration `. :param pattern: An optional :func:`pattern ` used to match source files that will be parsed for examples. :param patterns: An optional sequence of :func:`patterns ` used to match source paths that will be parsed for examples. :param exclude: An optional :func:`pattern ` for source file names that will be excluded when looking for examples. :param excludes: An optional sequence of :func:`patterns ` for source paths that will be excluded when looking for examples. :param filenames: An optional collection of file names that, if found anywhere within the root ``path`` or its sub-directories, will be parsed for examples. :param setup: An optional callable that will be called once before any examples from a :class:`~sybil.document.Document` are evaluated. If provided, it is called with the document's :attr:`~sybil.Document.namespace`. :param teardown: An optional callable that will be called after all the examples from a :class:`~sybil.document.Document` have been evaluated. 
If provided, it is called with the document's :attr:`~sybil.Document.namespace`. :param fixtures: An optional sequence of strings specifying the names of fixtures to be requested when using the :ref:`pytest integration `. The fixtures will be inserted into the document's :attr:`~sybil.Document.namespace` before any examples for that document are evaluated. All scopes of fixture are supported. :param encoding: An optional string specifying the encoding to be used when decoding documentation source files. :param document_types: A mapping of file extension to :class:`Document` subclass such that custom evaluation can be performed per document type. :param name: A name to use in test identifiers so that the identifier indicates which :class:`Sybil` that test was discovered by. """ def __init__( self, parsers: Sequence[Parser], pattern: Optional[str] = None, patterns: Sequence[str] = (), exclude: Optional[str] = None, excludes: Sequence[str] = (), filenames: Collection[str] = (), path: str = '.', setup: Optional[Callable[[Dict[str, Any]], None]] = None, teardown: Optional[Callable[[Dict[str, Any]], None]] = None, fixtures: Sequence[str] = (), encoding: str = 'utf-8', document_types: Optional[Mapping[Optional[str], Type[Document]]] = None, name: str = '', ) -> None: self.parsers: Sequence[Parser] = parsers current_frame = inspect.currentframe() calling_frame = current_frame.f_back assert calling_frame is not None, 'Cannot find previous frame, which is weird...' calling_filename = inspect.getframeinfo(calling_frame).filename start_path = Path(calling_filename).parent / path self.path: Path = start_path.absolute() self.patterns = list(patterns) if pattern: self.patterns.append(pattern) self.excludes = list(excludes) if exclude: self.excludes.append(exclude) self.filenames = filenames self.setup: Optional[Callable[[Dict[str, Any]], None]] = setup self.teardown: Optional[Callable[[Dict[str, Any]], None]] = teardown self.fixtures: Tuple[str, ...] 
= tuple(fixtures) self.encoding: str = encoding self.document_types = DEFAULT_DOCUMENT_TYPES.copy() if document_types: self.document_types.update(document_types) self.default_document_type: Type[Document] = self.document_types[None] self.name = name def __repr__(self) -> str: return f'<Sybil path={str(self.path)!r}>' def __add__(self, other: 'Sybil') -> 'SybilCollection': """ :class:`Sybil` instances can be concatenated into a :class:`~sybil.sybil.SybilCollection`. """ assert isinstance(other, Sybil) return SybilCollection((self, other)) def should_parse(self, path: Path) -> bool: try: path = path.relative_to(self.path) except ValueError: return False include = False if any(path.match(p) for p in self.patterns): include = True if path.name in self.filenames: include = True if not include: return False if any(path.match(e) for e in self.excludes): return False return True def parse(self, path: Path) -> Document: type_ = self.document_types.get(path.suffix, self.default_document_type) return type_.parse(str(path), *self.parsers, encoding=self.encoding) def identify(self, example: Example) -> str: sybil_name = f'sybil:{self.name},' if self.name else '' return f'{sybil_name}line:{example.line},column:{example.column}' def pytest(self) -> Callable[[Path, Any], Any]: """ The helper method for when you use :ref:`pytest_integration`. """ from .integration.pytest import pytest_integration return pytest_integration(self) def unittest(self) -> Callable[[Any, Any, Optional[str]], Any]: """ The helper method for when you use :ref:`unitttest_integration`. """ from .integration.unittest import unittest_integration return unittest_integration(self) class SybilCollection(List[Sybil]): """ When :class:`Sybil` instances are concatenated, the collection returned can be used in the same way as a single :class:`Sybil`. This allows multiple configurations to be used in a single test run. """ def pytest(self) -> Callable[[Path, Any], Any]: """ The helper method for when you use :ref:`pytest_integration`. 
""" from .integration.pytest import pytest_integration return pytest_integration(*self) def unittest(self) -> Callable[[Any, Any, Optional[str]], Any]: """ The helper method for when you use :ref:`unitttest_integration`. """ from .integration.unittest import unittest_integration return unittest_integration(*self) sybil-9.0.0/sybil/text.py000066400000000000000000000007321471471302700153320ustar00rootroot00000000000000import re NEWLINE = re.compile("\n") class LineNumberOffsets: def __init__(self, text: str) -> None: self.offsets = { line: match.start()+1 for (line, match) in enumerate(NEWLINE.finditer(text), start=1) } self.offsets[0] = 0 def get(self, line: int, column: int) -> int: """ Return the character offset of the zero based line number and column offset. """ return self.offsets[line] + column sybil-9.0.0/sybil/typing.py000066400000000000000000000013161471471302700156570ustar00rootroot00000000000000from collections.abc import Callable, Iterable from typing import TYPE_CHECKING, Optional, Any, Dict if TYPE_CHECKING: import sybil #: The signature for an :term:`evaluator`. Evaluator = Callable[['sybil.Example'], Optional[str]] #: The signature for a :term:`lexer`. #: Lexers must not set :attr:`~sybil.Region.parsed` or :attr:`~sybil.Region.evaluator` #: on the :class:`~sybil.Region` instances they return. Lexer = Callable[['sybil.Document'], Iterable['sybil.Region']] #: The signature for a :term:`parser`. Parser = Callable[['sybil.Document'], Iterable['sybil.Region']] # This could likely be a TypedDict. #: Mappings used to store lexemes for a :class:`~sybil.Region`. LexemeMapping = Dict[str, Any] sybil-9.0.0/tests/000077500000000000000000000000001471471302700140125ustar00rootroot00000000000000sybil-9.0.0/tests/__init__.py000066400000000000000000000000541471471302700161220ustar00rootroot00000000000000# believe it or not, # this line is a test! 
sybil-9.0.0/tests/functional/000077500000000000000000000000001471471302700161545ustar00rootroot00000000000000sybil-9.0.0/tests/functional/markdown/000077500000000000000000000000001471471302700177765ustar00rootroot00000000000000sybil-9.0.0/tests/functional/markdown/conftest.py000066400000000000000000000004321471471302700221740ustar00rootroot00000000000000from __future__ import print_function from sybil import Sybil from sybil.parsers.markdown import ( PythonCodeBlockParser, SkipParser, ) pytest_collect_file = Sybil( parsers=[ PythonCodeBlockParser(), SkipParser(), ], pattern='*.md', ).pytest() sybil-9.0.0/tests/functional/markdown/doctest.md000066400000000000000000000001751471471302700217700ustar00rootroot00000000000000# DocTest examples A Python REPL / DocTest example using normal way of specifying Python language: ```python >>> 1+1 2 ``` sybil-9.0.0/tests/functional/markdown/python.md000066400000000000000000000010761471471302700216450ustar00rootroot00000000000000# Python examples A Python example using the normal MD way of specifying a language: ```python assert 1 + 1 == 2 ``` Here's the way to do invisible python code blocks: This is an . What about a bullets? - Bullet 1 ```python raise Exception('boom!') ``` - Bullet 2: sybil-9.0.0/tests/functional/markdown/test_docs.py000066400000000000000000000004231471471302700223360ustar00rootroot00000000000000from __future__ import print_function from sybil import Sybil from sybil.parsers.markdown import ( PythonCodeBlockParser, SkipParser, ) load_tests = Sybil( parsers=[ PythonCodeBlockParser(), SkipParser(), ], pattern='*.md', ).unittest() sybil-9.0.0/tests/functional/modules/000077500000000000000000000000001471471302700176245ustar00rootroot00000000000000sybil-9.0.0/tests/functional/modules/a.py000066400000000000000000000002131471471302700204120ustar00rootroot00000000000000def func(txt: str): """ .. 
code-block:: python result = func('a') >>> print(result) aa """ return txt*2 sybil-9.0.0/tests/functional/modules/b.py000066400000000000000000000002461471471302700204210ustar00rootroot00000000000000""" >>> result Traceback (most recent call last): ... NameError: name 'result' is not defined .. code-block:: python from a import func >>> func('b') 'bb' """ sybil-9.0.0/tests/functional/myst/000077500000000000000000000000001471471302700171505ustar00rootroot00000000000000sybil-9.0.0/tests/functional/myst/Makefile000066400000000000000000000014351471471302700206130ustar00rootroot00000000000000# Minimal makefile for Sphinx documentation # # You can set these variables from the command line, and also # from the environment for the first two. SPHINXOPTS ?= SPHINXBUILD ?= sphinx-build SOURCEDIR = . BUILDDIR = _build # Put it first so that "make" without argument is like "make help". help: @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) .PHONY: help Makefile # Catch-all target: route all unknown targets to Sphinx using the new # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
%: Makefile @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) # raise warnings to errors html-strict: @$(SPHINXBUILD) -b html -nW --keep-going "$(SOURCEDIR)" "$(BUILDDIR)/html" $(SPHINXOPTS) $(O) clean: rm -r $(BUILDDIR) sybil-9.0.0/tests/functional/myst/conf.py000066400000000000000000000001271471471302700204470ustar00rootroot00000000000000project = 'MyST Example' extensions = [ "myst_parser", "sphinx.ext.doctest", ] sybil-9.0.0/tests/functional/myst/conftest.py000066400000000000000000000007161471471302700213530ustar00rootroot00000000000000from __future__ import print_function from sybil import Sybil from sybil.parsers.myst import ( DocTestDirectiveParser, PythonCodeBlockParser, SkipParser, ) from sybil.parsers.rest import DocTestDirectiveParser as ReSTDocTestDirectiveParser pytest_collect_file = Sybil( parsers=[ DocTestDirectiveParser(), PythonCodeBlockParser(), ReSTDocTestDirectiveParser(), SkipParser(), ], pattern='*.md', ).pytest() sybil-9.0.0/tests/functional/myst/doctest.md000066400000000000000000000006421471471302700211410ustar00rootroot00000000000000# DocTest examples A Python REPL / DocTest example using normal way of specifying Python language: ```python >>> 1+1 2 ``` A Python REPL / DocTest example using a MyST role: ```{code-block} python >>> 1 + 1 2 ``` A Python REPL / DocTest example using the `{eval-rst}` role and the `.. doctest::` role from `sphinx.ext.doctest`: ```{eval-rst} .. 
doctest:: >>> 1 + 1 3 ``` ```{doctest} >>> 1 + 1 4 ``` sybil-9.0.0/tests/functional/myst/index.md000066400000000000000000000000761471471302700206040ustar00rootroot00000000000000# MyST Example Project ```{toctree} python.md doctest.md ``` sybil-9.0.0/tests/functional/myst/python.md000066400000000000000000000013641471471302700210170ustar00rootroot00000000000000# Python examples A Python example using the normal MD way of specifying a language: ```python assert 1 + 1 == 2 ``` A Python example using a MyST role: ```{code-block} python assert 1 + 1 == 2 ``` Here's one way we could do invisible code blocks: % invisible-code-block: python % % b = 4 % % # ...etc... Here's another way we might be able to do them: This is an . What about a bullet? - Bullet 1 ```python raise Exception('boom!') ``` - Bullet 2: % skip: next sybil-9.0.0/tests/functional/myst/test_docs.py000066400000000000000000000007071471471302700215150ustar00rootroot00000000000000from __future__ import print_function from sybil import Sybil from sybil.parsers.myst import ( DocTestDirectiveParser, PythonCodeBlockParser, SkipParser, ) from sybil.parsers.rest import DocTestDirectiveParser as ReSTDocTestDirectiveParser load_tests = Sybil( parsers=[ DocTestDirectiveParser(), PythonCodeBlockParser(), ReSTDocTestDirectiveParser(), SkipParser(), ], pattern='*.md', ).unittest() sybil-9.0.0/tests/functional/package_and_docs/000077500000000000000000000000001471471302700214015ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/docs/000077500000000000000000000000001471471302700223315ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/docs/use.rst000066400000000000000000000001611471471302700236550ustar00rootroot00000000000000Here's how you use :func:`from`: >>> from parent.child.module_a import foo >>> foo('fred') 'module_a.foo(fred)' 
sybil-9.0.0/tests/functional/package_and_docs/src/000077500000000000000000000000001471471302700221705ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/src/parent/000077500000000000000000000000001471471302700234615ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/src/parent/__init__.py000066400000000000000000000004051471471302700255710ustar00rootroot00000000000000""" This one tests that the first example in a .py is actually evaluated rather than being skipped! >>> print('bad') good """ from .child.module_a import foo def parent_init(text): """ >>> 1==1 True """ return foo('parent_init:'+text) sybil-9.0.0/tests/functional/package_and_docs/src/parent/child/000077500000000000000000000000001471471302700245445ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/src/parent/child/__init__.py000066400000000000000000000000001471471302700266430ustar00rootroot00000000000000sybil-9.0.0/tests/functional/package_and_docs/src/parent/child/module_a.py000066400000000000000000000000621471471302700267010ustar00rootroot00000000000000def foo(name): return f'module_a.foo({name})' sybil-9.0.0/tests/functional/package_and_docs/src/parent/child/module_b.py000066400000000000000000000002531471471302700267040ustar00rootroot00000000000000from parent import parent_init def foo(text): """ >>> parent_init('module_a.foo(parent_init)') 'module_a.foo(parent_init:module_a.foo(parent_init))' """ sybil-9.0.0/tests/functional/package_and_docs/src/parent/module_c.py000066400000000000000000000003011471471302700256140ustar00rootroot00000000000000from . 
import foo def bar(text): """ >>> foo('something') 'module_a.foo(something)' >>> bar('something') 'barmodule_a.foo(something)' """ return 'bar'+foo(text) sybil-9.0.0/tests/functional/pytest/000077500000000000000000000000001471471302700175045ustar00rootroot00000000000000sybil-9.0.0/tests/functional/pytest/conftest.py000066400000000000000000000040151471471302700217030ustar00rootroot00000000000000from __future__ import print_function from functools import partial import re import pytest from sybil import Region, Sybil from sybil.parsers.rest import PythonCodeBlockParser @pytest.fixture(scope="function") def function_fixture(): print('function_fixture setup') yield 'f' print(' function_fixture teardown') @pytest.fixture(scope="class") def class_fixture(): print('class_fixture setup') yield 'c' print('class_fixture teardown') @pytest.fixture(scope="module") def module_fixture(): print('module_fixture setup') yield 'm' print('module_fixture teardown') @pytest.fixture(scope="session") def session_fixture(): print('session_fixture setup') yield 's' print('session_fixture teardown') def check(letter, example): namespace = example.namespace for name in ( 'x', 'session_fixture', 'module_fixture', 'class_fixture', 'function_fixture' ): print(namespace[name], end='') print(end=' ') namespace['x'] += 1 text, expected = example.parsed actual = text.count(letter) if actual != expected: message = '{} count was {} instead of {}'.format( letter, actual, expected ) if letter=='X': raise ValueError(message) return message def parse_for(letter, document): for m in re.finditer(r'(%s+) (\d+) check' % letter, document.text): yield Region(m.start(), m.end(), (m.group(1), int(m.group(2))), partial(check, letter)) def sybil_setup(namespace): print('sybil setup', end=' ') namespace['x'] = 0 def sybil_teardown(namespace): print('sybil teardown', namespace['x']) pytest_collect_file = Sybil( parsers=[ partial(parse_for, 'X'), partial(parse_for, 'Y'), PythonCodeBlockParser(['print_function']) ], 
pattern='*.rst', setup=sybil_setup, teardown=sybil_teardown, fixtures=['function_fixture', 'class_fixture', 'module_fixture', 'session_fixture'] ).pytest() sybil-9.0.0/tests/functional/pytest/fail.rst000066400000000000000000000003311471471302700211460ustar00rootroot00000000000000.. code-block:: python print('x is currently:', x) raise Exception('the start!') XXXX 4 check YYY 2 check XXX 4 check YYY 3 check .. code-block:: python x += 1 if x > 0: raise Exception('boom!') sybil-9.0.0/tests/functional/pytest/pass.rst000066400000000000000000000000641471471302700212040ustar00rootroot00000000000000XXXX 4 check YYY 3 check XXX 3 check YYY 3 check sybil-9.0.0/tests/functional/pytest/pytest.ini000066400000000000000000000000501471471302700215300ustar00rootroot00000000000000[pytest] console_output_style = classic sybil-9.0.0/tests/functional/skips/000077500000000000000000000000001471471302700173055ustar00rootroot00000000000000sybil-9.0.0/tests/functional/skips/skip.rst000066400000000000000000000010251471471302700210030ustar00rootroot00000000000000.. invisible-code-block: python run = [] Let's skips some stuff: .. skip: next After this text is a code block that goes boom, it should be skipped: .. code-block:: python run.append(1) This one should run: .. invisible-code-block: python run.append(2) .. skip: start These should not: .. code-block:: python run.append(3) Nor this one: .. code-block:: python run.append(4) .. skip: end But this one should: .. code-block:: python run.append(5) Let's make sure things worked! 
>>> print(run) [2, 5] sybil-9.0.0/tests/functional/unittest/000077500000000000000000000000001471471302700200335ustar00rootroot00000000000000sybil-9.0.0/tests/functional/unittest/__init__.py000066400000000000000000000000001471471302700221320ustar00rootroot00000000000000sybil-9.0.0/tests/functional/unittest/test_unittest.py000066400000000000000000000020561471471302700233260ustar00rootroot00000000000000from __future__ import print_function import re from functools import partial from sybil import Sybil, Region def check(letter, example): print(example.namespace['x']) example.namespace['x'] += 1 text, expected = example.parsed actual = text.count(letter) if actual != expected: message = '{} count was {} instead of {}'.format( letter, actual, expected ) if letter=='X': raise ValueError(message) return message def parse_for(letter, document): for m in re.finditer(r'(%s+) (\d+) check' % letter, document.text): yield Region(m.start(), m.end(), (m.group(1), int(m.group(2))), partial(check, letter)) def sybil_setup(namespace): print('sybil setup') namespace['x'] = 0 def sybil_teardown(namespace): print('sybil teardown', namespace['x']) load_tests = Sybil( [partial(parse_for, 'X'), partial(parse_for, 'Y')], path='../pytest', pattern='*.rst', setup=sybil_setup, teardown=sybil_teardown ).unittest() sybil-9.0.0/tests/helpers.py000066400000000000000000000210441471471302700160270ustar00rootroot00000000000000import ast import sys from collections.abc import Sequence, Iterable from contextlib import contextmanager from os.path import dirname, join from pathlib import Path from shutil import copytree from tempfile import NamedTemporaryFile from textwrap import dedent from traceback import TracebackException from typing import Optional, Tuple, List, Union from unittest import TextTestRunner, main as unittest_main, SkipTest from pytest import CaptureFixture, ExceptionInfo, main as pytest_main from seedir import seedir from testfixtures import compare from sybil import Sybil from 
sybil.document import Document from sybil.example import Example from sybil.python import import_cleanup from sybil.region import Region from sybil.typing import Parser, Lexer HERE = Path(__file__).parent DOCS = HERE.parent / 'docs' SAMPLE_PATH = HERE / 'samples' def sample_path(name) -> str: return str(SAMPLE_PATH / name) def regions_and_document(name: str, lexer: Lexer) -> Tuple[Document, List[Region]]: path = sample_path(name) document = Document(Path(path).read_text(), path) return document, list(lexer(document)) def lex(name: str, lexer: Lexer) -> List[Region]: return regions_and_document(name, lexer)[1] def region_details( document: Document, regions: Iterable[Region] ) -> List[Tuple[Tuple[str, Union[Region, str]], ...]]: # return a list of tuple of tuples to make failures easier to work through: return [( ('start', document.line_column(region.start)), ('end', document.line_column(region.end)), ('region', region) ) for region in regions] def check_lexed_regions(name: str, lexer: Lexer, *, expected: List[Region]) -> None: document, actual = regions_and_document(name, lexer) compare( expected=region_details(document, expected), actual=region_details(document, actual), ) def lex_text(text: str, lexer: Lexer) -> List[Region]: document = Document(text, 'sample.txt') return list(lexer(document)) def check_lexed_text_regions(text: str, lexer: Lexer, *, expected: List[Region]) -> None: document = Document(text, 'sample.txt') actual =list(lexer(document)) compare( expected=region_details(document, expected), actual=region_details(document, actual), ) def parse(name: str, *parsers: Parser, expected: int) -> Tuple[List[Example], dict]: document = Document.parse(sample_path(name), *parsers) examples = list(document) assert len(examples) == expected, f'{len(examples)} != {expected}: {examples!r}' return examples, document.namespace def check_excinfo(example: Example, excinfo: ExceptionInfo, text: str, *, lineno: int): compare(str(excinfo.value), expected=text) details = 
TracebackException.from_exception(excinfo.value, lookup_lines=False).stack[-1] document = example.document assert details.filename == document.path, f'{details.filename!r} != {document.path!r}' assert details.lineno == lineno, f'{details.lineno} != {lineno}' def check_path(path: str, sybil: Sybil, *, expected: int, expected_skips: Sequence[str] = ()): document = sybil.parse(DOCS / path) examples = list(document) actual_skips = [] for example in examples: try: example.evaluate() except SkipTest as e: actual_skips.append(str(e)) compare(expected, actual=len(examples)) compare(expected=expected_skips, actual=actual_skips) def check_text(text: str, sybil: Sybil): with NamedTemporaryFile() as temp: temp.write(text.encode('ascii')) temp.flush() document = sybil.parse(Path(temp.name)) (example,) = document example.evaluate() return document def check_tree(expected: str, path: str): raw = seedir( DOCS / path, printout=False, first='folders', sort=True, regex=True, exclude_folders=r'\..+|__pycache__' ) actual = '\n'+raw.split('\n', 1)[1] text = compare(expected=expected.strip(), actual=actual.strip(), raises=False) if text: # pragma: no cover text += '\n\nShould be:\n'+actual raise AssertionError(text) FUNCTIONAL_TEST_DIR = join(dirname(__file__), 'functional') PYTEST = 'pytest' UNITTEST = 'unittest' TEST_OUTPUT_TEMPLATES = { PYTEST: '{file}::{sybil}line:{line},column:{column}', UNITTEST: '{file},{sybil}line:{line},column:{column}' } class Finder: def __init__(self, text): self.text = text self.index = 0 def then_find(self, substring): assert substring in self.text[self.index:], self.text[self.index:] self.index = self.text.index(substring, self.index) def assert_present(self, text): assert text in self.text, f'{self.text}\n{self.text!r}' def assert_not_present(self, text): index = self.text.find(text) if index > -1: raise AssertionError('\n'+self.text[index-500:index+500]) def assert_has_run( self, integration: str, file: str, *, sybil: str = '', line: int = 1, column: int 
= 1 ): if sybil: sybil=f'sybil:{sybil},' self.assert_present(TEST_OUTPUT_TEMPLATES[integration].format( sybil=sybil, file=file, line=line, column=column )) class Results: def __init__( self, capsys: CaptureFixture[str], total: int, errors: int = 0, failures: int = 0, return_code: Optional[int] = None, ): self.total = total self.errors = errors self.failures = failures self.return_code = return_code out, err = capsys.readouterr() assert err == '', err self.out = Finder(out) def functional_sample(name: str) -> Path: return Path(FUNCTIONAL_TEST_DIR) / name def clone_functional_sample(name: str, target: Path) -> Path: source = functional_sample(name) dest = target / name copytree(str(source), str(dest)) return dest def run_pytest(capsys: CaptureFixture[str], path: Path) -> Results: class CollectResults: def pytest_sessionfinish(self, session): self.session = session results = CollectResults() return_code = pytest_main(['-vvs', str(path), '-p', 'no:doctest'], plugins=[results]) return Results( capsys, results.session.testscollected, failures=results.session.testsfailed, return_code=return_code ) def run_unittest(capsys: CaptureFixture[str], path: Path) -> Results: runner = TextTestRunner(verbosity=2, stream=sys.stdout) main = unittest_main( exit=False, module=None, testRunner=runner, argv=['x', 'discover', '-v', '-t', str(path), '-s', str(path)] ) return Results( capsys, main.result.testsRun, errors=len(main.result.errors), failures=len(main.result.failures), ) RUNNERS = { PYTEST: run_pytest, UNITTEST: run_unittest, } def run(capsys: CaptureFixture[str], integration: str, path: Path) -> Results: return RUNNERS[integration](capsys, path) CONFIG_TEMPLATE = """ from sybil import Sybil from sybil.parsers.rest import PythonCodeBlockParser from sybil.parsers.rest import DocTestParser from sybil.parsers.rest import SkipParser {assigned_name} = Sybil( {params} ).{integration}() """ CONFIG_FILENAMES = { PYTEST: 'conftest.py', UNITTEST: 'test_docs.py' } CONFIG_ASSIGNED_NAME = { 
PYTEST: 'pytest_collect_file', UNITTEST: 'load_tests' } def write_config(tmp_path: Path, integration: str, template=CONFIG_TEMPLATE, **params: str): import sys sys.modules.pop('test_docs', None) params_ = {'parsers': '[DocTestParser()]'} params_.update(params) config = dedent(template).format( assigned_name=CONFIG_ASSIGNED_NAME[integration], params='\n'.join([f' {name}={value},' for name, value in params_.items()]), integration=integration, ) (tmp_path / CONFIG_FILENAMES[integration]).write_text(config, 'ascii') def write_doctest(tmp_path: Path, *path: str) -> Path: file_path = tmp_path.joinpath(*path) file_path.parent.mkdir(parents=True, exist_ok=True) file_path.write_text(f">>> assert '{file_path.name}' == '{file_path.name}'") return file_path @contextmanager def add_to_python_path(path: Path): with import_cleanup(): sys.path.append(str(path)) yield sys.path.pop() def ast_docstrings(python_source_code: str) -> Sequence[str]: for node in ast.walk(ast.parse(python_source_code)): try: docstring = ast.get_docstring(node, clean=False) except TypeError: pass else: if docstring: yield docstring sybil-9.0.0/tests/samples/000077500000000000000000000000001471471302700154565ustar00rootroot00000000000000sybil-9.0.0/tests/samples/capture.txt000066400000000000000000000012661471471302700176670ustar00rootroot00000000000000A simple example:: root.txt subdir/ subdir/file.txt subdir/logs/ .. -> expected_listing Respecting indentation ---------------------- The text captured is determined by the indentation of the capture directive. :: First level of indentation. Second level of indentation. Third level of indentation. .. -> foo Nested directives ----------------- If two capture directives are nested, the outer one is effective. :: First level of indentation. Second level of indentation. Third level of indentation. .. -> foo .. -> bar That holds true even if more dashes are included:: example .. 
--> another sybil-9.0.0/tests/samples/capture_bad_indent1.txt000066400000000000000000000001331471471302700221070ustar00rootroot00000000000000The directive here is beyond the indentation of the block:: Block .. -> foo sybil-9.0.0/tests/samples/capture_bad_indent2.txt000066400000000000000000000001301471471302700221050ustar00rootroot00000000000000The directive here is at the same indentation as the block:: Block .. -> foo sybil-9.0.0/tests/samples/capture_codeblock.txt000066400000000000000000000001501471471302700216630ustar00rootroot00000000000000Here's some JSON: .. code-block:: json { "a key": "value", "b key": 42 } .. --> json sybil-9.0.0/tests/samples/clear.txt000066400000000000000000000002241471471302700173030ustar00rootroot00000000000000>>> x = 1 >>> x 1 Now let's start a new test: .. clear-namespace >>> x Traceback (most recent call last): ... NameError: name 'x' is not defined sybil-9.0.0/tests/samples/code.rst000066400000000000000000000000611471471302700171170ustar00rootroot00000000000000This is a code block: .. code:: python y = 1 sybil-9.0.0/tests/samples/codeblock-subclassing.txt000066400000000000000000000001511471471302700224540ustar00rootroot00000000000000.. code-block:: python :ignore: raise Exception('Boom 2') .. code-block:: python assert 1 == 1 sybil-9.0.0/tests/samples/codeblock.txt000066400000000000000000000017261471471302700201520ustar00rootroot00000000000000This is a code block: .. code-block:: python y += 1 After this text is a code block that goes boom: .. code-block:: python raise Exception('boom!') Now we have an invisible code block, great for setting things up or checking stuff within a doc: .. invisible-code-block: python z += 1 This paranoidly checks that we can use binary and unicode literals: .. code-block:: python bin = b'x' uni = u'x' - Here's a code block that should still be found!: .. code-block:: python class NoVars: __slots__ = ['x'] This one has some text after it that also forms part of the bullet. - Another bullet: .. 
code-block:: python define_this = 1 - A following bullet straight away! - A code block in a non-python language: .. code-block:: lolcode HAI CAN HAS STDIO? VISIBLE "HAI WORLD!" KTHXBYE - Here's another code block that should still be found!: .. code-block:: python class YesVars: __slots__ = ['x'] sybil-9.0.0/tests/samples/codeblock_future_imports.txt000066400000000000000000000004411471471302700233120ustar00rootroot00000000000000.. invisible-code-block: python raise Exception('Boom 1') More likely is one down here: .. code-block:: python raise Exception('Boom 2') This will keep working but not be an effective test once PEP 563 finally lands: .. code-block:: python def foo(x: str): print(x) sybil-9.0.0/tests/samples/codeblock_lolcode.txt000066400000000000000000000001241471471302700216420ustar00rootroot00000000000000.. code-block:: lolcode HAI Some text here. .. code-block:: lolcode KTHXBYE sybil-9.0.0/tests/samples/codeblock_with_options.txt000066400000000000000000000004471471471302700227570ustar00rootroot00000000000000More likely is one down here: .. code-block:: python :caption: A cool example raise Exception('Boom 1') This will keep working but not be an effective test once PEP 563 finally lands: .. code-block:: python :caption: A cool example :another: option raise Exception('Boom 2') sybil-9.0.0/tests/samples/comments.py000066400000000000000000000000431471471302700176520ustar00rootroot00000000000000# XXXX 4 check """ YYY 3 check """ sybil-9.0.0/tests/samples/docstrings.py000066400000000000000000000016531471471302700202140ustar00rootroot00000000000000def r_prefixed_docstring(): r""" Wat? Why?! """ def function_with_codeblock_in_middle(text): """ My comment .. code-block:: python function_with_codeblock_in_middle("Hello World") Some more documentation. """ assert text == 'Hello World' def function_with_single_line_codeblock_at_end(text): """ My comment .. 
code-block:: python function_with_single_line_codeblock_at_end("Hello World") """ assert text == 'Hello World' def function_with_multi_line_codeblock_at_end(text): """ My comment .. code-block:: python function_with_multi_line_codeblock_at_end("Hello") function_with_multi_line_codeblock_at_end("World") """ assert text in ('Hello', 'World') def decorator(fn): """ Example: >>> def decorated(): ... \"\"\"docstring of a decorated function\"\"\" """ def ellipsis_as_body(): ... sybil-9.0.0/tests/samples/doctest.txt000066400000000000000000000004441471471302700176660ustar00rootroot00000000000000This is some documentation. >>> y = 1 >>> print('here is an example') here is an example This is some more documentation. >>> x = [ ... 1, 2, 3 ... ] Here's an exception with a traceback: >>> y = 2 >>> raise Exception('uh oh') Traceback (most recent call last): ... Exception: uh oh sybil-9.0.0/tests/samples/doctest_directive.txt000066400000000000000000000005421471471302700217230ustar00rootroot00000000000000Who knew that you could have a doctest role? You have to make sure to enable "sphinx.ext.doctest"... .. doctest:: >>> 1 + 1 2 This is what it looks like when output doesn't match expectations: .. doctest:: >>> 1 + 1 Unexpected! This is what it looks like when an exception is raised: .. doctest:: >>> raise Exception('boom!') sybil-9.0.0/tests/samples/doctest_fail.txt000066400000000000000000000001541471471302700206570ustar00rootroot00000000000000>>> print("where's my output?") Not my output Here's an exception happening: >>> raise Exception('boom!') sybil-9.0.0/tests/samples/doctest_irrelevant_tabs.txt000066400000000000000000000000461471471302700231300ustar00rootroot00000000000000These tabs don't matter. 
>>> 1 + 1 2 sybil-9.0.0/tests/samples/doctest_literals.txt000066400000000000000000000003021471471302700215560ustar00rootroot00000000000000>>> repr(b'foo') "b'foo'" >>> repr(u'foo') "'foo'" >>> repr(b"'") 'b"\'"' >>> repr(u"'") '"\'"' >>> raise Exception(repr(u'uh oh')) Traceback (most recent call last): ... Exception: 'uh oh' sybil-9.0.0/tests/samples/doctest_min_indent.txt000066400000000000000000000000541471471302700220670ustar00rootroot00000000000000 Just for the coverage: >>> True True sybil-9.0.0/tests/samples/doctest_rest_nested_in_md.md000066400000000000000000000000641471471302700232120ustar00rootroot00000000000000```{eval-rst} .. doctest:: >>> 1 + 1 3 ``` sybil-9.0.0/tests/samples/doctest_tabs.txt000066400000000000000000000001751471471302700207000ustar00rootroot00000000000000>>> handler = SummarisingLogger('from@example.com',('to@example.com',), ... username='auser',password='theirpassword') sybil-9.0.0/tests/samples/doctest_unicode.txt000066400000000000000000000000331471471302700213660ustar00rootroot00000000000000>>> print("├─") ├─ sybil-9.0.0/tests/samples/lexing-directives.txt000066400000000000000000000041331471471302700216450ustar00rootroot00000000000000This one has arguments a a "title" and then the body: .. note:: This is a note admonition. This is the second line of the first paragraph. - The note contains all indented body elements following. - It includes this bullet list. .. admonition:: And, by the way... You can make up your own admonition too. .. sample:: This directive has no arguments, just a body. An image with no body followed by one that has params and a body: .. image:: picture.png .. image:: picture.jpeg :height: 100px :width: 200 px :scale: 50 % :alt: alternate text :align: right .. figure:: picture.png :scale: 50 % :alt: map to buried treasure This is the caption of the figure (a simple paragraph). The legend consists of all elements after the caption. 
In this case, the legend consists of this paragraph and the following table: +-----------------------+-----------------------+ | Symbol | Meaning | +=======================+=======================+ | .. image:: tent.png | Campground | +-----------------------+-----------------------+ | .. image:: waves.png | Lake | +-----------------------+-----------------------+ | .. image:: peak.png | Mountain | +-----------------------+-----------------------+ .. topic:: Topic Title Subsequent indented lines comprise the body of the topic, and are interpreted as body elements. Now a topic with a class, as used by testfixtures: .. topic:: example.cfg :class: read-file :: [A Section] dir = frob Another example: .. sidebar:: Optional Sidebar Title :subtitle: Optional Sidebar Subtitle Subsequent indented lines comprise the body of the sidebar, and are interpreted as body elements. Two directives next to each other: .. skip:: next .. code-block:: python run.append(1) The following topic ens with two lines of whitespace, which is important: .. topic:: example.cfg :class: read-file :: [A Section] dir = frob .. config parser writes whitespace at the end, be careful when testing! sybil-9.0.0/tests/samples/lexing-fail.txt000066400000000000000000000000121471471302700204070ustar00rootroot00000000000000START EDN sybil-9.0.0/tests/samples/lexing-indented-block.txt000066400000000000000000000000451471471302700223640ustar00rootroot00000000000000 START line 1 line 2 END sybil-9.0.0/tests/samples/lexing-nested-directives.txt000066400000000000000000000014201471471302700231210ustar00rootroot00000000000000This one has arguments a a "title" and then the body: .. note:: This is a note admonition. This is the second line of the first paragraph. .. admonition:: And, by the way... You can make up your own admonition too. An image with no body followed by one that has params and a body: .. image:: picture.png .. 
image:: picture.jpeg :height: 100px :width: 200 px :scale: 50 % :alt: alternate text :align: right .. figure:: picture.png :scale: 50 % :alt: map to buried treasure This is the caption of the figure (a simple paragraph). .. topic:: Topic Title Subsequent indented lines comprise the body of the topic, and are interpreted as body elements. sybil-9.0.0/tests/samples/markdown-fenced-code-block.md000066400000000000000000000006451471471302700230510ustar00rootroot00000000000000backticks: ``` < > ``` tildes: ~~~ < > ~~~ Fewer than three backticks is not enough: `` foo `` The closing code fence must use the same character as the opening fence: ``` aaa ~~~ ``` The closing code fence must be at least as long as the opening fence: ```` aaa ``` `````` Nested: ~~~~ ~~~ aaa ~~~ ~~~~ Can't mix chars: ~`~ foo ~`~ This one gets closed by the end of document: ``` some stuff here ~~~ sybil-9.0.0/tests/samples/myst-clear.md000066400000000000000000000002571471471302700200640ustar00rootroot00000000000000```python >>> x = 1 >>> x 1 ``` Now let's start a new test: % clear-namespace ```python >>> x Traceback (most recent call last): ... NameError: name 'x' is not defined ``` sybil-9.0.0/tests/samples/myst-code.md000066400000000000000000000001021471471302700176750ustar00rootroot00000000000000This is a code block: ```{code} python assert 1 + 1 == 2 ``` sybil-9.0.0/tests/samples/myst-codeblock-blank-lines-indented.md000066400000000000000000000002471471471302700247070ustar00rootroot00000000000000 This is a code block: ```python y = 0 # now a blank line: y += 1 # two blank lines: assert not y, 'Boom!' ``` sybil-9.0.0/tests/samples/myst-codeblock-blank-lines.md000066400000000000000000000002131471471302700231100ustar00rootroot00000000000000This is a code block: ```python y = 0 # now a blank line: y += 1 # two blank lines: assert not y, 'Boom!' 
``` sybil-9.0.0/tests/samples/myst-codeblock-doctests-end-of-fenced-codeblocks.md000066400000000000000000000000631471471302700272420ustar00rootroot00000000000000```python >>> b = 2 ``` ```python >>> 1 + 1 2 ``` sybil-9.0.0/tests/samples/myst-codeblock-future-imports.md000066400000000000000000000004421471471302700237220ustar00rootroot00000000000000 More likely is one down here: ```python raise Exception('Boom 2') ``` This will keep working but not be an effective test once PEP 563 finally lands: ```{code-block} python def foo(x: str): print(x) ``` sybil-9.0.0/tests/samples/myst-codeblock-lolcode.md000066400000000000000000000001151471471302700223330ustar00rootroot00000000000000```lolcode HAI ``` Some text here. ```{code-block} lolcode KTHXBYE ``` sybil-9.0.0/tests/samples/myst-codeblock-subclassing.md000066400000000000000000000002201471471302700232240ustar00rootroot00000000000000```{code-block} python :ignore: raise Exception('Boom 2') ``` ```{code-block} python assert 1 == 1 ``` ```python >>> assert 1 == 1 ``` sybil-9.0.0/tests/samples/myst-codeblock.md000066400000000000000000000021341471471302700207170ustar00rootroot00000000000000This is a code block: ```python y += 1 ``` After this text is a code block that goes boom: ```{code-block} python raise Exception('boom!') ``` Now we have an invisible code block, great for setting things up or checking stuff within a doc: % invisible-code-block: python % % z += 1 [__init__.py](..%2F..%2Fsybil%2Fparsers%2Fmyst%2F__init__.py) This paranoidly checks that we can use binary and unicode literals: - Here's a code block that should still be found!: ```python class NoVars: __slots__ = ['x'] ``` This one has some text after it that also forms part of the bullet. - Another bullet: ```{code-block} python define_this = 1 ``` - A following bullet straight away! - A code block in a non-python language: ```lolcode HAI CAN HAS STDIO? VISIBLE "HAI WORLD!" 
KTHXBYE ``` - Here's another code block that should still be found, even though it's at the end of the document, so don't add more after it!: ```python class YesVars: __slots__ = ['x'] ``` sybil-9.0.0/tests/samples/myst-complicated-nesting.md000066400000000000000000000025001471471302700227200ustar00rootroot00000000000000# {py:mod}`bytewax.connectors.demo` ```{py:module} bytewax.connectors.demo ``` ```{autodoc2-docstring} bytewax.connectors.demo :parser: myst :allowtitles: ``` ## Data ````{py:data} X :canonical: bytewax.connectors.demo.X :type: typing.TypeVar ```{autodoc2-docstring} bytewax.connectors.demo.X :parser: myst ``` ```` ## Classes `````{py:class} RandomMetricSource(metric_name: str, interval: datetime.timedelta = timedelta(seconds=0.7), count: int = sys.maxsize, next_random: typing.Callable[[], float] = lambda: random.randrange(0, 10)) :canonical: bytewax.connectors.demo.RandomMetricSource :Bases: - {py:obj}`~bytewax.inputs.FixedPartitionedSource``[`{py:obj}`~typing.Tuple``[`{py:obj}`~str``, `{py:obj}`~float``], `{py:obj}`~bytewax.connectors.demo._RandomMetricState``]` ```{autodoc2-docstring} bytewax.connectors.demo.RandomMetricSource :parser: myst ``` ```{rubric} Initialization ``` ```{autodoc2-docstring} bytewax.connectors.demo.RandomMetricSource.__init__ :parser: myst ``` ````{py:method} list_parts() -> typing.List[str] :canonical: bytewax.connectors.demo.RandomMetricSource.list_parts ```` ````{py:method} build_part(now: datetime.datetime, for_part: str, resume_state: typing.Optional[bytewax.connectors.demo._RandomMetricState]) :canonical: bytewax.connectors.demo.RandomMetricSource.build_part ```` ````` sybil-9.0.0/tests/samples/myst-directive-nested.md000066400000000000000000000003441471471302700222310ustar00rootroot00000000000000````{note} The warning block will be properly-parsed ```{warning} Here's my warning ``` But the next block will be parsed as raw text ```{warning} Here's my raw text warning that isn't parsed... 
``` ```` sybil-9.0.0/tests/samples/myst-directive-no-trailing-newline.md000066400000000000000000000001041471471302700246230ustar00rootroot00000000000000# Integrations ```{toctree} :maxdepth: 1 flask pyramid custom ``` sybil-9.0.0/tests/samples/myst-doctest-fail.md000066400000000000000000000002011471471302700213410ustar00rootroot00000000000000```python >>> print("where's my output?") Not my output ``` ```{doctest} >>> print("where's my output?") Also not my output ``` sybil-9.0.0/tests/samples/myst-doctest-use-rest.md000066400000000000000000000010071471471302700222020ustar00rootroot00000000000000# DocTest examples A Python REPL / DocTest example using normal way of specifying Python language: ```python >>> x = 1+1 >>> x 2 ``` A Python REPL / DocTest example using a MyST role: ```{code-block} python >>> x += 1; x 3 ``` A Python REPL / DocTest example using the `{eval-rst}` role and the `.. doctest::` role from `sphinx.ext.doctest`: ```{eval-rst} .. doctest:: >>> 1 + 1 3 ``` ```{doctest} >>> y = 2 >>> raise Exception('uh oh') Traceback (most recent call last): ... Exception: uh oh ``` sybil-9.0.0/tests/samples/myst-doctest.md000066400000000000000000000011461471471302700204410ustar00rootroot00000000000000# DocTest examples A Python REPL / DocTest example using normal way of specifying Python language: ```python >>> x = 1+1 >>> x 2 ``` A Python REPL / DocTest example using a MyST directive: ```{code-block} python >>> x += 1; x 3 ``` A Python REPL / DocTest example using the `{eval-rst}` role and the `.. doctest::` role from `sphinx.ext.doctest`: ```{eval-rst} .. doctest:: >>> 1 + 1 3 ``` ```{doctest} >>> y = 2 >>> raise Exception('uh oh') Traceback (most recent call last): ... 
Exception: uh oh ``` Normal pass followed by a mismatch with expected: ```{doctest} >>> y == 2 True >>> y 3 ``` sybil-9.0.0/tests/samples/myst-lexers.md000066400000000000000000000032471471471302700203020ustar00rootroot00000000000000Here's a simple fenced code block: ```python >>> 1+1 2 ``` Here's a fenced code block forming a MyST role: ```{code-block} python >>> 1 + 1 3 ``` Here's a fleshed out MyST directive: ```{directivename} arguments --- key1: val1 key2: val2 --- This is directive content ``` Here's the eval-rst role with a nexted Shinx role: ```{eval-rst} .. doctest:: >>> 1 + 1 4 ``` Here's an example of one way we could do "invisible directives": % invisible-code-block: python % % b = 5 % % ...etc... This is the same style, but indented and is parsed out for pragmatic reasons: % code-block: py % % b = 6 % ...etc... % Here's another way we might be able to do them: This is the same style, but indented and is parsed out for pragmatic reasons: This is an . - This one is in a bullet list so should be picked up (typo deliberate!). ```pthon assert 1 + 1 == 2 ``` - Here's one indented because it's in a bullet list: - Directives can also be indented for the same reason: ```{foo} bar --- key1: val1 --- This, too, is a directive content ``` sybil-9.0.0/tests/samples/myst-lexing-directives.md000066400000000000000000000042461471471302700224250ustar00rootroot00000000000000This one has arguments a a "title" and then the body: ```{note} This is a note admonition. This is the second line of the first paragraph. - The note contains all indented body elements following. - It includes this bullet list. ``` ```{admonition} And, by the way... You can make up your own admonition too. ``` ```{sample} This directive has no arguments, just a body. 
``` An image with no body followed by one that has params and a body: ```{image} picture.png ``` ```{image} picture.jpeg :height: 100px :width:200 px :scale: 50 % :alt: alternate text :align: right ``` ```{figure} picture.png --- scale: 50 % alt: map to buried treasure --- This is the caption of the figure (a simple paragraph). The legend consists of all elements after the caption. In this case, the legend consists of this paragraph and the following table: +-----------------------+-----------------------+ | Symbol | Meaning | +=======================+=======================+ | .. image:: tent.png | Campground | +-----------------------+-----------------------+ | .. image:: waves.png | Lake | +-----------------------+-----------------------+ | .. image:: peak.png | Mountain | +-----------------------+-----------------------+ ``` ```{topic} Topic Title Subsequent indented lines comprise the body of the topic, and are interpreted as body elements. ``` Now a topic with a class, as used by testfixtures: ```{topic} example.cfg --- class: read-file --- :: [A Section] dir = frob ``` Another example: ```{sidebar} Optional Sidebar Title --- subtitle: Optional Sidebar Subtitle --- Subsequent indented lines comprise the body of the sidebar, and are interpreted as body elements. ``` ```{code-block} python --- lineno-start: 10 emphasize-lines: 1, 3 caption: | This is my multi-line caption. It is *pretty nifty* ;-) --- a = 2 print('my 1st line') print(f'my {a}nd line') ``` ```{eval-rst} .. figure:: img/fun-fish.png :width: 100px :name: rst-fun-fish Party time! 
A reference from inside: :ref:`rst-fun-fish` A reference from outside: :ref:`syntax/directives/parsing` ``` sybil-9.0.0/tests/samples/myst-skip.md000066400000000000000000000006221471471302700177400ustar00rootroot00000000000000```python run = [] ``` Let's skips some stuff: % skip: next After this text is a code block that goes boom, it should be skipped: ```python run.append(1) ``` This one should run: ```python run.append(2) ``` % skip: start These should not: ```python run.append(3) ``` Nor this one: ```python run.append(4) ``` % skip: end But this one should: ```python run.append(5) ``` sybil-9.0.0/tests/samples/myst-sourcecode.md000066400000000000000000000001101471471302700211150ustar00rootroot00000000000000This is a code block: ```{sourcecode} python assert 1 + 1 == 2 ``` sybil-9.0.0/tests/samples/nested-evaluators.txt000066400000000000000000000002131471471302700216600ustar00rootroot00000000000000example-0 install-1-passthrough example-2 install-3-all example-4 install-5-even example-6 example-7 remove-5 example-8 remove-3 example-9 sybil-9.0.0/tests/samples/protocol-typing.rst000066400000000000000000000010261471471302700213600ustar00rootroot00000000000000.. code-block:: python from typing import Protocol, runtime_checkable @runtime_checkable class NamedValue(Protocol): """Interface for an object that has a name and a value.""" value: float name: str some text .. 
code-block:: python class NamedValueClass1: def __init__(self, name: str, value: float): self.name = name self.value = value v = NamedValueClass1("foo", 1.0) :: >>> isinstance(v, NamedValue) True This above should be ``True`` sybil-9.0.0/tests/samples/sample1.txt000066400000000000000000000000321471471302700175540ustar00rootroot00000000000000XXXX 4 check YYY 3 check sybil-9.0.0/tests/samples/sample2.txt000066400000000000000000000000311471471302700175540ustar00rootroot00000000000000XXX 4 check YYY 3 check sybil-9.0.0/tests/samples/skip-conditional-bad.txt000066400000000000000000000002631471471302700222130ustar00rootroot00000000000000Bad action: .. skip: lolwut Condition on end: .. skip: start .. skip: end if(1 > 2) Malformed if: .. skip: next if(sys.version_info < (3, 0), reason="only true on python 3" sybil-9.0.0/tests/samples/skip-conditional-edges.txt000066400000000000000000000004051471471302700225520ustar00rootroot00000000000000.. skip: next if(True, reason="skip 1") >>> raise Exception('should not run!') >>> run.append(1) .. skip: start if(False, reason="skip 2") These should both run: >>> run.append(2) >>> run.append(3) .. skip: end This should also run: >>> run.append(4) sybil-9.0.0/tests/samples/skip-conditional.txt000066400000000000000000000007471471471302700214760ustar00rootroot00000000000000>>> result.append('start') Default reason: .. skip: next if(2 > 1) Should not run: >>> result.append('bad 1') >>> result.append('good 1') .. skip: start if(2 > 1, reason='foo') >>> result.append('bad 2') >>> result.append('bad 3') .. skip: end >>> result.append('good 2') Here's a really extreme example of computing the reason from stuff in the namespace: .. skip: next if (True, 'good reason' if result[-1] == 'good 2' else 'bad reason' ) >>> result.append('bad 4') sybil-9.0.0/tests/samples/skip-just-end.txt000066400000000000000000000001171471471302700207130ustar00rootroot00000000000000>>> result.append('good') .. 
skip: end Should run: >>> result.append('bad') sybil-9.0.0/tests/samples/skip-malformed-arguments.txt000066400000000000000000000000551471471302700231340ustar00rootroot00000000000000.. skip<: The above arguments are invalid sybil-9.0.0/tests/samples/skip-next-next.txt000066400000000000000000000001231471471302700211110ustar00rootroot00000000000000.. skip: next if(False) >>> result.append(1) .. skip: next >>> result.append(2) sybil-9.0.0/tests/samples/skip-start-next.txt000066400000000000000000000003171471471302700212750ustar00rootroot00000000000000.. skip: start Should not run: >>> result.append('bad 1') .. skip: next >>> result.append('bad 2') Also should not run: >>> result.append('bad 3') .. skip: end Should run: >>> result.append('good') sybil-9.0.0/tests/samples/skip-start-start.txt000066400000000000000000000003201471471302700214460ustar00rootroot00000000000000.. skip: start Should not run: >>> result.append('bad 1') .. skip: start >>> result.append('bad 2') Also should not run: >>> result.append('bad 3') .. skip: end Should run: >>> result.append('good') sybil-9.0.0/tests/samples/skip.txt000066400000000000000000000006251471471302700171700ustar00rootroot00000000000000.. invisible-code-block: python run = [] Let's skips some stuff: .. skip: next .. code-block:: python run.append(1) This one should run: .. invisible-code-block: python run.append(2) .. skip: start These should not: .. code-block:: python run.append(3) Nor this one: .. code-block:: python run.append(4) .. skip: end But this one should: .. code-block:: python run.append(5) sybil-9.0.0/tests/samples/sourcecode.rst000066400000000000000000000000671471471302700203460ustar00rootroot00000000000000This is a code block: .. 
sourcecode:: python y = 1 sybil-9.0.0/tests/test_capture.py000066400000000000000000000032131471471302700170650ustar00rootroot00000000000000import json import pytest from sybil import Document from sybil.parsers.rest import CaptureParser from tests.helpers import sample_path, parse def test_basic(): examples, namespace = parse('capture.txt', CaptureParser(), expected=4) examples[0].evaluate() assert namespace['expected_listing'] == ( 'root.txt\n' 'subdir/\n' 'subdir/file.txt\n' 'subdir/logs/\n' ) examples[1].evaluate() assert namespace['foo'] == 'Third level of indentation.\n' examples[2].evaluate() assert namespace['bar'] == ( 'Second level of indentation.\n\n' ' Third level of indentation.\n\n.. -> foo\n' ) examples[3].evaluate() assert namespace['another'] == ( 'example\n' ) def test_directive_indent_beyond_block(): path = sample_path('capture_bad_indent1.txt') with pytest.raises(ValueError) as excinfo: Document.parse(path, CaptureParser()) assert str(excinfo.value) == ( "couldn't find the start of the block to match ' .. -> foo' " f"on line 5 of {path}" ) def test_directive_indent_equal_to_block(): path = sample_path('capture_bad_indent2.txt') with pytest.raises(ValueError) as excinfo: Document.parse(path, CaptureParser()) assert str(excinfo.value) == ( "couldn't find the start of the block to match ' .. 
-> foo' " f"on line 5 of {path}" ) def test_capture_codeblock(): examples, namespace = parse('capture_codeblock.txt', CaptureParser(), expected=1) examples[0].evaluate() assert json.loads(namespace['json']) == {"a key": "value", "b key": 42} sybil-9.0.0/tests/test_clear.py000066400000000000000000000004671471471302700165200ustar00rootroot00000000000000from sybil.parsers.rest import ClearNamespaceParser, DocTestParser from .helpers import parse def test_basic(): examples, namespace = parse('clear.txt', DocTestParser(), ClearNamespaceParser(), expected=4) for example in examples: example.evaluate() assert 'x' not in namespace, namespace sybil-9.0.0/tests/test_codeblock.py000066400000000000000000000137701471471302700173600ustar00rootroot00000000000000import __future__ from pathlib import Path import pytest from testfixtures import compare from sybil import Example, Sybil from sybil.document import Document from sybil.parsers.codeblock import PythonCodeBlockParser, CodeBlockParser from sybil.parsers.rest import DocTestParser from .helpers import check_excinfo, parse, sample_path, check_path, SAMPLE_PATH, add_to_python_path def test_basic(): examples, namespace = parse('codeblock.txt', PythonCodeBlockParser(), expected=7) namespace['y'] = namespace['z'] = 0 assert examples[0].evaluate() is None assert namespace['y'] == 1 assert namespace['z'] == 0 with pytest.raises(Exception) as excinfo: examples[1].evaluate() compare(examples[1].parsed, expected="raise Exception('boom!')\n", show_whitespace=True) assert examples[1].line == 9 check_excinfo(examples[1], excinfo, 'boom!', lineno=11) assert examples[2].evaluate() is None assert namespace['y'] == 1 assert namespace['z'] == 1 assert examples[3].evaluate() is None assert namespace['bin'] == b'x' assert namespace['uni'] == u'x' assert examples[4].evaluate() is None assert 'NoVars' in namespace assert examples[5].evaluate() is None assert namespace['define_this'] == 1 assert examples[6].evaluate() is None assert 'YesVars' in 
namespace assert '__builtins__' not in namespace def test_other_language_composition_pass(): def oh_hai(example): assert isinstance(example, Example) assert 'HAI' in example.parsed parser = CodeBlockParser(language='lolcode', evaluator=oh_hai) examples, namespace = parse('codeblock.txt', parser, expected=1) assert examples[0].evaluate() is None def test_other_language_composition_fail(): def oh_noez(example): if 'KTHXBYE' in example.parsed: raise ValueError('oh noez') parser = CodeBlockParser(language='lolcode', evaluator=oh_noez) examples, namespace = parse('codeblock.txt', parser, expected=1) with pytest.raises(ValueError): examples[0].evaluate() def test_other_language_no_evaluator(): parser = CodeBlockParser('foo') with pytest.raises(NotImplementedError): parser.evaluate(...) class LolCodeCodeBlockParser(CodeBlockParser): language = 'lolcode' def evaluate(self, example: Example): if example.parsed != 'HAI\n': raise ValueError(repr(example.parsed)) def test_other_language_inheritance(): examples, namespace = parse('codeblock_lolcode.txt', LolCodeCodeBlockParser(), expected=2) examples[0].evaluate() with pytest.raises(ValueError) as excinfo: examples[1].evaluate() assert str(excinfo.value) == "'KTHXBYE'" class IgnoringPythonCodeBlockParser(PythonCodeBlockParser): def __call__(self, document): for region in super().__call__(document): options = region.lexemes.get('options') if options and 'ignore' in options: region.evaluator = None yield region def test_other_functionality_inheritance(): examples, namespace = parse( 'codeblock-subclassing.txt', IgnoringPythonCodeBlockParser(), expected=2 ) examples[0].evaluate() examples[1].evaluate() def future_import_checks(*future_imports): parser = PythonCodeBlockParser(future_imports) examples, namespace = parse('codeblock_future_imports.txt', parser, expected=3) with pytest.raises(Exception) as excinfo: examples[0].evaluate() # check the line number of the first block, which is the hardest case: check_excinfo(examples[0], 
excinfo, 'Boom 1', lineno=3) with pytest.raises(Exception) as excinfo: examples[1].evaluate() # check the line number of the second block: check_excinfo(examples[1], excinfo, 'Boom 2', lineno=9) examples[2].evaluate() # check the line number of the third block: assert namespace['foo'].__code__.co_firstlineno == 15 return namespace['foo'] def test_no_future_imports(): future_import_checks() def test_single_future_import(): future_import_checks('barry_as_FLUFL') def test_multiple_future_imports(): future_import_checks('barry_as_FLUFL', 'print_function') def test_functional_future_imports(): foo = future_import_checks('annotations') # This will keep working but not be an effective test once PEP 563 finally lands: assert foo.__code__.co_flags & __future__.annotations.compiler_flag def test_windows_line_endings(tmp_path: Path): p = tmp_path / "example.txt" p.write_bytes( b'This is my example:\r\n\r\n' b'.. code-block:: python\r\n\r\n' b' from math import cos\r\n' b' x = 123\r\n\r\n' b'That was my example.\r\n' ) document = Document.parse(str(p), PythonCodeBlockParser()) example, = document example.evaluate() assert document.namespace['x'] == 123 def test_line_numbers_with_options(): parser = PythonCodeBlockParser() examples, namespace = parse('codeblock_with_options.txt', parser, expected=2) with pytest.raises(Exception) as excinfo: examples[0].evaluate() # check the line number of the first block, which is the hardest case: check_excinfo(examples[0], excinfo, 'Boom 1', lineno=6) with pytest.raises(Exception) as excinfo: examples[1].evaluate() # check the line number of the second block: check_excinfo(examples[1], excinfo, 'Boom 2', lineno=14) def test_codeblocks_in_docstrings(): sybil = Sybil([PythonCodeBlockParser()]) with add_to_python_path(SAMPLE_PATH): check_path(sample_path('docstrings.py'), sybil, expected=3) def test_doctest_in_docstrings(): sybil = Sybil([DocTestParser()]) with add_to_python_path(SAMPLE_PATH): check_path(sample_path('docstrings.py'), sybil, 
expected=1) def test_code_directive(): sybil = Sybil([PythonCodeBlockParser()]) check_path(sample_path('code.rst'), sybil, expected=1) def test_sourcecode_directive(): sybil = Sybil([PythonCodeBlockParser()]) check_path(sample_path('sourcecode.rst'), sybil, expected=1) sybil-9.0.0/tests/test_compatibility.py000066400000000000000000000020641471471302700202760ustar00rootroot00000000000000# Tests for backwards compatibility import json from sybil.parsers.capture import parse_captures from sybil.parsers.codeblock import CodeBlockParser, PythonCodeBlockParser from sybil.parsers.skip import skip from .helpers import parse def test_imports(): # uncomment once all the moves are done! from sybil.parsers.capture import parse_captures from sybil.parsers.codeblock import CodeBlockParser, PythonCodeBlockParser from sybil.parsers.doctest import DocTestParser from sybil.parsers.skip import skip pass def test_code_block_parser_pad(): assert CodeBlockParser('foo').pad('x', line=2) == '\n\nx' def test_skip_parser_function(): examples, namespace = parse('skip.txt', PythonCodeBlockParser(), skip, expected=9) for example in examples: example.evaluate() assert namespace['run'] == [2, 5] def test_capture_parser_function(): examples, namespace = parse('capture_codeblock.txt', parse_captures, expected=1) examples[0].evaluate() assert json.loads(namespace['json']) == {"a key": "value", "b key": 42} sybil-9.0.0/tests/test_docs_examples.py000066400000000000000000000042511471471302700202530ustar00rootroot00000000000000import sys from os.path import join from pathlib import Path from unittest.main import main as unittest_main from unittest.runner import TextTestRunner from pytest import main as pytest_main from tests.helpers import add_to_python_path DOC_EXAMPLES = Path(__file__).parent.parent / 'docs' / 'examples' class CollectResults: def pytest_sessionfinish(self, session): self.session = session def pytest_in(*path: str): results = CollectResults() return_code = 
pytest_main([join(DOC_EXAMPLES, *path)], plugins=[results]) assert return_code == results.session.exitstatus return results.session class TestIntegrationExamples: def test_pytest(self): session = pytest_in('integration', 'docs') assert session.exitstatus == 0 assert session.testsfailed == 0 assert session.testscollected == 3 def test_unittest(self): runner = TextTestRunner(verbosity=2, stream=sys.stdout) path = str(DOC_EXAMPLES / 'integration' / 'unittest') main = unittest_main( exit=False, module=None, testRunner=runner, argv=['x', 'discover', '-s', path, '-t', path] ) assert main.result.testsRun == 3 assert len(main.result.failures) == 0 assert len(main.result.errors) == 0 def test_quickstart(): session = pytest_in('quickstart') assert session.exitstatus == 0 assert session.testsfailed == 0 assert session.testscollected == 4 def test_rest_text_rest_src(): directory = 'rest_text_rest_src' with add_to_python_path(DOC_EXAMPLES / directory / 'src'): session = pytest_in(directory) assert session.testsfailed == 0 assert session.testscollected == 5 def test_myst_text_rest_src(): directory = 'myst_text_rest_src' with add_to_python_path(DOC_EXAMPLES / directory / 'src'): session = pytest_in(directory) assert session.testsfailed == 0 assert session.testscollected == 5 def test_linting_and_checking(): directory = 'linting_and_checking' with add_to_python_path(DOC_EXAMPLES / directory / 'src'): session = pytest_in(directory) assert session.testsfailed == 0 assert session.testscollected == 3 sybil-9.0.0/tests/test_doctest.py000066400000000000000000000153631471471302700171000ustar00rootroot00000000000000# coding=utf-8 from doctest import REPORT_NDIFF, ELLIPSIS, DocTestParser as BaseDocTestParser from pathlib import Path import pytest from testfixtures import compare from sybil.document import Document, PythonDocStringDocument from sybil.example import SybilFailure from sybil.parsers.abstract import DocTestStringParser from sybil.parsers.rest import DocTestParser, 
DocTestDirectiveParser from .helpers import sample_path, parse, FUNCTIONAL_TEST_DIR def test_pass(): examples, namespace = parse('doctest.txt', DocTestParser(), expected=5) examples[0].evaluate() assert namespace['y'] == 1 examples[1].evaluate() assert namespace['y'] == 1 examples[2].evaluate() assert namespace['x'] == [1, 2, 3] examples[3].evaluate() assert namespace['y'] == 2 examples[4].evaluate() assert namespace['y'] == 2 def test_fail(): path = sample_path('doctest_fail.txt') examples, namespace = parse('doctest_fail.txt', DocTestParser(), expected=2) with pytest.raises(SybilFailure) as excinfo: examples[0].evaluate() # Note on line numbers: This test shows how the example's line number is correct # however, doctest doesn't do the line padding trick Sybil does with codeblocks, # so the line number will never be correct, it's always 1. compare(str(excinfo.value), expected=( f"Example at {path}, line 1, column 1 did not evaluate as expected:\n" "Expected:\n" " Not my output\n" "Got:\n" " where's my output?\n" )) with pytest.raises(SybilFailure) as excinfo: examples[1].evaluate() actual = excinfo.value.result assert actual.startswith('Exception raised:') assert actual.endswith('Exception: boom!\n') def test_fail_with_options(): parser = DocTestParser(optionflags=REPORT_NDIFF|ELLIPSIS) examples, namespace = parse('doctest_fail.txt', parser, expected=2) with pytest.raises(SybilFailure) as excinfo: examples[0].evaluate() assert excinfo.value.result == ( "Differences (ndiff with -expected +actual):\n" " - Not my output\n" " + where's my output?\n" ) def test_literals(): parser = DocTestParser() examples, _ = parse('doctest_literals.txt', parser, expected=5) for example in examples: example.evaluate() def test_min_indent(): examples, _ = parse('doctest_min_indent.txt', DocTestParser(), expected=1) examples[0].evaluate() def test_tabs(): path = sample_path('doctest_tabs.txt') parser = DocTestParser() with pytest.raises(ValueError): Document.parse(path, parser) def 
test_irrelevant_tabs(): examples, _ = parse('doctest_irrelevant_tabs.txt', DocTestParser(), expected=1) examples[0].evaluate() def test_unicode(): examples, _ = parse('doctest_unicode.txt', DocTestParser(), expected=1) examples[0].evaluate() def test_directive(): path = sample_path('doctest_directive.txt') examples, _ = parse('doctest_directive.txt', DocTestDirectiveParser(), expected=3) examples[0].evaluate() with pytest.raises(SybilFailure) as excinfo: examples[1].evaluate() compare(str(excinfo.value), expected = ( f"Example at {path}, line 13, column 1 did not evaluate as expected:\n" "Expected:\n" " Unexpected!\n" "Got:\n" " 2\n" )) with pytest.raises(SybilFailure) as excinfo: examples[2].evaluate() actual = excinfo.value.result assert actual.startswith('Exception raised:') assert actual.endswith('Exception: boom!\n') def test_directive_with_options(): path = sample_path('doctest_directive.txt') parser = DocTestDirectiveParser(optionflags=REPORT_NDIFF|ELLIPSIS) examples, namespace = parse('doctest_directive.txt', parser, expected=3) with pytest.raises(SybilFailure) as excinfo: examples[1].evaluate() compare(str(excinfo.value), expected = ( f"Example at {path}, line 13, column 1 did not evaluate as expected:\n" "Differences (ndiff with -expected +actual):\n" " - Unexpected!\n" " + 2\n" )) # Number of doctests that can't be parsed in a file when looking at the whole file source: ROOT = Path(FUNCTIONAL_TEST_DIR) UNPARSEABLE = { ROOT / 'package_and_docs' / 'src' / 'parent' / 'child' / 'module_b.py': 1, } MINIMUM_EXPECTED_DOCTESTS = 9 def test_sybil_example_count(all_python_files): parser = DocTestStringParser() seen_examples_from_source = 0 seen_examples_from_docstrings = 0 for path, source in all_python_files: seen_examples_from_source += len(tuple(parser(source, path))) for start, end, docstring in PythonDocStringDocument.extract_docstrings(source): seen_examples_from_docstrings += len(tuple(parser(docstring, path))) assert seen_examples_from_source > 
MINIMUM_EXPECTED_DOCTESTS, seen_examples_from_source assert seen_examples_from_docstrings > MINIMUM_EXPECTED_DOCTESTS, seen_examples_from_docstrings assert seen_examples_from_docstrings == seen_examples_from_source def check_sybil_against_doctest(path, text): problems = [] name = str(path) regions = list(DocTestStringParser()(text, path)) sybil_examples = [region.parsed for region in regions] doctest_examples = BaseDocTestParser().get_examples(text, name) problems.append(compare( expected=doctest_examples, actual=sybil_examples, raises=False, show_whitespace=True )) for region in regions: example = region.parsed region_source = text[region.start:region.end] for name in 'source', 'want': expected = getattr(example, name) if name == 'source': # doctest.DocTestParser REPL examples are pre-processed into python # source to execute, so do similar here but note this is pretty fragile: expected_source = region_source.replace('>>>', '').replace(' ... ', '') else: expected_source = region_source if expected not in expected_source: # pragma: no cover - Only on failures! problems.append(f'{region}:{name}\n{expected!r} not in {expected_source!r}') problems = [problem for problem in problems if problem] if problems: # pragma: no cover - Only on failures! raise AssertionError('doctests not correctly extracted:\n\n'+'\n\n'.join(problems)) def test_all_docstest_examples_extracted_from_source_correctly(python_file): path, source = python_file if path in UNPARSEABLE: return check_sybil_against_doctest(path, source) def test_all_docstest_examples_extracted_from_docstrings_correctly(python_file): path, source = python_file for start, end, docstring in PythonDocStringDocument.extract_docstrings(source): check_sybil_against_doctest(path, docstring) sybil-9.0.0/tests/test_document.py000066400000000000000000000127321471471302700172460ustar00rootroot00000000000000""" This is a module doc string. """ # This is a comment on how annoying 3.7 and earlier are! 
import re from functools import partial from pathlib import Path from collections.abc import Iterable from testfixtures import compare, ShouldRaise from sybil import Region, Example from sybil.document import PythonDocStringDocument, Document from sybil.example import NotEvaluated, SybilFailure from sybil.parsers.abstract.lexers import LexingException from .helpers import ast_docstrings, parse, sample_path def test_extract_docstring(): """ This is a function docstring. """ python_source_code = Path(__file__).read_text() expected = list(ast_docstrings(python_source_code)) actual = list(PythonDocStringDocument.extract_docstrings(python_source_code)) compare(expected, actual=[text for _, _, text in actual], show_whitespace=True) compare(expected, actual=[python_source_code[s:e] for s, e, _ in actual], show_whitespace=True) def test_all_docstrings_extracted_correctly(python_file): problems = [] path, source = python_file expected = list(ast_docstrings(source)) actual = list(PythonDocStringDocument.extract_docstrings(source)) actual_from_start_end = [source[s:e].replace('\\"', '"') for s, e, _ in actual] check = partial(compare, raises=False, show_whitespace=True) for check_result in ( check(expected=len(expected), actual=len(actual), prefix=f'{path}: number'), check(expected, actual=[text for _, _, text in actual], prefix=f'{path}: content'), check(expected, actual=actual_from_start_end, prefix=f'{path}: start/end'), ): if check_result: # pragma: no cover - Only on failures! problems.append(check_result) if problems: # pragma: no cover - Only on failures! raise AssertionError('docstrings not correctly extracted:\n\n'+'\n\n'.join(problems)) def test_evaluator_returns_non_string(): def evaluator(example: Example) -> NotEvaluated: # This is a bug! 
return NotEvaluated() def parser(document: Document) -> Iterable[Region]: yield Region(0, 1, None, evaluator) examples, namespace = parse('sample1.txt', parser, expected=1) with ShouldRaise(SybilFailure(examples[0], 'NotEvaluated()')): examples[0].evaluate() def test_nested_evaluators(): def record(example: Example, evaluator_name): (instruction, id_, param) = example.parsed example.namespace['results'].append((instruction, id_, param, evaluator_name)) def normal_evaluator(example: Example): record(example, 'normal') class InstructionEvaluator: def __init__(self, id_, mode): self.name = f'{id_}-{mode}' self.mode = mode assert hasattr(self, mode), mode def __call__(self, example: Example): (instruction, id_, param) = example.parsed if instruction in ('install', 'remove'): raise NotEvaluated() record(example, self.name) return getattr(self, self.mode)(id_) def install(self, example: Example): record(example, self.name+'-install') example.document.push_evaluator(self) def remove(self, example: Example): record(example, self.name + '-remove') example.document.pop_evaluator(self) def passthrough(self, id_: int): raise NotEvaluated() def all(self, id_: int): return '' def even(self, id_: int): if id_ % 2: raise NotEvaluated() evaluators = {} def parser(document: Document) -> Iterable[Region]: for match in re.finditer(r'(example|install|remove)-(\d+)(.+)?', document.text): instruction, id_, param = match.groups() id_ = int(id_) param = param.lstrip('-') if param else None if instruction == 'example': evaluator = normal_evaluator else: obj = evaluators.get(id_) if obj is None: obj = evaluators.setdefault(id_, InstructionEvaluator(id_, param)) evaluator = getattr(obj, instruction) yield Region(match.start(), match.end(), (instruction, id_, param), evaluator) examples, namespace = parse('nested-evaluators.txt', parser, expected=12) namespace['results'] = results = [] for e in examples: e.evaluate() compare(results, expected=[ ('example', 0, None, 'normal'), ('install', 1, 
'passthrough', '1-passthrough-install'), ('example', 2, None, '1-passthrough'), ('example', 2, None, 'normal'), ('install', 3, 'all', '3-all-install'), ('example', 4, None, '3-all'), ('install', 5, 'even', '5-even-install'), ('example', 6, None, '5-even'), ('example', 7, None, '5-even'), ('example', 7, None, '3-all'), ('remove', 5, None, '5-even-remove'), ('example', 8, None, '3-all'), ('remove', 3, None, '3-all-remove'), ('example', 9, None, '1-passthrough'), ('example', 9, None, 'normal'), ]) def test_nested_evaluators_not_evaluated_from_region(): def evaluator(example: Example): raise NotEvaluated() def parser(document: Document) -> Iterable[Region]: yield Region(0, 1, None, evaluator) examples, namespace = parse('sample1.txt', parser, expected=1) with ShouldRaise(SybilFailure(examples[0], f'{evaluator!r} should not raise NotEvaluated()')): examples[0].evaluate() sybil-9.0.0/tests/test_functional.py000066400000000000000000000505051471471302700175720ustar00rootroot00000000000000import sys from pathlib import Path import pytest from pytest import CaptureFixture from testfixtures import compare from sybil import Sybil from sybil.parsers.rest import PythonCodeBlockParser, DocTestParser from sybil.python import import_cleanup from .helpers import ( run_pytest, run_unittest, PYTEST, run, write_config, UNITTEST, write_doctest, functional_sample, clone_functional_sample, check_path, sample_path ) @pytest.fixture(autouse=True) def cleanup_imports(): with import_cleanup(): yield def test_pytest(capsys: CaptureFixture[str]): results = run_pytest(capsys, functional_sample('pytest')) out = results.out # check we're trimming tracebacks: out.assert_not_present('sybil/example.py') out.then_find('fail.rst::line:1,column:1') out.then_find('fail.rst::line:1,column:1 sybil setup session_fixture setup\n' 'module_fixture setup\n' 'class_fixture setup\n' 'function_fixture setup\n' 'x is currently: 0\n' 'FAILED function_fixture teardown\n' 'class_fixture teardown') 
    out.then_find('fail.rst::line:6,column:1')
    out.then_find('fail.rst::line:6,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '0smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('fail.rst::line:8,column:1')
    out.then_find('fail.rst::line:8,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '1smcf FAILED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('fail.rst::line:10,column:1')
    out.then_find('fail.rst::line:10,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '2smcf FAILED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('fail.rst::line:12,column:1')
    out.then_find('fail.rst::line:12,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '3smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('fail.rst::line:14,column:1')
    out.then_find('fail.rst::line:14,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  'FAILED function_fixture teardown\n'
                  'class_fixture teardown\n'
                  'module_fixture teardown\n'
                  'sybil teardown 5')
    out.then_find('pass.rst::line:1,column:1')
    out.then_find('pass.rst::line:1,column:1 sybil setup module_fixture setup\n'
                  'class_fixture setup\n'
                  'function_fixture setup\n'
                  '0smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('pass.rst::line:3,column:1')
    out.then_find('pass.rst::line:3,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '1smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('pass.rst::line:5,column:1')
    out.then_find('pass.rst::line:5,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '2smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown')
    out.then_find('pass.rst::line:7,column:1')
    out.then_find('pass.rst::line:7,column:1 class_fixture setup\n'
                  'function_fixture setup\n'
                  '3smcf PASSED function_fixture teardown\n'
                  'class_fixture teardown\n'
                  'module_fixture teardown\n'
                  'sybil teardown 4\n'
                  'session_fixture teardown')
    out.then_find('_ fail.rst line=1 column=1 _')
    out.then_find("raise Exception('the start!')")
    out.then_find('_ fail.rst line=8 column=1 _')
    out.then_find('Y count was 3 instead of 2')
    out.then_find('fail.rst:8: SybilFailure')
    out.then_find('_ fail.rst line=10 column=1 _')
    out.then_find('ValueError: X count was 3 instead of 4')
    out.then_find('_ fail.rst line=14 column=1 _')
    out.then_find("raise Exception('boom!')")
    out.then_find('fail.rst:18: Exception')
    assert results.return_code == 1
    assert results.failures == 4
    assert results.total == 10


def test_unittest(capsys: CaptureFixture[str]):
    results = run_unittest(capsys, functional_sample('unittest'))
    out = results.out
    out.then_find('sybil setup')
    out.then_find('fail.rst,line:6,column:1 ... 0\nok')
    out.then_find('fail.rst,line:8,column:1 ... 1\nFAIL')
    out.then_find('fail.rst,line:10,column:1 ... 2\nERROR')
    out.then_find('fail.rst,line:12,column:1 ... 3\nok')
    out.then_find('sybil teardown 4\nsybil setup')
    out.then_find('pass.rst,line:1,column:1 ... 0\nok')
    out.then_find('pass.rst,line:3,column:1 ... 1\nok')
    out.then_find('pass.rst,line:5,column:1 ... 2\nok')
    out.then_find('pass.rst,line:7,column:1 ... 3\nok')
    out.then_find('sybil teardown 4')
    out.then_find('ERROR: ')
    out.then_find('fail.rst,line:10,column:1')
    out.then_find('ValueError: X count was 3 instead of 4')
    out.then_find('FAIL:')
    out.then_find('fail.rst,line:8,column:1')
    out.then_find('Y count was 3 instead of 2')
    out.then_find('Ran 8 tests')
    assert results.total == 8
    assert results.failures == 1
    assert results.errors == 1


def make_tree(tmp_path: Path):
    write_doctest(tmp_path, 'foo.rst')
    write_doctest(tmp_path, 'bar.rst')
    write_doctest(tmp_path, 'parent', 'foo.rst')
    write_doctest(tmp_path, 'parent', 'bar.rst')
    write_doctest(tmp_path, 'parent', 'child', 'foo.rst')
    write_doctest(tmp_path, 'parent', 'child', 'bar.rst')


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_everything(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner)
    results = run(capsys, runner, tmp_path)
    # ask for nothing, get nothing...
    assert results.total == 0


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_just_pattern(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, pattern="'*.rst'")
    results = run(capsys, runner, tmp_path)
    assert results.total == 6


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_fnmatch_pattern(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, pattern="'**/*.rst'")
    results = run(capsys, runner, tmp_path)
    # The fact that the two .rst files in the root aren't matched is
    # arguably a bug in the Python interpretation of **/
    results.out.assert_has_run(runner, '/parent/foo.rst')
    results.out.assert_has_run(runner, '/parent/bar.rst')
    results.out.assert_has_run(runner, '/parent/child/foo.rst')
    results.out.assert_has_run(runner, '/parent/child/bar.rst')
    assert results.total == 4, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_just_filenames(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, filenames="['bar.rst']")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/bar.rst')
    results.out.assert_has_run(runner, '/parent/bar.rst')
    results.out.assert_has_run(runner, '/parent/child/bar.rst')
    assert results.total == 3, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_directory(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, path=f"'{tmp_path / 'parent'}'", pattern="'*.rst'")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/parent/foo.rst')
    results.out.assert_has_run(runner, '/parent/bar.rst')
    results.out.assert_has_run(runner, '/parent/child/foo.rst')
    results.out.assert_has_run(runner, '/parent/child/bar.rst')
    assert results.total == 4, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_directory_with_excludes(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, path=f"'{tmp_path / 'parent'}'", pattern="'*.rst'",
                 exclude="'ba*.rst'")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/parent/foo.rst')
    results.out.assert_has_run(runner, '/parent/child/foo.rst')
    assert results.total == 2, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_filenames_and_excludes(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    make_tree(tmp_path)
    write_config(tmp_path, runner, path=f"'{tmp_path / 'parent'}'",
                 filenames="{'bar.rst'}", excludes="['**child/*.rst']")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/parent/bar.rst')
    assert results.total == 1, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_exclude_by_name(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'foo.txt')
    write_doctest(tmp_path, 'bar.txt')
    write_doctest(tmp_path, 'child', 'foo.txt')
    write_doctest(tmp_path, 'child', 'bar.txt')
    write_config(tmp_path, runner, pattern="'*.txt'", exclude="'bar.txt'")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/foo.txt')
    results.out.assert_has_run(runner, '/child/foo.txt')
    assert results.total == 2, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_include_filenames(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'foo.txt')
    write_doctest(tmp_path, 'bar.txt')
    write_doctest(tmp_path, 'baz', 'bar.txt')
    write_config(tmp_path, runner, filenames="['bar.txt']")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/bar.txt')
    results.out.assert_has_run(runner, '/baz/bar.txt')
    assert results.total == 2, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_globs(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'middle', 'interesting', 'foo.txt')
    write_doctest(tmp_path, 'middle', 'boring', 'bad1.txt')
    write_doctest(tmp_path, 'middle', 'boring', 'bad2.txt')
    write_config(tmp_path, runner, patterns="['middle/**/*.txt']",
                 excludes="['**/boring/*.txt']")
    results = run(capsys, runner, tmp_path)
    results.out.assert_has_run(runner, '/middle/interesting/foo.txt')
    assert results.total == 1, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_filter_multiple_patterns(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'test.rst')
    write_doctest(tmp_path, 'test.txt')
    write_config(tmp_path, runner, patterns="['*.rst', '*.txt']")
    results = run(capsys, runner, tmp_path)
    assert results.total == 2, results.out.text


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_skips(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    root = clone_functional_sample('skips', tmp_path)
    write_config(root, runner,
                 parsers="[PythonCodeBlockParser(), SkipParser(), DocTestParser()]",
                 patterns="['*.rst']")
    results = run(capsys, runner, root)
    assert results.total == 10, results.out.text
    assert results.failures == 0, results.out.text
    assert results.errors == 0, results.out.text


def clone_and_run_modules_tests(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    clone_functional_sample('modules', tmp_path)
    write_config(tmp_path, runner,
                 path="'./modules'",
                 parsers="[PythonCodeBlockParser(), DocTestParser()]",
                 patterns="['*.py']")
    results = run(capsys, runner, tmp_path)
    return results


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_modules(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    sys.path.append(str(tmp_path / 'modules'))
    results = clone_and_run_modules_tests(tmp_path, capsys, runner)
    assert results.total == 5, results.out.text
    assert results.failures == 0, results.out.text
    assert results.errors == 0, results.out.text


def test_modules_not_importable_pytest(tmp_path: Path, capsys: CaptureFixture[str]):
    # NB: no append to sys.path
    results = clone_and_run_modules_tests(tmp_path, capsys, PYTEST)
    compare(results.total, expected=5, suffix=results.out.text)
    compare(results.errors, expected=0, suffix=results.out.text)
    compare(results.failures, expected=5, suffix=results.out.text)
    out = results.out
    out.then_find('a.py line=3 column=1')
    out.then_find(f"ImportError: 'a' not importable from {tmp_path/'modules'/'a.py'} as:")
    out.then_find("ModuleNotFoundError: No module named 'a'")
    out.then_find('a.py line=7 column=1')
    out.then_find("ModuleNotFoundError: No module named 'a'")
    out.then_find('b.py line=2 column=1')
    out.then_find(f"ImportError: 'b' not importable from {tmp_path/'modules'/'b.py'} as:")
    out.then_find('b.py line=7 column=1')
    out.then_find("ModuleNotFoundError: No module named 'b'")
    out.then_find('b.py line=11 column=1')
    out.then_find("ModuleNotFoundError: No module named 'b'")
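# The parser callables these tests hand to `parse` all share one shape: scan the
# document text with `re.finditer` and yield one region per match. A minimal,
# self-contained sketch of that shape follows, using plain tuples in place of
# `sybil.Region` and a made-up 'example-<n>' marker syntax for illustration:

```python
import re
from typing import Iterable, Tuple

def scan_regions(text: str) -> Iterable[Tuple[int, int, tuple]]:
    # Yield (start, end, parsed) for each 'example-<n>' marker in the text,
    # mirroring the re.finditer loop used by the parsers in these tests.
    # The marker syntax and parsed tuple are illustrative, not sybil's own.
    for match in re.finditer(r'(example)-(\d+)', text):
        instruction, id_ = match.group(1), int(match.group(2))
        yield (match.start(), match.end(), (instruction, id_))

regions = list(scan_regions('example-0 some text example-1'))
```

# A real parser would wrap each tuple in a Region carrying an evaluator callable.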
def test_modules_not_importable_unittest(tmp_path: Path, capsys: CaptureFixture[str]):
    # NB: no append to sys.path
    results = clone_and_run_modules_tests(tmp_path, capsys, UNITTEST)
    assert results.total == 5, results.out.text
    assert results.failures == 0, results.out.text
    assert results.errors == 5, results.out.text
    a_py = tmp_path/'modules'/'a.py'
    b_py = tmp_path/'modules'/'b.py'
    out = results.out
    out.then_find(f'ERROR: {a_py},line:3,column:1')
    out.then_find(f"ImportError: 'a' not importable from {tmp_path/'modules'/'a.py'} as:")
    out.then_find("ModuleNotFoundError: No module named 'a'")
    out.then_find(f'ERROR: {b_py},line:2,column:1')
    out.then_find(f"ImportError: 'b' not importable from {tmp_path/'modules'/'b.py'} as:")
    out.then_find("ModuleNotFoundError: No module named 'b'")


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_package_and_docs(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    root = clone_functional_sample('package_and_docs', tmp_path)
    write_config(root, runner, patterns="['**/*.py', '**/*.rst']")
    sys.path.append(str((root / 'src')))
    results = run(capsys, runner, root)
    assert results.total == 7, results.out.text
    assert results.failures == 1, results.out.text
    assert results.errors == 0, results.out.text
    # output from the one expected failure!
    results.out.then_find('Expected:')
    results.out.then_find('good')
    results.out.then_find('Got:')
    results.out.then_find('bad')


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_multiple_sybils_process_all(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'test.rst')
    write_doctest(tmp_path, 'test.txt')
    config_template = """
from sybil.parsers.rest import DocTestParser
from sybil import Sybil

sybil1 = Sybil(parsers=[DocTestParser()], pattern='test.*', name='a')
sybil2 = Sybil(parsers=[DocTestParser()], pattern='test.*', name='b')
{assigned_name} = (sybil1 + sybil2).{integration}()
"""
    write_config(tmp_path, runner, template=config_template)
    results = run(capsys, runner, tmp_path)
    compare(results.total, expected=4, suffix=results.out.text)
    # make sure the name is used:
    results.out.assert_has_run(runner, 'test.rst', sybil='a')
    results.out.assert_has_run(runner, 'test.rst', sybil='b')
    results.out.assert_has_run(runner, 'test.txt', sybil='a')
    results.out.assert_has_run(runner, 'test.txt', sybil='b')


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_multiple_sybils_process_one_each(tmp_path: Path, capsys: CaptureFixture[str], runner: str):
    write_doctest(tmp_path, 'test.rst')
    write_doctest(tmp_path, 'test.txt')
    config_template = """
from sybil.parsers.rest import DocTestParser
from sybil import Sybil

rst = Sybil(parsers=[DocTestParser()], pattern='*.rst')
txt = Sybil(parsers=[DocTestParser()], pattern='*.txt')
{assigned_name} = (rst + txt).{integration}()
"""
    write_config(tmp_path, runner, template=config_template)
    results = run(capsys, runner, tmp_path)
    compare(results.total, expected=2, suffix=results.out.text)


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_myst(capsys: CaptureFixture[str], runner: str):
    results = run(capsys, runner, functional_sample('myst'))
    out = results.out
    # Check all the tests are found:
    out.assert_has_run(runner, '/doctest.md', line=7)
    out.assert_has_run(runner, '/doctest.md', line=14)
    out.assert_has_run(runner, '/doctest.md', line=25)
    out.assert_has_run(runner, '/doctest.md', line=31)
    out.assert_has_run(runner, '/python.md', line=5)
    out.assert_has_run(runner, '/python.md', line=11)
    out.assert_has_run(runner, '/python.md', line=17)
    out.assert_has_run(runner, '/python.md', line=26)
    out.assert_has_run(runner, '/python.md', line=41)
    out.assert_has_run(runner, '/python.md', line=47)
    out.assert_has_run(runner, '/python.md', line=49)
    # unittest treats exceptions as errors rather than failures,
    # and they appear at the top of the output, hence the conditionals below.
    # Check counts:
    assert results.total == 4+7, results.out.text
    if runner == PYTEST:
        assert results.failures == 2 + 1, results.out.text
        assert results.errors == 0, results.out.text
    else:
        assert results.failures == 2 + 0, results.out.text
        assert results.errors == 1, results.out.text
    # Check error text:
    if runner == UNITTEST:
        out.then_find("Exception: boom!")
    out.then_find("doctest.md, line 25, column 1 did not evaluate as expected:")
    out.then_find("Expected:\n 3\nGot:\n 2\n")
    out.then_find("doctest.md, line 31, column 1 did not evaluate as expected:")
    out.then_find("Expected:\n 4\nGot:\n 2\n")
    if runner == PYTEST:
        out.then_find("Exception: boom!")


@pytest.mark.parametrize('runner', [PYTEST, UNITTEST])
def test_markdown(capsys: CaptureFixture[str], runner: str):
    results = run(capsys, runner, functional_sample('markdown'))
    out = results.out
    # Check all the tests are found:
    out.assert_has_run(runner, '/doctest.md', line=7)
    out.assert_has_run(runner, '/python.md', line=5)
    out.assert_has_run(runner, '/python.md', line=11)
    out.assert_has_run(runner, '/python.md', line=26)
    out.assert_has_run(runner, '/python.md', line=32)
    out.assert_has_run(runner, '/python.md', line=34)
    # unittest treats exceptions as errors rather than failures,
    # and they appear at the top of the output, hence the conditionals below.
    # Check counts:
    assert results.total == 1+5, results.out.text
    if runner == PYTEST:
        assert results.failures == 0 + 1, results.out.text
        assert results.errors == 0, results.out.text
    else:
        assert results.failures == 0 + 0, results.out.text
        assert results.errors == 1, results.out.text
    # Check error text:
    out.then_find("Exception: boom!")


def test_codeblock_with_protocol_then_doctest():
    sybil = Sybil([PythonCodeBlockParser(), DocTestParser()])
    check_path(sample_path('protocol-typing.rst'), sybil, expected=3)


def test_request_missing_fixtures(tmp_path: Path, capsys: CaptureFixture[str]):
    write_doctest(tmp_path, 'test.rst')
    config_template = """
from sybil.parsers.rest import DocTestParser
from sybil import Sybil

sybil = Sybil(parsers=[DocTestParser()], pattern='*.rst', fixtures=['bad'])
{assigned_name} = sybil.{integration}()
"""
    write_config(tmp_path, PYTEST, template=config_template)
    results = run(capsys, PYTEST, tmp_path)
    compare(results.total, expected=1, suffix=results.out.text)
    compare(results.failures, expected=1, suffix=results.out.text)
    compare(results.errors, expected=0, suffix=results.out.text)


# --- sybil-9.0.0/tests/test_helpers.py ---

import sys
from importlib import import_module
from pathlib import Path

import pytest

from sybil.python import import_cleanup
from .helpers import Finder, ast_docstrings


def test_finder_present_but_is_not_present():
    finder = Finder('foo\nbaz\n')
    with pytest.raises(AssertionError) as info:
        finder.assert_present('bob')
    assert str(info.value) == "foo\nbaz\n\n'foo\\nbaz\\n'"


def test_finder_not_present_but_is_present():
    finder = Finder('foo baz bar')
    with pytest.raises(AssertionError) as info:
        finder.assert_not_present('baz')
    assert str(info.value) == '\nfoo baz bar'


def test_import_cleanup(tmp_path: Path):
    (tmp_path / 'some_module.py').write_text('import unittest\nfoo = 1')
    (tmp_path / 'other_module.py').write_text('import other_module\nfoo = 1')
    initial_modules = sys.modules.copy()
    initial_path = sys.path.copy()
    with import_cleanup():
        sys.path.append(str(tmp_path))
        some_module = import_module('some_module')
        assert some_module.foo == 1
    assert sys.modules == initial_modules
    assert sys.path == initial_path


def test_all_python_files(all_python_files):
    count = len(all_python_files)
    assert count > 50, count


def test_ast_docstrings(all_python_files):
    seen_docstrings = 0
    for _, source in all_python_files:
        seen_docstrings += len(tuple(ast_docstrings(source)))
    assert seen_docstrings > 50, seen_docstrings


# --- sybil-9.0.0/tests/test_lexing.py ---

import re

import pytest
from testfixtures import ShouldRaise, compare
from testfixtures.comparison import compare_text, compare_dict

from sybil import Lexeme
from sybil.parsers.abstract.lexers import BlockLexer, LexingException
from .helpers import lex, sample_path


def test_examples_from_parsing_tests():
    lexer = BlockLexer(start_pattern=re.compile('START'), end_pattern_template='END')
    path = sample_path('lexing-fail.txt')
    with ShouldRaise(
        LexingException(
            f"Could not find end of 'START', starting at line 1, column 1, "
            f"looking for 'END' in {path}:\n'\\nEDN\\n'"
        )
    ):
        lex('lexing-fail.txt', lexer)


def test_indented_block():
    lexer = BlockLexer(
        start_pattern=re.compile('(?P<prefix> *)START\n'),
        end_pattern_template=' {{{len_prefix}}}END'
    )
    region, = lex('lexing-indented-block.txt', lexer)
    compare(
        region.lexemes,
        expected={
            'source': Lexeme(
                "".join(
                    (
                        "\n",
                        "line 1\n",
                        "\n",
                        " line 2\n",
                        "\n",
                    )
                ),
                offset=0,
                line_offset=0,
            ),
        }
    )


class TestLexemeStripping:

    @staticmethod
    def compare_lexeme(expected: Lexeme, actual: Lexeme):
        def _compare(x, y, context):
            if str(x) != str(y):
                return compare_text(x, y, context)
            return compare_dict(x.__dict__, y.__dict__, context)
        compare(expected=expected, actual=actual, comparers={Lexeme: _compare}, strict=True)

    @pytest.mark.parametrize("actual,message", [
        (Lexeme('bar', 10, 1), "'foo' (expected) != 'bar' (actual)"),
        (Lexeme('foo', 11, 1), "'offset': 10 (expected) != 11"),
        (Lexeme('foo', 10, 2), "'line_offset': 1 (expected) != 2 (actual)"),
        (Lexeme('foo', 12, 3), "line_offset': 1 (expected) != 3 (actual)\n"
                               "'offset': 10 (expected) != 12 (actual)"),
    ])
    def test_not_equal(self, actual: Lexeme, message: str):
        with ShouldRaise(AssertionError) as s:
            self.compare_lexeme(Lexeme('foo', 10, 1), actual)
        assert message in str(s.raised)

    def test_strip_no_newlines_present(self):
        self.compare_lexeme(
            actual=Lexeme(' foo \n', 10, 1).strip_leading_newlines(),
            expected=Lexeme(' foo \n', 10, 1)
        )

    def test_strip_one_newline_present(self):
        self.compare_lexeme(
            actual=Lexeme('\n foo \n', 10, 1).strip_leading_newlines(),
            expected=Lexeme(' foo \n', 11, 2)
        )

    def test_strip_multiple_newlines_present(self):
        self.compare_lexeme(
            actual=Lexeme('\n\n\n foo \n', 13, 3).strip_leading_newlines(),
            expected=Lexeme(' foo \n', 16, 6)
        )

    def test_strip_newlines_and_spaces_present(self):
        self.compare_lexeme(
            actual=Lexeme(' \n \n foo \n', 10, 1).strip_leading_newlines(),
            expected=Lexeme(' \n \n foo \n', 10, 1)
        )


# --- sybil-9.0.0/tests/test_markdown_lexers.py ---

from sybil.parsers.markdown.lexers import RawFencedCodeBlockLexer
from sybil.region import Region
from .helpers import check_lexed_regions


def test_fenced_code_block():
    lexer = RawFencedCodeBlockLexer()
    check_lexed_regions('markdown-fenced-code-block.md', lexer, expected=[
        Region(12, 24, lexemes={'source': '<\n >\n'}),
        Region(34, 46, lexemes={'source': '<\n >\n'}),
        Region(177, 192, lexemes={'source': 'aaa\n~~~\n'}),
        Region(266, 285, lexemes={'source': 'aaa\n```\n'}),
        Region(301, 312, lexemes={'source': 'aaa\n'}),
        Region(296, 317, lexemes={'source': '~~~\naaa\n~~~\n'}),
        Region(397, 421, lexemes={'source': 'some stuff here\n~~~\n'}),
    ])


# --- sybil-9.0.0/tests/test_myst_clear.py ---

from sybil.parsers.myst import ClearNamespaceParser, PythonCodeBlockParser
from .helpers import parse


def test_basic():
    examples, namespace = parse(
        'myst-clear.md', PythonCodeBlockParser(), ClearNamespaceParser(), expected=4
    )
    for example in examples:
        example.evaluate()
    assert 'x' not in namespace, namespace


# --- sybil-9.0.0/tests/test_myst_codeblock.py ---

import __future__

import pytest
from testfixtures import compare

from sybil import Example, Sybil
from sybil.parsers.myst import PythonCodeBlockParser, CodeBlockParser
from .helpers import check_excinfo, parse, check_path, sample_path


def test_basic():
    examples, namespace = parse('myst-codeblock.md', PythonCodeBlockParser(), expected=7)
    namespace['y'] = namespace['z'] = 0
    assert examples[0].evaluate() is None
    assert namespace['y'] == 1
    assert namespace['z'] == 0
    with pytest.raises(Exception) as excinfo:
        examples[1].evaluate()
    compare(examples[1].parsed, expected="raise Exception('boom!')\n", show_whitespace=True)
    assert examples[1].line == 10
    check_excinfo(examples[1], excinfo, 'boom!', lineno=11)
    assert examples[2].evaluate() is None
    assert namespace['y'] == 1
    assert namespace['z'] == 1
    assert examples[3].evaluate() is None
    assert namespace['bin'] == b'x'
    assert namespace['uni'] == u'x'
    assert examples[4].evaluate() is None
    assert 'NoVars' in namespace
    assert examples[5].evaluate() is None
    assert namespace['define_this'] == 1
    assert examples[6].evaluate() is None
    assert 'YesVars' in namespace
    assert '__builtins__' not in namespace


def test_complicated_nesting():
    # This has no code blocks, but should still parse fine:
    parse('myst-complicated-nesting.md', PythonCodeBlockParser(), expected=0)


def test_doctest_at_end_of_fenced_codeblock():
    examples, namespace = parse('myst-codeblock-doctests-end-of-fenced-codeblocks.md',
                                PythonCodeBlockParser(), expected=2)
    assert examples[0].evaluate() is None
    assert examples[1].evaluate() is None
    assert namespace['b'] == 2


def test_other_language_composition_pass():
    def oh_hai(example):
        assert isinstance(example, Example)
        assert 'HAI' in example.parsed
    parser = CodeBlockParser(language='lolcode', evaluator=oh_hai)
    examples, namespace = parse('myst-codeblock.md', parser, expected=1)
    assert examples[0].evaluate() is None


def test_other_language_composition_fail():
    def oh_noez(example):
        if 'KTHXBYE' in example.parsed:
            raise ValueError('oh noez')
    parser = CodeBlockParser(language='lolcode', evaluator=oh_noez)
    examples, namespace = parse('myst-codeblock.md', parser, expected=1)
    with pytest.raises(ValueError):
        examples[0].evaluate()


def test_other_language_no_evaluator():
    parser = CodeBlockParser('foo')
    with pytest.raises(NotImplementedError):
        parser.evaluate(...)


class LolCodeCodeBlockParser(CodeBlockParser):
    language = 'lolcode'

    def evaluate(self, example: Example):
        if example.parsed != 'HAI\n':
            raise ValueError(repr(example.parsed))


def test_other_language_inheritance():
    examples, namespace = parse('myst-codeblock-lolcode.md', LolCodeCodeBlockParser(), expected=2)
    examples[0].evaluate()
    with pytest.raises(ValueError) as excinfo:
        examples[1].evaluate()
    assert str(excinfo.value) == "'KTHXBYE\\n'"


class IgnoringPythonCodeBlockParser(CodeBlockParser):

    def __call__(self, document):
        for region in super().__call__(document):
            options = region.lexemes.get('options')
            if options and 'ignore' in options:
                continue
            yield region


class IgnoringCodeBlockParser(PythonCodeBlockParser):
    codeblock_parser_class = IgnoringPythonCodeBlockParser


def test_other_functionality_inheritance():
    examples, namespace = parse(
        'myst-codeblock-subclassing.md', IgnoringCodeBlockParser(), expected=2
    )
    examples[0].evaluate()
    examples[1].evaluate()


def future_import_checks(*future_imports):
    parser = PythonCodeBlockParser(future_imports)
    examples, namespace = parse('myst-codeblock-future-imports.md', parser, expected=3)
    with pytest.raises(Exception) as excinfo:
        examples[0].evaluate()
    # check the line number of the first block, which is the hardest case:
    check_excinfo(examples[0], excinfo, 'Boom 1', lineno=3)
    with pytest.raises(Exception) as excinfo:
        examples[1].evaluate()
    # check the line number of the second block:
    check_excinfo(examples[1], excinfo, 'Boom 2', lineno=9)
    examples[2].evaluate()
    # check the line number of the third block:
    assert namespace['foo'].__code__.co_firstlineno == 15
    return namespace['foo']


def test_no_future_imports():
    future_import_checks()


def test_single_future_import():
    future_import_checks('barry_as_FLUFL')


def test_multiple_future_imports():
    future_import_checks('barry_as_FLUFL', 'print_function')


def test_functional_future_imports():
    foo = future_import_checks('annotations')
    # This will keep working but not be an effective test once PEP 563 finally lands:
    assert foo.__code__.co_flags & __future__.annotations.compiler_flag


def test_blank_lines():
    examples, namespace = parse(
        'myst-codeblock-blank-lines.md', PythonCodeBlockParser(), expected=1
    )
    example, = examples
    compare(example.parsed, expected=''.join((
        '\n',
        'y = 0\n',
        '# now a blank line:\n',
        '\n',
        'y += 1\n',
        '# two blank lines:\n',
        '\n',
        '\n',
        "assert not y, 'Boom!'\n",
    )))
    with pytest.raises(AssertionError) as excinfo:
        example.evaluate()
    # check the line number in the exception:
    check_excinfo(example, excinfo, 'Boom!', lineno=12)


def test_blank_lines_indented():
    examples, namespace = parse(
        "myst-codeblock-blank-lines-indented.md", PythonCodeBlockParser(), expected=1
    )
    (example,) = examples
    compare(
        example.parsed,
        expected="".join(
            (
                "\n",
                "y = 0\n",
                "# now a blank line:\n",
                "\n",
                "y += 1\n",
                "# two blank lines:\n",
                "\n",
                "\n",
                "assert not y, 'Boom!'\n",
            )
        ),
    )
    with pytest.raises(AssertionError) as excinfo:
        example.evaluate()
    # check the line number in the exception:
    check_excinfo(example, excinfo, "Boom!", lineno=12)


def test_code_directive():
    sybil = Sybil([PythonCodeBlockParser()])
    check_path(sample_path('myst-code.md'), sybil, expected=1)


def test_sourcecode_directive():
    sybil = Sybil([PythonCodeBlockParser()])
    check_path(sample_path('myst-sourcecode.md'), sybil, expected=1)


# --- sybil-9.0.0/tests/test_myst_doctest.py ---

# coding=utf-8
from doctest import REPORT_NDIFF, ELLIPSIS

import pytest
from testfixtures import compare

from sybil.example import SybilFailure
from sybil.parsers.rest import DocTestParser as ReSTDocTestParser
from sybil.parsers.myst import DocTestDirectiveParser, PythonCodeBlockParser
from tests.helpers import parse, sample_path


def test_use_existing_doctest_parser():
    path = sample_path('myst-doctest-use-rest.md')
    examples, namespace = parse('myst-doctest-use-rest.md', ReSTDocTestParser(), expected=6)
    examples[0].evaluate()
    assert namespace['x'] == 2
    examples[1].evaluate()
    assert namespace['x'] == 2
    examples[2].evaluate()
    assert namespace['x'] == 3
    with pytest.raises(SybilFailure) as excinfo:
        examples[3].evaluate()
    compare(str(excinfo.value), expected=(
        f"Example at {path}, line 28, column 1 did not evaluate as expected:\n"
        "Expected:\n"
        " 3\n"
        "Got:\n"
        " 2\n"
    ))
    examples[4].evaluate()
    examples[5].evaluate()


def test_use_python_codeblock_parser():
    examples, namespace = parse('myst-doctest.md', PythonCodeBlockParser(), expected=3)
    examples[0].evaluate()
    examples[1].evaluate()
    assert namespace['x'] == 2
    examples[2].evaluate()
    assert namespace['x'] == 3


def test_fail_with_options_using_python_codeblock_parser():
    parser = PythonCodeBlockParser(doctest_optionflags=REPORT_NDIFF|ELLIPSIS)
    examples, namespace = parse('myst-doctest-fail.md', parser, expected=1)
    with pytest.raises(SybilFailure) as excinfo:
        examples[0].evaluate()
    assert excinfo.value.result == (
        "Differences (ndiff with -expected +actual):\n"
        " - Not my output\n"
        " + where's my output?\n"
    )


def test_use_doctest_role_parser():
    path = sample_path('myst-doctest.md')
    examples, namespace = parse('myst-doctest.md', DocTestDirectiveParser(), expected=4)
    examples[0].evaluate()
    assert namespace['y'] == 2
    examples[1].evaluate()
    examples[2].evaluate()
    with pytest.raises(SybilFailure) as excinfo:
        examples[3].evaluate()
    compare(str(excinfo.value), expected=(
        f"Example at {path}, line 45, column 1 did not evaluate as expected:\n"
        "Expected:\n"
        " 3\n"
        "Got:\n"
        " 2\n"
    ))


def test_fail_with_options_using_doctest_role_parser():
    parser = DocTestDirectiveParser(optionflags=REPORT_NDIFF|ELLIPSIS)
    examples, namespace = parse('myst-doctest-fail.md', parser, expected=1)
    with pytest.raises(SybilFailure) as excinfo:
        examples[0].evaluate()
    assert excinfo.value.result == (
        "Differences (ndiff with -expected +actual):\n"
        " - Also not my output\n"
        " + where's my output?\n"
    )


# --- sybil-9.0.0/tests/test_myst_lexers.py ---

from pathlib import Path

from testfixtures import compare

from sybil.parsers.markdown.lexers import FencedCodeBlockLexer, DirectiveInHTMLCommentLexer
from sybil.parsers.myst.lexers import (
    DirectiveLexer, DirectiveInPercentCommentLexer
)
from sybil.region import Region
from .helpers import lex, sample_path, lex_text, check_lexed_regions, check_lexed_text_regions


def test_fenced_code_block():
    lexer = FencedCodeBlockLexer('py?thon')
    check_lexed_regions('myst-lexers.md', lexer, expected=[
        Region(36, 59, lexemes={'language': 'python', 'source': '>>> 1+1\n2\n'}),
        Region(1137, 1173, lexemes={'language': 'pthon', 'source': 'assert 1 + 1 == 2\n'}),
    ])


def test_fenced_code_block_with_mapping():
    lexer = FencedCodeBlockLexer('python', mapping={'source': 'body'})
    check_lexed_regions('myst-lexers.md', lexer, expected=[
        Region(36, 59, lexemes={'body': '>>> 1+1\n2\n'})
    ])


def test_myst_directives():
    lexer = DirectiveLexer(directive='[^}]+')
    check_lexed_regions('myst-lexers.md', lexer, expected=[
        Region(110, 148, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': '>>> 1 + 1\n3\n',
        }),
        Region(188, 276, lexemes={
            'directive': 'directivename',
            'arguments': 'arguments',
            'options': {'key1': 'val1', 'key2': 'val2'},
            'source': 'This is\ndirective content\n',
        }),
        Region(330, 381, lexemes={
            'directive': 'eval-rst',
            'arguments': '',
            'options': {},
            'source': '.. doctest::\n\n >>> 1 + 1\n 4\n',
        }),
        Region(1398, 1479, lexemes={
            'directive': 'foo',
            'arguments': 'bar',
            'options': {'key1': 'val1'},
            'source': 'This, too, is a directive content\n',
        }),
    ])


def test_examples_from_parsing_tests():
    lexer = DirectiveLexer(directive='code-block', arguments='python')
    compare(lex('myst-codeblock.md', lexer), expected=[
        Region(99, 154, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': "raise Exception('boom!')\n",
        }),
        Region(701, 753, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': 'define_this = 1\n\n',
        }),
    ])


def test_myst_directives_with_mapping():
    lexer = DirectiveLexer(directive='directivename', arguments='.*', mapping={'arguments': 'foo'})
    compare(lex('myst-lexers.md', lexer), expected=[
        Region(188, 276, lexemes={'foo': 'arguments', 'options': {}}),
    ])


def test_myst_percent_comment_invisible_directive():
    lexer = DirectiveInPercentCommentLexer(
        directive='(invisible-)?code(-block)?'
    )
    compare(lex('myst-lexers.md', lexer), expected=[
        Region(449, 504, lexemes={
            'directive': 'invisible-code-block',
            'arguments': 'python',
            'source': '\nb = 5\n\n...etc...\n',
        }),
        Region(584, 652, lexemes={
            'directive': 'code-block',
            'arguments': 'py',
            'source': '\nb = 6\n...etc...\n\n',
        }),
    ])


def test_myst_percent_comment_invisible_directive_mapping():
    lexer = DirectiveInPercentCommentLexer(
        directive='inv[^:]+', arguments='python', mapping={'arguments': 'language'}
    )
    compare(lex('myst-lexers.md', lexer), expected=[
        Region(449, 504, lexemes={'language': 'python'}),
    ])


def test_myst_html_comment_invisible_directive():
    lexer = DirectiveInHTMLCommentLexer(
        directive='(invisible-)?code(-block)?'
    )
    compare(lex('myst-lexers.md', lexer), show_whitespace=True, expected=[
        Region(702, 827, lexemes={
            'directive': 'invisible-code-block',
            'arguments': 'python',
            'source': (
                'def foo():\n'
                ' return 42\n\n'
                'meaning_of_life = 42\n\n'
                'assert foo() == meaning_of_life()\n'
            ),
        }),
        Region(912, 1015, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'source': (
                '\n'
                'blank line above ^^\n'
                '\n'
                'blank line below:\n'
                '\n'
            ),
        }),
        Region(1229, 1332, lexemes={
            'directive': 'invisible-code',
            'arguments': 'py',
            'source': (
                '\n'
                'blank line above ^^\n'
                '\n'
                'blank line below:\n'
                '\n'
            ),
        }),
    ])


def test_myst_html_comment_invisible_skip_directive():
    lexer = DirectiveInHTMLCommentLexer(directive='skip')
    compare(lex('myst-lexers.md', lexer), show_whitespace=True, expected=[
        Region(1482, 1498, lexemes={
            'directive': 'skip',
            'arguments': 'next',
            'source': '',
        }),
        Region(1503, 1562, lexemes={
            'directive': 'skip',
            'arguments': 'start if("some stuff here", reason=\'Something\')',
            'source': '',
        }),
        Region(1567, 1584, lexemes={
            'directive': 'skip',
            'arguments': 'and',
            'source': '',
        }),
        Region(1589, 1647, lexemes={
            'directive': 'skip',
            'arguments': 'end',
            'source': '\n\n\nOther stuff here just gets ignored\n\n',
        }),
        Region(1652, 1672, lexemes={
            'directive': 'skip',
            'arguments': 'also',
            'source': '',
        }),
    ])


def test_myst_html_comment_invisible_clear_directive():
    lexer = DirectiveInHTMLCommentLexer('clear-namespace')
    compare(lex('myst-lexers.md', lexer), show_whitespace=True, expected=[
        Region(1678, 1699, lexemes={
            'directive': 'clear-namespace',
            'arguments': '',
            'source': '',
        }),
    ])


def test_lexing_directives():
    lexer = DirectiveLexer('[^}]+')
    compare(lex('myst-lexing-directives.md', lexer), expected=[
        Region(55, 236, lexemes={
            'directive': 'note',
            'arguments': 'This is a note admonition.',
            'options': {},
            'source': ('This is the second line of the first paragraph.\n'
                       '\n'
                       '- The note contains all indented body elements\n'
                       ' following.\n'
                       '- It includes this bullet list.\n'),
        }),
        Region(238, 320, lexemes={
            'directive': 'admonition',
            'arguments': 'And, by the way...',
            'options': {},
            'source': 'You can make up your own admonition too.\n',
        }),
        Region(322, 386, lexemes={
            'directive': 'sample',
            'arguments': '',
            'options': {},
            'source': 'This directive has no arguments, just a body.\n',
        }),
        Region(455, 481, lexemes={
            'directive': 'image',
            'arguments': 'picture.png',
            'options': {},
            'source': '',
        }),
        Region(483, 595, lexemes={
            'directive': 'image',
            'arguments': 'picture.jpeg',
            'options': {
                'height': '100px',
                'width': '200 px',
                'scale': '50 %',
                'alt': 'alternate text',
                'align': 'right',
            },
            'source': '',
        }),
        Region(597, 1314, lexemes={
            'directive': 'figure',
            'arguments': 'picture.png',
            'options': {
                'alt': 'map to buried treasure',
                'scale': '50 %',
            },
            'source': ('This is the caption of the figure (a simple paragraph).\n'
                       '\n'
                       'The legend consists of all elements after the caption. In this\n'
                       'case, the legend consists of this paragraph and the following\n'
                       'table:\n'
                       '\n'
                       '+-----------------------+-----------------------+\n'
                       '| Symbol | Meaning |\n'
                       '+=======================+=======================+\n'
                       '| .. image:: tent.png | Campground |\n'
                       '+-----------------------+-----------------------+\n'
                       '| .. image:: waves.png | Lake |\n'
                       '+-----------------------+-----------------------+\n'
                       '| .. image:: peak.png | Mountain |\n'
                       '+-----------------------+-----------------------+\n'
                       '\n')
        }),
        Region(1317, 1452, lexemes={
            'directive': 'topic',
            'arguments': 'Topic Title',
            'options': {},
            'source': ('Subsequent indented lines comprise\n'
                       'the body of the topic, and are\n'
                       'interpreted as body elements.\n')
        }),
        Region(1506, 1592, lexemes={
            'directive': 'topic',
            'arguments': 'example.cfg',
            'options': {'class': 'read-file'},
            'source': '::\n\n [A Section]\n dir = frob\n'
        }),
        Region(1612, 1804, lexemes={
            'directive': 'sidebar',
            'arguments': 'Optional Sidebar Title',
            'options': {'subtitle': 'Optional Sidebar Subtitle'},
            'source': ('Subsequent indented lines comprise\n'
                       'the body of the sidebar, and are\n'
                       'interpreted as body elements.\n')
        }),
        Region(1807, 2006, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {'lineno-start': 10,
                        'emphasize-lines': '1, 3',
                        'caption': 'This is my\nmulti-line caption. It is *pretty nifty* ;-)\n'},
            'source': "a = 2\nprint('my 1st line')\nprint(f'my {a}nd line')\n",
        }),
        Region(2008, 2213, lexemes={
            'directive': 'eval-rst',
            'arguments': '',
            'options': {},
            'source': (
                '.. figure:: img/fun-fish.png\n'
                ' :width: 100px\n'
                ' :name: rst-fun-fish\n'
                '\n'
                ' Party time!\n'
                '\n'
                'A reference from inside: :ref:`rst-fun-fish`\n'
                '\n'
                'A reference from outside: :ref:`syntax/directives/parsing`\n'
            )
        }),
    ])


def test_directive_no_trailing_newline():
    lexer = DirectiveLexer(directive='toctree')
    text = Path(sample_path('myst-directive-no-trailing-newline.md')).read_text().rstrip('\n')
    compare(lex_text(text, lexer), expected=[
        Region(16, 67, lexemes={
            'directive': 'toctree',
            'arguments': '',
            'options': {'maxdepth': '1'},
            'source': 'flask\npyramid\ncustom\n',
        }),
    ])


def test_directive_nested():
    lexer = DirectiveLexer(directive='.+')
    text = Path(sample_path('myst-directive-nested.md')).read_text().rstrip('\n')
    check_lexed_text_regions(text, lexer, expected=[
        Region(54, 97, lexemes={
            'directive': 'warning',
            'arguments': '',
            'options': {},
            'source': "Here's my warning\n",
        }),
        Region(146, 222, lexemes={
            'directive': 'warning',
            'arguments': '',
            'options': {},
            'source': "Here's my raw text warning that isn't parsed...\n",
        }),
        Region(0, 227, lexemes={
            'directive': 'note',
            'arguments': '',
            'options': {},
            'source': ('The warning block will be properly-parsed\n'
                       '\n'
                       ' ```{warning}\n'
                       " Here's my warning\n"
                       ' ```\n'
                       '\n'
                       'But the next block will be parsed as raw text\n'
                       '\n'
                       ' ```{warning}\n'
                       " Here's my raw text warning that isn't parsed...\n"
                       ' ```\n'),
        }),
    ])


def test_complicated_nesting():
    lexer = DirectiveLexer(directive='.+')
    check_lexed_regions('myst-complicated-nesting.md', lexer, expected=[
        Region(37, 79, lexemes={
            'directive': 'py:module',
            'arguments': 'bytewax.connectors.demo',
            'options': {},
            'source': "",
        }),
        Region(81, 160, lexemes={
            'directive': 'autodoc2-docstring',
            'arguments': 'bytewax.connectors.demo',
            'options': {'parser': 'myst', 'allowtitles': ''},
            'source': "",
        }),
        Region(248, 315, lexemes={
            'directive': 'autodoc2-docstring',
            'arguments': 'bytewax.connectors.demo.X',
            'options': {'parser': 'myst'},
            'source': "",
        }),
        Region(171, 321,
               lexemes={
            'directive': 'py:data',
            'arguments': 'X',
            'options': {'canonical': 'bytewax.connectors.demo.X', 'type': 'typing.TypeVar'},
            'source': "```{autodoc2-docstring} bytewax.connectors.demo.X\n"
                      ":parser: myst\n"
                      "```\n\n",
        }),
        Region(789, 873, lexemes={
            'directive': 'autodoc2-docstring',
            'arguments': 'bytewax.connectors.demo.RandomMetricSource',
            'options': {'parser': 'myst'},
            'source': "",
        }),
        Region(875, 905, lexemes={
            'directive': 'rubric',
            'arguments': 'Initialization',
            'options': {},
            'source': "",
        }),
        Region(907, 1000, lexemes={
            'directive': 'autodoc2-docstring',
            'arguments': 'bytewax.connectors.demo.RandomMetricSource.__init__',
            'options': {'parser': 'myst'},
            'source': "",
        }),
        Region(1002, 1122, lexemes={
            'directive': 'py:method',
            'arguments': 'list_parts() -> typing.List[str]',
            'options': {'canonical': 'bytewax.connectors.demo.RandomMetricSource.list_parts'},
            'source': "",
        }),
        Region(1124, 1336, lexemes={
            'directive': 'py:method',
            'arguments': 'build_part(now: datetime.datetime, for_part: str, resume_state: '
                         'typing.Optional[bytewax.connectors.demo._RandomMetricState])',
            'options': {'canonical': 'bytewax.connectors.demo.RandomMetricSource.build_part'},
            'source': "",
        }),
        Region(336, 1343, lexemes={
            'directive': 'py:class',
            'arguments': 'RandomMetricSource(metric_name: str, interval: datetime.timedelta = '
                         'timedelta(seconds=0.7), count: int = sys.maxsize, next_random: '
                         'typing.Callable[[], float] = lambda: random.randrange(0, 10))',
            'options': {'canonical': 'bytewax.connectors.demo.RandomMetricSource'},
            'source': ":Bases:\n"
                      " - {py:obj}`~bytewax.inputs.FixedPartitionedSource``[`{py:obj}`"
                      "~typing.Tuple``[`{py:obj}`~str``, `{py:obj}`~float``],"
                      " `{py:obj}`~bytewax.connectors.demo._RandomMetricState``]`\n\n"
                      "```{autodoc2-docstring} bytewax.connectors.demo.RandomMetricSource\n"
                      ":parser: myst\n"
                      "```\n\n"
                      "```{rubric} Initialization\n"
                      "```\n\n"
                      "```{autodoc2-docstring} "
                      "bytewax.connectors.demo.RandomMetricSource.__init__\n"
                      ":parser: myst\n"
                      "```\n\n"
                      "````{py:method} list_parts() -> typing.List[str]\n"
                      ":canonical: bytewax.connectors.demo.RandomMetricSource.list_parts\n\n"
                      "````\n\n"
                      "````{py:method} build_part(now: datetime.datetime, for_part: str, "
                      "resume_state: typing.Optional[bytewax.connectors.demo._RandomMetricState])\n"
                      ":canonical: bytewax.connectors.demo.RandomMetricSource.build_part\n\n"
                      "````\n\n",
        }),
    ])

sybil-9.0.0/tests/test_myst_skip.py

from sybil.parsers.myst import PythonCodeBlockParser, SkipParser
from .helpers import parse


def test_basic():
    examples, namespace = parse('myst-skip.md', PythonCodeBlockParser(), SkipParser(), expected=9)
    for example in examples:
        example.evaluate()
    assert namespace['run'] == [2, 5]

sybil-9.0.0/tests/test_rest_lexers.py

from testfixtures import compare

from sybil.parsers.rest.lexers import DirectiveInCommentLexer
from sybil.parsers.rest.lexers import DirectiveLexer
from sybil.region import Region
from .helpers import lex, lex_text


def test_examples_from_parsing_tests():
    lexer = DirectiveLexer(directive='code-block', arguments='python')
    compare(lex('codeblock.txt', lexer)[:2], expected=[
        Region(23, 56, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': 'y += 1\n',
        }),
        Region(106, 157, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': "raise Exception('boom!')\n",
        }),
    ])


def test_examples_from_directive_tests():
    lexer = DirectiveLexer(directive='doctest')
    compare(lex('doctest_directive.txt', lexer), expected=[
        Region(102, 136, lexemes={
            'directive': 'doctest',
            'arguments': None,
            'options': {},
            'source': '>>> 1 + 1\n2\n',
        }),
        Region(205, 249, lexemes={
            'directive': 'doctest',
            'arguments': None,
            'options': {},
            'source': '>>> 1 + 1\nUnexpected!\n',
        }),
        Region(307, 353, lexemes={
            'directive': 'doctest',
            'arguments': None,
            'options': {},
            'source':
                ">>> raise Exception('boom!')",
        }),
    ])


def test_directive_nested_in_md():
    lexer = DirectiveLexer(directive='doctest')
    compare(lex('doctest_rest_nested_in_md.md', lexer), expected=[
        Region(14, 47, lexemes={
            'directive': 'doctest',
            'arguments': None,
            'options': {},
            'source': '>>> 1 + 1\n3',
        }),
    ])


def test_directive_with_single_line_body_at_end_of_string():
    text = """
My comment

 .. code-block:: python
 test_my_code("Hello World")
"""
    lexer = DirectiveLexer(directive='code-block')
    start = 25
    end = 95
    compare(lex_text(text, lexer), expected=[
        Region(start, end, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': 'test_my_code("Hello World")',
        }),
    ])
    compare(text[start:end], show_whitespace=True, expected=(
        ' .. code-block:: python\n'
        ' test_my_code("Hello World")'
    ))


def test_directive_with_multi_line_body_at_end_of_string():
    text = """
My comment

 .. code-block:: python
 test_my_code("Hello")
 test_my_code("World")
"""
    lexer = DirectiveLexer(directive='code-block')
    start = 21
    end = 119
    compare(lex_text(text, lexer), expected=[
        Region(start, end, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': 'test_my_code("Hello")\ntest_my_code("World")',
        }),
    ])
    compare(text[start:end], show_whitespace=True, expected=(
        ' .. code-block:: python\n'
        ' test_my_code("Hello")\n'
        ' test_my_code("World")'
    ))


def test_skip_lexing():
    lexer = DirectiveInCommentLexer(directive='skip')
    compare(lex('skip.txt', lexer), expected=[
        Region(70, 84, lexemes={
            'directive': 'skip', 'arguments': 'next',
            'options': {}, 'source': ''
        }),
        Region(197, 212, lexemes={
            'directive': 'skip', 'arguments': 'start',
            'options': {}, 'source': ''
        }),
        Region(329, 342, lexemes={
            'directive': 'skip', 'arguments': 'end',
            'options': {}, 'source': ''
        }),
    ])


def test_skip_lexing_bad():
    lexer = DirectiveInCommentLexer(directive='skip')
    compare(lex('skip-conditional-bad.txt', lexer), expected=[
        Region(13, 29, lexemes={
            'directive': 'skip', 'arguments': 'lolwut',
            'options': {}, 'source': ''
        }),
        Region(49, 64, lexemes={
            'directive': 'skip', 'arguments': 'start',
            'options': {}, 'source': ''
        }),
        Region(65, 88, lexemes={
            'directive': 'skip', 'arguments': 'end if(1 > 2)',
            'options': {}, 'source': ''
        }),
        Region(104, 179, lexemes={
            'directive': 'skip',
            'arguments': 'next if(sys.version_info < (3, 0), reason="only true on python 3"',
            'options': {},
            'source': '',
        }),
    ])


def test_arguments_without_body():
    lexer = DirectiveInCommentLexer(directive='skip')
    compare(lex('skip-conditional-edges.txt', lexer), expected=[
        Region(0, 40, lexemes={
            'directive': 'skip',
            'arguments': 'next if(True, reason="skip 1")',
            'options': {}, 'source': ''
        }),
        Region(100, 142, lexemes={
            'directive': 'skip',
            'arguments': 'start if(False, reason="skip 2")',
            'options': {}, 'source': ''
        }),
        Region(205, 218, lexemes={
            'directive': 'skip', 'arguments': 'end',
            'options': {}, 'source': ''
        }),
    ])


def test_lexing_directives():
    lexer = DirectiveLexer('[^:]+')
    compare(lex('lexing-directives.txt', lexer), expected=[
        Region(55, 245, lexemes={
            'directive': 'note',
            'arguments': 'This is a note admonition.',
            'options': {},
            'source': ('This is the second line of the first paragraph.\n'
                       '\n'
                       '- The note contains all indented body elements\n'
                       ' following.\n'
                       '- It includes this bullet list.\n'),
        }),
        Region(246, 326, lexemes={
            'directive': 'admonition',
            'arguments': 'And, by the way...',
            'options': {},
            'source': 'You can make up your own admonition too.\n',
        }),
        Region(327, 389, lexemes={
            'directive': 'sample',
            'arguments': None,
            'options': {},
            'source': 'This directive has no arguments, just a body.\n',
        }),
        Region(457, 480, lexemes={
            'directive': 'image',
            'arguments': 'picture.png',
            'options': {},
            'source': '',
        }),
        Region(481, 598, lexemes={
            'directive': 'image',
            'arguments': 'picture.jpeg',
            'options': {
                'height': '100px',
                'width': '200 px',
                'scale': '50 %',
                'alt': 'alternate text',
                'align': 'right',
            },
            'source': '',
        }),
        Region(599, 1353, lexemes={
            'directive': 'figure',
            'arguments': 'picture.png',
            'options': {
                'alt': 'map to buried treasure',
                'scale': '50 %',
            },
            'source': ('This is the caption of the figure (a simple paragraph).\n'
                       '\n'
                       'The legend consists of all elements after the caption. In this\n'
                       'case, the legend consists of this paragraph and the following\n'
                       'table:\n'
                       '\n'
                       '+-----------------------+-----------------------+\n'
                       '| Symbol | Meaning |\n'
                       '+=======================+=======================+\n'
                       '| .. image:: tent.png | Campground |\n'
                       '+-----------------------+-----------------------+\n'
                       '| .. image:: waves.png | Lake |\n'
                       '+-----------------------+-----------------------+\n'
                       '| .. image:: peak.png | Mountain |\n'
                       '+-----------------------+-----------------------+\n'
                       '\n')
        }),
        Region(1354, 1486, lexemes={
            'directive': 'topic',
            'arguments': 'Topic Title',
            'options': {},
            'source': ('Subsequent indented lines comprise\n'
                       'the body of the topic, and are\n'
                       'interpreted as body elements.\n')
        }),
        Region(1539, 1616, lexemes={
            'directive': 'topic',
            'arguments': 'example.cfg',
            'options': {'class': 'read-file'},
            'source': '::\n\n [A Section]\n dir = frob\n'
        }),
        Region(1635, 1819, lexemes={
            'directive': 'sidebar',
            'arguments': 'Optional Sidebar Title',
            'options': {'subtitle': 'Optional Sidebar Subtitle'},
            'source': ('Subsequent indented lines comprise\n'
                       'the body of the sidebar, and are\n'
                       'interpreted as body elements.\n')
        }),
        Region(1856, 1871, lexemes={
            'directive': 'skip',
            'arguments': 'next',
            'options': {},
            'source': '',
        }),
        Region(1871, 1911, lexemes={
            'directive': 'code-block',
            'arguments': 'python',
            'options': {},
            'source': (
                'run.append(1)\n'
            ),
        }),
        Region(1988, 2066, lexemes={
            'directive': 'topic',
            'arguments': 'example.cfg',
            'options': {'class': 'read-file'},
            'source': (
                '::\n\n [A Section]\n dir = frob\n\n'
            ),
        }),
    ])


def test_lexing_nested_directives():
    # This test illustrates that we can lex nested directives,
    # but the blocks overlap, so would not be parseable unless only
    # the deepest nested blocks were parsed.
    lexer = DirectiveLexer('[^:]+')
    actual = lex('lexing-nested-directives.txt', lexer)
    compare(actual, expected=[
        Region(55, 236, lexemes={
            'directive': 'note',
            'arguments': 'This is a note admonition.',
            'options': {},
            'source': (
                'This is the second line of the first paragraph.\n\n'
                ' .. admonition:: And, by the way...\n\n'
                ' You can make up your own admonition too.\n'
            ),
        }),
        Region(144, 236, lexemes={
            'directive': 'admonition',
            'arguments': 'And, by the way...',
            'options': {},
            'source': 'You can make up your own admonition too.\n',
        }),
        Region(304, 783, lexemes={
            'directive': 'image',
            'arguments': 'picture.png',
            'options': {},
            'source': (
                '.. image:: picture.jpeg\n'
                ' :height: 100px\n'
                ' :width: 200 px\n'
                ' :scale: 50 %\n'
                ' :alt: alternate text\n'
                ' :align: right\n'
                '\n'
                '.. figure:: picture.png\n'
                ' :scale: 50 %\n'
                ' :alt: map to buried treasure\n'
                '\n'
                ' This is the caption of the figure (a simple paragraph).\n'
                '\n'
                '\n'
                ' .. topic:: Topic Title\n'
                '\n'
                ' Subsequent indented lines comprise\n'
                ' the body of the topic, and are\n'
                ' interpreted as body elements.'
            ),
        }),
        Region(328, 469, lexemes={
            'directive': 'image',
            'arguments': 'picture.jpeg',
            'options': {
                'align': 'right',
                'alt': 'alternate text',
                'height': '100px',
                'scale': '50 %',
                'width': '200 px',
            },
            'source': '',
        }),
        Region(470, 783, lexemes={
            'directive': 'figure',
            'arguments': 'picture.png',
            'options': {
                'scale': '50 %',
                'alt': 'map to buried treasure'
            },
            'source': (
                'This is the caption of the figure (a simple paragraph).\n'
                '\n'
                '\n'
                ' .. topic:: Topic Title\n'
                '\n'
                ' Subsequent indented lines comprise\n'
                ' the body of the topic, and are\n'
                ' interpreted as body elements.'
            ),
        }),
        Region(620, 783, lexemes={
            'directive': 'topic',
            'arguments': 'Topic Title',
            'options': {},
            'source': (
                'Subsequent indented lines comprise\n'
                'the body of the topic, and are\n'
                'interpreted as body elements.'
            ),
        }),
    ])


def test_lexing_directive_in_comment_without_double_colon():
    lexer = DirectiveInCommentLexer('clear-namespace')
    compare(lex('clear.txt', lexer), expected=[
        Region(48, 67, lexemes={
            'directive': 'clear-namespace',
            'arguments': None,
            'options': {},
            'source': ''
        }),
    ])

sybil-9.0.0/tests/test_skip.py

import sys
from unittest import SkipTest

import pytest
from testfixtures import ShouldRaise

from sybil import Document
from sybil.parsers.rest import PythonCodeBlockParser, DocTestParser, SkipParser
from .helpers import parse, sample_path


def test_basic():
    examples, namespace = parse('skip.txt', PythonCodeBlockParser(), SkipParser(), expected=9)
    for example in examples:
        example.evaluate()
    assert namespace['run'] == [2, 5]


def test_conditional_edge_cases():
    examples, namespace = parse(
        'skip-conditional-edges.txt', DocTestParser(), SkipParser(), expected=8
    )
    namespace['sys'] = sys
    namespace['run'] = []
    skipped = []
    for example in examples:
        try:
            example.evaluate()
        except SkipTest as e:
            skipped.append(str(e))
    assert namespace['run'] == [1, 2, 3, 4]
    # we should always have one and only one skip from this document.
    assert skipped == ['skip 1']


def test_conditional_full():
    examples, namespace = parse('skip-conditional.txt', DocTestParser(), SkipParser(), expected=11)
    namespace['result'] = result = []
    for example in examples:
        try:
            example.evaluate()
        except SkipTest as e:
            result.append('skip:'+str(e))
    assert result == [
        'start',
        'skip:(2 > 1)',
        'good 1',
        'skip:foo',
        'skip:foo',
        'good 2',
        'skip:good reason',
    ]


def test_bad():
    examples, namespace = parse('skip-conditional-bad.txt', SkipParser(), expected=4)
    with pytest.raises(ValueError) as excinfo:
        examples[0].evaluate()
    assert str(excinfo.value) == 'Bad skip action: lolwut'
    examples[1].evaluate()
    with pytest.raises(ValueError) as excinfo:
        examples[2].evaluate()
    assert str(excinfo.value) == "Cannot have condition on 'skip: end'"
    with pytest.raises(SyntaxError):
        examples[3].evaluate()


def test_start_follows_start():
    examples, namespace = parse('skip-start-start.txt', DocTestParser(), SkipParser(), expected=7)
    namespace['result'] = result = []
    for example in examples[:2]:
        example.evaluate()
    with ShouldRaise(ValueError("'skip: start' cannot follow 'skip: start'")):
        examples[2].evaluate()
    assert result == []


def test_next_follows_start():
    examples, namespace = parse('skip-start-next.txt', DocTestParser(), SkipParser(), expected=7)
    namespace['result'] = result = []
    for example in examples[:2]:
        example.evaluate()
    with ShouldRaise(ValueError("'skip: next' cannot follow 'skip: start'")):
        examples[2].evaluate()
    assert result == []


def test_end_no_start():
    examples, namespace = parse('skip-just-end.txt', DocTestParser(), SkipParser(), expected=3)
    namespace['result'] = result = []
    examples[0].evaluate()
    with ShouldRaise(ValueError("'skip: end' must follow 'skip: start'")):
        examples[1].evaluate()
    assert result == ['good']


def test_next_follows_next():
    examples, namespace = parse('skip-next-next.txt', DocTestParser(), SkipParser(), expected=4)
    namespace['result'] = result = []
    for example in examples:
        example.evaluate()
    assert result == [1]
def test_malformed_arguments():
    path = sample_path('skip-malformed-arguments.txt')
    with ShouldRaise(ValueError("malformed arguments to skip: '<:'")):
        Document.parse(path, SkipParser())

sybil-9.0.0/tests/test_sybil.py

from __future__ import print_function

import re
from functools import partial
from os.path import split
from pathlib import Path

import pytest
from testfixtures import compare, StringComparison

from sybil import Sybil, Region
from sybil.document import Document, PythonDocument
from sybil.example import Example, SybilFailure
from .helpers import sample_path, write_doctest


@pytest.fixture()
def document():
    return Document('ABCDEFGH', '/the/path')


class TestRegion:

    def test_repr_with_evaluator(self):
        region = Region(0, 1, parsed='parsed', evaluator='evaluator')
        compare(
            repr(region),
            expected=(
                "'parsed'"
            )
        )

    def test_repr_with_lexemes(self):
        compare(
            str(Region(36, 56, lexemes={
                'language': 'python',
                'foo': None,
                'source': 'A'+'X'*1000+'Z',
                'bar': {},
                'baz': {f'a{i}': 'b' for i in range(11)},
            })),
            expected="\n"
                     "language: 'python'\n"
                     "foo: None\n"
                     "source: 'AXXXXXXXXXXXXXXXXXXXX...XXXXXXXXXXXXXXXXXXXXZ'\n"
                     "bar: {}\n"
                     "baz: {'a0': 'b', 'a1': 'b', 'a2': 'b', 'a3': 'b', "
                     "'a4': 'b', 'a5': 'b', 'a6': 'b', 'a7': 'b', 'a8': 'b', 'a9': 'b', "
                     "'a10': 'b'}\n"
                     ""
        )

    def test_repr_with_parsed(self):
        compare(
            str(Region(36, 56, parsed={
                'language': 'python',
                'foo': None,
                'source': 'X'*1000,
                'bar': {},
                'baz': {f'a{i}': 'b' for i in range(11)},
            })),
            expected="{'language': 'python'"
                     "...9': 'b', 'a10': 'b'}}"
        )


class TestExample:

    def test_repr(self, document):
        region = Region(0, 1, 'parsed', 'evaluator')
        example = Example(document, 1, 2, region, {})
        assert (repr(example) == "")

    def test_evaluate_okay(self, document):
        def evaluator(example):
            example.namespace['parsed'] = example.parsed
        region = Region(0, 1, 'the data', evaluator)
        namespace = {}
        example = Example(document, 1, 2, region,
                          namespace)
        result = example.evaluate()
        assert result is None
        assert namespace == {'parsed': 'the data'}

    def test_evaluate_not_okay(self, document):
        def evaluator(example):
            return 'foo!'
        region = Region(0, 1, 'the data', evaluator)
        example = Example(document, 1, 2, region, {})
        with pytest.raises(SybilFailure) as excinfo:
            example.evaluate()
        assert str(excinfo.value) == (
            'Example at /the/path, line 1, column 2 did not evaluate as '
            'expected:\nfoo!'
        )
        assert excinfo.value.example is example
        assert excinfo.value.result == 'foo!'

    def test_evaluate_raises_exception(self, document):
        def evaluator(example):
            raise ValueError('foo!')
        region = Region(0, 1, 'the data', evaluator)
        example = Example(document, 1, 2, region, {})
        with pytest.raises(ValueError) as excinfo:
            example.evaluate()
        assert str(excinfo.value) == 'foo!'


class TestDocument:

    def test_add(self, document):
        region = Region(0, 1, None, None)
        document.add(region)
        assert [e.region for e in document] == [region]

    def test_add_no_overlap(self, document):
        region1 = Region(0, 1, None, None)
        region2 = Region(6, 8, None, None)
        document.add(region1)
        document.add(region2)
        assert [e.region for e in document] == [region1, region2]

    def test_add_out_of_order(self, document):
        region1 = Region(0, 1, None, None)
        region2 = Region(6, 8, None, None)
        document.add(region2)
        document.add(region1)
        assert [e.region for e in document] == [region1, region2]

    def test_add_adjacent(self, document):
        region1 = Region(0, 1, None, None)
        region2 = Region(1, 2, None, None)
        region3 = Region(2, 3, None, None)
        document.add(region1)
        document.add(region3)
        document.add(region2)
        assert [e.region for e in document] == [region1, region2, region3]

    def test_add_before_start(self, document):
        region = Region(-1, 0, None, None)
        with pytest.raises(ValueError) as excinfo:
            document.add(region)
        assert str(excinfo.value) == (
            ' '
            'from line 1, column 0 to line 1, column 1 '
            'is before start of document'
        )

    def test_add_after_end(self, document):
        region = Region(len(document.text), len(document.text)+1, None, None)
        with pytest.raises(ValueError) as excinfo:
            document.add(region)
        assert str(excinfo.value) == (
            ' '
            'from line 1, column 9 to line 1, column 10 '
            'goes beyond end of document'
        )

    def test_add_overlaps_with_previous(self, document):
        region1 = Region(0, 2, None, None)
        region2 = Region(1, 3, None, None)
        document.add(region1)
        with pytest.raises(ValueError) as excinfo:
            document.add(region2)
        assert str(excinfo.value) == (
            ''
            ' from line 1, column 1 to line 1, column 3 overlaps '
            ''
            ' from line 1, column 2 to line 1, column 4'
        )

    def test_add_at_same_place(self, document):
        region1 = Region(0, 2, None, None)
        region2 = Region(0, 3, None, None)
        document.add(region1)
        with pytest.raises(ValueError) as excinfo:
            document.add(region2)
        assert str(excinfo.value) == (
            ''
            ' from line 1, column 1 to line 1, column 4 overlaps '
            ''
            ' from line 1, column 1 to line 1, column 3'
        )

    def test_add_identical(self, document):
        region1 = Region(0, 2, None, None)
        region2 = Region(0, 2, None, None)
        document.add(region1)
        with pytest.raises(ValueError) as excinfo:
            document.add(region2)
        assert str(excinfo.value) == (
            ''
            ' from line 1, column 1 to line 1, column 3 overlaps '
            ''
            ' from line 1, column 1 to line 1, column 3'
        )

    def test_add_overlaps_with_next(self, document):
        region1 = Region(0, 1, None, None)
        region2 = Region(1, 3, None, None)
        region3 = Region(2, 4, None, None)
        document.add(region1)
        document.add(region3)
        with pytest.raises(ValueError) as excinfo:
            document.add(region2)
        assert str(excinfo.value) == (
            ' '
            'from line 1, column 2 to line 1, column 4 overlaps '
            ' '
            'from line 1, column 3 to line 1, column 5'
        )

    def test_example_path(self, document):
        document.add(Region(0, 1, None, None))
        assert [e.document for e in document] == [document]

    def test_example_line_and_column(self):
        text = 'R1XYZ\nR2XYZ\nR3XYZ\nR4XYZ\nR4XYZ\n'
        i = text.index
        document = Document(text, '')
        document.add(Region(0, i('R2')+2, None, None))
        document.add(Region(i('R3')-1, i('R3')+2, None, None))
        document.add(Region(i('R4')+3, len(text), None, None))
        assert ([(e.line, e.column) for e in document] ==
                [(1, 1), (2, 6), (4, 4)])


def check(letter, parsed, namespace):
    assert namespace is None
    text, expected = parsed
    assert set(text) == {letter}
    actual = text.count(letter)
    if actual != expected:
        return '{} count was {} instead of {}'.format(
            letter, actual, expected
        )
    # This would normally be wrong, but handy for testing here:
    return '{} count was {}, as expected'.format(letter, actual)


def parse_for_x(document):
    for m in re.finditer(r'(X+) (\d+) check', document.text):
        yield Region(m.start(), m.end(), (m.group(1), int(m.group(2))), partial(check, 'X'))


def parse_for_y(document):
    for m in re.finditer(r'(Y+) (\d+) check', document.text):
        yield Region(m.start(), m.end(), (m.group(1), int(m.group(2))), partial(check, 'Y'))


def evaluate_examples(examples):
    return [e.region.evaluator(e.region.parsed, namespace=None) for e in examples]


class TestSybil:

    def test_parse(self):
        sybil = Sybil([parse_for_x, parse_for_y])
        document = sybil.parse(Path(sample_path('sample1.txt')))
        assert (evaluate_examples(document) ==
                ['X count was 4, as expected', 'Y count was 3, as expected'])
        document = sybil.parse(Path(sample_path('sample2.txt')))
        assert (evaluate_examples(document) ==
                ['X count was 3 instead of 4', 'Y count was 3, as expected'])

    def test_explicit_encoding(self, tmp_path: Path):
        path = (tmp_path / 'encoded.txt')
        path.write_text(u'X 1 check\n\xa3', encoding='charmap')
        sybil = Sybil([parse_for_x], encoding='charmap')
        document = sybil.parse(path)
        assert (evaluate_examples(document) == ['X count was 1, as expected'])

    def test_augment_document_mapping(self, tmp_path: Path):
        class TextDocument(Document):
            pass
        sybil = Sybil([], document_types={'.txt': TextDocument})
        document = sybil.parse(write_doctest(tmp_path, 'test.txt'))
        assert type(document) is TextDocument
        document = sybil.parse(write_doctest(tmp_path, 'test.rst'))
        assert type(document) is Document

    def test_override_document_mapping(self):
        sybil = Sybil([parse_for_x, parse_for_y], document_types={'.py': PythonDocument})
        document = sybil.parse(Path(sample_path('sample1.txt')))
        assert (evaluate_examples(document) ==
                ['X count was 4, as expected', 'Y count was 3, as expected'])
        document = sybil.parse(Path(sample_path('comments.py')))
        assert (evaluate_examples(document) ==
                ['X count was 4, as expected', 'Y count was 3, as expected'])

    def test_addition(self):
        rest = Sybil([parse_for_x])
        myst = Sybil([parse_for_y])
        sybil = rest + myst
        assert sybil == [rest, myst]
        # check integrations exist:
        assert sybil.pytest
        assert sybil.unittest

    def test_addition_to_collection(self):
        rest = Sybil([parse_for_x])
        myst = Sybil([parse_for_y])
        bust = Sybil([parse_for_y])
        sybil = rest + myst
        sybil += [bust]
        assert sybil == [rest, myst, bust]
        # check integrations exist:
        assert sybil.pytest
        assert sybil.unittest

    def test_default_str(self):
        sybil = Sybil([parse_for_x])
        compare(str(sybil), expected=StringComparison(''))

    def test_named_str(self):
        sybil = Sybil([parse_for_x], name='foo')
        compare(str(sybil), expected='')

    def test_default_repr(self):
        sybil = Sybil([parse_for_x])
        compare(repr(sybil), expected=StringComparison(''))

    def test_named_repr(self):
        sybil = Sybil([parse_for_x], name='foo')
        compare(repr(sybil), expected='')


def check_into_namespace(example):
    parsed, namespace = example.region.parsed, example.namespace
    if 'parsed' not in namespace:
        namespace['parsed'] = []
    namespace['parsed'].append(parsed)
    print(namespace['parsed'])


def parse(document):
    for m in re.finditer(r'([XY]+) (\d+) check', document.text):
        yield Region(m.start(), m.end(), m.start(), check_into_namespace)


def test_namespace(capsys):
    sybil = Sybil([parse], path='./samples')
    documents = [sybil.parse(p) for p in sorted(sybil.path.glob('sample*.txt'))]
    actual = []
    for document in documents:
        for example in document:
            print(split(example.path)[-1], example.line)
            example.evaluate()
            actual.append((
                split(example.path)[-1],
                example.line,
                document.namespace['parsed'].copy(),
            ))
    out, _ = capsys.readouterr()
    assert out.split('\n') == [
        'sample1.txt 1',
        '[0]',
        'sample1.txt 3',
        '[0, 14]',
        'sample2.txt 1',
        '[0]',
        'sample2.txt 3',
        '[0, 13]',
        ''
    ]