parso-0.5.2/
parso-0.5.2/PKG-INFO
Metadata-Version: 2.1
Name: parso
Version: 0.5.2
Summary: A Python Parser
Home-page: https://github.com/davidhalter/parso
Author: David Halter
Author-email: davidhalter88@gmail.com
Maintainer: David Halter
Maintainer-email: davidhalter88@gmail.com
License: MIT
Description:
###################################################################
parso - A Python Parser
###################################################################

.. image:: https://travis-ci.org/davidhalter/parso.svg?branch=master
    :target: https://travis-ci.org/davidhalter/parso
    :alt: Travis CI build status

.. image:: https://coveralls.io/repos/github/davidhalter/parso/badge.svg?branch=master
    :target: https://coveralls.io/github/davidhalter/parso?branch=master
    :alt: Coverage Status

.. image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png

Parso is a Python parser that supports error recovery and round-trip parsing
for different Python versions (and itself runs on multiple Python versions).
Parso is also able to list multiple syntax errors in your Python file.

Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful
for other projects as well.

Parso consists of a small API to parse Python and analyse the syntax tree.

A simple example:

.. code-block:: python

    >>> import parso
    >>> module = parso.parse('hello + 1', version="3.6")
    >>> expr = module.children[0]
    >>> expr
    PythonNode(arith_expr, [<Name: hello@1,0>, <Operator: +>, <Number: 1>])
    >>> print(expr.get_code())
    hello + 1
    >>> name = expr.children[0]
    >>> name
    <Name: hello@1,0>
    >>> name.end_pos
    (1, 5)
    >>> expr.end_pos
    (1, 9)

To list multiple issues:

.. code-block:: python

    >>> grammar = parso.load_grammar()
    >>> module = grammar.parse('foo +\nbar\ncontinue')
    >>> error1, error2 = grammar.iter_errors(module)
    >>> error1.message
    'SyntaxError: invalid syntax'
    >>> error2.message
    "SyntaxError: 'continue' not properly in loop"

Resources
=========

- `Testing <https://parso.readthedocs.io/en/latest/docs/development.html#testing>`_
- `PyPI <https://pypi.python.org/pypi/parso>`_
- `Docs <https://parso.readthedocs.io/en/latest/>`_
- Uses `semantic versioning <https://semver.org/>`_

Installation
============

    pip install parso

Future
======

- There will be better support for refactoring and comments. Stay tuned.
- There's a WIP PEP8 validator. It's not in good shape yet, however.

Known Issues
============

- `async`/`await` are already used as keywords in Python 3.6.
- `from __future__ import print_function` is not ignored.

Acknowledgements
================

- Guido van Rossum (@gvanrossum) for creating the parser generator pgen2
  (originally used in lib2to3).
- `Salome Schneider <https://www.crepes-schnaggels.ch>`_ for the extremely
  awesome parso logo.

.. _jedi: https://github.com/davidhalter/jedi
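A minimal sketch of the error recovery mentioned above, using the same calls
as the examples above: even syntactically broken input yields a tree, and
``get_code()`` round-trips the source exactly.

.. code-block:: python

    >>> import parso
    >>> module = parso.parse('def \nx = 1')  # 'def ' alone is a syntax error
    >>> module.get_code()  # the broken source is preserved character for character
    'def \nx = 1'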
.. :changelog:

Changelog
---------

0.5.2 (2019-12-15)
++++++++++++++++++

- Add include_setitem to get_definition/is_definition and get_defined_names (#66)
- Fix named expression error listing (#89, #90)
- Fix some f-string tokenizer issues (#93)

0.5.1 (2019-07-13)
++++++++++++++++++

- Fix: Some unicode identifiers were not correctly tokenized
- Fix: Line continuations in f-strings are now working

0.5.0 (2019-06-20)
++++++++++++++++++

- **Breaking Change** comp_for is now called sync_comp_for for all Python
  versions to be compatible with the Python 3.8 grammar
- Added .pyi stubs for a lot of the parso API
- Small FileIO changes

0.4.0 (2019-04-05)
++++++++++++++++++

- Python 3.8 support
- FileIO support: it's now possible to use abstract file IO (support is alpha)

0.3.4 (2019-02-13)
++++++++++++++++++

- Fix an f-string tokenizer error

0.3.3 (2019-02-06)
++++++++++++++++++

- Fix async errors in the diff parser
- A fix in iter_errors
- This is a very small bugfix release

0.3.2 (2019-01-24)
++++++++++++++++++

- 20+ bugfixes in the diff parser and 3 in the tokenizer
- A fuzzer for the diff parser, to give confidence that the diff parser is in
  good shape
- Some bugfixes for f-strings

0.3.1 (2018-07-09)
++++++++++++++++++

- Bugfixes in the diff parser and keyword-only arguments

0.3.0 (2018-06-30)
++++++++++++++++++

- Rewrote the pgen2 parser generator.

0.2.1 (2018-05-21)
++++++++++++++++++

- A bugfix for the diff parser.
- Grammar files can now be loaded from a specific path.

0.2.0 (2018-04-15)
++++++++++++++++++

- f-strings are now parsed as a part of the normal Python grammar. This makes
  it way easier to deal with them.

0.1.1 (2017-11-05)
++++++++++++++++++

- Fixed a few bugs in the caching layer
- Added support for Python 3.7

0.1.0 (2017-09-04)
++++++++++++++++++

- Pulling the library out of Jedi. Some APIs will definitely change.
Keywords: python parser parsing
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Plugins
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE)
Classifier: Topic :: Utilities
Provides-Extra: testing

parso-0.5.2/AUTHORS.txt

Main Authors
============

David Halter (@davidhalter)

Code Contributors
=================

Alisdair Robertson (@robodair)

Code Contributors (to Jedi and therefore possibly to this library)
==================================================================

Takafumi Arakaki (@tkf)
Danilo Bargen (@dbrgn)
Laurens Van Houtven (@lvh) <_@lvh.cc>
Aldo Stracquadanio (@Astrac)
Jean-Louis Fuchs (@ganwell)
tek (@tek)
Yasha Borevich (@jjay)
Aaron Griffin
andviro (@andviro)
Mike Gilbert (@floppym)
Aaron Meurer (@asmeurer)
Lubos Trilety
Akinori Hattori (@hattya)
srusskih (@srusskih)
Steven Silvester (@blink1073)
Colin Duquesnoy (@ColinDuquesnoy)
Jorgen Schaefer (@jorgenschaefer)
Fredrik Bergroth (@fbergroth)
Mathias Fußenegger (@mfussenegger)
Syohei Yoshida (@syohex)
ppalucky (@ppalucky)
immerrr (@immerrr) immerrr@gmail.com
Albertas Agejevas (@alga)
Savor d'Isavano (@KenetJervet)
Phillip Berndt (@phillipberndt)
Ian Lee (@IanLee1521)
Farkhad Khatamov (@hatamov)
Kevin Kelley (@kelleyk)
Sid Shanker (@squidarth)
Reinoud Elhorst (@reinhrst)
Guido van Rossum (@gvanrossum)
Dmytro Sadovnychyi (@sadovnychyi)
Cristi Burcă (@scribu)
bstaint (@bstaint)
Mathias Rav (@Mortal)
Daniel Fiterman (@dfit99)
Simon Ruggier (@sruggier)
Élie Gouzien (@ElieGouzien)

Note: (@user) means a github user name.

parso-0.5.2/pytest.ini

[pytest]
addopts = --doctest-modules
testpaths = parso test

# Ignore broken files in blackbox test directories
norecursedirs = .* docs scripts normalizer_issue_files build

# Activate `clean_parso_cache` fixture for all tests. This should be
# fine as long as we are using `clean_parso_cache` as a session scoped
# fixture.
usefixtures = clean_parso_cache

parso-0.5.2/docs/
parso-0.5.2/docs/Makefile

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/parso.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/parso.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/parso" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/parso" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." 
man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." parso-0.5.2/docs/_templates/0000775000175000017500000000000013575273727015633 5ustar davedave00000000000000parso-0.5.2/docs/_templates/ghbuttons.html0000664000175000017500000000034713575273707020540 0ustar davedave00000000000000
[ghbuttons.html: HTML template content lost in extraction; it rendered a "Github" sidebar link.]
parso-0.5.2/docs/_templates/sidebarlogo.html

[sidebarlogo.html: HTML template content lost in extraction; it rendered the sidebar logo link.]

parso-0.5.2/docs/docs/
parso-0.5.2/docs/docs/development.rst

.. include:: ../global.rst

Development
===========

If you want to contribute anything to |parso|, just open an issue or pull
request to discuss it. We welcome changes! Please check the
``CONTRIBUTING.md`` file in the repository first.

Deprecations Process
--------------------

The deprecation process is as follows:

1. A deprecation is announced in the next major/minor release.
2. We wait at least a year and at least two minor releases before we remove
   the deprecated functionality.

Testing
-------

The test suite depends on ``tox`` and ``pytest``::

    pip install tox pytest

To run the tests for all supported Python versions::

    tox

If you want to test only a specific Python version (e.g. Python 2.7), it's as
easy as::

    tox -e py27

Tests are also run automatically on
`Travis CI <https://travis-ci.org/davidhalter/parso/>`_.

parso-0.5.2/docs/docs/installation.rst

.. include:: ../global.rst

Installation and Configuration
==============================

The preferred way (pip)
-----------------------

On any system you can install |parso| directly from the Python package index
using pip::

    sudo pip install parso

From git
--------

If you want to install the current development version (master branch)::

    sudo pip install -e git://github.com/davidhalter/parso.git#egg=parso

Manual installation from a downloaded package (not recommended)
---------------------------------------------------------------

If you prefer not to use an automated package installer, you can `download
`__ a current copy of |parso| and install it manually. To install it,
navigate to the directory containing `setup.py` in your console and type::

    sudo python setup.py install

parso-0.5.2/docs/docs/usage.rst

.. include:: ../global.rst

Usage
=====

|parso| is built around grammars. You can simply create Python grammars by
calling :py:func:`parso.load_grammar`. Grammars (with a custom tokenizer and
custom parser trees) can also be created by directly instantiating
:py:class:`parso.Grammar`. More information about the resulting objects can be
found in the :ref:`parser tree documentation <parser-tree>`.

The simplest way of using parso is without even loading a grammar
(:py:func:`parso.parse`):

.. sourcecode:: python

    >>> import parso
    >>> parso.parse('foo + bar')
    <Module: @1-1>

Loading a Grammar
-----------------

Typically if you want to work with one specific Python version, use:

.. autofunction:: parso.load_grammar

Grammar methods
---------------

You will get back a grammar object that you can use to parse code and find
issues in it:

.. autoclass:: parso.Grammar
    :members:
    :undoc-members:

Error Retrieval
---------------

|parso| is able to find multiple errors in your source code. Iterating
through those errors yields instances of the following class (a worked
example follows at the end of this page):

.. autoclass:: parso.normalizer.Issue
    :members:
    :undoc-members:

Utility
-------

|parso| also offers some utility functions that can be really useful:

.. autofunction:: parso.parse
.. autofunction:: parso.split_lines
.. autofunction:: parso.python_bytes_to_unicode

Used By
-------

- jedi_ (which is used by IPython and a lot of editor plugins).
- mutmut_ (mutation tester)
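Worked example
--------------

A minimal sketch tying the pieces above together, using only the functions
documented on this page: load a grammar, list the errors in a snippet, check
the round-trip guarantee, and inspect tree positions.

.. sourcecode:: python

    import parso

    grammar = parso.load_grammar(version='3.6')
    module = grammar.parse('foo +\nbar\ncontinue')

    # Each Issue carries a code, a start position and a message.
    for issue in grammar.iter_errors(module):
        print(issue.start_pos, issue.code, issue.message)

    # Round-trip parsing: get_code() returns the original source exactly.
    assert module.get_code() == 'foo +\nbar\ncontinue'

    # Positions are 1-based for lines and 0-based for columns,
    # so the first leaf of the module starts at (1, 0).
    print(module.get_first_leaf().start_pos)

..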
_jedi: https://github.com/davidhalter/jedi .. _mutmut: https://github.com/boxed/mutmut parso-0.5.2/docs/docs/parser-tree.rst0000664000175000017500000000174013575273707017411 0ustar davedave00000000000000.. include:: ../global.rst .. _parser-tree: Parser Tree =========== The parser tree is returned by calling :py:meth:`parso.Grammar.parse`. .. note:: Note that parso positions are always 1 based for lines and zero based for columns. This means the first position in a file is (1, 0). Parser Tree Base Classes ------------------------ Generally there are two types of classes you will deal with: :py:class:`parso.tree.Leaf` and :py:class:`parso.tree.BaseNode`. .. autoclass:: parso.tree.BaseNode :show-inheritance: :members: .. autoclass:: parso.tree.Leaf :show-inheritance: :members: All nodes and leaves have these methods/properties: .. autoclass:: parso.tree.NodeOrLeaf :members: :undoc-members: :show-inheritance: Python Parser Tree ------------------ .. currentmodule:: parso.python.tree .. automodule:: parso.python.tree :members: :undoc-members: :show-inheritance: Utility ------- .. autofunction:: parso.tree.search_ancestor parso-0.5.2/docs/conf.py0000664000175000017500000002173113575273707014777 0ustar davedave00000000000000# -*- coding: utf-8 -*- # # parso documentation build configuration file, created by # sphinx-quickstart on Wed Dec 26 00:11:34 2012. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) sys.path.append(os.path.abspath('_themes')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.todo', 'sphinx.ext.intersphinx', 'sphinx.ext.inheritance_diagram'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'parso' copyright = u'parso contributors' import parso from parso.utils import version_info # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '.'.join(str(x) for x in version_info()[:2]) # The full version, including alpha/beta/rc tags. release = parso.__version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. 
#today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'flask' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. html_theme_path = ['_themes'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = { '**': [ 'sidebarlogo.html', 'localtoc.html', #'relations.html', 'ghbuttons.html', #'sourcelink.html', 'searchbox.html' ] } # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). 
#html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'parsodoc' #html_style = 'default.css' # Force usage of default template on RTD # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'parso.tex', u'parso documentation', u'parso contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'parso', u'parso Documentation', [u'parso contributors'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'parso', u'parso documentation', u'parso contributors', 'parso', 'Awesome Python autocompletion library.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # -- Options for todo module --------------------------------------------------- todo_include_todos = False # -- Options for autodoc module ------------------------------------------------ autoclass_content = 'both' autodoc_member_order = 'bysource' autodoc_default_flags = [] #autodoc_default_flags = ['members', 'undoc-members'] # -- Options for intersphinx module -------------------------------------------- intersphinx_mapping = { 'http://docs.python.org/': ('https://docs.python.org/3.6', None), } def skip_deprecated(app, what, name, obj, skip, options): """ All attributes containing a deprecated note shouldn't be documented anymore. This makes it even clearer that they are not supported anymore. """ doc = obj.__doc__ return skip or doc and '.. deprecated::' in doc def setup(app): app.connect('autodoc-skip-member', skip_deprecated) parso-0.5.2/docs/index.rst0000664000175000017500000000100413575273707015330 0ustar davedave00000000000000.. include global.rst parso - A Python Parser ======================= Release v\ |release|. (:doc:`Installation `) .. automodule:: parso .. _toc: Docs ---- .. toctree:: :maxdepth: 2 docs/installation docs/usage docs/parser-tree docs/development .. 
_resources: Resources --------- - `Source Code on Github `_ - `Travis Testing `_ - `Python Package Index `_ parso-0.5.2/docs/_themes/0000775000175000017500000000000013575273727015122 5ustar davedave00000000000000parso-0.5.2/docs/_themes/flask_theme_support.py0000664000175000017500000001502013575273707021546 0ustar davedave00000000000000""" Copyright (c) 2010 by Armin Ronacher. Some rights reserved. Redistribution and use in source and binary forms of the theme, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. We kindly ask you to only use these themes in an unmodified manner just for Flask and Flask-related products, not for unrelated projects. If you like the visual style and want to use it for your own projects, please consider making some larger changes to the themes (such as changing font faces, sizes, colors or margins). THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ # flasky extensions. flasky pygments style based on tango style from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic, Whitespace, Punctuation, Other, Literal class FlaskyStyle(Style): background_color = "#f8f8f8" default_style = "" styles = { # No corresponding class for the following: #Text: "", # class: '' Whitespace: "underline #f8f8f8", # class: 'w' Error: "#a40000 border:#ef2929", # class: 'err' Other: "#000000", # class 'x' Comment: "italic #8f5902", # class: 'c' Comment.Preproc: "noitalic", # class: 'cp' Keyword: "bold #004461", # class: 'k' Keyword.Constant: "bold #004461", # class: 'kc' Keyword.Declaration: "bold #004461", # class: 'kd' Keyword.Namespace: "bold #004461", # class: 'kn' Keyword.Pseudo: "bold #004461", # class: 'kp' Keyword.Reserved: "bold #004461", # class: 'kr' Keyword.Type: "bold #004461", # class: 'kt' Operator: "#582800", # class: 'o' Operator.Word: "bold #004461", # class: 'ow' - like keywords Punctuation: "bold #000000", # class: 'p' # because special names such as Name.Class, Name.Function, etc. # are not recognized as such later in the parsing, we choose them # to look the same as ordinary variables. 
Name: "#000000", # class: 'n' Name.Attribute: "#c4a000", # class: 'na' - to be revised Name.Builtin: "#004461", # class: 'nb' Name.Builtin.Pseudo: "#3465a4", # class: 'bp' Name.Class: "#000000", # class: 'nc' - to be revised Name.Constant: "#000000", # class: 'no' - to be revised Name.Decorator: "#888", # class: 'nd' - to be revised Name.Entity: "#ce5c00", # class: 'ni' Name.Exception: "bold #cc0000", # class: 'ne' Name.Function: "#000000", # class: 'nf' Name.Property: "#000000", # class: 'py' Name.Label: "#f57900", # class: 'nl' Name.Namespace: "#000000", # class: 'nn' - to be revised Name.Other: "#000000", # class: 'nx' Name.Tag: "bold #004461", # class: 'nt' - like a keyword Name.Variable: "#000000", # class: 'nv' - to be revised Name.Variable.Class: "#000000", # class: 'vc' - to be revised Name.Variable.Global: "#000000", # class: 'vg' - to be revised Name.Variable.Instance: "#000000", # class: 'vi' - to be revised Number: "#990000", # class: 'm' Literal: "#000000", # class: 'l' Literal.Date: "#000000", # class: 'ld' String: "#4e9a06", # class: 's' String.Backtick: "#4e9a06", # class: 'sb' String.Char: "#4e9a06", # class: 'sc' String.Doc: "italic #8f5902", # class: 'sd' - like a comment String.Double: "#4e9a06", # class: 's2' String.Escape: "#4e9a06", # class: 'se' String.Heredoc: "#4e9a06", # class: 'sh' String.Interpol: "#4e9a06", # class: 'si' String.Other: "#4e9a06", # class: 'sx' String.Regex: "#4e9a06", # class: 'sr' String.Single: "#4e9a06", # class: 's1' String.Symbol: "#4e9a06", # class: 'ss' Generic: "#000000", # class: 'g' Generic.Deleted: "#a40000", # class: 'gd' Generic.Emph: "italic #000000", # class: 'ge' Generic.Error: "#ef2929", # class: 'gr' Generic.Heading: "bold #000080", # class: 'gh' Generic.Inserted: "#00A000", # class: 'gi' Generic.Output: "#888", # class: 'go' Generic.Prompt: "#745334", # class: 'gp' Generic.Strong: "bold #000000", # class: 'gs' Generic.Subheading: "bold #800080", # class: 'gu' Generic.Traceback: "bold #a40000", # class: 'gt' } parso-0.5.2/docs/_themes/flask/0000775000175000017500000000000013575273727016222 5ustar davedave00000000000000parso-0.5.2/docs/_themes/flask/layout.html0000664000175000017500000000163413575273707020427 0ustar davedave00000000000000{%- extends "basic/layout.html" %} {%- block extrahead %} {{ super() }} {% if theme_touch_icon %} {% endif %} Fork me {% endblock %} {%- block relbar2 %}{% endblock %} {% block header %} {{ super() }} {% if pagename == 'index' %}
{% endif %} {% endblock %} {%- block footer %} {% if pagename == 'index' %}
{% endif %} {%- endblock %} parso-0.5.2/docs/_themes/flask/LICENSE0000664000175000017500000000337513575273707017235 0ustar davedave00000000000000Copyright (c) 2010 by Armin Ronacher. Some rights reserved. Redistribution and use in source and binary forms of the theme, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. We kindly ask you to only use these themes in an unmodified manner just for Flask and Flask-related products, not for unrelated projects. If you like the visual style and want to use it for your own projects, please consider making some larger changes to the themes (such as changing font faces, sizes, colors or margins). THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. parso-0.5.2/docs/_themes/flask/theme.conf0000664000175000017500000000024213575273707020167 0ustar davedave00000000000000[theme] inherit = basic stylesheet = flasky.css pygments_style = flask_theme_support.FlaskyStyle [options] index_logo = index_logo_height = 120px touch_icon = parso-0.5.2/docs/_themes/flask/static/0000775000175000017500000000000013575273727017511 5ustar davedave00000000000000parso-0.5.2/docs/_themes/flask/static/flasky.css_t0000664000175000017500000001441313575273707022040 0ustar davedave00000000000000/* * flasky.css_t * ~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. 
*/ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Georgia', serif; font-size: 17px; background-color: white; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 30px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Garamond', 'Georgia', serif; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: 'Georgia', serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } {% if theme_index_logo %} div.indexwrapper h1 { text-indent: -999999px; background: url({{ theme_index_logo }}) no-repeat center center; height: {{ theme_index_logo_height }}; } {% endif %} div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: -60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } p.admonition-title { display: 
inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #eee; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: -60px; padding-left: 60px; } dl dl pre { margin-left: -90px; padding-left: 90px; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #FBFBFB; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #004B6B; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } parso-0.5.2/docs/_themes/flask/static/small_flask.css0000664000175000017500000000172013575273707022511 0ustar davedave00000000000000/* * small_flask.css_t * ~~~~~~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. */ body { margin: 0; padding: 20px 30px; } div.documentwrapper { float: none; background: white; } div.sphinxsidebar { display: block; float: none; width: 102.5%; margin: 50px -30px -20px -30px; padding: 10px 20px; background: #333; color: white; } div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p, div.sphinxsidebar h3 a { color: white; } div.sphinxsidebar a { color: #aaa; } div.sphinxsidebar p.logo { display: none; } div.document { width: 100%; margin: 0; } div.related { display: block; margin: 0; padding: 10px 0 20px 0; } div.related ul, div.related ul li { margin: 0; padding: 0; } div.footer { display: none; } div.bodywrapper { margin: 0; } div.body { min-height: 0; padding: 0; } parso-0.5.2/docs/_themes/flask/relations.html0000664000175000017500000000111613575273707021105 0ustar davedave00000000000000
[relations.html: HTML template content lost in extraction; only the heading "Related Topics" survived.]
parso-0.5.2/docs/_static/
parso-0.5.2/docs/_static/logo_characters.png
[binary file: PNG image data (the parso logo); raw bytes omitted]
g*.7TK lWR J35CA~> ]8P'?d%>;G30q#I`M8 qT+PD"!r6,eeeRMf͚Xj$qdH9v2(kM@`CUU8~DŽ ' g`wJzurU]CՑ-A2'GǑ *YE &֛K,U!֡>=,iB>3g#sb,'0Ro0 ض/njB t8&ttAYA3g|SZ|N|ah༎P E_5ƍ㦛n7ވf͚SOE:uP^=Æ ̙3";8tk=BxFQXXƌ?Q8TCpN(4&K0\\r%;w.Nٍm۶&bWa0t HwgU9y 18Zhv,ӔՎHV$0\]bE( rB %\al깠w~'T7L9wrBժYLH1p@Q]$vfZϠiɤr]1=R$ʒ<:Հ2wrXq&XU j 8qs=hРO^* )g?뮻۷Ǘr\jMnGǥR)7 3m c -GTEal 㔓OsѶ0 (((@II Ə9aX_ݔ'!PsyU˳x%kS$ѢE o_5}&gi(++0S]J<hQQ@~'\XyȊ>x.`֬ѴiD""5 E@) IDAT+WG SsrѵkWTV3|0k"`@9;z %Kə%4Ѳ噸_?vS 7O=Tq$zYHż`/ n"J>J%jRUr5J]quJ@(7C^|%p(c T͈k[UM+?L1l޼\p=f`iÏ}h5 Jm}㣑(ҩdG \ oz׃i cR@0 L4!68Wg<Ưϱ e ԮS' }ZAq !xYN9X x<>}ȅ\T[Vu 4gy&,Bii)g|ϰc)J*tE͛q饗ꫯFXbAV33S%K&Sa-Bg``W7ŀ̳ $BخX=C.>}ЭE2t-(׵=P1-| (g$ǃA0Qe0 X#Gb͚5`IRJؿ?`ђv*lU4;#~/"[ϖpӦM9r$-ZiӦSO͂D$^`6~7gYG1F=/큼\رC6'Lbf\dEtrڵmW@)U)Tis ۶1IE Bf<բZm&MSN*QV|7&;wDŊKQ1{ھ};&O,p=p|0KRɶmD")D)QW^y%VXI&aӦMX5kШQ#'NDڵ%"JKK%lNySM8 'q&MTLSHaƌװ, E۱M{8т*% #e0qG4i!ZQ  p 6EBP4\tEر#.Y*1|,]Æ _WTZUV\"rss.JKKa"V\VRtO~F^з,d&3f̐wkEA*B$Y&pcĈСy[())2VU(**ʲ†O"֨Qm۶E )0 Thmu4MC=è܇ݻwKz kDTUh?X[N=]a\_n2Ua.ʼ Pj/WJ<^*OVIpf3tÉ?JԪX&AD\Pj?'ڿϙZ✄aP1J\g+"5DU4h WuaYWdp00Okw %g.\\yfsL:=6D0?˗ -|'x"n64aM/z%zQ6^mضmW-@ze˰?Y-DK{Q _@͛'>8/W_|&cĎ1D&HH:t=RugnUȇJ\ң;b*90MW_@S5Nv w9 xћyC%F`ꔩ~%l&ض uSSPfmtU:%,B~~~E`}}FB <,_+D2t8#lQ8<@Yik׬_ǶŐJJ+*NЃ3 TVM:ENN+)z0 Q +Wp}$L@T D#PUoǡlj(#:^ z"`+N:R+Jg-;T ٛM4=\eY`ᓏ6²L(*RxGl1kF>ux Ћko@=VVϞB m#L\G,?_Z*HHI'u$$yUms#JAB|srr`<8vi5]2!c8&ZUe[|&+(>s >J{1.z%@HSW|EǶu\jO N4g*jXm#,3SJ.ۆ`}0ׯ.Ɲ;wF (OJQbʔ)UDUNarss L=~bZfƠp5 w`LgۋCAEeBtM H^ W_yyyr?>OD"!d2)'D e~-"ԇ׭[' ;< fza1_#!˗.Ł~vMB10qU4 'mt_ZgUAذIXQKNF+ϧO ,x+!T\RգfƏlj8?84M87RJ\ {マD"!I.kVG*UЮymB5}|ɧP5 xX<=‚a؆i8n,Hu`Æ Զ;V!a9A\^]d*"oAWzOw4$.U:6އ2aHx'N<Ȩ'ZjIFeYi>#PJz|k 0^ (u {IUU؎-)۱˭(43 %of$=0!VPԷ0 0+0hTt9/BҲUp=$¡x<#5ԍdb{7n,z3|I>\Rh]湜]Jd2իWKy-QqԩS'\"`HugnXUJ^}zsƸr7=.8/\NQZZ 0g\s=Ϲذ=IjTcg|2ſP2MB!׮ի}Ѐ,@^^EĉYEF%4/([nH$նm|Xh Dyv RaxqLKs؝U8?)_>J(aҧJ1(kyyy>gH&bWrss|ŋ0`رÇ9sa408.DOo.Ann.(|87o.N1sL\`M8Ly T 9Nl\=TzLRسu[nҦM9,*ƍK5 h_@L3櫛S QRfy_ 0-߀RƬf1S:thnjlXMՠ(,YUf8b.q߰D C=5hw4 Ga5h4*Vii)|X[oE4E:ʕ+Jʕ2 &B <-څ ',Y ˶R RIԷ POCx F4A>l84h0`fxq>, <83_(F8q"ڶm ]REﰫs T. paabuغuk"y:]3gNuV(*knh}EͧێݛyҔYӦ()) RKՉ y8^=[ ڒxOJʞ**\'g]ץWqaY.իWǨ.f,WUiBAM.bP W_5k@UUyG"TV ~0+,^}U4h S=.l##{w@U)"Feôl8g<8.?8P"ڶnu/  BYM3[0*<e#Njvl)mE 0桇)<ȍixph\ta7PJ_P"o I 1UPUKFƍ1uTʿ) A剮GL30u"LaÆ>|TUEYY6l؀3g1ڎ I)QA nq3Ν|ߨZup 7`e0tr E,Po& V^YUv\*儀ܸ&`ffO\%=k!iPN`T4MT?F61x`Lyz TU!ZV{*( '"yt?_Ho&,Bii<_Yf7ntk2](&eb1`Q"ÏbٿT cH%S,z fEQTUAjUgh IDATpiK%#~weK ]B鿃ydz`02 Hgq-[R] Ɉ2ƠNƎmC7t7ڵ r2|w;A?o>ʪ L.oP>`d;.TF J -Y\2reT20qPy_p\*UPR̓n.yxdx\{50mF4E*m+ysH8q,PB JocH&HrL1[oz褔' $FG|7jԨQi8 $ ZjO>۶mCQQTUE,ŷ~t*#bD:|HQ5PJNeZ6A\X UN8_~{ [o/mvp\@ՕZCڵosV(z:uOrE.\_{]Ӹ !#Nv< 4o-yB HR0 Æ C^=A(Uz\i0t峃 yEUâEcȐr#"H-Z^ KUU$,qYFXqaݎ7'|V>7 3gu 0LsEndd(0 ƭ)LB$eY8|i:k'=]T[&%rU; Q~뛋^r%( 7o7xX 1ybm[*I+0q8l{SN9sEժUyF7B NjUT\gʲ-i0,g :!>c&FχeYx /LVHVX_BE@eDEsυi2;#ϑmAZjoG'dkJfVx~۶(g&b .^sZD)ye7_fEo`ĈsN3$엗yZjr;4Mr0u@P!3<|PJa?nkB9|N 8 }Bv*ތ[QC9`IPT%+y :rnyP(Aŀr .tk>&{X`ۂ8_kY<>SpMغu+̝麎1c5^ $}%P ?C:2S]ׅЕ_Mt'խ HJ@Jh͍|T i"QA5<2aYACڶm{ym^]-[W\qt)R(!,10Xğ Vvu <,˂PE 9D#n^|i8W=~nygɎ "JsF6UBriq ÀNoCc+e tRtMPؽ{ #cJZrq]tZ6=A*A)ڵd*b@(AMj:3\?ߝ+"u6U(\kT9k֬Įþ㍨_ӛk(LBQb;S,KD$ڵjK.pxCȌ-q膆t2]ɓ'cKdRx\_ t~AOu-š!/? 19` ((d2(FcƎ֗lL(((i!0G a( 5֭-_AQ^uQNkp6<:s]W!!*%/̞_hڴ)X6}bg8 DZѼyʲF& /^c0q\". 
,{BO ֮[Yfސ=0EWw^7naBiPW}:P|V1}&N{_#R)߱on6 sV,>K(I8Ư~\G7ؾ};.l޼91yyy?~Rʕ5\Džcs t:JKKn#HfJ-=+'۰(K@Q&^~y6oHVGyw:VE)--EN.2<ҭ[DҺcȑ4 TJ#L\HӈC 0sL !//m{AYYIYRFn1PӺ5Ly#F@̵eKGϞX~J`*+*>9'NnBqF\W,(BV](.- @̯by7E 1lh:7KEY-Z຿^Cפiyx,}(L xʫG}CJ;H ''GR{ǣnݺF #0q"L"ikp]w,Yqݻ1x,eb¨> "߉_qX븤{w޽{q3 & d;1æ"1 E4b)'RdY)0T2góƂd*/g}7cŊuᦁ7QӐL&FePy * N0e [1a 꺎b܆ /Yp'¶mqPB0[$+$g}P(ȫ*:uƍ_dm`˖-سgڶmx}u9ky?y۶ҡr_x" ʮAU,=V~ïW4l@B|I̘1xf> i#̸'W`gM3vXt)>:ݢEScJ7M`Zu|w%JrZ\_]˕=ρmXrta%|FMVTN>ί6B f*6{t`~ݦ~!t xzh|UL=6S"@MB2ҏw IVر:l9̇]%֭[F9@Au+} F\R!=AxU]܎*:uNb1Nc&l ,BժUqםwLXxڂΝNԄ1+41v34KLwQFxQJi fPʈBDQ>FgNI @SW^ςM ta=UT$ 44<F!pJDh @g%]*VL8g3C3x ֩w罇b\C_q#zzmiA$TEe[tUH!c(dR客/MӰm68EAti3N_Ƽao[Tgքtoo )F,SYF"Q̜9 K,,=ȑ#QJTTg #]@w5j1cFL@ XᇐDZ}7x|kS_SO={܋8&SD h HKT^`Z} uN`uEu))Pw۶}%OȒݐU5zB;Km 㥗^PQ f>x? {zuqW+ s=ǁiɑ}ND"nq,7RZDP7k.A6|\:&2V,7be M8bP~VN# w>_Bt >%G>T-Q&$I0 vٶh4={((,{bU!I"/X;w>mۈD"PU#FC=`>bl.0B 7GItznݺl,lжm0x L]*M3)^L: E_T ob0 =/+(iQl?ҿѿnx-ޗL[Y1*q]}]X :Y2e@LyE<#0 * $K\je3fL'+>>+uYel=Ͳ+a0yf͚q`uX /[gF? Y I' (HH;CD <ǏE53zoh׮f̘. 29"/<{rsy7wt.)ǎ/(,L<M7y#xZ]:]aa!t]dz>~Umh!d0ذ/˲rV~eeaB$}?ף%:Y!J?s]83/C@2(Wƍ+Z\i"arrc n'ӣ '3*U8(rbL p/;aW"L>l;>w|%BDQޚ>@DDe;^/plC^_E mf谁6lʔ)B꺎#<;vA%3;Iʠ9b,v~V#Io˄r!A@JӰ%':6DI1vx ut%_ї\2O,ۢ%%{(*uH+X 5jƹUXNT( w#8QR%(tR)d[fjDE*Ecset> g}E,FݑPt w弄Ԓ0 c.'Q$Ig'd0GcT t m۶baNb޽PU5C)JnݺGjFRKaiH$+bĉ<#s]e!''> Ww]eʔA2eY\pheշPqc'QV-}m۶I[Peq=Ȳ2-8EUzXD,d$Yz) ۴ ЪuKD"uqa0zeآz*AbQ>>p)T*d2˴fTηފ7o!D@޽0txGM#Ѐ8QQP(f̬YF!pd]a`ȑhٲe(Xt)|IHINTDN(<σc;(uH L{?LYK%0h7f," ,a`ΝGe)l4M'GuGw莁dX؈͹ΤJa7"IDQ_|ehUF!pq²,D"H_~/HMMUUŴi0eZˎ`BȢ=J ɲL}>}'@VVb{{ IDATyg`3c2b//垲"ޔc9'ۗ15TT$a…1s5l;nǰ!'sD88+T(ǁiϰm6W+2qa 0˖-aÆ!;;(gŋrq1Ș+3MŠk)cqf_9B[3}gMI<W\> 7ok,DŊ?(Kd:츎?LDYp˭ǖ-[`43۵{7֮C!"?91zD$֭˗wc*:u1PUǶѩm8| \\UxCDDFuE eB'K 8Xp! ]ˡCpW߅F\a`ãLD&M.VZ[UU~]UVE qZ%@.:g~PL\s,y7p~X#Gd͛5K& \6sFF0 %J͛Ѻukٳq)v #8%D"|'_re_(FꫯгgOj l>Z "gʒ>^Ow߃!Cq=̟>oyׅبwA=hB8 ʹ?3H!g?-d;UU|r̜9h4 M0|p\sՀhqʯ}z(S{M7g "6n܈K/'OiEŔ F\a{X՘|~~>T]vu](Q+)DQ,YÆ?b.F]}Sx< D"C0#C%[t-Z⼬s<cA (G$5ȑ#e ACii~(S L0QYn]*,ӄe[%)&7xC/6leilST)۷ś hҤ L F!pQ,)s EA"@r0|(aÆKp92$qO!p\j(K(ma==\rI}TT >/ĥKWV᧟7]vz%ǣ3>Q$>|9b=*K NBPjU\vexwQǁ}K.+H DZNm;`ɮ P矏jժmO|si8dYҥKQPP믿>eF!pQ{llo6lFݺuqA['H:Ja[е[tV .Ϸ9d,cd DLhb /<|j%nي;w+,'w")"\υ !d ݻws[YmۨP֯_ٳgsj(hr 2h(@_~~F*F֭AqAٸq#:6mڄW!pqj"+>;eW]u͛]őlԮU+tjף&lۂ'd2KgĀ9{$+%(W-[FbӦغm ڵ`/jEҲzbA vzX`ذaC%(裏"ry"U|#Fs^\/GaWJlL&Q~}R)|;o;!JCU-g,D @ND1R&#a|:*U˰, [nCP90 ۊH"R_u p'ρq-X>~c7}˲pbĉ DG3G{ G$A,۹pp~kԩ<}aWźp]`خyĈqKco9˖|B,@ףD{6}Fv2rs`:U0 ȊJKyQNXSZ;ϲ,dee{r[ vxȲcdG`:߲eKiӆH5MĶmw V}0+b ʨK7eH$YƢ:f̜@Dvka#Jv9qK67~ 5W,Ç :6(t륳K"u]4oիW"a!-Ka)Ff+qij";;_l/|^zi2U Ⱍ. 
r6,۲l~P|yD"Q@aayڹZuޟH)| 59 xhѢR1SJbR@ L:rET +V9{}4E͚5*n#(D"Rvv6}NqAK>{B$i Q!u+~vĤ U= ٞ(ҡ /F!++ ""??#G(RBjFy"t#g|`3EQPZ5 4g- $$=?'sD;D2̽4Ml2$ +8uC%Jf;v(װv@%a6ףPqpDoi=U`s`ԚD$Qjs-~ pY2_O?\X4+rM>ftDQY !={DҥX /P>o,9_=>c0$}%sEpa܊uhT)NVP >Hf͚H$gY&#e:h~휤AwW,RD)t^^fBqsݫgԩ1Hy*WJ*{vn^L4mڔa19yঊl[V(P2eYpl_QAVb̙go.$;p@U\Px8 Nϱa(Y$׆0oWgJw^m<9r$*TNG?QbNkLhP EN cQKa&">cp98z?T^il$ {.Vz3(J͊tMr e(' G]Ч:t(?}_8D๴ q=Ȋ {8uh <(,,#hs8E3+?,dt]>4WNNuz>EKz,K "^xa.Sϔۺu+O2ii0X,H,g(ϮA3fk<;e[D ى,o׭CηaРͥ%>O@"'.aYQ/HөCP֜@*n86eݍtz>L2rr>8C<܋5<\ׅ:(s<"/s27 D 57}o~ M6-jU 3.ץ qt3gveʖ-~ =z,cy10B ɸ|Y1 qЬY4iOPƆϔ]7-]9[~;ĉL>UI'KgBv:,?oؖ I!h==ڷ/x lݺg )S?G}.LyTU84Ϝ10$^z mڴ :94MTXK,Ai̟ #833.σз𼴣0!]3؟)}@*2>/^LD"*g*낙yyyuae ^rC Q&<^$D8N7qU=$(iCӄ('OO?͛7g0U~z+Wf^^iw#a&dE7^Ê ;;۟rQvmtx :=ytD"ӻ7w5Moe9xg2FbeF!I6bx ֮o2_~-TV$QHimsP$ mc?~<{9a&h4CҥH$ka|@Z9AQr-$V6LC5-[]h4JB$$A!\ TȾ% !3*N^ 1[ ϣ۶MG>ϿUa65Д%HIMӨƲ,Cu̙3]wV^ek>2d$E"$ORx !+ L/xA:޽ oEQədg1"'0qхס ¥þT=0 &d/G&(7Efz9LI78"F :4a&9 ڶ۷Dɒ%вi>0 ڵ+&N%Kx(((=@߁Wa={WJ@SO5^Aehf(Lf@>IdR~?Ʉ]\ׅH&ah:؟6"c˾t]cĴ,)^}u:^}U u^@$**,ہR6mM(P ,YADˤٳg?ĺu8H1mH$[.:u;B,J&H$gj$F\aqai ̝;+/z :t8ЪcߠI%qbw}d`Yϙ 7jZEоH/vը((lI @B(~'<7៘?oA`6KK}> DIJRPm~a@ (!ǢLO~4D$H8=z4z[EXFYalAcE}$IL0!J-;k; zl;te7F4`=%Eo5 3¢m%ؐU3fĸqp Oa?=z z0 DvR j6xDvѣ0\޲,4 UTȑ#qr ~P{083/oQN0~x|9BU+Vۃ@U3 Yv2-Vomۄؽ{gLcQ^F%^w@K7BĔ;テѣG#Lr*<R =1׶mK3)) maI!8% ӂp_~>\V?lA*M-Zr`0+3*h#;;,c˖-xgbJd={ލK/ ||95m<^>d HT,#Jхs^cs D؞Xg9ʻ(I>GUϐeG31cbNfn?Q ;;U@ڵPvm4i4r(%F03 IDATXS;PFqBI@ (A@*,+dr7?nbzACCWiD#LeZô.EcK//@)Q*ʔ)q[nIg~QNȠ]l,aW(ڄСY\!Hwހ@s. ~ފn$IHRؿ?v: EQp^ժhԸ5i=!Ӿ?{:[hvL+wQQT)ԭ[W^y%:wx,AK3:g2Yx&2 |w} Yе[gX{yl6LD*P˼ywW_} ]}EU%K>Bkmu(2 EJ;;a00 PÆ 0Pڵ+d!WI̢^U"Jҥ3nRb]8p֬-[y:+s^M\ue h-[Jq=o(4Lx8x󓕝=hr-˂a,||,0 *T3cAkidmafa!!i2 Xx1۶y/\ǁ8hժ^|ETUYůa`pSĮgEDٿ=f 7 l F9j\]׏lxc#LȲɓ'#''稿5imD$p; !cjOM4ŵm0 |՗+}7m۶bxyhܤ:vM7,H?Ȣ$" Yqض N'8@pٶm+"qtׯ#ωD(Wܳŕt%޽U=#ґȂkHRd? 0d2 bΜ9^:\ڋg22QQ .̽(Ӓ27m˲ǑL&;wUTAzPZ5@CŊ[(+D"`f!p%Fh׮"9;?" ^" 4o~6mNvœsr'O'f-|k۶A.AFydYY18Mרzz ]ۏYJc $IV"4MC߾}a@AAʖ-qcǠEWĿHD(JXlq۷PR%޽B{xiJE (J*ӦmW$rssQ|y;h"|(((@,޽{i&P?|TP(bرc,BݺuQvmT\ժUå^H$2epy-vȲ ,hqz=?yl߾>ߙ~5os΁S+Y^!+Qt\xuҪdE[gwի$[<8nx xD:.g?{`OZ28=.vqUWa׮]DSNe] o߾䢋[o!ضϛ5/PcCUT>\޽{s"Eɒ%Sl?N:!p3}p3RmFaa!,Y~ k֬ƍw^$I~?˲|"8Ǯ4~F`b<G:uаaC<PJ,ʴ{l1R1}tu]oƘ1cr JpZr7kqd'aܑ0 _E eb1g\,Z_\\ׅh״j "& D|1S \'wXd _ TV +V_ kʴW2 |PLˆ*PdÇ iYh4 .o&ʕ+Ǎ5MrZ, h fV[<ܹWƊ+fZ7EIC\J{ a>#K+V_CRJ>1M*m,,N a0b5k :ZjAuȒI.kV3RDP$Q@36ǁnZ$Q&+ps0k,;^ꫯO[p}WވDb "P[z˶N99yb]lp!b(x뭷0gZ,3R &0\θ"%_pz g/_{2`6l,;{ae;h1p)3 r{˱h"Y;vdu66NHyDf pedJL&1sL˸ ѧOtxVs]rx ˪#F Hym|Wk9`yB #=UVq+a2NI#qE]o,4JDe`\Ȳ 04OuGL8)Y8RԯnT<σi3M{СCѢE 4COf@ge ,4Xf f=9b6r|sy4M  իm믿MY KVn\a, X?`KFAA~L>s] YYLi/de$_3b=`RvQmv\ru Ӳ09xꩧ8z T3p < F:f bQ-Zˈ"udHG MVlٲʕDf wc0U !+%b5kb #H00dei2c$ %JgU|hժ7oi,0;g|?p)ZgBr/z[$lSc6о}{keI/T F˜1cʇeql8DQ\|hӦ t UB,/E2f$ Heb;c h}]h A(LF<5\Ͽ0cǎ .ltvǩT &MB9KD˲,\yUػw/c ٕW])SlW3S`@PTvСC7o}Ylݺ/%J@^^f"I `!*UZjjժ_>ڵkQQti<|0hڃE98t/^CaϞ=;w. :{Y%kEQx90lcƌ#Ɵ2 lYΝ3QdITR*DcI.Z>CH4ݠdJDa&n&߸_iEŗ罽puDcפ/bرk@\fM,]eAUUt3NYRw~Á3'GD=PbEBS/-`=V&d;0`,[ ,a0NlxQYW-[_~9.rmf:,(UU3콂߱Vd)h98oߎ b޼yشi:EA*,Z`fYYڵzjD"f\Ct={`ԨQxp @` +++ ZD֭Ѻuk(Tg%:6LFD⣢$C yGK2v*U€p 73LŘ1c<9k!yڸq#֮]nW6 R+?Ekb׬Y}A< :i' a,Y+W~{aʆQBˇVQTQN]Խ_[@ШQcHȁ]UOMk`Jz,@ LǾf.{YXgdvXQPPiӦ\B2JArB*y\2 ~E㼗t>UA?^.]j*¶m-[W@,˂aD"|ZD"yfY^{-A?{e߯MM!$ RTXl QQPQWB"E"QP(u) IHN/w̜  8y Lʽ==2n7/8CI 85-- ٸЦMʸㅗz8R|&l p6;0QM0ڑ:tC֡Y44i }- k?~+2ХKH~A:Gu>w}/rtڕX,{m/I$uVS~|y~+ !??C AFFo $KUUx^h;w… o>YcBD KRncXlw:upG˜ De^ 47ߌÇo D[j’z %%ע<$/. 
Ełn>l޼Z _4Nާ~;aj sH*}]v=jfK2,H-[(>aoJJKKm6<\F aUWWԩSC~ B`t?||3f :uč*aʓQ?{\ Gjj*4 6 ]ta^r88e"dѺ!J~ٻwo^v!$p%唓ؑ"^ak׮ڵ+ZjKr P(ذaba$f0 N:4 FѽgOTUT`7 %%P_5B w0'Z~1ClӳgOwy\"MJk(Y IDATCv)ɏt1G$lk.(=z H<޽{ѧOYYYYp`ݐJ48b̜9sGׇ+aInb&r\+u\0W_>eee[.<o+2 C^ߴi zq:K-\4ML=pMJXtiӦXd .baE^!$6aCfuȒp8EU94 4 # ^z%0^I;w9qqX"ƯVӧ?w$~RSS1{lݻ4Tk׮i>lbz<!?kIϞ=uV̛7-[=,Vk!ĤKHIIA0D0gSw|(&5%' vP}UWx8 \I9eETL,QQe+2?7tc66V`L}h2u?fMj]H2(ɒ` 2+'&aLVo|D=ySa  Zj%+i]cJbscJX ڽ׿gx[&۵k/Oۘ;wv;W>Qfp$DF-.-6̜ﺓ{c{ ,V00>(G\@+;D;Ef U`qЪU+n۶m5j*++'L@K,˦ᗿhKO~apB駟pUWO>S4 -Z'`С& y0 -B8Qۍ4Sډxb/hCaR+_^PHSaXx~bi LaPT툱~ ̥X( τRwn%vZqO&נInMHN\bڷo;v. nT&v]bݺuxѼy>U[! ݇Lu]>|8//VZIS1}tS$p%/'^L7x"mI5V_ `5.͉+&I`Y{!IƎ;]vrHcVw%kB\XQ۷7x"KeCaHn ]vʆZ=>| ~亮zdK2ᥗ^-‡/R3H"3gVZƍt:lѭ[7M_$Jy୷?|uHq+7?۹ MU"3LOP!?Eqqǝw@QH7:n0y>\.JAьN  H%".MpmiӦܪeV޽{9rHbxE}@x衇pj"--[#Т*.Rl۶  trPïdhkcIJ_S:-6=Lox .rI$3ɲ EբSu!V?pW`47CnEQШQi{XyxFa0 :rRHڵ㊉w}\و xC'R!(..Ɛ!C /@Qm՜T@~޽eL>DJ*NgoxK4p  bذaݻ7N'7IN1"`X|yהjymb10 !(lx…4DYVaa`LbE(aDك ]#-,&E#Ha 5R/uNѠAhg.\1 cذaXl?Okos-D;T,PRRRx}ł ,\`Lr2dYFJJ ֭[BrjcѺu iiyCX7 ATS{C$t6L7DϋňbϽ:Oj5L8pn?l4"!*uޥ0& 0 Fdc2ISY㉀4YЭ[=L.ACIر^{-:z_ѬY3x^^%Ţ<ݻ:5CFKf0mڴz0~%Iu_~%>gw ,k BBxwaҥ FGP`0Mx#TjNp(X-m" dp*$ɜϻXppYM=$I^5E[)ZQ-ĕXF#9D PQQa#~PoDjD zy%{A <_~%W&#6mڄ | :t(I8 BkkANNNܾ=|&ÃrWSVnf4 ׯTЧO>ml6>'Q6jGk1cL,[n/B8(ZH={k 04+ºʄmbr'DC I&\P7tϽQc' #@, W~v{E׮]ߚ 2Ӥ_ R>j(|G841+++NiѢ.8XvZ^NQ*v?1IJ z9DNOOʷsμ3Q/G7$EË b?~,ZjG}YVSK=&Q `5`aP3yW+ё!/9D_+9iӦq-$IN%N-*,WVV>}QAfIGySLAjj*|>[CUȲI-{T"M~z~[w+pOWR~+r@yT ׯLO>EEE|Hejj TEF}cXU0f0\U"]0;z}l3@ Hg@ Zӊ0w1 D{'(:uz߸q#uE;kχΝ;c˖-Ȁ1)p8cbƌHIIn`ۯ0)'.ttƍKV+z-n "ΩLI_\Đ7 W\q ˅{#F3?CqX(`UZh1Ǝ}YYِeCn1wm;`a_;T"M hE hۦMMq5j^bl~$T ~S$Iȿ LC[ynn.n***gv;UU#F뮻o^$à |~ yQJzdy\b؉={p㣏>Mkj5fL>Ԣ uްZuˡKb]pq 2|7vMWRNIdn6<DP裏0uCgϏg TEbγpDthx埋pՕ}.DEDQ%ю59ȃ<-@@F "I6Dy4 bLtn歟~PI/?׉B{l8ӀcIqq1F~!_[G`H[65Ѝ̙3 /~&L5k0`l۶T6H`\{-&?D4Qs?ƌ] E"t(DQ Z+E**HGdZVأP, q35dJsAVTK{qex7kE^CBE%=E[l¤ޡD :6l͛1yd\||\8)lՊ?0L@f+R?n34tK瞋tnEfffu ~g1-N queeeEZIа~}tر$Qe943^xa7nB8泒lz(lq׮]8$ KOVAgƍ1rHL,fMA#^?4oޜIy ,P3䑉YG瞚R|Wו",LoØ1c;M68faWqU YaXy2XJ4mpvB(„O`- '͢qں'b3|u3~4 fG)9 `6w,!EB~x9s6(b.r0 qυ]{$7Vk'`"0 bF>}}<)A*(e n6x"hQ}WR~?ࢁ۶muiFTOI(J}1y$+)! ᅬ;DFF cΝ=H~4aǻ~۵-ݎb<fspOatΖdD~FUe<%+v] >675aA|(i\ƍ 'E˲'1S;vp Z r&+Vlg{RTwƠAvy@ j堺l2~`!0;)>OH}yqexG1b>Q[\( wvB Q+ѧ%IB{aڴihܸ1uBZ8pc($E,EFNII{+]/(|P_G Εú)'nv:Y\jTU 53"=HY3ޣy*BCHq:kk׋>ܻ!PׯIᐢIDv8`%"fY;v@NbJ,tR 4YoQC;(OlϱcPƍܹsѭ[7L5ڇt)dlrrrL3JDE-I4M}hu֨[.mۆEq%GضG~}" ` W^'{O4 ys Dڶ# @TY )3L` 0dm aDv.bsZB:/ZXf 0>l 0+|[F}q7 V+.YMIP({7p NʏX_1QH5)PH('']vu`Xݴœ׋{&+)1vqB0cP( ܹ3+a\.|kZHKMA0iF=Դt\~y$ x9o!`Pth{:*+C"k-IO3P $޺R.>|f~H  "3d /ħ~jjdзo_4k֬&(˼vԩ_~رckB# N믿V#_RSA>[x nٳSi/ 0YvM]RJ\)Ȣoذ!>cf?W!Rt݀b0Xx 8̨ٴNEբ0W3rr}jfA0$CaRdH! ^xb2>:,(*.<|L4?9zP{}BoFǍeCt-`jl;zhl߾tXTfc=LӢ"$@rxo߾hڴ)oݺ%%%hݺ5x7vm4MuHWRDV"Avڅo>4wEYYg!HqN8Ջ7zuxWyEQxΌn@Y":u:[0ƠZ"߬ggc鲥B(¢EС}GX,6>L4ΑnD=> [nm+**p|)#'ݎJ7k,iI&Àey; nݺ8p ^t֍OS>Z8O OWR+@UUřgiӦ!330 0sL>x\a{=z4N' >I`,0t!"EjQȭ&defddsυ\YF:u*^~e( OQAv{ /вX,P5 ͛]vqĔ)SNJK$:6ů3uT,[,Ni8̛?7 AX-V a@Qy=X|a5j6mz98 ~)))BXd fϞ÷6 `?n@ב^IM}ʉS`;wDaa!:tbI?@p,pR׭V+5k+W]&8DPP}L̙]D> ;Y3?O4 fD= 9PYeIE0{,߿{`M4k`0C{ŝwku@t.&LߏT~ND$"xLuXѡCt҅[OEo< \IIJ" uUUU8q"I4xꩧxE*?{Meee;w.YC>6•noz#͆-Zbp|o6iѽk7C5}~/I! o^ vqaÆ<7qBSRR={Il6 2SNvlwbƤBD@ 38[oqϘB|6fG0 LWROhܸ1^yoߞ6իWc1I! 
IDAT妑 |Ç͕W_ms#a8EEu>5Ebn(Yf7M@sGJJ ƌ%Au0AUT0!:-M!LłkqW{XUUU<4 ٳ'^xaS괒 [lŋƍ1w\ܹԀZ܏=IJ_Nj РA^@N6 ƛo IL.>EУ퉂3v,iՊ_ a{QXXADU#1o<":c=,ynbgLXf5w]wnZ_^|;v^"+/ &%(*DuZyZl6^_TT+^z)v-++èQE@bKdYFJJ){]ɓ'ON.`"!%mٰn:///GZZsYՄ&h,8t +Co:u$)`,^ X;QX,(//Ǐ?˗@7#<}"5-&i.c (PJG/[򞅆PVA[[!J-Νٳg:rl6+YfXz5rrrHq$J:P~1Z/L?{ԁE$x^\Ǐc=1~Hmb%̍7ވg} b 'ɐ `qx}22`#EZhUVq<)*cy0,..%\p˕@=~a~{BJ6)^{%an %עB(((W_}-[ݻy46GO,ŝ"`QKG6y5Ӊs9_~9.B ++ OBEaQEx"咔T-*r>|8Ͻg} -~1j~+E{!\( pi\xmf 㢰%[j3gBu|ȴc݄0)]סZd"3V^ހ,Gf1^͚53v{Bk4.-,n&۷)))pݦBpR<Æ ?f㼹 "c{(D(Bee%*++1sL|K - J=:ZDZCDDF#%U80FAC4nlݺ۷o瓲V+5j~hܸ170^<Hk3'=i E{SN.ab C=sp0B7o> 4!Mj%/հZxgp_աbYjj*o,u!HiA&>XR~}h;v_;Ax2 5$W$R"Dz8tPԭ[Fd];<;MT8Dٽ{76mڄ+V/L[q-2DcƔdeeqz<Nys m8QS6Aaa!z쉼<z̙3̨ =HP0\Fa%+|>. :@JKd0ӧO^i5BG(.}S>9(tGXh4ȁfÇu\^FJJ -[!CN:4Yx<H09'eS=ϟիW-zCthMwD*Ye8묳ФIiݺuCZZN'.^/$I^qq1n76m ͆ݻw~@AA< lڴIO|?d"0 \wuqg,aX!yRDK: 㫯’%KaΔiEE DۣW^;x['1LeFalHjTh۶-7onݺj ?#<ك?رc?XJ0|>/&f%͛7믿F^^&Mu֗&Kz\Iӈpsss1uTlܸxG1vX8pkYIϫ^z O?tcèp={ve *Qss0fX3AsVX:븮)DJ7 b`߾}ҥ 9Q  "33eeeXx1ȁc1\%*چE t5 '!Xzp+)#] }ϑ?ٳgs:ѿcU+H0y٢E ~0`233|8@zGIOT[yA n`ŚRl߾˗/ ݎ\'!JtTWWsC0 \|xWжm[ 0&=)gW_}"77N:X,Fuu5v;UP^-Zݥj|ӦMqM7Yߍhg*G^7Fyy9:v숴4}p:k9u_$uY b6lΝ;cϞ=Ppw/++1dZDXGAĤ:(z7:t=b~L\t E;eSbVVV?<\: /0иqc 0&MBjjj9ukq))v"pŠX†}&ANNʶoߎÇcŊXt)L;hKIIA W_}nݺW^A:Ĥ$P(d0d1vY8{lǎٳ|}2j2$2baӦMc%lݬ13tfa!faƘt]7C(b@ `1XYYcVUU0 #9cl„ LUUfۙ$I4hv `pc BM90➧kzqwI8NxΉ#`0КP8ߑϱ֧xo Xvv6SY,( ,3ɓ;G]Y ``/VYYɏ#3e39~JY(bjeee6}~' r!evw}iSRRXZZngXz؎;ct,GZ$IJ.>A"MZeee,77VPP:w0,QcŬDAGg~bPoR7V Lޑ$ (yC B'4m|UUYZZ[bEcnDGmMmEuGXDNv9c"p>*oru\|-5+'zJ^}dB!տ֣G? ޽{3EQw} 2=7,0z8 6x`ְaCeff]Yje2K $p%X=m]Y0@ ***ǵ`(TUeR Xq^X8V#%Io>~\~ǑZuM81N kZNml^z|l61AWUuԉ0Ǽ^/|DEOQuv9XID9t b A0۹s'?޳>8p1XII ;֦Mтul۶mΝ˦L>́gɒ%w4Mcb"xڍIJʟVt]7-`0 m{'ȃL$t묰B>XnrU3sT:`e  lLe;v4)`0hLۊXO@Tε2 2ǁP(Xaa!+..f;wFK(b`UWW<œ9F:9sɻ"O^',.-Z(.0yе̙3ِ!CX^^;v>CB'rwhҤ ۵k c9NvAVYYѾz7co&nܸqܓ\~=l>3<` .`|4#IH*TELXb nݺ8tH uˈ0ZlF΀)01b%^SU/t8FׁEX~Vvv6x ^IDtPp8aÆ KXtTFJJ χ,%e,J*w8x vQ\\ ]ב*ԫW999Pݻ7>~?ӑƯ-+MfôiOWlD}Q?3"KNeSA>@5/ġ K.Ş={|r4j>=z@ ?z߈]ƍç~<۷?0t_~Jhт 33 <[l͛9pƌ8p ڶmb3^ڢ@ YYY~O:BJJ $I:nV T'v㦦\YB#0*+C @ɔP(ĻJӌ6>keB!>|~!qYg&JoΝqC4>Xjb)((!C_@44#s啽x(kJ`p&(b/7|)'Z_Z-[_GƍM]elw$Ix0h Swh-Аwx嗱~z`߾}qf{hZvpM7a„ \Qu`д'E v^^n݊\l߾7o޽{k5lX1ӉVZm۶hݺ5:w͛I&qE= {G8Abܸq3f OTt]Aii)z}M6Xj-[ɓ'jw5\{nn.n&`r\v믿0TVVW^غu+paW>Sn۶mCNLT(OthЩS'D%%rEEfϞ dggu='l߾]v(}3τ6Xzt:ya>}lذiZG+#mҐD MPYYɁMe>cǎشiI }s4 7o֯_~iz' &t=\{={6yAXÜ9s0zh1fY)3gģ>p8 ˅RK6l@QUUeF!77@q]Qo>;v0˚5kp饗"%%l6Ȳj~Xr%N>5 >,)S3x^WF'2+VE41:Ec 6YV&2s:<|nj1n}$PD뤇jf͚^ziӦ\vSܹʔc݁@ !"O?b;5r؎;… YϞ=Y:uLIYw4`,6mڰ1cưҸ\L,-0 ݿ?;qRU]ye/w^ =DI~qq܉αmZ[^_b\r SUՔ0'&f'Zj[5+d&b(&RNj/0iLD)/q8Pviz^x{n~լe5se\#XGx=۵keܻtʟQ.yС,==M$Ib}YB#SrG"333sa:#_/6iq3(L>if1UUvZX,&PTl#n4UUY˖-ل ؞={Gbk2hn]YII {XfjY$^3UUqG:u-v7ёp~̙3wu/ 8F\D8|0;sfc,'RSSy_e6~x"@:3؊+H4 ٳYvppc(q [ll֬YDfH*s9p94\X|jǏSN=b dC an>|TAǓDZx=[l'%%\DHADƍW_w}ك> 91%֭[']ve6l`pUUU͛7;u%_jhm6lؐmܸDXb 4`NiRJ0[-#mXeKQ44k֌ٓM2bDa'[:޽{ٌ3ح5jdZl8h~&.jRSS{Ή IDAT֬Yc*E3Yi1zY,vmYQQ! _=!p1XϞ=qzL6-\.{衇L,ֽ,4˙1cڵ隧UUդL`E1)-[^x!!mhMx<,77իWϤD/HG34Mh~X,5Hj<[MؕW^Nd,%2JOT?m4fۙ(tr=4m4)p8̖/_Y|,:u]ήJ6ydf6rHv$DIįQFFСҥ ;s$I\Dߓ{'a)i\Tױc#ZG{5l6dd#c,'' 4޽KEEEkp~؄ X۶mMG-bZq!?2dYf=z!*Qׄ]}yvVPPv޽{UmËzYLDʯ}z0 W`yyyqƌiBH,77Ӈ{R=v-y]D_&u,j$IhZՉx\gtѵ}RUu9a(ֈV}fw$1XwFϟn&{9׏M>f$q]'ާSO~.IQc;2֍7b i*,))Vc<ň؍#IꪫFǓp=N=DD! 0)))qKĕ'ei1b/4tQEZ8;YIi1۱c+**zj0 ֧O(0yE6bm4iɓKӴ8HjE_Vdl*UsEII 6lXg葒C'\qƒi$È7( kР+//Bi?";v0}9^yޱB"^~֭[z83EJMMe#Gd[n'8#k۶-ݻw%{Ǎ:-+ 1Zh7x.čܖZv #999lɼd/_#l޼y .0/{)#ޱ^G?' 
7|p.4M*{NVPPpLbxt>'ۊ5R+ȑg./"_ņ|($>k͙37Qp׮][nxH61 b I"OlŊGGۧiU[/%K8b0Mخ]ӧ y:je[D*++Y&MLxW'~z۶mcK,a[f٬}֭1$˲w1.urZm֭[:/gz|nfT"^Hl$Z[evJ֜ح`ٲe,+++Ϋ3^c:aÆBU+zlݬ_WQ駟kOuZ!OQ'vUw޼yqRz,--5uNjeÆ 35% ]k|ii)?>߿ܚYlykZ,6x`vn >K75\_|({.;QXTTįnA߳7|lڵXvvvy7|t IlXC졇bv2-HsTX4-rkj$R'L ,,trf'v7&L^Buu5+..f{5u"FҩO8L! |6dȐd;YXzz:SUy|= /*`0JJJX˖-J9L$^!Q,C06,]veq!:uEs=ߛ~Sd۷7V^ `]v5>mիll|UVV9tKIIߠA}^#&ܭ[7S$p$WaXXV8>KDH9XiAݻByHM7|3?r!QKKKy8FF@  Ǯ;kH%MRXo':#`j \=z&¢A `={fn#Fv17iitXHg:tn /0ѭ[7ӞXpa\(c}&-O>w\GȲ|A:St ' /V^dקWש*$梸QąצMVXXgZ&8A|Ђ s\ԟCZt-Yc c={Fm&NKčNkD EQ8q@A pV"T-[CRyXG3Őh:t噊ׯ_2228mw10?>?T( .2|iii<ٰaC>/KJ֪U+~;v\._jkǰ㭷?q,7)ݻ"D 6֞Q(BlĈq]-b]ml)Ǜ'Q/XqE\eOd]vcmTyl߾ݴ2 2mv6oǒĖP ݏSVƍM=fJd& A*UWY@EwuA/],Ȋ"" EKB g3s~$sL`"7l3B̽G93g䵡Xp8̾ۘr3`~&DJg*cƌaWf7m b=fN9_*d2)Sԡ /E.rO'hjAAA=^" yGeQ}brqkԃ,N;b&qnֿ9VzME\DcvUW)XџN;a~QbaF*RW2;y$k޼P^MǏ+"xѣl;v(&t._6͟?__7RRRx^2{{N`7|S(@A#Gd>糖-[v%nf %BTR+ٳ'͍+UAφ҃pXV+Jڥ`ףINNf5u2dEoX$Tbd&uݬiӦWWO$KT== *IիW;7 G"zh8o_N:ɩ'guRʵN qq6|*X$rJشiSv cǏW[5qVNN֭[TbXRRECKQZZE ֿ>{j偘Ŗ:ٳgס /HX,6rHNρ$1FR_jy_R4\Ģ8Q=zk.fdUBYL&p8ح"{!֭[7Hz%>('U(d~YֹsgXϞ=9{ァ+y,;;7FFGkDا~Zejvɒ%G5al2NoEuaN{hXX=ؖ-[\߉hhŢh&CWVVZnrlD衇>3\ *$}dY 1}AET2X%%%lȑ1钮䗨H۴irss/åO^:)-Y;D|"C<kM[^^z;$IbEEElܸql,'''Jn=Sp9rD(6 սZjx.%K;vz*f]e˖lŊ^PD$O=e^y^;{L&6n8iQ1<@su2 QX4N4L͛f4;U(fι=瓷.Fi)FOaJ`0`w3ڶmz=kٲ%;x +//ȢE81Nc+VRx㍨JvL&6w(:H$[aW*V6ʚ4i<Զ~'~WV4aq֭رc "uǎiyMݻV /ӝu2bP{5Ι3p8e[n#􉉉⣏>$"._5\`hoFKL54ZqeddĬ۷+A)w}}ΥWVVG@bb" BB!h `6e֪U+.bK,Ä iN1ʞ={"u 5(N6].7|cرc| ku\rɫ!"nTh糎;^>?ٺuPaqlg=-uddXؼyPtPG}4j&1-J^ʕ+9`(]w"znwԠIBlN6-,;;y1Wf`0("F/d-[T8ljgylÆ lǎll׮]coc@ɨ- DuzXݣTlQ_zI=0?'ӈbȑDf&zg+y᪡AFi5P`bb"ѣHVq*--e7p&X )TRfK}Q)UR,FRA]fvWs5iذa1='󱙨,^JJvѢE ?1{1EwީvN} TTNUu}_ RY%v:v;kРk۶-ر#k׮kѢEO`j*,N~II $IO?뮻pHFcőX,32,geZ!(d&`dB0l$Ix<1b4ill6ԩS8r9?fa۷/Zhz= `0G@ I3p:$ p6 PPP~>,C#[nXt)>3l߾7oFYY BB 0v;"Oh, =d6t鬘f ~,T|>Ξ=8U: ]q-iӦVuBTwp8بQX6m!ƺv "-ˣӧO={Fr]<^xhP)Nu D%5v(Ęzq3t}رc:Em"I&OEt+B_ؤI.~1ҥNS<zz걻K1k׮Qi쌌 >v*z=يi@a'&&FeVC}ĉzK&>D\3\c5]Z)^1-عsvbyע luZ\XnpBtnڴ}DDq@J6\b``ӧOY=ptϞ= 0\$fM2EAciLB>F-''=cb=so&qQ^^^}UkjٵZlСlIIIvr꧄(XrGC1F{'664|K:<=zƔFkPg4T!Z4*>*~mqxbBsrrys?j`p8bt:QVJչ_~?5=Uy]4ӧ kO(}P999SNFcV%&LQԚ5kŚ4iRBq?3HG$[9$!Zˑ=+zle˖lÆ Qu5.V8yYsZ7H$Ár$''! d2! CѣGѲeK~MF?څX|33կ_ 6CpE]K`0(rcy;wm۶Q[uDD"DܵkBkk=x饗HKK$IE0χ͛cݺu8q"+D"LjI BO駟ػw/$I_=ΝVkXg  p\|?|>D5b}0ƐN:UVWL&<f3F#>{Eyy9|>z=/BjctMXx1ի`0PWu]zŚ5k"Ux*w$zd5.雼3f3o it:kҤ :t(۳g|ǏI&>1E*l.9R:V\}Gq EoСCQ,py<ޟTf͚$(#HC:c=s:bp8xJ+V! - ҥ ȈY71#)BNbe,ozڷoϾ{2񹔕huÆ WDW#q]Ȝ>8i$VTTϤX3\tii)[f 7nOh9hPݿ#`i Q%$$~:&לk׮*\x(ůSSSY>}ݻCj}a=03Q-|J|TS$QS]s5QF|hcS,FpذaQrxXjP(YfU+UW5IĒ 2U U3( T*j|O(b_};vlLg;pήO|VZ-н8U>errrX>}bRRRXB H;222b" IXk׎-Y$)[56\N6Kpo IDATU5]ͰAM$]ve˖-c{e嬨HM<Ç+kk #UHn7ӟeb:jV]huŋG9\NH' 2WgzkQ .qAii)?_eCDNf,矙b(PgҘ5'!%qwT+Wzz:ۻw/?$jR7\䱪 SPBߟGV{<ϫqƬTA-Wpd\b.nl@q0ߩQZ0 ;zhUp{5ѣ7ZUM3\5`DWU}e˖ܘLZGje/Fy.j;(!jE&z5 ,55-YՄU-J*** kռy($](p%U(*3ӿݻwWbK %p(Ɗ3;zhRCG% c @\.~Vkj4|ON7ި@ PH4Rg4HDJ< Y:/4^ZRF>(c^dRzfF{7 #K?p8jbŊ(/OTܗ" mS1,Gy(`0534hrss5^-ʠtNU3 oكݻw{GQX,F]j^AtVM-:Rz$Il͚5n{ӦMQA13\5q@0d<l6+W*b Uç^xᅘ)H$@ Xvjत=R"ݐ`쭷ފND LP1+??]s5gm?QGMPcE`j@ ^y啨:w'IRѣUMPᘎ.=g#N0ɲ\:u_S/zѣk!'yϙzbN=Q[q`u9$[ͭ5pFVuR7\"}=~(KLɪZ1Ī#[Gm.'boy_UV3L0 Fet: l Ib fߏCaɒ%b`d2) `1FX,x!ʌ7/~;`0NY6n܈-[3 Igbup 8{,Nh4i^EBB7^-ZG", P3sÕqE"vUWՈscD^0~{hfyi޽L{˗/QHS`ĔyT_bzBe~6o<֩SZI`|:۷/&#zU$bW]Aev1NZ}jt:푚xWY,P?g |$ևi=@Un߾=+,,vdUR u-[sx< )6mZu+\q]JIIArrSLѨP(łaÆ n?ւ/77ǎ,0L9[~& ^_^G8摣b;#;`6"Dyu:^/ z=֭[|]t>;wb9PsQJ'zUbf(f\:wq!++siq}F~F 45\ã|1گEfHsDSx`0xdo#%&&b裏ЧO}t:0N>;vp?ܵHIf39|eed2gϞHIIN:Op͊{Xqi8+m#"7=nܸsNj[_>۾}{3 * nܸ1۰aCPU=5bd׳ѣGOzFfGݒ rݑ'N<~Ǧ"/<ѨX۶m٪U9Z8L^ AK1_0׫!Qghd2H P#-"jE$' VNoՑٺ,|(E "3mn@ |ͼE9}1bv$$$(M&233d$IhժmۆA񟉬ձ6xp8XZZW^y&Mʕ+y NX,>-y[6B! 
Ѻuk|W~iii-#E^`XjmB:޽{qc+NjoQ=FL!C g4e4LSHHƍWќ`h4nv^{ժv'އCAA}=3=IP)&JQ6>n@yCjfqq1rrrR]z&t333aX,nHMU4Êt#?ӦMC^^=ʕ3]VaXN'(f3Ol6c̙hݺ5ӹҶlЭEjŠAЯ_?ԩa6!IN'?g`O>$6lbN4F{ri2Zg$ AX,0"E4>:III\kP߃hv:QTp%IL&mV`ӿdHىHRcHIIAÆ 𐲩lUc+++CYYt:RXVD"HՊٳgyHNNc Z56tɃ?СC1f۷t: WTkC1ƣ8TGahȈ n$$$7ȑ#Oʕ17<1 >|f5xj\$!)) ׻F#1o<(xG5#Ak,>H$zRt:<G'^E/{VSeYA5;3fP\QW8njE˖-裏ZDLTf^˹I1a6yZt(cp88zVtL& }vEl6iqW|Z9ND'VnдiӘNV5T1sc4rXxзo_P pč'AJb ]vZ3hٳGX: /'| ,@ƍ!2Bl6RD("^7o뇏?{Rìhz^S Q5ie՟-:C oabJZekn#6lڵk^r~4g1< .\݃[afp,f)L& ߯Y4BiBZj͛w&JNJ&~@{uE.so 7^eee=DzIȲ=X-O}:xx9s ##C ᎀ":i8_Jgjg2 '',רSCO%ZElMR3gΠIII1SWT%z-T4* )R䁉LA*))QѪRNdbyDɊE7nqwcΝz rƌ}͛7/$L&Ȩ312gˁJ~Z7j5*6tlV:}48 Q&VSZQF!ωرc\8Na=t2O&z{b*Fe_VVH$ף]vQVV\hFn:u 3f :w N6m`ҤIsb؋nDϣ^zDN"h۶-еkWvm,hm> ^{5 l6f.$KǨ~?/(RGaH7ѥ^GFF222+!" *ձcG.""gHeYFii)WVJ1S튔ldBjj*Tb?b„ (**}C& 7nȑ#(,,lƙ3gԩSQ}s Om,IE2>-[ɓpaRL&\.ddd %%r 5jpmLzz 0sL,YA1ʪNZ^6^UTY&V2QһZB)Ҧ߀LXshj] _?֭QZZqB4h?rTV(c۹HLLڵkѣG<Qyp8 u%¾sssQTTÇ#// 9 VEiӦe˖{N `0dlݺVIII\iLr`0ٳHgڵk9b1W9\],åakpfZ!Մ ^ݸ\P=k˖-^#F##G5#*mܸ$)PbTӊ4kQX."*=ɓ'֭[4NNѣDqF:u ֭gi@6E@k(^lQGLEp[kذ9D`G,{G0k,gmŢbS7 QU-ۍ={pcL(J-?::r^ߡ1J\uU$ k֬7|R$$$(BM6q$24E񤤤(fA=9#Yp8`2SOS:TE{ND1rH1DNN {n(ҰqSR`! BƨQ`49{=[40_ٌL4oCݛ׉i˖-شi 2Gnrr"]HD}2e x B!^8Wn!*2# *(" Ѯ];GAAAZhm۶gϞ功 .>9r~kHIIQp+A 3Ĉ q(Bzz:v;N'xEưg/εL&~?V+~豴Tz9-:d᪡UZZ=NRReeeرcn7틡Cr '.]2—EC`,"χDL8 .i"n]V|H2++ SNō7ވ_زe `XS,YB+@@bfJySBM=sDrǍ EpPHA3c~ -[xb3JXbSU Zy4uB0=Faƌ6l75RjڵXh7D# d2f!--KhDQF+GjQZfrhh4⦛n¸q㐖?[i&4jCur@.KA$ᮻ‚ x7"E`bޫiӦ0 p8hժoIСCL::t@0P]2!?<>cX:P5s TEMtDt:p xgx%鐲/,,[oXgh+ 4<=i4CaժU+&Bw6{I.CO 9ݻ1|plڴ Oȑ#QPPm۶ᡇO< N'g9."B|J0ԩSq]wq8$QGj 4k֌GDrr2Μ9e˖a"^HϬ&M©S5XZղqQKN{%I0MT@iif>Rr"/VnSD`"Jӡ3?Xf 'ELu+~Rmc/^zh4D0l0߿;v@Ϟ=ѦMKX~=G;'R9 bK,СC(F4pTL7<6nܘGC`Çǵ^-[ ##}Pv볅0aB]ĥqv#v>:ubpt:y/oncX`ATTeXL+㕔nM P(aK.ҥKyMe>BhFyyy]q;EC"A8~8жm[@II &M!C̙3W~v{v)i, |>`p^X0==|`fhݻ3lq?ٌ"z_5VXQ'9(.åatD9w<$#$֭hC^x̜9S1gGdȪMR V_=233/z^5w\,x EP'#E#I~?GR 0>(  tb…7o<IIIՓA4LۑI>K-U^۷o?ǃ,[ M4h999dY{&N/w%^~;x^<?o0A)));v,l٢`>ХVȤz=*Q%{FA<ӟ޽{nzQXNJJQ&7 *4oeee㘊xٲe |IDsf6XN<5ejñt(--BiXJQ*Wl"v^ҁN*B#P(n7|>v;x ̛7|>"2xr $I{tMϩB)BRCV+PS/䕺t:Nt̚5 ۷ǨQpYEΟwq6l磴T8Q 7c)&&[& baMt,r7̕`@jj*+NGNīI ðXf x ,Hψ)IXc-r%8 H0,ObM&Y ׮]CbРA8q">3磴`cŃR;)GFBBV+VZYf) EGZo'u]g}+&LJ5͝;~-Ox^M4HR4J_YYYxwpq2dur>`0E߿? 6lؠ;N:h)@C5. 
, &}chݺ5/YS>EM4F'h*K/>s,ʸN(]__ &b6fi)i-/%o N<[bXhƎc„ xzj8p\hfM/F'$$/2ƌIxɋC_s*Ŕ) QW*6SdL>;fz~Ukv՚t9-[֭[_֭[q1dff>s֮]H%^s8 ),LUĥ3)v1vX~J:;wg%p~P(čرcqM7q9-\%͝;eE{R!VM].x굶?˺e41l0L8#̙'B$5jڷo *ByCdxн{w`Aɓ4i'ӵX,e>ڈfaĈXx1,Xƍ_DꚖ?b֬Y۷/{9x<>@b| $b 7/ģ2,+$MH+"{cE94և̙F)&h, Bt5`„ |Vu Vj1Ác-^?#PwJfUdM| rJ0^4̓|Kp2dӑNvصW[> 6AX v ͥ ǏǓO>뮻^ay<BMqݻjbQ3t:yU^뮻ŸglܸWСCa6qĉ+":/Wիyfw}ݻ7V+ 8R0ikcǎg}vGxnn.x =|#e??+VB7࿣жm[L8Qa2228hd2pH"qYetq(?(CB$Xn?+5N"MUYŴV4"2\X."&IG)F5WwykCNebX\w8.\w`a@D;nr+:0@ &ڵGQQ*lZD!2'xB=" …D ;'e`:7.sA`ٳ̳Ojh@.^rjƍ1qDtnٌB^ !t6"xع'|n=t&63&lBfFPvÒGbȻ/UDpΗsU B|_Ǐш!CsJCW  ޽{c͚5s XAPש7n̯Ά~@.z"U(>`J|({~t:y`z1x`XH= '*\y+aGx5J磤oBm y#@ƃ1~SGB/ƽx-,bQRK$dggcxwFKL{\.ԫW jU 8p"4@#$Ip%$Ge op(PC*/+OX-J`4b{M<3gbKJJ0y<((( <77ӦMa|tt4iC"x0W> O0f `2bU$#a"dYB&hּ96j 'z\uj|$w,`ҥKJ} #~rFѢEyy9 VM/j{ҋBlj=/"aI8C8Faa!7jb{`m}g0n| |г*d=a"86X2cƌCpnǩS0uT)޵PԸ)5W L:?裏*XwD N'c:ZK#eyWtFd9^@PBfhsUÏcSQ?v iitMk]H4T*R8Dz:t|>{ b0f=π[d@`2ЬY3;v ;w}Y;7ĽvlǪUp)4jw^Gݸ T Xqԯ+Җ=Ya9c K.ѣx*S&:k!t%+Ri@.T_=jdTlʲa}Al6~8ppmDtt-dŧW^@(0zW#O53jAٳg[nӣtFʕ+lqVdɓ'cڵ)]2qYE='=#^E3\bڊ&"rEa˃@ZZի3'O`W :F4& ـ0 c8+ B0*dfk%d5w|~6{-%cΜ9zp:x<0ͰZ ߺE(2qAVV!lFKL{T wVii)e=ң&iРN>@R" j2# #U6zJBuvyǝ{{.ULC2z)<||~ 7o*EfgB-+1g_mǾCoCN'^K?yfILCDŽ PVZnݺc}8v8>S:B!LD"F`̄`0ل۷eee`1CՂHfúu駟bذa#peXmVAحV@dU!2^ׇdBdIL$)@~2nW_B1TDrHF&7nOwq\~}=_Q{ZP^V^%X)fI jH9|VQ-AQD -ZaÆ Dih/-ClEP (hfCqq1OqRy8Frr;E c ~'N_y̙3ԩn\s5HLLDRR'Ҥ,~IFqp_ e'ЦS'hzz.!\C*=ӡC7ϋn`W_EII@x>X^=|2-IX"V BĐHQ!Ī2p!qWBHEł#Gr)=t ,tzڵ3nݎҲrtكD ck"TDG ѱc;4kvHEt嗂E"Ma " GP\Tf͛bCbϻm6P>'Nĉ'QPPł0chUDS/AqaQ8%%%سgZj3jӦ ի&9'__D"8}4<RRR0Tq\M9337X>WDYK$0%afh/D"`fۆ_s_pVx:=J`6af*IΞɇbAZJJŤbD*S3ߑ 6Z[!x=n4iE%(++ç~/֭ǼyLDϞ" Hf ȄD'`yϔ ;v#8jeP jd()mvi}KPCiit+ s@NE\b\Gj 4~뮻 صkGv\.8>b6eK Cno?Q/> 0V!a 6@o_Ko!I:v*D۶mSQ/IسB7nLJܼy36nNӉP(sbڵĄcCAj RvR $'GƪPe=cxѰQ#$%'ALgEvkU9 " la!+?zz#Œ!U 5#a$*"͇,P~a6ꫯ⧟vUg\0Gm0葚Sqaki c Q\\AрV %4¢ҁUIlZ#ə&'V$^[9G y_)@ ͇3F"0]RHd^b8ǡ۫JZn>UW̆ xdU(x_ E(;)UՊzJYIb 7?>f̘{~E>PnQK|^26oތ1a`W1B Cz1AEDPkDh4&5&7$*1%jb"DJe~s fFM3w{>{w}w6PN~E"R6H)$_j IDAT MC ‘(믣h4e`Z6 z(7 EP9x<΍_2_νmpq1bԖe|r;@Oul?@R)>p`KL kbWVVR__=?C_h} WphqZZZ vm.>3{ƎG^|L:%kuȤ3hF#*/ئI(yLL&TFu`Dþhnc, qhNhWϦM+w) |毰{w +4F\P($[(Bmm-ݻw/YA*˜#&B²,V\YFcKuu5Ç/q5n2 ӧO筷bĉd2nw+DUE݊mzHem/l-FIm<e4\kŕWD8?!Eq K@j5m=z[n䏠dKSS{eȑ 8zozn*}\[t9\VCo){lkѕB0 4%D 9?RuTC'`b&iHG}c렫:UUT$*0Bhp|otMу={0T$ l)Ήx[f QV&޽;2[ǩ\-6xE)O~2n8^A#h|!O8s=4UCQ؎+#BP<y8D"c6?VUUa 򖉪Kss3ys?%˓'!Λtٶm+X 4vI]}=mw9#kܴiS Fб ~%ɒ%KhJ[DAXѺ塇Q|Yі+ ѩS'u&pCf3_mDyqx9.TZ>y.l:7\۶nN_O<:\y.D5UAǥBa \5BP  J(Fa{a.}TU-zj_?uݻK$BQMpl0x29{`œ9/ /`f@AVZZZ8KspS;QGŽw]F?~8Acu]hF&! & 0Q&Ae\bk%>W,cԩm\&d3i(u?d߼y38h~n:[mcEw-^8ݻwj*lץ"v\Zt5>.aEt:K$/ns1mڅ7Lpy[cNi.dzm6 79vH$!CX t5J*ir9 FvbpQںF,_C( fo3^mb lƶM?7BLD"3P]]8]chaYF*Iwj#X}m6d̜9S۶Mmmm1,&\u]2 Occj?( r '`?h>4\!OPI`F:Fxu|9l&MDmDQ*qR+V<:uBE%Nxdflo(,4M^#,sϽ\ 4m+: ~Amm۶5 å^zIȁ gϞtuPhc ^;ؐS(Ut~]]LFKrcQҰrJL;.h]r\ CXC$u{ݳ'7͸<}Ѳ"Hp?_|C<6o͛0`@ssYgѩS'5 凶ZpXB--3e` RU [DQ<>dohsG.]sDP`EȰN55< 6DžSreqƙgRLbh:Mxg֬Y^Lhł m8C>? ʻ;xtԉX,&JV- g~mۖ|^ϟT T&C,KM #˲iFLBϞ=7o<BD"TUUu'*9sDV %rff…p ̞=HQ5E@TJnŵ +9sp饗Dvyp[diKZŃHP)2iTU"伉Xr%?}1݂c[FaK$²-Z2iBᐼж8jAڷhq0\h۶fp <<\x%E󤬓"#/Ɩf[|ɖvј"HNq?>\pO?tw%`c B y![= }[CrSRG{e L(ϥo 96nH,w\{t4ٸqcQ(HH$FQrЗ疎.qD"_k[n={VEE\O?͕W^_μyD"d2iڪ`چH Ə2߭392uxǹb?|l۶K.Ẑ bŹF45=˜*O"R~Qr >tGHRr9 /+dĉ%0x,(%d2Mn̪ի1r;ضo<($ƍ "i|){AVnAC1GLG8AE4ϳ,QF]wM?nU֚cqyN48y7_TUeΜ9|,-6l!ӟ\xFcika?P(_s2m4ƌñeYaLdѢEX_~Kbr{ u]***HRyN:$5v7\ΎY}:NLH& ,"P(DuuXwTB¶mmX4  7^[)" &nx6׮,QD+@߰apvtfѢE|+_S 搄7*JD.7ͬ_^2TK$iɲ,f̘gQt\>eG28;v+`ܹ4YlfkbCuuLE@8{G{=֭['Uɤ]nll,+X 0'L~!JѳW~'? 
֬^?iKn#G/ة#H$eY2iÇ _3ԍ酅%h|^K/ɓ'3ydh`ǣRF&حXBϲg^6oL}tU%P,D%1CQUT>S,R= J$?~˲ի~b5("JɂI!U:_޽{%MXvTTguu5\r aoGO0('MMML0yyf|kzM ƍ;8H02e w}7۷o0%κj${eG].0VU?_vO1m4Ո$LJ#&|vH$ٳuֲo_mAWб]L::b1rnt֍{iB]! Mؾ}{IR=ŽqKKK c#hvC%codY"cȑlٲE,s#3T80dq裏Pמ`0 \ HDhi2nܸ#a9~}Nٲu3ecm+1^Du`ew`„ b18r\Еx@#EV__ϪUHR.JT-Pm&\-ӗ*g("J3j) TWWS($?.Dt6fw<݈9(w1C.Y BKK(e,aA6lÆ k|zӧdK8(qmfbċ$iOe־O{G[)S8묳GV`կ~+sϑf%<֮y..G]j`̀¶md7[qxvǮ( a-TU5^IcC;v`,yVZɾ}Ag 8+͋m 0` yo|غu+2 &h:<H$XŐ!CH&ZW ot=DUe]i|9|!?ɠgvȐ!m^.EL<7~xY( `ow!|Z09,'" Gꫯƶch,ƤI~AXf >Q#T7>G.]',ڂF*>=hbs9#F$,:(2! ŇzN$A7t)k|qr\I$ ;vl`].$r&L H8¢NT`-/|'e}`ҤIL8Q:~mj\%ʳE!:t"ovċC91G@ mƒTr9V%KeŊ[b7tX4B8=[d&ۮK6D`?25;Iqw殮e?[TѶm _% ȡNΝ9غu+C \'hFuu 5]L6PX/(IdYCPM,FԲeimҽ{wZZZ9s&[laڵh%K۟l IDATۻ7GJѭ;='~챬XC3P X,kO B!ۖ5]=O=\8;8 c׮]Rf[8FPÇK~#??UTbq5kװni@w CʩF"  &tu54} v1kfI+ӭ[6CMy,z8p _җD"/ 0ajڴi|;$zŎ*(+Kmnx[n8S&/DZZ,P^ϺuJ,}F:&NHuߘj>̇k zh+mE|T:MRz{sOfXzYC:P_%KP* LMoWo6]tnuֱqFNwΝw!GpEQud2.:sH,- :f1bؾqF~wT#-[sNԞITT!H$BH$bX# '**(HhѴ d9_²FP dAYCmnT: 2\P: 6ǫQ޽;SL|=v܄YȓX~=,`e466 xߟ)S/NK.V)"fSwDQ֮]K>}xشiSQРA}= LӔ ˲vb1wi2sLr\SNP-6 Gܰo乿=nj72hQō`;~v\TpZȾOUUUp $ACj[&2\,x8k=a_5"e~:zF(k瞗xrji+?aX}I{Ç FLH KOu)>*ַc|ٓ5Dðk-ۢSU5ǏC6e[۽9/O=z;w>5#BEU`øa?ruqR>< Un]P;w3Ȳ:p~P[۷/ӟ;Ect0akږGaɒ%^ZJzr9ً`;!Wj9i2tP~誵|pK(3577S]]/ {.-4UMGϯ((e*Zo ejjjܹkJ(P<vq ]˦H!L:~;Bh4*>yӇd2YM ѣV.|8~X0k6ۗ+WB>'/"X*=zz  liS/+g;O?4vnɓ91m"!D2yXVt'NiiiaٜuY%|~  `A?!/掻dێD!#J&)s)TAWExH56aŃ'+dFg}E8"U G>CuKۥ Jtq>\MK,`p7Eg >6mD^+\tE{[5k+Wɓ( ̞$7nFm!USY]j:ww#x%?ctm!û tԉp$L;w._CӨ{+pyѫWBSV BlqbX:vb뭪0 ΝW_8455IHt$s\ 3qC#AT Ha|A.ƌ}FL&G8#Jw֖*h( JCCCSP8j-&Aq}Ѵ/V\۶̣4 Qw}qVΠo ˆk>[444c+U]űmIb7<{Qn-tޝ]vqkomTWU hnhoΉ'AQ~~߰{nb~,_lΝyGybqÌ3y;vJ;Rqtq,c97|$WTS9! CWTT~>[΅?I|0[~@uu5(GS1(2)6ٳ_6%d0Ջo|_P^]=z4<{o|{^*9}yW/XfhTTTuV"!#9dlЍmi)%WVml,UE-nۯz/ӖL#Nj:@6yBI `аXU"V<(tG^𸢢ݻwӳgON>d9yd֭\~J.X˸.dWCti46+]= BQggn1p\b!ܹ>uvm+~c 9rҼ8G[(낪(J]S AQ=T]((!#[z4BDUU0jj|Nk*%ϩ hJp9\ćhMⵕ_WBpQU0 92 Cwߕ0 To?*Y B@EEc˖-b1_}դR)<|-J0e(؎̚5k݇|뮻Xn#DEcF*su'n zꍦi|otRlz1ʵE?/]Q)MT 8a[MSc3M$w>̜8W+QU(0 lf%KG!,믿{D"!(#Ė0~_r`bض͙gSO=Ō3$a|_X,vP`^< sWZ|0!ӧwO= ⵄSў|UEQ4Mvo~ƍGEEǢ ]C0~^BiV44 Vn SZ ذP8LdСF8s]BWPTM#Yqܢ%IPo&7C$F,hЧiA%(0]u2}xU,<_Kx}qf]"˽Ť C<4MY,]DGjjjbƌ477sM7ѫW/4M`߾}aRU ލ444㏓NUb{wjm!WUU"s^4! :ŋ׿N2.-) `ǎD"iTu,˒ђh@)y<}%-"` Y'X/jcqF8TUŲ,ts}Q>OHaL48l:( b:]1wHgumsP(׿r]_ZN84MdOPGG\ۊMu)?C7Ȥ3,]LvQݩL6# ^yy.^r)f *D`* p8`~0h*x,K2D<ӈ9=WD@yB4W,x'ɐ*¬F)v}+ 4eɃ5z%P'I&NȐ!CJFuz-a+?C<K/1g8ӟѣ7oG~n*N=?5l9/[o-AUuTTl.F^G?B!Җ8E'R[[>9xuG-梋.b~hTֿ O'^K.9 $eV4 Bx0 ,lӧ{ /.]P(8lNDǂgH)j9|-#AҖ\LR=;q w@\WyQi0WQ Ãh¯TH UW s^A&XwVJZhL&&$ d2i x*ȤS8C4`Y>zöL4-b:lr) x'b]t!N)'a(jtM|Qh~+`#u%ZIdF2nC4lO?s=O*G|nks&Tk{1(}{9r$\pcǎ /u]GKKp7d־cmI]vS:^6P5, I| FI,%ǵH4xGeUNaܩXpEq<|W迎Ce& m0:jߚQYYI*4MvE&jDڵ+}K.̝ \~0x`֯[;Mꪫh4:weXa1G"*} VJrq*++%{2-x?wA@{.(4~zyxoR]U6\|h4aO$*O #,N(0Kı-˴d,۷oϑb IDATS=L.Kl([S[J&M00BaL‚ xg߿?>=(J|#2tWTeill_'s=0qD~Jg}6_5k?Op]믿^ \>5\C"pݘɼyxM_-[:;wd޽R(99eP(ĦM4hTE E%*#2*@E,77d;}$rKya\yFƍyǸ( (5T+- %ț캮 Aᢋ_ƥּr(PQClh4V$7//kUG@J|?3 #pfY}:|Gc8j ѳg_z)b,vރGhFsK3;נ(*ŔI(BUC ϕqCX..U ;{xШ- vD 漂ŪF]BYZF]]?ٳgp Hq8c׮]%jUwٲe޽P(ilj"NhBNp))‚SpI'~==O&('+dxgC+D"TVV()LrgOs5/~o|8b}^qBk׮e…,X_Kyt֍3<3l0bst2{׭D$%wy4X~6o*'5I(ƪ&)֌=<܈aH$曹$qUUO>Y* HeذRYPgm&삫gn G[oF9ᄱ,\8:u*F#{P߇CGDW5 n& ~J栦B]P /Ւ蚊'LhH~ 2K- &MN C<WS\TTl߾۶et:MϞ=GiW_3 tSr)_dsg%,#G'źS㩪˗t4K-c̙^qHG4Q ,Y'o^ZA?z.wQnjK箄 x]b):aODBx`4EGp~ .$rSlؠs,i9íS׫* SiuV, aL.(4MD +~ RaLLC'{b1nn6֭[ٳ>ŋχav}ѣYd K.e37Rm#ϣ( >Æ cݚt Bmm-]v˖/g˯ۋ9,}ϯ TZlX%#X]`j*bSSSUUUdY&O,g曙9s&G777D#1gv]xWYp$[p50~x4MSOB"MDh\fJe_<׳xbرc7n4`D`- Oj-29 :b M8麮O^|E<3#5 $"e`SyA_o0[$Q_psD"! 
tS XqFѻ?4E+g ⋌(e96`Ee44PX6V|$Q*".TAڲ;EWpp@UbͺENd|~p83<ѣGq8t֍|G"a^DIA@_ޜdy:!i;3Xr,DEY}kYHY4@B,V5sٝIJJᏋIIFh&PQn(}X#fNݻwӭ[7rrrXl˗/'..Aӧϧ>}oD!Ӑ" 1j|=/r6si] ޷ |?.0A9F,져vj愊<Μ9c eee1gL’%K믉gZdYfܸq'?aСkΆbuQ |kz_XHC$vu%0k,.rnrrr8rǎ[oW_}ӧƍ7̓j#PaI^9w,]ԞpH޽ݻ7ǏgÆ $)J4M#''˥^Jv(8v^}gQ*}aMyJNeI6ҔbUH]VЊŦc[?6Be0Z$Kru8A58z0Rds#9hO $lByai8jUHiՊ Ϟc;l͒i~ϓO>yAV#vZΜ9 gHMM駟&!!f~_%jadќgf`G!PÔLn ` ! K#aS=W_}ի9y$P]]MCC=zЧOFIff&_-[p8>|8߿|jkkY`ӟر##GdСtܙ*VX+h۶-m۶2bm \z466/=($' 4hNRٵk ߕ^|I!۞w*۶};ﶂ{^i߾=W]u"99Il_Wϣ>ʍ7HZZZ3"G]o;gXZ=>Z2Gc$1o</^Lcc#GԩShF۶m={6 pI^{5vŮ]غu+vmv<uPUߢcͩifͬX-[M6̜9K/du/ILLZjKEJ{ӧ 8t7|!rt\ {VߒkKb \lAOBB\6v"ΨwZQ]]~ NQQ4 ]Ue9:DPG 55g߾!Ctn _$DID "Ya @u] TֲichGpm۶Ѯ];͛g/HMXz5v`#1%%tDQEYq#D"hͦH$ %Q@$sڵݻwFݹ|ߟ, b_Clklٲ>}C=nm۶O ?ϚԱ piOrdffұcG̙c\AI?| ))?U~˗9p( HRuEL62MK.ٓtG֭mٸq#}~^x3g {k}B`ֺf߾}cm`dee5K K/yiӦ CR[[( >Ͼ}`ɒ%֭[Ǯ]ذa?O0a /!h @ /`|駔!IIIW";;Ap\$''*۷o^r1j(L;eT1c96l ==׍(VHvTJ"8祢@SF@QUN9ࠣjNSP0LDhha3A0G4m႒6&)0У{5D+f}g!(0 QD711"N|@"ں TL3 ȺjEMu .th$RRs]MKbúc5 Z Z~G$)>'}WE!2<|H8\.Ba QONVRU8^njÃ>رc|6X}Z,pٲe?r:g*9j|>ѵkg 5%OU|he'%TC*v߲oh<@vN(>U!ӻO_ڼ?lذUU=|fY(R]]M("sN URRB^^b$MƂ ìa&M⪫"77gygy#G(M~~>~ &pYN2339x /fż<3rZK d)VZTgtڵ눱L^MCC^o)XDJ(\466U^|Zq]wGee%:ubСN(-b۶m̞={듭lʬLYr%#G{]ndC{^f?t8P* DqXC8Ow(\hr?>X,x}?}±kk}=!ch:B +رfK2풒޽;s ߜ.];STTDML>΂u]r1c DQ[n!%%z۷石j*VZŨQ4i:uk׮;wj041P(Fy}P0$YI) 17\̖-[2e ,ʨA0 !%b¸8?uuovB2l()оCGzm/^La!GLHJJӼ444}vˑ#Glx͢L{< 7i߾MHJJO>+ZA8S__O.]MHJJ?]v3رS2~xRRRq Vor1ydrssXz5֭SN6I@cn.O8ҥK/~EG kI%/(cIh*j4`9$MAhhh-N8a3v>]ٴi> 7Lpt,!! VɓСٛl  9s&9>eee̟?kgϲi&<`8q")))\.Atqx\b&$/Y IDATJj,//UJ W^9͛Vii$'!Iu#IS0㦠'OBٿw/M<-A<{{%5-H$b颢"N'ׯ端BE0aK.1VZ֒^XϷ4YRjXܴi9r$1K,᭷bŊ 4)S0zhN:ESS dggsw0{ldYw>ȑ#/2SNev1[rutzdΪ6YVEY +9@Vשaݺ/8qy&NۍbӲL6mo_:u!#aPPP@NN:tʮƺuFjj*yyy^`… ZZ֔[fDQH$B׮]M5ʦe.55` ?@jeV@۲RXѭ"-.ְ[t:0p8\hj_Z#mqѢEA΂'N}Q!I=JkdS"F$\.=Z[8S"C !sGN9cZ59$ b#-e؈|ܜPv,].QE?>k֬aуCҳgOY~=˗/>`Ŋ 0YfKaBMB6Wd [<4D骚!1Y8pۿ榛n"33^'`8 QVȢH0mv6O>$@Y +$'ϻˢE '))xs|gC1m4ԩSm/+N<$ɦ[EhiJKc*d ud͛Ǽyxxeٓ￟ݻΉ'x<8N2e RXXHee%[n>׿u3UYY(c]bQQ%p.5lnA^^׭OWrijj#olA0vXٻ7 Ѧm8]Xf U||?"Ξ=,,Zޔ'[8UYɲ̩S8{,ӧO4vz~Ҧ6nu!b4hmBQ5bt.J;6VR~NsΕYnΝ;my6 g1cݛ8vN%L!ɊC67\P(=:"9Aj={ѣGӿ?5WG7`0̘1#ꂮ3uTRRӨ,QGχCl6lp:0 sԩ/wK,vJmm-={1zh[6nȎ;c =n=GP ˇ0 <׃2 tF>\K^eUzԣ׻뤥b۶ܵx C,]+ڧM6OlETUU1bfOlJ+lJ64M#>>f8YIgkG2hYte[nSrJx rrr{䦛nO>ddd(..[nk׎qƱgO^^;wdԩnݚ)S0|pMfmK7j[`e(}rI sNhnTƏ!s ;r9~ v$FCJJ2Mgu6)fggӮ];222P@ W\oˉ'm,6<*[Zc~ѣFB )DzJ(&L#AK$(|ݕBKH3@2*;:\y'kv|"=l4z9t̯VJMMeҤI&oP^^~1XN;bd-t=  ((f7'_?r:)**dثhl3Qضu+oeS]WK8p 誆TUULFFUUU|wiY5̙3Yl999~nV<{ckrssyضu;W 7@FzB(@v:q8$$KѣdIbذa w6&^s ))IȒU9K',q9ٓ 4=q:],]Ç婧&))dJJJh߾=]q'N?'k9mC}(63nF`UlgM*Ӫ}ֿ-mFStb iJ2+1cSLa,\+W?0tP.rNҥ }E$V\_|s(((`,X>kqٞPz i!@R 8oOW_<˅G b0{^z1lP)`T]UUE%ZgfqI<#Et҅뮻QFxȰvOejj*Э[7YY{f+v»Kqq1 tޣ0*DUU QѫaDPuf ̸ y^lB?*&49پ?ADhECp$$S:G4j,lIl6oŋ9[Aj^6h =^|@5H#o͠Tt.GPBa:oAGD(_~}2btDZZ:n8`=z ٳo&l kzFŧ8|0/,$U+nLr)ҥ G]c'sۗBy7ټy3GVTyʕ$ 0 #=z4OW^A4԰q6SRx>;pZ8LXՈLRqGĈ ڶmÇLPn ҶoߞpT1AhՇ={@(@ (}&ih *AxAJJM}/X9s( wy'˖-!kc,,}].mRHttVt$`F3iAՀ6> v67OWdqL>ZXJJ֥OmRSS鶫}ާAD"旿%))׬v`i>3&M $gϞ%%%EqIΜ9c±XsvDcc[xQjkkkѣr饗DX(<<~~&ZK{/橧ɓر#wy'Æ 396qG؎۶m̙3$%%ݻ%33@ `;tcv'MfF9OHG%4-B$0~Tzm dڵ^Atu\Kiۦ-Mhb -)ݛzX|9}$1vX fat]CEx0"˿{ )wMGC "\A,.#j[$HFYiljknUp!zt܉ĤDԐBJVAum5dUX?:u&֢p Ua.7ФeN_hg ~oN׮])//gСk/6`i̙3ƩQhh2lwTr* N=}.YBVp:{g1`@:kϊO>a H=?Fg^|k'>Ç2h\9**mJUe%Ǐ"Z]?:ML41c&y饗/({ 4؈i$'/Z^5lJ\\G mV=|8YRٳ<~Gm';dW9o߁:̵эfJկ͞I$Ҭ?s>|Xt\pX!Vޡɢ"'{S_o X%eYPYX^e֭[ӹsgΦ{tܙ>}4su꡵ hqv}ظq#~-'Ofɸn{ay8NJr5sNq݈@ p8i B,գnVu](Jɓ\TUUVB6t(qq~h ӇogC l޼-[edYSN<̞=$@ \C̞=K/ѣGS]]W 6n3!15جbYȶ͒aMҵ _oc|1p8ډN,F"defѪU~!/Νu̙s7zYGfnM zt#}g^n'@0DbB<(!r-Z> )9wߡ{o!>>lj+>jyeUVph۶-Ѷm[222h۶->NMME***HNN蔔@||ǒ$Μ9CSSݻSRRb? 
.]ХK:vȠAݻ7 su݆ xy1uTmعsgm`{њ hi4M);,=Q$dQ,00"z3!#9Lqt+b:yѣ;$DtʪJ`NjAfh gƍt_q}X \f[ZZ]wŬY43S0aPQp:eZXE$&Km‚Col0 їu Yd1XiB}Wq\7i˖-e/QU^=zG q8].*Ε3zhg+ӹPg%ͪ Bgs>֭-^RUロ_M6qluVZ+'CF%?_uUQill($jhfg۶|}JϙDa3|0zCrr$SU]ͼy kۯ?MCv8HMIA#3tMpMy̟?H$Bff&= C f2idF_5%gf87|3 _7Fj;]n.S_kD!FEDID%(*:4PXXhC.B`Ǐ7I81)&/bDT3 JMPӁ墦P(DVFk23.6w{I={ug/:v$SVvgUZ+}9srrݶtP[k.;Ig 6RVZe3~?v/ZqifXPƍ7oO?=FS.ChNuTUAD#!"--P @AqKV| (^dܹ4F>l5zKIIa޼y"w׿5̙3zHEQLxR7$&Q][eV9h*`_p¨9QrκW^5\Ã>_f_cǎ461tA*NeɑXb&uLAFz:td D)q46zt$KٳIpfomM-1ʒ,ʈ5r!}YgZCL-V[nK.]&BBLkYoYS&1heۇQFڽFÁn3ˤta. ,.w 6YmFYY>Z)S J xn%3gΤgp SdǏW|gHDΝyGׯ={l[Z,?rqldAqVn,, Yt)cƌo?RmRc)Pf%裏/&##þ.=*:$5#gkoG"B =/];wp1؈󑒒_aڗy$&&7{D#MbA/Y?t֍uѮ];;HYZ1NIױxb W1b$$&&W>}޽ bMn:ȄXEUVt؞K.zhACm,RWE8|plق(Ӈ xZLygINN& ` D,~ IDATskM~+?r$%'QQv}Pp˗H$˜W2edK]m-CFS5ALFdJ!K$%$h*)N8;B^EQh׮gJJh΋/O>pp ZBUU)V2MLLoȆ/7Hyy9 n!i8*0;|Gź399hZ!E!## 0I"E߅sd\8Rpp 70i${Ua[N-zҖ%TrrE8$K;;VZb5~)jJJJsvkB}0aEaUW1+}ѡC*++q]JF"uf&gJKCu-bӦM,]i]uc /*y'4PUVqW3G07|wyp8LaA!r*QUUŪOWpkdꫯOጾD,Ѐ")9V(uudwᤱGQf/|$*21cW\q+V7߼ X 0ÇC$Ê9'̰ı&RIأ>}r`ٲeYN~7x# /aq&*=&1.b uÚ$ʸ=n$YF  `,*m3*s|E^i߾}M ZkkЩ˺n4yv+CfqNٽ/֭#!ªJ6ml{j4R'jXWkJΜrs-D"Afܹ۷fN:u]gUU J,fmUiUVj΂,b-ZeR]..z{ОuޱAE[×Lee%7n}tݬwdVP @D`D%cY9$:vIݛ9srwsweڶo ڵ455vkN"szD Ԉf;[@}}=6l{e޼y6inP*[bX,뇡# XE5g4l1n  ߖD^&kQNEE?\DϿ ٷ2{=k1:wk<Z~"YYYx<֯_@gYEw0aD,Ԉ(HDH FeC5nIe #6v8\v6>.He ywϳU"Ig72~88~m>e$E. IᒈaMGBNU9X@ )ՓZnMuU%`zfmoﻏ;CJJ IIIB!t]qoonfY,.>F#"74f'HIIa߾}iӦYuE4dY"Y0缠ܚ@]]Z\ >tbpm:\.\.VZŢE˳mDE"sλ` ؟~B$ph5&Eאج[Ξ=[nvOUUJHHBEVZvZEaرW A 8? $& DO~:Wn6FwЁ]vdlقcĉn|A?STTԩSYn= df͚EMM III?0{/_=gbCzz:ݻwW^(Bzz:mڴaʕ64a54yd\KJJBբxA'++˞|SDAAD%ܢA5 K2&Th {AqHn$ UUke̘1\y<<3@cc X>VO `Ȑ!r-{uuu\.+*2dH3$*P/K2)**"3{ֵ3;#HLLO_g-C!_hoUbW+HnJ%5v◝Md+řIHH`)))deed VXAnnDZ#Dc=Xm,$&&YR~4~1x` BZZ-9d 4 *b1 444؊tRrrr֭k׮}A뮻x饗(..楗^b…l2jfڶmk?s3 D"yFfݻEQXre3Ce{Θ1cE /I,hpЩS'^o ,#!NB4ttdF"B!n7J'ngEM6$$%q`**ؽ>_J6}7n:ej>#3ۍ硱1`ΰ32^E as*pbB<)-0>Ķy8aTVVAP(d333 äwپR+ cA ((bC1A# ꍽ-wF{Daf]s]I0Q9y!uO)#5kPSSCVXf ~)-[$3g$`Yfѿ)#eڶm M2J$ 0W_}U+E!r hg-DH^k1gi٫iӣ߭ZTV^nï_iּJeX4J <tz*7:Z{=yS{/ӔPBmd> 0o< s1tܙ-Zh"?|Tsgjkkٽ{D].X@r ~?]v%6gee%[n9 .)v(,B?T|0eee4o% '<,Z~GI/1\h0|.\i\|ŜuYr3b_E~i%|D[ETV;`v)E6i'AH$4tfjn`5-R~ߝcnl^5.S àM6GMaa!Yѱd< /̲e0M:[n>}HP(iѢYYYqƔjyyy7P^^α+NfB8]?o߾lْ]v5A &޽;sc5 қoߎ(z~[ M>ҒR [ߑɝwŴiӲeKFza4M,˒h׊y'o嘞=?\ Ჲ24MM6,_^{>Mҥ \s _~9͚5O " ϗ4 f !I?ہ+}eZnMͩ_n6QEVPYYc=Ʋe88q" 0\YŸ4hDk 4  >QU9z)^/]wÆ cǎ{iE]$ʕ+eƘ~vVVvDiF@!3 W;bvY$HPY]ŪzR.ZXlْZ:wH~}޽mZm68lߺk?i] r^ @ @~n.ǃ(WVi|/XZjRyyyqDKwNQ":vꄮ{ kE0y<>xܼ|료K3WXW_}E2d޼yL}x<"dU__RE&FNY0Hлwo~G)3%f͚_ҡCt]o߾|r &)]wEvضmmj$dfd T֠*eeRQYe_l7 i޼9 =zyy9U`6XP˶]*x4m]J CXC<'`|>xdggӵkgN;4=X~WO58~7e(L1 K]E0LP0BO>}L0~ѲeK\K'g>o1][1rHyg!x\uUL2yFl'2p@>:(nvqG<-"HK^ST7ޘK9;$ٶɝ?Huu5gy|j~_V04qaV`ze=;dݺvcѢoY>t1} J |`\yR";mOSO?C}$B߾}9WoҶM_-eUSI&\=5W_).. 
/ʕ+p4hɥҁEV"3gCj&&L`ȑxR?&8HtY5\ìYpGTvYf 'x"'OFUUjjj\HDrr[P#['ryezlFrV:Y,Kܷ3))nyWؖ-L ItM'R]Sc;|> ҪuL믿Ygܹse"|}ˑH B\l߾ &)vڇ>C{v-0)m8_{ypKLqEV{Y@?soO=OfV~N:pB0hs9 6R|ߣ]vy晴mۖσ#xhs-;=5t,"RlO)EZRТ@*ۑ<:'bLtM%4Q5,Pǰ,uuu\~nݚO?tfҥ||Ͱaӧ{-cBY\ŞܱcSLW^mڴa\t9spWSPPg}&o3qD~8YYYlذ\nff̘AVx˓uu5yyycA**tPV^+D4lPe?j2S=klDZQT,rReo[8ETzcG$dqA4eҥs6:#߿ nl޼>S-ZDII eѥKYa ֎jxSs=ݻFٹs'|̝;qظq#?0vM6iK,ᡇbʔ)TWW9Sx78ꨣx'ds9**nJD={gO1r&b|sgHJ1e0{g׮]tڕ)SХs'2236M 4{t4D6$yy}BI =zla[q(ŸgMO<]d@~N&*.e-M\9+!Lcdl޲EzBP@w'PVVF,c,Y*Y) h賲XlEEE.Q`ME IDATtw3c9r4=X躎}FK (X=^r|>]PRZOK*9DTy]EdBV`v1j(>EᅲwQ5{,X~p8aAF9TXո-dfΜ3<_-?tga4M à &0~xf͚Ő!C$#˗/'vU#TTTH$$c;6AX"/> w>T>{A oO?4-" q 7pg ZWӰSԁD>7ҦMJJJOzeY{.#GSLa֭2l>s9 6P^^ALD2I"@=4p~ʲ\RMը qJʗLX%Ny}K}$ dRltFyE%ƍ{PSSJD"it֍;+0ꫯ,\PVʽz0 ɱ_0 =X:ڷo//i"l8ʌ3߿?g,lcǎ%Ls!,c&I Dֵ;YY,YO>6mH/`,юMUUU{dС7N:%Q=)2a):K"ĎmБ ֜tЁ3fH԰/ʘ߾Ltc„ teqCпҿo +N\ ,)h{u@Hǯ@K@E;੧k׮4oޜ|9ڴiömۘfݽD#@,  $7\c40 ٚJ[&ML(K.aR Z-UD:s|̙3bǡ/ /:KX*`0(̖e1w\lo^uYJ>}m@>}:#F`b18vBV)sS۔^?G_~̛7hݟuƯXgΜ-BEE>￟SWWG"`׮]mIۘt_7aO{tbz<>ɌƢL>֮_<Ø1c(--y7KNh9lڴoqƑ͓=;WAmW:q3jŝ("T0ɦ'N"Lhbrrs*HJo5L4+Wrgr7J:zyxꩧ@E}'3g ̙3Aɖw֍{W%Pvcv$XtB2k< Mш%b2Hg TWg۵?ONndڴicYXp8,_BR/^̜9sx7ٺu+Gy$#F`ذao^ N ε;Re"1K]yƏi0i$5k&gOH$`~PjR" ('}vfϞ%`v:~;K/eY 4AQTTiٳGlBCqBkADbQ<{U߮`ĉ<}b[DPX Q۶9ꨣ´˳Qg& |\S۶‹/0}Ba{1^~eIM n8**SO=zyq}پ};'Ofڴi8CnnOKp}9PWW'̙3;dp뮻h֬,KⲲd+|~F"A4%a$x>***bȑdff2}2L^sa4k֌3g2zhe|ܹ] н^TUwJ˔{0LJJKPUg~E ߞqM}жm[(ir=Сcw8ϸƽX,F,cڵr)LO&6%`eQSS#E1m4fϞ?3<Ü9s;v,Hߖ35WGzn(2^{ lNIRbիꫯOx7Núu̡ĹQ[>@(rMpLK5UkR{).>##i),*W_coӬysf~_6$w_δiشi EEE[_|>W^yEG䑎MXWWǍo䆑7Hqcؾ={J3 ۶y衇d*@l5M" J3]v14k֌ n7Q{/~_@`ظaeee r&{/RxRp M[lɖ-[6lW]uD"J۶m)//SN|駴h␹4aJW}Ru^}UF8< 8oF^/;(^}DŽ -ǎ˳<㑜էOoƏե$\'HbZ&M;+{MsҸk ;#Ut Ǵ),[ӧ3Q?tz0Dw}'R\\LmmDУGǶm(--E4>3)=%# O?E>ext'=Iä+WJuVCWee%RQQgwV1j(Oسg555.e +d͙EASu˖.e 4*IדO0m۶n+2--M(2w\ xeE 6ll+'*!V@ u]gz8èQ8p ֭ANyKZ>1c4mضŸ Wkx<F@UT5oXvk/Mu;+VPt 7fە+{\{$1tpBP\<4L2x`֯_ʕ+y駹袋ͥ?S2k,}ϟ/%vJ-,)UdgEzt|lْ^zQUU=#)E(: 77]׉D"rgj*nm<_|iѺZA("*g۰ n’%K0 .]כ.-ǃ8|{9գ)Kw_馛ɑjFIyy9\plɥJۋ謁QFQ]]͈# YnXL1#G2M0#MlC 4h۶-7t#F`˖-lٲZ,?dϞ=TTTPUUG!C曩̾}̛ /Mz \е غu+͛7g̘1TUUsѧO{D*Άa׏͛7sg{1x=:e%$毂'*,bm~-|>=ztSde]T[PSRRYgE]]]C/.MS]hB6n(/.]`&R84;;/C>.Ncڵ|嗌= .eV]]ͼyKH| :mpB*++Sl)ᾦidgfDPP\i$lio+e#%yfӿ?YYD#Ca@ x:ڷoի1M38CVB FJZΝ˔)S8S8p_ [o5d҉ڵw Pf>˗KXL<4v AA&őՖh~kkd ZD%%L&e5 lK҃h? DHtDeC.]6,Z۶r䑭&|up8lڴ֭[sUW0aВGhxeeeՋիW3j( f2p3Ձ{v+-\ /sjҁKzI~~~igw*I~/ԅiZ&~3uBFGdD%%`ݳ.ƿq*mر+ロN:[nE֦PDH&k*;|/r .rվM9W,KV88?ض%}qSYq=KgkӉ͢jg^(p磪PZ\Hx} 0:N?tz>~LӔ9 (}F"ʚ5k83ӧ-Z>eP>v?[k}#5Dmݙd2sN?/tJ7̤ /LN?m\,(x/pvHLϺuرcG}R9'% .4WؒiTTTH\{UW]ŤI8w~ޓ=4FR' q9YSosoj( :4Z෰pLANN`?"ueAMcCUu-_*ZddzVHۗs=;vpkv8K ˠ,*Sdz 7I'͛e 4g)i2goۆmgK H8ݴϔȪt]PrҡBBJ5PRٯAee%ݻwCXUU%G:L]a̙zje]&I87oܹsʒoeee"D_9kwfݺuHm&//vc$v;Cf\{/T˶o֌h4V?F2t牡@?]Yg oaǎ$ C`"qѹsgZn,b<`ι՛siӦriqM7ѳgOVZaYh'r)`:Ůi`5VazvѪU+jkk{c:6h̬Lbzpl232R-]uJKKec"M":v ʼkx/1kleܱqēFmjFuu5iyfˣ>Jff&OfӦMq^/{n|>L~FE"`ĉtؑ8*JaEZQVnk׬q744ej3tiUʾLe뷔E$駟+?>`+VD8#;"#|R \=zxb <@vvdwڕ^b)jnʲePUU%IfΜ8t֍cǢ A-[dܻUTة#hiv87eJ.ZOˢ":wEy<{0{q.f' GO\ $$aDؽ{7g}6O>$={ᆪcǎ+A<H~d2\@mm-ƍN~QYx>RWCs}m—_|i<٤݄ۅjiI^@5iH40l a٧RRRk׎m۶xbL$??_Δ} n+E<\s5R 332dcǎe!\i2a0`\r esN^k˖inhnGȶ͔zŽ;Yq#GwE-e0ƚD+(nݺQUU%`VTx\^kV0 0L8e͌9o.]pYg1ՊѲϧc#pqpI':UUUD"jjjȠNz $bYY_|񅜓%4KddQye IDATZ:wV񹟻GAqO& C\|EĢQ~+8x}Μ+8b- }>sJ<:~2 qٳ'C n4M^~eZmۆeYlڴg}ȑ#GUU$N1PlL&(o@USN J P*jYȮ8۷%uLd}פϴz%bEcq<^同'ٵ۶XkMFvv6Xs=Wf^ h"1m#GWiެ/".ʭܜl6n(/Judߙf L3I2d޼PO+3tQj-L5SmUu[Цgb.?MQ4l[Ųڶi$Qu~¯rϹ<`0z;n]D>V`$xt Hb^L $33L!Za Bz:c[o1+xګQ}~|LǰLMu ¶L_:hX(XB4'IUm8V}e%/EjD.!7CYY&q`&Z2tM#2B1-|>/KYI)ѳk8wc1rHw.֡P HzAQ0R(x^fϚo 0ajՊ?[@ ZճjT!!SUU;Vh3 7Qqsmڴs޽ۥeb&rMC^yaڶmˊU?6VZW S)"oͥ\bVSOK. 
04qphg;Ӻuk+sN> kZ9q#) q,BwΓO>_.A*œ lۦNzKMQ9s&C Ų,&NСC̢jC ڠ6-Xb;vH-rǃi ]]osD6 8y=b ,O'H0{u pp.((`ǎEbZ n4н^ڶm˔q؛$gj,^FI8&2{l+X(:u-?H]]w~;3?Ǫ+F ؎a6^l={ H&D"4MK$q|D#n=#3jiޘW]s>E]H"v؎G`X?AEEE,]_|DIҧOUh=^Z8w_5gd.7F1,szhʕ߱k)"gJqv̀~ i$ N]sΡo3(,,'++h<{ s򨭭?^Οr5a նmy GQ à4ٵk s#8#S EasP(6RJBI) Uorȝ Ì3+P?ߟhQX-SZ% ?GII6" 6K.eQRR"[D.@lUS~"MDt¶tQPj 򜕕E  RKZߛ;3T 4j:^?d޼yXѣ9躚|8hP(㡾[b&'|2[nsܛIEE۶mSN->}5 4\Qb' ζm|>~Mb(^UQd{0ƛ_zʇRQQR&eU˺uxͷ{s|{+((`2Yjmm-BsaFFڶ`i^~u|x2e4k֬IE'Qxd2I^^>!ⵦ.WJTUY^^v2;E$!2TPHOq ɹg.]CIXHիWɦ<6n*+93^\4 c5\o(UJoH?p>?:+ R_D{﹇Syҭ[WV\]*$USxzK.//e֬YDa0~駟.,xd>s[ ikmֶlj0;v ;;ߟyawƶmjX>u66MV\QQ|y}x*l2JKK9Sͦ۷S1JE!i$S⺂h+ņB87a qF;<&OK ++Ӳql_o}>q4~F \1QoJ?.]w Q`&{R:))XH{x"mu~n.ؒSv! 6rK999b+.eFF^^/^NyU%~c /84_(51IiL  wƠ$ \یWY9gddHq\:S2dH$E9[rJ{mMG|$ 2QUW%##D"cN{2^h,* ٨Pqq׊ǴLtUuΪkK*<<޽W~Rx=d/I& d{|>6nXG| 999ؖ/,Tαm2,{w}ߟ.["sfB#0XXtnMG"`Ix+hQ 瓓CH8ks2kLN݋Gvn.xðRq_DG^^f$pTU=c躊q2 bvIbk/3T6ݹcc;3m/^Ʊ=zpH&x}7JM}>g&pw?GQNCr 'vx<>TUm%S` p[t6l䭷fˊϵNpDnݺq饗~Ho%N]]vj\-q[f2I(ml߱Ly'f o{EsΑ~˱h׾=UUPZQ[hߡ- EUN`5|>WѱLWR?kY)vyձh]\k˲td5Kq8< EEEhтX4`w{ٶ_ZZ*}F%~Kmx_^Oz:vű3x饗|LrG>RO ,8ph#[LаM0}z?wk>iz (=lH֯]eXx} zVJBajՒ9sbh) 4Ez5*4MnQ ulz۴ݻwspD}7$:۶ocӦM|q ظa>(/<'<-pTں:JJxV}֮gzF}6YD"Q֮x"IP\M+*2-,[ک^Pt 3ēR$ )wyY%wObn8^ݍdX_0xO?͎;0`YYYcQrs;vW_ѣGFM.۶ecs  #Ѭy3vņ ҟXt1m۶/΂O>.›?l@ aXdddPZZF"?VS__/7ؐseX?Fvn6%{tDMՈ$"muY{Sܫ8YYlݺ9sޖ)e5eIQQC r P[1TVVRZZڠ2, %l3deea&z2GfР3^g6aQDEE Ѹ1KpG`!hT4.(h@VE(.&6]UQu42rqa=<9sLP8U(~z{?_0:uV_=@T555N.cʕd3iJKJ=U̝'1&%I!БevgEVlX$i x:׿޵W0$J:/ڵcOf]\$n:.rᰯ \G@R&EwV8 8TVZɊ+jZ3>>0}tcPRR m0+&̛/|/B!yd2㾿j)UP0u.[_54M!͢(̙XjO.v!Ë.vfpgc0|gՍxBr-%|۷qHH߾tES/I6m8xͷ)- 5P()7ĉF8X]=ٮݟ\{U^FUU3gXd @|< ࡇbtޝݻx"N$abp̙31M~yYp-{,z @>#{sYEQ^:nݟ.o_^x"(W)ZT͚qn:ۑ{F4>Sp1!ꫯfΜ9q 87zRtMe*d*4/`=wLcҥT_I̥eoPXuV : "/!,OUD| `zȡ1 υXpXt'H>Ǻ'4*H=^'<@|mٿ#a Eĥd+E`}(h #Axwyىh?Ͱ-˃uf۳X*bծD6K$!J]ߟr5kְd>h:F?s+ND҉Xp4¸"p'0QOC{S9 $IBP hmn;w>H |mw}7?;RS]E*!SU^-ta`0$)L6xj3|p)b|31cG)*??ׂ2#F\ɇ.ㅋСe*ŶG}CړNY|neHmETB{wPN" VtEXp8̿ow_?| P0W{6;1 k9A("1{lf̘E8oY>1WЊ?wuuuae걟B0D ه(eYA Q*`Hry.rƍG(BeG'Ikas$)^~i6+W.g8e*]8C!He3AMu ;{$I8d}c5Մ]AZUUlaɦӔD"2ut:Ͷd㟢Ν:1bB^e16jxgJ"vh/,1ן7,O|.XyT Er$ ^~euƐ!C)Ջ2-Z=dAxWQ}|r&b1VZ~57ZnڴW:u*ƍ#ҺukFGI)++-[RSS9yzS<ȣ\}2Se]_Mii)>P8L  J2yd,^Yv{ز,4]uV p8,ƝT2k5"A Ys"c&ao}8C q\1Tq}eY.#e˖$IN'@`&eSQVAfxqTfΜI߾}#M9xeYҼ`ٞT>SN!0fp쯇UY3|]1 *]7k v?߬YXpE ÇS]]׬Y_@?X/͛7l6K1$ѣLyE:ēIrX>x"IJdqp c6?mpƠ BlڴGyny .Oնm~(--%Ұ}ɐL% >sY,alNe,_޽{>hA, P__O.]H{BS`D<#8 <_4D7?6$/2k,>`:wLiip5e$iۖeKrcY|9O?Q6k׮9+?-pc׬Y(Wb;'iJwr~62qD=%4|+Hų[ӟ(++P"D`ܹ"ІP`) ;s]T?ogr)\p444>jF.Al@I5WӾ}{t]'seӼדL&4O-Fr'sΡ~@vHئ;\l8nKzIjCARUzjQ֬Yرc9ꨣ4Unf^ 唗{ewC v w<58^?~Uձ & i&|mZno~_$B'A']qCW_EQ0 SlP@.gm53|tUCud/_N.)ߴ")>t?<ӧOK.W^,ZY1ت:xj,,"L( ^XX;Ջ!CU7De%0lD">٠TuhR8ӧӻwo$Ib|ヤ+6l;I6еsgX~=^'nr޹q hoI418"^pp<Yt|.hd̤ɤ2JCCd2\xD"Opχ^e]'p}TP Df,]K㐟L)a 4X,ֈs&e|gDNJv\l׬/M O!I444`ݺv%rLJ*UL6eرjՊ?O>h4J2e(`~w2\&C}C=`mZӪ>h >L}ٔ+]SA*RUկXE@kJr>QŶmJKKC#ִnݚiӦqOpB^N=TW8)ywGM2ԭ /ӅߜQnԃ`ӢinzO}yTR 奘yT*C jBYu5t%K0i$Zl'i~OpΌۮۡ]Z/YYF,QۡOYB}'_YF$ &xoȲq)&2MYYp!۷砃5Wl@(𽕾?]H(ĶupSNfSO29ͪ65Mgyyq9PRRBACTxJE>g~$ LəU8H5sI'}\x"PSSÙg!YQ¡:E=9p 5;%7)s^NB+ hYwࡇSN!1i$_BKL$<.f a8 7=L4ԃHHlް7T˶|1q\sy ,4L3,I̝wމ(o#q߂-V+=Aݺ_⅄!$r$t:M}qV}S 5)ٍm1DcStU 艗ȤR֭[G߾}ud7X4&OL  ~zEVݯ\\x(--Ed$i}>FA_sвU1.sRZZAD,k4%Q(_$Vj AJmxJOsI#igSskYg}CyBP@"[E4ek** ;ܹ3:t[ni7C]r%eNٸq#mZlt>{mÆ J N" $x2I@vzdy͚qPn|`A++-H4+0o< ۘqNm8 IfI¶DeZEA+PӬ9?oɚ5ktr9jkk袋|i*)6&;XŐ!Cxb$ ZjEP ͒H$ܹ3Hp8Ɣy= @Splwg6i$!󂐡xBƴ-ºBYDa4{.ZĐ!C=zx܇oU,,H$¼̈́ggtҙիWI\4QuGDz^_al#N4qb+R\mLvPWQ$Ή\>uaCP3pUUU裏2b&OLmm-W_}5;wf㺵Kf$dǢ  xǹnHpbQLPd2D"^{5BC$R/+U\" o+SS(H.{SgHn~;q!nݺF@p8{***TbES p`] NܹQi Ц:& ?PY~=mڴ_~b1ZjE۶m) (\-|_\jۦ(ïmH"+ 1555}ݍ}iiI'ޟp=~y grd&MP G~nPӱYgu[aT-۶QQYm;n2K\"D(QȒ 
IUe4U#NL&IsYTME%Lå<PkAF+ӋlYg<R?F\?0|(Rxu}ݟX,Fii)̞=;3͛nc\{ߓ^\3ߍ>`ZM"diTUY>cclBUva.CVe q&MmðߞE˖-$'1?'U@)S0ӭkWu! S $<=Hv!pq*6Y :Ac!4O*Jӿ^_qeU!i`yu8L:7 &ܭ0 Z,jՊkuPUP0i$SIt]SNؖoCn^%%UxB_}=wQ>g6l@۶mд+ c 1HgiMEf ߂Qdȑ!/ wua ,YO'Pr&A4?8K_r8=X.2y_3<+B&SN~iܘ{MVpoH.q7o*^$IkK1mXY-]\~eyHRXǨF?QLqƑ9sPbeG q?lyz`8H,&]@ mne8sz{>L$ ᰽n;}X,FOvw%]q͛G۶m6u:N" #I?|-~e+mݽ4TQ>Ǹ#H|gً-[ٳ'-D-\!Wq\yd^KRy,4$INl$i.\HMM z+hD"5\e׶,<׮x<"y7]%FbNeZns=Ǎ7H&7^z)gʕr9b@vd;~!uUU)7}$ID+ in6TUaǡGO־: iRD5N3n8Oa9|T&,ɔD UU,E=y6mB]C=PR! pI+CYF5v /"riб;Wq.0uQ̜9#G*VmڶIq7y&iG3ц+ZwRX48mۖ _3s,E,YdslٺD@ @IY)ҘwA?Ix/${]ױm?0r0Ft0(<ڴiC0b4-z'px[o{Ν;ǹ1c]v+G׎jl/Ms+^la k֬}iz+Ҏ^*8lEǥqoݼϗmvU`=~K2LR5 q`V^믿N:{;TVVL&)++sϖ$r!『mz"ȊLP ۲4$#I Tʝu \(PVV0tڿ3Qmo]kKeeeaN2͛7'z8|}ìepa6 Jj{y~H:*++9$IBuLBضe+eee(Ml `0,]QFn:R"W^᭷ޢUV4k֌#<۷ӹsg<ݺucԨQ\{?P֭[& 5bU<_LBjidV.]v̝;{oѣGS(8!(B.# H(-)!JҐOHD>Y3f!2nRq9/q@r)//U(*tp8W{Yv_ RUUEc&LԩS[E"b5 ˗3pꩧr(LҪU+2e <+W\.G6md2ah*giL&(antO$7Ȳay? urfϞay5EV蓅` +WazE==6Ǜ% |?4- V@0̉}O8t ߗj>k9O&hբW4(WVXp!֭W^u](VfrX{ѣGsyUVѱcG: .24McذadضMuuV:;l6˶m|rFq .;LT\Ŭ3N[Bu5 ncܸq8ѣyiޢ%Lt6Kee9qOLwAȣ*e2ncgQ[[Kf͑X !TeE%JSU]ilٲٿfCQO1 XEQL&C }AOMD*X(ߨ+jȎ[ͬ Tȣqs'ӥK Oii)9x6mbƌ\~̛7)Srt҅r֮]wUW]ʕ+9xgyI>$27j(q4Si4F>Sp%4E#bCu?0r&aԩ :g,W-iðL6KmHDiy)^}5]O}MJA[Iin\&!-so2b?Y$ԦTz(UAL& B`chLv W>ĉ>|8=0rHFEVx'hӦ|D"͛w;eUWWӭ[FZLl68L8]1L}qeW=7ٝ;srp D&eYv{_F&vݐ[dɢ0M=!x'u:vaM*"rs++rmCm۶QŮ>3`qʘ>}:w}7SNeќ>DlY}m y;v,gy&ǏkaŊ;3guuuAh9JS0 XVC(  in3<[$2gS$vZV\I޽رyjU<#(Ϫ*ه⴩][R J6mYs,Trr5ykrIϣ ⺑xip<\|>&9JW+]HBT% , N Y}b.N7b%歷ޢs<  qꩧҿVZiMR'Au ^E ˲~RFŤM~A$/iժ|;,؎Y0)rȡ=Yl'OL&=*lcúl;Mh#!& |4%J7Bp:vի?~AL,3i$.F(r.abXF$~u{=7 ٶy+ 25͛brl ˛ϐƹúQr"KdșC׿ xLݚ㲼,4 lLS]L®:+Wv¿0 zI$_3sΥM6\zTTT0sLK<o:y,,QPcXGq0. J9%7440d~=Xŵ׎Gi޲9[nmRE}{p7N"H8aMJ<L 8#,'?WZXԇ{U\=KĀ8<5im'`.DS545,"cyqp;L&È#|z'x3gꫯ2sL}G˖-d2lٲ%Kp0g^z%.|ɴi˲ظq#۷Y%%%,wXT*aᠫŪ ӛp ň8W8]vl۶YfѳgO8FH \;DАmtm=zՋ6$6P/{12mur-ӇٳgsGvZ TViؖ3X|%F6 ڮ1\TUCEŘ($ N9^}Uz!<@֬YèQ1boww}7555lٲ'6L&I&_R[G.3I1F}(ղ,1M;p1apͷrJ4t/~mu<8$ lm_.]oRK)͛]&7p$NJ|H]v~EU<+~A:t(^z)0{lƏϜ9s8p  "J駟6IGpڵl޼HJmTp]KTaNG4Ǝ'g~6lꫮ!OPYU?U?g„$ ^;@QUwȲ%ٯr U50˲5Nwtbɯp*׃_~F{\C0 vD#ޠLI8ilܴtM# Z.#?k(XBRO /Yf>#W`XQuQ&O̴i|SUUWXU EB53gΫz.6s&W,'L-WK6!TN*bL6-Z^Fn\Ň:+IR *++ٿkgY=]ӄRٕrq,l^, &^rU<| HyXTVU2B8.t0 zd~:vyry]WVV1 ܾ[mAݺb 94M{XO<:tdL:P(D@|RMQ0M @0Dc#!PxA/^{gE";묳8餓fr-s=lܸC|z[f;Ylt˗ϗO>!v1nPM&~(tI,W]u,W] *MN9cY0>i8ڶTb Tݭ\1R7W.т ;w.2ihei۶m.M(cP|UhӦ {/Z{M$ qFq 6P(~CkuV~a{=B]AE d,%%%$R}kDp/֮% V@.o,lȲ G>JI۷( HGy .q5kC 駟f3[ne}LoƏ3gSUUE>cIdYJKJ0-d2<***ضm|MӨaԨQK8}tJO()*JFdb0gϞs=r!yXer '|bkr%0f~_qӪU+:$?bɪxE@! YQdٽĘ;ҬS(DY,-m GMz.8^'zꅅCX82Kw" JN4oSTN;4?06oks'I^e) `\&òϖb6}1[6n+<60gaL0gysߠ] \sPY^Ιdksiݺ { .fdU5mq,)L%$M}ٍTϯ T$!dY&HRWW$Q#QH?rn&N?tzPx^hꢋ.b̙L:˗s70ʮV֭d)cq|L}x{eR.)wmނ,?SXb%SiOfd[nc=x9vTt&dbήV"$(-ZNh\:ogg挙t$IqSۙ4w?w6@ y% }뮻'|+W裏rLyf/e:;ׇ3`yyg/*ȊL4 M֭[>J"'׿!ΤpUՕ\qmۖSBvs W̘1gڶ*x*ڵk4icz '`[]v/4=n8.F-8ZBuLN6K/eb1g7`zv<. ms?3I;</9xCC=e˖C@#[|h3p0ϧ_~$>t֭O؃"q:48ga޼yȲBы B9rrM}A+NJD.ő QZV3XrUސljkk;(-)aM!wPd28MPv_;@6e())Ovދ=aSMeƌrhO:Hj:b TUf{}f*Zq><@^z%OΚ5k8CxrPE`IXx|@g8Hz6B>tٿ;Ƕ->t!L444dPU0e?hyG\H]?$T8JK]zk:뮴KڛPB3i3^W"KaVTDc=;3[dY0e *mۖz ozPTUUC׼ys+7vZt]G{va̜9~q뭷6" ?I"+4ɡʛoD pz`$϶mdt-X\:U(邏xYt K~yGee%o_W#_{!2pI22H+ƙCpu#ټÖd M0,X؎+DlYQoF]Z=a]W b-… 9䓹+x7x&6dDJbXM0a[]):"+hzRck*F>ǒŋx_1`ӆ<EU2ٺu3ϊ뮣wTTTpM71x`^FaӦQWTb(P(P( fŇF|mۮ L^ eYCpe H$R.G(BrA("{wGY[[%|_0i֬>|.s! cxsqP$K?CML&Yrl^ Va0,^B#o#<@4M#HPZZ+Bqt]YfCI:&RҊ+8hӦ 'N$6) C8d*U(iF,Ǧ̵5dQu ՃeJ>ks#&=CŔ)/S]]d9Jc? 
`֬YlْGyH0q&I>*Q@IBSŇ?XqBe 41<ϰP8,˼\|xeԭ7x=OuU5e}Y>ą=i4Og\;Io'\&f̼8 1jL?d2HĨ$.lޜxCDP @>0 ,\·1-Ӵ<,#8q< 7@ `?rI%]-޽8[=B:7 Ns饗ҥKA-[De~mL´iH$SVVaM>xPOС M>x݆uظqeO&U`8[)DQb!}sw#3=uǟxλ{iDyGQ9} %tBJr"Cp'g"(U@!Hlٙa5<y<Ĕݙ|Ut.UӁyyylݺn)SM$cĈx<#l[smʿ { D(#gՊb\j{=nF/W_} ii)..뿲̓cs.5wGD"\E^KNIIIt2yd_ɡw^̩ !ݨIBMڽ&McP%pX,f~?NFh֬){}-[K>S}&iDabRg̓V)oKWKXu)==C石tR7oBUUrssxt+Won:֮]zdgg1mWnNM;/xZVVFyy9?3;viӦۚe4̛?u׸1 -5V^Ϳ={Y&+VO>:V˶[ٺm;H8LZ4k Ɍ>|tc}Wٻw/[neٲeԩSUp8e{Q+W?$(IzM6 ʸms'2h1S xEfM|d:tH(4*mZ=~\^Z^JIICˤѻwoJKK9{,Ofʔ),Y-Z0tPƏObb"fd2Ѷm[1U 6@ `؄˲ŋ[.*p)4 ƬO>aѥsg].)Ƴ C86RSSbi3Q%B:uYj5EEػw/%95n&|~?G̙3lٺ{s5JrrAG(((@$Yn]mƑ#GX~=ק^zƬbתU*om۶.>.M8[:u BFF|rGÌiۮ Ő;0&x~f^qJ*Lc+Zd8yLfLl2b&9HKO'jU6n]ͥYfZ:_ nڿ?}T1s.d- $l$B|~6ovZdLf3HI+15b# R% Ms(?s}vv`0TABQ\"|ӸqcfΜɸqhӦ .3gd5jҥKѣG,_=W]u)--72O<ڿ< !L$]$ Nu#Gq:$%%NpNp#O9w,>DRR/2?s>;oÖmزmwuz~l60֭vsגw}ǜ9s3f ;v੧(̻qCUpa+ xbϴi8~8PZ5}ڷiKBB(Q*v#a6ID%v@uCDE96А$]CjĪܱ%3h *E'7v Ϳ1c 3f \}RZ\l\눢tQ{;v$ QV- CzOZP" YjY%~ fnE挺~X;vG3kDbRq²bUrwPē vrx6l3gb vbAQ}>x`~?wWD\1)((@eEQkzyi׮C #pc2|0=z] BTRtgHIMAӢFeٱt hdEu$ãITKv۷sMHMME4Nn[͛7+mIDQQ }ٳ'yyD8u$ BQa!;V6uaUGz˽/m3B Ӊ`-*8ä瞣S=~N_>3J^e P^{FzjnjԨaXKdl6ӭ[7 Ԯ]Q p@jj*C]L$]L46DQ9qn RS1ÏS>驩؁&?-oFUUe*wd:$p:t@f~< 3bΝ;NJ+xꩧ4i6_"0lْg}lBN~|PP IDATpH%] ^x 0n8vE=?~<ӧO[n&IH G>Pڂ0@q񪊆~OeT0NCIi&qј6m暫jjHĪU((/ 9za8x`jժΝ;-V~"M:ФDؿ M5r(auV{$~?ހILII!-= z'|,&Mz-Z0gΜ zVޯ2Wv[.w={4. *,*U0w\6oLڵHOd20Eq\z*0 Jb>U #--Ǐ+0l0RSSX1>FuΙ$q ͛Gvv 6$ i*&Y&gPp ;v`ᄑdeeѤIS|~ 'qg>>sWJV-uTU7BF(--ܹ@k+ Qn]ڶm 3gGUVnB!kvmؔHzMv팃V-½j/7|>pOP,X￟?^I馛?~t3giݺ5ݻw@r+,Ȳlk֬ڵkϧm۶\.4U!bY0eB0;fl6+7n$==wӫSICU!b)v]$̌L'w`ҥ,X7t:56]%P~`ӦMt֍5jl6)*QMr VN<+h4B޽PPG^Ul:#c͎b%1)^ Yz5} ,tҸqc#ٕe ы`$˲A'Jmݺ5~-ׯvS)c6. .Ʉnj9WT"c-6Gl6<|r|MׯOӦM ~J*"*u9N] ?DjILL$ HKK%''jիv:"^Ln]q:޳$rsk1%/[ҥ?tqƀl65nqlذm۶1{lft:Dt5֭[7߰o>^z%]sg"nY2e d̙d _ֈa}Y&Mlߟ~>}@ff&G043t&",zPFV|cǎ*o`}lܼ9 ?`ǏYIT+V`ӦMlْuE{*VҐM I l>UW' V"P:g_WfDTS$fL&3LTZ믿͛sQx +TUvچΝ;KZŵxΝˋ/ȣ>{Ǿ}hժwuCk׮TZ՘ z^D2w!&*<U Y& n:JKر=Gs9}Ww7$33ӀpAc>yE M:]$l.'O}ԩSP *_"j |̙;SNҩSGn<UUp:u;N*&&٤'ѨB0" qVffͰZ|w|׼+?~3gΐI*U*i Rn7&颹lv3{y$ sDU"|$D=zdٷdɓVZ(B~~>~)ganuFJJ c޽,]~l֯_o@{5k.P --7<k׮eǎ,ZEϞ=ꪫUs0sá̒DTJ DG( gʕ|`!?QZ5ڷoO5)//סq~go gϠ8իG~~>ӧO';;߇*8q H|B&OI_ƪE+:;iH&s$Ţطo۶m3SSSȠQFtޝtIؕDI \e2(//7=sL0{G3zz!͝ˡÇyǸmm8tGDv[x^!D R222B<裸\.~'-f(DU$ MUQHݻ54n҄:]En]Ipc-GU6mʙSxbᗨJ5_bq\9s@0Drr2I=qk( 6go6ܹs̙3zIHHӇF2e ӧOs,X4j׮]dS" ڵ5kJt={_}v˖-[UVL&JKK $&&N?ٖh?j4MnXl҃ni{FrqITUٳdee4_q=4o< 5\?jB.JԤddLIQ &<͞}* ,f Y*z|8+JXL:SO].1SMNNp8(|رcǎQZZ @^^&L`РAHDYY /ɇB .@eZe/t:i߾-VqYd=A8u&ۭxJ<"۵X,lْŋsw3f^} TPN>/ b7TժW=ӧ7[fϞX-2eȩ]{ɏ?GӠU<#Ԯ]ߋ׫C͒DRj*.n:x zň#ׯ.'O䣏>_MӸ>|8v~B?5jp Af III$%%aĉL:p_[.999tؑ:uꐒ}D8,sƢѨ^ s)( Ղavs$?Wۨf.=!P^|ͬXwy:u& ́3M^^I&a5(J$ E۶mIKKĉڵ9s株*5"--ͨR~?iiit҅;*Un:Y|93nFڵkܹs)((8hМȘ1c4 y[Xl5j 77cҿrrrp8-B1BŽM86Zb.}zIƐ2HU͛rAbCcի,\q%ěLzEg1uT7o5\C ((("A2U"Ͳ 0=K3aA8ARbBlNb6-/I&4tp$M2vUׁcds{3U=zoo]65 Zץ_'Of,XkxM4M6 :+Xaa!T^{nΝ;l߾GhN82臋}Æ ի( ZhSO=Ե C!2339vKOOGeذa/g6Bڶm˶m裏Xl$Q~}MUW]e$am#G0{l>vڅbgaر8Fʹiӌ/ jxŢ/1gg( AҿRbhiFFASlDB!EGJ~f$ZeL&c|oBB q&ch  @+裏7oSN3*Q \ DU.ZW@UF47)pm}&d 3qs̲@U! fd 岓UdYWdV,~\#5 " Ic:2U!$cK}:O&;;^{Ν;sAq`JWYpJJJر#)))^~e&MTW$,1Q4%AFAP%ƐPU]:8q&A0FpZE9" Gp]HhaXq:G [dB8b1#@h%J@E&MXnƍ̙3,Y={mm+xj-ڶxs6m`0i 3O9GsϨ{ymw,f lFSD(1y0Mn@U5 QOԩS?_@f Fj% Φ Q $7(˲@E"Ea4jԈ>"l6iiiҬY3mƁ޽;6mbѢE*/`ܹ;4hV2{7f>`B`SH$Bff&^Ǐ3qDvA֭ ߿5j0k,EvSRR‹/),,np8 #p؀lܸ1cFSR\_EjA+ TmW׿6 x*6̎;Xd }p.HVZn:^x^//]tѕʤVZ4lЀd>}:'O&jdEuOSrAk.!!H$Byy9vR4vŮ]$ݻs9v7",ڷoÇٵk˖-_~dff@_C4V&e˖i&x,b[nEvNEx>$ lW|C#|Ӵl>_2U`. 
juH /îG+WdƌTV3fr}*Ĭr ]C1ddYG5|DLTMp(}D"\۳]tAQDu=U98mv>/p~}aMLNnݻ.MFcTe*+ nfp8_Wrss9r$|>:t@6mgѢEbt9ks|w0Xl&M2uQٵilْ7|zuQZ5V\ɸq(//'993gr4h)))fZjEVfFvz$&&wa`\`0H(2D$\4M3x}~dU18Q[iмbD}e bܧpX!=5Td Q ԿKD$$lDBmv#kF*U(++c8qa 4Vp%xx?HII }%---g18hUo,2k$ѳgO\.1մnїq¡0PHDgРA6mbF+WeW("IĥKҽ{wz!MƻҥKꫯX`M6e֬YtHwZs8q?O:ua IZB3L4jԈe˖1rH@W޸q#M4رcL8#GPvmE!--S9y$3g_rYhΝ# U4m2~x, 7o6nwk 9CvfyŵW4צZVURRy 0x/ƯlFS5TMEUACb1t9[hM3Ҥ )4c=,d։bFf2a R%IɓݻcǎߎiF%G땴.p5k`Xԩ999>b19z(?Ā[S;;؞2a2ɘeK,GcUh ^RBgIOOٳG\FwBq(q@U*VYY^2Ac\.FI0d…Z%K{nzŋiժh=QYh0rH~h8C4hΝ;a2HNNk׮|L غu+`'2vX$Iqݲe k֬! ѬY35jÇ-jɓlٲD׮]B5m?FL>CNWUWudС$$uH$&Y.ԭM qvLYg9{Wx%11ٳ}}olV.(hL"l޽qq7Re-E~%@b~ɓ'STTСCl ,3}tvC+>%KRUUyj*222 Ԗ0UժUcذa߿7i&4ijEe IDAT֭ˊ+ذaѵkW7oΝ;9z({e4lЀ_'&&ٳgٻw/k֬VZ4m-2? Tr|sH2u;vpDŌ5=8tilT%f9ȑ#n9|}3vu?rԩ`1A]RbESuH&HSN1zhׯ_4mX}osۄAK 'Orѷo_CO$ABbJ3?]#Yyp-(Πk<^6O%P(briSRDl3U;+'Wθ*[2sЅ7#*&_Ϙ1ck׎AaZ5j6l_~l߾? RXXK/om (m6h{;|3m#!%w9 2[Q)EUay(x:~s$&$ IMM&5- Dn^'jժEii)7n$))Ν;{/~+A}<{&>Z}LdY&==0\s)rr7#CBM6aZBHŬ6+fr$]0U0j /X)SHooL8|09}4 w1 `@ @8uֆBȺ 7|3 55ߏ/ 9Y>NgDIHHl6͐zٲmj⥗нG4M3CCBb"i 2YP)+rj׮͒%K(((̝;7#$Po69w:tjl26l@bb>u0l6g0|p@h֬Ta]]DpՂjAUbnҮJ&3Y;ォ34UxI K b9r/FEfٌUHnf3<~{VVD%j U 0l0-ZO?͐!Cx>}:C aϞ=Fu(lC\.Æ 㦛nO>a„ ,Z" : ,`xg)f?N.]Ϙ?>EEE8JKK޽U&&O $IppCe˖7 vء:bpX1ڍɌFiTspqڃi 4KvTjժM0ߡ*_kL9AdFx?BzA(OOOQFٳǰ$q`Ϛ5H$D-i0$S0J&Lff&AH[B߇b%Wb[$&$Қ+K/+/*--6VRRalhٌt:ׯ"CP|M m61kPUd^/$q-͜9sXf }vV\i(brJ1Ю];^y8֭[={6 > 0 !4,\VZ0`O>$իW7x]nULuA͊QPaL+5FLgB\;w5t5rqZjIaa!( Z7^NJL$--B.,^z[u\\\LժUX,A4hѣGY|9&[UoЀ$R(++#91"s1BN<Iiϵ+ '[eVZx}>alVV\WRÇӯ_?oy<6 &ڵkW_}el@yy]* @ `lߟ~1vXf̘Azz:UTSNBAK&P~deeAqq1ɬZQF x'4hǎɓl6QLfvEGVBmΈF!Uyx?ZsHĨDa%Æ=aIG? ׇ403B%\.EQFFczsNfΜy>Yl޴ GKiI? :ΝQU͆餠D~frj]'p`>Q+;\& \u zAqqnQ^n#0Lx^={6@:=TVdNAF a_/ (Fw]wjXo"Ӡ[U8NC1E| >K/DΝٿ?~LN:t)zk3z2v$&&D̚1EEE4mܘQF-L tow,bнG7X˭?tb:v_K/Tw84hЀp8LAA{1$Şyi=JMS+t)**2ڪ Mf1SZRaUAdU%V 7[c鿠Dd.jk-(J8d" p&PRRbh5z^, NӰ +,ahcV4ޢb_Ȩlrc2FOL0Dvf?^{eҤIO :N:7кukrrrBرcǎQF rrrо}{j֬׏QF( -Zdm DU5ACDtI$GрN'V=R*e:㽈l67ooF0AY DaFFӦMn +BF |AUXV xLժUy)ԩ[ӧO#U@#JH Gî? T}IB=K*U 挌 Æf~~>ӦMCUUuF+hVQ(ĝw;ÛoI߾}q\Fb}UFF(V5;3SdN8IKK㩧2II wFT^zig# Ҿ}{RRR6qINPإ~v;Hd$#@ph(**̙3dee 0͸\+$ H+ g̘O>Iii!hf@͛ǭJYYIIIB!IOOOT8E!NHO?厡èR W4mJFFfI#ai2q'cAcӦMDQ~?[la~g>կ_[t:|Zeeeݛ 60j(y睿HkK +PO>|8d\ #!nh\٬Z4M󑛛˙3gR SLA0wHOO'S^^nhFJJ {%55M6mtX*\ `3f`رʼn.cpuʔ)<h%])KE 8$IP^4:U>999LyE5oƮݻ2ߣpg5d`(HFF%>SW_}ř3g())!??Q>0ҥ YYY;w.C po>cunv:t('иqcϟOÆ ɑq@Bz+/Fez-jժ?? Mf!:˅b0g͛ɓ;v؊rg2Ȍp[oq}dc@ <>4^x?O0fڶm9ߕFW^+jIdf̘W_}Eǎy:th+U.KRUѣG'xHLLԝNcp001b|$9z>E! H$—_~iXSeVv>OّM&1u\Im@Y,>[aw$D04E+L4>BbuQ}7W^yʞ={^as- )"Κ5'rQZj|@^^z "ȗ1sLz!, 'Nd/nfzz̙3lݺ!C0ejԨ7gxVgkĈ[x<f3BВeZ,?xE˩Cҹsg^H{ pB tޒd&vl"u=GA PP-peb9VB _L2n nuJ^/>Ixw9r$[n  Uy!ڧ| < y޽ลJ#{!gVXXȫ֭[:t(o&5jԨ %v52p?5\ϝV+ ۬/<`4MػB6˨Q>*Q dy@]ZA._1#dO?D6oo/d.v^%kFvt:y'{ Bx,_ܸVvSDl2"͚5c]Aצ( n駟i&nf͚etMe+J+2rT@ķ-V04xVÇgѢEݺ|<7#g( 4k֌/hWJ5%@ ` a` ۷}h";L㟹eβtQ@EC!XXF,E4F`( J }we3c}3b!q,f9{{ѣGWc 1&bl2SQQ^ExjG0({Eg`00sL: VO%E_ґZj!ֹM4̟?N: /Q63g+tR,X׿KհN?yBXoBO<ܔz)If`0HӦMe.ʓAA\^@dz%\ݻӧÇIM %d"=YT=Xϖn'L5y_MH++@GGkV0ϮJٳqݼK CEH҆h4m&|s{k}g]ZZݻ̤{&@lYQ"l6~?p;C5Rʸ~x,i ^O?wMVVAXQɽ~B6l u ɔFIDQ&Nȶm۸˘8q"P[[K0(?9)))k'i*d?3@ӦMtFFs9 60d)ev$PF#@Ҥhu8RXUՓ"пS0`q;9cBB! ۈ#<رcq8ȌEm1 t,\PjAV?(O8*w&NСC{5^EcXr}N֭_O4/P8}Gpٺ jv-0{l222Xf ]v>HkCbj6;&4b֛P(//+dСs뭷`rrr\$ӣYYYlْvAQQ b2سgFtEv6q399f3 6)SF%Y σSmmt`+Ю];f3FQ6&TWWM&ɓ's}ѯ_?FIII hCTk<ߋjh0r8((h?År83Y,?V׮A`)R_~9ӦMUV8N… e$~?z x賘60gٳF̘1{EZZȏd}닚aXZlْH$ڵke瓳ф?іMEE5:-K 9XhQKE@fhٲ%~! 
L }R8~?@@4i҄q9_f&Mnݺ>:6jYjON<^z%@zb=B,MARl.D"\z\ql޼Y6l6N'VIvEa 6QFeN;46o̥^*Q 9Ic~m6픔m66m*£(Pѯ_?)u\Ml6 e˖1|pJJJhllG nwsh4)ᷴdi>iv??.[y<233('y>rssiQPQUnn=AA4az,i8&==]'rG1uTy޽;W]u3gd2 IDATΝwKd[l.cذa̟?^z%֮]Kǎq\r¯21h4J'}嗓_|udpZbrp8j&4 k) a!} K&MĄ 4pPɑx=͆lf֬Y\r%'!#j"jh@?뮿r>E:@4BD#IBG'TEEE&L.]pB!9Cp;K$֒(ڵ~cʃP4FIQQ-Ze˖r)RR*, Ò˖-W^ax^:,.?|w~AdgyI&Fyʢ˅l5F#~2hlH|(Й~9ssq&L8 x%Iz@ yꩧZvfffJnާ4ڵkǸq4(fғNb@D:E~U׌{v-VTUe⋬_^6:NTPP@Νϗ;ŠAh4RSS#^ul6^̤{mƒ%KX|9PVZqs뭷&ȼydXl*deeQSS#{.s%333d &,^pB翲sN&Lx>&-#jD"C:àHm::t@0dڴil6Oȑ#{ b~q~V!u'7߰}vfϞ͒%Kp\rP@-~UVvx< 9TdBB0Km6;w殻O>iFBv"GLAqX,MFVVn`MB@ ;hxpb57Ç3k֬euq,6*mٲAQW h!k BR07;;j4iByy<0Nd1# liI&qo/QVsi &?ehLd2 KBԩO>$3f@UU&M/ab?gWA5{H҅+ 5ݻwyfֲ}v{JJJiڴ):~?UUUiӆ@ wޜs9rKQQ㵞=yʚ5k831=H&hxS.-lP jՋ 2z1m4~Id:fC Yl&Mbƍ v (Ӛѷo_|Iz!jEEM49)#.#a_0aC\}|aFH@ aN~9S$q\9/++KڷoϫԩS4sʚ\&Cl6IӸl7';J} D"R+P(I譾^fW`0PWWi۶-RL I=Bs'GJ;׮];i߾=wy'͚5tR[[t|"S8c̝4zl9;KlꚚ&O4\+‹x1wSO?' `2ʢAf\zK:C_644`Zꫯ>}:>K/9s$hNS_͛G8gϞ撞.pY.|P(DII +WZ(1x`ْaX$F?Kχ Fl;a ,^bH*^XΒBp\b ZWWǺuظqC;ZkC_ϖ,I(a]w YVq5<&Ԭm63g2fq8 tJ0R_ . !:s:@4s8pw}7Zg(-ko2yX%Fx<pnqq1_|E3BSTTĦje֭s}TUUѲeK|r C/QMv8 E9#ki 8N"b=s<#TWWL/vZ,@HDܦ8H<-[FVX-/y`IXE 0[,~l[]ws=PZrP<)}tig}6d陇ǂS)b:uZL:-ZPSU? F#FY6 }dg1dR031B4B E1QQYɟo3 n݆#FPW_KfTTUP]SǻKMMlqôhтyiiiʞӷo_C^h߾}B.fC=co`0pa}]yF-_ĉvѻwo."СZ`KQvѣ={v”mPCmm-:t=2=3odONL&:tT:BqӧO3V"nFE0OsΕY(2PU{3g[:q|^XcCC$׏s=N=T xPUZ,Lb! lݾOoM~~>FQ[[OzFLawR^"rjƎ˛o)!6OH9c2p8RgȐ!L>":ѬA#ѵ(x<,Y{+VpYg1rHZl*sZ(D$**]D5LFS9E#V| . $! lB1(Ïex 2/(F#4f߾} 8 Fѣ1 giӦ8veft 7п dO>'`y睜}$77x >#fΜƍQU]2tP:vHnСtE Op8&3t:/b])׍h4ʅ gm1I4F)((iUU999x1aԺ:{(../J 䠪*F oiӦ1p@|>ڵ CFFƍ?OȔD|(RN `̘1 0#GJBw}g}멪J7n=.Qzg_"3Ԇz}YElܸqL<#Q?O>O6K.f͚%ZL&I-;wrxt%QhuS[[˅ ?ԡ:>KR+帎"Č35j:ٖ84{J@G7n\ fݻ兩 xee%};w$ a鱗L ɘMCl!C Fb3RSSitQW[1|u  h aY F iޢN>}hҤ<<OYPDyMM ?<< ~MPUhŋ2d &y衇xdL))b=)zOO ZM;{ܳg+W$##@ @nHKK#333 |<ӘL&ȠK.; |>/K.`6ec*̙àAT@L`ZCy >Ç3j(*++͍1R) rT ɛdddwb.؈˝)@Qx{yQݎNGĴdh42m4zMC1Xof,XEg8٤rOQQfׯlكl6P@8?6㲘ج6ywڍkay"HBSGNNBAj(zMr ja}}P| Yx1>SO=D&Md?ѽ?.3EQٳ'3f̐bWL?s;h b dM}!^Ksowߍm۶<3\|2+} уoYfN(BٚztX"`Tbm?]c領\s533}Ne\b F}8\&3g+ +VFЂp^?MݲeKL_~D:EH N!׫p(',׎uD {vM)6;6݆?0&@8b8b4wZaRLJLcC#6[u =-=v[̄ñb˖-9p}>NS:VGVV&ŋ3c lݺ5[jؽ{7v"iDK/1bٌh 6wyP(D$GŦ uٯ_?֬YC4ӉG(Nn#yyy?-VBf45sz^L4'}Ʉ]/`ҵ+[nf͚svV3F׏fSyhǽKLx %@biiiosSVVFӦMJB)A4.HO|OgG X1d2%4XTz7q_fsl1ͱl̞=oF^o/Ō8].@ :F A@(?Egi 0c H>"F$=-U #Pp8c^ii, =zD:={Fygyq\6khP`4)Tn0bAh$7حvh5ljҹs'>])Sʆ2d\s ;RRR—_~ɷ~oaΝt҅n#F4#꧂ t_OvbGkBE Տ9F;m~EE|BqHߏE%pѠA<4Ǝt1Y̘PHE1]`"j\aMŨQ&jjjʦoHqNdh4Ug\z'<-bɒ 3-pT6M SLᦛnnY]^'+w$j*..fժU'|9<{=v;fsL0)3dKMM VMGg'NApBB{ɚB7SHpSC4<ӞFbrj1`B?(A$ йS'VZW sbҥ1g%j0ќ'"z x^-ZM7DMM!;0@" ۙP³ !Dv|g}8ҿށ}a~%׏eNFf#}lv̶[ooضm=N=ˉ#LDpl1=L7LB3oZn5~O@W|9'3q)z}[mz}q (B4q̝;{h[LbD ;N^/EN(fʔ)k.!⑨BX(`0ȳ>ʕ+93h֬:zm۶~^/\h4aGSUf(8PQ೥K)Q̕Çӵ[DFq"]Ha Gf|(F=PQ (ҙT԰mzMNetimZRmq<ݷt;;6ZÑfBb0вeKٛ~>xEE(((r @(zSj R_n2d-+NCCAQzVPHֹ4ӣ>w!V0 L_]Ox'y ̟??w-[͛qraF~?eeeTVVbZTޅBzG#mڶi֭[X,r~::~4iy18]N--HU c1V7&+Vo>.2!3?[C\B*AU孷}j,7h1Z8c6Q՘:o9OeʕرEԓO?_U 2aÇST1!D" \].WDm=K M9CzĊ+dBn~,F8̝;M?0btD.'*C.@/\Ĉ#$vZZn͹hNAAݺu~)m$yAsZj)3l,v+yMعs{`[. >/1p(qڶ/&&t`>}(}8ł"[*bLٱr\"Lիi_TDVVdG8F1MG=HewAF@8řAEdS[[nҡRRR̗_}Ū>E&O={ⷿ-W\qEEE . 
parso-0.5.2/docs/README.md0000664000175000017500000000011113575273707014744 0ustar davedave00000000000000Installation
------------

Install sphinx::

    sudo pip install sphinx
parso-0.5.2/docs/global.rst0000664000175000017500000000010213575273707015457 0ustar davedave00000000000000:orphan:

.. |jedi| replace:: *jedi*
.. |parso| replace:: *parso*
parso-0.5.2/parso/0000775000175000017500000000000013575273727013672 5ustar davedave00000000000000parso-0.5.2/parso/parser.py0000664000175000017500000001577013575273707015540 0ustar davedave00000000000000# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.
# 99% of the code is different from pgen2, now.
"""
The ``Parser`` tries to convert the available Python code into an easy-to-read
format, something like an abstract syntax tree. The classes that represent this
tree live in the :mod:`parso.tree` module.

The Python module ``tokenize`` is a very important part of the ``Parser``,
because it splits the code into different words (tokens). Sometimes it looks a
bit messy. Sorry for that! You might ask now: "Why didn't you use the ``ast``
module for this?" Well, ``ast`` does a very good job understanding proper
Python code, but fails to work as soon as there's a single line of broken code.

There's one important optimization that needs to be known: Statements are not
being parsed completely. ``Statement`` is just a representation of the tokens
within the statement. This lowers memory usage and CPU time and reduces the
complexity of the ``Parser`` (there's another parser sitting inside
``Statement``, which produces ``Array`` and ``Call``).
"""
from parso import tree
from parso.pgen2.generator import ReservedString


class ParserSyntaxError(Exception):
    """
    Contains error information about the parser tree.

    May be raised as an exception.
    """
    def __init__(self, message, error_leaf):
        self.message = message
        self.error_leaf = error_leaf


class InternalParseError(Exception):
    """
    Exception to signal the parser is stuck and error recovery didn't help.
    Basically this shouldn't happen. It's a sign that something is really
    wrong.
""" def __init__(self, msg, type_, value, start_pos): Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" % (msg, type_.name, value, start_pos)) self.msg = msg self.type = type self.value = value self.start_pos = start_pos class Stack(list): def _allowed_transition_names_and_token_types(self): def iterate(): # An API just for Jedi. for stack_node in reversed(self): for transition in stack_node.dfa.transitions: if isinstance(transition, ReservedString): yield transition.value else: yield transition # A token type if not stack_node.dfa.is_final: break return list(iterate()) class StackNode(object): def __init__(self, dfa): self.dfa = dfa self.nodes = [] @property def nonterminal(self): return self.dfa.from_rule def __repr__(self): return '%s(%s, %s)' % (self.__class__.__name__, self.dfa, self.nodes) def _token_to_transition(grammar, type_, value): # Map from token to label if type_.contains_syntax: # Check for reserved words (keywords) try: return grammar.reserved_syntax_strings[value] except KeyError: pass return type_ class BaseParser(object): """Parser engine. A Parser instance contains state pertaining to the current token sequence, and should not be used concurrently by different threads to parse separate token sequences. See python/tokenize.py for how to get input tokens by a string. When a syntax error occurs, error_recovery() is called. """ node_map = {} default_node = tree.Node leaf_map = { } default_leaf = tree.Leaf def __init__(self, pgen_grammar, start_nonterminal='file_input', error_recovery=False): self._pgen_grammar = pgen_grammar self._start_nonterminal = start_nonterminal self._error_recovery = error_recovery def parse(self, tokens): first_dfa = self._pgen_grammar.nonterminal_to_dfas[self._start_nonterminal][0] self.stack = Stack([StackNode(first_dfa)]) for token in tokens: self._add_token(token) while True: tos = self.stack[-1] if not tos.dfa.is_final: # We never broke out -- EOF is too soon -- Unfinished statement. # However, the error recovery might have added the token again, if # the stack is empty, we're fine. raise InternalParseError( "incomplete input", token.type, token.value, token.start_pos ) if len(self.stack) > 1: self._pop() else: return self.convert_node(tos.nonterminal, tos.nodes) def error_recovery(self, token): if self._error_recovery: raise NotImplementedError("Error Recovery is not implemented") else: type_, value, start_pos, prefix = token error_leaf = tree.ErrorLeaf(type_, value, start_pos, prefix) raise ParserSyntaxError('SyntaxError: invalid syntax', error_leaf) def convert_node(self, nonterminal, children): try: node = self.node_map[nonterminal](children) except KeyError: node = self.default_node(nonterminal, children) for c in children: c.parent = node return node def convert_leaf(self, type_, value, prefix, start_pos): try: return self.leaf_map[type_](value, start_pos, prefix) except KeyError: return self.default_leaf(value, start_pos, prefix) def _add_token(self, token): """ This is the only core function for parsing. Here happens basically everything. Everything is well prepared by the parser generator and we only apply the necessary steps here. 
""" grammar = self._pgen_grammar stack = self.stack type_, value, start_pos, prefix = token transition = _token_to_transition(grammar, type_, value) while True: try: plan = stack[-1].dfa.transitions[transition] break except KeyError: if stack[-1].dfa.is_final: self._pop() else: self.error_recovery(token) return except IndexError: raise InternalParseError("too much input", type_, value, start_pos) stack[-1].dfa = plan.next_dfa for push in plan.dfa_pushes: stack.append(StackNode(push)) leaf = self.convert_leaf(type_, value, prefix, start_pos) stack[-1].nodes.append(leaf) def _pop(self): tos = self.stack.pop() # If there's exactly one child, return that child instead of # creating a new node. We still create expr_stmt and # file_input though, because a lot of Jedi depends on its # logic. if len(tos.nodes) == 1: new_node = tos.nodes[0] else: new_node = self.convert_node(tos.dfa.from_rule, tos.nodes) self.stack[-1].nodes.append(new_node) parso-0.5.2/parso/pgen2/0000775000175000017500000000000013575273727014705 5ustar davedave00000000000000parso-0.5.2/parso/pgen2/generator.py0000664000175000017500000003354413575273707017254 0ustar davedave00000000000000# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. # Modifications: # Copyright David Halter and Contributors # Modifications are dual-licensed: MIT and PSF. """ This module defines the data structures used to represent a grammar. Specifying grammars in pgen is possible with this grammar:: grammar: (NEWLINE | rule)* ENDMARKER rule: NAME ':' rhs NEWLINE rhs: items ('|' items)* items: item+ item: '[' rhs ']' | atom ['+' | '*'] atom: '(' rhs ')' | NAME | STRING This grammar is self-referencing. This parser generator (pgen2) was created by Guido Rossum and used for lib2to3. Most of the code has been refactored to make it more Pythonic. Since this was a "copy" of the CPython Parser parser "pgen", there was some work needed to make it more readable. It should also be slightly faster than the original pgen2, because we made some optimizations. """ from ast import literal_eval from parso.pgen2.grammar_parser import GrammarParser, NFAState class Grammar(object): """ Once initialized, this class supplies the grammar tables for the parsing engine implemented by parse.py. The parsing engine accesses the instance variables directly. The only important part in this parsers are dfas and transitions between dfas. """ def __init__(self, start_nonterminal, rule_to_dfas, reserved_syntax_strings): self.nonterminal_to_dfas = rule_to_dfas # Dict[str, List[DFAState]] self.reserved_syntax_strings = reserved_syntax_strings self.start_nonterminal = start_nonterminal class DFAPlan(object): """ Plans are used for the parser to create stack nodes and do the proper DFA state transitions. """ def __init__(self, next_dfa, dfa_pushes=[]): self.next_dfa = next_dfa self.dfa_pushes = dfa_pushes def __repr__(self): return '%s(%s, %s)' % (self.__class__.__name__, self.next_dfa, self.dfa_pushes) class DFAState(object): """ The DFAState object is the core class for pretty much anything. DFAState are the vertices of an ordered graph while arcs and transitions are the edges. Arcs are the initial edges, where most DFAStates are not connected and transitions are then calculated to connect the DFA state machines that have different nonterminals. 
""" def __init__(self, from_rule, nfa_set, final): assert isinstance(nfa_set, set) assert isinstance(next(iter(nfa_set)), NFAState) assert isinstance(final, NFAState) self.from_rule = from_rule self.nfa_set = nfa_set self.arcs = {} # map from terminals/nonterminals to DFAState # In an intermediary step we set these nonterminal arcs (which has the # same structure as arcs). These don't contain terminals anymore. self.nonterminal_arcs = {} # Transitions are basically the only thing that the parser is using # with is_final. Everyting else is purely here to create a parser. self.transitions = {} #: Dict[Union[TokenType, ReservedString], DFAPlan] self.is_final = final in nfa_set def add_arc(self, next_, label): assert isinstance(label, str) assert label not in self.arcs assert isinstance(next_, DFAState) self.arcs[label] = next_ def unifystate(self, old, new): for label, next_ in self.arcs.items(): if next_ is old: self.arcs[label] = new def __eq__(self, other): # Equality test -- ignore the nfa_set instance variable assert isinstance(other, DFAState) if self.is_final != other.is_final: return False # Can't just return self.arcs == other.arcs, because that # would invoke this method recursively, with cycles... if len(self.arcs) != len(other.arcs): return False for label, next_ in self.arcs.items(): if next_ is not other.arcs.get(label): return False return True __hash__ = None # For Py3 compatibility. def __repr__(self): return '<%s: %s is_final=%s>' % ( self.__class__.__name__, self.from_rule, self.is_final ) class ReservedString(object): """ Most grammars will have certain keywords and operators that are mentioned in the grammar as strings (e.g. "if") and not token types (e.g. NUMBER). This class basically is the former. """ def __init__(self, value): self.value = value def __repr__(self): return '%s(%s)' % (self.__class__.__name__, self.value) def _simplify_dfas(dfas): """ This is not theoretically optimal, but works well enough. Algorithm: repeatedly look for two states that have the same set of arcs (same labels pointing to the same nodes) and unify them, until things stop changing. dfas is a list of DFAState instances """ changes = True while changes: changes = False for i, state_i in enumerate(dfas): for j in range(i + 1, len(dfas)): state_j = dfas[j] if state_i == state_j: #print " unify", i, j del dfas[j] for state in dfas: state.unifystate(state_j, state_i) changes = True break def _make_dfas(start, finish): """ Uses the powerset construction algorithm to create DFA states from sets of NFA states. Also does state reduction if some states are not needed. """ # To turn an NFA into a DFA, we define the states of the DFA # to correspond to *sets* of states of the NFA. Then do some # state reduction. assert isinstance(start, NFAState) assert isinstance(finish, NFAState) def addclosure(nfa_state, base_nfa_set): assert isinstance(nfa_state, NFAState) if nfa_state in base_nfa_set: return base_nfa_set.add(nfa_state) for nfa_arc in nfa_state.arcs: if nfa_arc.nonterminal_or_string is None: addclosure(nfa_arc.next, base_nfa_set) base_nfa_set = set() addclosure(start, base_nfa_set) states = [DFAState(start.from_rule, base_nfa_set, finish)] for state in states: # NB states grows while we're iterating arcs = {} # Find state transitions and store them in arcs. 
        for nfa_state in state.nfa_set:
            for nfa_arc in nfa_state.arcs:
                if nfa_arc.nonterminal_or_string is not None:
                    nfa_set = arcs.setdefault(nfa_arc.nonterminal_or_string, set())
                    addclosure(nfa_arc.next, nfa_set)

        # Now create the dfa's with no None's in arcs anymore. All Nones have
        # been eliminated and state transitions (arcs) are properly defined, we
        # just need to create the dfa's.
        for nonterminal_or_string, nfa_set in arcs.items():
            for nested_state in states:
                if nested_state.nfa_set == nfa_set:
                    # The DFA state already exists for this rule.
                    break
            else:
                nested_state = DFAState(start.from_rule, nfa_set, finish)
                states.append(nested_state)

            state.add_arc(nested_state, nonterminal_or_string)
    return states  # List of DFAState instances; first one is start


def _dump_nfa(start, finish):
    print("Dump of NFA for", start.from_rule)
    todo = [start]
    for i, state in enumerate(todo):
        print("  State", i, state is finish and "(final)" or "")
        for label, next_ in state.arcs:
            if next_ in todo:
                j = todo.index(next_)
            else:
                j = len(todo)
                todo.append(next_)
            if label is None:
                print("    -> %d" % j)
            else:
                print("    %s -> %d" % (label, j))


def _dump_dfas(dfas):
    print("Dump of DFA for", dfas[0].from_rule)
    for i, state in enumerate(dfas):
        print("  State", i, state.is_final and "(final)" or "")
        for nonterminal, next_ in state.arcs.items():
            print("    %s -> %d" % (nonterminal, dfas.index(next_)))


def generate_grammar(bnf_grammar, token_namespace):
    """
    ``bnf_grammar`` is a grammar in extended BNF (using * for repetition, + for
    at-least-once repetition, [] for optional parts, | for alternatives and ()
    for grouping).

    It's not EBNF according to ISO/IEC 14977. It's a dialect Python uses in its
    own parser.
    """
    rule_to_dfas = {}
    start_nonterminal = None
    for nfa_a, nfa_z in GrammarParser(bnf_grammar).parse():
        #_dump_nfa(nfa_a, nfa_z)
        dfas = _make_dfas(nfa_a, nfa_z)
        #_dump_dfas(dfas)
        # oldlen = len(dfas)
        _simplify_dfas(dfas)
        # newlen = len(dfas)
        rule_to_dfas[nfa_a.from_rule] = dfas
        #print(nfa_a.from_rule, oldlen, newlen)

        if start_nonterminal is None:
            start_nonterminal = nfa_a.from_rule

    reserved_strings = {}
    for nonterminal, dfas in rule_to_dfas.items():
        for dfa_state in dfas:
            for terminal_or_nonterminal, next_dfa in dfa_state.arcs.items():
                if terminal_or_nonterminal in rule_to_dfas:
                    dfa_state.nonterminal_arcs[terminal_or_nonterminal] = next_dfa
                else:
                    transition = _make_transition(
                        token_namespace,
                        reserved_strings,
                        terminal_or_nonterminal
                    )
                    dfa_state.transitions[transition] = DFAPlan(next_dfa)

    _calculate_tree_traversal(rule_to_dfas)
    return Grammar(start_nonterminal, rule_to_dfas, reserved_strings)


def _make_transition(token_namespace, reserved_syntax_strings, label):
    """
    Creates a reserved string ("if", "for", "*", ...) or returns the token type
    (NUMBER, STRING, ...) for a given grammar terminal.
    """
    if label[0].isalpha():
        # A named token (e.g. NAME, NUMBER, STRING)
        return getattr(token_namespace, label)
    else:
        # Either a keyword or an operator
        assert label[0] in ('"', "'"), label
        assert not label.startswith('"""') and not label.startswith("'''")
        value = literal_eval(label)
        try:
            return reserved_syntax_strings[value]
        except KeyError:
            r = reserved_syntax_strings[value] = ReservedString(value)
            return r


def _calculate_tree_traversal(nonterminal_to_dfas):
    """
    By this point we know how dfas can move around within a stack node, but we
    don't know how we can add a new stack node (nonterminal transitions).
    """
    # Map from grammar rule (nonterminal) name to a set of tokens.
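    # A hand-written illustration (not from the original source): for a rule
    # like  small_stmt: del_stmt | pass_stmt  the 'del' keyword would map to
    # a plan pushing del_stmt's first DFA and 'pass' to one pushing
    # pass_stmt's, so the parser knows which nonterminal to enter per token.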
    first_plans = {}

    nonterminals = list(nonterminal_to_dfas.keys())
    nonterminals.sort()
    for nonterminal in nonterminals:
        if nonterminal not in first_plans:
            _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal)

    # Now that we have calculated the first terminals, we are sure that
    # there is no left recursion.

    for dfas in nonterminal_to_dfas.values():
        for dfa_state in dfas:
            transitions = dfa_state.transitions
            for nonterminal, next_dfa in dfa_state.nonterminal_arcs.items():
                for transition, pushes in first_plans[nonterminal].items():
                    if transition in transitions:
                        prev_plan = transitions[transition]
                        # Make sure these are sorted so that error messages are
                        # at least deterministic
                        choices = sorted([
                            (
                                prev_plan.dfa_pushes[0].from_rule
                                if prev_plan.dfa_pushes
                                else prev_plan.next_dfa.from_rule
                            ),
                            (
                                pushes[0].from_rule
                                if pushes
                                else next_dfa.from_rule
                            ),
                        ])
                        raise ValueError(
                            "Rule %s is ambiguous; given a %s token, we "
                            "can't determine if we should evaluate %s or %s."
                            % (
                                (
                                    dfa_state.from_rule,
                                    transition,
                                ) + tuple(choices)
                            )
                        )
                    transitions[transition] = DFAPlan(next_dfa, pushes)


def _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal):
    """
    Calculates the first plan in the first_plans dictionary for every given
    nonterminal. This is going to be used to know when to create stack nodes.
    """
    dfas = nonterminal_to_dfas[nonterminal]
    new_first_plans = {}
    first_plans[nonterminal] = None  # dummy to detect left recursion
    # We only need to check the first dfa. All the following ones are not
    # interesting to find first terminals.
    state = dfas[0]

    for transition, next_ in state.transitions.items():
        # It's a string. We have finally found a possible first token.
        new_first_plans[transition] = [next_.next_dfa]

    for nonterminal2, next_ in state.nonterminal_arcs.items():
        # It's a nonterminal and we have either a left recursion issue
        # in the grammar or we have to recurse.
        try:
            first_plans2 = first_plans[nonterminal2]
        except KeyError:
            first_plans2 = _calculate_first_plans(nonterminal_to_dfas, first_plans, nonterminal2)
        else:
            if first_plans2 is None:
                raise ValueError("left recursion for rule %r" % nonterminal)

        for t, pushes in first_plans2.items():
            new_first_plans[t] = [next_] + pushes

    first_plans[nonterminal] = new_first_plans
    return new_first_plans
parso-0.5.2/parso/pgen2/__init__.py0000664000175000017500000000057613575273707017014 0ustar davedave00000000000000# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Copyright 2014 David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.

from parso.pgen2.generator import generate_grammar
parso-0.5.2/parso/pgen2/grammar_parser.py0000664000175000017500000001242713575273707020265 0ustar davedave00000000000000# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright David Halter and Contributors
# Modifications are dual-licensed: MIT and PSF.

from parso.python.tokenize import tokenize
from parso.utils import parse_version_string
from parso.python.token import PythonTokenTypes


class GrammarParser():
    """
    The parser for Python grammar files.
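
    A rough usage sketch (hand-written; not part of the original file)::

        parser = GrammarParser("foo: NAME '+' NAME NEWLINE\n")
        for start_nfa, end_nfa in parser.parse():
            ...  # one NFA start/end pair per rule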
""" def __init__(self, bnf_grammar): self._bnf_grammar = bnf_grammar self.generator = tokenize( bnf_grammar, version_info=parse_version_string('3.6') ) self._gettoken() # Initialize lookahead def parse(self): # grammar: (NEWLINE | rule)* ENDMARKER while self.type != PythonTokenTypes.ENDMARKER: while self.type == PythonTokenTypes.NEWLINE: self._gettoken() # rule: NAME ':' rhs NEWLINE self._current_rule_name = self._expect(PythonTokenTypes.NAME) self._expect(PythonTokenTypes.OP, ':') a, z = self._parse_rhs() self._expect(PythonTokenTypes.NEWLINE) yield a, z def _parse_rhs(self): # rhs: items ('|' items)* a, z = self._parse_items() if self.value != "|": return a, z else: aa = NFAState(self._current_rule_name) zz = NFAState(self._current_rule_name) while True: # Add the possibility to go into the state of a and come back # to finish. aa.add_arc(a) z.add_arc(zz) if self.value != "|": break self._gettoken() a, z = self._parse_items() return aa, zz def _parse_items(self): # items: item+ a, b = self._parse_item() while self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING) \ or self.value in ('(', '['): c, d = self._parse_item() # Need to end on the next item. b.add_arc(c) b = d return a, b def _parse_item(self): # item: '[' rhs ']' | atom ['+' | '*'] if self.value == "[": self._gettoken() a, z = self._parse_rhs() self._expect(PythonTokenTypes.OP, ']') # Make it also possible that there is no token and change the # state. a.add_arc(z) return a, z else: a, z = self._parse_atom() value = self.value if value not in ("+", "*"): return a, z self._gettoken() # Make it clear that we can go back to the old state and repeat. z.add_arc(a) if value == "+": return a, z else: # The end state is the same as the beginning, nothing must # change. return a, a def _parse_atom(self): # atom: '(' rhs ')' | NAME | STRING if self.value == "(": self._gettoken() a, z = self._parse_rhs() self._expect(PythonTokenTypes.OP, ')') return a, z elif self.type in (PythonTokenTypes.NAME, PythonTokenTypes.STRING): a = NFAState(self._current_rule_name) z = NFAState(self._current_rule_name) # Make it clear that the state transition requires that value. a.add_arc(z, self.value) self._gettoken() return a, z else: self._raise_error("expected (...) 
                              self.type, self.value)

    def _expect(self, type_, value=None):
        if self.type != type_:
            self._raise_error("expected %s, got %s [%s]",
                              type_, self.type, self.value)
        if value is not None and self.value != value:
            self._raise_error("expected %s, got %s", value, self.value)
        value = self.value
        self._gettoken()
        return value

    def _gettoken(self):
        tup = next(self.generator)
        self.type, self.value, self.begin, prefix = tup

    def _raise_error(self, msg, *args):
        if args:
            try:
                msg = msg % args
            except:
                msg = " ".join([msg] + list(map(str, args)))
        line = self._bnf_grammar.splitlines()[self.begin[0] - 1]
        raise SyntaxError(msg, ('<grammar>', self.begin[0],
                                self.begin[1], line))


class NFAArc(object):
    def __init__(self, next_, nonterminal_or_string):
        self.next = next_
        self.nonterminal_or_string = nonterminal_or_string

    def __repr__(self):
        return '<%s: %s>' % (self.__class__.__name__, self.nonterminal_or_string)


class NFAState(object):
    def __init__(self, from_rule):
        self.from_rule = from_rule
        self.arcs = []  # List[nonterminal (str), NFAState]

    def add_arc(self, next_, nonterminal_or_string=None):
        assert nonterminal_or_string is None or isinstance(nonterminal_or_string, str)
        assert isinstance(next_, NFAState)
        self.arcs.append(NFAArc(next_, nonterminal_or_string))

    def __repr__(self):
        return '<%s: from %s>' % (self.__class__.__name__, self.from_rule)
parso-0.5.2/parso/utils.py0000664000175000017500000001364413575273707015402 0ustar davedave00000000000000from collections import namedtuple
import re
import sys
from ast import literal_eval

from parso._compatibility import unicode, total_ordering


# The following characters are treated as line breaks by str.splitlines, but
# not by Python itself. In Python only \r (Carriage Return, 0xD) and \n (Line
# Feed, 0xA) are allowed to split lines.
_NON_LINE_BREAKS = (
    u'\v',  # Vertical Tabulation 0xB
    u'\f',  # Form Feed 0xC
    u'\x1C',  # File Separator
    u'\x1D',  # Group Separator
    u'\x1E',  # Record Separator
    u'\x85',  # Next Line (NEL - Equivalent to CR+LF.
              # Used to mark end-of-line on some IBM mainframes.)
    u'\u2028',  # Line Separator
    u'\u2029',  # Paragraph Separator
)

Version = namedtuple('Version', 'major, minor, micro')


def split_lines(string, keepends=False):
    r"""
    Intended for Python code. In contrast to Python's :py:meth:`str.splitlines`,
    looks at form feeds and other special characters as normal text. Just
    splits ``\n`` and ``\r\n``.
    Also different: Returns ``[""]`` for an empty string input.

    In Python 2.7 form feeds are treated as normal characters by
    str.splitlines. In Python 3, however, the decision was made to also
    split on form feeds.
    """
    if keepends:
        lst = string.splitlines(True)

        # We have to merge lines that were broken by form feed characters.
        merge = []
        for i, line in enumerate(lst):
            try:
                last_chr = line[-1]
            except IndexError:
                pass
            else:
                if last_chr in _NON_LINE_BREAKS:
                    merge.append(i)

        for index in reversed(merge):
            try:
                lst[index] = lst[index] + lst[index + 1]
                del lst[index + 1]
            except IndexError:
                # index + 1 can be empty and therefore there's no need to
                # merge.
                pass

        # The stdlib's implementation of the end is inconsistent when calling
        # it with/without keepends. One time there's an empty string in the
        # end, one time there's none.
        if string.endswith('\n') or string.endswith('\r') or string == '':
            lst.append('')
        return lst
    else:
        return re.split(r'\n|\r\n|\r', string)


def python_bytes_to_unicode(source, encoding='utf-8', errors='strict'):
    """
    Checks for unicode BOMs and PEP 263 encoding declarations.

    Then returns a unicode object like in :py:meth:`bytes.decode`.

    :param encoding: See :py:meth:`bytes.decode` documentation.
    :param errors: See :py:meth:`bytes.decode` documentation. ``errors`` can be
        ``'strict'``, ``'replace'`` or ``'ignore'``.
    """
    def detect_encoding():
        """
        For the implementation of encoding definitions in Python, look at:
        - http://www.python.org/dev/peps/pep-0263/
        - http://docs.python.org/2/reference/lexical_analysis.html#encoding-declarations
        """
        byte_mark = literal_eval(r"b'\xef\xbb\xbf'")
        if source.startswith(byte_mark):
            # UTF-8 byte-order mark
            return 'utf-8'

        first_two_lines = re.match(br'(?:[^\n]*\n){0,2}', source).group(0)
        possible_encoding = re.search(br"coding[=:]\s*([-\w.]+)", first_two_lines)
        if possible_encoding:
            return possible_encoding.group(1)
        else:
            # the default if nothing else has been set -> PEP 263
            return encoding

    if isinstance(source, unicode):
        # only cast str/bytes
        return source

    encoding = detect_encoding()
    if not isinstance(encoding, unicode):
        encoding = unicode(encoding, 'utf-8', 'replace')

    # Cast to unicode
    return unicode(source, encoding, errors)


def version_info():
    """
    Returns a namedtuple of parso's version, similar to Python's
    ``sys.version_info``.
    """
    from parso import __version__
    tupl = re.findall(r'[a-z]+|\d+', __version__)
    return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)])


def _parse_version(version):
    match = re.match(r'(\d+)(?:\.(\d)(?:\.\d+)?)?$', version)
    if match is None:
        raise ValueError('The given version is not in the right format. '
                         'Use something like "3.2" or "3".')

    major = int(match.group(1))
    minor = match.group(2)
    if minor is None:
        # Use the latest Python in case it's not exactly defined, because the
        # grammars are typically backwards compatible.
        if major == 2:
            minor = "7"
        elif major == 3:
            minor = "6"
        else:
            raise NotImplementedError("Sorry, no support yet for those fancy new/old versions.")
    minor = int(minor)
    return PythonVersionInfo(major, minor)


@total_ordering
class PythonVersionInfo(namedtuple('Version', 'major, minor')):
    def __gt__(self, other):
        if isinstance(other, tuple):
            if len(other) != 2:
                raise ValueError("Can only compare to tuples of length 2.")
            return (self.major, self.minor) > other

        return super(PythonVersionInfo, self).__gt__(other)

    def __eq__(self, other):
        if isinstance(other, tuple):
            if len(other) != 2:
                raise ValueError("Can only compare to tuples of length 2.")
            return (self.major, self.minor) == other

        return super(PythonVersionInfo, self).__eq__(other)

    def __ne__(self, other):
        return not self.__eq__(other)


def parse_version_string(version=None):
    """
    Checks for a valid version number (e.g. `3.2` or `2.7.1` or `3`) and
    returns a corresponding version info that is always two characters long in
    decimal.
    """
    if version is None:
        version = '%s.%s' % sys.version_info[:2]
    if not isinstance(version, (unicode, str)):
        raise TypeError("version must be a string like 3.2.")

    return _parse_version(version)
parso-0.5.2/parso/python/0000775000175000017500000000000013575273727015213 5ustar davedave00000000000000parso-0.5.2/parso/python/parser.py0000664000175000017500000002064213575273707017063 0ustar davedave00000000000000from parso.python import tree
from parso.python.token import PythonTokenTypes
from parso.parser import BaseParser


NAME = PythonTokenTypes.NAME
INDENT = PythonTokenTypes.INDENT
DEDENT = PythonTokenTypes.DEDENT


class Parser(BaseParser):
    """
    This class is used to parse a Python file; it then divides the code into a
    class structure of different scopes.
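
    A rough usage sketch (hand-written; in practice ``parso.parse()`` and
    ``load_grammar().parse()`` create and drive this class for you)::

        parser = Parser(pgen_grammar, error_recovery=True)
        module = parser.parse(tokens)
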
    :param pgen_grammar: The grammar object of pgen2. Loaded by load_grammar.
    """

    node_map = {
        'expr_stmt': tree.ExprStmt,
        'classdef': tree.Class,
        'funcdef': tree.Function,
        'file_input': tree.Module,
        'import_name': tree.ImportName,
        'import_from': tree.ImportFrom,
        'break_stmt': tree.KeywordStatement,
        'continue_stmt': tree.KeywordStatement,
        'return_stmt': tree.ReturnStmt,
        'raise_stmt': tree.KeywordStatement,
        'yield_expr': tree.YieldExpr,
        'del_stmt': tree.KeywordStatement,
        'pass_stmt': tree.KeywordStatement,
        'global_stmt': tree.GlobalStmt,
        'nonlocal_stmt': tree.KeywordStatement,
        'print_stmt': tree.KeywordStatement,
        'assert_stmt': tree.AssertStmt,
        'if_stmt': tree.IfStmt,
        'with_stmt': tree.WithStmt,
        'for_stmt': tree.ForStmt,
        'while_stmt': tree.WhileStmt,
        'try_stmt': tree.TryStmt,
        'sync_comp_for': tree.SyncCompFor,
        # Not sure if this is the best idea, but IMO it's the easiest way to
        # avoid extreme amounts of work around the subtle difference of 2/3
        # grammar in list comprehensions.
        'list_for': tree.SyncCompFor,
        # Same here. This just exists in Python 2.6.
        'gen_for': tree.SyncCompFor,
        'decorator': tree.Decorator,
        'lambdef': tree.Lambda,
        'old_lambdef': tree.Lambda,
        'lambdef_nocond': tree.Lambda,
    }
    default_node = tree.PythonNode

    # Names/Keywords are handled separately
    _leaf_map = {
        PythonTokenTypes.STRING: tree.String,
        PythonTokenTypes.NUMBER: tree.Number,
        PythonTokenTypes.NEWLINE: tree.Newline,
        PythonTokenTypes.ENDMARKER: tree.EndMarker,
        PythonTokenTypes.FSTRING_STRING: tree.FStringString,
        PythonTokenTypes.FSTRING_START: tree.FStringStart,
        PythonTokenTypes.FSTRING_END: tree.FStringEnd,
    }

    def __init__(self, pgen_grammar, error_recovery=True, start_nonterminal='file_input'):
        super(Parser, self).__init__(pgen_grammar, start_nonterminal,
                                     error_recovery=error_recovery)

        self.syntax_errors = []
        self._omit_dedent_list = []
        self._indent_counter = 0

    def parse(self, tokens):
        if self._error_recovery:
            if self._start_nonterminal != 'file_input':
                raise NotImplementedError

            tokens = self._recovery_tokenize(tokens)

        return super(Parser, self).parse(tokens)

    def convert_node(self, nonterminal, children):
        """
        Convert raw node information to a PythonBaseNode instance.

        This is passed to the parser driver which calls it whenever a reduction
        of a grammar rule produces a new complete node, so that the tree is
        built strictly bottom-up.
        """
        try:
            node = self.node_map[nonterminal](children)
        except KeyError:
            if nonterminal == 'suite':
                # We don't want the INDENT/DEDENT in our parser tree. Those
                # leaves are just cancer. They are virtual leaves and not real
                # ones and therefore have pseudo start/end positions and no
                # prefixes. Just ignore them.
                children = [children[0]] + children[2:-1]
            elif nonterminal == 'list_if':
                # Make transitioning from 2 to 3 easier.
                nonterminal = 'comp_if'
            elif nonterminal == 'listmaker':
                # Same as list_if above.
                nonterminal = 'testlist_comp'
            node = self.default_node(nonterminal, children)
        for c in children:
            c.parent = node
        return node

    def convert_leaf(self, type, value, prefix, start_pos):
        # print('leaf', repr(value), token.tok_name[type])
        if type == NAME:
            if value in self._pgen_grammar.reserved_syntax_strings:
                return tree.Keyword(value, start_pos, prefix)
            else:
                return tree.Name(value, start_pos, prefix)

        return self._leaf_map.get(type, tree.Operator)(value, start_pos, prefix)

    def error_recovery(self, token):
        tos_nodes = self.stack[-1].nodes
        if tos_nodes:
            last_leaf = tos_nodes[-1].get_last_leaf()
        else:
            last_leaf = None

        if self._start_nonterminal == 'file_input' and \
                (token.type == PythonTokenTypes.ENDMARKER
                 or token.type == DEDENT and '\n' not in last_leaf.value
                 and '\r' not in last_leaf.value):
            # In Python statements need to end with a newline. But since it's
            # possible (and valid in Python) that there's no newline at the
            # end of a file, we have to recover even if the user doesn't want
            # error recovery.
            if self.stack[-1].dfa.from_rule == 'simple_stmt':
                try:
                    plan = self.stack[-1].dfa.transitions[PythonTokenTypes.NEWLINE]
                except KeyError:
                    pass
                else:
                    if plan.next_dfa.is_final and not plan.dfa_pushes:
                        # We are ignoring here that the newline would be
                        # required for a simple_stmt.
                        self.stack[-1].dfa = plan.next_dfa
                        self._add_token(token)
                        return

        if not self._error_recovery:
            return super(Parser, self).error_recovery(token)

        def current_suite(stack):
            # For now just discard everything that is not a suite or
            # file_input, if we detect an error.
            for until_index, stack_node in reversed(list(enumerate(stack))):
                # `suite` can sometimes be only simple_stmt, not stmt.
                if stack_node.nonterminal == 'file_input':
                    break
                elif stack_node.nonterminal == 'suite':
                    # In the case where we just have a newline we don't want to
                    # do error recovery here. In all other cases, we want to do
                    # error recovery.
                    if len(stack_node.nodes) != 1:
                        break
            return until_index

        until_index = current_suite(self.stack)

        if self._stack_removal(until_index + 1):
            self._add_token(token)
        else:
            typ, value, start_pos, prefix = token
            if typ == INDENT:
                # For every deleted INDENT we have to delete a DEDENT as well.
                # Otherwise the parser will get into trouble and DEDENT too early.
                self._omit_dedent_list.append(self._indent_counter)

            error_leaf = tree.PythonErrorLeaf(typ.name, value, start_pos, prefix)
            self.stack[-1].nodes.append(error_leaf)

        tos = self.stack[-1]
        if tos.nonterminal == 'suite':
            # Need at least one statement in the suite. This happened with the
            # error recovery above.
            try:
                tos.dfa = tos.dfa.arcs['stmt']
            except KeyError:
                # We're already in a final state.
                pass

    def _stack_removal(self, start_index):
        all_nodes = [node for stack_node in self.stack[start_index:] for node in stack_node.nodes]

        if all_nodes:
            node = tree.PythonErrorNode(all_nodes)
            for n in all_nodes:
                n.parent = node
            self.stack[start_index - 1].nodes.append(node)

        self.stack[start_index:] = []
        return bool(all_nodes)

    def _recovery_tokenize(self, tokens):
        for token in tokens:
            typ = token[0]
            if typ == DEDENT:
                # We need to count indents, because if we just omit any DEDENT,
                # we might omit them in the wrong place.
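                # (A hand-written illustration: an INDENT dropped by error
                # recovery records the current indent level; the DEDENT that
                # would close exactly that level is swallowed below, so the
                # remaining INDENT/DEDENT pairs stay balanced.)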
                o = self._omit_dedent_list
                if o and o[-1] == self._indent_counter:
                    o.pop()
                    continue

                self._indent_counter -= 1
            elif typ == INDENT:
                self._indent_counter += 1
            yield token
parso-0.5.2/parso/python/errors.py0000664000175000017500000012027613575273707017105 0ustar davedave00000000000000# -*- coding: utf-8 -*-
import codecs
import warnings
import re
from contextlib import contextmanager

from parso.normalizer import Normalizer, NormalizerConfig, Issue, Rule
from parso.python.tree import search_ancestor

_BLOCK_STMTS = ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt')
_STAR_EXPR_PARENTS = ('testlist_star_expr', 'testlist_comp', 'exprlist')
# This is the maximal block size given by Python.
_MAX_BLOCK_SIZE = 20
_MAX_INDENT_COUNT = 100
ALLOWED_FUTURES = (
    'all_feature_names', 'nested_scopes', 'generators', 'division',
    'absolute_import', 'with_statement', 'print_function', 'unicode_literals',
)
_COMP_FOR_TYPES = ('comp_for', 'sync_comp_for')


def _iter_stmts(scope):
    """
    Iterates over all statements and splits up simple_stmt.
    """
    for child in scope.children:
        if child.type == 'simple_stmt':
            for child2 in child.children:
                if child2.type == 'newline' or child2 == ';':
                    continue
                yield child2
        else:
            yield child


def _get_comprehension_type(atom):
    first, second = atom.children[:2]
    if second.type == 'testlist_comp' and second.children[1].type in _COMP_FOR_TYPES:
        if first == '[':
            return 'list comprehension'
        else:
            return 'generator expression'
    elif second.type == 'dictorsetmaker' and second.children[-1].type in _COMP_FOR_TYPES:
        if second.children[1] == ':':
            return 'dict comprehension'
        else:
            return 'set comprehension'
    return None


def _is_future_import(import_from):
    # It looks like a __future__ import that is relative is still a future
    # import. That feels kind of odd, but whatever.
    # if import_from.level != 0:
    #     return False
    from_names = import_from.get_from_names()
    return [n.value for n in from_names] == ['__future__']


def _remove_parens(atom):
    """
    Returns the inner part of an expression like `(foo)`. Also removes nested
    parens.
    """
    try:
        children = atom.children
    except AttributeError:
        pass
    else:
        if len(children) == 3 and children[0] == '(':
            return _remove_parens(atom.children[1])
    return atom


def _iter_params(parent_node):
    return (n for n in parent_node.children if n.type == 'param')


def _is_future_import_first(import_from):
    """
    Checks if the import is the first statement of a file.
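
    A docstring and other future imports may come before it; e.g. in a module
    that starts with a docstring followed by ``from __future__ import
    division``, the division import still counts as first.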
""" found_docstring = False for stmt in _iter_stmts(import_from.get_root_node()): if stmt.type == 'string' and not found_docstring: continue found_docstring = True if stmt == import_from: return True if stmt.type == 'import_from' and _is_future_import(stmt): continue return False def _iter_definition_exprs_from_lists(exprlist): def check_expr(child): if child.type == 'atom': if child.children[0] == '(': testlist_comp = child.children[1] if testlist_comp.type == 'testlist_comp': for expr in _iter_definition_exprs_from_lists(testlist_comp): yield expr return else: # It's a paren that doesn't do anything, like 1 + (1) for c in check_expr(testlist_comp): yield c return elif child.children[0] == '[': yield testlist_comp return yield child if exprlist.type in _STAR_EXPR_PARENTS: for child in exprlist.children[::2]: for c in check_expr(child): # Python 2 sucks yield c else: for c in check_expr(exprlist): # Python 2 sucks yield c def _get_expr_stmt_definition_exprs(expr_stmt): exprs = [] for list_ in expr_stmt.children[:-2:2]: if list_.type in ('testlist_star_expr', 'testlist'): exprs += _iter_definition_exprs_from_lists(list_) else: exprs.append(list_) return exprs def _get_for_stmt_definition_exprs(for_stmt): exprlist = for_stmt.children[1] return list(_iter_definition_exprs_from_lists(exprlist)) class _Context(object): def __init__(self, node, add_syntax_error, parent_context=None): self.node = node self.blocks = [] self.parent_context = parent_context self._used_name_dict = {} self._global_names = [] self._nonlocal_names = [] self._nonlocal_names_in_subscopes = [] self._add_syntax_error = add_syntax_error def is_async_funcdef(self): # Stupidly enough async funcdefs can have two different forms, # depending if a decorator is used or not. return self.is_function() \ and self.node.parent.type in ('async_funcdef', 'async_stmt') def is_function(self): return self.node.type == 'funcdef' def add_name(self, name): parent_type = name.parent.type if parent_type == 'trailer': # We are only interested in first level names. return if parent_type == 'global_stmt': self._global_names.append(name) elif parent_type == 'nonlocal_stmt': self._nonlocal_names.append(name) else: self._used_name_dict.setdefault(name.value, []).append(name) def finalize(self): """ Returns a list of nonlocal names that need to be part of that scope. """ self._analyze_names(self._global_names, 'global') self._analyze_names(self._nonlocal_names, 'nonlocal') # Python2.6 doesn't have dict comprehensions. 
        global_name_strs = dict((n.value, n) for n in self._global_names)
        for nonlocal_name in self._nonlocal_names:
            try:
                global_name = global_name_strs[nonlocal_name.value]
            except KeyError:
                continue

            message = "name '%s' is nonlocal and global" % global_name.value
            if global_name.start_pos < nonlocal_name.start_pos:
                error_name = global_name
            else:
                error_name = nonlocal_name
            self._add_syntax_error(error_name, message)

        nonlocals_not_handled = []
        for nonlocal_name in self._nonlocal_names_in_subscopes:
            search = nonlocal_name.value
            if search in global_name_strs or self.parent_context is None:
                message = "no binding for nonlocal '%s' found" % nonlocal_name.value
                self._add_syntax_error(nonlocal_name, message)
            elif not self.is_function() or \
                    nonlocal_name.value not in self._used_name_dict:
                nonlocals_not_handled.append(nonlocal_name)
        return self._nonlocal_names + nonlocals_not_handled

    def _analyze_names(self, globals_or_nonlocals, type_):
        def raise_(message):
            self._add_syntax_error(base_name, message % (base_name.value, type_))

        params = []
        if self.node.type == 'funcdef':
            params = self.node.get_params()

        for base_name in globals_or_nonlocals:
            found_global_or_nonlocal = False
            # Somehow Python does it the reversed way.
            for name in reversed(self._used_name_dict.get(base_name.value, [])):
                if name.start_pos > base_name.start_pos:
                    # All following names don't have to be checked.
                    found_global_or_nonlocal = True

                parent = name.parent
                if parent.type == 'param' and parent.name == name:
                    # Skip those here, these definitions belong to the next
                    # scope.
                    continue

                if name.is_definition():
                    if parent.type == 'expr_stmt' \
                            and parent.children[1].type == 'annassign':
                        if found_global_or_nonlocal:
                            # If it's after the global the error seems to be
                            # placed there.
                            base_name = name
                        raise_("annotated name '%s' can't be %s")
                        break
                    else:
                        message = "name '%s' is assigned to before %s declaration"
                else:
                    message = "name '%s' is used prior to %s declaration"

                if not found_global_or_nonlocal:
                    raise_(message)
                    # Only add an error for the first occurrence.
                    break

            for param in params:
                if param.name.value == base_name.value:
                    raise_("name '%s' is parameter and %s")

    @contextmanager
    def add_block(self, node):
        self.blocks.append(node)
        yield
        self.blocks.pop()

    def add_context(self, node):
        return _Context(node, self._add_syntax_error, parent_context=self)

    def close_child_context(self, child_context):
        self._nonlocal_names_in_subscopes += child_context.finalize()


class ErrorFinder(Normalizer):
    """
    Searches for errors in the syntax tree.
    """
    def __init__(self, *args, **kwargs):
        super(ErrorFinder, self).__init__(*args, **kwargs)
        self._error_dict = {}
        self.version = self.grammar.version_info

    def initialize(self, node):
        def create_context(node):
            if node is None:
                return None

            parent_context = create_context(node.parent)
            if node.type in ('classdef', 'funcdef', 'file_input'):
                return _Context(node, self._add_syntax_error, parent_context)
            return parent_context

        self.context = create_context(node) or _Context(node, self._add_syntax_error)
        self._indentation_count = 0

    def visit(self, node):
        if node.type == 'error_node':
            with self.visit_node(node):
                # Don't need to investigate the inners of an error node. We
                # might find errors in there that should be ignored, because
                # the error node itself already shows that there's an issue.
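                # (For instance, rules that would match nodes nested inside
                # this error_node are deliberately skipped by returning ''
                # here, so only the node's own syntax error is reported.)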
                return ''

        return super(ErrorFinder, self).visit(node)

    @contextmanager
    def visit_node(self, node):
        self._check_type_rules(node)

        if node.type in _BLOCK_STMTS:
            with self.context.add_block(node):
                if len(self.context.blocks) == _MAX_BLOCK_SIZE:
                    self._add_syntax_error(node, "too many statically nested blocks")
                yield
            return
        elif node.type == 'suite':
            self._indentation_count += 1
            if self._indentation_count == _MAX_INDENT_COUNT:
                self._add_indentation_error(node.children[1], "too many levels of indentation")

        yield

        if node.type == 'suite':
            self._indentation_count -= 1
        elif node.type in ('classdef', 'funcdef'):
            context = self.context
            self.context = context.parent_context
            self.context.close_child_context(context)

    def visit_leaf(self, leaf):
        if leaf.type == 'error_leaf':
            if leaf.token_type in ('INDENT', 'ERROR_DEDENT'):
                # Indents/Dedents themselves never have a prefix. They are just
                # "pseudo" tokens that get removed by the syntax tree later.
                # Therefore in case of an error we also have to check for this.
                spacing = list(leaf.get_next_leaf()._split_prefix())[-1]
                if leaf.token_type == 'INDENT':
                    message = 'unexpected indent'
                else:
                    message = 'unindent does not match any outer indentation level'
                self._add_indentation_error(spacing, message)
            else:
                if leaf.value.startswith('\\'):
                    message = 'unexpected character after line continuation character'
                else:
                    match = re.match('\\w{,2}("{1,3}|\'{1,3})', leaf.value)
                    if match is None:
                        message = 'invalid syntax'
                    else:
                        if len(match.group(1)) == 1:
                            message = 'EOL while scanning string literal'
                        else:
                            message = 'EOF while scanning triple-quoted string literal'
                self._add_syntax_error(leaf, message)
            return ''
        elif leaf.value == ':':
            parent = leaf.parent
            if parent.type in ('classdef', 'funcdef'):
                self.context = self.context.add_context(parent)

        # The rest is rule based.
        return super(ErrorFinder, self).visit_leaf(leaf)

    def _add_indentation_error(self, spacing, message):
        self.add_issue(spacing, 903, "IndentationError: " + message)

    def _add_syntax_error(self, node, message):
        self.add_issue(node, 901, "SyntaxError: " + message)

    def add_issue(self, node, code, message):
        # Overwrite the default behavior.
        # Check if the issues are on the same line.
        line = node.start_pos[0]
        args = (code, message, node)
        self._error_dict.setdefault(line, args)

    def finalize(self):
        self.context.finalize()

        for code, message, node in self._error_dict.values():
            self.issues.append(Issue(node, code, message))


class IndentationRule(Rule):
    code = 903

    def _get_message(self, message):
        message = super(IndentationRule, self)._get_message(message)
        return "IndentationError: " + message


@ErrorFinder.register_rule(type='error_node')
class _ExpectIndentedBlock(IndentationRule):
    message = 'expected an indented block'

    def get_node(self, node):
        leaf = node.get_next_leaf()
        return list(leaf._split_prefix())[-1]

    def is_issue(self, node):
        # This is the beginning of a suite that is not indented.
        return node.children[-1].type == 'newline'


class ErrorFinderConfig(NormalizerConfig):
    normalizer_class = ErrorFinder


class SyntaxRule(Rule):
    code = 901

    def _get_message(self, message):
        message = super(SyntaxRule, self)._get_message(message)
        return "SyntaxError: " + message


@ErrorFinder.register_rule(type='error_node')
class _InvalidSyntaxRule(SyntaxRule):
    message = "invalid syntax"

    def get_node(self, node):
        return node.get_next_leaf()

    def is_issue(self, node):
        # Error leaves will be added later as an error.
        return node.get_next_leaf().type != 'error_leaf'


@ErrorFinder.register_rule(value='await')
class _AwaitOutsideAsync(SyntaxRule):
    message = "'await' outside async function"

    def is_issue(self, leaf):
        return not self._normalizer.context.is_async_funcdef()

    def get_error_node(self, node):
        # Return the whole await statement.
        return node.parent


@ErrorFinder.register_rule(value='break')
class _BreakOutsideLoop(SyntaxRule):
    message = "'break' outside loop"

    def is_issue(self, leaf):
        in_loop = False
        for block in self._normalizer.context.blocks:
            if block.type in ('for_stmt', 'while_stmt'):
                in_loop = True
        return not in_loop


@ErrorFinder.register_rule(value='continue')
class _ContinueChecks(SyntaxRule):
    message = "'continue' not properly in loop"
    message_in_finally = "'continue' not supported inside 'finally' clause"

    def is_issue(self, leaf):
        in_loop = False
        for block in self._normalizer.context.blocks:
            if block.type in ('for_stmt', 'while_stmt'):
                in_loop = True
            if block.type == 'try_stmt':
                last_block = block.children[-3]
                if last_block == 'finally' and leaf.start_pos > last_block.start_pos:
                    self.add_issue(leaf, message=self.message_in_finally)
                    return False  # Error already added
        if not in_loop:
            return True


@ErrorFinder.register_rule(value='from')
class _YieldFromCheck(SyntaxRule):
    message = "'yield from' inside async function"

    def get_node(self, leaf):
        return leaf.parent.parent  # This is the actual yield statement.

    def is_issue(self, leaf):
        return leaf.parent.type == 'yield_arg' \
            and self._normalizer.context.is_async_funcdef()


@ErrorFinder.register_rule(type='name')
class _NameChecks(SyntaxRule):
    message = 'cannot assign to __debug__'
    message_none = 'cannot assign to None'

    def is_issue(self, leaf):
        self._normalizer.context.add_name(leaf)

        if leaf.value == '__debug__' and leaf.is_definition():
            return True
        if leaf.value == 'None' and self._normalizer.version < (3, 0) \
                and leaf.is_definition():
            self.add_issue(leaf, message=self.message_none)


@ErrorFinder.register_rule(type='string')
class _StringChecks(SyntaxRule):
    message = "bytes can only contain ASCII literal characters."

    def is_issue(self, leaf):
        string_prefix = leaf.string_prefix.lower()
        if 'b' in string_prefix \
                and self._normalizer.version >= (3, 0) \
                and any(c for c in leaf.value if ord(c) > 127):
            # b'ä'
            return True

        if 'r' not in string_prefix:
            # Raw strings don't need to be checked if they have proper
            # escaping.
            is_bytes = self._normalizer.version < (3, 0)
            if 'b' in string_prefix:
                is_bytes = True
            if 'u' in string_prefix:
                is_bytes = False

            payload = leaf._get_payload()
            if is_bytes:
                payload = payload.encode('utf-8')
                func = codecs.escape_decode
            else:
                func = codecs.unicode_escape_decode

            try:
                with warnings.catch_warnings():
                    # The warnings from parsing strings are not relevant.
                    warnings.filterwarnings('ignore')
                    func(payload)
            except UnicodeDecodeError as e:
                self.add_issue(leaf, message='(unicode error) ' + str(e))
            except ValueError as e:
                self.add_issue(leaf, message='(value error) ' + str(e))


@ErrorFinder.register_rule(value='*')
class _StarCheck(SyntaxRule):
    message = "named arguments must follow bare *"

    def is_issue(self, leaf):
        params = leaf.parent
        if params.type == 'parameters' and params:
            after = params.children[params.children.index(leaf) + 1:]
            after = [child for child in after
                     if child not in (',', ')') and not child.star_count]
            return len(after) == 0


@ErrorFinder.register_rule(value='**')
class _StarStarCheck(SyntaxRule):
    # e.g. {**{} for a in [1]}
    # TODO this should probably get a better end_pos including
    #      the next sibling of leaf.
message = "dict unpacking cannot be used in dict comprehension" def is_issue(self, leaf): if leaf.parent.type == 'dictorsetmaker': comp_for = leaf.get_next_sibling().get_next_sibling() return comp_for is not None and comp_for.type in _COMP_FOR_TYPES @ErrorFinder.register_rule(value='yield') @ErrorFinder.register_rule(value='return') class _ReturnAndYieldChecks(SyntaxRule): message = "'return' with value in async generator" message_async_yield = "'yield' inside async function" def get_node(self, leaf): return leaf.parent def is_issue(self, leaf): if self._normalizer.context.node.type != 'funcdef': self.add_issue(self.get_node(leaf), message="'%s' outside function" % leaf.value) elif self._normalizer.context.is_async_funcdef() \ and any(self._normalizer.context.node.iter_yield_exprs()): if leaf.value == 'return' and leaf.parent.type == 'return_stmt': return True elif leaf.value == 'yield' \ and leaf.get_next_leaf() != 'from' \ and self._normalizer.version == (3, 5): self.add_issue(self.get_node(leaf), message=self.message_async_yield) @ErrorFinder.register_rule(type='strings') class _BytesAndStringMix(SyntaxRule): # e.g. 's' b'' message = "cannot mix bytes and nonbytes literals" def _is_bytes_literal(self, string): if string.type == 'fstring': return False return 'b' in string.string_prefix.lower() def is_issue(self, node): first = node.children[0] # In Python 2 it's allowed to mix bytes and unicode. if self._normalizer.version >= (3, 0): first_is_bytes = self._is_bytes_literal(first) for string in node.children[1:]: if first_is_bytes != self._is_bytes_literal(string): return True @ErrorFinder.register_rule(type='import_as_names') class _TrailingImportComma(SyntaxRule): # e.g. from foo import a, message = "trailing comma not allowed without surrounding parentheses" def is_issue(self, node): if node.children[-1] == ',' and node.parent.children[-1] != ')': return True @ErrorFinder.register_rule(type='import_from') class _ImportStarInFunction(SyntaxRule): message = "import * only allowed at module level" def is_issue(self, node): return node.is_star_import() and self._normalizer.context.parent_context is not None @ErrorFinder.register_rule(type='import_from') class _FutureImportRule(SyntaxRule): message = "from __future__ imports must occur at the beginning of the file" def is_issue(self, node): if _is_future_import(node): if not _is_future_import_first(node): return True for from_name, future_name in node.get_paths(): name = future_name.value allowed_futures = list(ALLOWED_FUTURES) if self._normalizer.version >= (3, 5): allowed_futures.append('generator_stop') if name == 'braces': self.add_issue(node, message="not a chance") elif name == 'barry_as_FLUFL': m = "Seriously I'm not implementing this :) ~ Dave" self.add_issue(node, message=m) elif name not in ALLOWED_FUTURES: message = "future feature %s is not defined" % name self.add_issue(node, message=message) @ErrorFinder.register_rule(type='star_expr') class _StarExprRule(SyntaxRule): message = "starred assignment target must be in a list or tuple" message_iterable_unpacking = "iterable unpacking cannot be used in comprehension" message_assignment = "can use starred expression only as assignment target" def is_issue(self, node): if node.parent.type not in _STAR_EXPR_PARENTS: return True if node.parent.type == 'testlist_comp': # [*[] for a in [1]] if node.parent.children[1].type in _COMP_FOR_TYPES: self.add_issue(node, message=self.message_iterable_unpacking) if self._normalizer.version <= (3, 4): n = search_ancestor(node, 'for_stmt', 
                                'expr_stmt')
            found_definition = False
            if n is not None:
                if n.type == 'expr_stmt':
                    exprs = _get_expr_stmt_definition_exprs(n)
                else:
                    exprs = _get_for_stmt_definition_exprs(n)
                if node in exprs:
                    found_definition = True

            if not found_definition:
                self.add_issue(node, message=self.message_assignment)


@ErrorFinder.register_rule(types=_STAR_EXPR_PARENTS)
class _StarExprParentRule(SyntaxRule):
    def is_issue(self, node):
        if node.parent.type == 'del_stmt':
            self.add_issue(node.parent, message="can't use starred expression here")
        else:
            def is_definition(node, ancestor):
                if ancestor is None:
                    return False

                type_ = ancestor.type
                if type_ == 'trailer':
                    return False

                if type_ == 'expr_stmt':
                    return node.start_pos < ancestor.children[-1].start_pos

                return is_definition(node, ancestor.parent)

            if is_definition(node, node.parent):
                args = [c for c in node.children if c != ',']
                starred = [c for c in args if c.type == 'star_expr']
                if len(starred) > 1:
                    message = "two starred expressions in assignment"
                    self.add_issue(starred[1], message=message)
                elif starred:
                    count = args.index(starred[0])
                    if count >= 256:
                        message = "too many expressions in star-unpacking assignment"
                        self.add_issue(starred[0], message=message)


@ErrorFinder.register_rule(type='annassign')
class _AnnotatorRule(SyntaxRule):
    # True: int
    # {}: float
    message = "illegal target for annotation"

    def get_node(self, node):
        return node.parent

    def is_issue(self, node):
        type_ = None
        lhs = node.parent.children[0]
        lhs = _remove_parens(lhs)
        try:
            children = lhs.children
        except AttributeError:
            pass
        else:
            if ',' in children or lhs.type == 'atom' and children[0] == '(':
                type_ = 'tuple'
            elif lhs.type == 'atom' and children[0] == '[':
                type_ = 'list'
            trailer = children[-1]

        if type_ is None:
            if not (lhs.type == 'name'
                    # subscript/attributes are allowed
                    or lhs.type in ('atom_expr', 'power')
                    and trailer.type == 'trailer'
                    and trailer.children[0] != '('):
                return True
        else:
            # x, y: str
            message = "only single target (not %s) can be annotated"
            self.add_issue(lhs.parent, message=message % type_)


@ErrorFinder.register_rule(type='argument')
class _ArgumentRule(SyntaxRule):
    def is_issue(self, node):
        first = node.children[0]
        if node.children[1] == '=' and first.type != 'name':
            if first.type == 'lambdef':
                # f(lambda: 1=1)
                if self._normalizer.version < (3, 8):
                    message = "lambda cannot contain assignment"
                else:
                    message = 'expression cannot contain assignment, perhaps you meant "=="?'
            else:
                # f(+x=1)
                if self._normalizer.version < (3, 8):
                    message = "keyword can't be an expression"
                else:
                    message = 'expression cannot contain assignment, perhaps you meant "=="?'
            self.add_issue(first, message=message)


@ErrorFinder.register_rule(type='nonlocal_stmt')
class _NonlocalModuleLevelRule(SyntaxRule):
    message = "nonlocal declaration not allowed at module level"

    def is_issue(self, node):
        return self._normalizer.context.parent_context is None


@ErrorFinder.register_rule(type='arglist')
class _ArglistRule(SyntaxRule):
    @property
    def message(self):
        if self._normalizer.version < (3, 7):
            return "Generator expression must be parenthesized if not sole argument"
        else:
            return "Generator expression must be parenthesized"

    def is_issue(self, node):
        first_arg = node.children[0]
        if first_arg.type == 'argument' \
                and first_arg.children[1].type in _COMP_FOR_TYPES:
            # e.g. foo(x for x in [], b)
            return len(node.children) >= 2
        else:
            arg_set = set()
            kw_only = False
            kw_unpacking_only = False
            is_old_starred = False
            # In python 3 this would be a bit easier (stars are part of
            # argument), but we have to understand both.
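            # (A hand-written illustration: in the Python 2 grammar the '*'
            # and '**' of f(*args, **kwargs) appear as separate arglist
            # children before their value, while Python 3 makes them part of
            # the argument node itself; is_old_starred bridges both shapes.)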
for argument in node.children: if argument == ',': continue if argument in ('*', '**'): # Python < 3.5 has the order engraved in the grammar # file. No need to do anything here. is_old_starred = True continue if is_old_starred: is_old_starred = False continue if argument.type == 'argument': first = argument.children[0] if first in ('*', '**'): if first == '*': if kw_unpacking_only: # foo(**kwargs, *args) message = "iterable argument unpacking " \ "follows keyword argument unpacking" self.add_issue(argument, message=message) else: kw_unpacking_only = True else: # Is a keyword argument. kw_only = True if first.type == 'name': if first.value in arg_set: # f(x=1, x=2) self.add_issue(first, message="keyword argument repeated") else: arg_set.add(first.value) else: if kw_unpacking_only: # f(**x, y) message = "positional argument follows keyword argument unpacking" self.add_issue(argument, message=message) elif kw_only: # f(x=2, y) message = "positional argument follows keyword argument" self.add_issue(argument, message=message) @ErrorFinder.register_rule(type='parameters') @ErrorFinder.register_rule(type='lambdef') class _ParameterRule(SyntaxRule): # def f(x=3, y): pass message = "non-default argument follows default argument" def is_issue(self, node): param_names = set() default_only = False for p in _iter_params(node): if p.name.value in param_names: message = "duplicate argument '%s' in function definition" self.add_issue(p.name, message=message % p.name.value) param_names.add(p.name.value) if p.default is None and not p.star_count: if default_only: return True else: default_only = True @ErrorFinder.register_rule(type='try_stmt') class _TryStmtRule(SyntaxRule): message = "default 'except:' must be last" def is_issue(self, try_stmt): default_except = None for except_clause in try_stmt.children[3::3]: if except_clause in ('else', 'finally'): break if except_clause == 'except': default_except = except_clause elif default_except is not None: self.add_issue(default_except, message=self.message) @ErrorFinder.register_rule(type='fstring') class _FStringRule(SyntaxRule): _fstring_grammar = None message_nested = "f-string: expressions nested too deeply" message_conversion = "f-string: invalid conversion character: expected 's', 'r', or 'a'" def _check_format_spec(self, format_spec, depth): self._check_fstring_contents(format_spec.children[1:], depth) def _check_fstring_expr(self, fstring_expr, depth): if depth >= 2: self.add_issue(fstring_expr, message=self.message_nested) conversion = fstring_expr.children[2] if conversion.type == 'fstring_conversion': name = conversion.children[1] if name.value not in ('s', 'r', 'a'): self.add_issue(name, message=self.message_conversion) format_spec = fstring_expr.children[-2] if format_spec.type == 'fstring_format_spec': self._check_format_spec(format_spec, depth + 1) def is_issue(self, fstring): self._check_fstring_contents(fstring.children[1:-1]) def _check_fstring_contents(self, children, depth=0): for fstring_content in children: if fstring_content.type == 'fstring_expr': self._check_fstring_expr(fstring_content, depth) class _CheckAssignmentRule(SyntaxRule): def _check_assignment(self, node, is_deletion=False, is_namedexpr=False): error = None type_ = node.type if type_ == 'lambdef': error = 'lambda' elif type_ == 'atom': first, second = node.children[:2] error = _get_comprehension_type(node) if error is None: if second.type == 'dictorsetmaker': if self._normalizer.version < (3, 8): error = 'literal' else: if second.children[1] == ':': error = 'dict display' 
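# Illustrative example (3.8+): "{1: 2} = x" ends up as "cannot assign to
# dict display", while "{1, 2} = x" becomes "cannot assign to set display";
# the final message is assembled at the bottom of this method.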
else: error = 'set display' elif first in ('(', '['): if second.type == 'yield_expr': error = 'yield expression' elif second.type == 'testlist_comp': # This is not a comprehension, they were handled # further above. for child in second.children[::2]: self._check_assignment(child, is_deletion, is_namedexpr) else: # Everything handled, must be useless brackets. self._check_assignment(second, is_deletion, is_namedexpr) elif type_ == 'keyword': if self._normalizer.version < (3, 8): error = 'keyword' else: error = str(node.value) elif type_ == 'operator': if node.value == '...': error = 'Ellipsis' elif type_ == 'comparison': error = 'comparison' elif type_ in ('string', 'number', 'strings'): error = 'literal' elif type_ == 'yield_expr': # This one seems to be a slightly different warning in Python. message = 'assignment to yield expression not possible' self.add_issue(node, message=message) elif type_ == 'test': error = 'conditional expression' elif type_ in ('atom_expr', 'power'): if node.children[0] == 'await': error = 'await expression' elif node.children[-2] == '**': error = 'operator' else: # Has a trailer trailer = node.children[-1] assert trailer.type == 'trailer' if trailer.children[0] == '(': error = 'function call' elif is_namedexpr and trailer.children[0] == '[': error = 'subscript' elif is_namedexpr and trailer.children[0] == '.': error = 'attribute' elif type_ in ('testlist_star_expr', 'exprlist', 'testlist'): for child in node.children[::2]: self._check_assignment(child, is_deletion, is_namedexpr) elif ('expr' in type_ and type_ != 'star_expr' # is a substring or '_test' in type_ or type_ in ('term', 'factor')): error = 'operator' if error is not None: if is_namedexpr: message = 'cannot use named assignment with %s' % error else: cannot = "can't" if self._normalizer.version < (3, 8) else "cannot" message = ' '.join([cannot, "delete" if is_deletion else "assign to", error]) self.add_issue(node, message=message) @ErrorFinder.register_rule(type='sync_comp_for') class _CompForRule(_CheckAssignmentRule): message = "asynchronous comprehension outside of an asynchronous function" def is_issue(self, node): expr_list = node.children[1] if expr_list.type != 'expr_list': # Already handled. self._check_assignment(expr_list) return node.parent.children[0] == 'async' \ and not self._normalizer.context.is_async_funcdef() @ErrorFinder.register_rule(type='expr_stmt') class _ExprStmtRule(_CheckAssignmentRule): message = "illegal expression for augmented assignment" def is_issue(self, node): for before_equal in node.children[:-2:2]: self._check_assignment(before_equal) augassign = node.children[1] if augassign != '=' and augassign.type != 'annassign': # Is augassign. return node.children[0].type in ('testlist_star_expr', 'atom', 'testlist') @ErrorFinder.register_rule(type='with_item') class _WithItemRule(_CheckAssignmentRule): def is_issue(self, with_item): self._check_assignment(with_item.children[2]) @ErrorFinder.register_rule(type='del_stmt') class _DelStmtRule(_CheckAssignmentRule): def is_issue(self, del_stmt): child = del_stmt.children[1] if child.type != 'expr_list': # Already handled. 
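# e.g. "del f()" reaches this call and is reported as
# "cannot delete function call" ("can't ..." before 3.8).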
self._check_assignment(child, is_deletion=True) @ErrorFinder.register_rule(type='expr_list') class _ExprListRule(_CheckAssignmentRule): def is_issue(self, expr_list): for expr in expr_list.children[::2]: self._check_assignment(expr) @ErrorFinder.register_rule(type='for_stmt') class _ForStmtRule(_CheckAssignmentRule): def is_issue(self, for_stmt): # Some of the nodes here are already used, so no else if expr_list = for_stmt.children[1] if expr_list.type != 'expr_list': # Already handled. self._check_assignment(expr_list) @ErrorFinder.register_rule(type='namedexpr_test') class _NamedExprRule(_CheckAssignmentRule): # namedexpr_test: test [':=' test] def is_issue(self, namedexpr_test): # assigned name first = namedexpr_test.children[0] def search_namedexpr_in_comp_for(node): while True: parent = node.parent if parent is None: return parent if parent.type == 'sync_comp_for' and parent.children[3] == node: return parent node = parent if search_namedexpr_in_comp_for(namedexpr_test): # [i+1 for i in (i := range(5))] # [i+1 for i in (j := range(5))] # [i+1 for i in (lambda: (j := range(5)))()] message = 'assignment expression cannot be used in a comprehension iterable expression' self.add_issue(namedexpr_test, message=message) # defined names exprlist = list() def process_comp_for(comp_for): if comp_for.type == 'sync_comp_for': comp = comp_for elif comp_for.type == 'comp_for': comp = comp_for.children[1] exprlist.extend(_get_for_stmt_definition_exprs(comp)) def search_all_comp_ancestors(node): has_ancestors = False while True: node = search_ancestor(node, 'testlist_comp', 'dictorsetmaker') if node is None: break for child in node.children: if child.type in _COMP_FOR_TYPES: process_comp_for(child) has_ancestors = True break return has_ancestors # check assignment expressions in comprehensions search_all = search_all_comp_ancestors(namedexpr_test) if search_all: if self._normalizer.context.node.type == 'classdef': message = 'assignment expression within a comprehension ' \ 'cannot be used in a class body' self.add_issue(namedexpr_test, message=message) namelist = [expr.value for expr in exprlist if expr.type == 'name'] if first.type == 'name' and first.value in namelist: # [i := 0 for i, j in range(5)] # [[(i := i) for j in range(5)] for i in range(5)] # [i for i, j in range(5) if True or (i := 1)] # [False and (i := 0) for i, j in range(5)] message = 'assignment expression cannot rebind ' \ 'comprehension iteration variable %r' % first.value self.add_issue(namedexpr_test, message=message) self._check_assignment(first, is_namedexpr=True) parso-0.5.2/parso/python/grammar33.txt0000664000175000017500000001375613575273707017562 0ustar davedave00000000000000# Grammar for Python # Note: Changing the grammar specified in this file will most likely # require corresponding changes in the parser module # (../Modules/parsermodule.c). If you can't make the changes to # that module yourself, please co-ordinate the required changes # with someone who can; ask around on python-dev for help. Fred # Drake will probably be listening there. # NOTE WELL: You should also follow all the steps listed in PEP 306, # "How to Change Python's Grammar" # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
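# Illustrative note (not part of the grammar): parso picks the start symbol
# itself; plain parso.parse()/Grammar.parse() use file_input by default.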
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef) funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' ['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef) tfpdef: NAME [':' test] varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' ['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. 
It's here for the # sake of a __future__ import described in PEP 401 comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom trailer* ['**' factor] atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') strings: STRING+ testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) | (test (sync_comp_for | (',' test)* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: (argument ',')* (argument [','] |'*' test (',' argument)* [',' '**' test] |'**' test) # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. argument: test [sync_comp_for] | test '=' test # Really [keyword '='] test comp_iter: sync_comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist parso-0.5.2/parso/python/tokenize.py0000664000175000017500000006447613575273707017434 0ustar davedave00000000000000# -*- coding: utf-8 -*- """ This tokenizer has been copied from the ``tokenize.py`` standard library tokenizer. The reason was simple: The standard library tokenizer fails if the indentation is not right. To make it possible to do error recovery the tokenizer needed to be rewritten. Basically this is a stripped down version of the standard library module, so you can read the documentation there. Additionally we included some speed and memory optimizations here. 
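A minimal usage sketch (the token stream shown is illustrative; ``tokenize``
is defined further down in this module):

    >>> for token in tokenize(u'1 +', version_info=(3, 6)):
    ...     token  # doctest: +SKIP

Even for incomplete code such as ``'1 +'`` this yields NUMBER, OP and
ENDMARKER tokens instead of raising, which is what makes error recovery
in the parser possible.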
""" from __future__ import absolute_import import sys import string import re from collections import namedtuple import itertools as _itertools from codecs import BOM_UTF8 from parso.python.token import PythonTokenTypes from parso._compatibility import py_version from parso.utils import split_lines # Maximum code point of Unicode 6.0: 0x10ffff (1,114,111) MAX_UNICODE = '\U0010ffff' STRING = PythonTokenTypes.STRING NAME = PythonTokenTypes.NAME NUMBER = PythonTokenTypes.NUMBER OP = PythonTokenTypes.OP NEWLINE = PythonTokenTypes.NEWLINE INDENT = PythonTokenTypes.INDENT DEDENT = PythonTokenTypes.DEDENT ENDMARKER = PythonTokenTypes.ENDMARKER ERRORTOKEN = PythonTokenTypes.ERRORTOKEN ERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT FSTRING_START = PythonTokenTypes.FSTRING_START FSTRING_STRING = PythonTokenTypes.FSTRING_STRING FSTRING_END = PythonTokenTypes.FSTRING_END TokenCollection = namedtuple( 'TokenCollection', 'pseudo_token single_quoted triple_quoted endpats whitespace ' 'fstring_pattern_map always_break_tokens', ) BOM_UTF8_STRING = BOM_UTF8.decode('utf-8') _token_collection_cache = {} if py_version >= 30: # Python 3 has str.isidentifier() to check if a char is a valid identifier is_identifier = str.isidentifier else: # Python 2 doesn't, but it's not that important anymore and if you tokenize # Python 2 code with this, it's still ok. It's just that parsing Python 3 # code with this function is not 100% correct. # This just means that Python 2 code matches a few identifiers too much, # but that doesn't really matter. def is_identifier(s): return True def group(*choices, **kwargs): capture = kwargs.pop('capture', False) # Python 2, arrghhhhh :( assert not kwargs start = '(' if not capture: start += '?:' return start + '|'.join(choices) + ')' def maybe(*choices): return group(*choices) + '?' # Return the empty string, plus all of the valid string prefixes. def _all_string_prefixes(version_info, include_fstring=False, only_fstring=False): def different_case_versions(prefix): for s in _itertools.product(*[(c, c.upper()) for c in prefix]): yield ''.join(s) # The valid string prefixes. Only contain the lower case versions, # and don't contain any permuations (include 'fr', but not # 'rf'). The various permutations will be generated. valid_string_prefixes = ['b', 'r', 'u'] if version_info >= (3, 0): valid_string_prefixes.append('br') result = set(['']) if version_info >= (3, 6) and include_fstring: f = ['f', 'fr'] if only_fstring: valid_string_prefixes = f result = set() else: valid_string_prefixes += f elif only_fstring: return set() # if we add binary f-strings, add: ['fb', 'fbr'] for prefix in valid_string_prefixes: for t in _itertools.permutations(prefix): # create a list with upper and lower versions of each # character result.update(different_case_versions(t)) if version_info <= (2, 7): # In Python 2 the order cannot just be random. 
result.update(different_case_versions('ur')) result.update(different_case_versions('br')) return result def _compile(expr): return re.compile(expr, re.UNICODE) def _get_token_collection(version_info): try: return _token_collection_cache[tuple(version_info)] except KeyError: _token_collection_cache[tuple(version_info)] = result = \ _create_token_collection(version_info) return result fstring_string_single_line = _compile(r'(?:\{\{|\}\}|\\(?:\r\n?|\n)|[^{}\r\n])+') fstring_string_multi_line = _compile(r'(?:[^{}]+|\{\{|\}\})+') fstring_format_spec_single_line = _compile(r'(?:\\(?:\r\n?|\n)|[^{}\r\n])+') fstring_format_spec_multi_line = _compile(r'[^{}]+') def _create_token_collection(version_info): # Note: we use unicode matching for names ("\w") but ascii matching for # number literals. Whitespace = r'[ \f\t]*' whitespace = _compile(Whitespace) Comment = r'#[^\r\n]*' # Python 2 is pretty much not working properly anymore, we just ignore # parsing unicode properly, which is fine, I guess. if version_info[0] == 2: Name = r'([A-Za-z_0-9]+)' elif sys.version_info[0] == 2: # Unfortunately the regex engine cannot deal with the regex below, so # just use this one. Name = r'(\w+)' else: Name = u'([A-Za-z_0-9\u0080-' + MAX_UNICODE + ']+)' if version_info >= (3, 6): Hexnumber = r'0[xX](?:_?[0-9a-fA-F])+' Binnumber = r'0[bB](?:_?[01])+' Octnumber = r'0[oO](?:_?[0-7])+' Decnumber = r'(?:0(?:_?0)*|[1-9](?:_?[0-9])*)' Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber) Exponent = r'[eE][-+]?[0-9](?:_?[0-9])*' Pointfloat = group(r'[0-9](?:_?[0-9])*\.(?:[0-9](?:_?[0-9])*)?', r'\.[0-9](?:_?[0-9])*') + maybe(Exponent) Expfloat = r'[0-9](?:_?[0-9])*' + Exponent Floatnumber = group(Pointfloat, Expfloat) Imagnumber = group(r'[0-9](?:_?[0-9])*[jJ]', Floatnumber + r'[jJ]') else: Hexnumber = r'0[xX][0-9a-fA-F]+' Binnumber = r'0[bB][01]+' if version_info >= (3, 0): Octnumber = r'0[oO][0-7]+' else: Octnumber = '0[oO]?[0-7]+' Decnumber = r'(?:0+|[1-9][0-9]*)' Intnumber = group(Hexnumber, Binnumber, Octnumber, Decnumber) if version_info[0] < 3: Intnumber += '[lL]?' Exponent = r'[eE][-+]?[0-9]+' Pointfloat = group(r'[0-9]+\.[0-9]*', r'\.[0-9]+') + maybe(Exponent) Expfloat = r'[0-9]+' + Exponent Floatnumber = group(Pointfloat, Expfloat) Imagnumber = group(r'[0-9]+[jJ]', Floatnumber + r'[jJ]') Number = group(Imagnumber, Floatnumber, Intnumber) # Note that since _all_string_prefixes includes the empty string, # StringPrefix can be the empty string (making it optional). possible_prefixes = _all_string_prefixes(version_info) StringPrefix = group(*possible_prefixes) StringPrefixWithF = group(*_all_string_prefixes(version_info, include_fstring=True)) fstring_prefixes = _all_string_prefixes(version_info, include_fstring=True, only_fstring=True) FStringStart = group(*fstring_prefixes) # Tail end of ' string. Single = r"(?:\\.|[^'\\])*'" # Tail end of " string. Double = r'(?:\\.|[^"\\])*"' # Tail end of ''' string. Single3 = r"(?:\\.|'(?!'')|[^'\\])*'''" # Tail end of """ string. Double3 = r'(?:\\.|"(?!"")|[^"\\])*"""' Triple = group(StringPrefixWithF + "'''", StringPrefixWithF + '"""') # Because of leftmost-then-longest match semantics, be sure to put the # longest operators first (e.g., if = came before ==, == would get # recognized as two instances of =). 
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"//=?", r"->", r"[+\-*/%&@`|^!=<>]=?", r"~") Bracket = '[][(){}]' special_args = [r'\r\n?', r'\n', r'[;.,@]'] if version_info >= (3, 0): special_args.insert(0, r'\.\.\.') if version_info >= (3, 8): special_args.insert(0, ":=?") else: special_args.insert(0, ":") Special = group(*special_args) Funny = group(Operator, Bracket, Special) # First (or only) line of ' or " string. ContStr = group(StringPrefix + r"'[^\r\n'\\]*(?:\\.[^\r\n'\\]*)*" + group("'", r'\\(?:\r\n?|\n)'), StringPrefix + r'"[^\r\n"\\]*(?:\\.[^\r\n"\\]*)*' + group('"', r'\\(?:\r\n?|\n)')) pseudo_extra_pool = [Comment, Triple] all_quotes = '"', "'", '"""', "'''" if fstring_prefixes: pseudo_extra_pool.append(FStringStart + group(*all_quotes)) PseudoExtras = group(r'\\(?:\r\n?|\n)|\Z', *pseudo_extra_pool) PseudoToken = group(Whitespace, capture=True) + \ group(PseudoExtras, Number, Funny, ContStr, Name, capture=True) # For a given string prefix plus quotes, endpats maps it to a regex # to match the remainder of that string. _prefix can be empty, for # a normal single or triple quoted string (with no prefix). endpats = {} for _prefix in possible_prefixes: endpats[_prefix + "'"] = _compile(Single) endpats[_prefix + '"'] = _compile(Double) endpats[_prefix + "'''"] = _compile(Single3) endpats[_prefix + '"""'] = _compile(Double3) # A set of all of the single and triple quoted string prefixes, # including the opening quotes. single_quoted = set() triple_quoted = set() fstring_pattern_map = {} for t in possible_prefixes: for quote in '"', "'": single_quoted.add(t + quote) for quote in '"""', "'''": triple_quoted.add(t + quote) for t in fstring_prefixes: for quote in all_quotes: fstring_pattern_map[t + quote] = quote ALWAYS_BREAK_TOKENS = (';', 'import', 'class', 'def', 'try', 'except', 'finally', 'while', 'with', 'return') pseudo_token_compiled = _compile(PseudoToken) return TokenCollection( pseudo_token_compiled, single_quoted, triple_quoted, endpats, whitespace, fstring_pattern_map, ALWAYS_BREAK_TOKENS ) class Token(namedtuple('Token', ['type', 'string', 'start_pos', 'prefix'])): @property def end_pos(self): lines = split_lines(self.string) if len(lines) > 1: return self.start_pos[0] + len(lines) - 1, 0 else: return self.start_pos[0], self.start_pos[1] + len(self.string) class PythonToken(Token): def __repr__(self): return ('TokenInfo(type=%s, string=%r, start_pos=%r, prefix=%r)' % self._replace(type=self.type.name)) class FStringNode(object): def __init__(self, quote): self.quote = quote self.parentheses_count = 0 self.previous_lines = '' self.last_string_start_pos = None # In the syntax there can be multiple format_spec's nested: # {x:{y:3}} self.format_spec_count = 0 def open_parentheses(self, character): self.parentheses_count += 1 def close_parentheses(self, character): self.parentheses_count -= 1 if self.parentheses_count == 0: # No parentheses means that the format spec is also finished. 
self.format_spec_count = 0 def allow_multiline(self): return len(self.quote) == 3 def is_in_expr(self): return self.parentheses_count > self.format_spec_count def is_in_format_spec(self): return not self.is_in_expr() and self.format_spec_count def _close_fstring_if_necessary(fstring_stack, string, start_pos, additional_prefix): for fstring_stack_index, node in enumerate(fstring_stack): lstripped_string = string.lstrip() len_lstrip = len(string) - len(lstripped_string) if lstripped_string.startswith(node.quote): token = PythonToken( FSTRING_END, node.quote, start_pos, prefix=additional_prefix+string[:len_lstrip], ) additional_prefix = '' assert not node.previous_lines del fstring_stack[fstring_stack_index:] return token, '', len(node.quote) + len_lstrip return None, additional_prefix, 0 def _find_fstring_string(endpats, fstring_stack, line, lnum, pos): tos = fstring_stack[-1] allow_multiline = tos.allow_multiline() if tos.is_in_format_spec(): if allow_multiline: regex = fstring_format_spec_multi_line else: regex = fstring_format_spec_single_line else: if allow_multiline: regex = fstring_string_multi_line else: regex = fstring_string_single_line match = regex.match(line, pos) if match is None: return tos.previous_lines, pos if not tos.previous_lines: tos.last_string_start_pos = (lnum, pos) string = match.group(0) for fstring_stack_node in fstring_stack: end_match = endpats[fstring_stack_node.quote].match(string) if end_match is not None: string = end_match.group(0)[:-len(fstring_stack_node.quote)] new_pos = pos new_pos += len(string) # even if allow_multiline is False, we still need to check for trailing # newlines, because a single-line f-string can contain line continuations if string.endswith('\n') or string.endswith('\r'): tos.previous_lines += string string = '' else: string = tos.previous_lines + string return string, new_pos def tokenize(code, version_info, start_pos=(1, 0)): """Generate tokens from the source code (string).""" lines = split_lines(code, keepends=True) return tokenize_lines(lines, version_info, start_pos=start_pos) def _print_tokens(func): """ A small helper function to help debug the tokenize_lines function. """ def wrapper(*args, **kwargs): for token in func(*args, **kwargs): yield token return wrapper # @_print_tokens def tokenize_lines(lines, version_info, start_pos=(1, 0)): """ A heavily modified Python standard library tokenizer. In addition to the default information, this also yields the prefix of each token. This idea comes from lib2to3. The prefix contains all information that is irrelevant for the parser like newlines in parentheses or comments. """ def dedent_if_necessary(start): while start < indents[-1]: if start > indents[-2]: yield PythonToken(ERROR_DEDENT, '', (lnum, 0), '') break yield PythonToken(DEDENT, '', spos, '') indents.pop() pseudo_token, single_quoted, triple_quoted, endpats, whitespace, \ fstring_pattern_map, always_break_tokens, = \ _get_token_collection(version_info) paren_level = 0 # count parentheses indents = [0] max = 0 numchars = '0123456789' contstr = '' contline = None # We start with a newline. This makes indent at the first position # possible. It's not valid Python, but still better than an INDENT in the # second line (and not in the first). This makes quite a few things in # Jedi's fast parser possible.
new_line = True prefix = '' # Should never be required, but here for safety additional_prefix = '' first = True lnum = start_pos[0] - 1 fstring_stack = [] for line in lines: # loop over lines in stream lnum += 1 pos = 0 max = len(line) if first: if line.startswith(BOM_UTF8_STRING): additional_prefix = BOM_UTF8_STRING line = line[1:] max = len(line) # Fake that the part before was already parsed. line = '^' * start_pos[1] + line pos = start_pos[1] max += start_pos[1] first = False if contstr: # continued string endmatch = endprog.match(line) if endmatch: pos = endmatch.end(0) yield PythonToken( STRING, contstr + line[:pos], contstr_start, prefix) contstr = '' contline = None else: contstr = contstr + line contline = contline + line continue while pos < max: if fstring_stack: tos = fstring_stack[-1] if not tos.is_in_expr(): string, pos = _find_fstring_string(endpats, fstring_stack, line, lnum, pos) if string: yield PythonToken( FSTRING_STRING, string, tos.last_string_start_pos, # Never has a prefix because it can start anywhere and # include whitespace. prefix='' ) tos.previous_lines = '' continue if pos == max: break rest = line[pos:] fstring_end_token, additional_prefix, quote_length = _close_fstring_if_necessary( fstring_stack, rest, (lnum, pos), additional_prefix, ) pos += quote_length if fstring_end_token is not None: yield fstring_end_token continue # in an f-string, match until the end of the string if fstring_stack: string_line = line for fstring_stack_node in fstring_stack: quote = fstring_stack_node.quote end_match = endpats[quote].match(line, pos) if end_match is not None: end_match_string = end_match.group(0) if len(end_match_string) - len(quote) + pos < len(string_line): string_line = line[:pos] + end_match_string[:-len(quote)] pseudomatch = pseudo_token.match(string_line, pos) else: pseudomatch = pseudo_token.match(line, pos) if not pseudomatch: # scan for tokens match = whitespace.match(line, pos) if pos == 0: for t in dedent_if_necessary(match.end()): yield t pos = match.end() new_line = False yield PythonToken( ERRORTOKEN, line[pos], (lnum, pos), additional_prefix + match.group(0) ) additional_prefix = '' pos += 1 continue prefix = additional_prefix + pseudomatch.group(1) additional_prefix = '' start, pos = pseudomatch.span(2) spos = (lnum, start) token = pseudomatch.group(2) if token == '': assert prefix additional_prefix = prefix # This means that we have a line with whitespace/comments at # the end, which just results in an endmarker. break initial = token[0] if new_line and initial not in '\r\n\\#': new_line = False if paren_level == 0 and not fstring_stack: i = 0 indent_start = start while line[i] == '\f': i += 1 # TODO don't we need to change spos as well? indent_start -= 1 if indent_start > indents[-1]: yield PythonToken(INDENT, '', spos, '') indents.append(indent_start) for t in dedent_if_necessary(indent_start): yield t if (initial in numchars or # ordinary number (initial == '.' and token != '.' and token != '...')): yield PythonToken(NUMBER, token, spos, prefix) elif pseudomatch.group(3) is not None: # ordinary name if token in always_break_tokens: fstring_stack[:] = [] paren_level = 0 # We only want to dedent if the token is on a new line. 
if re.match(r'[ \f\t]*$', line[:start]): while True: indent = indents.pop() if indent > start: yield PythonToken(DEDENT, '', spos, '') else: indents.append(indent) break if is_identifier(token): yield PythonToken(NAME, token, spos, prefix) else: for t in _split_illegal_unicode_name(token, spos, prefix): yield t # yield from Python 2 elif initial in '\r\n': if any(not f.allow_multiline() for f in fstring_stack): # Would use fstring_stack.clear, but that's not available # in Python 2. fstring_stack[:] = [] if not new_line and paren_level == 0 and not fstring_stack: yield PythonToken(NEWLINE, token, spos, prefix) else: additional_prefix = prefix + token new_line = True elif initial == '#': # Comments assert not token.endswith("\n") if fstring_stack and fstring_stack[-1].is_in_expr(): # `#` is not allowed in f-string expressions yield PythonToken(ERRORTOKEN, initial, spos, prefix) pos = start + 1 else: additional_prefix = prefix + token elif token in triple_quoted: endprog = endpats[token] endmatch = endprog.match(line, pos) if endmatch: # all on one line pos = endmatch.end(0) token = line[start:pos] yield PythonToken(STRING, token, spos, prefix) else: contstr_start = (lnum, start) # multiple lines contstr = line[start:] contline = line break # Check up to the first 3 chars of the token to see if # they're in the single_quoted set. If so, they start # a string. # We're using the first 3, because we're looking for # "rb'" (for example) at the start of the token. If # we switch to longer prefixes, this needs to be # adjusted. # Note that initial == token[:1]. # Also note that single quote checking must come after # triple quote checking (above). elif initial in single_quoted or \ token[:2] in single_quoted or \ token[:3] in single_quoted: if token[-1] in '\r\n': # continued string # This means that a single quoted string ends with a # backslash and is continued. contstr_start = lnum, start endprog = (endpats.get(initial) or endpats.get(token[1]) or endpats.get(token[2])) contstr = line[start:] contline = line break else: # ordinary string yield PythonToken(STRING, token, spos, prefix) elif token in fstring_pattern_map: # The start of an fstring. fstring_stack.append(FStringNode(fstring_pattern_map[token])) yield PythonToken(FSTRING_START, token, spos, prefix) elif initial == '\\' and line[start:] in ('\\\n', '\\\r\n', '\\\r'): # continued stmt additional_prefix += prefix + line[start:] break else: if token in '([{': if fstring_stack: fstring_stack[-1].open_parentheses(token) else: paren_level += 1 elif token in ')]}': if fstring_stack: fstring_stack[-1].close_parentheses(token) else: if paren_level: paren_level -= 1 elif token.startswith(':') and fstring_stack \ and fstring_stack[-1].parentheses_count \ - fstring_stack[-1].format_spec_count == 1: # `:` and `:=` both count fstring_stack[-1].format_spec_count += 1 token = ':' pos = start + 1 yield PythonToken(OP, token, spos, prefix) if contstr: yield PythonToken(ERRORTOKEN, contstr, contstr_start, prefix) if contstr.endswith('\n') or contstr.endswith('\r'): new_line = True end_pos = lnum, max # As the last position we just take the maximally possible position. We # remove -1 for the last new line. 
for indent in indents[1:]: yield PythonToken(DEDENT, '', end_pos, '') yield PythonToken(ENDMARKER, '', end_pos, additional_prefix) def _split_illegal_unicode_name(token, start_pos, prefix): def create_token(): return PythonToken(ERRORTOKEN if is_illegal else NAME, found, pos, prefix) found = '' is_illegal = False pos = start_pos for i, char in enumerate(token): if is_illegal: if is_identifier(char): yield create_token() found = char is_illegal = False prefix = '' pos = start_pos[0], start_pos[1] + i else: found += char else: new_found = found + char if is_identifier(new_found): found = new_found else: if found: yield create_token() prefix = '' pos = start_pos[0], start_pos[1] + i found = char is_illegal = True if found: yield create_token() if __name__ == "__main__": if len(sys.argv) >= 2: path = sys.argv[1] with open(path) as f: code = f.read() else: code = sys.stdin.read() from parso.utils import python_bytes_to_unicode, parse_version_string if isinstance(code, bytes): code = python_bytes_to_unicode(code) for token in tokenize(code, parse_version_string()): print(token) parso-0.5.2/parso/python/token.py0000664000175000017500000000145313575273707016706 0ustar davedave00000000000000from __future__ import absolute_import class TokenType(object): def __init__(self, name, contains_syntax=False): self.name = name self.contains_syntax = contains_syntax def __repr__(self): return '%s(%s)' % (self.__class__.__name__, self.name) class TokenTypes(object): """ Basically an enum, but Python 2 doesn't have enums in the standard library. """ def __init__(self, names, contains_syntax): for name in names: setattr(self, name, TokenType(name, contains_syntax=name in contains_syntax)) PythonTokenTypes = TokenTypes(( 'STRING', 'NUMBER', 'NAME', 'ERRORTOKEN', 'NEWLINE', 'INDENT', 'DEDENT', 'ERROR_DEDENT', 'FSTRING_STRING', 'FSTRING_START', 'FSTRING_END', 'OP', 'ENDMARKER'), contains_syntax=('NAME', 'OP'), ) parso-0.5.2/parso/python/prefix.py0000664000175000017500000000454513575273707017070 0ustar davedave00000000000000import re from codecs import BOM_UTF8 from parso.python.tokenize import group unicode_bom = BOM_UTF8.decode('utf-8') class PrefixPart(object): def __init__(self, leaf, typ, value, spacing='', start_pos=None): assert start_pos is not None self.parent = leaf self.type = typ self.value = value self.spacing = spacing self.start_pos = start_pos @property def end_pos(self): if self.value.endswith('\n'): return self.start_pos[0] + 1, 0 if self.value == unicode_bom: # The bom doesn't have a length at the start of a Python file. 
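# e.g. a bom part at (1, 0) also ends at (1, 0), so the following
# prefix parts still start in column 0.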
return self.start_pos return self.start_pos[0], self.start_pos[1] + len(self.value) def create_spacing_part(self): column = self.start_pos[1] - len(self.spacing) return PrefixPart( self.parent, 'spacing', self.spacing, start_pos=(self.start_pos[0], column) ) def __repr__(self): return '%s(%s, %s, %s)' % ( self.__class__.__name__, self.type, repr(self.value), self.start_pos ) _comment = r'#[^\n\r\f]*' _backslash = r'\\\r?\n' _newline = r'\r?\n' _form_feed = r'\f' _only_spacing = '$' _spacing = r'[ \t]*' _bom = unicode_bom _regex = group( _comment, _backslash, _newline, _form_feed, _only_spacing, _bom, capture=True ) _regex = re.compile(group(_spacing, capture=True) + _regex) _types = { '#': 'comment', '\\': 'backslash', '\f': 'formfeed', '\n': 'newline', '\r': 'newline', unicode_bom: 'bom' } def split_prefix(leaf, start_pos): line, column = start_pos start = 0 value = spacing = '' bom = False while start != len(leaf.prefix): match =_regex.match(leaf.prefix, start) spacing = match.group(1) value = match.group(2) if not value: break type_ = _types[value[0]] yield PrefixPart( leaf, type_, value, spacing, start_pos=(line, column + start - int(bom) + len(spacing)) ) if type_ == 'bom': bom = True start = match.end(0) if value.endswith('\n'): line += 1 column = -start if value: spacing = '' yield PrefixPart( leaf, 'spacing', spacing, start_pos=(line, column + start) ) parso-0.5.2/parso/python/grammar26.txt0000664000175000017500000001443213575273707017554 0ustar davedave00000000000000# Grammar for Python # Note: Changing the grammar specified in this file will most likely # require corresponding changes in the parser module # (../Modules/parsermodule.c). If you can't make the changes to # that module yourself, please co-ordinate the required changes # with someone who can; ask around on python-dev for help. Fred # Drake will probably be listening there. # NOTE WELL: You should also follow all the steps listed in PEP 306, # "How to Change Python's Grammar" # Commands for Kees Blom's railroad program #diagram:token NAME #diagram:token NUMBER #diagram:token STRING #diagram:token NEWLINE #diagram:token ENDMARKER #diagram:token INDENT #diagram:output\input python.bla #diagram:token DEDENT #diagram:output\textwidth 20.04cm\oddsidemargin 0.0cm\evensidemargin 0.0cm #diagram:rules # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() and input() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef) funcdef: 'def' NAME parameters ':' suite parameters: '(' [varargslist] ')' varargslist: ((fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']) fpdef: NAME | '(' fplist ')' fplist: fpdef (',' fpdef)* [','] stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | exec_stmt | assert_stmt) expr_stmt: testlist (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist))*) augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter print_stmt: 'print' ( [ test (',' test)* [','] ] | '>>' test [ (',' test)+ [','] ] ) del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test [',' test [',' test]]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names import_from: ('from' ('.'* dotted_name | '.'+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* exec_stmt: 'exec' expr ['in' test [',' test]] assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item ':' suite # Dave: Python2.6 actually defines a little bit of a different label called # 'with_var'. However in 2.7+ this is the default. Apply it for # consistency reasons. 
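# Illustrative consequence: this 2.6 grammar accepts only a single
# with_item, so "with a as x, b as y: pass" needs the 2.7+ grammars.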
with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test [('as' | ',') test]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT # Backward compatibility cruft to support: # [ x for x in lambda: True, lambda: False if x() ] # even while also allowing: # lambda x: 5 if x else 2 # (But not a mix of the two) testlist_safe: old_test [(',' old_test)+ [',']] old_test: or_test | old_lambdef old_lambdef: 'lambda' [varargslist] ':' old_test test: or_test ['if' or_test 'else' test] | lambdef or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom trailer* ['**' factor] atom: ('(' [yield_expr|testlist_comp] ')' | '[' [listmaker] ']' | '{' [dictorsetmaker] '}' | '`' testlist1 '`' | NAME | NUMBER | strings) strings: STRING+ listmaker: test ( list_for | (',' test)* [','] ) # Dave: Renamed testlist_gexpr to testlist_comp, because in 2.7+ this is the # default. It's more consistent like this. testlist_comp: test ( gen_for | (',' test)* [','] ) lambdef: 'lambda' [varargslist] ':' test trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: expr (',' expr)* [','] testlist: test (',' test)* [','] # Dave: Rename from dictmaker to dictorsetmaker, because this is more # consistent with the following grammars. dictorsetmaker: test ':' test (',' test ':' test)* [','] classdef: 'class' NAME ['(' [testlist] ')'] ':' suite arglist: (argument ',')* (argument [','] |'*' test (',' argument)* [',' '**' test] |'**' test) argument: test [gen_for] | test '=' test # Really [keyword '='] test list_iter: list_for | list_if list_for: 'for' exprlist 'in' testlist_safe [list_iter] list_if: 'if' old_test [list_iter] gen_iter: gen_for | gen_if gen_for: 'for' exprlist 'in' or_test [gen_iter] gen_if: 'if' old_test [gen_iter] testlist1: test (',' test)* # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [testlist] parso-0.5.2/parso/python/__init__.py0000664000175000017500000000000013575273707017310 0ustar davedave00000000000000parso-0.5.2/parso/python/grammar27.txt0000664000175000017500000001351013575273707017551 0ustar davedave00000000000000# Grammar for Python # Note: Changing the grammar specified in this file will most likely # require corresponding changes in the parser module # (../Modules/parsermodule.c). If you can't make the changes to # that module yourself, please co-ordinate the required changes # with someone who can; ask around on python-dev for help. Fred # Drake will probably be listening there. # NOTE WELL: You should also follow all the steps listed in PEP 306, # "How to Change Python's Grammar" # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() and input() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef) funcdef: 'def' NAME parameters ':' suite parameters: '(' [varargslist] ')' varargslist: ((fpdef ['=' test] ',')* ('*' NAME [',' '**' NAME] | '**' NAME) | fpdef ['=' test] (',' fpdef ['=' test])* [',']) fpdef: NAME | '(' fplist ')' fplist: fpdef (',' fpdef)* [','] stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | exec_stmt | assert_stmt) expr_stmt: testlist (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist))*) augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter print_stmt: 'print' ( [ test (',' test)* [','] ] | '>>' test [ (',' test)+ [','] ] ) del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test [',' test [',' test]]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names import_from: ('from' ('.'* dotted_name | '.'+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* exec_stmt: 'exec' expr ['in' test [',' test]] assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test [('as' | ',') test]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT # Backward compatibility cruft to support: # [ x for x in lambda: True, lambda: False if x() ] # even while also allowing: # lambda x: 5 if x else 2 # (But not a mix of the two) testlist_safe: old_test [(',' old_test)+ [',']] old_test: or_test | old_lambdef old_lambdef: 'lambda' [varargslist] ':' old_test test: or_test ['if' or_test 'else' test] | lambdef or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom trailer* ['**' factor] atom: ('(' [yield_expr|testlist_comp] ')' | '[' [listmaker] ']' | '{' [dictorsetmaker] '}' | '`' testlist1 '`' | NAME | 
NUMBER | strings) strings: STRING+ listmaker: test ( list_for | (',' test)* [','] ) testlist_comp: test ( sync_comp_for | (',' test)* [','] ) lambdef: 'lambda' [varargslist] ':' test trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: '.' '.' '.' | test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: expr (',' expr)* [','] testlist: test (',' test)* [','] dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) | (test (sync_comp_for | (',' test)* [','])) ) classdef: 'class' NAME ['(' [testlist] ')'] ':' suite arglist: (argument ',')* (argument [','] |'*' test (',' argument)* [',' '**' test] |'**' test) # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. argument: test [sync_comp_for] | test '=' test list_iter: list_for | list_if list_for: 'for' exprlist 'in' testlist_safe [list_iter] list_if: 'if' old_test [list_iter] comp_iter: sync_comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_if: 'if' old_test [comp_iter] testlist1: test (',' test)* # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [testlist] parso-0.5.2/parso/python/grammar36.txt0000664000175000017500000001542713575273707017562 0ustar davedave00000000000000# Grammar for Python # NOTE WELL: You should also follow all the steps listed at # https://docs.python.org/devguide/grammar.html # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) # NOTE: Francisco Souza/Reinoud Elhorst, using ASYNC/'await' keywords instead of # skipping python3.5+ compatibility, in favour of 3.7 solution async_funcdef: 'async' funcdef funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']) tfpdef: NAME [':' test] varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [','] ) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) annassign: ':' test ['=' test] testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal and annotated assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' 
NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: 'async' (funcdef | with_stmt | for_stmt) if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. It's here for the # sake of a __future__ import described in PEP 401 (which really works :-) comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'@'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom_expr ['**' factor] atom_expr: ['await'] atom trailer* atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( ((test ':' test | '**' expr) (comp_for | (',' (test ':' test | '**' expr))* [','])) | ((test | star_expr) (comp_for | (',' (test | star_expr))* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: argument (',' argument)* [','] # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. # "test '=' test" is really "keyword '=' test", but we have no such token. # These need to be in a single rule to avoid grammar that is ambiguous # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, # we explicitly match '*' here, too, to give it proper precedence. # Illegal combinations and orderings are blocked in ast.c: # multiple (test comp_for) arguments are blocked; keyword unpackings # that precede iterable unpackings are blocked; etc. 
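# Illustrative: calls like "f(**b, *a)" or "f(x for x in y, z)" still parse
# under this rule; the bad orderings are rejected afterwards (by ast.c in
# CPython, and by the _ArglistRule error checks in parso).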
argument: ( test [comp_for] | test '=' test | '**' test | '*' test ) comp_iter: comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_for: ['async'] sync_comp_for comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist strings: (STRING | fstring)+ fstring: FSTRING_START fstring_content* FSTRING_END fstring_content: FSTRING_STRING | fstring_expr fstring_conversion: '!' NAME fstring_expr: '{' testlist_comp [ fstring_conversion ] [ fstring_format_spec ] '}' fstring_format_spec: ':' fstring_content* parso-0.5.2/parso/python/grammar37.txt0000664000175000017500000001520213575273707017552 0ustar davedave00000000000000# Grammar for Python # NOTE WELL: You should also follow all the steps listed at # https://docs.python.org/devguide/grammar.html # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) async_funcdef: 'async' funcdef funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']) tfpdef: NAME [':' test] varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [','] ) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) annassign: ':' test ['=' test] testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal and annotated assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' 
NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: 'async' (funcdef | with_stmt | for_stmt) if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. It's here for the # sake of a __future__ import described in PEP 401 (which really works :-) comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'@'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom_expr ['**' factor] atom_expr: ['await'] atom trailer* atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( ((test ':' test | '**' expr) (comp_for | (',' (test ':' test | '**' expr))* [','])) | ((test | star_expr) (comp_for | (',' (test | star_expr))* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: argument (',' argument)* [','] # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. # "test '=' test" is really "keyword '=' test", but we have no such token. # These need to be in a single rule to avoid grammar that is ambiguous # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, # we explicitly match '*' here, too, to give it proper precedence. # Illegal combinations and orderings are blocked in ast.c: # multiple (test comp_for) arguments are blocked; keyword unpackings # that precede iterable unpackings are blocked; etc. 
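# Illustrative mapping for the argument rule below (not part of the grammar):
# f(x) matches plain test, f(k=v) matches test '=' test, f(*a) and f(**kw)
# match the starred forms, and f(x for x in y) matches test [comp_for].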
argument: ( test [comp_for] | test '=' test | '**' test | '*' test ) comp_iter: comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_for: ['async'] sync_comp_for comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist strings: (STRING | fstring)+ fstring: FSTRING_START fstring_content* FSTRING_END fstring_content: FSTRING_STRING | fstring_expr fstring_conversion: '!' NAME fstring_expr: '{' testlist [ fstring_conversion ] [ fstring_format_spec ] '}' fstring_format_spec: ':' fstring_content* parso-0.5.2/parso/python/grammar39.txt0000664000175000017500000001657213575273707017567 0ustar davedave00000000000000# Grammar for Python # NOTE WELL: You should also follow all the steps listed at # https://devguide.python.org/grammar/ # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) async_funcdef: 'async' funcdef funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: ( (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] ( ',' tfpdef ['=' test])* ([',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]]) | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]]) | '**' tfpdef [',']]] ) | (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']) ) tfpdef: NAME [':' test] varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [','] ) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) annassign: ':' test ['=' test] testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal and annotated assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist_star_expr] yield_stmt: yield_expr raise_stmt: 'raise' [test 
['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: 'async' (funcdef | with_stmt | for_stmt) if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite] while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT namedexpr_test: test [':=' test] test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. It's here for the # sake of a __future__ import described in PEP 401 (which really works :-) comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'@'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom_expr ['**' factor] atom_expr: ['await'] atom trailer* atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') testlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( ((test ':' test | '**' expr) (comp_for | (',' (test ':' test | '**' expr))* [','])) | ((test | star_expr) (comp_for | (',' (test | star_expr))* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: argument (',' argument)* [','] # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. # "test '=' test" is really "keyword '=' test", but we have no such token. # These need to be in a single rule to avoid grammar that is ambiguous # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, # we explicitly match '*' here, too, to give it proper precedence. 
# Illegal combinations and orderings are blocked in ast.c: # multiple (test comp_for) arguments are blocked; keyword unpackings # that precede iterable unpackings are blocked; etc. argument: ( test [comp_for] | test ':=' test | test '=' test | '**' test | '*' test ) comp_iter: comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_for: ['async'] sync_comp_for comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist_star_expr strings: (STRING | fstring)+ fstring: FSTRING_START fstring_content* FSTRING_END fstring_content: FSTRING_STRING | fstring_expr fstring_conversion: '!' NAME fstring_expr: '{' testlist ['='] [ fstring_conversion ] [ fstring_format_spec ] '}' fstring_format_spec: ':' fstring_content* parso-0.5.2/parso/python/grammar38.txt0000664000175000017500000001657213575273707017566 0ustar davedave00000000000000# Grammar for Python # NOTE WELL: You should also follow all the steps listed at # https://devguide.python.org/grammar/ # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) async_funcdef: 'async' funcdef funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: ( (tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] ( ',' tfpdef ['=' test])* ([',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]]) | '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]]) | '**' tfpdef [',']]] ) | (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [ '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]] | '**' tfpdef [',']) ) tfpdef: NAME [':' test] varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [ '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [',']]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]] | '**' vfpdef [','] ) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) annassign: ':' test ['=' test] testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal and annotated assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' 
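# Illustrative examples for the assignment rules above (not part of the
# grammar): "x: int = 3" matches annassign, "x *= 2" matches augassign,
# and "a = b = []" uses the repeated ('=' ...) alternative of expr_stmt.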
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist_star_expr] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: 'async' (funcdef | with_stmt | for_stmt) if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite] while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT namedexpr_test: test [':=' test] test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. It's here for the # sake of a __future__ import described in PEP 401 (which really works :-) comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'@'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom_expr ['**' factor] atom_expr: ['await'] atom trailer* atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') testlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( ((test ':' test | '**' expr) (comp_for | (',' (test ':' test | '**' expr))* [','])) | ((test | star_expr) (comp_for | (',' (test | star_expr))* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: argument (',' argument)* [','] # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. # "test '=' test" is really "keyword '=' test", but we have no such token. 
# These need to be in a single rule to avoid grammar that is ambiguous # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, # we explicitly match '*' here, too, to give it proper precedence. # Illegal combinations and orderings are blocked in ast.c: # multiple (test comp_for) arguments are blocked; keyword unpackings # that precede iterable unpackings are blocked; etc. argument: ( test [comp_for] | test ':=' test | test '=' test | '**' test | '*' test ) comp_iter: comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_for: ['async'] sync_comp_for comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist_star_expr strings: (STRING | fstring)+ fstring: FSTRING_START fstring_content* FSTRING_END fstring_content: FSTRING_STRING | fstring_expr fstring_conversion: '!' NAME fstring_expr: '{' testlist ['='] [ fstring_conversion ] [ fstring_format_spec ] '}' fstring_format_spec: ':' fstring_content* parso-0.5.2/parso/python/grammar34.txt0000664000175000017500000001376213575273707017560 0ustar davedave00000000000000# Grammar for Python # Note: Changing the grammar specified in this file will most likely # require corresponding changes in the parser module # (../Modules/parsermodule.c). If you can't make the changes to # that module yourself, please co-ordinate the required changes # with someone who can; ask around on python-dev for help. Fred # Drake will probably be listening there. # NOTE WELL: You should also follow all the steps listed at # https://docs.python.org/devguide/grammar.html # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
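# Illustrative note (not part of the grammar): parso selects a grammar file
# by version, e.g. parso.load_grammar(version='3.4') loads this file and
# uses file_input as the start symbol when parsing modules.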
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef) funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' ['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef) tfpdef: NAME [':' test] varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' ['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. 
It's here for the # sake of a __future__ import described in PEP 401 comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom trailer* ['**' factor] atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') strings: STRING+ testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( (test ':' test (sync_comp_for | (',' test ':' test)* [','])) | (test (sync_comp_for | (',' test)* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: (argument ',')* (argument [','] |'*' test (',' argument)* [',' '**' test] |'**' test) # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. argument: test [sync_comp_for] | test '=' test  # Really [keyword '='] test comp_iter: sync_comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist parso-0.5.2/parso/python/tree.py0000664000175000017500000011117713575273707016526 0ustar davedave00000000000000""" This is the syntax tree for Python syntaxes (2 & 3). The classes represent syntax elements like functions and imports. All of the nodes can be traced back to the `Python grammar file <https://docs.python.org/3/reference/grammar.html>`_. If you want to know how a tree is structured, just analyse that file (for each Python version it's a bit different). There's a lot of logic here that makes it easier for Jedi (and other libraries) to deal with a Python syntax tree. By using :py:meth:`parso.tree.NodeOrLeaf.get_code` on a module, you can get back the 1-to-1 representation of the input given to the parser. This is important if you want to refactor a parser tree. >>> from parso import parse >>> parser = parse('import os') >>> module = parser.get_root_node() >>> module <Module: @1-1> Any subclasses of :class:`Scope`, including :class:`Module`, have an attribute :attr:`iter_imports <Scope.iter_imports>`: >>> list(module.iter_imports()) [<ImportName: import os@1,0>] Changes to the Python Grammar ----------------------------- A few things have changed when looking at Python grammar files: - :class:`Param` does not exist in Python grammar files. It is essentially a part of a ``parameters`` node. |parso| splits it up to make it easier to analyse parameters. However, this just makes it easier to deal with the syntax tree; it doesn't actually change the valid syntax. - A few nodes like `lambdef` and `lambdef_nocond` have been merged in the syntax tree to make them easier to deal with.
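A small, illustrative sketch of the :class:`Param` handling described above:

>>> func = next(parse('def f(x, y=3): pass').iter_funcdefs())
>>> [p.name.value for p in func.get_params()]
['x', 'y']
>>> func.get_params()[1].default.get_code()
'3'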
Parser Tree Classes ------------------- """ import re try: from collections.abc import Mapping except ImportError: from collections import Mapping from parso._compatibility import utf8_repr, unicode from parso.tree import Node, BaseNode, Leaf, ErrorNode, ErrorLeaf, \ search_ancestor from parso.python.prefix import split_prefix from parso.utils import split_lines _FLOW_CONTAINERS = set(['if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt', 'async_stmt', 'suite']) _RETURN_STMT_CONTAINERS = set(['suite', 'simple_stmt']) | _FLOW_CONTAINERS _FUNC_CONTAINERS = set(['suite', 'simple_stmt', 'decorated']) | _FLOW_CONTAINERS _GET_DEFINITION_TYPES = set([ 'expr_stmt', 'sync_comp_for', 'with_stmt', 'for_stmt', 'import_name', 'import_from', 'param' ]) _IMPORTS = set(['import_name', 'import_from']) class DocstringMixin(object): __slots__ = () def get_doc_node(self): """ Returns the string leaf of a docstring, e.g. ``r'''foo'''``. """ if self.type == 'file_input': node = self.children[0] elif self.type in ('funcdef', 'classdef'): node = self.children[self.children.index(':') + 1] if node.type == 'suite': # Normally a suite node = node.children[1] # -> NEWLINE stmt else: # ExprStmt simple_stmt = self.parent c = simple_stmt.parent.children index = c.index(simple_stmt) if not index: return None node = c[index - 1] if node.type == 'simple_stmt': node = node.children[0] if node.type == 'string': return node return None class PythonMixin(object): """ Some Python-specific utilities. """ __slots__ = () def get_name_of_position(self, position): """ Given a (line, column) tuple, returns a :py:class:`Name` or ``None`` if there is no name at that position. """ for c in self.children: if isinstance(c, Leaf): if c.type == 'name' and c.start_pos <= position <= c.end_pos: return c else: result = c.get_name_of_position(position) if result is not None: return result return None class PythonLeaf(PythonMixin, Leaf): __slots__ = () def _split_prefix(self): return split_prefix(self, self.get_start_pos_of_prefix()) def get_start_pos_of_prefix(self): """ Basically calls :py:meth:`parso.tree.NodeOrLeaf.get_start_pos_of_prefix`. """ # TODO it is really ugly that we have to override it. Maybe change # indent error leafs somehow? No idea how, though. previous_leaf = self.get_previous_leaf() if previous_leaf is not None and previous_leaf.type == 'error_leaf' \ and previous_leaf.token_type in ('INDENT', 'DEDENT', 'ERROR_DEDENT'): previous_leaf = previous_leaf.get_previous_leaf() if previous_leaf is None: # It's the first leaf. lines = split_lines(self.prefix) # + 1 is needed because split_lines always returns at least ['']. return self.line - len(lines) + 1, 0 # It's the first leaf. return previous_leaf.end_pos class _LeafWithoutNewlines(PythonLeaf): """ Simply here to optimize performance.
""" __slots__ = () @property def end_pos(self): return self.line, self.column + len(self.value) # Python base classes class PythonBaseNode(PythonMixin, BaseNode): __slots__ = () class PythonNode(PythonMixin, Node): __slots__ = () class PythonErrorNode(PythonMixin, ErrorNode): __slots__ = () class PythonErrorLeaf(ErrorLeaf, PythonLeaf): __slots__ = () class EndMarker(_LeafWithoutNewlines): __slots__ = () type = 'endmarker' @utf8_repr def __repr__(self): return "<%s: prefix=%s end_pos=%s>" % ( type(self).__name__, repr(self.prefix), self.end_pos ) class Newline(PythonLeaf): """Contains NEWLINE and ENDMARKER tokens.""" __slots__ = () type = 'newline' @utf8_repr def __repr__(self): return "<%s: %s>" % (type(self).__name__, repr(self.value)) class Name(_LeafWithoutNewlines): """ A string. Sometimes it is important to know if the string belongs to a name or not. """ type = 'name' __slots__ = () def __repr__(self): return "<%s: %s@%s,%s>" % (type(self).__name__, self.value, self.line, self.column) def is_definition(self, include_setitem=False): """ Returns True if the name is being defined. """ return self.get_definition(include_setitem=include_setitem) is not None def get_definition(self, import_name_always=False, include_setitem=False): """ Returns None if there's no definition for a name. :param import_name_always: Specifies if an import name is always a definition. Normally foo in `from foo import bar` is not a definition. """ node = self.parent type_ = node.type if type_ in ('funcdef', 'classdef'): if self == node.name: return node return None if type_ == 'except_clause': # TODO in Python 2 this doesn't work correctly. See grammar file. # I think we'll just let it be. Python 2 will be gone in a few # years. if self.get_previous_sibling() == 'as': return node.parent # The try_stmt. return None while node is not None: if node.type == 'suite': return None if node.type in _GET_DEFINITION_TYPES: if self in node.get_defined_names(include_setitem): return node if import_name_always and node.type in _IMPORTS: return node return None node = node.parent return None class Literal(PythonLeaf): __slots__ = () class Number(Literal): type = 'number' __slots__ = () class String(Literal): type = 'string' __slots__ = () @property def string_prefix(self): return re.match(r'\w*(?=[\'"])', self.value).group(0) def _get_payload(self): match = re.search( r'''('{3}|"{3}|'|")(.*)$''', self.value, flags=re.DOTALL ) return match.group(2)[:-len(match.group(1))] class FStringString(PythonLeaf): """ f-strings contain f-string expressions and normal python strings. These are the string parts of f-strings. """ type = 'fstring_string' __slots__ = () class FStringStart(PythonLeaf): """ f-strings contain f-string expressions and normal python strings. These are the string parts of f-strings. """ type = 'fstring_start' __slots__ = () class FStringEnd(PythonLeaf): """ f-strings contain f-string expressions and normal python strings. These are the string parts of f-strings. """ type = 'fstring_end' __slots__ = () class _StringComparisonMixin(object): def __eq__(self, other): """ Make comparisons with strings easy. Improves the readability of the parser. 
""" if isinstance(other, (str, unicode)): return self.value == other return self is other def __ne__(self, other): """Python 2 compatibility.""" return not self.__eq__(other) def __hash__(self): return hash(self.value) class Operator(_LeafWithoutNewlines, _StringComparisonMixin): type = 'operator' __slots__ = () class Keyword(_LeafWithoutNewlines, _StringComparisonMixin): type = 'keyword' __slots__ = () class Scope(PythonBaseNode, DocstringMixin): """ Super class for the parser tree, which represents the state of a python text file. A Scope is either a function, class or lambda. """ __slots__ = () def __init__(self, children): super(Scope, self).__init__(children) def iter_funcdefs(self): """ Returns a generator of `funcdef` nodes. """ return self._search_in_scope('funcdef') def iter_classdefs(self): """ Returns a generator of `classdef` nodes. """ return self._search_in_scope('classdef') def iter_imports(self): """ Returns a generator of `import_name` and `import_from` nodes. """ return self._search_in_scope('import_name', 'import_from') def _search_in_scope(self, *names): def scan(children): for element in children: if element.type in names: yield element if element.type in _FUNC_CONTAINERS: for e in scan(element.children): yield e return scan(self.children) def get_suite(self): """ Returns the part that is executed by the function. """ return self.children[-1] def __repr__(self): try: name = self.name.value except AttributeError: name = '' return "<%s: %s@%s-%s>" % (type(self).__name__, name, self.start_pos[0], self.end_pos[0]) class Module(Scope): """ The top scope, which is always a module. Depending on the underlying parser this may be a full module or just a part of a module. """ __slots__ = ('_used_names',) type = 'file_input' def __init__(self, children): super(Module, self).__init__(children) self._used_names = None def _iter_future_import_names(self): """ :return: A list of future import names. :rtype: list of str """ # In Python it's not allowed to use future imports after the first # actual (non-future) statement. However this is not a linter here, # just return all future imports. If people want to scan for issues # they should use the API. for imp in self.iter_imports(): if imp.type == 'import_from' and imp.level == 0: for path in imp.get_paths(): names = [name.value for name in path] if len(names) == 2 and names[0] == '__future__': yield names[1] def _has_explicit_absolute_import(self): """ Checks if imports in this module are explicitly absolute, i.e. there is a ``__future__`` import. Currently not public, might be in the future. :return bool: """ for name in self._iter_future_import_names(): if name == 'absolute_import': return True return False def get_used_names(self): """ Returns all the :class:`Name` leafs that exist in this module. This includes both definitions and references of names. """ if self._used_names is None: # Don't directly use self._used_names to eliminate a lookup. dct = {} def recurse(node): try: children = node.children except AttributeError: if node.type == 'name': arr = dct.setdefault(node.value, []) arr.append(node) else: for child in children: recurse(child) recurse(self) self._used_names = UsedNamesMapping(dct) return self._used_names class Decorator(PythonBaseNode): type = 'decorator' __slots__ = () class ClassOrFunc(Scope): __slots__ = () @property def name(self): """ Returns the `Name` leaf that defines the function or class name. 
""" return self.children[1] def get_decorators(self): """ :rtype: list of :class:`Decorator` """ decorated = self.parent if decorated.type == 'async_funcdef': decorated = decorated.parent if decorated.type == 'decorated': if decorated.children[0].type == 'decorators': return decorated.children[0].children else: return decorated.children[:1] else: return [] class Class(ClassOrFunc): """ Used to store the parsed contents of a python class. """ type = 'classdef' __slots__ = () def __init__(self, children): super(Class, self).__init__(children) def get_super_arglist(self): """ Returns the `arglist` node that defines the super classes. It returns None if there are no arguments. """ if self.children[2] != '(': # Has no parentheses return None else: if self.children[3] == ')': # Empty parentheses return None else: return self.children[3] def _create_params(parent, argslist_list): """ `argslist_list` is a list that can contain an argslist as a first item, but most not. It's basically the items between the parameter brackets (which is at most one item). This function modifies the parser structure. It generates `Param` objects from the normal ast. Those param objects do not exist in a normal ast, but make the evaluation of the ast tree so much easier. You could also say that this function replaces the argslist node with a list of Param objects. """ def check_python2_nested_param(node): """ Python 2 allows params to look like ``def x(a, (b, c))``, which is basically a way of unpacking tuples in params. Python 3 has ditched this behavior. Jedi currently just ignores those constructs. """ return node.type == 'fpdef' and node.children[0] == '(' try: first = argslist_list[0] except IndexError: return [] if first.type in ('name', 'fpdef'): if check_python2_nested_param(first): return [first] else: return [Param([first], parent)] elif first == '*': return [first] else: # argslist is a `typedargslist` or a `varargslist`. if first.type == 'tfpdef': children = [first] else: children = first.children new_children = [] start = 0 # Start with offset 1, because the end is higher. for end, child in enumerate(children + [None], 1): if child is None or child == ',': param_children = children[start:end] if param_children: # Could as well be comma and then end. if param_children[0] == '*' \ and (len(param_children) == 1 or param_children[1] == ',') \ or check_python2_nested_param(param_children[0]) \ or param_children[0] == '/': for p in param_children: p.parent = parent new_children += param_children else: new_children.append(Param(param_children, parent)) start = end return new_children class Function(ClassOrFunc): """ Used to store the parsed contents of a python function. Children:: 0. 1. 2. parameter list (including open-paren and close-paren s) 3. or 5. 4. or 6. Node() representing function body 3. -> (if annotation is also present) 4. annotation (if present) """ type = 'funcdef' def __init__(self, children): super(Function, self).__init__(children) parameters = self.children[2] # After `def foo` parameters.children[1:-1] = _create_params(parameters, parameters.children[1:-1]) def _get_param_nodes(self): return self.children[2].children def get_params(self): """ Returns a list of `Param()`. """ return [p for p in self._get_param_nodes() if p.type == 'param'] @property def name(self): return self.children[1] # First token after `def` def iter_yield_exprs(self): """ Returns a generator of `yield_expr`. 
""" def scan(children): for element in children: if element.type in ('classdef', 'funcdef', 'lambdef'): continue try: nested_children = element.children except AttributeError: if element.value == 'yield': if element.parent.type == 'yield_expr': yield element.parent else: yield element else: for result in scan(nested_children): yield result return scan(self.children) def iter_return_stmts(self): """ Returns a generator of `return_stmt`. """ def scan(children): for element in children: if element.type == 'return_stmt' \ or element.type == 'keyword' and element.value == 'return': yield element if element.type in _RETURN_STMT_CONTAINERS: for e in scan(element.children): yield e return scan(self.children) def iter_raise_stmts(self): """ Returns a generator of `raise_stmt`. Includes raise statements inside try-except blocks """ def scan(children): for element in children: if element.type == 'raise_stmt' \ or element.type == 'keyword' and element.value == 'raise': yield element if element.type in _RETURN_STMT_CONTAINERS: for e in scan(element.children): yield e return scan(self.children) def is_generator(self): """ :return bool: Checks if a function is a generator or not. """ return next(self.iter_yield_exprs(), None) is not None @property def annotation(self): """ Returns the test node after `->` or `None` if there is no annotation. """ try: if self.children[3] == "->": return self.children[4] assert self.children[3] == ":" return None except IndexError: return None class Lambda(Function): """ Lambdas are basically trimmed functions, so give it the same interface. Children:: 0. *. for each argument x -2. -1. Node() representing body """ type = 'lambdef' __slots__ = () def __init__(self, children): # We don't want to call the Function constructor, call its parent. super(Function, self).__init__(children) # Everything between `lambda` and the `:` operator is a parameter. self.children[1:-2] = _create_params(self, self.children[1:-2]) @property def name(self): """ Raises an AttributeError. Lambdas don't have a defined name. """ raise AttributeError("lambda is not named.") def _get_param_nodes(self): return self.children[1:-2] @property def annotation(self): """ Returns `None`, lambdas don't have annotations. """ return None def __repr__(self): return "<%s@%s>" % (self.__class__.__name__, self.start_pos) class Flow(PythonBaseNode): __slots__ = () class IfStmt(Flow): type = 'if_stmt' __slots__ = () def get_test_nodes(self): """ E.g. returns all the `test` nodes that are named as x, below: if x: pass elif x: pass """ for i, c in enumerate(self.children): if c in ('elif', 'if'): yield self.children[i + 1] def get_corresponding_test_node(self, node): """ Searches for the branch in which the node is and returns the corresponding test node (see function above). However if the node is in the test node itself and not in the suite return None. """ start_pos = node.start_pos for check_node in reversed(list(self.get_test_nodes())): if check_node.start_pos < start_pos: if start_pos < check_node.end_pos: return None # In this case the node is within the check_node itself, # not in the suite else: return check_node def is_node_after_else(self, node): """ Checks if a node is defined after `else`. """ for c in self.children: if c == 'else': if node.start_pos > c.start_pos: return True else: return False class WhileStmt(Flow): type = 'while_stmt' __slots__ = () class ForStmt(Flow): type = 'for_stmt' __slots__ = () def get_testlist(self): """ Returns the input node ``y`` from: ``for x in y:``. 
""" return self.children[3] def get_defined_names(self, include_setitem=False): return _defined_names(self.children[1], include_setitem) class TryStmt(Flow): type = 'try_stmt' __slots__ = () def get_except_clause_tests(self): """ Returns the ``test`` nodes found in ``except_clause`` nodes. Returns ``[None]`` for except clauses without an exception given. """ for node in self.children: if node.type == 'except_clause': yield node.children[1] elif node == 'except': yield None class WithStmt(Flow): type = 'with_stmt' __slots__ = () def get_defined_names(self, include_setitem=False): """ Returns the a list of `Name` that the with statement defines. The defined names are set after `as`. """ names = [] for with_item in self.children[1:-2:2]: # Check with items for 'as' names. if with_item.type == 'with_item': names += _defined_names(with_item.children[2], include_setitem) return names def get_test_node_from_name(self, name): node = name.parent if node.type != 'with_item': raise ValueError('The name is not actually part of a with statement.') return node.children[0] class Import(PythonBaseNode): __slots__ = () def get_path_for_name(self, name): """ The path is the list of names that leads to the searched name. :return list of Name: """ try: # The name may be an alias. If it is, just map it back to the name. name = self._aliases()[name] except KeyError: pass for path in self.get_paths(): if name in path: return path[:path.index(name) + 1] raise ValueError('Name should be defined in the import itself') def is_nested(self): return False # By default, sub classes may overwrite this behavior def is_star_import(self): return self.children[-1] == '*' class ImportFrom(Import): type = 'import_from' __slots__ = () def get_defined_names(self, include_setitem=False): """ Returns the a list of `Name` that the import defines. The defined names are set after `import` or in case an alias - `as` - is present that name is returned. """ return [alias or name for name, alias in self._as_name_tuples()] def _aliases(self): """Mapping from alias to its corresponding name.""" return dict((alias, name) for name, alias in self._as_name_tuples() if alias is not None) def get_from_names(self): for n in self.children[1:]: if n not in ('.', '...'): break if n.type == 'dotted_name': # from x.y import return n.children[::2] elif n == 'import': # from . import return [] else: # from x import return [n] @property def level(self): """The level parameter of ``__import__``.""" level = 0 for n in self.children[1:]: if n in ('.', '...'): level += len(n.value) else: break return level def _as_name_tuples(self): last = self.children[-1] if last == ')': last = self.children[-2] elif last == '*': return # No names defined directly. if last.type == 'import_as_names': as_names = last.children[::2] else: as_names = [last] for as_name in as_names: if as_name.type == 'name': yield as_name, None else: yield as_name.children[::2] # yields x, y -> ``x as y`` def get_paths(self): """ The import paths defined in an import statement. Typically an array like this: ``[, ]``. :return list of list of Name: """ dotted = self.get_from_names() if self.children[-1] == '*': return [dotted] return [dotted + [name] for name, alias in self._as_name_tuples()] class ImportName(Import): """For ``import_name`` nodes. Covers normal imports without ``from``.""" type = 'import_name' __slots__ = () def get_defined_names(self, include_setitem=False): """ Returns the a list of `Name` that the import defines. 
The defined name is always the first name after `import`, or, if an alias (`as`) is present, that alias is returned. """ return [alias or path[0] for path, alias in self._dotted_as_names()] @property def level(self): """The level parameter of ``__import__``.""" return 0 # Obviously 0 for imports without from. def get_paths(self): return [path for path, alias in self._dotted_as_names()] def _dotted_as_names(self): """Generator of (list(path), alias) where alias may be None.""" dotted_as_names = self.children[1] if dotted_as_names.type == 'dotted_as_names': as_names = dotted_as_names.children[::2] else: as_names = [dotted_as_names] for as_name in as_names: if as_name.type == 'dotted_as_name': alias = as_name.children[2] as_name = as_name.children[0] else: alias = None if as_name.type == 'name': yield [as_name], alias else: # dotted_names yield as_name.children[::2], alias def is_nested(self): """ This checks for the special case of nested imports, without aliases and from statement:: import foo.bar """ return bool([1 for path, alias in self._dotted_as_names() if alias is None and len(path) > 1]) def _aliases(self): """ :return dict of Name: Returns all the aliases """ return dict((alias, path[-1]) for path, alias in self._dotted_as_names() if alias is not None) class KeywordStatement(PythonBaseNode): """ For the following statements: `assert`, `del`, `global`, `nonlocal`, `raise`, `return`, `yield`. `pass`, `continue` and `break` are not in there, because they are just simple keywords and the parser reduces them to a keyword. """ __slots__ = () @property def type(self): """ Keyword statements start with the keyword and end with `_stmt`. You can crosscheck this with the Python grammar. """ return '%s_stmt' % self.keyword @property def keyword(self): return self.children[0].value class AssertStmt(KeywordStatement): __slots__ = () @property def assertion(self): return self.children[1] class GlobalStmt(KeywordStatement): __slots__ = () def get_global_names(self): return self.children[1::2] class ReturnStmt(KeywordStatement): __slots__ = () class YieldExpr(PythonBaseNode): type = 'yield_expr' __slots__ = () def _defined_names(current, include_setitem): """ A helper function to find the defined names in statements, for loops and list comprehensions. """ names = [] if current.type in ('testlist_star_expr', 'testlist_comp', 'exprlist', 'testlist'): for child in current.children[::2]: names += _defined_names(child, include_setitem) elif current.type in ('atom', 'star_expr'): names += _defined_names(current.children[1], include_setitem) elif current.type in ('power', 'atom_expr'): if current.children[-2] != '**': # Just if there's no operation trailer = current.children[-1] if trailer.children[0] == '.': names.append(trailer.children[1]) elif trailer.children[0] == '[' and include_setitem: for node in current.children[-2::-1]: if node.type == 'trailer': names.append(node.children[1]) break if node.type == 'name': names.append(node) break else: names.append(current) return names class ExprStmt(PythonBaseNode, DocstringMixin): type = 'expr_stmt' __slots__ = () def get_defined_names(self, include_setitem=False): """ Returns a list of `Name` defined before the `=` sign.
""" names = [] if self.children[1].type == 'annassign': names = _defined_names(self.children[0], include_setitem) return [ name for i in range(0, len(self.children) - 2, 2) if '=' in self.children[i + 1].value for name in _defined_names(self.children[i], include_setitem) ] + names def get_rhs(self): """Returns the right-hand-side of the equals.""" return self.children[-1] def yield_operators(self): """ Returns a generator of `+=`, `=`, etc. or None if there is no operation. """ first = self.children[1] if first.type == 'annassign': if len(first.children) <= 2: return # No operator is available, it's just PEP 484. first = first.children[2] yield first for operator in self.children[3::2]: yield operator class Param(PythonBaseNode): """ It's a helper class that makes business logic with params much easier. The Python grammar defines no ``param`` node. It defines it in a different way that is not really suited to working with parameters. """ type = 'param' def __init__(self, children, parent): super(Param, self).__init__(children) self.parent = parent for child in children: child.parent = self @property def star_count(self): """ Is `0` in case of `foo`, `1` in case of `*foo` or `2` in case of `**foo`. """ first = self.children[0] if first in ('*', '**'): return len(first.value) return 0 @property def default(self): """ The default is the test node that appears after the `=`. Is `None` in case no default is present. """ has_comma = self.children[-1] == ',' try: if self.children[-2 - int(has_comma)] == '=': return self.children[-1 - int(has_comma)] except IndexError: return None @property def annotation(self): """ The default is the test node that appears after `:`. Is `None` in case no annotation is present. """ tfpdef = self._tfpdef() if tfpdef.type == 'tfpdef': assert tfpdef.children[1] == ":" assert len(tfpdef.children) == 3 annotation = tfpdef.children[2] return annotation else: return None def _tfpdef(self): """ tfpdef: see e.g. grammar36.txt. """ offset = int(self.children[0] in ('*', '**')) return self.children[offset] @property def name(self): """ The `Name` leaf of the param. """ if self._tfpdef().type == 'tfpdef': return self._tfpdef().children[0] else: return self._tfpdef() def get_defined_names(self, include_setitem=False): return [self.name] @property def position_index(self): """ Property for the positional index of a paramter. """ index = self.parent.children.index(self) try: keyword_only_index = self.parent.children.index('*') if index > keyword_only_index: # Skip the ` *, ` index -= 2 except ValueError: pass try: keyword_only_index = self.parent.children.index('/') if index > keyword_only_index: # Skip the ` /, ` index -= 2 except ValueError: pass return index - 1 def get_parent_function(self): """ Returns the function/lambda of a parameter. """ return search_ancestor(self, 'funcdef', 'lambdef') def get_code(self, include_prefix=True, include_comma=True): """ Like all the other get_code functions, but includes the param `include_comma`. :param include_comma bool: If enabled includes the comma in the string output. 
""" if include_comma: return super(Param, self).get_code(include_prefix) children = self.children if children[-1] == ',': children = children[:-1] return self._get_code_for_children( children, include_prefix=include_prefix ) def __repr__(self): default = '' if self.default is None else '=%s' % self.default.get_code() return '<%s: %s>' % (type(self).__name__, str(self._tfpdef()) + default) class SyncCompFor(PythonBaseNode): type = 'sync_comp_for' __slots__ = () def get_defined_names(self, include_setitem=False): """ Returns the a list of `Name` that the comprehension defines. """ # allow async for return _defined_names(self.children[1], include_setitem) # This is simply here so an older Jedi version can work with this new parso # version. Can be deleted in the next release. CompFor = SyncCompFor class UsedNamesMapping(Mapping): """ This class exists for the sole purpose of creating an immutable dict. """ def __init__(self, dct): self._dict = dct def __getitem__(self, key): return self._dict[key] def __len__(self): return len(self._dict) def __iter__(self): return iter(self._dict) def __hash__(self): return id(self) def __eq__(self, other): # Comparing these dicts does not make sense. return self is other parso-0.5.2/parso/python/pep8.py0000664000175000017500000007673013575273707016454 0ustar davedave00000000000000import re from contextlib import contextmanager from parso.python.errors import ErrorFinder, ErrorFinderConfig from parso.normalizer import Rule from parso.python.tree import search_ancestor, Flow, Scope _IMPORT_TYPES = ('import_name', 'import_from') _SUITE_INTRODUCERS = ('classdef', 'funcdef', 'if_stmt', 'while_stmt', 'for_stmt', 'try_stmt', 'with_stmt') _NON_STAR_TYPES = ('term', 'import_from', 'power') _OPENING_BRACKETS = '(', '[', '{' _CLOSING_BRACKETS = ')', ']', '}' _FACTOR = '+', '-', '~' _ALLOW_SPACE = '*', '+', '-', '**', '/', '//', '@' _BITWISE_OPERATOR = '<<', '>>', '|', '&', '^' _NEEDS_SPACE = ('=', '%', '->', '<', '>', '==', '>=', '<=', '<>', '!=', '+=', '-=', '*=', '@=', '/=', '%=', '&=', '|=', '^=', '<<=', '>>=', '**=', '//=') _NEEDS_SPACE += _BITWISE_OPERATOR _IMPLICIT_INDENTATION_TYPES = ('dictorsetmaker', 'argument') _POSSIBLE_SLICE_PARENTS = ('subscript', 'subscriptlist', 'sliceop') class IndentationTypes(object): VERTICAL_BRACKET = object() HANGING_BRACKET = object() BACKSLASH = object() SUITE = object() IMPLICIT = object() class IndentationNode(object): type = IndentationTypes.SUITE def __init__(self, config, indentation, parent=None): self.bracket_indentation = self.indentation = indentation self.parent = parent def __repr__(self): return '<%s>' % self.__class__.__name__ def get_latest_suite_node(self): n = self while n is not None: if n.type == IndentationTypes.SUITE: return n n = n.parent class BracketNode(IndentationNode): def __init__(self, config, leaf, parent, in_suite_introducer=False): self.leaf = leaf # Figure out here what the indentation is. For chained brackets # we can basically use the previous indentation. 
previous_leaf = leaf n = parent if n.type == IndentationTypes.IMPLICIT: n = n.parent while True: if hasattr(n, 'leaf') and previous_leaf.line != n.leaf.line: break previous_leaf = previous_leaf.get_previous_leaf() if not isinstance(n, BracketNode) or previous_leaf != n.leaf: break n = n.parent parent_indentation = n.indentation next_leaf = leaf.get_next_leaf() if '\n' in next_leaf.prefix: # This implies code like: # foobarbaz( # a, # b, # ) self.bracket_indentation = parent_indentation \ + config.closing_bracket_hanging_indentation self.indentation = parent_indentation + config.indentation self.type = IndentationTypes.HANGING_BRACKET else: # Implies code like: # foobarbaz( # a, # b, # ) expected_end_indent = leaf.end_pos[1] if '\t' in config.indentation: self.indentation = None else: self.indentation = ' ' * expected_end_indent self.bracket_indentation = self.indentation self.type = IndentationTypes.VERTICAL_BRACKET if in_suite_introducer and parent.type == IndentationTypes.SUITE \ and self.indentation == parent_indentation + config.indentation: self.indentation += config.indentation # The closing bracket should have the same indentation. self.bracket_indentation = self.indentation self.parent = parent class ImplicitNode(BracketNode): """ Implicit indentation after keyword arguments, default arguments, annotations and dict values. """ def __init__(self, config, leaf, parent): super(ImplicitNode, self).__init__(config, leaf, parent) self.type = IndentationTypes.IMPLICIT next_leaf = leaf.get_next_leaf() if leaf == ':' and '\n' not in next_leaf.prefix: self.indentation += ' ' class BackslashNode(IndentationNode): type = IndentationTypes.BACKSLASH def __init__(self, config, parent_indentation, containing_leaf, spacing, parent=None): expr_stmt = search_ancestor(containing_leaf, 'expr_stmt') if expr_stmt is not None: equals = expr_stmt.children[-2] if '\t' in config.indentation: # TODO unite with the code of BracketNode self.indentation = None else: # If the backslash follows the equals, use normal indentation # otherwise it should align with the equals. if equals.end_pos == spacing.start_pos: self.indentation = parent_indentation + config.indentation else: # +1 because there is a space. self.indentation = ' ' * (equals.end_pos[1] + 1) else: self.indentation = parent_indentation + config.indentation self.bracket_indentation = self.indentation self.parent = parent def _is_magic_name(name): return name.value.startswith('__') and name.value.endswith('__') class PEP8Normalizer(ErrorFinder): def __init__(self, *args, **kwargs): super(PEP8Normalizer, self).__init__(*args, **kwargs) self._previous_part = None self._previous_leaf = None self._on_newline = True self._newline_count = 0 self._wanted_newline_count = None self._max_new_lines_in_prefix = 0 self._new_statement = True self._implicit_indentation_possible = False # The top of stack of the indentation nodes. 
self._indentation_tos = self._last_indentation_tos = \ IndentationNode(self._config, indentation='') self._in_suite_introducer = False if ' ' in self._config.indentation: self._indentation_type = 'spaces' self._wrong_indentation_char = '\t' else: self._indentation_type = 'tabs' self._wrong_indentation_char = ' ' @contextmanager def visit_node(self, node): with super(PEP8Normalizer, self).visit_node(node): with self._visit_node(node): yield @contextmanager def _visit_node(self, node): typ = node.type if typ == 'import_name': names = node.get_defined_names() if len(names) > 1: for name in names[:1]: self.add_issue(name, 401, 'Multiple imports on one line') elif typ == 'lambdef': expr_stmt = node.parent # Check if it's simply defining a single name, not something like # foo.bar or x[1], where using a lambda could make more sense. if expr_stmt.type == 'expr_stmt' and any(n.type == 'name' for n in expr_stmt.children[:-2:2]): self.add_issue(node, 731, 'Do not assign a lambda expression, use a def') elif typ == 'try_stmt': for child in node.children: # Here we can simply check if it's an except, because otherwise # it would be an except_clause. if child.type == 'keyword' and child.value == 'except': self.add_issue(child, 722, 'Do not use bare except, specify exception instead') elif typ == 'comparison': for child in node.children: if child.type not in ('atom_expr', 'power'): continue if len(child.children) > 2: continue trailer = child.children[1] atom = child.children[0] if trailer.type == 'trailer' and atom.type == 'name' \ and atom.value == 'type': self.add_issue(node, 721, "Do not compare types, use 'isinstance()'") break elif typ == 'file_input': endmarker = node.children[-1] prev = endmarker.get_previous_leaf() prefix = endmarker.prefix if (not prefix.endswith('\n') and ( prefix or prev is None or prev.value != '\n')): self.add_issue(endmarker, 292, "No newline at end of file") if typ in _IMPORT_TYPES: simple_stmt = node.parent module = simple_stmt.parent #if module.type == 'simple_stmt': if module.type == 'file_input': index = module.children.index(simple_stmt) for child in module.children[:index]: children = [child] if child.type == 'simple_stmt': # Remove the newline. children = child.children[:-1] found_docstring = False for c in children: if c.type == 'string' and not found_docstring: continue found_docstring = True if c.type == 'expr_stmt' and \ all(_is_magic_name(n) for n in c.get_defined_names()): continue if c.type in _IMPORT_TYPES or isinstance(c, Flow): continue self.add_issue(node, 402, 'Module level import not at top of file') break else: continue break implicit_indentation_possible = typ in _IMPLICIT_INDENTATION_TYPES in_introducer = typ in _SUITE_INTRODUCERS if in_introducer: self._in_suite_introducer = True elif typ == 'suite': if self._indentation_tos.type == IndentationTypes.BACKSLASH: self._indentation_tos = self._indentation_tos.parent self._indentation_tos = IndentationNode( self._config, self._indentation_tos.indentation + self._config.indentation, parent=self._indentation_tos ) elif implicit_indentation_possible: self._implicit_indentation_possible = True yield if typ == 'suite': assert self._indentation_tos.type == IndentationTypes.SUITE self._indentation_tos = self._indentation_tos.parent # If we dedent, no lines are needed anymore.
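# --- Editor's illustration (added; a hedged sketch, not upstream code) ---
# The node checks above map onto familiar pycodestyle codes (E401, E731,
# E722, E721, W292, E402). Example for the bare-except rule, using the
# non-public issue helper defined in parso/grammar.py:
#
# >>> import parso
# >>> grammar = parso.load_grammar()
# >>> module = grammar.parse('try:\n    pass\nexcept:\n    pass\n')
# >>> issues = grammar._get_normalizer_issues(module)  # internal API
# >>> any(issue.code == 722 for issue in issues)
# True
# --------------------------------------------------------------------------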
self._wanted_newline_count = None elif implicit_indentation_possible: self._implicit_indentation_possible = False if self._indentation_tos.type == IndentationTypes.IMPLICIT: self._indentation_tos = self._indentation_tos.parent elif in_introducer: self._in_suite_introducer = False if typ in ('classdef', 'funcdef'): self._wanted_newline_count = self._get_wanted_blank_lines_count() def _check_tabs_spaces(self, spacing): if self._wrong_indentation_char in spacing.value: self.add_issue(spacing, 101, 'Indentation contains ' + self._indentation_type) return True return False def _get_wanted_blank_lines_count(self): suite_node = self._indentation_tos.get_latest_suite_node() return int(suite_node.parent is None) + 1 def _reset_newlines(self, spacing, leaf, is_comment=False): self._max_new_lines_in_prefix = \ max(self._max_new_lines_in_prefix, self._newline_count) wanted = self._wanted_newline_count if wanted is not None: # Need to subtract one blank_lines = self._newline_count - 1 if wanted > blank_lines and leaf.type != 'endmarker': # In case of a comment we don't need to add the issue, yet. if not is_comment: # TODO end_pos wrong. code = 302 if wanted == 2 else 301 message = "expected %s blank line, found %s" \ % (wanted, blank_lines) self.add_issue(spacing, code, message) self._wanted_newline_count = None else: self._wanted_newline_count = None if not is_comment: wanted = self._get_wanted_blank_lines_count() actual = self._max_new_lines_in_prefix - 1 val = leaf.value needs_lines = ( val == '@' and leaf.parent.type == 'decorator' or ( val == 'class' or val == 'async' and leaf.get_next_leaf() == 'def' or val == 'def' and self._previous_leaf != 'async' ) and leaf.parent.parent.type != 'decorated' ) if needs_lines and actual < wanted: func_or_cls = leaf.parent suite = func_or_cls.parent if suite.type == 'decorated': suite = suite.parent # The first leaf of a file or a suite should not need blank # lines. if suite.children[int(suite.type == 'suite')] != func_or_cls: code = 302 if wanted == 2 else 301 message = "expected %s blank line, found %s" \ % (wanted, actual) self.add_issue(spacing, code, message) self._max_new_lines_in_prefix = 0 self._newline_count = 0 def visit_leaf(self, leaf): super(PEP8Normalizer, self).visit_leaf(leaf) for part in leaf._split_prefix(): if part.type == 'spacing': # This part is used for the part call after for. break self._visit_part(part, part.create_spacing_part(), leaf) self._analyse_non_prefix(leaf) self._visit_part(leaf, part, leaf) # Cleanup self._last_indentation_tos = self._indentation_tos self._new_statement = leaf.type == 'newline' # TODO does this work? with brackets and stuff? if leaf.type == 'newline' and \ self._indentation_tos.type == IndentationTypes.BACKSLASH: self._indentation_tos = self._indentation_tos.parent if leaf.value == ':' and leaf.parent.type in _SUITE_INTRODUCERS: self._in_suite_introducer = False elif leaf.value == 'elif': self._in_suite_introducer = True if not self._new_statement: self._reset_newlines(part, leaf) self._max_blank_lines = 0 self._previous_leaf = leaf return leaf.value def _visit_part(self, part, spacing, leaf): value = part.value type_ = part.type if type_ == 'error_leaf': return if value == ',' and part.parent.type == 'dictorsetmaker': self._indentation_tos = self._indentation_tos.parent node = self._indentation_tos if type_ == 'comment': if value.startswith('##'): # Whole blocks of # should not raise an error.
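# --- Editor's summary of the comment rules enforced below (added) --------
#     #!/usr/bin/env python  -> exempt shebang, but only at position (1, 0)
#     # good block comment   -> ok
#     #bad block comment     -> 265 "Block comment should start with '# '"
#     x = 1  #bad inline     -> 262 "Inline comment should start with '# '"
#     ## banner comment      -> 266 "Too many leading '#' ..." (unless the
#                               comment consists of '#' characters only)
# --------------------------------------------------------------------------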
if value.lstrip('#'): self.add_issue(part, 266, "Too many leading '#' for block comment.") elif self._on_newline: if not re.match(r'#:? ', value) and not value == '#' \ and not (value.startswith('#!') and part.start_pos == (1, 0)): self.add_issue(part, 265, "Block comment should start with '# '") else: if not re.match(r'#:? [^ ]', value): self.add_issue(part, 262, "Inline comment should start with '# '") self._reset_newlines(spacing, leaf, is_comment=True) elif type_ == 'newline': if self._newline_count > self._get_wanted_blank_lines_count(): self.add_issue(part, 303, "Too many blank lines (%s)" % self._newline_count) elif leaf in ('def', 'class') \ and leaf.parent.parent.type == 'decorated': self.add_issue(part, 304, "Blank lines found after function decorator") self._newline_count += 1 if type_ == 'backslash': # TODO is this enough checking? What about ==? if node.type != IndentationTypes.BACKSLASH: if node.type != IndentationTypes.SUITE: self.add_issue(part, 502, 'The backslash is redundant between brackets') else: indentation = node.indentation if self._in_suite_introducer and node.type == IndentationTypes.SUITE: indentation += self._config.indentation self._indentation_tos = BackslashNode( self._config, indentation, part, spacing, parent=self._indentation_tos ) elif self._on_newline: indentation = spacing.value if node.type == IndentationTypes.BACKSLASH \ and self._previous_part.type == 'newline': self._indentation_tos = self._indentation_tos.parent if not self._check_tabs_spaces(spacing): should_be_indentation = node.indentation if type_ == 'comment': # Comments can be dedented. So we have to care for that. n = self._last_indentation_tos while True: if len(indentation) > len(n.indentation): break should_be_indentation = n.indentation self._last_indentation_tos = n if n == node: break n = n.parent if self._new_statement: if type_ == 'newline': if indentation: self.add_issue(spacing, 291, 'Trailing whitespace') elif indentation != should_be_indentation: s = '%s %s' % (len(self._config.indentation), self._indentation_type) self.add_issue(part, 111, 'Indentation is not a multiple of ' + s) else: if value in '])}': should_be_indentation = node.bracket_indentation else: should_be_indentation = node.indentation if self._in_suite_introducer and indentation == \ node.get_latest_suite_node().indentation \ + self._config.indentation: self.add_issue(part, 129, "Line with same indent as next logical block") elif indentation != should_be_indentation: if not self._check_tabs_spaces(spacing) and part.value != '\n': if value in '])}': if node.type == IndentationTypes.VERTICAL_BRACKET: self.add_issue(part, 124, "Closing bracket does not match visual indentation") else: self.add_issue(part, 123, "Closing bracket does not match indentation of opening bracket's line") else: if len(indentation) < len(should_be_indentation): if node.type == IndentationTypes.VERTICAL_BRACKET: self.add_issue(part, 128, 'Continuation line under-indented for visual indent') elif node.type == IndentationTypes.BACKSLASH: self.add_issue(part, 122, 'Continuation line missing indentation or outdented') elif node.type == IndentationTypes.IMPLICIT: self.add_issue(part, 135, 'xxx') else: self.add_issue(part, 121, 'Continuation line under-indented for hanging indent') else: if node.type == IndentationTypes.VERTICAL_BRACKET: self.add_issue(part, 127, 'Continuation line over-indented for visual indent') elif node.type == IndentationTypes.IMPLICIT: self.add_issue(part, 136, 'xxx') else: self.add_issue(part, 126, 'Continuation line over-indented for hanging indent') else: self._check_spacing(part, spacing) self._check_line_length(part, spacing) # ------------------------------- # Finalizing. Updating the state. # ------------------------------- if value and value in '()[]{}' and type_ != 'error_leaf' \ and part.parent.type != 'error_node': if value in _OPENING_BRACKETS: self._indentation_tos = BracketNode( self._config, part, parent=self._indentation_tos, in_suite_introducer=self._in_suite_introducer ) else: assert node.type != IndentationTypes.IMPLICIT self._indentation_tos = self._indentation_tos.parent elif value in ('=', ':') and self._implicit_indentation_possible \ and part.parent.type in _IMPLICIT_INDENTATION_TYPES: indentation = node.indentation self._indentation_tos = ImplicitNode( self._config, part, parent=self._indentation_tos ) self._on_newline = type_ in ('newline', 'backslash', 'bom') self._previous_part = part self._previous_spacing = spacing def _check_line_length(self, part, spacing): if part.type == 'backslash': last_column = part.start_pos[1] + 1 else: last_column = part.end_pos[1] if last_column > self._config.max_characters \ and spacing.start_pos[1] <= self._config.max_characters: # Special case for long URLs in multi-line docstrings or comments, # but still report the error when the first 72 chars are whitespaces. report = True if part.type == 'comment': splitted = part.value[1:].split() if len(splitted) == 1 \ and (part.end_pos[1] - len(splitted[0])) < 72: report = False if report: self.add_issue( part, 501, 'Line too long (%s > %s characters)' % (last_column, self._config.max_characters), ) def _check_spacing(self, part, spacing): def add_if_spaces(*args): if spaces: return self.add_issue(*args) def add_not_spaces(*args): if not spaces: return self.add_issue(*args) spaces = spacing.value prev = self._previous_part if prev is not None and prev.type == 'error_leaf' or part.type == 'error_leaf': return type_ = part.type if '\t' in spaces: self.add_issue(spacing, 223, 'Used tab to separate tokens') elif type_ == 'comment': if len(spaces) < self._config.spaces_before_comment: self.add_issue(spacing, 261, 'At least two spaces before inline comment') elif type_ == 'newline': add_if_spaces(spacing, 291, 'Trailing whitespace') elif len(spaces) > 1: self.add_issue(spacing, 221, 'Multiple spaces used') else: if prev in _OPENING_BRACKETS: message = "Whitespace after '%s'" % part.value add_if_spaces(spacing, 201, message) elif part in _CLOSING_BRACKETS: message = "Whitespace before '%s'" % part.value add_if_spaces(spacing, 202, message) elif part in (',', ';') or part == ':' \ and part.parent.type not in _POSSIBLE_SLICE_PARENTS: message = "Whitespace before '%s'" % part.value add_if_spaces(spacing, 203, message) elif prev == ':' and prev.parent.type in _POSSIBLE_SLICE_PARENTS: pass # TODO elif prev in (',', ';', ':'): add_not_spaces(spacing, 231, "missing whitespace after '%s'" % prev.value) elif part == ':': # Is a subscript # TODO pass elif part in ('*', '**') and part.parent.type not in _NON_STAR_TYPES \ or prev in ('*', '**') \ and prev.parent.type not in _NON_STAR_TYPES: # TODO pass elif prev in _FACTOR and prev.parent.type == 'factor': pass elif prev == '@' and prev.parent.type == 'decorator': pass # TODO should probably raise an error if there's a space here elif part in _NEEDS_SPACE or prev in _NEEDS_SPACE: if part == '=' and part.parent.type in ('argument', 'param') \ or prev == '=' and prev.parent.type in ('argument', 'param'): if part == '=': param = part.parent else: param = prev.parent if param.type == 'param' and param.annotation: add_not_spaces(spacing, 252, 'Expected spaces around annotation equals') else: add_if_spaces(spacing, 251, 'Unexpected spaces around keyword / parameter equals') elif part in _BITWISE_OPERATOR or prev in _BITWISE_OPERATOR: add_not_spaces(spacing, 227, 'Missing whitespace around bitwise or shift operator') elif part == '%' or prev == '%': add_not_spaces(spacing, 228, 'Missing whitespace around modulo operator') else: message_225 = 'Missing whitespace between tokens' add_not_spaces(spacing, 225, message_225) elif type_ == 'keyword' or prev.type == 'keyword': add_not_spaces(spacing, 275, 'Missing whitespace around keyword') else: prev_spacing = self._previous_spacing if prev in _ALLOW_SPACE and spaces != prev_spacing.value \ and '\n' not in self._previous_leaf.prefix: message = "Whitespace before operator doesn't match with whitespace after" self.add_issue(spacing, 229, message) if spaces and part not in _ALLOW_SPACE and prev not in _ALLOW_SPACE: message_225 = 'Missing whitespace between tokens' #print('xy', spacing) #self.add_issue(spacing, 225, message_225) # TODO why only brackets? if part in _OPENING_BRACKETS: message = "Whitespace before '%s'" % part.value add_if_spaces(spacing, 211, message) def _analyse_non_prefix(self, leaf): typ = leaf.type if typ == 'name' and leaf.value in ('l', 'O', 'I'): if leaf.is_definition(): message = "Do not define %s named 'l', 'O', or 'I'" if leaf.parent.type == 'classdef' and leaf.parent.name == leaf: self.add_issue(leaf, 742, message % 'classes') elif leaf.parent.type == 'funcdef' and leaf.parent.name == leaf: self.add_issue(leaf, 743, message % 'functions') else: self.add_issue(leaf, 741, message % 'variables') elif leaf.value == ':': if isinstance(leaf.parent, (Flow, Scope)) and leaf.parent.type != 'lambdef': next_leaf = leaf.get_next_leaf() if next_leaf.type != 'newline': if leaf.parent.type == 'funcdef': self.add_issue(next_leaf, 704, 'Multiple statements on one line (def)') else: self.add_issue(next_leaf, 701, 'Multiple statements on one line (colon)') elif leaf.value == ';': if leaf.get_next_leaf().type in ('newline', 'endmarker'): self.add_issue(leaf, 703, 'Statement ends with a semicolon') else: self.add_issue(leaf, 702, 'Multiple statements on one line (semicolon)') elif leaf.value in ('==', '!='): comparison = leaf.parent index = comparison.children.index(leaf) left = comparison.children[index - 1] right = comparison.children[index + 1] for node in left, right: if node.type == 'keyword' or node.type == 'name': if node.value == 'None': message = "comparison to None should be 'if cond is None:'" self.add_issue(leaf, 711, message) break elif node.value in ('True', 'False'): message = "comparison to False/True should be 'if cond is True:' or 'if cond:'" self.add_issue(leaf, 712, message) break elif leaf.value in ('in', 'is'): comparison = leaf.parent if comparison.type == 'comparison' and comparison.parent.type == 'not_test': if leaf.value == 'in': self.add_issue(leaf, 713, "test for membership should be 'not in'") else: self.add_issue(leaf, 714, "test for object identity should be 'is not'") elif typ == 'string': # Checking multiline strings for i, line in enumerate(leaf.value.splitlines()[1:]): indentation = re.match(r'[ \t]*', line).group(0) start_pos = leaf.line + i, len(indentation) # TODO check multiline indentation.
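# --- Editor's illustration (added; a hedged sketch) -----------------------
# The comparison checks above correspond to pycodestyle's E711/E712, using
# the same non-public helper as the earlier example:
#
# >>> import parso
# >>> grammar = parso.load_grammar()
# >>> module = grammar.parse('if x == None:\n    pass\n')
# >>> [i.code for i in grammar._get_normalizer_issues(module)]  # expect 711
# --------------------------------------------------------------------------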
elif typ == 'endmarker': if self._newline_count >= 2: self.add_issue(leaf, 391, 'Blank line at end of file') def add_issue(self, node, code, message): if self._previous_leaf is not None: if search_ancestor(self._previous_leaf, 'error_node') is not None: return if self._previous_leaf.type == 'error_leaf': return if search_ancestor(node, 'error_node') is not None: return if code in (901, 903): # 901 and 903 are raised by the ErrorFinder. super(PEP8Normalizer, self).add_issue(node, code, message) else: # Skip ErrorFinder here, because it has custom behavior. super(ErrorFinder, self).add_issue(node, code, message) class PEP8NormalizerConfig(ErrorFinderConfig): normalizer_class = PEP8Normalizer """ Normalizing to PEP8. Not really implemented, yet. """ def __init__(self, indentation=' ' * 4, hanging_indentation=None, max_characters=79, spaces_before_comment=2): self.indentation = indentation if hanging_indentation is None: hanging_indentation = indentation self.hanging_indentation = hanging_indentation self.closing_bracket_hanging_indentation = '' self.break_after_binary = False self.max_characters = max_characters self.spaces_before_comment = spaces_before_comment # TODO this is not yet ready. #@PEP8Normalizer.register_rule(type='endmarker') class BlankLineAtEnd(Rule): code = 392 message = 'Blank line at end of file' def is_issue(self, leaf): return self._newline_count >= 2 parso-0.5.2/parso/python/diff.py0000664000175000017500000006504513575273707016505 0ustar davedave00000000000000""" Basically contains a parser that is faster, because it tries to parse only parts and if anything changes, it only reparses the changed parts. It works with a simple diff in the beginning and will try to reuse old parser fragments. """ import re import difflib from collections import namedtuple import logging from parso.utils import split_lines from parso.python.parser import Parser from parso.python.tree import EndMarker from parso.python.tokenize import PythonToken from parso.python.token import PythonTokenTypes LOG = logging.getLogger(__name__) DEBUG_DIFF_PARSER = False _INDENTATION_TOKENS = 'INDENT', 'ERROR_DEDENT', 'DEDENT' def _get_previous_leaf_if_indentation(leaf): while leaf and leaf.type == 'error_leaf' \ and leaf.token_type in _INDENTATION_TOKENS: leaf = leaf.get_previous_leaf() return leaf def _get_next_leaf_if_indentation(leaf): while leaf and leaf.type == 'error_leaf' \ and leaf.token_type in _INDENTATION_TOKENS: leaf = leaf.get_next_leaf() return leaf def _assert_valid_graph(node): """ Checks if the parent/children relationship is correct. This is a check that only runs during debugging/testing. """ try: children = node.children except AttributeError: # Ignoring INDENT is necessary, because indent/dedent tokens don't # contain value/prefix and are just around, because of the tokenizer. if node.type == 'error_leaf' and node.token_type in _INDENTATION_TOKENS: assert not node.value assert not node.prefix return # Calculate the content between two start positions.
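# --- Editor's note (added): the position arithmetic used just below ------
# split_lines is parso's public helper:
# >>> from parso.utils import split_lines
# >>> split_lines('a\nbc')
# ['a', 'bc']
# If the content spans a newline, the expected start position becomes
# (previous_line + len(lines) - 1, len(lines[-1])); otherwise only the
# column advances by len(content).
# --------------------------------------------------------------------------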
previous_leaf = _get_previous_leaf_if_indentation(node.get_previous_leaf()) if previous_leaf is None: content = node.prefix previous_start_pos = 1, 0 else: assert previous_leaf.end_pos <= node.start_pos, \ (previous_leaf, node) content = previous_leaf.value + node.prefix previous_start_pos = previous_leaf.start_pos if '\n' in content or '\r' in content: splitted = split_lines(content) line = previous_start_pos[0] + len(splitted) - 1 actual = line, len(splitted[-1]) else: actual = previous_start_pos[0], previous_start_pos[1] + len(content) assert node.start_pos == actual, (node.start_pos, actual) else: for child in children: assert child.parent == node, (node, child) _assert_valid_graph(child) def _get_debug_error_message(module, old_lines, new_lines): current_lines = split_lines(module.get_code(), keepends=True) current_diff = difflib.unified_diff(new_lines, current_lines) old_new_diff = difflib.unified_diff(old_lines, new_lines) import parso return ( "There's an issue with the diff parser. Please " "report (parso v%s) - Old/New:\n%s\nActual Diff (May be empty):\n%s" % (parso.__version__, ''.join(old_new_diff), ''.join(current_diff)) ) def _get_last_line(node_or_leaf): last_leaf = node_or_leaf.get_last_leaf() if _ends_with_newline(last_leaf): return last_leaf.start_pos[0] else: return last_leaf.end_pos[0] def _skip_dedent_error_leaves(leaf): while leaf is not None and leaf.type == 'error_leaf' and leaf.token_type == 'DEDENT': leaf = leaf.get_previous_leaf() return leaf def _ends_with_newline(leaf, suffix=''): leaf = _skip_dedent_error_leaves(leaf) if leaf.type == 'error_leaf': typ = leaf.token_type.lower() else: typ = leaf.type return typ == 'newline' or suffix.endswith('\n') or suffix.endswith('\r') def _flows_finished(pgen_grammar, stack): """ if, while, for and try might not be finished, because another part might still be parsed. """ for stack_node in stack: if stack_node.nonterminal in ('if_stmt', 'while_stmt', 'for_stmt', 'try_stmt'): return False return True def _func_or_class_has_suite(node): if node.type == 'decorated': node = node.children[-1] if node.type in ('async_funcdef', 'async_stmt'): node = node.children[-1] return node.type in ('classdef', 'funcdef') and node.children[-1].type == 'suite' def _suite_or_file_input_is_valid(pgen_grammar, stack): if not _flows_finished(pgen_grammar, stack): return False for stack_node in reversed(stack): if stack_node.nonterminal == 'decorator': # A decorator is only valid with the upcoming function. return False if stack_node.nonterminal == 'suite': # If only newline is in the suite, the suite is not valid, yet. return len(stack_node.nodes) > 1 # Not reaching a suite means that we're dealing with file_input levels # where there's no need for a valid statement in it. It can also be empty. return True def _is_flow_node(node): if node.type == 'async_stmt': node = node.children[1] try: value = node.children[0].value except AttributeError: return False return value in ('if', 'for', 'while', 'try', 'with') class _PositionUpdatingFinished(Exception): pass def _update_positions(nodes, line_offset, last_leaf): for node in nodes: try: children = node.children except AttributeError: # Is a leaf node.line += line_offset if node is last_leaf: raise _PositionUpdatingFinished else: _update_positions(children, line_offset, last_leaf) class DiffParser(object): """ An advanced form of parsing a file faster. Unfortunately comes with huge side effects. It changes the given module. 
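A sketch (added by the editor; internal API, subject to change) of the
calling convention, mirroring what ``Grammar._parse`` in parso/grammar.py
does::

    DiffParser(grammar._pgen_grammar, grammar._tokenizer, module).update(
        old_lines=old_lines, new_lines=new_lines,
    )

where both line lists come from ``split_lines(code, keepends=True)``.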
""" def __init__(self, pgen_grammar, tokenizer, module): self._pgen_grammar = pgen_grammar self._tokenizer = tokenizer self._module = module def _reset(self): self._copy_count = 0 self._parser_count = 0 self._nodes_tree = _NodesTree(self._module) def update(self, old_lines, new_lines): ''' The algorithm works as follows: Equal: - Assure that the start is a newline, otherwise parse until we get one. - Copy from parsed_until_line + 1 to max(i2 + 1) - Make sure that the indentation is correct (e.g. add DEDENT) - Add old and change positions Insert: - Parse from parsed_until_line + 1 to min(j2 + 1), hopefully not much more. Returns the new module node. ''' LOG.debug('diff parser start') # Reset the used names cache so they get regenerated. self._module._used_names = None self._parser_lines_new = new_lines self._reset() line_length = len(new_lines) sm = difflib.SequenceMatcher(None, old_lines, self._parser_lines_new) opcodes = sm.get_opcodes() LOG.debug('line_lengths old: %s; new: %s' % (len(old_lines), line_length)) for operation, i1, i2, j1, j2 in opcodes: LOG.debug('-> code[%s] old[%s:%s] new[%s:%s]', operation, i1 + 1, i2, j1 + 1, j2) if j2 == line_length and new_lines[-1] == '': # The empty part after the last newline is not relevant. j2 -= 1 if operation == 'equal': line_offset = j1 - i1 self._copy_from_old_parser(line_offset, i2, j2) elif operation == 'replace': self._parse(until_line=j2) elif operation == 'insert': self._parse(until_line=j2) else: assert operation == 'delete' # With this action all change will finally be applied and we have a # changed module. self._nodes_tree.close() if DEBUG_DIFF_PARSER: # If there is reasonable suspicion that the diff parser is not # behaving well, this should be enabled. try: assert self._module.get_code() == ''.join(new_lines) _assert_valid_graph(self._module) except AssertionError: print(_get_debug_error_message(self._module, old_lines, new_lines)) raise last_pos = self._module.end_pos[0] if last_pos != line_length: raise Exception( ('(%s != %s) ' % (last_pos, line_length)) + _get_debug_error_message(self._module, old_lines, new_lines) ) LOG.debug('diff parser end') return self._module def _enabled_debugging(self, old_lines, lines_new): if self._module.get_code() != ''.join(lines_new): LOG.warning('parser issue:\n%s\n%s', ''.join(old_lines), ''.join(lines_new)) def _copy_from_old_parser(self, line_offset, until_line_old, until_line_new): last_until_line = -1 while until_line_new > self._nodes_tree.parsed_until_line: parsed_until_line_old = self._nodes_tree.parsed_until_line - line_offset line_stmt = self._get_old_line_stmt(parsed_until_line_old + 1) if line_stmt is None: # Parse 1 line at least. We don't need more, because we just # want to get into a state where the old parser has statements # again that can be copied (e.g. not lines within parentheses). self._parse(self._nodes_tree.parsed_until_line + 1) else: p_children = line_stmt.parent.children index = p_children.index(line_stmt) from_ = self._nodes_tree.parsed_until_line + 1 copied_nodes = self._nodes_tree.copy_nodes( p_children[index:], until_line_old, line_offset ) # Match all the nodes that are in the wanted range. if copied_nodes: self._copy_count += 1 to = self._nodes_tree.parsed_until_line LOG.debug('copy old[%s:%s] new[%s:%s]', copied_nodes[0].start_pos[0], copied_nodes[-1].end_pos[0] - 1, from_, to) else: # We have copied as much as possible (but definitely not too # much). Therefore we just parse a bit more. 
self._parse(self._nodes_tree.parsed_until_line + 1) # Since there are potential bugs that might loop here endlessly, we # just stop here. assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line last_until_line = self._nodes_tree.parsed_until_line def _get_old_line_stmt(self, old_line): leaf = self._module.get_leaf_for_position((old_line, 0), include_prefixes=True) if _ends_with_newline(leaf): leaf = leaf.get_next_leaf() if leaf.get_start_pos_of_prefix()[0] == old_line: node = leaf while node.parent.type not in ('file_input', 'suite'): node = node.parent # Make sure that if only the `else:` line of an if statement is # copied that not the whole thing is going to be copied. if node.start_pos[0] >= old_line: return node # Must be on the same line. Otherwise we need to parse that bit. return None def _parse(self, until_line): """ Parses at least until the given line, but might just parse more until a valid state is reached. """ last_until_line = 0 while until_line > self._nodes_tree.parsed_until_line: node = self._try_parse_part(until_line) nodes = node.children self._nodes_tree.add_parsed_nodes(nodes) LOG.debug( 'parse_part from %s to %s (to %s in part parser)', nodes[0].get_start_pos_of_prefix()[0], self._nodes_tree.parsed_until_line, node.end_pos[0] - 1 ) # Since the tokenizer sometimes has bugs, we cannot be sure that # this loop terminates. Therefore assert that there's always a # change. assert last_until_line != self._nodes_tree.parsed_until_line, last_until_line last_until_line = self._nodes_tree.parsed_until_line def _try_parse_part(self, until_line): """ Sets up a normal parser that uses a specialized tokenizer to only parse until a certain position (or a bit longer if the statement hasn't ended). """ self._parser_count += 1 # TODO speed up, shouldn't copy the whole list all the time. # memoryview? parsed_until_line = self._nodes_tree.parsed_until_line lines_after = self._parser_lines_new[parsed_until_line:] tokens = self._diff_tokenize( lines_after, until_line, line_offset=parsed_until_line ) self._active_parser = Parser( self._pgen_grammar, error_recovery=True ) return self._active_parser.parse(tokens=tokens) def _diff_tokenize(self, lines, until_line, line_offset=0): is_first_token = True omitted_first_indent = False indents = [] tokens = self._tokenizer(lines, (1, 0)) stack = self._active_parser.stack for typ, string, start_pos, prefix in tokens: start_pos = start_pos[0] + line_offset, start_pos[1] if typ == PythonTokenTypes.INDENT: indents.append(start_pos[1]) if is_first_token: omitted_first_indent = True # We want to get rid of indents that are only here because # we only parse part of the file. These indents would only # get parsed as error leafs, which doesn't make any sense. is_first_token = False continue is_first_token = False # In case of omitted_first_indent, it might not be dedented fully. # However this is a sign for us that a dedent happened. if typ == PythonTokenTypes.DEDENT \ or typ == PythonTokenTypes.ERROR_DEDENT \ and omitted_first_indent and len(indents) == 1: indents.pop() if omitted_first_indent and not indents: # We are done here, only thing that can come now is an # endmarker or another dedented code block.
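# --- Editor's note (added): the prefix trimming used just below ----------
# re.sub(r'[^\n\r]+\Z', '', prefix) drops everything after the last
# newline of the prefix:
# >>> import re
# >>> re.sub(r'[^\n\r]+\Z', '', '  # comment\n    ')
# '  # comment\n'
# --------------------------------------------------------------------------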
typ, string, start_pos, prefix = next(tokens) if '\n' in prefix or '\r' in prefix: prefix = re.sub(r'[^\n\r]+\Z', '', prefix) else: assert start_pos[1] >= len(prefix), repr(prefix) if start_pos[1] - len(prefix) == 0: prefix = '' yield PythonToken( PythonTokenTypes.ENDMARKER, '', (start_pos[0] + line_offset, 0), prefix ) break elif typ == PythonTokenTypes.NEWLINE and start_pos[0] >= until_line: yield PythonToken(typ, string, start_pos, prefix) # Check if the parser is actually in a valid suite state. if _suite_or_file_input_is_valid(self._pgen_grammar, stack): start_pos = start_pos[0] + 1, 0 while len(indents) > int(omitted_first_indent): indents.pop() yield PythonToken(PythonTokenTypes.DEDENT, '', start_pos, '') yield PythonToken(PythonTokenTypes.ENDMARKER, '', start_pos, '') break else: continue yield PythonToken(typ, string, start_pos, prefix) class _NodesTreeNode(object): _ChildrenGroup = namedtuple('_ChildrenGroup', 'prefix children line_offset last_line_offset_leaf') def __init__(self, tree_node, parent=None): self.tree_node = tree_node self._children_groups = [] self.parent = parent self._node_children = [] def finish(self): children = [] for prefix, children_part, line_offset, last_line_offset_leaf in self._children_groups: first_leaf = _get_next_leaf_if_indentation( children_part[0].get_first_leaf() ) first_leaf.prefix = prefix + first_leaf.prefix if line_offset != 0: try: _update_positions( children_part, line_offset, last_line_offset_leaf) except _PositionUpdatingFinished: pass children += children_part self.tree_node.children = children # Reset the parents for node in children: node.parent = self.tree_node for node_child in self._node_children: node_child.finish() def add_child_node(self, child_node): self._node_children.append(child_node) def add_tree_nodes(self, prefix, children, line_offset=0, last_line_offset_leaf=None): if last_line_offset_leaf is None: last_line_offset_leaf = children[-1].get_last_leaf() group = self._ChildrenGroup(prefix, children, line_offset, last_line_offset_leaf) self._children_groups.append(group) def get_last_line(self, suffix): line = 0 if self._children_groups: children_group = self._children_groups[-1] last_leaf = _get_previous_leaf_if_indentation( children_group.last_line_offset_leaf ) line = last_leaf.end_pos[0] + children_group.line_offset # Newlines end on the next line, which means that they would cover # the next line. That line is not fully parsed at this point. if _ends_with_newline(last_leaf, suffix): line -= 1 line += len(split_lines(suffix)) - 1 if suffix and not suffix.endswith('\n') and not suffix.endswith('\r'): # This is the end of a file (that doesn't end with a newline). line += 1 if self._node_children: return max(line, self._node_children[-1].get_last_line(suffix)) return line class _NodesTree(object): def __init__(self, module): self._base_node = _NodesTreeNode(module) self._working_stack = [self._base_node] self._module = module self._prefix_remainder = '' self.prefix = '' @property def parsed_until_line(self): return self._working_stack[-1].get_last_line(self.prefix) def _get_insertion_node(self, indentation_node): indentation = indentation_node.start_pos[1] # find insertion node while True: node = self._working_stack[-1] tree_node = node.tree_node if tree_node.type == 'suite': # A suite starts with NEWLINE, ... node_indentation = tree_node.children[1].start_pos[1] if indentation >= node_indentation: # Not a Dedent # We might be at the most outer layer: modules. 
We # don't want to depend on the first statement # having the right indentation. return node elif tree_node.type == 'file_input': return node self._working_stack.pop() def add_parsed_nodes(self, tree_nodes): old_prefix = self.prefix tree_nodes = self._remove_endmarker(tree_nodes) if not tree_nodes: self.prefix = old_prefix + self.prefix return assert tree_nodes[0].type != 'newline' node = self._get_insertion_node(tree_nodes[0]) assert node.tree_node.type in ('suite', 'file_input') node.add_tree_nodes(old_prefix, tree_nodes) # tos = Top of stack self._update_tos(tree_nodes[-1]) def _update_tos(self, tree_node): if tree_node.type in ('suite', 'file_input'): new_tos = _NodesTreeNode(tree_node) new_tos.add_tree_nodes('', list(tree_node.children)) self._working_stack[-1].add_child_node(new_tos) self._working_stack.append(new_tos) self._update_tos(tree_node.children[-1]) elif _func_or_class_has_suite(tree_node): self._update_tos(tree_node.children[-1]) def _remove_endmarker(self, tree_nodes): """ Helps cleaning up the tree nodes that get inserted. """ last_leaf = tree_nodes[-1].get_last_leaf() is_endmarker = last_leaf.type == 'endmarker' self._prefix_remainder = '' if is_endmarker: separation = max(last_leaf.prefix.rfind('\n'), last_leaf.prefix.rfind('\r')) if separation > -1: # Remove the whitespace part of the prefix after a newline. # That is not relevant if parentheses were opened. Always parse # until the end of a line. last_leaf.prefix, self._prefix_remainder = \ last_leaf.prefix[:separation + 1], last_leaf.prefix[separation + 1:] self.prefix = '' if is_endmarker: self.prefix = last_leaf.prefix tree_nodes = tree_nodes[:-1] return tree_nodes def copy_nodes(self, tree_nodes, until_line, line_offset): """ Copies tree nodes from the old parser tree. Returns the list of copied tree nodes. """ if tree_nodes[0].type in ('error_leaf', 'error_node'): # Avoid copying errors in the beginning. Can lead to a lot of # issues. return [] self._get_insertion_node(tree_nodes[0]) new_nodes, self._working_stack, self.prefix = self._copy_nodes( list(self._working_stack), tree_nodes, until_line, line_offset, self.prefix, ) return new_nodes def _copy_nodes(self, working_stack, nodes, until_line, line_offset, prefix=''): new_nodes = [] new_prefix = '' for node in nodes: if node.start_pos[0] > until_line: break if node.type == 'endmarker': break if node.type == 'error_leaf' and node.token_type in ('DEDENT', 'ERROR_DEDENT'): break # TODO this check might take a bit of time for large files. We # might want to change this to do more intelligent guessing or # binary search. if _get_last_line(node) > until_line: # We can split up functions and classes later. if _func_or_class_has_suite(node): new_nodes.append(node) break new_nodes.append(node) if not new_nodes: return [], working_stack, prefix tos = working_stack[-1] last_node = new_nodes[-1] had_valid_suite_last = False if _func_or_class_has_suite(last_node): suite = last_node while suite.type != 'suite': suite = suite.children[-1] suite_tos = _NodesTreeNode(suite) # Don't need to pass line_offset here, it's already done by the # parent. suite_nodes, new_working_stack, new_prefix = self._copy_nodes( working_stack + [suite_tos], suite.children, until_line, line_offset ) if len(suite_nodes) < 2: # A suite only with newline is not valid.
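# --- Editor's note (added) ------------------------------------------------
# Per the grammar (see grammar35.txt below), suite is
# `NEWLINE INDENT stmt+ DEDENT`, so a copied suite whose children are only
# the introductory NEWLINE contains no statements and cannot be reused;
# the function/class header that owns it is dropped from new_nodes and
# re-parsed instead.
# --------------------------------------------------------------------------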
new_nodes.pop() new_prefix = '' else: assert new_nodes tos.add_child_node(suite_tos) working_stack = new_working_stack had_valid_suite_last = True if new_nodes: last_node = new_nodes[-1] if (last_node.type in ('error_leaf', 'error_node') or _is_flow_node(new_nodes[-1])): # Error leafs/nodes don't have a defined start/end. Error # nodes might not end with a newline (e.g. if there's an # open `(`). Therefore ignore all of them unless they are # succeeded with valid parser state. # If we copy flows at the end, they might be continued # after the copy limit (in the new parser). # In this while loop we try to remove until we find a newline. new_prefix = '' new_nodes.pop() while new_nodes: last_node = new_nodes[-1] if last_node.get_last_leaf().type == 'newline': break new_nodes.pop() if new_nodes: if not _ends_with_newline(new_nodes[-1].get_last_leaf()) and not had_valid_suite_last: p = new_nodes[-1].get_next_leaf().prefix # We are not allowed to remove the newline at the end of the # line, otherwise it's going to be missing. This happens e.g. # if a bracket is around before that moves newlines to # prefixes. new_prefix = split_lines(p, keepends=True)[0] if had_valid_suite_last: last = new_nodes[-1] if last.type == 'decorated': last = last.children[-1] if last.type in ('async_funcdef', 'async_stmt'): last = last.children[-1] last_line_offset_leaf = last.children[-2].get_last_leaf() assert last_line_offset_leaf == ':' else: last_line_offset_leaf = new_nodes[-1].get_last_leaf() tos.add_tree_nodes(prefix, new_nodes, line_offset, last_line_offset_leaf) prefix = new_prefix self._prefix_remainder = '' return new_nodes, working_stack, prefix def close(self): self._base_node.finish() # Add an endmarker. try: last_leaf = self._module.get_last_leaf() except IndexError: end_pos = [1, 0] else: last_leaf = _skip_dedent_error_leaves(last_leaf) end_pos = list(last_leaf.end_pos) lines = split_lines(self.prefix) assert len(lines) > 0 if len(lines) == 1: end_pos[1] += len(lines[0]) else: end_pos[0] += len(lines) - 1 end_pos[1] = len(lines[-1]) endmarker = EndMarker('', tuple(end_pos), self.prefix + self._prefix_remainder) endmarker.parent = self._module self._module.children.append(endmarker) parso-0.5.2/parso/python/grammar35.txt0000664000175000017500000001542713575273707017561 0ustar davedave00000000000000# Grammar for Python # Note: Changing the grammar specified in this file will most likely # require corresponding changes in the parser module # (../Modules/parsermodule.c). If you can't make the changes to # that module yourself, please co-ordinate the required changes # with someone who can; ask around on python-dev for help. Fred # Drake will probably be listening there. # NOTE WELL: You should also follow all the steps listed at # https://docs.python.org/devguide/grammar.html # Start symbols for the grammar: # single_input is a single interactive statement; # file_input is a module or sequence of commands read from an input file; # eval_input is the input for the eval() functions. # NB: compound_stmt in single_input is followed by extra NEWLINE! 
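# Editor's illustration (added, not part of the upstream grammar): reading
# the rules below, `x += 1` parses as
#   expr_stmt: testlist_star_expr augassign (yield_expr|testlist)
# with testlist_star_expr deriving `x`, augassign matching '+=' and
# testlist deriving the `1`.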
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE file_input: (NEWLINE | stmt)* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE decorators: decorator+ decorated: decorators (classdef | funcdef | async_funcdef) # NOTE: Reinoud Elhorst, using ASYNC/AWAIT keywords instead of tokens # skipping python3.5 compatibility, in favour of 3.7 solution async_funcdef: 'async' funcdef funcdef: 'def' NAME parameters ['->' test] ':' suite parameters: '(' [typedargslist] ')' typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' ['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]] | '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef) tfpdef: NAME [':' test] varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' ['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]] | '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef) vfpdef: NAME stmt: simple_stmt | compound_stmt simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt | import_stmt | global_stmt | nonlocal_stmt | assert_stmt) expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*) testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [','] augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' | '<<=' | '>>=' | '**=' | '//=') # For normal assignments, additional restrictions enforced by the interpreter del_stmt: 'del' exprlist pass_stmt: 'pass' flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt break_stmt: 'break' continue_stmt: 'continue' return_stmt: 'return' [testlist] yield_stmt: yield_expr raise_stmt: 'raise' [test ['from' test]] import_stmt: import_name | import_from import_name: 'import' dotted_as_names # note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+) 'import' ('*' | '(' import_as_names ')' | import_as_names)) import_as_name: NAME ['as' NAME] dotted_as_name: dotted_name ['as' NAME] import_as_names: import_as_name (',' import_as_name)* [','] dotted_as_names: dotted_as_name (',' dotted_as_name)* dotted_name: NAME ('.' NAME)* global_stmt: 'global' NAME (',' NAME)* nonlocal_stmt: 'nonlocal' NAME (',' NAME)* assert_stmt: 'assert' test [',' test] compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt async_stmt: 'async' (funcdef | with_stmt | for_stmt) if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] try_stmt: ('try' ':' suite ((except_clause ':' suite)+ ['else' ':' suite] ['finally' ':' suite] | 'finally' ':' suite)) with_stmt: 'with' with_item (',' with_item)* ':' suite with_item: test ['as' expr] # NB compile.c makes sure that the default except clause is last except_clause: 'except' [test ['as' NAME]] suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT test: or_test ['if' or_test 'else' test] | lambdef test_nocond: or_test | lambdef_nocond lambdef: 'lambda' [varargslist] ':' test lambdef_nocond: 'lambda' [varargslist] ':' test_nocond or_test: and_test ('or' and_test)* and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: expr (comp_op expr)* # <> isn't actually a valid comparison operator in Python. 
It's here for the # sake of a __future__ import described in PEP 401 (which really works :-) comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: '*' expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* and_expr: shift_expr ('&' shift_expr)* shift_expr: arith_expr (('<<'|'>>') arith_expr)* arith_expr: term (('+'|'-') term)* term: factor (('*'|'@'|'/'|'%'|'//') factor)* factor: ('+'|'-'|'~') factor | power power: atom_expr ['**' factor] atom_expr: ['await'] atom trailer* atom: ('(' [yield_expr|testlist_comp] ')' | '[' [testlist_comp] ']' | '{' [dictorsetmaker] '}' | NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False') strings: STRING+ testlist_comp: (test|star_expr) ( sync_comp_for | (',' (test|star_expr))* [','] ) trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME subscriptlist: subscript (',' subscript)* [','] subscript: test | [test] ':' [test] [sliceop] sliceop: ':' [test] exprlist: (expr|star_expr) (',' (expr|star_expr))* [','] testlist: test (',' test)* [','] dictorsetmaker: ( ((test ':' test | '**' expr) (sync_comp_for | (',' (test ':' test | '**' expr))* [','])) | ((test | star_expr) (sync_comp_for | (',' (test | star_expr))* [','])) ) classdef: 'class' NAME ['(' [arglist] ')'] ':' suite arglist: argument (',' argument)* [','] # The reason that keywords are test nodes instead of NAME is that using NAME # results in an ambiguity. ast.c makes sure it's a NAME. # "test '=' test" is really "keyword '=' test", but we have no such token. # These need to be in a single rule to avoid grammar that is ambiguous # to our LL(1) parser. Even though 'test' includes '*expr' in star_expr, # we explicitly match '*' here, too, to give it proper precedence. # Illegal combinations and orderings are blocked in ast.c: # multiple (test comp_for) arguments are blocked; keyword unpackings # that precede iterable unpackings are blocked; etc. argument: ( test [sync_comp_for] | test '=' test | '**' test | '*' test ) comp_iter: sync_comp_for | comp_if sync_comp_for: 'for' exprlist 'in' or_test [comp_iter] comp_if: 'if' test_nocond [comp_iter] # not used in grammar, but may appear in "node" passed from Parser to Compiler encoding_decl: NAME yield_expr: 'yield' [yield_arg] yield_arg: 'from' test | testlist parso-0.5.2/parso/__init__.py0000664000175000017500000000310713575273707016002 0ustar davedave00000000000000r""" Parso is a Python parser that supports error recovery and round-trip parsing for different Python versions (in multiple Python versions). Parso is also able to list multiple syntax errors in your python file. Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful for other projects as well. Parso consists of a small API to parse Python and analyse the syntax tree. .. 
_jedi: https://github.com/davidhalter/jedi A simple example: >>> import parso >>> module = parso.parse('hello + 1', version="3.6") >>> expr = module.children[0] >>> expr PythonNode(arith_expr, [, , ]) >>> print(expr.get_code()) hello + 1 >>> name = expr.children[0] >>> name >>> name.end_pos (1, 5) >>> expr.end_pos (1, 9) To list multiple issues: >>> grammar = parso.load_grammar() >>> module = grammar.parse('foo +\nbar\ncontinue') >>> error1, error2 = grammar.iter_errors(module) >>> error1.message 'SyntaxError: invalid syntax' >>> error2.message "SyntaxError: 'continue' not properly in loop" """ from parso.parser import ParserSyntaxError from parso.grammar import Grammar, load_grammar from parso.utils import split_lines, python_bytes_to_unicode __version__ = '0.5.2' def parse(code=None, **kwargs): """ A utility function to avoid loading grammars. Params are documented in :py:meth:`parso.Grammar.parse`. :param str version: The version used by :py:func:`parso.load_grammar`. """ version = kwargs.pop('version', None) grammar = load_grammar(version=version) return grammar.parse(code, **kwargs) parso-0.5.2/parso/_compatibility.py0000664000175000017500000000656113575273707017262 0ustar davedave00000000000000""" To ensure compatibility from Python ``2.6`` - ``3.3``, a module has been created. Clearly there is huge need to use conforming syntax. """ import sys import platform # Cannot use sys.version.major and minor names, because in Python 2.6 it's not # a namedtuple. py_version = int(str(sys.version_info[0]) + str(sys.version_info[1])) # unicode function try: unicode = unicode except NameError: unicode = str is_pypy = platform.python_implementation() == 'PyPy' def use_metaclass(meta, *bases): """ Create a class with a metaclass. """ if not bases: bases = (object,) return meta("HackClass", bases, {}) try: encoding = sys.stdout.encoding if encoding is None: encoding = 'utf-8' except AttributeError: encoding = 'ascii' def u(string): """Cast to unicode DAMMIT! Written because Python2 repr always implicitly casts to a string, so we have to cast back to a unicode (and we know that we always deal with valid unicode, because we check that in the beginning). """ if py_version >= 30: return str(string) if not isinstance(string, unicode): return unicode(str(string), 'UTF-8') return string try: FileNotFoundError = FileNotFoundError except NameError: FileNotFoundError = IOError def utf8_repr(func): """ ``__repr__`` methods in Python 2 don't allow unicode objects to be returned. Therefore cast them to utf-8 bytes in this decorator. 
""" def wrapper(self): result = func(self) if isinstance(result, unicode): return result.encode('utf-8') else: return result if py_version >= 30: return func else: return wrapper try: from functools import total_ordering except ImportError: # Python 2.6 def total_ordering(cls): """Class decorator that fills in missing ordering methods""" convert = { '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)), ('__le__', lambda self, other: self < other or self == other), ('__ge__', lambda self, other: not self < other)], '__le__': [('__ge__', lambda self, other: not self <= other or self == other), ('__lt__', lambda self, other: self <= other and not self == other), ('__gt__', lambda self, other: not self <= other)], '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)), ('__ge__', lambda self, other: self > other or self == other), ('__le__', lambda self, other: not self > other)], '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other), ('__gt__', lambda self, other: self >= other and not self == other), ('__lt__', lambda self, other: not self >= other)] } roots = set(dir(cls)) & set(convert) if not roots: raise ValueError('must define at least one ordering operation: < > <= >=') root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__ for opname, opfunc in convert[root]: if opname not in roots: opfunc.__name__ = opname opfunc.__doc__ = getattr(int, opname).__doc__ setattr(cls, opname, opfunc) return cls parso-0.5.2/parso/file_io.py0000664000175000017500000000171413575273707015653 0ustar davedave00000000000000import os class FileIO(object): def __init__(self, path): self.path = path def read(self): # Returns bytes/str # We would like to read unicode here, but we cannot, because we are not # sure if it is a valid unicode file. Therefore just read whatever is # here. with open(self.path, 'rb') as f: return f.read() def get_last_modified(self): """ Returns float - timestamp or None, if path doesn't exist. """ try: return os.path.getmtime(self.path) except OSError: # Might raise FileNotFoundError, OSError for Python 2 return None def __repr__(self): return '%s(%s)' % (self.__class__.__name__, self.path) class KnownContentFileIO(FileIO): def __init__(self, path, content): super(KnownContentFileIO, self).__init__(path) self._content = content def read(self): return self._content parso-0.5.2/parso/grammar.py0000664000175000017500000002404313575273707015673 0ustar davedave00000000000000import hashlib import os from parso._compatibility import FileNotFoundError, is_pypy from parso.pgen2 import generate_grammar from parso.utils import split_lines, python_bytes_to_unicode, parse_version_string from parso.python.diff import DiffParser from parso.python.tokenize import tokenize_lines, tokenize from parso.python.token import PythonTokenTypes from parso.cache import parser_cache, load_module, save_module from parso.parser import BaseParser from parso.python.parser import Parser as PythonParser from parso.python.errors import ErrorFinderConfig from parso.python import pep8 from parso.file_io import FileIO, KnownContentFileIO _loaded_grammars = {} class Grammar(object): """ :py:func:`parso.load_grammar` returns instances of this class. Creating custom none-python grammars by calling this is not supported, yet. """ #:param text: A BNF representation of your grammar. 
_error_normalizer_config = None _token_namespace = None _default_normalizer_config = pep8.PEP8NormalizerConfig() def __init__(self, text, tokenizer, parser=BaseParser, diff_parser=None): self._pgen_grammar = generate_grammar( text, token_namespace=self._get_token_namespace() ) self._parser = parser self._tokenizer = tokenizer self._diff_parser = diff_parser self._hashed = hashlib.sha256(text.encode("utf-8")).hexdigest() def parse(self, code=None, **kwargs): """ If you want to parse a Python file you want to start here, most likely. If you need finer grained control over the parsed instance, there will be other ways to access it. :param str code: A unicode or bytes string. When it's not possible to decode bytes to a string, returns a :py:class:`UnicodeDecodeError`. :param bool error_recovery: If enabled, any code will be returned. If it is invalid, it will be returned as an error node. If disabled, you will get a ParseError when encountering syntax errors in your code. :param str start_symbol: The grammar rule (nonterminal) that you want to parse. Only allowed to be used when error_recovery is False. :param str path: The path to the file you want to open. Only needed for caching. :param bool cache: Keeps a copy of the parser tree in RAM and on disk if a path is given. Returns the cached trees if the corresponding files on disk have not changed. Note that this stores pickle files on your file system (e.g. for Linux in ``~/.cache/parso/``). :param bool diff_cache: Diffs the cached python module against the new code and tries to parse only the parts that have changed. Returns the same (changed) module that is found in cache. Using this option requires you to not do anything anymore with the cached modules under that path, because the contents of it might change. This option is still somewhat experimental. If you want stability, please don't use it. :param bool cache_path: If given saves the parso cache in this directory. If not given, defaults to the default cache places on each platform. :return: A subclass of :py:class:`parso.tree.NodeOrLeaf`. Typically a :py:class:`parso.python.tree.Module`. """ if 'start_pos' in kwargs: raise TypeError("parse() got an unexpected keyword argument.") return self._parse(code=code, **kwargs) def _parse(self, code=None, error_recovery=True, path=None, start_symbol=None, cache=False, diff_cache=False, cache_path=None, file_io=None, start_pos=(1, 0)): """ Wanted python3.5 * operator and keyword only arguments. Therefore just wrap it all. start_pos here is just a parameter internally used. Might be public sometime in the future. 
""" if code is None and path is None and file_io is None: raise TypeError("Please provide either code or a path.") if start_symbol is None: start_symbol = self._start_nonterminal if error_recovery and start_symbol != 'file_input': raise NotImplementedError("This is currently not implemented.") if file_io is None: if code is None: file_io = FileIO(path) else: file_io = KnownContentFileIO(path, code) if cache and file_io.path is not None: module_node = load_module(self._hashed, file_io, cache_path=cache_path) if module_node is not None: return module_node if code is None: code = file_io.read() code = python_bytes_to_unicode(code) lines = split_lines(code, keepends=True) if diff_cache: if self._diff_parser is None: raise TypeError("You have to define a diff parser to be able " "to use this option.") try: module_cache_item = parser_cache[self._hashed][file_io.path] except KeyError: pass else: module_node = module_cache_item.node old_lines = module_cache_item.lines if old_lines == lines: return module_node new_node = self._diff_parser( self._pgen_grammar, self._tokenizer, module_node ).update( old_lines=old_lines, new_lines=lines ) save_module(self._hashed, file_io, new_node, lines, # Never pickle in pypy, it's slow as hell. pickling=cache and not is_pypy, cache_path=cache_path) return new_node tokens = self._tokenizer(lines, start_pos) p = self._parser( self._pgen_grammar, error_recovery=error_recovery, start_nonterminal=start_symbol ) root_node = p.parse(tokens=tokens) if cache or diff_cache: save_module(self._hashed, file_io, root_node, lines, # Never pickle in pypy, it's slow as hell. pickling=cache and not is_pypy, cache_path=cache_path) return root_node def _get_token_namespace(self): ns = self._token_namespace if ns is None: raise ValueError("The token namespace should be set.") return ns def iter_errors(self, node): """ Given a :py:class:`parso.tree.NodeOrLeaf` returns a generator of :py:class:`parso.normalizer.Issue` objects. For Python this is a list of syntax/indentation errors. """ if self._error_normalizer_config is None: raise ValueError("No error normalizer specified for this grammar.") return self._get_normalizer_issues(node, self._error_normalizer_config) def _get_normalizer(self, normalizer_config): if normalizer_config is None: normalizer_config = self._default_normalizer_config if normalizer_config is None: raise ValueError("You need to specify a normalizer, because " "there's no default normalizer for this tree.") return normalizer_config.create_normalizer(self) def _normalize(self, node, normalizer_config=None): """ TODO this is not public, yet. The returned code will be normalized, e.g. PEP8 for Python. """ normalizer = self._get_normalizer(normalizer_config) return normalizer.walk(node) def _get_normalizer_issues(self, node, normalizer_config=None): normalizer = self._get_normalizer(normalizer_config) normalizer.walk(node) return normalizer.issues def __repr__(self): nonterminals = self._pgen_grammar.nonterminal_to_dfas.keys() txt = ' '.join(list(nonterminals)[:3]) + ' ...' 
return '<%s:%s>' % (self.__class__.__name__, txt) class PythonGrammar(Grammar): _error_normalizer_config = ErrorFinderConfig() _token_namespace = PythonTokenTypes _start_nonterminal = 'file_input' def __init__(self, version_info, bnf_text): super(PythonGrammar, self).__init__( bnf_text, tokenizer=self._tokenize_lines, parser=PythonParser, diff_parser=DiffParser ) self.version_info = version_info def _tokenize_lines(self, lines, start_pos): return tokenize_lines(lines, self.version_info, start_pos=start_pos) def _tokenize(self, code): # Used by Jedi. return tokenize(code, self.version_info) def load_grammar(**kwargs): """ Loads a :py:class:`parso.Grammar`. The default version is the current Python version. :param str version: A python version string, e.g. ``version='3.3'``. :param str path: A path to a grammar file """ def load_grammar(language='python', version=None, path=None): if language == 'python': version_info = parse_version_string(version) file = path or os.path.join( 'python', 'grammar%s%s.txt' % (version_info.major, version_info.minor) ) global _loaded_grammars path = os.path.join(os.path.dirname(__file__), file) try: return _loaded_grammars[path] except KeyError: try: with open(path) as f: bnf_text = f.read() grammar = PythonGrammar(version_info, bnf_text) return _loaded_grammars.setdefault(path, grammar) except FileNotFoundError: message = "Python version %s is currently not supported." % version raise NotImplementedError(message) else: raise NotImplementedError("No support for language %s." % language) return load_grammar(**kwargs) parso-0.5.2/parso/normalizer.py0000664000175000017500000001206213575273707016425 0ustar davedave00000000000000from contextlib import contextmanager from parso._compatibility import use_metaclass class _NormalizerMeta(type): def __new__(cls, name, bases, dct): new_cls = type.__new__(cls, name, bases, dct) new_cls.rule_value_classes = {} new_cls.rule_type_classes = {} return new_cls class Normalizer(use_metaclass(_NormalizerMeta)): def __init__(self, grammar, config): self.grammar = grammar self._config = config self.issues = [] self._rule_type_instances = self._instantiate_rules('rule_type_classes') self._rule_value_instances = self._instantiate_rules('rule_value_classes') def _instantiate_rules(self, attr): dct = {} for base in type(self).mro(): rules_map = getattr(base, attr, {}) for type_, rule_classes in rules_map.items(): new = [rule_cls(self) for rule_cls in rule_classes] dct.setdefault(type_, []).extend(new) return dct def walk(self, node): self.initialize(node) value = self.visit(node) self.finalize() return value def visit(self, node): try: children = node.children except AttributeError: return self.visit_leaf(node) else: with self.visit_node(node): return ''.join(self.visit(child) for child in children) @contextmanager def visit_node(self, node): self._check_type_rules(node) yield def _check_type_rules(self, node): for rule in self._rule_type_instances.get(node.type, []): rule.feed_node(node) def visit_leaf(self, leaf): self._check_type_rules(leaf) for rule in self._rule_value_instances.get(leaf.value, []): rule.feed_node(leaf) return leaf.prefix + leaf.value def initialize(self, node): pass def finalize(self): pass def add_issue(self, node, code, message): issue = Issue(node, code, message) if issue not in self.issues: self.issues.append(issue) return True @classmethod def register_rule(cls, **kwargs): """ Use it as a class decorator:: normalizer = Normalizer('grammar', 'config') @normalizer.register_rule(value='foo') class MyRule(Rule): 
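                    # hypothetical: flag every 'foo' leaf with this error code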
error_code = 42 """ return cls._register_rule(**kwargs) @classmethod def _register_rule(cls, value=None, values=(), type=None, types=()): values = list(values) types = list(types) if value is not None: values.append(value) if type is not None: types.append(type) if not values and not types: raise ValueError("You must register at least something.") def decorator(rule_cls): for v in values: cls.rule_value_classes.setdefault(v, []).append(rule_cls) for t in types: cls.rule_type_classes.setdefault(t, []).append(rule_cls) return rule_cls return decorator class NormalizerConfig(object): normalizer_class = Normalizer def create_normalizer(self, grammar): if self.normalizer_class is None: return None return self.normalizer_class(grammar, self) class Issue(object): def __init__(self, node, code, message): self._node = node self.code = code """ An integer code that stands for the type of error. """ self.message = message """ A message (string) for the issue. """ self.start_pos = node.start_pos """ The start position position of the error as a tuple (line, column). As always in |parso| the first line is 1 and the first column 0. """ def __eq__(self, other): return self.start_pos == other.start_pos and self.code == other.code def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self.code, self.start_pos)) def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.code) class Rule(object): code = None message = None def __init__(self, normalizer): self._normalizer = normalizer def is_issue(self, node): raise NotImplementedError() def get_node(self, node): return node def _get_message(self, message): if message is None: message = self.message if message is None: raise ValueError("The message on the class is not set.") return message def add_issue(self, node, code=None, message=None): if code is None: code = self.code if code is None: raise ValueError("The error code on the class is not set.") message = self._get_message(message) self._normalizer.add_issue(node, code, message) def feed_node(self, node): if self.is_issue(node): issue_node = self.get_node(node) self.add_issue(issue_node) parso-0.5.2/parso/tree.py0000664000175000017500000002604613575273707015211 0ustar davedave00000000000000from abc import abstractmethod, abstractproperty from parso._compatibility import utf8_repr, encoding, py_version from parso.utils import split_lines def search_ancestor(node, *node_types): """ Recursively looks at the parents of a node and returns the first found node that matches node_types. Returns ``None`` if no matching node is found. :param node: The ancestors of this node will be checked. :param node_types: type names that are searched for. :type node_types: tuple of str """ while True: node = node.parent if node is None or node.type in node_types: return node class NodeOrLeaf(object): """ The base class for nodes and leaves. """ __slots__ = () type = None ''' The type is a string that typically matches the types of the grammar file. ''' def get_root_node(self): """ Returns the root node of a parser tree. The returned node doesn't have a parent node like all the other nodes/leaves. """ scope = self while scope.parent is not None: scope = scope.parent return scope def get_next_sibling(self): """ Returns the node immediately following this node in this parent's children list. 
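        For a module parsed from ``'a = 1\nb = 2'``, for example, the first
        statement's next sibling is the second statement.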
        If this node does not have a next sibling, it is None.
        """
        # Can't use index(); we need to test by identity
        for i, child in enumerate(self.parent.children):
            if child is self:
                try:
                    return self.parent.children[i + 1]
                except IndexError:
                    return None

    def get_previous_sibling(self):
        """
        Returns the node immediately preceding this node in this parent's
        children list. If this node does not have a previous sibling, it is
        None.
        """
        # Can't use index(); we need to test by identity
        for i, child in enumerate(self.parent.children):
            if child is self:
                if i == 0:
                    return None
                return self.parent.children[i - 1]

    def get_previous_leaf(self):
        """
        Returns the previous leaf in the parser tree.
        Returns `None` if this is the first element in the parser tree.
        """
        node = self
        while True:
            c = node.parent.children
            i = c.index(node)
            if i == 0:
                node = node.parent
                if node.parent is None:
                    return None
            else:
                node = c[i - 1]
                break

        while True:
            try:
                node = node.children[-1]
            except AttributeError:  # A Leaf doesn't have children.
                return node

    def get_next_leaf(self):
        """
        Returns the next leaf in the parser tree.
        Returns `None` if this is the last element in the parser tree.
        """
        node = self
        while True:
            c = node.parent.children
            i = c.index(node)
            if i == len(c) - 1:
                node = node.parent
                if node.parent is None:
                    return None
            else:
                node = c[i + 1]
                break

        while True:
            try:
                node = node.children[0]
            except AttributeError:  # A Leaf doesn't have children.
                return node

    @abstractproperty
    def start_pos(self):
        """
        Returns the starting position of this node's value as a tuple, e.g.
        `(3, 4)`. The prefix is not included.

        :return tuple of int: (line, column)
        """

    @abstractproperty
    def end_pos(self):
        """
        Returns the end position of this node's value as a tuple, e.g.
        `(3, 4)`.

        :return tuple of int: (line, column)
        """

    @abstractmethod
    def get_start_pos_of_prefix(self):
        """
        Returns the start_pos of the prefix. This means basically it returns
        the end_pos of the previous leaf. The `get_start_pos_of_prefix()` of
        the prefix `+` in `2 + 1` would be `(1, 1)`, while the start_pos is
        `(1, 2)`.

        :return tuple of int: (line, column)
        """

    @abstractmethod
    def get_first_leaf(self):
        """
        Returns the first leaf of a node or itself if this is a leaf.
        """

    @abstractmethod
    def get_last_leaf(self):
        """
        Returns the last leaf of a node or itself if this is a leaf.
        """

    @abstractmethod
    def get_code(self, include_prefix=True):
        """
        Returns the code that was the input for the parser for this node.

        :param include_prefix: If ``False``, removes the prefix (whitespace
            and comments) of e.g. a statement.
        """


class Leaf(NodeOrLeaf):
    '''
    Leafs are basically tokens with a better API. Leafs exactly know where
    they were defined and what text precedes them.
    '''
    __slots__ = ('value', 'parent', 'line', 'column', 'prefix')

    def __init__(self, value, start_pos, prefix=''):
        self.value = value
        '''
        :py:func:`str` The value of the current token.
        '''
        self.start_pos = start_pos
        self.prefix = prefix
        '''
        :py:func:`str` Typically a mixture of whitespace and comments. Stuff
        that is syntactically irrelevant for the syntax tree.
        '''
        self.parent = None
        '''
        The parent :class:`BaseNode` of this leaf.
        '''

    @property
    def start_pos(self):
        return self.line, self.column

    @start_pos.setter
    def start_pos(self, value):
        self.line = value[0]
        self.column = value[1]

    def get_start_pos_of_prefix(self):
        previous_leaf = self.get_previous_leaf()
        if previous_leaf is None:
            lines = split_lines(self.prefix)
            # + 1 is needed because split_lines always returns at least [''].
            return self.line - len(lines) + 1, 0  # It's the first leaf.
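        # Otherwise the prefix starts exactly where the previous leaf's value
        # ends.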
return previous_leaf.end_pos def get_first_leaf(self): return self def get_last_leaf(self): return self def get_code(self, include_prefix=True): if include_prefix: return self.prefix + self.value else: return self.value @property def end_pos(self): lines = split_lines(self.value) end_pos_line = self.line + len(lines) - 1 # Check for multiline token if self.line == end_pos_line: end_pos_column = self.column + len(lines[-1]) else: end_pos_column = len(lines[-1]) return end_pos_line, end_pos_column @utf8_repr def __repr__(self): value = self.value if not value: value = self.type return "<%s: %s>" % (type(self).__name__, value) class TypedLeaf(Leaf): __slots__ = ('type',) def __init__(self, type, value, start_pos, prefix=''): super(TypedLeaf, self).__init__(value, start_pos, prefix) self.type = type class BaseNode(NodeOrLeaf): """ The super class for all nodes. A node has children, a type and possibly a parent node. """ __slots__ = ('children', 'parent') type = None def __init__(self, children): self.children = children """ A list of :class:`NodeOrLeaf` child nodes. """ self.parent = None ''' The parent :class:`BaseNode` of this leaf. None if this is the root node. ''' @property def start_pos(self): return self.children[0].start_pos def get_start_pos_of_prefix(self): return self.children[0].get_start_pos_of_prefix() @property def end_pos(self): return self.children[-1].end_pos def _get_code_for_children(self, children, include_prefix): if include_prefix: return "".join(c.get_code() for c in children) else: first = children[0].get_code(include_prefix=False) return first + "".join(c.get_code() for c in children[1:]) def get_code(self, include_prefix=True): return self._get_code_for_children(self.children, include_prefix) def get_leaf_for_position(self, position, include_prefixes=False): """ Get the :py:class:`parso.tree.Leaf` at ``position`` :param tuple position: A position tuple, row, column. Rows start from 1 :param bool include_prefixes: If ``False``, ``None`` will be returned if ``position`` falls on whitespace or comments before a leaf :return: :py:class:`parso.tree.Leaf` at ``position``, or ``None`` """ def binary_search(lower, upper): if lower == upper: element = self.children[lower] if not include_prefixes and position < element.start_pos: # We're on a prefix. 
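                    # The position points into whitespace or comments before
                    # this leaf, so there is no leaf to return.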
                    return None
                # In case we have prefixes, a leaf always matches
                try:
                    return element.get_leaf_for_position(position, include_prefixes)
                except AttributeError:
                    return element

            index = int((lower + upper) / 2)
            element = self.children[index]
            if position <= element.end_pos:
                return binary_search(lower, index)
            else:
                return binary_search(index + 1, upper)

        if not ((1, 0) <= position <= self.children[-1].end_pos):
            raise ValueError('Please provide a position that exists within this node.')
        return binary_search(0, len(self.children) - 1)

    def get_first_leaf(self):
        return self.children[0].get_first_leaf()

    def get_last_leaf(self):
        return self.children[-1].get_last_leaf()

    @utf8_repr
    def __repr__(self):
        code = self.get_code().replace('\n', ' ').replace('\r', ' ').strip()
        if not py_version >= 30:
            code = code.encode(encoding, 'replace')
        return "<%s: %s@%s,%s>" % \
            (type(self).__name__, code, self.start_pos[0], self.start_pos[1])


class Node(BaseNode):
    """Concrete implementation for interior nodes."""
    __slots__ = ('type',)

    def __init__(self, type, children):
        super(Node, self).__init__(children)
        self.type = type

    def __repr__(self):
        return "%s(%s, %r)" % (self.__class__.__name__, self.type, self.children)


class ErrorNode(BaseNode):
    """
    A node that contains valid nodes/leaves that were followed by a token
    that was invalid. This basically means that the leaf after this node is
    where Python would mark a syntax error.
    """
    __slots__ = ()
    type = 'error_node'


class ErrorLeaf(Leaf):
    """
    A leaf that is either completely invalid in a language (like `$` in
    Python) or is invalid at that position, like the star in `1 +* 1`.
    """
    __slots__ = ('token_type',)
    type = 'error_leaf'

    def __init__(self, token_type, value, start_pos, prefix=''):
        super(ErrorLeaf, self).__init__(value, start_pos, prefix)
        self.token_type = token_type

    def __repr__(self):
        return "<%s: %s:%s, %s>" % \
            (type(self).__name__, self.token_type, repr(self.value), self.start_pos)
parso-0.5.2/parso/cache.py0000664000175000017500000001142713575273707015312 0ustar davedave00000000000000import time
import os
import sys
import hashlib
import gc
import shutil
import platform
import errno
import logging

try:
    import cPickle as pickle
except ImportError:
    import pickle

from parso._compatibility import FileNotFoundError

LOG = logging.getLogger(__name__)

_PICKLE_VERSION = 32
"""
Version number (integer) for file system cache.

Increment this number when there are any incompatible changes in
the parser tree classes. For example, the following changes
are regarded as incompatible.

- A class name is changed.
- A class is moved to another module.
- A __slot__ of a class is changed.
"""

_VERSION_TAG = '%s-%s%s-%s' % (
    platform.python_implementation(),
    sys.version_info[0],
    sys.version_info[1],
    _PICKLE_VERSION
)
"""
Short name to distinguish Python implementations and versions.

It's like `sys.implementation.cache_tag` but for Python < 3.3 we generate
something similar. See:
http://docs.python.org/3/library/sys.html#sys.implementation
"""


def _get_default_cache_path():
    if platform.system().lower() == 'windows':
        dir_ = os.path.join(os.getenv('LOCALAPPDATA') or '~', 'Parso', 'Parso')
    elif platform.system().lower() == 'darwin':
        dir_ = os.path.join('~', 'Library', 'Caches', 'Parso')
    else:
        dir_ = os.path.join(os.getenv('XDG_CACHE_HOME') or '~/.cache', 'parso')
    return os.path.expanduser(dir_)


_default_cache_path = _get_default_cache_path()
"""
The path where the cache is stored.
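(It is computed once at import time by ``_get_default_cache_path()`` above
and is used as the default whenever a ``cache_path`` argument is None.)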
On Linux, this defaults to ``~/.cache/parso/``, on OS X to ``~/Library/Caches/Parso/`` and on Windows to ``%LOCALAPPDATA%\\Parso\\Parso\\``. On Linux, if environment variable ``$XDG_CACHE_HOME`` is set, ``$XDG_CACHE_HOME/parso`` is used instead of the default one. """ parser_cache = {} class _NodeCacheItem(object): def __init__(self, node, lines, change_time=None): self.node = node self.lines = lines if change_time is None: change_time = time.time() self.change_time = change_time def load_module(hashed_grammar, file_io, cache_path=None): """ Returns a module or None, if it fails. """ p_time = file_io.get_last_modified() if p_time is None: return None try: module_cache_item = parser_cache[hashed_grammar][file_io.path] if p_time <= module_cache_item.change_time: return module_cache_item.node except KeyError: return _load_from_file_system( hashed_grammar, file_io.path, p_time, cache_path=cache_path ) def _load_from_file_system(hashed_grammar, path, p_time, cache_path=None): cache_path = _get_hashed_path(hashed_grammar, path, cache_path=cache_path) try: try: if p_time > os.path.getmtime(cache_path): # Cache is outdated return None except OSError as e: if e.errno == errno.ENOENT: # In Python 2 instead of an IOError here we get an OSError. raise FileNotFoundError else: raise with open(cache_path, 'rb') as f: gc.disable() try: module_cache_item = pickle.load(f) finally: gc.enable() except FileNotFoundError: return None else: parser_cache.setdefault(hashed_grammar, {})[path] = module_cache_item LOG.debug('pickle loaded: %s', path) return module_cache_item.node def save_module(hashed_grammar, file_io, module, lines, pickling=True, cache_path=None): path = file_io.path try: p_time = None if path is None else file_io.get_last_modified() except OSError: p_time = None pickling = False item = _NodeCacheItem(module, lines, p_time) parser_cache.setdefault(hashed_grammar, {})[path] = item if pickling and path is not None: _save_to_file_system(hashed_grammar, path, item, cache_path=cache_path) def _save_to_file_system(hashed_grammar, path, item, cache_path=None): with open(_get_hashed_path(hashed_grammar, path, cache_path=cache_path), 'wb') as f: pickle.dump(item, f, pickle.HIGHEST_PROTOCOL) def clear_cache(cache_path=None): if cache_path is None: cache_path = _default_cache_path shutil.rmtree(cache_path) parser_cache.clear() def _get_hashed_path(hashed_grammar, path, cache_path=None): directory = _get_cache_directory_path(cache_path=cache_path) file_hash = hashlib.sha256(path.encode("utf-8")).hexdigest() return os.path.join(directory, '%s-%s.pkl' % (hashed_grammar, file_hash)) def _get_cache_directory_path(cache_path=None): if cache_path is None: cache_path = _default_cache_path directory = os.path.join(cache_path, _VERSION_TAG) if not os.path.exists(directory): os.makedirs(directory) return directory parso-0.5.2/LICENSE.txt0000664000175000017500000001012013575273707014361 0ustar davedave00000000000000All contributions towards parso are MIT licensed. Some Python files have been taken from the standard library and are therefore PSF licensed. Modifications on these files are dual licensed (both MIT and PSF). These files are: - parso/pgen2/* - parso/tokenize.py - parso/token.py - test/test_pgen2.py Also some test files under test/normalizer_issue_files have been copied from https://github.com/PyCQA/pycodestyle (Expat License == MIT License). 
------------------------------------------------------------------------------- The MIT License (MIT) Copyright (c) <2013-2017> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------------------- PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 -------------------------------------------- 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python. 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. 
By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement. parso-0.5.2/.coveragerc0000664000175000017500000000037313575273707014670 0ustar davedave00000000000000[run] source = parso [report] # Regexes for lines to exclude from consideration exclude_lines = # Don't complain about missing debug-only code: def __repr__ # Don't complain if non-runnable code isn't run: if __name__ == .__main__.: parso-0.5.2/README.rst0000664000175000017500000000522713575273707014241 0ustar davedave00000000000000################################################################### parso - A Python Parser ################################################################### .. image:: https://travis-ci.org/davidhalter/parso.svg?branch=master :target: https://travis-ci.org/davidhalter/parso :alt: Travis CI build status .. image:: https://coveralls.io/repos/github/davidhalter/parso/badge.svg?branch=master :target: https://coveralls.io/github/davidhalter/parso?branch=master :alt: Coverage Status .. image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png Parso is a Python parser that supports error recovery and round-trip parsing for different Python versions (in multiple Python versions). Parso is also able to list multiple syntax errors in your python file. Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful for other projects as well. Parso consists of a small API to parse Python and analyse the syntax tree. A simple example: .. code-block:: python >>> import parso >>> module = parso.parse('hello + 1', version="3.6") >>> expr = module.children[0] >>> expr PythonNode(arith_expr, [, , ]) >>> print(expr.get_code()) hello + 1 >>> name = expr.children[0] >>> name >>> name.end_pos (1, 5) >>> expr.end_pos (1, 9) To list multiple issues: .. code-block:: python >>> grammar = parso.load_grammar() >>> module = grammar.parse('foo +\nbar\ncontinue') >>> error1, error2 = grammar.iter_errors(module) >>> error1.message 'SyntaxError: invalid syntax' >>> error2.message "SyntaxError: 'continue' not properly in loop" Resources ========= - `Testing `_ - `PyPI `_ - `Docs `_ - Uses `semantic versioning `_ Installation ============ pip install parso Future ====== - There will be better support for refactoring and comments. Stay tuned. - There's a WIP PEP8 validator. It's however not in a good shape, yet. Known Issues ============ - `async`/`await` are already used as keywords in Python3.6. - `from __future__ import print_function` is not ignored. Acknowledgements ================ - Guido van Rossum (@gvanrossum) for creating the parser generator pgen2 (originally used in lib2to3). - `Salome Schneider `_ for the extremely awesome parso logo. .. _jedi: https://github.com/davidhalter/jedi parso-0.5.2/MANIFEST.in0000664000175000017500000000041213575273707014277 0ustar davedave00000000000000include README.rst include CHANGELOG.rst include LICENSE.txt include AUTHORS.txt include .coveragerc include conftest.py include pytest.ini include tox.ini include parso/python/grammar*.txt recursive-include test * recursive-include docs * recursive-exclude * *.pyc parso-0.5.2/CHANGELOG.rst0000664000175000017500000000355513575273707014575 0ustar davedave00000000000000.. 
:changelog: Changelog --------- 0.5.2 (2019-12-15) ++++++++++++++++++ - Add include_setitem to get_definition/is_definition and get_defined_names (#66) - Fix named expression error listing (#89, #90) - Fix some f-string tokenizer issues (#93) 0.5.1 (2019-07-13) ++++++++++++++++++ - Fix: Some unicode identifiers were not correctly tokenized - Fix: Line continuations in f-strings are now working 0.5.0 (2019-06-20) ++++++++++++++++++ - **Breaking Change** comp_for is now called sync_comp_for for all Python versions to be compatible with the Python 3.8 Grammar - Added .pyi stubs for a lot of the parso API - Small FileIO changes 0.4.0 (2019-04-05) ++++++++++++++++++ - Python 3.8 support - FileIO support, it's now possible to use abstract file IO, support is alpha 0.3.4 (2019-02-13) +++++++++++++++++++ - Fix an f-string tokenizer error 0.3.3 (2019-02-06) +++++++++++++++++++ - Fix async errors in the diff parser - A fix in iter_errors - This is a very small bugfix release 0.3.2 (2019-01-24) +++++++++++++++++++ - 20+ bugfixes in the diff parser and 3 in the tokenizer - A fuzzer for the diff parser, to give confidence that the diff parser is in a good shape. - Some bugfixes for f-string 0.3.1 (2018-07-09) +++++++++++++++++++ - Bugfixes in the diff parser and keyword-only arguments 0.3.0 (2018-06-30) +++++++++++++++++++ - Rewrote the pgen2 parser generator. 0.2.1 (2018-05-21) +++++++++++++++++++ - A bugfix for the diff parser. - Grammar files can now be loaded from a specific path. 0.2.0 (2018-04-15) +++++++++++++++++++ - f-strings are now parsed as a part of the normal Python grammar. This makes it way easier to deal with them. 0.1.1 (2017-11-05) +++++++++++++++++++ - Fixed a few bugs in the caching layer - Added support for Python 3.7 0.1.0 (2017-09-04) +++++++++++++++++++ - Pulling the library out of Jedi. Some APIs will definitely change. parso-0.5.2/parso.egg-info/0000775000175000017500000000000013575273727015364 5ustar davedave00000000000000parso-0.5.2/parso.egg-info/PKG-INFO0000664000175000017500000001607013575273727016465 0ustar davedave00000000000000Metadata-Version: 2.1 Name: parso Version: 0.5.2 Summary: A Python Parser Home-page: https://github.com/davidhalter/parso Author: David Halter Author-email: davidhalter88@gmail.com Maintainer: David Halter Maintainer-email: davidhalter88@gmail.com License: MIT Description: ################################################################### parso - A Python Parser ################################################################### .. image:: https://travis-ci.org/davidhalter/parso.svg?branch=master :target: https://travis-ci.org/davidhalter/parso :alt: Travis CI build status .. image:: https://coveralls.io/repos/github/davidhalter/parso/badge.svg?branch=master :target: https://coveralls.io/github/davidhalter/parso?branch=master :alt: Coverage Status .. image:: https://raw.githubusercontent.com/davidhalter/parso/master/docs/_static/logo_characters.png Parso is a Python parser that supports error recovery and round-trip parsing for different Python versions (in multiple Python versions). Parso is also able to list multiple syntax errors in your python file. Parso has been battle-tested by jedi_. It was pulled out of jedi to be useful for other projects as well. Parso consists of a small API to parse Python and analyse the syntax tree. A simple example: .. 
Keywords: python parser parsing Platform: any Classifier: Development Status :: 4 - Beta Classifier: Environment :: Plugins Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE) Classifier: Topic :: Utilities Provides-Extra: testing parso-0.5.2/parso.egg-info/dependency_links.txt0000664000175000017500000000000113575273727021432 0ustar davedave00000000000000 parso-0.5.2/parso.egg-info/top_level.txt0000664000175000017500000000000613575273727020112 0ustar davedave00000000000000parso parso-0.5.2/parso.egg-info/requires.txt0000664000175000017500000000004013575273727017756 0ustar davedave00000000000000 [testing] pytest>=3.0.7 docopt parso-0.5.2/parso.egg-info/SOURCES.txt0000664000175000017500000000653213575273727017256 0ustar davedave00000000000000.coveragerc AUTHORS.txt CHANGELOG.rst LICENSE.txt MANIFEST.in README.rst conftest.py pytest.ini setup.cfg setup.py tox.ini docs/Makefile docs/README.md docs/conf.py docs/global.rst docs/index.rst docs/_static/logo.png docs/_static/logo_characters.png docs/_templates/ghbuttons.html docs/_templates/sidebarlogo.html docs/_themes/flask_theme_support.py docs/_themes/flask/LICENSE docs/_themes/flask/layout.html docs/_themes/flask/relations.html docs/_themes/flask/theme.conf docs/_themes/flask/static/flasky.css_t docs/_themes/flask/static/small_flask.css docs/docs/development.rst docs/docs/installation.rst docs/docs/parser-tree.rst docs/docs/usage.rst parso/__init__.py parso/_compatibility.py parso/cache.py parso/file_io.py parso/grammar.py parso/normalizer.py parso/parser.py parso/tree.py parso/utils.py parso.egg-info/PKG-INFO parso.egg-info/SOURCES.txt parso.egg-info/dependency_links.txt parso.egg-info/requires.txt parso.egg-info/top_level.txt parso/pgen2/__init__.py parso/pgen2/generator.py parso/pgen2/grammar_parser.py parso/python/__init__.py parso/python/diff.py parso/python/errors.py parso/python/grammar26.txt parso/python/grammar27.txt parso/python/grammar33.txt parso/python/grammar34.txt parso/python/grammar35.txt parso/python/grammar36.txt parso/python/grammar37.txt parso/python/grammar38.txt parso/python/grammar39.txt parso/python/parser.py parso/python/pep8.py parso/python/prefix.py parso/python/token.py parso/python/tokenize.py parso/python/tree.py test/__init__.py test/failing_examples.py test/fuzz_diff_parser.py test/test_absolute_import.py test/test_cache.py test/test_diff_parser.py test/test_error_recovery.py test/test_file_python_errors.py test/test_fstring.py test/test_get_code.py test/test_grammar.py test/test_load_grammar.py test/test_normalizer_issues_files.py test/test_old_fast_parser.py test/test_param_splitting.py test/test_parser.py test/test_parser_tree.py test/test_pep8.py test/test_pgen2.py test/test_prefix.py test/test_python_errors.py test/test_tokenize.py test/test_utils.py test/normalizer_issue_files/E10.py test/normalizer_issue_files/E101.py 
test/normalizer_issue_files/E11.py test/normalizer_issue_files/E12_first.py test/normalizer_issue_files/E12_not_first.py test/normalizer_issue_files/E12_not_second.py test/normalizer_issue_files/E12_second.py test/normalizer_issue_files/E12_third.py test/normalizer_issue_files/E20.py test/normalizer_issue_files/E21.py test/normalizer_issue_files/E22.py test/normalizer_issue_files/E23.py test/normalizer_issue_files/E25.py test/normalizer_issue_files/E26.py test/normalizer_issue_files/E27.py test/normalizer_issue_files/E29.py test/normalizer_issue_files/E30.py test/normalizer_issue_files/E30not.py test/normalizer_issue_files/E40.py test/normalizer_issue_files/E50.py test/normalizer_issue_files/E70.py test/normalizer_issue_files/E71.py test/normalizer_issue_files/E72.py test/normalizer_issue_files/E73.py test/normalizer_issue_files/LICENSE test/normalizer_issue_files/allowed_syntax.py test/normalizer_issue_files/allowed_syntax_python2.py test/normalizer_issue_files/allowed_syntax_python3.4.py test/normalizer_issue_files/allowed_syntax_python3.5.py test/normalizer_issue_files/allowed_syntax_python3.6.py test/normalizer_issue_files/latin-1.py test/normalizer_issue_files/python2.7.py test/normalizer_issue_files/python3.py test/normalizer_issue_files/utf-8-bom.py test/normalizer_issue_files/utf-8.pyparso-0.5.2/setup.py0000775000175000017500000000343713575273707014270 0ustar davedave00000000000000#!/usr/bin/env python from __future__ import with_statement from setuptools import setup, find_packages import parso __AUTHOR__ = 'David Halter' __AUTHOR_EMAIL__ = 'davidhalter88@gmail.com' readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read() setup(name='parso', version=parso.__version__, description='A Python Parser', author=__AUTHOR__, author_email=__AUTHOR_EMAIL__, include_package_data=True, maintainer=__AUTHOR__, maintainer_email=__AUTHOR_EMAIL__, url='https://github.com/davidhalter/parso', license='MIT', keywords='python parser parsing', long_description=readme, packages=find_packages(exclude=['test']), package_data={'parso': ['python/grammar*.txt']}, platforms=['any'], classifiers=[ 'Development Status :: 4 - Beta', 'Environment :: Plugins', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Text Editors :: Integrated Development Environments (IDE)', 'Topic :: Utilities', ], extras_require={ 'testing': [ 'pytest>=3.0.7', 'docopt', ], }, ) parso-0.5.2/test/0000775000175000017500000000000013575273727013525 5ustar davedave00000000000000parso-0.5.2/test/test_parser_tree.py0000664000175000017500000001540513575273707017454 0ustar davedave00000000000000# -*- coding: utf-8 # This file contains Unicode characters. 
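# These tests exercise the public parser tree API: function/lambda params,
# annotations, end positions, yield/return/raise iteration and is_definition.
# The version fixtures (each_version, each_py3_version, ...) presumably come
# from the project's conftest.py.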
from textwrap import dedent import pytest from parso import parse from parso.python import tree class TestsFunctionAndLambdaParsing(object): FIXTURES = [ ('def my_function(x, y, z) -> str:\n return x + y * z\n', { 'name': 'my_function', 'call_sig': 'my_function(x, y, z)', 'params': ['x', 'y', 'z'], 'annotation': "str", }), ('lambda x, y, z: x + y * z\n', { 'name': '', 'call_sig': '(x, y, z)', 'params': ['x', 'y', 'z'], }), ] @pytest.fixture(params=FIXTURES) def node(self, request): parsed = parse(dedent(request.param[0]), version='3.5') request.keywords['expected'] = request.param[1] child = parsed.children[0] if child.type == 'simple_stmt': child = child.children[0] return child @pytest.fixture() def expected(self, request, node): return request.keywords['expected'] def test_name(self, node, expected): if node.type != 'lambdef': assert isinstance(node.name, tree.Name) assert node.name.value == expected['name'] def test_params(self, node, expected): assert isinstance(node.get_params(), list) assert all(isinstance(x, tree.Param) for x in node.get_params()) assert [str(x.name.value) for x in node.get_params()] == [x for x in expected['params']] def test_is_generator(self, node, expected): assert node.is_generator() is expected.get('is_generator', False) def test_yields(self, node, expected): assert node.is_generator() == expected.get('yields', False) def test_annotation(self, node, expected): expected_annotation = expected.get('annotation', None) if expected_annotation is None: assert node.annotation is None else: assert node.annotation.value == expected_annotation def test_end_pos_line(each_version): # jedi issue #150 s = "x()\nx( )\nx( )\nx ( )\n" module = parse(s, version=each_version) for i, simple_stmt in enumerate(module.children[:-1]): expr_stmt = simple_stmt.children[0] assert expr_stmt.end_pos == (i + 1, i + 3) def test_default_param(each_version): func = parse('def x(foo=42): pass', version=each_version).children[0] param, = func.get_params() assert param.default.value == '42' assert param.annotation is None assert not param.star_count def test_annotation_param(each_py3_version): func = parse('def x(foo: 3): pass', version=each_py3_version).children[0] param, = func.get_params() assert param.default is None assert param.annotation.value == '3' assert not param.star_count def test_annotation_params(each_py3_version): func = parse('def x(foo: 3, bar: 4): pass', version=each_py3_version).children[0] param1, param2 = func.get_params() assert param1.default is None assert param1.annotation.value == '3' assert not param1.star_count assert param2.default is None assert param2.annotation.value == '4' assert not param2.star_count def test_default_and_annotation_param(each_py3_version): func = parse('def x(foo:3=42): pass', version=each_py3_version).children[0] param, = func.get_params() assert param.default.value == '42' assert param.annotation.value == '3' assert not param.star_count def test_ellipsis_py2(each_py2_version): module = parse('[0][...]', version=each_py2_version, error_recovery=False) expr = module.children[0] trailer = expr.children[-1] subscript = trailer.children[1] assert subscript.type == 'subscript' assert [leaf.value for leaf in subscript.children] == ['.', '.', '.'] def get_yield_exprs(code, version): return list(parse(code, version=version).children[0].iter_yield_exprs()) def get_return_stmts(code): return list(parse(code).children[0].iter_return_stmts()) def get_raise_stmts(code, child): return list(parse(code).children[child].iter_raise_stmts()) def 
test_yields(each_version): y, = get_yield_exprs('def x(): yield', each_version) assert y.value == 'yield' assert y.type == 'keyword' y, = get_yield_exprs('def x(): (yield 1)', each_version) assert y.type == 'yield_expr' y, = get_yield_exprs('def x(): [1, (yield)]', each_version) assert y.type == 'keyword' def test_yield_from(): y, = get_yield_exprs('def x(): (yield from 1)', '3.3') assert y.type == 'yield_expr' def test_returns(): r, = get_return_stmts('def x(): return') assert r.value == 'return' assert r.type == 'keyword' r, = get_return_stmts('def x(): return 1') assert r.type == 'return_stmt' def test_raises(): code = """ def single_function(): raise Exception def top_function(): def inner_function(): raise NotImplementedError() inner_function() raise Exception def top_function_three(): try: raise NotImplementedError() except NotImplementedError: pass raise Exception """ r = get_raise_stmts(code, 0) # Lists in a simple Function assert len(list(r)) == 1 r = get_raise_stmts(code, 1) # Doesn't Exceptions list in closures assert len(list(r)) == 1 r = get_raise_stmts(code, 2) # Lists inside try-catch assert len(list(r)) == 2 @pytest.mark.parametrize( 'code, name_index, is_definition, include_setitem', [ ('x = 3', 0, True, False), ('x.y = 3', 0, False, False), ('x.y = 3', 1, True, False), ('x.y = u.v = z', 0, False, False), ('x.y = u.v = z', 1, True, False), ('x.y = u.v = z', 2, False, False), ('x.y = u.v, w = z', 3, True, False), ('x.y = u.v, w = z', 4, True, False), ('x.y = u.v, w = z', 5, False, False), ('x, y = z', 0, True, False), ('x, y = z', 1, True, False), ('x, y = z', 2, False, False), ('x, y = z', 2, False, False), ('x[0], y = z', 2, False, False), ('x[0] = z', 0, False, False), ('x[0], y = z', 0, False, False), ('x[0], y = z', 2, False, True), ('x[0] = z', 0, True, True), ('x[0], y = z', 0, True, True), ('x: int = z', 0, True, False), ('x: int = z', 1, False, False), ('x: int = z', 2, False, False), ('x: int', 0, True, False), ('x: int', 1, False, False), ] ) def test_is_definition(code, name_index, is_definition, include_setitem): module = parse(code, version='3.8') name = module.get_first_leaf() while True: if name.type == 'name': if name_index == 0: break name_index -= 1 name = name.get_next_leaf() assert name.is_definition(include_setitem=include_setitem) == is_definition parso-0.5.2/test/test_pgen2.py0000664000175000017500000002214013575273707016146 0ustar davedave00000000000000"""Test suite for 2to3's parser and grammar files. This is the place to add tests for changes to 2to3's grammar, such as those merging the grammars for Python 2 and 3. In addition to specific tests for parts of the grammar we've changed, we also make sure we can parse the test_grammar.py files from both Python 2 and Python 3. 
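The works_ge_py*/works_in_py2 fixtures used below are assumed to be provided
by the project's conftest.py; each one parses code with a specific grammar
version.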
""" from textwrap import dedent import pytest from parso import load_grammar from parso import ParserSyntaxError from parso.pgen2 import generate_grammar from parso.python import tokenize def _parse(code, version=None): code = dedent(code) + "\n\n" grammar = load_grammar(version=version) return grammar.parse(code, error_recovery=False) def _invalid_syntax(code, version=None, **kwargs): with pytest.raises(ParserSyntaxError): module = _parse(code, version=version, **kwargs) # For debugging print(module.children) def test_formfeed(each_py2_version): s = u"""print 1\n\x0Cprint 2\n""" t = _parse(s, each_py2_version) assert t.children[0].children[0].type == 'print_stmt' assert t.children[1].children[0].type == 'print_stmt' s = u"""1\n\x0C\x0C2\n""" t = _parse(s, each_py2_version) def test_matrix_multiplication_operator(works_ge_py35): works_ge_py35.parse("a @ b") works_ge_py35.parse("a @= b") def test_yield_from(works_ge_py3, each_version): works_ge_py3.parse("yield from x") works_ge_py3.parse("(yield from x) + y") _invalid_syntax("yield from", each_version) def test_await_expr(works_ge_py35): works_ge_py35.parse("""async def foo(): await x """) works_ge_py35.parse("""async def foo(): def foo(): pass def foo(): pass await x """) works_ge_py35.parse("""async def foo(): return await a""") works_ge_py35.parse("""def foo(): def foo(): pass async def foo(): await x """) @pytest.mark.skipif('sys.version_info[:2] < (3, 5)') @pytest.mark.xfail(reason="acting like python 3.7") def test_async_var(): _parse("""async = 1""", "3.5") _parse("""await = 1""", "3.5") _parse("""def async(): pass""", "3.5") def test_async_for(works_ge_py35): works_ge_py35.parse("async def foo():\n async for a in b: pass") def test_async_with(works_ge_py35): works_ge_py35.parse("async def foo():\n async with a: pass") @pytest.mark.skipif('sys.version_info[:2] < (3, 5)') @pytest.mark.xfail(reason="acting like python 3.7") def test_async_with_invalid(): _invalid_syntax("""def foo(): async with a: pass""", version="3.5") def test_raise_3x_style_1(each_version): _parse("raise", each_version) def test_raise_2x_style_2(works_in_py2): works_in_py2.parse("raise E, V") def test_raise_2x_style_3(works_in_py2): works_in_py2.parse("raise E, V, T") def test_raise_2x_style_invalid_1(each_version): _invalid_syntax("raise E, V, T, Z", version=each_version) def test_raise_3x_style(works_ge_py3): works_ge_py3.parse("raise E1 from E2") def test_raise_3x_style_invalid_1(each_version): _invalid_syntax("raise E, V from E1", each_version) def test_raise_3x_style_invalid_2(each_version): _invalid_syntax("raise E from E1, E2", each_version) def test_raise_3x_style_invalid_3(each_version): _invalid_syntax("raise from E1, E2", each_version) def test_raise_3x_style_invalid_4(each_version): _invalid_syntax("raise E from", each_version) # Adapted from Python 3's Lib/test/test_grammar.py:GrammarTests.testFuncdef def test_annotation_1(works_ge_py3): works_ge_py3.parse("""def f(x) -> list: pass""") def test_annotation_2(works_ge_py3): works_ge_py3.parse("""def f(x:int): pass""") def test_annotation_3(works_ge_py3): works_ge_py3.parse("""def f(*x:str): pass""") def test_annotation_4(works_ge_py3): works_ge_py3.parse("""def f(**x:float): pass""") def test_annotation_5(works_ge_py3): works_ge_py3.parse("""def f(x, y:1+2): pass""") def test_annotation_6(each_py3_version): _invalid_syntax("""def f(a, (b:1, c:2, d)): pass""", each_py3_version) def test_annotation_7(each_py3_version): _invalid_syntax("""def f(a, (b:1, c:2, d), e:3=4, f=5, *g:6): pass""", 
each_py3_version) def test_annotation_8(each_py3_version): s = """def f(a, (b:1, c:2, d), e:3=4, f=5, *g:6, h:7, i=8, j:9=10, **k:11) -> 12: pass""" _invalid_syntax(s, each_py3_version) def test_except_new(each_version): s = dedent(""" try: x except E as N: y""") _parse(s, each_version) def test_except_old(works_in_py2): s = dedent(""" try: x except E, N: y""") works_in_py2.parse(s) # Adapted from Python 3's Lib/test/test_grammar.py:GrammarTests.testAtoms def test_set_literal_1(works_ge_py27): works_ge_py27.parse("""x = {'one'}""") def test_set_literal_2(works_ge_py27): works_ge_py27.parse("""x = {'one', 1,}""") def test_set_literal_3(works_ge_py27): works_ge_py27.parse("""x = {'one', 'two', 'three'}""") def test_set_literal_4(works_ge_py27): works_ge_py27.parse("""x = {2, 3, 4,}""") def test_new_octal_notation(each_version): _parse("""0o7777777777777""", each_version) _invalid_syntax("""0o7324528887""", each_version) def test_old_octal_notation(works_in_py2): works_in_py2.parse("07") def test_long_notation(works_in_py2): works_in_py2.parse("0xFl") works_in_py2.parse("0xFL") works_in_py2.parse("0b1l") works_in_py2.parse("0B1L") works_in_py2.parse("0o7l") works_in_py2.parse("0O7L") works_in_py2.parse("0l") works_in_py2.parse("0L") works_in_py2.parse("10l") works_in_py2.parse("10L") def test_new_binary_notation(each_version): _parse("""0b101010""", each_version) _invalid_syntax("""0b0101021""", each_version) def test_class_new_syntax(works_ge_py3): works_ge_py3.parse("class B(t=7): pass") works_ge_py3.parse("class B(t, *args): pass") works_ge_py3.parse("class B(t, **kwargs): pass") works_ge_py3.parse("class B(t, *args, **kwargs): pass") works_ge_py3.parse("class B(t, y=9, *args, **kwargs): pass") def test_parser_idempotency_extended_unpacking(works_ge_py3): """A cut-down version of pytree_idempotency.py.""" works_ge_py3.parse("a, *b, c = x\n") works_ge_py3.parse("[*a, b] = x\n") works_ge_py3.parse("(z, *y, w) = m\n") works_ge_py3.parse("for *z, m in d: pass\n") def test_multiline_bytes_literals(each_version): """ It's not possible to get the same result when using \xaa in Python 2/3, because it's treated differently. 
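    In Python 2 the escape denotes the single byte 0xAA, while a Python 3
    str literal treats it as U+00AA; only the b"..." literal keeps the same
    byte semantics in both.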
""" s = u""" md5test(b"\xaa" * 80, (b"Test Using Larger Than Block-Size Key " b"and Larger Than One Block-Size Data"), "6f630fad67cda0ee1fb1f562db3aa53e") """ _parse(s, each_version) def test_multiline_bytes_tripquote_literals(each_version): s = ''' b""" """ ''' _parse(s, each_version) def test_ellipsis(works_ge_py3, each_version): works_ge_py3.parse("...") _parse("[0][...]", version=each_version) def test_dict_unpacking(works_ge_py35): works_ge_py35.parse("{**dict(a=3), foo:2}") def test_multiline_str_literals(each_version): s = u""" md5test("\xaa" * 80, ("Test Using Larger Than Block-Size Key " "and Larger Than One Block-Size Data"), "6f630fad67cda0ee1fb1f562db3aa53e") """ _parse(s, each_version) def test_py2_backticks(works_in_py2): works_in_py2.parse("`1`") def test_py2_string_prefixes(works_in_py2): works_in_py2.parse("ur'1'") works_in_py2.parse("Ur'1'") works_in_py2.parse("UR'1'") _invalid_syntax("ru'1'", works_in_py2.version) def py_br(each_version): _parse('br""', each_version) def test_py3_rb(works_ge_py3): works_ge_py3.parse("rb'1'") works_ge_py3.parse("RB'1'") def test_left_recursion(): with pytest.raises(ValueError, match='left recursion'): generate_grammar('foo: foo NAME\n', tokenize.PythonTokenTypes) @pytest.mark.parametrize( 'grammar, error_match', [ ['foo: bar | baz\nbar: NAME\nbaz: NAME\n', r"foo is ambiguous.*given a TokenType\(NAME\).*bar or baz"], ['''foo: bar | baz\nbar: 'x'\nbaz: "x"\n''', r"foo is ambiguous.*given a ReservedString\(x\).*bar or baz"], ['''foo: bar | 'x'\nbar: 'x'\n''', r"foo is ambiguous.*given a ReservedString\(x\).*bar or foo"], # An ambiguity with the second (not the first) child of a production ['outer: "a" [inner] "b" "c"\ninner: "b" "c" [inner]\n', r"outer is ambiguous.*given a ReservedString\(b\).*inner or outer"], # An ambiguity hidden by a level of indirection (middle) ['outer: "a" [middle] "b" "c"\nmiddle: inner\ninner: "b" "c" [inner]\n', r"outer is ambiguous.*given a ReservedString\(b\).*middle or outer"], ] ) def test_ambiguities(grammar, error_match): with pytest.raises(ValueError, match=error_match): generate_grammar(grammar, tokenize.PythonTokenTypes) parso-0.5.2/test/test_old_fast_parser.py0000664000175000017500000000676513575273707020321 0ustar davedave00000000000000""" These tests test the cases that the old fast parser tested with the normal parser. The old fast parser doesn't exist anymore and was replaced with a diff parser. However the tests might still be relevant for the parser. 
""" from textwrap import dedent from parso._compatibility import u from parso import parse def test_carriage_return_splitting(): source = u(dedent(''' "string" class Foo(): pass ''')) source = source.replace('\n', '\r\n') module = parse(source) assert [n.value for lst in module.get_used_names().values() for n in lst] == ['Foo'] def check_p(src, number_parsers_used, number_of_splits=None, number_of_misses=0): if number_of_splits is None: number_of_splits = number_parsers_used module_node = parse(src) assert src == module_node.get_code() return module_node def test_for(): src = dedent("""\ for a in [1,2]: a for a1 in 1,"": a1 """) check_p(src, 1) def test_class_with_class_var(): src = dedent("""\ class SuperClass: class_super = 3 def __init__(self): self.foo = 4 pass """) check_p(src, 3) def test_func_with_if(): src = dedent("""\ def recursion(a): if foo: return recursion(a) else: if bar: return inexistent else: return a """) check_p(src, 1) def test_decorator(): src = dedent("""\ class Decorator(): @memoize def dec(self, a): return a """) check_p(src, 2) def test_nested_funcs(): src = dedent("""\ def memoize(func): def wrapper(*args, **kwargs): return func(*args, **kwargs) return wrapper """) check_p(src, 3) def test_multi_line_params(): src = dedent("""\ def x(a, b): pass foo = 1 """) check_p(src, 2) def test_class_func_if(): src = dedent("""\ class Class: def func(self): if 1: a else: b pass """) check_p(src, 3) def test_multi_line_for(): src = dedent("""\ for x in [1, 2]: pass pass """) check_p(src, 1) def test_wrong_indentation(): src = dedent("""\ def func(): a b a """) #check_p(src, 1) src = dedent("""\ def complex(): def nested(): a b a def other(): pass """) check_p(src, 3) def test_strange_parentheses(): src = dedent(""" class X(): a = (1 if 1 else 2) def x(): pass """) check_p(src, 2) def test_fake_parentheses(): """ The fast parser splitting counts parentheses, but not as correct tokens. Therefore parentheses in string tokens are included as well. This needs to be accounted for. 
""" src = dedent(r""" def x(): a = (')' if 1 else 2) def y(): pass def z(): pass """) check_p(src, 3, 2, 1) def test_additional_indent(): source = dedent('''\ int( def x(): pass ''') check_p(source, 2) def test_round_trip(): code = dedent(''' def x(): """hahaha""" func''') assert parse(code).get_code() == code def test_parentheses_in_string(): code = dedent(''' def x(): '(' import abc abc.''') check_p(code, 2, 1, 1) parso-0.5.2/test/normalizer_issue_files/0000775000175000017500000000000013575273727020301 5ustar davedave00000000000000parso-0.5.2/test/normalizer_issue_files/E12_first.py0000664000175000017500000000274313575273707022415 0ustar davedave00000000000000abc = "E121", ( #: E121:2 "dent") abc = "E122", ( #: E121:0 "dent") my_list = [ 1, 2, 3, 4, 5, 6, #: E123 ] abc = "E124", ("visual", "indent_two" #: E124:14 ) abc = "E124", ("visual", "indent_five" #: E124:0 ) a = (123, #: E124:0 ) #: E129+1:4 if (row < 0 or self.moduleCount <= row or col < 0 or self.moduleCount <= col): raise Exception("%s,%s - %s" % (row, col, self.moduleCount)) abc = "E126", ( #: E126:12 "dent") abc = "E126", ( #: E126:8 "dent") abc = "E127", ("over-", #: E127:18 "over-indent") abc = "E128", ("visual", #: E128:4 "hanging") abc = "E128", ("under-", #: E128:14 "under-indent") my_list = [ 1, 2, 3, 4, 5, 6, #: E123:5 ] result = { #: E121:3 'key1': 'value', #: E121:3 'key2': 'value', } rv.update(dict.fromkeys(( 'qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), #: E128:10 '?'), "foo") abricot = 3 + \ 4 + \ 5 + 6 abc = "hello", ( "there", #: E126:5 # "john", "dude") part = set_mimetype(( a.get('mime_type', 'text')), 'default') part = set_mimetype(( a.get('mime_type', 'text')), #: E127:21 'default') parso-0.5.2/test/normalizer_issue_files/E12_second.py0000664000175000017500000000774613575273707022551 0ustar davedave00000000000000if True: result = some_function_that_takes_arguments( 'a', 'b', 'c', 'd', 'e', 'f', #: E123:0 ) #: E122+1 if some_very_very_very_long_variable_name or var \ or another_very_long_variable_name: raise Exception() #: E122+1 if some_very_very_very_long_variable_name or var[0] \ or another_very_long_variable_name: raise Exception() if True: #: E122+1 if some_very_very_very_long_variable_name or var \ or another_very_long_variable_name: raise Exception() if True: #: E122+1 if some_very_very_very_long_variable_name or var[0] \ or another_very_long_variable_name: raise Exception() #: E901+1:8 dictionary = [ "is": { # Might be a E122:4, but is not because the code is invalid Python. 
"nested": yes(), }, ] setup('', scripts=[''], classifiers=[ #: E121:6 'Development Status :: 4 - Beta', 'Environment :: Console', 'Intended Audience :: Developers', ]) #: E123+2:4 E291:15 abc = "E123", ( "bad", "hanging", "close" ) result = { 'foo': [ 'bar', { 'baz': 'frop', #: E123 } #: E123 ] #: E123 } result = some_function_that_takes_arguments( 'a', 'b', 'c', 'd', 'e', 'f', #: E123 ) my_list = [1, 2, 3, 4, 5, 6, #: E124:0 ] my_list = [1, 2, 3, 4, 5, 6, #: E124:19 ] #: E124+2 result = some_function_that_takes_arguments('a', 'b', 'c', 'd', 'e', 'f', ) fooff(aaaa, cca( vvv, dadd ), fff, #: E124:0 ) fooff(aaaa, ccaaa( vvv, dadd ), fff, #: E124:0 ) d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE #: E124:14 ) if line_removed: self.event(cr, uid, #: E128:8 name="Removing the option for contract", #: E128:8 description="contract line has been removed", #: E124:8 ) #: E129+1:4 if foo is None and bar is "frop" and \ blah == 'yeah': blah = 'yeahnah' #: E129+1:4 E129+2:4 def long_function_name( var_one, var_two, var_three, var_four): hello(var_one) def qualify_by_address( #: E129:4 E129+1:4 self, cr, uid, ids, context=None, params_to_check=frozenset(QUALIF_BY_ADDRESS_PARAM)): """ This gets called by the web server """ #: E129+1:4 E129+2:4 if (a == 2 or b == "abc def ghi" "jkl mno"): True my_list = [ 1, 2, 3, 4, 5, 6, #: E123:8 ] abris = 3 + \ 4 + \ 5 + 6 fixed = re.sub(r'\t+', ' ', target[c::-1], 1)[::-1] + \ target[c + 1:] rv.update(dict.fromkeys(( 'qualif_nr', 'reasonComment_en', 'reasonComment_fr', #: E121:12 'reasonComment_de', 'reasonComment_it'), '?'), #: E128:4 "foo") #: E126+1:8 eat_a_dict_a_day({ "foo": "bar", }) #: E129+1:4 if ( x == ( 3 #: E129:4 ) or y == 4): pass #: E129+1:4 E121+2:8 E129+3:4 if ( x == ( 3 ) or x == ( # This one has correct indentation. 3 #: E129:4 ) or y == 4): pass troublesome_hash = { "hash": "value", #: E135+1:8 "long": "the quick brown fox jumps over the lazy dog before doing a " "somersault", } # Arguments on first line forbidden when not using vertical alignment #: E128+1:4 foo = long_function_name(var_one, var_two, var_three, var_four) #: E128+1:4 hello('l.%s\t%s\t%s\t%r' % (token[2][0], pos, tokenize.tok_name[token[0]], token[1])) def qualify_by_address(self, cr, uid, ids, context=None, #: E128:8 params_to_check=frozenset(QUALIF_BY_ADDRESS_PARAM)): """ This gets called by the web server """ parso-0.5.2/test/normalizer_issue_files/allowed_syntax_python3.5.py0000664000175000017500000000045313575273707025537 0ustar davedave00000000000000""" Mostly allowed syntax in Python 3.5. """ async def foo(): await bar() #: E901 yield from [] return #: E901 return '' # With decorator it's a different statement. @bla async def foo(): await bar() #: E901 yield from [] return #: E901 return '' parso-0.5.2/test/normalizer_issue_files/E12_not_second.py0000664000175000017500000001633113575273707023417 0ustar davedave00000000000000 def qualify_by_address( self, cr, uid, ids, context=None, params_to_check=frozenset(QUALIF_BY_ADDRESS_PARAM)): """ This gets called by the web server """ def qualify_by_address(self, cr, uid, ids, context=None, params_to_check=frozenset(QUALIF_BY_ADDRESS_PARAM)): """ This gets called by the web server """ _ipv4_re = re.compile('^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.' '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.' '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.' 
'(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$') fct(""" AAA """ + status_2_string) if context: msg = """\ action: GET-CONFIG payload: ip_address: "%(ip)s" username: "%(username)s" """ % context if context: msg = """\ action: \ GET-CONFIG """ % context if context: #: E122+2:0 msg = """\ action: """\ """GET-CONFIG """ % context def unicode2html(s): """Convert the characters &<>'" in string s to HTML-safe sequences. Convert newline to
too.""" #: E127+1:28 return unicode((s or '').replace('&', '&') .replace('\n', '
\n')) parser.add_option('--count', action='store_true', help="print total number of errors and warnings " "to standard error and set exit code to 1 if " "total is not null") parser.add_option('--exclude', metavar='patterns', default=DEFAULT_EXCLUDE, help="exclude files or directories which match these " "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) add_option('--count', #: E135+1 help="print total number of errors " "to standard error total is not null") add_option('--count', #: E135+2:11 help="print total number of errors " "to standard error " "total is not null") help = ("print total number of errors " + "to standard error") help = "print total number of errors " \ "to standard error" help = u"print total number of errors " \ u"to standard error" help = b"print total number of errors " \ b"to standard error" #: E122+1:5 help = br"print total number of errors " \ br"to standard error" d = dict('foo', help="exclude files or directories which match these " #: E135:9 "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) d = dict('foo', help=u"exclude files or directories which match these " u"comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) #: E135+1:9 E135+2:9 d = dict('foo', help=b"exclude files or directories which match these " b"comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) d = dict('foo', help=br"exclude files or directories which match these " br"comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE) d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s, %s)" % (DEFAULT_EXCLUDE, DEFAULT_IGNORE) ) d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s, %s)" % # who knows what might happen here? (DEFAULT_EXCLUDE, DEFAULT_IGNORE) ) # parens used to allow the indenting. troublefree_hash = { "hash": "value", "long": ("the quick brown fox jumps over the lazy dog before doing a " "somersault"), "long key that tends to happen more when you're indented": ( "stringwithalongtoken you don't want to break" ), } # another accepted form troublefree_hash = { "hash": "value", "long": "the quick brown fox jumps over the lazy dog before doing " "a somersault", ("long key that tends to happen more " "when you're indented"): "stringwithalongtoken you don't want to break", } # confusing but accepted... don't do that troublesome_hash = { "hash": "value", "long": "the quick brown fox jumps over the lazy dog before doing a " #: E135:4 "somersault", "longer": "the quick brown fox jumps over the lazy dog before doing a " "somersaulty", "long key that tends to happen more " "when you're indented": "stringwithalongtoken you don't want to break", } d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE ) d = dict('foo', help="exclude files or directories which match these " "comma separated patterns (default: %s)" % DEFAULT_EXCLUDE, foobar="this clearly should work, because it is at " "the right indent level", ) rv.update(dict.fromkeys( ('qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?'), "foo", context={'alpha': 4, 'beta': 53242234, 'gamma': 17}) def f(): try: if not Debug: hello(''' If you would like to see debugging output, try: %s -d5 ''' % sys.argv[0]) # The try statement above was not finished. 
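# (Editorial note, not part of the original fixtures: the annotation
# comments in these files are expectations for parso's normalizer tests.
# A line such as `#: E124:14` names the issue code and the column expected
# on the code that follows, and an optional `+N` offset, as in
# `#: E501+1:80`, shifts the expectation N lines further down.)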
#: E901 d = { # comment 1: 2 } # issue 138 (we won't allow this in parso) #: E126+2:9 [ 12, # this is a multi-line inline # comment ] # issue 151 #: E122+1:3 if a > b and \ c > d: moo_like_a_cow() my_list = [ 1, 2, 3, 4, 5, 6, ] my_list = [1, 2, 3, 4, 5, 6, ] result = some_function_that_takes_arguments( 'a', 'b', 'c', 'd', 'e', 'f', ) result = some_function_that_takes_arguments('a', 'b', 'c', 'd', 'e', 'f', ) # issue 203 dica = { ('abc' 'def'): ( 'abc'), } (abcdef[0] [1]) = ( 'abc') ('abc' 'def') == ( 'abc') # issue 214 bar( 1).zap( 2) bar( 1).zap( 2) if True: def example_issue254(): return [node.copy( ( replacement # First, look at all the node's current children. for child in node.children # Replace them. for replacement in replace(child) ), dict(name=token.undefined) )] def valid_example(): return [node.copy(properties=dict( (key, val if val is not None else token.undefined) for key, val in node.items() ))] foo([ 'bug' ]) # issue 144, finally! some_hash = { "long key that tends to happen more when you're indented": "stringwithalongtoken you don't want to break", } { 1: 999999 if True else 0, } abc = dedent( ''' mkdir -p ./{build}/ mv ./build/ ./{build}/%(revision)s/ '''.format( build='build', # more stuff ) ) parso-0.5.2/test/normalizer_issue_files/E71.py0000664000175000017500000000215213575273707021205 0ustar davedave00000000000000#: E711:7 if res == None: pass #: E711:7 if res != None: pass #: E711:8 if None == res: pass #: E711:8 if None != res: pass #: E711:10 if res[1] == None: pass #: E711:10 if res[1] != None: pass #: E711:8 if None != res[1]: pass #: E711:8 if None == res[1]: pass # #: E712:7 if res == True: pass #: E712:7 if res != False: pass #: E712:8 if True != res: pass #: E712:9 if False == res: pass #: E712:10 if res[1] == True: pass #: E712:10 if res[1] != False: pass if x is False: pass # #: E713:9 if not X in Y: pass #: E713:11 if not X.B in Y: pass #: E713:9 if not X in Y and Z == "zero": pass #: E713:24 if X == "zero" or not Y in Z: pass # #: E714:9 if not X is Y: pass #: E714:11 if not X.B is Y: pass # # Okay if x not in y: pass if not (X in Y or X is Z): pass if not (X in Y): pass if x is not y: pass if TrueElement.get_element(True) == TrueElement.get_element(False): pass if (True) == TrueElement or x == TrueElement: pass assert (not foo) in bar assert {'x': not foo} in bar assert [42, not foo] in bar parso-0.5.2/test/normalizer_issue_files/E25.py0000664000175000017500000000155413575273707021211 0ustar davedave00000000000000#: E251:11 E251:13 def foo(bar = False): '''Test function with an error in declaration''' pass #: E251:8 foo(bar= True) #: E251:7 foo(bar =True) #: E251:7 E251:9 foo(bar = True) #: E251:13 y = bar(root= "sdasd") parser.add_argument('--long-option', #: E135+1:20 default= "/rather/long/filesystem/path/here/blah/blah/blah") parser.add_argument('--long-option', default= "/rather/long/filesystem") # TODO this looks so stupid. 
parser.add_argument('--long-option', default ="/rather/long/filesystem/path/here/blah/blah/blah") #: E251+2:7 E251+2:9 foo(True, baz=(1, 2), biz = 'foo' ) # Okay foo(bar=(1 == 1)) foo(bar=(1 != 1)) foo(bar=(1 >= 1)) foo(bar=(1 <= 1)) (options, args) = parser.parse_args() d[type(None)] = _deepcopy_atomic parso-0.5.2/test/normalizer_issue_files/E30.py0000664000175000017500000000271413575273707021204 0ustar davedave00000000000000#: E301+4 class X: def a(): pass def b(): pass #: E301+5 class X: def a(): pass # comment def b(): pass # -*- coding: utf-8 -*- def a(): pass #: E302+1:0 """Main module.""" def _main(): pass #: E302+1:0 foo = 1 def get_sys_path(): return sys.path #: E302+3:0 def a(): pass def b(): pass #: E302+5:0 def a(): pass # comment def b(): pass #: E303+3:0 print #: E303+3:0 E303+4:0 print print #: E303+3:0 print # comment print #: E303+3 E303+6 def a(): print # comment # another comment print #: E302+2 a = 3 #: E304+1 @decorator def function(): pass #: E303+3 # something """This class docstring comes on line 5. It gives error E303: too many blank lines (3) """ #: E302+6 def a(): print # comment # another comment a() #: E302+7 def a(): print # comment # another comment try: a() except Exception: pass #: E302+4 def a(): print # Two spaces before comments, too. if a(): a() #: E301+2 def a(): x = 1 def b(): pass #: E301+2 E301+4 def a(): x = 2 def b(): x = 1 def c(): pass #: E301+2 E301+4 E301+5 def a(): x = 1 class C: pass x = 2 def b(): pass #: E302+7 # Example from https://github.com/PyCQA/pycodestyle/issues/400 foo = 2 def main(): blah, blah if __name__ == '__main__': main() parso-0.5.2/test/normalizer_issue_files/python2.7.py0000664000175000017500000000037713575273707022430 0ustar davedave00000000000000import sys print 1, 2 >> sys.stdout foo = ur'This is not possible in Python 3.' # This is actually printing a tuple. #: E275:5 print(1, 2) # True and False are not keywords in Python 2 and therefore there's no need for # a space. 
norman = True+False parso-0.5.2/test/normalizer_issue_files/E20.py0000664000175000017500000000131413575273707021176 0ustar davedave00000000000000#: E201:5 spam( ham[1], {eggs: 2}) #: E201:9 spam(ham[ 1], {eggs: 2}) #: E201:14 spam(ham[1], { eggs: 2}) # Okay spam(ham[1], {eggs: 2}) #: E202:22 spam(ham[1], {eggs: 2} ) #: E202:21 spam(ham[1], {eggs: 2 }) #: E202:10 spam(ham[1 ], {eggs: 2}) # Okay spam(ham[1], {eggs: 2}) result = func( arg1='some value', arg2='another value', ) result = func( arg1='some value', arg2='another value' ) result = [ item for item in items if item > 5 ] #: E203:9 if x == 4 : foo(x, y) x, y = y, x if x == 4: #: E203:12 E702:13 a = x, y ; x, y = y, x if x == 4: foo(x, y) #: E203:12 x, y = y , x # Okay if x == 4: foo(x, y) x, y = y, x a[b1, :1] == 3 b = a[:, b1] parso-0.5.2/test/normalizer_issue_files/allowed_syntax_python2.py0000664000175000017500000000002313575273707025364 0ustar davedave00000000000000's' b'' u's' b'ä' parso-0.5.2/test/normalizer_issue_files/E29.py0000664000175000017500000000022213575273707021204 0ustar davedave00000000000000# Okay # 情 #: W291:5 print #: W291+1 class Foo(object): bang = 12 #: W291+1:34 '''multiline string with trailing whitespace''' parso-0.5.2/test/normalizer_issue_files/E12_third.py0000664000175000017500000000436313575273707022400 0ustar davedave00000000000000#: E128+1 foo(1, 2, 3, 4, 5, 6) #: E128+1:1 foo(1, 2, 3, 4, 5, 6) #: E128+1:2 foo(1, 2, 3, 4, 5, 6) #: E128+1:3 foo(1, 2, 3, 4, 5, 6) foo(1, 2, 3, 4, 5, 6) #: E127+1:5 foo(1, 2, 3, 4, 5, 6) #: E127+1:6 foo(1, 2, 3, 4, 5, 6) #: E127+1:7 foo(1, 2, 3, 4, 5, 6) #: E127+1:8 foo(1, 2, 3, 4, 5, 6) #: E127+1:9 foo(1, 2, 3, 4, 5, 6) #: E127+1:10 foo(1, 2, 3, 4, 5, 6) #: E127+1:11 foo(1, 2, 3, 4, 5, 6) #: E127+1:12 foo(1, 2, 3, 4, 5, 6) #: E127+1:13 foo(1, 2, 3, 4, 5, 6) if line_removed: #: E128+1:14 E128+2:14 self.event(cr, uid, name="Removing the option for contract", description="contract line has been removed", ) if line_removed: self.event(cr, uid, #: E127:16 name="Removing the option for contract", #: E127:16 description="contract line has been removed", #: E124:16 ) rv.update(d=('a', 'b', 'c'), #: E127:13 e=42) #: E135+2:17 rv.update(d=('a' + 'b', 'c'), e=42, f=42 + 42) rv.update(d=('a' + 'b', 'c'), e=42, f=42 + 42) #: E127+1:26 input1 = {'a': {'calc': 1 + 2}, 'b': 1 + 42} #: E128+2:17 rv.update(d=('a' + 'b', 'c'), e=42, f=(42 + 42)) if True: def example_issue254(): #: return [node.copy( ( #: E121:16 E121+3:20 replacement # First, look at all the node's current children. for child in node.children for replacement in replace(child) ), dict(name=token.undefined) )] # TODO multiline docstring are currently not handled. E125+1:4? if (""" """): pass # TODO same for foo in """ abc 123 """.strip().split(): hello(foo) abc = dedent( ''' mkdir -p ./{build}/ mv ./build/ ./{build}/%(revision)s/ '''.format( #: E121:4 E121+1:4 E123+2:0 build='build', # more stuff ) ) #: E701+1: E122+1 if True:\ hello(True) #: E128+1 foobar(a , end=' ') parso-0.5.2/test/normalizer_issue_files/E21.py0000664000175000017500000000050313575273707021176 0ustar davedave00000000000000#: E211:4 spam (1) #: E211:4 E211:19 dict ['key'] = list [index] #: E211:11 dict['key'] ['subkey'] = list[index] # Okay spam(1) dict['key'] = list[index] # This is not prohibited by PEP8, but avoid it. # Dave: I think this is extremely stupid. Use the same convention everywhere. 
#: E211:9 class Foo (Bar, Baz): pass parso-0.5.2/test/normalizer_issue_files/latin-1.py0000664000175000017500000000023013575273707022111 0ustar davedave00000000000000# -*- coding: latin-1 -*- # Test non-UTF8 encoding latin1 = ('' '') c = ("w") parso-0.5.2/test/normalizer_issue_files/E23.py0000664000175000017500000000026313575273707021203 0ustar davedave00000000000000#: E231:7 a = (1,2) #: E231:5 a[b1,:] #: E231:10 a = [{'a':''}] # Okay a = (4,) #: E202:7 b = (5, ) c = {'text': text[5:]} result = { 'key1': 'value', 'key2': 'value', } parso-0.5.2/test/normalizer_issue_files/E11.py0000664000175000017500000000202313575273707021174 0ustar davedave00000000000000if x > 2: #: E111:2 hello(x) if True: #: E111:5 print #: E111:6 # #: E111:2 # what # Comment is fine # Comment is also fine if False: pass print print #: E903:0 print mimetype = 'application/x-directory' #: E111:5 # 'httpd/unix-directory' create_date = False def start(self): # foo #: E111:8 # bar if True: # Hello self.master.start() # Comment # try: #: E111:12 # self.master.start() # except MasterExit: #: E111:12 # self.shutdown() # finally: #: E111:12 # sys.exit() # Dedent to the first level #: E111:6 # error # Dedent to the base level #: E111:2 # Also wrongly indented. # Indent is correct. def start(self): # Correct comment if True: #: E111:0 # try: #: E111:0 # self.master.start() #: E111:0 # except MasterExit: #: E111:0 # self.shutdown() self.master.start() # comment parso-0.5.2/test/normalizer_issue_files/E40.py0000664000175000017500000000105213575273707021177 0ustar davedave00000000000000#: E401:7 import os, sys # Okay import os import sys from subprocess import Popen, PIPE from myclass import MyClass from foo.bar.yourclass import YourClass import myclass import foo.bar.yourclass # All Okay from here until the definition of VERSION __all__ = ['abc'] import foo __version__ = "42" import foo __author__ = "Simon Gomizelj" import foo try: import foo except ImportError: pass else: hello('imported foo') finally: hello('made attempt to import foo') import bar VERSION = '1.2.3' #: E402 import foo #: E402 import foo parso-0.5.2/test/normalizer_issue_files/LICENSE0000664000175000017500000000260213575273707021304 0ustar davedave00000000000000Copyright © 2006-2009 Johann C. Rocholl Copyright © 2009-2014 Florent Xicluna Copyright © 2014-2016 Ian Lee Copyright © 2017-???? Dave Halter Dave: The files in this folder were ported from pydocstyle and some modifications where made. Licensed under the terms of the Expat License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
parso-0.5.2/test/normalizer_issue_files/E50.py0000664000175000017500000000575313575273707021214 0ustar davedave00000000000000#: E501:4 a = '12345678901234567890123456789012345678901234567890123456789012345678901234567890' #: E501:80 a = '1234567890123456789012345678901234567890123456789012345678901234567890' or \ 6 #: E501+1:80 a = 7 or \ '1234567890123456789012345678901234567890123456789012345678901234567890' or \ 6 #: E501+1:80 E501+2:80 a = 7 or \ '1234567890123456789012345678901234567890123456789012345678901234567890' or \ '1234567890123456789012345678901234567890123456789012345678901234567890' or \ 6 #: E501:78 a = '1234567890123456789012345678901234567890123456789012345678901234567890' # \ #: E502:78 a = ('123456789012345678901234567890123456789012345678901234567890123456789' \ '01234567890') #: E502+1:11 a = ('AAA \ BBB' \ 'CCC') #: E502:38 if (foo is None and bar is "e000" and \ blah == 'yeah'): blah = 'yeahnah' # # Okay a = ('AAA' 'BBB') a = ('AAA \ BBB' 'CCC') a = 'AAA' \ 'BBB' \ 'CCC' a = ('AAA\ BBBBBBBBB\ CCCCCCCCC\ DDDDDDDDD') # # Okay if aaa: pass elif bbb or \ ccc: pass ddd = \ ccc ('\ ' + ' \ ') (''' ''' + ' \ ') #: E501:67 E225:21 E225:22 very_long_identifiers=and_terrible_whitespace_habits(are_no_excuse+for_long_lines) # # TODO Long multiline strings are not handled. E501? '''multiline string with a long long long long long long long long long long long long long long long long line ''' #: E501 '''same thing, but this time without a terminal newline in the string long long long long long long long long long long long long long long long long line''' # # issue 224 (unavoidable long lines in docstrings) # Okay """ I'm some great documentation. Because I'm some great documentation, I'm going to give you a reference to some valuable information about some API that I'm calling: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx """ #: E501 """ longnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaces""" # Regression test for #622 def foo(): """Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis pulvinar vitae """ # Okay """ This almost_empty_line """ """ This almost_empty_line """ # A basic comment #: E501 # with a long long long long long long long long long long long long long long long long line # # Okay # I'm some great comment. 
Because I'm so great, I'm going to give you a # reference to some valuable information about some API that I'm calling: # # http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx x = 3 # longnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaceslongnospaces # # Okay # This # almost_empty_line # #: E501+1 # This # almost_empty_line parso-0.5.2/test/normalizer_issue_files/E70.py0000664000175000017500000000061713575273707021210 0ustar davedave00000000000000#: E701:6 if a: a = False #: E701:41 if not header or header[:6] != 'bytes=': pass #: E702:9 a = False; b = True #: E702:16 E402 import bdist_egg; bdist_egg.write_safety_flag(cmd.egg_info, safe) #: E703:12 E402 import shlex; #: E702:8 E703:22 del a[:]; a.append(42); #: E704:10 def f(x): return 2 #: E704:10 def f(x): return 2 * x while all is round: #: E704:14 def f(x): return 2 * x parso-0.5.2/test/normalizer_issue_files/E10.py0000664000175000017500000000136613575273707021204 0ustar davedave00000000000000for a in 'abc': for b in 'xyz': hello(a) # indented with 8 spaces #: E903:0 hello(b) # indented with 1 tab if True: #: E101:0 pass #: E122+1 change_2_log = \ """Change 2 by slamb@testclient on 2006/04/13 21:46:23 creation """ p4change = { 2: change_2_log, } class TestP4Poller(unittest.TestCase): def setUp(self): self.setUpGetProcessOutput() return self.setUpChangeSource() def tearDown(self): pass # if True: #: E101:0 E101+1:0 foo(1, 2) def test_keys(self): """areas.json - All regions are accounted for.""" expected = set([ #: E101:0 u'Norrbotten', #: E101:0 u'V\xe4sterbotten', ]) if True: hello(""" tab at start of this line """) parso-0.5.2/test/normalizer_issue_files/utf-8.py0000664000175000017500000000351013575273707021613 0ustar davedave00000000000000# -*- coding: utf-8 -*- # Some random text with multi-byte characters (utf-8 encoded) # # Εδώ μάτσο κειμένων τη, τρόπο πιθανό διευθυντές ώρα μη. Νέων απλό παράγει ροή # κι, το επί δεδομένη καθορίζουν. Πάντως ζητήσεις περιβάλλοντος ένα με, τη # ξέχασε αρπάζεις φαινόμενο όλη. Τρέξει εσφαλμένη χρησιμοποίησέ νέα τι. Θα όρο # πετάνε φακέλους, άρα με διακοπής λαμβάνουν εφαμοργής. Λες κι μειώσει # καθυστερεί. # 79 narrow chars # 01 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 [79] # 78 narrow chars (Na) + 1 wide char (W) # 01 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8情 # 3 narrow chars (Na) + 40 wide chars (W) # 情 情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情 # 3 narrow chars (Na) + 76 wide chars (W) # 情 情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情 # # 80 narrow chars (Na) #: E501 # 01 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 [80] # # 78 narrow chars (Na) + 2 wide char (W) #: E501 # 01 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8情情 # # 3 narrow chars (Na) + 77 wide chars (W) #: E501 # 情 情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情情 # parso-0.5.2/test/normalizer_issue_files/E26.py0000664000175000017500000000274513575273707021215 0ustar davedave00000000000000#: E261:4 pass # an inline comment #: E261:4 pass# an inline comment # Okay pass # an inline comment pass # an inline comment #: E262:11 x = x + 1 #Increment x #: E262:11 x = x + 1 # Increment x #: E262:11 x = y + 1 #: Increment x #: E265 #Block comment a = 1 #: E265+1 m = 42 #! This is important mx = 42 - 42 # Comment without anything is not an issue. 
# # However if there are comments at the end without anything it obviously # doesn't make too much sense. #: E262:9 foo = 1 # #: E266+2:4 E266+5:4 def how_it_feel(r): ### This is a variable ### a = 42 ### Of course it is unused return #: E266 E266+1 ##if DEBUG: ## logging.error() #: E266 ######################################### # Not at the beginning of a file #: E265 #!/usr/bin/env python # Okay pass # an inline comment x = x + 1 # Increment x y = y + 1 #: Increment x # Block comment a = 1 # Block comment1 # Block comment2 aaa = 1 # example of docstring (not parsed) def oof(): """ #foo not parsed """ ########################################################################### # A SEPARATOR # ########################################################################### # ####################################################################### # # ########################## another separator ########################## # # ####################################################################### # parso-0.5.2/test/normalizer_issue_files/utf-8-bom.py0000664000175000017500000000012313575273707022363 0ustar davedave00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- hello = 'こんにちわ' # EOF parso-0.5.2/test/normalizer_issue_files/E27.py0000664000175000017500000000123413575273707021206 0ustar davedave00000000000000# Okay from u import (a, b) from v import c, d #: E221:13 from w import (e, f) #: E275:13 from w import(e, f) #: E275:29 from importable.module import(e, f) try: #: E275:33 from importable.module import(e, f) except ImportError: pass # Okay True and False #: E221:8 True and False #: E221:4 True and False #: E221:2 if 1: pass # Syntax Error, no indentation #: E903+1 if 1: pass #: E223:8 True and False #: E223:4 E223:9 True and False #: E221:5 a and b #: E221:5 1 and b #: E221:5 a and 2 #: E221:1 E221:6 1 and b #: E221:1 E221:6 a and 2 #: E221:4 this and False #: E223:5 a and b #: E223:1 a and b #: E223:4 E223:9 this and False parso-0.5.2/test/normalizer_issue_files/allowed_syntax_python3.4.py0000664000175000017500000000005713575273707025536 0ustar davedave00000000000000*foo, a = (1,) *foo[0], a = (1,) *[], a = (1,) parso-0.5.2/test/normalizer_issue_files/E101.py0000664000175000017500000000511213575273707021256 0ustar davedave00000000000000# Used to be the file for W191 #: E101+1 if False: print # indented with 1 tab #: E101+1 y = x == 2 \ or x == 3 #: E101+5 if ( x == ( 3 ) or y == 4): pass #: E101+3 if x == 2 \ or y > 1 \ or x == 3: pass #: E101+3 if x == 2 \ or y > 1 \ or x == 3: pass #: E101+1 if (foo == bar and baz == frop): pass #: E101+1 if (foo == bar and baz == frop): pass #: E101+2 E101+3 if start[1] > end_col and not ( over_indent == 4 and indent_next): assert (0, "E121 continuation line over-" "indented for visual indent") #: E101+3 def long_function_name( var_one, var_two, var_three, var_four): hello(var_one) #: E101+2 if ((row < 0 or self.moduleCount <= row or col < 0 or self.moduleCount <= col)): raise Exception("%s,%s - %s" % (row, col, self.moduleCount)) #: E101+1 E101+2 E101+3 E101+4 E101+5 E101+6 if bar: assert ( start, 'E121 lines starting with a ' 'closing bracket should be indented ' "to match that of the opening " "bracket's line" ) # you want vertical alignment, so use a parens #: E101+3 if ((foo.bar("baz") and foo.bar("frop") )): hello("yes") #: E101+3 # also ok, but starting to look like LISP if ((foo.bar("baz") and foo.bar("frop"))): hello("yes") #: E101+1 if (a == 2 or b == "abc def ghi" "jkl mno"): assert True #: E101+2 if (a == 2 or b == """abc def ghi jkl 
mno"""): assert True #: E101+1 E101+2 if length > options.max_line_length: assert options.max_line_length, \ "E501 line too long (%d characters)" % length #: E101+1 E101+2 if os.path.exists(os.path.join(path, PEP8_BIN)): cmd = ([os.path.join(path, PEP8_BIN)] + self._pep8_options(targetfile)) # TODO Tabs in docstrings shouldn't be there, use \t. ''' multiline string with tab in it''' # Same here. '''multiline string with tabs and spaces ''' # Okay '''sometimes, you just need to go nuts in a multiline string and allow all sorts of crap like mixed tabs and spaces or trailing whitespace or long long long long long long long long long long long long long long long long long lines ''' # noqa # Okay '''this one will get no warning even though the noqa comment is not immediately after the string ''' + foo # noqa #: E101+2 if foo is None and bar is "frop" and \ blah == 'yeah': blah = 'yeahnah' #: E101+1 E101+2 E101+3 if True: foo( 1, 2) #: E101+1 E101+2 E101+3 E101+4 E101+5 def test_keys(self): """areas.json - All regions are accounted for.""" expected = set([ u'Norrbotten', u'V\xe4sterbotten', ]) #: E101+1 x = [ 'abc' ] parso-0.5.2/test/normalizer_issue_files/E72.py0000664000175000017500000000207413575273707021211 0ustar davedave00000000000000#: E721:3 if type(res) == type(42): pass #: E721:3 if type(res) != type(""): pass import types if res == types.IntType: pass import types #: E721:3 if type(res) is not types.ListType: pass #: E721:7 E721:35 assert type(res) == type(False) or type(res) == type(None) #: E721:7 assert type(res) == type([]) #: E721:7 assert type(res) == type(()) #: E721:7 assert type(res) == type((0,)) #: E721:7 assert type(res) == type((0)) #: E721:7 assert type(res) != type((1,)) #: E721:7 assert type(res) is type((1,)) #: E721:7 assert type(res) is not type((1,)) # Okay #: E402 import types if isinstance(res, int): pass if isinstance(res, str): pass if isinstance(res, types.MethodType): pass #: E721:3 E721:25 if type(a) != type(b) or type(a) == type(ccc): pass #: E721 type(a) != type(b) #: E721 1 != type(b) #: E721 type(b) != 1 1 != 1 try: pass #: E722 except: pass try: pass except Exception: pass #: E722 except: pass # Okay fake_code = """" try: do_something() except: pass """ try: pass except Exception: pass parso-0.5.2/test/normalizer_issue_files/python3.py0000664000175000017500000000230613575273707022256 0ustar davedave00000000000000#!/usr/bin/env python3 from typing import ClassVar, List print(1, 2) # Annotated function (Issue #29) def foo(x: int) -> int: return x + 1 # Annotated variables #575 CONST: int = 42 class Class: cls_var: ClassVar[str] def m(self): xs: List[int] = [] # True and False are keywords in Python 3 and therefore need a space. #: E275:13 E275:14 norman = True+False #: E302+3:0 def a(): pass async def b(): pass # Okay async def add(a: int = 0, b: int = 0) -> int: return a + b # Previously E251 four times #: E221:5 async def add(a: int = 0, b: int = 0) -> int: return a + b # Previously just E272+1:5 E272+4:5 #: E302+3 E221:5 E221+3:5 async def x(): pass async def x(y: int = 1): pass #: E704:16 async def f(x): return 2 a[b1, :] == a[b1, ...] 
# Annotated Function Definitions # Okay def munge(input: AnyStr, sep: AnyStr = None, limit=1000, extra: Union[str, dict] = None) -> AnyStr: pass #: E225:24 E225:26 def x(b: tuple = (1, 2))->int: return a + b #: E252:11 E252:12 E231:8 def b(a:int=1): pass if alpha[:-i]: *a, b = (1, 2, 3) # Named only arguments def foo(*, asdf): pass def foo2(bar, *, asdf=2): pass parso-0.5.2/test/normalizer_issue_files/E12_not_first.py0000664000175000017500000001434013575273707023271 0ustar davedave00000000000000# The issue numbers described in this file are part of the pycodestyle tracker # and not of parso. # Originally there were no issues in here, I (dave) added the ones that were # necessary and IMO useful. if ( x == ( 3 ) or y == 4): pass y = x == 2 \ or x == 3 #: E129+1:4 if x == 2 \ or y > 1 \ or x == 3: pass if x == 2 \ or y > 1 \ or x == 3: pass if (foo == bar and baz == frop): pass #: E129+1:4 E129+2:4 E123+3 if ( foo == bar and baz == frop ): pass if ( foo == bar and baz == frop #: E129:4 ): pass a = ( ) a = (123, ) if start[1] > end_col and not ( over_indent == 4 and indent_next): assert (0, "E121 continuation line over-" "indented for visual indent") abc = "OK", ("visual", "indent") abc = "Okay", ("visual", "indent_three" ) abc = "a-ok", ( "there", "dude", ) abc = "hello", ( "there", "dude") abc = "hello", ( "there", # "john", "dude") abc = "hello", ( "there", "dude") abc = "hello", ( "there", "dude", ) # Aligned with opening delimiter foo = long_function_name(var_one, var_two, var_three, var_four) # Extra indentation is not necessary. foo = long_function_name( var_one, var_two, var_three, var_four) arm = 'AAA' \ 'BBB' \ 'CCC' bbb = 'AAA' \ 'BBB' \ 'CCC' cc = ('AAA' 'BBB' 'CCC') cc = {'text': 'AAA' 'BBB' 'CCC'} cc = dict(text='AAA' 'BBB') sat = 'AAA' \ 'BBB' \ 'iii' \ 'CCC' abricot = (3 + 4 + 5 + 6) #: E122+1:4 abricot = 3 + \ 4 + \ 5 + 6 part = [-1, 2, 3, 4, 5, 6] #: E128+1:8 part = [-1, (2, 3, 4, 5, 6), 7, 8, 9, 0] fnct(1, 2, 3, 4, 5, 6) fnct(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11) def long_function_name( var_one, var_two, var_three, var_four): hello(var_one) if ((row < 0 or self.moduleCount <= row or col < 0 or self.moduleCount <= col)): raise Exception("%s,%s - %s" % (row, col, self.moduleCount)) result = { 'foo': [ 'bar', { 'baz': 'frop', } ] } foo = my.func({ "foo": "bar", }, "baz") fooff(aaaa, cca( vvv, dadd ), fff, ggg) fooff(aaaa, abbb, cca( vvv, aaa, dadd), "visual indentation is not a multiple of four",) if bar: assert ( start, 'E121 lines starting with a ' 'closing bracket should be indented ' "to match that of the opening " "bracket's line" ) # you want vertical alignment, so use a parens if ((foo.bar("baz") and foo.bar("frop") )): hello("yes") # also ok, but starting to look like LISP if ((foo.bar("baz") and foo.bar("frop"))): hello("yes") #: E129+1:4 E127+2:9 if (a == 2 or b == "abc def ghi" "jkl mno"): assert True #: E129+1:4 if (a == 2 or b == """abc def ghi jkl mno"""): assert True if length > options.max_line_length: assert options.max_line_length, \ "E501 line too long (%d characters)" % length # blub asd = 'l.{line}\t{pos}\t{name}\t{text}'.format( line=token[2][0], pos=pos, name=tokenize.tok_name[token[0]], text=repr(token[1]), ) #: E121+1:6 E121+2:6 hello('%-7d %s per second (%d total)' % ( options.counters[key] / elapsed, key, options.counters[key])) if os.path.exists(os.path.join(path, PEP8_BIN)): cmd = ([os.path.join(path, PEP8_BIN)] + self._pep8_options(targetfile)) fixed = (re.sub(r'\t+', ' ', target[c::-1], 1)[::-1] + target[c + 1:]) fixed = ( re.sub(r'\t+', ' ', 
target[c::-1], 1)[::-1] + target[c + 1:] ) if foo is None and bar is "frop" and \ blah == 'yeah': blah = 'yeahnah' """This is a multi-line docstring.""" if blah: # is this actually readable? :) multiline_literal = """ while True: if True: 1 """.lstrip() multiline_literal = ( """ while True: if True: 1 """.lstrip() ) multiline_literal = ( """ while True: if True: 1 """ .lstrip() ) if blah: multiline_visual = (""" while True: if True: 1 """ .lstrip()) rv = {'aaa': 42} rv.update(dict.fromkeys(( #: E121:4 E121+1:4 'qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?')) rv.update(dict.fromkeys(('qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?')) #: E128+1:10 rv.update(dict.fromkeys(('qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?')) rv.update(dict.fromkeys( ('qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?' ), "foo", context={ 'alpha': 4, 'beta': 53242234, 'gamma': 17, }) rv.update( dict.fromkeys(( 'qualif_nr', 'reasonComment_en', 'reasonComment_fr', 'reasonComment_de', 'reasonComment_it'), '?'), "foo", context={ 'alpha': 4, 'beta': 53242234, 'gamma': 17, }, ) event_obj.write(cursor, user_id, { 'user': user, 'summary': text, 'data': data, }) event_obj.write(cursor, user_id, { 'user': user, 'summary': text, 'data': {'aaa': 1, 'bbb': 2}, }) event_obj.write(cursor, user_id, { 'user': user, 'summary': text, 'data': { 'aaa': 1, 'bbb': 2}, }) event_obj.write(cursor, user_id, { 'user': user, 'summary': text, 'data': {'timestamp': now, 'content': { 'aaa': 1, 'bbb': 2 }}, }) parso-0.5.2/test/normalizer_issue_files/E73.py0000664000175000017500000000033513575273707021210 0ustar davedave00000000000000#: E731:4 f = lambda x: 2 * x while False: #: E731:10 foo = lambda y, z: 2 * x # Okay f = object() f.method = lambda: 'Method' f = {} f['a'] = lambda x: x ** 2 f = [] f.append(lambda x: x ** 2) lambda: 'no-op' parso-0.5.2/test/normalizer_issue_files/E30not.py0000664000175000017500000000332313575273707021722 0ustar davedave00000000000000# Okay class X: pass # Okay def foo(): pass # Okay # -*- coding: utf-8 -*- class X: pass # Okay # -*- coding: utf-8 -*- def foo(): pass # Okay class X: def a(): pass # comment def b(): pass # This is a # ... multi-line comment def c(): pass # This is a # ... multi-line comment @some_decorator class Y: def a(): pass # comment def b(): pass @property def c(): pass try: from nonexistent import Bar except ImportError: class Bar(object): """This is a Bar replacement""" def with_feature(f): """Some decorator""" wrapper = f if has_this_feature(f): def wrapper(*args): call_feature(args[0]) return f(*args) return wrapper try: next except NameError: def next(iterator, default): for item in iterator: return item return default def a(): pass class Foo(): """Class Foo""" def b(): pass # comment def c(): pass # comment def d(): pass # This is a # ... multi-line comment # And this one is # ... a second paragraph # ... 
which spans on 3 lines # Function `e` is below # NOTE: Hey this is a testcase def e(): pass def a(): print # comment print print # Comment 1 # Comment 2 # Comment 3 def b(): pass # Okay def foo(): pass def bar(): pass class Foo(object): pass class Bar(object): pass if __name__ == '__main__': foo() # Okay classification_errors = None # Okay defined_properly = True # Okay defaults = {} defaults.update({}) # Okay def foo(x): classification = x definitely = not classification parso-0.5.2/test/normalizer_issue_files/allowed_syntax.py0000664000175000017500000000124213575273707023705 0ustar davedave00000000000000""" Some syntax errors are a bit complicated and need exact checking. Here we gather some of the potentially dangerous ones. """ from __future__ import division # With a dot it's not a future import anymore. from .__future__ import absolute_import '' '' ''r''u'' b'' BR'' for x in [1]: try: continue # Only the other continue and pass is an error. finally: #: E901 continue for x in [1]: break continue try: pass except ZeroDivisionError: pass #: E722:0 except: pass try: pass #: E722:0 E901:0 except: pass except ZeroDivisionError: pass r'\n' r'\x' b'\n' a = 3 def x(b=a): global a parso-0.5.2/test/normalizer_issue_files/allowed_syntax_python3.6.py0000664000175000017500000000066013575273707025540 0ustar davedave00000000000000foo: int = 4 (foo): int = 3 ((foo)): int = 3 foo.bar: int foo[3]: int def glob(): global x y: foo = x def c(): a = 3 def d(): class X(): nonlocal a def x(): a = 3 def y(): nonlocal a def x(): def y(): nonlocal a a = 3 def x(): a = 3 def y(): class z(): nonlocal a a = *args, *args error[(*args, *args)] = 3 *args, *args parso-0.5.2/test/normalizer_issue_files/E22.py0000664000175000017500000000415313575273707021204 0ustar davedave00000000000000a = 12 + 3 #: E221:5 E229:8 b = 4 + 5 #: E221:1 x = 1 #: E221:1 y = 2 long_variable = 3 #: E221:4 x[0] = 1 #: E221:4 x[1] = 2 long_variable = 3 #: E221:8 E229:19 x = f(x) + 1 y = long_variable + 2 #: E221:8 E229:19 z = x[0] + 3 #: E221+2:13 text = """ bar foo %s""" % rofl # Okay x = 1 y = 2 long_variable = 3 #: E221:7 a = a + 1 b = b + 10 #: E221:3 x = -1 #: E221:3 y = -2 long_variable = 3 #: E221:6 x[0] = 1 #: E221:6 x[1] = 2 long_variable = 3 #: E223+1:1 foobart = 4 a = 3 # aligned with tab #: E223:4 a += 1 b += 1000 #: E225:12 submitted +=1 #: E225:9 submitted+= 1 #: E225:3 c =-1 #: E229:7 x = x /2 - 1 #: E229:11 c = alpha -4 #: E229:10 c = alpha- 4 #: E229:8 z = x **y #: E229:14 z = (x + 1) **y #: E229:13 z = (x + 1)** y #: E227:14 _1kB = _1MB >>10 #: E227:11 _1kB = _1MB>> 10 #: E225:1 E225:2 E229:4 i=i+ 1 #: E225:1 E225:2 E229:5 i=i +1 #: E225:1 E225:2 i=i+1 #: E225:3 i =i+1 #: E225:1 i= i+1 #: E229:8 c = (a +b)*(a - b) #: E229:7 c = (a+ b)*(a - b) z = 2//30 c = (a+b) * (a-b) x = x*2 - 1 x = x/2 - 1 # TODO whitespace should be the other way around according to pep8. 
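# (Editorial note: PEP 8 suggests putting the whitespace around the
# operator with the lowest priority, i.e. `x = x/2 - 1` below; that is
# what the TODO above refers to.)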
x = x / 2-1 hypot2 = x*x + y*y c = (a + b)*(a - b) def halves(n): return (i//2 for i in range(n)) #: E227:11 E227:13 _1kB = _1MB>>10 #: E227:11 E227:13 _1MB = _1kB<<10 #: E227:5 E227:6 a = b|c #: E227:5 E227:6 b = c&a #: E227:5 E227:6 c = b^a #: E228:5 E228:6 a = b%c #: E228:9 E228:10 msg = fmt%(errno, errmsg) #: E228:25 E228:26 msg = "Error %d occurred"%errno #: E228:7 a = b %c a = b % c # Okay i = i + 1 submitted += 1 x = x * 2 - 1 hypot2 = x * x + y * y c = (a + b) * (a - b) _1MiB = 2 ** 20 _1TiB = 2**30 foo(bar, key='word', *args, **kwargs) baz(**kwargs) negative = -1 spam(-1) -negative func1(lambda *args, **kw: (args, kw)) func2(lambda a, b=h[:], c=0: (a, b, c)) if not -5 < x < +5: #: E227:12 print >>sys.stderr, "x is out of range." print >> sys.stdout, "x is an integer." x = x / 2 - 1 def squares(n): return (i**2 for i in range(n)) ENG_PREFIXES = { -6: "\u03bc", # Greek letter mu -3: "m", } parso-0.5.2/test/test_fstring.py0000664000175000017500000000624213575273707016614 0ustar davedave00000000000000import pytest from textwrap import dedent from parso import load_grammar, ParserSyntaxError from parso.python.tokenize import tokenize @pytest.fixture def grammar(): return load_grammar(version='3.8') @pytest.mark.parametrize( 'code', [ # simple cases 'f"{1}"', 'f"""{1}"""', 'f"{foo} {bar}"', # empty string 'f""', 'f""""""', # empty format specifier is okay 'f"{1:}"', # use of conversion options 'f"{1!a}"', 'f"{1!a:1}"', # format specifiers 'f"{1:1}"', 'f"{1:1.{32}}"', 'f"{1::>4}"', 'f"{x:{y}}"', 'f"{x:{y:}}"', 'f"{x:{y:1}}"', # Escapes 'f"{{}}"', 'f"{{{1}}}"', 'f"{{{1}"', 'f"1{{2{{3"', 'f"}}"', # New Python 3.8 syntax f'{a=}' 'f"{a=}"', 'f"{a()=}"', # multiline f-string 'f"""abc\ndef"""', 'f"""abc{\n123}def"""', # a line continuation inside of an fstring_string 'f"abc\\\ndef"', 'f"\\\n{123}\\\n"', # a line continuation inside of an fstring_expr 'f"{\\\n123}"', # a line continuation inside of an format spec 'f"{123:.2\\\nf}"', ] ) def test_valid(code, grammar): module = grammar.parse(code, error_recovery=False) fstring = module.children[0] assert fstring.type == 'fstring' assert fstring.get_code() == code @pytest.mark.parametrize( 'code', [ # an f-string can't contain unmatched curly braces 'f"}"', 'f"{"', 'f"""}"""', 'f"""{"""', # invalid conversion characters 'f"{1!{a}}"', 'f"{!{a}}"', # The curly braces must contain an expression 'f"{}"', 'f"{:}"', 'f"{:}}}"', 'f"{:1}"', 'f"{!:}"', 'f"{!}"', 'f"{!a}"', # invalid (empty) format specifiers 'f"{1:{}}"', 'f"{1:{:}}"', # a newline without a line continuation inside a single-line string 'f"abc\ndef"', ] ) def test_invalid(code, grammar): with pytest.raises(ParserSyntaxError): grammar.parse(code, error_recovery=False) # It should work with error recovery. grammar.parse(code, error_recovery=True) @pytest.mark.parametrize( ('code', 'positions'), [ # 2 times 2, 5 because python expr and endmarker. 
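        # (Editorial clarification: for 'f"}{"' below that means
        # FSTRING_START 'f"', the two OP braces '}' and '{', FSTRING_END
        # '"' and the trailing ENDMARKER, hence five start positions.)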
('f"}{"', [(1, 0), (1, 2), (1, 3), (1, 4), (1, 5)]), ('f" :{ 1 : } "', [(1, 0), (1, 2), (1, 4), (1, 6), (1, 8), (1, 9), (1, 10), (1, 11), (1, 12), (1, 13)]), ('f"""\n {\nfoo\n }"""', [(1, 0), (1, 4), (2, 1), (3, 0), (4, 1), (4, 2), (4, 5)]), ] ) def test_tokenize_start_pos(code, positions): tokens = list(tokenize(code, version_info=(3, 6))) assert positions == [p.start_pos for p in tokens] @pytest.mark.parametrize( 'code', [ dedent("""\ f'''s{ str.uppe ''' """), 'f"foo', 'f"""foo', 'f"abc\ndef"', ] ) def test_roundtrip(grammar, code): tree = grammar.parse(code) assert tree.get_code() == code parso-0.5.2/test/test_tokenize.py0000664000175000017500000003172513575273707016774 0ustar davedave00000000000000# -*- coding: utf-8 # This file contains Unicode characters. import sys from textwrap import dedent import pytest from parso._compatibility import py_version from parso.utils import split_lines, parse_version_string from parso.python.token import PythonTokenTypes from parso.python import tokenize from parso import parse from parso.python.tokenize import PythonToken # To make it easier to access some of the token types, just put them here. NAME = PythonTokenTypes.NAME NEWLINE = PythonTokenTypes.NEWLINE STRING = PythonTokenTypes.STRING NUMBER = PythonTokenTypes.NUMBER INDENT = PythonTokenTypes.INDENT DEDENT = PythonTokenTypes.DEDENT ERRORTOKEN = PythonTokenTypes.ERRORTOKEN OP = PythonTokenTypes.OP ENDMARKER = PythonTokenTypes.ENDMARKER ERROR_DEDENT = PythonTokenTypes.ERROR_DEDENT FSTRING_START = PythonTokenTypes.FSTRING_START FSTRING_STRING = PythonTokenTypes.FSTRING_STRING FSTRING_END = PythonTokenTypes.FSTRING_END def _get_token_list(string, version=None): # Load the current version. version_info = parse_version_string(version) return list(tokenize.tokenize(string, version_info)) def test_end_pos_one_line(): parsed = parse(dedent(''' def testit(): a = "huhu" ''')) simple_stmt = next(parsed.iter_funcdefs()).get_suite().children[-1] string = simple_stmt.children[0].get_rhs() assert string.end_pos == (3, 14) def test_end_pos_multi_line(): parsed = parse(dedent(''' def testit(): a = """huhu asdfasdf""" + "h" ''')) expr_stmt = next(parsed.iter_funcdefs()).get_suite().children[1].children[0] string_leaf = expr_stmt.get_rhs().children[0] assert string_leaf.end_pos == (4, 11) def test_simple_no_whitespace(): # Test a simple one line string, no preceding whitespace simple_docstring = '"""simple one line docstring"""' token_list = _get_token_list(simple_docstring) _, value, _, prefix = token_list[0] assert prefix == '' assert value == '"""simple one line docstring"""' def test_simple_with_whitespace(): # Test a simple one line string with preceding whitespace and newline simple_docstring = ' """simple one line docstring""" \r\n' token_list = _get_token_list(simple_docstring) assert token_list[0][0] == INDENT typ, value, start_pos, prefix = token_list[1] assert prefix == ' ' assert value == '"""simple one line docstring"""' assert typ == STRING typ, value, start_pos, prefix = token_list[2] assert prefix == ' ' assert typ == NEWLINE def test_function_whitespace(): # Test function definition whitespace identification fundef = dedent(''' def test_whitespace(*args, **kwargs): x = 1 if x > 0: print(True) ''') token_list = _get_token_list(fundef) for _, value, _, prefix in token_list: if value == 'test_whitespace': assert prefix == ' ' if value == '(': assert prefix == '' if value == '*': assert prefix == '' if value == '**': assert prefix == ' ' if value == 'print': assert prefix == ' ' if value == 'if': 
            assert prefix == '    '


def test_tokenize_multiline_I():
    # Make sure a multiline string with newlines has the end marker on the
    # next line.
    fundef = '''""""\n'''
    token_list = _get_token_list(fundef)
    assert token_list == [PythonToken(ERRORTOKEN, '""""\n', (1, 0), ''),
                          PythonToken(ENDMARKER, '', (2, 0), '')]


def test_tokenize_multiline_II():
    # Make sure a multiline string without newlines has the end marker on
    # the same line.
    fundef = '''""""'''
    token_list = _get_token_list(fundef)
    assert token_list == [PythonToken(ERRORTOKEN, '""""', (1, 0), ''),
                          PythonToken(ENDMARKER, '', (1, 4), '')]


def test_tokenize_multiline_III():
    # Make sure a multiline string with newlines has the end marker on the
    # next line, even if there are several newlines.
    fundef = '''""""\n\n'''
    token_list = _get_token_list(fundef)
    assert token_list == [PythonToken(ERRORTOKEN, '""""\n\n', (1, 0), ''),
                          PythonToken(ENDMARKER, '', (3, 0), '')]


def test_identifier_contains_unicode():
    fundef = dedent('''
    def 我あφ():
        pass
    ''')
    token_list = _get_token_list(fundef)
    unicode_token = token_list[1]
    if py_version >= 30:
        assert unicode_token[0] == NAME
    else:
        # Unicode tokens in Python 2 seem to be identified as operators.
        # They will be ignored in the parser, that's ok.
        assert unicode_token[0] == ERRORTOKEN


def test_quoted_strings():
    string_tokens = [
        'u"test"',
        'u"""test"""',
        'U"""test"""',
        "u'''test'''",
        "U'''test'''",
    ]

    for s in string_tokens:
        module = parse('''a = %s\n''' % s)
        simple_stmt = module.children[0]
        expr_stmt = simple_stmt.children[0]
        assert len(expr_stmt.children) == 3
        string_tok = expr_stmt.children[2]
        assert string_tok.type == 'string'
        assert string_tok.value == s


def test_ur_literals():
    """
    Decided to parse `u''` literals regardless of the Python version. This
    probably makes sense:

    - Python 3+ doesn't support them, but parsing them anyway doesn't hurt.
      While this is technically incorrect, it is only incorrect for one "old"
      and increasingly unimportant version.
    - All the other Python versions work very well with it.
    """
    def check(literal, is_literal=True):
        token_list = _get_token_list(literal)
        typ, result_literal, _, _ = token_list[0]
        if is_literal:
            if typ != FSTRING_START:
                assert typ == STRING
                assert result_literal == literal
        else:
            assert typ == NAME

    check('u""')
    check('ur""', is_literal=not py_version >= 30)
    check('Ur""', is_literal=not py_version >= 30)
    check('UR""', is_literal=not py_version >= 30)
    check('bR""')
    # Starting with Python 3.3 this ordering is also possible.
    if py_version >= 33:
        check('Rb""')

    # Starting with Python 3.6 format strings were introduced.
check('fr""', is_literal=py_version >= 36) check('rF""', is_literal=py_version >= 36) check('f""', is_literal=py_version >= 36) check('F""', is_literal=py_version >= 36) def test_error_literal(): error_token, newline, endmarker = _get_token_list('"\n') assert error_token.type == ERRORTOKEN assert error_token.string == '"' assert newline.type == NEWLINE assert endmarker.type == ENDMARKER assert endmarker.prefix == '' bracket, error_token, endmarker = _get_token_list('( """') assert error_token.type == ERRORTOKEN assert error_token.prefix == ' ' assert error_token.string == '"""' assert endmarker.type == ENDMARKER assert endmarker.prefix == '' def test_endmarker_end_pos(): def check(code): tokens = _get_token_list(code) lines = split_lines(code) assert tokens[-1].end_pos == (len(lines), len(lines[-1])) check('#c') check('#c\n') check('a\n') check('a') check(r'a\\n') check('a\\') xfail_py2 = dict(marks=[pytest.mark.xfail(sys.version_info[0] == 2, reason='Python 2')]) @pytest.mark.parametrize( ('code', 'types'), [ # Indentation (' foo', [INDENT, NAME, DEDENT]), (' foo\n bar', [INDENT, NAME, NEWLINE, ERROR_DEDENT, NAME, DEDENT]), (' foo\n bar \n baz', [INDENT, NAME, NEWLINE, ERROR_DEDENT, NAME, NEWLINE, ERROR_DEDENT, NAME, DEDENT]), (' foo\nbar', [INDENT, NAME, NEWLINE, DEDENT, NAME]), # Name stuff ('1foo1', [NUMBER, NAME]), pytest.param( u'மெல்லினம்', [NAME], **xfail_py2), pytest.param(u'²', [ERRORTOKEN], **xfail_py2), pytest.param(u'ä²ö', [NAME, ERRORTOKEN, NAME], **xfail_py2), pytest.param(u'ää²¹öö', [NAME, ERRORTOKEN, NAME], **xfail_py2), ] ) def test_token_types(code, types): actual_types = [t.type for t in _get_token_list(code)] assert actual_types == types + [ENDMARKER] def test_error_string(): t1, newline, endmarker = _get_token_list(' "\n') assert t1.type == ERRORTOKEN assert t1.prefix == ' ' assert t1.string == '"' assert newline.type == NEWLINE assert endmarker.prefix == '' assert endmarker.string == '' def test_indent_error_recovery(): code = dedent("""\ str( from x import a def """) lst = _get_token_list(code) expected = [ # `str(` INDENT, NAME, OP, # `from parso` NAME, NAME, # `import a` on same line as the previous from parso NAME, NAME, NEWLINE, # Dedent happens, because there's an import now and the import # statement "breaks" out of the opening paren on the first line. DEDENT, # `b` NAME, NEWLINE, ENDMARKER] assert [t.type for t in lst] == expected def test_error_token_after_dedent(): code = dedent("""\ class C: pass $foo """) lst = _get_token_list(code) expected = [ NAME, NAME, OP, NEWLINE, INDENT, NAME, NEWLINE, DEDENT, # $foo\n ERRORTOKEN, NAME, NEWLINE, ENDMARKER ] assert [t.type for t in lst] == expected def test_brackets_no_indentation(): """ There used to be an issue that the parentheses counting would go below zero. This should not happen. 
""" code = dedent("""\ } { } """) lst = _get_token_list(code) assert [t.type for t in lst] == [OP, NEWLINE, OP, OP, NEWLINE, ENDMARKER] def test_form_feed(): error_token, endmarker = _get_token_list(dedent('''\ \f"""''')) assert error_token.prefix == '\f' assert error_token.string == '"""' assert endmarker.prefix == '' def test_carriage_return(): lst = _get_token_list(' =\\\rclass') assert [t.type for t in lst] == [INDENT, OP, DEDENT, NAME, ENDMARKER] def test_backslash(): code = '\\\n# 1 \n' endmarker, = _get_token_list(code) assert endmarker.prefix == code @pytest.mark.parametrize( ('code', 'types'), [ ('f"', [FSTRING_START]), ('f""', [FSTRING_START, FSTRING_END]), ('f" {}"', [FSTRING_START, FSTRING_STRING, OP, OP, FSTRING_END]), ('f" "{}', [FSTRING_START, FSTRING_STRING, FSTRING_END, OP, OP]), (r'f"\""', [FSTRING_START, FSTRING_STRING, FSTRING_END]), (r'f"\""', [FSTRING_START, FSTRING_STRING, FSTRING_END]), # format spec (r'f"Some {x:.2f}{y}"', [FSTRING_START, FSTRING_STRING, OP, NAME, OP, FSTRING_STRING, OP, OP, NAME, OP, FSTRING_END]), # multiline f-string ('f"""abc\ndef"""', [FSTRING_START, FSTRING_STRING, FSTRING_END]), ('f"""abc{\n123}def"""', [ FSTRING_START, FSTRING_STRING, OP, NUMBER, OP, FSTRING_STRING, FSTRING_END ]), # a line continuation inside of an fstring_string ('f"abc\\\ndef"', [ FSTRING_START, FSTRING_STRING, FSTRING_END ]), ('f"\\\n{123}\\\n"', [ FSTRING_START, FSTRING_STRING, OP, NUMBER, OP, FSTRING_STRING, FSTRING_END ]), # a line continuation inside of an fstring_expr ('f"{\\\n123}"', [FSTRING_START, OP, NUMBER, OP, FSTRING_END]), # a line continuation inside of an format spec ('f"{123:.2\\\nf}"', [ FSTRING_START, OP, NUMBER, OP, FSTRING_STRING, OP, FSTRING_END ]), # a newline without a line continuation inside a single-line string is # wrong, and will generate an ERRORTOKEN ('f"abc\ndef"', [ FSTRING_START, FSTRING_STRING, NEWLINE, NAME, ERRORTOKEN ]), # a more complex example (r'print(f"Some {x:.2f}a{y}")', [ NAME, OP, FSTRING_START, FSTRING_STRING, OP, NAME, OP, FSTRING_STRING, OP, FSTRING_STRING, OP, NAME, OP, FSTRING_END, OP ]), # issue #86, a string-like in an f-string expression ('f"{ ""}"', [ FSTRING_START, OP, FSTRING_END, STRING ]), ('f"{ f""}"', [ FSTRING_START, OP, NAME, FSTRING_END, STRING ]), ] ) def test_fstring(code, types, version_ge_py36): actual_types = [t.type for t in _get_token_list(code, version_ge_py36)] assert types + [ENDMARKER] == actual_types @pytest.mark.parametrize( ('code', 'types'), [ # issue #87, `:=` in the outest paratheses should be tokenized # as a format spec marker and part of the format ('f"{x:=10}"', [ FSTRING_START, OP, NAME, OP, FSTRING_STRING, OP, FSTRING_END ]), ('f"{(x:=10)}"', [ FSTRING_START, OP, OP, NAME, OP, NUMBER, OP, OP, FSTRING_END ]), ] ) def test_fstring_assignment_expression(code, types, version_ge_py38): actual_types = [t.type for t in _get_token_list(code, version_ge_py38)] assert types + [ENDMARKER] == actual_types parso-0.5.2/test/test_parser.py0000664000175000017500000001412113575273707016427 0ustar davedave00000000000000# -*- coding: utf-8 -*- from textwrap import dedent import pytest from parso._compatibility import u from parso import parse from parso.python import tree from parso.utils import split_lines def test_basic_parsing(each_version): def compare(string): """Generates the AST object and then regenerates the code.""" assert parse(string, version=each_version).get_code() == string compare('\na #pass\n') compare('wblabla* 1\t\n') compare('def x(a, b:3): pass\n') compare('assert foo\n') def 
test_subscope_names(each_version): def get_sub(source): return parse(source, version=each_version).children[0] name = get_sub('class Foo: pass').name assert name.start_pos == (1, len('class ')) assert name.end_pos == (1, len('class Foo')) assert name.value == 'Foo' name = get_sub('def foo(): pass').name assert name.start_pos == (1, len('def ')) assert name.end_pos == (1, len('def foo')) assert name.value == 'foo' def test_import_names(each_version): def get_import(source): return next(parse(source, version=each_version).iter_imports()) imp = get_import('import math\n') names = imp.get_defined_names() assert len(names) == 1 assert names[0].value == 'math' assert names[0].start_pos == (1, len('import ')) assert names[0].end_pos == (1, len('import math')) assert imp.start_pos == (1, 0) assert imp.end_pos == (1, len('import math')) def test_end_pos(each_version): s = dedent(''' x = ['a', 'b', 'c'] def func(): y = None ''') parser = parse(s, version=each_version) scope = next(parser.iter_funcdefs()) assert scope.start_pos == (3, 0) assert scope.end_pos == (5, 0) def test_carriage_return_statements(each_version): source = dedent(''' foo = 'ns1!' # this is a namespace package ''') source = source.replace('\n', '\r\n') stmt = parse(source, version=each_version).children[0] assert '#' not in stmt.get_code() def test_incomplete_list_comprehension(each_version): """ Shouldn't raise an error, same bug as #418. """ # With the old parser this actually returned a statement. With the new # parser only valid statements generate one. children = parse('(1 for def', version=each_version).children assert [c.type for c in children] == \ ['error_node', 'error_node', 'endmarker'] def test_newline_positions(each_version): endmarker = parse('a\n', version=each_version).children[-1] assert endmarker.end_pos == (2, 0) new_line = endmarker.get_previous_leaf() assert new_line.start_pos == (1, 1) assert new_line.end_pos == (2, 0) def test_end_pos_error_correction(each_version): """ Source code without ending newline are given one, because the Python grammar needs it. However, they are removed again. We still want the right end_pos, even if something breaks in the parser (error correction). """ s = 'def x():\n .' m = parse(s, version=each_version) func = m.children[0] assert func.type == 'funcdef' assert func.end_pos == (2, 2) assert m.end_pos == (2, 2) def test_param_splitting(each_version): """ Jedi splits parameters into params, this is not what the grammar does, but Jedi does this to simplify argument parsing. """ def check(src, result): # Python 2 tuple params should be ignored for now. m = parse(src, version=each_version) if each_version.startswith('2'): # We don't want b and c to be a part of the param enumeration. Just # ignore them, because it's not what we want to support in the # future. func = next(m.iter_funcdefs()) assert [param.name.value for param in func.get_params()] == result else: assert not list(m.iter_funcdefs()) check('def x(a, (b, c)):\n pass', ['a']) check('def x((b, c)):\n pass', []) def test_unicode_string(): s = tree.String(None, u('bö'), (0, 0)) assert repr(s) # Should not raise an Error! 
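

# The test below is an editorial addition (not part of the original suite).
# It spells out the invariant that several tests in this file rely on:
# ``parse(...).get_code()`` reproduces the input exactly, even when error
# recovery has to wrap unparsable pieces into error nodes.
def test_error_recovery_round_trip(each_version):
    code = 'def x(:\n 1'
    module = parse(code, version=each_version)
    assert module.get_code() == code
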
def test_backslash_dos_style(each_version): assert parse('\\\r\n', version=each_version) def test_started_lambda_stmt(each_version): m = parse(u'lambda a, b: a i', version=each_version) assert m.children[0].type == 'error_node' def test_python2_octal(each_version): module = parse('0660', version=each_version) first = module.children[0] if each_version.startswith('2'): assert first.type == 'number' else: assert first.type == 'error_node' @pytest.mark.parametrize('code', ['foo "', 'foo """\n', 'foo """\nbar']) def test_open_string_literal(each_version, code): """ Testing mostly if removing the last newline works. """ lines = split_lines(code, keepends=True) end_pos = (len(lines), len(lines[-1])) module = parse(code, version=each_version) assert module.get_code() == code assert module.end_pos == end_pos == module.children[1].end_pos def test_too_many_params(): with pytest.raises(TypeError): parse('asdf', hello=3) def test_dedent_at_end(each_version): code = dedent(''' for foobar in [1]: foobar''') module = parse(code, version=each_version) assert module.get_code() == code suite = module.children[0].children[-1] foobar = suite.children[-1] assert foobar.type == 'name' def test_no_error_nodes(each_version): def check(node): assert node.type not in ('error_leaf', 'error_node') try: children = node.children except AttributeError: pass else: for child in children: check(child) check(parse("if foo:\n bar", version=each_version)) def test_named_expression(works_ge_py38): works_ge_py38.parse("(a := 1, a + 1)") @pytest.mark.parametrize( 'param_code', [ 'a=1, /', 'a, /', 'a=1, /, b=3', 'a, /, b', 'a, /, b', 'a, /, *, b', 'a, /, **kwargs', ] ) def test_positional_only_arguments(works_ge_py38, param_code): works_ge_py38.parse("def x(%s): pass" % param_code) parso-0.5.2/test/test_absolute_import.py0000664000175000017500000000153513575273707020350 0ustar davedave00000000000000""" Tests ``from __future__ import absolute_import`` (only important for Python 2.X) """ from parso import parse def test_explicit_absolute_imports(): """ Detect modules with ``from __future__ import absolute_import``. """ module = parse("from __future__ import absolute_import") assert module._has_explicit_absolute_import() def test_no_explicit_absolute_imports(): """ Detect modules without ``from __future__ import absolute_import``. """ assert not parse("1")._has_explicit_absolute_import() def test_dont_break_imports_without_namespaces(): """ The code checking for ``from __future__ import absolute_import`` shouldn't assume that all imports have non-``None`` namespaces. """ src = "from __future__ import absolute_import\nimport xyzzy" assert parse(src)._has_explicit_absolute_import() parso-0.5.2/test/test_load_grammar.py0000664000175000017500000000173313575273707017565 0ustar davedave00000000000000import pytest from parso.grammar import load_grammar from parso import utils def test_load_inexisting_grammar(): # This version shouldn't be out for a while, but if it ever is: wow! with pytest.raises(NotImplementedError): load_grammar(version='15.8') # The same is true for very old grammars (even though this is probably not # going to be an issue).
with pytest.raises(NotImplementedError): load_grammar(version='1.5') @pytest.mark.parametrize(('string', 'result'), [ ('2', (2, 7)), ('3', (3, 6)), ('1.1', (1, 1)), ('1.1.1', (1, 1)), ('300.1.31', (300, 1)) ]) def test_parse_version(string, result): assert utils._parse_version(string) == result @pytest.mark.parametrize('string', ['1.', 'a', '#', '1.3.4.5', '1.12']) def test_invalid_grammar_version(string): with pytest.raises(ValueError): load_grammar(version=string) def test_grammar_int_version(): with pytest.raises(TypeError): load_grammar(version=3.2) parso-0.5.2/test/test_utils.py0000664000175000017500000000344613575273707016303 0ustar davedave00000000000000from codecs import BOM_UTF8 from parso.utils import split_lines, python_bytes_to_unicode import parso import pytest @pytest.mark.parametrize( ('string', 'expected_result', 'keepends'), [ ('asd\r\n', ['asd', ''], False), ('asd\r\n', ['asd\r\n', ''], True), ('asd\r', ['asd', ''], False), ('asd\r', ['asd\r', ''], True), ('asd\n', ['asd', ''], False), ('asd\n', ['asd\n', ''], True), ('asd\r\n\f', ['asd', '\f'], False), ('asd\r\n\f', ['asd\r\n', '\f'], True), ('\fasd\r\n', ['\fasd', ''], False), ('\fasd\r\n', ['\fasd\r\n', ''], True), ('', [''], False), ('', [''], True), ('\n', ['', ''], False), ('\n', ['\n', ''], True), ('\r', ['', ''], False), ('\r', ['\r', ''], True), # Invalid line breaks ('a\vb', ['a\vb'], False), ('a\vb', ['a\vb'], True), ('\x1C', ['\x1C'], False), ('\x1C', ['\x1C'], True), ] ) def test_split_lines(string, expected_result, keepends): assert split_lines(string, keepends=keepends) == expected_result def test_python_bytes_to_unicode_unicode_text(): source = ( b"# vim: fileencoding=utf-8\n" b"# \xe3\x81\x82\xe3\x81\x84\xe3\x81\x86\xe3\x81\x88\xe3\x81\x8a\n" ) actual = python_bytes_to_unicode(source) expected = source.decode('utf-8') assert actual == expected def test_utf8_bom(): unicode_bom = BOM_UTF8.decode('utf-8') module = parso.parse(unicode_bom) endmarker = module.children[0] assert endmarker.type == 'endmarker' assert unicode_bom == endmarker.prefix module = parso.parse(unicode_bom + 'foo = 1') expr_stmt = module.children[0] assert expr_stmt.type == 'expr_stmt' assert unicode_bom == expr_stmt.get_first_leaf().prefix parso-0.5.2/test/test_error_recovery.py0000664000175000017500000000534413575273707020211 0ustar davedave00000000000000from parso import parse, load_grammar def test_with_stmt(): module = parse('with x: f.\na') assert module.children[0].type == 'with_stmt' w, with_item, colon, f = module.children[0].children assert f.type == 'error_node' assert f.get_code(include_prefix=False) == 'f.' assert module.children[2].type == 'name' def test_one_line_function(each_version): module = parse('def x(): f.', version=each_version) assert module.children[0].type == 'funcdef' def_, name, parameters, colon, f = module.children[0].children assert f.type == 'error_node' module = parse('def x(a:', version=each_version) func = module.children[0] assert func.type == 'error_node' if each_version.startswith('2'): assert func.children[-1].value == 'a' else: assert func.children[-1] == ':' def test_if_else(): module = parse('if x:\n f.\nelse:\n g(') if_stmt = module.children[0] if_, test, colon, suite1, else_, colon, suite2 = if_stmt.children f = suite1.children[1] assert f.type == 'error_node' assert f.children[0].value == 'f' assert f.children[1].value == '.' 
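# The else-suite gets the same treatment: error recovery keeps the unfinished # call 'g(' around (checked below) instead of dropping it.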
g = suite2.children[1] assert g.children[0].value == 'g' assert g.children[1].value == '(' def test_if_stmt(): module = parse('if x: f.\nelse: g(') if_stmt = module.children[0] assert if_stmt.type == 'if_stmt' if_, test, colon, f = if_stmt.children assert f.type == 'error_node' assert f.children[0].value == 'f' assert f.children[1].value == '.' assert module.children[1].type == 'newline' assert module.children[1].value == '\n' assert module.children[2].type == 'error_leaf' assert module.children[2].value == 'else' assert module.children[3].type == 'error_leaf' assert module.children[3].value == ':' in_else_stmt = module.children[4] assert in_else_stmt.type == 'error_node' assert in_else_stmt.children[0].value == 'g' assert in_else_stmt.children[1].value == '(' def test_invalid_token(): module = parse('a + ? + b') error_node, q, plus_b, endmarker = module.children assert error_node.get_code() == 'a +' assert q.value == '?' assert q.type == 'error_leaf' assert plus_b.type == 'factor' assert plus_b.get_code() == ' + b' def test_invalid_token_in_fstr(): module = load_grammar(version='3.6').parse('f"{a + ? + b}"') error_node, q, plus_b, error1, error2, endmarker = module.children assert error_node.get_code() == 'f"{a +' assert q.value == '?' assert q.type == 'error_leaf' assert plus_b.type == 'error_node' assert plus_b.get_code() == ' + b' assert error1.value == '}' assert error1.type == 'error_leaf' assert error2.value == '"' assert error2.type == 'error_leaf' parso-0.5.2/test/test_param_splitting.py0000664000175000017500000000260413575273707020333 0ustar davedave00000000000000''' To make the life of any analysis easier, we are generating Param objects instead of simple parser objects. ''' from textwrap import dedent from parso import parse def assert_params(param_string, version=None, **wanted_dct): source = dedent(''' def x(%s): pass ''') % param_string module = parse(source, version=version) funcdef = next(module.iter_funcdefs()) dct = dict((p.name.value, p.default and p.default.get_code()) for p in funcdef.get_params()) assert dct == wanted_dct assert module.get_code() == source def test_split_params_with_separation_star(): assert_params(u'x, y=1, *, z=3', x=None, y='1', z='3', version='3.5') assert_params(u'*, x', x=None, version='3.5') assert_params(u'*', version='3.5') def test_split_params_with_stars(): assert_params(u'x, *args', x=None, args=None) assert_params(u'**kwargs', kwargs=None) assert_params(u'*args, **kwargs', args=None, kwargs=None) def test_kw_only_no_kw(works_ge_py3): """ Parsing this should be working. In CPython the parser also parses this and in a later step the AST complains. """ module = works_ge_py3.parse('def test(arg, *):\n pass') if module is not None: func = module.children[0] open_, p1, asterisk, close = func._get_param_nodes() assert p1.get_code('arg,') assert asterisk.value == '*' parso-0.5.2/test/__init__.py0000664000175000017500000000000013575273707015622 0ustar davedave00000000000000parso-0.5.2/test/test_cache.py0000664000175000017500000000543513575273707016206 0ustar davedave00000000000000""" Test all things related to the ``jedi.cache`` module. """ from os import unlink import pytest from parso.cache import _NodeCacheItem, save_module, load_module, \ _get_hashed_path, parser_cache, _load_from_file_system, _save_to_file_system from parso import load_grammar from parso import cache from parso import file_io @pytest.fixture() def isolated_jedi_cache(monkeypatch, tmpdir): """ Set `jedi.settings.cache_directory` to a temporary directory during test. 
Same as `clean_jedi_cache`, but create the temporary directory for each test case (scope='function'). """ monkeypatch.setattr(cache, '_default_cache_path', str(tmpdir)) def test_modulepickling_change_cache_dir(tmpdir): """ ParserPickling should not save old cache when cache_directory is changed. See: `#168 `_ """ dir_1 = str(tmpdir.mkdir('first')) dir_2 = str(tmpdir.mkdir('second')) item_1 = _NodeCacheItem('bla', []) item_2 = _NodeCacheItem('bla', []) path_1 = 'fake path 1' path_2 = 'fake path 2' hashed_grammar = load_grammar()._hashed _save_to_file_system(hashed_grammar, path_1, item_1, cache_path=dir_1) parser_cache.clear() cached = load_stored_item(hashed_grammar, path_1, item_1, cache_path=dir_1) assert cached == item_1.node _save_to_file_system(hashed_grammar, path_2, item_2, cache_path=dir_2) cached = load_stored_item(hashed_grammar, path_1, item_1, cache_path=dir_2) assert cached is None def load_stored_item(hashed_grammar, path, item, cache_path): """Load `item` stored at `path` in `cache`.""" item = _load_from_file_system(hashed_grammar, path, item.change_time - 1, cache_path) return item @pytest.mark.usefixtures("isolated_jedi_cache") def test_modulepickling_simulate_deleted_cache(tmpdir): """ Tests loading from a cache file after it is deleted. According to macOS `dev docs`__, Note that the system may delete the Caches/ directory to free up disk space, so your app must be able to re-create or download these files as needed. It is possible that other supported platforms treat cache files the same way. __ https://developer.apple.com/library/content/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html """ grammar = load_grammar() module = 'fake parser' # Create the file path = tmpdir.dirname + '/some_path' with open(path, 'w'): pass io = file_io.FileIO(path) save_module(grammar._hashed, io, module, lines=[]) assert load_module(grammar._hashed, io) == module unlink(_get_hashed_path(grammar._hashed, path)) parser_cache.clear() cached2 = load_module(grammar._hashed, io) assert cached2 is None parso-0.5.2/test/failing_examples.py0000664000175000017500000002001513575273707017402 0ustar davedave00000000000000# -*- coding: utf-8 -*- import sys from textwrap import dedent def indent(code): lines = code.splitlines(True) return ''.join([' ' * 2 + line for line in lines]) def build_nested(code, depth, base='def f():\n'): if depth == 0: return code new_code = base + indent(code) return build_nested(new_code, depth - 1, base=base) FAILING_EXAMPLES = [ '1 +', '?', 'continue', 'break', 'return', 'yield', # SyntaxError from Python/ast.c 'f(x for x in bar, 1)', 'from foo import a,', 'from __future__ import whatever', 'from __future__ import braces', 'from .__future__ import whatever', 'def f(x=3, y): pass', 'lambda x=3, y: x', '__debug__ = 1', 'with x() as __debug__: pass', # Mostly 3.6 relevant '[]: int', '[a, b]: int', '(): int', '(()): int', '((())): int', '{}: int', 'True: int', '(a, b): int', '*star,: int', 'a, b: int = 3', 'foo(+a=3)', 'f(lambda: 1=1)', 'f(x=1, x=2)', 'f(**x, y)', 'f(x=2, y)', 'f(**x, *y)', 'f(**x, y=3, z)', 'a, b += 3', '(a, b) += 3', '[a, b] += 3', # All assignment tests 'lambda a: 1 = 1', '[x for x in y] = 1', '{x for x in y} = 1', '{x:x for x in y} = 1', '(x for x in y) = 1', 'None = 1', '... 
= 1', 'a == b = 1', '{a, b} = 1', '{a: b} = 1', '1 = 1', '"" = 1', 'b"" = 1', 'b"" = 1', '"" "" = 1', '1 | 1 = 3', '1**1 = 3', '~ 1 = 3', 'not 1 = 3', '1 and 1 = 3', 'def foo(): (yield 1) = 3', 'def foo(): x = yield 1 = 3', 'async def foo(): await x = 3', '(a if a else a) = a', 'a, 1 = x', 'foo() = 1', # Cases without the equals but other assignments. 'with x as foo(): pass', 'del bar, 1', 'for x, 1 in []: pass', 'for (not 1) in []: pass', '[x for 1 in y]', '[x for a, 3 in y]', '(x for 1 in y)', '{x for 1 in y}', '{x:x for 1 in y}', # Unicode/Bytes issues. r'u"\x"', r'u"\"', r'u"\u"', r'u"""\U"""', r'u"\Uffffffff"', r"u'''\N{}'''", r"u'\N{foo}'", r'b"\x"', r'b"\"', '*a, *b = 3, 3', 'async def foo(): yield from []', 'yield from []', '*a = 3', 'del *a, b', 'def x(*): pass', '(%s *d) = x' % ('a,' * 256), '{**{} for a in [1]}', # Parser/tokenize.c r'"""', r'"', r"'''", r"'", r"\blub", # IndentationError: too many levels of indentation build_nested('pass', 100), # SyntaxErrors from Python/symtable.c 'def f(x, x): pass', 'nonlocal a', # IndentationError ' foo', 'def x():\n 1\n 2', 'def x():\n 1\n 2', 'if 1:\nfoo', 'if 1: blubb\nif 1:\npass\nTrue and False', # f-strings 'f"{}"', r'f"{\}"', 'f"{\'\\\'}"', 'f"{#}"', "f'{1!b}'", "f'{1:{5:{3}}}'", "f'{'", "f'{'", "f'}'", "f'{\"}'", "f'{\"}'", # Now nested parsing "f'{continue}'", "f'{1;1}'", "f'{a;}'", "f'{b\"\" \"\"}'", ] GLOBAL_NONLOCAL_ERROR = [ dedent(''' def glob(): x = 3 x.z global x'''), dedent(''' def glob(): x = 3 global x'''), dedent(''' def glob(): x global x'''), dedent(''' def glob(): x = 3 x.z nonlocal x'''), dedent(''' def glob(): x = 3 nonlocal x'''), dedent(''' def glob(): x nonlocal x'''), # Annotation issues dedent(''' def glob(): x[0]: foo global x'''), dedent(''' def glob(): x.a: foo global x'''), dedent(''' def glob(): x: foo global x'''), dedent(''' def glob(): x: foo = 5 global x'''), dedent(''' def glob(): x: foo = 5 x global x'''), dedent(''' def glob(): global x x: foo = 3 '''), # global/nonlocal + param dedent(''' def glob(x): global x '''), dedent(''' def glob(x): nonlocal x '''), dedent(''' def x(): a =3 def z(): nonlocal a a = 3 nonlocal a '''), dedent(''' def x(): a = 4 def y(): global a nonlocal a '''), # Missing binding of nonlocal dedent(''' def x(): nonlocal a '''), dedent(''' def x(): def y(): nonlocal a '''), dedent(''' def x(): a = 4 def y(): global a print(a) def z(): nonlocal a '''), ] if sys.version_info >= (3, 6): FAILING_EXAMPLES += GLOBAL_NONLOCAL_ERROR if sys.version_info >= (3, 5): FAILING_EXAMPLES += [ # Raises different errors so just ignore them for now. '[*[] for a in [1]]', # Raises multiple errors in previous versions. 'async def bla():\n def x(): await bla()', ] if sys.version_info >= (3, 4): # Before that del None works like del list, it gives a NameError. FAILING_EXAMPLES.append('del None') if sys.version_info >= (3,): FAILING_EXAMPLES += [ # Unfortunately assigning to False and True do not raise an error in # 2.x. '(True,) = x', '([False], a) = x', # A symtable error that raises only a SyntaxWarning in Python 2. 'def x(): from math import *', # unicode chars in bytes are allowed in python 2 'b"ä"', # combining strings and unicode is allowed in Python 2. '"s" b""', '"s" b"" ""', 'b"" "" b"" ""', ] if sys.version_info >= (3, 6): FAILING_EXAMPLES += [ # Same as above, but for f-strings. 'f"s" b""', 'b"s" f""', ] if sys.version_info >= (2, 7): # This is something that raises a different error in 2.6 than in the other # versions. Just skip it for 2.6. 
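    # (These examples feed test_python_exception_matches, which checks that # parso reports the same error CPython's compile() raises.)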
FAILING_EXAMPLES.append('[a, 1] += 3') if sys.version_info[:2] == (3, 5): # yields are not allowed in 3.5 async functions. Therefore test them # separately, here. FAILING_EXAMPLES += [ 'async def foo():\n yield x', 'async def foo():\n yield x', ] else: FAILING_EXAMPLES += [ 'async def foo():\n yield x\n return 1', 'async def foo():\n yield x\n return 1', ] if sys.version_info[:2] <= (3, 4): # Python > 3.4 this is valid code. FAILING_EXAMPLES += [ 'a = *[1], 2', '(*[1], 2)', ] if sys.version_info[:2] < (3, 8): FAILING_EXAMPLES += [ # Python/compile.c dedent('''\ for a in [1]: try: pass finally: continue '''), # 'continue' not supported inside 'finally' clause" ] if sys.version_info[:2] >= (3, 8): # assignment expressions from issue#89 FAILING_EXAMPLES += [ # Case 2 '(lambda: x := 1)', '((lambda: x) := 1)', # Case 3 '(a[i] := x)', '((a[i]) := x)', '(a(i) := x)', # Case 4 '(a.b := c)', '[(i.i:= 0) for ((i), j) in range(5)]', # Case 5 '[i:= 0 for i, j in range(5)]', '[(i:= 0) for ((i), j) in range(5)]', '[(i:= 0) for ((i), j), in range(5)]', '[(i:= 0) for ((i), j.i), in range(5)]', '[[(i:= i) for j in range(5)] for i in range(5)]', '[i for i, j in range(5) if True or (i:= 1)]', '[False and (i:= 0) for i, j in range(5)]', # Case 6 '[i+1 for i in (i:= range(5))]', '[i+1 for i in (j:= range(5))]', '[i+1 for i in (lambda: (j:= range(5)))()]', # Case 7 'class Example:\n [(j := i) for i in range(5)]', # Not in that issue '(await a := x)', '((await a) := x)', ] parso-0.5.2/test/test_grammar.py0000664000175000017500000000017713575273707016567 0ustar davedave00000000000000import parso import pytest def test_non_unicode(): with pytest.raises(UnicodeDecodeError): parso.parse(b'\xe4') parso-0.5.2/test/test_prefix.py0000664000175000017500000000440713575273707016436 0ustar davedave00000000000000try: from itertools import zip_longest except ImportError: # Python 2 from itertools import izip_longest as zip_longest from codecs import BOM_UTF8 import pytest import parso unicode_bom = BOM_UTF8.decode('utf-8') @pytest.mark.parametrize(('string', 'tokens'), [ ('', ['']), ('#', ['#', '']), (' # ', ['# ', '']), (' # \n', ['# ', '\n', '']), (' # \f\n', ['# ', '\f', '\n', '']), (' \n', ['\n', '']), (' \n ', ['\n', ' ']), (' \f ', ['\f', ' ']), (' \f ', ['\f', ' ']), (' \r\n', ['\r\n', '']), ('\\\n', ['\\\n', '']), ('\\\r\n', ['\\\r\n', '']), ('\t\t\n\t', ['\n', '\t']), ]) def test_simple_prefix_splitting(string, tokens): tree = parso.parse(string) leaf = tree.children[0] assert leaf.type == 'endmarker' parsed_tokens = list(leaf._split_prefix()) start_pos = (1, 0) for pt, expected in zip_longest(parsed_tokens, tokens): assert pt.value == expected # Calculate the estimated end_pos if expected.endswith('\n'): end_pos = start_pos[0] + 1, 0 else: end_pos = start_pos[0], start_pos[1] + len(expected) + len(pt.spacing) #assert start_pos == pt.start_pos assert end_pos == pt.end_pos start_pos = end_pos @pytest.mark.parametrize(('string', 'types'), [ ('# ', ['comment', 'spacing']), ('\r\n', ['newline', 'spacing']), ('\f', ['formfeed', 'spacing']), ('\\\n', ['backslash', 'spacing']), (' \t', ['spacing']), (' \t ', ['spacing']), (unicode_bom + ' # ', ['bom', 'comment', 'spacing']), ]) def test_prefix_splitting_types(string, types): tree = parso.parse(string) leaf = tree.children[0] assert leaf.type == 'endmarker' parsed_tokens = list(leaf._split_prefix()) assert [t.type for t in parsed_tokens] == types def test_utf8_bom(): tree = parso.parse(unicode_bom + 'a = 1') expr_stmt = tree.children[0] assert expr_stmt.start_pos == (1, 0) 
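    # Illustrative extra check (added here, consistent with the assertions # above): the BOM lives in the prefix, so the round-trip keeps it. assert tree.get_code() == unicode_bom + 'a = 1'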
tree = parso.parse(unicode_bom + '\n') endmarker = tree.children[0] parts = list(endmarker._split_prefix()) assert [p.type for p in parts] == ['bom', 'newline', 'spacing'] assert [p.start_pos for p in parts] == [(1, 0), (1, 0), (2, 0)] assert [p.end_pos for p in parts] == [(1, 0), (2, 0), (2, 0)] parso-0.5.2/test/test_normalizer_issues_files.py0000664000175000017500000000416413575273707022100 0ustar davedave00000000000000""" To easily verify if our normalizer raises the right error codes, just use the tests of pydocstyle. """ import difflib import re import parso from parso._compatibility import total_ordering from parso.utils import python_bytes_to_unicode @total_ordering class WantedIssue(object): def __init__(self, code, line, column): self.code = code self._line = line self._column = column def __eq__(self, other): return self.code == other.code and self.start_pos == other.start_pos def __lt__(self, other): return self.start_pos < other.start_pos or self.code < other.code def __hash__(self): return hash(str(self.code) + str(self._line) + str(self._column)) @property def start_pos(self): return self._line, self._column def collect_errors(code): for line_nr, line in enumerate(code.splitlines(), 1): match = re.match(r'(\s*)#: (.*)$', line) if match is not None: codes = match.group(2) for code in codes.split(): code, _, add_indent = code.partition(':') column = int(add_indent or len(match.group(1))) code, _, add_line = code.partition('+') l = line_nr + 1 + int(add_line or 0) yield WantedIssue(code[1:], l, column) def test_normalizer_issue(normalizer_issue_case): def sort(issues): issues = sorted(issues, key=lambda i: (i.start_pos, i.code)) return ["(%s, %s): %s" % (i.start_pos[0], i.start_pos[1], i.code) for i in issues] with open(normalizer_issue_case.path, 'rb') as f: code = python_bytes_to_unicode(f.read()) desired = sort(collect_errors(code)) grammar = parso.load_grammar(version=normalizer_issue_case.python_version) module = grammar.parse(code) issues = grammar._get_normalizer_issues(module) actual = sort(issues) diff = '\n'.join(difflib.ndiff(desired, actual)) # To make the pytest -v diff a bit prettier, keep pytest from rewriting assert # statements by executing the comparison earlier. _bool = desired == actual assert _bool, '\n' + diff parso-0.5.2/test/test_diff_parser.py0000664000175000017500000007110013575273707017417 0ustar davedave00000000000000# -*- coding: utf-8 -*- from textwrap import dedent import logging import sys import pytest from parso.utils import split_lines from parso import cache from parso import load_grammar from parso.python.diff import DiffParser, _assert_valid_graph from parso import parse ANY = object() def test_simple(): """ The diff parser reuses modules. So check for that. 
""" grammar = load_grammar() module_a = grammar.parse('a', diff_cache=True) assert grammar.parse('b', diff_cache=True) == module_a def _check_error_leaves_nodes(node): if node.type in ('error_leaf', 'error_node'): return node try: children = node.children except AttributeError: pass else: for child in children: x_node = _check_error_leaves_nodes(child) if x_node is not None: return x_node return None class Differ(object): grammar = load_grammar() def initialize(self, code): logging.debug('differ: initialize') try: del cache.parser_cache[self.grammar._hashed][None] except KeyError: pass self.lines = split_lines(code, keepends=True) self.module = parse(code, diff_cache=True, cache=True) assert code == self.module.get_code() _assert_valid_graph(self.module) return self.module def parse(self, code, copies=0, parsers=0, expect_error_leaves=False): logging.debug('differ: parse copies=%s parsers=%s', copies, parsers) lines = split_lines(code, keepends=True) diff_parser = DiffParser( self.grammar._pgen_grammar, self.grammar._tokenizer, self.module, ) new_module = diff_parser.update(self.lines, lines) self.lines = lines assert code == new_module.get_code() _assert_valid_graph(new_module) error_node = _check_error_leaves_nodes(new_module) assert expect_error_leaves == (error_node is not None), error_node if parsers is not ANY: assert diff_parser._parser_count == parsers if copies is not ANY: assert diff_parser._copy_count == copies return new_module @pytest.fixture() def differ(): return Differ() def test_change_and_undo(differ): func_before = 'def func():\n pass\n' # Parse the function and a. differ.initialize(func_before + 'a') # Parse just b. differ.parse(func_before + 'b', copies=1, parsers=1) # b has changed to a again, so parse that. differ.parse(func_before + 'a', copies=1, parsers=1) # Same as before parsers should not be used. Just a simple copy. differ.parse(func_before + 'a', copies=1) # Now that we have a newline at the end, everything is easier in Python # syntax, we can parse once and then get a copy. differ.parse(func_before + 'a\n', copies=1, parsers=1) differ.parse(func_before + 'a\n', copies=1) # Getting rid of an old parser: Still no parsers used. differ.parse('a\n', copies=1) # Now the file has completely changed and we need to parse. differ.parse('b\n', parsers=1) # And again. differ.parse('a\n', parsers=1) def test_positions(differ): func_before = 'class A:\n pass\n' m = differ.initialize(func_before + 'a') assert m.start_pos == (1, 0) assert m.end_pos == (3, 1) m = differ.parse('a', copies=1) assert m.start_pos == (1, 0) assert m.end_pos == (1, 1) m = differ.parse('a\n\n', parsers=1) assert m.end_pos == (3, 0) m = differ.parse('a\n\n ', copies=1, parsers=2) assert m.end_pos == (3, 1) m = differ.parse('a ', parsers=1) assert m.end_pos == (1, 2) def test_if_simple(differ): src = dedent('''\ if 1: a = 3 ''') else_ = "else:\n a = ''\n" differ.initialize(src + 'a') differ.parse(src + else_ + "a", copies=0, parsers=1) differ.parse(else_, parsers=1, copies=1, expect_error_leaves=True) differ.parse(src + else_, parsers=1) def test_func_with_for_and_comment(differ): # The first newline is important, leave it. It should not trigger another # parser split. 
src = dedent("""\ def func(): pass for a in [1]: # COMMENT a""") differ.initialize(src) differ.parse('a\n' + src, copies=1, parsers=2) def test_one_statement_func(differ): src = dedent("""\ first def func(): a """) differ.initialize(src + 'second') differ.parse(src + 'def second():\n a', parsers=1, copies=1) def test_for_on_one_line(differ): src = dedent("""\ foo = 1 for x in foo: pass def hi(): pass """) differ.initialize(src) src = dedent("""\ def hi(): for x in foo: pass pass pass """) differ.parse(src, parsers=2) src = dedent("""\ def hi(): for x in foo: pass pass def nested(): pass """) # The second parser is for parsing the `def nested()` which is an `equal` # operation in the SequenceMatcher. differ.parse(src, parsers=1, copies=1) def test_open_parentheses(differ): func = 'def func():\n a\n' code = 'isinstance(\n\n' + func new_code = 'isinstance(\n' + func differ.initialize(code) differ.parse(new_code, parsers=1, expect_error_leaves=True) new_code = 'a = 1\n' + new_code differ.parse(new_code, parsers=2, expect_error_leaves=True) func += 'def other_func():\n pass\n' differ.initialize('isinstance(\n' + func) # Cannot copy all, because the prefix of the function is once a newline and # once not. differ.parse('isinstance()\n' + func, parsers=2, copies=1) def test_open_parentheses_at_end(differ): code = "a['" differ.initialize(code) differ.parse(code, parsers=1, expect_error_leaves=True) def test_backslash(differ): src = dedent(r""" a = 1\ if 1 else 2 def x(): pass """) differ.initialize(src) src = dedent(r""" def x(): a = 1\ if 1 else 2 def y(): pass """) differ.parse(src, parsers=2) src = dedent(r""" def first(): if foo \ and bar \ or baz: pass def second(): pass """) differ.parse(src, parsers=1) def test_full_copy(differ): code = 'def foo(bar, baz):\n pass\n bar' differ.initialize(code) differ.parse(code, copies=1) def test_wrong_whitespace(differ): code = ''' hello ''' differ.initialize(code) differ.parse(code + 'bar\n ', parsers=3) code += """abc(\npass\n """ differ.parse(code, parsers=2, copies=1, expect_error_leaves=True) def test_issues_with_error_leaves(differ): code = dedent(''' def ints(): str.. str ''') code2 = dedent(''' def ints(): str. str ''') differ.initialize(code) differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True) def test_unfinished_nodes(differ): code = dedent(''' class a(): def __init__(self, a): self.a = a def p(self): a(1) ''') code2 = dedent(''' class a(): def __init__(self, a): self.a = a def p(self): self a(1) ''') differ.initialize(code) differ.parse(code2, parsers=1, copies=2) def test_nested_if_and_scopes(differ): code = dedent(''' class a(): if 1: def b(): 2 ''') code2 = code + ' else:\n 3' differ.initialize(code) differ.parse(code2, parsers=1, copies=0) def test_word_before_def(differ): code1 = 'blub def x():\n' code2 = code1 + ' s' differ.initialize(code1) differ.parse(code2, parsers=1, copies=0, expect_error_leaves=True) def test_classes_with_error_leaves(differ): code1 = dedent(''' class X(): def x(self): blablabla assert 3 self. 
class Y(): pass ''') code2 = dedent(''' class X(): def x(self): blablabla assert 3 str( class Y(): pass ''') differ.initialize(code1) differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True) def test_totally_wrong_whitespace(differ): code1 = ''' class X(): raise n class Y(): pass ''' code2 = ''' class X(): raise n str( class Y(): pass ''' differ.initialize(code1) differ.parse(code2, parsers=4, copies=0, expect_error_leaves=True) def test_node_insertion(differ): code1 = dedent(''' class X(): def y(self): a = 1 b = 2 c = 3 d = 4 ''') code2 = dedent(''' class X(): def y(self): a = 1 b = 2 str c = 3 d = 4 ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=2) def test_whitespace_at_end(differ): code = dedent('str\n\n') differ.initialize(code) differ.parse(code + '\n', parsers=1, copies=1) def test_endless_while_loop(differ): """ This was a bug in Jedi #878. """ code = '#dead' differ.initialize(code) module = differ.parse(code, parsers=1) assert module.end_pos == (1, 5) code = '#dead\n' differ.initialize(code) module = differ.parse(code + '\n', parsers=1) assert module.end_pos == (3, 0) def test_in_class_movements(differ): code1 = dedent("""\ class PlaybookExecutor: p b def run(self): 1 try: x except: pass """) code2 = dedent("""\ class PlaybookExecutor: b def run(self): 1 try: x except: pass """) differ.initialize(code1) differ.parse(code2, parsers=2, copies=1) def test_in_parentheses_newlines(differ): code1 = dedent(""" x = str( True) a = 1 def foo(): pass b = 2""") code2 = dedent(""" x = str(True) a = 1 def foo(): pass b = 2""") differ.initialize(code1) differ.parse(code2, parsers=1, copies=1) def test_indentation_issue(differ): code1 = dedent(""" import module """) code2 = dedent(""" class L1: class L2: class L3: def f(): pass def f(): pass def f(): pass def f(): pass """) differ.initialize(code1) differ.parse(code2, parsers=1) def test_endmarker_newline(differ): code1 = dedent('''\ docu = None # some comment result = codet incomplete_dctassign = { "module" if "a": x = 3 # asdf ''') code2 = code1.replace('codet', 'coded') differ.initialize(code1) differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True) def test_newlines_at_end(differ): differ.initialize('a\n\n') differ.parse('a\n', copies=1) def test_end_newline_with_decorator(differ): code = dedent('''\ @staticmethod def spam(): import json json.l''') differ.initialize(code) module = differ.parse(code + '\n', copies=1, parsers=1) decorated, endmarker = module.children assert decorated.type == 'decorated' decorator, func = decorated.children suite = func.children[-1] assert suite.type == 'suite' newline, first_stmt, second_stmt = suite.children assert first_stmt.get_code() == ' import json\n' assert second_stmt.get_code() == ' json.l\n' def test_invalid_to_valid_nodes(differ): code1 = dedent('''\ def a(): foo = 3 def b(): la = 3 else: la return foo base ''') code2 = dedent('''\ def a(): foo = 3 def b(): la = 3 if foo: latte = 3 else: la return foo base ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=3) def test_if_removal_and_reappearence(differ): code1 = dedent('''\ la = 3 if foo: latte = 3 else: la pass ''') code2 = dedent('''\ la = 3 latte = 3 else: la pass ''') code3 = dedent('''\ la = 3 if foo: latte = 3 else: la ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=4, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=1) differ.parse(code3, parsers=1, copies=1) def test_add_error_indentation(differ): code = 'if x:\n 1\n' differ.initialize(code) 
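    # The appended line is indented too far for the surrounding block, so the # diff parser has to reparse and the result keeps error leaves.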
differ.parse(code + ' 2\n', parsers=1, copies=0, expect_error_leaves=True) def test_differing_docstrings(differ): code1 = dedent('''\ def foobar(x, y): 1 return x def bazbiz(): foobar() lala ''') code2 = dedent('''\ def foobar(x, y): 2 return x + y def bazbiz(): z = foobar() lala ''') differ.initialize(code1) differ.parse(code2, parsers=3, copies=1) differ.parse(code1, parsers=3, copies=1) def test_one_call_in_function_change(differ): code1 = dedent('''\ def f(self): mro = [self] for a in something: yield a def g(self): return C( a=str, b=self, ) ''') code2 = dedent('''\ def f(self): mro = [self] def g(self): return C( a=str, t b=self, ) ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True) differ.parse(code1, parsers=2, copies=1) def test_function_deletion(differ): code1 = dedent('''\ class C(list): def f(self): def iterate(): for x in b: break return list(iterate()) ''') code2 = dedent('''\ class C(): def f(self): for x in b: break return list(iterate()) ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=0, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=0) def test_docstring_removal(differ): code1 = dedent('''\ class E(Exception): """ 1 2 3 """ class S(object): @property def f(self): return cmd def __repr__(self): return cmd2 ''') code2 = dedent('''\ class E(Exception): """ 1 3 """ class S(object): @property def f(self): return cmd return cmd2 ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=2) differ.parse(code1, parsers=2, copies=1) def test_paren_in_strange_position(differ): code1 = dedent('''\ class C: """ ha """ def __init__(self, message): self.message = message ''') code2 = dedent('''\ class C: """ ha """ ) def __init__(self, message): self.message = message ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=2, expect_error_leaves=True) differ.parse(code1, parsers=0, copies=2) def insert_line_into_code(code, index, line): lines = split_lines(code, keepends=True) lines.insert(index, line) return ''.join(lines) def test_paren_before_docstring(differ): code1 = dedent('''\ # comment """ The """ from parso import tree from parso import python ''') code2 = insert_line_into_code(code1, 1, ' ' * 16 + 'raise InternalParseError(\n') differ.initialize(code1) differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True) differ.parse(code1, parsers=2, copies=1) def test_parentheses_before_method(differ): code1 = dedent('''\ class A: def a(self): pass class B: def b(self): if 1: pass ''') code2 = dedent('''\ class A: def a(self): pass Exception.__init__(self, "x" % def b(self): if 1: pass ''') differ.initialize(code1) differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=1) def test_indentation_issues(differ): code1 = dedent('''\ class C: def f(): 1 if 2: return 3 def g(): to_be_removed pass ''') code2 = dedent('''\ class C: def f(): 1 ``something``, very ``weird``). 
if 2: return 3 def g(): to_be_removed pass ''') code3 = dedent('''\ class C: def f(): 1 if 2: return 3 def g(): pass ''') differ.initialize(code1) differ.parse(code2, parsers=2, copies=2, expect_error_leaves=True) differ.parse(code1, copies=2) differ.parse(code3, parsers=2, copies=1) differ.parse(code1, parsers=1, copies=2) def test_error_dedent_issues(differ): code1 = dedent('''\ while True: try: 1 except KeyError: if 2: 3 except IndexError: 4 5 ''') code2 = dedent('''\ while True: try: except KeyError: 1 except KeyError: if 2: 3 except IndexError: 4 something_inserted 5 ''') differ.initialize(code1) differ.parse(code2, parsers=6, copies=2, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=0) def test_random_text_insertion(differ): code1 = dedent('''\ class C: def f(): return node def g(): try: 1 except KeyError: 2 ''') code2 = dedent('''\ class C: def f(): return node Some'random text: yeah for push in plan.dfa_pushes: def g(): try: 1 except KeyError: 2 ''') differ.initialize(code1) differ.parse(code2, parsers=1, copies=1, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=1) def test_many_nested_ifs(differ): code1 = dedent('''\ class C: def f(self): def iterate(): if 1: yield t else: yield return def g(): 3 ''') code2 = dedent('''\ def f(self): def iterate(): if 1: yield t hahahaha if 2: else: yield return def g(): 3 ''') differ.initialize(code1) differ.parse(code2, parsers=2, copies=1, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=1) @pytest.mark.skipif(sys.version_info < (3, 5), reason="Async starts working in 3.5") @pytest.mark.parametrize('prefix', ['', 'async ']) def test_with_and_funcdef_in_call(differ, prefix): code1 = prefix + dedent('''\ with x: la = C( a=1, b=2, c=3, ) ''') code2 = insert_line_into_code(code1, 3, 'def y(self, args):\n') differ.initialize(code1) differ.parse(code2, parsers=3, expect_error_leaves=True) differ.parse(code1, parsers=1) def test_wrong_backslash(differ): code1 = dedent('''\ def y(): 1 for x in y: continue ''') code2 = insert_line_into_code(code1, 3, '\\.whl$\n') differ.initialize(code1) differ.parse(code2, parsers=2, copies=2, expect_error_leaves=True) differ.parse(code1, parsers=1, copies=1) def test_comment_change(differ): differ.initialize('') def test_random_unicode_characters(differ): """ Those issues were all found with the fuzzer. 
""" differ.initialize('') differ.parse(u'\x1dĔBϞɛˁşʑ˳˻ȣſéÎ\x90̕ȟòwʘ\x1dĔBϞɛˁşʑ˳˻ȣſéÎ', parsers=1, expect_error_leaves=True) differ.parse(u'\r\r', parsers=1) differ.parse(u"˟Ę\x05À\r rúƣ@\x8a\x15r()\n", parsers=1, expect_error_leaves=True) differ.parse(u'a\ntaǁ\rGĒōns__\n\nb', parsers=1, expect_error_leaves=sys.version_info[0] == 2) s = ' if not (self, "_fi\x02\x0e\x08\n\nle"):' differ.parse(s, parsers=1, expect_error_leaves=True) differ.parse('') differ.parse(s + '\n', parsers=1, expect_error_leaves=True) differ.parse(u' result = (\r\f\x17\t\x11res)', parsers=2, expect_error_leaves=True) differ.parse('') differ.parse(' a( # xx\ndef', parsers=2, expect_error_leaves=True) @pytest.mark.skipif(sys.version_info < (2, 7), reason="No set literals in Python 2.6") def test_dedent_end_positions(differ): code1 = dedent('''\ if 1: if b: 2 c = { 5} ''') code2 = dedent('''\ if 1: if ⌟ഒᜈྡྷṭb: 2 'l': ''} c = { 5} ''') differ.initialize(code1) differ.parse(code2, parsers=1, expect_error_leaves=True) differ.parse(code1, parsers=1) def test_special_no_newline_ending(differ): code1 = dedent('''\ 1 ''') code2 = dedent('''\ 1 is ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=0) def test_random_character_insertion(differ): code1 = dedent('''\ def create(self): 1 if self.path is not None: return # 3 # 4 ''') code2 = dedent('''\ def create(self): 1 if 2: x return # 3 # 4 ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=3, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=1) def test_import_opening_bracket(differ): code1 = dedent('''\ 1 2 from bubu import (X, ''') code2 = dedent('''\ 11 2 from bubu import (X, ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=2, expect_error_leaves=True) def test_opening_bracket_at_end(differ): code1 = dedent('''\ class C: 1 [ ''') code2 = dedent('''\ 3 class C: 1 [ ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=1, expect_error_leaves=True) def test_all_sorts_of_indentation(differ): code1 = dedent('''\ class C: 1 def f(): 'same' if foo: a = b end ''') code2 = dedent('''\ class C: 1 def f(yield await %|( 'same' \x02\x06\x0f\x1c\x11 if foo: a = b end ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=4, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=3) code3 = dedent('''\ if 1: a b c d \x00 ''') differ.parse(code3, parsers=2, expect_error_leaves=True) differ.parse('') def test_dont_copy_dedents_in_beginning(differ): code1 = dedent('''\ a 4 ''') code2 = dedent('''\ 1 2 3 4 ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code1, parsers=2) def test_dont_copy_error_leaves(differ): code1 = dedent('''\ def f(n): x if 2: 3 ''') code2 = dedent('''\ def f(n): def if 1: indent x if 2: 3 ''') differ.initialize(code1) differ.parse(code2, parsers=1, expect_error_leaves=True) differ.parse(code1, parsers=2) def test_error_dedent_in_between(differ): code1 = dedent('''\ class C: def f(): a if something: x z ''') code2 = dedent('''\ class C: def f(): a dedent if other_thing: b if something: x z ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=2) def test_some_other_indentation_issues(differ): code1 = dedent('''\ class C: x def f(): "" 
copied a ''') code2 = dedent('''\ try: de a b c d def f(): "" copied a ''') differ.initialize(code1) differ.parse(code2, copies=2, parsers=1, expect_error_leaves=True) differ.parse(code1, copies=2, parsers=2) def test_open_bracket_case1(differ): code1 = dedent('''\ class C: 1 2 # ha ''') code2 = insert_line_into_code(code1, 2, ' [str\n') code3 = insert_line_into_code(code2, 4, ' str\n') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code3, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code1, copies=1, parsers=1) def test_open_bracket_case2(differ): code1 = dedent('''\ class C: def f(self): ( b c def g(self): d ''') code2 = dedent('''\ class C: def f(self): ( b c self. def g(self): d ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=2, expect_error_leaves=True) differ.parse(code1, copies=2, parsers=0, expect_error_leaves=True) def test_some_weird_removals(differ): code1 = dedent('''\ class C: 1 ''') code2 = dedent('''\ class C: 1 @property A return # x omega ''') code3 = dedent('''\ class C: 1 ; omega ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1, expect_error_leaves=True) differ.parse(code3, copies=1, parsers=2, expect_error_leaves=True) differ.parse(code1, copies=1) @pytest.mark.skipif(sys.version_info < (3, 5), reason="Async starts working in 3.5") def test_async_copy(differ): code1 = dedent('''\ async def main(): x = 3 print( ''') code2 = dedent('''\ async def main(): x = 3 print() ''') differ.initialize(code1) differ.parse(code2, copies=1, parsers=1) differ.parse(code1, copies=1, parsers=1, expect_error_leaves=True) parso-0.5.2/test/fuzz_diff_parser.py0000664000175000017500000002446413575273707017441 0ustar davedave00000000000000""" A script to find bugs in the diff parser. This script is extremely useful if changes are made to the diff parser. By running a few thousand iterations, we can assure that the diff parser is in good shape. Usage: fuzz_diff_parser.py [--pdb|--ipdb] [-l] [-n=<nr>] [-x=<nr>] random [<path>] fuzz_diff_parser.py [--pdb|--ipdb] [-l] redo [-o=<nr>] [-p] fuzz_diff_parser.py -h | --help Options: -h --help Show this screen -n, --maxtries=<nr> Maximum of random tries [default: 1000] -x, --changes=<nr> Amount of changes to be done to a file per try [default: 5] -l, --logging Prints all the logs -o, --only-last=<nr> Only runs the last n iterations; Defaults to running all -p, --print-code Print all test diffs --pdb Launch pdb when error is raised --ipdb Launch ipdb when error is raised """ from __future__ import print_function import logging import sys import os import random import pickle import parso from parso.utils import split_lines from test.test_diff_parser import _check_error_leaves_nodes _latest_grammar = parso.load_grammar(version='3.8') _python_reserved_strings = tuple( # Keywords are usually only interesting in combination with spaces after # them. We don't put a space before keywords, to avoid indentation errors. 
s + (' ' if s.isalpha() else '') for s in _latest_grammar._pgen_grammar.reserved_syntax_strings.keys() ) _random_python_fragments = _python_reserved_strings + ( ' ', '\t', '\n', '\r', '\f', 'f"', 'F"""', "fr'", "RF'''", '"', '"""', "'", "'''", ';', ' some_random_word ', '\\', '#', ) def find_python_files_in_tree(file_path): if not os.path.isdir(file_path): yield file_path return for root, dirnames, filenames in os.walk(file_path): for name in filenames: if name.endswith('.py'): yield os.path.join(root, name) def _print_copyable_lines(lines): for line in lines: line = repr(line)[1:-1] if line.endswith(r'\n'): line = line[:-2] + '\n' print(line, end='') def _get_first_error_start_pos_or_none(module): error_leaf = _check_error_leaves_nodes(module) return None if error_leaf is None else error_leaf.start_pos class LineReplacement: def __init__(self, line_nr, new_line): self._line_nr = line_nr self._new_line = new_line def apply(self, code_lines): # print(repr(self._new_line)) code_lines[self._line_nr] = self._new_line class LineDeletion: def __init__(self, line_nr): self.line_nr = line_nr def apply(self, code_lines): del code_lines[self.line_nr] class LineCopy: def __init__(self, copy_line, insertion_line): self._copy_line = copy_line self._insertion_line = insertion_line def apply(self, code_lines): code_lines.insert( self._insertion_line, # Use some line from the file. This doesn't feel totally # random, but for the diff parser it will feel like it. code_lines[self._copy_line] ) class FileModification: @classmethod def generate(cls, code_lines, change_count): return cls( list(cls._generate_line_modifications(code_lines, change_count)), # work with changed trees more than with normal ones. check_original=random.random() > 0.8, ) @staticmethod def _generate_line_modifications(lines, change_count): def random_line(include_end=False): return random.randint(0, len(lines) - (not include_end)) lines = list(lines) for _ in range(change_count): rand = random.randint(1, 4) if rand == 1: if len(lines) == 1: # We cannot delete every line, that doesn't make sense to # fuzz and it would be annoying to rewrite everything here. continue l = LineDeletion(random_line()) elif rand == 2: # Copy / Insertion # Make it possible to insert into the first and the last line l = LineCopy(random_line(), random_line(include_end=True)) elif rand in (3, 4): # Modify a line in some weird random ways. line_nr = random_line() line = lines[line_nr] column = random.randint(0, len(line)) random_string = '' for _ in range(random.randint(1, 3)): if random.random() > 0.8: # The lower characters cause way more issues. unicode_range = 0x1f if random.randint(0, 1) else 0x3000 random_string += chr(random.randint(0, unicode_range)) else: # These insertions let us understand how random # keyword/operator insertions work. Theoretically this # could also be done with unicode insertions, but the # fuzzer is just way more effective here. random_string += random.choice(_random_python_fragments) if random.random() > 0.5: # In this case we insert at a very random place that # probably breaks syntax. line = line[:column] + random_string + line[column:] else: # Here we have better chances to not break syntax, because # we really replace the line with something that has # indentation. 
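                    # (The padded indentation gives the replacement a # plausible shape, which exercises the diff parser's indentation handling # rather than just raw error recovery.)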
line = ' ' * random.randint(0, 12) + random_string + '\n' l = LineReplacement(line_nr, line) l.apply(lines) yield l def __init__(self, modification_list, check_original): self._modification_list = modification_list self._check_original = check_original def _apply(self, code_lines): changed_lines = list(code_lines) for modification in self._modification_list: modification.apply(changed_lines) return changed_lines def run(self, grammar, code_lines, print_code): code = ''.join(code_lines) modified_lines = self._apply(code_lines) modified_code = ''.join(modified_lines) if print_code: if self._check_original: print('Original:') _print_copyable_lines(code_lines) print('\nModified:') _print_copyable_lines(modified_lines) print() if self._check_original: m = grammar.parse(code, diff_cache=True) start1 = _get_first_error_start_pos_or_none(m) grammar.parse(modified_code, diff_cache=True) if self._check_original: # Also check if it's possible to "revert" the changes. m = grammar.parse(code, diff_cache=True) start2 = _get_first_error_start_pos_or_none(m) assert start1 == start2, (start1, start2) class FileTests: def __init__(self, file_path, test_count, change_count): self._path = file_path with open(file_path) as f: code = f.read() self._code_lines = split_lines(code, keepends=True) self._test_count = test_count self._code_lines = self._code_lines self._change_count = change_count self._file_modifications = [] def _run(self, grammar, file_modifications, debugger, print_code=False): try: for i, fm in enumerate(file_modifications, 1): fm.run(grammar, self._code_lines, print_code=print_code) print('.', end='') sys.stdout.flush() print() except Exception: print("Issue in file: %s" % self._path) if debugger: einfo = sys.exc_info() pdb = __import__(debugger) pdb.post_mortem(einfo[2]) raise def redo(self, grammar, debugger, only_last, print_code): mods = self._file_modifications if only_last is not None: mods = mods[-only_last:] self._run(grammar, mods, debugger, print_code=print_code) def run(self, grammar, debugger): def iterate(): for _ in range(self._test_count): fm = FileModification.generate(self._code_lines, self._change_count) self._file_modifications.append(fm) yield fm self._run(grammar, iterate(), debugger) def main(arguments): debugger = 'pdb' if arguments['--pdb'] else \ 'ipdb' if arguments['--ipdb'] else None redo_file = os.path.join(os.path.dirname(__file__), 'fuzz-redo.pickle') if arguments['--logging']: root = logging.getLogger() root.setLevel(logging.DEBUG) ch = logging.StreamHandler(sys.stdout) ch.setLevel(logging.DEBUG) root.addHandler(ch) grammar = parso.load_grammar() parso.python.diff.DEBUG_DIFF_PARSER = True if arguments['redo']: with open(redo_file, 'rb') as f: file_tests_obj = pickle.load(f) only_last = arguments['--only-last'] and int(arguments['--only-last']) file_tests_obj.redo( grammar, debugger, only_last=only_last, print_code=arguments['--print-code'] ) elif arguments['random']: # A random file is used to do diff parser checks if no file is given. # This helps us to find errors in a lot of different files. 
file_paths = list(find_python_files_in_tree(arguments['<path>'] or '.')) max_tries = int(arguments['--maxtries']) tries = 0 try: while tries < max_tries: path = random.choice(file_paths) print("Checking %s: %s tries" % (path, tries)) now_tries = min(1000, max_tries - tries) file_tests_obj = FileTests(path, now_tries, int(arguments['--changes'])) file_tests_obj.run(grammar, debugger) tries += now_tries except Exception: with open(redo_file, 'wb') as f: pickle.dump(file_tests_obj, f) raise else: raise NotImplementedError('Command is not implemented') if __name__ == '__main__': from docopt import docopt arguments = docopt(__doc__) main(arguments) parso-0.5.2/test/test_get_code.py0000664000175000017500000000512613575273707016711 0ustar davedave00000000000000import difflib import pytest from parso import parse code_basic_features = ''' """A mod docstring""" def a_function(a_argument, a_default = "default"): """A func docstring""" a_result = 3 * a_argument print(a_result) # a comment b = """ from to""" + "huhu" if a_default == "default": return str(a_result) else return None ''' def diff_code_assert(a, b, n=4): if a != b: diff = "\n".join(difflib.unified_diff( a.splitlines(), b.splitlines(), n=n, lineterm="" )) assert False, "Code does not match:\n%s\n\ncreated code:\n%s" % ( diff, b ) pass def test_basic_parsing(): """Validate the parsing features""" m = parse(code_basic_features) diff_code_assert( code_basic_features, m.get_code() ) def test_operators(): src = '5 * 3' module = parse(src) diff_code_assert(src, module.get_code()) def test_get_code(): """Use the same code that the parser also generates, to compare""" s = '''"""a docstring""" class SomeClass(object, mixin): def __init__(self): self.xy = 3.0 """statement docstr""" def some_method(self): return 1 def yield_method(self): while hasattr(self, 'xy'): yield True for x in [1, 2]: yield x def empty(self): pass class Empty: pass class WithDocstring: """class docstr""" pass def method_with_docstring(): """class docstr""" pass ''' assert parse(s).get_code() == s def test_end_newlines(): """ The Python grammar explicitly needs a newline at the end. Jedi though still wants to be able to return the exact same code without the additional new line the parser needs. """ def test(source, end_pos): module = parse(source) assert module.get_code() == source assert module.end_pos == end_pos test('a', (1, 1)) test('a\n', (2, 0)) test('a\nb', (2, 1)) test('a\n#comment\n', (3, 0)) test('a\n#comment', (2, 8)) test('a#comment', (1, 9)) test('def a():\n pass', (2, 5)) test('def a(', (1, 6)) @pytest.mark.parametrize(('code', 'types'), [ ('\r', ['endmarker']), ('\n\r', ['endmarker']) ]) def test_carriage_return_at_end(code, types): """ Adding an artificial newline created weird side effects for \r at the end of files. """ tree = parse(code) assert tree.get_code() == code assert [c.type for c in tree.children] == types assert tree.end_pos == (len(code) + 1, 0) parso-0.5.2/test/test_python_errors.py0000664000175000017500000002647713575273707020061 0ustar davedave00000000000000""" Testing if parso finds syntax errors and indentation errors. """ import sys import warnings import pytest import parso from parso._compatibility import is_pypy from .failing_examples import FAILING_EXAMPLES, indent, build_nested if is_pypy: # The errors in PyPy might be different. Just skip the module for now. 
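    # (The assertions below compare against CPython's exact SyntaxError # messages, which PyPy does not reproduce.)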
pytestmark = pytest.mark.skip() def _get_error_list(code, version=None): grammar = parso.load_grammar(version=version) tree = grammar.parse(code) return list(grammar.iter_errors(tree)) def assert_comparison(code, error_code, positions): errors = [(error.start_pos, error.code) for error in _get_error_list(code)] assert [(pos, error_code) for pos in positions] == errors @pytest.mark.parametrize('code', FAILING_EXAMPLES) def test_python_exception_matches(code): wanted, line_nr = _get_actual_exception(code) errors = _get_error_list(code) actual = None if errors: error, = errors actual = error.message assert actual in wanted # Somehow in Python 3.3 the SyntaxError().lineno is sometimes None assert line_nr is None or line_nr == error.start_pos[0] def test_non_async_in_async(): """ This example doesn't work with FAILING_EXAMPLES, because the line numbers are not always the same / incorrect in Python 3.8. """ if sys.version_info[:2] < (3, 5): pytest.skip() # Raises multiple errors in previous versions. code = 'async def foo():\n def nofoo():[x async for x in []]' wanted, line_nr = _get_actual_exception(code) errors = _get_error_list(code) if errors: error, = errors actual = error.message assert actual in wanted if sys.version_info[:2] < (3, 8): assert line_nr == error.start_pos[0] else: assert line_nr == 0 # For whatever reason this is zero in Python 3.8+ @pytest.mark.parametrize( ('code', 'positions'), [ ('1 +', [(1, 3)]), ('1 +\n', [(1, 3)]), ('1 +\n2 +', [(1, 3), (2, 3)]), ('x + 2', []), ('[\n', [(2, 0)]), ('[\ndef x(): pass', [(2, 0)]), ('[\nif 1: pass', [(2, 0)]), ('1+?', [(1, 2)]), ('?', [(1, 0)]), ('??', [(1, 0)]), ('? ?', [(1, 0)]), ('?\n?', [(1, 0), (2, 0)]), ('? * ?', [(1, 0)]), ('1 + * * 2', [(1, 4)]), ('?\n1\n?', [(1, 0), (3, 0)]), ] ) def test_syntax_errors(code, positions): assert_comparison(code, 901, positions) @pytest.mark.parametrize( ('code', 'positions'), [ (' 1', [(1, 0)]), ('def x():\n 1\n 2', [(3, 0)]), ('def x():\n 1\n 2', [(3, 0)]), ('def x():\n1', [(2, 0)]), ] ) def test_indentation_errors(code, positions): assert_comparison(code, 903, positions) def _get_actual_exception(code): with warnings.catch_warnings(): # We don't care about warnings where locals/globals misbehave here. # It's as simple as either an error or not. warnings.filterwarnings('ignore', category=SyntaxWarning) try: compile(code, '', 'exec') except (SyntaxError, IndentationError) as e: wanted = e.__class__.__name__ + ': ' + e.msg line_nr = e.lineno except ValueError as e: # The ValueError comes from byte literals in Python 2 like '\x' # that are oddly enough not SyntaxErrors. wanted = 'SyntaxError: (value error) ' + str(e) line_nr = None else: assert False, "The piece of code should raise an exception." # SyntaxError # Python 2.6 has slightly different error messages here, so skip it. if sys.version_info[:2] == (2, 6) and wanted == 'SyntaxError: unexpected EOF while parsing': wanted = 'SyntaxError: invalid syntax' if wanted == 'SyntaxError: non-keyword arg after keyword arg': # The python 3.5+ way, a bit nicer. wanted = 'SyntaxError: positional argument follows keyword argument' elif wanted == 'SyntaxError: assignment to keyword': return [wanted, "SyntaxError: can't assign to keyword", 'SyntaxError: cannot assign to __debug__'], line_nr elif wanted == 'SyntaxError: assignment to None': # Python 2.6 has a slightly different error. wanted = 'SyntaxError: cannot assign to None' elif wanted == 'SyntaxError: can not assign to __debug__': # Python 2.6 has a slightly different error. 
def _get_actual_exception(code):
    with warnings.catch_warnings():
        # We don't care about warnings where locals/globals misbehave here.
        # It's as simple as either an error or not.
        warnings.filterwarnings('ignore', category=SyntaxWarning)
        try:
            compile(code, '', 'exec')
        except (SyntaxError, IndentationError) as e:
            wanted = e.__class__.__name__ + ': ' + e.msg
            line_nr = e.lineno
        except ValueError as e:
            # The ValueError comes from byte literals in Python 2 like '\x'
            # that are oddly enough not SyntaxErrors.
            wanted = 'SyntaxError: (value error) ' + str(e)
            line_nr = None
        else:
            assert False, "The piece of code should raise an exception."

    # SyntaxError: Python 2.6 has slightly different error messages here,
    # so skip it.
    if sys.version_info[:2] == (2, 6) \
            and wanted == 'SyntaxError: unexpected EOF while parsing':
        wanted = 'SyntaxError: invalid syntax'

    if wanted == 'SyntaxError: non-keyword arg after keyword arg':
        # The Python 3.5+ way, a bit nicer.
        wanted = 'SyntaxError: positional argument follows keyword argument'
    elif wanted == 'SyntaxError: assignment to keyword':
        return [wanted, "SyntaxError: can't assign to keyword",
                'SyntaxError: cannot assign to __debug__'], line_nr
    elif wanted == 'SyntaxError: assignment to None':
        # Python 2.6 has a slightly different error.
        wanted = 'SyntaxError: cannot assign to None'
    elif wanted == 'SyntaxError: can not assign to __debug__':
        # Python 2.6 has a slightly different error.
        wanted = 'SyntaxError: cannot assign to __debug__'
    elif wanted == 'SyntaxError: can use starred expression only as assignment target':
        # Python 3.3/3.4 have a bit of a different warning than 3.5/3.6 in
        # certain places. But in others this error makes sense.
        return [wanted, "SyntaxError: can't use starred expression here"], line_nr
    elif wanted == 'SyntaxError: f-string: unterminated string':
        wanted = 'SyntaxError: EOL while scanning string literal'
    elif wanted == 'SyntaxError: f-string expression part cannot include a backslash':
        return [
            wanted,
            "SyntaxError: EOL while scanning string literal",
            "SyntaxError: unexpected character after line continuation character",
        ], line_nr
    elif wanted == "SyntaxError: f-string: expecting '}'":
        wanted = 'SyntaxError: EOL while scanning string literal'
    elif wanted == 'SyntaxError: f-string: empty expression not allowed':
        wanted = 'SyntaxError: invalid syntax'
    elif wanted == "SyntaxError: f-string expression part cannot include '#'":
        wanted = 'SyntaxError: invalid syntax'
    elif wanted == "SyntaxError: f-string: single '}' is not allowed":
        wanted = 'SyntaxError: invalid syntax'
    return [wanted], line_nr


def test_default_except_error_position():
    # For this error the position reported by CPython seemed to be one line
    # off, but that doesn't really matter.
    code = 'try: pass\nexcept: pass\nexcept X: pass'
    wanted, line_nr = _get_actual_exception(code)
    error, = _get_error_list(code)
    assert error.message in wanted
    assert line_nr != error.start_pos[0]
    # I think this is the better position.
    assert error.start_pos[0] == 2


def test_statically_nested_blocks():
    def build(code, depth):
        if depth == 0:
            return code

        new_code = 'if 1:\n' + indent(code)
        return build(new_code, depth - 1)

    def get_error(depth, add_func=False):
        code = build('foo', depth)
        if add_func:
            code = 'def bar():\n' + indent(code)
        errors = _get_error_list(code)
        if errors:
            assert errors[0].message == 'SyntaxError: too many statically nested blocks'
            return errors[0]
        return None

    assert get_error(19) is None
    assert get_error(19, add_func=True) is None

    assert get_error(20)
    assert get_error(20, add_func=True)


def test_future_import_first():
    def is_issue(code, *args):
        code = code % args
        return bool(_get_error_list(code))

    i1 = 'from __future__ import division'
    i2 = 'from __future__ import absolute_import'
    assert not is_issue(i1)
    assert not is_issue(i1 + ';' + i2)
    assert not is_issue(i1 + '\n' + i2)
    assert not is_issue('"";' + i1)
    assert not is_issue('""\n' + i1)
    assert not is_issue('""\n%s\n%s', i1, i2)
    assert not is_issue('""\n%s;%s', i1, i2)
    assert not is_issue('"";%s;%s ', i1, i2)
    assert not is_issue('"";%s\n%s ', i1, i2)
    assert is_issue('1;' + i1)
    assert is_issue('1\n' + i1)
    assert is_issue('"";1\n' + i1)
    assert is_issue('""\n%s\nfrom x import a\n%s', i1, i2)
    assert is_issue('%s\n""\n%s', i1, i2)


def test_named_argument_issues(works_not_in_py):
    message = works_not_in_py.get_error_message('def foo(*, **dict): pass')
    message = works_not_in_py.get_error_message('def foo(*): pass')
    if works_not_in_py.version.startswith('2'):
        assert message == 'SyntaxError: invalid syntax'
    else:
        assert message == 'SyntaxError: named arguments must follow bare *'

    works_not_in_py.assert_no_error_in_passing('def foo(*, name): pass')
    works_not_in_py.assert_no_error_in_passing('def foo(bar, *, name=1): pass')
    works_not_in_py.assert_no_error_in_passing('def foo(bar, *, name=1, **dct): pass')
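
# A minimal sketch of the rule exercised by test_future_import_first above
# (helper name and snippets are ours, not collected by pytest): __future__
# imports must precede everything except the module docstring.
def _example_future_import_rule():
    assert not _get_error_list('"doc"\nfrom __future__ import division')
    assert _get_error_list('x = 1\nfrom __future__ import division')
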
def test_escape_decode_literals(each_version):
    """
    We use internal functions to ensure that unicode/bytes escaping produces
    no syntax errors. Here we do a bit of quality assurance that this works
    across versions, because the internal function might change over time.
    """
    def get_msg(end, to=1):
        base = "SyntaxError: (unicode error) 'unicodeescape' " \
               "codec can't decode bytes in position 0-%s: " % to
        return base + end

    def get_msgs(escape):
        return (get_msg('end of string in escape sequence'),
                get_msg(r"truncated %s escape" % escape))

    error, = _get_error_list(r'u"\x"', version=each_version)
    assert error.message in get_msgs(r'\xXX')

    error, = _get_error_list(r'u"\u"', version=each_version)
    assert error.message in get_msgs(r'\uXXXX')

    error, = _get_error_list(r'u"\U"', version=each_version)
    assert error.message in get_msgs(r'\UXXXXXXXX')

    error, = _get_error_list(r'u"\N{}"', version=each_version)
    assert error.message == get_msg(r'malformed \N character escape', to=2)

    error, = _get_error_list(r'u"\N{foo}"', version=each_version)
    assert error.message == get_msg(r'unknown Unicode character name', to=6)

    # Finally bytes.
    error, = _get_error_list(r'b"\x"', version=each_version)
    wanted = r'SyntaxError: (value error) invalid \x escape'
    if sys.version_info >= (3, 0):
        # The positioning information is only available in Python 3.
        wanted += ' at position 0'
    assert error.message == wanted


def test_too_many_levels_of_indentation():
    assert not _get_error_list(build_nested('pass', 99))
    assert _get_error_list(build_nested('pass', 100))
    base = 'def x():\n if x:\n'
    assert not _get_error_list(build_nested('pass', 49, base=base))
    assert _get_error_list(build_nested('pass', 50, base=base))


@pytest.mark.parametrize(
    'code', [
        "f'{*args,}'",
        r'f"\""',
        r'f"\\\""',
        r'fr"\""',
        r'fr"\\\""',
        r"print(f'Some {x:.2f} and some {y}')",
    ]
)
def test_valid_fstrings(code):
    assert not _get_error_list(code, version='3.6')


@pytest.mark.parametrize(
    'code', [
        'a = (b := 1)',
        '[x4 := x ** 5 for x in range(7)]',
        '[total := total + v for v in range(10)]',
        'while chunk := file.read(2):\n pass',
        'numbers = [y := math.factorial(x), y**2, y**3]',
    ]
)
def test_valid_namedexpr(code):
    assert not _get_error_list(code, version='3.8')


@pytest.mark.parametrize(
    ('code', 'message'), [
        ("f'{1+}'", 'invalid syntax'),
        (r'f"\"', 'invalid syntax'),
        (r'fr"\"', 'invalid syntax'),
    ]
)
def test_invalid_fstrings(code, message):
    """
    Some f-string errors are handled differently in 3.6 than in other
    versions. Therefore check specifically for these errors here.
    """
    error, = _get_error_list(code, version='3.6')
    assert message in error.message


@pytest.mark.parametrize(
    'code', [
        "from foo import (\nbar,\n rab,\n)",
        "from foo import (bar, rab, )",
    ]
)
def test_trailing_comma(code):
    errors = _get_error_list(code)
    assert not errors
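
# A small illustrative addendum (not a collected test; the helper is ours):
# the version argument routes code to a version-specific grammar, which is
# why the f-string and named-expression tests above pin '3.6' and '3.8'.
def _example_version_pinning():
    # A named expression is a syntax error for a 3.6 grammar...
    assert _get_error_list('a = (b := 1)', version='3.6')
    # ...but parses cleanly with a 3.8 grammar.
    assert not _get_error_list('a = (b := 1)', version='3.8')
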
""" grammar = parso.load_grammar(version=each_version) path = os.path.dirname(os.path.dirname(__file__)) + '/parso' for file in get_python_files(path): tree = grammar.parse(path=file) errors = list(grammar.iter_errors(tree)) assert not errors parso-0.5.2/test/test_pep8.py0000664000175000017500000000160013575273707016005 0ustar davedave00000000000000import parso def issues(code): grammar = parso.load_grammar() module = parso.parse(code) return grammar._get_normalizer_issues(module) def test_eof_newline(): def assert_issue(code): found = issues(code) assert len(found) == 1 issue, = found assert issue.code == 292 assert not issues('asdf = 1\n') assert_issue('asdf = 1') assert_issue('asdf = 1\n# foo') assert_issue('# foobar') assert_issue('') assert_issue('foo = 1 # comment') def test_eof_blankline(): def assert_issue(code): found = issues(code) assert len(found) == 1 issue, = found assert issue.code == 391 assert_issue('asdf = 1\n\n') assert_issue('# foobar\n\n') assert_issue('\n\n') def test_shebang(): assert not issues('#!\n') assert not issues('#!/foo\n') assert not issues('#! python\n') parso-0.5.2/tox.ini0000664000175000017500000000076713575273707014071 0ustar davedave00000000000000[tox] envlist = {py26,py27,py33,py34,py35,py36,py37,py38} [testenv] extras = testing deps = py26,py33: pytest>=3.0.7,<3.3 py27,py34: pytest<3.3 py26,py33: setuptools<37 coverage: coverage setenv = # https://github.com/tomchristie/django-rest-framework/issues/1957 # tox corrupts __pycache__, solution from here: PYTHONDONTWRITEBYTECODE=1 coverage: TOX_TESTENV_COMMAND=coverage run -m pytest commands = {env:TOX_TESTENV_COMMAND:pytest} {posargs} coverage: coverage report parso-0.5.2/setup.cfg0000664000175000017500000000041313575273727014365 0ustar davedave00000000000000[bdist_wheel] universal = 1 [flake8] max-line-length = 100 ignore = # do not use bare 'except' E722, # don't know why this was ever even an option, 1+1 should be possible. E226, # line break before binary operator W503, [egg_info] tag_build = tag_date = 0 parso-0.5.2/conftest.py0000664000175000017500000001154213575273707014746 0ustar davedave00000000000000import re import tempfile import shutil import logging import sys import os import pytest import parso from parso import cache from parso.utils import parse_version_string collect_ignore = ["setup.py"] VERSIONS_2 = '2.6', '2.7' VERSIONS_3 = '3.3', '3.4', '3.5', '3.6', '3.7', '3.8' @pytest.fixture(scope='session') def clean_parso_cache(): """ Set the default cache directory to a temporary directory during tests. Note that you can't use built-in `tmpdir` and `monkeypatch` fixture here because their scope is 'function', which is not used in 'session' scope fixture. This fixture is activated in ../pytest.ini. 
""" old = cache._default_cache_path tmp = tempfile.mkdtemp(prefix='parso-test-') cache._default_cache_path = tmp yield cache._default_cache_path = old shutil.rmtree(tmp) def pytest_addoption(parser): parser.addoption("--logging", "-L", action='store_true', help="Enables the logging output.") def pytest_generate_tests(metafunc): if 'normalizer_issue_case' in metafunc.fixturenames: base_dir = os.path.join(os.path.dirname(__file__), 'test', 'normalizer_issue_files') cases = list(colllect_normalizer_tests(base_dir)) metafunc.parametrize( 'normalizer_issue_case', cases, ids=[c.name for c in cases] ) elif 'each_version' in metafunc.fixturenames: metafunc.parametrize('each_version', VERSIONS_2 + VERSIONS_3) elif 'each_py2_version' in metafunc.fixturenames: metafunc.parametrize('each_py2_version', VERSIONS_2) elif 'each_py3_version' in metafunc.fixturenames: metafunc.parametrize('each_py3_version', VERSIONS_3) elif 'version_ge_py36' in metafunc.fixturenames: metafunc.parametrize('version_ge_py36', ['3.6', '3.7', '3.8']) elif 'version_ge_py38' in metafunc.fixturenames: metafunc.parametrize('version_ge_py38', ['3.8']) class NormalizerIssueCase(object): """ Static Analysis cases lie in the static_analysis folder. The tests also start with `#!`, like the goto_definition tests. """ def __init__(self, path): self.path = path self.name = os.path.basename(path) match = re.search(r'python([\d.]+)\.py', self.name) self.python_version = match and match.group(1) def colllect_normalizer_tests(base_dir): for f_name in os.listdir(base_dir): if f_name.endswith(".py"): path = os.path.join(base_dir, f_name) yield NormalizerIssueCase(path) def pytest_configure(config): if config.option.logging: root = logging.getLogger() root.setLevel(logging.DEBUG) ch = logging.StreamHandler(sys.stdout) ch.setLevel(logging.DEBUG) #formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') #ch.setFormatter(formatter) root.addHandler(ch) class Checker(): def __init__(self, version, is_passing): self.version = version self._is_passing = is_passing self.grammar = parso.load_grammar(version=self.version) def parse(self, code): if self._is_passing: return parso.parse(code, version=self.version, error_recovery=False) else: self._invalid_syntax(code) def _invalid_syntax(self, code): with pytest.raises(parso.ParserSyntaxError): module = parso.parse(code, version=self.version, error_recovery=False) # For debugging print(module.children) def get_error(self, code): errors = list(self.grammar.iter_errors(self.grammar.parse(code))) assert bool(errors) != self._is_passing if errors: return errors[0] def get_error_message(self, code): error = self.get_error(code) if error is None: return return error.message def assert_no_error_in_passing(self, code): if self._is_passing: module = self.grammar.parse(code) assert not list(self.grammar.iter_errors(module)) @pytest.fixture def works_not_in_py(each_version): return Checker(each_version, False) @pytest.fixture def works_in_py2(each_version): return Checker(each_version, each_version.startswith('2')) @pytest.fixture def works_ge_py27(each_version): version_info = parse_version_string(each_version) return Checker(each_version, version_info >= (2, 7)) @pytest.fixture def works_ge_py3(each_version): version_info = parse_version_string(each_version) return Checker(each_version, version_info >= (3, 0)) @pytest.fixture def works_ge_py35(each_version): version_info = parse_version_string(each_version) return Checker(each_version, version_info >= (3, 5)) @pytest.fixture def 
@pytest.fixture
def works_ge_py38(each_version):
    version_info = parse_version_string(each_version)
    return Checker(each_version, version_info >= (3, 8))
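
# A minimal usage sketch of the Checker/fixture pattern above (illustrative
# only; the helper is ours and not part of the suite): a passing checker
# asserts that the code parses without errors for its grammar version.
def _example_checker_usage():
    checker = Checker('3.8', True)  # named expressions are valid in 3.8
    assert checker.get_error('a = (b := 1)') is None
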