--- baron-0.10.1/.env ---
#!/bin/zsh
source $(/bin/readlink -f ${0%/*})/ve/bin/activate

--- baron-0.10.1/.gitignore ---
*.pyc
*.swp
*.swo
__pycache__
.coverage
.pytest_cache/
baron.egg-info/

--- baron-0.10.1/.travis.yml ---
language: python
install: "pip install -r requirements.txt"
python:
  - "2.7"
  - "3.4"
  - "3.5"
  - "3.6"
  - "3.7"
script: "py.test tests"
notifications:
  irc: "chat.freenode.net#baron"

--- baron-0.10.1/CHANGELOG ---
Changelog
=========
0.10.1 (2021-12-08)
-------------------
- bug fix: in "a._" the "._" part was incorrectly recognized as a float, by bram
0.10 (2021-12-08)
-----------------
- bug fix: baron is now able to parse "class A(b, c=d): pass" by bram
- some project cleanup and integration of tox with good practices like flake8 and check-manifest
- bug fix for missing edge case in inner formatting by EhsanKia https://github.com/PyCQA/baron/pull/156
- complete support for floats with underscores in them by tamentis https://github.com/PyCQA/baron/pull/157
- bug fix for failure of parsing of "{**a}" by wavenator https://github.com/PyCQA/baron/pull/161
0.9 (2019-02-01)
----------------
First version of full python 3.7 grammar support.
- BREAKING CHANGE: annotations are now members of {def,list,dict}_argument to flatten the data structure
- add support for ... in from import by bram
- add support for return annotation by bram
- add support for exec function by bram
- add support for variable annotation https://github.com/PyCQA/baron/pull/145 by scottbelden and additional work by bram
- add support for *var expressions in tuple assignment by bram
- add support for raise from https://github.com/PyCQA/baron/pull/120 by odcinek with additional work by bram
- add support for arglist usage in class definition inheritance by bram
- bug fix by kyleatmakrs https://github.com/PyCQA/baron/pull/126/commits/91e839a228293698cc755a7f28afeca2669cb66e
0.8 (2018-10-29)
----------------
- add typed parameters support https://github.com/PyCQA/baron/pull/140 by Scott Belden and additional work by bram
0.7 (2018-08-21)
----------------
- fix line continuation https://github.com/PyCQA/baron/pull/92 by ibizaman
- handle corrupt cache file situation https://github.com/PyCQA/baron/pull/76 by ryu2
- fix special crashing edge case in indentation marker https://github.com/PyCQA/bar by Ahuge
- fixed incorrect tokenization case "d*e-1". Fixes #85 https://github.com/PyCQA/baron/pull/107 by boxed
- fix endl handling inside groupings by kyleatmakrs (extracted from https://github.com/PyCQA/baron/pull/126)
Python 3:
- python 3 parsing extracted from https://github.com/PyCQA/baron/pull/126
- support ellipsis https://github.com/PyCQA/baron/pull/121 by odcinek
- support matrix operator https://github.com/PyCQA/baron/pull/117 by odcinek
- support f-strings https://github.com/PyCQA/baron/pull/110 by odcinek
- support numeric literals https://github.com/PyCQA/baron/pull/111 by odcinek
- support nonlocal statement https://github.com/PyCQA/baron/pull/112 by odcinek
- support keyword only markers https://github.com/PyCQA/baron/pull/108 by boxed
- support yield from statement https://github.com/PyCQA/baron/pull/113 by odcinek and additional work by bram
- support async/await statements https://github.com/PyCQA/baron/pull/114 by odcinek and additional work by bram
0.6.6 (2017-06-12)
------------------
- fix situation where a deindented comment between an if and elif/else broke
  parsing, see https://github.com/PyCQA/baron/issues/87
- around 35-40% to 75% parsing speed improvement on big files by duncf
  https://github.com/PyCQA/baron/pull/99
0.6.5 (2017-01-26)
------------------
- fix: the previous regression fix was itself broken
0.6.4 (2017-01-14)
------------------
- fix regression in the case where a comment follows the ":" of an if/def/other
0.6.3 (2017-01-02)
------------------
- group formatting at start of file or preceded by space with comment
0.6.2 (2016-03-18)
------------------
- fix race condition when generating parser cache file
- make all user-facing errors inherit from the same BaronError class
- fix: dotted_name and float_exponant_complex were missing from
  nodes_rendering_order
0.6.1 (2015-01-31)
------------------
- fix: the string had a greedy behavior when grouping the string tokens
  surrounding it (for string chains), which ended up creating an inconsistency in
  the way strings were grouped in general
- fix: better number parsing handling, although not everything is fixed yet
- make all (expected) errors inherit from the same BaronError class
- fix: parsing fails correctly if a quoted string is not closed
0.6 (2014-12-11)
----------------
- FST structure modification: def_argument_tuple is no more and all arguments
  now have a coherent structure:
  * def_argument node name attribute has been renamed to target, like in assign
  * target attribute now points to a dict, not to a string
  * old name -> string is now target -> name_node
  * def_argument_tuple is now a def_argument where target points to a tuple
  * this specific tuple will only have name, comma and tuple members (no more
    def_argument for name)
- new node: long; before, int and long were merged but that was causing problems
0.5 (2014-11-10)
----------------
- rename "funcdef" node to "def" node to be way more intuitive.
0.4 (2014-09-29)
----------------
- new rendering type in the nodes_rendering_order dictionary: string. This
  removes an ambiguity where a key could be pointing to either a dict or a string,
  thus forcing third party tools to guess.
0.3.1 (2014-09-04)
------------------
- setup.py wasn't working when wheel wasn't used because the CHANGELOG file
  wasn't included in the MANIFEST.in
0.3 (2014-08-21)
----------------
- path becomes a simple list and is easier to deal with
- bounding box allows you to know the left most and right most position
  of a node, see https://baron.readthedocs.io/en/latest/#bounding-box
- redbaron is classified as supporting python3
  https://github.com/PyCQA/baron/pull/51
- ensure that when a key is a string, its empty value is an empty string and
  not None, to avoid breaking libs that use introspection to guess the type of
  the key
- key renaming in the FST: "delimiteur" -> "delimiter"
- name_as_name and dotted_as_name nodes don't have the "as" key anymore as it
  was useless (it can be deduced from the state of the "target" key)
- dotted_name node doesn't exist anymore, its existence was unjustified. In
  import, from_import and decorator nodes, it has been replaced: the key that
  pointed to a dict (with only a list inside of it) is now a simple list
- dumps now accepts a strict boolean argument to check the validity of the FST
  on dumping, but this isn't much of a public feature and the API should
  probably change in the future
- name_as_name and dotted_as_name empty value for target is now an empty string
  and not None since this is a string type key
- boundingbox now includes the newlines at the end of a node
- all raised exceptions inherit from a common base exception to ease try/except
  constructions
- Position's left and right functions become properties and thus
  attributes
- Position objects can be compared to other Position objects or any
  iterables
- make_position and make_bounding_box functions are deleted in favor of
  always using the corresponding class' constructor
0.2 (2014-06-11)
----------------
- Baron now provides documentation on https://baron.readthedocs.io
- feature: baron now runs in python3 (*but* doesn't implement the full python3
  grammar yet) by Pierre Penninckx https://github.com/ibizaman
- feature: drop the usage of ast.py to find print_function; this allows any
  version of python to parse any other version of python, also by Pierre
  Penninckx
- fix: rare bug where a comment ended up being confused with an indentation level
- 2 new helpers: show_file and show_node, see https://baron.readthedocs.io/en/latest/#show-file
  and https://baron.readthedocs.io/en/latest/#show-node
- new dictionary that provides the information on how to render a FST node:
  nodes_rendering_order, see https://baron.readthedocs.io/en/latest/#rendering-the-fst
- new utilities to find a node, see https://baron.readthedocs.io/en/latest/#locate-a-node
- new generic class that provides templates to work on the FST, see
  https://baron.readthedocs.io/en/latest/#rendering-the-fst
0.1.3 (2014-04-13)
------------------
- the syntactic sugar notation for sets wasn't handled by the dumper (apparently
  no one in the pypi top 100 uses it)
0.1.2 (2014-04-08)
------------------
- baron.dumps now accepts a single FST node; it was only working with a list of
  FST nodes
- don't add an endl node at the end if not present in the input string
- de-uniformise call_arguments and function_arguments nodes, the uniformity was
  just creating more problems than anything else
- fix https://github.com/PyCQA/redbaron/issues/4
- fix the fact that baron can't parse "{1,}" (while "{1}" was working)
0.1.1 (2014-03-23)
------------------
- It appears that I don't know how to write MANIFEST.in correctly
0.1 (2014-03-22)
----------------
- Init

--- baron-0.10.1/LICENSE ---
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

--- baron-0.10.1/MANIFEST.in ---
include *.md CHANGELOG LICENSE
include tox.ini
exclude *.txt
recursive-include docs *
graft tests
prune docs/_build
prune grammar
global-exclude */__pycache__/*

--- baron-0.10.1/README.md ---
Introduction
============
Baron is a Full Syntax Tree (FST) library for Python. Unlike an [AST](https://en.wikipedia.org/wiki/Abstract_syntax_tree), which
drops some syntax information in the process of its creation (like empty lines,
comments, formatting), an FST keeps everything and guarantees the operation
fst\_to\_code(code\_to\_fst(source\_code)) == source\_code.
Roadmap
=======
Current roadmap is as boring as needed:
* bug fixes
* new small features (walker pattern, maybe code generation) and performance improvements
Installation
============
    pip install baron
Basic Usage
===========
```python
from baron import parse, dumps
fst = parse(source_code_string)
source_code_string == dumps(fst)
```
Unless you want to do low level things, **use
[RedBaron](https://github.com/PyCQA/redbaron) instead of using Baron
directly**. Think of Baron as the "bytecode of python source code" and RedBaron
as some sort of usable layer on top of it.
If you don't know what Baron is or don't understand yet why it might be
useful for you, read the [« Why is this important? » section](#why-is-this-important).
Documentation
=============
Baron documentation is available on [Read The Docs](http://baron.readthedocs.io/en/latest/).
Contributing
============
If you want to implement new grammar elements for newer python versions, here
are the documented steps for that:
https://github.com/PyCQA/baron/blob/master/add_new_grammar.md
Also note that reviewing most grammar modifications takes several hours of
advanced focusing (we can't really afford bugs here), so don't despair if your PR
seems to be hanging around; sorry for that :/
And thanks in advance for your work!
Financial support
=================
Baron and RedBaron are very advanced pieces of engineering that require a lot
of time and concentration to work on. Until the end of 2018, the development
was full volunteer work mostly done by [Bram](https://github.com/psycojoker),
but now, to reach the next level and bring those projects to the stability and
quality you expect, we need your support.
You can join our contributors and sponsors on our transparent
[OpenCollective](https://opencollective.com/redbaron); every contribution will
count and will be mainly used to work on the projects' stability and quality, but
also to continue, on the side, the R&D part of those projects.
Our supporters
--------------
[See our supporters on OpenCollective](https://opencollective.com/redbaron/tiers/)
Why is this important?
======================
The usage of an FST might not be obvious at first sight, so let's consider a
series of problems to illustrate it. Let's say that you want to write a program that will:
* rename a variable in a source file... without clashing with things that are not a variable (example: stuff inside a string)
* inline a function/method
* extract a function/method from a series of lines of code
* split a class into several classes
* split a file into several modules
* convert your whole code base from one ORM to another
* do custom refactoring operation not implemented by IDE/rope
* implement the class browser of smalltalk for python (the whole thing where you can edit the code of the methods, not just show it)
It is very likely that you will end up with the awkward feeling of writing
clumsy, weak code that is very likely to break because you didn't think about
all the annoying special cases, while the formatting keeps bothering you. You may
end up playing with [ast.py](https://docs.python.org/3/library/ast.html) until
you realize that it removes too much information to be suitable for those
situations. You will probably ditch the task as simply too complicated and
really not worth the effort. You are missing a good abstraction that will take
care of all of the code structure and formatting for you so you can concentrate
on your task.
The FST tries to be this abstraction. With it you can now work on a tree which
represents your code with its formatting. Moreover, since it is the exact
representation of your code, modifying it and converting it back to a string
will give you back your code only modified where you have modified the tree.
Said another way, what I'm trying to achieve with Baron is a paradigm change in
which writing code that modifies code becomes a realistic task that is worth
the price (I'm not saying a simple task, but a realistic one: it's still a
complex task).
Other
-----
Having an FST (or at least a good abstraction built on it) also makes it easier
to do code generation and code analysis, although those two operations are already
quite feasible (using [ast.py](https://docs.python.org/3/library/ast.html)
and a templating engine, for example).
Some technical details
======================
Baron produces an FST in the form of JSON (and by JSON I mean Python lists
and dicts that can be dumped to JSON) for maximum interoperability.
Baron's FST is quite similar to the Python AST, with some modifications to make it
more intuitive to humans, since the Python AST was made for the CPython interpreter.
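For instance, here is roughly what the FST of a tiny assignment looks like. This is a simplified sketch: the keys shown are abbreviated and the real output contains additional formatting keys.

```python
from baron import parse

fst = parse("a = 1")
# Roughly (illustrative, abbreviated):
# [{'type': 'assignment',
#   'target': {'type': 'name', 'value': 'a'},
#   'value': {'type': 'int', 'value': '1', ...},
#   ...}]

import json
print(json.dumps(fst, indent=2))  # the FST is plain lists/dicts, so this just works
```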
Since playing directly with JSON is a bit raw, I'm going to build an abstraction
on top of it that will look like BeautifulSoup/jQuery.
State of the project
====================
Currently, Baron has been tested on the top 100 projects and the FST converts
back exactly into the original source code. So it can be considered quite
stable, but it is far from having been battle tested.
Since the project is very young and no one is using it yet except my own
projects, I'm open to changes of the FST nodes, but I will quickly become
conservative once it gets some adoption and will probably accept
modifying it only once or twice in the future, with clear indications on how to
migrate.
Baron supports the python 2 grammar and the python 3 grammar up to python 3.7.
Tests
=====
Run either `py.test tests/` or `nosetests` in the baron directory.
Community
=========
You can reach us on [irc.freenode.net#baron](https://webchat.freenode.net/?channels=%23baron) or [irc.freenode.net##python-code-quality](https://webchat.freenode.net/?channels=%23%23python-code-quality).
Code of Conduct
===============
As a member of [PyCQA](https://github.com/PyCQA), Baron follows its [Code of Conduct](http://meta.pycqa.org/en/latest/code-of-conduct.html).
Misc
====
[Old blog post announcing the project.](http://worlddomination.be/blog/2013/the-baron-project-part-1-what-and-why.html) Not very up to date.

--- baron-0.10.1/add_new_grammar.md ---
# How to modify what Baron can parse
This is a todo list of things to do to allow baron to parse new syntax.
This is the full version; for minor things like adding a new binary operator (like the "@" for matrix multiplication), all of this is not needed.
# Checklists
### Preparation
- [ ] first of all start by comparing [the grammar from python 2.7](https://docs.python.org/2/reference/grammar.html) with the [targeted version](https://docs.python.org/3.7/reference/grammar.html) (also available in https://github.com/PyCQA/baron/tree/master/grammar)
- [ ] check the reference page here https://baron.readthedocs.io/en/latest/grammar.html to see if things are already planned
- [ ] look at [baron's grammar](https://github.com/PyCQA/baron/blob/master/grammar/baron_grammar) to check that it's not colliding with something already done (very low chance)
- [ ] does the lexer need to be modified? This is the case for new keywords and new statements
- [ ] be mentally prepared that you'll need to write tests for everything
### Modification
Lexer:
- [ ] if you need to modify the lexer, start with it; check all the lexer steps (found here: https://github.com/PyCQA/baron/blob/master/baron/baron.py#L69, the correct line might change in the future, it's the tokenize function) -- see the sketch after this list
  - `split` only needs to be modified if python ever introduces a new character, like "?" for example
  - `group` is for when 2 characters need to be merged, like "?" and "="
  - `_tokenize` is for new tokens, obviously, like new keywords or new grouped characters
  - `space_group` will need to be modified for new keywords or statements; it's quite tricky, as it groups spaces onto neighbour tokens (they will be unfolded during grammar parsing) following the general rule of "a node needs to be responsible for its formatting"
  - `inner_group` is a variation of the previous one for the case of tokens between `() [] {}`
  - `mark_indentation` handles inserting `INDENT`/`DEDENT` tokens; it's very unlikely you'll ever need to work on this one, except if python includes new statements (like the `with` statement)
- [ ] have tests for everything regarding the lexer (if possible in a TDD fashion)
Grammar:
The hardest part is going to be correctly designing the extension of the tree, with new nodes or by modifying existing ones (if needed).
Before anything: RedBaron (and not Baron) is an API design project that aims to make writing code that analyses and modifies source code as easy as possible, and Baron is here to support this task. This means that the tree is designed to be intuitive for humans, not easy to handle for interpreters.
Therefore, when you design a modification or an addition to the tree, you need to answer the question: what will be the easiest to handle and the most intuitive for humans?
Here is some general advice:
- when it makes sense, prefer a flat structure with a lower number of nodes instead of sub-nodes. For example: for the "async" keyword, extend the related nodes instead of creating a sub-node
- prefer lists over single-child series of branches of a tree; for example, the python code "a.b.c.d" shouldn't be structured as "d->c->b->a" like in ast.py but as "[a, b, c, d]" (see the sketch below)
- use attribute and node names as close as possible to the python keywords and to what is used in the python community (and close to the grammar)
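For example, a rough sketch of the two designs for `a.b.c` (node names abbreviated; the flat version is in the spirit of Baron's `atomtrailers` node):

```python
# ast.py-style nesting, hard for humans to walk:
# Attribute(attr='c', value=Attribute(attr='b', value=Name(id='a')))

# Baron-style flat list, easy to iterate over and modify:
# {'type': 'atomtrailers',
#  'value': [{'type': 'name', 'value': 'a'},
#            {'type': 'dot', ...},
#            {'type': 'name', 'value': 'b'},
#            {'type': 'dot', ...},
#            {'type': 'name', 'value': 'c'}]}
```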
Regarding the implementation:
- [ ] try to find the right file in which to put your code; the name and content should be enough for that https://github.com/PyCQA/baron/tree/master/baron
- [ ] write/update tests for everything regarding producing the new additions to the tree
- [ ] implement the new grammar (if relevant)
- [ ] modify the rendering tree in [render.py](https://github.com/PyCQA/baron/blob/master/baron/render.py)
- [ ] write rendering tests and, if needed, rendering-after-modification tests, for everything here https://github.com/PyCQA/baron/blob/master/tests/test_dumper.py
And you should be good. Congrats if you reached this point!
### Completion, documentation
- [ ] modify the reference page https://baron.readthedocs.io/en/latest/grammar.html
- [ ] [modify baron's grammar](https://github.com/PyCQA/baron/blob/master/grammar/baron_grammar)
- [ ] consider implementing the new additions in [RedBaron](https://github.com/pycqa/redbaron)
- [ ] update CHANGELOG

--- baron-0.10.1/baron/__init__.py ---
from . import grouper  # noqa
from . import spliter  # noqa
from .baron import parse, tokenize  # noqa
from .dumper import dumps  # noqa
from .inner_formatting_grouper import GroupingError, UnExpectedFormattingToken  # noqa
from .parser import ParsingError  # noqa
from .render import nodes_rendering_order  # noqa
from .spliter import UntreatedError  # noqa
from .utils import BaronError  # noqa
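
# Illustrative usage of the names re-exported above (kept in comments,
# doctest-style, so importing this package stays side effect free):
#
#     >>> from baron import parse, dumps
#     >>> fst = parse("a = 1")
#     >>> dumps(fst) == "a = 1"
#     True
#
# BaronError is the common base of GroupingError, ParsingError,
# UntreatedError, etc., so a single except clause can catch them all.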

--- baron-0.10.1/baron/baron.py ---
from .spliter import split
from .grouper import group
from .tokenizer import tokenize as _tokenize
from .formatting_grouper import group as space_group
from .future import has_print_function, replace_print_by_name
from .grammator import generate_parse
from .indentation_marker import mark_indentation
from .inner_formatting_grouper import group as inner_group
from .parser import ParsingError

parse_tokens = generate_parse(False)
parse_tokens_print_function = generate_parse(True)


def _parse(tokens, print_function):
    parser = parse_tokens if not print_function else parse_tokens_print_function
    try:
        try:
            return parser(tokens)
        except ParsingError:
            # swap parsers for the print_function situation where I failed to detect it
            parser = parse_tokens if print_function else parse_tokens_print_function
            return parser(tokens)
    except ParsingError:
        raise
    except Exception as e:
        import sys
        import traceback
        traceback.print_exc(file=sys.stderr)
        sys.stderr.write("%s\n" % e)
        sys.stderr.write("\nBaron has failed to parse this input. If this is valid python code (and by that I mean that the python binary successfully parses this code without any syntax error) (also consider that python does not yet parse python 3 code integrally) it would be kind if you could extract a snippet of your code that makes Baron fail and open a bug here: https://github.com/PyCQA/baron/issues\n\nSorry for the inconvenience.")


def parse(source_code, print_function=None):
    # Python syntax requires source code to end with an ENDL token;
    # this endl token is removed afterwards if and only if it's the last token
    # of the root level.
    # It is possible that this token ends up in a 'suite' grammar rule,
    # which means that it is 'trapped' in an indented block of code.
    # I don't want to recursively cross the tree hoping to find it.
    # This solution behaves in the expected way for 90% of the cases.
    newline_appended = False
    linesep = "\r\n" if source_code.endswith("\r\n") else "\n"
    if source_code and not source_code.endswith(linesep):
        source_code += linesep
        newline_appended = True
    if print_function is None:
        tokens = tokenize(source_code, False)
        print_function = has_print_function(tokens)
        if print_function:
            tokens = replace_print_by_name(tokens)
    else:
        tokens = tokenize(source_code, print_function)
    if newline_appended:
        to_return = _parse(tokens, print_function)
        if to_return[-1]["type"] == "endl" and not to_return[-1]["formatting"]:
            return to_return[:-1]
        elif to_return[-1]["type"] == "endl" and to_return[-1]["formatting"]:
            return to_return[:-1] + to_return[-1]["formatting"]
        else:
            return to_return
    return _parse(tokens, print_function)


def tokenize(pouet, print_function=False):
    splitted = split(pouet)
    grouped = group(splitted)
    print_tokenized = _tokenize(grouped, print_function)
    space_grouped = space_group(print_tokenized)
    inner_grouped = inner_group(space_grouped)
    indentation_marked = mark_indentation(inner_grouped)
    return indentation_marked
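
# Note on the newline handling above (illustrative, doctest-style): the
# linesep appended by parse() is stripped back out of the resulting FST,
# so the round-trip also holds for input without a trailing newline
# (see CHANGELOG 0.1.2):
#
#     >>> from baron import parse, dumps
#     >>> dumps(parse("a = 1"))
#     'a = 1'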

--- baron-0.10.1/baron/dumper.py ---
from .render import RenderWalker


def dumps(tree, strict=False):
    return Dumper(strict=strict).dump(tree)


class Dumper(RenderWalker):
    def before_string(self, string, key):
        self.dump += string

    def before_constant(self, constant, key):
        self.dump += constant

    def dump(self, tree):
        self.dump = ''
        self.walk(tree)
        return self.dump
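
# Dumper simply concatenates every string and constant it visits while
# walking the FST, which is what makes dumps(parse(code)) == code hold
# (illustrative, doctest-style):
#
#     >>> from baron import parse
#     >>> dumps(parse("a = 1\n"))
#     'a = 1\n'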

--- baron-0.10.1/baron/formatting_grouper.py ---
from .utils import FlexibleIterator, BaronError


class UnExpectedSpaceToken(BaronError):
    pass


PRIORITY_ORDER = (
    "IMPORT",
    "ENDL",
)

BOTH = (
    "SEMICOLON",
    "AS",
    "IMPORT",
    "DOUBLE_STAR",
    "DOT",
    "LEFT_SQUARE_BRACKET",
    "LEFT_PARENTHESIS",
    "STAR",
    "SLASH",
    "PERCENT",
    "DOUBLE_SLASH",
    "PLUS",
    "MINUS",
    "AT",
    "LEFT_SHIFT",
    "RIGHT_SHIFT",
    "AMPER",
    "CIRCUMFLEX",
    "VBAR",
    "LESS",
    "GREATER",
    "EQUAL_EQUAL",
    "LESS_EQUAL",
    "GREATER_EQUAL",
    "NOT_EQUAL",
    "IN",
    "IS",
    "NOT",
    "AND",
    "OR",
    "IF",
    "ELSE",
    "EQUAL",
    "PLUS_EQUAL",
    "MINUS_EQUAL",
    "STAR_EQUAL",
    "AT_EQUAL",
    "SLASH_EQUAL",
    "PERCENT_EQUAL",
    "AMPER_EQUAL",
    "VBAR_EQUAL",
    "CIRCUMFLEX_EQUAL",
    "LEFT_SHIFT_EQUAL",
    "RIGHT_SHIFT_EQUAL",
    "DOUBLE_STAR_EQUAL",
    "DOUBLE_SLASH_EQUAL",
    "ENDL",
    "COMMA",
    "FOR",
    "COLON",
    "BACKQUOTE",
    "RIGHT_ARROW",
    "FROM",
)

STRING = (
    "STRING",
    "RAW_STRING",
    "INTERPOLATED_STRING",
    "INTERPOLATED_RAW_STRING",
    "UNICODE_STRING",
    "UNICODE_RAW_STRING",
    "BINARY_STRING",
    "BINARY_RAW_STRING",
)

GROUP_SPACE_BEFORE = BOTH + (
    "RIGHT_PARENTHESIS",
    "COMMENT",
) + STRING

GROUP_SPACE_AFTER = BOTH + (
    "TILDE",
    "RETURN",
    "YIELD",
    "WITH",
    "DEL",
    "ASSERT",
    "RAISE",
    "EXEC",
    "GLOBAL",
    "NONLOCAL",
    "PRINT",
    "INDENT",
    "WHILE",
    "ELIF",
    "EXCEPT",
    "DEF",
    "CLASS",
    "LAMBDA",
)


def less_prioritary_than(a, b):
    if b not in PRIORITY_ORDER:
        return False
    if a not in PRIORITY_ORDER:
        return True
    return PRIORITY_ORDER.index(a) < PRIORITY_ORDER.index(b)


def group(sequence):
    return list(group_generator(sequence))


def group_generator(sequence):
    iterator = FlexibleIterator(sequence)
    while not iterator.end():
        current = next(iterator)
        if current is None:
            return

        if current[0] == "SPACE" and iterator.show_next() and iterator.show_next()[0] in GROUP_SPACE_BEFORE:
            new_current = next(iterator)
            current = (new_current[0], new_current[1], [current])

        if current[0] in GROUP_SPACE_AFTER + STRING and\
                (iterator.show_next() and iterator.show_next()[0] == "SPACE") and\
                (not iterator.show_next(2) or (iterator.show_next(2) and not less_prioritary_than(current[0], iterator.show_next(2)[0]))):
            # do not be greedy when grouping on strings
            if current[0] in STRING and iterator.show_next(2) and iterator.show_next(2)[0] in GROUP_SPACE_BEFORE:
                yield current
                continue
            after_space = next(iterator)
            current = (current[0], current[1], current[2] if len(current) > 2 else [], [after_space])

        # in case of "def a(): # comment\n    pass"
        # not really happy about this solution but it avoids a broken release
        if current[0] == "COLON" and iterator.show_next() and iterator.show_next()[0] == "COMMENT":
            comment = next(iterator)
            current = (current[0], current[1], ((current[2]) if len(current) > 2 else []), ((current[3]) if len(current) > 3 else []) + [comment])

        yield current
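
# Illustrative example of the grouping above (token shapes may vary a bit
# between versions): IMPORT is in GROUP_SPACE_AFTER, so the space that
# follows it is folded into the IMPORT token as hidden formatting:
#
#     >>> group([('IMPORT', 'import'), ('SPACE', ' '), ('NAME', 'os')])
#     [('IMPORT', 'import', [], [('SPACE', ' ')]), ('NAME', 'os')]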

--- baron-0.10.1/baron/future.py ---
def has_print_function(tokens):
    p = 0
    while p < len(tokens):
        if tokens[p][0] != 'FROM':
            p += 1
            continue
        if tokens[p + 1][0:2] != ('NAME', '__future__'):
            p += 1
            continue
        if tokens[p + 2][0] != 'IMPORT':
            p += 1
            continue

        current = p + 3
        # ignore LEFT_PARENTHESIS token
        if tokens[current][0] == 'LEFT_PARENTHESIS':
            current += 1

        while current < len(tokens) and tokens[current][0] == 'NAME':
            if tokens[current][1] == 'print_function':
                return True
            # skip AS and NAME tokens if present,
            # and, either way, skip the COMMA token
            if current + 1 < len(tokens) and tokens[current + 1][0] == 'AS':
                current += 4
            else:
                current += 2
        p += 1

    return False


def replace_print_by_name(tokens):
    def is_print(token):
        return token[0] == 'PRINT'

    return [('NAME', 'print') if is_print(x) else x for x in tokens]
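
# Illustrative: has_print_function() scans a token stream for a
# `from __future__ import print_function` statement (doctest-style):
#
#     >>> has_print_function([('FROM', 'from'), ('NAME', '__future__'),
#     ...                     ('IMPORT', 'import'),
#     ...                     ('NAME', 'print_function')])
#     True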

--- baron-0.10.1/baron/grammator.py ---
from .token import BaronToken
from .parser import BaronParserGenerator
from .tokenizer import TOKENS, tokenize, tokenize_current_keywords
from .utils import create_node_from_token
from .grammator_imports import include_imports
from .grammator_control_structures import include_control_structures
from .grammator_primitives import include_primivites
from .grammator_operators import include_operators
from .grammator_data_structures import include_data_structures
from .parser import ParsingError


def generate_parse(print_function):
    pg = BaronParserGenerator(tuple([x.upper() for x in tokenize_current_keywords(print_function)] + [x[1] for x in TOKENS] + ["ENDMARKER", "INDENT", "DEDENT"]), cache_id="baron")

    @pg.production("main : statements")
    def main(pack):
        (statements,) = pack
        return [x for x in statements if x] if statements else []

    @pg.production("statements : statements statement")
    def statements_statement(pack):
        (statements, statement) = pack
        return statements + statement

    @pg.production("statements : statement SEMICOLON")
    def statement_semicolon(pack):
        (statement, semicolon) = pack
        return statement +\
            [{
                "type": "semicolon",
                "first_formatting": semicolon.hidden_tokens_before,
                "second_formatting": semicolon.hidden_tokens_after,
                "value": ";"
            }]

    @pg.production("statements : statement")
    def statement(pack):
        (statement,) = pack
        return statement

    @pg.production("statement : endl")
    def statement_endl(pack):
        (endl,) = pack
        return endl

    @pg.production("endl : ENDL")
    def endl(pack):
        (endl,) = pack
        indent = ""
        if endl.hidden_tokens_after and endl.hidden_tokens_after[0]["type"] == "space":
            indent = endl.hidden_tokens_after[0]["value"]
            endl.hidden_tokens_after = endl.hidden_tokens_after[1:]
        return [{
            "type": "endl",
            "value": endl.value,
            "formatting": endl.hidden_tokens_before,
            "indent": indent,
        }] + endl.hidden_tokens_after

    @pg.production("left_parenthesis : LEFT_PARENTHESIS")
    def left_parenthesis(pack):
        (lp,) = pack
        return lp

    @pg.production("endl : COMMENT ENDL")
    def comment(pack):
        (comment_, endl) = pack
        return [{
            "type": "comment",
            "value": comment_.value,
            "formatting": comment_.hidden_tokens_before,
        }, {
            "type": "endl",
            "formatting": endl.hidden_tokens_before,
            "indent": endl.hidden_tokens_after[0]["value"] if endl.hidden_tokens_after else "",
            "value": endl.value
        }]

    @pg.production("statement : ENDMARKER")
    def end(_):
        return [None]

    @pg.production("statement : simple_stmt")
    @pg.production("statement : compound_stmt")
    def statement_simple_statement(pack):
        (stmt,) = pack
        return stmt

    @pg.production("simple_stmt : small_stmt SEMICOLON endl")
    def simple_stmt_semicolon_endl(pack):
        (small_stmt, semicolon, endl) = pack
        return [small_stmt,
                {
                    "type": "semicolon",
                    "value": ";",
                    "first_formatting": semicolon.hidden_tokens_before,
                    "second_formatting": semicolon.hidden_tokens_after
                }] + endl

    @pg.production("simple_stmt : small_stmt endl")
    def simple_stmt(pack):
        (small_stmt, endl) = pack
        return [small_stmt] + endl

    @pg.production("simple_stmt : small_stmt SEMICOLON simple_stmt")
    def simple_stmt_semicolon(pack):
        (small_stmt, semicolon, simple_stmt) = pack
        return [small_stmt,
                {
                    "type": "semicolon",
                    "value": ";",
                    "first_formatting": semicolon.hidden_tokens_before,
                    "second_formatting": semicolon.hidden_tokens_after
                }] + simple_stmt

    @pg.production("small_stmt : flow_stmt")
    @pg.production("small_stmt : del_stmt")
    @pg.production("small_stmt : pass_stmt")
    @pg.production("small_stmt : assert_stmt")
    @pg.production("small_stmt : raise_stmt")
    @pg.production("small_stmt : global_stmt")
    @pg.production("small_stmt : nonlocal_stmt")
    @pg.production("compound_stmt : if_stmt")
    @pg.production("compound_stmt : while_stmt")
    @pg.production("compound_stmt : for_stmt")
    @pg.production("compound_stmt : try_stmt")
    @pg.production("compound_stmt : funcdef")
    @pg.production("compound_stmt : classdef")
    @pg.production("compound_stmt : with_stmt")
    @pg.production("compound_stmt : decorated")
    @pg.production("compound_stmt : async_stmt")
    def small_and_compound_stmt(pack):
        (statement,) = pack
        return statement

    @pg.production("async_maybe : ")
    def async_maybe(pack):
        return {
            "async": False,
            "formatting": [],
        }

    @pg.production("async_maybe : NAME")
    @pg.production("async : NAME")
    def async_without_space(pack):
        (async_,) = pack
        return {
            "async": True,
            "value": async_.value,
            "formatting": [],
        }

    @pg.production("async_maybe : NAME SPACE")
    @pg.production("async : NAME SPACE")
    def async_(pack):
        (async_, space) = pack
        return {
            "async": True,
            "value": async_.value,
            "formatting": [{'type': 'space', 'value': space.value}],
        }

    @pg.production("async_stmt : async with_stmt")
    @pg.production("async_stmt : async for_stmt")
    def async_stmt(pack):
        (async_, statement,) = pack
        if async_["value"] != "async":
            raise ParsingError("The only possible keyword before a '%s' is 'async', not '%s'" % (statement[0]["type"], async_["value"]))
        statement[0]["async"] = True
        statement[0]["async_formatting"] += async_["formatting"]
        return statement

    if not print_function:
        @pg.production("small_stmt : print_stmt")
        def print_statement(pack):
            (statement,) = pack
            return statement

    @pg.production("small_stmt : expr_stmt")
    @pg.production("expr_stmt : testlist_star_expr")
    @pg.production("testlist : test")
    @pg.production("testlist_argslist : test")
    @pg.production("testlist_star_expr : test_or_star_expr")
    @pg.production("test : or_test")
    @pg.production("test : lambdef")
    @pg.production("or_test : and_test")
    @pg.production("and_test : not_test")
    @pg.production("not_test : comparison")
    @pg.production("comparison : expr")
    @pg.production("expr : xor_expr")
    @pg.production("xor_expr : and_expr")
    @pg.production("and_expr : shift_expr")
    @pg.production("shift_expr : arith_expr")
    @pg.production("arith_expr : term")
    @pg.production("term : factor")
    @pg.production("factor : power")
    @pg.production("power : atom")
    @pg.production("exprlist : expr")
    def term_factor(pack):
        (level,) = pack
        return level

    @pg.production("with_stmt : WITH with_items COLON suite")
    def with_stmt(pack):
        (with_, with_items, colon, suite) = pack
        return [{
            "type": "with",
            "async": False,
            "async_formatting": [],
            "value": suite,
            "first_formatting": with_.hidden_tokens_after,
            "second_formatting": colon.hidden_tokens_before,
            "third_formatting": colon.hidden_tokens_after,
            "contexts": with_items
        }]

    @pg.production("with_items : with_items comma with_item")
    def with_items_with_item(pack):
        (with_items, comma, with_item,) = pack
        return with_items + [comma, with_item]

    @pg.production("with_items : with_item")
    def with_items(pack):
        (with_item,) = pack
        return [with_item]

    @pg.production("with_item : test")
    def with_item(pack):
        (test,) = pack
        return {
            "type": "with_context_item",
            "as": {},
            "first_formatting": [],
            "second_formatting": [],
            "value": test
        }

    @pg.production("with_item : test AS expr")
    def with_item_as(pack):
        (test, as_, expr) = pack
        return {
            "type": "with_context_item",
            "as": expr,
            "first_formatting": as_.hidden_tokens_before,
            "second_formatting": as_.hidden_tokens_after,
            "value": test
        }

    @pg.production("classdef : CLASS NAME COLON suite")
    def class_stmt(pack,):
        (class_, name, colon, suite) = pack
        return [{
            "type": "class",
            "name": name.value,
            "parenthesis": False,
            "first_formatting": class_.hidden_tokens_after,
            "second_formatting": [],
            "third_formatting": [],
            "fourth_formatting": [],
            "fifth_formatting": colon.hidden_tokens_before,
            "sixth_formatting": colon.hidden_tokens_after,
            "inherit_from": [],
            "decorators": [],
            "value": suite,
        }]

    @pg.production("classdef : CLASS NAME LEFT_PARENTHESIS RIGHT_PARENTHESIS COLON suite")
    def class_stmt_parenthesis(pack,):
        (class_, name, left_parenthesis, right_parenthesis, colon, suite) = pack
        return [{
            "type": "class",
            "name": name.value,
            "parenthesis": True,
            "first_formatting": class_.hidden_tokens_after,
            "second_formatting": left_parenthesis.hidden_tokens_before,
            "third_formatting": left_parenthesis.hidden_tokens_after,
            "fourth_formatting": right_parenthesis.hidden_tokens_before,
            "fifth_formatting": right_parenthesis.hidden_tokens_after + colon.hidden_tokens_before,
            "sixth_formatting": colon.hidden_tokens_after,
            "inherit_from": [],
            "decorators": [],
            "value": suite,
        }]

    @pg.production("classdef : CLASS NAME LEFT_PARENTHESIS testlist_argslist RIGHT_PARENTHESIS COLON suite")
    def class_stmt_inherit(pack,):
        def unfold_simple_call_arguments(node):
            if node.get("type") == "call_argument" and not node["target"]:
                return node["value"]
            return node

        (class_, name, left_parenthesis, testlist, right_parenthesis, colon, suite) = pack
        return [{
            "type": "class",
            "name": name.value,
            "parenthesis": True,
            "first_formatting": class_.hidden_tokens_after,
            "second_formatting": left_parenthesis.hidden_tokens_before,
            "third_formatting": left_parenthesis.hidden_tokens_after,
            "fourth_formatting": right_parenthesis.hidden_tokens_before,
            "fifth_formatting": right_parenthesis.hidden_tokens_after + colon.hidden_tokens_before,
            "sixth_formatting": colon.hidden_tokens_after,
            "inherit_from": [unfold_simple_call_arguments(x)
                             for x in (testlist if isinstance(testlist, list) else [testlist])],
            "decorators": [],
            "value": suite,
        }]

    @pg.production("decorated : decorators funcdef")
    @pg.production("decorated : decorators classdef")
    def decorated(pack):
        (decorators, funcdef) = pack
        funcdef[0]["decorators"] = decorators
        return funcdef

    @pg.production("decorators : decorators decorator")
    def decorators_decorator(pack):
        (decorators, decorator,) = pack
        return decorators + decorator

    @pg.production("decorators : decorator")
    def decorators(pack):
        (decorator,) = pack
        return decorator

    # TODO tests
    @pg.production("decorator : endl")
    def decorator_endl(pack):
        # thanks iPython devs, you appear to be the only ones in the world who
        # split decorators with empty lines... like seriously.
        (endl,) = pack
        return endl

    @pg.production("decorator : AT dotted_name endl")
    def decorator(pack):
        (at, dotted_name, endl) = pack
        return [{
            "type": "decorator",
            "value": {
                "value": dotted_name,
                "type": "dotted_name",
            },
            "call": {},
            "formatting": at.hidden_tokens_after,
        }] + endl

    @pg.production("decorator : AT dotted_name LEFT_PARENTHESIS RIGHT_PARENTHESIS endl")
    def decorator_empty_call(pack):
        (at, dotted_name, left_parenthesis, right_parenthesis, endl) = pack
        return [{
            "type": "decorator",
            "value": {
                "value": dotted_name,
                "type": "dotted_name",
            },
            "call": {
                "third_formatting": right_parenthesis.hidden_tokens_before,
                "fourth_formatting": right_parenthesis.hidden_tokens_after,
                "type": "call",
                "first_formatting": left_parenthesis.hidden_tokens_before,
                "value": [],
                "second_formatting": left_parenthesis.hidden_tokens_after
            },
            "formatting": at.hidden_tokens_after,
        }] + endl

    @pg.production("decorator : AT dotted_name LEFT_PARENTHESIS argslist RIGHT_PARENTHESIS endl")
    def decorator_call(pack):
        (at, dotted_name, left_parenthesis, argslist, right_parenthesis, endl) = pack
        return [{
            "type": "decorator",
            "value": {
                "value": dotted_name,
                "type": "dotted_name",
            },
            "call": {
                "third_formatting": right_parenthesis.hidden_tokens_before,
                "fourth_formatting": right_parenthesis.hidden_tokens_after,
                "type": "call",
                "first_formatting": left_parenthesis.hidden_tokens_before,
                "value": argslist,
                "second_formatting": left_parenthesis.hidden_tokens_after
            },
            "formatting": at.hidden_tokens_after,
        }] + endl

    @pg.production("funcdef : async_maybe DEF NAME LEFT_PARENTHESIS typed_parameters RIGHT_PARENTHESIS return_annotation COLON suite")
    def function_definition(pack):
        (async_maybe, def_, name, left_parenthesis, typed_parameters, right_parenthesis, return_annotation, colon, suite) = pack
        if async_maybe["async"] and async_maybe["value"] != "async":
            raise ParsingError("The only possible keyword before a 'def' is 'async', not '%s'" % async_maybe["value"])
        return [{
            "type": "def",
            "async": async_maybe["async"],
            "return_annotation": return_annotation["value"],
            "return_annotation_first_formatting": return_annotation["first_formatting"],
            "return_annotation_second_formatting": return_annotation["second_formatting"],
            "async_formatting": async_maybe.get("formatting", []),
            "decorators": [],
            "name": name.value,
            "first_formatting": def_.hidden_tokens_after,
            "second_formatting": left_parenthesis.hidden_tokens_before,
            "third_formatting": left_parenthesis.hidden_tokens_after,
            "fourth_formatting": right_parenthesis.hidden_tokens_before,
            "fifth_formatting": colon.hidden_tokens_before,
            "sixth_formatting": colon.hidden_tokens_after,
            "arguments": typed_parameters,
            "value": suite,
        }]

    @pg.production("return_annotation : ")
    def return_annotation_empty(pack):
        return {
            "value": {},
            "first_formatting": [],
            "second_formatting": [],
        }

    @pg.production("return_annotation : RIGHT_ARROW test")
    def return_annotation(pack):
        right_arrow, test = pack
        return {
            "value": test,
            "first_formatting": right_arrow.hidden_tokens_before,
            "second_formatting": right_arrow.hidden_tokens_after,
        }

    @pg.production("argslist : argslist argument")
    @pg.production("testlist_argslist : argslist argument")
    @pg.production("typed_parameters : typed_parameters typed_parameter")
    @pg.production("parameters : parameters parameter")
    def parameters_parameters_parameter(pack,):
        (parameters, parameter,) = pack
        return parameters + parameter

    @pg.production("argslist : argument")
    @pg.production("testlist_argslist : argument")
    @pg.production("typed_parameters : typed_parameter")
    @pg.production("parameters : parameter")
    def parameters_parameter(pack,):
        (parameter,) = pack
        return parameter

    @pg.production("argument :")
    @pg.production("typed_parameter : ")
    @pg.production("parameter : ")
    def parameter_empty(p):
        return []

    @pg.production("name : NAME")
    def name(pack):
        (name_,) = pack
        return {
            "type": "name",
            "value": name_.value,
        }

    @pg.production("typed_parameter : LEFT_PARENTHESIS name RIGHT_PARENTHESIS maybe_test")
    @pg.production("parameter : LEFT_PARENTHESIS name RIGHT_PARENTHESIS maybe_test")
    def parameter_fpdef(pack):
        (left_parenthesis, name, right_parenthesis, (equal, test)) = pack
        return [{
            "type": "def_argument",
            "annotation": {},
            "annotation_first_formatting": [],
            "annotation_second_formatting": [],
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test,
            "target": {
                "type": "associative_parenthesis",
                "first_formatting": left_parenthesis.hidden_tokens_before,
                "second_formatting": left_parenthesis.hidden_tokens_after,
                "third_formatting": right_parenthesis.hidden_tokens_before,
                "fourth_formatting": right_parenthesis.hidden_tokens_after,
                "value": name
            }
        }]

    @pg.production("typed_parameter : LEFT_PARENTHESIS fplist RIGHT_PARENTHESIS maybe_test")
    @pg.production("parameter : LEFT_PARENTHESIS fplist RIGHT_PARENTHESIS maybe_test")
    def parameter_fplist(pack):
        (left_parenthesis, fplist, right_parenthesis, (equal, test)) = pack
        return [{
            "type": "def_argument",
            "annotation": {},
            "annotation_first_formatting": [],
            "annotation_second_formatting": [],
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test,
            "target": {
                "type": "tuple",
                "with_parenthesis": True,
                "first_formatting": left_parenthesis.hidden_tokens_after,
                "second_formatting": left_parenthesis.hidden_tokens_before,
                "third_formatting": right_parenthesis.hidden_tokens_before,
                "fourth_formatting": right_parenthesis.hidden_tokens_after,
                "value": fplist,
            },
        }]

    @pg.production("fplist : fplist parameter")
    def fplist_recur(pack):
        (fplist, name) = pack
        if name[0]["type"] == "def_argument":
            name = [name[0]["target"]]
        return fplist + name

    @pg.production("fplist : parameter comma")
    def fplist(pack):
        (name, comma) = pack
        if name[0]["type"] == "def_argument":
            name = [name[0]["target"]]
        return name + [comma]

    # really strange that the left part of the argument grammar can be a test,
    # I guess it's yet another legacy mistake:
    # python gives me 'SyntaxError: keyword can't be an expression' when I try to
    # put something else than a name (looks like a custom SyntaxError)
    @pg.production("argument : test maybe_test")
    def named_argument(pack):
        (name, (equal, test)) = pack
        return [{
            "type": "call_argument",
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test if equal else name,
            "target": name if equal else {}
        }]

    @pg.production("typed_parameter : name COLON test maybe_test")
    def parameter_annotation_with_default(pack):
        name, colon, annotation, (equal, test) = pack
        return [{
            "type": "def_argument",
            "annotation": annotation,
            "annotation_first_formatting": colon.hidden_tokens_before,
            "annotation_second_formatting": colon.hidden_tokens_after,
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test,
            "target": name
        }]

    @pg.production("typed_parameter : name maybe_test")
    def parameter_alone_with_default(pack):
        name, (equal, test) = pack
        return [{
            "type": "def_argument",
            "annotation": {},
            "annotation_first_formatting": [],
            "annotation_second_formatting": [],
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test,
            "target": name
        }]

    @pg.production("parameter : name maybe_test")
    def parameter_with_default(pack):
        name, (equal, test) = pack
        return [{
            "type": "def_argument",
            "annotation": {},
            "annotation_first_formatting": [],
            "annotation_second_formatting": [],
            "first_formatting": equal.hidden_tokens_before if equal else [],
            "second_formatting": equal.hidden_tokens_after if equal else [],
            "value": test,
            "target": name
        }]
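
    # Illustrative, simplified FST produced by the def_argument productions
    # above for the parameter in "def f(a=1): pass" (formatting keys elided,
    # shapes abbreviated):
    #
    #     {"type": "def_argument",
    #      "annotation": {},
    #      "target": {"type": "name", "value": "a"},
    #      "value": {"type": "int", "value": "1", ...}}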
@pg.production("maybe_test : EQUAL test")
def maybe_test(pack):
return pack
@pg.production("maybe_test : ")
def maybe_test_empty(pack):
return (None, {})
@pg.production("argument : test comp_for")
def generator_comprehension(pack):
(test, comp_for,) = pack
return [{
"type": "argument_generator_comprehension",
"result": test,
"generators": comp_for,
}]
@pg.production("argument : STAR test")
def argument_star(pack):
(star, test,) = pack
return [{
"type": "list_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": star.hidden_tokens_after,
"value": test,
}]
@pg.production("argument : DOUBLE_STAR test")
def argument_star_star(pack):
(double_star, test,) = pack
return [{
"type": "dict_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": double_star.hidden_tokens_after,
"value": test,
}]
@pg.production("typed_parameter : STAR NAME COLON test")
def typed_parameter_star(pack):
(star, name, colon, test) = pack
return [{
"type": "list_argument",
"formatting": star.hidden_tokens_after,
"annotation": test,
"annotation_first_formatting": colon.hidden_tokens_before if colon else [],
"annotation_second_formatting": colon.hidden_tokens_after if colon else [],
"value": {
"type": "name",
"value": name.value,
}
}]
@pg.production("typed_parameter : DOUBLE_STAR NAME COLON test")
def typed_parameter_double_star(pack):
(double_star, name, colon, test) = pack
return [{
"type": "dict_argument",
"formatting": double_star.hidden_tokens_after,
"annotation": test,
"annotation_first_formatting": colon.hidden_tokens_before if colon else [],
"annotation_second_formatting": colon.hidden_tokens_after if colon else [],
"value": {
"type": "name",
"value": name.value,
}
}]
# TODO refactor those 2 to standardize with argument_star and argument_star_star
@pg.production("typed_parameter : STAR NAME")
@pg.production("parameter : STAR NAME")
def parameter_star(pack):
(star, name,) = pack
return [{
"type": "list_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": star.hidden_tokens_after,
"value": {
"type": "name",
"value": name.value,
}
}]
# TODO refactor those 2 to standardize with argument_star and argument_star_star
@pg.production("typed_parameter : STAR")
@pg.production("parameter : STAR")
def parameter_star_only(pack):
(star, ) = pack
return [{
"type": "kwargs_only_marker",
"formatting": star.hidden_tokens_after,
}]
@pg.production("typed_parameter : DOUBLE_STAR NAME")
@pg.production("parameter : DOUBLE_STAR NAME")
def parameter_star_star(pack):
(double_star, name,) = pack
return [{
"type": "dict_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": double_star.hidden_tokens_after,
"value": {
"type": "name",
"value": name.value,
},
}]
@pg.production("argument : comma")
@pg.production("typed_parameter : comma")
@pg.production("parameter : comma")
def parameter_comma(pack):
(comma,) = pack
return [comma]
@pg.production("suite : simple_stmt")
def suite(pack):
(simple_stmt,) = pack
return simple_stmt
@pg.production("suite : endls INDENT statements DEDENT")
def suite_indent(pack):
(endls, indent, statements, dedent,) = pack
return endls + statements
@pg.production("endls : endls endl")
@pg.production("endls : endl")
    def endls(pack):
        if len(pack) == 1:
            return pack[0]
        return pack[0] + pack[1]
include_imports(pg)
include_control_structures(pg)
include_primivites(pg, print_function)
include_operators(pg)
include_data_structures(pg)
@pg.production("atom : LEFT_PARENTHESIS yield_expr RIGHT_PARENTHESIS")
def yield_atom(pack):
(left_parenthesis, yield_expr, right_parenthesis) = pack
return {
"type": "yield_atom",
"value": yield_expr["value"],
"first_formatting": left_parenthesis.hidden_tokens_after,
"second_formatting": yield_expr["formatting"],
"third_formatting": right_parenthesis.hidden_tokens_before
}
@pg.production("atom : BACKQUOTE testlist1 BACKQUOTE")
def repr_atom(pack):
(backquote, testlist1, backquote2) = pack
return {
"type": "repr",
"value": testlist1,
"first_formatting": backquote.hidden_tokens_after,
"second_formatting": backquote2.hidden_tokens_before,
}
@pg.production("testlist1 : test comma testlist1")
def testlist1_double(pack):
(test, comma, test2,) = pack
return [test, comma] + test2
@pg.production("testlist1 : test")
def testlist1(pack):
(test,) = pack
return [test]
# TODO test all the things (except INT)
@pg.production("atom : INT")
@pg.production("atom : LONG")
@pg.production("atom : OCTA")
@pg.production("atom : HEXA")
@pg.production("atom : BINARY")
@pg.production("atom : FLOAT")
@pg.production("atom : FLOAT_EXPONANT")
@pg.production("atom : FLOAT_EXPONANT_COMPLEX")
@pg.production("atom : COMPLEX")
def int(pack):
(int_,) = pack
return create_node_from_token(int_, section="number")
@pg.production("atom : name")
def atom_name(pack):
(name,) = pack
return name
@pg.production("atom : strings")
def strings(pack):
(string_chain,) = pack
if len(string_chain) == 1:
return string_chain[0]
return {
"type": "string_chain",
"value": string_chain
}
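    # Adjacent literals such as '"a" "b"' (implicit concatenation) are kept
    # together as a single string_chain node, built up by the rules below, so
    # each piece keeps its own prefix and formatting.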
@pg.production("strings : string strings")
def strings_string_strings(pack):
(string_, strings_) = pack
return string_ + strings_
@pg.production("strings : string")
def strings_string(pack):
(string_,) = pack
return string_
    # TODO test those other kinds of strings
@pg.production("string : STRING")
@pg.production("string : RAW_STRING")
@pg.production("string : INTERPOLATED_STRING")
@pg.production("string : UNICODE_STRING")
@pg.production("string : BINARY_STRING")
@pg.production("string : UNICODE_RAW_STRING")
@pg.production("string : BINARY_RAW_STRING")
@pg.production("string : INTERPOLATED_RAW_STRING")
def string(pack):
(string_,) = pack
return [{
"type": string_.name.lower(),
"value": string_.value,
"first_formatting": string_.hidden_tokens_before,
"second_formatting": string_.hidden_tokens_after,
}]
@pg.production("comma : COMMA")
def comma(pack):
(comma,) = pack
return {
"type": "comma",
"first_formatting": comma.hidden_tokens_before,
"second_formatting": comma.hidden_tokens_after,
}
def parse(tokens):
if print_function:
new_tokens = []
for token in tokens:
if token[0] in ("PRINT", "EXEC"):
token = list(token)
token[0] = "NAME"
token = tuple(token)
new_tokens.append(token)
tokens = [BaronToken(*x) if x else x for x in new_tokens] + [None]
else:
tokens = [BaronToken(*x) if x else x for x in tokens] + [None]
return parser.parse(iter(tokens))
parser = pg.build()
return parse
def fake_lexer(sequence):
for i in tokenize(sequence):
        if i is None:
            yield None
            continue
        yield BaronToken(*i)
def parse(sequence):
parser = generate_parse(print_function=False)
return parser.parse(fake_lexer(sequence))
baron-0.10.1/baron/grammator_control_structures.py 0000664 0000000 0000000 00000025361 14154274402 0022413 0 ustar 00root root 0000000 0000000 def include_control_structures(pg):
@pg.production("try_stmt : TRY COLON suite excepts")
def try_excepts_stmt(pack):
(try_, colon, suite, excepts) = pack
return [{
"type": "try",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"else": {},
"finally": {},
"excepts": excepts,
}]
@pg.production("try_stmt : TRY COLON suite excepts else_stmt")
def try_excepts_else_stmt(pack):
(try_, colon, suite, excepts, else_stmt) = pack
return [{
"type": "try",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"else": else_stmt,
"finally": {},
"excepts": excepts,
}]
@pg.production("try_stmt : TRY COLON suite excepts finally_stmt")
def try_excepts_finally_stmt(pack):
(try_, colon, suite, excepts, finally_stmt) = pack
return [{
"type": "try",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"else": {},
"finally": finally_stmt,
"excepts": excepts,
}]
@pg.production("try_stmt : TRY COLON suite excepts else_stmt finally_stmt")
def try_excepts_else_finally_stmt(pack):
(try_, colon, suite, excepts, else_stmt, finally_stmt) = pack
return [{
"type": "try",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"else": else_stmt,
"finally": finally_stmt,
"excepts": excepts,
}]
@pg.production("try_stmt : TRY COLON suite finally_stmt")
def try_stmt(pack):
(try_, colon, suite, finally_stmt) = pack
return [{
"type": "try",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"else": {},
"finally": finally_stmt,
"excepts": [],
}]
@pg.production("excepts : excepts except_stmt")
def excepts(pack):
(excepts_, except_stmt) = pack
return excepts_ + except_stmt
@pg.production("excepts : except_stmt")
def excepts_except_stmt(pack):
(except_stmt,) = pack
return except_stmt
@pg.production("except_stmt : EXCEPT test AS test COLON suite")
def except_as_stmt(pack):
(except_, test, as_, test2, colon, suite) = pack
return [{
"type": "except",
"first_formatting": except_.hidden_tokens_after,
"second_formatting": as_.hidden_tokens_before,
"third_formatting": as_.hidden_tokens_after,
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
"delimiter": "as",
"target": test2,
"exception": test,
"value": suite
}]
@pg.production("except_stmt : EXCEPT test COMMA test COLON suite")
def except_comma_stmt(pack):
(except_, test, comma, test2, colon, suite) = pack
return [{
"type": "except",
"first_formatting": except_.hidden_tokens_after,
"second_formatting": comma.hidden_tokens_before,
"third_formatting": comma.hidden_tokens_after,
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
"delimiter": ",",
"target": test2,
"exception": test,
"value": suite
}]
@pg.production("except_stmt : EXCEPT COLON suite")
def except_stmt_empty(pack):
(except_, colon, suite) = pack
return [{
"type": "except",
"first_formatting": except_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
"delimiter": "",
"target": {},
"exception": {},
"value": suite
}]
@pg.production("except_stmt : EXCEPT test COLON suite")
def except_stmt(pack):
(except_, test, colon, suite) = pack
return [{
"type": "except",
"first_formatting": except_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
"delimiter": "",
"target": {},
"exception": test,
"value": suite
}]
@pg.production("finally_stmt : FINALLY COLON suite")
def finally_stmt(pack):
(finally_, colon, suite) = pack
return {
"type": "finally",
"value": suite,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
}
@pg.production("else_stmt : ELSE COLON suite")
def else_stmt(pack):
(else_, colon, suite) = pack
return {
"type": "else",
"value": suite,
"first_formatting": else_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_after,
}
@pg.production("for_stmt : FOR exprlist IN testlist COLON suite")
    def for_stmt(pack):
(for_, exprlist, in_, testlist, colon, suite) = pack
return [{
"type": "for",
"async": False,
"async_formatting": [] + for_.hidden_tokens_before,
"value": suite,
"iterator": exprlist,
"target": testlist,
"else": {},
"first_formatting": for_.hidden_tokens_after,
"second_formatting": in_.hidden_tokens_before,
"third_formatting": in_.hidden_tokens_after,
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
}]
@pg.production("for_stmt : FOR exprlist IN testlist COLON suite else_stmt")
    def for_else_stmt(pack):
(for_, exprlist, in_, testlist, colon, suite, else_stmt) = pack
return [{
"type": "for",
"value": suite,
"async": False,
"async_formatting": [] + for_.hidden_tokens_before,
"iterator": exprlist,
"target": testlist,
"else": else_stmt,
"first_formatting": for_.hidden_tokens_after,
"second_formatting": in_.hidden_tokens_before,
"third_formatting": in_.hidden_tokens_after,
"fourth_formatting": colon.hidden_tokens_before,
"fifth_formatting": colon.hidden_tokens_after,
}]
@pg.production("while_stmt : WHILE test COLON suite")
def while_stmt(pack):
(while_, test, colon, suite) = pack
return [{
"type": "while",
"value": suite,
"test": test,
"else": {},
"first_formatting": while_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}]
@pg.production("while_stmt : WHILE test COLON suite else_stmt")
def while_stmt_else(pack):
(while_, test, colon, suite, else_stmt) = pack
return [{
"type": "while",
"value": suite,
"test": test,
"else": else_stmt,
"first_formatting": while_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}]
@pg.production("if_stmt : IF test COLON suite")
def if_stmt(pack):
(if_, test, colon, suite) = pack
return [{
"type": "ifelseblock",
"value": [{
"type": "if",
"value": suite,
"test": test,
"first_formatting": if_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}]
}]
@pg.production("if_stmt : IF test COLON suite elifs")
def if_elif_stmt(pack):
(if_, test, colon, suite, elifs) = pack
return [{
"type": "ifelseblock",
"value": [{
"type": "if",
"value": suite,
"test": test,
"first_formatting": if_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}] + elifs
}]
@pg.production("elifs : elifs ELIF test COLON suite")
    def elifs_elif(pack):
(elifs, elif_, test, colon, suite) = pack
return elifs + [{
"type": "elif",
"first_formatting": elif_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
"value": suite,
"test": test,
}]
@pg.production("elifs : ELIF test COLON suite")
    def elif_(pack):
(elif_, test, colon, suite) = pack
return [{
"type": "elif",
"first_formatting": elif_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
"value": suite,
"test": test,
}]
@pg.production("if_stmt : IF test COLON suite else_stmt")
def if_else_stmt(pack):
(if_, test, colon, suite, else_stmt) = pack
return [{
"type": "ifelseblock",
"value": [{
"type": "if",
"value": suite,
"test": test,
"first_formatting": if_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}, else_stmt]
}]
@pg.production("if_stmt : IF test COLON suite elifs else_stmt")
def if_elif_else_stmt(pack):
(if_, test, colon, suite, elifs, else_stmt) = pack
return [{
"type": "ifelseblock",
"value": [{
"type": "if",
"value": suite,
"test": test,
"first_formatting": if_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
}] + elifs + [else_stmt]
}]
baron-0.10.1/baron/grammator_data_structures.py 0000664 0000000 0000000 00000034136 14154274402 0021644 0 ustar 00root root 0000000 0000000 def include_data_structures(pg):
# TODO remove left_parenthesis and use LEFT_PARENTHESIS instead
@pg.production("atom : left_parenthesis testlist_comp RIGHT_PARENTHESIS")
def tuple(pack):
(left_parenthesis, testlist_comp, right_parenthesis,) = pack
return {
"type": "tuple",
"value": testlist_comp,
"first_formatting": left_parenthesis.hidden_tokens_before,
"second_formatting": left_parenthesis.hidden_tokens_after,
"third_formatting": right_parenthesis.hidden_tokens_before,
"fourth_formatting": right_parenthesis.hidden_tokens_after,
"with_parenthesis": True,
}
@pg.production("atom : left_parenthesis test RIGHT_PARENTHESIS")
def associative_parenthesis(pack):
(left_parenthesis, test, right_parenthesis,) = pack
return {
"type": "associative_parenthesis",
"first_formatting": left_parenthesis.hidden_tokens_before,
"second_formatting": left_parenthesis.hidden_tokens_after,
"third_formatting": right_parenthesis.hidden_tokens_before,
"fourth_formatting": right_parenthesis.hidden_tokens_after,
"value": test
}
@pg.production("testlist : test comma")
@pg.production("testlist_star_expr : test_or_star_expr comma")
@pg.production("exprlist : expr comma")
@pg.production("subscriptlist : subscript comma")
def implicit_tuple_alone(pack):
(test, comma) = pack
return {
"type": "tuple",
"value": [test, comma],
"first_formatting": [],
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
"with_parenthesis": False,
}
@pg.production("testlist : test testlist_part")
@pg.production("testlist_star_expr : test_or_star_expr testlist_star_expr_part")
@pg.production("exprlist : expr exprlist_part")
@pg.production("subscriptlist : subscript subscriptlist_part")
def implicit_tuple(pack):
(test, testlist_part) = pack
return {
"type": "tuple",
"value": [test] + testlist_part,
"first_formatting": [],
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
"with_parenthesis": False,
}
@pg.production("testlist_part : COMMA test")
@pg.production("testlist_star_expr_part : COMMA test_or_star_expr")
@pg.production("exprlist_part : COMMA expr")
@pg.production("subscriptlist_part : COMMA subscript")
def testlist_part(pack):
(comma, test) = pack
return [{
"type": "comma",
"first_formatting": comma.hidden_tokens_before,
"second_formatting": comma.hidden_tokens_after,
}, test]
@pg.production("testlist_part : COMMA test COMMA")
@pg.production("testlist_star_expr_part : COMMA test_or_star_expr COMMA")
@pg.production("exprlist_part : COMMA expr COMMA")
@pg.production("subscriptlist_part : COMMA subscript COMMA")
def testlist_part_comma(pack):
(comma, test, comma2) = pack
return [{
"type": "comma",
"first_formatting": comma.hidden_tokens_before,
"second_formatting": comma.hidden_tokens_after,
}, test, {
"type": "comma",
"first_formatting": comma2.hidden_tokens_before,
"second_formatting": comma2.hidden_tokens_after,
}]
@pg.production("testlist_part : COMMA test testlist_part")
@pg.production("testlist_star_expr_part : COMMA test_or_star_expr testlist_star_expr_part")
@pg.production("exprlist_part : COMMA expr exprlist_part")
@pg.production("subscriptlist_part : COMMA subscript subscriptlist_part")
def testlist_part_next(pack):
(comma, test, testlist_part) = pack
return [{
"type": "comma",
"first_formatting": comma.hidden_tokens_before,
"second_formatting": comma.hidden_tokens_after,
}, test] + testlist_part
@pg.production("testlist_comp :")
def testlist_comp_empty(empty):
return []
@pg.production("testlist_comp : test comma test")
def testlist_comp_two(pack):
(test, comma, test2) = pack
return [test, comma, test2]
@pg.production("testlist_comp : test comma testlist_comp")
def testlist_comp_more(pack):
(test, comma, testlist_comp) = pack
return [test, comma] + testlist_comp
@pg.production("atom : LEFT_SQUARE_BRACKET listmaker RIGHT_SQUARE_BRACKET")
def list_(pack):
(left_bracket, listmaker, right_bracket,) = pack
return {
"type": "list",
"first_formatting": left_bracket.hidden_tokens_before,
"second_formatting": left_bracket.hidden_tokens_after,
"third_formatting": right_bracket.hidden_tokens_before,
"fourth_formatting": right_bracket.hidden_tokens_after,
"value": listmaker
}
@pg.production("listmaker :")
def listmaker_empty(empty):
return []
@pg.production("listmaker : test")
def listmaker_one(pack):
(test,) = pack
return [test]
@pg.production("listmaker : test comma listmaker")
def listmaker_more(pack):
(test, comma, listmaker) = pack
return [test, comma] + listmaker
@pg.production("atom : LEFT_BRACKET dictmaker RIGHT_BRACKET")
def dict(pack):
(left_bracket, dictmaker, right_bracket,) = pack
return {
"type": "dict",
"first_formatting": left_bracket.hidden_tokens_before,
"second_formatting": left_bracket.hidden_tokens_after,
"third_formatting": right_bracket.hidden_tokens_before,
"fourth_formatting": right_bracket.hidden_tokens_after,
"value": dictmaker
}
@pg.production("dictmaker : ")
def dict_empty(empty):
return []
@pg.production("dictmaker : test COLON test")
def dict_one_colon(pack):
(test, colon, test2) = pack
return [{
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"key": test,
"value": test2,
"type": "dictitem"
}]
@pg.production("dictmaker : DOUBLE_STAR test")
def dict_one_double_star(pack):
(double_star, test) = pack
return [{
"type": "dict_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": double_star.hidden_tokens_after,
"value": test,
}]
@pg.production("dictmaker : test COLON test comma dictmaker")
def dict_more_colon(pack):
(test, colon, test2, comma, dictmaker) = pack
return [{
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"key": test,
"value": test2,
"type": "dictitem"
}, comma] + dictmaker
@pg.production("dictmaker : DOUBLE_STAR test comma dictmaker")
def dict_more_double_star(pack):
(double_star, test, comma, dictmaker) = pack
return [{
"type": "dict_argument",
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
"formatting": double_star.hidden_tokens_after,
"value": test,
}, comma] + dictmaker
@pg.production("atom : LEFT_BRACKET setmaker RIGHT_BRACKET")
def set(pack):
(left_bracket, setmaker, right_bracket,) = pack
return {
"type": "set",
"first_formatting": left_bracket.hidden_tokens_before,
"second_formatting": left_bracket.hidden_tokens_after,
"third_formatting": right_bracket.hidden_tokens_before,
"fourth_formatting": right_bracket.hidden_tokens_after,
"value": setmaker
}
@pg.production("setmaker : ")
def set_empty(empty):
return []
@pg.production("setmaker : test comma setmaker")
def set_more(pack):
(test, comma, setmaker) = pack
return [test, comma] + setmaker
@pg.production("setmaker : test")
def set_one(pack):
(test,) = pack
return [test]
@pg.production("atom : left_parenthesis test comp_for RIGHT_PARENTHESIS")
def generator_comprehension(pack):
(left_parenthesis, test, comp_for, right_parenthesis,) = pack
return {
"type": "generator_comprehension",
"first_formatting": left_parenthesis.hidden_tokens_before,
"second_formatting": left_parenthesis.hidden_tokens_after,
"third_formatting": right_parenthesis.hidden_tokens_before,
"fourth_formatting": right_parenthesis.hidden_tokens_after,
"result": test,
"generators": comp_for,
}
@pg.production("atom : LEFT_SQUARE_BRACKET test list_for RIGHT_SQUARE_BRACKET")
def list_comprehension(pack):
(left_square_bracket, test, list_for, right_square_bracket) = pack
return {
"type": "list_comprehension",
"first_formatting": left_square_bracket.hidden_tokens_before,
"second_formatting": left_square_bracket.hidden_tokens_after,
"third_formatting": right_square_bracket.hidden_tokens_before,
"fourth_formatting": right_square_bracket.hidden_tokens_after,
"result": test,
"generators": list_for,
}
@pg.production("atom : LEFT_BRACKET test COLON test comp_for RIGHT_BRACKET")
def dict_comprehension(pack):
(left_bracket, test, colon, test2, list_for, right_bracket) = pack
return {
"type": "dict_comprehension",
"first_formatting": left_bracket.hidden_tokens_before,
"second_formatting": left_bracket.hidden_tokens_after,
"third_formatting": right_bracket.hidden_tokens_before,
"fourth_formatting": right_bracket.hidden_tokens_after,
"result": {
"key": test,
"type": "dictitem",
"value": test2,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
},
"generators": list_for,
}
@pg.production("atom : LEFT_BRACKET test comp_for RIGHT_BRACKET")
def set_comprehension(pack):
(left_bracket, test, list_for, right_bracket) = pack
return {
"type": "set_comprehension",
"first_formatting": left_bracket.hidden_tokens_before,
"second_formatting": left_bracket.hidden_tokens_after,
"third_formatting": right_bracket.hidden_tokens_before,
"fourth_formatting": right_bracket.hidden_tokens_after,
"result": test,
"generators": list_for,
}
@pg.production("list_for : FOR exprlist IN old_test")
@pg.production("comp_for : FOR exprlist IN or_test")
def comp_for(pack):
(for_, exprlist, in_, or_test) = pack
return [{
"type": "comprehension_loop",
"first_formatting": for_.hidden_tokens_before,
"second_formatting": for_.hidden_tokens_after,
"third_formatting": in_.hidden_tokens_before,
"fourth_formatting": in_.hidden_tokens_after,
"target": or_test,
"iterator": exprlist,
"ifs": [],
}]
@pg.production("list_for : FOR exprlist IN old_test")
@pg.production("list_for : FOR exprlist IN testlist_safe")
def comp_for_implicite_tuple(pack):
(for_, exprlist, in_, testlist_safe) = pack
return [{
"type": "comprehension_loop",
"first_formatting": for_.hidden_tokens_before,
"second_formatting": for_.hidden_tokens_after,
"third_formatting": in_.hidden_tokens_before,
"fourth_formatting": in_.hidden_tokens_after,
"target": {
"type": "tuple",
"value": testlist_safe,
"with_parenthesis": False,
"first_formatting": [],
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
},
"iterator": exprlist,
"ifs": [],
}]
@pg.production("comp_for : FOR exprlist IN or_test comp_iter")
@pg.production("list_for : FOR exprlist IN old_test list_iter")
@pg.production("list_for : FOR exprlist IN testlist_safe list_iter")
def comp_for_iter(pack):
(for_, exprlist, in_, or_test, comp_iter) = pack
my_ifs = []
for i in comp_iter:
if i["type"] != "comprehension_if":
break
my_ifs.append(i)
comp_iter = comp_iter[1:]
return [{
"type": "comprehension_loop",
"first_formatting": for_.hidden_tokens_before,
"second_formatting": for_.hidden_tokens_after,
"third_formatting": in_.hidden_tokens_before,
"fourth_formatting": in_.hidden_tokens_after,
"target": or_test,
"iterator": exprlist,
"ifs": my_ifs,
}] + comp_iter
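    # Leading "if" clauses are attached to this loop's "ifs" list; whatever is
    # left in comp_iter (a nested "for" and its own clauses) is chained after
    # the comprehension_loop node.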
@pg.production("list_iter : list_for")
@pg.production("comp_iter : comp_for")
def comp_iter_comp_for(pack):
(comp_for,) = pack
return comp_for
@pg.production("list_iter : IF old_test")
@pg.production("comp_iter : IF old_test")
def comp_iter_if(pack):
(if_, old_test) = pack
return [{
"type": "comprehension_if",
"first_formatting": if_.hidden_tokens_before,
"second_formatting": if_.hidden_tokens_after,
"value": old_test
}]
@pg.production("list_iter : IF old_test list_iter")
@pg.production("comp_iter : IF old_test comp_iter")
def comp_iter_if_comp_iter(pack):
(if_, old_test, comp_iter) = pack
return [{
"type": "comprehension_if",
"first_formatting": if_.hidden_tokens_before,
"second_formatting": if_.hidden_tokens_after,
"value": old_test
}] + comp_iter
baron-0.10.1/baron/grammator_imports.py 0000664 0000000 0000000 00000013360 14154274402 0020121 0 ustar 00root root 0000000 0000000 from .utils import create_node_from_token
def include_imports(pg):
@pg.production("small_stmt : import")
@pg.production("small_stmt : from_import")
def separator(pack):
(statement,) = pack
return statement
@pg.production("import : IMPORT dotted_as_names")
def importeu(pack):
(import_, dotted_as_names) = pack
return {
"type": "import",
"value": dotted_as_names,
"first_formatting": import_.hidden_tokens_before,
"second_formatting": import_.hidden_tokens_after
}
@pg.production("from_import : FROM dotted_name IMPORT from_import_target")
def from_import_with_space(pack):
(from_, dotted_name, import_, from_import_target) = pack
return {
"type": "from_import",
"targets": from_import_target,
"first_formatting": from_.hidden_tokens_after,
"second_formatting": import_.hidden_tokens_before,
"third_formatting": import_.hidden_tokens_after,
"value": dotted_name
}
@pg.production("from_import_target : name_as_names")
def from_import_target_name_as_names(pack):
(name_as_names,) = pack
return name_as_names
@pg.production("from_import_target : LEFT_PARENTHESIS name_as_names RIGHT_PARENTHESIS")
def from_import_parenthesis(pack):
(left_parenthesis, name_as_names, right_parenthesis) = pack
return left_parenthesis.hidden_tokens_before +\
[{"type": "left_parenthesis", "value": "("}] +\
left_parenthesis.hidden_tokens_after +\
name_as_names +\
right_parenthesis.hidden_tokens_before +\
[{"type": "right_parenthesis", "value": ")"}] +\
right_parenthesis.hidden_tokens_after
@pg.production("from_import_target : STAR")
def from_import_star(pack):
(star,) = pack
return [{
"type": "star",
"value": "*",
"first_formatting": star.hidden_tokens_before,
"second_formatting": star.hidden_tokens_after
}]
@pg.production("name_as_names : name_as_names name_as_name")
def name_as_names_name_as_name(pack):
(name_as_names, name_as_name) = pack
return name_as_names + name_as_name
@pg.production("name_as_names : name_as_name")
def name_as_names(pack):
(name_as_name,) = pack
return name_as_name
@pg.production("name_as_name : NAME AS NAME")
def name_as_name_name_as_name(pack):
(name, as_, name2) = pack
return [{
"type": "name_as_name",
"value": name.value,
"first_formatting": as_.hidden_tokens_before,
"second_formatting": as_.hidden_tokens_after,
"target": name2.value
}]
@pg.production("name_as_name : NAME")
def name_as_name_name(pack):
(name,) = pack
return [{
"type": "name_as_name",
"value": name.value,
"target": "",
"first_formatting": [],
"second_formatting": []
}]
@pg.production("name_as_name : NAME SPACE")
def name_as_name_name_space(pack):
(name, space) = pack
return [{
"type": "name_as_name",
"target": None,
"value": name.value,
"first_formatting": [],
"second_formatting": []
}] + [create_node_from_token(space)]
@pg.production("name_as_name : comma")
def name_as_name_comma_space(pack):
(comma,) = pack
return [comma]
@pg.production("dotted_as_names : dotted_as_names comma dotted_as_name")
def dotted_as_names_dotted_as_names_dotted_as_name(pack):
(dotted_as_names, comma, dotted_as_names2) = pack
return dotted_as_names + [comma] + dotted_as_names2
@pg.production("dotted_as_names : dotted_as_name")
def dotted_as_names_dotted_as_name(pack):
(dotted_as_name,) = pack
return dotted_as_name
@pg.production("dotted_as_name : dotted_name AS NAME")
def dotted_as_name_as(pack):
(dotted_name, as_, name) = pack
return [{
"type": "dotted_as_name",
"value": dotted_name,
"first_formatting": as_.hidden_tokens_before,
"second_formatting": as_.hidden_tokens_after,
"target": name.value,
}]
@pg.production("dotted_as_name : dotted_name")
def dotted_as_name(pack):
(dotted_name,) = pack
return [{
"type": "dotted_as_name",
"value": dotted_name,
"first_formatting": [],
"second_formatting": [],
"target": ""
}]
@pg.production("dotted_name : dotted_name dotted_name_element")
def dotted_name_elements_element(pack):
(dotted_name, dotted_name_element) = pack
return dotted_name + dotted_name_element
@pg.production("dotted_name : dotted_name_element")
def dotted_name_element(pack):
(dotted_name_element,) = pack
return dotted_name_element
@pg.production("dotted_name_element : NAME")
@pg.production("dotted_name_element : SPACE")
def dotted_name(pack):
(token,) = pack
return [create_node_from_token(token)]
@pg.production("dotted_name_element : DOT")
def dotted_name_dot(pack):
(dot,) = pack
return [{
"type": "dot",
"first_formatting": dot.hidden_tokens_before,
"second_formatting": dot.hidden_tokens_after,
}]
@pg.production("dotted_name_element : ELLIPSIS")
def dotted_name_dot_dot_dot(pack):
ellipsis = pack[0]
return [{
"type": "ellipsis",
"first_formatting": ellipsis.hidden_tokens_before,
"second_formatting": ellipsis.hidden_tokens_after,
}]
baron-0.10.1/baron/grammator_operators.py 0000664 0000000 0000000 00000044535 14154274402 0020452 0 ustar 00root root 0000000 0000000 from .parser import ParsingError
def include_operators(pg):
@pg.production("old_test : or_test")
@pg.production("old_test : old_lambdef")
def old_test(pack):
(level,) = pack
return level
@pg.production("testlist_safe : old_test comma old_test")
def testlist_safe(pack):
(old_test, comma, old_test2) = pack
return [old_test, comma, old_test2]
@pg.production("testlist_safe : old_test comma testlist_safe")
def testlist_safe_more(pack):
(old_test, comma, testlist_safe) = pack
return [old_test, comma] + testlist_safe
@pg.production("expr_stmt : test COLON test")
def alone_annotation(pack):
target, colon, annotation = pack
return {
"type": "standalone_annotation",
"target": target,
"annotation": annotation, # not called "value" in case someone
# wants to work on both assignment and
# standalone annotations
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
}
@pg.production("expr_stmt : test COLON test EQUAL test")
    def annotated_assignment_node(pack):
target, colon, annotation, equal, test = pack
return {
"type": "assignment",
"first_formatting": equal.hidden_tokens_before if equal else [],
"second_formatting": equal.hidden_tokens_after if equal else [],
"target": target,
"value": test,
"operator": "",
"annotation": annotation,
"annotation_first_formatting": colon.hidden_tokens_before,
"annotation_second_formatting": colon.hidden_tokens_after,
}
@pg.production("expr_stmt : testlist_star_expr augassign_operator testlist")
@pg.production("expr_stmt : testlist_star_expr augassign_operator yield_expr")
    def augmented_assignment_node(pack):
(target, operator, value) = pack
return {
"type": "assignment",
"first_formatting": operator.hidden_tokens_before,
"second_formatting": operator.hidden_tokens_after,
"operator": operator.value[:-1],
"target": target,
"value": value,
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
}
@pg.production("test_or_star_expr : test")
@pg.production("test_or_star_expr : star_expr")
def test_or_star_expr(pack):
return pack[0]
@pg.production("star_expr : STAR expr")
def star_expr(pack):
star, expr = pack
return {
"type": "star_expression",
"formatting": star.hidden_tokens_after,
"value": expr,
}
@pg.production("augassign_operator : PLUS_EQUAL")
@pg.production("augassign_operator : MINUS_EQUAL")
@pg.production("augassign_operator : STAR_EQUAL")
@pg.production("augassign_operator : SLASH_EQUAL")
@pg.production("augassign_operator : PERCENT_EQUAL")
@pg.production("augassign_operator : AMPER_EQUAL")
@pg.production("augassign_operator : AT_EQUAL")
@pg.production("augassign_operator : VBAR_EQUAL")
@pg.production("augassign_operator : CIRCUMFLEX_EQUAL")
@pg.production("augassign_operator : LEFT_SHIFT_EQUAL")
@pg.production("augassign_operator : RIGHT_SHIFT_EQUAL")
@pg.production("augassign_operator : DOUBLE_STAR_EQUAL")
@pg.production("augassign_operator : DOUBLE_SLASH_EQUAL")
def augassign_operator(pack):
(operator,) = pack
return operator
@pg.production("expr_stmt : testlist_star_expr EQUAL yield_expr")
@pg.production("expr_stmt : testlist_star_expr EQUAL expr_stmt")
def assignment_node(pack):
(target, equal, value) = pack
return {
"type": "assignment",
"operator": "",
"value": value,
"target": target,
"first_formatting": equal.hidden_tokens_before,
"second_formatting": equal.hidden_tokens_after,
"annotation": {},
"annotation_first_formatting": [],
"annotation_second_formatting": [],
}
@pg.production("test : or_test IF or_test ELSE test")
def ternary_operator_node(pack):
(first, if_, second, else_, third) = pack
return {
"type": "ternary_operator",
"first": first,
"second": third,
"value": second,
"first_formatting": if_.hidden_tokens_before,
"second_formatting": if_.hidden_tokens_after,
"third_formatting": else_.hidden_tokens_before,
"fourth_formatting": else_.hidden_tokens_after,
}
@pg.production("or_test : and_test OR or_test")
@pg.production("and_test : not_test AND and_test")
def and_or_node(pack):
(first, operator, second) = pack
return {
"type": "boolean_operator",
"value": operator.value,
"first": first,
"second": second,
"first_formatting": operator.hidden_tokens_before,
"second_formatting": operator.hidden_tokens_after,
}
@pg.production("not_test : NOT not_test")
def not_node(pack):
(not_, comparison) = pack
return {
"type": "unitary_operator",
"value": "not",
"target": comparison,
"formatting": not_.hidden_tokens_after
}
@pg.production("comparison : expr LESS comparison")
@pg.production("comparison : expr GREATER comparison")
@pg.production("comparison : expr EQUAL_EQUAL comparison")
@pg.production("comparison : expr LESS_EQUAL comparison")
@pg.production("comparison : expr GREATER_EQUAL comparison")
@pg.production("comparison : expr NOT_EQUAL comparison")
@pg.production("comparison : expr IN comparison")
@pg.production("comparison : expr IS comparison")
def comparison_node(pack):
(expr, comparison_operator, comparison_) = pack
return {
"type": "comparison",
"first": expr,
"value": {
"type": "comparison_operator",
"first": comparison_operator.value,
"second": "",
"formatting": [],
},
"second": comparison_,
"first_formatting": comparison_operator.hidden_tokens_before,
"second_formatting": comparison_operator.hidden_tokens_after
}
@pg.production("comparison : expr IS NOT comparison")
@pg.production("comparison : expr NOT IN comparison")
def comparison_advanced_node(pack):
(expr, comparison_operator, comparison_operator2, comparison_) = pack
return {
"type": "comparison",
"value": {
"type": "comparison_operator",
"first": comparison_operator.value,
"second": comparison_operator2.value,
"formatting": comparison_operator.hidden_tokens_after
},
"first": expr,
"second": comparison_,
"first_formatting": comparison_operator.hidden_tokens_before,
"second_formatting": comparison_operator2.hidden_tokens_after,
}
@pg.production("expr : xor_expr VBAR expr")
@pg.production("xor_expr : and_expr CIRCUMFLEX xor_expr")
@pg.production("and_expr : shift_expr AMPER and_expr")
@pg.production("shift_expr : arith_expr RIGHT_SHIFT shift_expr")
@pg.production("shift_expr : arith_expr LEFT_SHIFT shift_expr")
@pg.production("arith_expr : term PLUS arith_expr")
@pg.production("arith_expr : term MINUS arith_expr")
@pg.production("term : factor STAR term")
@pg.production("term : factor SLASH term")
@pg.production("term : factor PERCENT term")
@pg.production("term : factor DOUBLE_SLASH term")
@pg.production("term : factor AT term")
@pg.production("power : atom DOUBLE_STAR factor")
@pg.production("power : atom DOUBLE_STAR power")
def binary_operator_node(pack):
(first, operator, second) = pack
return {
"type": "binary_operator",
"value": operator.value,
"first": first,
"second": second,
"first_formatting": operator.hidden_tokens_before,
"second_formatting": operator.hidden_tokens_after
}
@pg.production("factor : PLUS factor")
@pg.production("factor : MINUS factor")
@pg.production("factor : TILDE factor")
def factor_unitary_operator_space(pack):
(operator, factor,) = pack
return {
"type": "unitary_operator",
"value": operator.value,
"formatting": operator.hidden_tokens_after,
"target": factor,
}
@pg.production("power : atomtrailers DOUBLE_STAR factor")
@pg.production("power : atomtrailers DOUBLE_STAR power")
def power_atomtrailer_power(pack):
(atomtrailers, double_star, factor) = pack
return {
"type": "binary_operator",
"value": double_star.value,
"first": {
"type": "atomtrailers",
"value": atomtrailers,
},
"second": factor,
"first_formatting": double_star.hidden_tokens_before,
"second_formatting": double_star.hidden_tokens_after
}
@pg.production("power : atomtrailers")
def power_atomtrailers(pack):
(atomtrailers,) = pack
return {
"type": "atomtrailers",
"value": atomtrailers
}
@pg.production("power : NAME SPACE atomtrailers")
def power_atomtrailers_await(pack):
(await_, space, atomtrailers,) = pack
if await_.value != "await":
raise ParsingError("The only possible keyword before an atomtrailers is 'await', not '%s'" % await_.value)
return {
"type": "await",
"formatting": [{'type': 'space', 'value': space.value}],
"value": {
"type": "atomtrailers",
"value": atomtrailers,
}
}
@pg.production("atomtrailers : atom")
def atomtrailers_atom(pack):
(atom,) = pack
return [atom]
@pg.production("atomtrailers : atom trailers")
def atomtrailer(pack):
(atom, trailers) = pack
return [atom] + trailers
@pg.production("trailers : trailer")
def trailers(pack):
(trailer,) = pack
return trailer
@pg.production("trailers : trailers trailer")
def trailers_trailer(pack):
(trailers, trailer) = pack
return trailers + trailer
@pg.production("trailer : DOT NAME")
def trailer(pack):
(dot, name,) = pack
return [{
"type": "dot",
"first_formatting": dot.hidden_tokens_before,
"second_formatting": dot.hidden_tokens_after,
}, {
"type": "name",
"value": name.value,
}]
@pg.production("trailer : LEFT_PARENTHESIS argslist RIGHT_PARENTHESIS")
def trailer_call(pack):
(left, argslist, right) = pack
return [{
"type": "call",
"value": argslist,
"first_formatting": left.hidden_tokens_before,
"second_formatting": left.hidden_tokens_after,
"third_formatting": right.hidden_tokens_before,
"fourth_formatting": right.hidden_tokens_after,
}]
@pg.production("trailer : LEFT_SQUARE_BRACKET subscript RIGHT_SQUARE_BRACKET")
@pg.production("trailer : LEFT_SQUARE_BRACKET subscriptlist RIGHT_SQUARE_BRACKET")
def trailer_getitem_ellipsis(pack):
(left, subscript, right) = pack
return [{
"type": "getitem",
"value": subscript,
"first_formatting": left.hidden_tokens_before,
"second_formatting": left.hidden_tokens_after,
"third_formatting": right.hidden_tokens_before,
"fourth_formatting": right.hidden_tokens_after,
}]
@pg.production("subscript : ELLIPSIS")
@pg.production("atom : ELLIPSIS")
def subscript_ellipsis(pack):
ellipsis = pack[0]
return {
"type": "ellipsis",
"first_formatting": ellipsis.hidden_tokens_after,
"second_formatting": ellipsis.hidden_tokens_after,
}
@pg.production("subscript : test")
@pg.production("subscript : slice")
def subscript_test(pack):
(test,) = pack
return test
@pg.production("slice : COLON")
def slice(pack):
(colon,) = pack
return {
"type": "slice",
"lower": {},
"upper": {},
"step": {},
"has_two_colons": False,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": [],
"fourth_formatting": [],
}
@pg.production("slice : COLON COLON")
def slice_colon(pack):
(colon, colon2) = pack
return {
"type": "slice",
"lower": {},
"upper": {},
"step": {},
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : test COLON")
def slice_lower(pack):
(test, colon,) = pack
return {
"type": "slice",
"lower": test,
"upper": {},
"step": {},
"has_two_colons": False,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": [],
"fourth_formatting": [],
}
@pg.production("slice : test COLON COLON")
def slice_lower_colon_colon(pack):
(test, colon, colon2) = pack
return {
"type": "slice",
"lower": test,
"upper": {},
"step": {},
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : COLON test")
def slice_upper(pack):
(colon, test,) = pack
return {
"type": "slice",
"lower": {},
"upper": test,
"step": {},
"has_two_colons": False,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": [],
"fourth_formatting": [],
}
@pg.production("slice : COLON test COLON")
def slice_upper_colon(pack):
(colon, test, colon2) = pack
return {
"type": "slice",
"lower": {},
"upper": test,
"step": {},
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : COLON COLON test")
def slice_step(pack):
(colon, colon2, test) = pack
return {
"type": "slice",
"lower": {},
"upper": {},
"step": test,
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : test COLON test")
def slice_lower_upper(pack):
(test, colon, test2,) = pack
return {
"type": "slice",
"lower": test,
"upper": test2,
"step": {},
"has_two_colons": False,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": [],
"fourth_formatting": [],
}
@pg.production("slice : test COLON test COLON")
def slice_lower_upper_colon(pack):
(test, colon, test2, colon2) = pack
return {
"type": "slice",
"lower": test,
"upper": test2,
"step": {},
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : test COLON COLON test")
def slice_lower_step(pack):
(test, colon, colon2, test2) = pack
return {
"type": "slice",
"lower": test,
"upper": {},
"step": test2,
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : COLON test COLON test")
def slice_upper_step(pack):
(colon, test, colon2, test2) = pack
return {
"type": "slice",
"lower": {},
"upper": test,
"step": test2,
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
@pg.production("slice : test COLON test COLON test")
def slice_lower_upper_step(pack):
(test, colon, test2, colon2, test3) = pack
return {
"type": "slice",
"lower": test,
"upper": test2,
"step": test3,
"has_two_colons": True,
"first_formatting": colon.hidden_tokens_before,
"second_formatting": colon.hidden_tokens_after,
"third_formatting": colon2.hidden_tokens_before,
"fourth_formatting": colon2.hidden_tokens_after,
}
baron-0.10.1/baron/grammator_primitives.py 0000664 0000000 0000000 00000025041 14154274402 0020616 0 ustar 00root root 0000000 0000000 from .utils import create_node_from_token
def include_primivites(pg, print_function):
if not print_function:
@pg.production("print_stmt : PRINT")
def print_stmt_empty(pack):
(print_,) = pack
return {
"type": "print",
"value": [],
"destination": None,
"destination_formatting": [],
"formatting": [],
}
@pg.production("print_stmt : PRINT testlist")
def print_stmt(pack):
(print_, testlist) = pack
return {
"type": "print",
"value": testlist["value"] if testlist["type"] == "tuple" and testlist["with_parenthesis"] is False else [testlist],
"destination": None,
"destination_formatting": [],
"formatting": print_.hidden_tokens_after,
}
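        # e.g. "print a, b" reaches the rule above as an unparenthesised tuple
        # and its value list (name, comma, name) is inlined, while
        # "print (a, b)" keeps the parenthesised tuple as a single element.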
@pg.production("print_stmt : PRINT RIGHT_SHIFT test")
def print_stmt_redirect(pack):
(print_, right_shift, test) = pack
return {
"type": "print",
"value": [],
"destination": test,
"destination_formatting": right_shift.hidden_tokens_after,
"formatting": print_.hidden_tokens_after,
}
@pg.production("print_stmt : PRINT RIGHT_SHIFT test COMMA testlist")
def print_stmt_redirect_testlist(pack):
(print_, right_shift, test, comma, testlist) = pack
value = [{
"type": "comma",
"first_formatting": comma.hidden_tokens_before,
"second_formatting": comma.hidden_tokens_after,
}]
value += testlist["value"] if testlist["type"] == "tuple" else [testlist]
return {
"type": "print",
"value": value,
"destination": test,
"destination_formatting": right_shift.hidden_tokens_after,
"formatting": print_.hidden_tokens_after,
}
@pg.production("assert_stmt : EXEC expr")
def exec_stmt(pack):
(exec_, expr) = pack
return {
"type": "exec",
"value": expr,
"globals": None,
"locals": None,
"first_formatting": exec_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
"fifth_formatting": []
}
@pg.production("assert_stmt : EXEC expr IN test")
def exec_stmt_in(pack):
(exec_, expr, in_, test) = pack
return {
"type": "exec",
"value": expr,
"globals": test,
"locals": None,
"first_formatting": exec_.hidden_tokens_after,
"second_formatting": in_.hidden_tokens_before,
"third_formatting": in_.hidden_tokens_after,
"fourth_formatting": [],
"fifth_formatting": []
}
@pg.production("assert_stmt : EXEC expr IN test COMMA test")
def exec_stmt_in_comma(pack):
(exec_, expr, in_, test, comma, test2) = pack
return {
"type": "exec",
"value": expr,
"globals": test,
"locals": test2,
"first_formatting": exec_.hidden_tokens_after,
"second_formatting": in_.hidden_tokens_before,
"third_formatting": in_.hidden_tokens_after,
"fourth_formatting": comma.hidden_tokens_before,
"fifth_formatting": comma.hidden_tokens_after
}
@pg.production("flow_stmt : return_stmt")
@pg.production("flow_stmt : break_stmt")
@pg.production("flow_stmt : continue_stmt")
@pg.production("flow_stmt : yield_stmt")
@pg.production("yield_stmt : yield_expr")
def flow(pack):
(flow_stmt,) = pack
return flow_stmt
@pg.production("return_stmt : RETURN")
def return_empty(pack):
(token,) = pack
return {
"type": token.name.lower(),
"value": None,
"formatting": token.hidden_tokens_after,
}
@pg.production("yield_expr : YIELD")
def yield_expr(pack):
(yield_,) = pack
return {
"type": yield_.name.lower(),
"value": None,
"formatting": yield_.hidden_tokens_after,
}
@pg.production("break_stmt : BREAK")
@pg.production("continue_stmt : CONTINUE")
@pg.production("pass_stmt : PASS")
def break_stmt(pack):
(token,) = pack
return {"type": token.name.lower()}
@pg.production("raise_stmt : RAISE")
def raise_stmt_empty(pack):
(raise_,) = pack
return {
"type": "raise",
"value": None,
"instance": None,
"traceback": None,
"first_formatting": raise_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
"fifth_formatting": [],
"comma_or_from": None,
}
@pg.production("raise_stmt : RAISE test")
def raise_stmt(pack):
(raise_, test) = pack
return {
"type": "raise",
"value": test,
"instance": None,
"traceback": None,
"first_formatting": raise_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": [],
"fourth_formatting": [],
"fifth_formatting": [],
"comma_or_from": None,
}
@pg.production("raise_stmt : RAISE test FROM test")
def raise_stmt_from(pack):
(raise_, test, from_, test2) = pack
return {
"type": "raise",
"value": test,
"instance": test2,
"traceback": None,
"first_formatting": raise_.hidden_tokens_after,
"second_formatting": from_.hidden_tokens_before,
"third_formatting": from_.hidden_tokens_after,
"fourth_formatting": [],
"fifth_formatting": [],
"comma_or_from": "from",
}
@pg.production("raise_stmt : RAISE test COMMA test")
def raise_stmt_instance(pack):
(raise_, test, comma, test2) = pack
return {
"type": "raise",
"value": test,
"instance": test2,
"traceback": None,
"first_formatting": raise_.hidden_tokens_after,
"second_formatting": comma.hidden_tokens_before,
"third_formatting": comma.hidden_tokens_after,
"fourth_formatting": [],
"fifth_formatting": [],
"comma_or_from": ",",
}
@pg.production("raise_stmt : RAISE test COMMA test COMMA test")
def raise_stmt_instance_traceback(pack):
(raise_, test, comma, test2, comma2, test3) = pack
return {
"type": "raise",
"value": test,
"instance": test2,
"traceback": test3,
"first_formatting": raise_.hidden_tokens_after,
"second_formatting": comma.hidden_tokens_before,
"third_formatting": comma.hidden_tokens_after,
"fourth_formatting": comma2.hidden_tokens_before,
"fifth_formatting": comma2.hidden_tokens_after,
"comma_or_from": ",",
}
@pg.production("assert_stmt : ASSERT test")
def assert_stmt(pack):
(assert_, test) = pack
return {
"type": "assert",
"value": test,
"message": None,
"first_formatting": assert_.hidden_tokens_after,
"second_formatting": [],
"third_formatting": []
}
@pg.production("assert_stmt : ASSERT test COMMA test")
def assert_stmt_message(pack):
(assert_, test, comma, test2) = pack
return {
"type": "assert",
"value": test,
"message": test2,
"first_formatting": assert_.hidden_tokens_after,
"second_formatting": comma.hidden_tokens_before,
"third_formatting": comma.hidden_tokens_after
}
@pg.production("global_stmt : GLOBAL names")
def global_stmt(pack):
(global_, names) = pack
return {
"type": "global",
"formatting": global_.hidden_tokens_after,
"value": names,
}
@pg.production("nonlocal_stmt : NONLOCAL names")
def nonlocal_stmt(pack):
(token, names) = pack
return {
"type": "nonlocal",
"formatting": token.hidden_tokens_after,
"value": names,
}
@pg.production("names : NAME")
def names_name(pack):
(name,) = pack
return [create_node_from_token(name)]
@pg.production("names : names comma name")
def names_names_name(pack):
(names, comma, name,) = pack
return names + [comma, name]
@pg.production("return_stmt : RETURN testlist")
@pg.production("yield_expr : YIELD testlist")
@pg.production("del_stmt : DEL exprlist")
def return_testlist(pack):
(token, testlist) = pack
return {
"type": token.name.lower(),
"value": testlist,
"formatting": token.hidden_tokens_after,
}
@pg.production("yield_expr : YIELD FROM test")
def yield_from_expr(pack):
(yield_, from_, test) = pack
return {
"type": "yield_from",
"first_formatting": from_.hidden_tokens_after,
"value": test,
"formatting": yield_.hidden_tokens_after,
}
@pg.production("lambdef : LAMBDA COLON test")
@pg.production("old_lambdef : LAMBDA COLON old_test")
def lambdef(pack):
(lambda_, colon, test) = pack
return {
"type": "lambda",
"arguments": [],
"first_formatting": lambda_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
"value": test
}
@pg.production("lambdef : LAMBDA parameters COLON test")
@pg.production("old_lambdef : LAMBDA parameters COLON old_test")
def lambdef_arguments(pack):
(lambda_, parameters, colon, test) = pack
return {
"type": "lambda",
"arguments": parameters,
"first_formatting": lambda_.hidden_tokens_after,
"second_formatting": colon.hidden_tokens_before,
"third_formatting": colon.hidden_tokens_after,
"value": test
}
baron-0.10.1/baron/grouper.py 0000664 0000000 0000000 00000007472 14154274402 0016045 0 ustar 00root root 0000000 0000000 # encoding: utf-8
import re
from .utils import FlexibleIterator
to_group = (
("+", "="),
("-", "="),
("*", "="),
("/", "="),
("%", "="),
("&", "="),
("|", "="),
("^", "="),
("@", "="),
("/", "/"),
("*", "*"),
("<", "<"),
(">", ">"),
("=", "="),
("!", "="),
("<", ">"),
("<", "="),
(">", "="),
("**", "="),
("//", "="),
("<<", "="),
(">>", "="),
("\r", "\n"),
(".", "."),
("..", "."),
("-", ">"),
)
to_group_keys, _ = list(zip(*to_group))
def group(sequence):
return list(group_generator(sequence))
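# For instance, the raw character stream ['*', '*', '=', 'a'] is merged in two
# passes by group_generator below, first into '**' and then into '**=', so:
#
#     group(['*', '*', '=', 'a']) == ['**=', 'a']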
def match_on_next(regex, iterator):
return iterator.show_next() and re.match(regex, iterator.show_next())
def group_generator(sequence):
iterator = FlexibleIterator(sequence)
current = None
while True:
if iterator.end():
return
current = next(iterator)
if current in to_group_keys and matching_found(to_group, current, iterator.show_next()):
current += next(iterator)
if current in to_group_keys and matching_found(to_group, current, iterator.show_next()):
current += next(iterator)
if current in list('uUfFrRbB') and str(iterator.show_next()).startswith(('"', "'")):
current += next(iterator)
if str(current).lower() in ["ur", "br", "fr", "rf"] and str(iterator.show_next()).startswith(('"', "'")):
current += next(iterator)
if any([re.match(x, current) for x in (r'^\d+[eE]$', r'^\d+\.\d*[eE]$', r'^\.\d+[eE]$')]):
current += next(iterator)
current += next(iterator)
            # It's required in a case where I have something like this:
            # ['123.123e', '[+-]', '123']
            assert re.match(r'^\d+[eE][-+]?\d+[jJ]?$', current) or re.match(r'^\d*\.\d*[eE][-+]?\d+[jJ]?$', current)
if current == "\\" and iterator.show_next() in ('\n', '\r\n'):
current += next(iterator)
if re.match(r'^\s+$', str(iterator.show_next())):
current += next(iterator)
if current == "\\" and iterator.show_next() == "\r" and iterator.show_next(2) == "\n":
current += next(iterator)
current += next(iterator)
if re.match(r'^\s+$', str(iterator.show_next())):
current += next(iterator)
if re.match(r'^\s+$', current) and iterator.show_next() == "\\":
current += next(iterator)
current += next(iterator)
if iterator.show_next() == "\n":
current += next(iterator)
if re.match(r'^\s+$', str(iterator.show_next())):
current += next(iterator)
if (re.match(r'^[_\d]+$', current) and match_on_next(r'^\.$', iterator)) or\
(current == "." and match_on_next(r'^\d+[_\d]*([jJ]|[eE]\d*)?$', iterator)):
current += next(iterator)
if match_on_next(r'^[_\d]*[jJ]?$', iterator) and match_on_next(r'^[_\d]*[jJ]?$', iterator).group():
current += next(iterator)
if re.match(r'^\d+\.$', current) and match_on_next(r'^\d*[eE]\d*$', iterator):
current += next(iterator)
if re.match(r'^\d+\.?[eE]$', current) and match_on_next(r'^\d+$', iterator):
current += next(iterator)
        if re.match(r'^\d*\.?\d*[eE]$', current) and not re.match('[eE]', current) \
                and match_on_next(r'^[-+]$', iterator) \
                and iterator.show_next(2) and re.match(r'^\d+$', iterator.show_next(2)):
            current += next(iterator)
            current += next(iterator)
        # edge case where 2 dots follow each other but not 3 (an ellipsis)
if current == "..":
yield "."
yield "."
continue
yield current
def matching_found(to_group, current, target):
return target in [x[1] for x in to_group if x[0] == current]
baron-0.10.1/baron/helpers.py 0000664 0000000 0000000 00000000630 14154274402 0016011 0 ustar 00root root 0000000 0000000 import json
import sys
from os import linesep
from . import parse
def show(source_code):
sys.stdout.write(json.dumps(parse(source_code), indent=4) + linesep)
def show_file(target_file):
with open(target_file, "r") as source_code:
sys.stdout.write(json.dumps(parse(source_code.read()), indent=4) + linesep)
def show_node(node):
sys.stdout.write(json.dumps(node, indent=4) + linesep)
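# Typical usage: show("a = 1") pretty-prints the JSON rendering of the full
# syntax tree produced by parse().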
baron-0.10.1/baron/indentation_marker.py 0000664 0000000 0000000 00000007434 14154274402 0020235 0 ustar 00root root 0000000 0000000 from .utils import FlexibleIterator
"""
Objective: add an INDENT token and a DEDENT token arround every block.
Strategy: put after every ":" that is not in a slice/dictionary declaration/lambda.
Slice and dictionary are easy: increase a number when a "[" or "{" is found,
decrease it for a "]" or "}". If the number is != 0, we are in a dictionary or
slice -> do not put a INDENT when a ":" is found.
Lambda are a bit different: increase another number when a "lambda" is found,
if the number is != 0 and a ":" is found, decrease this number, otherwise put a
INDENT.
For the DEDENT, I'm probably going to need to keep a list of indentation and
decheck the last one every time I encounter a meaningfull line. Still need to
test this idea.
"""
def mark_indentation(sequence):
return list(mark_indentation_generator(sequence))
def transform_tabs_to_spaces(string):
return string.replace("\t", " " * 8)
def get_space(node):
""" Return space formatting information of node.
    If the node does not carry an after-formatting list as its fourth item -
    like in a bare ('ENDL', '\n') node - then we return None as a flag value.
    This is maybe not the best behavior but it seems to work for now.
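    For example, get_space(('ENDL', '\n', [], [('SPACE', '\t')])) returns
    eight spaces and get_space(('ENDL', '\n')) returns None.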
"""
if len(node) < 4 or len(node[3]) == 0:
return None
return transform_tabs_to_spaces(node[3][0][1])
def mark_indentation_generator(sequence):
iterator = FlexibleIterator(sequence)
current = None, None
indentations = []
while True:
if iterator.end():
return
current = next(iterator)
if current is None:
return
# end of the file, I need to pop all indentations left and put the
# corresponding dedent token for them
if current[0] == "ENDMARKER" and indentations:
while len(indentations) > 0:
yield ('DEDENT', '')
indentations.pop()
# if were are at ":\n" like in "if stuff:\n"
if current[0] == "COLON" and iterator.show_next(1)[0] == "ENDL":
# if we aren't in "if stuff:\n\n"
if iterator.show_next(2)[0] not in ("ENDL",):
indentations.append(get_space(iterator.show_next()))
yield current
yield next(iterator)
yield ('INDENT', '')
continue
else: # else, skip all "\n"
yield current
for i in iterator:
if i[0] == 'ENDL' and iterator.show_next()[0] not in ('ENDL',):
indentations.append(get_space(i))
yield ('INDENT', '')
yield i
break
yield i
continue
# if we were in an indented situation and that the next line has a lower indentation
if indentations and current[0] == "ENDL":
the_indentation_level_changed = get_space(current) is None or get_space(current) != indentations[-1]
if the_indentation_level_changed and iterator.show_next()[0] not in ("ENDL", "COMMENT"):
new_indent = get_space(current) if len(current) == 4 else ""
yield current
# pop until reaching the matching indentation level
while indentations and string_is_bigger(indentations[-1], new_indent):
indentations.pop()
yield ('DEDENT', '')
yield next(iterator)
continue
yield current
def string_is_bigger(s1, s2):
""" Return s1 > s2 by taking into account None values.
None is always smaller than any string.
None > "string" works in python2 but not in python3. This function
makes it work in python3 too.
"""
if s1 is None:
return False
elif s2 is None:
return True
else:
return s1 > s2
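# Illustrative behavior:
#
#     string_is_bigger("        ", "    ")  # True, deeper indentation
#     string_is_bigger(None, "    ")        # False, None is smaller than anything
#     string_is_bigger("    ", None)        # True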
baron-0.10.1/baron/inner_formatting_grouper.py 0000664 0000000 0000000 00000012205 14154274402 0021460 0 ustar 00root root 0000000 0000000 from .utils import FlexibleIterator, BaronError
class UnExpectedFormattingToken(BaronError):
pass
class GroupingError(BaronError):
pass
GROUP_THOSE = (
"ENDL",
# TODO test those 2
"COMMENT",
"SPACE",
)
ENTER_GROUPING_MODE = (
"LEFT_PARENTHESIS",
"LEFT_BRACKET",
"LEFT_SQUARE_BRACKET",
)
QUIT_GROUPING_MODE = (
"RIGHT_PARENTHESIS",
"RIGHT_BRACKET",
"RIGHT_SQUARE_BRACKET",
)
GROUP_ON = (
"COMMA",
"COLON",
# TODO test everything below
"STRING",
"RAW_STRING",
"INTERPOLATED_STRING",
"INTERPOLATED_RAW_STRING",
"BINARY_STRING",
"BINARY_RAW_STRING",
"UNICODE_STRING",
"UNICODE_RAW_STRING",
"AS",
"IMPORT",
"DOUBLE_STAR",
"DOT",
"LEFT_SQUARE_BRACKET",
"STAR",
"SLASH",
"PERCENT",
"DOUBLE_SLASH",
"PLUS",
"MINUS",
"LEFT_SHIFT",
"RIGHT_SHIFT",
"AMPER",
"CIRCUMFLEX",
"VBAR",
"LESS",
"GREATER",
"EQUAL_EQUAL",
"LESS_EQUAL",
"GREATER_EQUAL",
"NOT_EQUAL",
"IN",
"IS",
"NOT",
"AND",
"OR",
"AT",
"IF",
"ELSE",
"FROM",
"EQUAL",
"PLUS_EQUAL",
"MINUS_EQUAL",
"AT_EQUAL",
"STAR_EQUAL",
"SLASH_EQUAL",
"PERCENT_EQUAL",
"AMPER_EQUAL",
"VBAR_EQUAL",
"CIRCUMFLEX_EQUAL",
"LEFT_SHIFT_EQUAL",
"RIGHT_SHIFT_EQUAL",
"DOUBLE_STAR_EQUAL",
"DOUBLE_SLASH_EQUAL",
"ENDL",
"FOR",
"COLON",
"RAW_STRING",
"UNICODE_STRING",
"UNICODE_RAW_STRING",
) + ENTER_GROUPING_MODE + QUIT_GROUPING_MODE
def append_to_token_after(token, to_append_list):
if len(token) == 2:
return (token[0], token[1], [], to_append_list)
elif len(token) == 3:
return (token[0], token[1], token[2], to_append_list)
elif len(token) == 4:
return (token[0], token[1], token[2], token[3] + to_append_list)
def append_to_token_before(token, to_append_list):
if len(token) == 2:
return (token[0], token[1], to_append_list, [])
elif len(token) == 3:
return (token[0], token[1], to_append_list + token[2], [])
elif len(token) == 4:
return (token[0], token[1], to_append_list + token[2], token[3])
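# Illustrative behavior (token shapes as used above: a 2-tuple grows empty
# before/after formatting slots, longer tuples get the list merged in):
#
#     append_to_token_after(('COMMA', ','), [('SPACE', ' ')])
#     # -> ('COMMA', ',', [], [('SPACE', ' ')])
#     append_to_token_before(('COMMA', ',', [], []), [('SPACE', ' ')])
#     # -> ('COMMA', ',', [('SPACE', ' ')], [])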
def group(sequence):
return list(group_generator(sequence))
def fail_on_bad_token(token, debug_file_content, in_grouping_mode):
if token[0] in GROUP_ON:
return
debug_file_content += _append_to_debug_file_content(token)
debug_file_content = debug_file_content.split("\n")
debug_file_content = list(zip(range(1, len(debug_file_content) + 1), debug_file_content))
debug_file_content = debug_file_content[-8:]
debug_file_content = "\n".join(["%4s %s" % (x[0], x[1]) for x in debug_file_content])
raise GroupingError("Fail to group formatting tokens, here:\n%s <----\n\n'%s' should have been in: %s\n\nCurrent value of 'in_grouping_mode': %s" % (debug_file_content, token, ', '.join(sorted(GROUP_ON)), in_grouping_mode))
def _append_to_debug_file_content(token):
before_debug = "".join(map(lambda x: x[1], token[2] if len(token) >= 3 else []))
after_debug = "".join(map(lambda x: x[1], token[3] if len(token) >= 4 else []))
return before_debug + token[1] + after_debug
def group_generator(sequence):
iterator = FlexibleIterator(sequence)
current = None, None
in_grouping_mode = 0
debug_file_content = ""
while True:
if iterator.end():
return
current = next(iterator)
debug_file_content += _append_to_debug_file_content(current)
if current[0] in ENTER_GROUPING_MODE:
in_grouping_mode += 1
elif current[0] in QUIT_GROUPING_MODE:
in_grouping_mode -= 1
assert in_grouping_mode >= 0
if in_grouping_mode:
if current[0] in GROUP_THOSE:
to_group = [current]
while iterator.show_next() and iterator.show_next()[0] in GROUP_THOSE:
to_group.append(next(iterator))
debug_file_content += _append_to_debug_file_content(to_group[-1])
# XXX I don't remember how :( but I can end up finding a
# DEDENT/INDENT token in this situation and I don't want to
# group on it. Need to add a test for that.
if iterator.show_next()[0] in ("INDENT", "DEDENT"):
yield next(iterator)
fail_on_bad_token(iterator.show_next(), debug_file_content, in_grouping_mode)
current = append_to_token_before(next(iterator), to_group)
if current[0] in ENTER_GROUPING_MODE:
in_grouping_mode += 1
# TODO test
if current[0] in QUIT_GROUPING_MODE:
in_grouping_mode -= 1
assert in_grouping_mode >= 0
yield current
continue
if current[0] in GROUP_ON:
while iterator.show_next() and iterator.show_next()[0] in GROUP_THOSE:
debug_file_content += _append_to_debug_file_content(iterator.show_next())
current = append_to_token_after(current, [next(iterator)])
yield current
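# Illustrative sketch: inside a grouping context such as "(1, 2)", standalone
# formatting tokens get attached to the neighboring syntactic token:
#
#     [..., ('COMMA', ','), ('SPACE', ' '), ('INT', '2'), ...]
#     # becomes
#     [..., ('COMMA', ',', [], [('SPACE', ' ')]), ('INT', '2'), ...]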
baron-0.10.1/baron/parser.py 0000664 0000000 0000000 00000014236 14154274402 0015652 0 ustar 00root root 0000000 0000000 import errno
import os
import json
import stat
import tempfile
import warnings
from .token import BaronToken
from rply import ParserGenerator
from rply.parser import LRParser
from rply.parsergenerator import LRTable
from rply.errors import ParserGeneratorWarning
from rply.grammar import Grammar
from .utils import BaronError
class ParsingError(BaronError):
pass
class BaronParserGenerator(ParserGenerator):
def build(self):
g = Grammar(self.tokens)
for level, (assoc, terms) in enumerate(self.precedence, 1):
for term in terms:
g.set_precedence(term, assoc, level)
for prod_name, syms, func, precedence in self.productions:
g.add_production(prod_name, syms, func, precedence)
g.set_start()
for unused_term in g.unused_terminals():
warnings.warn(
"Token %r is unused" % unused_term,
ParserGeneratorWarning,
stacklevel=2
)
for unused_prod in g.unused_productions():
warnings.warn(
"Production %r is not reachable" % unused_prod,
ParserGeneratorWarning,
stacklevel=2
)
g.build_lritems()
g.compute_first()
g.compute_follow()
# win32 temp directories are already per-user
if os.name == "nt":
cache_file = os.path.join(
tempfile.gettempdir(),
"rply-%s-%s-%s.json" % (self.VERSION, self.cache_id, self.compute_grammar_hash(g))
)
else:
cache_file = os.path.join(
tempfile.gettempdir(),
"rply-%s-%s-%s-%s.json" % (self.VERSION, os.getuid(), self.cache_id, self.compute_grammar_hash(g))
)
table = None
if os.path.exists(cache_file):
with open(cache_file) as f:
try:
data = json.load(f)
except Exception:
os.remove(cache_file)
data = None
if data is not None:
stat_result = os.fstat(f.fileno())
if (
os.name == "nt" or (
stat_result.st_uid == os.getuid()
and stat.S_IMODE(stat_result.st_mode) == 0o0600
)
):
if self.data_is_valid(g, data):
table = LRTable.from_cache(g, data)
if table is None:
table = LRTable.from_grammar(g)
try:
fd = os.open(cache_file, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o0600)
except OSError as e:
if e.errno != errno.EEXIST:
raise
else:
with os.fdopen(fd, "w") as f:
json.dump(self.serialize_table(table), f)
# meh :(
# if table.sr_conflicts:
# warnings.warn(
# "%d shift/reduce conflict%s" % (len(table.sr_conflicts), "s" if len(table.sr_conflicts) > 1 else ""),
# ParserGeneratorWarning,
# stacklevel=2,
# )
# if table.rr_conflicts:
# warnings.warn(
# "%d reduce/reduce conflict%s" % (len(table.rr_conflicts), "s" if len(table.rr_conflicts) > 1 else ""),
# ParserGeneratorWarning,
# stacklevel=2,
# )
return BaronLRParser(table, self.error_handler)
class BaronLRParser(LRParser):
def parse(self, tokenizer, state=None):
lookahead = None
lookaheadstack = []
statestack = [0]
symstack = [BaronToken("$end", "$end")]
current_state = 0
parsed_file_content = ""
while True:
if self.lr_table.default_reductions[current_state]:
t = self.lr_table.default_reductions[current_state]
current_state = self._reduce_production(t, symstack, statestack, state)
continue
if lookahead is None:
if lookaheadstack:
lookahead = lookaheadstack.pop()
else:
try:
lookahead = next(tokenizer)
except StopIteration:
lookahead = None
if lookahead is None:
lookahead = BaronToken("$end", "$end")
else:
parsed_file_content += lookahead.render()
ltype = lookahead.gettokentype()
if ltype in self.lr_table.lr_action[current_state]:
t = self.lr_table.lr_action[current_state][ltype]
if t > 0:
statestack.append(t)
current_state = t
symstack.append(lookahead)
lookahead = None
continue
elif t < 0:
current_state = self._reduce_production(t, symstack, statestack, state)
continue
else:
n = symstack[-1]
return n
else:
debug_output = parsed_file_content.split("\n")
debug_output = list(zip(range(1, len(debug_output) + 1), debug_output))
debug_output = debug_output[-8:]
debug_output = "\n".join(["%4s %s" % (x[0], x[1]) for x in debug_output])
debug_output += "<---- here"
debug_output = "Error, got an unexpected token %s here:\n\n" % ltype + debug_output
debug_output += "\n\nThe token %s should be one of those: %s" % (ltype, ", ".join(sorted(self.lr_table.lr_action[current_state].keys())))
debug_output += "\n\nBaron has failed to parse this input. If this is valid python code (and by that I mean that the python binary successfully parse this code without any syntax error) (also consider that baron does not yet parse python 3 code integrally) it would be kind if you can extract a snippet of your code that make Baron fails and open a bug here: https://github.com/PyCQA/baron/issues\n\nSorry for the inconvenience."
raise ParsingError(debug_output)
baron-0.10.1/baron/path.py 0000664 0000000 0000000 00000022543 14154274402 0015312 0 ustar 00root root 0000000 0000000 from .render import RenderWalker, child_by_key
from .utils import is_newline, split_on_newlines, total_ordering
from copy import deepcopy
def position_to_path(tree, position):
"""Path to the node located at the given line and column
This function locates a node in the rendered source code.
"""
return PositionFinder().find(tree, position)
def path_to_node(tree, path):
"""FST node located at the given path"""
if path is None:
return None
node = tree
for key in path:
node = child_by_key(node, key)
return node
def position_to_node(tree, position):
"""FST node located at the given line and column"""
return path_to_node(tree, position_to_path(tree, position))
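# Illustrative usage sketch (assuming `tree = baron.parse("a = 1")`):
#
#     position_to_path(tree, (1, 5))  # path to the node rendered at line 1, column 5
#     position_to_node(tree, (1, 5))  # the node itself, here the int node for "1"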
def node_to_bounding_box(node):
"""Bounding box of the given node
The bounding box of a node represents its left-most and right-most
positions in the rendered source code. Since the node is considered on
its own, its top-left position is always (1, 1).
"""
return BoundingBoxFinder().compute(node)
def path_to_bounding_box(tree, path):
"""Absolute bounding box of the node located at the given path"""
return BoundingBoxFinder().compute(tree, path)
@total_ordering
class Position(object):
"""Handles a cursor's line and column
Operations requiring another Position as argument can be given an
indexable object of len >= 2 where the index 0 contains the line and
the index 1 contains the column. For example a tuple of len 2.
"""
def __init__(self, position):
if hasattr(position, 'line') and hasattr(position, 'column'):
self.line = position.line
self.column = position.column
elif len(position) >= 2:
self.line = position[0]
self.column = position[1]
else:
raise AttributeError(position)
def advance_columns(self, columns):
"""(3, 10) -> (3, 11)"""
self.column += columns
def advance_line(self):
"""(3, 10) -> (4, 1)"""
self.line += 1
self.column = 1
@property
def left(self):
"""(3, 10) -> (3, 9)"""
return Position((self.line, self.column - 1))
@property
def right(self):
"""(3, 10) -> (3, 11)"""
return Position((self.line, self.column + 1))
def __add__(self, other):
"""(1, 1) + (1, 1) -> (2, 2)"""
other = Position(other)
return Position((self.line + other.line,
self.column + other.column))
def __neg__(self):
"""(1, -1) -> (-1, 1)"""
return Position((-self.line, -self.column))
def __sub__(self, other):
"""(1, 1) - (1, 1) -> (0, 0)"""
other = Position(other)
return Position((self.line - other.line,
self.column - other.column))
def __nonzero__(self):
return self.line >= 0 and self.column >= 0
def __bool__(self):
return self.__nonzero__()
def __eq__(self, other):
"""Compares Positions or Position and tuple
Will not fail if other is an unsupported type"""
if not (hasattr(other, 'line') and hasattr(other, 'column')) and len(other) < 2:
return False
other = Position(other)
return self.line == other.line and self.column == other.column
def __lt__(self, other):
"""Compares Position with Position or indexable object"""
other = Position(other)
return (self.line, self.column) < (other.line, other.column)
def __repr__(self):
return 'Position (%s, %s)' % (str(self.line), str(self.column))
def to_tuple(self):
return (self.line, self.column)
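# Illustrative behavior:
#
#     Position((3, 10)) + (1, 1)         # -> Position (4, 11)
#     Position((3, 10)).left.to_tuple()  # -> (3, 9)
#     Position((3, 10)) == (3, 10)       # -> True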
class BoundingBox(object):
"""Handles a selection's top_left and bottom_right position
Operations requiring another BoundingBox as argument can be given an
indexable object of len >= 2 where the index 0 contains the top_left
position, either as a Position or an indexable object. The index
1 must contain, in a similar manner, the bottom_right position.
"""
def __init__(self, bounding_box):
if hasattr(bounding_box, 'top_left') and hasattr(bounding_box, 'bottom_right'):
self.top_left = Position(bounding_box.top_left)
self.bottom_right = Position(bounding_box.bottom_right)
elif len(bounding_box) >= 2:
self.top_left = Position(bounding_box[0])
self.bottom_right = Position(bounding_box[1])
else:
raise AttributeError(bounding_box)
def __eq__(self, other):
"""Compares BoundingBox with BoundingBox or indexable object"""
if not (hasattr(other, 'top_left') and hasattr(other, 'bottom_right')) and len(other) < 2:
return False
other = BoundingBox(other)
return self.top_left == other.top_left and self.bottom_right == other.bottom_right
def __repr__(self):
return 'BoundingBox (%s, %s)' % (str(self.top_left), str(self.bottom_right))
class PathWalker(RenderWalker):
"""Gives the current path while walking the rendered tree
It adds an attribute "current_path" which is updated each time the
walker takes a step.
"""
def walk(self, tree):
self.current_path = []
super(PathWalker, self).walk(tree)
def before(self, key_type, item, render_key):
if render_key is not None:
self.current_path.append(render_key)
return super(PathWalker, self).before(key_type, item, render_key)
def after(self, key_type, item, render_key):
stop = super(PathWalker, self).after(key_type, item, render_key)
if render_key is not None:
self.current_path.pop()
return stop
class PositionFinder(PathWalker):
"""Find a node by line and column and return the path to it.
First, walk through all the nodes while maintaining the current line
and column. When the targeted node is found, stop there and build
the path while going back up the tree.
"""
def find(self, tree, position):
self.current = Position((1, 1))
self.target = Position(position)
self.found_path = None
self.walk(tree)
return self.found_path
def before_constant(self, constant, key):
"""Determine if we're on the targetted node.
If the targetted column is reached, `stop` and `path_found` are
set. If the targetted line is passed, only `stop` is set. This
prevents unnecessary tree travelling when the targetted column
is out of bounds.
"""
newlines_split = split_on_newlines(constant)
for c in newlines_split:
if is_newline(c):
self.current.advance_line()
# if target line is passed
if self.current.line > self.target.line:
return self.STOP
else:
advance_by = len(c)
if self.is_on_targetted_node(advance_by):
self.found_path = deepcopy(self.current_path)
return self.STOP
self.current.advance_columns(advance_by)
before_string = before_constant
def is_on_targetted_node(self, advance_by):
return self.target.line == self.current.line \
and self.target.column >= self.current.column \
and self.target.column < self.current.column + advance_by
class BoundingBoxFinder(PathWalker):
"""Compute the bounding box of the given node.
First, walk to the target path while incrementing the position.
When reached, the top-left position is set to the current position.
Then walk the whole node, still incrementing the position. When
arriving at the end of the node, store the previous position, not
the current one, as the bottom-right position.
If no target path is given, assume the targeted node is the whole
tree.
"""
def compute(self, tree, target_path=None):
self.target_path = target_path
self.current_position = Position((1, 1))
self.left_of_current_position = Position((1, 0))
self.top_left = None
self.bottom_right = None
self.found = True if self.target_path is None or len(target_path) == 0 else False
self.walk(tree)
if self.found and self.top_left is None and self.bottom_right is None:
return BoundingBox((Position((1, 1)), self.left_of_current_position))
return BoundingBox((self.top_left, self.bottom_right))
def before(self, key_type, item, render_key):
stop = super(BoundingBoxFinder, self).before(key_type, item, render_key)
if self.current_path == self.target_path:
self.found = True
self.top_left = deepcopy(self.current_position)
if key_type not in ['constant', 'string']:
return stop
newlines_split = split_on_newlines(item)
for c in newlines_split:
if is_newline(c):
self.current_position.advance_line()
self.left_of_current_position = self.current_position.left
elif c != "":
self.current_position.advance_columns(len(c))
self.left_of_current_position = self.current_position.left
return stop
def after(self, key_type, item, render_key):
if self.bottom_right is None and self.found and self.current_path == self.target_path:
self.bottom_right = deepcopy(self.left_of_current_position)
return super(BoundingBoxFinder, self).after(key_type, item, render_key)
baron-0.10.1/baron/render.py 0000664 0000000 0000000 00000116245 14154274402 0015640 0 ustar 00root root 0000000 0000000 import sys
import json
def render(node, strict=False):
"""Recipe to render a given FST node.
The FST is composed of branch nodes which are either lists or dicts
and of leaf nodes which are strings. Branch nodes can have other
list, dict or leaf nodes as children.
To render a string, simply output it. To render a list, render each
of its elements in order. To render a dict, you must follow the
node's entry in the nodes_rendering_order dictionary and its
dependent constraints.
This function hides all this algorithmic complexity by returning
a structured rendering recipe, whatever the type of node. But even
better, you should subclass the RenderWalker, which drastically
simplifies working with the rendered FST.
The recipe is a list of steps; each step corresponds to a child and is actually a 3-tuple composed of the following fields:
- `key_type` is a string determining the type of the child in the second field (`item`) of the tuple. It can be one of:
- 'constant': the child is a string
- 'node': the child is a dict
- 'key': the child is an element of a dict
- 'list': the child is a list
- 'formatting': the child is a list specialized in formatting
- `item` is the child itself: either a string, a dict or a list.
- `render_key` gives the key used to access this child from the parent node. It's a string if the node is a dict or a number if it's a list.
Please note that "bool" `key_types` are never rendered, that's why
they are not shown here.
"""
if isinstance(node, list):
return render_list(node)
elif isinstance(node, dict):
return render_node(node, strict=strict)
else:
raise NotImplementedError("You tried to render a %s. Only list and dicts can be rendered." % node.__class__.__name__)
def render_list(node):
for pos, child in enumerate(node):
yield ('node', child, pos)
def render_node(node, strict=False):
if node["type"] not in nodes_rendering_order:
raise Exception("There are no defined rules for rendering a node of type '%s', has it been defined in render.py?" % node["type"])
for key_type, render_key, dependent in nodes_rendering_order[node['type']]:
if not dependent:
continue
elif key_type == "bool":
raise NotImplementedError("Bool keys are only used for dependency, they cannot be rendered. Please set the \"%s\"'s dependent key in \"%s\" node to False" % ((key_type, render_key, dependent), node['type']))
elif isinstance(dependent, str) and not node.get(dependent):
continue
elif isinstance(dependent, list) and not all([node.get(x) for x in dependent]):
continue
if strict:
try:
if key_type == "key":
assert isinstance(node[render_key], (dict, type(None))), "Key '%s' is expected to have type of 'key' (dict/None) but has type of '%s' instead" % (render_key, type(node[render_key]))
elif key_type == "string":
assert isinstance(node[render_key], str), "Key '%s' is expected to have type of 'string' but has type of '%s' instead" % (render_key, type(node[render_key]))
elif key_type in ("list", "formatting"):
assert isinstance(node[render_key], list), "Key '%s' is expected to have type of 'list' but has type of '%s' instead" % (render_key, type(node[render_key]))
elif key_type == "constant":
pass
else:
raise Exception("Invalid key_type '%s', should be one of those: key, string, list, formatting" % key_type)
if dependent is True:
pass
elif isinstance(dependent, str):
assert dependent in node
elif isinstance(dependent, list):
assert all([x in node for x in dependent])
except AssertionError as e:
sys.stdout.write("Where node.type == '%s', render_key == '%s' and node ==\n%s\n" % (node["type"], render_key, json.dumps(node, indent=4, sort_keys=True)))
raise e
if key_type in ['key', 'string', 'list', 'formatting']:
yield (key_type, node[render_key], render_key)
elif key_type == 'constant':  # 'string' is already handled by the branch above
yield (key_type, render_key, render_key)
else:
raise NotImplementedError("Unknown key type \"%s\" in \"%s\" node" % (key_type, node['type']))
node_types = set(['node', 'list', 'key', 'formatting', 'constant', 'bool', 'string'])
def node_keys(node):
return [key for (_, key, _) in nodes_rendering_order[node['type']]]
def child_by_key(node, key):
if isinstance(node, list):
return node[key]
if key in node:
return node[key]
if key in node_keys(node):
return key
raise AttributeError("Cannot access key \"%s\" in node \"%s\"" % (key, node))
# as a surprising exception, we won't honor pep8 here because it really increases readability
nodes_rendering_order = {
"int": [("string", "value", True)], # noqa
"long": [("string", "value", True)], # noqa
"name": [("string", "value", True)], # noqa
"hexa": [("string", "value", True)], # noqa
"octa": [("string", "value", True)], # noqa
"float": [("string", "value", True)], # noqa
"space": [("string", "value", True)], # noqa
"binary": [("string", "value", True)], # noqa
"complex": [("string", "value", True)], # noqa
"float_exponant": [("string", "value", True)], # noqa
"left_parenthesis": [("string", "value", True)], # noqa
"right_parenthesis": [("string", "value", True)], # noqa
"float_exponant_complex": [("string", "value", True)], # noqa
"break": [("string", "type", True)], # noqa
"continue": [("string", "type", True)], # noqa
"pass": [("string", "type", True)], # noqa
"dotted_name": [("list", "value", True)], # noqa
"ifelseblock": [("list", "value", True)], # noqa
"atomtrailers": [("list", "value", True)], # noqa
"string_chain": [("list", "value", True)], # noqa
"endl": [
("formatting", "formatting", True), # noqa
("string", "value", True), # noqa
("string", "indent", True), # noqa
],
"star": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"star_expression": [
("constant", "*", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", True), # noqa
],
"string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"raw_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"binary_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"unicode_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"binary_raw_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"interpolated_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"interpolated_raw_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"unicode_raw_string": [
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
],
# FIXME ugly, comment can end up in formatting of another
# node or being standalone, this is bad
"comment": [
("formatting", "formatting", "formatting"), # noqa
("string", "value", True), # noqa
],
"ternary_operator": [
("key", "first", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", "if", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "else", True), # noqa
("formatting", "fourth_formatting", True), # noqa
("key", "second", True), # noqa
],
"ellipsis": [
("constant", ".", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ".", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ".", True), # noqa
],
"dot": [
("formatting", "first_formatting", True), # noqa
("constant", ".", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"semicolon": [
("formatting", "first_formatting", True), # noqa
("constant", ";", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"comma": [
("formatting", "first_formatting", True), # noqa
("constant", ",", True), # noqa
("formatting", "second_formatting", True), # noqa
],
"call": [
("formatting", "first_formatting", True), # noqa
("constant", "(", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", ")", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"decorator": [
("constant", "@", True), # noqa
("key", "value", True), # noqa
("key", "call", "call"), # noqa
],
"class": [
("list", "decorators", True), # noqa
("constant", "class", True), # noqa
("formatting", "first_formatting", True), # noqa
("string", "name", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", "(", "parenthesis"), # noqa
("formatting", "third_formatting", True), # noqa
("list", "inherit_from", True), # noqa
("formatting", "fourth_formatting", True), # noqa
("constant", ")", "parenthesis"), # noqa
("formatting", "fifth_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "sixth_formatting", True), # noqa
("list", "value", True), # noqa
("bool", "parenthesis", False), # noqa
],
"repr": [
("constant", "`", True), # noqa
("formatting", "first_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", "`", True), # noqa
],
"list": [
("formatting", "first_formatting", True), # noqa
("constant", "[", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "]", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"associative_parenthesis": [
("formatting", "first_formatting", True), # noqa
("constant", "(", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", ")", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"tuple": [
("formatting", "first_formatting", "with_parenthesis"), # noqa
("constant", "(", "with_parenthesis"), # noqa
("formatting", "second_formatting", "with_parenthesis"), # noqa
("list", "value", True), # noqa
("formatting", "third_formatting", "with_parenthesis"), # noqa
("constant", ")", "with_parenthesis"), # noqa
("formatting", "fourth_formatting", "with_parenthesis"), # noqa
("bool", "with_parenthesis", False), # noqa
],
"await": [
("constant", "await", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", True), # noqa
],
"def": [
("list", "decorators", True), # noqa
("bool", "async", False), # noqa
("constant", "async", "async"), # noqa
("formatting", "async_formatting", "async"), # noqa
("constant", "def", True), # noqa
("formatting", "first_formatting", True), # noqa
("string", "name", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", "(", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "arguments", True), # noqa
("formatting", "fourth_formatting", True), # noqa
("constant", ")", True), # noqa
("formatting", "return_annotation_first_formatting", "return_annotation"), # noqa
("constant", "->", "return_annotation"), # noqa
("formatting", "return_annotation_second_formatting", "return_annotation"), # noqa
("key", "return_annotation", "return_annotation"), # noqa
("formatting", "fifth_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "sixth_formatting", True), # noqa
("list", "value", True), # noqa
],
"call_argument": [
("key", "target", "target"), # noqa
("formatting", "first_formatting", "target"), # noqa
("constant", "=", "target"), # noqa
("formatting", "second_formatting", "target"), # noqa
("key", "value", True), # noqa
],
"def_argument": [
("key", "target", True), # noqa
("formatting", "annotation_first_formatting", "annotation"), # noqa
("constant", ":", "annotation"), # noqa
("formatting", "annotation_second_formatting", "annotation"), # noqa
("key", "annotation", "annotation"), # noqa
("formatting", "first_formatting", "value"), # noqa
("constant", "=", "value"), # noqa
("formatting", "second_formatting", "value"), # noqa
("key", "value", "value"), # noqa
],
"list_argument": [
("constant", "*", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "annotation_first_formatting", "annotation"), # noqa
("constant", ":", "annotation"), # noqa
("formatting", "annotation_second_formatting", "annotation"), # noqa
("key", "annotation", "annotation"), # noqa
],
"kwargs_only_marker": [
("constant", "*", True), # noqa
("formatting", "formatting", True), # noqa
],
"dict_argument": [
("constant", "**", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "annotation_first_formatting", "annotation"), # noqa
("constant", ":", "annotation"), # noqa
("formatting", "annotation_second_formatting", "annotation"), # noqa
("key", "annotation", "annotation"), # noqa
],
"return": [
("constant", "return", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", "value"), # noqa
],
"raise": [
("constant", "raise", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "value", "value"), # noqa
("formatting", "second_formatting", "instance"), # noqa
("string", "comma_or_from", "instance"), # noqa
("formatting", "third_formatting", "instance"), # noqa
("key", "instance", "instance"), # noqa
("formatting", "fourth_formatting", "traceback"), # noqa
("constant", ",", "traceback"), # noqa
("formatting", "fifth_formatting", "traceback"), # noqa
("key", "traceback", "traceback"), # noqa
],
"assert": [
("constant", "assert", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "second_formatting", "message"), # noqa
("constant", ",", "message"), # noqa
("formatting", "third_formatting", "message"), # noqa
("key", "message", "message"), # noqa
],
"set_comprehension": [
("formatting", "first_formatting", True), # noqa
("constant", "{", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "result", True), # noqa
("list", "generators", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "}", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"dict_comprehension": [
("formatting", "first_formatting", True), # noqa
("constant", "{", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "result", True), # noqa
("list", "generators", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "}", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"argument_generator_comprehension": [
("key", "result", True), # noqa
("list", "generators", True), # noqa
],
"generator_comprehension": [
("formatting", "first_formatting", True), # noqa
("constant", "(", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "result", True), # noqa
("list", "generators", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", ")", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"list_comprehension": [
("formatting", "first_formatting", True), # noqa
("constant", "[", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "result", True), # noqa
("list", "generators", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "]", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"comprehension_loop": [
("formatting", "first_formatting", True), # noqa
("constant", "for", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "iterator", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "in", True), # noqa
("formatting", "fourth_formatting", True), # noqa
("key", "target", True), # noqa
("list", "ifs", True), # noqa
],
"comprehension_if": [
("formatting", "first_formatting", True), # noqa
("constant", "if", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
],
"getitem": [
("formatting", "first_formatting", True), # noqa
("constant", "[", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "]", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"slice": [
("key", "lower", "lower"), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "upper", "upper"), # noqa
("formatting", "third_formatting", "has_two_colons"), # noqa
("constant", ":", "has_two_colons"), # noqa
("formatting", "fourth_formatting", "has_two_colons"), # noqa
("key", "step", ["has_two_colons", "step"]), # noqa
("bool", "has_two_colons", False), # noqa
],
"assignment": [
("key", "target", True), # noqa
("formatting", "annotation_first_formatting", "annotation"), # noqa
("constant", ":", "annotation"), # noqa
("formatting", "annotation_second_formatting", "annotation"), # noqa
("key", "annotation", "annotation"), # noqa
("formatting", "first_formatting", True), # noqa
# FIXME should probably be a different node type # noqa
("string", "operator", "operator"), # noqa
("constant", "=", "target"), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
],
"standalone_annotation": [
("key", "target", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "annotation", True), # noqa
],
"unitary_operator": [
("string", "value", True), # noqa
("formatting", "formatting", True), # noqa
("key", "target", True), # noqa
],
"binary_operator": [
("key", "first", True), # noqa
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "second", True), # noqa
],
"boolean_operator": [
("key", "first", True), # noqa
("formatting", "first_formatting", True), # noqa
("string", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "second", True), # noqa
],
"comparison_operator": [
("string", "first", True), # noqa
("formatting", "formatting", True), # noqa
("string", "second", "second"), # noqa
],
"comparison": [
("key", "first", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "second", True), # noqa
],
"with": [
("bool", "async", False), # noqa
("constant", "async", "async"), # noqa
("formatting", "async_formatting", "async"), # noqa
("constant", "with", True), # noqa
("formatting", "first_formatting", True), # noqa
("list", "contexts", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "value", True), # noqa
],
"with_context_item": [
("key", "value", True), # noqa
("formatting", "first_formatting", "as"), # noqa
("constant", "as", "as"), # noqa
("formatting", "second_formatting", "as"), # noqa
("key", "as", "as"), # noqa
],
"nonlocal": [
("constant", "nonlocal", True), # noqa
("formatting", "formatting", True), # noqa
("list", "value", True), # noqa
],
"del": [
("constant", "del", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", True), # noqa
],
"yield": [
("constant", "yield", True), # noqa
("formatting", "formatting", True), # noqa
("key", "value", "value"), # noqa
],
"yield_from": [
("constant", "yield", True), # noqa
("formatting", "formatting", True), # noqa
("constant", "from", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "value", "value"), # noqa
],
"yield_atom": [
("constant", "(", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", "yield", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", "value"), # noqa
("formatting", "third_formatting", True), # noqa
("constant", ")", True), # noqa
],
"exec": [
("constant", "exec", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "value", True), # noqa
("formatting", "second_formatting", "globals"), # noqa
("constant", "in", "globals"), # noqa
("formatting", "third_formatting", "globals"), # noqa
("key", "globals", "globals"), # noqa
("formatting", "fourth_formatting", "locals"), # noqa
("constant", ",", "locals"), # noqa
("formatting", "fifth_formatting", "locals"), # noqa
("key", "locals", "locals"), # noqa
],
"global": [
("constant", "global", True), # noqa
("formatting", "formatting", True), # noqa
("list", "value", True), # noqa
],
"while": [
("constant", "while", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "test", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "value", True), # noqa
("key", "else", "else"), # noqa
],
"for": [
("bool", "async", False), # noqa
("constant", "async", "async"), # noqa
("formatting", "async_formatting", "async"), # noqa
("constant", "for", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "iterator", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", "in", True), # noqa
("formatting", "third_formatting", True), # noqa
("key", "target", True), # noqa
("formatting", "fourth_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "fifth_formatting", True), # noqa
("list", "value", True), # noqa
("key", "else", "else"), # noqa
],
"if": [
("constant", "if", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "test", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "value", True), # noqa
],
"elif": [
("constant", "elif", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "test", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "value", True), # noqa
],
"else": [
("constant", "else", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
],
"lambda": [
("constant", "lambda", True), # noqa
("formatting", "first_formatting", True), # noqa
("list", "arguments", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "third_formatting", True), # noqa
("key", "value", True), # noqa
],
"try": [
("constant", "try", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
("list", "excepts", True), # noqa
("key", "else", "else"), # noqa
("key", "finally", "finally"), # noqa
],
"except": [
("constant", "except", True), # noqa
("formatting", "first_formatting", True), # noqa
("key", "exception", "exception"), # noqa
("formatting", "second_formatting", "delimiter"), # noqa
("string", "delimiter", "delimiter"), # noqa
("formatting", "third_formatting", "delimiter"), # noqa
("key", "target", "delimiter"), # noqa
("formatting", "fourth_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "fifth_formatting", True), # noqa
("list", "value", True), # noqa
],
"finally": [
("constant", "finally", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
],
"dict": [
("formatting", "first_formatting", True), # noqa
("constant", "{", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "}", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"set": [
("formatting", "first_formatting", True), # noqa
("constant", "{", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "third_formatting", True), # noqa
("constant", "}", True), # noqa
("formatting", "fourth_formatting", True), # noqa
],
"dictitem": [
("key", "key", True), # noqa
("formatting", "first_formatting", True), # noqa
("constant", ":", True), # noqa
("formatting", "second_formatting", True), # noqa
("key", "value", True), # noqa
],
"import": [
("formatting", "first_formatting", True), # noqa
("constant", "import", True), # noqa
("formatting", "second_formatting", True), # noqa
("list", "value", True), # noqa
],
"from_import": [
("constant", "from", True), # noqa
("formatting", "first_formatting", True), # noqa
("list", "value", True), # noqa
("formatting", "second_formatting", True), # noqa
("constant", "import", True), # noqa
("formatting", "third_formatting", True), # noqa
("list", "targets", True), # noqa
],
"dotted_as_name": [
("list", "value", True), # noqa
("formatting", "first_formatting", "target"), # noqa
("constant", "as", "target"), # noqa
("formatting", "second_formatting", "target"), # noqa
("string", "target", "target"), # noqa
],
"name_as_name": [
("string", "value", True), # noqa
("formatting", "first_formatting", "target"), # noqa
("constant", "as", "target"), # noqa
("formatting", "second_formatting", "target"), # noqa
("string", "target", "target"), # noqa
],
"print": [
("constant", "print", True), # noqa
("formatting", "formatting", True), # noqa
("constant", ">>", "destination"), # noqa
("formatting", "destination_formatting", "destination"), # noqa
("key", "destination", "destination"), # noqa
("list", "value", "value"), # noqa
],
}
class RenderWalker(object):
"""Inherit me and overload the methods you want.
When calling walk() on an FST node, this class will traverse all the
node's subtree by following the recipe given by the `render`
function for the node and recursively for all its children. At each
recipe step, it will call methods that you can override to perform
specific processing.
For every "node", "key", "list", "formatting" and "constant" child,
it will call the `before` method when going down the tree and the
`after` method when going up. There are also specific
`before_[node,key,list,formatting,constant]` and
`after_[node,key,list,formatting,constant]` methods provided for
convenience.
The latter are called on specific steps:
* before_list: called before encountering a list of nodes
* after_list: called after encountering a list of nodes
* before_formatting: called before encountering a formatting list
* after_formatting: called after encountering a formatting list
* before_node: called before encountering a node
* after_node: called after encountering a node
* before_key: called before encountering a key type entry
* after_key: called after encountering a key type entry
* before_leaf: called before encountering a leaf of the FST (can be a constant (like "def" in a function definition) or an actual value like the value of a name node)
* after_leaf: called after encountering a leaf of the FST (can be a constant (like "def" in a function definition) or an actual value like the value of a name node)
Every method has the same signature: (self, node, render_key).
"""
STOP = True
def __init__(self, strict=False):
self.strict = strict
def before_list(self, node, render_key):
pass
def after_list(self, node, render_key):
pass
def before_formatting(self, node, render_key):
pass
def after_formatting(self, node, render_key):
pass
def before_node(self, node, render_key):
pass
def after_node(self, node, render_key):
pass
def before_key(self, node, render_key):
pass
def after_key(self, node, render_key):
pass
def before_constant(self, node, render_key):
pass
def after_constant(self, node, render_key):
pass
def before_string(self, node, render_key):
pass
def after_string(self, node, render_key):
pass
def before(self, key_type, item, render_key):
if key_type not in node_types:
raise NotImplementedError("Unknown key type: %s" % key_type)
to_call = getattr(self, 'before_' + key_type)
return to_call(item, render_key)
def after(self, key_type, item, render_key):
if key_type not in node_types:
raise NotImplementedError("Unknown key type: %s" % key_type)
to_call = getattr(self, 'after_' + key_type)
return to_call(item, render_key)
def walk(self, node):
return self._walk(node)
def _walk(self, node):
for key_type, item, render_key in render(node, strict=getattr(self, "strict", False)):
stop = self._walk_on_item(key_type, item, render_key)
if stop == self.STOP:
return self.STOP
def _walk_on_item(self, key_type, item, render_key):
stop_before = self.before(key_type, item, render_key)
if stop_before:
return self.STOP
stop = self._walk(item) if key_type not in ['constant', 'string'] else False
stop_after = self.after(key_type, item, render_key)
if stop or stop_after:
return self.STOP
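# A minimal sketch of the intended subclassing pattern (illustrative, not part
# of the library): collect every leaf of an FST in rendering order, relying
# only on the before_constant/before_string hooks defined above.
class _LeafCollector(RenderWalker):
    def __init__(self, strict=False):
        super(_LeafCollector, self).__init__(strict=strict)
        self.leaves = []

    def before_constant(self, node, render_key):
        self.leaves.append(node)

    def before_string(self, node, render_key):
        self.leaves.append(node)

# e.g. for the FST of "a = 1", walking with _LeafCollector would collect
# roughly ['a', ' ', '=', ' ', '1', '\n'] (the leaves, in rendering order).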
baron-0.10.1/baron/setup.cfg 0000664 0000000 0000000 00000000026 14154274402 0015615 0 ustar 00root root 0000000 0000000 [wheel]
universal = 1
baron-0.10.1/baron/spliter.py 0000664 0000000 0000000 00000005673 14154274402 0016045 0 ustar 00root root 0000000 0000000 import string
from .utils import FlexibleIterator, BaronError
def split(sequence):
return list(split_generator(sequence))
class UntreatedError(BaronError):
pass
def split_generator(sequence):
iterator = FlexibleIterator(sequence)
# Pay attention: if a next() call fails, a StopIteration error is
# raised. Coincidentally, this is the same exception used by python to
# detect that a function using yield has finished processing.
# It's not a bad thing, but it must be kept in mind.
while not iterator.end():
not_found = True
if iterator.next_in("#"):
not_found = False
result = iterator.grab(lambda iterator: (iterator.show_next() not in "\r\n"))
yield result
for section in ("'", '"'):
if iterator.next_starts_with(section * 3):
not_found = False
result = next(iterator)
result += next(iterator)
result += next(iterator)
result += iterator.grab_string(lambda iterator: not iterator.next_starts_with(section * 3))
# This next() call can fail if no closing quote exists. We
# still want to yield so we catch it.
try:
result += next(iterator)
result += next(iterator)
result += next(iterator)
except StopIteration:
pass
yield result
elif iterator.next_in(section):
not_found = False
result = next(iterator)
result += iterator.grab_string(lambda iterator: iterator.show_next() not in section)
# This next() call can fail if no closing quote exists. We
# still want to yield so we catch it.
try:
result += next(iterator)
except StopIteration:
pass
yield result
for section in (string.ascii_letters + "_" + "1234567890", " \t"):
if iterator.next_in(section):
not_found = False
yield iterator.grab(lambda iterator: iterator.show_next() in section)
for one in "@,.;()=*:+-/^%&<>|\r\n~[]{}!``\\":
if iterator.next_in(one):
not_found = False
yield next(iterator)
if iterator.show_next().__repr__().startswith("'\\x"):
# guys, seriously, how do you manage to put this shit in your code?
# I mean, I don't even know how this is possible!
# example of guilty file: ve/lib/python2.7/site-packages/tests/test_oauth.py
# example of crappy unicode stuff found in some source files: \x0c\xef\xbb\xbf
not_found = False
# let's drop that crap
next(iterator)
if not_found:
raise UntreatedError("Untreated elements: %s" % iterator.rest_of_the_sequence().__repr__()[:50])
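# Illustrative behavior:
#
#     split("a = 1\n")  # -> ['a', ' ', '=', ' ', '1', '\n']
#     split("# hello")  # -> ['# hello']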
baron-0.10.1/baron/token.py 0000664 0000000 0000000 00000005265 14154274402 0015500 0 ustar 00root root 0000000 0000000 from rply.token import BaseBox
class BaronToken(BaseBox):
"""
Represents a syntactically relevant piece of text.
:param name: A string describing the kind of text represented.
:param value: The actual text represented.
:param hidden_tokens_before: formatting tokens attached before this token.
:param hidden_tokens_after: formatting tokens attached after this token.
"""
def __init__(self, name, value, hidden_tokens_before=None, hidden_tokens_after=None):
self.name = name
self.value = value
self.hidden_tokens_before = list(map(self._translate_tokens_to_ast_node, hidden_tokens_before if hidden_tokens_before else []))
self.hidden_tokens_after = list(map(self._translate_tokens_to_ast_node, hidden_tokens_after if hidden_tokens_after else []))
def _translate_tokens_to_ast_node(self, token):
if token[0] == "ENDL":
return {
"type": token[0].lower(),
"value": token[1],
"indent": token[3][0][1] if len(token) == 4 and token[3] else "",
"formatting": list(map(self._translate_tokens_to_ast_node, token[2]) if len(token) >= 3 else []),
}
if len(token) >= 3:
return {
"type": token[0].lower(),
"value": token[1],
"formatting": list(map(self._translate_tokens_to_ast_node, token[2]) if len(token) >= 3 else []),
}
if token[0] == "COMMENT":
return {
"type": token[0].lower(),
"value": token[1],
"formatting": [],
}
return {
"type": token[0].lower(),
"value": token[1],
}
def __repr__(self):
return "Token(%r, %r, %s, %s)" % (self.name, self.value, self.hidden_tokens_before, self.hidden_tokens_after)
def __eq__(self, other):
if not isinstance(other, BaronToken):
return NotImplemented
return self.name == other.name and self.value == other.value
def render(self):
before = "".join([(x["indent"] if x["type"] == "endl" else "") + x["value"] for x in self.hidden_tokens_before])
after = "".join([(x["indent"] if x["type"] == "endl" else "") + x["value"] for x in self.hidden_tokens_after])
# print self.hidden_tokens_before, self.value, self.hidden_tokens_after
return before + self.value + after
def gettokentype(self):
"""
Returns the type or name of the token.
"""
return self.name
def getstr(self):
"""
Returns the string represented by this token.
"""
return self.value
baron-0.10.1/baron/tokenizer.py 0000664 0000000 0000000 00000007651 14154274402 0016373 0 ustar 00root root 0000000 0000000 import re
from .utils import BaronError
class UnknowItem(BaronError):
pass
KEYWORDS = ("and", "as", "assert", "break", "class", "continue", "def", "del",
"elif", "else", "except", "exec", "finally", "for", "from",
"global", "nonlocal", "if", "import", "in", "is", "lambda", "not",
"or", "pass", "print", "raise", "return", "try", "while", "with",
"yield")
TOKENS = (
(r'[a-zA-Z_]\w*', 'NAME'),
(r'0', 'INT'),
(r'[-+]?\d+[eE][-+]?\d+[jJ]', 'FLOAT_EXPONANT_COMPLEX'),
(r'[-+]?\d+\.\d?[eE][-+]?\d+[jJ]', 'FLOAT_EXPONANT_COMPLEX'),
(r'[-+]?\d?\.\d+[eE][-+]?\d+[jJ]', 'FLOAT_EXPONANT_COMPLEX'),
(r'\d+[eE][-+]?\d*', 'FLOAT_EXPONANT'),
(r'\d+\.\d*[eE][-+]?\d*', 'FLOAT_EXPONANT'),
(r'\.\d+[eE][-+]?\d*', 'FLOAT_EXPONANT'),
(r'\d*\.\d+[jJ]', 'COMPLEX'),
(r'\d+\.[jJ]', 'COMPLEX'),
(r'\d+[jJ]', 'COMPLEX'),
(r'\d+\.', 'FLOAT'),
(r'\d*[_\d]*\.[_\d]+[lL]?', 'FLOAT'),
(r'\d+[_\d]+\.[_\d]*[lL]?', 'FLOAT'),
(r'\.', 'DOT'),
(r'[1-9]+[_\d]*[lL]', 'LONG'),
(r'[1-9]+[_\d]*', 'INT'),
(r'0[xX][\d_a-fA-F]+[lL]?', 'HEXA'),
(r'(0[oO][0-7]+)|(0[0-7_]*)[lL]?', 'OCTA'),
(r'0[bB][01_]+[lL]?', 'BINARY'),
(r'\(', 'LEFT_PARENTHESIS'),
(r'\)', 'RIGHT_PARENTHESIS'),
(r':', 'COLON'),
(r',', 'COMMA'),
(r';', 'SEMICOLON'),
(r'@', 'AT'),
(r'\+', 'PLUS'),
(r'-', 'MINUS'),
(r'\*', 'STAR'),
(r'/', 'SLASH'),
(r'\|', 'VBAR'),
(r'&', 'AMPER'),
(r'@', 'AT'),
(r'<', 'LESS'),
(r'>', 'GREATER'),
(r'=', 'EQUAL'),
(r'%', 'PERCENT'),
(r'\[', 'LEFT_SQUARE_BRACKET'),
(r'\]', 'RIGHT_SQUARE_BRACKET'),
(r'\{', 'LEFT_BRACKET'),
(r'\}', 'RIGHT_BRACKET'),
(r'`', 'BACKQUOTE'),
(r'==', 'EQUAL_EQUAL'),
(r'<>', 'NOT_EQUAL'),
(r'!=', 'NOT_EQUAL'),
(r'<=', 'LESS_EQUAL'),
(r'>=', 'GREATER_EQUAL'),
(r'~', 'TILDE'),
(r'\^', 'CIRCUMFLEX'),
(r'<<', 'LEFT_SHIFT'),
(r'>>', 'RIGHT_SHIFT'),
(r'\*\*', 'DOUBLE_STAR'),
(r'\+=', 'PLUS_EQUAL'),
(r'-=', 'MINUS_EQUAL'),
(r'@=', 'AT_EQUAL'),
(r'\*=', 'STAR_EQUAL'),
(r'/=', 'SLASH_EQUAL'),
(r'%=', 'PERCENT_EQUAL'),
(r'&=', 'AMPER_EQUAL'),
(r'\|=', 'VBAR_EQUAL'),
(r'\^=', 'CIRCUMFLEX_EQUAL'),
(r'<<=', 'LEFT_SHIFT_EQUAL'),
(r'>>=', 'RIGHT_SHIFT_EQUAL'),
(r'\.\.\.', 'ELLIPSIS'),
(r'->', 'RIGHT_ARROW'),
(r'\*\*=', 'DOUBLE_STAR_EQUAL'),
(r'//', 'DOUBLE_SLASH'),
(r'//=', 'DOUBLE_SLASH_EQUAL'),
(r'\n', 'ENDL'),
(r'\r\n', 'ENDL'),
(r'#.*', 'COMMENT'),
(r'(\s|\\\n|\\\r\n)+', 'SPACE'),
(r'["\'](.|\n|\r)*["\']', 'STRING'),
(r'[uU]["\'](.|\n|\r)*["\']', 'UNICODE_STRING'),
(r'[fF]["\'](.|\n|\r)*["\']', 'INTERPOLATED_STRING'),
(r'[rR]["\'](.|\n|\r)*["\']', 'RAW_STRING'),
(r'[bB]["\'](.|\n|\r)*["\']', 'BINARY_STRING'),
(r'[uU][rR]["\'](.|\n|\r)*["\']', 'UNICODE_RAW_STRING'),
(r'[bB][rR]["\'](.|\n|\r)*["\']', 'BINARY_RAW_STRING'),
(r'[fF][rR]["\'](.|\n|\r)*["\']', 'INTERPOLATED_RAW_STRING'),
(r'[rR][fF]["\'](.|\n|\r)*["\']', 'INTERPOLATED_RAW_STRING'),
)
TOKENS = [(re.compile('^' + x[0] + '$'), x[1]) for x in TOKENS]
def tokenize(sequence, print_function=False):
return list(tokenize_generator(sequence, print_function))
def tokenize_current_keywords(print_function=False):
if print_function is True:
return [x for x in KEYWORDS if x not in ("print", "exec")]
else:
return KEYWORDS
def tokenize_generator(sequence, print_function=False):
current_keywords = tokenize_current_keywords(print_function)
for item in sequence:
if item in current_keywords:
yield (item.upper(), item)
continue
for candidate, token_name in TOKENS:
if candidate.match(item):
yield (token_name, item)
break
else:
raise UnknowItem("Can't find a matching token for this item: '%s'" % item)
yield ('ENDMARKER', '')
yield  # a final bare yield produces a None sentinel that downstream generators use to detect the end of the stream
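# Illustrative behavior:
#
#     tokenize(['a', ' ', '=', ' ', '1', '\n'])
#     # -> [('NAME', 'a'), ('SPACE', ' '), ('EQUAL', '='), ('SPACE', ' '),
#     #     ('INT', '1'), ('ENDL', '\n'), ('ENDMARKER', ''), None]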
baron-0.10.1/baron/utils.py 0000664 0000000 0000000 00000010412 14154274402 0015506 0 ustar 00root root 0000000 0000000 import sys
import re
python_version = sys.version_info[0]
python_subversion = sys.version_info[1]
string_instance = str if python_version == 3 else basestring
class BaronError(Exception):
pass
class FlexibleIterator():
def __init__(self, sequence):
self.sequence = sequence
self.position = -1
def __iter__(self):
return self
def next(self):
return self.__next__()
def __next__(self):
self.position += 1
if self.position == len(self.sequence):
raise StopIteration
return self.sequence[self.position]
def next_starts_with(self, sentence):
size_of_choice = len(sentence)
return self.sequence[self.position + 1: self.position + 1 + size_of_choice] == sentence
def next_in(self, choice):
if self.position + 1 >= len(self.sequence):
return False
return self.sequence[self.position + 1] in choice
def show_next(self, at=1):
if self.position + at >= len(self.sequence):
return None
return self.sequence[self.position + at]
def rest_of_the_sequence(self):
return self.sequence[self.position + 1:]
def end(self):
return self.position == (len(self.sequence) - 1)
def grab(self, test):
to_return = ""
current = None
while self.show_next() is not None and test(self):
current = next(self)
to_return += current
return to_return
def grab_string(self, test):
to_return = ""
current = None
escaped = False
while self.show_next() is not None and (escaped or test(self)):
current = next(self)
to_return += current
if escaped:
escaped = False
elif current == "\\":
escaped = True
return to_return
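# Example of FlexibleIterator usage:
#   it = FlexibleIterator("abc def")
#   it.show_next()                               # 'a' (peek, does not consume)
#   next(it)                                     # 'a'
#   it.grab(lambda itr: itr.show_next() != " ")  # 'bc'
#   it.rest_of_the_sequence()                    # ' def'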
def create_node_from_token(token, **kwargs):
result = {"type": token.name.lower(), "value": token.value}
if kwargs:
result.update(kwargs)
return result
def create_node(name, value, **kwargs):
result = {"type": name, "value": value}
if kwargs:
result.update(kwargs)
return result
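# Example:
#   create_node("name", "foo") == {"type": "name", "value": "foo"}
#   create_node("name", "foo", first_formatting=[])["first_formatting"] == []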
newline_regex = re.compile("(\r\n|\n|\r)")
def is_newline(text):
return newline_regex.match(text)
def split_on_newlines(text):
    # finditer() always returns an iterator (which is truthy even when
    # nothing matches), so no separate "no newline" branch is needed:
    # with no match the loop body is skipped and the final yield emits
    # the whole text.
    current_position = 0
    for newline in newline_regex.finditer(text):
        yield text[current_position:newline.start(1)]
        yield text[newline.start(1):newline.end(1)]
        current_position = newline.end(1)
    yield text[current_position:]
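# Example:
#   list(split_on_newlines("a\nb")) == ["a", "\n", "b"]
#   list(split_on_newlines("ab")) == ["ab"]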
# Thanks to
# https://github.com/nvie/rq/commit/282f4be9316d608ebbacd6114aab1203591e8f95
if python_version >= 3 or python_subversion >= 7:
from functools import total_ordering
else:
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: other < self),
('__le__', lambda self, other: not other < self),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: other <= self),
('__lt__', lambda self, other: not other <= self),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: other > self),
('__ge__', lambda self, other: not other > self),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: other >= self),
('__gt__', lambda self, other: not other >= self),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=') # noqa
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
baron-0.10.1/docs/ 0000775 0000000 0000000 00000000000 14154274402 0013625 5 ustar 00root root 0000000 0000000 baron-0.10.1/docs/Makefile 0000664 0000000 0000000 00000012670 14154274402 0015273 0 ustar 00root root 0000000 0000000 # Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make ' where is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Baron.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Baron.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/Baron"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Baron"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
baron-0.10.1/docs/advanced.rst 0000664 0000000 0000000 00000010463 14154274402 0016130 0 ustar 00root root 0000000 0000000 Advanced Usage
==============
The topics presented here are less often needed but are still very useful.
Locate a Node
-------------
Since Baron produces a tree, a path is enough to locate a node
unambiguously. A common task involving paths is translating a position
in a file (a line and a column) into a node of the FST.
Baron provides two helper functions for that:
* :file:`position_to_node(fst, position)`
* :file:`position_to_path(fst, position)`
Both take an FST tree as first argument, then the position given as a
:file:`(line, column)` tuple, as in the examples below. Line and column
numbers **start at 1**, like in a text editor.
:file:`position_to_node` returns an FST node. This is fine if you only
want to know which node it is, but it is not enough to locate the node
in the tree: there can be multiple identical nodes within the tree.
That's where :file:`position_to_path` is useful. It returns a list of
ints and strings representing, at each step, either the key to take in
a Node or the index to take in a ListNode, e.g. :file:`["target", "value", 0]`.
Let's first see the difference between the two functions:
.. ipython:: python
from baron import parse
from baron.path import position_to_node, position_to_path
from baron.helpers import show_node
some_code = """from baron import parse\nfrom baron.helpers import show_node\nfst = parse("a = 1")\nshow_node(fst)"""
print(some_code)
tree = parse(some_code)
node = position_to_node(tree, (3, 8))
show_node(node)
path = position_to_path(tree, (3, 8))
path
The first one gives the node and the second one the node's path in the
tree. The latter tells you that, to reach the node, you must take index
4 of the root ListNode, then the "value" key twice (first on the
"assignment" Node, then on the "atomtrailers" Node), and finally index
0 in the resulting ListNode:
.. ipython:: python
show_node(tree[4]["value"]["value"][0])
Neat. This is so common that there is a function to do that:
.. ipython:: python
from baron.path import path_to_node
show_node(path_to_node(tree, path))
With the two above, that's a total of three functions to locate a node.
You can also easily locate a "constant" node, like the left parenthesis
of the call below:
.. ipython:: python
from baron.path import position_to_path
fst = parse("a(1)")
position_to_path(fst, (1, 1))
position_to_path(fst, (1, 2))
position_to_path(fst, (1, 3))
position_to_path(fst, (1, 4))
By the way, out-of-bounds positions are handled gracefully:
.. ipython:: python
print(position_to_node(fst, (-1, 1)))
print(position_to_node(fst, (1, 0)))
print(position_to_node(fst, (1, 5)))
print(position_to_node(fst, (2, 4)))
Bounding Box
------------
Sometimes you want to know the leftmost and rightmost positions of a
rendered node, or of part of it. This is not a trivial task since you
cannot easily know the length of each rendered line. That's why Baron
provides two helpers:
* :file:`node_to_bounding_box(fst)`
* :file:`path_to_bounding_box(fst, path)`
Examples are worth a thousand words so:
.. ipython:: python
from baron.path import node_to_bounding_box, path_to_bounding_box
from baron import dumps
fst = parse("a(1)\nb(2)")
fst
print(dumps(fst))
node_to_bounding_box(fst)
path_to_bounding_box(fst, [])
fst[0]
print(dumps(fst[0]))
node_to_bounding_box(fst[0])
path_to_bounding_box(fst, [0])
fst[0]["value"]
print(dumps(fst[0]["value"]))
node_to_bounding_box(fst[1])
path_to_bounding_box(fst, [1])
fst[0]["value"][1]
print(dumps(fst[0]["value"][1]))
node_to_bounding_box(fst[0]["value"][1])
path_to_bounding_box(fst, [0, "value", 1])
fst[0]["value"][1]["value"]
print(dumps(fst[0]["value"][1]["value"]))
node_to_bounding_box(fst[0]["value"][1]["value"])
path_to_bounding_box(fst, [0, "value", 1, "value"])
The bounding box's `top_left` and `bottom_right` positions follow the
same convention as when locating a node: line and column start at 1.
As you can see, the major difference between the two functions is that
:file:`node_to_bounding_box` always gives a top-left position of
:file:`(1, 1)`, since it considers the node on its own, while
:file:`path_to_bounding_box` takes the location of the node in the fst
into account.
baron-0.10.1/docs/basics.rst 0000664 0000000 0000000 00000003264 14154274402 0015630 0 ustar 00root root 0000000 0000000 Basic Usage
===========
Baron provides two main functions:
* :file:`parse` to transform a string into Baron's FST;
* :file:`dumps` to transform the FST back into a string.
.. ipython:: python
:suppress:
import sys
sys.path.append("..")
.. ipython:: python
from baron import parse, dumps
source_code = "def f(x = 1):\n return x\n"
fst = parse(source_code)
generated_source_code = dumps(fst)
generated_source_code
source_code == generated_source_code
As said in the introduction, the FST keeps the formatting, unlike ASTs.
The following three snippets are equivalent but formatted differently.
Baron keeps track of the difference, so when dumping the FST back, all
the formatting is respected:
.. ipython:: python
dumps(parse("a = 1"))
dumps(parse("a=1"))
dumps(parse("a =    1"))
Helpers
-------
Baron also provides three helper functions, `show`, `show_file` and
`show_node`, to explore the FST (in IPython for example). These
functions print a formatted version of the FST so you can explore it
and get an idea of what you are working with.
Show
~~~~
:file:`show` is used directly on a string:
.. ipython:: python
from baron.helpers import show
show("a = 1")
show("a += b")
Show_file
~~~~~~~~~
:file:`show_file` is used on a file path:
::
from baron.helpers import show_file
show_file("/path/to/a/file")
Show_node
~~~~~~~~~
:file:`show_node` is used on an already parsed string:
.. ipython:: python
from baron.helpers import show_node
fst = parse("a = 1")
show_node(fst)
Under the hood, the FST is serialized into JSON so the helpers are
simply encapsulating JSON pretty printers.
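Since the FST is made of plain lists and dicts, you can also serialize
it yourself with nothing but the standard library; a minimal sketch:

::

    import json
    from baron import parse

    # The FST is JSON-serializable as-is.
    print(json.dumps(parse("a = 1"), indent=4))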
baron-0.10.1/docs/conf.py 0000664 0000000 0000000 00000021641 14154274402 0015130 0 ustar 00root root 0000000 0000000 # -*- coding: utf-8 -*-
#
# Baron documentation build configuration file, created by
# sphinx-quickstart on Sat May 10 02:16:20 2014.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.doctest',
'sphinx.ext.todo',
'IPython.sphinxext.ipython_directive',
'IPython.sphinxext.ipython_console_highlighting',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Baron'
copyright = u'2014, Laurent Peuch'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.10'
# The full version, including alpha/beta/rc tags.
release = '0.10.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Barondoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'Baron.tex', u'Baron Documentation',
u'Laurent Peuch', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'baron', u'Baron Documentation',
[u'Laurent Peuch'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Baron', u'Baron Documentation',
u'Laurent Peuch', 'Baron', 'Full Syntax Tree library for Python.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'Baron'
epub_author = u'Laurent Peuch'
epub_publisher = u'Laurent Peuch'
epub_copyright = u'2014, Laurent Peuch'
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
#epub_exclude_files = []
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
baron-0.10.1/docs/grammar-python-2.7-3.6-diff-1.png 0000664 0000000 0000000 00000672672 14154274402 0021200 0 ustar 00root root 0000000 0000000 PNG