astor-0.8.1/0000755000076500000240000000000013573574747014466 5ustar berkerpeksagstaff00000000000000astor-0.8.1/PKG-INFO0000644000076500000240000001123613573574747015566 0ustar berkerpeksagstaff00000000000000Metadata-Version: 1.2 Name: astor Version: 0.8.1 Summary: Read/rewrite/write Python ASTs Home-page: https://github.com/berkerpeksag/astor Author: Patrick Maupin Author-email: pmaupin@gmail.com License: BSD-3-Clause Description: ============================= astor -- AST observe/rewrite ============================= :PyPI: https://pypi.org/project/astor/ :Documentation: https://astor.readthedocs.io :Source: https://github.com/berkerpeksag/astor :License: 3-clause BSD :Build status: .. image:: https://secure.travis-ci.org/berkerpeksag/astor.svg :alt: Travis CI :target: https://travis-ci.org/berkerpeksag/astor/ astor is designed to allow easy manipulation of Python source via the AST. There are some other similar libraries, but astor focuses on the following areas: - Round-trip an AST back to Python [1]_: - Modified AST doesn't need linenumbers, ctx, etc. or otherwise be directly compileable for the round-trip to work. - Easy to read generated code as, well, code - Can round-trip two different source trees to compare for functional differences, using the astor.rtrip tool (for example, after PEP8 edits). - Dump pretty-printing of AST - Harder to read than round-tripped code, but more accurate to figure out what is going on. - Easier to read than dump from built-in AST module - Non-recursive treewalk - Sometimes you want a recursive treewalk (and astor supports that, starting at any node on the tree), but sometimes you don't need to do that. astor doesn't require you to explicitly visit sub-nodes unless you want to: - You can add code that executes before a node's children are visited, and/or - You can add code that executes after a node's children are visited, and/or - You can add code that executes and keeps the node's children from being visited (and optionally visit them yourself via a recursive call) - Write functions to access the tree based on object names and/or attribute names - Enjoy easy access to parent node(s) for tree rewriting .. [1] The decompilation back to Python is based on code originally written by Armin Ronacher. Armin's code was well-structured, but failed on some obscure corner cases of the Python language (and even more corner cases when the AST changed on different versions of Python), and its output arguably had cosmetic issues -- for example, it produced parentheses even in some cases where they were not needed, to avoid having to reason about precedence. Other derivatives of Armin's code are floating around, and typically have fixes for a few corner cases that happened to be noticed by the maintainers, but most of them have not been tested as thoroughly as astor. One exception may be the version of codegen `maintained at github by CensoredUsername`__. This has been tested to work properly on Python 2.7 using astor's test suite, and, as it is a single source file, it may be easier to drop into some applications that do not require astor's other features or Python 3.x compatibility. 
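        As a quick illustration of the round-trip and dump features described
        above (a minimal sketch using the public ``astor.to_source`` and
        ``astor.dump_tree`` helpers)::

            import ast
            import astor

            tree = ast.parse('x = (1 + 2) * 3')

            # Regenerate readable source from the (possibly modified) AST.
            print(astor.to_source(tree))   # x = (1 + 2) * 3

            # Pretty-print the AST itself for inspection.
            print(astor.dump_tree(tree))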
__ https://github.com/CensoredUsername/codegen Keywords: ast,codegen,PEP 8 Platform: Independent Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: Implementation Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: Topic :: Software Development :: Code Generators Classifier: Topic :: Software Development :: Compilers Requires-Python: !=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7 astor-0.8.1/LICENSE0000644000076500000240000000302213205364752015451 0ustar berkerpeksagstaff00000000000000Copyright (c) 2012, Patrick Maupin Copyright (c) 2013, Berker Peksag Copyright (c) 2008, Armin Ronacher All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. astor-0.8.1/AUTHORS0000644000076500000240000000124713470303666015524 0ustar berkerpeksagstaff00000000000000Original author of astor/codegen.py: * Armin Ronacher And with some modifications based on Armin's code: * Paul Dubs * Berker Peksag * Patrick Maupin * Abhishek L * Bob Tolbert * Whyzgeek * Zack M. 
Davis * Ryan Gonzalez * Lenny Truong * Radomír Bosák * Kodi Arfer * Felix Yan * Chris Rink * Batuhan Taskaya astor-0.8.1/tests/0000755000076500000240000000000013573574747015630 5ustar berkerpeksagstaff00000000000000astor-0.8.1/tests/test_misc.py0000644000076500000240000000710413573326607020164 0ustar berkerpeksagstaff00000000000000import ast import sys import warnings try: import unittest2 as unittest except ImportError: import unittest import astor from astor.source_repr import split_lines from .support import import_fresh_module class GetSymbolTestCase(unittest.TestCase): @unittest.skipUnless(sys.version_info >= (3, 5), "ast.MatMult introduced in Python 3.5") def test_get_mat_mult(self): self.assertEqual('@', astor.get_op_symbol(ast.MatMult())) class PublicAPITestCase(unittest.TestCase): def test_aliases(self): self.assertIs(astor.parse_file, astor.code_to_ast.parse_file) def test_codegen_from_root(self): with self.assertWarns(DeprecationWarning) as cm: astor = import_fresh_module('astor') astor.codegen.SourceGenerator self.assertEqual(len(cm.warnings), 1) # This message comes from 'astor/__init__.py'. self.assertEqual( str(cm.warning), 'astor.codegen is deprecated. Please use astor.code_gen.' ) def test_codegen_as_submodule(self): with self.assertWarns(DeprecationWarning) as cm: import astor.codegen self.assertEqual(len(cm.warnings), 1) # This message comes from 'astor/codegen.py'. self.assertEqual( str(cm.warning), 'astor.codegen module is deprecated. Please import ' 'astor.code_gen module instead.' ) def test_to_source_invalid_customize_generator(self): class InvalidGenerator: pass node = ast.parse('spam = 42') with self.assertRaises(TypeError) as cm: astor.to_source(node, source_generator_class=InvalidGenerator) self.assertEqual( str(cm.exception), 'source_generator_class should be a subclass of SourceGenerator', ) with self.assertRaises(TypeError) as cm: astor.to_source( node, source_generator_class=astor.SourceGenerator(indent_with=' ' * 4), ) self.assertEqual( str(cm.exception), 'source_generator_class should be a class', ) class FastCompareTestCase(unittest.TestCase): def test_fast_compare(self): fast_compare = astor.node_util.fast_compare def check(a, b): ast_a = ast.parse(a) ast_b = ast.parse(b) dump_a = astor.dump_tree(ast_a) dump_b = astor.dump_tree(ast_b) self.assertEqual(dump_a == dump_b, fast_compare(ast_a, ast_b)) check('a = 3', 'a = 3') check('a = 3', 'a = 5') check('a = 3 - (3, 4, 5)', 'a = 3 - (3, 4, 5)') check('a = 3 - (3, 4, 5)', 'a = 3 - (3, 4, 6)') class TreeWalkTestCase(unittest.TestCase): def test_auto_generated_attributes(self): # See #136 for more details. treewalk = astor.TreeWalk() self.assertIsInstance(treewalk.__dict__, dict) # Check that the inital state of the instance is empty. self.assertEqual(treewalk.__dict__['nodestack'], []) self.assertEqual(treewalk.__dict__['pre_handlers'], {}) self.assertEqual(treewalk.__dict__['post_handlers'], {}) class SourceReprTestCase(unittest.TestCase): """ Tests for helpers in astor.source_repr module. Note that these APIs are not public. 
""" @unittest.skipUnless(sys.version_info[0] == 2, 'only applies to Python 2') def test_split_lines_unicode_support(self): source = [u'copy', '\n'] self.assertEqual(split_lines(source), source) if __name__ == '__main__': unittest.main() astor-0.8.1/tests/support.py0000644000076500000240000000314113205364752017676 0ustar berkerpeksagstaff00000000000000import importlib import sys def _save_and_remove_module(name, orig_modules): """Helper function to save and remove a module from sys.modules Raise ImportError if the module can't be imported. """ # try to import the module and raise an error if it can't be imported if name not in sys.modules: __import__(name) del sys.modules[name] for modname in list(sys.modules): if modname == name or modname.startswith(name + '.'): orig_modules[modname] = sys.modules[modname] del sys.modules[modname] def import_fresh_module(name, fresh=(), blocked=()): """Import and return a module, deliberately bypassing sys.modules. This function imports and returns a fresh copy of the named Python module by removing the named module from sys.modules before doing the import. Note that unlike reload, the original module is not affected by this operation. """ orig_modules = {} names_to_remove = [] _save_and_remove_module(name, orig_modules) try: for fresh_name in fresh: _save_and_remove_module(fresh_name, orig_modules) for blocked_name in blocked: if not _save_and_block_module(blocked_name, orig_modules): names_to_remove.append(blocked_name) fresh_module = importlib.import_module(name) except ImportError: fresh_module = None finally: for orig_name, module in orig_modules.items(): sys.modules[orig_name] = module for name_to_remove in names_to_remove: del sys.modules[name_to_remove] return fresh_module astor-0.8.1/tests/check_astunparse.py0000644000076500000240000000107313470272551021505 0ustar berkerpeksagstaff00000000000000import ast try: import unittest2 as unittest except ImportError: import unittest from . import test_code_gen import astunparse class MyTests(test_code_gen.CodegenTestCase): to_source = staticmethod(astunparse.unparse) # Just see if it'll do anything good at all assertSrcRoundtrips = test_code_gen.CodegenTestCase.assertAstRoundtrips # Don't look for exact comparison; see if ASTs match def assertSrcEqual(self, src1, src2): self.assertAstEqual(ast.parse(src1), ast.parse(src2)) if __name__ == '__main__': unittest.main() astor-0.8.1/tests/test_code_gen.py0000644000076500000240000005672713573172164021010 0ustar berkerpeksagstaff00000000000000# coding: utf-8 """ Part of the astor library for Python AST manipulation License: 3-clause BSD Copyright (c) 2014 Berker Peksag Copyright (c) 2015 Patrick Maupin """ import ast import math import sys import textwrap try: import unittest2 as unittest except ImportError: import unittest import astor def canonical(srctxt): return textwrap.dedent(srctxt).strip() def astorexpr(x): return eval(astor.to_source(ast.Expression(body=x))) def astornum(x): return astorexpr(ast.Num(n=x)) class Comparisons(object): to_source = staticmethod(astor.to_source) def assertAstEqual(self, ast1, ast2): dmp1 = astor.dump_tree(ast1) dmp2 = astor.dump_tree(ast2) self.assertEqual(dmp1, dmp2) def assertAstEqualsSource(self, tree, source): self.assertEqual(self.to_source(tree).rstrip(), source) def assertAstRoundtrips(self, srctxt): """This asserts that the reconstituted source code can be compiled into the exact same AST as the original source code. 
""" srctxt = canonical(srctxt) srcast = ast.parse(srctxt) dsttxt = self.to_source(srcast) dstast = ast.parse(dsttxt) self.assertAstEqual(srcast, dstast) def assertAstRoundtripsGtVer(self, source, min_should_work, max_should_error=None): if max_should_error is None: max_should_error = min_should_work[0], min_should_work[1] - 1 if sys.version_info >= min_should_work: self.assertAstRoundtrips(source) elif sys.version_info <= max_should_error: self.assertRaises(SyntaxError, ast.parse, source) def assertSrcRoundtrips(self, srctxt): """This asserts that the reconstituted source code is identical to the original source code. This is a much stronger statement than assertAstRoundtrips, which may not always be appropriate. """ srctxt = canonical(srctxt) self.assertEqual(self.to_source(ast.parse(srctxt)).rstrip(), srctxt) def assertSrcDoesNotRoundtrip(self, srctxt): srctxt = canonical(srctxt) self.assertNotEqual(self.to_source(ast.parse(srctxt)).rstrip(), srctxt) def assertSrcRoundtripsGtVer(self, source, min_should_work, max_should_error=None): if max_should_error is None: max_should_error = min_should_work[0], min_should_work[1] - 1 if sys.version_info >= min_should_work: self.assertSrcRoundtrips(source) elif sys.version_info <= max_should_error: self.assertRaises(SyntaxError, ast.parse, source) class CodegenTestCase(unittest.TestCase, Comparisons): def test_imports(self): source = "import ast" self.assertSrcRoundtrips(source) source = "import operator as op" self.assertSrcRoundtrips(source) source = "from math import floor" self.assertSrcRoundtrips(source) source = "from .. import foobar" self.assertSrcRoundtrips(source) source = "from ..aaa import foo, bar as bar2" self.assertSrcRoundtrips(source) def test_empty_iterable_literals(self): self.assertSrcRoundtrips('()') self.assertSrcRoundtrips('[]') self.assertSrcRoundtrips('{}') # Python has no literal for empty sets, but code_gen should produce an # expression that evaluates to one. 
self.assertEqual(astorexpr(ast.Set([])), set()) def test_dictionary_literals(self): source = "{'a': 1, 'b': 2}" self.assertSrcRoundtrips(source) another_source = "{'nested': ['structures', {'are': 'important'}]}" self.assertSrcRoundtrips(another_source) def test_try_expect(self): source = """ try: 'spam'[10] except IndexError: pass""" self.assertAstRoundtrips(source) source = """ try: 'spam'[10] except IndexError as exc: sys.stdout.write(exc)""" self.assertAstRoundtrips(source) source = """ try: 'spam'[10] except IndexError as exc: sys.stdout.write(exc) else: pass finally: pass""" self.assertAstRoundtrips(source) source = """ try: size = len(iterable) except (TypeError, AttributeError): pass else: if n >= size: return sorted(iterable, key=key, reverse=True)[:n]""" self.assertAstRoundtrips(source) def test_del_statement(self): source = "del l[0]" self.assertSrcRoundtrips(source) source = "del obj.x" self.assertSrcRoundtrips(source) def test_arguments(self): source = """ j = [1, 2, 3] def test(a1, a2, b1=j, b2='123', b3={}, b4=[]): pass""" self.assertSrcRoundtrips(source) @unittest.skipUnless(sys.version_info >= (3, 8, 0, "alpha", 4), "positional only arguments introduced in Python 3.8") def test_positional_only_arguments(self): source = """ def test(a, b, /, c, *, d, **kwargs): pass def test(a=3, b=4, /, c=7): pass def test(a, b=4, /, c=8, d=9): pass """ self.assertSrcRoundtrips(source) def test_pass_arguments_node(self): source = canonical(""" j = [1, 2, 3] def test(a1, a2, b1=j, b2='123', b3={}, b4=[]): pass""") root_node = ast.parse(source) arguments_node = [n for n in ast.walk(root_node) if isinstance(n, ast.arguments)][0] self.assertEqual(self.to_source(arguments_node).rstrip(), "a1, a2, b1=j, b2='123', b3={}, b4=[]") source = """ def call(*popenargs, timeout=None, **kwargs): pass""" # Probably also works on < 3.4, but doesn't work on 2.7... 
self.assertSrcRoundtripsGtVer(source, (3, 4), (2, 7)) def test_attribute(self): self.assertSrcRoundtrips("x.foo") self.assertSrcRoundtrips("(5).foo") def test_matrix_multiplication(self): for source in ("(a @ b)", "a @= b"): self.assertAstRoundtripsGtVer(source, (3, 5)) def test_multiple_call_unpackings(self): source = """ my_function(*[1], *[2], **{'three': 3}, **{'four': 'four'})""" self.assertSrcRoundtripsGtVer(source, (3, 5)) def test_right_hand_side_dictionary_unpacking(self): source = """ our_dict = {'a': 1, **{'b': 2, 'c': 3}}""" self.assertSrcRoundtripsGtVer(source, (3, 5)) def test_async_def_with_for(self): source = """ async def read_data(db): async with connect(db) as db_cxn: data = await db_cxn.fetch('SELECT foo FROM bar;') async for datum in data: if quux(datum): return datum""" self.assertSrcRoundtripsGtVer(source, (3, 5)) def test_double_await(self): source = """ async def foo(): return await (await bar())""" self.assertSrcRoundtripsGtVer(source, (3, 5)) def test_class_definition_with_starbases_and_kwargs(self): source = """ class TreeFactory(*[FactoryMixin, TreeBase], **{'metaclass': Foo}): pass""" self.assertSrcRoundtripsGtVer(source, (3, 0)) def test_yield(self): source = "yield" self.assertAstRoundtrips(source) source = """ def dummy(): yield""" self.assertAstRoundtrips(source) source = "foo((yield bar))" self.assertAstRoundtrips(source) source = "(yield bar)()" self.assertAstRoundtrips(source) source = "return (yield 1)" self.assertAstRoundtrips(source) source = "return (yield from sam())" self.assertAstRoundtripsGtVer(source, (3, 3)) source = "((yield a) for b in c)" self.assertAstRoundtrips(source) source = "[(yield)]" self.assertAstRoundtrips(source) source = "if (yield): pass" self.assertAstRoundtrips(source) source = "if (yield from foo): pass" self.assertAstRoundtripsGtVer(source, (3, 3)) source = "(yield from (a, b))" self.assertAstRoundtripsGtVer(source, (3, 3)) source = "yield from sam()" self.assertSrcRoundtripsGtVer(source, (3, 3)) def test_with(self): source = """ with foo: pass """ self.assertSrcRoundtrips(source) source = """ with foo as bar: pass """ self.assertSrcRoundtrips(source) source = """ with foo as bar, mary, william as bill: pass """ self.assertAstRoundtripsGtVer(source, (2, 7)) def test_huge_int(self): for n in (10**10000, 0xdfa21cd2a530ccc8c870aa60d9feb3b35deeab81c3215a96557abbd683d21f4600f38e475d87100da9a4404220eeb3bb5584e5a2b5b48ffda58530ea19104a32577d7459d91e76aa711b241050f4cc6d5327ccee254f371bcad3be56d46eb5919b73f20dbdb1177b700f00891c5bf4ed128bb90ed541b778288285bcfa28432ab5cbcb8321b6e24760e998e0daa519f093a631e44276d7dd252ce0c08c75e2ab28a7349ead779f97d0f20a6d413bf3623cd216dc35375f6366690bcc41e3b2d5465840ec7ee0dc7e3f1c101d674a0c7dbccbc3942788b111396add2f8153b46a0e4b50d66e57ee92958f1c860dd97cc0e40e32febff915343ed53573142bdf4b): self.assertEqual(astornum(n), n) def test_complex(self): source = """ (3) + (4j) + (1+2j) + (1+0j) """ self.assertAstRoundtrips(source) self.assertIsInstance(astornum(1+0j), complex) def test_inf(self): source = """ (1e1000) + (-1e1000) + (1e1000j) + (-1e1000j) """ self.assertAstRoundtrips(source) # We special case infinities in code_gen. So we will # return the same AST construction but it won't # roundtrip to 'source'. See the SourceGenerator.visit_Num # method for details. (#82) source = 'a = 1e400' self.assertAstRoundtrips(source) # Returns 'a = 1e1000'. 
self.assertSrcDoesNotRoundtrip(source) self.assertIsInstance(astornum((1e1000+1e1000)+0j), complex) def test_nan(self): self.assertTrue(math.isnan(astornum(float('nan')))) v = astornum(complex(-1e1000, float('nan'))) self.assertEqual(v.real, -1e1000) self.assertTrue(math.isnan(v.imag)) v = astornum(complex(float('nan'), -1e1000)) self.assertTrue(math.isnan(v.real)) self.assertEqual(v.imag, -1e1000) def test_unary(self): source = """ -(1) + ~(2) + +(3) """ self.assertAstRoundtrips(source) def test_pow(self): source = """ (-2) ** (-3) """ self.assertAstRoundtrips(source) source = """ (+2) ** (+3) """ self.assertAstRoundtrips(source) source = """ 2 ** 3 ** 4 """ self.assertAstRoundtrips(source) source = """ -2 ** -3 """ self.assertAstRoundtrips(source) source = """ -2 ** -3 ** -4 """ self.assertAstRoundtrips(source) source = """ -((-1) ** other._sign) (-1) ** self._sign """ self.assertAstRoundtrips(source) def test_comprehension(self): source = """ ((x,y) for x,y in zip(a,b) if a) """ self.assertAstRoundtrips(source) source = """ fields = [(a, _format(b)) for (a, b) in iter_fields(node) if a] """ self.assertAstRoundtrips(source) source = """ ra = np.fromiter(((i * 3, i * 2) for i in range(10)), n, dtype='i8,f8') """ self.assertAstRoundtrips(source) def test_async_comprehension(self): source = """ async def f(): [(await x) async for x in y] [(await i) for i in b if await c] (await x async for x in y) {i for i in b async for i in a if await i for b in i} """ self.assertSrcRoundtripsGtVer(source, (3, 6)) def test_tuple_corner_cases(self): source = """ a = () """ self.assertAstRoundtrips(source) source = """ assert (a, b), (c, d) """ self.assertAstRoundtrips(source) source = """ return UUID(fields=(time_low, time_mid, time_hi_version, clock_seq_hi_variant, clock_seq_low, node), version=1) """ self.assertAstRoundtrips(source) source = """ raise(os.error, ('multiple errors:', errors)) """ self.assertAstRoundtrips(source) source = """ exec(expr, global_dict, local_dict) """ self.assertAstRoundtrips(source) source = """ with (a, b) as (c, d): pass """ self.assertAstRoundtrips(source) self.assertAstRoundtrips(source) source = """ with (a, b) as (c, d), (e,f) as (h,g): pass """ self.assertAstRoundtripsGtVer(source, (2, 7)) source = """ Pxx[..., (0,-1)] = xft[..., (0,-1)]**2 """ self.assertAstRoundtripsGtVer(source, (2, 7)) source = """ responses = { v: (v.phrase, v.description) for v in HTTPStatus.__members__.values() } """ self.assertAstRoundtripsGtVer(source, (2, 7)) def test_output_formatting(self): source = """ __all__ = ['ArgumentParser', 'ArgumentError', 'ArgumentTypeError', 'FileType', 'HelpFormatter', 'ArgumentDefaultsHelpFormatter', 'RawDescriptionHelpFormatter', 'RawTextHelpFormatter', 'Namespace', 'Action', 'ONE_OR_MORE', 'OPTIONAL', 'PARSER', 'REMAINDER', 'SUPPRESS', 'ZERO_OR_MORE'] """ # NOQA self.maxDiff = 2000 self.assertSrcRoundtrips(source) def test_elif(self): source = """ if a: b elif c: d elif e: f else: g """ self.assertSrcRoundtrips(source) def test_fstrings(self): source = """ x = f'{x}' x = f'{x.y}' x = f'{int(x)}' x = f'a{b:c}d' x = f'a{b!s:c{d}e}f' x = f'{x + y}' x = f'""' x = f'"\\'' """ self.assertSrcRoundtripsGtVer(source, (3, 6)) source = """ a_really_long_line_will_probably_break_things = ( f'a{b!s:c{d}e}fghijka{b!s:c{d}e}a{b!s:c{d}e}a{b!s:c{d}e}') """ self.assertSrcRoundtripsGtVer(source, (3, 6)) source = """ return f"functools.{qualname}({', '.join(args)})" """ self.assertSrcRoundtripsGtVer(source, (3, 6)) def test_assignment_expr(self): cases = ( "(x := 3)", "1 + (x := 
y)", "x = (y := 0)", "1 + (p := 1 if 2 else 3)", "[y := f(x), y**2, y**3]", "(2 ** 3 * 4 + 5 and 6, x := 2 ** 3 * 4 + 5 and 6)", "foo(x := 3, cat='vector')", "foo(x=(y := f(x)))", "any(len(longline := line) >= 100 for line in lines)", "[[y := f(x), x/y] for x in range(5)]", "lambda: (x := 1)", "def foo(answer=(p := 42)): pass", "def foo(answer: (p := 42) = 5): pass", "if reductor := dispatch_table.get(cls): pass", "while line := fp.readline(): pass", "while (command := input('> ')) != 'quit': pass") for case in cases: self.assertAstRoundtripsGtVer(case, (3, 8)) @unittest.skipUnless(sys.version_info <= (3, 3), "ast.Name used for True, False, None until Python 3.4") def test_deprecated_constants_as_name(self): self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Name(id='True')), "spam = True") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Name(id='False')), "spam = False") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Name(id='None')), "spam = None") @unittest.skipUnless(sys.version_info >= (3, 4), "ast.NameConstant introduced in Python 3.4") def test_deprecated_name_constants(self): self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.NameConstant(value=True)), "spam = True") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.NameConstant(value=False)), "spam = False") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.NameConstant(value=None)), "spam = None") def test_deprecated_constant_nodes(self): self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Num(3)), "spam = 3") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Num(-93)), "spam = -93") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Num(837.3888)), "spam = 837.3888") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Num(-0.9877)), "spam = -0.9877") self.assertAstEqualsSource(ast.Ellipsis(), "...") if sys.version_info >= (3, 0): self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Bytes(b"Bytes")), "spam = b'Bytes'") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Str("String")), "spam = 'String'") @unittest.skipUnless(sys.version_info >= (3, 6), "ast.Constant introduced in Python 3.6") def test_constant_nodes(self): self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=3)), "spam = 3") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=-93)), "spam = -93") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=837.3888)), "spam = 837.3888") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=-0.9877)), "spam = -0.9877") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=True)), "spam = True") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=False)), "spam = False") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value=None)), "spam = None") self.assertAstEqualsSource(ast.Constant(value=Ellipsis), "...") self.assertAstEqualsSource( ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(b"Bytes")), "spam = b'Bytes'") self.assertAstEqualsSource( 
ast.Assign(targets=[ast.Name(id='spam')], value=ast.Constant(value="String")), "spam = 'String'") def test_annassign(self): source = """ a: int (a): int a.b: int (a.b): int b: Tuple[int, str, ...] c.d[e].f: Any q: 3 = (1, 2, 3) t: Tuple[int, ...] = (1, 2, 3) some_list: List[int] = [] (a): int = 0 a:int = 0 (a.b): int = 0 a.b: int = 0 """ self.assertAstRoundtripsGtVer(source, (3, 6)) def test_compile_types(self): code = '(a + b + c) * (d + e + f)\n' for mode in 'exec eval single'.split(): srcast = compile(code, 'dummy', mode, ast.PyCF_ONLY_AST) dsttxt = self.to_source(srcast) if code.strip() != dsttxt.strip(): self.assertEqual('(%s)' % code.strip(), dsttxt.strip()) def test_unicode_literals(self): source = """ from __future__ import (print_function, unicode_literals) x = b'abc' y = u'abc' """ self.assertAstRoundtrips(source) def test_slicing(self): source = """ x[1,2] x[...,...] x[1,...] x[...,3] x[:,:] x[:,] x[1,:] x[:,2] x[1:2,] x[3:4,...] x[5:6,7:8] x[1:2,3:4] x[5:6:7,] x[1:2:-3,] x[4:5:6,...] x[7:8:-9,...] x[1:2:3,4:5] x[6:7:-8,9:0] x[...,1:2] x[1:2,3:4] x[...,1:2:3] x[...,4:5:-6] x[1:2,3:4:5] x[1:2,3:4:-5] """ self.assertAstRoundtrips(source) def test_non_string_leakage(self): source = ''' tar_compression = {'gzip': 'gz', None: ''} ''' self.assertAstRoundtrips(source) def test_fstring_trailing_newline(self): source = ''' x = f"""{host}\n\t{port}\n""" ''' self.assertSrcRoundtripsGtVer(source, (3, 6)) source = ''' if 1: x = f'{host}\\n\\t{port}\\n' ''' self.assertSrcRoundtripsGtVer(source, (3, 6)) def test_fstring_escaped_braces(self): source = ''' x = f'{{hello world}}' ''' self.assertSrcRoundtripsGtVer(source, (3, 6)) source = ''' x = f'{f.name}={{self.{f.name}!r}}' ''' self.assertSrcRoundtripsGtVer(source, (3, 6)) @unittest.skipUnless(sys.version_info >= (3, 8, 0, "alpha", 4), "f-string debugging introduced in Python 3.8") def test_fstring_debugging(self): source = """ x = f'{5=}' y = f'{5=!r}' z = f'{3*x+15=}' f'{x=:}' f'{x=:.2f}' f'alpha α {pi=} ω omega' """ self.assertAstRoundtripsGtVer(source, (3, 8)) def test_docstring_function(self): source = ''' def f(arg): """ docstring """ return 3 ''' self.assertSrcRoundtrips(source) def test_docstring_class(self): source = ''' class Class: """ docstring """ pass ''' self.assertSrcRoundtrips(source) def test_docstring_method(self): source = ''' class Class: def f(arg): """ docstring """ return 3 ''' self.assertSrcRoundtrips(source) def test_docstring_module(self): source = ''' """ docstring1 """ class Class: def f(arg): pass ''' self.assertSrcRoundtrips(source) if __name__ == '__main__': unittest.main() astor-0.8.1/tests/__init__.py0000644000076500000240000000000013205364752017710 0ustar berkerpeksagstaff00000000000000astor-0.8.1/tests/test_optional.py0000644000076500000240000000166313205364752021055 0ustar berkerpeksagstaff00000000000000""" Part of the astor library for Python AST manipulation License: 3-clause BSD Copyright (c) 2014 Berker Peksag Copyright (c) 2015, 2017 Patrick Maupin Use this by putting a link to astunparse's common.py test file. 
""" try: import unittest2 as unittest except ImportError: import unittest try: from test_code_gen import Comparisons except ImportError: from .test_code_gen import Comparisons try: from astunparse_common import AstunparseCommonTestCase except ImportError: AstunparseCommonTestCase = None if AstunparseCommonTestCase is not None: class UnparseTestCase(AstunparseCommonTestCase, unittest.TestCase, Comparisons): def check_roundtrip(self, code1, mode=None): self.assertAstRoundtrips(code1) def test_files(self): """ Don't bother -- we do this manually and more thoroughly """ if __name__ == '__main__': unittest.main() astor-0.8.1/tests/check_expressions.py0000755000076500000240000000525213470272551021710 0ustar berkerpeksagstaff00000000000000#! /usr/bin/env python # -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2015 Patrick Maupin This module reads the strings generated by build_expressions, and runs them through the Python interpreter. For strings that are suboptimal (too many spaces, etc.), it simply dumps them to a miscompare file. For strings that seem broken (do not parse after roundtrip) or are maybe too compressed, it dumps information to the console. This module does not take too long to execute; however, the underlying build_expressions module takes forever, so this should not be part of the automated regressions. """ import sys import ast import astor try: import importlib except ImportError: try: import all_expr_2_6 as mymod except ImportError: print("Expression list does not exist -- building") from . import build_expressions build_expressions.makelib() print("Expression list built") import all_expr_2_6 as mymod else: mymodname = 'all_expr_%s_%s' % sys.version_info[:2] try: mymod = importlib.import_module(mymodname) except ImportError: print("Expression list does not exist -- building") from . import build_expressions build_expressions.makelib() print("Expression list built") mymod = importlib.import_module(mymodname) def checklib(): print("Checking expressions") parse = ast.parse dump_tree = astor.dump_tree to_source = astor.to_source with open('mismatch_%s_%s.txt' % sys.version_info[:2], 'wb') as f: for srctxt in mymod.all_expr.strip().splitlines(): srcast = parse(srctxt) dsttxt = to_source(srcast) if dsttxt != srctxt: srcdmp = dump_tree(srcast) try: dstast = parse(dsttxt) except SyntaxError: bad = True dstdmp = 'aborted' else: dstdmp = dump_tree(dstast) bad = srcdmp != dstdmp if bad or len(dsttxt) < len(srctxt): print(srctxt, dsttxt) if bad: print('****************** Original') print(srcdmp) print('****************** Extra Crispy') print(dstdmp) print('******************') print() print() f.write(('%s %s\n' % (repr(srctxt), repr(dsttxt))).encode('utf-8')) if __name__ == '__main__': checklib() astor-0.8.1/tests/test_rtrip.py0000644000076500000240000000074513205364752020370 0ustar berkerpeksagstaff00000000000000""" Part of the astor library for Python AST manipulation License: 3-clause BSD Copyright (c) 2017 Patrick Maupin """ import os try: import unittest2 as unittest except ImportError: import unittest import astor.rtrip class RtripTestCase(unittest.TestCase): def test_convert_stdlib(self): srcdir = os.path.dirname(os.__file__) result = astor.rtrip.convert(srcdir) self.assertEqual(result, []) if __name__ == '__main__': unittest.main() astor-0.8.1/tests/build_expressions.py0000755000076500000240000001664213205364752021740 0ustar berkerpeksagstaff00000000000000#! 
/usr/bin/env python # -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2015 Patrick Maupin This module generates a lot of permutations of Python expressions, and dumps them into a python module all_expr_x_y.py (where x and y are the python version tuple) as a string. This string is later used by check_expressions. This module takes a loooooooooong time to execute. """ import sys import collections import itertools import textwrap import ast import astor all_operators = ( # Selected special operands '3 -3 () yield', # operators with one parameter 'yield lambda_: not + - ~ $, yield_from', # operators with two parameters 'or and == != > >= < <= in not_in is is_not ' '| ^ & << >> + - * / % // @ ** for$in$ $($) $[$] . ' '$,$ ', # operators with 3 parameters '$if$else$ $for$in$' ) select_operators = ( # Selected special operands -- remove # some at redundant precedence levels '-3', # operators with one parameter 'yield lambda_: not - ~ $,', # operators with two parameters 'or and == in is ' '| ^ & >> - % ** for$in$ $($) . ', # operators with 3 parameters '$if$else$ $for$in$' ) def get_primitives(base): """Attempt to return formatting strings for all operators, and selected operands. Here, I use the term operator loosely to describe anything that accepts an expression and can be used in an additional expression. """ operands = [] operators = [] for nparams, s in enumerate(base): s = s.replace('%', '%%').split() for s in (x.replace('_', ' ') for x in s): if nparams and '$' not in s: assert nparams in (1, 2) s = '%s%s$' % ('$' if nparams == 2 else '', s) assert nparams == s.count('$'), (nparams, s) s = s.replace('$', ' %s ').strip() # Normalize the spacing s = s.replace(' ,', ',') s = s.replace(' . ', '.') s = s.replace(' [ ', '[').replace(' ]', ']') s = s.replace(' ( ', '(').replace(' )', ')') if nparams == 1: s = s.replace('+ ', '+') s = s.replace('- ', '-') s = s.replace('~ ', '~') if nparams: operators.append((s, nparams)) else: operands.append(s) return operators, operands def get_sub_combinations(maxop): """Return a dictionary of lists of combinations suitable for recursively building expressions. Each dictionary key is a tuple of (numops, numoperands), where: numops is the number of operators we should build an expression for numterms is the number of operands required by the current operator. Each list contains all permutations of the number of operators that the recursively called function should use for each operand. """ combo = collections.defaultdict(list) for numops in range(maxop+1): if numops: combo[numops, 1].append((numops-1,)) for op1 in range(numops): combo[numops, 2].append((op1, numops - op1 - 1)) for op2 in range(numops - op1): combo[numops, 3].append((op1, op2, numops - op1 - op2 - 1)) return combo def get_paren_combos(): """This function returns a list of lists. The first list is indexed by the number of operands the current operator has. Each sublist contains all permutations of wrapping the operands in parentheses or not. 
""" results = [None] * 4 options = [('%s', '(%s)')] for i in range(1, 4): results[i] = list(itertools.product(*(i * options))) return results def operand_combo(expressions, operands, max_operand=13): op_combos = [] operands = list(operands) operands.append('%s') for n in range(max_operand): this_combo = [] op_combos.append(this_combo) for i in range(n): for op in operands: mylist = ['%s'] * n mylist[i] = op this_combo.append(tuple(mylist)) for expr in expressions: expr = expr.replace('%%', '%%%%') for op in op_combos[expr.count('%s')]: yield expr % op def build(numops=2, all_operators=all_operators, use_operands=False, # Runtime optimization tuple=tuple): operators, operands = get_primitives(all_operators) combo = get_sub_combinations(numops) paren_combos = get_paren_combos() product = itertools.product try: izip = itertools.izip except AttributeError: izip = zip def recurse_build(numops): if not numops: yield '%s' for myop, nparams in operators: myop = myop.replace('%%', '%%%%') myparens = paren_combos[nparams] # print combo[numops, nparams] for mycombo in combo[numops, nparams]: # print mycombo call_again = (recurse_build(x) for x in mycombo) for subexpr in product(*call_again): for parens in myparens: wrapped = tuple(x % y for (x, y) in izip(parens, subexpr)) yield myop % wrapped result = recurse_build(numops) return operand_combo(result, operands) if use_operands else result def makelib(): parse = ast.parse dump_tree = astor.dump_tree def default_value(): return 1000000, '' mydict = collections.defaultdict(default_value) allparams = [tuple('abcdefghijklmnop'[:x]) for x in range(13)] alltxt = itertools.chain(build(1, use_operands=True), build(2, use_operands=True), build(3, select_operators)) yieldrepl = list(('yield %s %s' % (operator, operand), 'yield %s%s' % (operator, operand)) for operator in '+-' for operand in '(ab') yieldrepl.append(('yield[', 'yield [')) # alltxt = itertools.chain(build(1), build(2)) badexpr = 0 goodexpr = 0 silly = '3( 3.( 3[ 3.['.split() for expr in alltxt: params = allparams[expr.count('%s')] expr %= params try: myast = parse(expr) except: badexpr += 1 continue goodexpr += 1 key = dump_tree(myast) expr = expr.replace(', - ', ', -') ignore = [x for x in silly if x in expr] if ignore: continue if 'yield' in expr: for x in yieldrepl: expr = expr.replace(*x) mydict[key] = min(mydict[key], (len(expr), expr)) print(badexpr, goodexpr) stuff = [x[1] for x in mydict.values()] stuff.sort() lineend = '\n'.encode('utf-8') with open('all_expr_%s_%s.py' % sys.version_info[:2], 'wb') as f: f.write(textwrap.dedent(''' # AUTOMAGICALLY GENERATED!!! DO NOT MODIFY!! # all_expr = """ ''').encode('utf-8')) for item in stuff: f.write(item.encode('utf-8')) f.write(lineend) f.write('"""\n'.encode('utf-8')) if __name__ == '__main__': makelib() astor-0.8.1/MANIFEST.in0000644000076500000240000000016413573172164016210 0ustar berkerpeksagstaff00000000000000include README.rst AUTHORS LICENSE CHANGES include setuputils.py include astor/VERSION recursive-include tests *.py astor-0.8.1/astor/0000755000076500000240000000000013573574747015616 5ustar berkerpeksagstaff00000000000000astor-0.8.1/astor/rtrip.py0000755000076500000240000001512513470303666017321 0ustar berkerpeksagstaff00000000000000#! /usr/bin/env python # -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. 
License: 3-clause BSD Copyright (c) 2015 Patrick Maupin """ import sys import os import ast import shutil import logging from astor.code_gen import to_source from astor.file_util import code_to_ast from astor.node_util import (allow_ast_comparison, dump_tree, strip_tree, fast_compare) dsttree = 'tmp_rtrip' # TODO: Remove this workaround once we remove version 2 support def out_prep(s, pre_encoded=(sys.version_info[0] == 2)): return s if pre_encoded else s.encode('utf-8') def convert(srctree, dsttree=dsttree, readonly=False, dumpall=False, ignore_exceptions=False, fullcomp=False): """Walk the srctree, and convert/copy all python files into the dsttree """ if fullcomp: allow_ast_comparison() parse_file = code_to_ast.parse_file find_py_files = code_to_ast.find_py_files srctree = os.path.normpath(srctree) if not readonly: dsttree = os.path.normpath(dsttree) logging.info('') logging.info('Trashing ' + dsttree) shutil.rmtree(dsttree, True) unknown_src_nodes = set() unknown_dst_nodes = set() badfiles = set() broken = [] oldpath = None allfiles = find_py_files(srctree, None if readonly else dsttree) for srcpath, fname in allfiles: # Create destination directory if not readonly and srcpath != oldpath: oldpath = srcpath if srcpath >= srctree: dstpath = srcpath.replace(srctree, dsttree, 1) if not dstpath.startswith(dsttree): raise ValueError("%s not a subdirectory of %s" % (dstpath, dsttree)) else: assert srctree.startswith(srcpath) dstpath = dsttree os.makedirs(dstpath) srcfname = os.path.join(srcpath, fname) logging.info('Converting %s' % srcfname) try: srcast = parse_file(srcfname) except SyntaxError: badfiles.add(srcfname) continue try: dsttxt = to_source(srcast) except Exception: if not ignore_exceptions: raise dsttxt = '' if not readonly: dstfname = os.path.join(dstpath, fname) try: with open(dstfname, 'wb') as f: f.write(out_prep(dsttxt)) except UnicodeEncodeError: badfiles.add(dstfname) # As a sanity check, make sure that ASTs themselves # round-trip OK try: dstast = ast.parse(dsttxt) if readonly else parse_file(dstfname) except SyntaxError: dstast = [] if fullcomp: unknown_src_nodes.update(strip_tree(srcast)) unknown_dst_nodes.update(strip_tree(dstast)) bad = srcast != dstast else: bad = not fast_compare(srcast, dstast) if dumpall or bad: srcdump = dump_tree(srcast) dstdump = dump_tree(dstast) logging.warning(' calculating dump -- %s' % ('bad' if bad else 'OK')) if bad: broken.append(srcfname) if dumpall or bad: if not readonly: try: with open(dstfname[:-3] + '.srcdmp', 'wb') as f: f.write(out_prep(srcdump)) except UnicodeEncodeError: badfiles.add(dstfname[:-3] + '.srcdmp') try: with open(dstfname[:-3] + '.dstdmp', 'wb') as f: f.write(out_prep(dstdump)) except UnicodeEncodeError: badfiles.add(dstfname[:-3] + '.dstdmp') elif dumpall: sys.stdout.write('\n\nAST:\n\n ') sys.stdout.write(srcdump.replace('\n', '\n ')) sys.stdout.write('\n\nDecompile:\n\n ') sys.stdout.write(dsttxt.replace('\n', '\n ')) sys.stdout.write('\n\nNew AST:\n\n ') sys.stdout.write('(same as old)' if dstdump == srcdump else dstdump.replace('\n', '\n ')) sys.stdout.write('\n') if badfiles: logging.warning('\nFiles not processed due to syntax errors:') for fname in sorted(badfiles): logging.warning(' %s' % fname) if broken: logging.warning('\nFiles failed to round-trip to AST:') for srcfname in broken: logging.warning(' %s' % srcfname) ok_to_strip = 'col_offset _precedence _use_parens lineno _p_op _pp' ok_to_strip = set(ok_to_strip.split()) bad_nodes = (unknown_dst_nodes | unknown_src_nodes) - ok_to_strip if bad_nodes: 
logging.error('\nERROR -- UNKNOWN NODES STRIPPED: %s' % bad_nodes) logging.info('\n') return broken def usage(msg): raise SystemExit(textwrap.dedent(""" Error: %s Usage: python -m astor.rtrip [readonly] [] This utility tests round-tripping of Python source to AST and back to source. If readonly is specified, then the source will be tested, but no files will be written. if the source is specified to be "stdin" (without quotes) then any source entered at the command line will be compiled into an AST, converted back to text, and then compiled to an AST again, and the results will be displayed to stdout. If neither readonly nor stdin is specified, then rtrip will create a mirror directory named tmp_rtrip and will recursively round-trip all the Python source from the source into the tmp_rtrip dir, after compiling it and then reconstituting it through code_gen.to_source. If the source is not specified, the entire Python library will be used. """) % msg) if __name__ == '__main__': import textwrap args = sys.argv[1:] readonly = 'readonly' in args if readonly: args.remove('readonly') if not args: args = [os.path.dirname(textwrap.__file__)] if len(args) > 1: usage("Too many arguments") fname, = args dumpall = False if not os.path.exists(fname): dumpall = fname == 'stdin' or usage("Cannot find directory %s" % fname) logging.basicConfig(format='%(msg)s', level=logging.INFO) convert(fname, readonly=readonly or dumpall, dumpall=dumpall) astor-0.8.1/astor/__init__.py0000644000076500000240000000436313573172164017720 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright 2012 (c) Patrick Maupin Copyright 2013 (c) Berker Peksag """ import os import warnings from .code_gen import SourceGenerator, to_source # NOQA from .node_util import iter_node, strip_tree, dump_tree # NOQA from .node_util import ExplicitNodeVisitor # NOQA from .file_util import CodeToAst, code_to_ast # NOQA from .op_util import get_op_symbol, get_op_precedence # NOQA from .op_util import symbol_data # NOQA from .tree_walk import TreeWalk # NOQA ROOT = os.path.dirname(__file__) with open(os.path.join(ROOT, 'VERSION')) as version_file: __version__ = version_file.read().strip() parse_file = code_to_ast.parse_file # DEPRECATED!!! # These aliases support old programs. Please do not use in future. deprecated = """ get_boolop = get_binop = get_cmpop = get_unaryop = get_op_symbol get_anyop = get_op_symbol parsefile = code_to_ast.parse_file codetoast = code_to_ast dump = dump_tree all_symbols = symbol_data treewalk = tree_walk codegen = code_gen """ exec(deprecated) def deprecate(): def wrap(deprecated_name, target_name): if '.' in target_name: target_mod, target_fname = target_name.split('.') target_func = getattr(globals()[target_mod], target_fname) else: target_func = globals()[target_name] msg = "astor.%s is deprecated. Please use astor.%s." 
% ( deprecated_name, target_name) if callable(target_func): def newfunc(*args, **kwarg): warnings.warn(msg, DeprecationWarning, stacklevel=2) return target_func(*args, **kwarg) else: class ModProxy: def __getattr__(self, name): warnings.warn(msg, DeprecationWarning, stacklevel=2) return getattr(target_func, name) newfunc = ModProxy() globals()[deprecated_name] = newfunc for line in deprecated.splitlines(): # NOQA line = line.split('#')[0].replace('=', '').split() if line: target_name = line.pop() for deprecated_name in line: wrap(deprecated_name, target_name) deprecate() del deprecate, deprecated astor-0.8.1/astor/file_util.py0000644000076500000240000000630413205364752020130 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2012-2015 Patrick Maupin Copyright (c) 2013-2015 Berker Peksag Functions that interact with the filesystem go here. """ import ast import sys import os try: from tokenize import open as fopen except ImportError: fopen = open class CodeToAst(object): """Given a module, or a function that was compiled as part of a module, re-compile the module into an AST and extract the sub-AST for the function. Allow caching to reduce number of compiles. Also contains static helper utility functions to look for python files, to parse python files, and to extract the file/line information from a code object. """ @staticmethod def find_py_files(srctree, ignore=None): """Return all the python files in a source tree Ignores any path that contains the ignore string This is not used by other class methods, but is designed to be used in code that uses this class. """ if not os.path.isdir(srctree): yield os.path.split(srctree) for srcpath, _, fnames in os.walk(srctree): # Avoid infinite recursion for silly users if ignore is not None and ignore in srcpath: continue for fname in (x for x in fnames if x.endswith('.py')): yield srcpath, fname @staticmethod def parse_file(fname): """Parse a python file into an AST. This is a very thin wrapper around ast.parse TODO: Handle encodings other than the default for Python 2 (issue #26) """ try: with fopen(fname) as f: fstr = f.read() except IOError: if fname != 'stdin': raise sys.stdout.write('\nReading from stdin:\n\n') fstr = sys.stdin.read() fstr = fstr.replace('\r\n', '\n').replace('\r', '\n') if not fstr.endswith('\n'): fstr += '\n' return ast.parse(fstr, filename=fname) @staticmethod def get_file_info(codeobj): """Returns the file and line number of a code object. If the code object has a __file__ attribute (e.g. if it is a module), then the returned line number will be 0 """ fname = getattr(codeobj, '__file__', None) linenum = 0 if fname is None: func_code = codeobj.__code__ fname = func_code.co_filename linenum = func_code.co_firstlineno fname = fname.replace('.pyc', '.py') return fname, linenum def __init__(self, cache=None): self.cache = cache or {} def __call__(self, codeobj): cache = self.cache key = self.get_file_info(codeobj) result = cache.get(key) if result is not None: return result fname = key[0] cache[(fname, 0)] = mod_ast = self.parse_file(fname) for obj in mod_ast.body: if not isinstance(obj, ast.FunctionDef): continue cache[(fname, obj.lineno)] = obj return cache[key] code_to_ast = CodeToAst() astor-0.8.1/astor/tree_walk.py0000644000076500000240000001360413470303666020133 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. 
License: 3-clause BSD Copyright 2012 (c) Patrick Maupin Copyright 2013 (c) Berker Peksag This file contains a TreeWalk class that views a node tree as a unified whole and allows several modes of traversal. """ from .node_util import iter_node class MetaFlatten(type): """This metaclass is used to flatten classes to remove class hierarchy. This makes it easier to manipulate classes (find attributes in a single dict, etc.) """ def __new__(clstype, name, bases, clsdict): newbases = (object,) newdict = {} for base in reversed(bases): if base not in newbases: newdict.update(vars(base)) newdict.update(clsdict) # These are class-bound, we should let Python recreate them. newdict.pop('__dict__', None) newdict.pop('__weakref__', None) # Delegate the real work to type return type.__new__(clstype, name, newbases, newdict) MetaFlatten = MetaFlatten('MetaFlatten', (object,), {}) class TreeWalk(MetaFlatten): """The TreeWalk class can be used as a superclass in order to walk an AST or similar tree. Unlike other treewalkers, this class can walk a tree either recursively or non-recursively. Subclasses can define methods with the following signatures:: def pre_xxx(self): pass def post_xxx(self): pass def init_xxx(self): pass Where 'xxx' is one of: - A class name - An attribute member name concatenated with '_name' For example, 'pre_targets_name' will process nodes that are referenced by the name 'targets' in their parent's node. - An attribute member name concatenated with '_item' For example, 'pre_targets_item' will process nodes that are in a list that is the targets attribute of some node. pre_xxx will process a node before processing any of its subnodes. if the return value from pre_xxx evalates to true, then walk will not process any of the subnodes. Those can be manually processed, if desired, by calling self.walk(node) on the subnodes before returning True. post_xxx will process a node after processing all its subnodes. init_xxx methods can decorate the class instance with subclass-specific information. A single init_whatever method could be written, but to make it easy to keep initialization with use, any number of init_xxx methods can be written. They will be called in alphabetical order. """ def __init__(self, node=None): self.nodestack = [] self.setup() if node is not None: self.walk(node) def setup(self): """All the node-specific handlers are setup at object initialization time. """ self.pre_handlers = pre_handlers = {} self.post_handlers = post_handlers = {} for name in sorted(vars(type(self))): if name.startswith('init_'): getattr(self, name)() elif name.startswith('pre_'): pre_handlers[name[4:]] = getattr(self, name) elif name.startswith('post_'): post_handlers[name[5:]] = getattr(self, name) def walk(self, node, name='', list=list, len=len, type=type): """Walk the tree starting at a given node. Maintain a stack of nodes. 
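        For instance, a subclass defining a ``post_Name`` handler is called for
        each ``ast.Name`` node after its children have been walked (a minimal,
        illustrative sketch -- the subclass name is made up)::

            import ast

            class NameLister(TreeWalk):
                def post_Name(self):
                    # self.cur_node is the ast.Name node just visited
                    print(self.cur_node.id)

            NameLister(ast.parse('a = b + c'))  # prints a, b and c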
""" pre_handlers = self.pre_handlers.get post_handlers = self.post_handlers.get nodestack = self.nodestack emptystack = len(nodestack) append, pop = nodestack.append, nodestack.pop append([node, name, list(iter_node(node, name + '_item')), -1]) while len(nodestack) > emptystack: node, name, subnodes, index = nodestack[-1] if index >= len(subnodes): handler = (post_handlers(type(node).__name__) or post_handlers(name + '_name')) if handler is None: pop() continue self.cur_node = node self.cur_name = name handler() current = nodestack and nodestack[-1] popstack = current and current[0] is node if popstack and current[-1] >= len(current[-2]): pop() continue nodestack[-1][-1] = index + 1 if index < 0: handler = (pre_handlers(type(node).__name__) or pre_handlers(name + '_name')) if handler is not None: self.cur_node = node self.cur_name = name if handler(): pop() else: node, name = subnodes[index] append([node, name, list(iter_node(node, name + '_item')), -1]) @property def parent(self): """Return the parent node of the current node.""" nodestack = self.nodestack if len(nodestack) < 2: return None return nodestack[-2][0] @property def parent_name(self): """Return the parent node and name.""" nodestack = self.nodestack if len(nodestack) < 2: return None return nodestack[-2][:2] def replace(self, new_node): """Replace a node after first checking integrity of node stack.""" cur_node = self.cur_node nodestack = self.nodestack cur = nodestack.pop() prev = nodestack[-1] index = prev[-1] - 1 oldnode, name = prev[-2][index] assert cur[0] is cur_node is oldnode, (cur[0], cur_node, prev[-2], index) parent = prev[0] if isinstance(parent, list): parent[index] = new_node else: setattr(parent, name, new_node) astor-0.8.1/astor/code_gen.py0000644000076500000240000007644013573172164017731 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2008 Armin Ronacher Copyright (c) 2012-2017 Patrick Maupin Copyright (c) 2013-2017 Berker Peksag This module converts an AST into Python source code. Before being version-controlled as part of astor, this code came from here (in 2012): https://gist.github.com/1250562 """ import ast import inspect import math import sys from .op_util import get_op_symbol, get_op_precedence, Precedence from .node_util import ExplicitNodeVisitor from .string_repr import pretty_string from .source_repr import pretty_source def to_source(node, indent_with=' ' * 4, add_line_information=False, pretty_string=pretty_string, pretty_source=pretty_source, source_generator_class=None): """This function can convert a node tree back into python sourcecode. This is useful for debugging purposes, especially if you're dealing with custom asts not generated by python itself. It could be that the sourcecode is evaluable when the AST itself is not compilable / evaluable. The reason for this is that the AST contains some more data than regular sourcecode does, which is dropped during conversion. Each level of indentation is replaced with `indent_with`. Per default this parameter is equal to four spaces as suggested by PEP 8, but it might be adjusted to match the application's styleguide. If `add_line_information` is set to `True` comments for the line numbers of the nodes are added to the output. This can be used to spot wrong line number information of statement nodes. `source_generator_class` defaults to `SourceGenerator`, and specifies the class that will be instantiated and used to generate the source code. 
""" if source_generator_class is None: source_generator_class = SourceGenerator elif not inspect.isclass(source_generator_class): raise TypeError('source_generator_class should be a class') elif not issubclass(source_generator_class, SourceGenerator): raise TypeError('source_generator_class should be a subclass of SourceGenerator') generator = source_generator_class( indent_with, add_line_information, pretty_string) generator.visit(node) generator.result.append('\n') if set(generator.result[0]) == set('\n'): generator.result[0] = '' return pretty_source(generator.result) def precedence_setter(AST=ast.AST, get_op_precedence=get_op_precedence, isinstance=isinstance, list=list): """ This only uses a closure for performance reasons, to reduce the number of attribute lookups. (set_precedence is called a lot of times.) """ def set_precedence(value, *nodes): """Set the precedence (of the parent) into the children. """ if isinstance(value, AST): value = get_op_precedence(value) for node in nodes: if isinstance(node, AST): node._pp = value elif isinstance(node, list): set_precedence(value, *node) else: assert node is None, node return set_precedence set_precedence = precedence_setter() class Delimit(object): """A context manager that can add enclosing delimiters around the output of a SourceGenerator method. By default, the parentheses are added, but the enclosed code may set discard=True to get rid of them. """ discard = False def __init__(self, tree, *args): """ use write instead of using result directly for initial data, because it may flush preceding data into result. """ delimiters = '()' node = None op = None for arg in args: if isinstance(arg, ast.AST): if node is None: node = arg else: op = arg else: delimiters = arg tree.write(delimiters[0]) result = self.result = tree.result self.index = len(result) self.closing = delimiters[1] if node is not None: self.p = p = get_op_precedence(op or node) self.pp = pp = tree.get__pp(node) self.discard = p >= pp def __enter__(self): return self def __exit__(self, *exc_info): result = self.result start = self.index - 1 if self.discard: result[start] = '' else: result.append(self.closing) class SourceGenerator(ExplicitNodeVisitor): """This visitor is able to transform a well formed syntax tree into Python sourcecode. For more details have a look at the docstring of the `node_to_source` function. """ using_unicode_literals = False def __init__(self, indent_with, add_line_information=False, pretty_string=pretty_string, # constants len=len, isinstance=isinstance, callable=callable): self.result = [] self.indent_with = indent_with self.add_line_information = add_line_information self.indentation = 0 # Current indentation level self.new_lines = 0 # Number of lines to insert before next code self.colinfo = 0, 0 # index in result of string containing linefeed, and # position of last linefeed in that string self.pretty_string = pretty_string AST = ast.AST visit = self.visit result = self.result append = result.append def write(*params): """ self.write is a closure for performance (to reduce the number of attribute lookups). """ for item in params: if isinstance(item, AST): visit(item) elif callable(item): item() else: if self.new_lines: append('\n' * self.new_lines) self.colinfo = len(result), 0 append(self.indent_with * self.indentation) self.new_lines = 0 if item: append(item) self.write = write def __getattr__(self, name, defaults=dict(keywords=(), _pp=Precedence.highest).get): """ Get an attribute of the node. 
like dict.get (returns None if doesn't exist) """ if not name.startswith('get_'): raise AttributeError geta = getattr shortname = name[4:] default = defaults(shortname) def getter(node): return geta(node, shortname, default) setattr(self, name, getter) return getter def delimit(self, *args): return Delimit(self, *args) def conditional_write(self, *stuff): if stuff[-1] is not None: self.write(*stuff) # Inform the caller that we wrote return True def newline(self, node=None, extra=0): self.new_lines = max(self.new_lines, 1 + extra) if node is not None and self.add_line_information: self.write('# line: %s' % node.lineno) self.new_lines = 1 def body(self, statements): self.indentation += 1 self.write(*statements) self.indentation -= 1 def else_body(self, elsewhat): if elsewhat: self.write(self.newline, 'else:') self.body(elsewhat) def body_or_else(self, node): self.body(node.body) self.else_body(node.orelse) def visit_arguments(self, node): want_comma = [] def write_comma(): if want_comma: self.write(', ') else: want_comma.append(True) def loop_args(args, defaults): set_precedence(Precedence.Comma, defaults) padding = [None] * (len(args) - len(defaults)) for arg, default in zip(args, padding + defaults): self.write(write_comma, arg) self.conditional_write('=', default) posonlyargs = getattr(node, 'posonlyargs', []) offset = 0 if posonlyargs: offset += len(node.defaults) - len(node.args) loop_args(posonlyargs, node.defaults[:offset]) self.write(write_comma, '/') loop_args(node.args, node.defaults[offset:]) self.conditional_write(write_comma, '*', node.vararg) kwonlyargs = self.get_kwonlyargs(node) if kwonlyargs: if node.vararg is None: self.write(write_comma, '*') loop_args(kwonlyargs, node.kw_defaults) self.conditional_write(write_comma, '**', node.kwarg) def statement(self, node, *params, **kw): self.newline(node) self.write(*params) def decorators(self, node, extra): self.newline(extra=extra) for decorator in node.decorator_list: self.statement(decorator, '@', decorator) def comma_list(self, items, trailing=False): set_precedence(Precedence.Comma, *items) for idx, item in enumerate(items): self.write(', ' if idx else '', item) self.write(',' if trailing else '') # Statements def visit_Assign(self, node): set_precedence(node, node.value, *node.targets) self.newline(node) for target in node.targets: self.write(target, ' = ') self.visit(node.value) def visit_AugAssign(self, node): set_precedence(node, node.value, node.target) self.statement(node, node.target, get_op_symbol(node.op, ' %s= '), node.value) def visit_AnnAssign(self, node): set_precedence(node, node.target, node.annotation) set_precedence(Precedence.Comma, node.value) need_parens = isinstance(node.target, ast.Name) and not node.simple begin = '(' if need_parens else '' end = ')' if need_parens else '' self.statement(node, begin, node.target, end, ': ', node.annotation) self.conditional_write(' = ', node.value) def visit_ImportFrom(self, node): self.statement(node, 'from ', node.level * '.', node.module or '', ' import ') self.comma_list(node.names) # Goofy stuff for Python 2.7 _pyio module if node.module == '__future__' and 'unicode_literals' in ( x.name for x in node.names): self.using_unicode_literals = True def visit_Import(self, node): self.statement(node, 'import ') self.comma_list(node.names) def visit_Expr(self, node): set_precedence(node, node.value) self.statement(node) self.generic_visit(node) def visit_FunctionDef(self, node, is_async=False): prefix = 'async ' if is_async else '' self.decorators(node, 1 if 
self.indentation else 2) self.statement(node, '%sdef %s' % (prefix, node.name), '(') self.visit_arguments(node.args) self.write(')') self.conditional_write(' ->', self.get_returns(node)) self.write(':') self.body(node.body) if not self.indentation: self.newline(extra=2) # introduced in Python 3.5 def visit_AsyncFunctionDef(self, node): self.visit_FunctionDef(node, is_async=True) def visit_ClassDef(self, node): have_args = [] def paren_or_comma(): if have_args: self.write(', ') else: have_args.append(True) self.write('(') self.decorators(node, 2) self.statement(node, 'class %s' % node.name) for base in node.bases: self.write(paren_or_comma, base) # keywords not available in early version for keyword in self.get_keywords(node): self.write(paren_or_comma, keyword.arg or '', '=' if keyword.arg else '**', keyword.value) self.conditional_write(paren_or_comma, '*', self.get_starargs(node)) self.conditional_write(paren_or_comma, '**', self.get_kwargs(node)) self.write(have_args and '):' or ':') self.body(node.body) if not self.indentation: self.newline(extra=2) def visit_If(self, node): set_precedence(node, node.test) self.statement(node, 'if ', node.test, ':') self.body(node.body) while True: else_ = node.orelse if len(else_) == 1 and isinstance(else_[0], ast.If): node = else_[0] set_precedence(node, node.test) self.write(self.newline, 'elif ', node.test, ':') self.body(node.body) else: self.else_body(else_) break def visit_For(self, node, is_async=False): set_precedence(node, node.target) prefix = 'async ' if is_async else '' self.statement(node, '%sfor ' % prefix, node.target, ' in ', node.iter, ':') self.body_or_else(node) # introduced in Python 3.5 def visit_AsyncFor(self, node): self.visit_For(node, is_async=True) def visit_While(self, node): set_precedence(node, node.test) self.statement(node, 'while ', node.test, ':') self.body_or_else(node) def visit_With(self, node, is_async=False): prefix = 'async ' if is_async else '' self.statement(node, '%swith ' % prefix) if hasattr(node, "context_expr"): # Python < 3.3 self.visit_withitem(node) else: # Python >= 3.3 self.comma_list(node.items) self.write(':') self.body(node.body) # new for Python 3.5 def visit_AsyncWith(self, node): self.visit_With(node, is_async=True) # new for Python 3.3 def visit_withitem(self, node): self.write(node.context_expr) self.conditional_write(' as ', node.optional_vars) # deprecated in Python 3.8 def visit_NameConstant(self, node): self.write(repr(node.value)) def visit_Pass(self, node): self.statement(node, 'pass') def visit_Print(self, node): # XXX: python 2.6 only self.statement(node, 'print ') values = node.values if node.dest is not None: self.write(' >> ') values = [node.dest] + node.values self.comma_list(values, not node.nl) def visit_Delete(self, node): self.statement(node, 'del ') self.comma_list(node.targets) def visit_TryExcept(self, node): self.statement(node, 'try:') self.body(node.body) self.write(*node.handlers) self.else_body(node.orelse) # new for Python 3.3 def visit_Try(self, node): self.statement(node, 'try:') self.body(node.body) self.write(*node.handlers) self.else_body(node.orelse) if node.finalbody: self.statement(node, 'finally:') self.body(node.finalbody) def visit_ExceptHandler(self, node): self.statement(node, 'except') if self.conditional_write(' ', node.type): self.conditional_write(' as ', node.name) self.write(':') self.body(node.body) def visit_TryFinally(self, node): self.statement(node, 'try:') self.body(node.body) self.statement(node, 'finally:') self.body(node.finalbody) def 
visit_Exec(self, node): dicts = node.globals, node.locals dicts = dicts[::-1] if dicts[0] is None else dicts self.statement(node, 'exec ', node.body) self.conditional_write(' in ', dicts[0]) self.conditional_write(', ', dicts[1]) def visit_Assert(self, node): set_precedence(node, node.test, node.msg) self.statement(node, 'assert ', node.test) self.conditional_write(', ', node.msg) def visit_Global(self, node): self.statement(node, 'global ', ', '.join(node.names)) def visit_Nonlocal(self, node): self.statement(node, 'nonlocal ', ', '.join(node.names)) def visit_Return(self, node): set_precedence(node, node.value) self.statement(node, 'return') self.conditional_write(' ', node.value) def visit_Break(self, node): self.statement(node, 'break') def visit_Continue(self, node): self.statement(node, 'continue') def visit_Raise(self, node): # XXX: Python 2.6 / 3.0 compatibility self.statement(node, 'raise') if self.conditional_write(' ', self.get_exc(node)): self.conditional_write(' from ', node.cause) elif self.conditional_write(' ', self.get_type(node)): set_precedence(node, node.inst) self.conditional_write(', ', node.inst) self.conditional_write(', ', node.tback) # Expressions def visit_Attribute(self, node): self.write(node.value, '.', node.attr) def visit_Call(self, node, len=len): write = self.write want_comma = [] def write_comma(): if want_comma: write(', ') else: want_comma.append(True) args = node.args keywords = node.keywords starargs = self.get_starargs(node) kwargs = self.get_kwargs(node) numargs = len(args) + len(keywords) numargs += starargs is not None numargs += kwargs is not None p = Precedence.Comma if numargs > 1 else Precedence.call_one_arg set_precedence(p, *args) self.visit(node.func) write('(') for arg in args: write(write_comma, arg) set_precedence(Precedence.Comma, *(x.value for x in keywords)) for keyword in keywords: # a keyword.arg of None indicates dictionary unpacking # (Python >= 3.5) arg = keyword.arg or '' write(write_comma, arg, '=' if arg else '**', keyword.value) # 3.5 no longer has these self.conditional_write(write_comma, '*', starargs) self.conditional_write(write_comma, '**', kwargs) write(')') def visit_Name(self, node): self.write(node.id) # ast.Constant is new in Python 3.6 and it replaces ast.Bytes, # ast.Ellipsis, ast.NameConstant, ast.Num, ast.Str in Python 3.8 def visit_Constant(self, node): value = node.value if isinstance(value, (int, float, complex)): with self.delimit(node): self._handle_numeric_constant(value) elif isinstance(value, str): self._handle_string_constant(node, node.value) elif value is Ellipsis: self.write('...') else: self.write(repr(value)) def visit_JoinedStr(self, node): self._handle_string_constant(node, None, is_joined=True) def _handle_string_constant(self, node, value, is_joined=False): # embedded is used to control when we might want # to use a triple-quoted string. We determine # if we are in an assignment and/or in an expression precedence = self.get__pp(node) embedded = ((precedence > Precedence.Expr) + (precedence >= Precedence.Assign)) # Flush any pending newlines, because we're about # to severely abuse the result list. self.write('') result = self.result # Calculate the string representing the line # we are working on, up to but not including # the string we are adding. 
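# self.colinfo (maintained by write() and updated again below) marks where the
# current output line begins: an index into self.result plus an offset to skip
# within that chunk, just past the last linefeed that was written.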
res_index, str_index = self.colinfo current_line = self.result[res_index:] if str_index: current_line[0] = current_line[0][str_index:] current_line = ''.join(current_line) has_ast_constant = sys.version_info >= (3, 6) if is_joined: # Handle new f-strings. This is a bit complicated, because # the tree can contain subnodes that recurse back to JoinedStr # subnodes... def recurse(node): for value in node.values: if isinstance(value, ast.Str): # Double up braces to escape them. self.write(value.s.replace('{', '{{').replace('}', '}}')) elif isinstance(value, ast.FormattedValue): with self.delimit('{}'): # expr_text used for f-string debugging syntax. if getattr(value, 'expr_text', None): self.write(value.expr_text) else: set_precedence(value, value.value) self.visit(value.value) if value.conversion != -1: self.write('!%s' % chr(value.conversion)) if value.format_spec is not None: self.write(':') recurse(value.format_spec) elif has_ast_constant and isinstance(value, ast.Constant): self.write(value.value) else: kind = type(value).__name__ assert False, 'Invalid node %s inside JoinedStr' % kind index = len(result) recurse(node) # Flush trailing newlines (so that they are part of mystr) self.write('') mystr = ''.join(result[index:]) del result[index:] self.colinfo = res_index, str_index # Put it back like we found it uni_lit = False # No formatted byte strings else: assert value is not None, "Node value cannot be None" mystr = value uni_lit = self.using_unicode_literals mystr = self.pretty_string(mystr, embedded, current_line, uni_lit) if is_joined: mystr = 'f' + mystr elif getattr(node, 'kind', False): # Constant.kind is a Python 3.8 addition. mystr = node.kind + mystr self.write(mystr) lf = mystr.rfind('\n') + 1 if lf: self.colinfo = len(result) - 1, lf # deprecated in Python 3.8 def visit_Str(self, node): self._handle_string_constant(node, node.s) # deprecated in Python 3.8 def visit_Bytes(self, node): self.write(repr(node.s)) def _handle_numeric_constant(self, value): x = value def part(p, imaginary): # Represent infinity as 1e1000 and NaN as 1e1000-1e1000. s = 'j' if imaginary else '' try: if math.isinf(p): if p < 0: return '-1e1000' + s return '1e1000' + s if math.isnan(p): return '(1e1000%s-1e1000%s)' % (s, s) except OverflowError: # math.isinf will raise this when given an integer # that's too large to convert to a float. pass return repr(p) + s real = part(x.real if isinstance(x, complex) else x, imaginary=False) if isinstance(x, complex): imag = part(x.imag, imaginary=True) if x.real == 0: s = imag elif x.imag == 0: s = '(%s+0j)' % real else: # x has nonzero real and imaginary parts. s = '(%s%s%s)' % (real, ['+', ''][imag.startswith('-')], imag) else: s = real self.write(s) def visit_Num(self, node, # constants new=sys.version_info >= (3, 0)): with self.delimit(node) as delimiters: self._handle_numeric_constant(node.n) # We can leave the delimiters handling in visit_Num # since this is meant to handle a Python 2.x specific # issue and ast.Constant exists only in 3.6+ # The Python 2.x compiler merges a unary minus # with a number. This is a premature optimization # that we deal with here... 
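# Concretely: '**' binds tighter than unary minus, so a folded negative
# literal used as the left operand of a power (e.g. (-1) ** 2) must keep its
# parentheses, while in most other positions they can safely be dropped.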
if not new and delimiters.discard: if not isinstance(node.n, complex) and node.n < 0: pow_lhs = Precedence.Pow + 1 delimiters.discard = delimiters.pp != pow_lhs else: op = self.get__p_op(node) delimiters.discard = not isinstance(op, ast.USub) def visit_Tuple(self, node): with self.delimit(node) as delimiters: # Two things are special about tuples: # 1) We cannot discard the enclosing parentheses if empty # 2) We need the trailing comma if only one item elts = node.elts delimiters.discard = delimiters.discard and elts self.comma_list(elts, len(elts) == 1) def visit_List(self, node): with self.delimit('[]'): self.comma_list(node.elts) def visit_Set(self, node): if node.elts: with self.delimit('{}'): self.comma_list(node.elts) else: # If we tried to use "{}" to represent an empty set, it would be # interpreted as an empty dictionary. We can't use "set()" either # because the name "set" might be rebound. self.write('{1}.__class__()') def visit_Dict(self, node): set_precedence(Precedence.Comma, *node.values) with self.delimit('{}'): for idx, (key, value) in enumerate(zip(node.keys, node.values)): self.write(', ' if idx else '', key if key else '', ': ' if key else '**', value) def visit_BinOp(self, node): op, left, right = node.op, node.left, node.right with self.delimit(node, op) as delimiters: ispow = isinstance(op, ast.Pow) p = delimiters.p set_precedence((Precedence.Pow + 1) if ispow else p, left) set_precedence(Precedence.PowRHS if ispow else (p + 1), right) self.write(left, get_op_symbol(op, ' %s '), right) def visit_BoolOp(self, node): with self.delimit(node, node.op) as delimiters: op = get_op_symbol(node.op, ' %s ') set_precedence(delimiters.p + 1, *node.values) for idx, value in enumerate(node.values): self.write(idx and op or '', value) def visit_Compare(self, node): with self.delimit(node, node.ops[0]) as delimiters: set_precedence(delimiters.p + 1, node.left, *node.comparators) self.visit(node.left) for op, right in zip(node.ops, node.comparators): self.write(get_op_symbol(op, ' %s '), right) # assignment expressions; new for Python 3.8 def visit_NamedExpr(self, node): with self.delimit(node) as delimiters: p = delimiters.p set_precedence(p, node.target) set_precedence(p + 1, node.value) # Python is picky about delimiters for assignment # expressions: it requires at least one pair in any # statement that uses an assignment expression, even # when not necessary according to the precedence # rules. We address this with the kludge of forcing a # pair of parentheses around every assignment # expression. delimiters.discard = False self.write(node.target, ' := ', node.value) def visit_UnaryOp(self, node): with self.delimit(node, node.op) as delimiters: set_precedence(delimiters.p, node.operand) # In Python 2.x, a unary negative of a literal # number is merged into the number itself. This # bit of ugliness means it is useful to know # what the parent operation was... 
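# Stash the unary operator on the operand; visit_Num reads it back via
# get__p_op() to decide whether a negative literal merged by the Python 2
# compiler still needs its own parentheses.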
node.operand._p_op = node.op sym = get_op_symbol(node.op) self.write(sym, ' ' if sym.isalpha() else '', node.operand) def visit_Subscript(self, node): set_precedence(node, node.slice) self.write(node.value, '[', node.slice, ']') def visit_Slice(self, node): set_precedence(node, node.lower, node.upper, node.step) self.conditional_write(node.lower) self.write(':') self.conditional_write(node.upper) if node.step is not None: self.write(':') if not (isinstance(node.step, ast.Name) and node.step.id == 'None'): self.visit(node.step) def visit_Index(self, node): with self.delimit(node) as delimiters: set_precedence(delimiters.p, node.value) self.visit(node.value) def visit_ExtSlice(self, node): dims = node.dims set_precedence(node, *dims) self.comma_list(dims, len(dims) == 1) def visit_Yield(self, node): with self.delimit(node): set_precedence(get_op_precedence(node) + 1, node.value) self.write('yield') self.conditional_write(' ', node.value) # new for Python 3.3 def visit_YieldFrom(self, node): with self.delimit(node): self.write('yield from ', node.value) # new for Python 3.5 def visit_Await(self, node): with self.delimit(node): self.write('await ', node.value) def visit_Lambda(self, node): with self.delimit(node) as delimiters: set_precedence(delimiters.p, node.body) self.write('lambda ') self.visit_arguments(node.args) self.write(': ', node.body) def visit_Ellipsis(self, node): self.write('...') def visit_ListComp(self, node): with self.delimit('[]'): self.write(node.elt, *node.generators) def visit_GeneratorExp(self, node): with self.delimit(node) as delimiters: if delimiters.pp == Precedence.call_one_arg: delimiters.discard = True set_precedence(Precedence.Comma, node.elt) self.write(node.elt, *node.generators) def visit_SetComp(self, node): with self.delimit('{}'): self.write(node.elt, *node.generators) def visit_DictComp(self, node): with self.delimit('{}'): self.write(node.key, ': ', node.value, *node.generators) def visit_IfExp(self, node): with self.delimit(node) as delimiters: set_precedence(delimiters.p + 1, node.body, node.test) set_precedence(delimiters.p, node.orelse) self.write(node.body, ' if ', node.test, ' else ', node.orelse) def visit_Starred(self, node): self.write('*', node.value) def visit_Repr(self, node): # XXX: python 2.6 only with self.delimit('``'): self.visit(node.value) def visit_Module(self, node): self.write(*node.body) visit_Interactive = visit_Module def visit_Expression(self, node): self.visit(node.body) # Helper Nodes def visit_arg(self, node): self.write(node.arg) self.conditional_write(': ', node.annotation) def visit_alias(self, node): self.write(node.name) self.conditional_write(' as ', node.asname) def visit_comprehension(self, node): set_precedence(node, node.iter, *node.ifs) set_precedence(Precedence.comprehension_target, node.target) stmt = ' async for ' if self.get_is_async(node) else ' for ' self.write(stmt, node.target, ' in ', node.iter) for if_ in node.ifs: self.write(' if ', if_) astor-0.8.1/astor/source_repr.py0000644000076500000240000001631513573326607020514 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2015 Patrick Maupin Pretty-print source -- post-process for the decompiler The goals of the initial cut of this engine are: 1) Do a passable, if not PEP8, job of line-wrapping. 2) Serve as an example of an interface to the decompiler for anybody who wants to do a better job. :) """ def pretty_source(source): """ Prettify the source. 
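The input is the list of string chunks emitted by the source generator; they are re-joined into a single string, with lines longer than 79 characters wrapped on a best-effort basis by split_lines().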
""" return ''.join(split_lines(source)) def split_lines(source, maxline=79): """Split inputs according to lines. If a line is short enough, just yield it. Otherwise, fix it. """ result = [] extend = result.extend append = result.append line = [] multiline = False count = 0 for item in source: newline = type(item)('\n') index = item.find(newline) if index: line.append(item) multiline = index > 0 count += len(item) else: if line: if count <= maxline or multiline: extend(line) else: wrap_line(line, maxline, result) count = 0 multiline = False line = [] append(item) return result def count(group, slen=str.__len__): return sum([slen(x) for x in group]) def wrap_line(line, maxline=79, result=[], count=count): """ We have a line that is too long, so we're going to try to wrap it. """ # Extract the indentation append = result.append extend = result.extend indentation = line[0] lenfirst = len(indentation) indent = lenfirst - len(indentation.lstrip()) assert indent in (0, lenfirst) indentation = line.pop(0) if indent else '' # Get splittable/non-splittable groups dgroups = list(delimiter_groups(line)) unsplittable = dgroups[::2] splittable = dgroups[1::2] # If the largest non-splittable group won't fit # on a line, try to add parentheses to the line. if max(count(x) for x in unsplittable) > maxline - indent: line = add_parens(line, maxline, indent) dgroups = list(delimiter_groups(line)) unsplittable = dgroups[::2] splittable = dgroups[1::2] # Deal with the first (always unsplittable) group, and # then set up to deal with the remainder in pairs. first = unsplittable[0] append(indentation) extend(first) if not splittable: return result pos = indent + count(first) indentation += ' ' indent += 4 if indent >= maxline / 2: maxline = maxline / 2 + indent for sg, nsg in zip(splittable, unsplittable[1:]): if sg: # If we already have stuff on the line and even # the very first item won't fit, start a new line if pos > indent and pos + len(sg[0]) > maxline: append('\n') append(indentation) pos = indent # Dump lines out of the splittable group # until the entire thing fits csg = count(sg) while pos + csg > maxline: ready, sg = split_group(sg, pos, maxline) if ready[-1].endswith(' '): ready[-1] = ready[-1][:-1] extend(ready) append('\n') append(indentation) pos = indent csg = count(sg) # Dump the remainder of the splittable group if sg: extend(sg) pos += csg # Dump the unsplittable group, optionally # preceded by a linefeed. cnsg = count(nsg) if pos > indent and pos + cnsg > maxline: append('\n') append(indentation) pos = indent extend(nsg) pos += cnsg def split_group(source, pos, maxline): """ Split a group into two subgroups. The first will be appended to the current line, the second will start the new line. Note that the first group must always contain at least one item. The original group may be destroyed. """ first = [] source.reverse() while source: tok = source.pop() first.append(tok) pos += len(tok) if source: tok = source[-1] allowed = (maxline + 1) if tok.endswith(' ') else (maxline - 4) if pos + len(tok) > allowed: break source.reverse() return first, source begin_delim = set('([{') end_delim = set(')]}') end_delim.add('):') def delimiter_groups(line, begin_delim=begin_delim, end_delim=end_delim): """Split a line into alternating groups. The first group cannot have a line feed inserted, the next one can, etc. 
""" text = [] line = iter(line) while True: # First build and yield an unsplittable group for item in line: text.append(item) if item in begin_delim: break if not text: break yield text # Now build and yield a splittable group level = 0 text = [] for item in line: if item in begin_delim: level += 1 elif item in end_delim: level -= 1 if level < 0: yield text text = [item] break text.append(item) else: assert not text, text break statements = set(['del ', 'return', 'yield ', 'if ', 'while ']) def add_parens(line, maxline, indent, statements=statements, count=count): """Attempt to add parentheses around the line in order to make it splittable. """ if line[0] in statements: index = 1 if not line[0].endswith(' '): index = 2 assert line[1] == ' ' line.insert(index, '(') if line[-1] == ':': line.insert(-1, ')') else: line.append(')') # That was the easy stuff. Now for assignments. groups = list(get_assign_groups(line)) if len(groups) == 1: # So sad, too bad return line counts = list(count(x) for x in groups) didwrap = False # If the LHS is large, wrap it first if sum(counts[:-1]) >= maxline - indent - 4: for group in groups[:-1]: didwrap = False # Only want to know about last group if len(group) > 1: group.insert(0, '(') group.insert(-1, ')') didwrap = True # Might not need to wrap the RHS if wrapped the LHS if not didwrap or counts[-1] > maxline - indent - 10: groups[-1].insert(0, '(') groups[-1].append(')') return [item for group in groups for item in group] # Assignment operators ops = list('|^&+-*/%@~') + '<< >> // **'.split() + [''] ops = set(' %s= ' % x for x in ops) def get_assign_groups(line, ops=ops): """ Split a line into groups by assignment (including augmented assignment) """ group = [] for item in line: group.append(item) if item in ops: yield group group = [] yield group astor-0.8.1/astor/VERSION0000644000076500000240000000000613573573656016660 0ustar berkerpeksagstaff000000000000000.8.1 astor-0.8.1/astor/string_repr.py0000644000076500000240000000554513470303666020521 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2015 Patrick Maupin Pretty-print strings for the decompiler We either return the repr() of the string, or try to format it as a triple-quoted string. This is a lot harder than you would think. This has lots of Python 2 / Python 3 ugliness. """ import re try: special_unicode = unicode except NameError: class special_unicode(object): pass try: basestring = basestring except NameError: basestring = str def _properly_indented(s, line_indent): mylist = s.split('\n')[1:] mylist = [x.rstrip() for x in mylist] mylist = [x for x in mylist if x] if not s: return False counts = [(len(x) - len(x.lstrip())) for x in mylist] return counts and min(counts) >= line_indent mysplit = re.compile(r'(\\|\"\"\"|\"$)').split replacements = {'\\': '\\\\', '"""': '""\\"', '"': '\\"'} def _prep_triple_quotes(s, mysplit=mysplit, replacements=replacements): """ Split the string up and force-feed some replacements to make sure it will round-trip OK """ s = mysplit(s) s[1::2] = (replacements[x] for x in s[1::2]) return ''.join(s) def string_triplequote_repr(s): """Return string's python representation in triple quotes. """ return '"""%s"""' % _prep_triple_quotes(s) def pretty_string(s, embedded, current_line, uni_lit=False, min_trip_str=20, max_line=100): """There are a lot of reasons why we might not want to or be able to return a triple-quoted string. 
We can always punt back to the default normal string. """ default = repr(s) # Punt on abnormal strings if (isinstance(s, special_unicode) or not isinstance(s, basestring)): return default if uni_lit and isinstance(s, bytes): return 'b' + default len_s = len(default) if current_line.strip(): len_current = len(current_line) second_line_start = s.find('\n') + 1 if embedded > 1 and not second_line_start: return default if len_s < min_trip_str: return default line_indent = len_current - len(current_line.lstrip()) # Could be on a line by itself... if embedded and not second_line_start: return default total_len = len_current + len_s if total_len < max_line and not _properly_indented(s, line_indent): return default fancy = string_triplequote_repr(s) # Sometimes this doesn't work. One reason is that # the AST has no understanding of whether \r\n was # entered that way in the string or was a cr/lf in the # file. So we punt just so we can round-trip properly. try: if eval(fancy) == s and '\r' not in fancy: return fancy except Exception: pass return default astor-0.8.1/astor/op_util.py0000644000076500000240000000616713573172164017640 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright (c) 2015 Patrick Maupin This module provides data and functions for mapping AST nodes to symbols and precedences. """ import ast op_data = """ GeneratorExp 1 Assign 1 AnnAssign 1 AugAssign 0 Expr 0 Yield 1 YieldFrom 0 If 1 For 0 AsyncFor 0 While 0 Return 1 Slice 1 Subscript 0 Index 1 ExtSlice 1 comprehension_target 1 Tuple 0 FormattedValue 0 Comma 1 NamedExpr 1 Assert 0 Raise 0 call_one_arg 1 Lambda 1 IfExp 0 comprehension 1 Or or 1 And and 1 Not not 1 Eq == 1 Gt > 0 GtE >= 0 In in 0 Is is 0 NotEq != 0 Lt < 0 LtE <= 0 NotIn not in 0 IsNot is not 0 BitOr | 1 BitXor ^ 1 BitAnd & 1 LShift << 1 RShift >> 0 Add + 1 Sub - 0 Mult * 1 Div / 0 Mod % 0 FloorDiv // 0 MatMult @ 0 PowRHS 1 Invert ~ 1 UAdd + 0 USub - 0 Pow ** 1 Await 1 Num 1 Constant 1 """ op_data = [x.split() for x in op_data.splitlines()] op_data = [[x[0], ' '.join(x[1:-1]), int(x[-1])] for x in op_data if x] for index in range(1, len(op_data)): op_data[index][2] *= 2 op_data[index][2] += op_data[index - 1][2] precedence_data = dict((getattr(ast, x, None), z) for x, y, z in op_data) symbol_data = dict((getattr(ast, x, None), y) for x, y, z in op_data) def get_op_symbol(obj, fmt='%s', symbol_data=symbol_data, type=type): """Given an AST node object, returns a string containing the symbol. """ return fmt % symbol_data[type(obj)] def get_op_precedence(obj, precedence_data=precedence_data, type=type): """Given an AST node object, returns the precedence. """ return precedence_data[type(obj)] class Precedence(object): vars().update((x, z) for x, y, z in op_data) highest = max(z for x, y, z in op_data) + 2 astor-0.8.1/astor/node_util.py0000644000076500000240000001461613205364752020143 0ustar berkerpeksagstaff00000000000000# -*- coding: utf-8 -*- """ Part of the astor library for Python AST manipulation. License: 3-clause BSD Copyright 2012-2015 (c) Patrick Maupin Copyright 2013-2015 (c) Berker Peksag Utilities for node (and, by extension, tree) manipulation. For a whole-tree approach, see the treewalk submodule. """ import ast import itertools try: zip_longest = itertools.zip_longest except AttributeError: zip_longest = itertools.izip_longest class NonExistent(object): """This is not the class you are looking for. 
""" pass def iter_node(node, name='', unknown=None, # Runtime optimization list=list, getattr=getattr, isinstance=isinstance, enumerate=enumerate, missing=NonExistent): """Iterates over an object: - If the object has a _fields attribute, it gets attributes in the order of this and returns name, value pairs. - Otherwise, if the object is a list instance, it returns name, value pairs for each item in the list, where the name is passed into this function (defaults to blank). - Can update an unknown set with information about attributes that do not exist in fields. """ fields = getattr(node, '_fields', None) if fields is not None: for name in fields: value = getattr(node, name, missing) if value is not missing: yield value, name if unknown is not None: unknown.update(set(vars(node)) - set(fields)) elif isinstance(node, list): for value in node: yield value, name def dump_tree(node, name=None, initial_indent='', indentation=' ', maxline=120, maxmerged=80, # Runtime optimization iter_node=iter_node, special=ast.AST, list=list, isinstance=isinstance, type=type, len=len): """Dumps an AST or similar structure: - Pretty-prints with indentation - Doesn't print line/column/ctx info """ def dump(node, name=None, indent=''): level = indent + indentation name = name and name + '=' or '' values = list(iter_node(node)) if isinstance(node, list): prefix, suffix = '%s[' % name, ']' elif values: prefix, suffix = '%s%s(' % (name, type(node).__name__), ')' elif isinstance(node, special): prefix, suffix = name + type(node).__name__, '' else: return '%s%s' % (name, repr(node)) node = [dump(a, b, level) for a, b in values if b != 'ctx'] oneline = '%s%s%s' % (prefix, ', '.join(node), suffix) if len(oneline) + len(indent) < maxline: return '%s' % oneline if node and len(prefix) + len(node[0]) < maxmerged: prefix = '%s%s,' % (prefix, node.pop(0)) node = (',\n%s' % level).join(node).lstrip() return '%s\n%s%s%s' % (prefix, level, node, suffix) return dump(node, name, initial_indent) def strip_tree(node, # Runtime optimization iter_node=iter_node, special=ast.AST, list=list, isinstance=isinstance, type=type, len=len): """Strips an AST by removing all attributes not in _fields. Returns a set of the names of all attributes stripped. This canonicalizes two trees for comparison purposes. """ stripped = set() def strip(node, indent): unknown = set() leaf = True for subnode, _ in iter_node(node, unknown=unknown): leaf = False strip(subnode, indent + ' ') if leaf: if isinstance(node, special): unknown = set(vars(node)) stripped.update(unknown) for name in unknown: delattr(node, name) if hasattr(node, 'ctx'): delattr(node, 'ctx') if 'ctx' in node._fields: mylist = list(node._fields) mylist.remove('ctx') node._fields = mylist strip(node, '') return stripped class ExplicitNodeVisitor(ast.NodeVisitor): """This expands on the ast module's NodeVisitor class to remove any implicit visits. """ def abort_visit(node): # XXX: self? msg = 'No defined handler for node of type %s' raise AttributeError(msg % node.__class__.__name__) def visit(self, node, abort=abort_visit): """Visit a node.""" method = 'visit_' + node.__class__.__name__ visitor = getattr(self, method, abort) return visitor(node) def allow_ast_comparison(): """This ugly little monkey-patcher adds in a helper class to all the AST node types. This helper class allows eq/ne comparisons to work, so that entire trees can be easily compared by Python's comparison machinery. Used by the anti8 functions to compare old and new ASTs. Could also be used by the test library. 
""" class CompareHelper(object): def __eq__(self, other): return type(self) == type(other) and vars(self) == vars(other) def __ne__(self, other): return type(self) != type(other) or vars(self) != vars(other) for item in vars(ast).values(): if type(item) != type: continue if issubclass(item, ast.AST): try: item.__bases__ = tuple(list(item.__bases__) + [CompareHelper]) except TypeError: pass def fast_compare(tree1, tree2): """ This is optimized to compare two AST trees for equality. It makes several assumptions that are currently true for AST trees used by rtrip, and it doesn't examine the _attributes. """ geta = ast.AST.__getattribute__ work = [(tree1, tree2)] pop = work.pop extend = work.extend # TypeError in cPython, AttributeError in PyPy exception = TypeError, AttributeError zipl = zip_longest type_ = type list_ = list while work: n1, n2 = pop() try: f1 = geta(n1, '_fields') f2 = geta(n2, '_fields') except exception: if type_(n1) is list_: extend(zipl(n1, n2)) continue if n1 == n2: continue return False else: f1 = [x for x in f1 if x != 'ctx'] if f1 != [x for x in f2 if x != 'ctx']: return False extend((geta(n1, fname), geta(n2, fname)) for fname in f1) return True astor-0.8.1/astor/codegen.py0000644000076500000240000000031413205364752017553 0ustar berkerpeksagstaff00000000000000import warnings from .code_gen import * # NOQA warnings.warn( 'astor.codegen module is deprecated. Please import ' 'astor.code_gen module instead.', DeprecationWarning, stacklevel=2 ) astor-0.8.1/setup.py0000644000076500000240000000004613573172164016163 0ustar berkerpeksagstaff00000000000000from setuptools import setup setup() astor-0.8.1/astor.egg-info/0000755000076500000240000000000013573574747017310 5ustar berkerpeksagstaff00000000000000astor-0.8.1/astor.egg-info/PKG-INFO0000644000076500000240000001123613573574747020410 0ustar berkerpeksagstaff00000000000000Metadata-Version: 1.2 Name: astor Version: 0.8.1 Summary: Read/rewrite/write Python ASTs Home-page: https://github.com/berkerpeksag/astor Author: Patrick Maupin Author-email: pmaupin@gmail.com License: BSD-3-Clause Description: ============================= astor -- AST observe/rewrite ============================= :PyPI: https://pypi.org/project/astor/ :Documentation: https://astor.readthedocs.io :Source: https://github.com/berkerpeksag/astor :License: 3-clause BSD :Build status: .. image:: https://secure.travis-ci.org/berkerpeksag/astor.svg :alt: Travis CI :target: https://travis-ci.org/berkerpeksag/astor/ astor is designed to allow easy manipulation of Python source via the AST. There are some other similar libraries, but astor focuses on the following areas: - Round-trip an AST back to Python [1]_: - Modified AST doesn't need linenumbers, ctx, etc. or otherwise be directly compileable for the round-trip to work. - Easy to read generated code as, well, code - Can round-trip two different source trees to compare for functional differences, using the astor.rtrip tool (for example, after PEP8 edits). - Dump pretty-printing of AST - Harder to read than round-tripped code, but more accurate to figure out what is going on. - Easier to read than dump from built-in AST module - Non-recursive treewalk - Sometimes you want a recursive treewalk (and astor supports that, starting at any node on the tree), but sometimes you don't need to do that. 
astor doesn't require you to explicitly visit sub-nodes unless you want to: - You can add code that executes before a node's children are visited, and/or - You can add code that executes after a node's children are visited, and/or - You can add code that executes and keeps the node's children from being visited (and optionally visit them yourself via a recursive call) - Write functions to access the tree based on object names and/or attribute names - Enjoy easy access to parent node(s) for tree rewriting .. [1] The decompilation back to Python is based on code originally written by Armin Ronacher. Armin's code was well-structured, but failed on some obscure corner cases of the Python language (and even more corner cases when the AST changed on different versions of Python), and its output arguably had cosmetic issues -- for example, it produced parentheses even in some cases where they were not needed, to avoid having to reason about precedence. Other derivatives of Armin's code are floating around, and typically have fixes for a few corner cases that happened to be noticed by the maintainers, but most of them have not been tested as thoroughly as astor. One exception may be the version of codegen `maintained at github by CensoredUsername`__. This has been tested to work properly on Python 2.7 using astor's test suite, and, as it is a single source file, it may be easier to drop into some applications that do not require astor's other features or Python 3.x compatibility. __ https://github.com/CensoredUsername/codegen Keywords: ast,codegen,PEP 8 Platform: Independent Classifier: Development Status :: 5 - Production/Stable Classifier: Environment :: Console Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: Implementation Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Programming Language :: Python :: Implementation :: PyPy Classifier: Topic :: Software Development :: Code Generators Classifier: Topic :: Software Development :: Compilers Requires-Python: !=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7 astor-0.8.1/astor.egg-info/zip-safe0000644000076500000240000000000113457636270020727 0ustar berkerpeksagstaff00000000000000 astor-0.8.1/astor.egg-info/SOURCES.txt0000644000076500000240000000114113573574747021171 0ustar berkerpeksagstaff00000000000000AUTHORS LICENSE MANIFEST.in README.rst setup.cfg setup.py setuputils.py astor/VERSION astor/__init__.py astor/code_gen.py astor/codegen.py astor/file_util.py astor/node_util.py astor/op_util.py astor/rtrip.py astor/source_repr.py astor/string_repr.py astor/tree_walk.py astor.egg-info/PKG-INFO astor.egg-info/SOURCES.txt astor.egg-info/dependency_links.txt astor.egg-info/top_level.txt astor.egg-info/zip-safe tests/__init__.py tests/build_expressions.py tests/check_astunparse.py tests/check_expressions.py tests/support.py tests/test_code_gen.py tests/test_misc.py tests/test_optional.py 
tests/test_rtrip.pyastor-0.8.1/astor.egg-info/top_level.txt0000644000076500000240000000000613573574747022036 0ustar berkerpeksagstaff00000000000000astor astor-0.8.1/astor.egg-info/dependency_links.txt0000644000076500000240000000000113573574747023356 0ustar berkerpeksagstaff00000000000000 astor-0.8.1/setup.cfg0000644000076500000240000000267413573574747016320 0ustar berkerpeksagstaff00000000000000[metadata] name = astor description = Read/rewrite/write Python ASTs long_description = file:README.rst version = file:astor/VERSION author = Patrick Maupin author_email = pmaupin@gmail.com platforms = Independent url = https://github.com/berkerpeksag/astor license = BSD-3-Clause keywords = ast, codegen, PEP 8 classifiers = Development Status :: 5 - Production/Stable Environment :: Console Intended Audience :: Developers License :: OSI Approved :: BSD License Operating System :: OS Independent Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.4 Programming Language :: Python :: 3.5 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: Implementation Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy Topic :: Software Development :: Code Generators Topic :: Software Development :: Compilers [options] zip_safe = True include_package_data = True packages = find: python_requires = >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.* tests_requires = ["nose", "astunparse"] test_suite = nose.collector [options.packages.find] exclude = tests [bdist_wheel] universal = 1 [build-system] requires = ['setuptools', 'wheel'] [egg_info] tag_build = tag_date = 0 astor-0.8.1/setuputils.py0000644000076500000240000000032013573172164017237 0ustar berkerpeksagstaff00000000000000import codecs import os.path def read(*parts): file_path = os.path.join(os.path.dirname(__file__), *parts) with codecs.open(file_path, 'r') as fobj: content = fobj.read() return content astor-0.8.1/README.rst0000644000076500000240000000555213470272551016144 0ustar berkerpeksagstaff00000000000000============================= astor -- AST observe/rewrite ============================= :PyPI: https://pypi.org/project/astor/ :Documentation: https://astor.readthedocs.io :Source: https://github.com/berkerpeksag/astor :License: 3-clause BSD :Build status: .. image:: https://secure.travis-ci.org/berkerpeksag/astor.svg :alt: Travis CI :target: https://travis-ci.org/berkerpeksag/astor/ astor is designed to allow easy manipulation of Python source via the AST. There are some other similar libraries, but astor focuses on the following areas: - Round-trip an AST back to Python [1]_: - Modified AST doesn't need linenumbers, ctx, etc. or otherwise be directly compileable for the round-trip to work. - Easy to read generated code as, well, code - Can round-trip two different source trees to compare for functional differences, using the astor.rtrip tool (for example, after PEP8 edits). - Dump pretty-printing of AST - Harder to read than round-tripped code, but more accurate to figure out what is going on. - Easier to read than dump from built-in AST module - Non-recursive treewalk - Sometimes you want a recursive treewalk (and astor supports that, starting at any node on the tree), but sometimes you don't need to do that. 
astor doesn't require you to explicitly visit sub-nodes unless you want to: - You can add code that executes before a node's children are visited, and/or - You can add code that executes after a node's children are visited, and/or - You can add code that executes and keeps the node's children from being visited (and optionally visit them yourself via a recursive call) - Write functions to access the tree based on object names and/or attribute names - Enjoy easy access to parent node(s) for tree rewriting .. [1] The decompilation back to Python is based on code originally written by Armin Ronacher. Armin's code was well-structured, but failed on some obscure corner cases of the Python language (and even more corner cases when the AST changed on different versions of Python), and its output arguably had cosmetic issues -- for example, it produced parentheses even in some cases where they were not needed, to avoid having to reason about precedence. Other derivatives of Armin's code are floating around, and typically have fixes for a few corner cases that happened to be noticed by the maintainers, but most of them have not been tested as thoroughly as astor. One exception may be the version of codegen `maintained at github by CensoredUsername`__. This has been tested to work properly on Python 2.7 using astor's test suite, and, as it is a single source file, it may be easier to drop into some applications that do not require astor's other features or Python 3.x compatibility. __ https://github.com/CensoredUsername/codegen
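
A quick sketch of the round-trip described above (a minimal example, not a
full tour of the API)::

    import ast

    import astor

    tree = ast.parse('x = [1, 2, 3]')
    print(astor.to_source(tree))   # x = [1, 2, 3]
    print(astor.dump_tree(tree))   # pretty-printed AST, without line/ctx noise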