jedi-0.11.1/0000775000175000017500000000000013214571377012401 5ustar davedave00000000000000jedi-0.11.1/PKG-INFO0000664000175000017500000003352613214571377013507 0ustar davedave00000000000000Metadata-Version: 1.1 Name: jedi Version: 0.11.1 Summary: An autocompletion tool for Python that can be used for text editors. Home-page: https://github.com/davidhalter/jedi Author: David Halter Author-email: davidhalter88@gmail.com License: MIT Description: ################################################################### Jedi - an awesome autocompletion/static analysis library for Python ################################################################### .. image:: https://secure.travis-ci.org/davidhalter/jedi.png?branch=master :target: http://travis-ci.org/davidhalter/jedi :alt: Travis-CI build status .. image:: https://coveralls.io/repos/davidhalter/jedi/badge.png?branch=master :target: https://coveralls.io/r/davidhalter/jedi :alt: Coverage Status *If you have specific questions, please add an issue or ask on* `stackoverflow `_ *with the label* ``python-jedi``. Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its historic focus is autocompletion, but does static analysis for now as well. Jedi is fast and is very well tested. It understands Python on a deeper level than all other static analysis frameworks for Python. Jedi has support for two different goto functions. It's possible to search for related names and to list all names in a Python file and infer them. Jedi understands docstrings and you can use Jedi autocompletion in your REPL as well. Jedi uses a very simple API to connect with IDE's. There's a reference implementation as a `VIM-Plugin `_, which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs. It's really easy. Jedi can currently be used with the following editors/projects: - Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_) - Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_) - Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3]) - TextMate_ (Not sure if it's actually working) - Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof `_] - Atom_ (autocomplete-python-jedi_) - SourceLair_ - `GNOME Builder`_ (with support for GObject Introspection) - `Visual Studio Code`_ (via `Python Extension `_) - Gedit (gedi_) - wdb_ - Web Debugger - `Eric IDE`_ (Available as a plugin) - `Ipython 6.0.0+ `_ and many more! Here are some pictures taken from jedi-vim_: .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_complete.png Completion for almost anything (Ctrl+Space). .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_function.png Display of function/class bodies, docstrings. .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_pydoc.png Pydoc support (Shift+k). There is also support for goto and renaming. Get the latest version from `github `_ (master branch should always be kind of stable/working). Docs are available at `https://jedi.readthedocs.org/en/latest/ `_. Pull requests with documentation enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic versioning `_. Installation ============ pip install jedi Note: This just installs the Jedi library, not the editor plugins. For information about how to make it work with your editor, refer to the corresponding documentation. You don't want to use ``pip``? Please refer to the `manual `_. Feature Support and Caveats =========================== Jedi really understands your Python code. For a comprehensive list what Jedi understands, see: `Features `_. A list of caveats can be found on the same page. You can run Jedi on cPython 2.6, 2.7, 3.3, 3.4 or 3.5 but it should also understand/parse code older than those versions. Tips on how to use Jedi efficiently can be found `here `_. API --- You can find the documentation for the `API here `_. Autocompletion / Goto / Pydoc ----------------------------- Please check the API for a good explanation. There are the following commands: - ``jedi.Script.goto_assignments`` - ``jedi.Script.completions`` - ``jedi.Script.usages`` The returned objects are very powerful and really all you might need. Autocompletion in your REPL (IPython, etc.) ------------------------------------------- Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion in IPython is therefore possible without additional configuration. It's possible to have Jedi autocompletion in REPL modes - `example video `_. This means that in Python you can enable tab completion in a `REPL `_. Static Analysis / Linter ------------------------ To do all forms of static analysis, please try to use ``jedi.names``. It will return a list of names that you can use to infer types and so on. Linting is another thing that is going to be part of Jedi. For now you can try an alpha version ``python -m jedi linter``. The API might change though and it's still buggy. It's Jedi's goal to be smarter than classic linter and understand ``AttributeError`` and other code issues. Refactoring ----------- Jedi's parser would support refactoring, but there's no API to use it right now. If you're interested in helping out here, let me know. With the latest parser changes, it should be very easy to actually make it work. Development =========== There's a pretty good and extensive `development documentation `_. Testing ======= The test suite depends on ``tox`` and ``pytest``:: pip install tox pytest To run the tests for all supported Python versions:: tox If you want to test only a specific Python version (e.g. Python 2.7), it's as easy as :: tox -e py27 Tests are also run automatically on `Travis CI `_. For more detailed information visit the `testing documentation `_ Acknowledgements ================ - Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of other things. - Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :). - Guido van Rossum (@gvanrossum) for creating the parser generator pgen2 (originally used in lib2to3). .. _jedi-vim: https://github.com/davidhalter/jedi-vim .. _youcompleteme: http://valloric.github.io/YouCompleteMe/ .. _deoplete-jedi: https://github.com/zchee/deoplete-jedi .. _completor.vim: https://github.com/maralla/completor.vim .. _Jedi.el: https://github.com/tkf/emacs-jedi .. _company-mode: https://github.com/syohex/emacs-company-jedi .. _elpy: https://github.com/jorgenschaefer/elpy .. _anaconda-mode: https://github.com/proofit404/anaconda-mode .. _ycmd: https://github.com/abingham/emacs-ycmd .. _sublimejedi: https://github.com/srusskih/SublimeJEDI .. _anaconda: https://github.com/DamnWidget/anaconda .. _wdb: https://github.com/Kozea/wdb .. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle .. _Kate: http://kate-editor.org .. _Atom: https://atom.io/ .. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi .. _SourceLair: https://www.sourcelair.com .. _GNOME Builder: https://wiki.gnome.org/Apps/Builder .. _Visual Studio Code: https://code.visualstudio.com/ .. _gedi: https://github.com/isamert/gedi .. _Eric IDE: http://eric-ide.python-projects.org .. :changelog: Changelog --------- 0.11.0 (2017-09-20) +++++++++++++++++++ - Split Jedi's parser into a separate project called ``parso``. - Avoiding side effects in REPL completion. - Numpy docstring support should be much better. - Moved the `settings.*recursion*` away, they are no longer usable. 0.10.2 (2017-04-05) +++++++++++++++++++ - Python Packaging sucks. Some files were not included in 0.10.1. 0.10.1 (2017-04-05) +++++++++++++++++++ - Fixed a few very annoying bugs. - Prepared the parser to be factored out of Jedi. 0.10.0 (2017-02-03) +++++++++++++++++++ - Actual semantic completions for the complete Python syntax. - Basic type inference for ``yield from`` PEP 380. - PEP 484 support (most of the important features of it). Thanks Claude! (@reinhrst) - Added ``get_line_code`` to ``Definition`` and ``Completion`` objects. - Completely rewritten the type inference engine. - A new and better parser for (fast) parsing diffs of Python code. 0.9.0 (2015-04-10) ++++++++++++++++++ - The import logic has been rewritten to look more like Python's. There is now an ``Evaluator.modules`` import cache, which resembles ``sys.modules``. - Integrated the parser of 2to3. This will make refactoring possible. It will also be possible to check for error messages (like compiling an AST would give) in the future. - With the new parser, the evaluation also completely changed. It's now simpler and more readable. - Completely rewritten REPL completion. - Added ``jedi.names``, a command to do static analysis. Thanks to that sourcegraph guys for sponsoring this! - Alpha version of the linter. 0.8.1 (2014-07-23) +++++++++++++++++++ - Bugfix release, the last release forgot to include files that improve autocompletion for builtin libraries. Fixed. 0.8.0 (2014-05-05) +++++++++++++++++++ - Memory Consumption for compiled modules (e.g. builtins, sys) has been reduced drastically. Loading times are down as well (it takes basically as long as an import). - REPL completion is starting to become usable. - Various small API changes. Generally this release focuses on stability and refactoring of internal APIs. - Introducing operator precedence, which makes calculating correct Array indices and ``__getattr__`` strings possible. 0.7.0 (2013-08-09) ++++++++++++++++++ - Switched from LGPL to MIT license. - Added an Interpreter class to the API to make autocompletion in REPL possible. - Added autocompletion support for namespace packages. - Add sith.py, a new random testing method. 0.6.0 (2013-05-14) ++++++++++++++++++ - Much faster parser with builtin part caching. - A test suite, thanks @tkf. 0.5 versions (2012) +++++++++++++++++++ - Initial development. Keywords: python completion refactoring vim Platform: any Classifier: Development Status :: 4 - Beta Classifier: Environment :: Plugins Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE) Classifier: Topic :: Utilities jedi-0.11.1/requirements.txt0000664000175000017500000000001513214571123015646 0ustar davedave00000000000000parso==0.1.1 jedi-0.11.1/AUTHORS.txt0000664000175000017500000000325013214571123014254 0ustar davedave00000000000000Main Authors ============ David Halter (@davidhalter) Takafumi Arakaki (@tkf) Code Contributors ================= Danilo Bargen (@dbrgn) Laurens Van Houtven (@lvh) <_@lvh.cc> Aldo Stracquadanio (@Astrac) Jean-Louis Fuchs (@ganwell) tek (@tek) Yasha Borevich (@jjay) Aaron Griffin andviro (@andviro) Mike Gilbert (@floppym) Aaron Meurer (@asmeurer) Lubos Trilety Akinori Hattori (@hattya) srusskih (@srusskih) Steven Silvester (@blink1073) Colin Duquesnoy (@ColinDuquesnoy) Jorgen Schaefer (@jorgenschaefer) Fredrik Bergroth (@fbergroth) Mathias Fußenegger (@mfussenegger) Syohei Yoshida (@syohex) ppalucky (@ppalucky) immerrr (@immerrr) immerrr@gmail.com Albertas Agejevas (@alga) Savor d'Isavano (@KenetJervet) Phillip Berndt (@phillipberndt) Ian Lee (@IanLee1521) Farkhad Khatamov (@hatamov) Kevin Kelley (@kelleyk) Sid Shanker (@squidarth) Reinoud Elhorst (@reinhrst) Guido van Rossum (@gvanrossum) Dmytro Sadovnychyi (@sadovnychyi) Cristi Burcă (@scribu) bstaint (@bstaint) Mathias Rav (@Mortal) Daniel Fiterman (@dfit99) Simon Ruggier (@sruggier) Élie Gouzien (@ElieGouzien) Robin Roth (@robinro) Malte Plath (@langsamer) Note: (@user) means a github user name. jedi-0.11.1/pytest.ini0000664000175000017500000000073013214571123014417 0ustar davedave00000000000000[pytest] addopts = --doctest-modules # Ignore broken files in blackbox test directories norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis not_in_sys_path buildout_project sample_venvs init_extension_module simple_import # Activate `clean_jedi_cache` fixture for all tests. This should be # fine as long as we are using `clean_jedi_cache` as a session scoped # fixture. usefixtures = clean_jedi_cache jedi-0.11.1/docs/0000775000175000017500000000000013214571377013331 5ustar davedave00000000000000jedi-0.11.1/docs/Makefile0000664000175000017500000001266413214571123014767 0ustar davedave00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Jedi.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Jedi.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Jedi" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Jedi" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." jedi-0.11.1/docs/_templates/0000775000175000017500000000000013214571377015466 5ustar davedave00000000000000jedi-0.11.1/docs/_templates/ghbuttons.html0000664000175000017500000000034613214571123020361 0ustar davedave00000000000000

Github



jedi-0.11.1/docs/_templates/sidebarlogo.html0000664000175000017500000000021013214571123020624 0ustar davedave00000000000000 jedi-0.11.1/docs/docs/0000775000175000017500000000000013214571377014261 5ustar davedave00000000000000jedi-0.11.1/docs/docs/parser.rst0000664000175000017500000000077013214571123016300 0ustar davedave00000000000000.. _xxx: Parser Tree =========== Usage ----- .. automodule:: jedi.parser.python :members: :undoc-members: Parser Tree Base Class ---------------------- All nodes and leaves have these methods/properties: .. autoclass:: jedi.parser.tree.NodeOrLeaf :members: :undoc-members: Python Parser Tree ------------------ .. automodule:: jedi.parser.python.tree :members: :undoc-members: :show-inheritance: Utility ------- .. autofunction:: jedi.parser.tree.search_ancestor jedi-0.11.1/docs/docs/development.rst0000664000175000017500000001310713214571123017324 0ustar davedave00000000000000.. include:: ../global.rst Jedi Development ================ .. currentmodule:: jedi .. note:: This documentation is for Jedi developers who want to improve Jedi itself, but have no idea how Jedi works. If you want to use Jedi for your IDE, look at the `plugin api `_. Introduction ------------ This page tries to address the fundamental demand for documentation of the |jedi| internals. Understanding a dynamic language is a complex task. Especially because type inference in Python can be a very recursive task. Therefore |jedi| couldn't get rid of complexity. I know that **simple is better than complex**, but unfortunately it sometimes requires complex solutions to understand complex systems. Since most of the Jedi internals have been written by me (David Halter), this introduction will be written mostly by me, because no one else understands to the same level how Jedi works. Actually this is also the reason for exactly this part of the documentation. To make multiple people able to edit the Jedi core. In five chapters I'm trying to describe the internals of |jedi|: - :ref:`The Jedi Core ` - :ref:`Core Extensions ` - :ref:`Imports & Modules ` - :ref:`Caching & Recursions ` - :ref:`Helper modules ` .. note:: Testing is not documented here, you'll find that `right here `_. .. _core: The Jedi Core ------------- The core of Jedi consists of three parts: - :ref:`Parser ` - :ref:`Python code evaluation ` - :ref:`API ` Most people are probably interested in :ref:`code evaluation `, because that's where all the magic happens. I need to introduce the :ref:`parser ` first, because :mod:`jedi.evaluate` uses it extensively. .. _parser: Parser (parser/__init__.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.parser Parser Tree (parser/tree.py) ++++++++++++++++++++++++++++++++++++++++++++++++ .. automodule:: jedi.parser.tree Class inheritance diagram: .. inheritance-diagram:: Module Class Function Lambda Flow ForStmt Import ExprStmt Param Name CompFor :parts: 1 .. _evaluate: Evaluation of python code (evaluate/__init__.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate Evaluation Representation (evaluate/representation.py) ++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. automodule:: jedi.evaluate.representation .. inheritance-diagram:: jedi.evaluate.instance.TreeInstance jedi.evaluate.representation.ClassContext jedi.evaluate.representation.FunctionContext jedi.evaluate.representation.FunctionExecutionContext :parts: 1 .. _name_resolution: Name resolution (evaluate/finder.py) ++++++++++++++++++++++++++++++++++++ .. automodule:: jedi.evaluate.finder .. _dev-api: API (api.py and api_classes.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The API has been designed to be as easy to use as possible. The API documentation can be found `here `_. The API itself contains little code that needs to be mentioned here. Generally I'm trying to be conservative with the API. I'd rather not add new API features if they are not necessary, because it's much harder to deprecate stuff than to add it later. .. _core-extensions: Core Extensions --------------- Core Extensions is a summary of the following topics: - :ref:`Iterables & Dynamic Arrays ` - :ref:`Dynamic Parameters ` - :ref:`Diff Parser ` - :ref:`Docstrings ` - :ref:`Refactoring ` These topics are very important to understand what Jedi additionally does, but they could be removed from Jedi and Jedi would still work. But slower and without some features. .. _iterables: Iterables & Dynamic Arrays (evaluate/iterable.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To understand Python on a deeper level, |jedi| needs to understand some of the dynamic features of Python like lists that are filled after creation: .. automodule:: jedi.evaluate.iterable .. _dynamic: Parameter completion (evaluate/dynamic.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate.dynamic .. _diff-parser: Diff Parser (parser/diff.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.parser.python.diff .. _docstrings: Docstrings (evaluate/docstrings.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate.docstrings .. _refactoring: Refactoring (evaluate/refactoring.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.refactoring .. _imports-modules: Imports & Modules ------------------- - :ref:`Modules ` - :ref:`Builtin Modules ` - :ref:`Imports ` .. _builtin: Compiled Modules (evaluate/compiled.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate.compiled .. _imports: Imports (evaluate/imports.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate.imports .. _caching-recursions: Caching & Recursions -------------------- - :ref:`Caching ` - :ref:`Recursions ` .. _cache: Caching (cache.py) ~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.cache .. _recursion: Recursions (recursion.py) ~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.evaluate.recursion .. _dev-helpers: Helper Modules --------------- Most other modules are not really central to how Jedi works. They all contain relevant code, but you if you understand the modules above, you pretty much understand Jedi. Python 2/3 compatibility (_compatibility.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi._compatibility jedi-0.11.1/docs/docs/settings.rst0000664000175000017500000000011513214571123016635 0ustar davedave00000000000000.. include:: ../global.rst Settings ======== .. automodule:: jedi.settings jedi-0.11.1/docs/docs/static_analysis.rst0000664000175000017500000000350113214571123020171 0ustar davedave00000000000000 This file is the start of the documentation of how static analysis works. Below is a list of parser names that are used within nodes_to_execute. ------------ cared for: global_stmt exec_stmt # no priority assert_stmt if_stmt while_stmt for_stmt try_stmt (except_clause) with_stmt (with_item) (with_var) print_stmt del_stmt return_stmt raise_stmt yield_expr file_input funcdef param old_lambdef lambdef import_name import_from (import_as_name) (dotted_as_name) (import_as_names) (dotted_as_names) (dotted_name) classdef comp_for (comp_if) ? decorator ----------- add basic test or_test and_test not_test expr xor_expr and_expr shift_expr arith_expr term factor power atom comparison expr_stmt testlist testlist1 testlist_safe ----------- special care: # mostly depends on how we handle the other ones. testlist_star_expr # should probably just work with expr_stmt star_expr exprlist # just ignore? then names are just resolved. Strange anyway, bc expr is not really allowed in the list, typically. ----------- ignore: suite subscriptlist subscript simple_stmt ?? sliceop # can probably just be added. testlist_comp # prob ignore and care about it with atom. dictorsetmaker trailer decorators decorated # always execute function arguments? -> no problem with stars. # Also arglist and argument are different in different grammars. arglist argument ----------- remove: tname # only exists in current Jedi parser. REMOVE! tfpdef # python 2: tuple assignment; python 3: annotation vfpdef # reduced in python 3 and therefore not existing. tfplist # not in 3 vfplist # not in 3 --------- not existing with parser reductions. small_stmt import_stmt flow_stmt compound_stmt stmt pass_stmt break_stmt continue_stmt comp_op augassign old_test typedargslist # afaik becomes [param] varargslist # dito vname comp_iter test_nocond jedi-0.11.1/docs/docs/installation.rst0000664000175000017500000000507413214571123017507 0ustar davedave00000000000000.. include:: ../global.rst Installation and Configuration ============================== You can either include |jedi| as a submodule in your text editor plugin (like jedi-vim_ does by default), or you can install it systemwide. .. note:: This just installs the |jedi| library, not the :ref:`editor plugins `. For information about how to make it work with your editor, refer to the corresponding documentation. The preferred way ----------------- On any system you can install |jedi| directly from the Python package index using pip:: sudo pip install jedi If you want to install the current development version (master branch):: sudo pip install -e git://github.com/davidhalter/jedi.git#egg=jedi System-wide installation via a package manager ---------------------------------------------- Arch Linux ~~~~~~~~~~ You can install |jedi| directly from official Arch Linux packages: - `python-jedi `__ (Python 3) - `python2-jedi `__ (Python 2) The specified Python version just refers to the *runtime environment* for |jedi|. Use the Python 2 version if you're running vim (or whatever editor you use) under Python 2. Otherwise, use the Python 3 version. But whatever version you choose, both are able to complete both Python 2 and 3 *code*. (There is also a packaged version of the vim plugin available: `vim-jedi at Arch Linux `__.) Debian ~~~~~~ Debian packages are available in the `unstable repository `__. Others ~~~~~~ We are in the discussion of adding |jedi| to the Fedora repositories. Manual installation from a downloaded package --------------------------------------------- If you prefer not to use an automated package installer, you can `download `__ a current copy of |jedi| and install it manually. To install it, navigate to the directory containing `setup.py` on your console and type:: sudo python setup.py install Inclusion as a submodule ------------------------ If you use an editor plugin like jedi-vim_, you can simply include |jedi| as a git submodule of the plugin directory. Vim plugin managers like Vundle_ or Pathogen_ make it very easy to keep submodules up to date. .. _jedi-vim: https://github.com/davidhalter/jedi-vim .. _vundle: https://github.com/gmarik/vundle .. _pathogen: https://github.com/tpope/vim-pathogen jedi-0.11.1/docs/docs/plugin-api.rst0000664000175000017500000000407213214571123017050 0ustar davedave00000000000000.. include:: ../global.rst The Plugin API ============== .. currentmodule:: jedi Note: This documentation is for Plugin developers, who want to improve their editors/IDE autocompletion If you want to use |jedi|, you first need to ``import jedi``. You then have direct access to the :class:`.Script`. You can then call the functions documented here. These functions return :ref:`API classes `. Deprecations ------------ The deprecation process is as follows: 1. A deprecation is announced in the next major/minor release. 2. We wait either at least a year & at least two minor releases until we remove the deprecated functionality. API documentation ----------------- API Interface ~~~~~~~~~~~~~ .. automodule:: jedi.api :members: :undoc-members: Examples -------- Completions: .. sourcecode:: python >>> import jedi >>> source = '''import json; json.l''' >>> script = jedi.Script(source, 1, 19, '') >>> script >>> completions = script.completions() >>> completions [, ] >>> completions[1] >>> completions[1].complete 'oads' >>> completions[1].name 'loads' Definitions / Goto: .. sourcecode:: python >>> import jedi >>> source = '''def my_func(): ... print 'called' ... ... alias = my_func ... my_list = [1, None, alias] ... inception = my_list[2] ... ... inception()''' >>> script = jedi.Script(source, 8, 1, '') >>> >>> script.goto_assignments() [] >>> >>> script.goto_definitions() [] Related names: .. sourcecode:: python >>> import jedi >>> source = '''x = 3 ... if 1 == 2: ... x = 4 ... else: ... del x''' >>> script = jedi.Script(source, 5, 8, '') >>> rns = script.related_names() >>> rns [, ] >>> rns[0].start_pos (3, 4) >>> rns[0].is_keyword False >>> rns[0].text 'x' jedi-0.11.1/docs/docs/usage.rst0000664000175000017500000000555213214571123016113 0ustar davedave00000000000000.. include:: ../global.rst End User Usage ============== If you are a not an IDE Developer, the odds are that you just want to use |jedi| as a browser plugin or in the shell. Yes that's :ref:`also possible `! |jedi| is relatively young and can be used in a variety of Plugins and Software. If your Editor/IDE is not among them, recommend |jedi| to your IDE developers. .. _editor-plugins: Editor Plugins -------------- Vim: - jedi-vim_ - YouCompleteMe_ - deoplete-jedi_ Emacs: - Jedi.el_ - elpy_ - anaconda-mode_ Sublime Text 2/3: - SublimeJEDI_ (ST2 & ST3) - anaconda_ (only ST3) SynWrite: - SynJedi_ TextMate: - Textmate_ (Not sure if it's actually working) Kate: - Kate_ version 4.13+ `supports it natively `__, you have to enable it, though. Visual Studio Code: - `Python Extension`_ Atom: - autocomplete-python-jedi_ SourceLair: - SourceLair_ GNOME Builder: - `GNOME Builder`_ `supports it natively `__, and is enabled by default. Gedit: - gedi_ Eric IDE: - `Eric IDE`_ (Available as a plugin) Web Debugger: - wdb_ and many more! .. _repl-completion: Tab Completion in the Python Shell ---------------------------------- Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion in IPython is therefore possible without additional configuration. There are two different options how you can use Jedi autocompletion in your Python interpreter. One with your custom ``$HOME/.pythonrc.py`` file and one that uses ``PYTHONSTARTUP``. Using ``PYTHONSTARTUP`` ~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: jedi.api.replstartup Using a custom ``$HOME/.pythonrc.py`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autofunction:: jedi.utils.setup_readline .. _jedi-vim: https://github.com/davidhalter/jedi-vim .. _youcompleteme: http://valloric.github.io/YouCompleteMe/ .. _deoplete-jedi: https://github.com/zchee/deoplete-jedi .. _Jedi.el: https://github.com/tkf/emacs-jedi .. _elpy: https://github.com/jorgenschaefer/elpy .. _anaconda-mode: https://github.com/proofit404/anaconda-mode .. _sublimejedi: https://github.com/srusskih/SublimeJEDI .. _anaconda: https://github.com/DamnWidget/anaconda .. _SynJedi: http://uvviewsoft.com/synjedi/ .. _wdb: https://github.com/Kozea/wdb .. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle .. _kate: http://kate-editor.org/ .. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi .. _SourceLair: https://www.sourcelair.com .. _GNOME Builder: https://wiki.gnome.org/Apps/Builder/ .. _gedi: https://github.com/isamert/gedi .. _Eric IDE: http://eric-ide.python-projects.org .. _Python Extension: https://marketplace.visualstudio.com/items?itemName=donjayamanne.python jedi-0.11.1/docs/docs/testing.rst0000664000175000017500000000154113214571123016456 0ustar davedave00000000000000.. include:: ../global.rst Jedi Testing ============ The test suite depends on ``tox`` and ``pytest``:: pip install tox pytest To run the tests for all supported Python versions:: tox If you want to test only a specific Python version (e.g. Python 2.7), it's as easy as:: tox -e py27 Tests are also run automatically on `Travis CI `_. You want to add a test for |jedi|? Great! We love that. Normally you should write your tests as :ref:`Blackbox Tests `. Most tests would fit right in there. For specific API testing we're using simple unit tests, with a focus on a simple and readable testing structure. .. _blackbox: Blackbox Tests (run.py) ~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: test.run Refactoring Tests (refactor.py) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. automodule:: test.refactor jedi-0.11.1/docs/docs/features.rst0000664000175000017500000001777013214571123016632 0ustar davedave00000000000000.. include:: ../global.rst Features and Caveats ==================== Jedi obviously supports autocompletion. It's also possible to get it working in (:ref:`your REPL (IPython, etc.) `). Static analysis is also possible by using the command ``jedi.names``. The Jedi Linter is currently in an alpha version and can be tested by calling ``python -m jedi linter``. Jedi would in theory support refactoring, but we have never publicized it, because it's not production ready. If you're interested in helping out here, let me know. With the latest parser changes, it should be very easy to actually make it work. General Features ---------------- - python 2.6+ and 3.3+ support - ignores syntax errors and wrong indentation - can deal with complex module / function / class structures - virtualenv support - can infer function arguments from sphinx, epydoc and basic numpydoc docstrings, and PEP0484-style type hints (:ref:`type hinting `) Supported Python Features ------------------------- |jedi| supports many of the widely used Python features: - builtins - returns, yields, yield from - tuple assignments / array indexing / dictionary indexing / star unpacking - with-statement / exception handling - ``*args`` / ``**kwargs`` - decorators / lambdas / closures - generators / iterators - some descriptors: property / staticmethod / classmethod - some magic methods: ``__call__``, ``__iter__``, ``__next__``, ``__get__``, ``__getitem__``, ``__init__`` - ``list.append()``, ``set.add()``, ``list.extend()``, etc. - (nested) list comprehensions / ternary expressions - relative imports - ``getattr()`` / ``__getattr__`` / ``__getattribute__`` - function annotations (py3k feature, are ignored right now, but being parsed. I don't know what to do with them.) - class decorators (py3k feature, are being ignored too, until I find a use case, that doesn't work with |jedi|) - simple/usual ``sys.path`` modifications - ``isinstance`` checks for if/while/assert - namespace packages (includes ``pkgutil`` and ``pkg_resources`` namespaces) - Django / Flask / Buildout support Not Supported ------------- Not yet implemented: - manipulations of instances outside the instance variables without using methods - implicit namespace packages (Python 3.3+, `PEP 420 `_) Will probably never be implemented: - metaclasses (how could an auto-completion ever support this) - ``setattr()``, ``__import__()`` - writing to some dicts: ``globals()``, ``locals()``, ``object.__dict__`` - evaluating ``if`` / ``while`` / ``del`` Caveats ------- **Slow Performance** Importing ``numpy`` can be quite slow sometimes, as well as loading the builtins the first time. If you want to speed things up, you could write import hooks in |jedi|, which preload stuff. However, once loaded, this is not a problem anymore. The same is true for huge modules like ``PySide``, ``wx``, etc. **Security** Security is an important issue for |jedi|. Therefore no Python code is executed. As long as you write pure python, everything is evaluated statically. But: If you use builtin modules (``c_builtin``) there is no other option than to execute those modules. However: Execute isn't that critical (as e.g. in pythoncomplete, which used to execute *every* import!), because it means one import and no more. So basically the only dangerous thing is using the import itself. If your ``c_builtin`` uses some strange initializations, it might be dangerous. But if it does you're screwed anyways, because eventually you're going to execute your code, which executes the import. Recipes ------- Here are some tips on how to use |jedi| efficiently. .. _type-hinting: Type Hinting ~~~~~~~~~~~~ If |jedi| cannot detect the type of a function argument correctly (due to the dynamic nature of Python), you can help it by hinting the type using one of the following docstring/annotation syntax styles: **PEP-0484 style** https://www.python.org/dev/peps/pep-0484/ function annotations (python 3 only; python 2 function annotations with comments in planned but not yet implemented) :: def myfunction(node: ProgramNode, foo: str) -> None: """Do something with a ``node``. """ node.| # complete here assignment, for-loop and with-statement type hints (all python versions). Note that the type hints must be on the same line as the statement :: x = foo() # type: int x, y = 2, 3 # type: typing.Optional[int], typing.Union[int, str] # typing module is mostly supported for key, value in foo.items(): # type: str, Employee # note that Employee must be in scope pass with foo() as f: # type: int print(f + 3) Most of the features in PEP-0484 are supported including the typing module (for python < 3.5 you have to do ``pip install typing`` to use these), and forward references. Things that are missing (and this is not an exhaustive list; some of these are planned, others might be hard to implement and provide little worth): - annotating functions with comments: https://www.python.org/dev/peps/pep-0484/#suggested-syntax-for-python-2-7-and-straddling-code - understanding ``typing.cast()`` - stub files: https://www.python.org/dev/peps/pep-0484/#stub-files - ``typing.Callable`` - ``typing.TypeVar`` - User defined generic types: https://www.python.org/dev/peps/pep-0484/#user-defined-generic-types **Sphinx style** http://sphinx-doc.org/domains.html#info-field-lists :: def myfunction(node, foo): """Do something with a ``node``. :type node: ProgramNode :param str foo: foo parameter description """ node.| # complete here **Epydoc** http://epydoc.sourceforge.net/manual-fields.html :: def myfunction(node): """Do something with a ``node``. @type node: ProgramNode """ node.| # complete here **Numpydoc** https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt In order to support the numpydoc format, you need to install the `numpydoc `__ package. :: def foo(var1, var2, long_var_name='hi'): r"""A one-line summary that does not use variable names or the function name. ... Parameters ---------- var1 : array_like Array_like means all those objects -- lists, nested lists, etc. -- that can be converted to an array. We can also refer to variables like `var1`. var2 : int The type above can either refer to an actual Python type (e.g. ``int``), or describe the type of the variable in more detail, e.g. ``(N,) ndarray`` or ``array_like``. long_variable_name : {'hi', 'ho'}, optional Choices in brackets, default first when optional. ... """ var2.| # complete here A little history ---------------- The Star Wars Jedi are awesome. My Jedi software tries to imitate a little bit of the precognition the Jedi have. There's even an awesome `scene `_ of Monty Python Jedis :-). But actually the name hasn't so much to do with Star Wars. It's part of my second name. After I explained Guido van Rossum, how some parts of my auto-completion work, he said (we drank a beer or two): *"Oh, that worries me..."* When it's finished, I hope he'll like it :-) I actually started Jedi, because there were no good solutions available for VIM. Most auto-completions just didn't work well. The only good solution was PyCharm. But I like my good old VIM. Rope was never really intended to be an auto-completion (and also I really hate project folders for my Python scripts). It's more of a refactoring suite. So I decided to do my own version of a completion, which would execute non-dangerous code. But I soon realized, that this wouldn't work. So I built an extremely recursive thing which understands many of Python's key features. By the way, I really tried to program it as understandable as possible. But I think understanding it might need quite some time, because of its recursive nature. jedi-0.11.1/docs/docs/plugin-api-classes.rst0000664000175000017500000000023713214571123020502 0ustar davedave00000000000000.. include:: ../global.rst .. _plugin-api-classes: API Return Classes ------------------ .. automodule:: jedi.api.classes :members: :undoc-members: jedi-0.11.1/docs/conf.py0000664000175000017500000002176113214571123014624 0ustar davedave00000000000000# -*- coding: utf-8 -*- # # Jedi documentation build configuration file, created by # sphinx-quickstart on Wed Dec 26 00:11:34 2012. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os import datetime # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) sys.path.append(os.path.abspath('_themes')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.todo', 'sphinx.ext.intersphinx', 'sphinx.ext.inheritance_diagram'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Jedi' copyright = u'2012 - {today.year}, Jedi contributors'.format(today=datetime.date.today()) import jedi from jedi.utils import version_info # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = '.'.join(str(x) for x in version_info()[:2]) # The full version, including alpha/beta/rc tags. release = jedi.__version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'flask' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. html_theme_path = ['_themes'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = { '**': [ 'sidebarlogo.html', 'localtoc.html', #'relations.html', 'ghbuttons.html', #'sourcelink.html', #'searchbox.html' ] } # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Jedidoc' #html_style = 'default.css' # Force usage of default template on RTD # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Jedi.tex', u'Jedi Documentation', u'Jedi contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'jedi', u'Jedi Documentation', [u'Jedi contributors'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Jedi', u'Jedi Documentation', u'Jedi contributors', 'Jedi', 'Awesome Python autocompletion library.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # -- Options for todo module --------------------------------------------------- todo_include_todos = False # -- Options for autodoc module ------------------------------------------------ autoclass_content = 'both' autodoc_member_order = 'bysource' autodoc_default_flags = [] #autodoc_default_flags = ['members', 'undoc-members'] # -- Options for intersphinx module -------------------------------------------- intersphinx_mapping = { 'http://docs.python.org/': None, } def skip_deprecated(app, what, name, obj, skip, options): """ All attributes containing a deprecated note shouldn't be documented anymore. This makes it even clearer that they are not supported anymore. """ doc = obj.__doc__ return skip or doc and '.. deprecated::' in doc def setup(app): app.connect('autodoc-skip-member', skip_deprecated) jedi-0.11.1/docs/index.rst0000664000175000017500000000141513214571123015160 0ustar davedave00000000000000.. include global.rst Jedi - an awesome autocompletion/static analysis library for Python =================================================================== Release v\ |release|. (:doc:`Installation `) .. automodule:: jedi Autocompletion can look like this (e.g. VIM plugin): .. figure:: _screenshots/screenshot_complete.png .. _toc: Docs ---- .. toctree:: :maxdepth: 2 docs/usage docs/installation docs/features docs/plugin-api docs/plugin-api-classes docs/settings docs/development docs/testing .. _resources: Resources --------- - `Source Code on Github `_ - `Travis Testing `_ - `Python Package Index `_ jedi-0.11.1/docs/_screenshots/0000775000175000017500000000000013214571377016030 5ustar davedave00000000000000jedi-0.11.1/docs/_screenshots/screenshot_pydoc.png0000664000175000017500000005303113214571123022100 0ustar davedave00000000000000PNG  IHDR&%sRGBbKGD pHYs  tIME  30ﵞ IDATxwxlnz )"METPTlȵ]*^+6PQvQQJG@:(5@M?] |?(Μ2UZ;*h,ABaTm| ByOB BILB!@nDQq\t9Q̣+89D3?kKٞ-?61Њj,&-'N+S6 o_](/N٬ cBV8=A1gdWΚOx'i%_+]g@߇{}<'{R4yNPv ?4*bt5~"JӼFcI'g_aiT9Wv:RAU vZ 66ڍpPRq 0)(9FFbi{?9Pm2?NXO|PL<lWJncw>Enf3NBBJ ޿>#ٱo6$ /[ե8^Jw\4PA)mtg.-8SMՓuSUl; eN76Ui꽝t`3^ q`&XD–4v`K厇\Xr8VLڱak.Ԇ͗™/hEASӤdOz8T 7q#'OV:i~Zy5TCs4گK6`Ms|8yZC'N#6ķ4qp{U;rJ(|xLe<'vͣ`7!? g Q`eǏ1bpj(6zX1A>Z7O*g/O8~^`q⯩4 s61v#sRo99C3}:KE/S-[G<(WC_iAڏW>/\4@Ś`q5UbbM~__FzUIXra9$X6Ejrb~ڠ?>'u6PY $ 墁 ߿CLw@%0s}Kŭ+x>뿀d)O1x;^~/eB~h1]C 濟1\|)MRMA5MLMu8RA_Zt!.jO&`,uWђ,5̂,X1 亙:m] ƌZfKlԶ*ng!pu,/$C{;=Ƙ̸͋PM-lE!6]A/ Iw8iP\[ƴ~,$L<[ |KtY,W:<`>AֆL5*k3IzF<V&lf!R\7bQBSfݤxZ g)QVN)ֆV00t~7<&ڨ[O%& zBGemާs`ELLL<; H3zk()@މ"obu֏SZf;l7ëaGmWp:_ONnPK!qu-4(O7MrS1,?6Ńg|l_d95@qX+h!F ˘;ok4n~> #``WO{/ 5su'f[;5N}?/g# zshwAMTq񙤓mR&+J [E$t4Sp;6weB/TB Hp 8tq⫄(KۦOtW")K9nڱ%Ioɶ ⣄il>~QѶ!'Nb~Gter&њb U~ ڿ¹2 VJ&EA8:T7:\ERӰTpHGi`1 Mx/Q{czYg=>dbb;F-fd xWP jC5f1X!JEgcW?xԵ= {;9_12"~^/gO7 :7Wh~9kVE~@#_ǧZb"LddxlV}A/hIcq\t~|~2`Gř&K)`Qd)7Tוo|hOm-Xۥ[ǣj؂/cb83Yg/ 'B}Nh` `.? Ey4+EbQ(G+s_.qWoU3߿ʜܬ-xn|VS(6XklyA bL=/[TCAՏRQHU(2/R/XSU:5Ɔf(}T~SP UZRw"zٻ ~k=zTWB6*\ZIJjMN~&腍g`LRKşQW 2G1]X²m$#;DF,Pn;54cVP8U*V&Z#+-J؝/ 5tXЂM`GD޽VR4iATLfQ?J}ĤlOn~u+-nKw3w\;n@VxXj'^αp:IߤRf=zBUH-(@+7Pa LiN4.^bmKɴ2?4%2~HgEAmŠ7z/?=}Ha8+]Px;YW\g[IlxU t4IN.[ FcU,{Տ8is58QW_P Zr|EwƒngK<i|B|>/B׬áel]b?zרF }'l{Kn`x=&c;!gV 6J*Gym2)KШ^dPI.:h{ uX4Djm;ƹU 2ʋUs~lflM67*D.+=9ILZª S/'$μBD1C:;pvj)sWLdT}~S8 ȩ08Wj9j$#y~V(cgΩ\V*,1#N)+&3J\s OR9?X3#+&GVLVb+&|tK%1B!DUJB!B!$&B!D!"(|]X!]2=_^͵%2 2g !"<( qf1,BZZp Mb,B0Ebbl|wOH}S3QVzwQ'J,T.el? (e_zGcrȜtwɐ)yAB!NҊɱTNf`dY!HL3>*y WqVl]n!~M'r+GQiq_=nGq1Wt/Bo娜qo=iCʭt+ƪ8l(E!1zm]Xd!Y@ !d$&B!Q"wZB!B!$&B!D!B!BHb"B!l4yj&~@vbt϶f;5/ RʧQ -!W7HE"ZBպuώ3D;›?)S,fAi&%e.5V$~j?4?y_\D~bzMp;Wy!׼z+^ùg E!W.9_Po/jѶ_m^,c>>|l&y?,yV/J?49\UR&s< %2z~_BY19z!g9|MLdb$ӈlW#ILI^^v~ĽC㣍Y?tNQ!;cNe9ܽ{9V3ŢL2W3;ht f;rH&XNJ-,~ iPSSzR?R m˙vu?X^I{3f=NZGjۅ7G_u?2cjnx:ƣ#OVCU,wgc_c2oX%)\_Ȩ+xG7}:u ߭yKZ̩7n}郡V½L2[X+4>B?w+]4٣87VO7*?9WL2J7CPM {P|{COE5jozNJn QF5?˷_{F~L*/g~iϝ7g3}?z.cqGf\aOuK]ȽxaB_jh>OO ٌvcVLcrb ah9}ݽ,(b\X^Zi5&W_V`=dC?T/87)3zs]$c۷t:_(5nc?3֨9_{i00W{N >*c%,_M'!1Q,\ޜoNdj{ְ;͇8z-ʊ$zLEt7+?XA?8ԟ {sY|L zv0rM0<.'cz u̜Cl~]CQt`k^i-hB'dBL&Shi |ɶo-(qUYXC>|{g$J/8nT,U]mKܙ} fl̢ڧ+ =C0(bVRlȉG`363GQHhmcw_&6͵O) y11R&v,.?:⃔jmIS!^cщBCD+l2Jsj$w{m|&1jcxz^¡c 5 LURV*>a/Ek$]V>׵Wf@|qsVTW'&Sd?WK͐C>юNgUInN+t{ zվ_A!*!1Qc;{9RpՉc1FYF|'6:SuFo]vA!%*T9} [Gi9;:DYy~wLwa4I>HauԸVnT~;fR>fʗnC=k u[Сk=J_>jG Q+&6 IcK .Í tvK)y{Lf1_W}/+ V]I<~T/oIqC)=Tdh3eb8/6/dtMi3;j` lYȴEs1o̧Z?E(qW{Bwuvg6n6>k(g{| ? |Ī\Y})?z(`ѐ3OZǪ ]6[.VrCWNjw8q*r"ukgSFh\wlܾŠriѪJfq󂫹L|!WQ)+&U%Ò,Td(wẌ́vʤ. !8!*Qoݤ^̗bv\HBߩ+G! y>\!!BHb"BILB!NnbYluh !"|QpZZ>l;c6kR487wlI5Vwf99?y HBHbrki`aݥ 㤪*UIda^#O&p8޹_~bKY=.Z\Uyyu>KCB!$19$.]V?%'zLu*U6Sd͈4R/\Mհ+,Wn{ĞEH/ IDATp;Y\F!Meo}D'*!+U:݌q=bl3/UfLP-V!N2-t{0rA],_EIl{>kpbnAQĚgR!N .0''R(fW(eTi\nL5=4;M3O:E̡4>UumG2B2)GLBHb.q] <0.m>*79U@fθ"ztFrכ9;FgSu7GK~Dz+eInJho槌a'B!),Sժ`*Qj Hg|AʚzM^Šv19sRI"SV hɁtT!j%Nj:t~&ZC)JBi&K&(oŞ@OVN>SuFo]uiQuSqٵ;oL#:wA`wŒ!TWr/ yfmuY%8q%b^\@'xu6iΣo34i2O!TW Ϙ!B5Jz!BHb"B!B!$1B!D!!BHbr32w BBY)<=9vdΖޔg:IsBbo:7'4>sHBWkdޜX:v aMURF*eBow3.z/'W2z <绥 !8-ES}yr|&ab$`ZKoQI M(nSRd&2ܛa֣tfΖؓ9d'g]O +Kvv??b+qNYB!+&s^{@A*nphU/=xB%&&3)P陠r[7 Lg|AF aتѵnksf◼4!+~=gJIB!+&AS ƪ83[&g$5M7a0 uĥz-6jtexY9j uҭL99;Q9i`ъ^7N?{ ѢCZB!+&ǻt&eS$5NK&=mN:#49}*M`?S|vSG80?a!{% 0ԍQMjhI E_ tdQIRPh5 9TFVϤwXjw³)͹BHb^/'6+՟JgVCq)3,@i&K&(oC=9;rdWY_xgmZ:̰X({ @?(,XK@d{x9=~Šo9/%7-yq9McϬݗfom.='jLU(#IL?kpK_ *dUƞpT -MZL4|\0-vYlSk*Yگxn8  Щq vCΚoX;%_!}dqM[鷿(gCsAˁ !8KlK*#Ao_*&Xo种i2bՠ~c^-6٬d(jJk Zeq5π ΁) 3Gu{(qfoԝ x" h";cZNJy#`&;!UHt~XF;~Ɓn6ZL}e|Y' ;88=̲p]L s&/= s{U+>:F`>޳/]ً2B^ 3 n*NZt3ǐvHdo"!u΄9:'r)X|lQ*&gP en@gmV6=GK(OڜjN CݎHn{ 5&Z5OGz+26óh6k޴mr w}Ǹ8Bq%&F8Ja~CQ-i_=u^7wSe5V {~=Wʏ=Z ɱGCŦ28TFԝG7ncYxL;-E)QS^e;ѱEjHRɕLBn-=;2 ,t TRxOSYb8J5W GO}nĬua$Wxm~3U\ /bPXwWV5hgWaֺר:}IR"KL׺H+R0v0-Dލa38i )mx㛩='ax xylP3?a."-ҸMt},'g2n̢\ƀB.11 zΧbr/|(!&WL_:ׅBa^2q /ꄟY+XM5 VB?:z\. o⤥BY1B!Dv6 B!dD!!BHb"B*+*1Q]ZmЊx<2 !BSDc-N玳UpdH8:/=Cb,B0Eeę(7ٹσu`OR$B!8yE i>j*-1B!Dg{_,4h$1N7atB!NfbZZC9Gtq?Ժ.K|G!'-1qЧ;~%|L!ޘ8#F,Bl;k iq$49$B!OT~+!Nt|WG{`ŝW\[oT5m6]-Y%7烧INrG857xEݿ܋7<@{,.K+W-㧪S;U|EhyVbzr|N׫Aڑ '_0/$ϻ7ajݿ^ʖty+xSϩߪ?"osi'Y ٸ7scI.gPtu ,[~X#3vfc|XWSSzR?R m˙vuLΖ&lZw&~+ u|uL20KUo2z2fYӣ\5~2s3(] C0FתwfWXTIFj&}rFʯwhdc6ؿΏ ~ f;rH&XNJ-,~ i*}|G~<~d?;sc3> LVP{a/9g'Zਈ>͟ʲO6^9)sD[?(*vDQ <ۗSTsPtؽsx]z_L\ coi5&W_V`DeWK]HFݸgryFB8QqÐ5㶦~'ܺo=\ϡ]y<ʭFxƬ}JzKxf#K:;3뱁\Ҍs[G&W6c?aӮW>Q,\?i<2f3պ[3jYŒDoRgvIG&;䞃P:/W]6Cdنw4yUQVK&qPj2n2vˠ4%5lڿϿO}'JjoI=7wP䗍 y̚BSrm5+ɝ`l,1v1o,; }q {sY|L zvsIOTL|eۍЪ%[N~L9}&Gw#Wү#c{WRO 6~C/e~l/}wϧ=Q??G:~#n0yL^ꍪɀu z|РCԇ6NML [9 N'wqC-1M5x>?F(T}e(b/V1paɤq5HzQOP^v~%9oG4z1nkǑg9OEZ$jE;ڏ}գs|qo[}EϑP| )JxbpֳMUEPB7'm/܅> c]b;yQ2Ț2SᅆpWcxǖEaD+o62'+~o،p_HNBfհR ԋxn0x*}9|xvx#lz^ި(BLj%iPl뒤4^M?9x{pE21?dN~U"'k04hm)5peQi%0F~,Lf-FM4YYy~wLwa4iY (j?Â2>tKJLj=ZvS $3U\gf5i,?Oٹ[PυCBtZG(5/Vs8qQsgT{ ;{cn4%Ń$H緈7nb ?|㴈 gGC-ff`f|\<sˠ 1mQߝe(ŹwQɥ&sGe.U[|y!3WClŢRrym:V3Y 75Ge^>ק/Xu`_gU?}xKwg`˲;Tș/mĕ[ף[6.=Q!yecgZUl4\Xk4bR>ʚ9%{?tC/zF#>TԂ9*mWV_W$ƦQU|v17qXE ~2Qj=w< ͡-,A0O͟">Zz_F"A×O<>|^'͹ܿ"B?d|aS^Ƃ JkgƮtbk+g\1|s%e!ĩ"!B޵-wi^ *N5%VgdJR"iNVLBQeȯ !PLӔ $ BBdD!!BTij).LNwDó3[֥B!*511nMhS m}A7 `T6MIeӔjl~1yn?(@Vl$chz!xϮ5.MS}aSٙjs1E;Ës<8Tv&_:6ڻՄ6~5ޛ'xPwE7UovMh:fIF.{LT')IV4\wouU;+:(jo|_|:PP.#1/|u7 z rs ˜] ޻jZtBO%(45T{WRO 5)ӻkwhWu`K{Zũ. y)p7m0lX fۇk7N>Ek#h+{v{ISz޹q>^{5LoA#v:@&צjF7a\2!_IL4<ϋAsQJ02S~>ל7ȸu%opg5<$7rds~~&^X]ӮFx?@@i {ʱ6$%jToѨM=Zǡ֏+3PӨ噔oQY@?}W{؇ ]յiy M^»\v)Z?DqBRë|R;b!7\b AQO6su ۊm%+Θm__ŏt^.BDKf=Y)?i{?MƐyLL'^dnћ5<;ONpGq.q+'$zaW6ԬPDPņ4Q#Υ݀ 8լFj+-ZO).{6Eh11_%0 >D{_2=P%ܸŸ?E\8xǩ3 XfdOjpzG'y]߈?"|ӄ;F#ҟ0JRixeUIg_:l.l.("^`X(ʀ,WEE" . :l2aG.{  ת!pT$8K~ӧoA¨U^#,T Ns^UQ0D%zQ)$9Nٯ,zV(!#wIDAT=;E]d2V 0*nPO;_ˣh[=/_^_NɿΩwׯQsh % uC/5Q}6 =ȟ%n ,|֟vNK}x$ F|1ܚ$51c=PBM'8 pk>!>fJ) @) 43a$Ec _Fޑ\|D`T~HbГ pn=Ɉ_!vv* 'ѽ'+pRv%Ս#$ ^-Yw=?R̨T㞠T߂m8SDYz>>eKv;B!%%]q8Y;qg䭔 j`H|ޭKEk?A[s P<0o|S_c֚tgߺcSQN0wd%^_:%G3y"b"Dyk7%˸ (#9{t1@_:,N4gn|݆S=hS!Д6c1lnA5v a%֖-sb 5YGlZӥoV3" 8L%>s):E|!Dm`&soz6P0f5lL \0p/T+u:Y+% 'z.ؚ3jbxSws> 0>TG!crCD!o.,BZCiѪUwLI!B\5!B !B)L×G` D=4ߑJc/xA휿v)ErSQMs~d>dnSEwF ؎}$?BZtLL #6? 5Ksôi`. P$Sf3 qe3&T[UH=Gv| ac 䰖Ht~ seGQ _9,Ϳ4M:?a"?&9Sʰ$=F ]eW|j G7=lNͳmC/09JA,a~؄ʶ;Qy/ɞCPuϏ'jT=U!`{>(LuoL4P#:%nJ&L6X4}ϙQy{|2>RKrX 7|>;n'}A7oFB45edl؋=t)rWT /MezǙno˻iȘͥgzϙƓ[HV.ϣmU{ʶ*=HNsX63Ι!p@w??> rTa+SP{{~סcb4mWchc(oU)*:Wݣ5(lFp+ff@A8$u'_9?i(-ѳěJIɡ@ȻUc9T@+`5ًo~JtMo8䧰xvr옲K)8gt| 3ɪ}r|wvH~~VOgO-SnJCFd.o*s@PJŹ5xu<_߱&J`zJX~f3&u&{y  DYz7;# ku~5gTAvF%v6=n'dZHJW=,^ԃfސlvթbyS)9Qs'I)>ط=._Ǹ{) 2/q(S@,&c]-^uL,I-dnSEwF ؎KB\k115d72.΍Ӧ4@fL#(=RI̘T#&&1gK6deJғ LyR^Eb Su.aBQ _9,Ϳ4M:?a"?&9Sʰ$=FB.w4L-6ޅ#pX$f C^R\il;]/$_7'KI.pϲ%hSUԂZ0lr1]"CypF֟IJhom{q!gw٫g _].22GL1Qyp*2p%+*3.޹/0ySwngA+yu.~m힛y>M"X]=XuBT?cpOk~*J@#ww>F2M4EHNI S:V7h}_X0ڭĚG,{4,S`榡44R(@G:=@ǦIrǒ'SuiJů.XFʧS91D$e<(D5ap=J?EX ο Vq0.$?1a%W|58^ D#/|[:d̡_q~*މ=z&^9hǑ6hP &]8d뫨t ,8l/K>,Ԥ`j{@j r a)'`~,L OSEAi[:ufꁚ bbƃǯptÅKQq2~%:Z(F5#`XCDR HL˺Z5A/{/u+M&/nI}lpW}*GUBT{ǔPd|>|IL[Hz:YԦMuf2>q'3bAZ}Kcؽhlڗ͸]J x Xnױ:ma>K6rk4G3@Yʏ& .PƖ# ?aՠY6%ilKyk^UQϿ;co .Ҳr.v櫩M&jO&!O(-Zꣻz@")Bj:-!i>J~&q`ϳܡ!B\+1B!D! !B !B)LBQk?P@IENDB`jedi-0.11.1/docs/_screenshots/screenshot_function.png0000664000175000017500000011614313214571123022613 0ustar davedave00000000000000PNG  IHDR6sRGBbKGD pHYs  tIME  0&S IDATxy|L߳fUAPb5 RJ([]-4$b-X"L֙?(Z y^^W{sysG@ DL^@ xPExC  B~. W@ Ep@ (ߘu?ETQJ|5ߎbW~b]3Vmɱ0"缋pIK D1hTt?NyrOgl,k1R^'PWSNlss.御`:Df[^7b/1?_8߷n%KyyWߚky%|ar*0sSNʝ;lQVJ&_%,2s V$4mR"DE$![%W-sFQxfLMc؜p ecJ[' ߡN৬+\(@t7=u6$ۗLĜIS̒utp| :QV]}b?=Kԑt0ChV !QzPO{v({5A,_v΍q8ـև_bP*ҹqh%'.",.wm_W97:Pwt4ri&M^'?[>0f w6-犕,:v6{M1־?3#L8>9j Ge4KNR}J9 &x Vk$͜‘u ߕEYHzpgݸvq>E_xSW|wלb˷/])3io״/gՍ"rdv EQ  wQMf. Άb+r7`Xl;Dһ\.e?cR_Lj&p;GT=סն6&<&?nrViW3cWvM bZpuA?ӽTY|ẜ~-q>8gh?`ƒ=Iԋ4b$]k.nɼi57~п?ߟFڍ 7Hɰ?~cl4&OAaD6Tm1{LJ3+|8twUdr'k굤Dt=w3lVǽn W2YqH/;>`S'iMY6{>Yµ[|~}=$:gq~77s]IvO >jZOeOϑ0, v,2yQe>um:%uiN".6(}(2gQz 7صj5σhP&Ȱ;䷎QlJ 7YQ55LXҿGOV>D]9grkX=H|] AVp\9[FhCܱz@ A%toIlm rLt KCzI%ټvGhF  kb^]˞.6R3&zMZy !9>Qп{h޸Z!ʹcU{|*YS_n(&[{ӳIQlOVl+z RxP2 QĥީjXBKt))kݾ%L,Mf&O9CFqY5旂yygdbǏ@N̸y?)+ gO$$I^ %pp(Yg 8?LC1dJVTb=K*IR!{7jGwO,+tRC)p4nӛпֆtvP i?(吖xR qdB|pkW.>\ rD¹q-@ 0a)f(W(ȼPبX$rvJ*@\aI R1+fRiS ) үq,իoy{'zѫ#JZɐYI;]ύTEpR|?lI[M?KĖǪmݽ%>JBMSuIdjzj92Ҽ+S[9*YsVZJ>fЧp.qw M?ӹMoF/9BPg7Y@m0c֜!U|F ^bPo/aܷ^ҟu'nОǴ|!a7ձcU_{e={@]km&dXL1́Irŏqkr>vH ;U~;+}ʪGܻ6>r#,Ni\? ߖ#כ?#ʨ/k'c_Hm+iNΔfԺ86v'~^GȖ?CQ|DZqf^:&.(G%ްJU;Os r]"ef>D}l}JEYm9PEx0tme.g^]iD=Tz5ݬK/313uFoCI3b|,nFsJDPR? dʬz日}4g ' %b_UM\1wT^nAޙU8up1ܡ"\.|׮Vʪa:n: =5%ZezkL|([ҸȝZbp>7+$@` 13X+u[OJ{H^Kxy3v <ݽAl|y99rN`ĉL ̠(@ bP@ P@ @@ Q @ $, :.|9(**a0>o%s=S8Z 0bAo[#Ig1tj{}Zm:۫ @ ^  DͻwL6Gz[^g%jm@ u*u}dYI8 J M) )AC})@ N,6LAa2S_ײ@ ^bP&ofR!KOc|X'H@ }1(Sip)S#S8ie˱V @  )̟w` oT-+un@ (LrP1(@ x%A+@#A@ Q @ D1(@ Š@ GPbP @75>c9ʬ}eť´~^'qǐk,>KaWQp _cSA߁%Y{ R(mmz"pƘ]H&RMؕMLVW%\1}| k9Y?)? Az'3| g]r^}\g-:NԳD@щg>~( KY288/ M^nW(CZNwG<iϰfWZx\qmfTsSۗОL͵G_Š+.Z S@&EϨ V}z`NwXy(;RvNBJp)茮4N??c78DZm.ʁYuO!H&25uRLȑXtr_`/Mg y|u~w1уu}E:]DjNy0>u/IR/o[G< w{s];|R(cBٲ8;Ӻ_03h";d^ͅҾ0'TM䞶wA+c&$*C'rc}L=Gԟž9Գ2}6ya; ApHz~/Sg?=wp05H]\ȖnM?/}cо ޼?gg6p K.~plϘ 91m4Ǯm`L<颗B_ ĦҥMewL([RBenw|ߌ/#S&?㍶o@T/ J`Hu=9Yl3,cRmޫ?BI6<TxOMbՙZՑGb[M.3ƴ*ۀA0h'x%Ҏ`WVC4t~4遟fk@LKQoHэ?J62g'_WSM=dFvtz)_ƥҦ!LےYOS|ۍxۻ={1fT LYe7ӳ;tr짫qjњqg}&}1;}s0?gt gl@SZm[혴-7m),+*'vQZutF$aȇsӷqvԙ3K4*Տmwޮ1Z? <ʼ 8ŇV&|}dw/W(ʚCW {{"1 qQ?>e74ڴ: NJa{@K6c1ߌo~;>7˘ϘX7ͪD!hٙA+q"sJ;=o0n;R2 zՈa`nh 4W=iRao2 hR:vG񬂇М5)=nyw@mWb:\ӭWǐ~ݳBȨ _'\uSws;CC&)30X}ٗw*Y '  CVqi}>MO} Gw k9h?r]Xc?3erg)`H:|JQz8ȏ9c}R2sEs6*F~7r̩6gk&.k "Y7I­_ ٗ~oϿ&gh2Ƅ_>/%$;gLcY.A!eXsiHyhex8)+]KRDOALIW48`@VY\] ,Mft~?\!P:ڍy_P +Nrx(i&L r\ΠQB.[¦TqTQ$ Cv&z$$lM6$ ze(LEɞ,S3OFQ79ƞ&.h󲮀_OܲtME4@3_\֘WWWs_ܗZe5;oFrD\^28)Rf6;1޽2߼[_3P_)F_@abQI\NcټL:Qabj-Yy-c"+w]qUrV@q%s8!ɦs%V֜Lx>69Ф93OeU3RŐ_ݚ2apفK)YSyNx* K&0H2vN1j Q;O3tMgNLe[1x,ch3%NDԴL;L|$,AHzz&[}TC<*]QsDz?g~5y7HRAc$ ڿFMY:7 -˝llHfָɰ+@6 #ӹkLX3~'6xڡuijjm*Uc%99%ldlKxt`uus޽Yz)0˷ZCѻ2~deJ%۽&UT DZ9hCRlْJ3Sfo8.uƿ}-pCQ?i\ߗSI̶,jэJ^fWQQCCDfUDaz">L)@?QoKHRM5],G>l'9dG_Фrn9 eeE@?ە1s ឝ0+w'>^4$q`h7~'fבj잯 A|ywszmlM,6'GQg1sfeB|^ھ܅FKB?F?1?S\ɾƺp<\Y'Np[<IDVخcT-[ ;Xa?[ L}J\'Fs"v== ;yi~#ϹOFᬺ}sלf)ae 踽x&HF~[[ _ql%G>"Sf}+56cGhsǏ)W~3\=> X҇CF^61R@VӶvقW̺<!񊡮6U[p6ME/S`[3!Usw"S8IO7/ ILDU}b$ve(=n$ D~D @ xL##?}lt8}.2@ JM2@ ew @ bP @JQ }c0ƿDi-+.Ex Nx>CCf-X^Il$ @ؕ|TF#U0M߅_w}_^EO` ƗǾGe *[E"+U_;Sa]#pHM6;8rUҌFcŐ>n37%s~:S~<`l;7[tg_~)B޿\_&űe}|]i9GWC?/ςE9eBiOxbd}\"ԫR)~CcGT&b[՜"=^ % 2% 4ib0X2/ + 4(a8ʭ4jrYsA *?ʊLMՉڅRnZNKqihbzh;v}̾"~h"5'ټ~~E=/n2bskO] ewL([`Dk\< m4ĢG4}\T} χ&rO[\R6߳9"2^k}~cסʪ+' KmT(pk3Î&R{CF0x}k6 eOE5fƝ0&vۛ}Fk[a[t7Zڽ %xKWJLڷ`FĢ{jO[6p K.&RMdq6тNQwA+c&$*C'rkg,ԟž9Գ7M?eapO!H&25uRec7/S[}ˋ0xNv^>;ci}l$>WCFsƴ/8/{7`j4 - u\he7Tȱ+[*>Q/cbJ)y4%~ >cѡ!nmagg|6:+YW(% IDATeA.Lso MA> c4fxيqݘdnw#dFvtz)_ƥҦ!m.{dqz+3d'xoOJm Qfwax<ZC泓ߩ&zn@OS|ۍxۻ={1fT \xЅ.=x<&=K buk>cM?ͬ6m`R)ƥҲͪ{S?>mY:4wӪlm(àn2oB9N|! (߅q3B,w5^~rVh͸y혝>9ֿyo _Ui>^;SgӭXxuf7Vu˔})h~Svԙ3K4*Տmwޮ1њvCW|ƒ ~ᣬ9t@'VT1Oͣ<QlHlJĩjUIگ5)V@ʹdYb|1#~_g>ϋvT_8cҲlKjǤϹW__XA9>JŀջLnuT~?ݡk{{êޥ>>~QIz =+u"96qGd##bQҊUC͋iߨ}Yd=En'^{j8= YX0u73$0dr?Q˾WI2O8IpP,>.h6̺L)Eܢ5\V\|92 hR:vG|#q~vqќJdz㉏8TܗptAnJe)C)) To1,  !W'MJ+Mn+)[Ŗ;OCYSʹ琌ć1ʜjӮ~&nƀ>!uӎm05u$]z;:*0-`Zơ,卣W]wF+nsY?X2w|13~_/QYآtIf_f ZY1d2cd qŞ#]"OS(R)=~`w:MZvp8=UXIp;Hz\^A&G(_@FŦ~4u 7IWA?O4q$fJϼ3wy;FF 5l RQE['n<Kڋv*_duϘ|= ?w@F|Wk0  %\%>`#aDdБƀA2̴gf(YSP(]KRDǃ>I3%]UpYgD/} _WsIBxֲM/q5}Bl`qYc^2o'ٱ,eBee@ ̓6g|E_fJruLQ9fJh'vV1TE}zjԨ(OM2$@<Ӷy^|R.yAl7mq.;E<'8RnD4e†0=\%Sԝ,J_f \G99v4{g١i6iM|DH|AsTCsSEa*?ʀoRI!M6*Ҹ;vFʽ jgl8R H YՊ'ƯJ ޕ{2nj%lTR2Jcm߄bj@Mȩd2#V9$&-ɱԊ=AԜĿ%rI }ǣeH܇wo*lJQGfnfq3kk:ZnIVSF̶GX v_׽7SHi+}ҹ~7-TSH|t-a45me+f;M_5j ;Hd]Lv6BOr>$MѢuvEjH? ǿgg,?}1UlꫦX5m|~P(-3 Μz%zܼ#H1+S),>#%w2L.d$ OQ~:2^$Dc6t&?s 鵱56}eqڭAUOFQhIvިdz~#]aiVelѥ09g QYupYq?\ ,?]VMD}ǣ! 1B8t#1oElowo5 +-'ȪDkMԏ1}=5w ]a(+.rٮdD5^(ɂm3r݄bluHsYu a89͆?Sa̿8͎#XC)}#}/%.߉Dh9s=,吡ѣפOrˊ=kotig 9m}j4_D)_A볠3Doe:F3>ˏ/  ,5\BB#:O&[xD xQW-Yz8Φ)-ׁݪ97 ̺<Q(M6c؎N`)m1#X(7UrKӢy(#$Bl&7>cӺfjΰNJ7L##?}lt8\ vbX ebX M{ @ ^bP)A@ 7BV ́TD|XO``;4d#)8{|eť´/iU'oU ~ת*Δۮ si}#z3_A;>>_]2çЭ|}%_Cf_[lB},.O-}d-xyX22x)ȸCK{gHdgee }p:˶Z39yc_A;>> IˍAɸ#$=>q:g-M.,,43(á m\!)̀&@F > F-.0xFkʹ `FĢ{jOϧG+mRL`ݲt~]FlupϢK eLiߡ!nmagZu/IR/o[G<ܑVؕM$ïdy}&)ёji"Dg-(xh_쿗lOaOٛ?[1Cgu)r?MO(pk3Î&R{CF襰L|?Fg~L dx!i9vmcGh|ϲkԞgǣ!>poͦY̸@,Ư )!ǵ8s |T&phXoZ5~XH߂ע" ))-_B ,His>=O+C)7{FLEt(O-:|e:[#0y\۲thr4U PAs?[ Ji:n*Pǫ3K#^J0?Ȍv:81S.KץMChMm߮"Дe[2V;&-hƦP몵wI[4fxيqݘ҈,rF n2oB9N|! (߅q3B\oǔ5^~rVh͸y혝>9v}y')BF ݏힽ3֖O3RS?ycUxb/|5DbOPw0V5x|.L^&t!%K6&=K bu4ǯ1,F/H]ɹpCY(o#m1gIn"ގkJ'.2|T LBgl Z+εݑh<[F K&sCkBȄIZIk9h?r]c݄Jr}Ibq}QOo>SmdM\'Dn#nCf#*YAsɪ0dFae* oO)5$pP-B5M߼ ć]ET6ce s[ұvtRq1Y3\h )c:ڬVOܲtME4@3_\֘O#>8 $9t-I}wO Ir .,P/g8:(j /&̧G\Wk7* 8?(5yQPU"l>^=^c<0dgGB2dc@ \f~em.vgЈ(F-aS8(,?Ǩ_qR8d'ơѻ#j}]MRU司86n,]j/fVt|=c,2xXON4`8Ҹ) e+w?,oS >5娒!,9xYJ'&hc2vs1IŎUrV@aY6Z=߳ <\@ƣ;O3< o~M3TVexcϋdbe4PIϸ0H26]=ό $nMa8@%Wɔ~OvTC<\Rػ`χ(lq.8]+x`? d=@ȹx[92įI)!e.lgXeZpj̬2AfѷLi,tb†e&6T*|8Ҷ=o9p,ȰVdX));fߞ(֨!UKء@"3*WdSY$1bX znN 2R4݇b+}](r^*^*@\}1UlꫦXȩdA!%fV+>Rv(U68zWRXˌƯ_G>.}#q6MWq+E?ole@v:߾JP֡W-4-SY9hCRlْJ%O/Y?Rq֔!H`6oB15p&X>?2^ EGckT??>$eo8Q ~5pU 3˨?LַM58}ӣ\pbpo\/b݉^ErZza /U]h$ soY}~?S`.߉Dh9s Ks ]a(+.rٮdD5zxmL~r˳Ɵkckj {'"u귅{ܼ80?EglI“OLz5vPa0;ZD! 1B8t#1oEl@5 +-'ȪDk-$M?WlK6;`}Ej_J>#L<-3t'\ӥ)aT?E,Nu;Y}= (ٚ_1~a^T˫ȇ}^>UQ9~o) &11Xմ-'qhȬ˓kۘIauߴ9Oyš(!!'E`i_!cθ |E10p:Q1 ^ .AMBk~}r*sb"1 /%E[ c@,*4qqK )e2?̤BP fBWp@qh5Us5=VeVPnne\ JP<"\.r6Tzeb@ 7x03ȃN 'TXS8A >t @ ɈbP @9C|Q>_Ϻ@Mē -ilq@ bP_;j;V׳#+<}#f|;j݌_wE ,nݕ%𬄯cyjUbHH{c%پdUMV%_r?}\qi>t_5;%0`z}ߎn%[ Đ|S3iÜk,W>@Z&lB v|q-k`tC s|PO˨d1.& ˻|˾?hq U?,^bI*s+bY\/H?jݹ] qȐ4B<[ Y}۟ 'ėgW_#?A dA׬/O]{jUU`hז6s?z>}ץq麴}#]W:|_?R-3?x{2nmH;ѐԓ<1F֥Eʱ@4L?^I EJXa_244Ɠ:iKМ |03- 2OGs#Τ+ [E|+\{kIs ;ŇZ5i,XrGSߤ=|w{8,>K];jDܖ5*>uccj[-r#׈t.;&Xɜ6yS?%>hAÉόUW+ G.}2&Db9viΎ}aK<ٰ1`ejOľsFrWFlfE7eH’Oyikܕ A)+IHč4[ɧn2bfVͮt] ewL([`gjˊ{#8| ۠i 5S¦js!I7Ⱥ8,E>cŨY6EU'sm YͰJԊ_2f^2܁CO濫ϧ iiHiɈUc3Uw@m q0_CdFL+~r6f#o$N'á^7Xgm q2ٿOՇaV| _]D^NG\hJJ'!Z#y(m+ҹ@ ٺ%$ 2QD.KƠӁ*`H@w!2٣* /7QJro"K;7 ?yu1~Egۺ KQP<3ҹWtDedI+مbVk>`RtWYgEGWQAi4])Q'c;CcJJԆ@ ÝB}X>k W۞*8 ۹asMHqtbʸ9QIxrp}e5 ,ֲL΍lJS.b*U&/YAXr~tIÂ0&RixŐ˽PUل I$=.u@1~ТʖLƝVؗ\^*(yM9;s>c̘F JehْB("RfWh!ReK1 c;ssB2#CGs~^ػoNF '2(/a𵧯 VmOZo0˽簿N/nRۯZSʧod$>Yʾ_C4N6b!8<~-v7O]P#ˮ#L(-P*QYt8\L@A<b&)~|7րKl?qYف?7XOˣٲgn) xS(10_u$+Ga5I99W%6W?5U1={♙+ g6-߇3зgsm)uHt/۽ 4K Ğ؂w5iBGffr7uj NqlK;wKȕA'欟ˬ_yK (Iӆҧf#'ҤzON&`!4nED-V -jܭ y6mDèt&Z}9 fH ^ 7)B<+'aƹ>9I=wTΒקcvZ 4'g]Sr''@F?ÿaXTV?j r6 iѽд_vAަN8tDަ27Xl6+W÷y`d5S?ګ:סm ШDԛisH332ܖؖ'ǻvԼ<]8Uq۷WAAvښpUq5YvQz*j!8t͝tpmTm^hԡA;HSo$B={4u ^W*U77v<vıqئbڭ'~ܽU8^<މ7+]E{lj>yOU埍3x~*gpwXo(Չ}[!Ŀ6/_Ygg9AE|5[ |fAr5[N)XNSegb'8%g}z؎ϛ-抲}x|*iT, ޺u큾,w߇S>t}[!ĿTOԦ5N' ſX͠B!OB! @]G ) !Bː !BHB!uVO1o6\vQFX]XB!(àfi$E֊4t~fkѧZQJјkS)73~cOH;qc)BRT*wk%P,,-aP|hkn3kw|].L!B\aR)QV ^OMJV4ɬb `hf2LK3Q Tl!vTSǷ :J5T?kEV|o`I|&wǡZ|U TQY7+>}P< 'v>_&n/"On&5֙]xM Y y7?ڕ X;6mihۭa[GRhmT|S7buC`]T1 ! 9Զdm| *G>uR`Lr` zn̬S2k@[Ş?@Q p.$9G֌JUC5޷ز!5Fjׂ@k;ڡmwpm ,*-&KdEY3'˟g“ݓw`e`f|w\L0`BN;`䷹&҄EAe lFqpH[3¨nGC}Ǫ|à+ә z0;E.BUj •w"E1E5t'=_`/ʓ}y.nO'=i#sI=jcGlJKF},#/K,]4B!{Rot]9Tn&O=\.v k.D͔j%֛ O X (d;+\-X^q\L̼L #JBíjM|\G V@ANK}hTفj'R-L"B!ΔJ7OJWOvqUJ(z@bV P"٤+KI]x+j4xO/aU[`Q {>e/C~mGBq$NIkD^?'~0g?OρI䫑 O5 Z.[(fo\V_g|@^5뇐^ֿ;W-׹B!.]Tݼ".׾41vR!B⌘9Z&B!IB!]C7c}B!8KR3(B!aP!BHB!B!:DuƒIv͔B!tàBh}~̃a'N@x=B!.0o<D! ]]+>?1 B!.0h~u3 !'? '0QHB!({J7qa7VIA !BEm::ժ`'}HB!.0h u~EF0@A"TB!ĥ6!GQó4ǃdA!BK9 |,@B!.&2B!A!B!aP!BHB!B!A!B!a_x]y>Z_{k6>!Xq4FP(X(e'BH,)kxnA"&iOmv_& h^xel k XvptZrZ#iJvNEO7cLczn;ڼhsS"i<.K`EL!)H"I9*Gd.fk.1EW|[h?]NJ2Id/\֮jY <NB3Y;A^~ InsHB!.0Rw`4G<Pp֛и8Jb ?Kz1y*u QZTi)S<4=ZmޗYOP.CxKeT=qO6)Ld>|fQrT:Z3,A(3bW C8l ^ib6w@Vߎ+PG GmW>هO~} ꣭PB?_)@}mj Tw"(fq0 PݯP0R'O 5 21PHd- V ڤ,NОdA/a@Cz.F&a^ {=M\cGOXhs`J֘)r !tX\N%:X%0(ZoL"dܤp}dbAxr{)ܤp~۩(yp^/q7hNG5 VLw \ru$;Ž Ŧ* *~0/< c2rB[PBi<VPk_a%c#yPŠ9k=\gy1yHv#85)G2fܼB!Dĸٌd&_ Ao V;+v9LG;|sHd(>EP,*V_<ȍfQ>*_Ice%CtiST͙ 03/ 54դqg^?\ !-(P! կa(d愿 Lg@HDv=ɟRH9,BOS3( @)XMM7פIUU݊lO"23ӼƎЦNQ4H^|nOL%Y^[cs>̣yT+&̂M;TlP!ECQLEtS͖s so,z{ ]AYRؑE p?! (AM ã[P氫$~?}P@PIwcb{zl6@#fh& skS!Y(7t 8nKRs# A-!<eP7ȱb%f3/?c:w~8uES P` @ Qܸ3PBQR$Tr[Sx+'Q[U(-ԭPԅD8 +DׂVh`9PѵeRh0j:G5&TKB!Jr]Pyf3 /0tF >ƬBӨ۷Ѥ ??<4VA,*3ܙœұRˋ=xno w诽#EyiEȃf-n9*>[AYp4@LA(-@bO@?{UB});kqssX[ I{ald#Yf:QG(A-ڷIL^Oixn-ޓԤ&:3~ ?>A>+q:QR8kЦ|2Mؾy/lҞq;hB oxQ0P]uؿ=^K*F!LgѰ'A1{6nRh`?|kWAnV:Ά0rIth&N `U(wѻ$Z$c_,Ƹ>>>L D9qt:u&W6M&t( -Sxf3zۏ/[Q'go +Q ;1ݎׇA˩XotE"c:vA&|x2BqAvy,u,W$8Udd~8UjԴ[}jJZ~!fMQ7_I'5]?6 7Y7-R̟K~0:ݛyY|C,9u,X@y7/'mp~.GMbhM`ɨOؚce4GI!(17tm=^ޝw1_-*Amjk:۶W OjN{zLS!DHtjp4A`*OEѢF021ntLLÏAQih="U+ƒKk:/QeJhԢ2dB!DYVzYCѨHmX|1 'B֕;RJaԛ\A,* :K7ܙœwC y5'Eg0H]v/3KjXy[3$#*On}*n3MFgoşkD^?'~0g?OρI䫑 O5 Z.[(fo\V(b2j$#!w8O܇[3!즠낪_j0BLIaNjL_x ޳[7!B\ZF XAA3|B g4_\Mԁ$렕ݗ2x>ހk3v%*cU^N3v4-S :%ѣi&Q}">n='<x9TD i|Zߨ0c1/ƾCb?{.?#/7BQ9jݻsIEN C4=ʷ.ܼ=d{ ,QVڑR]G5Mk ףSS7߿z&漺%4'y\ܞNzFz$,Ǝٔ Y:F^NYLAh-}Bt:nxO6Pp8C4%wV⃨$L#=c6kbu뱶Rެ4S 6;WGnC?i89L&Nm]"|ucb~N/&E$MEiU4V,B\aH W1X59] RFf`oh'k\e3o<݅fmRyy(o7yڍt oڒ0hQrPb#,PB ^Zeπ1O>HfAxB!."ӆ{FÓDYi9pFСҼe u( ˟};ĵh'Ս氢f 'S%HA NàfT /0ޘ] (9@3c~@ШNXZ4jQ@En2!^7?Ě&[^Ʉy 8%ơżVlb-֌]㭛"[=k7Cse|Ȉg7P {;XiUϮ2VZ3G1m?#@,=WxĢi\b)#of}_=mb=&Bi֛X!B\\J7B!HB!0(B!$ !B B!B B!0(B!.UڐD R8аT%'-BQN͠j˕][ų*L`BB!nkbi7X<1s4ƅ#[8UB!.0h&jnWpć'D_.B!`,'Jz&ĐrB!(JcT]+2vdx]zL@"Bq A( 5k0%B!%Q@j]=?9B!ĥUÉ$kkoO=B!ĥDB!."2B!A!B!aP!BHB!B!A!B!a ǷjJ4 .Z Qb6I{ _k*ӥrOX6Ʊ(coj(-F[?`ٱWQ*jr% !A5NY$Iүi81,/ aIˆ]\ AVXRFVD`Ct(z7_M?D!H0 R3X+[!#]ehBVfX֔L#S'W{۰CS^gfRۢk<˩-+"P"`qc?N'8]PTw;e4F&@dy9>ߙ8\B!(A(=#;qnJn6>6:vCT;yZL0'po] 0 |V(:<&Aph&GVetȻ to%<0@~Tu~u'gܿ7܏\gc 6}>-:D5@{IeT26 cYk:y(jݰN|DiPơV)3ca;߭e@i4f MѾEcMTǑ|9,q}93cCW%k<bOf>֮d͚eh!B:5E(Vb_&kz nѰSѱ3XI:N|~JX0oRp,oi3DARxF.y51#[1+JDaX#"?UQ~_FLBkQ1u 0%?dMd̙(F@`9ԞL7HU.6-ݏ#ۍ$ԤÆaȘur !,}BMLn:biDU e[ Ј:ϭIWwh!wVMk@aKQ s=h}BQ0N6;*%|P\ CPQ[re61`QEnckvm4\*D:4ȑ`!B0ǯi***/&Uq.7\1aRsXk FL]1Xergd3|!9GP~v ]v&RSԷGI4 Y PP,C2c~EmfC?c9v"CGhsKB!άtzXՠr\_+),CJ1aM*N jT vh`Svw&!yw>M*QjB&6ϵ 64/fh1"C7Ei%1SAr~dAsoU_CEh71# !*Eytfp4F+&ߗkt-!)k<%a45*Í !Ҫd)InLZ ήRy]cf%?Oۏ:+&xua1{>?QE}kე4Υ;{P﯍rL/ڹԋ0)CVw>ydcnre'dJƳoaOsJzko%FӐR֒+׿BqS .J -YqSOȲ4N_6H'g3XV*P#.pYuJ6߯DiSmWBA3؜ehBG4Z֭[,E؟bn1Q1bq/6_ih'w ሡb\eoiEtfb𚑌#\';=hNu~n%.@#"]^ovz%JsL+,%X._:^ӨQ#,YB5, ;#J7 5ode܅fPP.>hҷ>ry?Z?ym2^X~c#I/V\!Ŀ͛7端"$[wJ Œ_n>2OyO  t['݅>IG_ׄu 08_DK~>`3/^EU$T 5LdTbμ~x_bgm۶|gյ+9XH,Vl9>2:v`I]LjS6lii"8TOZ|ܙ<8DLv5"4- |JD(٬Sμ\!ĿcXuxh׮޽z ZH ! $De9;<;IvZ YE/Ç7rvj*^?;,4iDt9u`X7O^u v v%2mOX_r!c5^'ڵ+oN-<SY14yp_YGpmHg/ =VIx]$x._wy>Ĝ| 1?J\!ĿO|յkc۱ʫRB!eR3h\h^頂;2BB!àRJLmEc݇X! !B\aPQٿ2o'u.`@"Ӝ!BQ3Ca6e\OaK [!St1ʭRB!iB!0(B!$ !B B!B B!0(B!.U)q1 l<2z?V{W#'xeZOUMWc^X)VD%/¿-]'^wQ}l\!ʪTH$Zc >\O7&io돚|R8UKMx y#nPׂ`LS" \>jt}3r}pM_;9~-/cႾ VD 庞/0b YM6@ƪ7xu⋼X<]BqnaQ; UF*aO"4/˲s7ۥܩ)ڤ\%Jr~?[*~ֻܝ2A#W_0EHVmyxR:~=#~gasezqu ^pvxJ &ëz9| ύ$w!G!JAO_8`qqXz ? Q DV+40d }ڍnÍasq hsM<09zv!4Y~ +Rb:}F5/cүJ@V[#oo,s1~-AF6@jXeyġڮJQDaƓ`Ru @e=ˇ&~:aX'4\xQB` LsxJTǑ37gh<6/ㄿmҠ+;'[ptjۄarO]_-f@ NiR9dӢYq%7v&A(w~:`M]]+~ׁ%̚<.T!80Os`>s W)jUn)#ut|?)~!7)d Xvo(/Nl5Z{L] 6|>%(iOC9 ,R!POw䦧 ?gz7~xcQ)g@\s( Q*Y0AE[,XwXcE}uV0|Z.h F;~8:e03̌](BQ!K_[)RRQHD?Bl%.m0a̘19?F0|-s?s}>|G b6& }atTd,m,o)/n/8'\)ks3wأ?߽k^\I1%GsӊزOoKl'8zʇz=ǿZбH%ٓg1%/e},fX$?tɿ$\.*O@/tJfI3T㩁!BH3~9,enT r/st[hj F?Ts{ǁx+'f'I scEz6J໏WrPE-8NGߐ~__T-g13z${xa' M7Aw@"HG9IDATu-QA*Bc&HmsN -=&KB[mS"E[HUo=\Ej3yUC˻ ٳ籵D'ǡʥi B6$dNBjؙ.O7!To%}vb3z &tnag͸_9VÂ,ƳDZx1}*@ KQnluZS˻_BÙuc'8oOdfvgPP#ȎBT 3LJh3t&xR*W$+8Sy'1TOaƴ pM=%λY% 9K`o^8 *JH~ ,ReϬI` 4dߊ(Dg&rfSXʊ{He;M&;¼ JU 7g:X;ft2HE-0o\f̓-Aq/g xn$N(mصT7}AD, c̙a(=^-}3Wҵbɛoc2V҅/jH Gi@@0?GT?Bup6)>LeC `7BMf}T(oWmI2եsG5Le\[FN22?n:<ܥ'>ųR&S7`Ϥ$7zI.aF \~:x;/*zKxGg `c唳jg[ _k NߑnUxdsEDwV3;L|) =pvYBj' NUW܏ D>T g:iH[ 5|6jHg[ D:t2DK/;wKz Lj>@f\]dedvOP !{aO'>Վ/ ٻXL7YBCL^ϗ$K-e/8ЬIۉ/}=oڕ~vVy>^EgƤhHk 9l6PR'׀ZAF ujl)N:ϡ|hh}C=jjP?w_[ӐfǞ tn0t${FKj"\]j0` Iu;/ > B=17 I*Hee@!,Ir ٷ3}|{${ïvDG^f`҃\=2c|Kk]^c۸;0> u)gFBzK-$!m/؉x֝VWDqin1N*/d g    Ŵj=h% p- AADAA@2PA-Ň=hh>?)n<@؜Gjc6B\q:aTݴ1͆9BHn] E)ebG5À׽/.26bT l*К) v?+$QiQAЫ>L|7æF/?t1~xQ/hv6bG?7OŻ8~$ۧ>:$M#Xv%F&@38Ss |<W{]7Y[Snʰ1#rOe,U@ںw6z;4pN?Ԕ z=i}e{>.jzIЖ?j$xARxMV+ r>k)sdԨGvaHO{040V!{[ / u&* L n=}YfLƄl>١2G{ G0\/TW:  :(~|m[0wmũ*INf B-&޼vq@ =՞( z0P=ds9 gzOϳeW3r錺>˧%Ȫqp$ldD\x=܅ W1p4"|} mmh*C|iVgbQ }gL=xu_7f0$ zuÖLŨ,;ri:vbuOSq/ՖN3QSʢʸ{`/6^<؞ >ZGnI~+s$1geL7ۣMۿiۆTr@#.<@@NT][~${1%ԯ9Ruب uwcKSk`},*S|JIBdۈB~ sJA>-U'Y֐u_!ρQV:{NǴ7bKv pp&f߱Lr8s`G25'[LdXYJAޑEa|8 J%ӃH!K^ɪ;arh@R]J$)? I~d6 9oͯAy߮B:ati1$B1Z?Sȁ8 bAe4qqv&KzImFәHNe*L_ ?MCf_3A, `Z0z:3=?|2@Fz]IdKQW:p,!|77D]E$#Z֗.Y$ޣPl/ V@WAXe y%_&8*$6$azhKH\dq$ @oWWW B\( 0fO%8z,Ox#"1Om"٘ X+)}^&?zg|`!kҮ3/z$+#}v<׾@bg; \zTԯ`@VdE>/y*&a-e~l"*b3+EO:[Z͋jqKyU$qy{Q]lF t5zheՒS$*6oG&c C-j0݋!nlgOʏ'0×{R$n\-P˹~2bp^AA\.;s$I:o#n{<$t }w(zqډWYzmܩ,:f`OXnͻBq}Lx^:NZIGENJrY3OdMxoɟ. a@hp?CR%6gVvQ! sep| U6#{r sV8V_F.L?f{S>}dtTZwhpCN8cсٌ*D'oy9mAZ 9r1Jye^8Gpa65>Ђ~hN'zJ|V)ra݅%Tߑy@#H 9}'Ԣh'kt'&W~Yb'AJg A-oCzJŽ#1ojԴy_ zeb9? g)j>DrBV.uABH-94 $Xَ9/tf7UAPAA&*#3@s_*AY,=cp %]&mzA:s-AAChAAdsxoWILVvLe@?΄-+ ByAWOCؽ4 iVlٽEU,8a>o'fj}~|<˃]eDoʦiSk\֮b(֯Z_C,BlNLV}&o=hAxm\OhOpdf^9i֮X?\~[0+WEe"t+X{|&2eܕl^ook; %s--kl'fzM@s|W% &oAPAntg:X$]"e#%Π=#ѳl$07Vo~Ycx9_1:ztvh%q Z!YȩLWH!KH=;^yaO~D@Dξydbo{qrn Kѣ&jr* {_ηfzwQť\~X&ds. 'm;`)wENs9p_\wˌ֋5.e|h:wһ4)4ڿ!ge:5_9ӃiNؕ' , K;Ɵ摞xk:'Su0DG J&3sH}D[ݕk9iSQ GumbyyICAN ;$Ax;)])HkۆlMllV]/ $#6S|RbBoJVQAdO6Lӿo 4פe5+{  aЭcM27zJP3#Ҕ&eLVzQ?{ο~j6)—'7l-saPfBLw' _k!m`>14mwh.n4TՅjHί._fMؼM_=E'TMA({^# 6MLx*V8WjN"[Ce_ 2CDOc?g2= ^O#˞Ʀ0ȥ4G6DI !AIu 2ѹ~ULC N\-^c|l1 A~pfށJ{5F +6/""֤KcRLöD2c]^u;~J}c5kA3 3'?@13 WvNKWR`F#OI03:hvo8^gWP`G FADPf>1¢y@;r򋛒.<ʇ>q ɉ3qoOG#yTxƭY?M/$& #SD5͹ώ(>7Z\*rc+)>//*Xc O[Ȩ)1Aa Ư%ٵ1#b.Ǎu@d>Qup wg߈?fæO^xozvm> +S['jŎ#  &&zJe1cܷ| 9=1ۯ\E MF>Ӭ>t':*aM" - " w՚N  pۄA`h ]늯{A%ӂ  wjIENDB`jedi-0.11.1/docs/_screenshots/screenshot_complete.png0000664000175000017500000004132513214571123022575 0ustar davedave00000000000000PNG  IHDR-{sRGBbKGD pHYs  tIME 8/  IDATxy|MGǿ%77{BBl%RZTK-Em UUUԮ Z[k%B%քrAm/ |ɜy3<;3sH(@ sd@ B@ ;Qxɀ=&J-_Y2Qon]FosXE! -&P=D[0مW@::~W(A޵/ h9`7 Gql; Ⰹk}pIX`q JQ/=B)O&OMmhMØ9jmd?̈D7cS1ÚUBCvQ%+(822r~lW4oT0\;_Kn=t2 ]?yxA.~MoS?.R;1x]E&?;uxy>anXv@[CHfSF|wBuy/ܼ4ό8NNGȞTy(z 8d~J w{V}WgHSJs8Z-% 'wj3+{xԭԖj}S{?Nc䝘rnPl{U:,YaNB ~L; sޓ"e Ss[`$;r D0""f6x4WkW`cv "Cq!6Z2X:uGd4g `௹EvvWua.Y72PQ>P'w1{g,R>!8kjOB$/\눉GzЅi6;6HSbB^G{k m ,S&e0x]wJig" ҙfQ`\o>FMӌ n@S'kܨ$eNC4*ыuW>;4%ygϴNNT/Ҟy|-ER*:Ocƨ2-Vr1e6Z?ħU#yZu]5+ {`<\^-ffufo pR/-ؿJr]uо%]@SW;s392USssD[ۍ9 "d6kc5"_l\=QyL`=a'SZccfң\m^)YBBG$g׆e2aM7b1R4(2&#WW"͌k$l Q^JƜ)&t^Ul6[i[bK&{*̧,)&;lX/r6,':C%s["0TP{S+["I={ct5a7an~w_O~otd@ @тr{z,j W7H40m c7&f [ 7u,W| nCgV\4nsﺽ៳uLVG1 }%>sVA(;evE %ѐڝ}췥\&)2;7ejzCvA\4 II#;1}'L|[s?_9&HQ[ImxQp1FHuй tj1$eN(YOZPR/!*r`=tzO ɱɘѥm}'{;4JF7h_3_AڍIk:M IHygfYtUQJ%mƲmfE1h5JPVq\ֺ*Zóx%j/chd >͚D@YnHrewfa%= Nݣ^U 羋Isܶ_޴%*BSbF  bcp>MRg4r~ Gfn&;8UX"yD; ʤQT_pNľ1H;L% Yx(aI9h<_ZX.,qǂ^ nhQ,dm،ܹaGL>ΞY׿ѯ32<0)Gؼg*^)+y}Ff=ӟzNtfОn؞P+MD߅ɒSlֳ9L yylBڗGe8?mdʤվ||8WX:" 8011Pw.A~΃+j bᚪ,i1ҰI*e0eg__kC9Zpl$Nɳd"iyVo5/ 0hPHI 6.볊 -"3Z _k1:~*0stp~9>㟱{,WgpD_,eyo +ϋ"32I @ I@ B@ "@ǍK|u_D@ xE«/6T \é3Y4΋f ψP ҩxꁢ =ΟT""?4s-D1C7-C"eFu5&ytH̀,8c4׽n=3.ؖ%q@ xRE52z_=ƪ9*jpty_2q"2NW`^*LWcFqO,{T\v^Ku] kSMDևar"/2xhfgLzx<7"}Vmb=lO3eP6?/gsR4ìZ֟Z*'ϩce ZKZW}fǖ}"n  _qsvYtSL8t?C0Z~_; ;3M~evO`/v i)RKfa8V^]/UVL~uig" ҙfQ`nKӦi{J7Rŵ8QލN]$:KXw5^᳿3@y *+P[̔Zv~sߞ)ZѱA13Q6M}L V*j}DKCL<B8u)KJa7d&+Do9 .'!fq%A0Bֳau,gor3P29%c@e ߘʱڛҕ\I l?ԀS-lj!Q6}(YQOrh*S @۝d|K+sf'ڌ$$`MH\?w|Jܫ1ODҶ1|^̶(PPVF3v6d PS]zjApYAW(-r(|sy($NESiM [Mou^*mx;_W?jluۍA *r,/ё@ $"T[4H|>݉V9*l)ħ;1OscyMWʨap=KJq|Q|&ֽ3Yyqd&aNu=rhܔw/Ʃh@/X0l]$@ xWbKmd|8gH 2ZɹJd5oLPPƸ[AdJ?IrBrl2fEƭktipkiKCiyu;sJn~[[cSj╨UN(ܖq,W1zE40@ o͍1xx2 l;RmG+HxDI9¢=5fF֜%W3CfmLE O gJsȖ* d壄%iM `OfGΌÄat{s)oo*V_ cЋ>&$Ԧ.Ӄ㣉_F]he@ <b D1߮n6 vYb8kƌfֺ+ @@ |X@ D@ -@ h)h);|;jG]-u/Wk_{"D'"Z @ %G4jj~XŇ+s܍3Y0֋ygӄQ~aʲ|;fW-˽H;Wl6J=AxR:n3o7 zZt7ǝ <'_ʮhV|TƹZBDa}I|~Q|T4ybɯ%swq5קHf+Gva![ӓqP@ DK-^V"%3Ӽve*ʘ<;H_ Ur>ņsd=J}]W#a1#J"[q!G3߬?{Gf#)9djNa{>n_9,}[z[a6PցlMYNtJ&D` LaޔʖHm`O>e$tnigW Ja<ǖCɪݎ7br>='?pi\IP,Ccy)V3HT!*4#U+=&I³;=Ѫo)=OaRAlg#n N0!5U:ɢ1j?Qm (lHBڬI+.s]7̈́.Hj~W + bœ'3s.RQE*LnNS]zjApYAW(bN|m޴QdPIHHӚxTCXw~*kpQˠll((v+f; vH*2k%$ o ~ ~xӉJq|55 uI|TjY\ %7E!t:gd j36圖튄2ַ0nr OVܻ~{N.^$x[Mc%'\q9U1ROwBqXlTUk;W3HK4.HjW0Ykcc^2>jAU{ʸI{Q6t{lŕJ61'BӅgrxl;gL&jB&켾4aKLRev4nȻUE@CCrnL$SxFmOEPRTtx9^Tqg ;}\KGirg\)Xn.~˞JbC@ <1ѢQ'$m&.!C=ytHWӽE35mFa{˲guJj =Gʕ29).Gr>#jv:G .dRvl)ذ*g7oZP8(F vv3a37ތbM_//y֤KCfÞ} &35 1{&a+p ٻlY-΃g3EΧSiߜ 1JehP*+D{) H {L0>ڕ>a[``0gZ$YTЖ]\XբZ$͙pȵ۬Q(켛\I4:nCWyGC73$ .J4^:pbWDZ6sɐːC即,XdKKe$*ݱkd*C$@ N' 9YO Fvݟ8),E̒՞IYɓg9xODe80Q <+O\+_ɇ9̴c `QrD1&UW>(i8 OW0"75߬<,@ Dp-&57F~Bh|y#'/m.G)YXAuS ? f7aCAG, gJ7OWN{w["@ D@ !ZAޠ)׌ covdy &~7(x1^hʇ|_3fv7jOw&])scڔ<28ϢE~E xڰ i]9ndOlޟJ^$W>O1ӳM3zkcg#i&\|Jq&~v?L䇛U <=("\ڥd%Ra62g0w" @РS0]8Aą4t G D@ ޖ~}Jႄǀm,pteVl/t,김SY {qXWَ)g,d#GY^oVCg/5*`\wSNkiDx70c4+Xc0c~ ]~)tum-e냱x| %4`ꎕ>tSz^CkGsr)_x(/{-_YʟwcTߑ}'5F M2q!Rln!-7y{AZ^VLDS?9FG.!?.l.Lv`>_!??S>LP nقѧ"Zk)bC@_.a{.0 yMnwoU קӢYiؓac&b-yh xgVGYp)Oj|*L:omUŅݓdND6/<8qs;xg[K'w:JG=Y6SF ١}69$czX!͟:OW02gsͭ#|ϋ-݈'1$Y!& -}'9{%}{_n2-bX^/&A`BKe&9.v V".[̵.PyRɦdZlq|fP?l JV<"b]R9ؒم Y[(VN юuKT|A?Kl^If i3Ӓ}7+C<.ߑH%jEUh=|)VeIo^j__K>ye[QSgu*qdH4(Ӵ"7*/gŏ#csw([CA~GEϺѤU>i–/6p&yk{o;3uElB7] U^ːxl#x n`:5:瑼IŽ|s G}w(+Hڴ_tӲ@^Vm+u:g{?IZ^5-8U,YɄ-ds-|sDž Q qx]K~B<ݐbє#;ٹ}h?6'0gٱc¦X80Q x큉.1 = 燻L3-AGRZ̴*T˨H^h6 r7)A.A &yt5C~@])Ӗh hyw&A9F.l] 4cvY=G<3A, O]<$36^$ @vo@ !b@ hsXnÈ>4; h[4?`k)ي)se 7_!Zcz /z)IOg{ԪK6U/y6dʜ)w߯Z'Kx5L>w|"?8؈+hG&ч/(i|5B!v֝}:ݯr㟧O bE x P⃉Fd&2톅(m-ET*ځGQS,o9-Èܿ5{s+=,ڰ B 5OwL\֍NoC15a;/tg۶ ys.[Ahfv{Uo%8ٝ*3ov"È ʑ//]+ޅ9kCٺ1BpQܥ\OaDnq>0~[2wOm."wy^W߲?';Y>ʁ'S;,Y;!}Gڿ z~LT_ `O>N-}*q1)&10u1:NOcp30.v/ևuI*K3hjQm_/v1&/|=F8a.k#f2u5Z]MOg*9-^iyY({ifNLFҡ?H "xؓ~+IR/xQΊCpPiFK+T-Nd;1 `󨃏և?oINǞO)|@fZmOWEڋ /GI1g!с}[ i쳦\!U^7-Dk#vǼځfv"̸6p:ӖF\cIH\*0n$ TvXL w0jgbgRkf%);=1d+yY+jtszMκg54|ѐفtn *nT3N? _I3&])Y6aMЋ8Q?'}؈+p7X߈b.k{u$26Z%׺4 ®sdROsdJ~owwGXZՃpOԖkEH*x@U_@l]e벣)=ahyQJzVƵ$cy@BVy糱;5L>/3Jͷdojn.r-ӔIalT;ؿr 85ZC`#WX'>ۿ]I KVvyk9?DJ•\0J ;F>2bqt`M@1|z]OOB\`73z"ګɼɊsg)g6WCG|ӀΙuy?gD`q|.ep!t0&SQ?YUZhxuol(g:Sbz8}~=+Q]\;#o.&deذ)>51yb0u-aKHng,?S'p@ xiUk.bU3蟂i0bE 3-b#@ X0M^N{;-fZ@ h@ D@ 4aY ߎBM;9^$s^hʇ|_3fv7jVVxЗǢ!)ydɛ}J/-ӆT(Ox؃S%JtWO I7S?ӳM3zkcy_๿ߧx~)3KJ3 '𨲑?IX?a msEww!Y-C_$~bk~cQ ]~)tum-e냱x| %4`ꎕ>tcxU{~ tdw;+MQȞ17YđHy ߐ)iQ{Yo?˱r|ȥot~h}o3 KljݲOE4R6.&$ [ou;߽sxEpOȼ~h|Z4>Ͷ{2s ?D-݈'1$Y!FNA;|\]~;?YC**;2qd_%6|\Kά'\Oj|*L:omUŅݓdND6/<8_K'w:JG=Y6SF ١} 3I'OsJI?1PB?%?u>{2ad<{;Rn p{5"v$"g$Zo w9CE AŎb &^qTfRΜf%6/$4Ov:E 98/P𘎁[8q (f@w<5ʓJĸO6'bÔ|o$\ B$c$d+(Y^uZHؾwg.dsd=nѢX92F;JU,8Qp}%81oWI@PDKʖߙf˭3¹Y0q0ٳ0?d[2oV:rqu=X~c]Ig*=s-$o$2=28/?`>[z2rgqO FR4>k~B0gW I0BAy:R᡼t(.{H``Y%vDXLq -[5;[;Ih$]\033 fS J <N2rk_YN">x7L_jԳT7{jFs5kn2<̽ľ^<ƼlMv#ĕqZNlGBv{i'j!7frbAp,>L14ceŻd>^o}d0cqN gi:irُ2jhlB+oIn&qqH=wOr)x=B*Rlsu}_aヱWAdv'}u[9rO.S׾0`D+-" EPvCsZBX%FdSq|P""SRP>Sh{Hdc~{HDd=EDDD|A<-"""RQ~@IENDB`jedi-0.11.1/docs/_themes/0000775000175000017500000000000013214571377014755 5ustar davedave00000000000000jedi-0.11.1/docs/_themes/flask_theme_support.py0000664000175000017500000001502013214571123021370 0ustar davedave00000000000000""" Copyright (c) 2010 by Armin Ronacher. Some rights reserved. Redistribution and use in source and binary forms of the theme, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. We kindly ask you to only use these themes in an unmodified manner just for Flask and Flask-related products, not for unrelated projects. If you like the visual style and want to use it for your own projects, please consider making some larger changes to the themes (such as changing font faces, sizes, colors or margins). THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ # flasky extensions. flasky pygments style based on tango style from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic, Whitespace, Punctuation, Other, Literal class FlaskyStyle(Style): background_color = "#f8f8f8" default_style = "" styles = { # No corresponding class for the following: #Text: "", # class: '' Whitespace: "underline #f8f8f8", # class: 'w' Error: "#a40000 border:#ef2929", # class: 'err' Other: "#000000", # class 'x' Comment: "italic #8f5902", # class: 'c' Comment.Preproc: "noitalic", # class: 'cp' Keyword: "bold #004461", # class: 'k' Keyword.Constant: "bold #004461", # class: 'kc' Keyword.Declaration: "bold #004461", # class: 'kd' Keyword.Namespace: "bold #004461", # class: 'kn' Keyword.Pseudo: "bold #004461", # class: 'kp' Keyword.Reserved: "bold #004461", # class: 'kr' Keyword.Type: "bold #004461", # class: 'kt' Operator: "#582800", # class: 'o' Operator.Word: "bold #004461", # class: 'ow' - like keywords Punctuation: "bold #000000", # class: 'p' # because special names such as Name.Class, Name.Function, etc. # are not recognized as such later in the parsing, we choose them # to look the same as ordinary variables. Name: "#000000", # class: 'n' Name.Attribute: "#c4a000", # class: 'na' - to be revised Name.Builtin: "#004461", # class: 'nb' Name.Builtin.Pseudo: "#3465a4", # class: 'bp' Name.Class: "#000000", # class: 'nc' - to be revised Name.Constant: "#000000", # class: 'no' - to be revised Name.Decorator: "#888", # class: 'nd' - to be revised Name.Entity: "#ce5c00", # class: 'ni' Name.Exception: "bold #cc0000", # class: 'ne' Name.Function: "#000000", # class: 'nf' Name.Property: "#000000", # class: 'py' Name.Label: "#f57900", # class: 'nl' Name.Namespace: "#000000", # class: 'nn' - to be revised Name.Other: "#000000", # class: 'nx' Name.Tag: "bold #004461", # class: 'nt' - like a keyword Name.Variable: "#000000", # class: 'nv' - to be revised Name.Variable.Class: "#000000", # class: 'vc' - to be revised Name.Variable.Global: "#000000", # class: 'vg' - to be revised Name.Variable.Instance: "#000000", # class: 'vi' - to be revised Number: "#990000", # class: 'm' Literal: "#000000", # class: 'l' Literal.Date: "#000000", # class: 'ld' String: "#4e9a06", # class: 's' String.Backtick: "#4e9a06", # class: 'sb' String.Char: "#4e9a06", # class: 'sc' String.Doc: "italic #8f5902", # class: 'sd' - like a comment String.Double: "#4e9a06", # class: 's2' String.Escape: "#4e9a06", # class: 'se' String.Heredoc: "#4e9a06", # class: 'sh' String.Interpol: "#4e9a06", # class: 'si' String.Other: "#4e9a06", # class: 'sx' String.Regex: "#4e9a06", # class: 'sr' String.Single: "#4e9a06", # class: 's1' String.Symbol: "#4e9a06", # class: 'ss' Generic: "#000000", # class: 'g' Generic.Deleted: "#a40000", # class: 'gd' Generic.Emph: "italic #000000", # class: 'ge' Generic.Error: "#ef2929", # class: 'gr' Generic.Heading: "bold #000080", # class: 'gh' Generic.Inserted: "#00A000", # class: 'gi' Generic.Output: "#888", # class: 'go' Generic.Prompt: "#745334", # class: 'gp' Generic.Strong: "bold #000000", # class: 'gs' Generic.Subheading: "bold #800080", # class: 'gu' Generic.Traceback: "bold #a40000", # class: 'gt' } jedi-0.11.1/docs/_themes/flask/0000775000175000017500000000000013214571377016055 5ustar davedave00000000000000jedi-0.11.1/docs/_themes/flask/layout.html0000664000175000017500000000171313214571123020247 0ustar davedave00000000000000{%- extends "basic/layout.html" %} {%- block extrahead %} {{ super() }} {% if theme_touch_icon %} {% endif %} Fork me on GitHub {% endblock %} {%- block relbar2 %}{% endblock %} {% block header %} {{ super() }} {% if pagename == 'index' %}
{% endif %} {% endblock %} {%- block footer %} {% if pagename == 'index' %}
{% endif %} {%- endblock %} jedi-0.11.1/docs/_themes/flask/LICENSE0000664000175000017500000000337513214571123017057 0ustar davedave00000000000000Copyright (c) 2010 by Armin Ronacher. Some rights reserved. Redistribution and use in source and binary forms of the theme, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. We kindly ask you to only use these themes in an unmodified manner just for Flask and Flask-related products, not for unrelated projects. If you like the visual style and want to use it for your own projects, please consider making some larger changes to the themes (such as changing font faces, sizes, colors or margins). THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. jedi-0.11.1/docs/_themes/flask/theme.conf0000664000175000017500000000024213214571123020011 0ustar davedave00000000000000[theme] inherit = basic stylesheet = flasky.css pygments_style = flask_theme_support.FlaskyStyle [options] index_logo = index_logo_height = 120px touch_icon = jedi-0.11.1/docs/_themes/flask/static/0000775000175000017500000000000013214571377017344 5ustar davedave00000000000000jedi-0.11.1/docs/_themes/flask/static/flasky.css_t0000664000175000017500000001441313214571123021662 0ustar davedave00000000000000/* * flasky.css_t * ~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. */ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Georgia', serif; font-size: 17px; background-color: white; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 30px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Garamond', 'Georgia', serif; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: 'Georgia', serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } {% if theme_index_logo %} div.indexwrapper h1 { text-indent: -999999px; background: url({{ theme_index_logo }}) no-repeat center center; height: {{ theme_index_logo_height }}; } {% endif %} div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: -60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #eee; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: -60px; padding-left: 60px; } dl dl pre { margin-left: -90px; padding-left: 90px; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #FBFBFB; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #004B6B; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } jedi-0.11.1/docs/_themes/flask/static/small_flask.css0000664000175000017500000000172013214571123022333 0ustar davedave00000000000000/* * small_flask.css_t * ~~~~~~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. */ body { margin: 0; padding: 20px 30px; } div.documentwrapper { float: none; background: white; } div.sphinxsidebar { display: block; float: none; width: 102.5%; margin: 50px -30px -20px -30px; padding: 10px 20px; background: #333; color: white; } div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p, div.sphinxsidebar h3 a { color: white; } div.sphinxsidebar a { color: #aaa; } div.sphinxsidebar p.logo { display: none; } div.document { width: 100%; margin: 0; } div.related { display: block; margin: 0; padding: 10px 0 20px 0; } div.related ul, div.related ul li { margin: 0; padding: 0; } div.footer { display: none; } div.bodywrapper { margin: 0; } div.body { min-height: 0; padding: 0; } jedi-0.11.1/docs/_themes/flask/relations.html0000664000175000017500000000111613214571123020727 0ustar davedave00000000000000

Related Topics

jedi-0.11.1/docs/_static/0000775000175000017500000000000013214571377014757 5ustar davedave00000000000000jedi-0.11.1/docs/_static/logo.png0000664000175000017500000007017313214571123016422 0ustar davedave00000000000000PNG  IHDR o AiCCPICC ProfileH wTSϽ7" %z ;HQIP&vDF)VdTG"cE b PQDE݌k 5ޚYg}׺PtX4X\XffGD=HƳ.d,P&s"7C$ E6<~&S2)212 "įl+ɘ&Y4Pޚ%ᣌ\%g|eTI(L0_&l2E9r9hxgIbטifSb1+MxL 0oE%YmhYh~S=zU&ϞAYl/$ZUm@O ޜl^ ' lsk.+7oʿ9V;?#I3eE妧KD d9i,UQ h A1vjpԁzN6p\W p G@ K0ށiABZyCAP8C@&*CP=#t] 4}a ٰ;GDxJ>,_“@FXDBX$!k"EHqaYbVabJ0՘cVL6f3bձX'?v 6-V``[a;p~\2n5׌ &x*sb|! ߏƿ' Zk! $l$T4QOt"y\b)AI&NI$R$)TIj"]&=&!:dGrY@^O$ _%?P(&OJEBN9J@y@yCR nXZOD}J}/G3ɭk{%Oחw_.'_!JQ@SVF=IEbbbb5Q%O@%!BӥyҸM:e0G7ӓ e%e[(R0`3R46i^)*n*|"fLUo՝mO0j&jajj.ϧwϝ_4갺zj=U45nɚ4ǴhZ ZZ^0Tf%9->ݫ=cXgN].[7A\SwBOK/X/_Q>QG[ `Aaac#*Z;8cq>[&IIMST`ϴ kh&45ǢYYF֠9<|y+ =X_,,S-,Y)YXmĚk]c}džjcΦ浭-v};]N"&1=xtv(}'{'IߝY) Σ -rqr.d._xpUەZM׍vm=+KGǔ ^WWbj>:>>>v}/avO8 FV> 2 u/_$\BCv< 5 ]s.,4&yUx~xw-bEDCĻHGKwFGEGME{EEKX,YFZ ={$vrK .3\rϮ_Yq*©L_wד+]eD]cIIIOAu_䩔)3ѩiB%a+]3='/40CiU@ёL(sYfLH$%Y jgGeQn~5f5wugv5k֮\۹Nw]m mHFˍenQQ`hBBQ-[lllfjۗ"^bO%ܒY}WwvwXbY^Ю]WVa[q`id2JjGէ{׿m>PkAma꺿g_DHGGu;776ƱqoC{P38!9 ҝˁ^r۽Ug9];}}_~imp㭎}]/}.{^=}^?z8hc' O*?f`ϳgC/Oϩ+FFGGόzˌㅿ)ѫ~wgbk?Jި9mdwi獵ޫ?cǑOO?w| x&mf2:Y~ pHYs  iTXtXML:com.adobe.xmp 1 5 72 1 72 200 1 226 2013-07-05T14:07:80 Pixelmator 2.2 '1s@IDATx} ]E-zIwgYȾB q\WG qAEtfQe}Ig{׷ߩ{_w^u[U:u9Uu.r]7y 5дyh^y PyA^-h -('k WAh.+Q>pUV||CN9^O%nr͇, 9"I\H?{vi*96d-p҉s+29Dîx`B?dۘm۱}hhQ /_/ <zSkq֒w,veYs/vQEڱn8": Nt$-C3eIláx83ܽ#GV|MqR;H%n^*(L"Q?;pR)g ! !8 0(J{#^g\w>МeA:/^~aH$FNKÀ]%PĜ)bd搙!B[*av$qѝoZɇk סp;)禴^JCwJje)!$LB0I I+\~EǪUY! y;>Ux̜?H;})#1nJ Q5wv/pȨsI v?.<oNaL0q+X5}D3J4r#BI4v(Q_g̺Ko+dtB/NHm… #[[\;)pxN灴*q5 -NxΥeK'- cQ[z,.HDCN]}N9k5u}AN@i,Y(lꑞp?~91o9*&/N!I *QNCXiWyUA㤒paՇl-vi R"7/a_3fR{gjȇ]k;m[ 2҇q Q?JV8R:7>aّ\|. ;V.1ȓԯ8F#L/ڡ,Ns9 ; Kbʎhq(8JKJwس),U^o8zLTV%L0ݸp|بx}[Km7TR: ZoHhF^ȐP:Շuc|-;a%g؎|h6= //H}&z 7NUUnz5Q|k̴i[_RF.:i cZ]ulF2vCHĊU8 Kͯ {͊xJyi%}޵XN\`q_t*W^ݹb7~}(q3kRRd~29*W,uY~[;}RݿpJ|kM`xݱc;ު][6] a=?w ?&EUUw֧*Ĺ%UAZaƞwʤTU[oUlnXNq-bzcEMTWDxWq<;kjҌbzf=pxUS{N WoH/wWGp+lӷݿ~SoFtyT=z9"+q n*yU1gϻvvI]GRAtwv/7o~}s- Tsp7,ZS=a\g6).))8VDq:ӇOס؂ [|qX#Xj뙐h(Wva±X!n~zú:ig_~milmǤ>dҥY*wXhK(YJ5K1>&=X6vqio1p}~{|)3h@/DT2m)w>hq2=U=]ۆ^uC5WuV =d`IZ#Fc9ppU_[;=ܷC.m rHc 9YĚ0guۛW֥P|yۨ{ԩ)3YM˯ȎED3[n\9]_7p9 `[x-xnjNf !0r˲cؽ0"Dë! \b>ZR:'qGQhн12a% #XG{ׁ ڋЀd{ @\8Fphj:(w`~$ڷ7OؘIޣ҂A0GYP4^ҶY#0BFbʫ'[cB* p_ٜ0m1[d .jϷ 8k?%^ Ϙ3l b?DOL$ H$Muzl&V)L<wx`/ Gwۨɟk*W9ȸsϛ떕 7 Vj7WŇ7BS 4+7 Aa3 m5]c-(+ԳS9%XtYI|ƨ0riƌmgO;Vhi -47rXFgԘ3f^iA&ѷҖ.3遆@s``&Bt9{ >rZ6VhJ eTrG;BkadE?DžgJzc}> nXh̿gAԄ2͂`UTu/:cp2~]s߄"/e G6er)= V&t#YB|ӱօ^,honX9𓟌[CN]('mz">xpxy|ڞ>b7HUƭo 1.?/]ڳvN{ê%91'u z48J:[{rO1c&B.ԣ& }PˤJ2t!3 pE_{EKTIb,S&5@? ^Q/A@guw-;{|ǶpfGW5+2&}4PelWTfmE}5@ [t=q?~mA_EH6,AO2Hұm̙_&wS=h'ۀiK|+XpVJx6& XLxƥhs$2EooMvA {*`ñyGg5x+!z*(Y_\jB>@LNe(L fxVxu̼yq;Ȥy[ֈr|6T[ t:/p| \/=ӓL ̶ț-%2&2jǁJ?׳Z<1Qerrۍf͟bÏ]o^!XG۔F삖el}:g:*nN;d_[]~ _?ⵇ@j2[b "3$oW})Xa=XTޯخ%wo n߁^.pftn.Nw\kE~ esph|t)MUVnXSe{U?2CUoJ4pRrT8;n1N+}qMnYf֦QLEld | ciZJ/$j>|>J;t#ܸ闿w,yYf\E= 'q*yrM:ZSu{utedU}D)]W?+RXnXMMaE >>`v$d:U2$;ٻϯZ92zD*5Oiۅ>S*ey@%)9;IpI3& ԰LҵK[93f_f=6:13D`Σ@P7#hB'M^'?x뭷ұmu[.3SOխ)6*p41 U8ȘyLLR?3AWd`t4'nxիlwp~?}'Z҇ޓӑ3f P( 7ALB'#( #fDo@6=8E0fN#2&)Q]XĶ_w/p=|<9RՈą=ĮM>ϾMKp r{W'ί):I(G@TK6Yiy}ͪ^v* *M04EAgh~U E0ު[wk5QMƜ1*7r:Wffai ɼ@m&ͼ(b*+Y':p3g5M e8Q"Vmϯ<:AO=kW#Ex1^U7oq>SNh_'ILF4F8PA ="`ݛ~9aƝvΈʐg3ӞS͒ZFJ1K8qKYef ~u:%b^>>ڽm۶;SD.2|*17c\՚MH7xiXnBA[a>Qyj?ϻ*OVw8o,\NjARH2J];zSGگ~(yvڅ/F08\>!fbQEI%Nj\mY߿;Tz4mڴ}[sŨ=,`륩,?,%e*b+~)|;)lxa|TI;%[vmYuyk?D'Fˡ>Ɯr_'9ϾrM펫r:DˑiO9I@)iHxsv4sPf|tihY&Kn]u%OQ[w!Y Y=Y5)It[iF szp[k1s/9J|ͿF4 'JI Du8t3ff-*x|WoxlSӧ݁S4ʤŸXa<C\SDTBTm4ڮ ¡ʟRٌG=C5_C#,cA$G‹w|V׬ ӦWH6oJYƥ L&ʈsOW8;ts˾[hj02)yJFK1|tJ|$ 'LIw~#;=:d@ҩau{tL Y<.3:y!8KZƲG]]g91CڬdS^8RS9Ɣ @t6t9uѐsSfٺfogAdN#MC1y0fL833eɺ2j:4li#8 `LD@%^^$"{ 07LNoc#=ܓ.) (.?vs6V̊~Υرt!FŘy )қYxȒHW帯<#r6y!:9$N4Id؁diLV v4|Œ3jBWHtӦ*#;ɼz3sPmo>WQQɪk?gr>Q'Xiÿp;[N~%;'n5cVƵَ7^~ve<|ڙӝX|dJJD2.ip e8Wu6?aji\8֍4q[|:[²o=|"i4M;DxI!c@3CHc݂:&,}J56#8fU N )cE_ŅrN~`GAY7Ï:|9Ȍ3qtZRA3 441.|0}vvO-{cMQűbMЎ9J!P;x%Ro_/3yو3'jS'.mݟvCKuC fuJм}yig79贷5 {j`<UÑ4v[ Mx/~:(vp%7 #6"1&c{k.(TCɫ?~֑'JFI :fHΗ ܆֯!@nSvզזXOh ʀrhǀxt~rG. ;AS}ʹx76Ӽi<<ĆuwwF:DǢJs]]4pti6*A§ >;S_O4|^` /M%:qt3yu0UA4|++ˍ'Ոm6Pq_"-Ӕ6xC8*jđF\I:X#vnCKuk՜[E۞A)CBXde ,waPGB{e]DxPqU$2 4ԋXD^!t N hE׵2^9Xd 6~$lvf(h)R֏xNvmJ4;G ΎS +]XX3ZaWM\@|TF]P%#6\bQx}Qlf(-)~o֋#U;p̙u:SOoxŠ7SvCUV+TnLbڕ{"_?͕St)C08!)t*a`>D CFY853w!05 NwsEK#DT?p[qC;ڊq $ieV ]pNwZV #njͮfe7{etճx%bȆ }DL"ȧRrPhW5Aˬy{ZF}'$mtxXv[rPxbVpیG3%t1FipiL{V l [ќ +[.OFc12^KL 6 ,Ih^h5@@GZQ#`J 54,bqޕ?wsD*: %.NfhK: knAbC`K04p ʧ =};,)EC8̒Er]J AgK/At+'\TRB;qd`@6X@po=-PQ'# A}5~s *hc: }㸴¨chGAC]C;*pW2dxO79hȼȠ/}T`̢^F'LQA NX8'"#?4ZMFX|,#z0HQc~Jc7+-Th[onSjKBPO YLAXU6mRY8xxRtD:>@NZ~R(/0Oƈ[Y# 6^Ջa0SBe3pnZ »XQR1B3bkCc$>6xj`aL|\<":H=ԭ~DvD=au}MqY]o؟E`f: FJ jDH=) mWίί_َ<읆 ƋiD "4&Ol$Q*db LWbDp~&U52+cp@V?P{N_rmp!)29_hTǍA) 0=ey tB*H?Q9-DC ڮxk"Ԉ:~LjQC^0QI~oxiw5moh: {/@C; ủꨰU> b0>m$݊*qźMxEy82PT꜁ʔ!!ksLX*!V}c쁧X44jN"i3J,viwHyGnS8HOsQlk){lg6>)3zmti39% 0oQ屒2uh"҇DVp,y2zb'n5+m )NGum a;r :BC][ ۍ9KAi{ ښNCO")L'Z@TrK 큉DF85s] ~ sfly|kXoѽ]zPȸXQA|8zheƧ$!KUt%sq|zyK>LuA>}q tBJtoYx\;l6s?#|qɂx>-zrPPU,!{wGK;W3R%t:O du+C ~ x* H(l&TF I O& a&LWR4seWEd7VΐSbɒ:1BݛX=:h˪RW/@ڣEWJYgitQF)a[4OE=_Q 4 ?s2uASV=wiEAg dy%+xp@m(Ov]8uŧz_`w ?jWk\>93%|Cb(yTYF)e!!A2IB%}%%ƓDJcVĀ[HQ*Qa~;yI6NH1,qڇ?7}M4yUh+v &&` 7nPߑć~ cTG0?Ah8I{ΣnT Mg`^ʃ'=:Ej> B61Uj{[ߠNȁ+p$| :۽=;A nY'jS]:-oFg9gpcbbTʬiC&vAM6֬L6uA]r%BN#[-;I:<@=ӫ-l2{6~Ӯy;ABl3>Cru=@ߡaÆ;\?M *% K!N2aA7:4JYN>ll_kQN_;b D[uٴN;UԩBo-{h$mk@U(~4yg#3fpSZN92kT @+hʤωDNm|*J|ÛDB@` %YeҞ$xfL1#:p*)㇟<|YLNx]&;3'" mgPqettrűA|hT*vԬDڒ1})  `)`eARP33J"jFpZPeHʫ:pV'LL+,CӟKCȑ"!Ѧgg6@EG-T"hҫO?1 h/7]YN]fs2>f`N5?iMyepB`ʘM 2Gq~_PiK(`ʫGa+:5#(KI@Ty0ATM:ܻI9w/p7h[ߙ9ұU0jpgs{/Ig ^^-؁u~)J/Zu~ &Og*aKX@,UsM͐g$t KSH'b1R $ 3H%iJ& (l4#K24^̳2 9^-,Z/^pO?|u;uͯlaג)^_PXP8ɗ\n+j靏7AC}تt>N-. D @ݿa3HTLz1&LFp, *cP1s(jq S7A7:P)By_rA 嬗} iܨѬbd p#o#naqɓn۰fӀw|FAY%]:ء=KHO1NGԤM +5cca4_Pu 7Z\֙A 2d}&@yq0p _x˙HՍ:|2:|麝_y/-sߎ7^mv]Q`ǺK <8үߨ{vO,?`bnx8%k1:%KO[v*@石 Sοx|2來mBe!FPb?Gv<%v vM۾.4*-X{)eeR;N.c G{{bKhX*0#*R1А/|(dRRVg +c+2I#e`!He5| ЇD>K r+5o Uq,TeM[=k I|bp¬"L %[xhaРz;Qx(:/k #-Tm~D04f dLzڧj}VywXW*jRN_xmъҨY#bU94\l /iIO$wu&K^lY`.1KU 4w,eo gAcVZoW!찆헎DB |JIhLQLIG"_rE,N<S$Pg$M«JM(GF*2W>#dTĜPg՚@gdѕ1Ih WIMO^ڽT2л! %<JFXK-zŏ/1))TB+=F'Ώwu{EաK}<;!yq觅^Fpy_hv&/0Wܦ.ʁiIi NI2&c$z ,k0p)j:Hޗ]}^ӧk`o6AH.iSpEjWaLSd?zyg'bC񋭪4n)R4-q*D4KQ~{F¼8K,1Aע%k P7cQ0Lt` L0!@O%Dy4јb &PV y -c I&=#iAp&/ HnԮd=x+Z6[l9NbPJeJuVPR4?*>Arb"4чX0U2@MG "Gȟ' d)J(,DJLe`@hx+NʨRš$"hJ`R0,"Eh<锢I 9CO (}-Zk^?SSF ݏOF1k17ZɌTG !<ҊXK'Lc^ټ eM'tM N8Ĵ_*$N`ZЊ N)rU Wе2WX$,Xk"&-f0*. Y5#QޱĊ%!hfv%*a(])xv, *CYAyFxIeLM^8ib6Rf,]ơJTyYXQMPZYȫL8~gSi< .!RA#eL2 &iuL{O(톶 -@9!g89qQމqQX{tTI f.UGiΈi(r0R0jc1iO=yH Lx,HEkx *PFB.@X$-T_D J9opLtcԴ5p14DqbpfPhGGVT,Qw;®] |S0 F` 1U8b>oM'ey"\Jx'YvǠY9L{y]BI1\%@ ~ę@1O$m{M`ڬ4glr?`U>^X1Z%)uH,~@w`wXo95ډueTopHҔbR6@wBS& ԴP||hZ4M ɌZjeDRX-B$kH᚟ƼȭdR'Rij9*荾4qAT錳Bx:{ .szJxfek[gJǛuHC1f V̬VJ)ؙ iPEYvp7PEN4d~@#Dpbe'OMBd·vɍB#v ?p%yO:HcNJY(@ذ$T90/"B/PqQgL3z%8%BNL$_%/DiHC[f5٥2OMU0Ӽ7h;!#,cHZYAY ?$5x~1߷;nG vSɭ5Ƿڿkoe2oиoV?z9E%7 ϘcO(ݴX >/yuP^ Lc""Ҽkp݁E3KZ Ӵ%܋'̯YԵnO(T쎜اpАanzLءu5ՓX'b|Әz*V씞hjh!xiKR(͟#Z`gb0EBML~MlrྗbNy4U_܋2|i3()ԎSQn7}!bFz͌ #V?L+"20i8bk&e19VnOȉV~k{q]pvp/Dcx3 4B!M!Щa^ #fa LJ$>ي;rYNt\xkP:<{*&$8EBrE*G':k-^}-,8P[k]u:&\E :{V8H6q IKʣ`BҺl ["e\Ymy غ曋wxLF˞s- afmsK%]@F#8 2IXJ@uG>מY>-3 IDAT\<> TWv$"m&51P$OʥOoZЃ4,NRNHbH|sIrK6jMV[[{~aςK/'B)7O(ǐ7Ncb`lM(0>bo}K,r?s}>N֟ΐ`{AfU[+,yg3noَПe#؍hV+*qm5|7:oO;_}߮xlyW;Pdۏ1h.Ahs\YL5b`,i ŊԀn,ӤN/u9䱣oPfH|[W}ֶVW+-+HnqQ련Wh %bg`J$R/߳魷˻3w~3.T_6J3ѫY_YV-Cࢗ83b|.G/Zs?J7atو%}MWh(-iS +Vliv[d[,ҎZV4RXXuՖV~$I%&m*@⋫}֔&0|z|bص6peI{L#k wO "J핁#YC?m6acNv &8QMnV-\6A :~\ho2/@Vmq׺YYFtlx5nY&`;r+"gEHmtsq^Ƹ\=}e]!1m-u52:___T X˸3B @IJ>yͪ-L 2lYCWـp9$c:0SxcE]8~؁_gпcf3U 2YuAxwsA4n}ϣy);Rq{ʅhS!36(/".krO᮵}ӣ|áT{|V-8౒T\hTN8꧞2.Q4zunM i /ϥoex8aA2NBhWIM88V }`9kVlʅ&匃p 'Kd@>䤝zI2ܰGCO?c(q8L8*~:,~kƢ<7u'o?_o}ߌQVm~~ūiP.ҖhQ )50<%Sʔ Pű}EpvoRQq4WZSB1V7Z)FwsfeCaԪz\:v3(:P)+NT\)Zţp ~wV=sE9rA/+b @ԍu%!ʮWs!dh"e&t)gs ގo`׈:'esAȷW=2~ 3.I[ҨK/Ph'I.ӻl땝?aEgl^9 /_;ÓĒs/F=U#keK-B)2Yg-'#,=H+f|h`G#O?gL"߷H3jZ/'dӣ_x4J;k޼a~0bb 0{2򪌂=gh t漡 cY+KY~ōšx 럅mrAURQ$ nLƸ&ds?\Aԭs1AG :#QކXeA3\cN=H:״e͐rӗ* m6|J1W6zBe G>KOa`M0ru "V?1v`Ĵ|Gδk.]}vvΐP>lO+p  ʹOrKG?wNP:Phdį~?0/V'= _mkCW/`Bi4HA^9*1}lU^9s*7`ʲdI:ST.ut CrA^{jd0.9d' U*Xt_{sǴ힏-CF~DV<[!Y35y‘Aa&LqM:[8GӠa_>[9dڕi!hdF!T';dv8vݟ}~;|nemO]q/A&C-6nԤlzq n>DUi#< ˓#`":[> 1hb>|֭}fSwO ^9Өq> ON|̙]"4x8_rC\tبLr^曹T~p7+[]m_jZF"*&ǘS|"~agbˊ]qZp) 2ż0ҳ*% DYO?t)7ץNYRj!Pe}*L{P}^^}~V=GO=Ab+TTUyOȨ9hdB8}:ag1miSp(`f#|1APq H)2LiqH_0I7Qy?S"O;&Hה1Bu~3R#e;%*{o6-wkU#QJ|`(_4~v"`g8ᴟV1ki[VR7TI( 8+ڵksr[׈j 6l8^eBJb4)ƾ߹KN[Z=.^>|”+:;n*CϠ)#ʬcpp}'A+öC?Xpq;:kפMZWЮC_RSCȮAz8nXd=֮%le84(`Ą#ө4|5c+ij4ѡ:i77xĄ?q+(lcF^p|)#iN/j':gS7m\'XYb#Ξx w1۷NC xKxa8zr5\zP!c'.JD Mvt;#2?¦6iɛNa%[u葻/ٶ^ t4?Py4 "6زS)peL; +|A2}v:j" >j ALnJYUim*n"{PwL/9qG/ Ht5.>²HsO’ _0 t?~}3xәT~!M3yCtQСN2?fE/|mJY9w~"b(l_I1y Pzr@L %{5Ʊ w=] lpU;''Vc`=oAbT}eQ07pG %kSDⅿ1kol.&'{d't jfylD}4.uw qۯ,oB?炩 :\t]f qeHNbP0,(~Yౢʋ:ʺ6ai mE"cU)"l\bP+/ iZ+\ھ;7,PS'kR)#nĞ"ѳ0 ZH;b .ݺ-^=0+z:SՎ5IL\:-!u kIH=}`9~hyվ}L_Owո}  Vp|y|CLkF3IedYæQүŵU Be܇E) g 8g:L{2Np +@',SCqCMT]s3xg܂HsIhM,QG@*fj+Sc{#ȡIX}4jyGl/j2E6J%km>}&xR}s38L$`U$~`xq|eӔ4RzxG?:G>I7*vU+,RyUᒋ^tt"PQ~Y!7= lLmXi K&ק9mz #TMHa񫉣G r*S}  e[!8x9!=a0hcGܱeK`l8=?Z1]6ױ#WG cY1ފ>g<iTcg:qp$r±G1RzN=̡%O G f+gdb.1qTf]C MXu0k)oTxljH']\/;zdG-d2vةoWփR>gႁI$LR% ,|SN&b&Ad>eϦdYs?w퇃{[;HæYVIh ;:/Jӓap4f$Ƃ_^ GDɺ]^?{ײxzwѣGg/RTVvjر$֑Cbe K"bXaXDbEE>&sF DcҸK,SV9R:xڃwm1e{?eԜ;<}܁%Kmn.t}(ydkjpjj&T3OO8VRH9\UL+tʻm\_>vk}Yi]5up3ut 7`AoEѳNhEW^|'>q٠A;iyhtx3u~^sM՟N1g$M>3d諿Y G~ଏ}n*LYnŮ|pCyԬw' χڥf@^h (&k;HhAyiA9yT^y@^-h -('k yk Qy $oy <*m 4wG5i* y@ rx M[0|l~/ 1nSl|iOv| Dݕ$@T4AVhZ[UhB `W|ꙁ@SB4m0Hf%v'7p±ƹq`ZtOtE-hF ,GCv`'"uT>Βe8F8gq<682ĝזR8К~gGp< G48[mpztyMjp\$I;I43-ppkV:'yp" ]us7 HCll?6!ppn1weMtyi_o>cy}E ًW6Jƥֳf{IlgoWV;y-e8>F,Ʊ #yiF8ɽh_}_w9h.h]#;HҶ7N"?#J)n*|^;Hֶ7 N׆ȣ.m"H{}kb/nס뺴 ,8.lzs_N[m $yi_?2gy~i 4h[CKm#maIӞwAdRo4HiAO}=vVQs;+9Ӧ JKBI瀸]&B~eVξA| y9iM mLҿG[u '՝l[fym%Ьk]"2N ?h1$whˠ|MxC sX|-ݽ-뵨st\qs_<9KUМ(:G6}:P_[l-$@p=tOֆԝtt\??Gl֞Z,ѐcryr0זGSUfd8^m ߇JrA/ᘍ/F4.FD-@K4RR/N},|r+nkw8~:8nk*4d}G)z/ΰ-؇;V$X=@vn)_J >nA猃[z*`4Zs߃̏mF}pk '$|sL 09&^^Wy^kq S!IENDB`jedi-0.11.1/docs/_static/logo-src.txt0000664000175000017500000000017213214571123017232 0ustar davedave00000000000000The source of the logo is a photoshop file hosted here: https://dl.dropboxusercontent.com/u/170011615/Jedi12_Logo.psd.xz jedi-0.11.1/docs/README.md0000664000175000017500000000035713214571123014602 0ustar davedave00000000000000Installation ------------ Install the graphviz library:: sudo apt-get install graphviz Install sphinx:: sudo pip install sphinx You might also need to install the Python graphviz interface:: sudo pip install graphviz jedi-0.11.1/docs/global.rst0000664000175000017500000000004513214571123015307 0ustar davedave00000000000000:orphan: .. |jedi| replace:: *Jedi* jedi-0.11.1/LICENSE.txt0000664000175000017500000000233113214571123014210 0ustar davedave00000000000000All contributions towards Jedi are MIT licensed. ------------------------------------------------------------------------------- The MIT License (MIT) Copyright (c) <2013> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. jedi-0.11.1/.coveragerc0000664000175000017500000000067713214571123014521 0ustar davedave00000000000000[run] omit = jedi/_compatibility.py jedi/evaluate/site.py [report] # Regexes for lines to exclude from consideration exclude_lines = # Don't complain about missing debug-only code: def __repr__ if self\.debug # Don't complain if tests don't hit defensive assertion code: raise AssertionError raise NotImplementedError # Don't complain if non-runnable code isn't run: if 0: if __name__ == .__main__.: jedi-0.11.1/README.rst0000664000175000017500000001725513214571123014067 0ustar davedave00000000000000################################################################### Jedi - an awesome autocompletion/static analysis library for Python ################################################################### .. image:: https://secure.travis-ci.org/davidhalter/jedi.png?branch=master :target: http://travis-ci.org/davidhalter/jedi :alt: Travis-CI build status .. image:: https://coveralls.io/repos/davidhalter/jedi/badge.png?branch=master :target: https://coveralls.io/r/davidhalter/jedi :alt: Coverage Status *If you have specific questions, please add an issue or ask on* `stackoverflow `_ *with the label* ``python-jedi``. Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its historic focus is autocompletion, but does static analysis for now as well. Jedi is fast and is very well tested. It understands Python on a deeper level than all other static analysis frameworks for Python. Jedi has support for two different goto functions. It's possible to search for related names and to list all names in a Python file and infer them. Jedi understands docstrings and you can use Jedi autocompletion in your REPL as well. Jedi uses a very simple API to connect with IDE's. There's a reference implementation as a `VIM-Plugin `_, which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs. It's really easy. Jedi can currently be used with the following editors/projects: - Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_) - Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_) - Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3]) - TextMate_ (Not sure if it's actually working) - Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof `_] - Atom_ (autocomplete-python-jedi_) - SourceLair_ - `GNOME Builder`_ (with support for GObject Introspection) - `Visual Studio Code`_ (via `Python Extension `_) - Gedit (gedi_) - wdb_ - Web Debugger - `Eric IDE`_ (Available as a plugin) - `Ipython 6.0.0+ `_ and many more! Here are some pictures taken from jedi-vim_: .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_complete.png Completion for almost anything (Ctrl+Space). .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_function.png Display of function/class bodies, docstrings. .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_pydoc.png Pydoc support (Shift+k). There is also support for goto and renaming. Get the latest version from `github `_ (master branch should always be kind of stable/working). Docs are available at `https://jedi.readthedocs.org/en/latest/ `_. Pull requests with documentation enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic versioning `_. Installation ============ pip install jedi Note: This just installs the Jedi library, not the editor plugins. For information about how to make it work with your editor, refer to the corresponding documentation. You don't want to use ``pip``? Please refer to the `manual `_. Feature Support and Caveats =========================== Jedi really understands your Python code. For a comprehensive list what Jedi understands, see: `Features `_. A list of caveats can be found on the same page. You can run Jedi on cPython 2.6, 2.7, 3.3, 3.4 or 3.5 but it should also understand/parse code older than those versions. Tips on how to use Jedi efficiently can be found `here `_. API --- You can find the documentation for the `API here `_. Autocompletion / Goto / Pydoc ----------------------------- Please check the API for a good explanation. There are the following commands: - ``jedi.Script.goto_assignments`` - ``jedi.Script.completions`` - ``jedi.Script.usages`` The returned objects are very powerful and really all you might need. Autocompletion in your REPL (IPython, etc.) ------------------------------------------- Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion in IPython is therefore possible without additional configuration. It's possible to have Jedi autocompletion in REPL modes - `example video `_. This means that in Python you can enable tab completion in a `REPL `_. Static Analysis / Linter ------------------------ To do all forms of static analysis, please try to use ``jedi.names``. It will return a list of names that you can use to infer types and so on. Linting is another thing that is going to be part of Jedi. For now you can try an alpha version ``python -m jedi linter``. The API might change though and it's still buggy. It's Jedi's goal to be smarter than classic linter and understand ``AttributeError`` and other code issues. Refactoring ----------- Jedi's parser would support refactoring, but there's no API to use it right now. If you're interested in helping out here, let me know. With the latest parser changes, it should be very easy to actually make it work. Development =========== There's a pretty good and extensive `development documentation `_. Testing ======= The test suite depends on ``tox`` and ``pytest``:: pip install tox pytest To run the tests for all supported Python versions:: tox If you want to test only a specific Python version (e.g. Python 2.7), it's as easy as :: tox -e py27 Tests are also run automatically on `Travis CI `_. For more detailed information visit the `testing documentation `_ Acknowledgements ================ - Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of other things. - Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :). - Guido van Rossum (@gvanrossum) for creating the parser generator pgen2 (originally used in lib2to3). .. _jedi-vim: https://github.com/davidhalter/jedi-vim .. _youcompleteme: http://valloric.github.io/YouCompleteMe/ .. _deoplete-jedi: https://github.com/zchee/deoplete-jedi .. _completor.vim: https://github.com/maralla/completor.vim .. _Jedi.el: https://github.com/tkf/emacs-jedi .. _company-mode: https://github.com/syohex/emacs-company-jedi .. _elpy: https://github.com/jorgenschaefer/elpy .. _anaconda-mode: https://github.com/proofit404/anaconda-mode .. _ycmd: https://github.com/abingham/emacs-ycmd .. _sublimejedi: https://github.com/srusskih/SublimeJEDI .. _anaconda: https://github.com/DamnWidget/anaconda .. _wdb: https://github.com/Kozea/wdb .. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle .. _Kate: http://kate-editor.org .. _Atom: https://atom.io/ .. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi .. _SourceLair: https://www.sourcelair.com .. _GNOME Builder: https://wiki.gnome.org/Apps/Builder .. _Visual Studio Code: https://code.visualstudio.com/ .. _gedi: https://github.com/isamert/gedi .. _Eric IDE: http://eric-ide.python-projects.org jedi-0.11.1/MANIFEST.in0000664000175000017500000000054313214571123014126 0ustar davedave00000000000000include README.rst include CHANGELOG.rst include LICENSE.txt include AUTHORS.txt include .coveragerc include sith.py include conftest.py include pytest.ini include tox.ini include requirements.txt include jedi/evaluate/compiled/fake/*.pym include jedi/parser/python/grammar*.txt recursive-include test * recursive-include docs * recursive-exclude * *.pyc jedi-0.11.1/CHANGELOG.rst0000664000175000017500000000523013214571123014407 0ustar davedave00000000000000.. :changelog: Changelog --------- 0.11.0 (2017-09-20) +++++++++++++++++++ - Split Jedi's parser into a separate project called ``parso``. - Avoiding side effects in REPL completion. - Numpy docstring support should be much better. - Moved the `settings.*recursion*` away, they are no longer usable. 0.10.2 (2017-04-05) +++++++++++++++++++ - Python Packaging sucks. Some files were not included in 0.10.1. 0.10.1 (2017-04-05) +++++++++++++++++++ - Fixed a few very annoying bugs. - Prepared the parser to be factored out of Jedi. 0.10.0 (2017-02-03) +++++++++++++++++++ - Actual semantic completions for the complete Python syntax. - Basic type inference for ``yield from`` PEP 380. - PEP 484 support (most of the important features of it). Thanks Claude! (@reinhrst) - Added ``get_line_code`` to ``Definition`` and ``Completion`` objects. - Completely rewritten the type inference engine. - A new and better parser for (fast) parsing diffs of Python code. 0.9.0 (2015-04-10) ++++++++++++++++++ - The import logic has been rewritten to look more like Python's. There is now an ``Evaluator.modules`` import cache, which resembles ``sys.modules``. - Integrated the parser of 2to3. This will make refactoring possible. It will also be possible to check for error messages (like compiling an AST would give) in the future. - With the new parser, the evaluation also completely changed. It's now simpler and more readable. - Completely rewritten REPL completion. - Added ``jedi.names``, a command to do static analysis. Thanks to that sourcegraph guys for sponsoring this! - Alpha version of the linter. 0.8.1 (2014-07-23) +++++++++++++++++++ - Bugfix release, the last release forgot to include files that improve autocompletion for builtin libraries. Fixed. 0.8.0 (2014-05-05) +++++++++++++++++++ - Memory Consumption for compiled modules (e.g. builtins, sys) has been reduced drastically. Loading times are down as well (it takes basically as long as an import). - REPL completion is starting to become usable. - Various small API changes. Generally this release focuses on stability and refactoring of internal APIs. - Introducing operator precedence, which makes calculating correct Array indices and ``__getattr__`` strings possible. 0.7.0 (2013-08-09) ++++++++++++++++++ - Switched from LGPL to MIT license. - Added an Interpreter class to the API to make autocompletion in REPL possible. - Added autocompletion support for namespace packages. - Add sith.py, a new random testing method. 0.6.0 (2013-05-14) ++++++++++++++++++ - Much faster parser with builtin part caching. - A test suite, thanks @tkf. 0.5 versions (2012) +++++++++++++++++++ - Initial development. jedi-0.11.1/setup.py0000775000175000017500000000413713214571123014110 0ustar davedave00000000000000#!/usr/bin/env python from setuptools import setup, find_packages import ast import sys __AUTHOR__ = 'David Halter' __AUTHOR_EMAIL__ = 'davidhalter88@gmail.com' # Get the version from within jedi. It's defined in exactly one place now. with open('jedi/__init__.py') as f: tree = ast.parse(f.read()) if sys.version_info > (3, 7): version = tree.body[0].value.s else: version = tree.body[1].value.s readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read() with open('requirements.txt') as f: install_requires = f.read().splitlines() setup(name='jedi', version=version, description='An autocompletion tool for Python that can be used for text editors.', author=__AUTHOR__, author_email=__AUTHOR_EMAIL__, include_package_data=True, maintainer=__AUTHOR__, maintainer_email=__AUTHOR_EMAIL__, url='https://github.com/davidhalter/jedi', license='MIT', keywords='python completion refactoring vim', long_description=readme, packages=find_packages(exclude=['test']), install_requires=install_requires, extras_require={'dev': ['docopt']}, package_data={'jedi': ['evaluate/compiled/fake/*.pym']}, platforms=['any'], classifiers=[ 'Development Status :: 4 - Beta', 'Environment :: Plugins', 'Intended Audience :: Developers', 'License :: OSI Approved :: MIT License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Text Editors :: Integrated Development Environments (IDE)', 'Topic :: Utilities', ], ) jedi-0.11.1/jedi.egg-info/0000775000175000017500000000000013214571377015006 5ustar davedave00000000000000jedi-0.11.1/jedi.egg-info/PKG-INFO0000664000175000017500000003352613214571377016114 0ustar davedave00000000000000Metadata-Version: 1.1 Name: jedi Version: 0.11.1 Summary: An autocompletion tool for Python that can be used for text editors. Home-page: https://github.com/davidhalter/jedi Author: David Halter Author-email: davidhalter88@gmail.com License: MIT Description: ################################################################### Jedi - an awesome autocompletion/static analysis library for Python ################################################################### .. image:: https://secure.travis-ci.org/davidhalter/jedi.png?branch=master :target: http://travis-ci.org/davidhalter/jedi :alt: Travis-CI build status .. image:: https://coveralls.io/repos/davidhalter/jedi/badge.png?branch=master :target: https://coveralls.io/r/davidhalter/jedi :alt: Coverage Status *If you have specific questions, please add an issue or ask on* `stackoverflow `_ *with the label* ``python-jedi``. Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its historic focus is autocompletion, but does static analysis for now as well. Jedi is fast and is very well tested. It understands Python on a deeper level than all other static analysis frameworks for Python. Jedi has support for two different goto functions. It's possible to search for related names and to list all names in a Python file and infer them. Jedi understands docstrings and you can use Jedi autocompletion in your REPL as well. Jedi uses a very simple API to connect with IDE's. There's a reference implementation as a `VIM-Plugin `_, which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs. It's really easy. Jedi can currently be used with the following editors/projects: - Vim (jedi-vim_, YouCompleteMe_, deoplete-jedi_, completor.vim_) - Emacs (Jedi.el_, company-mode_, elpy_, anaconda-mode_, ycmd_) - Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3]) - TextMate_ (Not sure if it's actually working) - Kate_ version 4.13+ supports it natively, you have to enable it, though. [`proof `_] - Atom_ (autocomplete-python-jedi_) - SourceLair_ - `GNOME Builder`_ (with support for GObject Introspection) - `Visual Studio Code`_ (via `Python Extension `_) - Gedit (gedi_) - wdb_ - Web Debugger - `Eric IDE`_ (Available as a plugin) - `Ipython 6.0.0+ `_ and many more! Here are some pictures taken from jedi-vim_: .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_complete.png Completion for almost anything (Ctrl+Space). .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_function.png Display of function/class bodies, docstrings. .. image:: https://github.com/davidhalter/jedi/raw/master/docs/_screenshots/screenshot_pydoc.png Pydoc support (Shift+k). There is also support for goto and renaming. Get the latest version from `github `_ (master branch should always be kind of stable/working). Docs are available at `https://jedi.readthedocs.org/en/latest/ `_. Pull requests with documentation enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic versioning `_. Installation ============ pip install jedi Note: This just installs the Jedi library, not the editor plugins. For information about how to make it work with your editor, refer to the corresponding documentation. You don't want to use ``pip``? Please refer to the `manual `_. Feature Support and Caveats =========================== Jedi really understands your Python code. For a comprehensive list what Jedi understands, see: `Features `_. A list of caveats can be found on the same page. You can run Jedi on cPython 2.6, 2.7, 3.3, 3.4 or 3.5 but it should also understand/parse code older than those versions. Tips on how to use Jedi efficiently can be found `here `_. API --- You can find the documentation for the `API here `_. Autocompletion / Goto / Pydoc ----------------------------- Please check the API for a good explanation. There are the following commands: - ``jedi.Script.goto_assignments`` - ``jedi.Script.completions`` - ``jedi.Script.usages`` The returned objects are very powerful and really all you might need. Autocompletion in your REPL (IPython, etc.) ------------------------------------------- Starting with Ipython `6.0.0` Jedi is a dependency of IPython. Autocompletion in IPython is therefore possible without additional configuration. It's possible to have Jedi autocompletion in REPL modes - `example video `_. This means that in Python you can enable tab completion in a `REPL `_. Static Analysis / Linter ------------------------ To do all forms of static analysis, please try to use ``jedi.names``. It will return a list of names that you can use to infer types and so on. Linting is another thing that is going to be part of Jedi. For now you can try an alpha version ``python -m jedi linter``. The API might change though and it's still buggy. It's Jedi's goal to be smarter than classic linter and understand ``AttributeError`` and other code issues. Refactoring ----------- Jedi's parser would support refactoring, but there's no API to use it right now. If you're interested in helping out here, let me know. With the latest parser changes, it should be very easy to actually make it work. Development =========== There's a pretty good and extensive `development documentation `_. Testing ======= The test suite depends on ``tox`` and ``pytest``:: pip install tox pytest To run the tests for all supported Python versions:: tox If you want to test only a specific Python version (e.g. Python 2.7), it's as easy as :: tox -e py27 Tests are also run automatically on `Travis CI `_. For more detailed information visit the `testing documentation `_ Acknowledgements ================ - Takafumi Arakaki (@tkf) for creating a solid test environment and a lot of other things. - Danilo Bargen (@dbrgn) for general housekeeping and being a good friend :). - Guido van Rossum (@gvanrossum) for creating the parser generator pgen2 (originally used in lib2to3). .. _jedi-vim: https://github.com/davidhalter/jedi-vim .. _youcompleteme: http://valloric.github.io/YouCompleteMe/ .. _deoplete-jedi: https://github.com/zchee/deoplete-jedi .. _completor.vim: https://github.com/maralla/completor.vim .. _Jedi.el: https://github.com/tkf/emacs-jedi .. _company-mode: https://github.com/syohex/emacs-company-jedi .. _elpy: https://github.com/jorgenschaefer/elpy .. _anaconda-mode: https://github.com/proofit404/anaconda-mode .. _ycmd: https://github.com/abingham/emacs-ycmd .. _sublimejedi: https://github.com/srusskih/SublimeJEDI .. _anaconda: https://github.com/DamnWidget/anaconda .. _wdb: https://github.com/Kozea/wdb .. _TextMate: https://github.com/lawrenceakka/python-jedi.tmbundle .. _Kate: http://kate-editor.org .. _Atom: https://atom.io/ .. _autocomplete-python-jedi: https://atom.io/packages/autocomplete-python-jedi .. _SourceLair: https://www.sourcelair.com .. _GNOME Builder: https://wiki.gnome.org/Apps/Builder .. _Visual Studio Code: https://code.visualstudio.com/ .. _gedi: https://github.com/isamert/gedi .. _Eric IDE: http://eric-ide.python-projects.org .. :changelog: Changelog --------- 0.11.0 (2017-09-20) +++++++++++++++++++ - Split Jedi's parser into a separate project called ``parso``. - Avoiding side effects in REPL completion. - Numpy docstring support should be much better. - Moved the `settings.*recursion*` away, they are no longer usable. 0.10.2 (2017-04-05) +++++++++++++++++++ - Python Packaging sucks. Some files were not included in 0.10.1. 0.10.1 (2017-04-05) +++++++++++++++++++ - Fixed a few very annoying bugs. - Prepared the parser to be factored out of Jedi. 0.10.0 (2017-02-03) +++++++++++++++++++ - Actual semantic completions for the complete Python syntax. - Basic type inference for ``yield from`` PEP 380. - PEP 484 support (most of the important features of it). Thanks Claude! (@reinhrst) - Added ``get_line_code`` to ``Definition`` and ``Completion`` objects. - Completely rewritten the type inference engine. - A new and better parser for (fast) parsing diffs of Python code. 0.9.0 (2015-04-10) ++++++++++++++++++ - The import logic has been rewritten to look more like Python's. There is now an ``Evaluator.modules`` import cache, which resembles ``sys.modules``. - Integrated the parser of 2to3. This will make refactoring possible. It will also be possible to check for error messages (like compiling an AST would give) in the future. - With the new parser, the evaluation also completely changed. It's now simpler and more readable. - Completely rewritten REPL completion. - Added ``jedi.names``, a command to do static analysis. Thanks to that sourcegraph guys for sponsoring this! - Alpha version of the linter. 0.8.1 (2014-07-23) +++++++++++++++++++ - Bugfix release, the last release forgot to include files that improve autocompletion for builtin libraries. Fixed. 0.8.0 (2014-05-05) +++++++++++++++++++ - Memory Consumption for compiled modules (e.g. builtins, sys) has been reduced drastically. Loading times are down as well (it takes basically as long as an import). - REPL completion is starting to become usable. - Various small API changes. Generally this release focuses on stability and refactoring of internal APIs. - Introducing operator precedence, which makes calculating correct Array indices and ``__getattr__`` strings possible. 0.7.0 (2013-08-09) ++++++++++++++++++ - Switched from LGPL to MIT license. - Added an Interpreter class to the API to make autocompletion in REPL possible. - Added autocompletion support for namespace packages. - Add sith.py, a new random testing method. 0.6.0 (2013-05-14) ++++++++++++++++++ - Much faster parser with builtin part caching. - A test suite, thanks @tkf. 0.5 versions (2012) +++++++++++++++++++ - Initial development. Keywords: python completion refactoring vim Platform: any Classifier: Development Status :: 4 - Beta Classifier: Environment :: Plugins Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.3 Classifier: Programming Language :: Python :: 3.4 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: Text Editors :: Integrated Development Environments (IDE) Classifier: Topic :: Utilities jedi-0.11.1/jedi.egg-info/dependency_links.txt0000664000175000017500000000000113214571377021054 0ustar davedave00000000000000 jedi-0.11.1/jedi.egg-info/top_level.txt0000664000175000017500000000001213214571377017531 0ustar davedave00000000000000jedi test jedi-0.11.1/jedi.egg-info/requires.txt0000664000175000017500000000003313214571377017402 0ustar davedave00000000000000parso==0.1.1 [dev] docopt jedi-0.11.1/jedi.egg-info/SOURCES.txt0000664000175000017500000002415513214571377016701 0ustar davedave00000000000000.coveragerc AUTHORS.txt CHANGELOG.rst LICENSE.txt MANIFEST.in README.rst conftest.py pytest.ini requirements.txt setup.cfg setup.py sith.py tox.ini docs/Makefile docs/README.md docs/conf.py docs/global.rst docs/index.rst docs/_screenshots/screenshot_complete.png docs/_screenshots/screenshot_function.png docs/_screenshots/screenshot_pydoc.png docs/_static/logo-src.txt docs/_static/logo.png docs/_templates/ghbuttons.html docs/_templates/sidebarlogo.html docs/_themes/flask_theme_support.py docs/_themes/flask/LICENSE docs/_themes/flask/layout.html docs/_themes/flask/relations.html docs/_themes/flask/theme.conf docs/_themes/flask/static/flasky.css_t docs/_themes/flask/static/small_flask.css docs/docs/development.rst docs/docs/features.rst docs/docs/installation.rst docs/docs/parser.rst docs/docs/plugin-api-classes.rst docs/docs/plugin-api.rst docs/docs/settings.rst docs/docs/static_analysis.rst docs/docs/testing.rst docs/docs/usage.rst jedi/__init__.py jedi/__main__.py jedi/_compatibility.py jedi/cache.py jedi/debug.py jedi/parser_utils.py jedi/refactoring.py jedi/settings.py jedi/utils.py jedi.egg-info/PKG-INFO jedi.egg-info/SOURCES.txt jedi.egg-info/dependency_links.txt jedi.egg-info/requires.txt jedi.egg-info/top_level.txt jedi/api/__init__.py jedi/api/classes.py jedi/api/completion.py jedi/api/helpers.py jedi/api/interpreter.py jedi/api/keywords.py jedi/api/replstartup.py jedi/common/__init__.py jedi/common/context.py jedi/evaluate/__init__.py jedi/evaluate/analysis.py jedi/evaluate/arguments.py jedi/evaluate/base_context.py jedi/evaluate/cache.py jedi/evaluate/docstrings.py jedi/evaluate/dynamic.py jedi/evaluate/filters.py jedi/evaluate/finder.py jedi/evaluate/flow_analysis.py jedi/evaluate/helpers.py jedi/evaluate/imports.py jedi/evaluate/jedi_typing.py jedi/evaluate/lazy_context.py jedi/evaluate/param.py jedi/evaluate/parser_cache.py jedi/evaluate/pep0484.py jedi/evaluate/project.py jedi/evaluate/recursion.py jedi/evaluate/site.py jedi/evaluate/stdlib.py jedi/evaluate/syntax_tree.py jedi/evaluate/sys_path.py jedi/evaluate/usages.py jedi/evaluate/utils.py jedi/evaluate/compiled/__init__.py jedi/evaluate/compiled/fake.py jedi/evaluate/compiled/getattr_static.py jedi/evaluate/compiled/mixed.py jedi/evaluate/compiled/fake/_functools.pym jedi/evaluate/compiled/fake/_sqlite3.pym jedi/evaluate/compiled/fake/_sre.pym jedi/evaluate/compiled/fake/_weakref.pym jedi/evaluate/compiled/fake/builtins.pym jedi/evaluate/compiled/fake/datetime.pym jedi/evaluate/compiled/fake/io.pym jedi/evaluate/compiled/fake/operator.pym jedi/evaluate/compiled/fake/posix.pym jedi/evaluate/context/__init__.py jedi/evaluate/context/function.py jedi/evaluate/context/instance.py jedi/evaluate/context/iterable.py jedi/evaluate/context/klass.py jedi/evaluate/context/module.py jedi/evaluate/context/namespace.py test/__init__.py test/blabla_test_documentation.py test/conftest.py test/helpers.py test/refactor.py test/run.py test/test_cache.py test/test_debug.py test/test_integration.py test/test_integration_keyword.py test/test_regression.py test/test_speed.py test/test_utils.py test/completion/__init__.py test/completion/arrays.py test/completion/async_.py test/completion/basic.py test/completion/classes.py test/completion/completion.py test/completion/complex.py test/completion/comprehensions.py test/completion/context.py test/completion/decorators.py test/completion/definition.py test/completion/descriptors.py test/completion/docstring.py test/completion/dynamic_arrays.py test/completion/dynamic_params.py test/completion/flow_analysis.py test/completion/functions.py test/completion/generators.py test/completion/goto.py test/completion/imports.py test/completion/invalid.py test/completion/isinstance.py test/completion/keywords.py test/completion/lambdas.py test/completion/named_param.py test/completion/on_import.py test/completion/ordering.py test/completion/parser.py test/completion/pep0484_basic.py test/completion/pep0484_comments.py test/completion/pep0484_typing.py test/completion/pep0526_variables.py test/completion/precedence.py test/completion/recursion.py test/completion/stdlib.py test/completion/sys_path.py test/completion/types.py test/completion/usages.py test/completion/import_tree/__init__.py test/completion/import_tree/classes.py test/completion/import_tree/flow_import.py test/completion/import_tree/invisible_pkg.py test/completion/import_tree/mod1.py test/completion/import_tree/mod2.py test/completion/import_tree/random.py test/completion/import_tree/recurse_class1.py test/completion/import_tree/recurse_class2.py test/completion/import_tree/rename1.py test/completion/import_tree/rename2.py test/completion/import_tree/pkg/__init__.py test/completion/import_tree/pkg/mod1.py test/completion/thirdparty/PyQt4_.py test/completion/thirdparty/django_.py test/completion/thirdparty/jedi_.py test/completion/thirdparty/psycopg2_.py test/completion/thirdparty/pylab_.py test/refactor/extract.py test/refactor/inline.py test/refactor/rename.py test/speed/precedence.py test/static_analysis/attribute_error.py test/static_analysis/attribute_warnings.py test/static_analysis/branches.py test/static_analysis/builtins.py test/static_analysis/class_simple.py test/static_analysis/comprehensions.py test/static_analysis/descriptors.py test/static_analysis/generators.py test/static_analysis/imports.py test/static_analysis/iterable.py test/static_analysis/keywords.py test/static_analysis/normal_arguments.py test/static_analysis/operations.py test/static_analysis/python2.py test/static_analysis/star_arguments.py test/static_analysis/try_except.py test/static_analysis/import_tree/__init__.py test/static_analysis/import_tree/a.py test/static_analysis/import_tree/b.py test/test_api/__init__.py test/test_api/test_analysis.py test/test_api/test_api.py test/test_api/test_api_classes_follow_definition.py test/test_api/test_call_signatures.py test/test_api/test_classes.py test/test_api/test_completion.py test/test_api/test_defined_names.py test/test_api/test_full_name.py test/test_api/test_interpreter.py test/test_api/test_unicode.py test/test_api/test_usages.py test/test_api/import_tree_for_usages/__init__.py test/test_api/import_tree_for_usages/a.py test/test_api/import_tree_for_usages/b.py test/test_api/simple_import/__init__.py test/test_api/simple_import/module.py test/test_api/simple_import/module2.py test/test_evaluate/__init__.py test/test_evaluate/test_absolute_import.py test/test_evaluate/test_annotations.py test/test_evaluate/test_buildout_detection.py test/test_evaluate/test_compiled.py test/test_evaluate/test_context.py test/test_evaluate/test_docstring.py test/test_evaluate/test_extension.py test/test_evaluate/test_helpers.py test/test_evaluate/test_implicit_namespace_package.py test/test_evaluate/test_imports.py test/test_evaluate/test_literals.py test/test_evaluate/test_mixed.py test/test_evaluate/test_namespace_package.py test/test_evaluate/test_precedence.py test/test_evaluate/test_pyc.py test/test_evaluate/test_representation.py test/test_evaluate/test_stdlib.py test/test_evaluate/test_sys_path.py test/test_evaluate/absolute_import/local_module.py test/test_evaluate/absolute_import/unittest.py test/test_evaluate/buildout_project/buildout.cfg test/test_evaluate/buildout_project/bin/app test/test_evaluate/buildout_project/bin/binary_file test/test_evaluate/buildout_project/bin/empty_file test/test_evaluate/buildout_project/src/proj_name/module_name.py test/test_evaluate/flask-site-packages/flask_foo.py test/test_evaluate/flask-site-packages/flask/__init__.py test/test_evaluate/flask-site-packages/flask/ext/__init__.py test/test_evaluate/flask-site-packages/flask_baz/__init__.py test/test_evaluate/flask-site-packages/flaskext/__init__.py test/test_evaluate/flask-site-packages/flaskext/bar.py test/test_evaluate/flask-site-packages/flaskext/moo/__init__.py test/test_evaluate/implicit_namespace_package/ns1/pkg/ns1_file.py test/test_evaluate/implicit_namespace_package/ns2/pkg/ns2_file.py test/test_evaluate/implicit_nested_namespaces/namespace/pkg/module.py test/test_evaluate/init_extension_module/__init__.cpython-34m.so test/test_evaluate/init_extension_module/module.c test/test_evaluate/init_extension_module/setup.py test/test_evaluate/namespace_package/ns1/pkg/__init__.py test/test_evaluate/namespace_package/ns1/pkg/ns1_file.py test/test_evaluate/namespace_package/ns1/pkg/ns1_folder/__init__.py test/test_evaluate/namespace_package/ns2/pkg/ns2_file.py test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/__init__.py test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/nested/__init__.py test/test_evaluate/nested_namespaces/__init__.py test/test_evaluate/nested_namespaces/namespace/__init__.py test/test_evaluate/nested_namespaces/namespace/pkg/__init__.py test/test_evaluate/not_in_sys_path/__init__.py test/test_evaluate/not_in_sys_path/not_in_sys_path.py test/test_evaluate/not_in_sys_path/not_in_sys_path_package/__init__.py test/test_evaluate/not_in_sys_path/not_in_sys_path_package/module.py test/test_evaluate/not_in_sys_path/pkg/__init__.py test/test_evaluate/not_in_sys_path/pkg/module.py test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/egg_link.egg-link test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/foo.pth test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/import_smth.pth test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/relative.egg-link test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/smth.py test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/dir-from-foo-pth/__init__.py test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/egg_link.egg-link test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/foo.pth test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/import_smth.pth test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/relative.egg-link test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/smth.py test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/dir-from-foo-pth/__init__.py test/test_evaluate/zipped_imports/not_pkg.zip test/test_evaluate/zipped_imports/pkg.zip test/test_parso_integration/test_basic.py test/test_parso_integration/test_error_correction.py test/test_parso_integration/test_parser_utils.pyjedi-0.11.1/test/0000775000175000017500000000000013214571377013360 5ustar davedave00000000000000jedi-0.11.1/test/completion/0000775000175000017500000000000013214571377015531 5ustar davedave00000000000000jedi-0.11.1/test/completion/arrays.py0000664000175000017500000001211713214571123017373 0ustar davedave00000000000000# ----------------- # basic array lookups # ----------------- #? int() [1,""][0] #? str() [1,""][1] #? int() str() [1,""][2] #? int() str() [1,""][20] #? int() str() [1,""][str(hello)] a = list() #? list() [a][0] #? list() [[a,a,a]][2][100] c = [[a,""]] #? str() c[0][1] b = [6,7] #? int() b[8-7] # ----------------- # Slices # ----------------- #? list() b[8:] #? list() b[int():] #? list() b[:] class _StrangeSlice(): def __getitem__(self, sliced): return sliced # Should not result in an error, just because the slice itself is returned. #? slice() _StrangeSlice()[1:2] # ----------------- # iterable multiplication # ----------------- a = ['']*2 #? list() a # ----------------- # tuple assignments # ----------------- a1, b1 = (1, "") #? int() a1 #? str() b1 (a2, b2) = (1, "") #? int() a2 #? str() b2 # list assignment [list1, list2] = (1, "") #? int() list1 #? str() list2 [list3, list4] = [1, ""] #? int() list3 #? str() list4 # ----------------- # subtuple assignment # ----------------- (a3, (b3, c3)) = (1, ("", list)) #? list c3 a4, (b4, c4) = (1, ("", list)) #? list c4 #? int() a4 #? str() b4 # ----------------- # multiple assignments # ----------------- a = b = 1 #? int() a #? int() b (a, b) = (c, (e, f)) = ('2', (3, 4)) #? str() a #? tuple() b #? str() c #? int() e #? int() f # ----------------- # unnessecary braces # ----------------- a = (1) #? int() a #? int() (1) #? int() ((1)) #? int() ((1)+1) u, v = 1, "" #? int() u ((u1, v1)) = 1, "" #? int() u1 #? int() (u1) (a), b = 1, '' #? int() a def a(): return '' #? str() (a)() #? str() (a)().replace() #? int() (tuple).index() #? int() (tuple)().index() class C(): def __init__(self): self.a = (str()).upper() #? str() C().a # ----------------- # imbalanced sides # ----------------- (f, g) = (1,) #? int() f #? [] g. (f, g, h) = (1,'') #? int() f #? str() g #? [] h. (f1, g1) = 1 #? [] f1. #? [] g1. (f, g) = (1,'',1.0) #? int() f #? str() g # ----------------- # dicts # ----------------- dic2 = {'asdf': 3, 'b': 'str'} #? int() dic2['asdf'] # string literal #? int() dic2[r'asdf'] #? int() dic2[r'asdf'] #? int() dic2[r'as' 'd' u'f'] #? int() str() dic2['just_something'] # unpacking a, b = dic2 #? str() a a, b = {1: 'x', 2.0: 1j} #? int() float() a #? int() float() b def f(): """ github #83 """ r = {} r['status'] = (200, 'ok') return r #? dict() f() # completion within dicts #? 9 ['str'] {str: str} # iteration problem (detected with sith) d = dict({'a':''}) def y(a): return a #? y(**d) # problem with more complicated casts dic = {str(key): ''} #? str() dic[''] for x in {1: 3.0, '': 1j}: #? int() str() x # ----------------- # with variable as index # ----------------- a = (1, "") index = 1 #? str() a[index] # these should just ouput the whole array index = int #? int() str() a[index] index = int() #? int() str() a[index] # dicts index = 'asdf' dic2 = {'asdf': 3, 'b': 'str'} #? int() dic2[index] # ----------------- # __getitem__ # ----------------- class GetItem(): def __getitem__(self, index): return 1.0 #? float() GetItem()[0] class GetItem(): def __init__(self, el): self.el = el def __getitem__(self, index): return self.el #? str() GetItem("")[1] class GetItemWithList(): def __getitem__(self, index): return [1, 1.0, 's'][index] #? float() GetItemWithList()[1] for i in 0, 2: #? int() str() GetItemWithList()[i] # With super class SuperYeah(list): def __getitem__(self, index): return super()[index] #? SuperYeah([1])[0] #? SuperYeah()[0] # ----------------- # conversions # ----------------- a = [1, ""] #? int() str() list(a)[1] #? int() str() list(a)[0] #? set(a)[0] #? int() str() list(set(a))[1] #? int() str() next(iter(set(a))) #? int() str() list(list(set(a)))[1] # does not yet work, because the recursion catching is not good enough (catches # to much) #? int() str() list(set(list(set(a))))[1] #? int() str() list(set(set(a)))[1] # frozenset #? int() str() list(frozenset(a))[1] #? int() str() list(set(frozenset(a)))[1] # iter #? int() str() list(iter(a))[1] #? int() str() list(iter(list(set(a))))[1] # tuple #? int() str() tuple(a)[1] #? int() str() tuple(list(set(a)))[1] #? int() tuple((1,))[0] # implementation detail for lists, should not be visible #? [] list().__iterable # With a list comprehension. for i in set(a for a in [1]): #? int() i # ----------------- # Merged Arrays # ----------------- for x in [1] + ['']: #? int() str() x # ----------------- # For loops with attribute assignment. # ----------------- def test_func(): x = 'asdf' for x.something in [6,7,8]: pass #? str() x for x.something, b in [[6, 6.0]]: pass #? str() x # python >= 2.7 # Set literals are not valid in 2.6. #? int() tuple({1})[0] # python >= 3.3 # ----------------- # PEP 3132 Extended Iterable Unpacking (star unpacking) # ----------------- a, *b, c = [1, 'b', list, dict] #? int() a #? str() b #? list c # Not valid syntax a, *b, *c = [1, 'd', list] #? int() a #? str() b #? list c lc = [x for a, *x in [(1, '', 1.0)]] #? lc[0][0] jedi-0.11.1/test/completion/parser.py0000664000175000017500000000135013214571123017363 0ustar davedave00000000000000""" Issues with the parser and not the type inference should be part of this file. """ class IndentIssues(): """ issue jedi-vim#288 Which is really a fast parser issue. It used to start a new block at the parentheses, because it had problems with the indentation. """ def one_param( self, ): return 1 def with_param( self, y): return y #? int() IndentIssues().one_param() #? str() IndentIssues().with_param('') """ Just because there's a def keyword, doesn't mean it should not be able to complete to definition. """ definition = 0 #? ['definition'] str(def # It might be hard to determine the context class Foo(object): @property #? ['str'] def bar(str jedi-0.11.1/test/completion/classes.py0000664000175000017500000002015113214571123017524 0ustar davedave00000000000000def find_class(): """ This scope is special, because its in front of TestClass """ #? ['ret'] TestClass.ret if 1: #? ['ret'] TestClass.ret class FindClass(): #? [] TestClass.ret if a: #? [] TestClass.ret def find_class(self): #? ['ret'] TestClass.ret if 1: #? ['ret'] TestClass.ret #? [] FindClass().find_class.self #? [] FindClass().find_class.self.find_class # set variables, which should not be included, because they don't belong to the # class second = 1 second = "" class TestClass(object): var_class = TestClass(1) def __init__(self2, first_param, second_param, third=1.0): self2.var_inst = first_param self2.second = second_param self2.first = first_param a = 3 def var_func(self): return 1 def get_first(self): # traversal self.second_new = self.second return self.var_inst def values(self): self.var_local = 3 #? ['var_class', 'var_func', 'var_inst', 'var_local'] self.var_ #? var_local def ret(self, a1): # should not know any class functions! #? [] values #? ['return'] ret return a1 # should not work #? [] var_local #? [] var_inst #? [] var_func # instance inst = TestClass(1) #? ['var_class', 'var_func', 'var_inst', 'var_local'] inst.var #? ['var_class', 'var_func'] TestClass.var #? int() inst.var_local #? [] TestClass.var_local. #? int() TestClass().ret(1) # Should not return int(), because we want the type before `.ret(1)`. #? 11 TestClass() TestClass().ret(1) #? int() inst.ret(1) myclass = TestClass(1, '', 3.0) #? int() myclass.get_first() #? [] myclass.get_first.real # too many params #? int() TestClass(1,1,1).var_inst # too few params #? int() TestClass(1).first #? [] TestClass(1).second. # complicated variable settings in class #? str() myclass.second #? str() myclass.second_new # multiple classes / ordering ints = TestClass(1, 1.0) strs = TestClass("", '') #? float() ints.second #? str() strs.second #? ['var_class'] TestClass.var_class.var_class.var_class.var_class # operations (+, *, etc) shouldn't be InstanceElements - #246 class A(): def __init__(self): self.addition = 1 + 2 #? int() A().addition # should also work before `=` #? 8 int() A().addition = None #? 8 int() A(1).addition = None #? 1 A A(1).addition = None a = A() #? 8 int() a.addition = None # ----------------- # inheritance # ----------------- class Base(object): def method_base(self): return 1 class SuperClass(Base): class_super = 3 def __init__(self): self.var_super = '' def method_super(self): self.var2_super = list class Mixin(SuperClass): def method_mixin(self): return int #? 20 SuperClass class SubClass(SuperClass): class_sub = 3 def __init__(self): self.var_sub = '' def method_sub(self): self.var_sub = list return tuple instance = SubClass() #? ['method_base', 'method_sub', 'method_super'] instance.method_ #? ['var2_super', 'var_sub', 'var_super'] instance.var #? ['class_sub', 'class_super'] instance.class_ #? ['method_base', 'method_sub', 'method_super'] SubClass.method_ #? [] SubClass.var #? ['class_sub', 'class_super'] SubClass.class_ # ----------------- # inheritance of builtins # ----------------- class Base(str): pass #? ['upper'] Base.upper #? ['upper'] Base().upper # ----------------- # dynamic inheritance # ----------------- class Angry(object): def shout(self): return 'THIS IS MALARKEY!' def classgetter(): return Angry class Dude(classgetter()): def react(self): #? ['shout'] self.s # ----------------- # __call__ # ----------------- class CallClass(): def __call__(self): return 1 #? int() CallClass()() # ----------------- # variable assignments # ----------------- class V: def __init__(self, a): self.a = a def ret(self): return self.a d = b b = ret if 1: c = b #? int() V(1).b() #? int() V(1).c() #? V(1).d() # Only keywords should be possible to complete. #? ['is', 'in', 'not', 'and', 'or', 'if'] V(1).d() # ----------------- # ordering # ----------------- class A(): def b(self): #? int() a_func() #? str() self.a_func() return a_func() def a_func(self): return "" def a_func(): return 1 #? int() A().b() #? str() A().a_func() # ----------------- # nested classes # ----------------- class A(): class B(): pass def b(self): return 1.0 #? float() A().b() class A(): def b(self): class B(): def b(self): return [] return B().b() #? list() A().b() # ----------------- # ducktyping # ----------------- def meth(self): return self.a, self.b class WithoutMethod(): a = 1 def __init__(self): self.b = 1.0 def blub(self): return self.b m = meth class B(): b = '' a = WithoutMethod().m() #? int() a[0] #? float() a[1] #? float() WithoutMethod.blub(WithoutMethod()) #? str() WithoutMethod.blub(B()) # ----------------- # __getattr__ / getattr() / __getattribute__ # ----------------- #? str().upper getattr(str(), 'upper') #? str.upper getattr(str, 'upper') # some strange getattr calls #? getattr(str, 1) #? getattr() #? getattr(str) #? getattr(getattr, 1) #? getattr(str, []) class Base(): def ret(self, b): return b class Wrapper(): def __init__(self, obj): self.obj = obj def __getattr__(self, name): return getattr(self.obj, name) class Wrapper2(): def __getattribute__(self, name): return getattr(Base(), name) #? int() Wrapper(Base()).ret(3) #? int() Wrapper2(Base()).ret(3) class GetattrArray(): def __getattr__(self, name): return [1] #? int() GetattrArray().something[0] # ----------------- # private vars # ----------------- class PrivateVar(): def __init__(self): self.__var = 1 #? int() self.__var #? ['__var'] self.__var def __private_func(self): return 1 def wrap_private(self): return self.__private_func() #? [] PrivateVar().__var #? PrivateVar().__var #? [] PrivateVar().__private_func #? int() PrivateVar().wrap_private() class PrivateSub(PrivateVar): def test(self): #? [] self.__var def wrap_private(self): #? [] self.__var #? [] PrivateSub().__var # ----------------- # super # ----------------- class Super(object): a = 3 def return_sup(self): return 1 class TestSuper(Super): #? super() def test(self): #? Super() super() #? ['a'] super().a if 1: #? Super() super() def a(): #? super() def return_sup(self): #? int() return super().return_sup() #? int() TestSuper().return_sup() # ----------------- # if flow at class level # ----------------- class TestX(object): def normal_method(self): return 1 if True: def conditional_method(self): var = self.normal_method() #? int() var return 2 def other_method(self): var = self.conditional_method() #? int() var # ----------------- # mro method # ----------------- class A(object): a = 3 #? ['mro'] A.mro #? [] A().mro # ----------------- # mro resolution # ----------------- class B(A()): b = 3 #? B.a #? B().a #? int() B.b #? int() B().b # ----------------- # With import # ----------------- from import_tree.classes import Config2, BaseClass class Config(BaseClass): """#884""" #? Config2() Config.mode #? int() Config.mode2 # ----------------- # Nested class/def/class # ----------------- class Foo(object): a = 3 def create_class(self): class X(): a = self.a self.b = 3.0 return X #? int() Foo().create_class().a #? float() Foo().b class Foo(object): def comprehension_definition(self): return [1 for self.b in [1]] #? int() Foo().b jedi-0.11.1/test/completion/pep0484_typing.py0000664000175000017500000001155313214571123020573 0ustar davedave00000000000000""" Test the typing library, with docstrings. This is needed since annotations are not supported in python 2.7 else then annotating by comment (and this is still TODO at 2016-01-23) """ # There's no Python 2.6 typing module. # python >= 2.7 import typing class B: pass def we_can_has_sequence(p, q, r, s, t, u): """ :type p: typing.Sequence[int] :type q: typing.Sequence[B] :type r: typing.Sequence[int] :type s: typing.Sequence["int"] :type t: typing.MutableSequence[dict] :type u: typing.List[float] """ #? ["count"] p.c #? int() p[1] #? ["count"] q.c #? B() q[1] #? ["count"] r.c #? int() r[1] #? ["count"] s.c #? int() s[1] #? [] s.a #? ["append"] t.a #? dict() t[1] #? ["append"] u.a #? float() u[1] def iterators(ps, qs, rs, ts): """ :type ps: typing.Iterable[int] :type qs: typing.Iterator[str] :type rs: typing.Sequence["ForwardReference"] :type ts: typing.AbstractSet["float"] """ for p in ps: #? int() p #? next(ps) a, b = ps #? int() a ##? int() --- TODO fix support for tuple assignment # https://github.com/davidhalter/jedi/pull/663#issuecomment-172317854 # test below is just to make sure that in case it gets fixed by accident # these tests will be fixed as well the way they should be #? b for q in qs: #? str() q #? str() next(qs) for r in rs: #? ForwardReference() r #? next(rs) for t in ts: #? float() t def sets(p, q): """ :type p: typing.AbstractSet[int] :type q: typing.MutableSet[float] """ #? [] p.a #? ["add"] q.a def tuple(p, q, r): """ :type p: typing.Tuple[int] :type q: typing.Tuple[int, str, float] :type r: typing.Tuple[B, ...] """ #? int() p[0] #? int() q[0] #? str() q[1] #? float() q[2] #? B() r[0] #? B() r[1] #? B() r[2] #? B() r[10000] i, s, f = q #? int() i ##? str() --- TODO fix support for tuple assignment # https://github.com/davidhalter/jedi/pull/663#issuecomment-172317854 #? s ##? float() --- TODO fix support for tuple assignment # https://github.com/davidhalter/jedi/pull/663#issuecomment-172317854 #? f class Key: pass class Value: pass def mapping(p, q, d, r, s, t): """ :type p: typing.Mapping[Key, Value] :type q: typing.MutableMapping[Key, Value] :type d: typing.Dict[Key, Value] :type r: typing.KeysView[Key] :type s: typing.ValuesView[Value] :type t: typing.ItemsView[Key, Value] """ #? [] p.setd #? ["setdefault"] q.setd #? ["setdefault"] d.setd #? Value() p[1] for key in p: #? Key() key for key in p.keys(): #? Key() key for value in p.values(): #? Value() value for item in p.items(): #? Key() item[0] #? Value() item[1] (key, value) = item #? Key() key #? Value() value for key, value in p.items(): #? Key() key #? Value() value for key in r: #? Key() key for value in s: #? Value() value for key, value in t: #? Key() key #? Value() value def union(p, q, r, s, t): """ :type p: typing.Union[int] :type q: typing.Union[int, int] :type r: typing.Union[int, str, "int"] :type s: typing.Union[int, typing.Union[str, "typing.Union['float', 'dict']"]] :type t: typing.Union[int, None] """ #? int() p #? int() q #? int() str() r #? int() str() float() dict() s #? int() t def optional(p): """ :type p: typing.Optional[int] Optional does not do anything special. However it should be recognised as being of that type. Jedi doesn't do anything with the extra into that it can be None as well """ #? int() p class ForwardReference: pass class TestDict(typing.Dict[str, int]): def setdud(self): pass def testdict(x): """ :type x: TestDict """ #? ["setdud", "setdefault"] x.setd for key in x.keys(): #? str() key for value in x.values(): #? int() value x = TestDict() #? ["setdud", "setdefault"] x.setd for key in x.keys(): #? str() key for value in x.values(): #? int() value # python >= 3.2 """ docstrings have some auto-import, annotations can use all of Python's import logic """ import typing as t def union2(x: t.Union[int, str]): #? int() str() x from typing import Union def union3(x: Union[int, str]): #? int() str() x from typing import Union as U def union4(x: U[int, str]): #? int() str() x jedi-0.11.1/test/completion/dynamic_arrays.py0000664000175000017500000001024713214571123021101 0ustar davedave00000000000000""" Checking for ``list.append`` and all the other possible array modifications. """ # ----------------- # list.append # ----------------- arr = [] for a in [1,2]: arr.append(a); arr.append # should not cause an exception arr.append() # should not cause an exception #? int() arr[10] arr = [tuple()] for a in [1,2]: arr.append(a); #? int() tuple() arr[10] #? int() arr[10].index() arr = list([]) arr.append(1) #? int() arr[0] # ----------------- # list.insert # ----------------- arr = [""] arr.insert(0, 1.0) # on exception due to this, please! arr.insert(0) arr.insert() #? float() str() arr[10] for a in arr: #? float() str() a #? float() str() list(arr)[10] # ----------------- # list.extend / set.update # ----------------- arr = [1.0] arr.extend([1,2,3]) arr.extend([]) arr.extend("") # should ignore #? float() int() arr[100] a = set(arr) a.update(list(["", 1])) #? float() int() str() list(a)[0] # ----------------- # set/list initialized as functions # ----------------- st = set() st.add(1) #? int() for s in st: s lst = list() lst.append(1) #? int() for i in lst: i # ----------------- # renames / type changes # ----------------- arr = [] arr2 = arr arr2.append('') #? str() arr2[0] lst = [1] lst.append(1.0) s = set(lst) s.add("") lst = list(s) lst.append({}) #? dict() int() float() str() lst[0] # should work with tuple conversion, too. #? dict() int() float() str() tuple(lst)[0] # but not with an iterator #? iter(lst)[0] # ----------------- # complex including += # ----------------- class C(): pass class D(): pass class E(): pass lst = [1] lst.append(1.0) lst += [C] s = set(lst) s.add("") s += [D] lst = list(s) lst.append({}) lst += [E] ##? dict() int() float() str() C D E lst[0] # ----------------- # functions # ----------------- def arr_append(arr4, a): arr4.append(a) def add_to_arr(arr2, a): arr2.append(a) return arr2 def app(a): arr3.append(a) arr3 = [1.0] res = add_to_arr(arr3, 1) arr_append(arr3, 'str') app(set()) #? float() str() int() set() arr3[10] #? float() str() int() set() res[10] # ----------------- # returns, special because the module dicts are not correct here. # ----------------- def blub(): a = [] a.append(1.0) #? float() a[0] return a #? float() blub()[0] # list with default def blub(): a = list([1]) a.append(1.0) return a #? int() float() blub()[0] # empty list def blub(): a = list() a.append(1.0) return a #? float() blub()[0] # with if def blub(): if 1: a = [] a.append(1.0) return a #? float() blub()[0] # with else clause def blub(): if random.choice([0, 1]): 1 else: a = [] a.append(1) return a #? int() blub()[0] # ----------------- # returns, the same for classes # ----------------- class C(): def blub(self, b): if 1: a = [] a.append(b) return a def blub2(self): """ mapper function """ a = self.blub(1.0) #? float() a[0] return a def literal_arr(self, el): self.a = [] self.a.append(el) #? int() self.a[0] return self.a def list_arr(self, el): self.b = list([]) self.b.append(el) #? float() self.b[0] return self.b #? int() C().blub(1)[0] #? float() C().blub2(1)[0] #? int() C().a[0] #? int() C().literal_arr(1)[0] #? float() C().b[0] #? float() C().list_arr(1.0)[0] # ----------------- # array recursions # ----------------- a = set([1.0]) a.update(a) a.update([1]) #? float() int() list(a)[0] def first(a): b = [] b.append(a) b.extend(second(a)) return list(b) def second(a): b = [] b.extend(first(a)) return list(b) #? float() first(1.0)[0] def third(): b = [] b.extend extend() b.extend(first()) return list(b) #? third()[0] # ----------------- # set.add # ----------------- # Set literals are not valid in 2.6. # python >= 2.7 st = {1.0} for a in [1,2]: st.add(a) st.append('') # lists should not have an influence st.add # should not cause an exception st.add() st = {1.0} st.add(1) lst = list(st) lst.append('') #? float() int() str() lst[0] jedi-0.11.1/test/completion/invalid.py0000664000175000017500000000522513214571123017522 0ustar davedave00000000000000""" This file is less about the results and much more about the fact, that no exception should be thrown. Basically this file could change depending on the current implementation. But there should never be any errors. """ # wait until keywords are out of definitions (pydoc function). ##? 5 's'() #? [] str()).upper # ----------------- # funcs # ----------------- def asdf(a or b): # multiple param names return a #? asdf(2) asdf = '' from a import (b def blub(): return 0 def wrong_indents(): asdf = 3 asdf asdf( # TODO this seems to be wrong now? ##? int() asdf def openbrace(): asdf = 3 asdf( #? int() asdf return 1 #? int() openbrace() blub([ #? int() openbrace() def indentfault(): asd( indentback #? [] indentfault(). def openbrace2(): asd( def normalfunc(): return 1 #? int() normalfunc() # dots in param def f(seq1...=None): return seq1 #? f(1) @ def test_empty_decorator(): return 1 #? int() test_empty_decorator() def invalid_param(param=): #? param # ----------------- # flows # ----------------- # first part not complete (raised errors) if a a else: #? ['AttributeError'] AttributeError try #? ['AttributeError'] except AttributeError pass finally: pass #? ['isinstance'] if isi try: except TypeError: #? str() str() def break(): pass # wrong ternary expression a = '' a = 1 if #? str() a # No completions for for loops without the right syntax for for_local in : for_local #? [] for_local #? for_local # ----------------- # list comprehensions # ----------------- a2 = [for a2 in [0]] #? a2[0] a3 = [for xyz in] #? a3[0] a3 = [a4 for in 'b'] #? a3[0] a3 = [a4 for a in for x in y] #? a3[0] a = [for a in def break(): pass #? a[0] a = [a for a in [1,2] def break(): pass #? a[0] #? [] int()).real # ----------------- # keywords # ----------------- #! [] as def empty_assert(): x = 3 assert #? int() x import datetime as # ----------------- # statements # ----------------- call = '' invalid = .call #? invalid invalid = call?.call #? str() invalid # comma invalid = ,call #? str() invalid # ----------------- # classes # ----------------- class BrokenPartsOfClass(): def foo(self): # This construct contains two places where Jedi with Python 3 can fail. # It should just ignore those constructs and still execute `bar`. pass if 2: try: pass except ValueError, e: raise TypeError, e else: pass def bar(self): self.x = 3 return '' #? str() BrokenPartsOfClass().bar() jedi-0.11.1/test/completion/isinstance.py0000664000175000017500000000332313214571123020231 0ustar davedave00000000000000if isinstance(i, str): #? str() i if isinstance(j, (str, int)): #? str() int() j while isinstance(k, (str, int)): #? str() int() k if not isinstance(k, (str, int)): #? k while not isinstance(k, (str, int)): #? k assert isinstance(ass, int) #? int() ass assert isinstance(ass, str) assert not isinstance(ass, int) if 2: #? str() ass # ----------------- # invalid arguments # ----------------- if isinstance(wrong, str()): #? wrong # ----------------- # in functions # ----------------- import datetime def fooooo(obj): if isinstance(obj, datetime.datetime): #? datetime.datetime() obj def fooooo2(obj): if isinstance(obj, datetime.date): return obj else: return 1 a # In earlier versions of Jedi, this returned both datetime and int, but now # Jedi does flow checks and realizes that the top return isn't executed. #? int() fooooo2('') def isinstance_func(arr): for value in arr: if isinstance(value, dict): # Shouldn't fail, even with the dot. #? 17 dict() value. elif isinstance(value, int): x = value #? int() x # ----------------- # Names with multiple indices. # ----------------- class Test(): def __init__(self, testing): if isinstance(testing, str): self.testing = testing else: self.testing = 10 def boo(self): if isinstance(self.testing, str): # TODO this is wrong, it should only be str. #? str() int() self.testing #? Test() self # ----------------- # Syntax # ----------------- #? isinstance(1, int()) jedi-0.11.1/test/completion/types.py0000664000175000017500000000255013214571123017236 0ustar davedave00000000000000# ----------------- # non array # ----------------- #? ['imag'] int.imag #? [] int.is_integer #? ['is_integer'] float.is_int #? ['is_integer'] 1.0.is_integer #? ['upper'] "".upper #? ['upper'] r"".upper # strangely this didn't work, because the = is used for assignments #? ['upper'] "=".upper a = "=" #? ['upper'] a.upper # ----------------- # lists # ----------------- arr = [] #? ['append'] arr.app #? ['append'] list().app #? ['append'] [].append arr2 = [1,2,3] #? ['append'] arr2.app #? int() arr.count(1) x = [] #? x.pop() x = [3] #? int() x.pop() x = [] x.append(1.0) #? float() x.pop() # ----------------- # dicts # ----------------- dic = {} #? ['copy', 'clear'] dic.c dic2 = dict(a=1, b=2) #? ['pop', 'popitem'] dic2.p #? ['popitem'] {}.popitem dic2 = {'asdf': 3} #? ['popitem'] dic2.popitem #? int() dic2['asdf'] d = {'a': 3, 1.0: list} #? int() list d.values()[0] ##? int() list dict(d).values()[0] #? str() d.items()[0][0] #? int() d.items()[0][1] # ----------------- # tuples # ----------------- tup = ('',2) #? ['count'] tup.c tup2 = tuple() #? ['index'] tup2.i #? ['index'] ().i tup3 = 1,"" #? ['index'] tup3.index tup4 = 1,"" #? ['index'] tup4.index # ----------------- # set # ----------------- # Set literals are not valid in 2.6. # python >= 2.7 set_t = {1,2} #? ['clear', 'copy'] set_t.c set_t2 = set() #? ['clear', 'copy'] set_t2.c jedi-0.11.1/test/completion/keywords.py0000664000175000017500000000110113214571123017730 0ustar davedave00000000000000 #? ['raise'] raise #? ['Exception'] except #? [] b + continu #? [] b + continue #? ['continue'] b; continue #? ['continue'] b; continu #? [] c + brea #? [] a + break #? ['break'] b; break # ----------------- # Keywords should not appear everywhere. # ----------------- #? [] with open() as f #? [] def i #? [] class i #? [] continue i # More syntax details, e.g. while only after newline, but not after semicolon, # continue also after semicolon #? ['while'] while #? [] x while #? [] x; while #? ['continue'] x; continue #? [] and #? ['and'] x and #? [] x * and jedi-0.11.1/test/completion/precedence.py0000664000175000017500000000356613214571123020177 0ustar davedave00000000000000""" Test Jedi's operation understanding. Jedi should understand simple additions, multiplications, etc. """ # ----------------- # numbers # ----------------- x = [1, 'a', 1.0] #? int() str() float() x[12] #? float() x[1 + 1] index = 0 + 1 #? str() x[index] #? int() x[1 + (-1)] def calculate(number): return number + constant constant = 1 #? float() x[calculate(1)] def calculate(number): return number + constant # ----------------- # strings # ----------------- x = 'upp' + 'e' #? str.upper getattr(str, x + 'r') a = "a"*3 #? str() a a = 3 * "a" #? str() a a = 3 * "a" #? str() a #? int() (3 ** 3) #? int() str() (3 ** 'a') # ----------------- # assignments # ----------------- x = [1, 'a', 1.0] i = 0 i += 1 i += 1 #? float() x[i] i = 1 i += 1 i -= 3 i += 1 #? int() x[i] # ----------------- # in # ----------------- if 'X' in 'Y': a = 3 else: a = '' # For now don't really check for truth values. So in should return both # results. #? str() int() a # ----------------- # for flow assignments # ----------------- class FooBar(object): fuu = 0.1 raboof = 'fourtytwo' # targets should be working target = '' for char in ['f', 'u', 'u']: target += char #? float() getattr(FooBar, target) # github #24 target = u'' for char in reversed(['f', 'o', 'o', 'b', 'a', 'r']): target += char #? str() getattr(FooBar, target) # ----------------- # repetition problems -> could be very slow and memory expensive - shouldn't # be. # ----------------- b = [str(1)] l = list for x in [l(0), l(1), l(2), l(3), l(4), l(5), l(6), l(7), l(8), l(9), l(10), l(11), l(12), l(13), l(14), l(15), l(16), l(17), l(18), l(19), l(20), l(21), l(22), l(23), l(24), l(25), l(26), l(27), l(28), l(29)]: b += x #? str() b[1] # ----------------- # undefined names # ----------------- a = foobarbaz + 'hello' #? int() float() {'hello': 1, 'bar': 1.0}[a] jedi-0.11.1/test/completion/on_import.py0000664000175000017500000000401513214571123020076 0ustar davedave00000000000000def from_names(): #? ['mod1'] from import_tree.pkg. #? ['path'] from os. def from_names_goto(): from import_tree import pkg #? pkg from import_tree.pkg def builtin_test(): #? ['math'] import math # ----------------- # completions within imports # ----------------- #? ['sqlite3'] import sqlite3 #? ['classes'] import classes #? ['timedelta'] from datetime import timedel #? 21 [] from datetime.timedel import timedel # should not be possible, because names can only be looked up 1 level deep. #? [] from datetime.timedelta import resolution #? [] from datetime.timedelta import #? ['Cursor'] from sqlite3 import Cursor #? ['some_variable'] from . import some_variable #? ['arrays'] from . import arrays #? [] from . import import_tree as ren #? [] import json as import os #? os.path.join from os.path import join # ----------------- # special positions -> edge cases # ----------------- import datetime #? 6 datetime from datetime.time import time #? [] import datetime. #? [] import datetime.date #? 21 ['import'] from import_tree.pkg import pkg #? 49 ['a', 'foobar', '__name__', '__doc__', '__file__', '__package__'] from import_tree.pkg.mod1 import not_existant, # whitespace before #? ['a', 'foobar', '__name__', '__doc__', '__file__', '__package__'] from import_tree.pkg.mod1 import not_existant, #? 22 ['mod1'] from import_tree.pkg. import mod1 #? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'classes', 'recurse_class1', 'recurse_class2', 'invisible_pkg', 'flow_import'] from import_tree. import pkg #? 18 ['pkg'] from import_tree.p import pkg #? 17 ['import_tree'] from .import_tree import #? 10 ['run'] from ..run import #? ['run'] from ..run #? 10 ['run'] from ..run. #? [] from ..run. #? ['run'] from .. import run #? [] from not_a_module import #137 import json #? 23 json.dump from json import load, dump #? 17 json.load from json import load, dump # without the from clause: import json, datetime #? 7 json import json, datetime #? 13 datetime import json, datetime jedi-0.11.1/test/completion/async_.py0000664000175000017500000000077213214571123017352 0ustar davedave00000000000000""" Tests for all async use cases. Currently we're not supporting completion of them, but they should at least not raise errors or return extremely strange results. """ async def x(): argh = await x() #? argh return 2 #? int() x() a = await x() #? a async def x2(): async with open('asdf') as f: #? ['readlines'] f.readlines class A(): @staticmethod async def b(c=1, d=2): return 1 #! 9 ['def b'] await A.b() #! 11 ['param d=2'] await A.b(d=3) jedi-0.11.1/test/completion/goto.py0000664000175000017500000000570313214571123017045 0ustar davedave00000000000000# goto_assignments command tests are different in syntax definition = 3 #! 0 ['a = definition'] a = definition #! [] b #! ['a = definition'] a b = a c = b #! ['c = b'] c cd = 1 #! 1 ['cd = c'] cd = c #! 0 ['cd = e'] cd = e #! ['module math'] import math #! ['module math'] math #! ['module math'] b = math #! ['b = math'] b #! 18 ['foo = 10'] foo = 10;print(foo) # ----------------- # classes # ----------------- class C(object): def b(self): #! ['b = math'] b #! ['def b'] self.b #! 14 ['def b'] self.b() #! 11 ['param self'] self.b return 1 #! ['def b'] b #! ['b = math'] b #! ['def b'] C.b #! ['def b'] C().b #! 0 ['class C'] C().b #! 0 ['class C'] C().b D = C #! ['def b'] D.b #! ['def b'] D().b #! 0 ['D = C'] D().b #! 0 ['D = C'] D().b def c(): return '' #! ['def c'] c #! 0 ['def c'] c() class ClassVar(): x = 3 #! ['x = 3'] ClassVar.x #! ['x = 3'] ClassVar().x # before assignments #! 10 ['x = 3'] ClassVar.x = '' #! 12 ['x = 3'] ClassVar().x = '' # Recurring use of the same var name, github #315 def f(t=None): #! 9 ['param t=None'] t = t or 1 class X(): pass #! 3 [] X(foo=x) # ----------------- # imports # ----------------- #! ['module import_tree'] import import_tree #! ["a = ''"] import_tree.a #! ['module mod1'] import import_tree.mod1 #! ['a = 1'] import_tree.mod1.a #! ['module pkg'] import import_tree.pkg #! ['a = list'] import_tree.pkg.a #! ['module mod1'] import import_tree.pkg.mod1 #! ['a = 1.0'] import_tree.pkg.mod1.a #! ["a = ''"] import_tree.a #! ['module mod1'] from import_tree.pkg import mod1 #! ['a = 1.0'] mod1.a #! ['module mod1'] from import_tree import mod1 #! ['a = 1'] mod1.a #! ['a = 1.0'] from import_tree.pkg.mod1 import a #! ['module os'] from .imports import os #! ['some_variable = 1'] from . import some_variable # ----------------- # anonymous classes # ----------------- def func(): class A(): def b(self): return 1 return A() #! 8 ['def b'] func().b() # ----------------- # on itself # ----------------- #! 7 ['class ClassDef'] class ClassDef(): """ abc """ pass # ----------------- # params # ----------------- param = ClassDef #! 8 ['param param'] def ab1(param): pass #! 9 ['param param'] def ab2(param): pass #! 11 ['param = ClassDef'] def ab3(a=param): pass ab1(ClassDef);ab2(ClassDef);ab3(ClassDef) # ----------------- # for loops # ----------------- for i in range(1): #! ['for i in range(1): i'] i for key, value in [(1,2)]: #! ['for key, value in [(1,2)]: key'] key for i in []: #! ['for i in []: i'] i # ----------------- # decorator # ----------------- def dec(dec_param=3): pass #! 8 ['param dec_param=3'] @dec(dec_param=5) def y(): pass class ClassDec(): def class_func(func): return func #! 14 ['def class_func'] @ClassDec.class_func def x(): pass #! 2 ['class ClassDec'] @ClassDec.class_func def z(): pass jedi-0.11.1/test/completion/usages.py0000664000175000017500000001341613214571123017364 0ustar davedave00000000000000""" Renaming tests. This means search for usages. I always leave a little bit of space to add room for additions, because the results always contain position informations. """ #< 4 (0,4), (3,0), (5,0), (17,0), (12,4), (14,5), (15,0) def abc(): pass #< 0 (-3,4), (0,0), (2,0), (14,0), (9,4), (11,5), (12,0) abc.d.a.bsaasd.abc.d abc # unicode chars shouldn't be a problem. x['smörbröd'].abc # With the new parser these statements are not recognized as stateents, because # they are not valid Python. if 1: abc = else: (abc) = abc = #< (-17,4), (-14,0), (-12,0), (0,0), (-2,0), (-3,5), (-5,4) abc abc = 5 Abc = 3 #< 6 (0,6), (2,4), (5,8), (17,0) class Abc(): #< (-2,6), (0,4), (3,8), (15,0) Abc def Abc(self): Abc; self.c = 3 #< 17 (0,16), (2,8) def a(self, Abc): #< 10 (-2,16), (0,8) Abc #< 19 (0,18), (2,8) def self_test(self): #< 12 (-2,18), (0,8) self.b Abc.d.Abc #< 4 (0,4), (5,1) def blubi(): pass #< (-5,4), (0,1) @blubi def a(): pass #< 0 (0,0), (1,0) set_object_var = object() set_object_var.var = 1 response = 5 #< 0 (0,0), (1,0), (2,0), (4,0) response = HttpResponse(mimetype='application/pdf') response['Content-Disposition'] = 'attachment; filename=%s.pdf' % id response.write(pdf) #< (-4,0), (-3,0), (-2,0), (0,0) response # ----------------- # imports # ----------------- #< (0,7), (3,0) import module_not_exists #< (-3,7), (0,0) module_not_exists #< ('rename1', 1,0), (0,24), (3,0), (6,17), ('rename2', 4,5), (11,17), (14,17), ('imports', 72, 16) from import_tree import rename1 #< (0,8), ('rename1',3,0), ('rename2',4,20), ('rename2',6,0), (3,32), (8,32), (5,0) rename1.abc #< (-3,8), ('rename1', 3,0), ('rename2', 4,20), ('rename2', 6,0), (0,32), (5,32), (2,0) from import_tree.rename1 import abc #< (-5,8), (-2,32), ('rename1', 3,0), ('rename2', 4,20), ('rename2', 6,0), (0,0), (3,32) abc #< 20 ('rename1', 1,0), ('rename2', 4,5), (-11,24), (-8,0), (-5,17), (0,17), (3,17), ('imports', 72, 16) from import_tree.rename1 import abc #< (0, 32), from import_tree.rename1 import not_existing # Shouldn't raise an error or do anything weird. from not_existing import * # ----------------- # classes # ----------------- class TestMethods(object): #< 8 (0,8), (2,13) def a_method(self): #< 13 (-2,8), (0,13) self.a_method() #< 13 (2,8), (0,13), (3,13) self.b_method() def b_method(self): self.b_method class TestClassVar(object): #< 4 (0,4), (5,13), (7,21) class_v = 1 def a(self): class_v = 1 #< (-5,4), (0,13), (2,21) self.class_v #< (-7,4), (-2,13), (0,21) TestClassVar.class_v #< (0,8), (-7, 8) class_v class TestInstanceVar(): def a(self): #< 13 (4,13), (0,13) self._instance_var = 3 def b(self): #< (-4,13), (0,13) self._instance_var # A call to self used to trigger an error, because it's also a trailer # with two children. self() class NestedClass(): def __getattr__(self, name): return self # Shouldn't find a definition, because there's other `instance`. # TODO reenable that test ##< (0, 14), NestedClass().instance # ----------------- # inheritance # ----------------- class Super(object): #< 4 (0,4), (23,18), (25,13) base_class = 1 #< 4 (0,4), class_var = 1 #< 8 (0,8), def base_method(self): #< 13 (0,13), (20,13) self.base_var = 1 #< 13 (0,13), self.instance_var = 1 #< 8 (0,8), def just_a_method(self): pass #< 20 (0,16), (-18,6) class TestClass(Super): #< 4 (0,4), class_var = 1 def x_method(self): #< (0,18), (2,13), (-23,4) TestClass.base_class #< (-2,18), (0,13), (-25,4) self.base_class #< (-20,13), (0,13) self.base_var #< (0, 18), TestClass.base_var #< 13 (5,13), (0,13) self.instance_var = 3 #< 9 (0,8), def just_a_method(self): #< (-5,13), (0,13) self.instance_var # ----------------- # properties # ----------------- class TestProperty: @property #< 10 (0,8), (5,13) def prop(self): return 1 def a(self): #< 13 (-5,8), (0,13) self.prop @property #< 13 (0,8), (4,5) def rw_prop(self): return self._rw_prop #< 8 (-4,8), (0,5) @rw_prop.setter #< 8 (0,8), (5,13) def rw_prop(self, value): self._rw_prop = value def b(self): #< 13 (-5,8), (0,13) self.rw_prop # ----------------- # *args, **kwargs # ----------------- #< 11 (1,11), (0,8) def f(**kwargs): return kwargs # ----------------- # No result # ----------------- if isinstance(j, int): #< (0, 4), j # ----------------- # Dynamic Param Search # ----------------- class DynamicParam(): def foo(self): return def check(instance): #< 13 (-5,8), (0,13) instance.foo() check(DynamicParam()) # ----------------- # Compiled Objects # ----------------- import _sre #< 0 (-3,7), (0,0), ('_sre', None, None) _sre # ----------------- # on syntax # ----------------- #< 0 import undefined # ----------------- # comprehensions # ----------------- #< 0 (0,0), (2,12) x = 32 #< 12 (-2,0), (0,12) [x for x in x] #< 0 (0,0), (2,1), (2,12) x = 32 #< 12 (-2,0), (0,1), (0,12) [x for b in x] #< 1 (0,1), (0,7) [x for x in something] #< 7 (0,1), (0,7) [x for x in something] x = 3 # Not supported syntax in Python 2.6. # python >= 2.7 #< 1 (0,1), (0,10) {x:1 for x in something} #< 10 (0,1), (0,10) {x:1 for x in something} def x(): zzz = 3 if UNDEFINED: zzz = 5 if UNDEFINED2: #< (3, 8), (4, 4), (0, 12), (-3, 8), (-5, 4) zzz else: #< (0, 8), (1, 4), (-3, 12), (-6, 8), (-8, 4) zzz zzz jedi-0.11.1/test/completion/__init__.py0000664000175000017500000000011513214571123017624 0ustar davedave00000000000000""" needed for some modules to test against packages. """ some_variable = 1 jedi-0.11.1/test/completion/ordering.py0000664000175000017500000000404013214571123017677 0ustar davedave00000000000000# ----------------- # normal # ----------------- a = "" a = 1 #? int() a #? [] a.append a = list b = 1; b = "" #? str() b # temp should not be accessible before definition #? [] temp a = 1 temp = b; b = a a = temp #? int() b #? int() b #? str() a a = tuple if 1: a = list #? ['append'] a.append #? ['index'] a.index # ----------------- # tuples exchanges # ----------------- a, b = 1, "" #? int() a #? str() b b, a = a, b #? int() b #? str() a b, a = a, b #? int() a #? str() b # ----------------- # function # ----------------- def a(a=3): #? int() a #? [] a.func return a #? int() a(2) #? [] a(2).func a_param = 3 def func(a_param): # should not be int #? [] a_param. from os import path # should not return a function, because `a` is a function above def f(b, a): return a #? [] f(b=3). # ----------------- # closure # ----------------- def x(): a = 0 def x(): return a a = 3.0 return x() #? float() x() # ----------------- # class # ----------------- class A(object): a = "" a = 3 #? int() a a = list() def __init__(self): self.b = "" def before(self): self.b = 3 # TODO should this be so? include entries after cursor? #? int() str() list self.b self.b = list self.a = 1 #? str() int() self.a #? ['after'] self.after self.c = 3 #? int() self.c def after(self): self.a = '' c = set() #? list() A.a a = A() #? ['after'] a.after #? [] a.upper #? [] a.append #? [] a.real #? str() int() a.a a = 3 class a(): def __init__(self, a): self.a = a #? float() a(1.0).a #? a().a # ----------------- # imports # ----------------- math = 3 import math #? ['cosh'] math.cosh #? [] math.real math = 3 #? int() math #? [] math.cos # do the same for star imports cosh = 3 from math import * # cosh doesn't work, but that's not a problem, star imports should be at the # start of EVERY script! cosh.real cosh = 3 #? int() cosh jedi-0.11.1/test/completion/sys_path.py0000664000175000017500000000071213214571123017722 0ustar davedave00000000000000 import sys import os from os import dirname sys.path.insert(0, '../../jedi') sys.path.append(dirname(os.path.abspath('thirdparty' + os.path.sep + 'asdf'))) # modifications, that should fail: # syntax err sys.path.append('a' +* '/thirdparty') #? ['evaluate'] import evaluate #? ['evaluator_function_cache'] evaluate.Evaluator_fu # Those don't work because dirname and abspath are not properly understood. ##? ['jedi_'] import jedi_ ##? ['el'] jedi_.el jedi-0.11.1/test/completion/basic.py0000664000175000017500000000747013214571123017161 0ustar davedave00000000000000# ----------------- # cursor position # ----------------- #? 0 int int() #? 3 int int() #? 4 str int(str) # ----------------- # should not complete # ----------------- #? [] . #? [] str.. #? [] a(0):. # ----------------- # if/else/elif # ----------------- if (random.choice([0, 1])): 1 elif(random.choice([0, 1])): a = 3 else: a = '' #? int() str() a def func(): if random.choice([0, 1]): 1 elif(random.choice([0, 1])): a = 3 else: a = '' #? int() str() return a #? int() str() func() # ----------------- # keywords # ----------------- #? list() assert [] def focus_return(): #? list() return [] # ----------------- # for loops # ----------------- for a in [1,2]: #? int() a for a1 in 1,"": #? int() str() a1 for a3, b3 in (1,""), (1,""), (1,""): #? int() a3 #? str() b3 for a4, (b4, c4) in (1,("", list)), (1,("", list)): #? int() a4 #? str() b4 #? list c4 a = [] for i in [1,'']: #? int() str() i a += [i] #? int() str() a[0] for i in list([1,'']): #? int() str() i #? int() str() for x in [1,'']: x a = [] b = [1.0,''] for i in b: a += [i] #? float() str() a[0] for i in [1,2,3]: #? int() i else: i # ----------------- # range() # ----------------- for i in range(10): #? int() i # ----------------- # ternary operator # ----------------- a = 3 b = '' if a else set() #? str() set() b def ret(a): return ['' if a else set()] #? str() set() ret(1)[0] #? str() set() ret()[0] # ----------------- # global vars # ----------------- def global_define(): global global_var_in_func global_var_in_func = 3 #? int() global_var_in_func def funct1(): # From issue #610 global global_dict_var global_dict_var = dict() def funct2(): global global_dict_var #? dict() global_dict_var # ----------------- # within docstrs # ----------------- def a(): """ #? ['global_define'] global_define """ pass #? # str literals in comment """ upper def completion_in_comment(): #? ['Exception'] # might fail because the comment is not a leaf: Exception pass some_word #? ['Exception'] # Very simple comment completion: Exception # Commment after it # ----------------- # magic methods # ----------------- class A(object): pass class B(): pass #? ['__init__'] A.__init__ #? ['__init__'] B.__init__ #? ['__init__'] int().__init__ # ----------------- # comments # ----------------- class A(): def __init__(self): self.hello = {} # comment shouldn't be a string #? dict() A().hello # ----------------- # unicode # ----------------- a = 'smörbröd' #? str() a xyz = 'smörbröd.py' if 1: #? str() xyz #? ¹. # ----------------- # exceptions # ----------------- try: import math except ImportError as i_a: #? ['i_a'] i_a #? ImportError() i_a try: import math except ImportError, i_b: # TODO check this only in Python2 ##? ['i_b'] i_b ##? ImportError() i_b class MyException(Exception): def __init__(self, my_attr): self.my_attr = my_attr try: raise MyException(1) except MyException as e: #? ['my_attr'] e.my_attr #? 22 ['my_attr'] for x in e.my_attr: pass # ----------------- # continuations # ----------------- foo = \ 1 #? int() foo # ----------------- # module attributes # ----------------- # Don't move this to imports.py, because there's a star import. #? str() __file__ #? ['__file__'] __file__ # ----------------- # with statements # ----------------- with open('') as f: #? ['closed'] f.closed for line in f: #? str() line # Nested with statements don't exist in Python 2.6. # python >= 2.7 with open('') as f1, open('') as f2: #? ['closed'] f1.closed #? ['closed'] f2.closed jedi-0.11.1/test/completion/imports.py0000664000175000017500000001160013214571123017563 0ustar davedave00000000000000# ----------------- # own structure # ----------------- # do separate scopes def scope_basic(): from import_tree import mod1 #? int() mod1.a #? [] import_tree.a #? [] import_tree.mod1 import import_tree #? str() import_tree.a def scope_pkg(): import import_tree.mod1 #? str() import_tree.a #? ['mod1'] import_tree.mod1 #? int() import_tree.mod1.a def scope_nested(): import import_tree.pkg.mod1 #? str() import_tree.a #? list import_tree.pkg.a #? ['sqrt'] import_tree.pkg.sqrt #? ['pkg'] import_tree.p #? float() import_tree.pkg.mod1.a #? ['a', 'foobar', '__name__', '__package__', '__file__', '__doc__'] a = import_tree.pkg.mod1. import import_tree.random #? set import_tree.random.a def scope_nested2(): """Multiple modules should be indexable, if imported""" import import_tree.mod1 import import_tree.pkg #? ['mod1'] import_tree.mod1 #? ['pkg'] import_tree.pkg # With the latest changes this completion also works, because submodules # are always included (some nested import structures lead to this, # typically). #? ['rename1'] import_tree.rename1 def scope_from_import_variable(): """ All of them shouldn't work, because "fake" imports don't work in python without the use of ``sys.modules`` modifications (e.g. ``os.path`` see also github issue #213 for clarification. """ a = 3 #? from import_tree.mod2.fake import a #? from import_tree.mod2.fake import c #? a #? c def scope_from_import_variable_with_parenthesis(): from import_tree.mod2.fake import ( a, foobarbaz ) #? a #? foobarbaz # shouldn't complete, should still list the name though. #? ['foobarbaz'] foobarbaz def as_imports(): from import_tree.mod1 import a as xyz #? int() xyz import not_existant, import_tree.mod1 as foo #? int() foo.a import import_tree.mod1 as bar #? int() bar.a def test_import_priorities(): """ It's possible to overwrite import paths in an ``__init__.py`` file, by just assigining something there. See also #536. """ from import_tree import the_pkg, invisible_pkg #? int() invisible_pkg # In real Python, this would be the module, but it's not, because Jedi # doesn't care about most stateful issues such as __dict__, which it would # need to, to do this in a correct way. #? int() the_pkg # Importing foo is still possible, even though inivisible_pkg got changed. #? float() from import_tree.invisible_pkg import foo # ----------------- # std lib modules # ----------------- import tokenize #? ['tok_name'] tokenize.tok_name from pyclbr import * #? ['readmodule_ex'] readmodule_ex import os #? ['dirname'] os.path.dirname from os.path import ( expanduser ) #? os.path.expanduser expanduser from itertools import (tee, islice) #? ['islice'] islice from functools import (partial, wraps) #? ['wraps'] wraps from keyword import kwlist, \ iskeyword #? ['kwlist'] kwlist #? [] from keyword import not_existing1, not_existing2 from tokenize import io tokenize.generate_tokens # ----------------- # builtins # ----------------- import sys #? ['prefix'] sys.prefix #? ['append'] sys.path.append from math import * #? ['cos', 'cosh'] cos def func_with_import(): import time return time #? ['sleep'] func_with_import().sleep # ----------------- # relative imports # ----------------- from .import_tree import mod1 #? int() mod1.a from ..import_tree import mod1 #? mod1.a from .......import_tree import mod1 #? mod1.a from .. import helpers #? int() helpers.sample_int from ..helpers import sample_int as f #? int() f from . import run #? [] run. from . import import_tree as imp_tree #? str() imp_tree.a from . import datetime as mod1 #? [] mod1. # self import # this can cause recursions from imports import * # ----------------- # packages # ----------------- from import_tree.mod1 import c #? set c from import_tree import recurse_class1 #? ['a'] recurse_class1.C.a # github #239 RecursionError #? ['a'] recurse_class1.C().a # ----------------- # Jedi debugging # ----------------- # memoizing issues (check git history for the fix) import not_existing_import if not_existing_import: a = not_existing_import else: a = not_existing_import #? a # ----------------- # module underscore descriptors # ----------------- def underscore(): import keyword #? ['__file__'] keyword.__file__ #? str() keyword.__file__ # Does that also work for the our own module? #? ['__file__'] __file__ # ----------------- # complex relative imports #784 # ----------------- def relative(): #? ['foobar'] from import_tree.pkg.mod1 import foobar #? int() foobar jedi-0.11.1/test/completion/pep0526_variables.py0000664000175000017500000000060213214571123021217 0ustar davedave00000000000000""" PEP 526 introduced a new way of using type annotations on variables. It was introduced in Python 3.6. """ # python >= 3.6 import typing asdf = '' asdf: int # This is not necessarily correct, but for now this is ok (at least no error). #? int() asdf direct: int = NOT_DEFINED #? int() direct with_typing_module: typing.List[float] = NOT_DEFINED #? float() with_typing_module[0] jedi-0.11.1/test/completion/completion.py0000664000175000017500000000106513214571123020243 0ustar davedave00000000000000""" Special cases of completions (typically special positions that caused issues with context parsing. """ def pass_decorator(func): return func def x(): return ( 1, #? ["tuple"] tuple ) # Comment just somewhere class MyClass: @pass_decorator def x(foo, #? 5 ["tuple"] tuple, ): return 1 if x: pass #? ['else'] else try: pass #? ['except', 'Exception'] except try: pass #? 6 ['except', 'Exception'] except AttributeError: pass #? ['finally'] finally for x in y: pass #? ['else'] else jedi-0.11.1/test/completion/pep0484_comments.py0000664000175000017500000000320513214571123021101 0ustar davedave00000000000000a = 3 # type: str #? str() a b = 3 # type: str but I write more #? int() b c = 3 # type: str # I comment more #? str() c d = "It should not read comments from the next line" # type: int #? str() d # type: int e = "It should not read comments from the previous line" #? str() e class BB: pass def test(a, b): a = a # type: BB c = a # type: str d = a # type: str e = a # type: str # Should ignore long whitespace #? BB() a #? str() c #? BB() d #? str() e a,b = 1, 2 # type: str, float #? str() a #? float() b class Employee: pass # The typing library is not installable for Python 2.6, therefore ignore the # following tests. # python >= 2.7 from typing import List x = [] # type: List[Employee] #? Employee() x[1] x, y, z = [], [], [] # type: List[int], List[int], List[str] #? int() y[2] x, y, z = [], [], [] # type: (List[float], List[float], List[BB]) for zi in z: #? BB() zi x = [ 1, 2, ] # type: List[str] #? str() x[1] for bar in foo(): # type: str #? str() bar for bar, baz in foo(): # type: int, float #? int() bar #? float() baz for bar, baz in foo(): # type: str, str """ type hinting on next line should not work """ #? bar #? baz with foo(): # type: int ... with foo() as f: # type: str #? str() f with foo() as f: # type: str """ type hinting on next line should not work """ #? f aaa = some_extremely_long_function_name_that_doesnt_leave_room_for_hints() \ # type: float # We should be able to put hints on the next line with a \ #? float() aaa jedi-0.11.1/test/completion/lambdas.py0000664000175000017500000000345113214571123017476 0ustar davedave00000000000000# ----------------- # lambdas # ----------------- a = lambda: 3 #? int() a() x = [] a = lambda x: x #? int() a(0) #? float() (lambda x: x)(3.0) arg_l = lambda x, y: y, x #? float() arg_l[0]('', 1.0) #? list() arg_l[1] arg_l = lambda x, y: (y, x) args = 1,"" result = arg_l(*args) #? tuple() result #? str() result[0] #? int() result[1] def with_lambda(callable_lambda, *args, **kwargs): return callable_lambda(1, *args, **kwargs) #? int() with_lambda(arg_l, 1.0)[1] #? float() with_lambda(arg_l, 1.0)[0] #? float() with_lambda(arg_l, y=1.0)[0] #? int() with_lambda(lambda x: x) #? float() with_lambda(lambda x, y: y, y=1.0) arg_func = lambda *args, **kwargs: (args[0], kwargs['a']) #? int() arg_func(1, 2, a='', b=10)[0] #? list() arg_func(1, 2, a=[], b=10)[1] # magic method a = lambda: 3 #? ['__closure__'] a.__closure__ class C(): def __init__(self, foo=1.0): self.a = lambda: 1 self.foo = foo def ret(self): return lambda: self.foo def with_param(self): return lambda x: x + self.a() lambd = lambda self: self.foo #? int() C().a() #? str() C('foo').ret()() index = C().with_param()(1) #? float() ['', 1, 1.0][index] #? float() C().lambd() #? int() C(1).lambd() def xy(param): def ret(a, b): return a + b return lambda b: ret(param, b) #? int() xy(1)(2) # ----------------- # lambda param (#379) # ----------------- class Test(object): def __init__(self, pred=lambda a, b: a): self.a = 1 #? int() self.a #? float() pred(1.0, 2) # ----------------- # test_nocond in grammar (happens in list comprehensions with `if`) # ----------------- # Doesn't need to do anything yet. It should just not raise an error. These # nocond lambdas make no sense at all. #? int() [a for a in [1,2] if lambda: 3][0] jedi-0.11.1/test/completion/complex.py0000664000175000017500000000030513214571123017535 0ustar davedave00000000000000""" Mostly for stupid error reports of @dbrgn. :-) """ import time class Foo(object): global time asdf = time def asdfy(): return Foo xorz = getattr(asdfy()(), 'asdf') #? time xorz jedi-0.11.1/test/completion/thirdparty/0000775000175000017500000000000013214571377017723 5ustar davedave00000000000000jedi-0.11.1/test/completion/thirdparty/jedi_.py0000664000175000017500000000270213214571123021335 0ustar davedave00000000000000 from jedi import functions, evaluate, parsing el = functions.completions()[0] #? ['description'] el.description #? str() el.description scopes, path, dot, like = \ api._prepare_goto(source, row, column, path, True) # has problems with that (sometimes) very deep nesting. #? set() el = scopes # get_names_for_scope is also recursion stuff #? tuple() el = list(evaluate.get_names_for_scope())[0] #? int() parsing.Module() el = list(evaluate.get_names_for_scope(1))[0][0] #? parsing.Module() el = list(evaluate.get_names_for_scope())[0][0] #? list() el = list(evaluate.get_names_for_scope(1))[0][1] #? list() el = list(evaluate.get_names_for_scope())[0][1] #? list() parsing.Scope((0,0)).get_set_vars() #? parsing.Import() parsing.Name() parsing.Scope((0,0)).get_set_vars()[0] # TODO access parent is not possible, because that is not set in the class ## parsing.Class() parsing.Scope((0,0)).get_set_vars()[0].parent #? parsing.Import() parsing.Name() el = list(evaluate.get_names_for_scope())[0][1][0] #? evaluate.Array() evaluate.Class() evaluate.Function() evaluate.Instance() list(evaluate.follow_call())[0] # With the right recursion settings, this should be possible (and maybe more): # Array Class Function Generator Instance Module # However, this was produced with the recursion settings 10/350/10000, and # lasted 18.5 seconds. So we just have to be content with the results. #? evaluate.Class() evaluate.Function() evaluate.get_scopes_for_name()[0] jedi-0.11.1/test/completion/thirdparty/pylab_.py0000664000175000017500000000104013214571123021523 0ustar davedave00000000000000import pylab # two gotos #! ['module numpy'] import numpy #! ['module random'] import numpy.random #? ['array2string'] numpy.array2string #? ['shape'] numpy.matrix().shape #? ['random_integers'] pylab.random_integers #? [] numpy.random_integers #? ['random_integers'] numpy.random.random_integers #? ['sample'] numpy.random.sample import numpy na = numpy.array([1,2]) #? ['shape'] na.shape # shouldn't raise an error #29, jedi-vim # doesn't return something, because matplotlib uses __import__ fig = pylab.figure() #? fig.add_subplot jedi-0.11.1/test/completion/thirdparty/django_.py0000664000175000017500000000032413214571123021662 0ustar davedave00000000000000#! ['class ObjectDoesNotExist'] from django.core.exceptions import ObjectDoesNotExist import django #? ['get_version'] django.get_version from django.conf import settings #? ['configured'] settings.configured jedi-0.11.1/test/completion/thirdparty/psycopg2_.py0000664000175000017500000000020613214571123022165 0ustar davedave00000000000000import psycopg2 conn = psycopg2.connect('dbname=test') #? ['cursor'] conn.cursor cur = conn.cursor() #? ['fetchall'] cur.fetchall jedi-0.11.1/test/completion/thirdparty/PyQt4_.py0000664000175000017500000000050013214571123021375 0ustar davedave00000000000000from PyQt4.QtCore import * from PyQt4.QtGui import * #? ['QActionGroup'] QActionGroup #? ['currentText'] QStyleOptionComboBox().currentText #? [] QStyleOptionComboBox().currentText. from PyQt4 import QtGui #? ['currentText'] QtGui.QStyleOptionComboBox().currentText #? [] QtGui.QStyleOptionComboBox().currentText. jedi-0.11.1/test/completion/flow_analysis.py0000664000175000017500000000767413214571123020760 0ustar davedave00000000000000# ----------------- # First a few name resolution things # ----------------- x = 3 if NOT_DEFINED: x = '' #? 6 int() elif x: pass else: #? int() x x = 1 try: x = '' #? 8 int() str() except x: #? 5 int() str() x x = 1.0 else: #? 5 int() str() x x = list finally: #? 5 int() str() float() list x x = tuple # ----------------- # Return checks # ----------------- def foo(x): if 1.0: return 1 else: return '' #? int() foo(1) # Exceptions are not analyzed. So check both if branches def try_except(x): try: if 0: return 1 else: return '' except AttributeError: return 1.0 #? float() str() try_except(1) # Exceptions are not analyzed. So check both if branches def try_except(x): try: if 0: return 1 else: return '' except AttributeError: return 1.0 #? float() str() try_except(1) # ----------------- # elif # ----------------- def elif_flows1(x): if False: return 1 elif True: return 1.0 else: return '' #? float() elif_flows1(1) def elif_flows2(x): try: if False: return 1 elif 0: return 1.0 else: return '' except ValueError: return set #? str() set elif_flows2(1) def elif_flows3(x): try: if True: return 1 elif 0: return 1.0 else: return '' except ValueError: return set #? int() set elif_flows3(1) # ----------------- # mid-difficulty if statements # ----------------- def check(a): if a is None: return 1 return '' return set #? int() check(None) #? str() check('asb') a = list if 2 == True: a = set elif 1 == True: a = 0 #? int() a if check != 1: a = '' #? str() a if check == check: a = list #? list a if check != check: a = set else: a = dict #? dict a if not (check is not check): a = 1 #? int() a # ----------------- # name resolution # ----------------- a = list def elif_name(x): try: if True: a = 1 elif 0: a = 1.0 else: return '' except ValueError: a = x return a #? int() set elif_name(set) if 0: a = '' else: a = int #? int a # ----------------- # isinstance # ----------------- class A(): pass def isinst(x): if isinstance(x, A): return dict elif isinstance(x, int) and x == 1 or x is True: return set elif isinstance(x, (float, reversed)): return list elif not isinstance(x, str): return tuple return 1 #? dict isinst(A()) #? set isinst(True) #? set isinst(1) #? tuple isinst(2) #? list isinst(1.0) #? tuple isinst(False) #? int() isinst('') # ----------------- # flows that are not reachable should be able to access parent scopes. # ----------------- foobar = '' if 0: within_flow = 1.0 #? float() within_flow #? str() foobar if 0: nested = 1 #? int() nested #? float() within_flow #? str() foobar #? nested if False: in_false = 1 #? ['in_false'] in_false # ----------------- # True objects like modules # ----------------- class X(): pass if X: a = 1 else: a = '' #? int() a # ----------------- # Recursion issues # ----------------- def possible_recursion_error(filename): if filename == 'a': return filename # It seems like without the brackets there wouldn't be a RecursionError. elif type(filename) == str: return filename if NOT_DEFINED: s = str() else: s = str() #? str() possible_recursion_error(s) # ----------------- # In combination with imports # ----------------- from import_tree import flow_import if 1 == flow_import.env: a = 1 elif 2 == flow_import.env: a = '' elif 3 == flow_import.env: a = 1.0 #? int() str() a jedi-0.11.1/test/completion/named_param.py0000664000175000017500000000141613214571123020336 0ustar davedave00000000000000""" Named Params: >>> def a(abc): pass ... >>> a(abc=3) # <- this stuff (abc) """ def a(abc): pass #? 5 ['abc'] a(abc) def a(*some_args, **some_kwargs): pass #? 11 [] a(some_args) #? 13 [] a(some_kwargs) def multiple(foo, bar): pass #? 17 ['bar'] multiple(foo, bar) #? ['bar'] multiple(foo, bar my_lambda = lambda lambda_param: lambda_param + 1 #? 22 ['lambda_param'] my_lambda(lambda_param) # __call__ / __init__ class Test(object): def __init__(self, hello_other): pass def __call__(self, hello): pass def test(self, blub): pass #? 10 ['hello_other'] Test(hello=) #? 12 ['hello'] Test()(hello=) #? 11 [] Test()(self=) #? 16 [] Test().test(self=) #? 16 ['blub'] Test().test(blub=) # builtins #? 12 [] any(iterable=) jedi-0.11.1/test/completion/recursion.py0000664000175000017500000000232513214571123020103 0ustar davedave00000000000000""" Code that might cause recursion issues (or has caused in the past). """ def Recursion(): def recurse(self): self.a = self.a self.b = self.b.recurse() #? Recursion().a #? Recursion().b class X(): def __init__(self): self.recursive = [1, 3] def annoying(self): self.recursive = [self.recursive[0]] def recurse(self): self.recursive = [self.recursive[1]] #? int() X().recursive[0] def to_list(iterable): return list(set(iterable)) def recursion1(foo): return to_list(to_list(foo)) + recursion1(foo) #? int() recursion1([1,2])[0] class FooListComp(): def __init__(self): self.recursive = [1] def annoying(self): self.recursive = [x for x in self.recursive] #? int() FooListComp().recursive[0] class InstanceAttributeIfs: def b(self): self.a1 = 1 self.a2 = 1 def c(self): self.a2 = '' def x(self): self.b() if self.a1 == 1: self.a1 = self.a1 + 1 if self.a2 == UNDEFINED: self.a2 = self.a2 + 1 #? int() self.a1 #? int() str() self.a2 #? int() InstanceAttributeIfs().a1 #? int() str() InstanceAttributeIfs().a2 jedi-0.11.1/test/completion/docstring.py0000664000175000017500000000701313214571123020065 0ustar davedave00000000000000""" Test docstrings in functions and classes, which are used to infer types """ # ----------------- # sphinx style # ----------------- def sphinxy(a, b, c, d, x): """ asdfasdf :param a: blablabla :type a: str :type b: (str, int) :type c: random.Random :type d: :class:`random.Random` :param str x: blablabla :rtype: dict """ #? str() a #? str() b[0] #? int() b[1] #? ['seed'] c.seed #? ['seed'] d.seed #? ['lower'] x.lower #? dict() sphinxy() # wrong declarations def sphinxy2(a, b, x): """ :param a: Forgot type declaration :type a: :param b: Just something :type b: `` :param x: Just something without type :rtype: """ #? a #? b #? x #? sphinxy2() def sphinxy_param_type_wrapped(a): """ :param str a: Some description wrapped onto the next line with no space after the colon. """ #? str() a # local classes -> github #370 class ProgramNode(): pass def local_classes(node, node2): """ :type node: ProgramNode ... and the class definition after this func definition: :type node2: ProgramNode2 """ #? ProgramNode() node #? ProgramNode2() node2 class ProgramNode2(): pass def list_with_non_imports(lst): """ Should be able to work with tuples and lists and still import stuff. :type lst: (random.Random, [collections.defaultdict, ...]) """ #? ['seed'] lst[0].seed import collections as col # use some weird index #? col.defaultdict() lst[1][10] def two_dots(a): """ :type a: json.decoder.JSONDecoder """ #? ['raw_decode'] a.raw_decode # sphinx returns def return_module_object(): """ :rtype: :class:`random.Random` """ #? ['seed'] return_module_object().seed # ----------------- # epydoc style # ----------------- def epydoc(a, b): """ asdfasdf @type a: str @param a: blablabla @type b: (str, int) @param b: blablah @rtype: list """ #? str() a #? str() b[0] #? int() b[1] #? list() epydoc() # Returns with param type only def rparam(a,b): """ @type a: str """ return a #? str() rparam() # Composite types def composite(): """ @rtype: (str, int, dict) """ x, y, z = composite() #? str() x #? int() y #? dict() z # Both docstring and calculated return type def both(): """ @rtype: str """ return 23 #? str() int() both() class Test(object): def __init__(self): self.teststr = "" """ # jedi issue #210 """ def test(self): #? ['teststr'] self.teststr # ----------------- # statement docstrings # ----------------- d = '' """ bsdf """ #? str() d.upper() # ----------------- # class docstrings # ----------------- class InInit(): def __init__(self, foo): """ :type foo: str """ #? str() foo class InClass(): """ :type foo: str """ def __init__(self, foo): #? str() foo class InBoth(): """ :type foo: str """ def __init__(self, foo): """ :type foo: int """ #? str() int() foo def __init__(foo): """ :type foo: str """ #? str() foo # ----------------- # Renamed imports (#507) # ----------------- import datetime from datetime import datetime as datetime_imported def import_issues(foo): """ @type foo: datetime_imported """ #? datetime.datetime() foo jedi-0.11.1/test/completion/functions.py0000664000175000017500000001511313214571123020101 0ustar davedave00000000000000def x(): return #? None x() def array(first_param): #? ['first_param'] first_param return list() #? [] array.first_param #? [] array.first_param. func = array #? [] func.first_param #? list() array() #? ['array'] arr def inputs(param): return param #? list inputs(list) def variable_middle(): var = 3 return var #? int() variable_middle() def variable_rename(param): var = param return var #? int() variable_rename(1) def multi_line_func(a, # comment blabla b): return b #? str() multi_line_func(1,'') def multi_line_call(b): return b multi_line_call( #? int() b=1) # nothing after comma def asdf(a): return a x = asdf(a=1, ) #? int() x # ----------------- # double execution # ----------------- def double_exe(param): return param #? str() variable_rename(double_exe)("") # -> shouldn't work (and throw no error) #? [] variable_rename(list())(). #? [] variable_rename(1)(). # ----------------- # recursions (should ignore) # ----------------- def recursion(a, b): if a: return b else: return recursion(a+".", b+1) # Does not also return int anymore, because we now support operators in simple cases. #? float() recursion("a", 1.0) def other(a): return recursion2(a) def recursion2(a): if random.choice([0, 1]): return other(a) else: if random.choice([0, 1]): return recursion2("") else: return a #? int() str() recursion2(1) # ----------------- # ordering # ----------------- def a(): #? int() b() return b() def b(): return 1 #? int() a() # ----------------- # keyword arguments # ----------------- def func(a=1, b=''): return a, b exe = func(b=list, a=tuple) #? tuple exe[0] #? list exe[1] # ----------------- # default arguments # ----------------- #? int() func()[0] #? str() func()[1] #? float() func(1.0)[0] #? str() func(1.0)[1] #? float() func(a=1.0)[0] #? str() func(a=1.0)[1] #? int() func(b=1.0)[0] #? float() func(b=1.0)[1] #? list func(a=list, b=set)[0] #? set func(a=list, b=set)[1] def func_default(a, b=1): return a, b def nested_default(**kwargs): return func_default(**kwargs) #? float() nested_default(a=1.0)[0] #? int() nested_default(a=1.0)[1] #? str() nested_default(a=1.0, b='')[1] # Defaults should only work if they are defined before - not after. def default_function(a=default): #? return a #? default_function() default = int() def default_function(a=default): #? int() return a #? int() default_function() # ----------------- # closures # ----------------- def a(): l = 3 def func_b(): l = '' #? str() l #? ['func_b'] func_b #? int() l # ----------------- # *args # ----------------- def args_func(*args): #? tuple() return args exe = args_func(1, "") #? int() exe[0] #? str() exe[1] # illegal args (TypeError) #? args_func(*1)[0] # iterator #? int() args_func(*iter([1]))[0] # different types e = args_func(*[1+"", {}]) #? int() str() e[0] #? dict() e[1] _list = [1,""] exe2 = args_func(_list)[0] #? str() exe2[1] exe3 = args_func([1,""])[0] #? str() exe3[1] def args_func(arg1, *args): return arg1, args exe = args_func(1, "", list) #? int() exe[0] #? tuple() exe[1] #? list exe[1][1] # In a dynamic search, both inputs should be given. def simple(a): #? int() str() return a def xargs(*args): return simple(*args) xargs(1) xargs('') # *args without a self symbol def memoize(func): def wrapper(*args, **kwargs): return func(*args, **kwargs) return wrapper class Something(): @memoize def x(self, a, b=1): return a #? int() Something().x(1) # ----------------- # ** kwargs # ----------------- def kwargs_func(**kwargs): #? ['keys'] kwargs.keys #? dict() return kwargs exe = kwargs_func(a=3,b=4.0) #? dict() exe #? int() exe['a'] #? float() exe['b'] #? int() float() exe['c'] a = 'a' exe2 = kwargs_func(**{a:3, 'b':4.0}) #? int() exe2['a'] #? float() exe2['b'] #? int() float() exe2['c'] # ----------------- # *args / ** kwargs # ----------------- def func_without_call(*args, **kwargs): #? tuple() args #? dict() kwargs def fu(a=1, b="", *args, **kwargs): return a, b, args, kwargs exe = fu(list, 1, "", c=set, d="") #? list exe[0] #? int() exe[1] #? tuple() exe[2] #? str() exe[2][0] #? dict() exe[3] #? set exe[3]['c'] def kwargs_iteration(**kwargs): return kwargs for x in kwargs_iteration(d=3): #? float() {'d': 1.0, 'c': '1'}[x] # ----------------- # nested *args # ----------------- def function_args(a, b, c): return b def nested_args(*args): return function_args(*args) def nested_args2(*args, **kwargs): return nested_args(*args) #? int() nested_args('', 1, 1.0, list) #? [] nested_args(''). #? int() nested_args2('', 1, 1.0) #? [] nested_args2(''). # ----------------- # nested **kwargs # ----------------- def nested_kw(**kwargs1): return function_args(**kwargs1) def nested_kw2(**kwargs2): return nested_kw(**kwargs2) # invalid command, doesn't need to return anything #? nested_kw(b=1, c=1.0, list) #? int() nested_kw(b=1) # invalid command, doesn't need to return anything #? nested_kw(d=1.0, b=1, list) #? int() nested_kw(a=3.0, b=1) #? int() nested_kw(b=1, a=r"") #? [] nested_kw(1, ''). #? [] nested_kw(a=''). #? int() nested_kw2(b=1) #? int() nested_kw2(b=1, c=1.0) #? int() nested_kw2(c=1.0, b=1) #? [] nested_kw2(''). #? [] nested_kw2(a=''). #? [] nested_kw2('', b=1). # ----------------- # nested *args/**kwargs # ----------------- def nested_both(*args, **kwargs): return function_args(*args, **kwargs) def nested_both2(*args, **kwargs): return nested_both(*args, **kwargs) # invalid commands, may return whatever. #? list nested_both('', b=1, c=1.0, list) #? list nested_both('', c=1.0, b=1, list) #? [] nested_both(''). #? int() nested_both2('', b=1, c=1.0) #? int() nested_both2('', c=1.0, b=1) #? [] nested_both2(''). # ----------------- # nested *args/**kwargs with a default arg # ----------------- def function_def(a, b, c): return a, b def nested_def(a, *args, **kwargs): return function_def(a, *args, **kwargs) def nested_def2(*args, **kwargs): return nested_def(*args, **kwargs) #? str() nested_def2('', 1, 1.0)[0] #? str() nested_def2('', b=1, c=1.0)[0] #? str() nested_def2('', c=1.0, b=1)[0] #? int() nested_def2('', 1, 1.0)[1] #? int() nested_def2('', b=1, c=1.0)[1] #? int() nested_def2('', c=1.0, b=1)[1] #? [] nested_def2('')[1]. # ----------------- # magic methods # ----------------- def a(): pass #? ['__closure__'] a.__closure__ jedi-0.11.1/test/completion/descriptors.py0000664000175000017500000000653513214571123020442 0ustar davedave00000000000000class RevealAccess(object): """ A data descriptor that sets and returns values normally and prints a message logging their access. """ def __init__(self, initval=None, name='var'): self.val = initval self.name = name def __get__(self, obj, objtype): print('Retrieving', self.name) return self.val def __set__(self, obj, val): print('Updating', self.name) self.val = val def just_a_method(self): pass class C(object): x = RevealAccess(10, 'var "x"') #? RevealAccess() x #? ['just_a_method'] x.just_a_method y = 5.0 def __init__(self): #? int() self.x #? [] self.just_a_method #? [] C.just_a_method m = C() #? int() m.x #? float() m.y #? int() C.x #? [] m.just_a_method #? [] C.just_a_method # ----------------- # properties # ----------------- class B(): @property def r(self): return 1 @r.setter def r(self, value): return '' def t(self): return '' p = property(t) #? [] B().r(). #? int() B().r #? str() B().p #? [] B().p(). class PropClass(): def __init__(self, a): self.a = a @property def ret(self): return self.a @ret.setter def ret(self, value): return 1.0 def ret2(self): return self.a ret2 = property(ret2) @property def nested(self): """ causes recusions in properties, should work """ return self.ret @property def nested2(self): """ causes recusions in properties, should not work """ return self.nested2 @property def join1(self): """ mutual recusion """ return self.join2 @property def join2(self): """ mutual recusion """ return self.join1 #? str() PropClass("").ret #? [] PropClass().ret. #? str() PropClass("").ret2 #? PropClass().ret2 #? int() PropClass(1).nested #? [] PropClass().nested. #? PropClass(1).nested2 #? [] PropClass().nested2. #? PropClass(1).join1 # ----------------- # staticmethod/classmethod # ----------------- class E(object): a = '' def __init__(self, a): self.a = a def f(x): return x f = staticmethod(f) #? f.__func @staticmethod def g(x): return x def s(cls, x): return x s = classmethod(s) @classmethod def t(cls, x): return x @classmethod def u(cls, x): return cls.a e = E(1) #? int() e.f(1) #? int() E.f(1) #? int() e.g(1) #? int() E.g(1) #? int() e.s(1) #? int() E.s(1) #? int() e.t(1) #? int() E.t(1) #? str() e.u(1) #? str() E.u(1) # ----------------- # Conditions # ----------------- from functools import partial class Memoize(): def __init__(self, func): self.func = func def __get__(self, obj, objtype): if obj is None: return self.func return partial(self, obj) def __call__(self, *args, **kwargs): # We don't do caching here, but that's what would normally happen. return self.func(*args, **kwargs) class MemoizeTest(): def __init__(self, x): self.x = x @Memoize def some_func(self): return self.x #? int() MemoizeTest(10).some_func() # Now also call the same function over the class (see if clause above). #? float() MemoizeTest.some_func(MemoizeTest(10.0)) jedi-0.11.1/test/completion/comprehensions.py0000664000175000017500000000601313214571123021124 0ustar davedave00000000000000# ----------------- # list comprehensions # ----------------- # basics: a = ['' for a in [1]] #? str() a[0] #? ['insert'] a.insert a = [a for a in [1]] #? int() a[0] y = 1.0 # Should not leak. [y for y in [3]] #? float() y a = [a for a in (1, 2)] #? int() a[0] a = [a for a,b in [(1,'')]] #? int() a[0] arr = [1,''] a = [a for a in arr] #? int() a[0] #? str() a[1] #? int() str() a[2] a = [a if 1.0 else '' for a in [1] if [1.0]] #? int() str() a[0] # name resolve should be correct left, right = 'a', 'b' left, right = [x for x in (left, right)] #? str() left # with a dict literal #? int() [a for a in {1:'x'}][0] # list comprehensions should also work in combination with functions def listen(arg): for x in arg: #? str() x listen(['' for x in [1]]) #? ([str for x in []])[0] # ----------------- # nested list comprehensions # ----------------- b = [a for arr in [[1, 1.0]] for a in arr] #? int() b[0] #? float() b[1] b = [arr for arr in [[1, 1.0]] for a in arr] #? int() b[0][0] #? float() b[1][1] b = [a for arr in [[1]] if '' for a in arr if ''] #? int() b[0] b = [b for arr in [[[1.0]]] for a in arr for b in a] #? float() b[0] #? str() [x for x in 'chr'][0] # jedi issue #26 #? list() a = [[int(v) for v in line.strip().split() if v] for line in ["123", str(), "123"] if line] #? list() a[0] #? int() a[0][0] # ----------------- # generator comprehensions # ----------------- left, right = (i for i in (1, '')) #? int() left #? str() right gen = (i for i in (1,)) #? int() next(gen) #? gen[0] gen = (a for arr in [[1.0]] for a in arr) #? float() next(gen) #? int() (i for i in (1,)).send() # issues with different formats left, right = (i for i in ('1', 2)) #? str() left #? int() right # ----------------- # name resolution in comprehensions. # ----------------- def x(): """Should not try to resolve to the if hio, which was a bug.""" #? 22 [a for a in h if hio] if hio: pass # ----------------- # slices # ----------------- #? list() foo = [x for x in [1, '']][:1] #? int() foo[0] #? str() foo[1] # ----------------- # In class # ----------------- class X(): def __init__(self, bar): self.bar = bar def foo(self): x = [a for a in self.bar][0] #? int() x return x #? int() X([1]).foo() # set/dict comprehensions were introduced in 2.7, therefore: # python >= 2.7 # ----------------- # dict comprehensions # ----------------- #? int() list({a - 1: 3 for a in [1]})[0] d = {a - 1: b for a, b in {1: 'a', 3: 1.0}.items()} #? int() list(d)[0] #? str() float() d.values()[0] #? str() d[0] #? float() str() d[1] #? float() d[2] # ----------------- # set comprehensions # ----------------- #? set() {a - 1 for a in [1]} #? set() {a for a in range(10)} #? int() [x for x in {a for a in range(10)}][0] #? int() {a for a in range(10)}.pop() #? float() str() {b for a in [[3.0], ['']] for b in a}.pop() #? int() next(iter({a for a in range(10)})) # with a set literal (also doesn't work in 2.6). #? int() [a for a in {1, 2, 3}][0] jedi-0.11.1/test/completion/context.py0000664000175000017500000000101013214571123017544 0ustar davedave00000000000000class Base(): myfoobar = 3 class X(Base): def func(self, foo): pass class Y(X): def actual_function(self): pass #? [] def actual_function #? ['func'] def f #? [] def __class__ #? ['__repr__'] def __repr__ #? [] def mro #? ['myfoobar'] myfoobar #? [] myfoobar # ----------------- # Inheritance # ----------------- class Super(): enabled = True if enabled: yo_dude = 4 class Sub(Super): #? ['yo_dude'] yo_dud jedi-0.11.1/test/completion/decorators.py0000664000175000017500000001211613214571123020236 0ustar davedave00000000000000# ----------------- # normal decorators # ----------------- def decorator(func): def wrapper(*args): return func(1, *args) return wrapper @decorator def decorated(a,b): return a,b exe = decorated(set, '') #? set exe[1] #? int() exe[0] # more complicated with args/kwargs def dec(func): def wrapper(*args, **kwargs): return func(*args, **kwargs) return wrapper @dec def fu(a, b, c, *args, **kwargs): return a, b, c, args, kwargs exe = fu(list, c=set, b=3, d='') #? list exe[0] #? int() exe[1] #? set exe[2] #? [] exe[3][0]. #? str() exe[4]['d'] exe = fu(list, set, 3, '', d='') #? str() exe[3][0] # ----------------- # multiple decorators # ----------------- def dec2(func2): def wrapper2(first_arg, *args2, **kwargs2): return func2(first_arg, *args2, **kwargs2) return wrapper2 @dec2 @dec def fu2(a, b, c, *args, **kwargs): return a, b, c, args, kwargs exe = fu2(list, c=set, b=3, d='str') #? list exe[0] #? int() exe[1] #? set exe[2] #? [] exe[3][0]. #? str() exe[4]['d'] # ----------------- # Decorator is a class # ----------------- def same_func(func): return func class Decorator(object): def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): return self.func(1, *args, **kwargs) @Decorator def nothing(a,b,c): return a,b,c #? int() nothing("")[0] #? str() nothing("")[1] @same_func @Decorator def nothing(a,b,c): return a,b,c #? int() nothing("")[0] class MethodDecoratorAsClass(): class_var = 3 @Decorator def func_without_self(arg, arg2): return arg, arg2 @Decorator def func_with_self(self, arg): return self.class_var #? int() MethodDecoratorAsClass().func_without_self('')[0] #? str() MethodDecoratorAsClass().func_without_self('')[1] #? MethodDecoratorAsClass().func_with_self(1) class SelfVars(): """Init decorator problem as an instance, #247""" @Decorator def __init__(self): """ __init__ decorators should be ignored when looking up variables in the class. """ self.c = list @Decorator def shouldnt_expose_var(not_self): """ Even though in real Python this shouldn't expose the variable, in this case Jedi exposes the variable, because these kind of decorators are normally descriptors, which SHOULD be exposed (at least 90%). """ not_self.b = 1.0 def other_method(self): #? float() self.b #? list self.c # ----------------- # not found decorators (are just ignored) # ----------------- @not_found_decorator def just_a_func(): return 1 #? int() just_a_func() #? ['__closure__'] just_a_func.__closure__ class JustAClass: @not_found_decorator2 def a(self): return 1 #? ['__call__'] JustAClass().a.__call__ #? int() JustAClass().a() #? ['__call__'] JustAClass.a.__call__ #? int() JustAClass.a() # ----------------- # illegal decorators # ----------------- class DecoratorWithoutCall(): def __init__(self, func): self.func = func @DecoratorWithoutCall def f(): return 1 # cannot be resolved - should be ignored @DecoratorWithoutCall(None) def g(): return 1 #? f() #? int() g() class X(): @str def x(self): pass def y(self): #? str() self.x #? self.x() # ----------------- # method decorators # ----------------- def dec(f): def wrapper(s): return f(s) return wrapper class MethodDecorators(): _class_var = 1 def __init__(self): self._method_var = '' @dec def constant(self): return 1.0 @dec def class_var(self): return self._class_var @dec def method_var(self): return self._method_var #? float() MethodDecorators().constant() #? int() MethodDecorators().class_var() #? str() MethodDecorators().method_var() class Base(): @not_existing def __init__(self): pass @not_existing def b(self): return '' @dec def c(self): return 1 class MethodDecoratorDoesntExist(Base): """#272 github: combination of method decorators and super()""" def a(self): #? super().__init__() #? str() super().b() #? int() super().c() #? float() self.d() @doesnt_exist def d(self): return 1.0 # ----------------- # others # ----------------- def memoize(function): def wrapper(*args): if random.choice([0, 1]): pass else: rv = function(*args) return rv return wrapper @memoize def follow_statement(stmt): return stmt # here we had problems with the else clause, because the parent was not right. #? int() follow_statement(1) # ----------------- # class decorators # ----------------- # class decorators should just be ignored @should_ignore class A(): def ret(self): return 1 #? int() A().ret() # ----------------- # On decorator completions # ----------------- import abc #? ['abc'] @abc #? ['abstractmethod'] @abc.abstractmethod jedi-0.11.1/test/completion/generators.py0000664000175000017500000000552113214571123020244 0ustar davedave00000000000000# ----------------- # yield statement # ----------------- def gen(): if random.choice([0, 1]): yield 1 else: yield "" gen_exe = gen() #? int() str() next(gen_exe) #? int() str() list next(gen_exe, list) def gen_ret(value): yield value #? int() next(gen_ret(1)) #? [] next(gen_ret()). # generators evaluate to true if cast by bool. a = '' if gen_ret(): a = 3 #? int() a # ----------------- # generators should not be indexable # ----------------- def get(param): if random.choice([0, 1]): yield 1 else: yield "" #? [] get()[0]. # ----------------- # __iter__ # ----------------- for a in get(): #? int() str() a class Get(): def __iter__(self): if random.choice([0, 1]): yield 1 else: yield "" b = [] for a in Get(): #? int() str() a b += [a] #? list() b #? int() str() b[0] g = iter(Get()) #? int() str() next(g) g = iter([1.0]) #? float() next(g) # ----------------- # __next__ # ----------------- class Counter: def __init__(self, low, high): self.current = low self.high = high def __iter__(self): return self def next(self): """ need to have both __next__ and next, because of py2/3 testing """ return self.__next__() def __next__(self): if self.current > self.high: raise StopIteration else: self.current += 1 return self.current - 1 for c in Counter(3, 8): #? int() print c # ----------------- # tuple assignments # ----------------- def gen(): if random.choice([0,1]): yield 1, "" else: yield 2, 1.0 a, b = next(gen()) #? int() a #? str() float() b def simple(): if random.choice([0, 1]): yield 1 else: yield "" a, b = simple() #? int() str() a # For now this is ok. #? b def simple2(): yield 1 yield "" a, b = simple2() #? int() a #? str() b a, = (a for a in [1]) #? int() a # ----------------- # More complicated access # ----------------- # `close` is a method wrapper. #? ['__call__'] gen().close.__call__ #? gen().throw() #? ['co_consts'] gen().gi_code.co_consts #? [] gen.gi_code.co_consts # `send` is also a method wrapper. #? ['__call__'] gen().send.__call__ #? tuple() gen().send() #? gen()() # ----------------- # empty yield # ----------------- def x(): yield #? None next(x()) #? gen() x() def x(): for i in range(3): yield #? None next(x()) # ----------------- # yield in expression # ----------------- def x(): a= [(yield 1)] #? int() next(x()) # ----------------- # yield from # ----------------- # python >= 3.3 def yield_from(): yield from iter([1]) #? int() next(yield_from()) def yield_from_multiple(): yield from iter([1]) yield str() x, y = yield_from_multiple() #? int() x #? str() y jedi-0.11.1/test/completion/definition.py0000664000175000017500000000206013214571123020216 0ustar davedave00000000000000""" Fallback to callee definition when definition not found. - https://github.com/davidhalter/jedi/issues/131 - https://github.com/davidhalter/jedi/pull/149 """ """Parenthesis closed at next line.""" # Ignore these definitions for a little while, not sure if we really want them. # python <= 2.5 #? isinstance isinstance( ) #? isinstance isinstance( ) #? isinstance isinstance(None, ) #? isinstance isinstance(None, ) """Parenthesis closed at same line.""" # Note: len('isinstance(') == 11 #? 11 isinstance isinstance() # Note: len('isinstance(None,') == 16 ##? 16 isinstance isinstance(None,) # Note: len('isinstance(None,') == 16 ##? 16 isinstance isinstance(None, ) # Note: len('isinstance(None, ') == 17 ##? 17 isinstance isinstance(None, ) # Note: len('isinstance( ') == 12 ##? 12 isinstance isinstance( ) """Unclosed parenthesis.""" #? isinstance isinstance( def x(): pass # acts like EOF ##? isinstance isinstance( def x(): pass # acts like EOF #? isinstance isinstance(None, def x(): pass # acts like EOF ##? isinstance isinstance(None, jedi-0.11.1/test/completion/pep0484_basic.py0000664000175000017500000000517713214571123020347 0ustar davedave00000000000000""" Pep-0484 type hinting """ # python >= 3.2 class A(): pass def function_parameters(a: A, b, c: str, d: int, e: str, f: str, g: int=4): """ :param e: if docstring and annotation agree, only one should be returned :type e: str :param f: if docstring and annotation disagree, both should be returned :type f: int """ #? A() a #? b #? str() c #? int() d #? str() e #? int() str() f # int() g def return_unspecified(): pass #? return_unspecified() def return_none() -> None: """ Return type None means the same as no return type as far as jedi is concerned """ pass #? return_none() def return_str() -> str: pass #? str() return_str() def return_custom_class() -> A: pass #? A() return_custom_class() def return_annotation_and_docstring() -> str: """ :rtype: int """ pass #? str() int() return_annotation_and_docstring() def return_annotation_and_docstring_different() -> str: """ :rtype: str """ pass #? str() return_annotation_and_docstring_different() def annotation_forward_reference(b: "B") -> "B": #? B() b #? ["test_element"] annotation_forward_reference(1).t class B: test_element = 1 pass #? B() annotation_forward_reference(1) class SelfReference: test_element = 1 def test_method(self, x: "SelfReference") -> "SelfReference": #? SelfReference() x #? ["test_element", "test_method"] self.t #? ["test_element", "test_method"] x.t #? ["test_element", "test_method"] self.test_method(1).t #? SelfReference() SelfReference().test_method() def function_with_non_pep_0484_annotation( x: "I can put anything here", xx: "", yy: "\r\n\0;+*&^564835(---^&*34", y: 3 + 3, zz: float) -> int("42"): # infers int from function call #? int() x # infers int from function call #? int() xx # infers int from function call #? int() yy # infers str from function call #? str() y #? float() zz #? function_with_non_pep_0484_annotation(1, 2, 3, "force string") def function_forward_reference_dynamic( x: return_str_type(), y: "return_str_type()") -> None: #? x #? str() y def return_str_type(): return str X = str def function_with_assined_class_in_reference(x: X, y: "Y"): #? str() x #? int() y Y = int def just_because_we_can(x: "flo" + "at"): #? float() x def keyword_only(a: str, *, b: str): #? ['startswith'] a.startswi #? ['startswith'] b.startswi jedi-0.11.1/test/completion/dynamic_params.py0000664000175000017500000000360113214571123021057 0ustar davedave00000000000000""" This is used for dynamic object completion. Jedi tries to guess param types with a backtracking approach. """ def func(a, default_arg=2): #? int() default_arg #? int() str() return a #? int() func(1) func int(1) + (int(2))+ func('') # Again the same function, but with another call. def func(a): #? float() return a func(1.0) # Again the same function, but with no call. def func(a): #? return a def func(a): #? float() return a str(func(1.0)) # ----------------- # *args, **args # ----------------- def arg(*args): #? tuple() args #? int() args[0] arg(1,"") # ----------------- # decorators # ----------------- def def_func(f): def wrapper(*args, **kwargs): return f(*args, **kwargs) return wrapper @def_func def func(c): #? str() return c #? str() func("something") @def_func def func(c=1): #? float() return c func(1.0) def tricky_decorator(func): def wrapper(*args): return func(1, *args) return wrapper @tricky_decorator def func(a, b): #? int() a #? float() b func(1.0) # Needs to be here, because in this case func is an import -> shouldn't lead to # exceptions. import sys as func func.sys # ----------------- # classes # ----------------- class A(): def __init__(self, a): #? str() a A("s") class A(): def __init__(self, a): #? int() a self.a = a def test(self, a): #? float() a self.c = self.test2() def test2(self): #? int() return self.a def test3(self): #? int() self.test2() #? int() self.c A(3).test(2.0) A(3).test2() # ----------------- # comprehensions # ----------------- def from_comprehension(foo): #? int() float() return foo [from_comprehension(1.0) for n in (1,)] [from_comprehension(n) for n in (1,)] jedi-0.11.1/test/completion/stdlib.py0000664000175000017500000000657313214571123017364 0ustar davedave00000000000000""" std library stuff """ # ----------------- # builtins # ----------------- arr = [''] #? str() sorted(arr)[0] #? str() next(reversed(arr)) next(reversed(arr)) # should not fail if there's no return value. def yielder(): yield None #? None next(reversed(yielder())) # empty reversed should not raise an error #? next(reversed()) #? str() next(open('')) #? int() {'a':2}.setdefault('a', 3) # Compiled classes should have the meta class attributes. #? ['__itemsize__'] tuple.__itemsize__ # ----------------- # type() calls with one parameter # ----------------- #? int type(1) #? int type(int()) #? type type(int) #? type type(type) #? list type([]) def x(): yield 1 generator = type(x()) #? generator type(x for x in []) #? type(x) type(lambda: x) import math import os #? type(os) type(math) class X(): pass #? type type(X) with open('foo') as f: for line in f.readlines(): #? str() line # ----------------- # enumerate # ----------------- for i, j in enumerate(["as", "ad"]): #? int() i #? str() j # ----------------- # re # ----------------- import re c = re.compile(r'a') # re.compile should not return str -> issue #68 #? [] c.startswith #? int() c.match().start() #? int() re.match(r'a', 'a').start() for a in re.finditer('a', 'a'): #? int() a.start() #? str() re.sub('a', 'a') # ----------------- # ref # ----------------- import weakref #? int() weakref.proxy(1) #? weakref.ref() weakref.ref(1) #? int() weakref.ref(1)() # ----------------- # functools # ----------------- import functools basetwo = functools.partial(int, base=2) #? int() basetwo() def function(a, b): return a, b a = functools.partial(function, 0) #? int() a('')[0] #? str() a('')[1] kw = functools.partial(function, b=1.0) tup = kw(1) #? int() tup[0] #? float() tup[1] def my_decorator(f): @functools.wraps(f) def wrapper(*args, **kwds): return f(*args, **kwds) return wrapper @my_decorator def example(a): return a #? str() example('') # ----------------- # sqlite3 (#84) # ----------------- import sqlite3 #? sqlite3.Connection() con = sqlite3.connect() #? sqlite3.Cursor() c = con.cursor() #? sqlite3.Row() row = c.fetchall()[0] #? str() row.keys()[0] def huhu(db): """ :type db: sqlite3.Connection :param db: the db connection """ #? sqlite3.Connection() db # ----------------- # hashlib # ----------------- import hashlib #? ['md5'] hashlib.md5 # ----------------- # copy # ----------------- import copy #? int() copy.deepcopy(1) #? copy.copy() # ----------------- # json # ----------------- # We don't want any results for json, because it depends on IO. import json #? json.load('asdf') #? json.loads('[1]') # ----------------- # random # ----------------- import random class A(object): def say(self): pass class B(object): def shout(self): pass cls = random.choice([A, B]) #? ['say', 'shout'] cls().s # ----------------- # random # ----------------- import zipfile z = zipfile.ZipFile("foo") # It's too slow. So we don't run it at the moment. ##? ['upper'] z.read('name').upper # ----------------- # contextlib # ----------------- import contextlib with contextlib.closing('asd') as string: #? str() string # ----------------- # shlex # ----------------- # Github issue #929 import shlex qsplit = shlex.split("foo, ferwerwerw werw werw e") for part in qsplit: #? str() None part jedi-0.11.1/test/completion/import_tree/0000775000175000017500000000000013214571377020062 5ustar davedave00000000000000jedi-0.11.1/test/completion/import_tree/classes.py0000664000175000017500000000020313214571123022051 0ustar davedave00000000000000blub = 1 class Config2(): pass class BaseClass(): mode = Config2() if isinstance(whaat, int): mode2 = whaat jedi-0.11.1/test/completion/import_tree/mod1.py0000664000175000017500000000007513214571123021263 0ustar davedave00000000000000a = 1 from import_tree.random import a as c foobarbaz = 3.0 jedi-0.11.1/test/completion/import_tree/flow_import.py0000664000175000017500000000004713214571123022763 0ustar davedave00000000000000if name: env = 1 else: env = 2 jedi-0.11.1/test/completion/import_tree/__init__.py0000664000175000017500000000012013214571123022151 0ustar davedave00000000000000a = '' from . import invisible_pkg the_pkg = invisible_pkg invisible_pkg = 1 jedi-0.11.1/test/completion/import_tree/recurse_class1.py0000664000175000017500000000012013214571123023330 0ustar davedave00000000000000import recurse_class2 class C(recurse_class2.C): def a(self): pass jedi-0.11.1/test/completion/import_tree/recurse_class2.py0000664000175000017500000000007313214571123023340 0ustar davedave00000000000000import recurse_class1 class C(recurse_class1.C): pass jedi-0.11.1/test/completion/import_tree/random.py0000664000175000017500000000011213214571123021673 0ustar davedave00000000000000""" Here because random is also a builtin module. """ a = set foobar = 0 jedi-0.11.1/test/completion/import_tree/mod2.py0000664000175000017500000000003313214571123021256 0ustar davedave00000000000000from . import mod1 as fake jedi-0.11.1/test/completion/import_tree/rename2.py0000664000175000017500000000007713214571123021756 0ustar davedave00000000000000""" used for renaming tests """ from rename1 import abc abc jedi-0.11.1/test/completion/import_tree/rename1.py0000664000175000017500000000005113214571123021745 0ustar davedave00000000000000""" used for renaming tests """ abc = 3 jedi-0.11.1/test/completion/import_tree/invisible_pkg.py0000664000175000017500000000030113214571123023240 0ustar davedave00000000000000""" It should not be possible to import this pkg except for the import_tree itself, because it is overwritten there. (It would be possible with a sys.path modification, though). """ foo = 1.0 jedi-0.11.1/test/completion/import_tree/pkg/0000775000175000017500000000000013214571377020643 5ustar davedave00000000000000jedi-0.11.1/test/completion/import_tree/pkg/mod1.py0000664000175000017500000000004513214571123022041 0ustar davedave00000000000000a = 1.0 from ..random import foobar jedi-0.11.1/test/completion/import_tree/pkg/__init__.py0000664000175000017500000000003513214571123022737 0ustar davedave00000000000000a = list from math import * jedi-0.11.1/test/test_api/0000775000175000017500000000000013214571377015170 5ustar davedave00000000000000jedi-0.11.1/test/test_api/test_interpreter.py0000664000175000017500000002045613214571123021140 0ustar davedave00000000000000""" Tests of ``jedi.api.Interpreter``. """ import pytest import jedi from jedi._compatibility import is_py33, py_version from jedi.evaluate.compiled import mixed if py_version > 30: def exec_(source, global_map): exec(source, global_map) else: eval(compile("""def exec_(source, global_map): exec source in global_map """, 'blub', 'exec')) class _GlobalNameSpace(): class SideEffectContainer(): pass def get_completion(source, namespace): i = jedi.Interpreter(source, [namespace]) completions = i.completions() assert len(completions) == 1 return completions[0] def test_builtin_details(): import keyword class EmptyClass: pass variable = EmptyClass() def func(): pass cls = get_completion('EmptyClass', locals()) var = get_completion('variable', locals()) f = get_completion('func', locals()) m = get_completion('keyword', locals()) assert cls.type == 'class' assert var.type == 'instance' assert f.type == 'function' assert m.type == 'module' def test_numpy_like_non_zero(): """ Numpy-like array can't be caster to bool and need to be compacre with `is`/`is not` and not `==`/`!=` """ class NumpyNonZero: def __zero__(self): raise ValueError('Numpy arrays would raise and tell you to use .any() or all()') def __bool__(self): raise ValueError('Numpy arrays would raise and tell you to use .any() or all()') class NumpyLike: def __eq__(self, other): return NumpyNonZero() def something(self): pass x = NumpyLike() d = {'a': x} # just assert these do not raise. They (strangely) trigger different # codepath get_completion('d["a"].some', {'d':d}) get_completion('x.some', {'x':x}) def test_nested_resolve(): class XX(): def x(): pass cls = get_completion('XX', locals()) func = get_completion('XX.x', locals()) assert (func.line, func.column) == (cls.line + 1, 12) def test_side_effect_completion(): """ In the repl it's possible to cause side effects that are not documented in Python code, however we want references to Python code as well. Therefore we need some mixed kind of magic for tests. """ _GlobalNameSpace.SideEffectContainer.foo = 1 side_effect = get_completion('SideEffectContainer', _GlobalNameSpace.__dict__) # It's a class that contains MixedObject. context, = side_effect._name.infer() assert isinstance(context, mixed.MixedObject) foo = get_completion('SideEffectContainer.foo', _GlobalNameSpace.__dict__) assert foo.name == 'foo' def _assert_interpreter_complete(source, namespace, completions, **kwds): script = jedi.Interpreter(source, [namespace], **kwds) cs = script.completions() actual = [c.name for c in cs] assert sorted(actual) == sorted(completions) def test_complete_raw_function(): from os.path import join _assert_interpreter_complete('join("").up', locals(), ['upper']) def test_complete_raw_function_different_name(): from os.path import join as pjoin _assert_interpreter_complete('pjoin("").up', locals(), ['upper']) def test_complete_raw_module(): import os _assert_interpreter_complete('os.path.join("a").up', locals(), ['upper']) def test_complete_raw_instance(): import datetime dt = datetime.datetime(2013, 1, 1) completions = ['time', 'timetz', 'timetuple'] if is_py33: completions += ['timestamp'] _assert_interpreter_complete('(dt - dt).ti', locals(), completions) def test_list(): array = ['haha', 1] _assert_interpreter_complete('array[0].uppe', locals(), ['upper']) _assert_interpreter_complete('array[0].real', locals(), []) # something different, no index given, still just return the right _assert_interpreter_complete('array[int].real', locals(), ['real']) _assert_interpreter_complete('array[int()].real', locals(), ['real']) # inexistent index _assert_interpreter_complete('array[2].upper', locals(), ['upper']) def test_slice(): class Foo1(): bar = [] baz = 'xbarx' _assert_interpreter_complete('getattr(Foo1, baz[1:-1]).append', locals(), ['append']) def test_getitem_side_effects(): class Foo2(): def __getitem__(self, index): # Possible side effects here, should therefore not call this. if True: raise NotImplementedError() return index foo = Foo2() _assert_interpreter_complete('foo["asdf"].upper', locals(), ['upper']) def test_property_error_oldstyle(): lst = [] class Foo3(): @property def bar(self): lst.append(1) raise ValueError foo = Foo3() _assert_interpreter_complete('foo.bar', locals(), ['bar']) _assert_interpreter_complete('foo.bar.baz', locals(), []) # There should not be side effects assert lst == [] def test_property_error_newstyle(): lst = [] class Foo3(object): @property def bar(self): lst.append(1) raise ValueError foo = Foo3() _assert_interpreter_complete('foo.bar', locals(), ['bar']) _assert_interpreter_complete('foo.bar.baz', locals(), []) # There should not be side effects assert lst == [] def test_param_completion(): def foo(bar): pass lambd = lambda xyz: 3 _assert_interpreter_complete('foo(bar', locals(), ['bar']) # TODO we're not yet using the Python3.5 inspect.signature, yet. assert not jedi.Interpreter('lambd(xyz', [locals()]).completions() def test_endless_yield(): lst = [1] * 10000 # If iterating over lists it should not be possible to take an extremely # long time. _assert_interpreter_complete('list(lst)[9000].rea', locals(), ['real']) @pytest.mark.skipif('py_version < 33', reason='inspect.signature was created in 3.3.') def test_completion_params(): foo = lambda a, b=3: None script = jedi.Interpreter('foo', [locals()]) c, = script.completions() assert [p.name for p in c.params] == ['a', 'b'] assert c.params[0]._goto_definitions() == [] t, = c.params[1]._goto_definitions() assert t.name == 'int' @pytest.mark.skipif('py_version < 33', reason='inspect.signature was created in 3.3.') def test_completion_param_annotations(): # Need to define this function not directly in Python. Otherwise Jedi is to # clever and uses the Python code instead of the signature object. code = 'def foo(a: 1, b: str, c: int = 1.0): pass' exec_(code, locals()) script = jedi.Interpreter('foo', [locals()]) c, = script.completions() a, b, c = c.params assert a._goto_definitions() == [] assert [d.name for d in b._goto_definitions()] == ['str'] assert set([d.name for d in c._goto_definitions()]) == set(['int', 'float']) def test_more_complex_instances(): class Something: def foo(self, other): return self class Base(): def wow(self): return Something() #script = jedi.Interpreter('Base().wow().foo', [locals()]) #c, = script.completions() #assert c.name == 'foo' x = Base() script = jedi.Interpreter('x.wow().foo', [locals()]) c, = script.completions() assert c.name == 'foo' def test_repr_execution_issue(): """ Anticipate inspect.getfile executing a __repr__ of all kinds of objects. See also #919. """ class ErrorRepr: def __repr__(self): raise Exception('xyz') er = ErrorRepr() script = jedi.Interpreter('er', [locals()]) d, = script.goto_definitions() assert d.name == 'ErrorRepr' assert d.type == 'instance' jedi-0.11.1/test/test_api/test_usages.py0000664000175000017500000000025213214571123020054 0ustar davedave00000000000000import jedi def test_import_usage(): s = jedi.Script("from .. import foo", line=1, column=18, path="foo.py") assert [usage.line for usage in s.usages()] == [1] jedi-0.11.1/test/test_api/test_api.py0000664000175000017500000001537713214571123017354 0ustar davedave00000000000000""" Test all things related to the ``jedi.api`` module. """ import os from textwrap import dedent from jedi import api from jedi._compatibility import is_py3 from pytest import raises from parso import cache def test_preload_modules(): def check_loaded(*modules): # +1 for None module (currently used) grammar_cache = next(iter(cache.parser_cache.values())) assert len(grammar_cache) == len(modules) + 1 for i in modules: assert [i in k for k in grammar_cache.keys() if k is not None] old_cache = cache.parser_cache.copy() cache.parser_cache.clear() try: api.preload_module('sys') check_loaded() # compiled (c_builtin) modules shouldn't be in the cache. api.preload_module('types', 'token') check_loaded('types', 'token') finally: cache.parser_cache.update(old_cache) def test_empty_script(): assert api.Script('') def test_line_number_errors(): """ Script should raise a ValueError if line/column numbers are not in a valid range. """ s = 'hello' # lines with raises(ValueError): api.Script(s, 2, 0) with raises(ValueError): api.Script(s, 0, 0) # columns with raises(ValueError): api.Script(s, 1, len(s) + 1) with raises(ValueError): api.Script(s, 1, -1) # ok api.Script(s, 1, 0) api.Script(s, 1, len(s)) def _check_number(source, result='float'): completions = api.Script(source).completions() assert completions[0].parent().name == result def test_completion_on_number_literals(): # No completions on an int literal (is a float). assert [c.name for c in api.Script('1.').completions()] \ == ['and', 'if', 'in', 'is', 'not', 'or'] # Multiple points after an int literal basically mean that there's a float # and a call after that. _check_number('1..') _check_number('1.0.') # power notation _check_number('1.e14.') _check_number('1.e-3.') _check_number('9e3.') assert api.Script('1.e3..').completions() == [] assert api.Script('1.e-13..').completions() == [] def test_completion_on_hex_literals(): assert api.Script('0x1..').completions() == [] _check_number('0x1.', 'int') # hexdecimal # Completing binary literals doesn't work if they are not actually binary # (invalid statements). assert api.Script('0b2.b').completions() == [] _check_number('0b1.', 'int') # binary _check_number('0x2e.', 'int') _check_number('0xE7.', 'int') _check_number('0xEa.', 'int') # theoretically, but people can just check for syntax errors: #assert api.Script('0x.').completions() == [] def test_completion_on_complex_literals(): assert api.Script('1j..').completions() == [] _check_number('1j.', 'complex') _check_number('44.j.', 'complex') _check_number('4.0j.', 'complex') # No dot no completion - I thought, but 4j is actually a literall after # which a keyword like or is allowed. Good times, haha! assert (set([c.name for c in api.Script('4j').completions()]) == set(['if', 'and', 'in', 'is', 'not', 'or'])) def test_goto_assignments_on_non_name(): assert api.Script('for').goto_assignments() == [] assert api.Script('assert').goto_assignments() == [] if is_py3: assert api.Script('True').goto_assignments() == [] else: # In Python 2.7 True is still a name. assert api.Script('True').goto_assignments()[0].description == 'instance True' def test_goto_definitions_on_non_name(): assert api.Script('import x', column=0).goto_definitions() == [] def test_goto_definitions_on_generator(): def_, = api.Script('def x(): yield 1\ny=x()\ny').goto_definitions() assert def_.name == 'generator' def test_goto_definition_not_multiple(): """ There should be only one Definition result if it leads back to the same origin (e.g. instance method) """ s = dedent('''\ import random class A(): def __init__(self, a): self.a = 3 def foo(self): pass if random.randint(0, 1): a = A(2) else: a = A(1) a''') assert len(api.Script(s).goto_definitions()) == 1 def test_usage_description(): descs = [u.description for u in api.Script("foo = ''; foo").usages()] assert set(descs) == set(["foo = ''", 'foo']) def test_get_line_code(): def get_line_code(source, line=None, **kwargs): return api.Script(source, line=line).completions()[0].get_line_code(**kwargs) # On builtin assert get_line_code('') == '' # On custom code first_line = 'def foo():\n' line = ' foo' code = '%s%s' % (first_line, line) assert get_line_code(code) == first_line # With before/after code = code + '\nother_line' assert get_line_code(code, line=2) == first_line assert get_line_code(code, line=2, after=1) == first_line + line + '\n' assert get_line_code(code, line=2, after=2, before=1) == code # Should just be the whole thing, since there are no more lines on both # sides. assert get_line_code(code, line=2, after=3, before=3) == code def test_goto_assignments_follow_imports(): code = dedent(""" import inspect inspect.isfunction""") definition, = api.Script(code, column=0).goto_assignments(follow_imports=True) assert 'inspect.py' in definition.module_path assert (definition.line, definition.column) == (1, 0) definition, = api.Script(code).goto_assignments(follow_imports=True) assert 'inspect.py' in definition.module_path assert (definition.line, definition.column) > (1, 0) code = '''def param(p): pass\nparam(1)''' start_pos = 1, len('def param(') script = api.Script(code, *start_pos) definition, = script.goto_assignments(follow_imports=True) assert (definition.line, definition.column) == start_pos assert definition.name == 'p' result, = definition.goto_assignments() assert result.name == 'p' result, = definition._goto_definitions() assert result.name == 'int' result, = result._goto_definitions() assert result.name == 'int' definition, = script.goto_assignments() assert (definition.line, definition.column) == start_pos d, = api.Script('a = 1\na').goto_assignments(follow_imports=True) assert d.name == 'a' def test_goto_module(): def check(line, expected): script = api.Script(path=path, line=line) module, = script.goto_assignments() assert module.module_path == expected base_path = os.path.join(os.path.dirname(__file__), 'simple_import') path = os.path.join(base_path, '__init__.py') check(1, os.path.join(base_path, 'module.py')) check(5, os.path.join(base_path, 'module2.py')) jedi-0.11.1/test/test_api/import_tree_for_usages/0000775000175000017500000000000013214571377021736 5ustar davedave00000000000000jedi-0.11.1/test/test_api/import_tree_for_usages/b.py0000664000175000017500000000002413214571123022512 0ustar davedave00000000000000def bar(): pass jedi-0.11.1/test/test_api/import_tree_for_usages/__init__.py0000664000175000017500000000005513214571123024034 0ustar davedave00000000000000""" An import tree, for testing usages. """ jedi-0.11.1/test/test_api/import_tree_for_usages/a.py0000664000175000017500000000005013214571123022510 0ustar davedave00000000000000from . import b def foo(): b.bar() jedi-0.11.1/test/test_api/test_defined_names.py0000664000175000017500000000507413214571123021355 0ustar davedave00000000000000""" Tests for `api.defined_names`. """ from textwrap import dedent from jedi import names from ..helpers import TestCase class TestDefinedNames(TestCase): def assert_definition_names(self, definitions, names_): assert [d.name for d in definitions] == names_ def check_defined_names(self, source, names_): definitions = names(dedent(source)) self.assert_definition_names(definitions, names_) return definitions def test_get_definitions_flat(self): self.check_defined_names(""" import module class Class: pass def func(): pass data = None """, ['module', 'Class', 'func', 'data']) def test_dotted_assignment(self): self.check_defined_names(""" x = Class() x.y.z = None """, ['x', 'z']) # TODO is this behavior what we want? def test_multiple_assignment(self): self.check_defined_names(""" x = y = None """, ['x', 'y']) def test_multiple_imports(self): self.check_defined_names(""" from module import a, b from another_module import * """, ['a', 'b']) def test_nested_definitions(self): definitions = self.check_defined_names(""" class Class: def f(): pass def g(): pass """, ['Class']) subdefinitions = definitions[0].defined_names() self.assert_definition_names(subdefinitions, ['f', 'g']) self.assertEqual([d.full_name for d in subdefinitions], ['__main__.Class.f', '__main__.Class.g']) def test_nested_class(self): definitions = self.check_defined_names(""" class L1: class L2: class L3: def f(): pass def f(): pass def f(): pass def f(): pass """, ['L1', 'f']) subdefs = definitions[0].defined_names() subsubdefs = subdefs[0].defined_names() self.assert_definition_names(subdefs, ['L2', 'f']) self.assert_definition_names(subsubdefs, ['L3', 'f']) self.assert_definition_names(subsubdefs[0].defined_names(), ['f']) def test_follow_imports(): # github issue #344 imp = names('import datetime')[0] assert imp.name == 'datetime' datetime_names = [str(d.name) for d in imp.defined_names()] assert 'timedelta' in datetime_names def test_names_twice(): source = dedent(''' def lol(): pass ''') defs = names(source=source) assert defs[0].defined_names() == [] jedi-0.11.1/test/test_api/__init__.py0000664000175000017500000000000013214571123017254 0ustar davedave00000000000000jedi-0.11.1/test/test_api/test_api_classes_follow_definition.py0000664000175000017500000000376013214571123024654 0ustar davedave00000000000000from itertools import chain import jedi from ..helpers import cwd_at def test_import_empty(): """ github #340, return the full word. """ completion = jedi.Script("import ").completions()[0] definition = completion.follow_definition()[0] assert definition def check_follow_definition_types(source): # nested import completions = jedi.Script(source, path='some_path.py').completions() defs = chain.from_iterable(c.follow_definition() for c in completions) return [d.type for d in defs] def test_follow_import_incomplete(): """ Completion on incomplete imports should always take the full completion to do any evaluation. """ datetime = check_follow_definition_types("import itertool") assert datetime == ['module'] # empty `from * import` parts itert = jedi.Script("from itertools import ").completions() definitions = [d for d in itert if d.name == 'chain'] assert len(definitions) == 1 assert [d.type for d in definitions[0].follow_definition()] == ['class'] # incomplete `from * import` part datetime = check_follow_definition_types("from datetime import datetim") assert set(datetime) == set(['class', 'instance']) # py33: builtin and pure py version # os.path check ospath = check_follow_definition_types("from os.path import abspat") assert ospath == ['function'] # alias alias = check_follow_definition_types("import io as abcd; abcd") assert alias == ['module'] @cwd_at('test/completion/import_tree') def test_follow_definition_nested_import(): types = check_follow_definition_types("import pkg.mod1; pkg") assert types == ['module'] types = check_follow_definition_types("import pkg.mod1; pkg.mod1") assert types == ['module'] types = check_follow_definition_types("import pkg.mod1; pkg.mod1.a") assert types == ['instance'] def test_follow_definition_land_on_import(): types = check_follow_definition_types("import datetime; datetim") assert types == ['module'] jedi-0.11.1/test/test_api/test_completion.py0000664000175000017500000000217213214571123020741 0ustar davedave00000000000000from textwrap import dedent from jedi import Script def test_in_whitespace(): code = dedent(''' def x(): pass''') assert len(Script(code, column=2).completions()) > 20 def test_empty_init(): """This was actually an issue.""" code = dedent('''\ class X(object): pass X(''') assert Script(code).completions() def test_in_empty_space(): code = dedent('''\ class X(object): def __init__(self): hello ''') comps = Script(code, 3, 7).completions() self, = [c for c in comps if c.name == 'self'] assert self.name == 'self' def_, = self._goto_definitions() assert def_.name == 'X' def test_indent_context(): """ If an INDENT is the next supposed token, we should still be able to complete. """ code = 'if 1:\nisinstanc' comp, = Script(code).completions() assert comp.name == 'isinstance' def test_keyword_context(): def get_names(*args, **kwargs): return [d.name for d in Script(*args, **kwargs).completions()] names = get_names('if 1:\n pass\n') assert 'if' in names assert 'elif' in names jedi-0.11.1/test/test_api/simple_import/0000775000175000017500000000000013214571377020053 5ustar davedave00000000000000jedi-0.11.1/test/test_api/simple_import/module.py0000664000175000017500000000000013214571123021665 0ustar davedave00000000000000jedi-0.11.1/test/test_api/simple_import/__init__.py0000664000175000017500000000013413214571123022147 0ustar davedave00000000000000from simple_import import module def in_function(): from simple_import import module2 jedi-0.11.1/test/test_api/simple_import/module2.py0000664000175000017500000000000013214571123021747 0ustar davedave00000000000000jedi-0.11.1/test/test_api/test_call_signatures.py0000664000175000017500000003040513214571123021747 0ustar davedave00000000000000from textwrap import dedent import inspect import warnings from ..helpers import TestCase from jedi import Script from jedi import cache from jedi._compatibility import is_py33 def assert_signature(source, expected_name, expected_index=0, line=None, column=None): signatures = Script(source, line, column).call_signatures() assert len(signatures) <= 1 if not signatures: assert expected_name is None, \ 'There are no signatures, but `%s` expected.' % expected_name else: assert signatures[0].name == expected_name assert signatures[0].index == expected_index return signatures[0] class TestCallSignatures(TestCase): def _run_simple(self, source, name, index=0, column=None, line=1): assert_signature(source, name, index, line, column) def test_valid_call(self): assert_signature('str()', 'str', column=4) def test_simple(self): run = self._run_simple # simple s1 = "sorted(a, str(" run(s1, 'sorted', 0, 7) run(s1, 'sorted', 1, 9) run(s1, 'sorted', 1, 10) run(s1, 'sorted', 1, 11) run(s1, 'str', 0, 14) s2 = "abs(), " run(s2, 'abs', 0, 4) run(s2, None, column=5) run(s2, None) s3 = "abs()." run(s3, None, column=5) run(s3, None) def test_more_complicated(self): run = self._run_simple s4 = 'abs(zip(), , set,' run(s4, None, column=3) run(s4, 'abs', 0, 4) run(s4, 'zip', 0, 8) run(s4, 'abs', 0, 9) run(s4, 'abs', None, 10) s5 = "sorted(1,\nif 2:\n def a():" run(s5, 'sorted', 0, 7) run(s5, 'sorted', 1, 9) s6 = "str().center(" run(s6, 'center', 0) run(s6, 'str', 0, 4) s7 = "str().upper().center(" s8 = "str(int[zip(" run(s7, 'center', 0) run(s8, 'zip', 0) run(s8, 'str', 0, 8) run("import time; abc = time; abc.sleep(", 'sleep', 0) def test_issue_57(self): # jedi #57 s = "def func(alpha, beta): pass\n" \ "func(alpha='101'," self._run_simple(s, 'func', 0, column=13, line=2) def test_flows(self): # jedi-vim #9 self._run_simple("with open(", 'open', 0) # jedi-vim #11 self._run_simple("for sorted(", 'sorted', 0) self._run_simple("for s in sorted(", 'sorted', 0) def test_complex(self): s = """ def abc(a,b): pass def a(self): abc( if 1: pass """ assert_signature(s, 'abc', 0, line=6, column=24) s = """ import re def huhu(it): re.compile( return it * 2 """ assert_signature(s, 'compile', 0, line=4, column=31) # jedi-vim #70 s = """def foo(""" assert Script(s).call_signatures() == [] # jedi-vim #116 s = """import itertools; test = getattr(itertools, 'chain'); test(""" assert_signature(s, 'chain', 0) def test_call_signature_on_module(self): """github issue #240""" s = 'import datetime; datetime(' # just don't throw an exception (if numpy doesn't exist, just ignore it) assert Script(s).call_signatures() == [] def test_call_signatures_empty_parentheses_pre_space(self): s = dedent("""\ def f(a, b): pass f( )""") assert_signature(s, 'f', 0, line=3, column=3) def test_multiple_signatures(self): s = dedent("""\ if x: def f(a, b): pass else: def f(a, b): pass f(""") assert len(Script(s).call_signatures()) == 2 def test_call_signatures_whitespace(self): s = dedent("""\ abs( def x(): pass """) assert_signature(s, 'abs', 0, line=1, column=5) def test_decorator_in_class(self): """ There's still an implicit param, with a decorator. Github issue #319. """ s = dedent("""\ def static(func): def wrapped(obj, *args): return f(type(obj), *args) return wrapped class C(object): @static def test(cls): return 10 C().test(""") signatures = Script(s).call_signatures() assert len(signatures) == 1 x = [p.description for p in signatures[0].params] assert x == ['param *args'] def test_additional_brackets(self): assert_signature('str((', 'str', 0) def test_unterminated_strings(self): assert_signature('str(";', 'str', 0) def test_whitespace_before_bracket(self): assert_signature('str (', 'str', 0) assert_signature('str (";', 'str', 0) assert_signature('str\n(', None) def test_brackets_in_string_literals(self): assert_signature('str (" (', 'str', 0) assert_signature('str (" )', 'str', 0) def test_function_definitions_should_break(self): """ Function definitions (and other tokens that cannot exist within call signatures) should break and not be able to return a call signature. """ assert_signature('str(\ndef x', 'str', 0) assert not Script('str(\ndef x(): pass').call_signatures() def test_flow_call(self): assert not Script('if (1').call_signatures() def test_chained_calls(self): source = dedent(''' class B(): def test2(self, arg): pass class A(): def test1(self): return B() A().test1().test2(''') assert_signature(source, 'test2', 0) def test_return(self): source = dedent(''' def foo(): return '.'.join()''') assert_signature(source, 'join', 0, column=len(" return '.'.join(")) class TestParams(TestCase): def params(self, source, line=None, column=None): signatures = Script(source, line, column).call_signatures() assert len(signatures) == 1 return signatures[0].params def test_param_name(self): if not is_py33: p = self.params('''int(''') # int is defined as: `int(x[, base])` assert p[0].name == 'x' # `int` docstring has been redefined: # http://bugs.python.org/issue14783 # TODO have multiple call signatures for int (like in the docstr) #assert p[1].name == 'base' p = self.params('''open(something,''') assert p[0].name in ['file', 'name'] assert p[1].name == 'mode' def test_builtins(self): """ The self keyword should be visible even for builtins, if not instantiated. """ p = self.params('str.endswith(') assert p[0].name == 'self' assert p[1].name == 'suffix' p = self.params('str().endswith(') assert p[0].name == 'suffix' def test_signature_is_definition(): """ Through inheritance, a call signature is a sub class of Definition. Check if the attributes match. """ s = """class Spam(): pass\nSpam""" signature = Script(s + '(').call_signatures()[0] definition = Script(s + '(', column=0).goto_definitions()[0] signature.line == 1 signature.column == 6 # Now compare all the attributes that a CallSignature must also have. for attr_name in dir(definition): dont_scan = ['defined_names', 'parent', 'goto_assignments', 'params'] if attr_name.startswith('_') or attr_name in dont_scan: continue # Might trigger some deprecation warnings. with warnings.catch_warnings(record=True): attribute = getattr(definition, attr_name) signature_attribute = getattr(signature, attr_name) if inspect.ismethod(attribute): assert attribute() == signature_attribute() else: assert attribute == signature_attribute def test_no_signature(): # str doesn't have a __call__ method assert Script('str()(').call_signatures() == [] s = dedent("""\ class X(): pass X()(""") assert Script(s).call_signatures() == [] assert len(Script(s, column=2).call_signatures()) == 1 assert Script('').call_signatures() == [] def test_dict_literal_in_incomplete_call(): source = """\ import json def foo(): json.loads( json.load.return_value = {'foo': [], 'bar': True} c = Foo() """ script = Script(dedent(source), line=4, column=15) assert script.call_signatures() def test_completion_interference(): """Seems to cause problems, see also #396.""" cache.parser_cache.pop(None, None) assert Script('open(').call_signatures() # complete something usual, before doing the same call_signatures again. assert Script('from datetime import ').completions() assert Script('open(').call_signatures() def test_keyword_argument_index(): def get(source, column=None): return Script(source, column=column).call_signatures()[0] assert get('sorted([], key=a').index == 2 assert get('sorted([], key=').index == 2 assert get('sorted([], no_key=a').index is None kw_func = 'def foo(a, b): pass\nfoo(b=3, a=4)' assert get(kw_func, column=len('foo(b')).index == 0 assert get(kw_func, column=len('foo(b=')).index == 1 assert get(kw_func, column=len('foo(b=3, a=')).index == 0 kw_func_simple = 'def foo(a, b): pass\nfoo(b=4)' assert get(kw_func_simple, column=len('foo(b')).index == 0 assert get(kw_func_simple, column=len('foo(b=')).index == 1 args_func = 'def foo(*kwargs): pass\n' assert get(args_func + 'foo(a').index == 0 assert get(args_func + 'foo(a, b').index == 0 kwargs_func = 'def foo(**kwargs): pass\n' assert get(kwargs_func + 'foo(a=2').index == 0 assert get(kwargs_func + 'foo(a=2, b=2').index == 0 both = 'def foo(*args, **kwargs): pass\n' assert get(both + 'foo(a=2').index == 1 assert get(both + 'foo(a=2, b=2').index == 1 assert get(both + 'foo(a=2, b=2)', column=len('foo(b=2, a=2')).index == 1 assert get(both + 'foo(a, b, c').index == 0 def test_bracket_start(): def bracket_start(src): signatures = Script(src).call_signatures() assert len(signatures) == 1 return signatures[0].bracket_start assert bracket_start('str(') == (1, 3) def test_different_caller(): """ It's possible to not use names, but another function result or an array index and then get the call signature of it. """ assert_signature('[str][0](', 'str', 0) assert_signature('[str][0]()', 'str', 0, column=len('[str][0](')) assert_signature('(str)(', 'str', 0) assert_signature('(str)()', 'str', 0, column=len('(str)(')) def test_in_function(): code = dedent('''\ class X(): @property def func(''') assert not Script(code).call_signatures() def test_lambda_params(): code = dedent('''\ my_lambda = lambda x: x+1 my_lambda(1)''') sig, = Script(code, column=11).call_signatures() assert sig.index == 0 assert sig.name == '' assert [p.name for p in sig.params] == ['x'] def test_class_creation(): code = dedent('''\ class X(): def __init__(self, foo, bar): self.foo = foo ''') sig, = Script(code + 'X(').call_signatures() assert sig.index == 0 assert sig.name == 'X' assert [p.name for p in sig.params] == ['foo', 'bar'] sig, = Script(code + 'X.__init__(').call_signatures() assert [p.name for p in sig.params] == ['self', 'foo', 'bar'] sig, = Script(code + 'X().__init__(').call_signatures() assert [p.name for p in sig.params] == ['foo', 'bar'] def test_call_magic_method(): code = dedent('''\ class X(): def __call__(self, baz): pass ''') sig, = Script(code + 'X()(').call_signatures() assert sig.index == 0 assert sig.name == 'X' assert [p.name for p in sig.params] == ['baz'] sig, = Script(code + 'X.__call__(').call_signatures() assert [p.name for p in sig.params] == ['self', 'baz'] sig, = Script(code + 'X().__call__(').call_signatures() assert [p.name for p in sig.params] == ['baz'] jedi-0.11.1/test/test_api/test_full_name.py0000664000175000017500000000550413214571123020534 0ustar davedave00000000000000""" Tests for :attr:`.BaseDefinition.full_name`. There are three kinds of test: #. Test classes derived from :class:`MixinTestFullName`. Child class defines :attr:`.operation` to alter how the api definition instance is created. #. :class:`TestFullDefinedName` is to test combination of ``obj.full_name`` and ``jedi.defined_names``. #. Misc single-function tests. """ import textwrap import pytest import jedi from ..helpers import TestCase class MixinTestFullName(object): operation = None def check(self, source, desired): script = jedi.Script(textwrap.dedent(source)) definitions = getattr(script, type(self).operation)() for d in definitions: self.assertEqual(d.full_name, desired) def test_os_path_join(self): self.check('import os; os.path.join', 'os.path.join') def test_builtin(self): self.check('TypeError', 'TypeError') class TestFullNameWithGotoDefinitions(MixinTestFullName, TestCase): operation = 'goto_definitions' @pytest.mark.skipif('sys.version_info[0] < 3', reason='Python 2 also yields None.') def test_tuple_mapping(self): self.check(""" import re any_re = re.compile('.*') any_re""", '_sre.SRE_Pattern') def test_from_import(self): self.check('from os import path', 'os.path') class TestFullNameWithCompletions(MixinTestFullName, TestCase): operation = 'completions' class TestFullDefinedName(TestCase): """ Test combination of ``obj.full_name`` and ``jedi.defined_names``. """ def check(self, source, desired): definitions = jedi.names(textwrap.dedent(source)) full_names = [d.full_name for d in definitions] self.assertEqual(full_names, desired) def test_local_names(self): self.check(""" def f(): pass class C: pass """, ['__main__.f', '__main__.C']) def test_imports(self): self.check(""" import os from os import path from os.path import join from os import path as opath """, ['os', 'os.path', 'os.path.join', 'os.path']) def test_sub_module(): """ ``full_name needs to check sys.path to actually find it's real path module path. """ defs = jedi.Script('from jedi.api import classes; classes').goto_definitions() assert [d.full_name for d in defs] == ['jedi.api.classes'] defs = jedi.Script('import jedi.api; jedi.api').goto_definitions() assert [d.full_name for d in defs] == ['jedi.api'] def test_os_path(): d, = jedi.Script('from os.path import join').completions() assert d.full_name == 'os.path.join' d, = jedi.Script('import os.p').completions() assert d.full_name == 'os.path' def test_os_issues(): """Issue #873""" c, = jedi.Script('import os\nos.nt''').completions() assert c.full_name == 'nt' jedi-0.11.1/test/test_api/test_classes.py0000664000175000017500000002715113214571123020231 0ustar davedave00000000000000""" Test all things related to the ``jedi.api_classes`` module. """ from textwrap import dedent from inspect import cleandoc import pytest from jedi import Script, __doc__ as jedi_doc, names from ..helpers import cwd_at from ..helpers import TestCase def test_is_keyword(): #results = Script('import ', 1, 1, None).goto_definitions() #assert len(results) == 1 and results[0].is_keyword is True results = Script('str', 1, 1, None).goto_definitions() assert len(results) == 1 and results[0].is_keyword is False def make_definitions(): """ Return a list of definitions for parametrized tests. :rtype: [jedi.api_classes.BaseDefinition] """ source = dedent(""" import sys class C: pass x = C() def f(): pass def g(): yield h = lambda: None """) definitions = [] definitions += names(source) source += dedent(""" variable = sys or C or x or f or g or g() or h""") lines = source.splitlines() script = Script(source, len(lines), len('variable'), None) definitions += script.goto_definitions() script2 = Script(source, 4, len('class C'), None) definitions += script2.usages() source_param = "def f(a): return a" script_param = Script(source_param, 1, len(source_param), None) definitions += script_param.goto_assignments() return definitions @pytest.mark.parametrize('definition', make_definitions()) def test_basedefinition_type(definition): assert definition.type in ('module', 'class', 'instance', 'function', 'generator', 'statement', 'import', 'param') def test_basedefinition_type_import(): def get_types(source, **kwargs): return set([t.type for t in Script(source, **kwargs).completions()]) # import one level assert get_types('import t') == set(['module']) assert get_types('import ') == set(['module']) assert get_types('import datetime; datetime') == set(['module']) # from assert get_types('from datetime import timedelta') == set(['class']) assert get_types('from datetime import timedelta; timedelta') == set(['class']) assert get_types('from json import tool') == set(['module']) assert get_types('from json import tool; tool') == set(['module']) # import two levels assert get_types('import json.tool; json') == set(['module']) assert get_types('import json.tool; json.tool') == set(['module']) assert get_types('import json.tool; json.tool.main') == set(['function']) assert get_types('import json.tool') == set(['module']) assert get_types('import json.tool', column=9) == set(['module']) def test_function_call_signature_in_doc(): defs = Script(""" def f(x, y=1, z='a'): pass f""").goto_definitions() doc = defs[0].docstring() assert "f(x, y=1, z='a')" in str(doc) def test_class_call_signature(): defs = Script(""" class Foo: def __init__(self, x, y=1, z='a'): pass Foo""").goto_definitions() doc = defs[0].docstring() assert "Foo(self, x, y=1, z='a')" in str(doc) def test_position_none_if_builtin(): gotos = Script('import sys; sys.path').goto_assignments() assert gotos[0].line is None assert gotos[0].column is None @cwd_at('.') def test_completion_docstring(): """ Jedi should follow imports in certain conditions """ def docstr(src, result): c = Script(src).completions()[0] assert c.docstring(raw=True, fast=False) == cleandoc(result) c = Script('import jedi\njed').completions()[0] assert c.docstring(fast=False) == cleandoc(jedi_doc) docstr('import jedi\njedi.Scr', cleandoc(Script.__doc__)) docstr('abcd=3;abcd', '') docstr('"hello"\nabcd=3\nabcd', '') docstr(dedent(''' def x(): "hello" 0 x'''), 'hello' ) docstr(dedent(''' def x(): "hello";0 x'''), 'hello' ) # Shouldn't work with a tuple. docstr(dedent(''' def x(): "hello",0 x'''), '' ) # Should also not work if we rename something. docstr(dedent(''' def x(): "hello" y = x y'''), '' ) def test_completion_params(): c = Script('import string; string.capwords').completions()[0] assert [p.name for p in c.params] == ['s', 'sep'] def test_signature_params(): def check(defs): params = defs[0].params assert len(params) == 1 assert params[0].name == 'bar' s = dedent(''' def foo(bar): pass foo''') check(Script(s).goto_definitions()) check(Script(s).goto_assignments()) check(Script(s + '\nbar=foo\nbar').goto_assignments()) def test_param_endings(): """ Params should be represented without the comma and whitespace they have around them. """ sig = Script('def x(a, b=5, c=""): pass\n x(').call_signatures()[0] assert [p.description for p in sig.params] == ['param a', 'param b=5', 'param c=""'] class TestIsDefinition(TestCase): def _def(self, source, index=-1): return names(dedent(source), references=True, all_scopes=True)[index] def _bool_is_definitions(self, source): ns = names(dedent(source), references=True, all_scopes=True) # Assure that names are definitely sorted. ns = sorted(ns, key=lambda name: (name.line, name.column)) return [name.is_definition() for name in ns] def test_name(self): d = self._def('name') assert d.name == 'name' assert not d.is_definition() def test_stmt(self): src = 'a = f(x)' d = self._def(src, 0) assert d.name == 'a' assert d.is_definition() d = self._def(src, 1) assert d.name == 'f' assert not d.is_definition() d = self._def(src) assert d.name == 'x' assert not d.is_definition() def test_import(self): assert self._bool_is_definitions('import x as a') == [False, True] assert self._bool_is_definitions('from x import y') == [False, True] assert self._bool_is_definitions('from x.z import y') == [False, False, True] class TestParent(TestCase): def _parent(self, source, line=None, column=None): def_, = Script(dedent(source), line, column).goto_assignments() return def_.parent() def test_parent(self): parent = self._parent('foo=1\nfoo') assert parent.type == 'module' parent = self._parent(''' def spam(): if 1: y=1 y''') assert parent.name == 'spam' assert parent.parent().type == 'module' def test_on_function(self): parent = self._parent('''\ def spam(): pass''', 1, len('def spam')) assert parent.name == '' assert parent.type == 'module' def test_parent_on_completion(self): parent = Script(dedent('''\ class Foo(): def bar(): pass Foo().bar''')).completions()[0].parent() assert parent.name == 'Foo' assert parent.type == 'class' parent = Script('str.join').completions()[0].parent() assert parent.name == 'str' assert parent.type == 'class' def test_type(): for c in Script('a = [str()]; a[0].').completions(): if c.name == '__class__': assert c.type == 'class' else: assert c.type in ('function', 'instance') # Github issue #397, type should never raise an error. for c in Script('import os; os.path.').completions(): assert c.type def test_type_II(): """ GitHub Issue #833, `keyword`s are seen as `module`s """ for c in Script('f').completions(): if c.name == 'for': assert c.type == 'keyword' class TestGotoAssignments(TestCase): """ This tests the BaseDefinition.goto_assignments function, not the jedi function. They are not really different in functionality, but really different as an implementation. """ def test_repetition(self): defs = names('a = 1; a', references=True, definitions=False) # Repeat on the same variable. Shouldn't change once we're on a # definition. for _ in range(3): assert len(defs) == 1 ass = defs[0].goto_assignments() assert ass[0].description == 'a = 1' def test_named_params(self): src = """\ def foo(a=1, bar=2): pass foo(bar=1) """ bar = names(dedent(src), references=True)[-1] param = bar.goto_assignments()[0] assert (param.line, param.column) == (1, 13) assert param.type == 'param' def test_class_call(self): src = 'from threading import Thread; Thread(group=1)' n = names(src, references=True)[-1] assert n.name == 'group' param_def = n.goto_assignments()[0] assert param_def.name == 'group' assert param_def.type == 'param' def test_parentheses(self): n = names('("").upper', references=True)[-1] assert n.goto_assignments()[0].name == 'upper' def test_import(self): nms = names('from json import load', references=True) assert nms[0].name == 'json' assert nms[0].type == 'module' n = nms[0].goto_assignments()[0] assert n.name == 'json' assert n.type == 'module' assert nms[1].name == 'load' assert nms[1].type == 'function' n = nms[1].goto_assignments()[0] assert n.name == 'load' assert n.type == 'function' nms = names('import os; os.path', references=True) assert nms[0].name == 'os' assert nms[0].type == 'module' n = nms[0].goto_assignments()[0] assert n.name == 'os' assert n.type == 'module' n = nms[2].goto_assignments()[0] assert n.name == 'path' assert n.type == 'module' nms = names('import os.path', references=True) n = nms[0].goto_assignments()[0] assert n.name == 'os' assert n.type == 'module' n = nms[1].goto_assignments()[0] # This is very special, normally the name doesn't chance, but since # os.path is a sys.modules hack, it does. assert n.name in ('ntpath', 'posixpath', 'os2emxpath') assert n.type == 'module' def test_import_alias(self): nms = names('import json as foo', references=True) assert nms[0].name == 'json' assert nms[0].type == 'module' assert nms[0]._name.tree_name.parent.type == 'dotted_as_name' n = nms[0].goto_assignments()[0] assert n.name == 'json' assert n.type == 'module' assert n._name._context.tree_node.type == 'file_input' assert nms[1].name == 'foo' assert nms[1].type == 'module' assert nms[1]._name.tree_name.parent.type == 'dotted_as_name' ass = nms[1].goto_assignments() assert len(ass) == 1 assert ass[0].name == 'json' assert ass[0].type == 'module' assert ass[0]._name._context.tree_node.type == 'file_input' def test_added_equals_to_params(): def run(rest_source): source = dedent(""" def foo(bar, baz): pass """) results = Script(source + rest_source).completions() assert len(results) == 1 return results[0] assert run('foo(bar').name_with_symbols == 'bar=' assert run('foo(bar').complete == '=' assert run('foo(bar, baz').complete == '=' assert run(' bar').name_with_symbols == 'bar' assert run(' bar').complete == '' x = run('foo(bar=isins').name_with_symbols assert x == 'isinstance' jedi-0.11.1/test/test_api/test_analysis.py0000664000175000017500000000053213214571123020411 0ustar davedave00000000000000""" Test of keywords and ``jedi.keywords`` """ from jedi import Script def test_issue436(): code = "bar = 0\nbar += 'foo' + 4" errors = set(repr(e) for e in Script(code)._analysis()) assert len(errors) == 2 assert '' in errors assert '' in errors jedi-0.11.1/test/test_api/test_unicode.py0000664000175000017500000000414313214571123020216 0ustar davedave00000000000000# -*- coding: utf-8 -*- """ All character set and unicode related tests. """ from jedi import Script from jedi._compatibility import u, unicode def test_unicode_script(): """ normally no unicode objects are being used. (<=2.7) """ s = unicode("import datetime; datetime.timedelta") completions = Script(s).completions() assert len(completions) assert type(completions[0].description) is unicode s = u("author='öä'; author") completions = Script(s).completions() x = completions[0].description assert type(x) is unicode s = u("#-*- coding: iso-8859-1 -*-\nauthor='öä'; author") s = s.encode('latin-1') completions = Script(s).completions() assert type(completions[0].description) is unicode def test_unicode_attribute(): """ github jedi-vim issue #94 """ s1 = u('#-*- coding: utf-8 -*-\nclass Person():\n' ' name = "e"\n\nPerson().name.') completions1 = Script(s1).completions() assert 'strip' in [c.name for c in completions1] s2 = u('#-*- coding: utf-8 -*-\nclass Person():\n' ' name = "é"\n\nPerson().name.') completions2 = Script(s2).completions() assert 'strip' in [c.name for c in completions2] def test_multibyte_script(): """ `jedi.Script` must accept multi-byte string source. """ try: code = u("import datetime; datetime.d") comment = u("# multi-byte comment あいうえおä") s = (u('%s\n%s') % (code, comment)).encode('utf-8') except NameError: pass # python 3 has no unicode method else: assert len(Script(s, 1, len(code)).completions()) def test_goto_definition_at_zero(): """At zero usually sometimes raises unicode issues.""" assert Script("a", 1, 1).goto_definitions() == [] s = Script("str", 1, 1).goto_definitions() assert len(s) == 1 assert list(s)[0].description == 'class str' assert Script("", 1, 0).goto_definitions() == [] def test_complete_at_zero(): s = Script("str", 1, 3).completions() assert len(s) == 1 assert list(s)[0].name == 'str' s = Script("", 1, 0).completions() assert len(s) > 0 jedi-0.11.1/test/run.py0000775000175000017500000003471513214571123014540 0ustar davedave00000000000000#!/usr/bin/env python """ |jedi| is mostly being tested by what I would call "Blackbox Tests". These tests are just testing the interface and do input/output testing. This makes a lot of sense for |jedi|. Jedi supports so many different code structures, that it is just stupid to write 200'000 unittests in the manner of ``regression.py``. Also, it is impossible to do doctests/unittests on most of the internal data structures. That's why |jedi| uses mostly these kind of tests. There are different kind of tests: - completions / goto_definitions ``#?`` - goto_assignments: ``#!`` - usages: ``#<`` How to run tests? +++++++++++++++++ Jedi uses pytest_ to run unit and integration tests. To run tests, simply run ``py.test``. You can also use tox_ to run tests for multiple Python versions. .. _pytest: http://pytest.org .. _tox: http://testrun.org/tox Integration test cases are located in ``test/completion`` directory and each test case is indicated by either the comment ``#?`` (completions / definitions), ``#!`` (assignments), or ``#<`` (usages). There is also support for third party libraries. In a normal test run they are not being executed, you have to provide a ``--thirdparty`` option. In addition to standard `-k` and `-m` options in py.test, you can use `-T` (`--test-files`) option to specify integration test cases to run. It takes the format of ``FILE_NAME[:LINE[,LINE[,...]]]`` where ``FILE_NAME`` is a file in ``test/completion`` and ``LINE`` is a line number of the test comment. Here is some recipes: Run tests only in ``basic.py`` and ``imports.py``:: py.test test/test_integration.py -T basic.py -T imports.py Run test at line 4, 6, and 8 in ``basic.py``:: py.test test/test_integration.py -T basic.py:4,6,8 See ``py.test --help`` for more information. If you want to debug a test, just use the ``--pdb`` option. Alternate Test Runner +++++++++++++++++++++ If you don't like the output of ``py.test``, there's an alternate test runner that you can start by running ``./run.py``. The above example could be run by:: ./run.py basic 4 6 8 50-80 The advantage of this runner is simplicity and more customized error reports. Using both runners will help you to have a quicker overview of what's happening. Auto-Completion +++++++++++++++ Uses comments to specify a test in the next line. The comment says which results are expected. The comment always begins with `#?`. The last row symbolizes the cursor. For example:: #? ['real'] a = 3; a.rea Because it follows ``a.rea`` and a is an ``int``, which has a ``real`` property. Goto Definitions ++++++++++++++++ Definition tests use the same symbols like completion tests. This is possible because the completion tests are defined with a list:: #? int() ab = 3; ab Goto Assignments ++++++++++++++++ Tests look like this:: abc = 1 #! ['abc=1'] abc Additionally it is possible to specify the column by adding a number, which describes the position of the test (otherwise it's just the end of line):: #! 2 ['abc=1'] abc Usages ++++++ Tests look like this:: abc = 1 #< abc@1,0 abc@3,0 abc """ import os import re import sys import operator from ast import literal_eval from io import StringIO from functools import reduce import parso import jedi from jedi import debug from jedi._compatibility import unicode, is_py3 from jedi.api.classes import Definition from jedi.api.completion import get_user_scope from jedi import parser_utils TEST_COMPLETIONS = 0 TEST_DEFINITIONS = 1 TEST_ASSIGNMENTS = 2 TEST_USAGES = 3 grammar36 = parso.load_grammar(version='3.6') class IntegrationTestCase(object): def __init__(self, test_type, correct, line_nr, column, start, line, path=None, skip=None): self.test_type = test_type self.correct = correct self.line_nr = line_nr self.column = column self.start = start self.line = line self.path = path self.skip = skip @property def module_name(self): return os.path.splitext(os.path.basename(self.path))[0] @property def line_nr_test(self): """The test is always defined on the line before.""" return self.line_nr - 1 def __repr__(self): return '<%s: %s:%s %r>' % (self.__class__.__name__, self.path, self.line_nr_test, self.line.rstrip()) def script(self): return jedi.Script(self.source, self.line_nr, self.column, self.path) def run(self, compare_cb): testers = { TEST_COMPLETIONS: self.run_completion, TEST_DEFINITIONS: self.run_goto_definitions, TEST_ASSIGNMENTS: self.run_goto_assignments, TEST_USAGES: self.run_usages, } return testers[self.test_type](compare_cb) def run_completion(self, compare_cb): completions = self.script().completions() #import cProfile; cProfile.run('script.completions()') comp_str = set([c.name for c in completions]) return compare_cb(self, comp_str, set(literal_eval(self.correct))) def run_goto_definitions(self, compare_cb): script = self.script() evaluator = script._evaluator def comparison(definition): suffix = '()' if definition.type == 'instance' else '' return definition.desc_with_module + suffix def definition(correct, correct_start, path): should_be = set() for match in re.finditer('(?:[^ ]+)', correct): string = match.group(0) parser = grammar36.parse(string, start_symbol='eval_input', error_recovery=False) parser_utils.move(parser.get_root_node(), self.line_nr) element = parser.get_root_node() module_context = script._get_module() # The context shouldn't matter for the test results. user_context = get_user_scope(module_context, (self.line_nr, 0)) if user_context.api_type == 'function': user_context = user_context.get_function_execution() element.parent = user_context.tree_node results = evaluator.eval_element(user_context, element) if not results: raise Exception('Could not resolve %s on line %s' % (match.string, self.line_nr - 1)) should_be |= set(Definition(evaluator, r.name) for r in results) debug.dbg('Finished getting types', color='YELLOW') # Because the objects have different ids, `repr`, then compare. should = set(comparison(r) for r in should_be) return should should = definition(self.correct, self.start, script.path) result = script.goto_definitions() is_str = set(comparison(r) for r in result) return compare_cb(self, is_str, should) def run_goto_assignments(self, compare_cb): result = self.script().goto_assignments() comp_str = str(sorted(str(r.description) for r in result)) return compare_cb(self, comp_str, self.correct) def run_usages(self, compare_cb): result = self.script().usages() self.correct = self.correct.strip() compare = sorted((r.module_name, r.line, r.column) for r in result) wanted = [] if not self.correct: positions = [] else: positions = literal_eval(self.correct) for pos_tup in positions: if type(pos_tup[0]) == str: # this means that there is a module specified wanted.append(pos_tup) else: line = pos_tup[0] if pos_tup[0] is not None: line += self.line_nr wanted.append((self.module_name, line, pos_tup[1])) return compare_cb(self, compare, sorted(wanted)) def skip_python_version(line): comp_map = { '==': 'eq', '<=': 'le', '>=': 'ge', '<': 'lt', '>': 'gt', } # check for python minimal version number match = re.match(r" *# *python *([<>]=?|==) *(\d+(?:\.\d+)?)$", line) if match: minimal_python_version = tuple( map(int, match.group(2).split("."))) operation = getattr(operator, comp_map[match.group(1)]) if not operation(sys.version_info, minimal_python_version): return "Minimal python version %s %s" % (match.group(1), match.group(2)) return None def collect_file_tests(path, lines, lines_to_execute): def makecase(t): return IntegrationTestCase(t, correct, line_nr, column, start, line, path=path, skip=skip) start = None correct = None test_type = None skip = None for line_nr, line in enumerate(lines, 1): if correct is not None: r = re.match('^(\d+)\s*(.*)$', correct) if r: column = int(r.group(1)) correct = r.group(2) start += r.regs[2][0] # second group, start index else: column = len(line) - 1 # -1 for the \n if test_type == '!': yield makecase(TEST_ASSIGNMENTS) elif test_type == '<': yield makecase(TEST_USAGES) elif correct.startswith('['): yield makecase(TEST_COMPLETIONS) else: yield makecase(TEST_DEFINITIONS) correct = None else: skip = skip or skip_python_version(line) try: r = re.search(r'(?:^|(?<=\s))#([?!<])\s*([^\n]*)', line) # test_type is ? for completion and ! for goto_assignments test_type = r.group(1) correct = r.group(2) # Quick hack to make everything work (not quite a bloody unicorn hack though). if correct == '': correct = ' ' start = r.start() except AttributeError: correct = None else: # Skip the test, if this is not specified test. for l in lines_to_execute: if isinstance(l, tuple) and l[0] <= line_nr <= l[1] \ or line_nr == l: break else: if lines_to_execute: correct = None def collect_dir_tests(base_dir, test_files, check_thirdparty=False): for f_name in os.listdir(base_dir): files_to_execute = [a for a in test_files.items() if f_name.startswith(a[0])] lines_to_execute = reduce(lambda x, y: x + y[1], files_to_execute, []) if f_name.endswith(".py") and (not test_files or files_to_execute): skip = None if check_thirdparty: lib = f_name.replace('_.py', '') try: # there is always an underline at the end. # It looks like: completion/thirdparty/pylab_.py __import__(lib) except ImportError: skip = 'Thirdparty-Library %s not found.' % lib path = os.path.join(base_dir, f_name) if is_py3: source = open(path, encoding='utf-8').read() else: source = unicode(open(path).read(), 'UTF-8') for case in collect_file_tests(path, StringIO(source), lines_to_execute): case.source = source if skip: case.skip = skip yield case docoptstr = """ Using run.py to make debugging easier with integration tests. An alternative testing format, which is much more hacky, but very nice to work with. Usage: run.py [--pdb] [--debug] [--thirdparty] [...] run.py --help Options: -h --help Show this screen. --pdb Enable pdb debugging on fail. -d, --debug Enable text output debugging (please install ``colorama``). --thirdparty Also run thirdparty tests (in ``completion/thirdparty``). """ if __name__ == '__main__': import docopt arguments = docopt.docopt(docoptstr) import time t_start = time.time() if arguments['--debug']: jedi.set_debug_function() # get test list, that should be executed test_files = {} last = None for arg in arguments['']: match = re.match('(\d+)-(\d+)', arg) if match: start, end = match.groups() test_files[last].append((int(start), int(end))) elif arg.isdigit(): if last is None: continue test_files[last].append(int(arg)) else: test_files[arg] = [] last = arg # completion tests: dir_ = os.path.dirname(os.path.realpath(__file__)) completion_test_dir = os.path.join(dir_, '../test/completion') completion_test_dir = os.path.abspath(completion_test_dir) summary = [] tests_fail = 0 # execute tests cases = list(collect_dir_tests(completion_test_dir, test_files)) if test_files or arguments['--thirdparty']: completion_test_dir += '/thirdparty' cases += collect_dir_tests(completion_test_dir, test_files, True) def file_change(current, tests, fails): if current is not None: current = os.path.basename(current) print('%s \t\t %s tests and %s fails.' % (current, tests, fails)) def report(case, actual, desired): if actual == desired: return 0 else: print("\ttest fail @%d, actual = %s, desired = %s" % (case.line_nr - 1, actual, desired)) return 1 import traceback current = cases[0].path if cases else None count = fails = 0 for c in cases: if c.skip: continue if current != c.path: file_change(current, count, fails) current = c.path count = fails = 0 try: if c.run(report): tests_fail += 1 fails += 1 except Exception: traceback.print_exc() print("\ttest fail @%d" % (c.line_nr - 1)) tests_fail += 1 fails += 1 if arguments['--pdb']: import pdb pdb.post_mortem() count += 1 file_change(current, count, fails) print('\nSummary: (%s fails of %s tests) in %.3fs' % (tests_fail, len(cases), time.time() - t_start)) for s in summary: print(s) exit_code = 1 if tests_fail else 0 sys.exit(exit_code) jedi-0.11.1/test/test_integration_keyword.py0000664000175000017500000000140213214571123021042 0ustar davedave00000000000000""" Test of keywords and ``jedi.keywords`` """ from jedi._compatibility import is_py3 from jedi import Script def test_goto_assignments_keyword(): """ Bug: goto assignments on ``in`` used to raise AttributeError:: 'unicode' object has no attribute 'generate_call_path' """ Script('in').goto_assignments() def test_keyword(): """ github jedi-vim issue #44 """ defs = Script("print").goto_definitions() if is_py3: assert [d.docstring() for d in defs] else: assert defs == [] assert Script("import").goto_assignments() == [] completions = Script("import", 1, 1).completions() assert len(completions) > 10 and 'if' in [c.name for c in completions] assert Script("assert").goto_definitions() == [] jedi-0.11.1/test/test_utils.py0000664000175000017500000000676013214571123016127 0ustar davedave00000000000000try: import readline except ImportError: readline = False from jedi import utils from .helpers import unittest, cwd_at @unittest.skipIf(not readline, "readline not found") class TestSetupReadline(unittest.TestCase): class NameSpace(object): pass def __init__(self, *args, **kwargs): super(type(self), self).__init__(*args, **kwargs) self.namespace = self.NameSpace() utils.setup_readline(self.namespace) def completions(self, text): completer = readline.get_completer() i = 0 completions = [] while True: completion = completer(text, i) if completion is None: break completions.append(completion) i += 1 return completions def test_simple(self): assert self.completions('list') == ['list'] assert self.completions('importerror') == ['ImportError'] s = "print(BaseE" assert self.completions(s) == [s + 'xception'] def test_nested(self): assert self.completions('list.Insert') == ['list.insert'] assert self.completions('list().Insert') == ['list().insert'] def test_magic_methods(self): assert self.completions('list.__getitem__') == ['list.__getitem__'] assert self.completions('list().__getitem__') == ['list().__getitem__'] def test_modules(self): import sys import os self.namespace.sys = sys self.namespace.os = os try: assert self.completions('os.path.join') == ['os.path.join'] string = 'os.path.join("a").upper' assert self.completions(string) == [string] c = set(['os.' + d for d in dir(os) if d.startswith('ch')]) assert set(self.completions('os.ch')) == set(c) finally: del self.namespace.sys del self.namespace.os def test_calls(self): s = 'str(bytes' assert self.completions(s) == [s, 'str(BytesWarning'] def test_import(self): s = 'from os.path import a' assert set(self.completions(s)) == set([s + 'ltsep', s + 'bspath']) assert self.completions('import keyword') == ['import keyword'] import os s = 'from os import ' goal = set([s + el for el in dir(os)]) # There are minor differences, e.g. the dir doesn't include deleted # items as well as items that are not only available on linux. assert len(set(self.completions(s)).symmetric_difference(goal)) < 20 @cwd_at('test') def test_local_import(self): s = 'import test_utils' assert self.completions(s) == [s] def test_preexisting_values(self): self.namespace.a = range(10) assert set(self.completions('a.')) == set(['a.' + n for n in dir(range(1))]) del self.namespace.a def test_colorama(self): """ Only test it if colorama library is available. This module is being tested because it uses ``setattr`` at some point, which Jedi doesn't understand, but it should still work in the REPL. """ try: # if colorama is installed import colorama except ImportError: pass else: self.namespace.colorama = colorama assert self.completions('colorama') assert self.completions('colorama.Fore.BLACK') == ['colorama.Fore.BLACK'] del self.namespace.colorama def test_version_info(): assert utils.version_info()[:2] > (0, 7) jedi-0.11.1/test/__init__.py0000664000175000017500000000000013214571123015444 0ustar davedave00000000000000jedi-0.11.1/test/test_debug.py0000664000175000017500000000032113214571123016040 0ustar davedave00000000000000import jedi from jedi import debug def test_simple(): jedi.set_debug_function() debug.speed('foo') debug.dbg('bar') debug.warning('baz') jedi.set_debug_function(None, False, False, False) jedi-0.11.1/test/test_cache.py0000664000175000017500000000143313214571123016022 0ustar davedave00000000000000""" Test all things related to the ``jedi.cache`` module. """ import jedi def test_cache_call_signatures(): """ See github issue #390. """ def check(column, call_name, path=None): assert jedi.Script(s, 1, column, path).call_signatures()[0].name == call_name s = 'str(int())' for i in range(3): check(8, 'int') check(4, 'str') # Can keep doing these calls and always get the right result. # Now lets specify a source_path of boo and alternate these calls, it # should still work. for i in range(3): check(8, 'int', 'boo') check(4, 'str', 'boo') def test_cache_line_split_issues(): """Should still work even if there's a newline.""" assert jedi.Script('int(\n').call_signatures()[0].name == 'int' jedi-0.11.1/test/refactor.py0000775000175000017500000000662613214571123015541 0ustar davedave00000000000000#!/usr/bin/env python """ Refactoring tests work a little bit similar to Black Box tests. But the idea is here to compare two versions of code. **Note: Refactoring is currently not in active development (and was never stable), the tests are therefore not really valuable - just ignore them.** """ from __future__ import with_statement import os import re from functools import reduce import jedi from jedi import refactoring class RefactoringCase(object): def __init__(self, name, source, line_nr, index, path, new_name, start_line_test, desired): self.name = name self.source = source self.line_nr = line_nr self.index = index self.path = path self.new_name = new_name self.start_line_test = start_line_test self.desired = desired def refactor(self): script = jedi.Script(self.source, self.line_nr, self.index, self.path) f_name = os.path.basename(self.path) refactor_func = getattr(refactoring, f_name.replace('.py', '')) args = (self.new_name,) if self.new_name else () return refactor_func(script, *args) def run(self): refactor_object = self.refactor() # try to get the right excerpt of the newfile f = refactor_object.new_files()[self.path] lines = f.splitlines()[self.start_line_test:] end = self.start_line_test + len(lines) pop_start = None for i, l in enumerate(lines): if l.startswith('# +++'): end = i break elif '#? ' in l: pop_start = i lines.pop(pop_start) self.result = '\n'.join(lines[:end - 1]).strip() return self.result def check(self): return self.run() == self.desired def __repr__(self): return '<%s: %s:%s>' % (self.__class__.__name__, self.name, self.line_nr - 1) def collect_file_tests(source, path, lines_to_execute): r = r'^# --- ?([^\n]*)\n((?:(?!\n# \+\+\+).)*)' \ r'\n# \+\+\+((?:(?!\n# ---).)*)' for match in re.finditer(r, source, re.DOTALL | re.MULTILINE): name = match.group(1).strip() first = match.group(2).strip() second = match.group(3).strip() start_line_test = source[:match.start()].count('\n') + 1 # get the line with the position of the operation p = re.match(r'((?:(?!#\?).)*)#\? (\d*) ?([^\n]*)', first, re.DOTALL) if p is None: print("Please add a test start.") continue until = p.group(1) index = int(p.group(2)) new_name = p.group(3) line_nr = start_line_test + until.count('\n') + 2 if lines_to_execute and line_nr - 1 not in lines_to_execute: continue yield RefactoringCase(name, source, line_nr, index, path, new_name, start_line_test, second) def collect_dir_tests(base_dir, test_files): for f_name in os.listdir(base_dir): files_to_execute = [a for a in test_files.items() if a[0] in f_name] lines_to_execute = reduce(lambda x, y: x + y[1], files_to_execute, []) if f_name.endswith(".py") and (not test_files or files_to_execute): path = os.path.join(base_dir, f_name) with open(path) as f: source = f.read() for case in collect_file_tests(source, path, lines_to_execute): yield case jedi-0.11.1/test/static_analysis/0000775000175000017500000000000013214571377016552 5ustar davedave00000000000000jedi-0.11.1/test/static_analysis/attribute_error.py0000664000175000017500000000433513214571123022332 0ustar davedave00000000000000class Cls(): class_attr = '' def __init__(self, input): self.instance_attr = 3 self.input = input def f(self): #! 12 attribute-error return self.not_existing def undefined_object(self, obj): """ Uses an arbitrary object and performs an operation on it, shouldn't be a problem. """ obj.arbitrary_lookup def defined_lookup(self, obj): """ `obj` is defined by a call into this function. """ obj.upper #! 4 attribute-error obj.arbitrary_lookup #! 13 name-error class_attr = a Cls(1).defined_lookup('') c = Cls(1) c.class_attr Cls.class_attr #! 4 attribute-error Cls.class_attr_error c.instance_attr #! 2 attribute-error c.instance_attr_error c.something = None #! 12 name-error something = a something # ----------------- # Unused array variables should still raise attribute errors. # ----------------- # should not raise anything. for loop_variable in [1, 2]: #! 4 name-error x = undefined loop_variable #! 28 name-error for loop_variable in [1, 2, undefined]: pass #! 7 attribute-error [1, ''.undefined_attr] def return_one(something): return 1 #! 14 attribute-error return_one(''.undefined_attribute) #! 12 name-error [r for r in undefined] #! 1 name-error [undefined for r in [1, 2]] [r for r in [1, 2]] # some random error that showed up class NotCalled(): def match_something(self, param): seems_to_need_an_assignment = param return [value.match_something() for value in []] # ----------------- # decorators # ----------------- #! 1 name-error @undefined_decorator def func(): return 1 # ----------------- # operators # ----------------- string = '%s %s' % (1, 2) # Shouldn't raise an error, because `string` is really just a string, not an # array or something. string.upper # ----------------- # imports # ----------------- # Star imports and the like in modules should not cause attribute errors in # this module. import import_tree import_tree.a import_tree.b # This is something that raised an error, because it was using a complex # mixture of Jedi fakes and compiled objects. import _sre #! 15 attribute-error _sre.compile().not_existing jedi-0.11.1/test/static_analysis/keywords.py0000664000175000017500000000015413214571123020760 0ustar davedave00000000000000def raises(): raise KeyError() def wrong_name(): #! 6 name-error raise NotExistingException() jedi-0.11.1/test/static_analysis/branches.py0000664000175000017500000000131413214571123020675 0ustar davedave00000000000000# ----------------- # Simple tests # ----------------- import random if random.choice([0, 1]): x = '' else: x = 1 if random.choice([0, 1]): y = '' else: y = 1 # A simple test if x != 1: x.upper() else: #! 2 attribute-error x.upper() pass # This operation is wrong, because the types could be different. #! 6 type-error-operation z = x + y # However, here we have correct types. if x == y: z = x + y else: #! 6 type-error-operation z = x + y # ----------------- # With a function # ----------------- def addition(a, b): if type(a) == type(b): return a + b else: #! 9 type-error-operation return a + b addition(1, 1) addition(1.0, '') jedi-0.11.1/test/static_analysis/star_arguments.py0000664000175000017500000000427513214571123022157 0ustar davedave00000000000000# ----------------- # *args # ----------------- def simple(a): return a def nested(*args): return simple(*args) nested(1) #! 6 type-error-too-few-arguments nested() def nested_no_call_to_function(*args): return simple(1, *args) def simple2(a, b, c): return b def nested(*args): return simple2(1, *args) def nested_twice(*args1): return nested(*args1) nested_twice(2, 3) #! 13 type-error-too-few-arguments nested_twice(2) #! 19 type-error-too-many-arguments nested_twice(2, 3, 4) # A named argument can be located before *args. def star_args_with_named(*args): return simple2(c='', *args) star_args_with_named(1, 2) # ----------------- # **kwargs # ----------------- def kwargs_test(**kwargs): return simple2(1, **kwargs) kwargs_test(c=3, b=2) #! 12 type-error-too-few-arguments kwargs_test(c=3) #! 12 type-error-too-few-arguments kwargs_test(b=2) #! 22 type-error-keyword-argument kwargs_test(b=2, c=3, d=4) #! 12 type-error-multiple-values kwargs_test(b=2, c=3, a=4) def kwargs_nested(**kwargs): return kwargs_test(b=2, **kwargs) kwargs_nested(c=3) #! 13 type-error-too-few-arguments kwargs_nested() #! 19 type-error-keyword-argument kwargs_nested(c=2, d=4) #! 14 type-error-multiple-values kwargs_nested(c=2, a=4) # TODO reenable ##! 14 type-error-multiple-values #kwargs_nested(b=3, c=2) # ----------------- # mixed *args/**kwargs # ----------------- def simple_mixed(a, b, c): return b def mixed(*args, **kwargs): return simple_mixed(1, *args, **kwargs) mixed(1, 2) mixed(1, c=2) mixed(b=2, c=3) mixed(c=4, b='') # need separate functions, otherwise these might swallow the errors def mixed2(*args, **kwargs): return simple_mixed(1, *args, **kwargs) #! 7 type-error-too-few-arguments mixed2(c=2) #! 7 type-error-too-few-arguments mixed2(3) #! 13 type-error-too-many-arguments mixed2(3, 4, 5) # TODO reenable ##! 13 type-error-too-many-arguments #mixed2(3, 4, c=5) #! 7 type-error-multiple-values mixed2(3, b=5) # ----------------- # plain wrong arguments # ----------------- #! 12 type-error-star-star simple(1, **[]) #! 12 type-error-star-star simple(1, **1) class A(): pass #! 12 type-error-star-star simple(1, **A()) #! 11 type-error-star simple(1, *1) jedi-0.11.1/test/static_analysis/attribute_warnings.py0000664000175000017500000000135213214571123023025 0ustar davedave00000000000000""" Jedi issues warnings for possible errors if ``__getattr__``, ``__getattribute__`` or ``setattr`` are used. """ # ----------------- # __getattr*__ # ----------------- class Cls(): def __getattr__(self, name): return getattr(str, name) Cls().upper #! 6 warning attribute-error Cls().undefined class Inherited(Cls): pass Inherited().upper #! 12 warning attribute-error Inherited().undefined # ----------------- # setattr # ----------------- class SetattrCls(): def __init__(self, dct): # Jedi doesn't even try to understand such code for k, v in dct.items(): setattr(self, k, v) self.defined = 3 c = SetattrCls({'a': 'b'}) c.defined #! 2 warning attribute-error c.undefined jedi-0.11.1/test/static_analysis/class_simple.py0000664000175000017500000000023013214571123021562 0ustar davedave00000000000000class Base(object): class Nested(): def foo(): pass class X(Base.Nested): pass X().foo() #! 4 attribute-error X().bar() jedi-0.11.1/test/static_analysis/operations.py0000664000175000017500000000026413214571123021276 0ustar davedave00000000000000-1 + 1 1 + 1.0 #! 2 type-error-operation 1 + '1' #! 2 type-error-operation 1 - '1' -1 - - 1 -1 - int() int() - float() float() - 3.0 a = 3 b = '' #! 2 type-error-operation a + b jedi-0.11.1/test/static_analysis/imports.py0000664000175000017500000000057213214571123020612 0ustar davedave00000000000000 #! 7 import-error import not_existing import os from os.path import abspath #! 20 import-error from os.path import not_existing from datetime import date date.today #! 5 attribute-error date.not_existing_attribute #! 14 import-error from datetime.date import today #! 16 import-error import datetime.date #! 7 import-error import not_existing_nested.date import os.path jedi-0.11.1/test/static_analysis/iterable.py0000664000175000017500000000055613214571123020706 0ustar davedave00000000000000 a, b = {'asdf': 3, 'b': 'str'} a x = [1] x[0], b = {'a': 1, 'b': '2'} dct = {3: ''} for x in dct: pass #! 4 type-error-not-iterable for x, y in dct: pass # Shouldn't cause issues, because if there are no types (or we don't know what # the types are, we should just ignore it. #! 0 value-error-too-few-values a, b = [] #! 7 name-error a, b = NOT_DEFINED jedi-0.11.1/test/static_analysis/python2.py0000664000175000017500000000020013214571123020504 0ustar davedave00000000000000""" Some special cases of Python 2. """ # python <= 2.7 # print is syntax: print 1 print(1) #! 6 name-error print NOT_DEFINED jedi-0.11.1/test/static_analysis/builtins.py0000664000175000017500000000027113214571123020742 0ustar davedave00000000000000# ---------- # isinstance # ---------- isinstance(1, int) isinstance(1, (int, str)) #! 14 type-error-isinstance isinstance(1, 1) #! 14 type-error-isinstance isinstance(1, [int, str]) jedi-0.11.1/test/static_analysis/try_except.py0000664000175000017500000000263013214571123021300 0ustar davedave00000000000000try: #! 4 attribute-error str.not_existing except TypeError: pass try: str.not_existing except AttributeError: #! 4 attribute-error str.not_existing pass try: import not_existing_import except ImportError: pass try: #! 7 import-error import not_existing_import except AttributeError: pass # ----------------- # multi except # ----------------- try: str.not_existing except (TypeError, AttributeError): pass try: str.not_existing except ImportError: pass except (NotImplementedError, AttributeError): pass try: #! 4 attribute-error str.not_existing except (TypeError, NotImplementedError): pass # ----------------- # detailed except # ----------------- try: str.not_existing except ((AttributeError)): pass try: #! 4 attribute-error str.not_existing except [AttributeError]: pass # Should be able to detect errors in except statement as well. try: pass #! 7 name-error except Undefined: pass # ----------------- # inheritance # ----------------- try: undefined except Exception: pass # should catch everything try: undefined except: pass # ----------------- # kind of similar: hasattr # ----------------- if hasattr(str, 'undefined'): str.undefined str.upper #! 4 attribute-error str.undefined2 #! 4 attribute-error int.undefined else: str.upper #! 4 attribute-error str.undefined jedi-0.11.1/test/static_analysis/descriptors.py0000664000175000017500000000036513214571123021456 0ustar davedave00000000000000# classmethod class TarFile(): @classmethod def open(cls, name, **kwargs): return cls.taropen(name, **kwargs) @classmethod def taropen(cls, name, **kwargs): return name # should just work TarFile.open('hallo') jedi-0.11.1/test/static_analysis/comprehensions.py0000664000175000017500000000140513214571123022145 0ustar davedave00000000000000[a + 1 for a in [1, 2]] #! 3 type-error-operation [a + '' for a in [1, 2]] #! 3 type-error-operation (a + '' for a in [1, 2]) #! 12 type-error-not-iterable [a for a in 1] tuple(str(a) for a in [1]) #! 8 type-error-operation tuple(a + 3 for a in ['']) # ---------- # Some variables within are not defined # ---------- abcdef = [] #! 12 name-error [1 for a in NOT_DEFINFED for b in abcdef if 1] #! 25 name-error [1 for a in [1] for b in NOT_DEFINED if 1] #! 12 name-error [1 for a in NOT_DEFINFED for b in [1] if 1] #! 19 name-error (1 for a in [1] if NOT_DEFINED) # ---------- # unbalanced sides. # ---------- # ok (1 for a, b in [(1, 2)]) #! 13 value-error-too-few-values (1 for a, b, c in [(1, 2)]) #! 10 value-error-too-many-values (1 for a, b in [(1, 2, 3)]) jedi-0.11.1/test/static_analysis/normal_arguments.py0000664000175000017500000000217613214571123022474 0ustar davedave00000000000000# ----------------- # normal arguments (no keywords) # ----------------- def simple(a): return a simple(1) #! 6 type-error-too-few-arguments simple() #! 10 type-error-too-many-arguments simple(1, 2) #! 10 type-error-too-many-arguments simple(1, 2, 3) # ----------------- # keyword arguments # ----------------- simple(a=1) #! 7 type-error-keyword-argument simple(b=1) #! 10 type-error-too-many-arguments simple(1, a=1) def two_params(x, y): return y two_params(y=2, x=1) two_params(1, y=2) #! 11 type-error-multiple-values two_params(1, x=2) #! 17 type-error-too-many-arguments two_params(1, 2, y=3) # ----------------- # default arguments # ----------------- def default(x, y=1, z=2): return x #! 7 type-error-too-few-arguments default() default(1) default(1, 2) default(1, 2, 3) #! 17 type-error-too-many-arguments default(1, 2, 3, 4) default(x=1) # ----------------- # class arguments # ----------------- class Instance(): def __init__(self, foo): self.foo = foo Instance(1).foo Instance(foo=1).foo #! 12 type-error-too-many-arguments Instance(1, 2).foo #! 8 type-error-too-few-arguments Instance().foo jedi-0.11.1/test/static_analysis/generators.py0000664000175000017500000000014613214571123021263 0ustar davedave00000000000000def generator(): yield 1 #! 11 type-error-not-subscriptable generator()[0] list(generator())[0] jedi-0.11.1/test/static_analysis/import_tree/0000775000175000017500000000000013214571377021103 5ustar davedave00000000000000jedi-0.11.1/test/static_analysis/import_tree/b.py0000664000175000017500000000000013214571123021651 0ustar davedave00000000000000jedi-0.11.1/test/static_analysis/import_tree/__init__.py0000664000175000017500000000014213214571123023176 0ustar davedave00000000000000""" Another import tree, this time not for completion, but static analysis. """ from .a import * jedi-0.11.1/test/static_analysis/import_tree/a.py0000664000175000017500000000002013214571123021652 0ustar davedave00000000000000from . import b jedi-0.11.1/test/test_speed.py0000664000175000017500000000462113214571123016061 0ustar davedave00000000000000""" Speed tests of Jedi. To prove that certain things don't take longer than they should. """ import time import functools from .helpers import TestCase, cwd_at import jedi class TestSpeed(TestCase): def _check_speed(time_per_run, number=4, run_warm=True): """ Speed checks should typically be very tolerant. Some machines are faster than others, but the tests should still pass. These tests are here to assure that certain effects that kill jedi performance are not reintroduced to Jedi.""" def decorated(func): @functools.wraps(func) def wrapper(self): if run_warm: func(self) first = time.time() for i in range(number): func(self) single_time = (time.time() - first) / number message = 'speed issue %s, %s' % (func, single_time) assert single_time < time_per_run, message return wrapper return decorated @_check_speed(0.2) def test_os_path_join(self): s = "from posixpath import join; join('', '')." assert len(jedi.Script(s).completions()) > 10 # is a str completion @_check_speed(0.15) def test_scipy_speed(self): s = 'import scipy.weave; scipy.weave.inline(' script = jedi.Script(s, 1, len(s), '') script.call_signatures() #print(jedi.imports.imports_processed) @_check_speed(0.8) @cwd_at('test') def test_precedence_slowdown(self): """ Precedence calculation can slow down things significantly in edge cases. Having strange recursion structures increases the problem. """ with open('speed/precedence.py') as f: line = len(f.read().splitlines()) assert jedi.Script(line=line, path='speed/precedence.py').goto_definitions() @_check_speed(0.1) def test_no_repr_computation(self): """ For Interpreter completion aquisition of sourcefile can trigger unwanted computation of repr(). Exemple : big pandas data. See issue #919. """ class SlowRepr(): "class to test what happens if __repr__ is very slow." def some_method(self): pass def __repr__(self): time.sleep(0.2) test = SlowRepr() jedi.Interpreter('test.som', [locals()]).completions() jedi-0.11.1/test/test_parso_integration/0000775000175000017500000000000013214571377020146 5ustar davedave00000000000000jedi-0.11.1/test/test_parso_integration/test_error_correction.py0000664000175000017500000000200613214571123025122 0ustar davedave00000000000000from textwrap import dedent import jedi def test_error_correction_with(): source = """ with open() as f: try: f.""" comps = jedi.Script(source).completions() assert len(comps) > 30 # `open` completions have a closed attribute. assert [1 for c in comps if c.name == 'closed'] def test_string_literals(): """Simplified case of jedi-vim#377.""" source = dedent(""" x = ur''' def foo(): pass """) script = jedi.Script(dedent(source)) assert script._get_module().tree_node.end_pos == (6, 0) assert script.completions() def test_incomplete_function(): source = '''return ImportErr''' script = jedi.Script(dedent(source), 1, 3) assert script.completions() def test_decorator_string_issue(): """ Test case from #589 """ source = dedent('''\ """ @""" def bla(): pass bla.''') s = jedi.Script(source) assert s.completions() assert s._get_module().tree_node.get_code() == source jedi-0.11.1/test/test_parso_integration/test_basic.py0000664000175000017500000000376013214571123022633 0ustar davedave00000000000000from textwrap import dedent from parso import parse import jedi def test_form_feed_characters(): s = "\f\nclass Test(object):\n pass" jedi.Script(s, line=2, column=18).call_signatures() def check_p(src): module_node = parse(src) assert src == module_node.get_code() return module_node def test_if(): src = dedent('''\ def func(): x = 3 if x: def y(): return x return y() func() ''') # Two parsers needed, one for pass and one for the function. check_p(src) assert [d.name for d in jedi.Script(src, 8, 6).goto_definitions()] == ['int'] def test_class_and_if(): src = dedent("""\ class V: def __init__(self): pass if 1: c = 3 def a_func(): return 1 # COMMENT a_func()""") check_p(src) assert [d.name for d in jedi.Script(src).goto_definitions()] == ['int'] def test_add_to_end(): """ The diff parser doesn't parse everything again. It just updates with the help of caches, this is an example that didn't work. """ a = dedent("""\ class Abc(): def abc(self): self.x = 3 class Two(Abc): def g(self): self """) # ^ here is the first completion b = " def h(self):\n" \ " self." def complete(code, line=None, column=None): script = jedi.Script(code, line, column, 'example.py') assert script.completions() complete(a, 7, 12) complete(a + b) a = a[:-1] + '.\n' complete(a, 7, 13) complete(a + b) def test_tokenizer_with_string_literal_backslash(): c = jedi.Script("statement = u'foo\\\n'; statement").goto_definitions() assert c[0]._name._context.obj == 'foo' def test_ellipsis(): def_, = jedi.Script(dedent("""\ class Foo(): def __getitem__(self, index): return index Foo()[...]""")).goto_definitions() assert def_.name == 'ellipsis' jedi-0.11.1/test/test_parso_integration/test_parser_utils.py0000664000175000017500000000517413214571123024267 0ustar davedave00000000000000# -*- coding: utf-8 -*- from jedi._compatibility import is_py3 from jedi import parser_utils from parso import parse from parso.python import tree import pytest class TestCallAndName(): def get_call(self, source): # Get the simple_stmt and then the first one. node = parse(source).children[0] if node.type == 'simple_stmt': return node.children[0] return node def test_name_and_call_positions(self): name = self.get_call('name\nsomething_else') assert name.value == 'name' assert name.start_pos == (1, 0) assert name.end_pos == (1, 4) leaf = self.get_call('1.0\n') assert leaf.value == '1.0' assert parser_utils.safe_literal_eval(leaf.value) == 1.0 assert leaf.start_pos == (1, 0) assert leaf.end_pos == (1, 3) def test_call_type(self): call = self.get_call('hello') assert isinstance(call, tree.Name) def test_literal_type(self): literal = self.get_call('1.0') assert isinstance(literal, tree.Literal) assert type(parser_utils.safe_literal_eval(literal.value)) == float literal = self.get_call('1') assert isinstance(literal, tree.Literal) assert type(parser_utils.safe_literal_eval(literal.value)) == int literal = self.get_call('"hello"') assert isinstance(literal, tree.Literal) assert parser_utils.safe_literal_eval(literal.value) == 'hello' def test_user_statement_on_import(): """github #285""" s = "from datetime import (\n" \ " time)" for pos in [(2, 1), (2, 4)]: p = parse(s) stmt = parser_utils.get_statement_of_position(p, pos) assert isinstance(stmt, tree.Import) assert [n.value for n in stmt.get_defined_names()] == ['time'] def test_hex_values_in_docstring(): source = r''' def foo(object): """ \xff """ return 1 ''' doc = parser_utils.clean_scope_docstring(next(parse(source).iter_funcdefs())) if is_py3: assert doc == '\xff' else: assert doc == u'�' @pytest.mark.parametrize( 'code,call_signature', [ ('def my_function(x, y, z) -> str:\n return', 'my_function(x, y, z)'), ('lambda x, y, z: x + y * z\n', '(x, y, z)') ]) def test_get_call_signature(code, call_signature): node = parse(code, version='3.5').children[0] if node.type == 'simple_stmt': node = node.children[0] assert parser_utils.get_call_signature(node) == call_signature assert parser_utils.get_doc_with_call_signature(node) == (call_signature + '\n\n') jedi-0.11.1/test/refactor/0000775000175000017500000000000013214571377015165 5ustar davedave00000000000000jedi-0.11.1/test/refactor/inline.py0000664000175000017500000000036613214571123017007 0ustar davedave00000000000000# --- simple def test(): #? 4 a = (30 + b, c) + 1 return test(100, a) # +++ def test(): return test(100, (30 + b, c) + 1) # --- simple if 1: #? 4 a = 1, 2 return test(100, a) # +++ if 1: return test(100, (1, 2)) jedi-0.11.1/test/refactor/extract.py0000664000175000017500000000127113214571123017177 0ustar davedave00000000000000# --- simple def test(): #? 35 a return test(100, (30 + b, c) + 1) # +++ def test(): a = (30 + b, c) + 1 return test(100, a) # --- simple #2 def test(): #? 25 a return test(100, (30 + b, c) + 1) # +++ def test(): a = 30 + b return test(100, (a, c) + 1) # --- multiline def test(): #? 30 x return test(1, (30 + b, c) + 1) # +++ def test(): x = ((30 + b, c) + 1) return test(1, x ) # --- multiline #2 def test(): #? 25 x return test(1, (30 + b, c) + 1) # +++ def test(): x = 30 + b return test(1, (x, c) + 1) jedi-0.11.1/test/refactor/rename.py0000664000175000017500000000043713214571123016777 0ustar davedave00000000000000""" Test coverage for renaming is mostly being done by testing `Script.usages`. """ # --- simple def test1(): #? 7 blabla test1() AssertionError return test1, test1.not_existing # +++ def blabla(): blabla() AssertionError return blabla, blabla.not_existing jedi-0.11.1/test/test_evaluate/0000775000175000017500000000000013214571377016225 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/zipped_imports/0000775000175000017500000000000013214571377021275 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/zipped_imports/pkg.zip0000664000175000017500000000077013214571123022573 0ustar davedave00000000000000PK Hpkg/UX wWwWPKHpkg/__init__.pyUX wWwW-/*QO)IJ+υ2!2e9\\y% F\PKP0/7PKH pkg/module.pyUX vW`W+K)MUUP/NO)I+sPK0PPK H @Apkg/UXwWwWPKHP0/7 @2pkg/__init__.pyUXwWwWPKH0P @pkg/module.pyUXvW`WPKjedi-0.11.1/test/test_evaluate/zipped_imports/not_pkg.zip0000664000175000017500000000031413214571123023445 0ustar davedave00000000000000PKH not_pkg.pyUX "W"W+K)MUUP/KIOIUJI($*$&g'EPK<6*,PKH<6*, @not_pkg.pyUX"W"WPKDrjedi-0.11.1/test/test_evaluate/test_sys_path.py0000664000175000017500000000402313214571123021454 0ustar davedave00000000000000import os from glob import glob import sys import pytest from jedi.evaluate import sys_path from jedi import Script def test_paths_from_assignment(): def paths(src): script = Script(src, path='/foo/bar.py') expr_stmt = script._get_module_node().children[0] return set(sys_path._paths_from_assignment(script._get_module(), expr_stmt)) assert paths('sys.path[0:0] = ["a"]') == set(['/foo/a']) assert paths('sys.path = ["b", 1, x + 3, y, "c"]') == set(['/foo/b', '/foo/c']) assert paths('sys.path = a = ["a"]') == set(['/foo/a']) # Fail for complicated examples. assert paths('sys.path, other = ["a"], 2') == set() # Currently venv site-packages resolution only seeks pythonX.Y/site-packages # that belong to the same version as the interpreter to avoid issues with # cross-version imports. "venvs/" dir contains "venv27" and "venv34" that # mimic venvs created for py2.7 and py3.4 respectively. If test runner is # invoked with one of those versions, the test below will be run for the # matching directory. CUR_DIR = os.path.dirname(__file__) VENVS = list(glob( os.path.join(CUR_DIR, 'sample_venvs/venv%d%d' % sys.version_info[:2]))) @pytest.mark.parametrize('venv', VENVS) def test_get_venv_path(venv): pjoin = os.path.join venv_path = sys_path.get_venv_path(venv) site_pkgs = (glob(pjoin(venv, 'lib', 'python*', 'site-packages')) + glob(pjoin(venv, 'lib', 'site-packages')))[0] ETALON = [ pjoin('/path', 'from', 'egg-link'), pjoin(site_pkgs, '.', 'relative', 'egg-link', 'path'), site_pkgs, pjoin(site_pkgs, 'dir-from-foo-pth'), ] # Ensure that pth and egg-link paths were added. assert venv_path[:len(ETALON)] == ETALON # Ensure that none of venv dirs leaked to the interpreter. assert not set(sys.path).intersection(ETALON) # Ensure that "import ..." lines were ignored. assert pjoin('/path', 'from', 'smth.py') not in venv_path assert pjoin('/path', 'from', 'smth.py:extend_path') not in venv_path jedi-0.11.1/test/test_evaluate/namespace_package/0000775000175000017500000000000013214571377021634 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns1/0000775000175000017500000000000013214571377022335 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns1/pkg/0000775000175000017500000000000013214571377023116 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns1/pkg/ns1_file.py0000664000175000017500000000002213214571123025147 0ustar davedave00000000000000foo = 'ns1_file!' jedi-0.11.1/test/test_evaluate/namespace_package/ns1/pkg/ns1_folder/0000775000175000017500000000000013214571377025152 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns1/pkg/ns1_folder/__init__.py0000664000175000017500000000002413214571123027244 0ustar davedave00000000000000foo = 'ns1_folder!' jedi-0.11.1/test/test_evaluate/namespace_package/ns1/pkg/__init__.py0000664000175000017500000000032613214571123025215 0ustar davedave00000000000000foo = 'ns1!' # this is a namespace package try: import pkg_resources pkg_resources.declare_namespace(__name__) except ImportError: import pkgutil __path__ = pkgutil.extend_path(__path__, __name__) jedi-0.11.1/test/test_evaluate/namespace_package/ns2/0000775000175000017500000000000013214571377022336 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/0000775000175000017500000000000013214571377023117 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/ns2_file.py0000664000175000017500000000002213214571123025151 0ustar davedave00000000000000foo = 'ns2_file!' jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/0000775000175000017500000000000013214571377025154 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/__init__.py0000664000175000017500000000002413214571123027246 0ustar davedave00000000000000foo = 'ns2_folder!' jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/nested/0000775000175000017500000000000013214571377026436 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/namespace_package/ns2/pkg/ns2_folder/nested/__init__.py0000664000175000017500000000002013214571123030524 0ustar davedave00000000000000foo = 'nested!' jedi-0.11.1/test/test_evaluate/flask-site-packages/0000775000175000017500000000000013214571377022043 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flaskext/0000775000175000017500000000000013214571377023664 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flaskext/bar.py0000664000175000017500000000003413214571123024764 0ustar davedave00000000000000class Bar(object): pass jedi-0.11.1/test/test_evaluate/flask-site-packages/flaskext/__init__.py0000664000175000017500000000000013214571123025750 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flaskext/moo/0000775000175000017500000000000013214571377024456 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flaskext/moo/__init__.py0000664000175000017500000000001013214571123026543 0ustar davedave00000000000000Moo = 1 jedi-0.11.1/test/test_evaluate/flask-site-packages/flask/0000775000175000017500000000000013214571377023143 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flask/__init__.py0000664000175000017500000000002513214571123025236 0ustar davedave00000000000000 jedi-0.11.1/test/test_evaluate/flask-site-packages/flask/ext/0000775000175000017500000000000013214571377023743 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flask/ext/__init__.py0000664000175000017500000000000113214571123026030 0ustar davedave00000000000000 jedi-0.11.1/test/test_evaluate/flask-site-packages/flask_baz/0000775000175000017500000000000013214571377023777 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/flask-site-packages/flask_baz/__init__.py0000664000175000017500000000001013214571123026064 0ustar davedave00000000000000Baz = 1 jedi-0.11.1/test/test_evaluate/flask-site-packages/flask_foo.py0000664000175000017500000000003413214571123024342 0ustar davedave00000000000000class Foo(object): pass jedi-0.11.1/test/test_evaluate/test_docstring.py0000664000175000017500000002127113214571123021622 0ustar davedave00000000000000""" Testing of docstring related issues and especially ``jedi.docstrings``. """ from textwrap import dedent import jedi import pytest from ..helpers import unittest try: import numpydoc # NOQA except ImportError: numpydoc_unavailable = True else: numpydoc_unavailable = False try: import numpy except ImportError: numpy_unavailable = True else: numpy_unavailable = False class TestDocstring(unittest.TestCase): def test_function_doc(self): defs = jedi.Script(""" def func(): '''Docstring of `func`.''' func""").goto_definitions() self.assertEqual(defs[0].docstring(), 'func()\n\nDocstring of `func`.') def test_class_doc(self): defs = jedi.Script(""" class TestClass(): '''Docstring of `TestClass`.''' TestClass""").goto_definitions() self.assertEqual(defs[0].docstring(), 'Docstring of `TestClass`.') def test_instance_doc(self): defs = jedi.Script(""" class TestClass(): '''Docstring of `TestClass`.''' tc = TestClass() tc""").goto_definitions() self.assertEqual(defs[0].docstring(), 'Docstring of `TestClass`.') @unittest.skip('need evaluator class for that') def test_attribute_docstring(self): defs = jedi.Script(""" x = None '''Docstring of `x`.''' x""").goto_definitions() self.assertEqual(defs[0].docstring(), 'Docstring of `x`.') @unittest.skip('need evaluator class for that') def test_multiple_docstrings(self): defs = jedi.Script(""" def func(): '''Original docstring.''' x = func '''Docstring of `x`.''' x""").goto_definitions() docs = [d.docstring() for d in defs] self.assertEqual(docs, ['Original docstring.', 'Docstring of `x`.']) def test_completion(self): assert jedi.Script(''' class DocstringCompletion(): #? [] """ asdfas """''').completions() def test_docstrings_type_dotted_import(self): s = """ def func(arg): ''' :type arg: random.Random ''' arg.""" names = [c.name for c in jedi.Script(s).completions()] assert 'seed' in names def test_docstrings_param_type(self): s = """ def func(arg): ''' :param str arg: some description ''' arg.""" names = [c.name for c in jedi.Script(s).completions()] assert 'join' in names def test_docstrings_type_str(self): s = """ def func(arg): ''' :type arg: str ''' arg.""" names = [c.name for c in jedi.Script(s).completions()] assert 'join' in names def test_docstring_instance(self): # The types hint that it's a certain kind s = dedent(""" class A: def __init__(self,a): ''' :type a: threading.Thread ''' if a is not None: a.start() self.a = a def method_b(c): ''' :type c: A ''' c.""") names = [c.name for c in jedi.Script(s).completions()] assert 'a' in names assert '__init__' in names assert 'mro' not in names # Exists only for types. def test_docstring_keyword(self): completions = jedi.Script('assert').completions() self.assertIn('assert', completions[0].docstring()) # ---- Numpy Style Tests --- @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_parameters(): s = dedent(''' def foobar(x, y): """ Parameters ---------- x : int y : str """ y.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' in names assert 'capitalize' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_parameters_set_of_values(): s = dedent(''' def foobar(x, y): """ Parameters ---------- x : {'foo', 'bar', 100500}, optional """ x.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' in names assert 'capitalize' in names assert 'numerator' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_parameters_alternative_types(): s = dedent(''' def foobar(x, y): """ Parameters ---------- x : int or str or list """ x.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' in names assert 'capitalize' in names assert 'numerator' in names assert 'append' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_returns(): s = dedent(''' def foobar(): """ Returns ---------- x : int y : str """ return x def bazbiz(): z = foobar() z.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' in names assert 'capitalize' in names assert 'numerator' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_returns_set_of_values(): s = dedent(''' def foobar(): """ Returns ---------- x : {'foo', 'bar', 100500} """ return x def bazbiz(): z = foobar() z.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' in names assert 'capitalize' in names assert 'numerator' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_returns_alternative_types(): s = dedent(''' def foobar(): """ Returns ---------- int or list of str """ return x def bazbiz(): z = foobar() z.''') names = [c.name for c in jedi.Script(s).completions()] assert 'isupper' not in names assert 'capitalize' not in names assert 'numerator' in names assert 'append' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_returns_list_of(): s = dedent(''' def foobar(): """ Returns ---------- list of str """ return x def bazbiz(): z = foobar() z.''') names = [c.name for c in jedi.Script(s).completions()] assert 'append' in names assert 'isupper' not in names assert 'capitalize' not in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_returns_obj(): s = dedent(''' def foobar(x, y): """ Returns ---------- int or random.Random """ return x + y def bazbiz(): z = foobar(x, y) z.''') script = jedi.Script(s) names = [c.name for c in script.completions()] assert 'numerator' in names assert 'seed' in names @pytest.mark.skipif(numpydoc_unavailable, reason='numpydoc module is unavailable') def test_numpydoc_yields(): s = dedent(''' def foobar(): """ Yields ---------- x : int y : str """ return x def bazbiz(): z = foobar(): z.''') names = [c.name for c in jedi.Script(s).completions()] print('names',names) assert 'isupper' in names assert 'capitalize' in names assert 'numerator' in names @pytest.mark.skipif(numpydoc_unavailable or numpy_unavailable, reason='numpydoc or numpy module is unavailable') def test_numpy_returns(): s = dedent(''' import numpy x = numpy.asarray([]) x.d''') names = [c.name for c in jedi.Script(s).completions()] print(names) assert 'diagonal' in names @pytest.mark.skipif(numpydoc_unavailable or numpy_unavailable, reason='numpydoc or numpy module is unavailable') def test_numpy_comp_returns(): s = dedent(''' import numpy x = numpy.array([]) x.d''') names = [c.name for c in jedi.Script(s).completions()] print(names) assert 'diagonal' in names jedi-0.11.1/test/test_evaluate/test_precedence.py0000664000175000017500000000073513214571123021725 0ustar davedave00000000000000from jedi.evaluate.compiled import CompiledObject from jedi import Script import pytest @pytest.mark.skipif('sys.version_info[0] < 3') # Ellipsis does not exists in 2 @pytest.mark.parametrize('source', [ '1 == 1', '1.0 == 1', '... == ...' ]) def test_equals(source): script = Script(source) node = script._get_module_node().children[0] first, = script._get_module().eval_node(node) assert isinstance(first, CompiledObject) and first.obj is True jedi-0.11.1/test/test_evaluate/test_mixed.py0000664000175000017500000000024013214571123020725 0ustar davedave00000000000000import jedi def test_on_code(): from functools import wraps i = jedi.Interpreter("wraps.__code__", [{'wraps':wraps}]) assert i.goto_definitions() jedi-0.11.1/test/test_evaluate/absolute_import/0000775000175000017500000000000013214571377021435 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/absolute_import/local_module.py0000664000175000017500000000057713214571123024444 0ustar davedave00000000000000""" This is a module that imports the *standard library* unittest, despite there being a local "unittest" module. It specifies that it wants the stdlib one with the ``absolute_import`` __future__ import. The twisted equivalent of this module is ``twisted.trial._synctest``. """ from __future__ import absolute_import import unittest class Assertions(unittest.TestCase): pass jedi-0.11.1/test/test_evaluate/absolute_import/unittest.py0000664000175000017500000000066513214571123023662 0ustar davedave00000000000000""" This is a module that shadows a builtin (intentionally). It imports a local module, which in turn imports stdlib unittest (the name shadowed by this module). If that is properly resolved, there's no problem. However, if jedi doesn't understand absolute_imports, it will get this module again, causing infinite recursion. """ from local_module import Assertions class TestCase(Assertions): def test(self): self.assertT jedi-0.11.1/test/test_evaluate/test_absolute_import.py0000664000175000017500000000045113214571123023033 0ustar davedave00000000000000""" Tests ``from __future__ import absolute_import`` (only important for Python 2.X) """ import jedi from .. import helpers @helpers.cwd_at("test/test_evaluate/absolute_import") def test_can_complete_when_shadowing(): script = jedi.Script(path="unittest.py") assert script.completions() jedi-0.11.1/test/test_evaluate/implicit_namespace_package/0000775000175000017500000000000013214571377023526 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns1/0000775000175000017500000000000013214571377024227 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns1/pkg/0000775000175000017500000000000013214571377025010 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns1/pkg/ns1_file.py0000664000175000017500000000002213214571123027041 0ustar davedave00000000000000foo = 'ns1_file!' jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns2/0000775000175000017500000000000013214571377024230 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns2/pkg/0000775000175000017500000000000013214571377025011 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_namespace_package/ns2/pkg/ns2_file.py0000664000175000017500000000002213214571123027043 0ustar davedave00000000000000foo = 'ns2_file!' jedi-0.11.1/test/test_evaluate/test_compiled.py0000664000175000017500000000512513214571123021422 0ustar davedave00000000000000from textwrap import dedent import parso from jedi._compatibility import builtins, is_py3 from jedi.evaluate import compiled from jedi.evaluate.context import instance from jedi.evaluate.context.function import FunctionContext from jedi.evaluate import Evaluator from jedi.evaluate.project import Project from jedi.parser_utils import clean_scope_docstring from jedi import Script def _evaluator(): return Evaluator(parso.load_grammar(), Project()) def test_simple(): e = _evaluator() bltn = compiled.CompiledObject(e, builtins) obj = compiled.CompiledObject(e, '_str_', bltn) upper, = obj.py__getattribute__('upper') objs = list(upper.execute_evaluated()) assert len(objs) == 1 assert isinstance(objs[0], instance.CompiledInstance) def test_fake_loading(): e = _evaluator() assert isinstance(compiled.create(e, next), FunctionContext) builtin = compiled.get_special_object(e, 'BUILTINS') string, = builtin.py__getattribute__('str') from_name = compiled._create_from_name(e, builtin, string, '__init__') assert isinstance(from_name, FunctionContext) def test_fake_docstr(): node = compiled.create(_evaluator(), next).tree_node assert clean_scope_docstring(node) == next.__doc__ def test_parse_function_doc_illegal_docstr(): docstr = """ test_func(o doesn't have a closing bracket. """ assert ('', '') == compiled._parse_function_doc(docstr) def test_doc(): """ Even CompiledObject docs always return empty docstrings - not None, that's just a Jedi API definition. """ obj = compiled.CompiledObject(_evaluator(), ''.__getnewargs__) assert obj.py__doc__() == '' def test_string_literals(): def typ(string): d = Script("a = %s; a" % string).goto_definitions()[0] return d.name assert typ('""') == 'str' assert typ('r""') == 'str' if is_py3: assert typ('br""') == 'bytes' assert typ('b""') == 'bytes' assert typ('u""') == 'str' else: assert typ('b""') == 'str' assert typ('u""') == 'unicode' def test_method_completion(): code = dedent(''' class Foo: def bar(self): pass foo = Foo() foo.bar.__func__''') if is_py3: result = [] else: result = ['__func__'] assert [c.name for c in Script(code).completions()] == result def test_time_docstring(): import time comp, = Script('import time\ntime.sleep').completions() assert comp.docstring() == time.sleep.__doc__ def test_dict_values(): assert Script('import sys\nsys.modules["alshdb;lasdhf"]').goto_definitions() jedi-0.11.1/test/test_evaluate/test_buildout_detection.py0000664000175000017500000000462113214571123023513 0ustar davedave00000000000000import os from textwrap import dedent from jedi import Script from jedi.evaluate.sys_path import (_get_parent_dir_with_file, _get_buildout_script_paths, check_sys_path_modifications) from ..helpers import cwd_at def check_module_test(code): module_context = Script(code)._get_module() return check_sys_path_modifications(module_context) @cwd_at('test/test_evaluate/buildout_project/src/proj_name') def test_parent_dir_with_file(): parent = _get_parent_dir_with_file( os.path.abspath(os.curdir), 'buildout.cfg') assert parent is not None assert parent.endswith(os.path.join('test', 'test_evaluate', 'buildout_project')) @cwd_at('test/test_evaluate/buildout_project/src/proj_name') def test_buildout_detection(): scripts = _get_buildout_script_paths(os.path.abspath('./module_name.py')) assert len(scripts) == 1 curdir = os.path.abspath(os.curdir) appdir_path = os.path.normpath(os.path.join(curdir, '../../bin/app')) assert scripts[0] == appdir_path def test_append_on_non_sys_path(): code = dedent(""" class Dummy(object): path = [] d = Dummy() d.path.append('foo')""" ) paths = check_module_test(code) assert not paths assert 'foo' not in paths def test_path_from_invalid_sys_path_assignment(): code = dedent(""" import sys sys.path = 'invalid'""" ) paths = check_module_test(code) assert not paths assert 'invalid' not in paths @cwd_at('test/test_evaluate/buildout_project/src/proj_name/') def test_sys_path_with_modifications(): code = dedent(""" import os """) path = os.path.abspath(os.path.join(os.curdir, 'module_name.py')) paths = Script(code, path=path)._evaluator.project.sys_path assert '/tmp/.buildout/eggs/important_package.egg' in paths def test_path_from_sys_path_assignment(): code = dedent(""" #!/usr/bin/python import sys sys.path[0:0] = [ '/usr/lib/python3.4/site-packages', '/home/test/.buildout/eggs/important_package.egg' ] path[0:0] = [1] import important_package if __name__ == '__main__': sys.exit(important_package.main())""" ) paths = check_module_test(code) assert 1 not in paths assert '/home/test/.buildout/eggs/important_package.egg' in paths jedi-0.11.1/test/test_evaluate/test_literals.py0000664000175000017500000000221013214571123021435 0ustar davedave00000000000000import pytest import jedi from jedi._compatibility import py_version, unicode def _eval_literal(code): def_, = jedi.Script(code).goto_definitions() return def_._name._context.obj @pytest.mark.skipif('sys.version_info[:2] < (3, 6)') def test_f_strings(): """ f literals are not really supported in Jedi. They just get ignored and an empty string is returned. """ assert _eval_literal('f"asdf"') == '' assert _eval_literal('f"{asdf}"') == '' assert _eval_literal('F"{asdf}"') == '' assert _eval_literal('rF"{asdf}"') == '' def test_rb_strings(): assert _eval_literal('br"asdf"') == b'asdf' obj = _eval_literal('rb"asdf"') if py_version < 33: # rb is not valid in Python 2. Due to error recovery we just get a # string. assert obj == 'asdf' else: assert obj == b'asdf' @pytest.mark.skipif('sys.version_info[:2] < (3, 6)') def test_thousand_separators(): assert _eval_literal('1_2_3') == 123 assert _eval_literal('123_456_789') == 123456789 assert _eval_literal('0x3_4') == 52 assert _eval_literal('0b1_0') == 2 assert _eval_literal('0o1_0') == 8 jedi-0.11.1/test/test_evaluate/__init__.py0000664000175000017500000000000013214571123020311 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/test_stdlib.py0000664000175000017500000000426613214571123021114 0ustar davedave00000000000000""" Tests of various stdlib related things that could not be tested with "Black Box Tests". """ from textwrap import dedent import pytest from jedi import Script from jedi._compatibility import is_py26 # The namedtuple is different for different Python2.7 versions. Some versions # are missing the attribute `_class_template`. pytestmark = pytest.mark.skipif('sys.version_info[0] < 3') @pytest.mark.parametrize(['letter', 'expected'], [ ('n', ['name']), ('s', ['smart']), ]) def test_namedtuple_str(letter, expected): source = dedent("""\ import collections Person = collections.namedtuple('Person', 'name smart') dave = Person('Dave', False) dave.%s""") % letter result = Script(source).completions() completions = set(r.name for r in result) if is_py26: assert completions == set() else: assert completions == set(expected) def test_namedtuple_list(): source = dedent("""\ import collections Cat = collections.namedtuple('Person', ['legs', u'length', 'large']) garfield = Cat(4, '85cm', True) garfield.l""") result = Script(source).completions() completions = set(r.name for r in result) if is_py26: assert completions == set() else: assert completions == set(['legs', 'length', 'large']) def test_namedtuple_content(): source = dedent("""\ import collections Foo = collections.namedtuple('Foo', ['bar', 'baz']) named = Foo(baz=4, bar=3.0) unnamed = Foo(4, '') """) def d(source): x, = Script(source).goto_definitions() return x.name assert d(source + 'unnamed.bar') == 'int' assert d(source + 'unnamed.baz') == 'str' assert d(source + 'named.bar') == 'float' assert d(source + 'named.baz') == 'int' def test_nested_namedtuples(): """ From issue #730. """ s = Script(dedent(''' import collections Dataset = collections.namedtuple('Dataset', ['data']) Datasets = collections.namedtuple('Datasets', ['train']) train_x = Datasets(train=Dataset('data_value')) train_x.train.''' )) assert 'data' in [c.name for c in s.completions()] jedi-0.11.1/test/test_evaluate/test_namespace_package.py0000664000175000017500000000463713214571123023244 0ustar davedave00000000000000import jedi from os.path import dirname, join def test_namespace_package(): sys_path = [join(dirname(__file__), d) for d in ['namespace_package/ns1', 'namespace_package/ns2']] def script_with_path(*args, **kwargs): return jedi.Script(sys_path=sys_path, *args, **kwargs) # goto definition assert script_with_path('from pkg import ns1_file').goto_definitions() assert script_with_path('from pkg import ns2_file').goto_definitions() assert not script_with_path('from pkg import ns3_file').goto_definitions() # goto assignment tests = { 'from pkg.ns2_folder.nested import foo': 'nested!', 'from pkg.ns2_folder import foo': 'ns2_folder!', 'from pkg.ns2_file import foo': 'ns2_file!', 'from pkg.ns1_folder import foo': 'ns1_folder!', 'from pkg.ns1_file import foo': 'ns1_file!', 'from pkg import foo': 'ns1!', } for source, solution in tests.items(): ass = script_with_path(source).goto_assignments() assert len(ass) == 1 assert ass[0].description == "foo = '%s'" % solution # completion completions = script_with_path('from pkg import ').completions() names = [str(c.name) for c in completions] # str because of unicode compare = ['foo', 'ns1_file', 'ns1_folder', 'ns2_folder', 'ns2_file', 'pkg_resources', 'pkgutil', '__name__', '__path__', '__package__', '__file__', '__doc__'] # must at least contain these items, other items are not important assert set(compare) == set(names) tests = { 'from pkg import ns2_folder as x': 'ns2_folder!', 'from pkg import ns2_file as x': 'ns2_file!', 'from pkg.ns2_folder import nested as x': 'nested!', 'from pkg import ns1_folder as x': 'ns1_folder!', 'from pkg import ns1_file as x': 'ns1_file!', 'import pkg as x': 'ns1!', } for source, solution in tests.items(): for c in script_with_path(source + '; x.').completions(): if c.name == 'foo': completion = c solution = "foo = '%s'" % solution assert completion.description == solution def test_nested_namespace_package(): code = 'from nested_namespaces.namespace.pkg import CONST' sys_path = [dirname(__file__)] script = jedi.Script(sys_path=sys_path, source=code, line=1, column=45) result = script.goto_definitions() assert len(result) == 1 jedi-0.11.1/test/test_evaluate/init_extension_module/0000775000175000017500000000000013214571377022631 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/init_extension_module/__init__.cpython-34m.so0000775000175000017500000004015513214571123027014 0ustar davedave00000000000000ELF>@H0@8@"    88 8 $$Ptd000QtdRtd  GNUue+Betaߜ`@  )֡BE|qX p \m|+ "? 0     p__gmon_start___init_fini__cxa_finalize_Jv_RegisterClassesPyInit_init_extension_modulePyModule_Create2_Py_NoneStructPyModule_AddObjectlibpthread.so.0libc.so.6_edata__bss_start_endGLIBC_2.2.5ui     h        HWMH5Z %\ @%Z h%R h%J hHH HtHÐU= HATSubH= t H= H  L% H L)HHH9s DHHe AHZ H9rF [A\]fH= UHtHS Ht]H= @]ÐSH= H H5VHHH[UHSHH0 HtH# HHHuH[]ÐHHfooinit_extension_module;`8p`zRx $ @FJ w?;*3$"D0An p o8  H( oxooVo8   GCC: (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3,0od 8?int5 ii i;17xb#(###4 # k#(#0#8 #@ #H!#P"#X$#`5&#h(b#p#,b#tN.p#x2F#w3T#U4#8#RA{#J#K#L#M#N-#+Pb#6R#   # # fb#  8x  8 g0?  G iY k # lZ#NZ.O#xP#Q # Q #(U #0V #85W #@mX. #HaY#PZf #X|^O#`\_U#h`[#pdr #xe#ff #g" #hZ # ka#>n8#7p#t( #w#({ #C ~ #@ # ##*##VZ## #1 # # #I(# #z #># #H#%#y#k# #M# #Ym0o .p`# q #rk`%^$b4?E U`f{ b b Pxbuf#obj#len #  # 1b# b#$ #( ex#0 \ x#8 x#@ #H bb~Cb   b( 3 9 bS  ^ # # # # R# #( #0 #8 #@ #H #P  #X #` h#h #p #x ## ###`## #- #% #\ #k #{#j##P## #tS P 4##U#VU## >{#(^#0 #8:"#@#U#HJ$j &M '4#(#)#* -  .#k/#0Y 3   _4   :  b  bJ<  " =>: @ bZ ?G@A~   zB   bCD@EFGH(Z# I4:OZ ^  M  T ( )# *# +b# -#g (  #get #set @#doc #  # oT/g" ( !x . "`# - #~#  $ #  %# x &3 !h /  0# x 1#( * 2#0 U 3 #8 L 4-#@  5#H  6( #P  7#X < 8 #`O %+@Z KQbk m L  @ MVS`% : ; I$ > $ >   I : ; : ;I8 : ;  : ;  : ; I8 I !I/ &I : ; : ; ' II : ; I8 '  : ; : ;I : ;< ' I.? : ; ' I@4: ; I4: ; I 4: ; I? < 4: ;I? < *  /usr/lib/gcc/x86_64-linux-gnu/4.6/include/usr/include/x86_64-linux-gnu/bits/usr/include/usr/include/python3.4mmodule.cstddef.htypes.hstdio.hlibio.hpyport.hobject.hmethodobject.hdescrobject.hmoduleobject.h  !=getiterfunc__off_tnb_inplace_subtract_IO_read_ptr_chainm_freereleasebufferproc_shortbufdestructornb_floor_dividetp_freePyMemberDefnb_xorml_namesq_lengthobjobjargproc_IO_buf_basemp_ass_subscriptfreefunclong long unsigned intnb_inplace_powerPyObject_IO_buf_endPyMappingMethodstp_memberstp_getattrdescrgetfuncstdoutPyMethodDefnb_inplace_multiplyPyNumberMethodshashfunctp_dictoffsettp_getattronb_subtracttp_as_mappingtp_init_IO_read_endPy_bufferclosure_fileno_IO_backup_basetp_methods__ssize_tinitproc_cur_columnm_doclong long inttp_is_gcdouble_old_offsetsettertp_reservedtp_setattrm_nameiternextfuncnb_multiply_typeobjectprintfunc__pad2nb_rshiftob_typetp_dict_nextwas_sq_slice_IO_markerstdinPyInit_init_extension_modulenb_inplace_remainderternaryfuncreadonlysq_inplace_concatm_methods_Py_NoneStructshape_IO_write_ptrtp_subclassesm_traversem_copytp_clearshort unsigned intsq_concatml_methtp_itemsizem_base_IO_save_basetp_basicsizePyModuleDef_Baseobjobjproc_locknb_positive_flags2_modetp_descr_setsq_ass_itemgetattrfuncsq_itemwas_sq_ass_slicemodule.ctp_nametp_delsq_repeatdescrsetfuncndimPyGetSetDefnb_reservedtp_hashgetbufferproc_IO_write_endvisitprocnb_boolml_flagstp_iternext__off64_tgetattrofunc_objectnb_inplace_rshift_IO_FILEtp_itertp_mrogettertp_baseinquiry_posbf_releasebuffertp_as_numberbinaryfuncnb_absolute_markersm_indexnb_inplace_true_dividetp_descr_getnewfunctp_traversessizeargfunclenfuncPyBufferProcsnb_intunsigned charPyModuleDefnb_inplace_lshiftshort inttp_allocnb_divmodtp_as_sequencetp_weaklist_vtable_offsetm_reloadob_sizessizeobjargprocformattraverseproctp_calltp_version_tagtp_reprnb_addtp_newsetattrfuncm_clearGNU C 4.6.3tp_strsq_containsmp_subscripttp_richcomparetp_doctp_flagsreprfuncnb_inplace_floor_dividenb_andPyCFunctionnb_inplace_orml_docnb_power_IO_lock_ttp_setattrosq_inplace_repeat_IO_read_base_IO_save_endPy_hash_tbufferinfotp_dealloc__pad1PyVarObject__pad3__pad4__pad5tp_cacheob_base_unused2Py_ssize_tPySequenceMethodsnb_inplace_andnb_inplace_xorrichcmpfunctp_printmp_lengthnb_negativenb_invertnb_true_dividenb_inplace_addsetattrofuncnb_remaindertp_finalizeunaryfunctp_getsetnb_lshiftnb_indextp_basesallocfuncm_init_IO_write_basetp_weaklistoffsetm_sizestrides/home/david/source/jedi/test/test_evaluate/init_extension_moduletp_as_buffernb_floatsuboffsets_sbufnb_orinternalob_refcntbf_getbufferw/w/0w#*P*/S/0P.symtab.strtab.shstrtab.note.gnu.build-id.gnu.hash.dynsym.dynstr.gnu.version.gnu.version_r.rela.dyn.rela.plt.init.text.fini.rodata.eh_frame_hdr.eh_frame.ctors.dtors.jcr.dynamic.got.got.plt.data.bss.comment.debug_aranges.debug_info.debug_abbrev.debug_line.debug_str.debug_loc$.oH8 88P@HoVVUoxx dn((H xpps@~8200PPd   0 08 8   0    0*0!]#.0$ (}./38!3 >8Vx( p    0P   0 8        *  80 E [ j x p 0  @ h(   8     &7F K Wj ~ 0" pcall_gmon_startcrtstuff.c__CTOR_LIST____DTOR_LIST____JCR_LIST____do_global_dtors_auxcompleted.6531dtor_idx.6533frame_dummy__CTOR_END____FRAME_END____JCR_END____do_global_ctors_auxmodule.cmodule__DTOR_END____dso_handle_DYNAMIC_GLOBAL_OFFSET_TABLE__edata_fini__gmon_start__PyModule_Create2_Py_NoneStruct_end__bss_startPyModule_AddObject_Jv_RegisterClassesPyInit_init_extension_module__cxa_finalize@@GLIBC_2.2.5_initjedi-0.11.1/test/test_evaluate/init_extension_module/setup.py0000664000175000017500000000037213214571123024332 0ustar davedave00000000000000from distutils.core import setup, Extension setup(name='init_extension_module', version='0.0', description='', ext_modules=[ Extension('init_extension_module.__init__', sources=['module.c']) ] ) jedi-0.11.1/test/test_evaluate/init_extension_module/module.c0000664000175000017500000000044613214571123024253 0ustar davedave00000000000000#include "Python.h" static struct PyModuleDef module = { PyModuleDef_HEAD_INIT, "init_extension_module", NULL, -1, NULL }; PyMODINIT_FUNC PyInit_init_extension_module(void){ PyObject *m = PyModule_Create(&module); PyModule_AddObject(m, "foo", Py_None); return m; } jedi-0.11.1/test/test_evaluate/implicit_nested_namespaces/0000775000175000017500000000000013214571377023600 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_nested_namespaces/namespace/0000775000175000017500000000000013214571377025534 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_nested_namespaces/namespace/pkg/0000775000175000017500000000000013214571377026315 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/implicit_nested_namespaces/namespace/pkg/module.py0000664000175000017500000000001213214571123030132 0ustar davedave00000000000000CONST = 1 jedi-0.11.1/test/test_evaluate/sample_venvs/0000775000175000017500000000000013214571377020727 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/0000775000175000017500000000000013214571377022056 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/0000775000175000017500000000000013214571377022624 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/0000775000175000017500000000000013214571377024374 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/0000775000175000017500000000000013214571377027114 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/import_smth.pth0000664000175000017500000000003713214571123032163 0ustar davedave00000000000000import smth; smth.extend_path()jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/foo.pth0000664000175000017500000000002313214571123030374 0ustar davedave00000000000000./dir-from-foo-pth jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/relative.egg-link0000664000175000017500000000003113214571123032325 0ustar davedave00000000000000./relative/egg-link/path jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/dir-from-foo-pth/0000775000175000017500000000000013214571377032205 5ustar davedave00000000000000././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/dir-from-foo-pth/__init__.pyjedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/dir-from-foo-pth/__in0000664000175000017500000000015213214571123033017 0ustar davedave00000000000000# This file is here to force git to create the directory, as *.pth files only # add existing directories. jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/smth.py0000664000175000017500000000017413214571123030430 0ustar davedave00000000000000import sys sys.path.append('/path/from/smth.py') def extend_path(): sys.path.append('/path/from/smth.py:extend_path') jedi-0.11.1/test/test_evaluate/sample_venvs/venv27/lib/python2.7/site-packages/egg_link.egg-link0000664000175000017500000000002413214571123032273 0ustar davedave00000000000000/path/from/egg-link jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/0000775000175000017500000000000013214571377022054 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/0000775000175000017500000000000013214571377022622 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/0000775000175000017500000000000013214571377024370 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/0000775000175000017500000000000013214571377027110 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/import_smth.pth0000664000175000017500000000003713214571123032157 0ustar davedave00000000000000import smth; smth.extend_path()jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/foo.pth0000664000175000017500000000002313214571123030370 0ustar davedave00000000000000./dir-from-foo-pth jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/relative.egg-link0000664000175000017500000000003113214571123032321 0ustar davedave00000000000000./relative/egg-link/path jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/dir-from-foo-pth/0000775000175000017500000000000013214571377032201 5ustar davedave00000000000000././@LongLink0000000000000000000000000000015400000000000011215 Lustar 00000000000000jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/dir-from-foo-pth/__init__.pyjedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/dir-from-foo-pth/__in0000664000175000017500000000015213214571123033013 0ustar davedave00000000000000# This file is here to force git to create the directory, as *.pth files only # add existing directories. jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/smth.py0000664000175000017500000000017413214571123030424 0ustar davedave00000000000000import sys sys.path.append('/path/from/smth.py') def extend_path(): sys.path.append('/path/from/smth.py:extend_path') jedi-0.11.1/test/test_evaluate/sample_venvs/venv34/lib/python3.4/site-packages/egg_link.egg-link0000664000175000017500000000002413214571123032267 0ustar davedave00000000000000/path/from/egg-link jedi-0.11.1/test/test_evaluate/test_helpers.py0000664000175000017500000000051613214571123021267 0ustar davedave00000000000000from textwrap import dedent from jedi import names from jedi.evaluate import helpers def test_call_of_leaf_in_brackets(): s = dedent(""" x = 1 type(x) """) last_x = names(s, references=True, definitions=False)[-1] name = last_x._name.tree_name call = helpers.call_of_leaf(name) assert call == name jedi-0.11.1/test/test_evaluate/test_representation.py0000664000175000017500000000172013214571123022665 0ustar davedave00000000000000from textwrap import dedent from jedi import Script def get_definition_and_evaluator(source): first, = Script(dedent(source)).goto_definitions() return first._name._context, first._evaluator def test_function_execution(): """ We've been having an issue of a mutable list that was changed inside the function execution. Test if an execution always returns the same result. """ s = """ def x(): return str() x""" func, evaluator = get_definition_and_evaluator(s) # Now just use the internals of the result (easiest way to get a fully # usable function). # Should return the same result both times. assert len(func.execute_evaluated()) == 1 assert len(func.execute_evaluated()) == 1 def test_class_mro(): s = """ class X(object): pass X""" cls, evaluator = get_definition_and_evaluator(s) mro = cls.py__mro__() assert [c.name.string_name for c in mro] == ['X', 'object'] jedi-0.11.1/test/test_evaluate/test_implicit_namespace_package.py0000664000175000017500000000374513214571123025135 0ustar davedave00000000000000from os.path import dirname, join import jedi import pytest @pytest.mark.skipif('sys.version_info[:2] < (3,4)') def test_implicit_namespace_package(): sys_path = [join(dirname(__file__), d) for d in ['implicit_namespace_package/ns1', 'implicit_namespace_package/ns2']] def script_with_path(*args, **kwargs): return jedi.Script(sys_path=sys_path, *args, **kwargs) # goto definition assert script_with_path('from pkg import ns1_file').goto_definitions() assert script_with_path('from pkg import ns2_file').goto_definitions() assert not script_with_path('from pkg import ns3_file').goto_definitions() # goto assignment tests = { 'from pkg.ns2_file import foo': 'ns2_file!', 'from pkg.ns1_file import foo': 'ns1_file!', } for source, solution in tests.items(): ass = script_with_path(source).goto_assignments() assert len(ass) == 1 assert ass[0].description == "foo = '%s'" % solution # completion completions = script_with_path('from pkg import ').completions() names = [c.name for c in completions] compare = ['ns1_file', 'ns2_file'] # must at least contain these items, other items are not important assert set(compare) == set(names) tests = { 'from pkg import ns2_file as x': 'ns2_file!', 'from pkg import ns1_file as x': 'ns1_file!' } for source, solution in tests.items(): for c in script_with_path(source + '; x.').completions(): if c.name == 'foo': completion = c solution = "foo = '%s'" % solution assert completion.description == solution @pytest.mark.skipif('sys.version_info[:2] < (3,4)') def test_implicit_nested_namespace_package(): CODE = 'from implicit_nested_namespaces.namespace.pkg.module import CONST' sys_path = [dirname(__file__)] script = jedi.Script(sys_path=sys_path, source=CODE, line=1, column=61) result = script.goto_definitions() assert len(result) == 1 jedi-0.11.1/test/test_evaluate/test_context.py0000664000175000017500000000040713214571123021310 0ustar davedave00000000000000from jedi import Script def test_module_attributes(): def_, = Script('__name__').completions() assert def_.name == '__name__' assert def_.line == None assert def_.column == None str_, = def_._goto_definitions() assert str_.name == 'str' jedi-0.11.1/test/test_evaluate/test_pyc.py0000664000175000017500000000322413214571123020417 0ustar davedave00000000000000""" Test completions from *.pyc files: - generate a dummy python module - compile the dummy module to generate a *.pyc - delete the pure python dummy module - try jedi on the generated *.pyc """ import os import shutil import sys import pytest import jedi from ..helpers import cwd_at SRC = """class Foo: pass class Bar: pass """ def generate_pyc(): os.mkdir("dummy_package") with open("dummy_package/__init__.py", 'w'): pass with open("dummy_package/dummy.py", 'w') as f: f.write(SRC) import compileall compileall.compile_file("dummy_package/dummy.py") os.remove("dummy_package/dummy.py") if sys.version_info[0] == 3: # Python3 specific: # To import pyc modules, we must move them out of the __pycache__ # directory and rename them to remove ".cpython-%s%d" # see: http://stackoverflow.com/questions/11648440/python-does-not-detect-pyc-files for f in os.listdir("dummy_package/__pycache__"): dst = f.replace('.cpython-%s%s' % sys.version_info[:2], "") dst = os.path.join("dummy_package", dst) shutil.copy(os.path.join("dummy_package/__pycache__", f), dst) # Python 2.6 does not necessarily come with `compileall.compile_file`. @pytest.mark.skipif("sys.version_info > (2,6)") @cwd_at('test/test_evaluate') def test_pyc(): """ The list of completion must be greater than 2. """ try: generate_pyc() s = jedi.Script("from dummy_package import dummy; dummy.", path='blub.py') assert len(s.completions()) >= 2 finally: shutil.rmtree("dummy_package") if __name__ == "__main__": test_pyc() jedi-0.11.1/test/test_evaluate/test_imports.py0000664000175000017500000002046013214571123021322 0ustar davedave00000000000000""" Tests of various import related things that could not be tested with "Black Box Tests". """ import os import sys import pytest import jedi from jedi._compatibility import find_module_py33, find_module from ..helpers import cwd_at from jedi import Script from jedi._compatibility import is_py26 @pytest.mark.skipif('sys.version_info < (3,3)') def test_find_module_py33(): """Needs to work like the old find_module.""" assert find_module_py33('_io') == (None, '_io', False) def test_find_module_package(): file, path, is_package = find_module('json') assert file is None assert path.endswith('json') assert is_package is True def test_find_module_not_package(): file, path, is_package = find_module('io') assert file is not None assert path.endswith('io.py') assert is_package is False def test_find_module_package_zipped(): if 'zipped_imports/pkg.zip' not in sys.path: sys.path.append(os.path.join(os.path.dirname(__file__), 'zipped_imports/pkg.zip')) file, path, is_package = find_module('pkg') assert file is not None assert path.endswith('pkg.zip') assert is_package is True assert len(jedi.Script('import pkg; pkg.mod', 1, 19).completions()) == 1 @pytest.mark.skipif('sys.version_info < (2,7)') def test_find_module_not_package_zipped(): if 'zipped_imports/not_pkg.zip' not in sys.path: sys.path.append(os.path.join(os.path.dirname(__file__), 'zipped_imports/not_pkg.zip')) file, path, is_package = find_module('not_pkg') assert file is not None assert path.endswith('not_pkg.zip') assert is_package is False assert len( jedi.Script('import not_pkg; not_pkg.val', 1, 27).completions()) == 1 @cwd_at('test/test_evaluate/not_in_sys_path/pkg') def test_import_not_in_sys_path(): """ non-direct imports (not in sys.path) """ a = jedi.Script(path='module.py', line=5).goto_definitions() assert a[0].name == 'int' a = jedi.Script(path='module.py', line=6).goto_definitions() assert a[0].name == 'str' a = jedi.Script(path='module.py', line=7).goto_definitions() assert a[0].name == 'str' @pytest.mark.parametrize("script,name", [ ("from flask.ext import foo; foo.", "Foo"), # flask_foo.py ("from flask.ext import bar; bar.", "Bar"), # flaskext/bar.py ("from flask.ext import baz; baz.", "Baz"), # flask_baz/__init__.py ("from flask.ext import moo; moo.", "Moo"), # flaskext/moo/__init__.py ("from flask.ext.", "foo"), ("from flask.ext.", "bar"), ("from flask.ext.", "baz"), ("from flask.ext.", "moo"), pytest.mark.xfail(("import flask.ext.foo; flask.ext.foo.", "Foo")), pytest.mark.xfail(("import flask.ext.bar; flask.ext.bar.", "Foo")), pytest.mark.xfail(("import flask.ext.baz; flask.ext.baz.", "Foo")), pytest.mark.xfail(("import flask.ext.moo; flask.ext.moo.", "Foo")), ]) def test_flask_ext(script, name): """flask.ext.foo is really imported from flaskext.foo or flask_foo. """ path = os.path.join(os.path.dirname(__file__), 'flask-site-packages') completions = jedi.Script(script, sys_path=[path]).completions() assert name in [c.name for c in completions] @cwd_at('test/test_evaluate/') def test_not_importable_file(): src = 'import not_importable_file as x; x.' assert not jedi.Script(src, path='example.py').completions() def test_import_unique(): src = "import os; os.path" defs = jedi.Script(src, path='example.py').goto_definitions() parent_contexts = [d._name._context for d in defs] assert len(parent_contexts) == len(set(parent_contexts)) def test_cache_works_with_sys_path_param(tmpdir): foo_path = tmpdir.join('foo') bar_path = tmpdir.join('bar') foo_path.join('module.py').write('foo = 123', ensure=True) bar_path.join('module.py').write('bar = 123', ensure=True) foo_completions = jedi.Script('import module; module.', sys_path=[foo_path.strpath]).completions() bar_completions = jedi.Script('import module; module.', sys_path=[bar_path.strpath]).completions() assert 'foo' in [c.name for c in foo_completions] assert 'bar' not in [c.name for c in foo_completions] assert 'bar' in [c.name for c in bar_completions] assert 'foo' not in [c.name for c in bar_completions] def test_import_completion_docstring(): import abc s = jedi.Script('"""test"""\nimport ab') completions = s.completions() assert len(completions) == 1 assert completions[0].docstring(fast=False) == abc.__doc__ # However for performance reasons not all modules are loaded and the # docstring is empty in this case. assert completions[0].docstring() == '' def test_goto_definition_on_import(): assert Script("import sys_blabla", 1, 8).goto_definitions() == [] assert len(Script("import sys", 1, 8).goto_definitions()) == 1 @cwd_at('jedi') def test_complete_on_empty_import(): assert Script("from datetime import").completions()[0].name == 'import' # should just list the files in the directory assert 10 < len(Script("from .", path='whatever.py').completions()) < 30 # Global import assert len(Script("from . import", 1, 5, 'whatever.py').completions()) > 30 # relative import assert 10 < len(Script("from . import", 1, 6, 'whatever.py').completions()) < 30 # Global import assert len(Script("from . import classes", 1, 5, 'whatever.py').completions()) > 30 # relative import assert 10 < len(Script("from . import classes", 1, 6, 'whatever.py').completions()) < 30 wanted = set(['ImportError', 'import', 'ImportWarning']) assert set([c.name for c in Script("import").completions()]) == wanted if not is_py26: # python 2.6 doesn't always come with a library `import*`. assert len(Script("import import", path='').completions()) > 0 # 111 assert Script("from datetime import").completions()[0].name == 'import' assert Script("from datetime import ").completions() def test_imports_on_global_namespace_without_path(): """If the path is None, there shouldn't be any import problem""" completions = Script("import operator").completions() assert [c.name for c in completions] == ['operator'] completions = Script("import operator", path='example.py').completions() assert [c.name for c in completions] == ['operator'] # the first one has a path the second doesn't completions = Script("import keyword", path='example.py').completions() assert [c.name for c in completions] == ['keyword'] completions = Script("import keyword").completions() assert [c.name for c in completions] == ['keyword'] def test_named_import(): """named import - jedi-vim issue #8""" s = "import time as dt" assert len(Script(s, 1, 15, '/').goto_definitions()) == 1 assert len(Script(s, 1, 10, '/').goto_definitions()) == 1 @pytest.mark.skipif('True', reason='The nested import stuff is still very messy.') def test_goto_following_on_imports(): s = "import multiprocessing.dummy; multiprocessing.dummy" g = Script(s).goto_assignments() assert len(g) == 1 assert (g[0].line, g[0].column) != (0, 0) def test_os_after_from(): def check(source, result, column=None): completions = Script(source, column=column).completions() assert [c.name for c in completions] == result check('\nfrom os. ', ['path']) check('\nfrom os ', ['import']) check('from os ', ['import']) check('\nfrom os import whatever', ['import'], len('from os im')) check('from os\\\n', ['import']) check('from os \\\n', ['import']) def test_os_issues(): def import_names(*args, **kwargs): return [d.name for d in jedi.Script(*args, **kwargs).completions()] # Github issue #759 s = 'import os, s' assert 'sys' in import_names(s) assert 'path' not in import_names(s, column=len(s) - 1) assert 'os' in import_names(s, column=len(s) - 3) # Some more checks s = 'from os import path, e' assert 'environ' in import_names(s) assert 'json' not in import_names(s, column=len(s) - 1) assert 'environ' in import_names(s, column=len(s) - 1) assert 'path' in import_names(s, column=len(s) - 3) def test_path_issues(): """ See pull request #684 for details. """ source = '''from datetime import ''' assert jedi.Script(source).completions() jedi-0.11.1/test/test_evaluate/buildout_project/0000775000175000017500000000000013214571377021602 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/bin/0000775000175000017500000000000013214571377022352 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/bin/binary_file0000664000175000017500000000000513214571123024540 0ustar davedave00000000000000PNG jedi-0.11.1/test/test_evaluate/buildout_project/bin/app0000664000175000017500000000034313214571123023042 0ustar davedave00000000000000#!/usr/bin/python import sys sys.path[0:0] = [ '/usr/lib/python3.4/site-packages', '/tmp/.buildout/eggs/important_package.egg' ] import important_package if __name__ == '__main__': sys.exit(important_package.main()) jedi-0.11.1/test/test_evaluate/buildout_project/bin/empty_file0000664000175000017500000000000013214571123024405 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/src/0000775000175000017500000000000013214571377022371 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/src/proj_name/0000775000175000017500000000000013214571377024343 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/src/proj_name/module_name.py0000664000175000017500000000000013214571123027155 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/buildout_project/buildout.cfg0000664000175000017500000000000013214571123024065 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/test_annotations.py0000664000175000017500000000266313214571123022167 0ustar davedave00000000000000from textwrap import dedent import jedi import pytest @pytest.mark.skipif('sys.version_info[0] < 3') def test_simple_annotations(): """ Annotations only exist in Python 3. If annotations adhere to PEP-0484, we use them (they override inference), else they are parsed but ignored """ source = dedent("""\ def annot(a:3): return a annot('')""") assert [d.name for d in jedi.Script(source, ).goto_definitions()] == ['str'] source = dedent("""\ def annot_ret(a:3) -> 3: return a annot_ret('')""") assert [d.name for d in jedi.Script(source, ).goto_definitions()] == ['str'] source = dedent("""\ def annot(a:int): return a annot('')""") assert [d.name for d in jedi.Script(source, ).goto_definitions()] == ['int'] @pytest.mark.skipif('sys.version_info[0] < 3') @pytest.mark.parametrize('reference', [ 'assert 1', '1', 'def x(): pass', '1, 2', r'1\n' ]) def test_illegal_forward_references(reference): source = 'def foo(bar: "%s"): bar' % reference assert not jedi.Script(source).goto_definitions() @pytest.mark.skipif('sys.version_info[0] < 3') def test_lambda_forward_references(): source = 'def foo(bar: "lambda: 3"): bar' # For now just receiving the 3 is ok. I'm doubting that this is what we # want. We also execute functions. Should we only execute classes? assert jedi.Script(source).goto_definitions() jedi-0.11.1/test/test_evaluate/nested_namespaces/0000775000175000017500000000000013214571377021706 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/nested_namespaces/__init__.py0000664000175000017500000000000013214571123023772 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/nested_namespaces/namespace/0000775000175000017500000000000013214571377023642 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/nested_namespaces/namespace/__init__.py0000664000175000017500000000013613214571123025740 0ustar davedave00000000000000try: __import__('pkg_resources').declare_namespace(__name__) except ImportError: pass jedi-0.11.1/test/test_evaluate/nested_namespaces/namespace/pkg/0000775000175000017500000000000013214571377024423 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/nested_namespaces/namespace/pkg/__init__.py0000664000175000017500000000001213214571123026512 0ustar davedave00000000000000CONST = 1 jedi-0.11.1/test/test_evaluate/not_in_sys_path/0000775000175000017500000000000013214571377021425 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/not_in_sys_path/not_in_sys_path_package/0000775000175000017500000000000013214571377026300 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/not_in_sys_path/not_in_sys_path_package/module.py0000664000175000017500000000003113214571123030116 0ustar davedave00000000000000value = 'package.module' jedi-0.11.1/test/test_evaluate/not_in_sys_path/not_in_sys_path_package/__init__.py0000664000175000017500000000002213214571123030370 0ustar davedave00000000000000value = 'package' jedi-0.11.1/test/test_evaluate/not_in_sys_path/__init__.py0000664000175000017500000000000013214571123023511 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/not_in_sys_path/not_in_sys_path.py0000664000175000017500000000001213214571123025155 0ustar davedave00000000000000value = 3 jedi-0.11.1/test/test_evaluate/not_in_sys_path/pkg/0000775000175000017500000000000013214571377022206 5ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/not_in_sys_path/pkg/module.py0000664000175000017500000000033513214571123024033 0ustar davedave00000000000000from not_in_sys_path import not_in_sys_path from not_in_sys_path import not_in_sys_path_package from not_in_sys_path.not_in_sys_path_package import module not_in_sys_path.value not_in_sys_path_package.value module.value jedi-0.11.1/test/test_evaluate/not_in_sys_path/pkg/__init__.py0000664000175000017500000000000013214571123024272 0ustar davedave00000000000000jedi-0.11.1/test/test_evaluate/test_extension.py0000664000175000017500000000322013214571123021634 0ustar davedave00000000000000""" Test compiled module """ import os import jedi from ..helpers import cwd_at import pytest def test_completions(): s = jedi.Script('import _ctypes; _ctypes.') assert len(s.completions()) >= 15 def test_call_signatures_extension(): if os.name == 'nt': func = 'LoadLibrary' params = 1 else: func = 'dlopen' params = 2 s = jedi.Script('import _ctypes; _ctypes.%s(' % (func,)) sigs = s.call_signatures() assert len(sigs) == 1 assert len(sigs[0].params) == params def test_call_signatures_stdlib(): s = jedi.Script('import math; math.cos(') sigs = s.call_signatures() assert len(sigs) == 1 assert len(sigs[0].params) == 1 # Check only on linux 64 bit platform and Python3.4. @pytest.mark.skipif('sys.platform != "linux" or sys.maxsize <= 2**32 or sys.version_info[:2] != (3, 4)') @cwd_at('test/test_evaluate') def test_init_extension_module(): """ ``__init__`` extension modules are also packages and Jedi should understand that. Originally coming from #472. This test was built by the module.c and setup.py combination you can find in the init_extension_module folder. You can easily build the `__init__.cpython-34m.so` by compiling it (create a virtualenv and run `setup.py install`. This is also why this test only runs on certain systems (and Python 3.4). """ s = jedi.Script('import init_extension_module as i\ni.', path='not_existing.py') assert 'foo' in [c.name for c in s.completions()] s = jedi.Script('from init_extension_module import foo\nfoo', path='not_existing.py') assert ['foo'] == [c.name for c in s.completions()] jedi-0.11.1/test/test_integration.py0000664000175000017500000000313213214571123017300 0ustar davedave00000000000000import os import pytest from . import helpers def assert_case_equal(case, actual, desired): """ Assert ``actual == desired`` with formatted message. This is not needed for typical py.test use case, but as we need ``--assert=plain`` (see ../pytest.ini) to workaround some issue due to py.test magic, let's format the message by hand. """ assert actual == desired, """ Test %r failed. actual = %s desired = %s """ % (case, actual, desired) def assert_static_analysis(case, actual, desired): """A nicer formatting for static analysis tests.""" a = set(actual) d = set(desired) assert actual == desired, """ Test %r failed. not raised = %s unspecified = %s """ % (case, sorted(d - a), sorted(a - d)) def test_completion(case, monkeypatch): if case.skip is not None: pytest.skip(case.skip) repo_root = helpers.root_dir monkeypatch.chdir(os.path.join(repo_root, 'jedi')) case.run(assert_case_equal) def test_static_analysis(static_analysis_case): if static_analysis_case.skip is not None: pytest.skip(static_analysis_case.skip) else: static_analysis_case.run(assert_static_analysis) def test_refactor(refactor_case): """ Run refactoring test case. :type refactor_case: :class:`.refactor.RefactoringCase` """ if 0: # TODO Refactoring is not relevant at the moment, it will be changed # significantly in the future, but maybe we can use these tests: refactor_case.run() assert_case_equal(refactor_case, refactor_case.result, refactor_case.desired) jedi-0.11.1/test/helpers.py0000664000175000017500000000213113214571123015356 0ustar davedave00000000000000""" A helper module for testing, improves compatibility for testing (as ``jedi._compatibility``) as well as introducing helper functions. """ import sys from contextlib import contextmanager if sys.hexversion < 0x02070000: import unittest2 as unittest else: import unittest TestCase = unittest.TestCase import os from os.path import abspath, dirname import functools test_dir = dirname(abspath(__file__)) root_dir = dirname(test_dir) sample_int = 1 # This is used in completion/imports.py def cwd_at(path): """ Decorator to run function at `path`. :type path: str :arg path: relative path from repository root (e.g., ``'jedi'``). """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kwds): with set_cwd(path): return func(*args, **kwds) return wrapper return decorator @contextmanager def set_cwd(path, absolute_path=False): repo_root = os.path.dirname(test_dir) oldcwd = os.getcwd() os.chdir(os.path.join(repo_root, path)) try: yield finally: os.chdir(oldcwd) jedi-0.11.1/test/speed/0000775000175000017500000000000013214571377014460 5ustar davedave00000000000000jedi-0.11.1/test/speed/precedence.py0000664000175000017500000000120713214571123017114 0ustar davedave00000000000000def marks(code): if '.' in code: another(code[:code.index(',') - 1] + '!') else: another(code + '.') def another(code2): call(numbers(code2 + 'haha')) marks('start1 ') marks('start2 ') def alphabet(code4): if 1: if 2: return code4 + 'a' else: return code4 + 'b' else: if 2: return code4 + 'c' else: return code4 + 'd' def numbers(code5): if 2: return alphabet(code5 + '1') else: return alphabet(code5 + '2') def call(code3): code3 = numbers(numbers('end')) + numbers(code3) code3.partition jedi-0.11.1/test/test_regression.py0000664000175000017500000001333713214571123017145 0ustar davedave00000000000000""" Unit tests to avoid errors of the past. These are also all tests that didn't found a good place in any other testing module. """ import os import sys import textwrap import pytest from jedi import Script from jedi import api from jedi.evaluate import imports from .helpers import TestCase, cwd_at #jedi.set_debug_function() class TestRegression(TestCase): def test_goto_definition_cursor(self): s = ("class A():\n" " def _something(self):\n" " return\n" " def different_line(self,\n" " b):\n" " return\n" "A._something\n" "A.different_line" ) in_name = 2, 9 under_score = 2, 8 cls = 2, 7 should1 = 7, 10 diff_line = 4, 10 should2 = 8, 10 def get_def(pos): return [d.description for d in Script(s, *pos).goto_definitions()] in_name = get_def(in_name) under_score = get_def(under_score) should1 = get_def(should1) should2 = get_def(should2) diff_line = get_def(diff_line) assert should1 == in_name assert should1 == under_score assert should2 == diff_line assert get_def(cls) == [] @pytest.mark.skipif('True', reason='Skip for now, test case is not really supported.') @cwd_at('jedi') def test_add_dynamic_mods(self): fname = '__main__.py' api.settings.additional_dynamic_modules = [fname] # Fictional module that defines a function. src1 = "def r(a): return a" # Other fictional modules in another place in the fs. src2 = 'from .. import setup; setup.r(1)' script = Script(src1, path='../setup.py') imports.load_module(script._evaluator, os.path.abspath(fname), src2) result = script.goto_definitions() assert len(result) == 1 assert result[0].description == 'class int' def test_os_nowait(self): """ github issue #45 """ s = Script("import os; os.P_").completions() assert 'P_NOWAIT' in [i.name for i in s] def test_points_in_completion(self): """At some point, points were inserted into the completions, this caused problems, sometimes. """ c = Script("if IndentationErr").completions() assert c[0].name == 'IndentationError' self.assertEqual(c[0].complete, 'or') def test_no_statement_parent(self): source = textwrap.dedent(""" def f(): pass class C: pass variable = f if random.choice([0, 1]) else C""") defs = Script(source, column=3).goto_definitions() defs = sorted(defs, key=lambda d: d.line) self.assertEqual([d.description for d in defs], ['def f', 'class C']) def check_definition_by_marker(self, source, after_cursor, names): r""" Find definitions specified by `after_cursor` and check what found For example, for the following configuration, you can pass ``after_cursor = 'y)'``.:: function( x, y) \ `- You want cursor to be here """ source = textwrap.dedent(source) for (i, line) in enumerate(source.splitlines()): if after_cursor in line: break column = len(line) - len(after_cursor) defs = Script(source, i + 1, column).goto_definitions() assert [d.name for d in defs] == names def test_backslash_continuation(self): """ Test that ModuleWithCursor.get_path_until_cursor handles continuation """ self.check_definition_by_marker(r""" x = 0 a = \ [1, 2, 3, 4, 5, 6, 7, 8, 9, x] # <-- here """, '] # <-- here', ['int']) # completion in whitespace s = 'asdfxyxxxxxxxx sds\\\n hello' assert Script(s, 2, 4).goto_assignments() == [] def test_backslash_continuation_and_bracket(self): self.check_definition_by_marker(r""" x = 0 a = \ [1, 2, 3, 4, 5, 6, 7, 8, 9, (x)] # <-- here """, '(x)] # <-- here', ['int']) def test_generator(self): # Did have some problems with the usage of generator completions this # way. s = "def abc():\n" \ " yield 1\n" \ "abc()." assert Script(s).completions() def test_fake_subnodes(self): """ Test the number of subnodes of a fake object. There was a bug where the number of child nodes would grow on every call to :func:``jedi.evaluate.compiled.fake.get_faked``. See Github PR#649 and isseu #591. """ def get_str_completion(values): for c in values: if c.name == 'str': return c limit = None for i in range(2): completions = Script('').completions() c = get_str_completion(completions) str_context, = c._name.infer() n = len(str_context.tree_node.children[-1].children) if i == 0: limit = n else: assert n == limit def test_loading_unicode_files_with_bad_global_charset(monkeypatch, tmpdir): dirname = str(tmpdir.mkdir('jedi-test')) filename1 = os.path.join(dirname, 'test1.py') filename2 = os.path.join(dirname, 'test2.py') if sys.version_info < (3, 0): data = "# coding: latin-1\nfoo = 'm\xf6p'\n" else: data = "# coding: latin-1\nfoo = 'm\xf6p'\n".encode("latin-1") with open(filename1, "wb") as f: f.write(data) s = Script("from test1 import foo\nfoo.", line=2, column=4, path=filename2) s.completions() jedi-0.11.1/test/blabla_test_documentation.py0000664000175000017500000000150713214571123021127 0ustar davedave00000000000000# todo probably remove test_integration_keyword def test_keyword_doc(): r = list(Script("or", 1, 1).goto_definitions()) assert len(r) == 1 assert len(r[0].doc) > 100 r = list(Script("asfdasfd", 1, 1).goto_definitions()) assert len(r) == 0 k = Script("fro").completions()[0] imp_start = '\nThe ``import' assert k.raw_doc.startswith(imp_start) assert k.doc.startswith(imp_start) def test_blablabla(): defs = Script("import").goto_definitions() assert len(defs) == 1 and [1 for d in defs if d.doc] # unrelated to #44 def test_operator_doc(self): r = list(Script("a == b", 1, 3).goto_definitions()) assert len(r) == 1 assert len(r[0].doc) > 100 def test_lambda(): defs = Script('lambda x: x', column=0).goto_definitions() assert [d.type for d in defs] == ['keyword'] jedi-0.11.1/test/conftest.py0000664000175000017500000001054113214571123015545 0ustar davedave00000000000000import os import re import pytest from . import helpers from . import run from . import refactor import jedi from jedi.evaluate.analysis import Warning def pytest_addoption(parser): parser.addoption( "--integration-case-dir", default=os.path.join(helpers.test_dir, 'completion'), help="Directory in which integration test case files locate.") parser.addoption( "--refactor-case-dir", default=os.path.join(helpers.test_dir, 'refactor'), help="Directory in which refactoring test case files locate.") parser.addoption( "--test-files", "-T", default=[], action='append', help=( "Specify test files using FILE_NAME[:LINE[,LINE[,...]]]. " "For example: -T generators.py:10,13,19. " "Note that you can use -m to specify the test case by id.")) parser.addoption( "--thirdparty", action='store_true', help="Include integration tests that requires third party modules.") def parse_test_files_option(opt): """ Parse option passed to --test-files into a key-value pair. >>> parse_test_files_option('generators.py:10,13,19') ('generators.py', [10, 13, 19]) """ opt = str(opt) if ':' in opt: (f_name, rest) = opt.split(':', 1) return (f_name, list(map(int, rest.split(',')))) else: return (opt, []) def pytest_generate_tests(metafunc): """ :type metafunc: _pytest.python.Metafunc """ test_files = dict(map(parse_test_files_option, metafunc.config.option.test_files)) if 'case' in metafunc.fixturenames: base_dir = metafunc.config.option.integration_case_dir thirdparty = metafunc.config.option.thirdparty cases = list(run.collect_dir_tests(base_dir, test_files)) if thirdparty: cases.extend(run.collect_dir_tests( os.path.join(base_dir, 'thirdparty'), test_files, True)) ids = ["%s:%s" % (c.module_name, c.line_nr_test) for c in cases] metafunc.parametrize('case', cases, ids=ids) if 'refactor_case' in metafunc.fixturenames: base_dir = metafunc.config.option.refactor_case_dir metafunc.parametrize( 'refactor_case', refactor.collect_dir_tests(base_dir, test_files)) if 'static_analysis_case' in metafunc.fixturenames: base_dir = os.path.join(os.path.dirname(__file__), 'static_analysis') cases = list(collect_static_analysis_tests(base_dir, test_files)) metafunc.parametrize( 'static_analysis_case', cases, ids=[c.name for c in cases] ) def collect_static_analysis_tests(base_dir, test_files): for f_name in os.listdir(base_dir): files_to_execute = [a for a in test_files.items() if a[0] in f_name] if f_name.endswith(".py") and (not test_files or files_to_execute): path = os.path.join(base_dir, f_name) yield StaticAnalysisCase(path) class StaticAnalysisCase(object): """ Static Analysis cases lie in the static_analysis folder. The tests also start with `#!`, like the goto_definition tests. """ def __init__(self, path): self._path = path self.name = os.path.basename(path) with open(path) as f: self._source = f.read() self.skip = False for line in self._source.splitlines(): self.skip = self.skip or run.skip_python_version(line) def collect_comparison(self): cases = [] for line_nr, line in enumerate(self._source.splitlines(), 1): match = re.match(r'(\s*)#! (\d+ )?(.*)$', line) if match is not None: column = int(match.group(2) or 0) + len(match.group(1)) cases.append((line_nr + 1, column, match.group(3))) return cases def run(self, compare_cb): analysis = jedi.Script(self._source, path=self._path)._analysis() typ_str = lambda inst: 'warning ' if isinstance(inst, Warning) else '' analysis = [(r.line, r.column, typ_str(r) + r.name) for r in analysis] compare_cb(self, analysis, self.collect_comparison()) def __repr__(self): return "<%s: %s>" % (self.__class__.__name__, os.path.basename(self._path)) @pytest.fixture() def cwd_tmpdir(monkeypatch, tmpdir): with helpers.set_cwd(tmpdir.dirpath): yield tmpdir jedi-0.11.1/sith.py0000775000175000017500000001614613214571123013722 0ustar davedave00000000000000#!/usr/bin/env python """ Sith attacks (and helps debugging) Jedi. Randomly search Python files and run Jedi on it. Exception and used arguments are recorded to ``./record.json`` (specified by --record):: ./sith.py random /path/to/sourcecode Redo recorded exception:: ./sith.py redo Show recorded exception:: ./sith.py show Run a specific operation ./sith.py run Where operation is one of completions, goto_assignments, goto_definitions, usages, or call_signatures. Note: Line numbers start at 1; columns start at 0 (this is consistent with many text editors, including Emacs). Usage: sith.py [--pdb|--ipdb|--pudb] [-d] [-n=] [-f] [--record=] random [-s] [] sith.py [--pdb|--ipdb|--pudb] [-d] [-f] [--record=] redo sith.py [--pdb|--ipdb|--pudb] [-d] [-f] run sith.py show [--record=] sith.py -h | --help Options: -h --help Show this screen. --record= Exceptions are recorded in here [default: record.json]. -f, --fs-cache By default, file system cache is off for reproducibility. -n, --maxtries= Maximum of random tries [default: 100] -d, --debug Jedi print debugging when an error is raised. -s Shows the path/line numbers of every completion before it starts. --pdb Launch pdb when error is raised. --ipdb Launch ipdb when error is raised. --pudb Launch pudb when error is raised. """ from __future__ import print_function, division, unicode_literals from docopt import docopt import json import os import random import sys import traceback import jedi class SourceFinder(object): _files = None @staticmethod def fetch(file_path): if not os.path.isdir(file_path): yield file_path return for root, dirnames, filenames in os.walk(file_path): for name in filenames: if name.endswith('.py'): yield os.path.join(root, name) @classmethod def files(cls, file_path): if cls._files is None: cls._files = list(cls.fetch(file_path)) return cls._files class TestCase(object): def __init__(self, operation, path, line, column, traceback=None): if operation not in self.operations: raise ValueError("%s is not a valid operation" % operation) # Set other attributes self.operation = operation self.path = path self.line = line self.column = column self.traceback = traceback @classmethod def from_cache(cls, record): with open(record) as f: args = json.load(f) return cls(*args) operations = [ 'completions', 'goto_assignments', 'goto_definitions', 'usages', 'call_signatures'] @classmethod def generate(cls, file_path): operation = random.choice(cls.operations) path = random.choice(SourceFinder.files(file_path)) with open(path) as f: source = f.read() lines = source.splitlines() if not lines: lines = [''] line = random.randint(1, len(lines)) column = random.randint(0, len(lines[line - 1])) return cls(operation, path, line, column) def run(self, debugger, record=None, print_result=False): try: with open(self.path) as f: self.script = jedi.Script(f.read(), self.line, self.column, self.path) kwargs = {} if self.operation == 'goto_assignments': kwargs['follow_imports'] = random.choice([False, True]) self.objects = getattr(self.script, self.operation)(**kwargs) if print_result: print("{path}: Line {line} column {column}".format(**self.__dict__)) self.show_location(self.line, self.column) self.show_operation() except jedi.NotFoundError: pass except Exception: self.traceback = traceback.format_exc() if record is not None: call_args = (self.operation, self.path, self.line, self.column, self.traceback) with open(record, 'w') as f: json.dump(call_args, f) self.show_errors() if debugger: einfo = sys.exc_info() pdb = __import__(debugger) if debugger == 'pudb': pdb.post_mortem(einfo[2], einfo[0], einfo[1]) else: pdb.post_mortem(einfo[2]) exit(1) def show_location(self, lineno, column, show=3): # Three lines ought to be enough lower = lineno - show if lineno - show > 0 else 0 prefix = ' |' for i, line in enumerate(self.script._source.split('\n')[lower:lineno]): print(prefix, lower + i + 1, line) print(prefix, ' ', ' ' * (column + len(str(lineno))), '^') def show_operation(self): print("%s:\n" % self.operation.capitalize()) if self.operation == 'completions': self.show_completions() else: self.show_definitions() def show_completions(self): for completion in self.objects: print(completion.name) def show_definitions(self): for completion in self.objects: print(completion.desc_with_module) if completion.module_path is None: continue if os.path.abspath(completion.module_path) == os.path.abspath(self.path): self.show_location(completion.line, completion.column) def show_errors(self): sys.stderr.write(self.traceback) print(("Error with running Script(...).{operation}() with\n" "\tpath: {path}\n" "\tline: {line}\n" "\tcolumn: {column}").format(**self.__dict__)) def main(arguments): debugger = 'pdb' if arguments['--pdb'] else \ 'ipdb' if arguments['--ipdb'] else \ 'pudb' if arguments['--pudb'] else None record = arguments['--record'] jedi.settings.use_filesystem_cache = arguments['--fs-cache'] if arguments['--debug']: jedi.set_debug_function() if arguments['redo'] or arguments['show']: t = TestCase.from_cache(record) if arguments['show']: t.show_errors() else: t.run(debugger) elif arguments['run']: TestCase( arguments[''], arguments[''], int(arguments['']), int(arguments['']) ).run(debugger, print_result=True) else: for _ in range(int(arguments['--maxtries'])): t = TestCase.generate(arguments[''] or '.') if arguments['-s']: print('%s %s %s %s ' % (t.operation, t.path, t.line, t.column)) sys.stdout.flush() else: print('.', end='') t.run(debugger, record) sys.stdout.flush() print() if __name__ == '__main__': arguments = docopt(__doc__) main(arguments) jedi-0.11.1/tox.ini0000664000175000017500000000220613214571123013701 0ustar davedave00000000000000[tox] envlist = py27, py33, py34, py35, py36 [testenv] deps = pytest>=2.3.5, < 3.3 pytest-cache # docopt for sith doctests docopt # coloroma for colored debug output colorama -rrequirements.txt setenv = # https://github.com/tomchristie/django-rest-framework/issues/1957 # tox corrupts __pycache__, solution from here: PYTHONDONTWRITEBYTECODE=1 commands = py.test {posargs:jedi test} [testenv:py26] deps = unittest2 {[testenv]deps} [testenv:py27] deps = # for testing the typing module typing # numpydoc for typing scipy stack numpydoc {[testenv]deps} [testenv:py33] deps = typing {[testenv]deps} [testenv:py34] deps = typing numpydoc {[testenv]deps} [testenv:py35] deps = numpydoc {[testenv]deps} [testenv:py36] deps = numpydoc {[testenv]deps} [testenv:cov] deps = coverage numpydoc {[testenv]deps} commands = coverage run --source jedi -m py.test coverage report [testenv:sith] commands = {envpython} -c "import os; a='{envtmpdir}'; os.path.exists(a) or os.makedirs(a)" {envpython} sith.py --record {envtmpdir}/record.json random {posargs:jedi} jedi-0.11.1/jedi/0000775000175000017500000000000013214571377013314 5ustar davedave00000000000000jedi-0.11.1/jedi/refactoring.py0000664000175000017500000001530313214571123016160 0ustar davedave00000000000000""" Introduce some basic refactoring functions to |jedi|. This module is still in a very early development stage and needs much testing and improvement. .. warning:: I won't do too much here, but if anyone wants to step in, please do. Refactoring is none of my priorities It uses the |jedi| `API `_ and supports currently the following functions (sometimes bug-prone): - rename - extract variable - inline variable """ import difflib from parso import python_bytes_to_unicode, split_lines from jedi.evaluate import helpers class Refactoring(object): def __init__(self, change_dct): """ :param change_dct: dict(old_path=(new_path, old_lines, new_lines)) """ self.change_dct = change_dct def old_files(self): dct = {} for old_path, (new_path, old_l, new_l) in self.change_dct.items(): dct[old_path] = '\n'.join(old_l) return dct def new_files(self): dct = {} for old_path, (new_path, old_l, new_l) in self.change_dct.items(): dct[new_path] = '\n'.join(new_l) return dct def diff(self): texts = [] for old_path, (new_path, old_l, new_l) in self.change_dct.items(): if old_path: udiff = difflib.unified_diff(old_l, new_l) else: udiff = difflib.unified_diff(old_l, new_l, old_path, new_path) texts.append('\n'.join(udiff)) return '\n'.join(texts) def rename(script, new_name): """ The `args` / `kwargs` params are the same as in `api.Script`. :param operation: The refactoring operation to execute. :type operation: str :type source: str :return: list of changed lines/changed files """ return Refactoring(_rename(script.usages(), new_name)) def _rename(names, replace_str): """ For both rename and inline. """ order = sorted(names, key=lambda x: (x.module_path, x.line, x.column), reverse=True) def process(path, old_lines, new_lines): if new_lines is not None: # goto next file, save last dct[path] = path, old_lines, new_lines dct = {} current_path = object() new_lines = old_lines = None for name in order: if name.in_builtin_module(): continue if current_path != name.module_path: current_path = name.module_path process(current_path, old_lines, new_lines) if current_path is not None: # None means take the source that is a normal param. with open(current_path) as f: source = f.read() new_lines = split_lines(python_bytes_to_unicode(source)) old_lines = new_lines[:] nr, indent = name.line, name.column line = new_lines[nr - 1] new_lines[nr - 1] = line[:indent] + replace_str + \ line[indent + len(name.name):] process(current_path, old_lines, new_lines) return dct def extract(script, new_name): """ The `args` / `kwargs` params are the same as in `api.Script`. :param operation: The refactoring operation to execute. :type operation: str :type source: str :return: list of changed lines/changed files """ new_lines = split_lines(python_bytes_to_unicode(script.source)) old_lines = new_lines[:] user_stmt = script._parser.user_stmt() # TODO care for multiline extracts dct = {} if user_stmt: pos = script._pos line_index = pos[0] - 1 arr, index = helpers.array_for_pos(user_stmt, pos) if arr is not None: start_pos = arr[index].start_pos end_pos = arr[index].end_pos # take full line if the start line is different from end line e = end_pos[1] if end_pos[0] == start_pos[0] else None start_line = new_lines[start_pos[0] - 1] text = start_line[start_pos[1]:e] for l in range(start_pos[0], end_pos[0] - 1): text += '\n' + l if e is None: end_line = new_lines[end_pos[0] - 1] text += '\n' + end_line[:end_pos[1]] # remove code from new lines t = text.lstrip() del_start = start_pos[1] + len(text) - len(t) text = t.rstrip() del_end = len(t) - len(text) if e is None: new_lines[end_pos[0] - 1] = end_line[end_pos[1] - del_end:] e = len(start_line) else: e = e - del_end start_line = start_line[:del_start] + new_name + start_line[e:] new_lines[start_pos[0] - 1] = start_line new_lines[start_pos[0]:end_pos[0] - 1] = [] # add parentheses in multiline case open_brackets = ['(', '[', '{'] close_brackets = [')', ']', '}'] if '\n' in text and not (text[0] in open_brackets and text[-1] == close_brackets[open_brackets.index(text[0])]): text = '(%s)' % text # add new line before statement indent = user_stmt.start_pos[1] new = "%s%s = %s" % (' ' * indent, new_name, text) new_lines.insert(line_index, new) dct[script.path] = script.path, old_lines, new_lines return Refactoring(dct) def inline(script): """ :type script: api.Script """ new_lines = split_lines(python_bytes_to_unicode(script.source)) dct = {} definitions = script.goto_assignments() assert len(definitions) == 1 stmt = definitions[0]._definition usages = script.usages() inlines = [r for r in usages if not stmt.start_pos <= (r.line, r.column) <= stmt.end_pos] inlines = sorted(inlines, key=lambda x: (x.module_path, x.line, x.column), reverse=True) expression_list = stmt.expression_list() # don't allow multiline refactorings for now. assert stmt.start_pos[0] == stmt.end_pos[0] index = stmt.start_pos[0] - 1 line = new_lines[index] replace_str = line[expression_list[0].start_pos[1]:stmt.end_pos[1] + 1] replace_str = replace_str.strip() # tuples need parentheses if expression_list and isinstance(expression_list[0], pr.Array): arr = expression_list[0] if replace_str[0] not in ['(', '[', '{'] and len(arr) > 1: replace_str = '(%s)' % replace_str # if it's the only assignment, remove the statement if len(stmt.get_defined_names()) == 1: line = line[:stmt.start_pos[1]] + line[stmt.end_pos[1]:] dct = _rename(inlines, replace_str) # remove the empty line new_lines = dct[script.path][2] if line.strip(): new_lines[index] = line else: new_lines.pop(index) return Refactoring(dct) jedi-0.11.1/jedi/utils.py0000664000175000017500000001103313214571123015011 0ustar davedave00000000000000""" Utilities for end-users. """ from __future__ import absolute_import import __main__ from collections import namedtuple import logging import traceback import re import os import sys from parso import split_lines from jedi import Interpreter from jedi.api.helpers import get_on_completion_name READLINE_DEBUG = False def setup_readline(namespace_module=__main__): """ Install Jedi completer to :mod:`readline`. This function setups :mod:`readline` to use Jedi in Python interactive shell. If you want to use a custom ``PYTHONSTARTUP`` file (typically ``$HOME/.pythonrc.py``), you can add this piece of code:: try: from jedi.utils import setup_readline setup_readline() except ImportError: # Fallback to the stdlib readline completer if it is installed. # Taken from http://docs.python.org/2/library/rlcompleter.html print("Jedi is not installed, falling back to readline") try: import readline import rlcompleter readline.parse_and_bind("tab: complete") except ImportError: print("Readline is not installed either. No tab completion is enabled.") This will fallback to the readline completer if Jedi is not installed. The readline completer will only complete names in the global namespace, so for example:: ran will complete to ``range`` with both Jedi and readline, but:: range(10).cou will show complete to ``range(10).count`` only with Jedi. You'll also need to add ``export PYTHONSTARTUP=$HOME/.pythonrc.py`` to your shell profile (usually ``.bash_profile`` or ``.profile`` if you use bash). """ if READLINE_DEBUG: logging.basicConfig( filename='/tmp/jedi.log', filemode='a', level=logging.DEBUG ) class JediRL(object): def complete(self, text, state): """ This complete stuff is pretty weird, a generator would make a lot more sense, but probably due to backwards compatibility this is still the way how it works. The only important part is stuff in the ``state == 0`` flow, everything else has been copied from the ``rlcompleter`` std. library module. """ if state == 0: sys.path.insert(0, os.getcwd()) # Calling python doesn't have a path, so add to sys.path. try: logging.debug("Start REPL completion: " + repr(text)) interpreter = Interpreter(text, [namespace_module.__dict__]) lines = split_lines(text) position = (len(lines), len(lines[-1])) name = get_on_completion_name( interpreter._get_module_node(), lines, position ) before = text[:len(text) - len(name)] completions = interpreter.completions() except: logging.error("REPL Completion error:\n" + traceback.format_exc()) raise finally: sys.path.pop(0) self.matches = [before + c.name_with_symbols for c in completions] try: return self.matches[state] except IndexError: return None try: import readline except ImportError: print("Jedi: Module readline not available.") else: readline.set_completer(JediRL().complete) readline.parse_and_bind("tab: complete") # jedi itself does the case matching readline.parse_and_bind("set completion-ignore-case on") # because it's easier to hit the tab just once readline.parse_and_bind("set show-all-if-unmodified") readline.parse_and_bind("set show-all-if-ambiguous on") # don't repeat all the things written in the readline all the time readline.parse_and_bind("set completion-prefix-display-length 2") # No delimiters, Jedi handles that. readline.set_completer_delims('') def version_info(): """ Returns a namedtuple of Jedi's version, similar to Python's ``sys.version_info``. """ Version = namedtuple('Version', 'major, minor, micro') from jedi import __version__ tupl = re.findall('[a-z]+|\d+', __version__) return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)]) jedi-0.11.1/jedi/common/0000775000175000017500000000000013214571377014604 5ustar davedave00000000000000jedi-0.11.1/jedi/common/__init__.py0000664000175000017500000000007413214571123016703 0ustar davedave00000000000000from jedi.common.context import BaseContextSet, BaseContext jedi-0.11.1/jedi/common/context.py0000664000175000017500000000340613214571123016632 0ustar davedave00000000000000class BaseContext(object): def __init__(self, evaluator, parent_context=None): self.evaluator = evaluator self.parent_context = parent_context def get_root_context(self): context = self while True: if context.parent_context is None: return context context = context.parent_context class BaseContextSet(object): def __init__(self, *args): self._set = set(args) @classmethod def from_iterable(cls, iterable): return cls.from_set(set(iterable)) @classmethod def from_set(cls, set_): self = cls() self._set = set_ return self @classmethod def from_sets(cls, sets): """ Used to work with an iterable of set. """ aggregated = set() sets = list(sets) for set_ in sets: if isinstance(set_, BaseContextSet): aggregated |= set_._set else: aggregated |= set_ return cls.from_set(aggregated) def __or__(self, other): return type(self).from_set(self._set | other._set) def __iter__(self): for element in self._set: yield element def __bool__(self): return bool(self._set) def __len__(self): return len(self._set) def __repr__(self): return '%s(%s)' % (self.__class__.__name__, ', '.join(str(s) for s in self._set)) def filter(self, filter_func): return type(self).from_iterable(filter(filter_func, self._set)) def __getattr__(self, name): def mapper(*args, **kwargs): return type(self).from_sets( getattr(context, name)(*args, **kwargs) for context in self._set ) return mapper jedi-0.11.1/jedi/parser_utils.py0000664000175000017500000001647413214571123016403 0ustar davedave00000000000000import textwrap from inspect import cleandoc from jedi._compatibility import literal_eval, is_py3 from parso.python import tree _EXECUTE_NODES = set([ 'funcdef', 'classdef', 'import_from', 'import_name', 'test', 'or_test', 'and_test', 'not_test', 'comparison', 'expr', 'xor_expr', 'and_expr', 'shift_expr', 'arith_expr', 'atom_expr', 'term', 'factor', 'power', 'atom' ]) _FLOW_KEYWORDS = ( 'try', 'except', 'finally', 'else', 'if', 'elif', 'with', 'for', 'while' ) def get_executable_nodes(node, last_added=False): """ For static analysis. """ result = [] typ = node.type if typ == 'name': next_leaf = node.get_next_leaf() if last_added is False and node.parent.type != 'param' and next_leaf != '=': result.append(node) elif typ == 'expr_stmt': # I think evaluating the statement (and possibly returned arrays), # should be enough for static analysis. result.append(node) for child in node.children: result += get_executable_nodes(child, last_added=True) elif typ == 'decorator': # decorator if node.children[-2] == ')': node = node.children[-3] if node != '(': result += get_executable_nodes(node) else: try: children = node.children except AttributeError: pass else: if node.type in _EXECUTE_NODES and not last_added: result.append(node) for child in children: result += get_executable_nodes(child, last_added) return result def get_comp_fors(comp_for): yield comp_for last = comp_for.children[-1] while True: if last.type == 'comp_for': yield last elif not last.type == 'comp_if': break last = last.children[-1] def for_stmt_defines_one_name(for_stmt): """ Returns True if only one name is returned: ``for x in y``. Returns False if the for loop is more complicated: ``for x, z in y``. :returns: bool """ return for_stmt.children[1].type == 'name' def get_flow_branch_keyword(flow_node, node): start_pos = node.start_pos if not (flow_node.start_pos < start_pos <= flow_node.end_pos): raise ValueError('The node is not part of the flow.') keyword = None for i, child in enumerate(flow_node.children): if start_pos < child.start_pos: return keyword first_leaf = child.get_first_leaf() if first_leaf in _FLOW_KEYWORDS: keyword = first_leaf return 0 def get_statement_of_position(node, pos): for c in node.children: if c.start_pos <= pos <= c.end_pos: if c.type not in ('decorated', 'simple_stmt', 'suite') \ and not isinstance(c, (tree.Flow, tree.ClassOrFunc)): return c else: try: return get_statement_of_position(c, pos) except AttributeError: pass # Must be a non-scope return None def clean_scope_docstring(scope_node): """ Returns a cleaned version of the docstring token. """ node = scope_node.get_doc_node() if node is not None: # TODO We have to check next leaves until there are no new # leaves anymore that might be part of the docstring. A # docstring can also look like this: ``'foo' 'bar' # Returns a literal cleaned version of the ``Token``. cleaned = cleandoc(safe_literal_eval(node.value)) # Since we want the docstr output to be always unicode, just # force it. if is_py3 or isinstance(cleaned, unicode): return cleaned else: return unicode(cleaned, 'UTF-8', 'replace') return '' def safe_literal_eval(value): first_two = value[:2].lower() if first_two[0] == 'f' or first_two in ('fr', 'rf'): # literal_eval is not able to resovle f literals. We have to do that # manually, but that's right now not implemented. return '' try: return literal_eval(value) except SyntaxError: # It's possible to create syntax errors with literals like rb'' in # Python 2. This should not be possible and in that case just return an # empty string. # Before Python 3.3 there was a more strict definition in which order # you could define literals. return '' def get_call_signature(funcdef, width=72, call_string=None): """ Generate call signature of this function. :param width: Fold lines if a line is longer than this value. :type width: int :arg func_name: Override function name when given. :type func_name: str :rtype: str """ # Lambdas have no name. if call_string is None: if funcdef.type == 'lambdef': call_string = '' else: call_string = funcdef.name.value if funcdef.type == 'lambdef': p = '(' + ''.join(param.get_code() for param in funcdef.get_params()).strip() + ')' else: p = funcdef.children[2].get_code() code = call_string + p return '\n'.join(textwrap.wrap(code, width)) def get_doc_with_call_signature(scope_node): """ Return a document string including call signature. """ call_signature = None if scope_node.type == 'classdef': for funcdef in scope_node.iter_funcdefs(): if funcdef.name.value == '__init__': call_signature = \ get_call_signature(funcdef, call_string=scope_node.name.value) elif scope_node.type in ('funcdef', 'lambdef'): call_signature = get_call_signature(scope_node) doc = clean_scope_docstring(scope_node) if call_signature is None: return doc return '%s\n\n%s' % (call_signature, doc) def move(node, line_offset): """ Move the `Node` start_pos. """ try: children = node.children except AttributeError: node.line += line_offset else: for c in children: move(c, line_offset) def get_following_comment_same_line(node): """ returns (as string) any comment that appears on the same line, after the node, including the # """ try: if node.type == 'for_stmt': whitespace = node.children[5].get_first_leaf().prefix elif node.type == 'with_stmt': whitespace = node.children[3].get_first_leaf().prefix else: whitespace = node.get_last_leaf().get_next_leaf().prefix except AttributeError: return None except ValueError: # TODO in some particular cases, the tree doesn't seem to be linked # correctly return None if "#" not in whitespace: return None comment = whitespace[whitespace.index("#"):] if "\r" in comment: comment = comment[:comment.index("\r")] if "\n" in comment: comment = comment[:comment.index("\n")] return comment def is_scope(node): return node.type in ('file_input', 'classdef', 'funcdef', 'lambdef', 'comp_for') def get_parent_scope(node, include_flows=False): """ Returns the underlying scope. """ scope = node.parent while scope is not None: if include_flows and isinstance(scope, tree.Flow): return scope if is_scope(scope): break scope = scope.parent return scope jedi-0.11.1/jedi/__init__.py0000664000175000017500000000307513214571123015417 0ustar davedave00000000000000""" Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its historic focus is autocompletion, but does static analysis for now as well. Jedi is fast and is very well tested. It understands Python on a deeper level than all other static analysis frameworks for Python. Jedi has support for two different goto functions. It's possible to search for related names and to list all names in a Python file and infer them. Jedi understands docstrings and you can use Jedi autocompletion in your REPL as well. Jedi uses a very simple API to connect with IDE's. There's a reference implementation as a `VIM-Plugin `_, which uses Jedi's autocompletion. We encourage you to use Jedi in your IDEs. It's really easy. To give you a simple example how you can use the Jedi library, here is an example for the autocompletion feature: >>> import jedi >>> source = ''' ... import datetime ... datetime.da''' >>> script = jedi.Script(source, 3, len('datetime.da'), 'example.py') >>> script >>> completions = script.completions() >>> completions #doctest: +ELLIPSIS [, , ...] >>> print(completions[0].complete) te >>> print(completions[0].name) date As you see Jedi is pretty simple and allows you to concentrate on writing a good text editor, while still having very good IDE features for Python. """ __version__ = '0.11.1' from jedi.api import Script, Interpreter, set_debug_function, \ preload_module, names from jedi import settings jedi-0.11.1/jedi/_compatibility.py0000664000175000017500000002147613214571123016675 0ustar davedave00000000000000""" To ensure compatibility from Python ``2.6`` - ``3.3``, a module has been created. Clearly there is huge need to use conforming syntax. """ import sys import imp import os import re import pkgutil import warnings try: import importlib except ImportError: pass # Cannot use sys.version.major and minor names, because in Python 2.6 it's not # a namedtuple. is_py3 = sys.version_info[0] >= 3 is_py33 = is_py3 and sys.version_info[1] >= 3 is_py34 = is_py3 and sys.version_info[1] >= 4 is_py35 = is_py3 and sys.version_info[1] >= 5 is_py26 = not is_py3 and sys.version_info[1] < 7 py_version = int(str(sys.version_info[0]) + str(sys.version_info[1])) class DummyFile(object): def __init__(self, loader, string): self.loader = loader self.string = string def read(self): return self.loader.get_source(self.string) def close(self): del self.loader def find_module_py34(string, path=None, fullname=None): implicit_namespace_pkg = False spec = None loader = None spec = importlib.machinery.PathFinder.find_spec(string, path) if hasattr(spec, 'origin'): origin = spec.origin implicit_namespace_pkg = origin == 'namespace' # We try to disambiguate implicit namespace pkgs with non implicit namespace pkgs if implicit_namespace_pkg: fullname = string if not path else fullname implicit_ns_info = ImplicitNSInfo(fullname, spec.submodule_search_locations._path) return None, implicit_ns_info, False # we have found the tail end of the dotted path if hasattr(spec, 'loader'): loader = spec.loader return find_module_py33(string, path, loader) def find_module_py33(string, path=None, loader=None, fullname=None): loader = loader or importlib.machinery.PathFinder.find_module(string, path) if loader is None and path is None: # Fallback to find builtins try: with warnings.catch_warnings(record=True): # Mute "DeprecationWarning: Use importlib.util.find_spec() # instead." While we should replace that in the future, it's # probably good to wait until we deprecate Python 3.3, since # it was added in Python 3.4 and find_loader hasn't been # removed in 3.6. loader = importlib.find_loader(string) except ValueError as e: # See #491. Importlib might raise a ValueError, to avoid this, we # just raise an ImportError to fix the issue. raise ImportError("Originally " + repr(e)) if loader is None: raise ImportError("Couldn't find a loader for {0}".format(string)) try: is_package = loader.is_package(string) if is_package: if hasattr(loader, 'path'): module_path = os.path.dirname(loader.path) else: # At least zipimporter does not have path attribute module_path = os.path.dirname(loader.get_filename(string)) if hasattr(loader, 'archive'): module_file = DummyFile(loader, string) else: module_file = None else: module_path = loader.get_filename(string) module_file = DummyFile(loader, string) except AttributeError: # ExtensionLoader has not attribute get_filename, instead it has a # path attribute that we can use to retrieve the module path try: module_path = loader.path module_file = DummyFile(loader, string) except AttributeError: module_path = string module_file = None finally: is_package = False if hasattr(loader, 'archive'): module_path = loader.archive return module_file, module_path, is_package def find_module_pre_py33(string, path=None, fullname=None): try: module_file, module_path, description = imp.find_module(string, path) module_type = description[2] return module_file, module_path, module_type is imp.PKG_DIRECTORY except ImportError: pass if path is None: path = sys.path for item in path: loader = pkgutil.get_importer(item) if loader: try: loader = loader.find_module(string) if loader: is_package = loader.is_package(string) is_archive = hasattr(loader, 'archive') try: module_path = loader.get_filename(string) except AttributeError: # fallback for py26 try: module_path = loader._get_filename(string) except AttributeError: continue if is_package: module_path = os.path.dirname(module_path) if is_archive: module_path = loader.archive file = None if not is_package or is_archive: file = DummyFile(loader, string) return (file, module_path, is_package) except ImportError: pass raise ImportError("No module named {0}".format(string)) find_module = find_module_py33 if is_py33 else find_module_pre_py33 find_module = find_module_py34 if is_py34 else find_module find_module.__doc__ = """ Provides information about a module. This function isolates the differences in importing libraries introduced with python 3.3 on; it gets a module name and optionally a path. It will return a tuple containin an open file for the module (if not builtin), the filename or the name of the module if it is a builtin one and a boolean indicating if the module is contained in a package. """ class ImplicitNSInfo(object): """Stores information returned from an implicit namespace spec""" def __init__(self, name, paths): self.name = name self.paths = paths # unicode function try: unicode = unicode except NameError: unicode = str # re-raise function if is_py3: def reraise(exception, traceback): raise exception.with_traceback(traceback) else: eval(compile(""" def reraise(exception, traceback): raise exception, None, traceback """, 'blub', 'exec')) reraise.__doc__ = """ Re-raise `exception` with a `traceback` object. Usage:: reraise(Exception, sys.exc_info()[2]) """ class Python3Method(object): def __init__(self, func): self.func = func def __get__(self, obj, objtype): if obj is None: return lambda *args, **kwargs: self.func(*args, **kwargs) else: return lambda *args, **kwargs: self.func(obj, *args, **kwargs) def use_metaclass(meta, *bases): """ Create a class with a metaclass. """ if not bases: bases = (object,) return meta("HackClass", bases, {}) try: encoding = sys.stdout.encoding if encoding is None: encoding = 'utf-8' except AttributeError: encoding = 'ascii' def u(string): """Cast to unicode DAMMIT! Written because Python2 repr always implicitly casts to a string, so we have to cast back to a unicode (and we now that we always deal with valid unicode, because we check that in the beginning). """ if is_py3: return str(string) if not isinstance(string, unicode): return unicode(str(string), 'UTF-8') return string try: import builtins # module name in python 3 except ImportError: import __builtin__ as builtins import ast def literal_eval(string): # py3.0, py3.1 and py32 don't support unicode literals. Support those, I # don't want to write two versions of the tokenizer. if is_py3 and sys.version_info.minor < 3: if re.match('[uU][\'"]', string): string = string[1:] return ast.literal_eval(string) try: from itertools import zip_longest except ImportError: from itertools import izip_longest as zip_longest # Python 2 try: FileNotFoundError = FileNotFoundError except NameError: FileNotFoundError = IOError def no_unicode_pprint(dct): """ Python 2/3 dict __repr__ may be different, because of unicode differens (with or without a `u` prefix). Normally in doctests we could use `pprint` to sort dicts and check for equality, but here we have to write a separate function to do that. """ import pprint s = pprint.pformat(dct) print(re.sub("u'", "'", s)) def utf8_repr(func): """ ``__repr__`` methods in Python 2 don't allow unicode objects to be returned. Therefore cast them to utf-8 bytes in this decorator. """ def wrapper(self): result = func(self) if isinstance(result, unicode): return result.encode('utf-8') else: return result if is_py3: return func else: return wrapper jedi-0.11.1/jedi/api/0000775000175000017500000000000013214571377014065 5ustar davedave00000000000000jedi-0.11.1/jedi/api/classes.py0000664000175000017500000005470313214571123016072 0ustar davedave00000000000000""" The :mod:`jedi.api.classes` module contains the return classes of the API. These classes are the much bigger part of the whole API, because they contain the interesting information about completion and goto operations. """ import re from parso.cache import parser_cache from parso.python.tree import search_ancestor from jedi._compatibility import u from jedi import settings from jedi.evaluate.utils import ignored, unite from jedi.cache import memoize_method from jedi.evaluate import imports from jedi.evaluate import compiled from jedi.evaluate.filters import ParamName from jedi.evaluate.imports import ImportName from jedi.evaluate.context import instance from jedi.evaluate.context import ClassContext, FunctionContext, FunctionExecutionContext from jedi.api.keywords import KeywordName def _sort_names_by_start_pos(names): return sorted(names, key=lambda s: s.start_pos or (0, 0)) def defined_names(evaluator, context): """ List sub-definitions (e.g., methods in class). :type scope: Scope :rtype: list of Definition """ filter = next(context.get_filters(search_global=True)) names = [name for name in filter.values()] return [Definition(evaluator, n) for n in _sort_names_by_start_pos(names)] class BaseDefinition(object): _mapping = { 'posixpath': 'os.path', 'riscospath': 'os.path', 'ntpath': 'os.path', 'os2emxpath': 'os.path', 'macpath': 'os.path', 'genericpath': 'os.path', 'posix': 'os', '_io': 'io', '_functools': 'functools', '_sqlite3': 'sqlite3', '__builtin__': '', 'builtins': '', } _tuple_mapping = dict((tuple(k.split('.')), v) for (k, v) in { 'argparse._ActionsContainer': 'argparse.ArgumentParser', }.items()) def __init__(self, evaluator, name): self._evaluator = evaluator self._name = name """ An instance of :class:`parso.reprsentation.Name` subclass. """ self.is_keyword = isinstance(self._name, KeywordName) # generate a path to the definition self._module = name.get_root_context() if self.in_builtin_module(): self.module_path = None else: self.module_path = self._module.py__file__() """Shows the file path of a module. e.g. ``/usr/lib/python2.7/os.py``""" @property def name(self): """ Name of variable/function/class/module. For example, for ``x = None`` it returns ``'x'``. :rtype: str or None """ return self._name.string_name @property def type(self): """ The type of the definition. Here is an example of the value of this attribute. Let's consider the following source. As what is in ``variable`` is unambiguous to Jedi, :meth:`jedi.Script.goto_definitions` should return a list of definition for ``sys``, ``f``, ``C`` and ``x``. >>> from jedi import Script >>> source = ''' ... import keyword ... ... class C: ... pass ... ... class D: ... pass ... ... x = D() ... ... def f(): ... pass ... ... for variable in [keyword, f, C, x]: ... variable''' >>> script = Script(source) >>> defs = script.goto_definitions() Before showing what is in ``defs``, let's sort it by :attr:`line` so that it is easy to relate the result to the source code. >>> defs = sorted(defs, key=lambda d: d.line) >>> defs # doctest: +NORMALIZE_WHITESPACE [, , , ] Finally, here is what you can get from :attr:`type`: >>> defs[0].type 'module' >>> defs[1].type 'class' >>> defs[2].type 'instance' >>> defs[3].type 'function' """ tree_name = self._name.tree_name resolve = False if tree_name is not None: # TODO move this to their respective names. definition = tree_name.get_definition() if definition is not None and definition.type == 'import_from' and \ tree_name.is_definition(): resolve = True if isinstance(self._name, imports.SubModuleName) or resolve: for context in self._name.infer(): return context.api_type return self._name.api_type def _path(self): """The path to a module/class/function definition.""" def to_reverse(): name = self._name if name.api_type == 'module': try: name = list(name.infer())[0].name except IndexError: pass if name.api_type == 'module': module_contexts = name.infer() if module_contexts: module_context, = module_contexts for n in reversed(module_context.py__name__().split('.')): yield n else: # We don't really know anything about the path here. This # module is just an import that would lead in an # ImportError. So simply return the name. yield name.string_name return else: yield name.string_name parent_context = name.parent_context while parent_context is not None: try: method = parent_context.py__name__ except AttributeError: try: yield parent_context.name.string_name except AttributeError: pass else: for name in reversed(method().split('.')): yield name parent_context = parent_context.parent_context return reversed(list(to_reverse())) @property def module_name(self): """ The module name. >>> from jedi import Script >>> source = 'import json' >>> script = Script(source, path='example.py') >>> d = script.goto_definitions()[0] >>> print(d.module_name) # doctest: +ELLIPSIS json """ return self._module.name.string_name def in_builtin_module(self): """Whether this is a builtin module.""" return isinstance(self._module, compiled.CompiledObject) @property def line(self): """The line where the definition occurs (starting with 1).""" start_pos = self._name.start_pos if start_pos is None: return None return start_pos[0] @property def column(self): """The column where the definition occurs (starting with 0).""" start_pos = self._name.start_pos if start_pos is None: return None return start_pos[1] def docstring(self, raw=False, fast=True): r""" Return a document string for this completion object. Example: >>> from jedi import Script >>> source = '''\ ... def f(a, b=1): ... "Document for function f." ... ''' >>> script = Script(source, 1, len('def f'), 'example.py') >>> doc = script.goto_definitions()[0].docstring() >>> print(doc) f(a, b=1) Document for function f. Notice that useful extra information is added to the actual docstring. For function, it is call signature. If you need actual docstring, use ``raw=True`` instead. >>> print(script.goto_definitions()[0].docstring(raw=True)) Document for function f. :param fast: Don't follow imports that are only one level deep like ``import foo``, but follow ``from foo import bar``. This makes sense for speed reasons. Completing `import a` is slow if you use the ``foo.docstring(fast=False)`` on every object, because it parses all libraries starting with ``a``. """ return _Help(self._name).docstring(fast=fast, raw=raw) @property def description(self): """A textual description of the object.""" return u(self._name.string_name) @property def full_name(self): """ Dot-separated path of this object. It is in the form of ``[.[...]][.]``. It is useful when you want to look up Python manual of the object at hand. Example: >>> from jedi import Script >>> source = ''' ... import os ... os.path.join''' >>> script = Script(source, 3, len('os.path.join'), 'example.py') >>> print(script.goto_definitions()[0].full_name) os.path.join Notice that it returns ``'os.path.join'`` instead of (for example) ``'posixpath.join'``. This is not correct, since the modules name would be `````. However most users find the latter more practical. """ path = list(self._path()) # TODO add further checks, the mapping should only occur on stdlib. if not path: return None # for keywords the path is empty with ignored(KeyError): path[0] = self._mapping[path[0]] for key, repl in self._tuple_mapping.items(): if tuple(path[:len(key)]) == key: path = [repl] + path[len(key):] return '.'.join(path if path[0] else path[1:]) def goto_assignments(self): if self._name.tree_name is None: return self names = self._evaluator.goto(self._name.parent_context, self._name.tree_name) return [Definition(self._evaluator, n) for n in names] def _goto_definitions(self): # TODO make this function public. return [Definition(self._evaluator, d.name) for d in self._name.infer()] @property @memoize_method def params(self): """ Raises an ``AttributeError``if the definition is not callable. Otherwise returns a list of `Definition` that represents the params. """ def get_param_names(context): param_names = [] if context.api_type == 'function': param_names = list(context.get_param_names()) if isinstance(context, instance.BoundMethod): param_names = param_names[1:] elif isinstance(context, (instance.AbstractInstanceContext, ClassContext)): if isinstance(context, ClassContext): search = '__init__' else: search = '__call__' names = context.get_function_slot_names(search) if not names: return [] # Just take the first one here, not optimal, but currently # there's no better solution. inferred = names[0].infer() param_names = get_param_names(next(iter(inferred))) if isinstance(context, ClassContext): param_names = param_names[1:] return param_names elif isinstance(context, compiled.CompiledObject): return list(context.get_param_names()) return param_names followed = list(self._name.infer()) if not followed or not hasattr(followed[0], 'py__call__'): raise AttributeError() context = followed[0] # only check the first one. return [Definition(self._evaluator, n) for n in get_param_names(context)] def parent(self): context = self._name.parent_context if context is None: return None if isinstance(context, FunctionExecutionContext): # TODO the function context should be a part of the function # execution context. context = FunctionContext( self._evaluator, context.parent_context, context.tree_node) return Definition(self._evaluator, context.name) def __repr__(self): return "<%s %s>" % (type(self).__name__, self.description) def get_line_code(self, before=0, after=0): """ Returns the line of code where this object was defined. :param before: Add n lines before the current line to the output. :param after: Add n lines after the current line to the output. :return str: Returns the line(s) of code or an empty string if it's a builtin. """ if self.in_builtin_module(): return '' path = self._name.get_root_context().py__file__() lines = parser_cache[self._evaluator.grammar._hashed][path].lines index = self._name.start_pos[0] - 1 start_index = max(index - before, 0) return ''.join(lines[start_index:index + after + 1]) class Completion(BaseDefinition): """ `Completion` objects are returned from :meth:`api.Script.completions`. They provide additional information about a completion. """ def __init__(self, evaluator, name, stack, like_name_length): super(Completion, self).__init__(evaluator, name) self._like_name_length = like_name_length self._stack = stack # Completion objects with the same Completion name (which means # duplicate items in the completion) self._same_name_completions = [] def _complete(self, like_name): append = '' if settings.add_bracket_after_function \ and self.type == 'Function': append = '(' if isinstance(self._name, ParamName) and self._stack is not None: node_names = list(self._stack.get_node_names(self._evaluator.grammar._pgen_grammar)) if 'trailer' in node_names and 'argument' not in node_names: append += '=' name = self._name.string_name if like_name: name = name[self._like_name_length:] return name + append @property def complete(self): """ Return the rest of the word, e.g. completing ``isinstance``:: isinstan# <-- Cursor is here would return the string 'ce'. It also adds additional stuff, depending on your `settings.py`. Assuming the following function definition:: def foo(param=0): pass completing ``foo(par`` would give a ``Completion`` which `complete` would be `am=` """ return self._complete(True) @property def name_with_symbols(self): """ Similar to :attr:`name`, but like :attr:`name` returns also the symbols, for example assuming the following function definition:: def foo(param=0): pass completing ``foo(`` would give a ``Completion`` which ``name_with_symbols`` would be "param=". """ return self._complete(False) def docstring(self, raw=False, fast=True): if self._like_name_length >= 3: # In this case we can just resolve the like name, because we # wouldn't load like > 100 Python modules anymore. fast = False return super(Completion, self).docstring(raw=raw, fast=fast) @property def description(self): """Provide a description of the completion object.""" # TODO improve the class structure. return Definition.description.__get__(self) def __repr__(self): return '<%s: %s>' % (type(self).__name__, self._name.string_name) @memoize_method def follow_definition(self): """ Return the original definitions. I strongly recommend not using it for your completions, because it might slow down |jedi|. If you want to read only a few objects (<=20), it might be useful, especially to get the original docstrings. The basic problem of this function is that it follows all results. This means with 1000 completions (e.g. numpy), it's just PITA-slow. """ defs = self._name.infer() return [Definition(self._evaluator, d.name) for d in defs] class Definition(BaseDefinition): """ *Definition* objects are returned from :meth:`api.Script.goto_assignments` or :meth:`api.Script.goto_definitions`. """ def __init__(self, evaluator, definition): super(Definition, self).__init__(evaluator, definition) @property def description(self): """ A description of the :class:`.Definition` object, which is heavily used in testing. e.g. for ``isinstance`` it returns ``def isinstance``. Example: >>> from jedi import Script >>> source = ''' ... def f(): ... pass ... ... class C: ... pass ... ... variable = f if random.choice([0,1]) else C''' >>> script = Script(source, column=3) # line is maximum by default >>> defs = script.goto_definitions() >>> defs = sorted(defs, key=lambda d: d.line) >>> defs [, ] >>> str(defs[0].description) # strip literals in python2 'def f' >>> str(defs[1].description) 'class C' """ typ = self.type tree_name = self._name.tree_name if typ in ('function', 'class', 'module', 'instance') or tree_name is None: if typ == 'function': # For the description we want a short and a pythonic way. typ = 'def' return typ + ' ' + u(self._name.string_name) elif typ == 'param': code = search_ancestor(tree_name, 'param').get_code( include_prefix=False, include_comma=False ) return typ + ' ' + code definition = tree_name.get_definition() or tree_name # Remove the prefix, because that's not what we want for get_code # here. txt = definition.get_code(include_prefix=False) # Delete comments: txt = re.sub('#[^\n]+\n', ' ', txt) # Delete multi spaces/newlines txt = re.sub('\s+', ' ', txt).strip() return txt @property def desc_with_module(self): """ In addition to the definition, also return the module. .. warning:: Don't use this function yet, its behaviour may change. If you really need it, talk to me. .. todo:: Add full path. This function is should return a `module.class.function` path. """ position = '' if self.in_builtin_module else '@%s' % (self.line) return "%s:%s%s" % (self.module_name, self.description, position) @memoize_method def defined_names(self): """ List sub-definitions (e.g., methods in class). :rtype: list of Definition """ defs = self._name.infer() return sorted( unite(defined_names(self._evaluator, d) for d in defs), key=lambda s: s._name.start_pos or (0, 0) ) def is_definition(self): """ Returns True, if defined as a name in a statement, function or class. Returns False, if it's a reference to such a definition. """ if self._name.tree_name is None: return True else: return self._name.tree_name.is_definition() def __eq__(self, other): return self._name.start_pos == other._name.start_pos \ and self.module_path == other.module_path \ and self.name == other.name \ and self._evaluator == other._evaluator def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self._name.start_pos, self.module_path, self.name, self._evaluator)) class CallSignature(Definition): """ `CallSignature` objects is the return value of `Script.function_definition`. It knows what functions you are currently in. e.g. `isinstance(` would return the `isinstance` function. without `(` it would return nothing. """ def __init__(self, evaluator, executable_name, bracket_start_pos, index, key_name_str): super(CallSignature, self).__init__(evaluator, executable_name) self._index = index self._key_name_str = key_name_str self._bracket_start_pos = bracket_start_pos @property def index(self): """ The Param index of the current call. Returns None if the index cannot be found in the curent call. """ if self._key_name_str is not None: for i, param in enumerate(self.params): if self._key_name_str == param.name: return i if self.params: param_name = self.params[-1]._name if param_name.tree_name is not None: if param_name.tree_name.get_definition().star_count == 2: return i return None if self._index >= len(self.params): for i, param in enumerate(self.params): tree_name = param._name.tree_name if tree_name is not None: # *args case if tree_name.get_definition().star_count == 1: return i return None return self._index @property def bracket_start(self): """ The indent of the bracket that is responsible for the last function call. """ return self._bracket_start_pos def __repr__(self): return '<%s: %s index %s>' % \ (type(self).__name__, self._name.string_name, self.index) class _Help(object): """ Temporary implementation, will be used as `Script.help() or something in the future. """ def __init__(self, definition): self._name = definition @memoize_method def _get_contexts(self, fast): if isinstance(self._name, ImportName) and fast: return {} if self._name.api_type == 'statement': return {} return self._name.infer() def docstring(self, fast=True, raw=True): """ The docstring ``__doc__`` for any object. See :attr:`doc` for example. """ # TODO: Use all of the followed objects as output. Possibly divinding # them by a few dashes. for context in self._get_contexts(fast=fast): return context.py__doc__(include_call_signature=not raw) return '' jedi-0.11.1/jedi/api/interpreter.py0000664000175000017500000000273113214571123016772 0ustar davedave00000000000000""" TODO Some parts of this module are still not well documented. """ from jedi.evaluate.context import ModuleContext from jedi.evaluate import compiled from jedi.evaluate.compiled import mixed from jedi.evaluate.base_context import Context class NamespaceObject(object): def __init__(self, dct): self.__dict__ = dct class MixedModuleContext(Context): resets_positions = True type = 'mixed_module' def __init__(self, evaluator, tree_module, namespaces, path): self.evaluator = evaluator self._namespaces = namespaces self._namespace_objects = [NamespaceObject(n) for n in namespaces] self._module_context = ModuleContext(evaluator, tree_module, path=path) self.tree_node = tree_module def get_node(self): return self.tree_node def get_filters(self, *args, **kwargs): for filter in self._module_context.get_filters(*args, **kwargs): yield filter for namespace_obj in self._namespace_objects: compiled_object = compiled.create(self.evaluator, namespace_obj) mixed_object = mixed.MixedObject( self.evaluator, parent_context=self, compiled_object=compiled_object, tree_context=self._module_context ) for filter in mixed_object.get_filters(*args, **kwargs): yield filter def __getattr__(self, name): return getattr(self._module_context, name) jedi-0.11.1/jedi/api/keywords.py0000664000175000017500000000717313214571123016303 0ustar davedave00000000000000import pydoc import keyword from jedi._compatibility import is_py3, is_py35 from jedi.evaluate.utils import ignored from jedi.evaluate.filters import AbstractNameDefinition from parso.python.tree import Leaf try: from pydoc_data import topics as pydoc_topics except ImportError: # Python 2 try: import pydoc_topics except ImportError: # This is for Python 3 embeddable version, which dont have # pydoc_data module in its file python3x.zip. pydoc_topics = None if is_py3: if is_py35: # in python 3.5 async and await are not proper keywords, but for # completion pursposes should as as though they are keys = keyword.kwlist + ["async", "await"] else: keys = keyword.kwlist else: keys = keyword.kwlist + ['None', 'False', 'True'] def has_inappropriate_leaf_keyword(pos, module): relevant_errors = filter( lambda error: error.first_pos[0] == pos[0], module.error_statement_stacks) for error in relevant_errors: if error.next_token in keys: return True return False def completion_names(evaluator, stmt, pos, module): keyword_list = all_keywords(evaluator) if not isinstance(stmt, Leaf) or has_inappropriate_leaf_keyword(pos, module): keyword_list = filter( lambda keyword: not keyword.only_valid_as_leaf, keyword_list ) return [keyword.name for keyword in keyword_list] def all_keywords(evaluator, pos=(0, 0)): return set([Keyword(evaluator, k, pos) for k in keys]) def keyword(evaluator, string, pos=(0, 0)): if string in keys: return Keyword(evaluator, string, pos) else: return None def get_operator(evaluator, string, pos): return Keyword(evaluator, string, pos) keywords_only_valid_as_leaf = ( 'continue', 'break', ) class KeywordName(AbstractNameDefinition): api_type = 'keyword' def __init__(self, evaluator, name): self.evaluator = evaluator self.string_name = name self.parent_context = evaluator.BUILTINS def eval(self): return set() def infer(self): return [Keyword(self.evaluator, self.string_name, (0, 0))] class Keyword(object): api_type = 'keyword' def __init__(self, evaluator, name, pos): self.name = KeywordName(evaluator, name) self.start_pos = pos self.parent = evaluator.BUILTINS @property def only_valid_as_leaf(self): return self.name.value in keywords_only_valid_as_leaf @property def names(self): """ For a `parsing.Name` like comparision """ return [self.name] def py__doc__(self, include_call_signature=False): return imitate_pydoc(self.name.string_name) def __repr__(self): return '<%s: %s>' % (type(self).__name__, self.name) def imitate_pydoc(string): """ It's not possible to get the pydoc's without starting the annoying pager stuff. """ if pydoc_topics is None: return '' # str needed because of possible unicode stuff in py2k (pydoc doesn't work # with unicode strings) string = str(string) h = pydoc.help with ignored(KeyError): # try to access symbols string = h.symbols[string] string, _, related = string.partition(' ') get_target = lambda s: h.topics.get(s, h.keywords.get(s)) while isinstance(string, str): string = get_target(string) try: # is a tuple now label, related = string except TypeError: return '' try: return pydoc_topics.topics[label].strip() if pydoc_topics else '' except KeyError: return '' jedi-0.11.1/jedi/api/__init__.py0000664000175000017500000004035013214571123016165 0ustar davedave00000000000000""" The API basically only provides one class. You can create a :class:`Script` and use its methods. Additionally you can add a debug function with :func:`set_debug_function`. Alternatively, if you don't need a custom function and are happy with printing debug messages to stdout, simply call :func:`set_debug_function` without arguments. .. warning:: Please, note that Jedi is **not thread safe**. """ import os import sys import parso from parso.python import tree from parso import python_bytes_to_unicode, split_lines from jedi.parser_utils import get_executable_nodes, get_statement_of_position from jedi import debug from jedi import settings from jedi import cache from jedi.api import classes from jedi.api import interpreter from jedi.api import helpers from jedi.api.completion import Completion from jedi.evaluate import Evaluator from jedi.evaluate import imports from jedi.evaluate import usages from jedi.evaluate.project import Project from jedi.evaluate.arguments import try_iter_content from jedi.evaluate.helpers import get_module_names, evaluate_call_of_leaf from jedi.evaluate.sys_path import dotted_path_in_sys_path from jedi.evaluate.filters import TreeNameDefinition from jedi.evaluate.syntax_tree import tree_name_to_contexts from jedi.evaluate.context import ModuleContext from jedi.evaluate.context.module import ModuleName from jedi.evaluate.context.iterable import unpack_tuple_to_dict # Jedi uses lots and lots of recursion. By setting this a little bit higher, we # can remove some "maximum recursion depth" errors. sys.setrecursionlimit(3000) class Script(object): """ A Script is the base for completions, goto or whatever you want to do with |jedi|. You can either use the ``source`` parameter or ``path`` to read a file. Usually you're going to want to use both of them (in an editor). The script might be analyzed in a different ``sys.path`` than |jedi|: - if `sys_path` parameter is not ``None``, it will be used as ``sys.path`` for the script; - if `sys_path` parameter is ``None`` and ``VIRTUAL_ENV`` environment variable is defined, ``sys.path`` for the specified environment will be guessed (see :func:`jedi.evaluate.sys_path.get_venv_path`) and used for the script; - otherwise ``sys.path`` will match that of |jedi|. :param source: The source code of the current file, separated by newlines. :type source: str :param line: The line to perform actions on (starting with 1). :type line: int :param column: The column of the cursor (starting with 0). :type column: int :param path: The path of the file in the file system, or ``''`` if it hasn't been saved yet. :type path: str or None :param encoding: The encoding of ``source``, if it is not a ``unicode`` object (default ``'utf-8'``). :type encoding: str :param source_encoding: The encoding of ``source``, if it is not a ``unicode`` object (default ``'utf-8'``). :type encoding: str :param sys_path: ``sys.path`` to use during analysis of the script :type sys_path: list """ def __init__(self, source=None, line=None, column=None, path=None, encoding='utf-8', sys_path=None): self._orig_path = path # An empty path (also empty string) should always result in no path. self.path = os.path.abspath(path) if path else None if source is None: # TODO add a better warning than the traceback! with open(path, 'rb') as f: source = f.read() # TODO do we really want that? self._source = python_bytes_to_unicode(source, encoding, errors='replace') self._code_lines = split_lines(self._source) line = max(len(self._code_lines), 1) if line is None else line if not (0 < line <= len(self._code_lines)): raise ValueError('`line` parameter is not in a valid range.') line_len = len(self._code_lines[line - 1]) column = line_len if column is None else column if not (0 <= column <= line_len): raise ValueError('`column` parameter is not in a valid range.') self._pos = line, column self._path = path cache.clear_time_caches() debug.reset_time() # Load the Python grammar of the current interpreter. self._grammar = parso.load_grammar() project = Project(sys_path=sys_path) self._evaluator = Evaluator(self._grammar, project) project.add_script_path(self.path) debug.speed('init') @cache.memoize_method def _get_module_node(self): return self._grammar.parse( code=self._source, path=self.path, cache=False, # No disk cache, because the current script often changes. diff_cache=True, cache_path=settings.cache_directory ) @cache.memoize_method def _get_module(self): module = ModuleContext( self._evaluator, self._get_module_node(), self.path ) if self.path is not None: name = dotted_path_in_sys_path(self._evaluator.project.sys_path, self.path) if name is not None: imports.add_module(self._evaluator, name, module) return module def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, repr(self._orig_path)) def completions(self): """ Return :class:`classes.Completion` objects. Those objects contain information about the completions, more than just names. :return: Completion objects, sorted by name and __ comes last. :rtype: list of :class:`classes.Completion` """ debug.speed('completions start') completion = Completion( self._evaluator, self._get_module(), self._code_lines, self._pos, self.call_signatures ) completions = completion.completions() debug.speed('completions end') return completions def goto_definitions(self): """ Return the definitions of a the path under the cursor. goto function! This follows complicated paths and returns the end, not the first definition. The big difference between :meth:`goto_assignments` and :meth:`goto_definitions` is that :meth:`goto_assignments` doesn't follow imports and statements. Multiple objects may be returned, because Python itself is a dynamic language, which means depending on an option you can have two different versions of a function. :rtype: list of :class:`classes.Definition` """ module_node = self._get_module_node() leaf = module_node.get_name_of_position(self._pos) if leaf is None: leaf = module_node.get_leaf_for_position(self._pos) if leaf is None: return [] context = self._evaluator.create_context(self._get_module(), leaf) definitions = helpers.evaluate_goto_definition(self._evaluator, context, leaf) names = [s.name for s in definitions] defs = [classes.Definition(self._evaluator, name) for name in names] # The additional set here allows the definitions to become unique in an # API sense. In the internals we want to separate more things than in # the API. return helpers.sorted_definitions(set(defs)) def goto_assignments(self, follow_imports=False): """ Return the first definition found, while optionally following imports. Multiple objects may be returned, because Python itself is a dynamic language, which means depending on an option you can have two different versions of a function. :rtype: list of :class:`classes.Definition` """ def filter_follow_imports(names, check): for name in names: if check(name): for result in filter_follow_imports(name.goto(), check): yield result else: yield name tree_name = self._get_module_node().get_name_of_position(self._pos) if tree_name is None: return [] context = self._evaluator.create_context(self._get_module(), tree_name) names = list(self._evaluator.goto(context, tree_name)) if follow_imports: def check(name): if isinstance(name, ModuleName): return False return name.api_type == 'module' else: def check(name): return isinstance(name, imports.SubModuleName) names = filter_follow_imports(names, check) defs = [classes.Definition(self._evaluator, d) for d in set(names)] return helpers.sorted_definitions(defs) def usages(self, additional_module_paths=()): """ Return :class:`classes.Definition` objects, which contain all names that point to the definition of the name under the cursor. This is very useful for refactoring (renaming), or to show all usages of a variable. .. todo:: Implement additional_module_paths :rtype: list of :class:`classes.Definition` """ tree_name = self._get_module_node().get_name_of_position(self._pos) if tree_name is None: # Must be syntax return [] names = usages.usages(self._get_module(), tree_name) definitions = [classes.Definition(self._evaluator, n) for n in names] return helpers.sorted_definitions(definitions) def call_signatures(self): """ Return the function object of the call you're currently in. E.g. if the cursor is here:: abs(# <-- cursor is here This would return the ``abs`` function. On the other hand:: abs()# <-- cursor is here This would return an empty list.. :rtype: list of :class:`classes.CallSignature` """ call_signature_details = \ helpers.get_call_signature_details(self._get_module_node(), self._pos) if call_signature_details is None: return [] context = self._evaluator.create_context( self._get_module(), call_signature_details.bracket_leaf ) definitions = helpers.cache_call_signatures( self._evaluator, context, call_signature_details.bracket_leaf, self._code_lines, self._pos ) debug.speed('func_call followed') return [classes.CallSignature(self._evaluator, d.name, call_signature_details.bracket_leaf.start_pos, call_signature_details.call_index, call_signature_details.keyword_name_str) for d in definitions if hasattr(d, 'py__call__')] def _analysis(self): self._evaluator.is_analysis = True module_node = self._get_module_node() self._evaluator.analysis_modules = [module_node] try: for node in get_executable_nodes(module_node): context = self._get_module().create_context(node) if node.type in ('funcdef', 'classdef'): # Resolve the decorators. tree_name_to_contexts(self._evaluator, context, node.children[1]) elif isinstance(node, tree.Import): import_names = set(node.get_defined_names()) if node.is_nested(): import_names |= set(path[-1] for path in node.get_paths()) for n in import_names: imports.infer_import(context, n) elif node.type == 'expr_stmt': types = context.eval_node(node) for testlist in node.children[:-1:2]: # Iterate tuples. unpack_tuple_to_dict(context, types, testlist) else: if node.type == 'name': defs = self._evaluator.goto_definitions(context, node) else: defs = evaluate_call_of_leaf(context, node) try_iter_content(defs) self._evaluator.reset_recursion_limitations() ana = [a for a in self._evaluator.analysis if self.path == a.path] return sorted(set(ana), key=lambda x: x.line) finally: self._evaluator.is_analysis = False class Interpreter(Script): """ Jedi API for Python REPLs. In addition to completion of simple attribute access, Jedi supports code completion based on static code analysis. Jedi can complete attributes of object which is not initialized yet. >>> from os.path import join >>> namespace = locals() >>> script = Interpreter('join("").up', [namespace]) >>> print(script.completions()[0].name) upper """ def __init__(self, source, namespaces, **kwds): """ Parse `source` and mixin interpreted Python objects from `namespaces`. :type source: str :arg source: Code to parse. :type namespaces: list of dict :arg namespaces: a list of namespace dictionaries such as the one returned by :func:`locals`. Other optional arguments are same as the ones for :class:`Script`. If `line` and `column` are None, they are assumed be at the end of `source`. """ try: namespaces = [dict(n) for n in namespaces] except Exception: raise TypeError("namespaces must be a non-empty list of dicts.") super(Interpreter, self).__init__(source, **kwds) self.namespaces = namespaces def _get_module(self): parser_module = super(Interpreter, self)._get_module_node() return interpreter.MixedModuleContext( self._evaluator, parser_module, self.namespaces, path=self.path ) def names(source=None, path=None, encoding='utf-8', all_scopes=False, definitions=True, references=False): """ Returns a list of `Definition` objects, containing name parts. This means you can call ``Definition.goto_assignments()`` and get the reference of a name. The parameters are the same as in :py:class:`Script`, except or the following ones: :param all_scopes: If True lists the names of all scopes instead of only the module namespace. :param definitions: If True lists the names that have been defined by a class, function or a statement (``a = b`` returns ``a``). :param references: If True lists all the names that are not listed by ``definitions=True``. E.g. ``a = b`` returns ``b``. """ def def_ref_filter(_def): is_def = _def._name.tree_name.is_definition() return definitions and is_def or references and not is_def # Set line/column to a random position, because they don't matter. script = Script(source, line=1, column=0, path=path, encoding=encoding) module_context = script._get_module() defs = [ classes.Definition( script._evaluator, TreeNameDefinition( module_context.create_context(name if name.parent.type == 'file_input' else name.parent), name ) ) for name in get_module_names(script._get_module_node(), all_scopes) ] return sorted(filter(def_ref_filter, defs), key=lambda x: (x.line, x.column)) def preload_module(*modules): """ Preloading modules tells Jedi to load a module now, instead of lazy parsing of modules. Usful for IDEs, to control which modules to load on startup. :param modules: different module names, list of string. """ for m in modules: s = "import %s as x; x." % m Script(s, 1, len(s), None).completions() def set_debug_function(func_cb=debug.print_to_stdout, warnings=True, notices=True, speed=True): """ Define a callback debug function to get all the debug messages. If you don't specify any arguments, debug messages will be printed to stdout. :param func_cb: The callback function for debug messages, with n params. """ debug.debug_function = func_cb debug.enable_warning = warnings debug.enable_notice = notices debug.enable_speed = speed jedi-0.11.1/jedi/api/replstartup.py0000664000175000017500000000150013214571123017005 0ustar davedave00000000000000""" To use Jedi completion in Python interpreter, add the following in your shell setup (e.g., ``.bashrc``):: export PYTHONSTARTUP="$(python -m jedi repl)" Then you will be able to use Jedi completer in your Python interpreter:: $ python Python 2.7.2+ (default, Jul 20 2012, 22:15:08) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.path.join().split().in # doctest: +SKIP os.path.join().split().index os.path.join().split().insert """ import jedi.utils from jedi import __version__ as __jedi_version__ print('REPL completion using Jedi %s' % __jedi_version__) jedi.utils.setup_readline() del jedi # Note: try not to do many things here, as it will contaminate global # namespace of the interpreter. jedi-0.11.1/jedi/api/completion.py0000664000175000017500000002677213214571123016613 0ustar davedave00000000000000from parso.python import token from parso.python import tree from parso.tree import search_ancestor, Leaf from jedi import debug from jedi import settings from jedi.api import classes from jedi.api import helpers from jedi.evaluate import imports from jedi.api import keywords from jedi.evaluate.helpers import evaluate_call_of_leaf from jedi.evaluate.filters import get_global_filters from jedi.parser_utils import get_statement_of_position def get_call_signature_param_names(call_signatures): # add named params for call_sig in call_signatures: for p in call_sig.params: # Allow protected access, because it's a public API. tree_name = p._name.tree_name # Compiled modules typically don't allow keyword arguments. if tree_name is not None: # Allow access on _definition here, because it's a # public API and we don't want to make the internal # Name object public. tree_param = tree.search_ancestor(tree_name, 'param') if tree_param.star_count == 0: # no *args/**kwargs yield p._name def filter_names(evaluator, completion_names, stack, like_name): comp_dct = {} for name in completion_names: if settings.case_insensitive_completion \ and name.string_name.lower().startswith(like_name.lower()) \ or name.string_name.startswith(like_name): new = classes.Completion( evaluator, name, stack, len(like_name) ) k = (new.name, new.complete) # key if k in comp_dct and settings.no_completion_duplicates: comp_dct[k]._same_name_completions.append(new) else: comp_dct[k] = new yield new def get_user_scope(module_context, position): """ Returns the scope in which the user resides. This includes flows. """ user_stmt = get_statement_of_position(module_context.tree_node, position) if user_stmt is None: def scan(scope): for s in scope.children: if s.start_pos <= position <= s.end_pos: if isinstance(s, (tree.Scope, tree.Flow)): return scan(s) or s elif s.type in ('suite', 'decorated'): return scan(s) return None scanned_node = scan(module_context.tree_node) if scanned_node: return module_context.create_context(scanned_node, node_is_context=True) return module_context else: return module_context.create_context(user_stmt) def get_flow_scope_node(module_node, position): node = module_node.get_leaf_for_position(position, include_prefixes=True) while not isinstance(node, (tree.Scope, tree.Flow)): node = node.parent return node class Completion: def __init__(self, evaluator, module, code_lines, position, call_signatures_method): self._evaluator = evaluator self._module_context = module self._module_node = module.tree_node self._code_lines = code_lines # The first step of completions is to get the name self._like_name = helpers.get_on_completion_name(self._module_node, code_lines, position) # The actual cursor position is not what we need to calculate # everything. We want the start of the name we're on. self._position = position[0], position[1] - len(self._like_name) self._call_signatures_method = call_signatures_method def completions(self): completion_names = self._get_context_completions() completions = filter_names(self._evaluator, completion_names, self.stack, self._like_name) return sorted(completions, key=lambda x: (x.name.startswith('__'), x.name.startswith('_'), x.name.lower())) def _get_context_completions(self): """ Analyzes the context that a completion is made in and decides what to return. Technically this works by generating a parser stack and analysing the current stack for possible grammar nodes. Possible enhancements: - global/nonlocal search global - yield from / raise from <- could be only exceptions/generators - In args: */**: no completion - In params (also lambda): no completion before = """ grammar = self._evaluator.grammar try: self.stack = helpers.get_stack_at_position( grammar, self._code_lines, self._module_node, self._position ) except helpers.OnErrorLeaf as e: self.stack = None if e.error_leaf.value == '.': # After ErrorLeaf's that are dots, we will not do any # completions since this probably just confuses the user. return [] # If we don't have a context, just use global completion. return self._global_completions() allowed_keywords, allowed_tokens = \ helpers.get_possible_completion_types(grammar._pgen_grammar, self.stack) if 'if' in allowed_keywords: leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True) previous_leaf = leaf.get_previous_leaf() indent = self._position[1] if not (leaf.start_pos <= self._position <= leaf.end_pos): indent = leaf.start_pos[1] if previous_leaf is not None: stmt = previous_leaf while True: stmt = search_ancestor( stmt, 'if_stmt', 'for_stmt', 'while_stmt', 'try_stmt', 'error_node', ) if stmt is None: break type_ = stmt.type if type_ == 'error_node': first = stmt.children[0] if isinstance(first, Leaf): type_ = first.value + '_stmt' # Compare indents if stmt.start_pos[1] == indent: if type_ == 'if_stmt': allowed_keywords += ['elif', 'else'] elif type_ == 'try_stmt': allowed_keywords += ['except', 'finally', 'else'] elif type_ == 'for_stmt': allowed_keywords.append('else') completion_names = list(self._get_keyword_completion_names(allowed_keywords)) if token.NAME in allowed_tokens or token.INDENT in allowed_tokens: # This means that we actually have to do type inference. symbol_names = list(self.stack.get_node_names(grammar._pgen_grammar)) nodes = list(self.stack.get_nodes()) if nodes and nodes[-1] in ('as', 'def', 'class'): # No completions for ``with x as foo`` and ``import x as foo``. # Also true for defining names as a class or function. return list(self._get_class_context_completions(is_function=True)) elif "import_stmt" in symbol_names: level, names = self._parse_dotted_names(nodes, "import_from" in symbol_names) only_modules = not ("import_from" in symbol_names and 'import' in nodes) completion_names += self._get_importer_names( names, level, only_modules=only_modules, ) elif symbol_names[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.': dot = self._module_node.get_leaf_for_position(self._position) completion_names += self._trailer_completions(dot.get_previous_leaf()) else: completion_names += self._global_completions() completion_names += self._get_class_context_completions(is_function=False) if 'trailer' in symbol_names: call_signatures = self._call_signatures_method() completion_names += get_call_signature_param_names(call_signatures) return completion_names def _get_keyword_completion_names(self, keywords_): for k in keywords_: yield keywords.keyword(self._evaluator, k).name def _global_completions(self): context = get_user_scope(self._module_context, self._position) debug.dbg('global completion scope: %s', context) flow_scope_node = get_flow_scope_node(self._module_node, self._position) filters = get_global_filters( self._evaluator, context, self._position, origin_scope=flow_scope_node ) completion_names = [] for filter in filters: completion_names += filter.values() return completion_names def _trailer_completions(self, previous_leaf): user_context = get_user_scope(self._module_context, self._position) evaluation_context = self._evaluator.create_context( self._module_context, previous_leaf ) contexts = evaluate_call_of_leaf(evaluation_context, previous_leaf) completion_names = [] debug.dbg('trailer completion contexts: %s', contexts) for context in contexts: for filter in context.get_filters( search_global=False, origin_scope=user_context.tree_node): completion_names += filter.values() return completion_names def _parse_dotted_names(self, nodes, is_import_from): level = 0 names = [] for node in nodes[1:]: if node in ('.', '...'): if not names: level += len(node.value) elif node.type == 'dotted_name': names += node.children[::2] elif node.type == 'name': names.append(node) elif node == ',': if not is_import_from: names = [] else: # Here if the keyword `import` comes along it stops checking # for names. break return level, names def _get_importer_names(self, names, level=0, only_modules=True): names = [n.value for n in names] i = imports.Importer(self._evaluator, names, self._module_context, level) return i.completion_names(self._evaluator, only_modules=only_modules) def _get_class_context_completions(self, is_function=True): """ Autocomplete inherited methods when overriding in child class. """ leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True) cls = tree.search_ancestor(leaf, 'classdef') if isinstance(cls, (tree.Class, tree.Function)): # Complete the methods that are defined in the super classes. random_context = self._module_context.create_context( cls, node_is_context=True ) else: return if cls.start_pos[1] >= leaf.start_pos[1]: return filters = random_context.get_filters(search_global=False, is_instance=True) # The first dict is the dictionary of class itself. next(filters) for filter in filters: for name in filter.values(): if (name.api_type == 'function') == is_function: yield name jedi-0.11.1/jedi/api/helpers.py0000664000175000017500000002506513214571123016076 0ustar davedave00000000000000""" Helpers for the API """ import re from collections import namedtuple from textwrap import dedent from parso.python.parser import Parser from parso.python import tree from parso import split_lines from jedi._compatibility import u from jedi.evaluate.syntax_tree import eval_atom from jedi.evaluate.helpers import evaluate_call_of_leaf from jedi.cache import time_cache CompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name']) def sorted_definitions(defs): # Note: `or ''` below is required because `module_path` could be return sorted(defs, key=lambda x: (x.module_path or '', x.line or 0, x.column or 0)) def get_on_completion_name(module_node, lines, position): leaf = module_node.get_leaf_for_position(position) if leaf is None or leaf.type in ('string', 'error_leaf'): # Completions inside strings are a bit special, we need to parse the # string. The same is true for comments and error_leafs. line = lines[position[0] - 1] # The first step of completions is to get the name return re.search(r'(?!\d)\w+$|$', line[:position[1]]).group(0) elif leaf.type not in ('name', 'keyword'): return '' return leaf.value[:position[1] - leaf.start_pos[1]] def _get_code(code_lines, start_pos, end_pos): # Get relevant lines. lines = code_lines[start_pos[0] - 1:end_pos[0]] # Remove the parts at the end of the line. lines[-1] = lines[-1][:end_pos[1]] # Remove first line indentation. lines[0] = lines[0][start_pos[1]:] return '\n'.join(lines) class OnErrorLeaf(Exception): @property def error_leaf(self): return self.args[0] def _is_on_comment(leaf, position): comment_lines = split_lines(leaf.prefix) difference = leaf.start_pos[0] - position[0] prefix_start_pos = leaf.get_start_pos_of_prefix() if difference == 0: indent = leaf.start_pos[1] elif position[0] == prefix_start_pos[0]: indent = prefix_start_pos[1] else: indent = 0 line = comment_lines[-difference - 1][:position[1] - indent] return '#' in line def _get_code_for_stack(code_lines, module_node, position): leaf = module_node.get_leaf_for_position(position, include_prefixes=True) # It might happen that we're on whitespace or on a comment. This means # that we would not get the right leaf. if leaf.start_pos >= position: if _is_on_comment(leaf, position): return u('') # If we're not on a comment simply get the previous leaf and proceed. leaf = leaf.get_previous_leaf() if leaf is None: return u('') # At the beginning of the file. is_after_newline = leaf.type == 'newline' while leaf.type == 'newline': leaf = leaf.get_previous_leaf() if leaf is None: return u('') if leaf.type == 'error_leaf' or leaf.type == 'string': if leaf.start_pos[0] < position[0]: # On a different line, we just begin anew. return u('') # Error leafs cannot be parsed, completion in strings is also # impossible. raise OnErrorLeaf(leaf) else: user_stmt = leaf while True: if user_stmt.parent.type in ('file_input', 'suite', 'simple_stmt'): break user_stmt = user_stmt.parent if is_after_newline: if user_stmt.start_pos[1] > position[1]: # This means that it's actually a dedent and that means that we # start without context (part of a suite). return u('') # This is basically getting the relevant lines. return _get_code(code_lines, user_stmt.get_start_pos_of_prefix(), position) def get_stack_at_position(grammar, code_lines, module_node, pos): """ Returns the possible node names (e.g. import_from, xor_test or yield_stmt). """ class EndMarkerReached(Exception): pass def tokenize_without_endmarker(code): # TODO This is for now not an official parso API that exists purely # for Jedi. tokens = grammar._tokenize(code) for token_ in tokens: if token_.string == safeword: raise EndMarkerReached() else: yield token_ # The code might be indedented, just remove it. code = dedent(_get_code_for_stack(code_lines, module_node, pos)) # We use a word to tell Jedi when we have reached the start of the # completion. # Use Z as a prefix because it's not part of a number suffix. safeword = 'ZZZ_USER_WANTS_TO_COMPLETE_HERE_WITH_JEDI' code = code + safeword p = Parser(grammar._pgen_grammar, error_recovery=True) try: p.parse(tokens=tokenize_without_endmarker(code)) except EndMarkerReached: return Stack(p.pgen_parser.stack) raise SystemError("This really shouldn't happen. There's a bug in Jedi.") class Stack(list): def get_node_names(self, grammar): for dfa, state, (node_number, nodes) in self: yield grammar.number2symbol[node_number] def get_nodes(self): for dfa, state, (node_number, nodes) in self: for node in nodes: yield node def get_possible_completion_types(pgen_grammar, stack): def add_results(label_index): try: grammar_labels.append(inversed_tokens[label_index]) except KeyError: try: keywords.append(inversed_keywords[label_index]) except KeyError: t, v = pgen_grammar.labels[label_index] assert t >= 256 # See if it's a symbol and if we're in its first set inversed_keywords itsdfa = pgen_grammar.dfas[t] itsstates, itsfirst = itsdfa for first_label_index in itsfirst.keys(): add_results(first_label_index) inversed_keywords = dict((v, k) for k, v in pgen_grammar.keywords.items()) inversed_tokens = dict((v, k) for k, v in pgen_grammar.tokens.items()) keywords = [] grammar_labels = [] def scan_stack(index): dfa, state, node = stack[index] states, first = dfa arcs = states[state] for label_index, new_state in arcs: if label_index == 0: # An accepting state, check the stack below. scan_stack(index - 1) else: add_results(label_index) scan_stack(-1) return keywords, grammar_labels def evaluate_goto_definition(evaluator, context, leaf): if leaf.type == 'name': # In case of a name we can just use goto_definition which does all the # magic itself. return evaluator.goto_definitions(context, leaf) parent = leaf.parent if parent.type == 'atom': return context.eval_node(leaf.parent) elif parent.type == 'trailer': return evaluate_call_of_leaf(context, leaf) elif isinstance(leaf, tree.Literal): return eval_atom(context, leaf) return [] CallSignatureDetails = namedtuple( 'CallSignatureDetails', ['bracket_leaf', 'call_index', 'keyword_name_str'] ) def _get_index_and_key(nodes, position): """ Returns the amount of commas and the keyword argument string. """ nodes_before = [c for c in nodes if c.start_pos < position] if nodes_before[-1].type == 'arglist': nodes_before = [c for c in nodes_before[-1].children if c.start_pos < position] key_str = None if nodes_before: last = nodes_before[-1] if last.type == 'argument' and last.children[1].end_pos <= position: # Checked if the argument key_str = last.children[0].value elif last == '=': key_str = nodes_before[-2].value return nodes_before.count(','), key_str def _get_call_signature_details_from_error_node(node, position): for index, element in reversed(list(enumerate(node.children))): # `index > 0` means that it's a trailer and not an atom. if element == '(' and element.end_pos <= position and index > 0: # It's an error node, we don't want to match too much, just # until the parentheses is enough. children = node.children[index:] name = element.get_previous_leaf() if name is None: continue if name.type == 'name' or name.parent.type in ('trailer', 'atom'): return CallSignatureDetails( element, *_get_index_and_key(children, position) ) def get_call_signature_details(module, position): leaf = module.get_leaf_for_position(position, include_prefixes=True) if leaf.start_pos >= position: # Whitespace / comments after the leaf count towards the previous leaf. leaf = leaf.get_previous_leaf() if leaf is None: return None if leaf == ')': if leaf.end_pos == position: leaf = leaf.get_next_leaf() # Now that we know where we are in the syntax tree, we start to look at # parents for possible function definitions. node = leaf.parent while node is not None: if node.type in ('funcdef', 'classdef'): # Don't show call signatures if there's stuff before it that just # makes it feel strange to have a call signature. return None for n in node.children[::-1]: if n.start_pos < position and n.type == 'error_node': result = _get_call_signature_details_from_error_node(n, position) if result is not None: return result if node.type == 'trailer' and node.children[0] == '(': leaf = node.get_previous_leaf() if leaf is None: return None return CallSignatureDetails( node.children[0], *_get_index_and_key(node.children, position)) node = node.parent return None @time_cache("call_signatures_validity") def cache_call_signatures(evaluator, context, bracket_leaf, code_lines, user_pos): """This function calculates the cache key.""" index = user_pos[0] - 1 before_cursor = code_lines[index][:user_pos[1]] other_lines = code_lines[bracket_leaf.start_pos[0]:index] whole = '\n'.join(other_lines + [before_cursor]) before_bracket = re.match(r'.*\(', whole, re.DOTALL) module_path = context.get_root_context().py__file__() if module_path is None: yield None # Don't cache! else: yield (module_path, before_bracket, bracket_leaf.start_pos) yield evaluate_goto_definition( evaluator, context, bracket_leaf.get_previous_leaf() ) jedi-0.11.1/jedi/debug.py0000664000175000017500000000653113214571123014746 0ustar davedave00000000000000from jedi._compatibility import encoding, is_py3, u import os import time def _lazy_colorama_init(): """ Lazily init colorama if necessary, not to screw up stdout is debug not enabled. This version of the function does nothing. """ pass _inited=False try: if os.name == 'nt': # Does not work on Windows, as pyreadline and colorama interfere raise ImportError else: # Use colorama for nicer console output. from colorama import Fore, init from colorama import initialise def _lazy_colorama_init(): """ Lazily init colorama if necessary, not to screw up stdout is debug not enabled. This version of the function does init colorama. """ global _inited if not _inited: # pytest resets the stream at the end - causes troubles. Since # after every output the stream is reset automatically we don't # need this. initialise.atexit_done = True try: init() except Exception: # Colorama fails with initializing under vim and is buggy in # version 0.3.6. pass _inited = True except ImportError: class Fore(object): RED = '' GREEN = '' YELLOW = '' MAGENTA = '' RESET = '' NOTICE = object() WARNING = object() SPEED = object() enable_speed = False enable_warning = False enable_notice = False # callback, interface: level, str debug_function = None _debug_indent = 0 _start_time = time.time() def reset_time(): global _start_time, _debug_indent _start_time = time.time() _debug_indent = 0 def increase_indent(func): """Decorator for makin """ def wrapper(*args, **kwargs): global _debug_indent _debug_indent += 1 try: return func(*args, **kwargs) finally: _debug_indent -= 1 return wrapper def dbg(message, *args, **kwargs): """ Looks at the stack, to see if a debug message should be printed. """ # Python 2 compatibility, because it doesn't understand default args color = kwargs.pop('color', 'GREEN') assert color if debug_function and enable_notice: i = ' ' * _debug_indent _lazy_colorama_init() debug_function(color, i + 'dbg: ' + message % tuple(u(repr(a)) for a in args)) def warning(message, *args, **kwargs): format = kwargs.pop('format', True) assert not kwargs if debug_function and enable_warning: i = ' ' * _debug_indent if format: message = message % tuple(u(repr(a)) for a in args) debug_function('RED', i + 'warning: ' + message) def speed(name): if debug_function and enable_speed: now = time.time() i = ' ' * _debug_indent debug_function('YELLOW', i + 'speed: ' + '%s %s' % (name, now - _start_time)) def print_to_stdout(color, str_out): """ The default debug function that prints to standard out. :param str color: A string that is an attribute of ``colorama.Fore``. """ col = getattr(Fore, color) _lazy_colorama_init() if not is_py3: str_out = str_out.encode(encoding, 'replace') print(col + str_out + Fore.RESET) # debug_function = print_to_stdout jedi-0.11.1/jedi/settings.py0000664000175000017500000000727013214571123015521 0ustar davedave00000000000000""" This module contains variables with global |jedi| settings. To change the behavior of |jedi|, change the variables defined in :mod:`jedi.settings`. Plugins should expose an interface so that the user can adjust the configuration. Example usage:: from jedi import settings settings.case_insensitive_completion = True Completion output ~~~~~~~~~~~~~~~~~ .. autodata:: case_insensitive_completion .. autodata:: add_bracket_after_function .. autodata:: no_completion_duplicates Filesystem cache ~~~~~~~~~~~~~~~~ .. autodata:: cache_directory .. autodata:: use_filesystem_cache Parser ~~~~~~ .. autodata:: fast_parser Dynamic stuff ~~~~~~~~~~~~~ .. autodata:: dynamic_array_additions .. autodata:: dynamic_params .. autodata:: dynamic_params_for_other_modules .. autodata:: additional_dynamic_modules .. autodata:: auto_import_modules Caching ~~~~~~~ .. autodata:: call_signatures_validity """ import os import platform # ---------------- # completion output settings # ---------------- case_insensitive_completion = True """ The completion is by default case insensitive. """ add_bracket_after_function = False """ Adds an opening bracket after a function, because that's normal behaviour. Removed it again, because in VIM that is not very practical. """ no_completion_duplicates = True """ If set, completions with the same name don't appear in the output anymore, but are in the `same_name_completions` attribute. """ # ---------------- # Filesystem cache # ---------------- use_filesystem_cache = True """ Use filesystem cache to save once parsed files with pickle. """ if platform.system().lower() == 'windows': _cache_directory = os.path.join(os.getenv('APPDATA') or '~', 'Jedi', 'Jedi') elif platform.system().lower() == 'darwin': _cache_directory = os.path.join('~', 'Library', 'Caches', 'Jedi') else: _cache_directory = os.path.join(os.getenv('XDG_CACHE_HOME') or '~/.cache', 'jedi') cache_directory = os.path.expanduser(_cache_directory) """ The path where the cache is stored. On Linux, this defaults to ``~/.cache/jedi/``, on OS X to ``~/Library/Caches/Jedi/`` and on Windows to ``%APPDATA%\\Jedi\\Jedi\\``. On Linux, if environment variable ``$XDG_CACHE_HOME`` is set, ``$XDG_CACHE_HOME/jedi`` is used instead of the default one. """ # ---------------- # parser # ---------------- fast_parser = True """ Use the fast parser. This means that reparsing is only being done if something has been changed e.g. to a function. If this happens, only the function is being reparsed. """ # ---------------- # dynamic stuff # ---------------- dynamic_array_additions = True """ check for `append`, etc. on arrays: [], {}, () as well as list/set calls. """ dynamic_params = True """ A dynamic param completion, finds the callees of the function, which define the params of a function. """ dynamic_params_for_other_modules = True """ Do the same for other modules. """ additional_dynamic_modules = [] """ Additional modules in which |jedi| checks if statements are to be found. This is practical for IDEs, that want to administrate their modules themselves. """ dynamic_flow_information = True """ Check for `isinstance` and other information to infer a type. """ auto_import_modules = [ 'hashlib', # setattr ] """ Modules that are not analyzed but imported, although they contain Python code. This improves autocompletion for libraries that use ``setattr`` or ``globals()`` modifications a lot. """ # ---------------- # caching validity (time) # ---------------- call_signatures_validity = 3.0 """ Finding function calls might be slow (0.1-0.5s). This is not acceptible for normal writing. Therefore cache it for a short time. """ jedi-0.11.1/jedi/__main__.py0000664000175000017500000000262313214571123015376 0ustar davedave00000000000000import sys from os.path import join, dirname, abspath, isdir def _start_linter(): """ This is a pre-alpha API. You're not supposed to use it at all, except for testing. It will very likely change. """ import jedi if '--debug' in sys.argv: jedi.set_debug_function() for path in sys.argv[2:]: if path.startswith('--'): continue if isdir(path): import fnmatch import os paths = [] for root, dirnames, filenames in os.walk(path): for filename in fnmatch.filter(filenames, '*.py'): paths.append(os.path.join(root, filename)) else: paths = [path] try: for path in paths: for error in jedi.Script(path=path)._analysis(): print(error) except Exception: if '--pdb' in sys.argv: import traceback traceback.print_exc() import pdb pdb.post_mortem() else: raise if len(sys.argv) == 2 and sys.argv[1] == 'repl': # don't want to use __main__ only for repl yet, maybe we want to use it for # something else. So just use the keyword ``repl`` for now. print(join(dirname(abspath(__file__)), 'api', 'replstartup.py')) elif len(sys.argv) > 1 and sys.argv[1] == 'linter': _start_linter() jedi-0.11.1/jedi/evaluate/0000775000175000017500000000000013214571377015122 5ustar davedave00000000000000jedi-0.11.1/jedi/evaluate/pep0484.py0000664000175000017500000001762713214571123016602 0ustar davedave00000000000000""" PEP 0484 ( https://www.python.org/dev/peps/pep-0484/ ) describes type hints through function annotations. There is a strong suggestion in this document that only the type of type hinting defined in PEP0484 should be allowed as annotations in future python versions. The (initial / probably incomplete) implementation todo list for pep-0484: v Function parameter annotations with builtin/custom type classes v Function returntype annotations with builtin/custom type classes v Function parameter annotations with strings (forward reference) v Function return type annotations with strings (forward reference) v Local variable type hints v Assigned types: `Url = str\ndef get(url:Url) -> str:` v Type hints in `with` statements x Stub files support x support `@no_type_check` and `@no_type_check_decorator` x support for typing.cast() operator x support for type hint comments for functions, `# type: (int, str) -> int`. See comment from Guido https://github.com/davidhalter/jedi/issues/662 """ import os import re from parso import ParserSyntaxError from parso.python import tree from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate import compiled from jedi.evaluate.base_context import NO_CONTEXTS, ContextSet from jedi.evaluate.lazy_context import LazyTreeContext from jedi.evaluate.context import ModuleContext from jedi import debug from jedi import _compatibility from jedi import parser_utils def _evaluate_for_annotation(context, annotation, index=None): """ Evaluates a string-node, looking for an annotation If index is not None, the annotation is expected to be a tuple and we're interested in that index """ if annotation is not None: context_set = context.eval_node(_fix_forward_reference(context, annotation)) if index is not None: context_set = context_set.filter( lambda context: context.array_type == 'tuple' \ and len(list(context.py__iter__())) >= index ).py__getitem__(index) return context_set.execute_evaluated() else: return NO_CONTEXTS def _fix_forward_reference(context, node): evaled_nodes = context.eval_node(node) if len(evaled_nodes) != 1: debug.warning("Eval'ed typing index %s should lead to 1 object, " " not %s" % (node, evaled_nodes)) return node evaled_node = list(evaled_nodes)[0] if isinstance(evaled_node, compiled.CompiledObject) and \ isinstance(evaled_node.obj, str): try: new_node = context.evaluator.grammar.parse( _compatibility.unicode(evaled_node.obj), start_symbol='eval_input', error_recovery=False ) except ParserSyntaxError: debug.warning('Annotation not parsed: %s' % evaled_node.obj) return node else: module = node.get_root_node() parser_utils.move(new_node, module.end_pos[0]) new_node.parent = context.tree_node return new_node else: return node @evaluator_method_cache() def infer_param(execution_context, param): annotation = param.annotation module_context = execution_context.get_root_context() return _evaluate_for_annotation(module_context, annotation) def py__annotations__(funcdef): return_annotation = funcdef.annotation if return_annotation: dct = {'return': return_annotation} else: dct = {} for function_param in funcdef.get_params(): param_annotation = function_param.annotation if param_annotation is not None: dct[function_param.name.value] = param_annotation return dct @evaluator_method_cache() def infer_return_types(function_context): annotation = py__annotations__(function_context.tree_node).get("return", None) module_context = function_context.get_root_context() return _evaluate_for_annotation(module_context, annotation) _typing_module = None def _get_typing_replacement_module(grammar): """ The idea is to return our jedi replacement for the PEP-0484 typing module as discussed at https://github.com/davidhalter/jedi/issues/663 """ global _typing_module if _typing_module is None: typing_path = \ os.path.abspath(os.path.join(__file__, "../jedi_typing.py")) with open(typing_path) as f: code = _compatibility.unicode(f.read()) _typing_module = grammar.parse(code) return _typing_module def py__getitem__(context, typ, node): if not typ.get_root_context().name.string_name == "typing": return None # we assume that any class using [] in a module called # "typing" with a name for which we have a replacement # should be replaced by that class. This is not 100% # airtight but I don't have a better idea to check that it's # actually the PEP-0484 typing module and not some other if node.type == "subscriptlist": nodes = node.children[::2] # skip the commas else: nodes = [node] del node nodes = [_fix_forward_reference(context, node) for node in nodes] type_name = typ.name.string_name # hacked in Union and Optional, since it's hard to do nicely in parsed code if type_name in ("Union", '_Union'): # In Python 3.6 it's still called typing.Union but it's an instance # called _Union. return ContextSet.from_sets(context.eval_node(node) for node in nodes) if type_name in ("Optional", '_Optional'): # Here we have the same issue like in Union. Therefore we also need to # check for the instance typing._Optional (Python 3.6). return context.eval_node(nodes[0]) typing = ModuleContext( context.evaluator, module_node=_get_typing_replacement_module(context.evaluator.latest_grammar), path=None ) factories = typing.py__getattribute__("factory") assert len(factories) == 1 factory = list(factories)[0] assert factory function_body_nodes = factory.tree_node.children[4].children valid_classnames = set(child.name.value for child in function_body_nodes if isinstance(child, tree.Class)) if type_name not in valid_classnames: return None compiled_classname = compiled.create(context.evaluator, type_name) from jedi.evaluate.context.iterable import FakeSequence args = FakeSequence( context.evaluator, "tuple", [LazyTreeContext(context, n) for n in nodes] ) result = factory.execute_evaluated(compiled_classname, args) return result def find_type_from_comment_hint_for(context, node, name): return _find_type_from_comment_hint(context, node, node.children[1], name) def find_type_from_comment_hint_with(context, node, name): assert len(node.children[1].children) == 3, \ "Can only be here when children[1] is 'foo() as f'" varlist = node.children[1].children[2] return _find_type_from_comment_hint(context, node, varlist, name) def find_type_from_comment_hint_assign(context, node, name): return _find_type_from_comment_hint(context, node, node.children[0], name) def _find_type_from_comment_hint(context, node, varlist, name): index = None if varlist.type in ("testlist_star_expr", "exprlist", "testlist"): # something like "a, b = 1, 2" index = 0 for child in varlist.children: if child == name: break if child.type == "operator": continue index += 1 else: return [] comment = parser_utils.get_following_comment_same_line(node) if comment is None: return [] match = re.match(r"^#\s*type:\s*([^#]*)", comment) if not match: return [] annotation = tree.String( repr(str(match.group(1).strip())), node.start_pos) annotation.parent = node.parent return _evaluate_for_annotation(context, annotation, index) jedi-0.11.1/jedi/evaluate/finder.py0000664000175000017500000002515213214571123016735 0ustar davedave00000000000000""" Searching for names with given scope and name. This is very central in Jedi and Python. The name resolution is quite complicated with descripter, ``__getattribute__``, ``__getattr__``, ``global``, etc. If you want to understand name resolution, please read the first few chapters in http://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/. Flow checks +++++++++++ Flow checks are not really mature. There's only a check for ``isinstance``. It would check whether a flow has the form of ``if isinstance(a, type_or_tuple)``. Unfortunately every other thing is being ignored (e.g. a == '' would be easy to check for -> a is a string). There's big potential in these checks. """ from parso.python import tree from parso.tree import search_ancestor from jedi import debug from jedi import settings from jedi.evaluate.context import AbstractInstanceContext from jedi.evaluate import compiled from jedi.evaluate import analysis from jedi.evaluate import flow_analysis from jedi.evaluate.arguments import TreeArguments from jedi.evaluate import helpers from jedi.evaluate.context import iterable from jedi.evaluate.filters import get_global_filters, TreeNameDefinition from jedi.evaluate.base_context import ContextSet from jedi.parser_utils import is_scope, get_parent_scope class NameFinder(object): def __init__(self, evaluator, context, name_context, name_or_str, position=None, analysis_errors=True): self._evaluator = evaluator # Make sure that it's not just a syntax tree node. self._context = context self._name_context = name_context self._name = name_or_str if isinstance(name_or_str, tree.Name): self._string_name = name_or_str.value else: self._string_name = name_or_str self._position = position self._found_predefined_types = None self._analysis_errors = analysis_errors @debug.increase_indent def find(self, filters, attribute_lookup): """ :params bool attribute_lookup: Tell to logic if we're accessing the attribute or the contents of e.g. a function. """ names = self.filter_name(filters) if self._found_predefined_types is not None and names: check = flow_analysis.reachability_check( self._context, self._context.tree_node, self._name) if check is flow_analysis.UNREACHABLE: return ContextSet() return self._found_predefined_types types = self._names_to_types(names, attribute_lookup) if not names and self._analysis_errors and not types \ and not (isinstance(self._name, tree.Name) and isinstance(self._name.parent.parent, tree.Param)): if isinstance(self._name, tree.Name): if attribute_lookup: analysis.add_attribute_error( self._name_context, self._context, self._name) else: message = ("NameError: name '%s' is not defined." % self._string_name) analysis.add(self._name_context, 'name-error', self._name, message) return types def _get_origin_scope(self): if isinstance(self._name, tree.Name): scope = self._name while scope.parent is not None: # TODO why if classes? if not isinstance(scope, tree.Scope): break scope = scope.parent return scope else: return None def get_filters(self, search_global=False): origin_scope = self._get_origin_scope() if search_global: return get_global_filters(self._evaluator, self._context, self._position, origin_scope) else: return self._context.get_filters(search_global, self._position, origin_scope=origin_scope) def filter_name(self, filters): """ Searches names that are defined in a scope (the different ``filters``), until a name fits. """ names = [] if self._context.predefined_names: # TODO is this ok? node might not always be a tree.Name node = self._name while node is not None and not is_scope(node): node = node.parent if node.type in ("if_stmt", "for_stmt", "comp_for"): try: name_dict = self._context.predefined_names[node] types = name_dict[self._string_name] except KeyError: continue else: self._found_predefined_types = types break for filter in filters: names = filter.get(self._string_name) if names: if len(names) == 1: n, = names if isinstance(n, TreeNameDefinition): # Something somewhere went terribly wrong. This # typically happens when using goto on an import in an # __init__ file. I think we need a better solution, but # it's kind of hard, because for Jedi it's not clear # that that name has not been defined, yet. if n.tree_name == self._name: if self._name.get_definition().type == 'import_from': continue break debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self._string_name, self._context, names, self._position) return list(names) def _check_getattr(self, inst): """Checks for both __getattr__ and __getattribute__ methods""" # str is important, because it shouldn't be `Name`! name = compiled.create(self._evaluator, self._string_name) # This is a little bit special. `__getattribute__` is in Python # executed before `__getattr__`. But: I know no use case, where # this could be practical and where Jedi would return wrong types. # If you ever find something, let me know! # We are inversing this, because a hand-crafted `__getattribute__` # could still call another hand-crafted `__getattr__`, but not the # other way around. names = (inst.get_function_slot_names('__getattr__') or inst.get_function_slot_names('__getattribute__')) return inst.execute_function_slots(names, name) def _names_to_types(self, names, attribute_lookup): contexts = ContextSet.from_sets(name.infer() for name in names) debug.dbg('finder._names_to_types: %s -> %s', names, contexts) if not names and isinstance(self._context, AbstractInstanceContext): # handling __getattr__ / __getattribute__ return self._check_getattr(self._context) # Add isinstance and other if/assert knowledge. if not contexts and isinstance(self._name, tree.Name) and \ not isinstance(self._name_context, AbstractInstanceContext): flow_scope = self._name base_node = self._name_context.tree_node if base_node.type == 'comp_for': return contexts while True: flow_scope = get_parent_scope(flow_scope, include_flows=True) n = _check_flow_information(self._name_context, flow_scope, self._name, self._position) if n is not None: return n if flow_scope == base_node: break return contexts def _check_flow_information(context, flow, search_name, pos): """ Try to find out the type of a variable just with the information that is given by the flows: e.g. It is also responsible for assert checks.:: if isinstance(k, str): k. # <- completion here ensures that `k` is a string. """ if not settings.dynamic_flow_information: return None result = None if is_scope(flow): # Check for asserts. module_node = flow.get_root_node() try: names = module_node.get_used_names()[search_name.value] except KeyError: return None names = reversed([ n for n in names if flow.start_pos <= n.start_pos < (pos or flow.end_pos) ]) for name in names: ass = search_ancestor(name, 'assert_stmt') if ass is not None: result = _check_isinstance_type(context, ass.assertion, search_name) if result is not None: return result if flow.type in ('if_stmt', 'while_stmt'): potential_ifs = [c for c in flow.children[1::4] if c != ':'] for if_test in reversed(potential_ifs): if search_name.start_pos > if_test.end_pos: return _check_isinstance_type(context, if_test, search_name) return result def _check_isinstance_type(context, element, search_name): try: assert element.type in ('power', 'atom_expr') # this might be removed if we analyze and, etc assert len(element.children) == 2 first, trailer = element.children assert first.type == 'name' and first.value == 'isinstance' assert trailer.type == 'trailer' and trailer.children[0] == '(' assert len(trailer.children) == 3 # arglist stuff arglist = trailer.children[1] args = TreeArguments(context.evaluator, context, arglist, trailer) param_list = list(args.unpack()) # Disallow keyword arguments assert len(param_list) == 2 (key1, lazy_context_object), (key2, lazy_context_cls) = param_list assert key1 is None and key2 is None call = helpers.call_of_leaf(search_name) is_instance_call = helpers.call_of_leaf(lazy_context_object.data) # Do a simple get_code comparison. They should just have the same code, # and everything will be all right. normalize = context.evaluator.grammar._normalize assert normalize(is_instance_call) == normalize(call) except AssertionError: return None context_set = ContextSet() for cls_or_tup in lazy_context_cls.infer(): if isinstance(cls_or_tup, iterable.AbstractIterable) and \ cls_or_tup.array_type == 'tuple': for lazy_context in cls_or_tup.py__iter__(): for context in lazy_context.infer(): context_set |= context.execute_evaluated() else: context_set |= cls_or_tup.execute_evaluated() return context_set jedi-0.11.1/jedi/evaluate/filters.py0000664000175000017500000003375313214571123017144 0ustar davedave00000000000000""" Filters are objects that you can use to filter names in different scopes. They are needed for name resolution. """ from abc import abstractmethod from parso.tree import search_ancestor from jedi._compatibility import is_py3 from jedi.evaluate import flow_analysis from jedi.evaluate.base_context import ContextSet, Context from jedi.parser_utils import get_parent_scope from jedi.evaluate.utils import to_list class AbstractNameDefinition(object): start_pos = None string_name = None parent_context = None tree_name = None @abstractmethod def infer(self): raise NotImplementedError @abstractmethod def goto(self): # Typically names are already definitions and therefore a goto on that # name will always result on itself. return set([self]) def get_root_context(self): return self.parent_context.get_root_context() def __repr__(self): if self.start_pos is None: return '<%s: %s>' % (self.__class__.__name__, self.string_name) return '<%s: %s@%s>' % (self.__class__.__name__, self.string_name, self.start_pos) def execute(self, arguments): return self.infer().execute(arguments) def execute_evaluated(self, *args, **kwargs): return self.infer().execute_evaluated(*args, **kwargs) @property def api_type(self): return self.parent_context.api_type class AbstractTreeName(AbstractNameDefinition): def __init__(self, parent_context, tree_name): self.parent_context = parent_context self.tree_name = tree_name def goto(self): return self.parent_context.evaluator.goto(self.parent_context, self.tree_name) @property def string_name(self): return self.tree_name.value @property def start_pos(self): return self.tree_name.start_pos class ContextNameMixin(object): def infer(self): return ContextSet(self._context) def get_root_context(self): if self.parent_context is None: return self._context return super(ContextNameMixin, self).get_root_context() @property def api_type(self): return self._context.api_type class ContextName(ContextNameMixin, AbstractTreeName): def __init__(self, context, tree_name): super(ContextName, self).__init__(context.parent_context, tree_name) self._context = context class TreeNameDefinition(AbstractTreeName): _API_TYPES = dict( import_name='module', import_from='module', funcdef='function', param='param', classdef='class', ) def infer(self): # Refactor this, should probably be here. from jedi.evaluate.syntax_tree import tree_name_to_contexts return tree_name_to_contexts(self.parent_context.evaluator, self.parent_context, self.tree_name) @property def api_type(self): definition = self.tree_name.get_definition(import_name_always=True) if definition is None: return 'statement' return self._API_TYPES.get(definition.type, 'statement') class ParamName(AbstractTreeName): api_type = 'param' def __init__(self, parent_context, tree_name): self.parent_context = parent_context self.tree_name = tree_name def infer(self): return self.get_param().infer() def get_param(self): params = self.parent_context.get_params() param_node = search_ancestor(self.tree_name, 'param') return params[param_node.position_index] class AnonymousInstanceParamName(ParamName): def infer(self): param_node = search_ancestor(self.tree_name, 'param') # TODO I think this should not belong here. It's not even really true, # because classmethod and other descriptors can change it. if param_node.position_index == 0: # This is a speed optimization, to return the self param (because # it's known). This only affects anonymous instances. return ContextSet(self.parent_context.instance) else: return self.get_param().infer() class AbstractFilter(object): _until_position = None def _filter(self, names): if self._until_position is not None: return [n for n in names if n.start_pos < self._until_position] return names @abstractmethod def get(self, name): raise NotImplementedError @abstractmethod def values(self): raise NotImplementedError class AbstractUsedNamesFilter(AbstractFilter): name_class = TreeNameDefinition def __init__(self, context, parser_scope): self._parser_scope = parser_scope self._used_names = self._parser_scope.get_root_node().get_used_names() self.context = context def get(self, name): try: names = self._used_names[str(name)] except KeyError: return [] return self._convert_names(self._filter(names)) def _convert_names(self, names): return [self.name_class(self.context, name) for name in names] def values(self): return self._convert_names(name for name_list in self._used_names.values() for name in self._filter(name_list)) def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.context) class ParserTreeFilter(AbstractUsedNamesFilter): def __init__(self, evaluator, context, node_context=None, until_position=None, origin_scope=None): """ node_context is an option to specify a second context for use cases like the class mro where the parent class of a new name would be the context, but for some type inference it's important to have a local context of the other classes. """ if node_context is None: node_context = context super(ParserTreeFilter, self).__init__(context, node_context.tree_node) self._node_context = node_context self._origin_scope = origin_scope self._until_position = until_position def _filter(self, names): names = super(ParserTreeFilter, self)._filter(names) names = [n for n in names if self._is_name_reachable(n)] return list(self._check_flows(names)) def _is_name_reachable(self, name): if not name.is_definition(): return False parent = name.parent if parent.type == 'trailer': return False base_node = parent if parent.type in ('classdef', 'funcdef') else name return get_parent_scope(base_node) == self._parser_scope def _check_flows(self, names): for name in sorted(names, key=lambda name: name.start_pos, reverse=True): check = flow_analysis.reachability_check( self._node_context, self._parser_scope, name, self._origin_scope ) if check is not flow_analysis.UNREACHABLE: yield name if check is flow_analysis.REACHABLE: break class FunctionExecutionFilter(ParserTreeFilter): param_name = ParamName def __init__(self, evaluator, context, node_context=None, until_position=None, origin_scope=None): super(FunctionExecutionFilter, self).__init__( evaluator, context, node_context, until_position, origin_scope ) @to_list def _convert_names(self, names): for name in names: param = search_ancestor(name, 'param') if param: yield self.param_name(self.context, name) else: yield TreeNameDefinition(self.context, name) class AnonymousInstanceFunctionExecutionFilter(FunctionExecutionFilter): param_name = AnonymousInstanceParamName class GlobalNameFilter(AbstractUsedNamesFilter): def __init__(self, context, parser_scope): super(GlobalNameFilter, self).__init__(context, parser_scope) @to_list def _filter(self, names): for name in names: if name.parent.type == 'global_stmt': yield name class DictFilter(AbstractFilter): def __init__(self, dct): self._dct = dct def get(self, name): try: value = self._convert(name, self._dct[str(name)]) except KeyError: return [] return list(self._filter([value])) def values(self): return self._filter(self._convert(*item) for item in self._dct.items()) def _convert(self, name, value): return value class _BuiltinMappedMethod(Context): """``Generator.__next__`` ``dict.values`` methods and so on.""" api_type = 'function' def __init__(self, builtin_context, method, builtin_func): super(_BuiltinMappedMethod, self).__init__( builtin_context.evaluator, parent_context=builtin_context ) self._method = method self._builtin_func = builtin_func def py__call__(self, params): return self._method(self.parent_context) def __getattr__(self, name): return getattr(self._builtin_func, name) class SpecialMethodFilter(DictFilter): """ A filter for methods that are defined in this module on the corresponding classes like Generator (for __next__, etc). """ class SpecialMethodName(AbstractNameDefinition): api_type = 'function' def __init__(self, parent_context, string_name, callable_, builtin_context): self.parent_context = parent_context self.string_name = string_name self._callable = callable_ self._builtin_context = builtin_context def infer(self): filter = next(self._builtin_context.get_filters()) # We can take the first index, because on builtin methods there's # always only going to be one name. The same is true for the # inferred values. builtin_func = next(iter(filter.get(self.string_name)[0].infer())) return ContextSet(_BuiltinMappedMethod(self.parent_context, self._callable, builtin_func)) def __init__(self, context, dct, builtin_context): super(SpecialMethodFilter, self).__init__(dct) self.context = context self._builtin_context = builtin_context """ This context is what will be used to introspect the name, where as the other context will be used to execute the function. We distinguish, because we have to. """ def _convert(self, name, value): return self.SpecialMethodName(self.context, name, value, self._builtin_context) def has_builtin_methods(cls): base_dct = {} # Need to care properly about inheritance. Builtin Methods should not get # lost, just because they are not mentioned in a class. for base_cls in reversed(cls.__bases__): try: base_dct.update(base_cls.builtin_methods) except AttributeError: pass cls.builtin_methods = base_dct for func in cls.__dict__.values(): try: cls.builtin_methods.update(func.registered_builtin_methods) except AttributeError: pass return cls def register_builtin_method(method_name, python_version_match=None): def wrapper(func): if python_version_match and python_version_match != 2 + int(is_py3): # Some functions do only apply to certain versions. return func dct = func.__dict__.setdefault('registered_builtin_methods', {}) dct[method_name] = func return func return wrapper def get_global_filters(evaluator, context, until_position, origin_scope): """ Returns all filters in order of priority for name resolution. For global name lookups. The filters will handle name resolution themselves, but here we gather possible filters downwards. >>> from jedi._compatibility import u, no_unicode_pprint >>> from jedi import Script >>> script = Script(u(''' ... x = ['a', 'b', 'c'] ... def func(): ... y = None ... ''')) >>> module_node = script._get_module_node() >>> scope = next(module_node.iter_funcdefs()) >>> scope >>> context = script._get_module().create_context(scope) >>> filters = list(get_global_filters(context.evaluator, context, (4, 0), None)) First we get the names names from the function scope. >>> no_unicode_pprint(filters[0]) > >>> sorted(str(n) for n in filters[0].values()) ['', ''] >>> filters[0]._until_position (4, 0) Then it yields the names from one level "lower". In this example, this is the module scope. As a side note, you can see, that the position in the filter is now None, because typically the whole module is loaded before the function is called. >>> filters[1].values() # global names -> there are none in our example. [] >>> list(filters[2].values()) # package modules -> Also empty. [] >>> sorted(name.string_name for name in filters[3].values()) # Module attributes ['__doc__', '__file__', '__name__', '__package__'] >>> print(filters[1]._until_position) None Finally, it yields the builtin filter, if `include_builtin` is true (default). >>> filters[4].values() #doctest: +ELLIPSIS [, ...] """ from jedi.evaluate.context.function import FunctionExecutionContext while context is not None: # Names in methods cannot be resolved within the class. for filter in context.get_filters( search_global=True, until_position=until_position, origin_scope=origin_scope): yield filter if isinstance(context, FunctionExecutionContext): # The position should be reset if the current scope is a function. until_position = None context = context.parent_context # Add builtins to the global scope. for filter in evaluator.BUILTINS.get_filters(search_global=True): yield filter jedi-0.11.1/jedi/evaluate/base_context.py0000664000175000017500000002112613214571123020141 0ustar davedave00000000000000from parso.python.tree import ExprStmt, CompFor from jedi import debug from jedi._compatibility import Python3Method, zip_longest, unicode from jedi.parser_utils import clean_scope_docstring, get_doc_with_call_signature from jedi.common import BaseContextSet, BaseContext class Context(BaseContext): """ Should be defined, otherwise the API returns empty types. """ predefined_names = {} tree_node = None """ To be defined by subclasses. """ @property def api_type(self): # By default just lower name of the class. Can and should be # overwritten. return self.__class__.__name__.lower() @debug.increase_indent def execute(self, arguments): """ In contrast to py__call__ this function is always available. `hasattr(x, py__call__)` can also be checked to see if a context is executable. """ if self.evaluator.is_analysis: arguments.eval_all() debug.dbg('execute: %s %s', self, arguments) from jedi.evaluate import stdlib try: # Some stdlib functions like super(), namedtuple(), etc. have been # hard-coded in Jedi to support them. return stdlib.execute(self.evaluator, self, arguments) except stdlib.NotInStdLib: pass try: func = self.py__call__ except AttributeError: debug.warning("no execution possible %s", self) return NO_CONTEXTS else: context_set = func(arguments) debug.dbg('execute result: %s in %s', context_set, self) return context_set return self.evaluator.execute(self, arguments) def execute_evaluated(self, *value_list): """ Execute a function with already executed arguments. """ from jedi.evaluate.arguments import ValuesArguments arguments = ValuesArguments([ContextSet(value) for value in value_list]) return self.execute(arguments) def iterate(self, contextualized_node=None): debug.dbg('iterate') try: iter_method = self.py__iter__ except AttributeError: if contextualized_node is not None: from jedi.evaluate import analysis analysis.add( contextualized_node.context, 'type-error-not-iterable', contextualized_node.node, message="TypeError: '%s' object is not iterable" % self) return iter([]) else: return iter_method() def get_item(self, index_contexts, contextualized_node): from jedi.evaluate.compiled import CompiledObject from jedi.evaluate.context.iterable import Slice, AbstractIterable result = ContextSet() for index in index_contexts: if isinstance(index, (CompiledObject, Slice)): index = index.obj if type(index) not in (float, int, str, unicode, slice, type(Ellipsis)): # If the index is not clearly defined, we have to get all the # possiblities. if isinstance(self, AbstractIterable) and self.array_type == 'dict': result |= self.dict_values() else: result |= iterate_contexts(ContextSet(self)) continue # The actual getitem call. try: getitem = self.py__getitem__ except AttributeError: from jedi.evaluate import analysis # TODO this context is probably not right. analysis.add( contextualized_node.context, 'type-error-not-subscriptable', contextualized_node.node, message="TypeError: '%s' object is not subscriptable" % self ) else: try: result |= getitem(index) except IndexError: result |= iterate_contexts(ContextSet(self)) except KeyError: # Must be a dict. Lists don't raise KeyErrors. result |= self.dict_values() return result def eval_node(self, node): return self.evaluator.eval_element(self, node) @Python3Method def py__getattribute__(self, name_or_str, name_context=None, position=None, search_global=False, is_goto=False, analysis_errors=True): """ :param position: Position of the last statement -> tuple of line, column """ if name_context is None: name_context = self from jedi.evaluate import finder f = finder.NameFinder(self.evaluator, self, name_context, name_or_str, position, analysis_errors=analysis_errors) filters = f.get_filters(search_global) if is_goto: return f.filter_name(filters) return f.find(filters, attribute_lookup=not search_global) return self.evaluator.find_types( self, name_or_str, name_context, position, search_global, is_goto, analysis_errors) def create_context(self, node, node_is_context=False, node_is_object=False): return self.evaluator.create_context(self, node, node_is_context, node_is_object) def is_class(self): return False def py__bool__(self): """ Since Wrapper is a super class for classes, functions and modules, the return value will always be true. """ return True def py__doc__(self, include_call_signature=False): try: self.tree_node.get_doc_node except AttributeError: return '' else: if include_call_signature: return get_doc_with_call_signature(self.tree_node) else: return clean_scope_docstring(self.tree_node) return None def iterate_contexts(contexts, contextualized_node=None): """ Calls `iterate`, on all contexts but ignores the ordering and just returns all contexts that the iterate functions yield. """ return ContextSet.from_sets( lazy_context.infer() for lazy_context in contexts.iterate(contextualized_node) ) class TreeContext(Context): def __init__(self, evaluator, parent_context=None): super(TreeContext, self).__init__(evaluator, parent_context) self.predefined_names = {} def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.tree_node) class ContextualizedNode(object): def __init__(self, context, node): self.context = context self.node = node def get_root_context(self): return self.context.get_root_context() def infer(self): return self.context.eval_node(self.node) class ContextualizedName(ContextualizedNode): # TODO merge with TreeNameDefinition?! @property def name(self): return self.node def assignment_indexes(self): """ Returns an array of tuple(int, node) of the indexes that are used in tuple assignments. For example if the name is ``y`` in the following code:: x, (y, z) = 2, '' would result in ``[(1, xyz_node), (0, yz_node)]``. """ indexes = [] node = self.node.parent compare = self.node while node is not None: if node.type in ('testlist', 'testlist_comp', 'testlist_star_expr', 'exprlist'): for i, child in enumerate(node.children): if child == compare: indexes.insert(0, (int(i / 2), node)) break else: raise LookupError("Couldn't find the assignment.") elif isinstance(node, (ExprStmt, CompFor)): break compare = node node = node.parent return indexes class ContextSet(BaseContextSet): def py__class__(self): return ContextSet.from_iterable(c.py__class__() for c in self._set) def iterate(self, contextualized_node=None): from jedi.evaluate.lazy_context import get_merged_lazy_context type_iters = [c.iterate(contextualized_node) for c in self._set] for lazy_contexts in zip_longest(*type_iters): yield get_merged_lazy_context( [l for l in lazy_contexts if l is not None] ) NO_CONTEXTS = ContextSet() def iterator_to_context_set(func): def wrapper(*args, **kwargs): return ContextSet.from_iterable(func(*args, **kwargs)) return wrapper jedi-0.11.1/jedi/evaluate/utils.py0000664000175000017500000000645313214571123016631 0ustar davedave00000000000000""" A universal module with functions / classes without dependencies. """ import sys import contextlib import functools from jedi._compatibility import reraise def to_list(func): def wrapper(*args, **kwargs): return list(func(*args, **kwargs)) return wrapper def unite(iterable): """Turns a two dimensional array into a one dimensional.""" return set(typ for types in iterable for typ in types) class UncaughtAttributeError(Exception): """ Important, because `__getattr__` and `hasattr` catch AttributeErrors implicitly. This is really evil (mainly because of `__getattr__`). `hasattr` in Python 2 is even more evil, because it catches ALL exceptions. Therefore this class originally had to be derived from `BaseException` instead of `Exception`. But because I removed relevant `hasattr` from the code base, we can now switch back to `Exception`. :param base: return values of sys.exc_info(). """ def safe_property(func): return property(reraise_uncaught(func)) def reraise_uncaught(func): """ Re-throw uncaught `AttributeError`. Usage: Put ``@rethrow_uncaught`` in front of the function which does **not** suppose to raise `AttributeError`. AttributeError is easily get caught by `hasattr` and another ``except AttributeError`` clause. This becomes problem when you use a lot of "dynamic" attributes (e.g., using ``@property``) because you can't distinguish if the property does not exist for real or some code inside of the "dynamic" attribute through that error. In a well written code, such error should not exist but getting there is very difficult. This decorator is to help us getting there by changing `AttributeError` to `UncaughtAttributeError` to avoid unexpected catch. This helps us noticing bugs earlier and facilitates debugging. .. note:: Treating StopIteration here is easy. Add that feature when needed. """ @functools.wraps(func) def wrapper(*args, **kwds): try: return func(*args, **kwds) except AttributeError: exc_info = sys.exc_info() reraise(UncaughtAttributeError(exc_info[1]), exc_info[2]) return wrapper class PushBackIterator(object): def __init__(self, iterator): self.pushes = [] self.iterator = iterator self.current = None def push_back(self, value): self.pushes.append(value) def __iter__(self): return self def next(self): """ Python 2 Compatibility """ return self.__next__() def __next__(self): if self.pushes: self.current = self.pushes.pop() else: self.current = next(self.iterator) return self.current @contextlib.contextmanager def ignored(*exceptions): """ Context manager that ignores all of the specified exceptions. This will be in the standard library starting with Python 3.4. """ try: yield except exceptions: pass def indent_block(text, indention=' '): """This function indents a text block with a default of four spaces.""" temp = '' while text and text[-1] == '\n': temp += text[-1] text = text[:-1] lines = text.split('\n') return '\n'.join(map(lambda s: indention + s, lines)) + temp jedi-0.11.1/jedi/evaluate/arguments.py0000664000175000017500000002266613214571123017502 0ustar davedave00000000000000from parso.python import tree from jedi._compatibility import zip_longest from jedi import debug from jedi.evaluate import analysis from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts, \ LazyTreeContext, get_merged_lazy_context from jedi.evaluate.filters import ParamName from jedi.evaluate.base_context import NO_CONTEXTS from jedi.evaluate.context import iterable from jedi.evaluate.param import get_params, ExecutedParam def try_iter_content(types, depth=0): """Helper method for static analysis.""" if depth > 10: # It's possible that a loop has references on itself (especially with # CompiledObject). Therefore don't loop infinitely. return for typ in types: try: f = typ.py__iter__ except AttributeError: pass else: for lazy_context in f(): try_iter_content(lazy_context.infer(), depth + 1) class AbstractArguments(object): context = None def eval_argument_clinic(self, parameters): """Uses a list with argument clinic information (see PEP 436).""" iterator = self.unpack() for i, (name, optional, allow_kwargs) in enumerate(parameters): key, argument = next(iterator, (None, None)) if key is not None: raise NotImplementedError if argument is None and not optional: debug.warning('TypeError: %s expected at least %s arguments, got %s', name, len(parameters), i) raise ValueError values = NO_CONTEXTS if argument is None else argument.infer() if not values and not optional: # For the stdlib we always want values. If we don't get them, # that's ok, maybe something is too hard to resolve, however, # we will not proceed with the evaluation of that function. debug.warning('argument_clinic "%s" not resolvable.', name) raise ValueError yield values def eval_all(self, funcdef=None): """ Evaluates all arguments as a support for static analysis (normally Jedi). """ for key, lazy_context in self.unpack(): types = lazy_context.infer() try_iter_content(types) def get_calling_nodes(self): raise NotImplementedError def unpack(self, funcdef=None): raise NotImplementedError def get_params(self, execution_context): return get_params(execution_context, self) class AnonymousArguments(AbstractArguments): def get_params(self, execution_context): from jedi.evaluate.dynamic import search_params return search_params( execution_context.evaluator, execution_context, execution_context.tree_node ) class TreeArguments(AbstractArguments): def __init__(self, evaluator, context, argument_node, trailer=None): """ The argument_node is either a parser node or a list of evaluated objects. Those evaluated objects may be lists of evaluated objects themselves (one list for the first argument, one for the second, etc). :param argument_node: May be an argument_node or a list of nodes. """ self.argument_node = argument_node self.context = context self._evaluator = evaluator self.trailer = trailer # Can be None, e.g. in a class definition. def _split(self): if isinstance(self.argument_node, (tuple, list)): for el in self.argument_node: yield 0, el else: if not (self.argument_node.type == 'arglist' or ( # in python 3.5 **arg is an argument, not arglist (self.argument_node.type == 'argument') and self.argument_node.children[0] in ('*', '**'))): yield 0, self.argument_node return iterator = iter(self.argument_node.children) for child in iterator: if child == ',': continue elif child in ('*', '**'): yield len(child.value), next(iterator) elif child.type == 'argument' and \ child.children[0] in ('*', '**'): assert len(child.children) == 2 yield len(child.children[0].value), child.children[1] else: yield 0, child def unpack(self, funcdef=None): named_args = [] for star_count, el in self._split(): if star_count == 1: arrays = self.context.eval_node(el) iterators = [_iterate_star_args(self.context, a, el, funcdef) for a in arrays] iterators = list(iterators) for values in list(zip_longest(*iterators)): # TODO zip_longest yields None, that means this would raise # an exception? yield None, get_merged_lazy_context( [v for v in values if v is not None] ) elif star_count == 2: arrays = self._evaluator.eval_element(self.context, el) for dct in arrays: for key, values in _star_star_dict(self.context, dct, el, funcdef): yield key, values else: if el.type == 'argument': c = el.children if len(c) == 3: # Keyword argument. named_args.append((c[0].value, LazyTreeContext(self.context, c[2]),)) else: # Generator comprehension. # Include the brackets with the parent. comp = iterable.GeneratorComprehension( self._evaluator, self.context, self.argument_node.parent) yield None, LazyKnownContext(comp) else: yield None, LazyTreeContext(self.context, el) # Reordering var_args is necessary, because star args sometimes appear # after named argument, but in the actual order it's prepended. for named_arg in named_args: yield named_arg def as_tree_tuple_objects(self): for star_count, argument in self._split(): if argument.type == 'argument': argument, default = argument.children[::2] else: default = None yield argument, default, star_count def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.argument_node) def get_calling_nodes(self): from jedi.evaluate.dynamic import MergedExecutedParams old_arguments_list = [] arguments = self while arguments not in old_arguments_list: if not isinstance(arguments, TreeArguments): break old_arguments_list.append(arguments) for name, default, star_count in reversed(list(arguments.as_tree_tuple_objects())): if not star_count or not isinstance(name, tree.Name): continue names = self._evaluator.goto(arguments.context, name) if len(names) != 1: break if not isinstance(names[0], ParamName): break param = names[0].get_param() if isinstance(param, MergedExecutedParams): # For dynamic searches we don't even want to see errors. return [] if not isinstance(param, ExecutedParam): break if param.var_args is None: break arguments = param.var_args break return [arguments.argument_node or arguments.trailer] class ValuesArguments(AbstractArguments): def __init__(self, values_list): self._values_list = values_list def unpack(self, funcdef=None): for values in self._values_list: yield None, LazyKnownContexts(values) def get_calling_nodes(self): return [] def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self._values_list) def _iterate_star_args(context, array, input_node, funcdef=None): try: iter_ = array.py__iter__ except AttributeError: if funcdef is not None: # TODO this funcdef should not be needed. m = "TypeError: %s() argument after * must be a sequence, not %s" \ % (funcdef.name.value, array) analysis.add(context, 'type-error-star', input_node, message=m) else: for lazy_context in iter_(): yield lazy_context def _star_star_dict(context, array, input_node, funcdef): from jedi.evaluate.context.instance import CompiledInstance if isinstance(array, CompiledInstance) and array.name.string_name == 'dict': # For now ignore this case. In the future add proper iterators and just # make one call without crazy isinstance checks. return {} elif isinstance(array, iterable.AbstractIterable) and array.array_type == 'dict': return array.exact_key_items() else: if funcdef is not None: m = "TypeError: %s argument after ** must be a mapping, not %s" \ % (funcdef.name.value, array) analysis.add(context, 'type-error-star-star', input_node, message=m) return {} jedi-0.11.1/jedi/evaluate/site.py0000664000175000017500000000701013214571123016423 0ustar davedave00000000000000"""An adapted copy of relevant site-packages functionality from Python stdlib. This file contains some functions related to handling site-packages in Python with jedi-specific modifications: - the functions operate on sys_path argument rather than global sys.path - in .pth files "import ..." lines that allow execution of arbitrary code are skipped to prevent code injection into jedi interpreter """ # Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved from __future__ import print_function import sys import os def makepath(*paths): dir = os.path.join(*paths) try: dir = os.path.abspath(dir) except OSError: pass return dir, os.path.normcase(dir) def _init_pathinfo(sys_path): """Return a set containing all existing directory entries from sys_path""" d = set() for dir in sys_path: try: if os.path.isdir(dir): dir, dircase = makepath(dir) d.add(dircase) except TypeError: continue return d def addpackage(sys_path, sitedir, name, known_paths): """Process a .pth file within the site-packages directory: For each line in the file, either combine it with sitedir to a path and add that to known_paths, or execute it if it starts with 'import '. """ if known_paths is None: known_paths = _init_pathinfo(sys_path) reset = 1 else: reset = 0 fullname = os.path.join(sitedir, name) try: f = open(fullname, "r") except OSError: return with f: for n, line in enumerate(f): if line.startswith("#"): continue try: if line.startswith(("import ", "import\t")): # Change by immerrr: don't evaluate import lines to prevent # code injection into jedi through pth files. # # exec(line) continue line = line.rstrip() dir, dircase = makepath(sitedir, line) if not dircase in known_paths and os.path.exists(dir): sys_path.append(dir) known_paths.add(dircase) except Exception: print("Error processing line {:d} of {}:\n".format(n+1, fullname), file=sys.stderr) import traceback for record in traceback.format_exception(*sys.exc_info()): for line in record.splitlines(): print(' '+line, file=sys.stderr) print("\nRemainder of file ignored", file=sys.stderr) break if reset: known_paths = None return known_paths def addsitedir(sys_path, sitedir, known_paths=None): """Add 'sitedir' argument to sys_path if missing and handle .pth files in 'sitedir'""" if known_paths is None: known_paths = _init_pathinfo(sys_path) reset = 1 else: reset = 0 sitedir, sitedircase = makepath(sitedir) if not sitedircase in known_paths: sys_path.append(sitedir) # Add path component known_paths.add(sitedircase) try: names = os.listdir(sitedir) except OSError: return names = [name for name in names if name.endswith(".pth")] for name in sorted(names): addpackage(sys_path, sitedir, name, known_paths) if reset: known_paths = None return known_paths jedi-0.11.1/jedi/evaluate/context/0000775000175000017500000000000013214571377016606 5ustar davedave00000000000000jedi-0.11.1/jedi/evaluate/context/module.py0000664000175000017500000001675113214571123020444 0ustar davedave00000000000000import pkgutil import imp import re import os from parso import python_bytes_to_unicode from jedi._compatibility import use_metaclass from jedi.evaluate.cache import CachedMetaClass, evaluator_method_cache from jedi.evaluate.filters import GlobalNameFilter, ContextNameMixin, \ AbstractNameDefinition, ParserTreeFilter, DictFilter from jedi.evaluate import compiled from jedi.evaluate.base_context import TreeContext from jedi.evaluate.imports import SubModuleName, infer_import class _ModuleAttributeName(AbstractNameDefinition): """ For module attributes like __file__, __str__ and so on. """ api_type = 'instance' def __init__(self, parent_module, string_name): self.parent_context = parent_module self.string_name = string_name def infer(self): return compiled.create(self.parent_context.evaluator, str).execute_evaluated() class ModuleName(ContextNameMixin, AbstractNameDefinition): start_pos = 1, 0 def __init__(self, context, name): self._context = context self._name = name @property def string_name(self): return self._name class ModuleContext(use_metaclass(CachedMetaClass, TreeContext)): api_type = 'module' parent_context = None def __init__(self, evaluator, module_node, path): super(ModuleContext, self).__init__(evaluator, parent_context=None) self.tree_node = module_node self._path = path def get_filters(self, search_global, until_position=None, origin_scope=None): yield ParserTreeFilter( self.evaluator, context=self, until_position=until_position, origin_scope=origin_scope ) yield GlobalNameFilter(self, self.tree_node) yield DictFilter(self._sub_modules_dict()) yield DictFilter(self._module_attributes_dict()) for star_module in self.star_imports(): yield next(star_module.get_filters(search_global)) # I'm not sure if the star import cache is really that effective anymore # with all the other really fast import caches. Recheck. Also we would need # to push the star imports into Evaluator.modules, if we reenable this. @evaluator_method_cache([]) def star_imports(self): modules = [] for i in self.tree_node.iter_imports(): if i.is_star_import(): name = i.get_paths()[-1][-1] new = infer_import(self, name) for module in new: if isinstance(module, ModuleContext): modules += module.star_imports() modules += new return modules @evaluator_method_cache() def _module_attributes_dict(self): names = ['__file__', '__package__', '__doc__', '__name__'] # All the additional module attributes are strings. return dict((n, _ModuleAttributeName(self, n)) for n in names) @property def _string_name(self): """ This is used for the goto functions. """ if self._path is None: return '' # no path -> empty name else: sep = (re.escape(os.path.sep),) * 2 r = re.search(r'([^%s]*?)(%s__init__)?(\.py|\.so)?$' % sep, self._path) # Remove PEP 3149 names return re.sub('\.[a-z]+-\d{2}[mud]{0,3}$', '', r.group(1)) @property @evaluator_method_cache() def name(self): return ModuleName(self, self._string_name) def _get_init_directory(self): """ :return: The path to the directory of a package. None in case it's not a package. """ for suffix, _, _ in imp.get_suffixes(): ending = '__init__' + suffix py__file__ = self.py__file__() if py__file__ is not None and py__file__.endswith(ending): # Remove the ending, including the separator. return self.py__file__()[:-len(ending) - 1] return None def py__name__(self): for name, module in self.evaluator.modules.items(): if module == self and name != '': return name return '__main__' def py__file__(self): """ In contrast to Python's __file__ can be None. """ if self._path is None: return None return os.path.abspath(self._path) def py__package__(self): if self._get_init_directory() is None: return re.sub(r'\.?[^\.]+$', '', self.py__name__()) else: return self.py__name__() def _py__path__(self): search_path = self.evaluator.project.sys_path init_path = self.py__file__() if os.path.basename(init_path) == '__init__.py': with open(init_path, 'rb') as f: content = python_bytes_to_unicode(f.read(), errors='replace') # these are strings that need to be used for namespace packages, # the first one is ``pkgutil``, the second ``pkg_resources``. options = ('declare_namespace(__name__)', 'extend_path(__path__') if options[0] in content or options[1] in content: # It is a namespace, now try to find the rest of the # modules on sys_path or whatever the search_path is. paths = set() for s in search_path: other = os.path.join(s, self.name.string_name) if os.path.isdir(other): paths.add(other) if paths: return list(paths) # TODO I'm not sure if this is how nested namespace # packages work. The tests are not really good enough to # show that. # Default to this. return [self._get_init_directory()] @property def py__path__(self): """ Not seen here, since it's a property. The callback actually uses a variable, so use it like:: foo.py__path__(sys_path) In case of a package, this returns Python's __path__ attribute, which is a list of paths (strings). Raises an AttributeError if the module is not a package. """ path = self._get_init_directory() if path is None: raise AttributeError('Only packages have __path__ attributes.') else: return self._py__path__ @evaluator_method_cache() def _sub_modules_dict(self): """ Lists modules in the directory of this module (if this module is a package). """ path = self._path names = {} if path is not None and path.endswith(os.path.sep + '__init__.py'): mods = pkgutil.iter_modules([os.path.dirname(path)]) for module_loader, name, is_pkg in mods: # It's obviously a relative import to the current module. names[name] = SubModuleName(self, name) # TODO add something like this in the future, its cleaner than the # import hacks. # ``os.path`` is a hardcoded exception, because it's a # ``sys.modules`` modification. # if str(self.name) == 'os': # names.append(Name('path', parent_context=self)) return names def py__class__(self): return compiled.get_special_object(self.evaluator, 'MODULE_CLASS') def __repr__(self): return "<%s: %s@%s-%s>" % ( self.__class__.__name__, self._string_name, self.tree_node.start_pos[0], self.tree_node.end_pos[0]) jedi-0.11.1/jedi/evaluate/context/function.py0000664000175000017500000002174013214571123020776 0ustar davedave00000000000000from parso.python import tree from jedi._compatibility import use_metaclass from jedi import debug from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass from jedi.evaluate import compiled from jedi.evaluate import recursion from jedi.evaluate import docstrings from jedi.evaluate import pep0484 from jedi.evaluate import flow_analysis from jedi.evaluate import helpers from jedi.evaluate.arguments import AnonymousArguments from jedi.evaluate.filters import ParserTreeFilter, FunctionExecutionFilter, \ ContextName, AbstractNameDefinition, ParamName from jedi.evaluate.base_context import ContextualizedNode, NO_CONTEXTS, \ ContextSet, TreeContext from jedi.evaluate.lazy_context import LazyKnownContexts, LazyKnownContext, \ LazyTreeContext from jedi.evaluate.context import iterable from jedi import parser_utils from jedi.evaluate.parser_cache import get_yield_exprs class LambdaName(AbstractNameDefinition): string_name = '' def __init__(self, lambda_context): self._lambda_context = lambda_context self.parent_context = lambda_context.parent_context def start_pos(self): return self._lambda_context.tree_node.start_pos def infer(self): return ContextSet(self._lambda_context) class FunctionContext(use_metaclass(CachedMetaClass, TreeContext)): """ Needed because of decorators. Decorators are evaluated here. """ api_type = 'function' def __init__(self, evaluator, parent_context, funcdef): """ This should not be called directly """ super(FunctionContext, self).__init__(evaluator, parent_context) self.tree_node = funcdef def get_filters(self, search_global, until_position=None, origin_scope=None): if search_global: yield ParserTreeFilter( self.evaluator, context=self, until_position=until_position, origin_scope=origin_scope ) else: scope = self.py__class__() for filter in scope.get_filters(search_global=False, origin_scope=origin_scope): yield filter def infer_function_execution(self, function_execution): """ Created to be used by inheritance. """ yield_exprs = get_yield_exprs(self.evaluator, self.tree_node) if yield_exprs: return ContextSet(iterable.Generator(self.evaluator, function_execution)) else: return function_execution.get_return_values() def get_function_execution(self, arguments=None): if arguments is None: arguments = AnonymousArguments() return FunctionExecutionContext(self.evaluator, self.parent_context, self, arguments) def py__call__(self, arguments): function_execution = self.get_function_execution(arguments) return self.infer_function_execution(function_execution) def py__class__(self): # This differentiation is only necessary for Python2. Python3 does not # use a different method class. if isinstance(parser_utils.get_parent_scope(self.tree_node), tree.Class): name = 'METHOD_CLASS' else: name = 'FUNCTION_CLASS' return compiled.get_special_object(self.evaluator, name) @property def name(self): if self.tree_node.type == 'lambdef': return LambdaName(self) return ContextName(self, self.tree_node.name) def get_param_names(self): function_execution = self.get_function_execution() return [ParamName(function_execution, param.name) for param in self.tree_node.get_params()] class FunctionExecutionContext(TreeContext): """ This class is used to evaluate functions and their returns. This is the most complicated class, because it contains the logic to transfer parameters. It is even more complicated, because there may be multiple calls to functions and recursion has to be avoided. But this is responsibility of the decorators. """ function_execution_filter = FunctionExecutionFilter def __init__(self, evaluator, parent_context, function_context, var_args): super(FunctionExecutionContext, self).__init__(evaluator, parent_context) self.function_context = function_context self.tree_node = function_context.tree_node self.var_args = var_args @evaluator_method_cache(default=NO_CONTEXTS) @recursion.execution_recursion_decorator() def get_return_values(self, check_yields=False): funcdef = self.tree_node if funcdef.type == 'lambdef': return self.evaluator.eval_element(self, funcdef.children[-1]) if check_yields: context_set = NO_CONTEXTS returns = get_yield_exprs(self.evaluator, funcdef) else: returns = funcdef.iter_return_stmts() context_set = docstrings.infer_return_types(self.function_context) context_set |= pep0484.infer_return_types(self.function_context) for r in returns: check = flow_analysis.reachability_check(self, funcdef, r) if check is flow_analysis.UNREACHABLE: debug.dbg('Return unreachable: %s', r) else: if check_yields: context_set |= ContextSet.from_sets( lazy_context.infer() for lazy_context in self._eval_yield(r) ) else: try: children = r.children except AttributeError: context_set |= ContextSet(compiled.create(self.evaluator, None)) else: context_set |= self.eval_node(children[1]) if check is flow_analysis.REACHABLE: debug.dbg('Return reachable: %s', r) break return context_set def _eval_yield(self, yield_expr): if yield_expr.type == 'keyword': # `yield` just yields None. yield LazyKnownContext(compiled.create(self.evaluator, None)) return node = yield_expr.children[1] if node.type == 'yield_arg': # It must be a yield from. cn = ContextualizedNode(self, node.children[1]) for lazy_context in cn.infer().iterate(cn): yield lazy_context else: yield LazyTreeContext(self, node) @recursion.execution_recursion_decorator(default=iter([])) def get_yield_values(self): for_parents = [(y, tree.search_ancestor(y, 'for_stmt', 'funcdef', 'while_stmt', 'if_stmt')) for y in get_yield_exprs(self.evaluator, self.tree_node)] # Calculate if the yields are placed within the same for loop. yields_order = [] last_for_stmt = None for yield_, for_stmt in for_parents: # For really simple for loops we can predict the order. Otherwise # we just ignore it. parent = for_stmt.parent if parent.type == 'suite': parent = parent.parent if for_stmt.type == 'for_stmt' and parent == self.tree_node \ and parser_utils.for_stmt_defines_one_name(for_stmt): # Simplicity for now. if for_stmt == last_for_stmt: yields_order[-1][1].append(yield_) else: yields_order.append((for_stmt, [yield_])) elif for_stmt == self.tree_node: yields_order.append((None, [yield_])) else: types = self.get_return_values(check_yields=True) if types: yield LazyKnownContexts(types) return last_for_stmt = for_stmt for for_stmt, yields in yields_order: if for_stmt is None: # No for_stmt, just normal yields. for yield_ in yields: for result in self._eval_yield(yield_): yield result else: input_node = for_stmt.get_testlist() cn = ContextualizedNode(self, input_node) ordered = cn.infer().iterate(cn) ordered = list(ordered) for lazy_context in ordered: dct = {str(for_stmt.children[1].value): lazy_context.infer()} with helpers.predefine_names(self, for_stmt, dct): for yield_in_same_for_stmt in yields: for result in self._eval_yield(yield_in_same_for_stmt): yield result def get_filters(self, search_global, until_position=None, origin_scope=None): yield self.function_execution_filter(self.evaluator, self, until_position=until_position, origin_scope=origin_scope) @evaluator_method_cache() def get_params(self): return self.var_args.get_params(self) jedi-0.11.1/jedi/evaluate/context/__init__.py0000664000175000017500000000051213214571123020702 0ustar davedave00000000000000from jedi.evaluate.context.module import ModuleContext from jedi.evaluate.context.klass import ClassContext from jedi.evaluate.context.function import FunctionContext, FunctionExecutionContext from jedi.evaluate.context.instance import AnonymousInstance, BoundMethod, \ CompiledInstance, AbstractInstanceContext, TreeInstance jedi-0.11.1/jedi/evaluate/context/namespace.py0000664000175000017500000000424613214571123021107 0ustar davedave00000000000000import os from itertools import chain from jedi._compatibility import use_metaclass from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass from jedi.evaluate import imports from jedi.evaluate.filters import DictFilter, AbstractNameDefinition from jedi.evaluate.base_context import NO_CONTEXTS, TreeContext class ImplicitNSName(AbstractNameDefinition): """ Accessing names for implicit namespace packages should infer to nothing. This object will prevent Jedi from raising exceptions """ def __init__(self, implicit_ns_context, string_name): self.implicit_ns_context = implicit_ns_context self.string_name = string_name def infer(self): return NO_CONTEXTS def get_root_context(self): return self.implicit_ns_context class ImplicitNamespaceContext(use_metaclass(CachedMetaClass, TreeContext)): """ Provides support for implicit namespace packages """ api_type = 'module' parent_context = None def __init__(self, evaluator, fullname): super(ImplicitNamespaceContext, self).__init__(evaluator, parent_context=None) self.evaluator = evaluator self.fullname = fullname def get_filters(self, search_global, until_position=None, origin_scope=None): yield DictFilter(self._sub_modules_dict()) @property @evaluator_method_cache() def name(self): string_name = self.py__package__().rpartition('.')[-1] return ImplicitNSName(self, string_name) def py__file__(self): return None def py__package__(self): """Return the fullname """ return self.fullname @property def py__path__(self): return lambda: [self.paths] @evaluator_method_cache() def _sub_modules_dict(self): names = {} paths = self.paths file_names = chain.from_iterable(os.listdir(path) for path in paths) mods = [ file_name.rpartition('.')[0] if '.' in file_name else file_name for file_name in file_names if file_name != '__pycache__' ] for name in mods: names[name] = imports.SubModuleName(self, name) return names jedi-0.11.1/jedi/evaluate/context/iterable.py0000664000175000017500000005764213214571123020752 0ustar davedave00000000000000""" Contains all classes and functions to deal with lists, dicts, generators and iterators in general. Array modifications ******************* If the content of an array (``set``/``list``) is requested somewhere, the current module will be checked for appearances of ``arr.append``, ``arr.insert``, etc. If the ``arr`` name points to an actual array, the content will be added This can be really cpu intensive, as you can imagine. Because |jedi| has to follow **every** ``append`` and check wheter it's the right array. However this works pretty good, because in *slow* cases, the recursion detector and other settings will stop this process. It is important to note that: 1. Array modfications work only in the current module. 2. Jedi only checks Array additions; ``list.pop``, etc are ignored. """ from jedi import debug from jedi import settings from jedi.evaluate import compiled from jedi.evaluate import analysis from jedi.evaluate import recursion from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts, \ LazyTreeContext from jedi.evaluate.helpers import is_string, predefine_names, evaluate_call_of_leaf from jedi.evaluate.utils import safe_property from jedi.evaluate.utils import to_list from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate.filters import ParserTreeFilter, has_builtin_methods, \ register_builtin_method, SpecialMethodFilter from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS, Context, \ TreeContext, ContextualizedNode from jedi.parser_utils import get_comp_fors class AbstractIterable(Context): builtin_methods = {} api_type = 'instance' def __init__(self, evaluator): super(AbstractIterable, self).__init__(evaluator, evaluator.BUILTINS) def get_filters(self, search_global, until_position=None, origin_scope=None): raise NotImplementedError @property def name(self): return compiled.CompiledContextName(self, self.array_type) @has_builtin_methods class GeneratorMixin(object): array_type = None @register_builtin_method('send') @register_builtin_method('next', python_version_match=2) @register_builtin_method('__next__', python_version_match=3) def py__next__(self): # TODO add TypeError if params are given. return ContextSet.from_sets(lazy_context.infer() for lazy_context in self.py__iter__()) def get_filters(self, search_global, until_position=None, origin_scope=None): gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT') yield SpecialMethodFilter(self, self.builtin_methods, gen_obj) for filter in gen_obj.get_filters(search_global): yield filter def py__bool__(self): return True def py__class__(self): gen_obj = compiled.get_special_object(self.evaluator, 'GENERATOR_OBJECT') return gen_obj.py__class__() @property def name(self): return compiled.CompiledContextName(self, 'generator') class Generator(GeneratorMixin, Context): """Handling of `yield` functions.""" def __init__(self, evaluator, func_execution_context): super(Generator, self).__init__(evaluator, parent_context=evaluator.BUILTINS) self._func_execution_context = func_execution_context def py__iter__(self): return self._func_execution_context.get_yield_values() def __repr__(self): return "<%s of %s>" % (type(self).__name__, self._func_execution_context) class CompForContext(TreeContext): @classmethod def from_comp_for(cls, parent_context, comp_for): return cls(parent_context.evaluator, parent_context, comp_for) def __init__(self, evaluator, parent_context, comp_for): super(CompForContext, self).__init__(evaluator, parent_context) self.tree_node = comp_for def get_node(self): return self.tree_node def get_filters(self, search_global, until_position=None, origin_scope=None): yield ParserTreeFilter(self.evaluator, self) class Comprehension(AbstractIterable): @staticmethod def from_atom(evaluator, context, atom): bracket = atom.children[0] if bracket == '{': if atom.children[1].children[1] == ':': cls = DictComprehension else: cls = SetComprehension elif bracket == '(': cls = GeneratorComprehension elif bracket == '[': cls = ListComprehension return cls(evaluator, context, atom) def __init__(self, evaluator, defining_context, atom): super(Comprehension, self).__init__(evaluator) self._defining_context = defining_context self._atom = atom def _get_comprehension(self): # The atom contains a testlist_comp return self._atom.children[1] def _get_comp_for(self): # The atom contains a testlist_comp return self._get_comprehension().children[1] def _eval_node(self, index=0): """ The first part `x + 1` of the list comprehension: [x + 1 for x in foo] """ return self._get_comprehension().children[index] @evaluator_method_cache() def _get_comp_for_context(self, parent_context, comp_for): # TODO shouldn't this be part of create_context? return CompForContext.from_comp_for(parent_context, comp_for) def _nested(self, comp_fors, parent_context=None): comp_for = comp_fors[0] input_node = comp_for.children[3] parent_context = parent_context or self._defining_context input_types = parent_context.eval_node(input_node) cn = ContextualizedNode(parent_context, input_node) iterated = input_types.iterate(cn) exprlist = comp_for.children[1] for i, lazy_context in enumerate(iterated): types = lazy_context.infer() dct = unpack_tuple_to_dict(parent_context, types, exprlist) context_ = self._get_comp_for_context( parent_context, comp_for, ) with predefine_names(context_, comp_for, dct): try: for result in self._nested(comp_fors[1:], context_): yield result except IndexError: iterated = context_.eval_node(self._eval_node()) if self.array_type == 'dict': yield iterated, context_.eval_node(self._eval_node(2)) else: yield iterated @evaluator_method_cache(default=[]) @to_list def _iterate(self): comp_fors = tuple(get_comp_fors(self._get_comp_for())) for result in self._nested(comp_fors): yield result def py__iter__(self): for set_ in self._iterate(): yield LazyKnownContexts(set_) def __repr__(self): return "<%s of %s>" % (type(self).__name__, self._atom) class ArrayMixin(object): def get_filters(self, search_global, until_position=None, origin_scope=None): # `array.type` is a string with the type, e.g. 'list'. compiled_obj = compiled.builtin_from_name(self.evaluator, self.array_type) yield SpecialMethodFilter(self, self.builtin_methods, compiled_obj) for typ in compiled_obj.execute_evaluated(self): for filter in typ.get_filters(): yield filter def py__bool__(self): return None # We don't know the length, because of appends. def py__class__(self): return compiled.builtin_from_name(self.evaluator, self.array_type) @safe_property def parent(self): return self.evaluator.BUILTINS def dict_values(self): return ContextSet.from_sets( self._defining_context.eval_node(v) for k, v in self._items() ) class ListComprehension(ArrayMixin, Comprehension): array_type = 'list' def py__getitem__(self, index): if isinstance(index, slice): return ContextSet(self) all_types = list(self.py__iter__()) return all_types[index].infer() class SetComprehension(ArrayMixin, Comprehension): array_type = 'set' @has_builtin_methods class DictComprehension(ArrayMixin, Comprehension): array_type = 'dict' def _get_comp_for(self): return self._get_comprehension().children[3] def py__iter__(self): for keys, values in self._iterate(): yield LazyKnownContexts(keys) def py__getitem__(self, index): for keys, values in self._iterate(): for k in keys: if isinstance(k, compiled.CompiledObject): if k.obj == index: return values return self.dict_values() def dict_values(self): return ContextSet.from_sets(values for keys, values in self._iterate()) @register_builtin_method('values') def _imitate_values(self): lazy_context = LazyKnownContexts(self.dict_values()) return ContextSet(FakeSequence(self.evaluator, 'list', [lazy_context])) @register_builtin_method('items') def _imitate_items(self): items = ContextSet.from_iterable( FakeSequence( self.evaluator, 'tuple' (LazyKnownContexts(keys), LazyKnownContexts(values)) ) for keys, values in self._iterate() ) return create_evaluated_sequence_set(self.evaluator, items, sequence_type='list') class GeneratorComprehension(GeneratorMixin, Comprehension): pass class SequenceLiteralContext(ArrayMixin, AbstractIterable): mapping = {'(': 'tuple', '[': 'list', '{': 'set'} def __init__(self, evaluator, defining_context, atom): super(SequenceLiteralContext, self).__init__(evaluator) self.atom = atom self._defining_context = defining_context if self.atom.type in ('testlist_star_expr', 'testlist'): self.array_type = 'tuple' else: self.array_type = SequenceLiteralContext.mapping[atom.children[0]] """The builtin name of the array (list, set, tuple or dict).""" def py__getitem__(self, index): """Here the index is an int/str. Raises IndexError/KeyError.""" if self.array_type == 'dict': for key, value in self._items(): for k in self._defining_context.eval_node(key): if isinstance(k, compiled.CompiledObject) \ and index == k.obj: return self._defining_context.eval_node(value) raise KeyError('No key found in dictionary %s.' % self) # Can raise an IndexError if isinstance(index, slice): return ContextSet(self) else: return self._defining_context.eval_node(self._items()[index]) def py__iter__(self): """ While values returns the possible values for any array field, this function returns the value for a certain index. """ if self.array_type == 'dict': # Get keys. types = ContextSet() for k, _ in self._items(): types |= self._defining_context.eval_node(k) # We don't know which dict index comes first, therefore always # yield all the types. for _ in types: yield LazyKnownContexts(types) else: for node in self._items(): yield LazyTreeContext(self._defining_context, node) for addition in check_array_additions(self._defining_context, self): yield addition def _values(self): """Returns a list of a list of node.""" if self.array_type == 'dict': return ContextSet.from_sets(v for k, v in self._items()) else: return self._items() def _items(self): c = self.atom.children if self.atom.type in ('testlist_star_expr', 'testlist'): return c[::2] array_node = c[1] if array_node in (']', '}', ')'): return [] # Direct closing bracket, doesn't contain items. if array_node.type == 'testlist_comp': return array_node.children[::2] elif array_node.type == 'dictorsetmaker': kv = [] iterator = iter(array_node.children) for key in iterator: op = next(iterator, None) if op is None or op == ',': kv.append(key) # A set. else: assert op == ':' # A dict. kv.append((key, next(iterator))) next(iterator, None) # Possible comma. return kv else: return [array_node] def exact_key_items(self): """ Returns a generator of tuples like dict.items(), where the key is resolved (as a string) and the values are still lazy contexts. """ for key_node, value in self._items(): for key in self._defining_context.eval_node(key_node): if is_string(key): yield key.obj, LazyTreeContext(self._defining_context, value) def __repr__(self): return "<%s of %s>" % (self.__class__.__name__, self.atom) @has_builtin_methods class DictLiteralContext(SequenceLiteralContext): array_type = 'dict' def __init__(self, evaluator, defining_context, atom): super(SequenceLiteralContext, self).__init__(evaluator) self._defining_context = defining_context self.atom = atom @register_builtin_method('values') def _imitate_values(self): lazy_context = LazyKnownContexts(self.dict_values()) return ContextSet(FakeSequence(self.evaluator, 'list', [lazy_context])) @register_builtin_method('items') def _imitate_items(self): lazy_contexts = [ LazyKnownContext(FakeSequence( self.evaluator, 'tuple', (LazyTreeContext(self._defining_context, key_node), LazyTreeContext(self._defining_context, value_node)) )) for key_node, value_node in self._items() ] return ContextSet(FakeSequence(self.evaluator, 'list', lazy_contexts)) class _FakeArray(SequenceLiteralContext): def __init__(self, evaluator, container, type): super(SequenceLiteralContext, self).__init__(evaluator) self.array_type = type self.atom = container # TODO is this class really needed? class FakeSequence(_FakeArray): def __init__(self, evaluator, array_type, lazy_context_list): """ type should be one of "tuple", "list" """ super(FakeSequence, self).__init__(evaluator, None, array_type) self._lazy_context_list = lazy_context_list def py__getitem__(self, index): return self._lazy_context_list[index].infer() def py__iter__(self): return self._lazy_context_list def py__bool__(self): return bool(len(self._lazy_context_list)) def __repr__(self): return "<%s of %s>" % (type(self).__name__, self._lazy_context_list) class FakeDict(_FakeArray): def __init__(self, evaluator, dct): super(FakeDict, self).__init__(evaluator, dct, 'dict') self._dct = dct def py__iter__(self): for key in self._dct: yield LazyKnownContext(compiled.create(self.evaluator, key)) def py__getitem__(self, index): return self._dct[index].infer() def dict_values(self): return ContextSet.from_sets(lazy_context.infer() for lazy_context in self._dct.values()) def exact_key_items(self): return self._dct.items() class MergedArray(_FakeArray): def __init__(self, evaluator, arrays): super(MergedArray, self).__init__(evaluator, arrays, arrays[-1].array_type) self._arrays = arrays def py__iter__(self): for array in self._arrays: for lazy_context in array.py__iter__(): yield lazy_context def py__getitem__(self, index): return ContextSet.from_sets(lazy_context.infer() for lazy_context in self.py__iter__()) def _items(self): for array in self._arrays: for a in array._items(): yield a def __len__(self): return sum(len(a) for a in self._arrays) def unpack_tuple_to_dict(context, types, exprlist): """ Unpacking tuple assignments in for statements and expr_stmts. """ if exprlist.type == 'name': return {exprlist.value: types} elif exprlist.type == 'atom' and exprlist.children[0] in '([': return unpack_tuple_to_dict(context, types, exprlist.children[1]) elif exprlist.type in ('testlist', 'testlist_comp', 'exprlist', 'testlist_star_expr'): dct = {} parts = iter(exprlist.children[::2]) n = 0 for lazy_context in types.iterate(exprlist): n += 1 try: part = next(parts) except StopIteration: # TODO this context is probably not right. analysis.add(context, 'value-error-too-many-values', part, message="ValueError: too many values to unpack (expected %s)" % n) else: dct.update(unpack_tuple_to_dict(context, lazy_context.infer(), part)) has_parts = next(parts, None) if types and has_parts is not None: # TODO this context is probably not right. analysis.add(context, 'value-error-too-few-values', has_parts, message="ValueError: need more than %s values to unpack" % n) return dct elif exprlist.type == 'power' or exprlist.type == 'atom_expr': # Something like ``arr[x], var = ...``. # This is something that is not yet supported, would also be difficult # to write into a dict. return {} elif exprlist.type == 'star_expr': # `a, *b, c = x` type unpackings # Currently we're not supporting them. return {} raise NotImplementedError def check_array_additions(context, sequence): """ Just a mapper function for the internal _check_array_additions """ if sequence.array_type not in ('list', 'set'): # TODO also check for dict updates return NO_CONTEXTS return _check_array_additions(context, sequence) @evaluator_method_cache(default=NO_CONTEXTS) @debug.increase_indent def _check_array_additions(context, sequence): """ Checks if a `Array` has "add" (append, insert, extend) statements: >>> a = [""] >>> a.append(1) """ from jedi.evaluate import arguments debug.dbg('Dynamic array search for %s' % sequence, color='MAGENTA') module_context = context.get_root_context() if not settings.dynamic_array_additions or isinstance(module_context, compiled.CompiledObject): debug.dbg('Dynamic array search aborted.', color='MAGENTA') return ContextSet() def find_additions(context, arglist, add_name): params = list(arguments.TreeArguments(context.evaluator, context, arglist).unpack()) result = set() if add_name in ['insert']: params = params[1:] if add_name in ['append', 'add', 'insert']: for key, whatever in params: result.add(whatever) elif add_name in ['extend', 'update']: for key, lazy_context in params: result |= set(lazy_context.infer().iterate()) return result temp_param_add, settings.dynamic_params_for_other_modules = \ settings.dynamic_params_for_other_modules, False is_list = sequence.name.string_name == 'list' search_names = (['append', 'extend', 'insert'] if is_list else ['add', 'update']) added_types = set() for add_name in search_names: try: possible_names = module_context.tree_node.get_used_names()[add_name] except KeyError: continue else: for name in possible_names: context_node = context.tree_node if not (context_node.start_pos < name.start_pos < context_node.end_pos): continue trailer = name.parent power = trailer.parent trailer_pos = power.children.index(trailer) try: execution_trailer = power.children[trailer_pos + 1] except IndexError: continue else: if execution_trailer.type != 'trailer' \ or execution_trailer.children[0] != '(' \ or execution_trailer.children[1] == ')': continue random_context = context.create_context(name) with recursion.execution_allowed(context.evaluator, power) as allowed: if allowed: found = evaluate_call_of_leaf( random_context, name, cut_own_trailer=True ) if sequence in found: # The arrays match. Now add the results added_types |= find_additions( random_context, execution_trailer.children[1], add_name ) # reset settings settings.dynamic_params_for_other_modules = temp_param_add debug.dbg('Dynamic array result %s' % added_types, color='MAGENTA') return added_types def get_dynamic_array_instance(instance): """Used for set() and list() instances.""" if not settings.dynamic_array_additions: return instance.var_args ai = _ArrayInstance(instance) from jedi.evaluate import arguments return arguments.ValuesArguments([ContextSet(ai)]) class _ArrayInstance(object): """ Used for the usage of set() and list(). This is definitely a hack, but a good one :-) It makes it possible to use set/list conversions. In contrast to Array, ListComprehension and all other iterable types, this is something that is only used inside `evaluate/compiled/fake/builtins.py` and therefore doesn't need filters, `py__bool__` and so on, because we don't use these operations in `builtins.py`. """ def __init__(self, instance): self.instance = instance self.var_args = instance.var_args def py__iter__(self): var_args = self.var_args try: _, lazy_context = next(var_args.unpack()) except StopIteration: pass else: for lazy in lazy_context.infer().iterate(): yield lazy from jedi.evaluate import arguments if isinstance(var_args, arguments.TreeArguments): additions = _check_array_additions(var_args.context, self.instance) for addition in additions: yield addition def iterate(self, contextualized_node=None): return self.py__iter__() class Slice(Context): def __init__(self, context, start, stop, step): super(Slice, self).__init__( context.evaluator, parent_context=context.evaluator.BUILTINS ) self._context = context # all of them are either a Precedence or None. self._start = start self._stop = stop self._step = step @property def obj(self): """ Imitate CompiledObject.obj behavior and return a ``builtin.slice()`` object. """ def get(element): if element is None: return None result = self._context.eval_node(element) if len(result) != 1: # For simplicity, we want slices to be clear defined with just # one type. Otherwise we will return an empty slice object. raise IndexError try: return list(result)[0].obj except AttributeError: return None try: return slice(get(self._start), get(self._stop), get(self._step)) except IndexError: return slice(None, None, None) jedi-0.11.1/jedi/evaluate/context/klass.py0000664000175000017500000001764413214571123020276 0ustar davedave00000000000000""" Like described in the :mod:`parso.python.tree` module, there's a need for an ast like module to represent the states of parsed modules. But now there are also structures in Python that need a little bit more than that. An ``Instance`` for example is only a ``Class`` before it is instantiated. This class represents these cases. So, why is there also a ``Class`` class here? Well, there are decorators and they change classes in Python 3. Representation modules also define "magic methods". Those methods look like ``py__foo__`` and are typically mappable to the Python equivalents ``__call__`` and others. Here's a list: ====================================== ======================================== **Method** **Description** -------------------------------------- ---------------------------------------- py__call__(params: Array) On callable objects, returns types. py__bool__() Returns True/False/None; None means that there's no certainty. py__bases__() Returns a list of base classes. py__mro__() Returns a list of classes (the mro). py__iter__() Returns a generator of a set of types. py__class__() Returns the class of an instance. py__getitem__(index: int/str) Returns a a set of types of the index. Can raise an IndexError/KeyError. py__file__() Only on modules. Returns None if does not exist. py__package__() Only on modules. For the import system. py__path__() Only on modules. For the import system. py__get__(call_object) Only on instances. Simulates descriptors. py__doc__(include_call_signature: Returns the docstring for a context. bool) ====================================== ======================================== """ from jedi._compatibility import use_metaclass from jedi.evaluate.cache import evaluator_method_cache, CachedMetaClass from jedi.evaluate import compiled from jedi.evaluate.lazy_context import LazyKnownContext from jedi.evaluate.filters import ParserTreeFilter, TreeNameDefinition, \ ContextName, AnonymousInstanceParamName from jedi.evaluate.base_context import ContextSet, iterator_to_context_set, \ TreeContext def apply_py__get__(context, base_context): try: method = context.py__get__ except AttributeError: yield context else: for descriptor_context in method(base_context): yield descriptor_context class ClassName(TreeNameDefinition): def __init__(self, parent_context, tree_name, name_context): super(ClassName, self).__init__(parent_context, tree_name) self._name_context = name_context @iterator_to_context_set def infer(self): # TODO this _name_to_types might get refactored and be a part of the # parent class. Once it is, we can probably just overwrite method to # achieve this. from jedi.evaluate.syntax_tree import tree_name_to_contexts inferred = tree_name_to_contexts( self.parent_context.evaluator, self._name_context, self.tree_name) for result_context in inferred: for c in apply_py__get__(result_context, self.parent_context): yield c class ClassFilter(ParserTreeFilter): name_class = ClassName def _convert_names(self, names): return [self.name_class(self.context, name, self._node_context) for name in names] class ClassContext(use_metaclass(CachedMetaClass, TreeContext)): """ This class is not only important to extend `tree.Class`, it is also a important for descriptors (if the descriptor methods are evaluated or not). """ api_type = 'class' def __init__(self, evaluator, parent_context, classdef): super(ClassContext, self).__init__(evaluator, parent_context=parent_context) self.tree_node = classdef @evaluator_method_cache(default=()) def py__mro__(self): def add(cls): if cls not in mro: mro.append(cls) mro = [self] # TODO Do a proper mro resolution. Currently we are just listing # classes. However, it's a complicated algorithm. for lazy_cls in self.py__bases__(): # TODO there's multiple different mro paths possible if this yields # multiple possibilities. Could be changed to be more correct. for cls in lazy_cls.infer(): # TODO detect for TypeError: duplicate base class str, # e.g. `class X(str, str): pass` try: mro_method = cls.py__mro__ except AttributeError: # TODO add a TypeError like: """ >>> class Y(lambda: test): pass Traceback (most recent call last): File "", line 1, in TypeError: function() argument 1 must be code, not str >>> class Y(1): pass Traceback (most recent call last): File "", line 1, in TypeError: int() takes at most 2 arguments (3 given) """ pass else: add(cls) for cls_new in mro_method(): add(cls_new) return tuple(mro) @evaluator_method_cache(default=()) def py__bases__(self): arglist = self.tree_node.get_super_arglist() if arglist: from jedi.evaluate import arguments args = arguments.TreeArguments(self.evaluator, self, arglist) return [value for key, value in args.unpack() if key is None] else: return [LazyKnownContext(compiled.create(self.evaluator, object))] def py__call__(self, params): from jedi.evaluate.context import TreeInstance return ContextSet(TreeInstance(self.evaluator, self.parent_context, self, params)) def py__class__(self): return compiled.create(self.evaluator, type) def get_params(self): from jedi.evaluate.context import AnonymousInstance anon = AnonymousInstance(self.evaluator, self.parent_context, self) return [AnonymousInstanceParamName(anon, param.name) for param in self.funcdef.get_params()] def get_filters(self, search_global, until_position=None, origin_scope=None, is_instance=False): if search_global: yield ParserTreeFilter( self.evaluator, context=self, until_position=until_position, origin_scope=origin_scope ) else: for cls in self.py__mro__(): if isinstance(cls, compiled.CompiledObject): for filter in cls.get_filters(is_instance=is_instance): yield filter else: yield ClassFilter( self.evaluator, self, node_context=cls, origin_scope=origin_scope) def is_class(self): return True def get_function_slot_names(self, name): for filter in self.get_filters(search_global=False): names = filter.get(name) if names: return names return [] def get_param_names(self): for name in self.get_function_slot_names('__init__'): for context_ in name.infer(): try: method = context_.get_param_names except AttributeError: pass else: return list(method())[1:] return [] @property def name(self): return ContextName(self, self.tree_node.name) jedi-0.11.1/jedi/evaluate/context/instance.py0000664000175000017500000004043413214571123020756 0ustar davedave00000000000000from abc import abstractproperty from jedi._compatibility import is_py3 from jedi import debug from jedi.evaluate import compiled from jedi.evaluate import filters from jedi.evaluate.base_context import Context, NO_CONTEXTS, ContextSet, \ iterator_to_context_set from jedi.evaluate.lazy_context import LazyKnownContext, LazyKnownContexts from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate.arguments import AbstractArguments, AnonymousArguments from jedi.cache import memoize_method from jedi.evaluate.context.function import FunctionExecutionContext, FunctionContext from jedi.evaluate.context.klass import ClassContext, apply_py__get__ from jedi.evaluate.context import iterable from jedi.parser_utils import get_parent_scope class InstanceFunctionExecution(FunctionExecutionContext): def __init__(self, instance, parent_context, function_context, var_args): self.instance = instance var_args = InstanceVarArgs(self, var_args) super(InstanceFunctionExecution, self).__init__( instance.evaluator, parent_context, function_context, var_args) class AnonymousInstanceFunctionExecution(FunctionExecutionContext): function_execution_filter = filters.AnonymousInstanceFunctionExecutionFilter def __init__(self, instance, parent_context, function_context, var_args): self.instance = instance super(AnonymousInstanceFunctionExecution, self).__init__( instance.evaluator, parent_context, function_context, var_args) class AbstractInstanceContext(Context): """ This class is used to evaluate instances. """ api_type = 'instance' function_execution_cls = InstanceFunctionExecution def __init__(self, evaluator, parent_context, class_context, var_args): super(AbstractInstanceContext, self).__init__(evaluator, parent_context) # Generated instances are classes that are just generated by self # (No var_args) used. self.class_context = class_context self.var_args = var_args def is_class(self): return False @property def py__call__(self): names = self.get_function_slot_names('__call__') if not names: # Means the Instance is not callable. raise AttributeError def execute(arguments): return ContextSet.from_sets(name.execute(arguments) for name in names) return execute def py__class__(self): return self.class_context def py__bool__(self): # Signalize that we don't know about the bool type. return None def get_function_slot_names(self, name): # Python classes don't look at the dictionary of the instance when # looking up `__call__`. This is something that has to do with Python's # internal slot system (note: not __slots__, but C slots). for filter in self.get_filters(include_self_names=False): names = filter.get(name) if names: return names return [] def execute_function_slots(self, names, *evaluated_args): return ContextSet.from_sets( name.execute_evaluated(*evaluated_args) for name in names ) def py__get__(self, obj): # Arguments in __get__ descriptors are obj, class. # `method` is the new parent of the array, don't know if that's good. names = self.get_function_slot_names('__get__') if names: if isinstance(obj, AbstractInstanceContext): return self.execute_function_slots(names, obj, obj.class_context) else: none_obj = compiled.create(self.evaluator, None) return self.execute_function_slots(names, none_obj, obj) else: return ContextSet(self) def get_filters(self, search_global=None, until_position=None, origin_scope=None, include_self_names=True): if include_self_names: for cls in self.class_context.py__mro__(): if isinstance(cls, compiled.CompiledObject): if cls.tree_node is not None: # In this case we're talking about a fake object, it # doesn't make sense for normal compiled objects to # search for self variables. yield SelfNameFilter(self.evaluator, self, cls, origin_scope) else: yield SelfNameFilter(self.evaluator, self, cls, origin_scope) for cls in self.class_context.py__mro__(): if isinstance(cls, compiled.CompiledObject): yield CompiledInstanceClassFilter(self.evaluator, self, cls) else: yield InstanceClassFilter(self.evaluator, self, cls, origin_scope) def py__getitem__(self, index): try: names = self.get_function_slot_names('__getitem__') except KeyError: debug.warning('No __getitem__, cannot access the array.') return NO_CONTEXTS else: index_obj = compiled.create(self.evaluator, index) return self.execute_function_slots(names, index_obj) def py__iter__(self): iter_slot_names = self.get_function_slot_names('__iter__') if not iter_slot_names: debug.warning('No __iter__ on %s.' % self) return for generator in self.execute_function_slots(iter_slot_names): if isinstance(generator, AbstractInstanceContext): # `__next__` logic. name = '__next__' if is_py3 else 'next' iter_slot_names = generator.get_function_slot_names(name) if iter_slot_names: yield LazyKnownContexts( generator.execute_function_slots(iter_slot_names) ) else: debug.warning('Instance has no __next__ function in %s.', generator) else: for lazy_context in generator.py__iter__(): yield lazy_context @abstractproperty def name(self): pass def _create_init_execution(self, class_context, func_node): bound_method = BoundMethod( self.evaluator, self, class_context, self.parent_context, func_node ) return self.function_execution_cls( self, class_context.parent_context, bound_method, self.var_args ) def create_init_executions(self): for name in self.get_function_slot_names('__init__'): if isinstance(name, LazyInstanceName): yield self._create_init_execution(name.class_context, name.tree_name.parent) @evaluator_method_cache() def create_instance_context(self, class_context, node): if node.parent.type in ('funcdef', 'classdef'): node = node.parent scope = get_parent_scope(node) if scope == class_context.tree_node: return class_context else: parent_context = self.create_instance_context(class_context, scope) if scope.type == 'funcdef': if scope.name.value == '__init__' and parent_context == class_context: return self._create_init_execution(class_context, scope) else: bound_method = BoundMethod( self.evaluator, self, class_context, parent_context, scope ) return bound_method.get_function_execution() elif scope.type == 'classdef': class_context = ClassContext(self.evaluator, scope, parent_context) return class_context elif scope.type == 'comp_for': # Comprehensions currently don't have a special scope in Jedi. return self.create_instance_context(class_context, scope) else: raise NotImplementedError return class_context def __repr__(self): return "<%s of %s(%s)>" % (self.__class__.__name__, self.class_context, self.var_args) class CompiledInstance(AbstractInstanceContext): def __init__(self, *args, **kwargs): super(CompiledInstance, self).__init__(*args, **kwargs) # I don't think that dynamic append lookups should happen here. That # sounds more like something that should go to py__iter__. if self.class_context.name.string_name in ['list', 'set'] \ and self.parent_context.get_root_context() == self.evaluator.BUILTINS: # compare the module path with the builtin name. self.var_args = iterable.get_dynamic_array_instance(self) @property def name(self): return compiled.CompiledContextName(self, self.class_context.name.string_name) def create_instance_context(self, class_context, node): if get_parent_scope(node).type == 'classdef': return class_context else: return super(CompiledInstance, self).create_instance_context(class_context, node) class TreeInstance(AbstractInstanceContext): def __init__(self, evaluator, parent_context, class_context, var_args): super(TreeInstance, self).__init__(evaluator, parent_context, class_context, var_args) self.tree_node = class_context.tree_node @property def name(self): return filters.ContextName(self, self.class_context.name.tree_name) class AnonymousInstance(TreeInstance): function_execution_cls = AnonymousInstanceFunctionExecution def __init__(self, evaluator, parent_context, class_context): super(AnonymousInstance, self).__init__( evaluator, parent_context, class_context, var_args=AnonymousArguments(), ) class CompiledInstanceName(compiled.CompiledName): def __init__(self, evaluator, instance, parent_context, name): super(CompiledInstanceName, self).__init__(evaluator, parent_context, name) self._instance = instance @iterator_to_context_set def infer(self): for result_context in super(CompiledInstanceName, self).infer(): if isinstance(result_context, FunctionContext): parent_context = result_context.parent_context while parent_context.is_class(): parent_context = parent_context.parent_context yield BoundMethod( result_context.evaluator, self._instance, self.parent_context, parent_context, result_context.tree_node ) else: if result_context.api_type == 'function': yield CompiledBoundMethod(result_context) else: yield result_context class CompiledInstanceClassFilter(compiled.CompiledObjectFilter): name_class = CompiledInstanceName def __init__(self, evaluator, instance, compiled_object): super(CompiledInstanceClassFilter, self).__init__( evaluator, compiled_object, is_instance=True, ) self._instance = instance def _create_name(self, name): return self.name_class( self._evaluator, self._instance, self._compiled_object, name) class BoundMethod(FunctionContext): def __init__(self, evaluator, instance, class_context, *args, **kwargs): super(BoundMethod, self).__init__(evaluator, *args, **kwargs) self._instance = instance self._class_context = class_context def get_function_execution(self, arguments=None): if arguments is None: arguments = AnonymousArguments() return AnonymousInstanceFunctionExecution( self._instance, self.parent_context, self, arguments) else: return InstanceFunctionExecution( self._instance, self.parent_context, self, arguments) class CompiledBoundMethod(compiled.CompiledObject): def __init__(self, func): super(CompiledBoundMethod, self).__init__( func.evaluator, func.obj, func.parent_context, func.tree_node) def get_param_names(self): return list(super(CompiledBoundMethod, self).get_param_names())[1:] class InstanceNameDefinition(filters.TreeNameDefinition): def infer(self): return super(InstanceNameDefinition, self).infer() class LazyInstanceName(filters.TreeNameDefinition): """ This name calculates the parent_context lazily. """ def __init__(self, instance, class_context, tree_name): self._instance = instance self.class_context = class_context self.tree_name = tree_name @property def parent_context(self): return self._instance.create_instance_context(self.class_context, self.tree_name) class LazyInstanceClassName(LazyInstanceName): @iterator_to_context_set def infer(self): for result_context in super(LazyInstanceClassName, self).infer(): if isinstance(result_context, FunctionContext): # Classes are never used to resolve anything within the # functions. Only other functions and modules will resolve # those things. parent_context = result_context.parent_context while parent_context.is_class(): parent_context = parent_context.parent_context yield BoundMethod( result_context.evaluator, self._instance, self.class_context, parent_context, result_context.tree_node ) else: for c in apply_py__get__(result_context, self._instance): yield c class InstanceClassFilter(filters.ParserTreeFilter): name_class = LazyInstanceClassName def __init__(self, evaluator, context, class_context, origin_scope): super(InstanceClassFilter, self).__init__( evaluator=evaluator, context=context, node_context=class_context, origin_scope=origin_scope ) self._class_context = class_context def _equals_origin_scope(self): node = self._origin_scope while node is not None: if node == self._parser_scope or node == self.context: return True node = get_parent_scope(node) return False def _access_possible(self, name): return not name.value.startswith('__') or name.value.endswith('__') \ or self._equals_origin_scope() def _filter(self, names): names = super(InstanceClassFilter, self)._filter(names) return [name for name in names if self._access_possible(name)] def _convert_names(self, names): return [self.name_class(self.context, self._class_context, name) for name in names] class SelfNameFilter(InstanceClassFilter): name_class = LazyInstanceName def _filter(self, names): names = self._filter_self_names(names) if isinstance(self._parser_scope, compiled.CompiledObject) and False: # This would be for builtin skeletons, which are not yet supported. return list(names) else: start, end = self._parser_scope.start_pos, self._parser_scope.end_pos return [n for n in names if start < n.start_pos < end] def _filter_self_names(self, names): for name in names: trailer = name.parent if trailer.type == 'trailer' \ and len(trailer.children) == 2 \ and trailer.children[0] == '.': if name.is_definition() and self._access_possible(name): yield name def _check_flows(self, names): return names class InstanceVarArgs(AbstractArguments): def __init__(self, execution_context, var_args): self._execution_context = execution_context self._var_args = var_args @memoize_method def _get_var_args(self): return self._var_args @property def argument_node(self): return self._var_args.argument_node @property def trailer(self): return self._var_args.trailer def unpack(self, func=None): yield None, LazyKnownContext(self._execution_context.instance) for values in self._get_var_args().unpack(func): yield values def get_calling_nodes(self): return self._get_var_args().get_calling_nodes() jedi-0.11.1/jedi/evaluate/usages.py0000664000175000017500000000451613214571123016756 0ustar davedave00000000000000from jedi.evaluate import imports from jedi.evaluate.filters import TreeNameDefinition from jedi.evaluate.context import ModuleContext def _resolve_names(definition_names, avoid_names=()): for name in definition_names: if name in avoid_names: # Avoiding recursions here, because goto on a module name lands # on the same module. continue if not isinstance(name, imports.SubModuleName): # SubModuleNames are not actually existing names but created # names when importing something like `import foo.bar.baz`. yield name if name.api_type == 'module': for name in _resolve_names(name.goto(), definition_names): yield name def _dictionarize(names): return dict( (n if n.tree_name is None else n.tree_name, n) for n in names ) def _find_names(module_context, tree_name): context = module_context.create_context(tree_name) name = TreeNameDefinition(context, tree_name) found_names = set(name.goto()) found_names.add(name) return _dictionarize(_resolve_names(found_names)) def usages(module_context, tree_name): search_name = tree_name.value found_names = _find_names(module_context, tree_name) modules = set(d.get_root_context() for d in found_names.values()) modules = set(m for m in modules if isinstance(m, ModuleContext)) non_matching_usage_maps = {} for m in imports.get_modules_containing_name(module_context.evaluator, modules, search_name): for name_leaf in m.tree_node.get_used_names().get(search_name, []): new = _find_names(m, name_leaf) if any(tree_name in found_names for tree_name in new): found_names.update(new) for tree_name in new: for dct in non_matching_usage_maps.get(tree_name, []): # A usage that was previously searched for matches with # a now found name. Merge. found_names.update(dct) try: del non_matching_usage_maps[tree_name] except KeyError: pass else: for name in new: non_matching_usage_maps.setdefault(name, []).append(new) return found_names.values() jedi-0.11.1/jedi/evaluate/__init__.py0000664000175000017500000003737313214571123017235 0ustar davedave00000000000000""" Evaluation of Python code in |jedi| is based on three assumptions: * The code uses as least side effects as possible. Jedi understands certain list/tuple/set modifications, but there's no guarantee that Jedi detects everything (list.append in different modules for example). * No magic is being used: - metaclasses - ``setattr()`` / ``__import__()`` - writing to ``globals()``, ``locals()``, ``object.__dict__`` * The programmer is not a total dick, e.g. like `this `_ :-) The actual algorithm is based on a principle called lazy evaluation. That said, the typical entry point for static analysis is calling ``eval_expr_stmt``. There's separate logic for autocompletion in the API, the evaluator is all about evaluating an expression. TODO this paragraph is not what jedi does anymore. Now you need to understand what follows after ``eval_expr_stmt``. Let's make an example:: import datetime datetime.date.toda# <-- cursor here First of all, this module doesn't care about completion. It really just cares about ``datetime.date``. At the end of the procedure ``eval_expr_stmt`` will return the ``date`` class. To *visualize* this (simplified): - ``Evaluator.eval_expr_stmt`` doesn't do much, because there's no assignment. - ``Context.eval_node`` cares for resolving the dotted path - ``Evaluator.find_types`` searches for global definitions of datetime, which it finds in the definition of an import, by scanning the syntax tree. - Using the import logic, the datetime module is found. - Now ``find_types`` is called again by ``eval_node`` to find ``date`` inside the datetime module. Now what would happen if we wanted ``datetime.date.foo.bar``? Two more calls to ``find_types``. However the second call would be ignored, because the first one would return nothing (there's no foo attribute in ``date``). What if the import would contain another ``ExprStmt`` like this:: from foo import bar Date = bar.baz Well... You get it. Just another ``eval_expr_stmt`` recursion. It's really easy. Python can obviously get way more complicated then this. To understand tuple assignments, list comprehensions and everything else, a lot more code had to be written. Jedi has been tested very well, so you can just start modifying code. It's best to write your own test first for your "new" feature. Don't be scared of breaking stuff. As long as the tests pass, you're most likely to be fine. I need to mention now that lazy evaluation is really good because it only *evaluates* what needs to be *evaluated*. All the statements and modules that are not used are just being ignored. """ import sys from parso.python import tree import parso from jedi import debug from jedi import parser_utils from jedi.evaluate.utils import unite from jedi.evaluate import imports from jedi.evaluate import recursion from jedi.evaluate.cache import evaluator_function_cache from jedi.evaluate import compiled from jedi.evaluate import helpers from jedi.evaluate.filters import TreeNameDefinition, ParamName from jedi.evaluate.base_context import ContextualizedName, ContextualizedNode, \ ContextSet, NO_CONTEXTS, iterate_contexts from jedi.evaluate.context import ClassContext, FunctionContext, \ AnonymousInstance, BoundMethod from jedi.evaluate.context.iterable import CompForContext from jedi.evaluate.syntax_tree import eval_trailer, eval_expr_stmt, \ eval_node, check_tuple_assignments class Evaluator(object): def __init__(self, grammar, project): self.grammar = grammar self.latest_grammar = parso.load_grammar(version='3.6') self.memoize_cache = {} # for memoize decorators # To memorize modules -> equals `sys.modules`. self.modules = {} # like `sys.modules`. self.compiled_cache = {} # see `evaluate.compiled.create()` self.inferred_element_counts = {} self.mixed_cache = {} # see `evaluate.compiled.mixed._create()` self.analysis = [] self.dynamic_params_depth = 0 self.is_analysis = False self.python_version = sys.version_info[:2] self.project = project project.add_evaluator(self) self.reset_recursion_limitations() # Constants self.BUILTINS = compiled.get_special_object(self, 'BUILTINS') def reset_recursion_limitations(self): self.recursion_detector = recursion.RecursionDetector() self.execution_recursion_detector = recursion.ExecutionRecursionDetector(self) def eval_element(self, context, element): if isinstance(context, CompForContext): return eval_node(context, element) if_stmt = element while if_stmt is not None: if_stmt = if_stmt.parent if if_stmt.type in ('if_stmt', 'for_stmt'): break if parser_utils.is_scope(if_stmt): if_stmt = None break predefined_if_name_dict = context.predefined_names.get(if_stmt) if predefined_if_name_dict is None and if_stmt and if_stmt.type == 'if_stmt': if_stmt_test = if_stmt.children[1] name_dicts = [{}] # If we already did a check, we don't want to do it again -> If # context.predefined_names is filled, we stop. # We don't want to check the if stmt itself, it's just about # the content. if element.start_pos > if_stmt_test.end_pos: # Now we need to check if the names in the if_stmt match the # names in the suite. if_names = helpers.get_names_of_node(if_stmt_test) element_names = helpers.get_names_of_node(element) str_element_names = [e.value for e in element_names] if any(i.value in str_element_names for i in if_names): for if_name in if_names: definitions = self.goto_definitions(context, if_name) # Every name that has multiple different definitions # causes the complexity to rise. The complexity should # never fall below 1. if len(definitions) > 1: if len(name_dicts) * len(definitions) > 16: debug.dbg('Too many options for if branch evaluation %s.', if_stmt) # There's only a certain amount of branches # Jedi can evaluate, otherwise it will take to # long. name_dicts = [{}] break original_name_dicts = list(name_dicts) name_dicts = [] for definition in definitions: new_name_dicts = list(original_name_dicts) for i, name_dict in enumerate(new_name_dicts): new_name_dicts[i] = name_dict.copy() new_name_dicts[i][if_name.value] = ContextSet(definition) name_dicts += new_name_dicts else: for name_dict in name_dicts: name_dict[if_name.value] = definitions if len(name_dicts) > 1: result = ContextSet() for name_dict in name_dicts: with helpers.predefine_names(context, if_stmt, name_dict): result |= eval_node(context, element) return result else: return self._eval_element_if_evaluated(context, element) else: if predefined_if_name_dict: return eval_node(context, element) else: return self._eval_element_if_evaluated(context, element) def _eval_element_if_evaluated(self, context, element): """ TODO This function is temporary: Merge with eval_element. """ parent = element while parent is not None: parent = parent.parent predefined_if_name_dict = context.predefined_names.get(parent) if predefined_if_name_dict is not None: return eval_node(context, element) return self._eval_element_cached(context, element) @evaluator_function_cache(default=NO_CONTEXTS) def _eval_element_cached(self, context, element): return eval_node(context, element) def goto_definitions(self, context, name): def_ = name.get_definition(import_name_always=True) if def_ is not None: type_ = def_.type if type_ == 'classdef': return [ClassContext(self, context, name.parent)] elif type_ == 'funcdef': return [FunctionContext(self, context, name.parent)] if type_ == 'expr_stmt': is_simple_name = name.parent.type not in ('power', 'trailer') if is_simple_name: return eval_expr_stmt(context, def_, name) if type_ == 'for_stmt': container_types = context.eval_node(def_.children[3]) cn = ContextualizedNode(context, def_.children[3]) for_types = iterate_contexts(container_types, cn) c_node = ContextualizedName(context, name) return check_tuple_assignments(self, c_node, for_types) if type_ in ('import_from', 'import_name'): return imports.infer_import(context, name) return helpers.evaluate_call_of_leaf(context, name) def goto(self, context, name): definition = name.get_definition(import_name_always=True) if definition is not None: type_ = definition.type if type_ == 'expr_stmt': # Only take the parent, because if it's more complicated than just # a name it's something you can "goto" again. is_simple_name = name.parent.type not in ('power', 'trailer') if is_simple_name: return [TreeNameDefinition(context, name)] elif type_ == 'param': return [ParamName(context, name)] elif type_ in ('funcdef', 'classdef'): return [TreeNameDefinition(context, name)] elif type_ in ('import_from', 'import_name'): module_names = imports.infer_import(context, name, is_goto=True) return module_names par = name.parent node_type = par.type if node_type == 'argument' and par.children[1] == '=' and par.children[0] == name: # Named param goto. trailer = par.parent if trailer.type == 'arglist': trailer = trailer.parent if trailer.type != 'classdef': if trailer.type == 'decorator': context_set = context.eval_node(trailer.children[1]) else: i = trailer.parent.children.index(trailer) to_evaluate = trailer.parent.children[:i] if to_evaluate[0] == 'await': to_evaluate.pop(0) context_set = context.eval_node(to_evaluate[0]) for trailer in to_evaluate[1:]: context_set = eval_trailer(context, context_set, trailer) param_names = [] for context in context_set: try: get_param_names = context.get_param_names except AttributeError: pass else: for param_name in get_param_names(): if param_name.string_name == name.value: param_names.append(param_name) return param_names elif node_type == 'dotted_name': # Is a decorator. index = par.children.index(name) if index > 0: new_dotted = helpers.deep_ast_copy(par) new_dotted.children[index - 1:] = [] values = context.eval_node(new_dotted) return unite( value.py__getattribute__(name, name_context=context, is_goto=True) for value in values ) if node_type == 'trailer' and par.children[0] == '.': values = helpers.evaluate_call_of_leaf(context, name, cut_own_trailer=True) return unite( value.py__getattribute__(name, name_context=context, is_goto=True) for value in values ) else: stmt = tree.search_ancestor( name, 'expr_stmt', 'lambdef' ) or name if stmt.type == 'lambdef': stmt = name return context.py__getattribute__( name, position=stmt.start_pos, search_global=True, is_goto=True ) def create_context(self, base_context, node, node_is_context=False, node_is_object=False): def parent_scope(node): while True: node = node.parent if parser_utils.is_scope(node): return node elif node.type in ('argument', 'testlist_comp'): if node.children[1].type == 'comp_for': return node.children[1] elif node.type == 'dictorsetmaker': for n in node.children[1:4]: # In dictionaries it can be pretty much anything. if n.type == 'comp_for': return n def from_scope_node(scope_node, child_is_funcdef=None, is_nested=True, node_is_object=False): if scope_node == base_node: return base_context is_funcdef = scope_node.type in ('funcdef', 'lambdef') parent_scope = parser_utils.get_parent_scope(scope_node) parent_context = from_scope_node(parent_scope, child_is_funcdef=is_funcdef) if is_funcdef: if isinstance(parent_context, AnonymousInstance): func = BoundMethod( self, parent_context, parent_context.class_context, parent_context.parent_context, scope_node ) else: func = FunctionContext( self, parent_context, scope_node ) if is_nested and not node_is_object: return func.get_function_execution() return func elif scope_node.type == 'classdef': class_context = ClassContext(self, parent_context, scope_node) if child_is_funcdef: # anonymous instance return AnonymousInstance(self, parent_context, class_context) else: return class_context elif scope_node.type == 'comp_for': if node.start_pos >= scope_node.children[-1].start_pos: return parent_context return CompForContext.from_comp_for(parent_context, scope_node) raise Exception("There's a scope that was not managed.") base_node = base_context.tree_node if node_is_context and parser_utils.is_scope(node): scope_node = node else: if node.parent.type in ('funcdef', 'classdef') and node.parent.name == node: # When we're on class/function names/leafs that define the # object itself and not its contents. node = node.parent scope_node = parent_scope(node) return from_scope_node(scope_node, is_nested=True, node_is_object=node_is_object) jedi-0.11.1/jedi/evaluate/sys_path.py0000664000175000017500000002525013214571123017317 0ustar davedave00000000000000import glob import os import sys import imp from jedi.evaluate.site import addsitedir from jedi._compatibility import unicode from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate.base_context import ContextualizedNode from jedi.evaluate.helpers import is_string from jedi import settings from jedi import debug from jedi.evaluate.utils import ignored def get_venv_path(venv): """Get sys.path for specified virtual environment.""" sys_path = _get_venv_path_dirs(venv) with ignored(ValueError): sys_path.remove('') sys_path = _get_sys_path_with_egglinks(sys_path) # As of now, get_venv_path_dirs does not scan built-in pythonpath and # user-local site-packages, let's approximate them using path from Jedi # interpreter. return sys_path + sys.path def _get_sys_path_with_egglinks(sys_path): """Find all paths including those referenced by egg-links. Egg-link-referenced directories are inserted into path immediately before the directory on which their links were found. Such directories are not taken into consideration by normal import mechanism, but they are traversed when doing pkg_resources.require. """ result = [] for p in sys_path: # pkg_resources does not define a specific order for egg-link files # using os.listdir to enumerate them, we're sorting them to have # reproducible tests. for egg_link in sorted(glob.glob(os.path.join(p, '*.egg-link'))): with open(egg_link) as fd: for line in fd: line = line.strip() if line: result.append(os.path.join(p, line)) # pkg_resources package only interprets the first # non-empty line in egg-link files. break result.append(p) return result def _get_venv_path_dirs(venv): """Get sys.path for venv without starting up the interpreter.""" venv = os.path.abspath(venv) sitedir = _get_venv_sitepackages(venv) sys_path = [] addsitedir(sys_path, sitedir) return sys_path def _get_venv_sitepackages(venv): if os.name == 'nt': p = os.path.join(venv, 'lib', 'site-packages') else: p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2], 'site-packages') return p def _abs_path(module_context, path): module_path = module_context.py__file__() if os.path.isabs(path): return path if module_path is None: # In this case we have no idea where we actually are in the file # system. return None base_dir = os.path.dirname(module_path) return os.path.abspath(os.path.join(base_dir, path)) def _paths_from_assignment(module_context, expr_stmt): """ Extracts the assigned strings from an assignment that looks as follows:: >>> sys.path[0:0] = ['module/path', 'another/module/path'] This function is in general pretty tolerant (and therefore 'buggy'). However, it's not a big issue usually to add more paths to Jedi's sys_path, because it will only affect Jedi in very random situations and by adding more paths than necessary, it usually benefits the general user. """ for assignee, operator in zip(expr_stmt.children[::2], expr_stmt.children[1::2]): try: assert operator in ['=', '+='] assert assignee.type in ('power', 'atom_expr') and \ len(assignee.children) > 1 c = assignee.children assert c[0].type == 'name' and c[0].value == 'sys' trailer = c[1] assert trailer.children[0] == '.' and trailer.children[1].value == 'path' # TODO Essentially we're not checking details on sys.path # manipulation. Both assigment of the sys.path and changing/adding # parts of the sys.path are the same: They get added to the end of # the current sys.path. """ execution = c[2] assert execution.children[0] == '[' subscript = execution.children[1] assert subscript.type == 'subscript' assert ':' in subscript.children """ except AssertionError: continue cn = ContextualizedNode(module_context.create_context(expr_stmt), expr_stmt) for lazy_context in cn.infer().iterate(cn): for context in lazy_context.infer(): if is_string(context): abs_path = _abs_path(module_context, context.obj) if abs_path is not None: yield abs_path def _paths_from_list_modifications(module_context, trailer1, trailer2): """ extract the path from either "sys.path.append" or "sys.path.insert" """ # Guarantee that both are trailers, the first one a name and the second one # a function execution with at least one param. if not (trailer1.type == 'trailer' and trailer1.children[0] == '.' and trailer2.type == 'trailer' and trailer2.children[0] == '(' and len(trailer2.children) == 3): return name = trailer1.children[1].value if name not in ['insert', 'append']: return arg = trailer2.children[1] if name == 'insert' and len(arg.children) in (3, 4): # Possible trailing comma. arg = arg.children[2] for context in module_context.create_context(arg).eval_node(arg): if is_string(context): abs_path = _abs_path(module_context, context.obj) if abs_path is not None: yield abs_path @evaluator_method_cache(default=[]) def check_sys_path_modifications(module_context): """ Detect sys.path modifications within module. """ def get_sys_path_powers(names): for name in names: power = name.parent.parent if power.type in ('power', 'atom_expr'): c = power.children if c[0].type == 'name' and c[0].value == 'sys' \ and c[1].type == 'trailer': n = c[1].children[1] if n.type == 'name' and n.value == 'path': yield name, power if module_context.tree_node is None: return [] added = [] try: possible_names = module_context.tree_node.get_used_names()['path'] except KeyError: pass else: for name, power in get_sys_path_powers(possible_names): expr_stmt = power.parent if len(power.children) >= 4: added.extend( _paths_from_list_modifications( module_context, *power.children[2:4] ) ) elif expr_stmt is not None and expr_stmt.type == 'expr_stmt': added.extend(_paths_from_assignment(module_context, expr_stmt)) return added def sys_path_with_modifications(evaluator, module_context): return evaluator.project.sys_path + check_sys_path_modifications(module_context) def detect_additional_paths(evaluator, script_path): django_paths = _detect_django_path(script_path) buildout_script_paths = set() for buildout_script_path in _get_buildout_script_paths(script_path): for path in _get_paths_from_buildout_script(evaluator, buildout_script_path): buildout_script_paths.add(path) return django_paths + list(buildout_script_paths) def _get_paths_from_buildout_script(evaluator, buildout_script_path): try: module_node = evaluator.grammar.parse( path=buildout_script_path, cache=True, cache_path=settings.cache_directory ) except IOError: debug.warning('Error trying to read buildout_script: %s', buildout_script_path) return from jedi.evaluate.context import ModuleContext module = ModuleContext(evaluator, module_node, buildout_script_path) for path in check_sys_path_modifications(module): yield path def traverse_parents(path): while True: new = os.path.dirname(path) if new == path: return path = new yield path def _get_parent_dir_with_file(path, filename): for parent in traverse_parents(path): if os.path.isfile(os.path.join(parent, filename)): return parent return None def _detect_django_path(module_path): """ Detects the path of the very well known Django library (if used) """ result = [] for parent in traverse_parents(module_path): with ignored(IOError): with open(parent + os.path.sep + 'manage.py'): debug.dbg('Found django path: %s', module_path) result.append(parent) return result def _get_buildout_script_paths(module_path): """ if there is a 'buildout.cfg' file in one of the parent directories of the given module it will return a list of all files in the buildout bin directory that look like python files. :param module_path: absolute path to the module. :type module_path: str """ project_root = _get_parent_dir_with_file(module_path, 'buildout.cfg') if not project_root: return [] bin_path = os.path.join(project_root, 'bin') if not os.path.exists(bin_path): return [] extra_module_paths = [] for filename in os.listdir(bin_path): try: filepath = os.path.join(bin_path, filename) with open(filepath, 'r') as f: firstline = f.readline() if firstline.startswith('#!') and 'python' in firstline: extra_module_paths.append(filepath) except (UnicodeDecodeError, IOError) as e: # Probably a binary file; permission error or race cond. because file got deleted # ignore debug.warning(unicode(e)) continue return extra_module_paths def dotted_path_in_sys_path(sys_path, module_path): """ Returns the dotted path inside a sys.path. """ # First remove the suffix. for suffix, _, _ in imp.get_suffixes(): if module_path.endswith(suffix): module_path = module_path[:-len(suffix)] break else: # There should always be a suffix in a valid Python file on the path. return None if module_path.startswith(os.path.sep): # The paths in sys.path most of the times don't end with a slash. module_path = module_path[1:] for p in sys_path: if module_path.startswith(p): rest = module_path[len(p):] if rest: split = rest.split(os.path.sep) for string in split: if not string or '.' in string: return None return '.'.join(split) return None jedi-0.11.1/jedi/evaluate/imports.py0000664000175000017500000005341213214571123017163 0ustar davedave00000000000000""" :mod:`jedi.evaluate.imports` is here to resolve import statements and return the modules/classes/functions/whatever, which they stand for. However there's not any actual importing done. This module is about finding modules in the filesystem. This can be quite tricky sometimes, because Python imports are not always that simple. This module uses imp for python up to 3.2 and importlib for python 3.3 on; the correct implementation is delegated to _compatibility. This module also supports import autocompletion, which means to complete statements like ``from datetim`` (curser at the end would return ``datetime``). """ import imp import os import pkgutil import sys from parso.python import tree from parso.tree import search_ancestor from parso.cache import parser_cache from parso import python_bytes_to_unicode from jedi._compatibility import find_module, unicode, ImplicitNSInfo from jedi import debug from jedi import settings from jedi.evaluate import sys_path from jedi.evaluate import helpers from jedi.evaluate import compiled from jedi.evaluate import analysis from jedi.evaluate.utils import unite from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate.filters import AbstractNameDefinition from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS # This memoization is needed, because otherwise we will infinitely loop on # certain imports. @evaluator_method_cache(default=NO_CONTEXTS) def infer_import(context, tree_name, is_goto=False): module_context = context.get_root_context() import_node = search_ancestor(tree_name, 'import_name', 'import_from') import_path = import_node.get_path_for_name(tree_name) from_import_name = None evaluator = context.evaluator try: from_names = import_node.get_from_names() except AttributeError: # Is an import_name pass else: if len(from_names) + 1 == len(import_path): # We have to fetch the from_names part first and then check # if from_names exists in the modules. from_import_name = import_path[-1] import_path = from_names importer = Importer(evaluator, tuple(import_path), module_context, import_node.level) types = importer.follow() #if import_node.is_nested() and not self.nested_resolve: # scopes = [NestedImportModule(module, import_node)] if not types: return NO_CONTEXTS if from_import_name is not None: types = unite( t.py__getattribute__( from_import_name, name_context=context, is_goto=is_goto, analysis_errors=False ) for t in types ) if not is_goto: types = ContextSet.from_set(types) if not types: path = import_path + [from_import_name] importer = Importer(evaluator, tuple(path), module_context, import_node.level) types = importer.follow() # goto only accepts `Name` if is_goto: types = set(s.name for s in types) else: # goto only accepts `Name` if is_goto: types = set(s.name for s in types) debug.dbg('after import: %s', types) return types class NestedImportModule(tree.Module): """ TODO while there's no use case for nested import module right now, we might be able to use them for static analysis checks later on. """ def __init__(self, module, nested_import): self._module = module self._nested_import = nested_import def _get_nested_import_name(self): """ Generates an Import statement, that can be used to fake nested imports. """ i = self._nested_import # This is not an existing Import statement. Therefore, set position to # 0 (0 is not a valid line number). zero = (0, 0) names = [unicode(name) for name in i.namespace_names[1:]] name = helpers.FakeName(names, self._nested_import) new = tree.Import(i._sub_module, zero, zero, name) new.parent = self._module debug.dbg('Generated a nested import: %s', new) return helpers.FakeName(str(i.namespace_names[1]), new) def __getattr__(self, name): return getattr(self._module, name) def __repr__(self): return "<%s: %s of %s>" % (self.__class__.__name__, self._module, self._nested_import) def _add_error(context, name, message=None): # Should be a name, not a string! if hasattr(name, 'parent'): analysis.add(context, 'import-error', name, message) def get_init_path(directory_path): """ The __init__ file can be searched in a directory. If found return it, else None. """ for suffix, _, _ in imp.get_suffixes(): path = os.path.join(directory_path, '__init__' + suffix) if os.path.exists(path): return path return None class ImportName(AbstractNameDefinition): start_pos = (1, 0) _level = 0 def __init__(self, parent_context, string_name): self.parent_context = parent_context self.string_name = string_name def infer(self): return Importer( self.parent_context.evaluator, [self.string_name], self.parent_context, level=self._level, ).follow() def goto(self): return [m.name for m in self.infer()] def get_root_context(self): # Not sure if this is correct. return self.parent_context.get_root_context() @property def api_type(self): return 'module' class SubModuleName(ImportName): _level = 1 class Importer(object): def __init__(self, evaluator, import_path, module_context, level=0): """ An implementation similar to ``__import__``. Use `follow` to actually follow the imports. *level* specifies whether to use absolute or relative imports. 0 (the default) means only perform absolute imports. Positive values for level indicate the number of parent directories to search relative to the directory of the module calling ``__import__()`` (see PEP 328 for the details). :param import_path: List of namespaces (strings or Names). """ debug.speed('import %s' % (import_path,)) self._evaluator = evaluator self.level = level self.module_context = module_context try: self.file_path = module_context.py__file__() except AttributeError: # Can be None for certain compiled modules like 'builtins'. self.file_path = None if level: base = module_context.py__package__().split('.') if base == ['']: base = [] if level > len(base): path = module_context.py__file__() if path is not None: import_path = list(import_path) p = path for i in range(level): p = os.path.dirname(p) dir_name = os.path.basename(p) # This is not the proper way to do relative imports. However, since # Jedi cannot be sure about the entry point, we just calculate an # absolute path here. if dir_name: # TODO those sys.modules modifications are getting # really stupid. this is the 3rd time that we're using # this. We should probably refactor. if path.endswith(os.path.sep + 'os.py'): import_path.insert(0, 'os') else: import_path.insert(0, dir_name) else: _add_error(module_context, import_path[-1]) import_path = [] # TODO add import error. debug.warning('Attempted relative import beyond top-level package.') # If no path is defined in the module we have no ideas where we # are in the file system. Therefore we cannot know what to do. # In this case we just let the path there and ignore that it's # a relative path. Not sure if that's a good idea. else: # Here we basically rewrite the level to 0. base = tuple(base) if level > 1: base = base[:-level + 1] import_path = base + tuple(import_path) self.import_path = import_path @property def str_import_path(self): """Returns the import path as pure strings instead of `Name`.""" return tuple( name.value if isinstance(name, tree.Name) else name for name in self.import_path) def sys_path_with_modifications(self): in_path = [] sys_path_mod = self._evaluator.project.sys_path \ + sys_path.check_sys_path_modifications(self.module_context) if self.file_path is not None: # If you edit e.g. gunicorn, there will be imports like this: # `from gunicorn import something`. But gunicorn is not in the # sys.path. Therefore look if gunicorn is a parent directory, #56. if self.import_path: # TODO is this check really needed? for path in sys_path.traverse_parents(self.file_path): if os.path.basename(path) == self.str_import_path[0]: in_path.append(os.path.dirname(path)) # Since we know nothing about the call location of the sys.path, # it's a possibility that the current directory is the origin of # the Python execution. sys_path_mod.insert(0, os.path.dirname(self.file_path)) return in_path + sys_path_mod def follow(self): if not self.import_path: return NO_CONTEXTS return self._do_import(self.import_path, self.sys_path_with_modifications()) def _do_import(self, import_path, sys_path): """ This method is very similar to importlib's `_gcd_import`. """ import_parts = [ i.value if isinstance(i, tree.Name) else i for i in import_path ] # Handle "magic" Flask extension imports: # ``flask.ext.foo`` is really ``flask_foo`` or ``flaskext.foo``. if len(import_path) > 2 and import_parts[:2] == ['flask', 'ext']: # New style. ipath = ('flask_' + str(import_parts[2]),) + import_path[3:] modules = self._do_import(ipath, sys_path) if modules: return modules else: # Old style return self._do_import(('flaskext',) + import_path[2:], sys_path) module_name = '.'.join(import_parts) try: return ContextSet(self._evaluator.modules[module_name]) except KeyError: pass if len(import_path) > 1: # This is a recursive way of importing that works great with # the module cache. bases = self._do_import(import_path[:-1], sys_path) if not bases: return NO_CONTEXTS # We can take the first element, because only the os special # case yields multiple modules, which is not important for # further imports. parent_module = list(bases)[0] # This is a huge exception, we follow a nested import # ``os.path``, because it's a very important one in Python # that is being achieved by messing with ``sys.modules`` in # ``os``. if import_parts == ['os', 'path']: return parent_module.py__getattribute__('path') try: method = parent_module.py__path__ except AttributeError: # The module is not a package. _add_error(self.module_context, import_path[-1]) return NO_CONTEXTS else: paths = method() debug.dbg('search_module %s in paths %s', module_name, paths) for path in paths: # At the moment we are only using one path. So this is # not important to be correct. try: if not isinstance(path, list): path = [path] module_file, module_path, is_pkg = \ find_module(import_parts[-1], path, fullname=module_name) break except ImportError: module_path = None if module_path is None: _add_error(self.module_context, import_path[-1]) return NO_CONTEXTS else: parent_module = None try: debug.dbg('search_module %s in %s', import_parts[-1], self.file_path) # Override the sys.path. It works only good that way. # Injecting the path directly into `find_module` did not work. sys.path, temp = sys_path, sys.path try: module_file, module_path, is_pkg = \ find_module(import_parts[-1], fullname=module_name) finally: sys.path = temp except ImportError: # The module is not a package. _add_error(self.module_context, import_path[-1]) return NO_CONTEXTS code = None if is_pkg: # In this case, we don't have a file yet. Search for the # __init__ file. if module_path.endswith(('.zip', '.egg')): code = module_file.loader.get_source(module_name) else: module_path = get_init_path(module_path) elif module_file: code = module_file.read() module_file.close() if isinstance(module_path, ImplicitNSInfo): from jedi.evaluate.context.namespace import ImplicitNamespaceContext fullname, paths = module_path.name, module_path.paths module = ImplicitNamespaceContext(self._evaluator, fullname=fullname) module.paths = paths elif module_file is None and not module_path.endswith(('.py', '.zip', '.egg')): module = compiled.load_module(self._evaluator, module_path) else: module = _load_module(self._evaluator, module_path, code, sys_path, parent_module) if module is None: # The file might raise an ImportError e.g. and therefore not be # importable. return NO_CONTEXTS self._evaluator.modules[module_name] = module return ContextSet(module) def _generate_name(self, name, in_module=None): # Create a pseudo import to be able to follow them. if in_module is None: return ImportName(self.module_context, name) return SubModuleName(in_module, name) def _get_module_names(self, search_path=None, in_module=None): """ Get the names of all modules in the search_path. This means file names and not names defined in the files. """ names = [] # add builtin module names if search_path is None and in_module is None: names += [self._generate_name(name) for name in sys.builtin_module_names] if search_path is None: search_path = self.sys_path_with_modifications() for module_loader, name, is_pkg in pkgutil.iter_modules(search_path): names.append(self._generate_name(name, in_module=in_module)) return names def completion_names(self, evaluator, only_modules=False): """ :param only_modules: Indicates wheter it's possible to import a definition that is not defined in a module. """ from jedi.evaluate.context import ModuleContext from jedi.evaluate.context.namespace import ImplicitNamespaceContext names = [] if self.import_path: # flask if self.str_import_path == ('flask', 'ext'): # List Flask extensions like ``flask_foo`` for mod in self._get_module_names(): modname = mod.string_name if modname.startswith('flask_'): extname = modname[len('flask_'):] names.append(self._generate_name(extname)) # Now the old style: ``flaskext.foo`` for dir in self.sys_path_with_modifications(): flaskext = os.path.join(dir, 'flaskext') if os.path.isdir(flaskext): names += self._get_module_names([flaskext]) for context in self.follow(): # Non-modules are not completable. if context.api_type != 'module': # not a module continue # namespace packages if isinstance(context, ModuleContext) and context.py__file__().endswith('__init__.py'): paths = context.py__path__() names += self._get_module_names(paths, in_module=context) # implicit namespace packages elif isinstance(context, ImplicitNamespaceContext): paths = context.paths names += self._get_module_names(paths) if only_modules: # In the case of an import like `from x.` we don't need to # add all the variables. if ('os',) == self.str_import_path and not self.level: # os.path is a hardcoded exception, because it's a # ``sys.modules`` modification. names.append(self._generate_name('path', context)) continue for filter in context.get_filters(search_global=False): names += filter.values() else: # Empty import path=completion after import if not self.level: names += self._get_module_names() if self.file_path is not None: path = os.path.abspath(self.file_path) for i in range(self.level - 1): path = os.path.dirname(path) names += self._get_module_names([path]) return names def _load_module(evaluator, path=None, code=None, sys_path=None, parent_module=None): if sys_path is None: sys_path = evaluator.project.sys_path dotted_path = path and compiled.dotted_from_fs_path(path, sys_path) if path is not None and path.endswith(('.py', '.zip', '.egg')) \ and dotted_path not in settings.auto_import_modules: module_node = evaluator.grammar.parse( code=code, path=path, cache=True, diff_cache=True, cache_path=settings.cache_directory) from jedi.evaluate.context import ModuleContext return ModuleContext(evaluator, module_node, path=path) else: return compiled.load_module(evaluator, path) def add_module(evaluator, module_name, module): if '.' not in module_name: # We cannot add paths with dots, because that would collide with # the sepatator dots for nested packages. Therefore we return # `__main__` in ModuleWrapper.py__name__(), which is similar to # Python behavior. evaluator.modules[module_name] = module def get_modules_containing_name(evaluator, modules, name): """ Search a name in the directories of modules. """ from jedi.evaluate.context import ModuleContext def check_directories(paths): for p in paths: if p is not None: # We need abspath, because the seetings paths might not already # have been converted to absolute paths. d = os.path.dirname(os.path.abspath(p)) for file_name in os.listdir(d): path = os.path.join(d, file_name) if file_name.endswith('.py'): yield path def check_python_file(path): try: # TODO I don't think we should use the cache here?! node_cache_item = parser_cache[evaluator.grammar._hashed][path] except KeyError: try: return check_fs(path) except IOError: return None else: module_node = node_cache_item.node return ModuleContext(evaluator, module_node, path=path) def check_fs(path): with open(path, 'rb') as f: code = python_bytes_to_unicode(f.read(), errors='replace') if name in code: module = _load_module(evaluator, path, code) module_name = sys_path.dotted_path_in_sys_path(evaluator.project.sys_path, path) if module_name is not None: add_module(evaluator, module_name, module) return module # skip non python modules used_mod_paths = set() for m in modules: try: path = m.py__file__() except AttributeError: pass else: used_mod_paths.add(path) yield m if not settings.dynamic_params_for_other_modules: return additional = set(os.path.abspath(p) for p in settings.additional_dynamic_modules) # Check the directories of used modules. paths = (additional | set(check_directories(used_mod_paths))) \ - used_mod_paths # Sort here to make issues less random. for p in sorted(paths): # make testing easier, sort it - same results on every interpreter m = check_python_file(p) if m is not None and not isinstance(m, compiled.CompiledObject): yield m jedi-0.11.1/jedi/evaluate/syntax_tree.py0000664000175000017500000005456313214571123020043 0ustar davedave00000000000000""" Functions evaluating the syntax tree. """ import copy import operator as op from parso.python import tree from jedi import debug from jedi import parser_utils from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS, ContextualizedNode, \ ContextualizedName, iterator_to_context_set, iterate_contexts from jedi.evaluate import compiled from jedi.evaluate import pep0484 from jedi.evaluate import recursion from jedi.evaluate import helpers from jedi.evaluate import analysis from jedi.evaluate import imports from jedi.evaluate import arguments from jedi.evaluate.context import ClassContext, FunctionContext from jedi.evaluate.context import iterable from jedi.evaluate.context import TreeInstance, CompiledInstance from jedi.evaluate.finder import NameFinder from jedi.evaluate.helpers import is_string, is_literal, is_number, is_compiled def _limit_context_infers(func): """ This is for now the way how we limit type inference going wild. There are other ways to ensure recursion limits as well. This is mostly necessary because of instance (self) access that can be quite tricky to limit. I'm still not sure this is the way to go, but it looks okay for now and we can still go anther way in the future. Tests are there. ~ dave """ def wrapper(context, *args, **kwargs): n = context.tree_node evaluator = context.evaluator try: evaluator.inferred_element_counts[n] += 1 if evaluator.inferred_element_counts[n] > 300: debug.warning('In context %s there were too many inferences.', n) return NO_CONTEXTS except KeyError: evaluator.inferred_element_counts[n] = 1 return func(context, *args, **kwargs) return wrapper @debug.increase_indent @_limit_context_infers def eval_node(context, element): debug.dbg('eval_element %s@%s', element, element.start_pos) evaluator = context.evaluator typ = element.type if typ in ('name', 'number', 'string', 'atom'): return eval_atom(context, element) elif typ == 'keyword': # For False/True/None if element.value in ('False', 'True', 'None'): return ContextSet(compiled.builtin_from_name(evaluator, element.value)) # else: print e.g. could be evaluated like this in Python 2.7 return NO_CONTEXTS elif typ == 'lambdef': return ContextSet(FunctionContext(evaluator, context, element)) elif typ == 'expr_stmt': return eval_expr_stmt(context, element) elif typ in ('power', 'atom_expr'): first_child = element.children[0] if not (first_child.type == 'keyword' and first_child.value == 'await'): context_set = eval_atom(context, first_child) for trailer in element.children[1:]: if trailer == '**': # has a power operation. right = evaluator.eval_element(context, element.children[2]) context_set = _eval_comparison( evaluator, context, context_set, trailer, right ) break context_set = eval_trailer(context, context_set, trailer) return context_set return NO_CONTEXTS elif typ in ('testlist_star_expr', 'testlist',): # The implicit tuple in statements. return ContextSet(iterable.SequenceLiteralContext(evaluator, context, element)) elif typ in ('not_test', 'factor'): context_set = context.eval_node(element.children[-1]) for operator in element.children[:-1]: context_set = eval_factor(context_set, operator) return context_set elif typ == 'test': # `x if foo else y` case. return (context.eval_node(element.children[0]) | context.eval_node(element.children[-1])) elif typ == 'operator': # Must be an ellipsis, other operators are not evaluated. # In Python 2 ellipsis is coded as three single dot tokens, not # as one token 3 dot token. assert element.value in ('.', '...') return ContextSet(compiled.create(evaluator, Ellipsis)) elif typ == 'dotted_name': context_set = eval_atom(context, element.children[0]) for next_name in element.children[2::2]: # TODO add search_global=True? context_set = context_set.py__getattribute__(next_name, name_context=context) return context_set elif typ == 'eval_input': return eval_node(context, element.children[0]) elif typ == 'annassign': return pep0484._evaluate_for_annotation(context, element.children[1]) else: return eval_or_test(context, element) def eval_trailer(context, base_contexts, trailer): trailer_op, node = trailer.children[:2] if node == ')': # `arglist` is optional. node = () if trailer_op == '[': trailer_op, node, _ = trailer.children # TODO It's kind of stupid to cast this from a context set to a set. foo = set(base_contexts) # special case: PEP0484 typing module, see # https://github.com/davidhalter/jedi/issues/663 result = ContextSet() for typ in list(foo): if isinstance(typ, (ClassContext, TreeInstance)): typing_module_types = pep0484.py__getitem__(context, typ, node) if typing_module_types is not None: foo.remove(typ) result |= typing_module_types return result | base_contexts.get_item( eval_subscript_list(context.evaluator, context, node), ContextualizedNode(context, trailer) ) else: debug.dbg('eval_trailer: %s in %s', trailer, base_contexts) if trailer_op == '.': return base_contexts.py__getattribute__( name_context=context, name_or_str=node ) else: assert trailer_op == '(' args = arguments.TreeArguments(context.evaluator, context, node, trailer) return base_contexts.execute(args) def eval_atom(context, atom): """ Basically to process ``atom`` nodes. The parser sometimes doesn't generate the node (because it has just one child). In that case an atom might be a name or a literal as well. """ if atom.type == 'name': # This is the first global lookup. stmt = tree.search_ancestor( atom, 'expr_stmt', 'lambdef' ) or atom if stmt.type == 'lambdef': stmt = atom return context.py__getattribute__( name_or_str=atom, position=stmt.start_pos, search_global=True ) elif isinstance(atom, tree.Literal): string = parser_utils.safe_literal_eval(atom.value) return ContextSet(compiled.create(context.evaluator, string)) else: c = atom.children if c[0].type == 'string': # Will be one string. context_set = eval_atom(context, c[0]) for string in c[1:]: right = eval_atom(context, string) context_set = _eval_comparison(context.evaluator, context, context_set, '+', right) return context_set # Parentheses without commas are not tuples. elif c[0] == '(' and not len(c) == 2 \ and not(c[1].type == 'testlist_comp' and len(c[1].children) > 1): return context.eval_node(c[1]) try: comp_for = c[1].children[1] except (IndexError, AttributeError): pass else: if comp_for == ':': # Dict comprehensions have a colon at the 3rd index. try: comp_for = c[1].children[3] except IndexError: pass if comp_for.type == 'comp_for': return ContextSet(iterable.Comprehension.from_atom(context.evaluator, context, atom)) # It's a dict/list/tuple literal. array_node = c[1] try: array_node_c = array_node.children except AttributeError: array_node_c = [] if c[0] == '{' and (array_node == '}' or ':' in array_node_c): context = iterable.DictLiteralContext(context.evaluator, context, atom) else: context = iterable.SequenceLiteralContext(context.evaluator, context, atom) return ContextSet(context) @_limit_context_infers def eval_expr_stmt(context, stmt, seek_name=None): with recursion.execution_allowed(context.evaluator, stmt) as allowed: if allowed or context.get_root_context() == context.evaluator.BUILTINS: return _eval_expr_stmt(context, stmt, seek_name) return NO_CONTEXTS @debug.increase_indent def _eval_expr_stmt(context, stmt, seek_name=None): """ The starting point of the completion. A statement always owns a call list, which are the calls, that a statement does. In case multiple names are defined in the statement, `seek_name` returns the result for this name. :param stmt: A `tree.ExprStmt`. """ debug.dbg('eval_expr_stmt %s (%s)', stmt, seek_name) rhs = stmt.get_rhs() context_set = context.eval_node(rhs) if seek_name: c_node = ContextualizedName(context, seek_name) context_set = check_tuple_assignments(context.evaluator, c_node, context_set) first_operator = next(stmt.yield_operators(), None) if first_operator not in ('=', None) and first_operator.type == 'operator': # `=` is always the last character in aug assignments -> -1 operator = copy.copy(first_operator) operator.value = operator.value[:-1] name = stmt.get_defined_names()[0].value left = context.py__getattribute__( name, position=stmt.start_pos, search_global=True) for_stmt = tree.search_ancestor(stmt, 'for_stmt') if for_stmt is not None and for_stmt.type == 'for_stmt' and context_set \ and parser_utils.for_stmt_defines_one_name(for_stmt): # Iterate through result and add the values, that's possible # only in for loops without clutter, because they are # predictable. Also only do it, if the variable is not a tuple. node = for_stmt.get_testlist() cn = ContextualizedNode(context, node) ordered = list(cn.infer().iterate(cn)) for lazy_context in ordered: dct = {for_stmt.children[1].value: lazy_context.infer()} with helpers.predefine_names(context, for_stmt, dct): t = context.eval_node(rhs) left = _eval_comparison(context.evaluator, context, left, operator, t) context_set = left else: context_set = _eval_comparison(context.evaluator, context, left, operator, context_set) debug.dbg('eval_expr_stmt result %s', context_set) return context_set def eval_or_test(context, or_test): iterator = iter(or_test.children) types = context.eval_node(next(iterator)) for operator in iterator: right = next(iterator) if operator.type == 'comp_op': # not in / is not operator = ' '.join(c.value for c in operator.children) # handle lazy evaluation of and/or here. if operator in ('and', 'or'): left_bools = set(left.py__bool__() for left in types) if left_bools == set([True]): if operator == 'and': types = context.eval_node(right) elif left_bools == set([False]): if operator != 'and': types = context.eval_node(right) # Otherwise continue, because of uncertainty. else: types = _eval_comparison(context.evaluator, context, types, operator, context.eval_node(right)) debug.dbg('eval_or_test types %s', types) return types @iterator_to_context_set def eval_factor(context_set, operator): """ Calculates `+`, `-`, `~` and `not` prefixes. """ for context in context_set: if operator == '-': if is_number(context): yield compiled.create(context.evaluator, -context.obj) elif operator == 'not': value = context.py__bool__() if value is None: # Uncertainty. return yield compiled.create(context.evaluator, not value) else: yield context # Maps Python syntax to the operator module. COMPARISON_OPERATORS = { '==': op.eq, '!=': op.ne, 'is': op.is_, 'is not': op.is_not, '<': op.lt, '<=': op.le, '>': op.gt, '>=': op.ge, } def _literals_to_types(evaluator, result): # Changes literals ('a', 1, 1.0, etc) to its type instances (str(), # int(), float(), etc). new_result = NO_CONTEXTS for typ in result: if is_literal(typ): # Literals are only valid as long as the operations are # correct. Otherwise add a value-free instance. cls = compiled.builtin_from_name(evaluator, typ.name.string_name) new_result |= cls.execute_evaluated() else: new_result |= ContextSet(typ) return new_result def _eval_comparison(evaluator, context, left_contexts, operator, right_contexts): if not left_contexts or not right_contexts: # illegal slices e.g. cause left/right_result to be None result = (left_contexts or NO_CONTEXTS) | (right_contexts or NO_CONTEXTS) return _literals_to_types(evaluator, result) else: # I don't think there's a reasonable chance that a string # operation is still correct, once we pass something like six # objects. if len(left_contexts) * len(right_contexts) > 6: return _literals_to_types(evaluator, left_contexts | right_contexts) else: return ContextSet.from_sets( _eval_comparison_part(evaluator, context, left, operator, right) for left in left_contexts for right in right_contexts ) def _is_tuple(context): return isinstance(context, iterable.AbstractIterable) and context.array_type == 'tuple' def _is_list(context): return isinstance(context, iterable.AbstractIterable) and context.array_type == 'list' def _eval_comparison_part(evaluator, context, left, operator, right): l_is_num = is_number(left) r_is_num = is_number(right) if operator == '*': # for iterables, ignore * operations if isinstance(left, iterable.AbstractIterable) or is_string(left): return ContextSet(left) elif isinstance(right, iterable.AbstractIterable) or is_string(right): return ContextSet(right) elif operator == '+': if l_is_num and r_is_num or is_string(left) and is_string(right): return ContextSet(compiled.create(evaluator, left.obj + right.obj)) elif _is_tuple(left) and _is_tuple(right) or _is_list(left) and _is_list(right): return ContextSet(iterable.MergedArray(evaluator, (left, right))) elif operator == '-': if l_is_num and r_is_num: return ContextSet(compiled.create(evaluator, left.obj - right.obj)) elif operator == '%': # With strings and numbers the left type typically remains. Except for # `int() % float()`. return ContextSet(left) elif operator in COMPARISON_OPERATORS: operation = COMPARISON_OPERATORS[operator] if is_compiled(left) and is_compiled(right): # Possible, because the return is not an option. Just compare. left = left.obj right = right.obj try: result = operation(left, right) except TypeError: # Could be True or False. return ContextSet(compiled.create(evaluator, True), compiled.create(evaluator, False)) else: return ContextSet(compiled.create(evaluator, result)) elif operator == 'in': return NO_CONTEXTS def check(obj): """Checks if a Jedi object is either a float or an int.""" return isinstance(obj, CompiledInstance) and \ obj.name.string_name in ('int', 'float') # Static analysis, one is a number, the other one is not. if operator in ('+', '-') and l_is_num != r_is_num \ and not (check(left) or check(right)): message = "TypeError: unsupported operand type(s) for +: %s and %s" analysis.add(context, 'type-error-operation', operator, message % (left, right)) return ContextSet(left, right) def _remove_statements(evaluator, context, stmt, name): """ This is the part where statements are being stripped. Due to lazy evaluation, statements like a = func; b = a; b() have to be evaluated. """ pep0484_contexts = \ pep0484.find_type_from_comment_hint_assign(context, stmt, name) if pep0484_contexts: return pep0484_contexts return eval_expr_stmt(context, stmt, seek_name=name) def tree_name_to_contexts(evaluator, context, tree_name): types = [] node = tree_name.get_definition(import_name_always=True) if node is None: node = tree_name.parent if node.type == 'global_stmt': context = evaluator.create_context(context, tree_name) finder = NameFinder(evaluator, context, context, tree_name.value) filters = finder.get_filters(search_global=True) # For global_stmt lookups, we only need the first possible scope, # which means the function itself. filters = [next(filters)] return finder.find(filters, attribute_lookup=False) elif node.type not in ('import_from', 'import_name'): raise ValueError("Should not happen.") typ = node.type if typ == 'for_stmt': types = pep0484.find_type_from_comment_hint_for(context, node, tree_name) if types: return types if typ == 'with_stmt': types = pep0484.find_type_from_comment_hint_with(context, node, tree_name) if types: return types if typ in ('for_stmt', 'comp_for'): try: types = context.predefined_names[node][tree_name.value] except KeyError: cn = ContextualizedNode(context, node.children[3]) for_types = iterate_contexts(cn.infer(), cn) c_node = ContextualizedName(context, tree_name) types = check_tuple_assignments(evaluator, c_node, for_types) elif typ == 'expr_stmt': types = _remove_statements(evaluator, context, node, tree_name) elif typ == 'with_stmt': context_managers = context.eval_node(node.get_test_node_from_name(tree_name)) enter_methods = context_managers.py__getattribute__('__enter__') return enter_methods.execute_evaluated() elif typ in ('import_from', 'import_name'): types = imports.infer_import(context, tree_name) elif typ in ('funcdef', 'classdef'): types = _apply_decorators(context, node) elif typ == 'try_stmt': # TODO an exception can also be a tuple. Check for those. # TODO check for types that are not classes and add it to # the static analysis report. exceptions = context.eval_node(tree_name.get_previous_sibling().get_previous_sibling()) types = exceptions.execute_evaluated() else: raise ValueError("Should not happen.") return types def _apply_decorators(context, node): """ Returns the function, that should to be executed in the end. This is also the places where the decorators are processed. """ if node.type == 'classdef': decoratee_context = ClassContext( context.evaluator, parent_context=context, classdef=node ) else: decoratee_context = FunctionContext( context.evaluator, parent_context=context, funcdef=node ) initial = values = ContextSet(decoratee_context) for dec in reversed(node.get_decorators()): debug.dbg('decorator: %s %s', dec, values) dec_values = context.eval_node(dec.children[1]) trailer_nodes = dec.children[2:-1] if trailer_nodes: # Create a trailer and evaluate it. trailer = tree.PythonNode('trailer', trailer_nodes) trailer.parent = dec dec_values = eval_trailer(context, dec_values, trailer) if not len(dec_values): debug.warning('decorator not found: %s on %s', dec, node) return initial values = dec_values.execute(arguments.ValuesArguments([values])) if not len(values): debug.warning('not possible to resolve wrappers found %s', node) return initial debug.dbg('decorator end %s', values) return values def check_tuple_assignments(evaluator, contextualized_name, context_set): """ Checks if tuples are assigned. """ lazy_context = None for index, node in contextualized_name.assignment_indexes(): cn = ContextualizedNode(contextualized_name.context, node) iterated = context_set.iterate(cn) for _ in range(index + 1): try: lazy_context = next(iterated) except StopIteration: # We could do this with the default param in next. But this # would allow this loop to run for a very long time if the # index number is high. Therefore break if the loop is # finished. return ContextSet() context_set = lazy_context.infer() return context_set def eval_subscript_list(evaluator, context, index): """ Handles slices in subscript nodes. """ if index == ':': # Like array[:] return ContextSet(iterable.Slice(context, None, None, None)) elif index.type == 'subscript' and not index.children[0] == '.': # subscript basically implies a slice operation, except for Python 2's # Ellipsis. # e.g. array[:3] result = [] for el in index.children: if el == ':': if not result: result.append(None) elif el.type == 'sliceop': if len(el.children) == 2: result.append(el.children[1]) else: result.append(el) result += [None] * (3 - len(result)) return ContextSet(iterable.Slice(context, *result)) # No slices return context.eval_node(index) jedi-0.11.1/jedi/evaluate/dynamic.py0000664000175000017500000001656013214571123017115 0ustar davedave00000000000000""" One of the really important features of |jedi| is to have an option to understand code like this:: def foo(bar): bar. # completion here foo(1) There's no doubt wheter bar is an ``int`` or not, but if there's also a call like ``foo('str')``, what would happen? Well, we'll just show both. Because that's what a human would expect. It works as follows: - |Jedi| sees a param - search for function calls named ``foo`` - execute these calls and check the input. """ from parso.python import tree from jedi import settings from jedi import debug from jedi.evaluate.cache import evaluator_function_cache from jedi.evaluate import imports from jedi.evaluate.arguments import TreeArguments from jedi.evaluate.param import create_default_params from jedi.evaluate.helpers import is_stdlib_path from jedi.evaluate.utils import to_list from jedi.parser_utils import get_parent_scope from jedi.evaluate.context import ModuleContext, instance from jedi.evaluate.base_context import ContextSet MAX_PARAM_SEARCHES = 20 class MergedExecutedParams(object): """ Simulates being a parameter while actually just being multiple params. """ def __init__(self, executed_params): self._executed_params = executed_params def infer(self): return ContextSet.from_sets(p.infer() for p in self._executed_params) @debug.increase_indent def search_params(evaluator, execution_context, funcdef): """ A dynamic search for param values. If you try to complete a type: >>> def func(foo): ... foo >>> func(1) >>> func("") It is not known what the type ``foo`` without analysing the whole code. You have to look for all calls to ``func`` to find out what ``foo`` possibly is. """ if not settings.dynamic_params: return create_default_params(execution_context, funcdef) evaluator.dynamic_params_depth += 1 try: path = execution_context.get_root_context().py__file__() if path is not None and is_stdlib_path(path): # We don't want to search for usages in the stdlib. Usually people # don't work with it (except if you are a core maintainer, sorry). # This makes everything slower. Just disable it and run the tests, # you will see the slowdown, especially in 3.6. return create_default_params(execution_context, funcdef) debug.dbg('Dynamic param search in %s.', funcdef.name.value, color='MAGENTA') module_context = execution_context.get_root_context() function_executions = _search_function_executions( evaluator, module_context, funcdef ) if function_executions: zipped_params = zip(*list( function_execution.get_params() for function_execution in function_executions )) params = [MergedExecutedParams(executed_params) for executed_params in zipped_params] # Evaluate the ExecutedParams to types. else: return create_default_params(execution_context, funcdef) debug.dbg('Dynamic param result finished', color='MAGENTA') return params finally: evaluator.dynamic_params_depth -= 1 @evaluator_function_cache(default=None) @to_list def _search_function_executions(evaluator, module_context, funcdef): """ Returns a list of param names. """ func_string_name = funcdef.name.value compare_node = funcdef if func_string_name == '__init__': cls = get_parent_scope(funcdef) if isinstance(cls, tree.Class): func_string_name = cls.name.value compare_node = cls found_executions = False i = 0 for for_mod_context in imports.get_modules_containing_name( evaluator, [module_context], func_string_name): if not isinstance(module_context, ModuleContext): return for name, trailer in _get_possible_nodes(for_mod_context, func_string_name): i += 1 # This is a simple way to stop Jedi's dynamic param recursion # from going wild: The deeper Jedi's in the recursion, the less # code should be evaluated. if i * evaluator.dynamic_params_depth > MAX_PARAM_SEARCHES: return random_context = evaluator.create_context(for_mod_context, name) for function_execution in _check_name_for_execution( evaluator, random_context, compare_node, name, trailer): found_executions = True yield function_execution # If there are results after processing a module, we're probably # good to process. This is a speed optimization. if found_executions: return def _get_possible_nodes(module_context, func_string_name): try: names = module_context.tree_node.get_used_names()[func_string_name] except KeyError: return for name in names: bracket = name.get_next_leaf() trailer = bracket.parent if trailer.type == 'trailer' and bracket == '(': yield name, trailer def _check_name_for_execution(evaluator, context, compare_node, name, trailer): from jedi.evaluate.context.function import FunctionExecutionContext def create_func_excs(): arglist = trailer.children[1] if arglist == ')': arglist = () args = TreeArguments(evaluator, context, arglist, trailer) if value_node.type == 'funcdef': yield value.get_function_execution(args) else: created_instance = instance.TreeInstance( evaluator, value.parent_context, value, args ) for execution in created_instance.create_init_executions(): yield execution for value in evaluator.goto_definitions(context, name): value_node = value.tree_node if compare_node == value_node: for func_execution in create_func_excs(): yield func_execution elif isinstance(value.parent_context, FunctionExecutionContext) and \ compare_node.type == 'funcdef': # Here we're trying to find decorators by checking the first # parameter. It's not very generic though. Should find a better # solution that also applies to nested decorators. params = value.parent_context.get_params() if len(params) != 1: continue values = params[0].infer() nodes = [v.tree_node for v in values] if nodes == [compare_node]: # Found a decorator. module_context = context.get_root_context() execution_context = next(create_func_excs()) for name, trailer in _get_possible_nodes(module_context, params[0].string_name): if value_node.start_pos < name.start_pos < value_node.end_pos: random_context = evaluator.create_context(execution_context, name) iterator = _check_name_for_execution( evaluator, random_context, compare_node, name, trailer ) for function_execution in iterator: yield function_execution jedi-0.11.1/jedi/evaluate/flow_analysis.py0000664000175000017500000000775513214571123020351 0ustar davedave00000000000000from jedi.parser_utils import get_flow_branch_keyword, is_scope, get_parent_scope class Status(object): lookup_table = {} def __init__(self, value, name): self._value = value self._name = name Status.lookup_table[value] = self def invert(self): if self is REACHABLE: return UNREACHABLE elif self is UNREACHABLE: return REACHABLE else: return UNSURE def __and__(self, other): if UNSURE in (self, other): return UNSURE else: return REACHABLE if self._value and other._value else UNREACHABLE def __repr__(self): return '<%s: %s>' % (type(self).__name__, self._name) REACHABLE = Status(True, 'reachable') UNREACHABLE = Status(False, 'unreachable') UNSURE = Status(None, 'unsure') def _get_flow_scopes(node): while True: node = get_parent_scope(node, include_flows=True) if node is None or is_scope(node): return yield node def reachability_check(context, context_scope, node, origin_scope=None): first_flow_scope = get_parent_scope(node, include_flows=True) if origin_scope is not None: origin_flow_scopes = list(_get_flow_scopes(origin_scope)) node_flow_scopes = list(_get_flow_scopes(node)) branch_matches = True for flow_scope in origin_flow_scopes: if flow_scope in node_flow_scopes: node_keyword = get_flow_branch_keyword(flow_scope, node) origin_keyword = get_flow_branch_keyword(flow_scope, origin_scope) branch_matches = node_keyword == origin_keyword if flow_scope.type == 'if_stmt': if not branch_matches: return UNREACHABLE elif flow_scope.type == 'try_stmt': if not branch_matches and origin_keyword == 'else' \ and node_keyword == 'except': return UNREACHABLE break # Direct parents get resolved, we filter scopes that are separate # branches. This makes sense for autocompletion and static analysis. # For actual Python it doesn't matter, because we're talking about # potentially unreachable code. # e.g. `if 0:` would cause all name lookup within the flow make # unaccessible. This is not a "problem" in Python, because the code is # never called. In Jedi though, we still want to infer types. while origin_scope is not None: if first_flow_scope == origin_scope and branch_matches: return REACHABLE origin_scope = origin_scope.parent return _break_check(context, context_scope, first_flow_scope, node) def _break_check(context, context_scope, flow_scope, node): reachable = REACHABLE if flow_scope.type == 'if_stmt': if flow_scope.is_node_after_else(node): for check_node in flow_scope.get_test_nodes(): reachable = _check_if(context, check_node) if reachable in (REACHABLE, UNSURE): break reachable = reachable.invert() else: flow_node = flow_scope.get_corresponding_test_node(node) if flow_node is not None: reachable = _check_if(context, flow_node) elif flow_scope.type in ('try_stmt', 'while_stmt'): return UNSURE # Only reachable branches need to be examined further. if reachable in (UNREACHABLE, UNSURE): return reachable if context_scope != flow_scope and context_scope != flow_scope.parent: flow_scope = get_parent_scope(flow_scope, include_flows=True) return reachable & _break_check(context, context_scope, flow_scope, node) else: return reachable def _check_if(context, node): types = context.eval_node(node) values = set(x.py__bool__() for x in types) if len(values) == 1: return Status.lookup_table[values.pop()] else: return UNSURE jedi-0.11.1/jedi/evaluate/jedi_typing.py0000664000175000017500000000500313214571123017764 0ustar davedave00000000000000""" This module is not intended to be used in jedi, rather it will be fed to the jedi-parser to replace classes in the typing module """ try: from collections import abc except ImportError: # python 2 import collections as abc def factory(typing_name, indextypes): class Iterable(abc.Iterable): def __iter__(self): while True: yield indextypes[0]() class Iterator(Iterable, abc.Iterator): def next(self): """ needed for python 2 """ return self.__next__() def __next__(self): return indextypes[0]() class Sequence(abc.Sequence): def __getitem__(self, index): return indextypes[0]() class MutableSequence(Sequence, abc.MutableSequence): pass class List(MutableSequence, list): pass class Tuple(Sequence, tuple): def __getitem__(self, index): if indextypes[1] == Ellipsis: # https://www.python.org/dev/peps/pep-0484/#the-typing-module # Tuple[int, ...] means a tuple of ints of indetermined length return indextypes[0]() else: return indextypes[index]() class AbstractSet(Iterable, abc.Set): pass class MutableSet(AbstractSet, abc.MutableSet): pass class KeysView(Iterable, abc.KeysView): pass class ValuesView(abc.ValuesView): def __iter__(self): while True: yield indextypes[1]() class ItemsView(abc.ItemsView): def __iter__(self): while True: yield (indextypes[0](), indextypes[1]()) class Mapping(Iterable, abc.Mapping): def __getitem__(self, item): return indextypes[1]() def keys(self): return KeysView() def values(self): return ValuesView() def items(self): return ItemsView() class MutableMapping(Mapping, abc.MutableMapping): pass class Dict(MutableMapping, dict): pass dct = { "Sequence": Sequence, "MutableSequence": MutableSequence, "List": List, "Iterable": Iterable, "Iterator": Iterator, "AbstractSet": AbstractSet, "MutableSet": MutableSet, "Mapping": Mapping, "MutableMapping": MutableMapping, "Tuple": Tuple, "KeysView": KeysView, "ItemsView": ItemsView, "ValuesView": ValuesView, "Dict": Dict, } return dct[typing_name] jedi-0.11.1/jedi/evaluate/docstrings.py0000664000175000017500000002406513214571123017647 0ustar davedave00000000000000""" Docstrings are another source of information for functions and classes. :mod:`jedi.evaluate.dynamic` tries to find all executions of functions, while the docstring parsing is much easier. There are three different types of docstrings that |jedi| understands: - `Sphinx `_ - `Epydoc `_ - `Numpydoc `_ For example, the sphinx annotation ``:type foo: str`` clearly states that the type of ``foo`` is ``str``. As an addition to parameter searching, this module also provides return annotations. """ import re from textwrap import dedent from parso import parse from jedi._compatibility import u from jedi.evaluate.utils import indent_block from jedi.evaluate.cache import evaluator_method_cache from jedi.evaluate.base_context import iterator_to_context_set, ContextSet, \ NO_CONTEXTS from jedi.evaluate.lazy_context import LazyKnownContexts DOCSTRING_PARAM_PATTERNS = [ r'\s*:type\s+%s:\s*([^\n]+)', # Sphinx r'\s*:param\s+(\w+)\s+%s:[^\n]*', # Sphinx param with type r'\s*@type\s+%s:\s*([^\n]+)', # Epydoc ] DOCSTRING_RETURN_PATTERNS = [ re.compile(r'\s*:rtype:\s*([^\n]+)', re.M), # Sphinx re.compile(r'\s*@rtype:\s*([^\n]+)', re.M), # Epydoc ] REST_ROLE_PATTERN = re.compile(r':[^`]+:`([^`]+)`') try: from numpydoc.docscrape import NumpyDocString except ImportError: def _search_param_in_numpydocstr(docstr, param_str): return [] def _search_return_in_numpydocstr(docstr): return [] else: def _search_param_in_numpydocstr(docstr, param_str): """Search `docstr` (in numpydoc format) for type(-s) of `param_str`.""" try: # This is a non-public API. If it ever changes we should be # prepared and return gracefully. params = NumpyDocString(docstr)._parsed_data['Parameters'] except (KeyError, AttributeError): return [] for p_name, p_type, p_descr in params: if p_name == param_str: m = re.match('([^,]+(,[^,]+)*?)(,[ ]*optional)?$', p_type) if m: p_type = m.group(1) return list(_expand_typestr(p_type)) return [] def _search_return_in_numpydocstr(docstr): """ Search `docstr` (in numpydoc format) for type(-s) of function returns. """ doc = NumpyDocString(docstr) try: # This is a non-public API. If it ever changes we should be # prepared and return gracefully. returns = doc._parsed_data['Returns'] returns += doc._parsed_data['Yields'] except (KeyError, AttributeError): raise StopIteration for r_name, r_type, r_descr in returns: #Return names are optional and if so the type is in the name if not r_type: r_type = r_name for type_ in _expand_typestr(r_type): yield type_ def _expand_typestr(type_str): """ Attempts to interpret the possible types in `type_str` """ # Check if alternative types are specified with 'or' if re.search('\\bor\\b', type_str): for t in type_str.split('or'): yield t.split('of')[0].strip() # Check if like "list of `type`" and set type to list elif re.search('\\bof\\b', type_str): yield type_str.split('of')[0] # Check if type has is a set of valid literal values eg: {'C', 'F', 'A'} elif type_str.startswith('{'): node = parse(type_str, version='3.6').children[0] if node.type == 'atom': for leaf in node.children[1].children: if leaf.type == 'number': if '.' in leaf.value: yield 'float' else: yield 'int' elif leaf.type == 'string': if 'b' in leaf.string_prefix.lower(): yield 'bytes' else: yield 'str' # Ignore everything else. # Otherwise just work with what we have. else: yield type_str def _search_param_in_docstr(docstr, param_str): """ Search `docstr` for type(-s) of `param_str`. >>> _search_param_in_docstr(':type param: int', 'param') ['int'] >>> _search_param_in_docstr('@type param: int', 'param') ['int'] >>> _search_param_in_docstr( ... ':type param: :class:`threading.Thread`', 'param') ['threading.Thread'] >>> bool(_search_param_in_docstr('no document', 'param')) False >>> _search_param_in_docstr(':param int param: some description', 'param') ['int'] """ # look at #40 to see definitions of those params patterns = [re.compile(p % re.escape(param_str)) for p in DOCSTRING_PARAM_PATTERNS] for pattern in patterns: match = pattern.search(docstr) if match: return [_strip_rst_role(match.group(1))] return (_search_param_in_numpydocstr(docstr, param_str) or []) def _strip_rst_role(type_str): """ Strip off the part looks like a ReST role in `type_str`. >>> _strip_rst_role(':class:`ClassName`') # strip off :class: 'ClassName' >>> _strip_rst_role(':py:obj:`module.Object`') # works with domain 'module.Object' >>> _strip_rst_role('ClassName') # do nothing when not ReST role 'ClassName' See also: http://sphinx-doc.org/domains.html#cross-referencing-python-objects """ match = REST_ROLE_PATTERN.match(type_str) if match: return match.group(1) else: return type_str def _evaluate_for_statement_string(module_context, string): code = dedent(u(""" def pseudo_docstring_stuff(): ''' Create a pseudo function for docstring statements. Need this docstring so that if the below part is not valid Python this is still a function. ''' {0} """)) if string is None: return [] for element in re.findall('((?:\w+\.)*\w+)\.', string): # Try to import module part in dotted name. # (e.g., 'threading' in 'threading.Thread'). string = 'import %s\n' % element + string # Take the default grammar here, if we load the Python 2.7 grammar here, it # will be impossible to use `...` (Ellipsis) as a token. Docstring types # don't need to conform with the current grammar. grammar = module_context.evaluator.latest_grammar module = grammar.parse(code.format(indent_block(string))) try: funcdef = next(module.iter_funcdefs()) # First pick suite, then simple_stmt and then the node, # which is also not the last item, because there's a newline. stmt = funcdef.children[-1].children[-1].children[-2] except (AttributeError, IndexError): return [] from jedi.evaluate.context import FunctionContext function_context = FunctionContext( module_context.evaluator, module_context, funcdef ) func_execution_context = function_context.get_function_execution() # Use the module of the param. # TODO this module is not the module of the param in case of a function # call. In that case it's the module of the function call. # stuffed with content from a function call. return list(_execute_types_in_stmt(func_execution_context, stmt)) def _execute_types_in_stmt(module_context, stmt): """ Executing all types or general elements that we find in a statement. This doesn't include tuple, list and dict literals, because the stuff they contain is executed. (Used as type information). """ definitions = module_context.eval_node(stmt) return ContextSet.from_sets( _execute_array_values(module_context.evaluator, d) for d in definitions ) def _execute_array_values(evaluator, array): """ Tuples indicate that there's not just one return value, but the listed ones. `(str, int)` means that it returns a tuple with both types. """ from jedi.evaluate.context.iterable import SequenceLiteralContext, FakeSequence if isinstance(array, SequenceLiteralContext): values = [] for lazy_context in array.py__iter__(): objects = ContextSet.from_sets( _execute_array_values(evaluator, typ) for typ in lazy_context.infer() ) values.append(LazyKnownContexts(objects)) return set([FakeSequence(evaluator, array.array_type, values)]) else: return array.execute_evaluated() @evaluator_method_cache() def infer_param(execution_context, param): from jedi.evaluate.context.instance import AnonymousInstanceFunctionExecution def eval_docstring(docstring): return ContextSet.from_iterable( p for param_str in _search_param_in_docstr(docstring, param.name.value) for p in _evaluate_for_statement_string(module_context, param_str) ) module_context = execution_context.get_root_context() func = param.get_parent_function() if func.type == 'lambdef': return NO_CONTEXTS types = eval_docstring(execution_context.py__doc__()) if isinstance(execution_context, AnonymousInstanceFunctionExecution) and \ execution_context.function_context.name.string_name == '__init__': class_context = execution_context.instance.class_context types |= eval_docstring(class_context.py__doc__()) return types @evaluator_method_cache() @iterator_to_context_set def infer_return_types(function_context): def search_return_in_docstr(code): for p in DOCSTRING_RETURN_PATTERNS: match = p.search(code) if match: yield _strip_rst_role(match.group(1)) # Check for numpy style return hint for type_ in _search_return_in_numpydocstr(code): yield type_ for type_str in search_return_in_docstr(function_context.py__doc__()): for type_eval in _evaluate_for_statement_string(function_context.get_root_context(), type_str): yield type_eval jedi-0.11.1/jedi/evaluate/analysis.py0000664000175000017500000001726313214571123017315 0ustar davedave00000000000000""" Module for statical analysis. """ from jedi import debug from parso.python import tree from jedi.evaluate.compiled import CompiledObject CODES = { 'attribute-error': (1, AttributeError, 'Potential AttributeError.'), 'name-error': (2, NameError, 'Potential NameError.'), 'import-error': (3, ImportError, 'Potential ImportError.'), 'type-error-too-many-arguments': (4, TypeError, None), 'type-error-too-few-arguments': (5, TypeError, None), 'type-error-keyword-argument': (6, TypeError, None), 'type-error-multiple-values': (7, TypeError, None), 'type-error-star-star': (8, TypeError, None), 'type-error-star': (9, TypeError, None), 'type-error-operation': (10, TypeError, None), 'type-error-not-iterable': (11, TypeError, None), 'type-error-isinstance': (12, TypeError, None), 'type-error-not-subscriptable': (13, TypeError, None), 'value-error-too-many-values': (14, ValueError, None), 'value-error-too-few-values': (15, ValueError, None), } class Error(object): def __init__(self, name, module_path, start_pos, message=None): self.path = module_path self._start_pos = start_pos self.name = name if message is None: message = CODES[self.name][2] self.message = message @property def line(self): return self._start_pos[0] @property def column(self): return self._start_pos[1] @property def code(self): # The class name start first = self.__class__.__name__[0] return first + str(CODES[self.name][0]) def __unicode__(self): return '%s:%s:%s: %s %s' % (self.path, self.line, self.column, self.code, self.message) def __str__(self): return self.__unicode__() def __eq__(self, other): return (self.path == other.path and self.name == other.name and self._start_pos == other._start_pos) def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self.path, self._start_pos, self.name)) def __repr__(self): return '<%s %s: %s@%s,%s>' % (self.__class__.__name__, self.name, self.path, self._start_pos[0], self._start_pos[1]) class Warning(Error): pass def add(node_context, error_name, node, message=None, typ=Error, payload=None): exception = CODES[error_name][1] if _check_for_exception_catch(node_context, node, exception, payload): return # TODO this path is probably not right module_context = node_context.get_root_context() module_path = module_context.py__file__() instance = typ(error_name, module_path, node.start_pos, message) debug.warning(str(instance), format=False) node_context.evaluator.analysis.append(instance) def _check_for_setattr(instance): """ Check if there's any setattr method inside an instance. If so, return True. """ from jedi.evaluate.context import ModuleContext module = instance.get_root_context() if not isinstance(module, ModuleContext): return False node = module.tree_node try: stmts = node.get_used_names()['setattr'] except KeyError: return False return any(node.start_pos < stmt.start_pos < node.end_pos for stmt in stmts) def add_attribute_error(name_context, lookup_context, name): message = ('AttributeError: %s has no attribute %s.' % (lookup_context, name)) from jedi.evaluate.context.instance import AbstractInstanceContext, CompiledInstanceName # Check for __getattr__/__getattribute__ existance and issue a warning # instead of an error, if that happens. typ = Error if isinstance(lookup_context, AbstractInstanceContext): slot_names = lookup_context.get_function_slot_names('__getattr__') + \ lookup_context.get_function_slot_names('__getattribute__') for n in slot_names: if isinstance(name, CompiledInstanceName) and \ n.parent_context.obj == object: typ = Warning break if _check_for_setattr(lookup_context): typ = Warning payload = lookup_context, name add(name_context, 'attribute-error', name, message, typ, payload) def _check_for_exception_catch(node_context, jedi_name, exception, payload=None): """ Checks if a jedi object (e.g. `Statement`) sits inside a try/catch and doesn't count as an error (if equal to `exception`). Also checks `hasattr` for AttributeErrors and uses the `payload` to compare it. Returns True if the exception was catched. """ def check_match(cls, exception): try: return isinstance(cls, CompiledObject) and issubclass(exception, cls.obj) except TypeError: return False def check_try_for_except(obj, exception): # Only nodes in try iterator = iter(obj.children) for branch_type in iterator: colon = next(iterator) suite = next(iterator) if branch_type == 'try' \ and not (branch_type.start_pos < jedi_name.start_pos <= suite.end_pos): return False for node in obj.get_except_clause_tests(): if node is None: return True # An exception block that catches everything. else: except_classes = node_context.eval_node(node) for cls in except_classes: from jedi.evaluate.context import iterable if isinstance(cls, iterable.AbstractIterable) and \ cls.array_type == 'tuple': # multiple exceptions for lazy_context in cls.py__iter__(): for typ in lazy_context.infer(): if check_match(typ, exception): return True else: if check_match(cls, exception): return True def check_hasattr(node, suite): try: assert suite.start_pos <= jedi_name.start_pos < suite.end_pos assert node.type in ('power', 'atom_expr') base = node.children[0] assert base.type == 'name' and base.value == 'hasattr' trailer = node.children[1] assert trailer.type == 'trailer' arglist = trailer.children[1] assert arglist.type == 'arglist' from jedi.evaluate.arguments import TreeArguments args = list(TreeArguments(node_context.evaluator, node_context, arglist).unpack()) # Arguments should be very simple assert len(args) == 2 # Check name key, lazy_context = args[1] names = list(lazy_context.infer()) assert len(names) == 1 and isinstance(names[0], CompiledObject) assert names[0].obj == payload[1].value # Check objects key, lazy_context = args[0] objects = lazy_context.infer() return payload[0] in objects except AssertionError: return False obj = jedi_name while obj is not None and not isinstance(obj, (tree.Function, tree.Class)): if isinstance(obj, tree.Flow): # try/except catch check if obj.type == 'try_stmt' and check_try_for_except(obj, exception): return True # hasattr check if exception == AttributeError and obj.type in ('if_stmt', 'while_stmt'): if check_hasattr(obj.children[1], obj.children[3]): return True obj = obj.parent return False jedi-0.11.1/jedi/evaluate/recursion.py0000664000175000017500000001020113214571123017464 0ustar davedave00000000000000""" Recursions are the recipe of |jedi| to conquer Python code. However, someone must stop recursions going mad. Some settings are here to make |jedi| stop at the right time. You can read more about them :ref:`here `. Next to :mod:`jedi.evaluate.cache` this module also makes |jedi| not thread-safe. Why? ``execution_recursion_decorator`` uses class variables to count the function calls. .. _settings-recursion: Settings ~~~~~~~~~~ Recursion settings are important if you don't want extremly recursive python code to go absolutely crazy. The default values are based on experiments while completing the |jedi| library itself (inception!). But I don't think there's any other Python library that uses recursion in a similarly extreme way. Completion should also be fast and therefore the quality might not always be maximal. .. autodata:: recursion_limit .. autodata:: total_function_execution_limit .. autodata:: per_function_execution_limit .. autodata:: per_function_recursion_limit """ from contextlib import contextmanager from jedi import debug from jedi.evaluate.base_context import NO_CONTEXTS recursion_limit = 15 """ Like ``sys.getrecursionlimit()``, just for |jedi|. """ total_function_execution_limit = 200 """ This is a hard limit of how many non-builtin functions can be executed. """ per_function_execution_limit = 6 """ The maximal amount of times a specific function may be executed. """ per_function_recursion_limit = 2 """ A function may not be executed more than this number of times recursively. """ class RecursionDetector(object): def __init__(self): self.pushed_nodes = [] @contextmanager def execution_allowed(evaluator, node): """ A decorator to detect recursions in statements. In a recursion a statement at the same place, in the same module may not be executed two times. """ pushed_nodes = evaluator.recursion_detector.pushed_nodes if node in pushed_nodes: debug.warning('catched stmt recursion: %s @%s', node, node.start_pos) yield False else: pushed_nodes.append(node) yield True pushed_nodes.pop() def execution_recursion_decorator(default=NO_CONTEXTS): def decorator(func): def wrapper(execution, **kwargs): detector = execution.evaluator.execution_recursion_detector allowed = detector.push_execution(execution) try: if allowed: result = default else: result = func(execution, **kwargs) finally: detector.pop_execution() return result return wrapper return decorator class ExecutionRecursionDetector(object): """ Catches recursions of executions. """ def __init__(self, evaluator): self._evaluator = evaluator self._recursion_level = 0 self._parent_execution_funcs = [] self._funcdef_execution_counts = {} self._execution_count = 0 def pop_execution(self): self._parent_execution_funcs.pop() self._recursion_level -= 1 def push_execution(self, execution): funcdef = execution.tree_node # These two will be undone in pop_execution. self._recursion_level += 1 self._parent_execution_funcs.append(funcdef) module = execution.get_root_context() if module == self._evaluator.BUILTINS: # We have control over builtins so we know they are not recursing # like crazy. Therefore we just let them execute always, because # they usually just help a lot with getting good results. return False if self._recursion_level > recursion_limit: return True if self._execution_count >= total_function_execution_limit: return True self._execution_count += 1 if self._funcdef_execution_counts.setdefault(funcdef, 0) >= per_function_execution_limit: return True self._funcdef_execution_counts[funcdef] += 1 if self._parent_execution_funcs.count(funcdef) > per_function_recursion_limit: return True return False jedi-0.11.1/jedi/evaluate/lazy_context.py0000664000175000017500000000330713214571123020207 0ustar davedave00000000000000from jedi.evaluate.base_context import ContextSet, NO_CONTEXTS class AbstractLazyContext(object): def __init__(self, data): self.data = data def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.data) def infer(self): raise NotImplementedError class LazyKnownContext(AbstractLazyContext): """data is a context.""" def infer(self): return ContextSet(self.data) class LazyKnownContexts(AbstractLazyContext): """data is a ContextSet.""" def infer(self): return self.data class LazyUnknownContext(AbstractLazyContext): def __init__(self): super(LazyUnknownContext, self).__init__(None) def infer(self): return NO_CONTEXTS class LazyTreeContext(AbstractLazyContext): def __init__(self, context, node): super(LazyTreeContext, self).__init__(node) self._context = context # We need to save the predefined names. It's an unfortunate side effect # that needs to be tracked otherwise results will be wrong. self._predefined_names = dict(context.predefined_names) def infer(self): old, self._context.predefined_names = \ self._context.predefined_names, self._predefined_names try: return self._context.eval_node(self.data) finally: self._context.predefined_names = old def get_merged_lazy_context(lazy_contexts): if len(lazy_contexts) > 1: return MergedLazyContexts(lazy_contexts) else: return lazy_contexts[0] class MergedLazyContexts(AbstractLazyContext): """data is a list of lazy contexts.""" def infer(self): return ContextSet.from_sets(l.infer() for l in self.data) jedi-0.11.1/jedi/evaluate/helpers.py0000664000175000017500000001405313214571123017126 0ustar davedave00000000000000import copy import sys import re import os from itertools import chain from contextlib import contextmanager from parso.python import tree from jedi._compatibility import unicode from jedi.parser_utils import get_parent_scope from jedi.evaluate.compiled import CompiledObject def is_stdlib_path(path): # Python standard library paths look like this: # /usr/lib/python3.5/... # TODO The implementation below is probably incorrect and not complete. if 'dist-packages' in path or 'site-packages' in path: return False base_path = os.path.join(sys.prefix, 'lib', 'python') return bool(re.match(re.escape(base_path) + '\d.\d', path)) def deep_ast_copy(obj): """ Much, much faster than copy.deepcopy, but just for parser tree nodes. """ # If it's already in the cache, just return it. new_obj = copy.copy(obj) # Copy children new_children = [] for child in obj.children: if isinstance(child, tree.Leaf): new_child = copy.copy(child) new_child.parent = new_obj else: new_child = deep_ast_copy(child) new_child.parent = new_obj new_children.append(new_child) new_obj.children = new_children return new_obj def evaluate_call_of_leaf(context, leaf, cut_own_trailer=False): """ Creates a "call" node that consist of all ``trailer`` and ``power`` objects. E.g. if you call it with ``append``:: list([]).append(3) or None You would get a node with the content ``list([]).append`` back. This generates a copy of the original ast node. If you're using the leaf, e.g. the bracket `)` it will return ``list([])``. We use this function for two purposes. Given an expression ``bar.foo``, we may want to - infer the type of ``foo`` to offer completions after foo - infer the type of ``bar`` to be able to jump to the definition of foo The option ``cut_own_trailer`` must be set to true for the second purpose. """ trailer = leaf.parent # The leaf may not be the last or first child, because there exist three # different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples # we should not match anything more than x. if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]): if trailer.type == 'atom': return context.eval_node(trailer) return context.eval_node(leaf) power = trailer.parent index = power.children.index(trailer) if cut_own_trailer: cut = index else: cut = index + 1 if power.type == 'error_node': start = index while True: start -= 1 base = power.children[start] if base.type != 'trailer': break trailers = power.children[start + 1: index + 1] else: base = power.children[0] trailers = power.children[1:cut] if base == 'await': base = trailers[0] trailers = trailers[1:] values = context.eval_node(base) from jedi.evaluate.syntax_tree import eval_trailer for trailer in trailers: values = eval_trailer(context, values, trailer) return values def call_of_leaf(leaf): """ Creates a "call" node that consist of all ``trailer`` and ``power`` objects. E.g. if you call it with ``append``:: list([]).append(3) or None You would get a node with the content ``list([]).append`` back. This generates a copy of the original ast node. If you're using the leaf, e.g. the bracket `)` it will return ``list([])``. """ # TODO this is the old version of this call. Try to remove it. trailer = leaf.parent # The leaf may not be the last or first child, because there exist three # different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples # we should not match anything more than x. if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]): if trailer.type == 'atom': return trailer return leaf power = trailer.parent index = power.children.index(trailer) new_power = copy.copy(power) new_power.children = list(new_power.children) new_power.children[index + 1:] = [] if power.type == 'error_node': start = index while True: start -= 1 if power.children[start].type != 'trailer': break transformed = tree.Node('power', power.children[start:]) transformed.parent = power.parent return transformed return power def get_names_of_node(node): try: children = node.children except AttributeError: if node.type == 'name': return [node] else: return [] else: return list(chain.from_iterable(get_names_of_node(c) for c in children)) def get_module_names(module, all_scopes): """ Returns a dictionary with name parts as keys and their call paths as values. """ names = chain.from_iterable(module.get_used_names().values()) if not all_scopes: # We have to filter all the names that don't have the module as a # parent_scope. There's None as a parent, because nodes in the module # node have the parent module and not suite as all the others. # Therefore it's important to catch that case. names = [n for n in names if get_parent_scope(n).parent in (module, None)] return names @contextmanager def predefine_names(context, flow_scope, dct): predefined = context.predefined_names if flow_scope in predefined: raise NotImplementedError('Why does this happen?') predefined[flow_scope] = dct try: yield finally: del predefined[flow_scope] def is_compiled(context): return isinstance(context, CompiledObject) def is_string(context): return is_compiled(context) and isinstance(context.obj, (str, unicode)) def is_literal(context): return is_number(context) or is_string(context) def is_number(context): return is_compiled(context) and isinstance(context.obj, (int, float)) jedi-0.11.1/jedi/evaluate/param.py0000664000175000017500000001735213214571123016571 0ustar davedave00000000000000from collections import defaultdict from jedi.evaluate.utils import PushBackIterator from jedi.evaluate import analysis from jedi.evaluate.lazy_context import LazyKnownContext, \ LazyTreeContext, LazyUnknownContext from jedi.evaluate import docstrings from jedi.evaluate import pep0484 from jedi.evaluate.context import iterable def _add_argument_issue(parent_context, error_name, lazy_context, message): if isinstance(lazy_context, LazyTreeContext): node = lazy_context.data if node.parent.type == 'argument': node = node.parent analysis.add(parent_context, error_name, node, message) class ExecutedParam(object): """Fake a param and give it values.""" def __init__(self, execution_context, param_node, lazy_context): self._execution_context = execution_context self._param_node = param_node self._lazy_context = lazy_context self.string_name = param_node.name.value def infer(self): pep0484_hints = pep0484.infer_param(self._execution_context, self._param_node) doc_params = docstrings.infer_param(self._execution_context, self._param_node) if pep0484_hints or doc_params: return pep0484_hints | doc_params return self._lazy_context.infer() @property def var_args(self): return self._execution_context.var_args def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.string_name) def get_params(execution_context, var_args): result_params = [] param_dict = {} funcdef = execution_context.tree_node parent_context = execution_context.parent_context for param in funcdef.get_params(): param_dict[param.name.value] = param unpacked_va = list(var_args.unpack(funcdef)) var_arg_iterator = PushBackIterator(iter(unpacked_va)) non_matching_keys = defaultdict(lambda: []) keys_used = {} keys_only = False had_multiple_value_error = False for param in funcdef.get_params(): # The value and key can both be null. There, the defaults apply. # args / kwargs will just be empty arrays / dicts, respectively. # Wrong value count is just ignored. If you try to test cases that are # not allowed in Python, Jedi will maybe not show any completions. key, argument = next(var_arg_iterator, (None, None)) while key is not None: keys_only = True try: key_param = param_dict[key] except KeyError: non_matching_keys[key] = argument else: if key in keys_used: had_multiple_value_error = True m = ("TypeError: %s() got multiple values for keyword argument '%s'." % (funcdef.name, key)) for node in var_args.get_calling_nodes(): analysis.add(parent_context, 'type-error-multiple-values', node, message=m) else: keys_used[key] = ExecutedParam(execution_context, key_param, argument) key, argument = next(var_arg_iterator, (None, None)) try: result_params.append(keys_used[param.name.value]) continue except KeyError: pass if param.star_count == 1: # *args param lazy_context_list = [] if argument is not None: lazy_context_list.append(argument) for key, argument in var_arg_iterator: # Iterate until a key argument is found. if key: var_arg_iterator.push_back((key, argument)) break lazy_context_list.append(argument) seq = iterable.FakeSequence(execution_context.evaluator, 'tuple', lazy_context_list) result_arg = LazyKnownContext(seq) elif param.star_count == 2: # **kwargs param dct = iterable.FakeDict(execution_context.evaluator, dict(non_matching_keys)) result_arg = LazyKnownContext(dct) non_matching_keys = {} else: # normal param if argument is None: # No value: Return an empty container if param.default is None: result_arg = LazyUnknownContext() if not keys_only: for node in var_args.get_calling_nodes(): m = _error_argument_count(funcdef, len(unpacked_va)) analysis.add(parent_context, 'type-error-too-few-arguments', node, message=m) else: result_arg = LazyTreeContext(parent_context, param.default) else: result_arg = argument result_params.append(ExecutedParam(execution_context, param, result_arg)) if not isinstance(result_arg, LazyUnknownContext): keys_used[param.name.value] = result_params[-1] if keys_only: # All arguments should be handed over to the next function. It's not # about the values inside, it's about the names. Jedi needs to now that # there's nothing to find for certain names. for k in set(param_dict) - set(keys_used): param = param_dict[k] if not (non_matching_keys or had_multiple_value_error or param.star_count or param.default): # add a warning only if there's not another one. for node in var_args.get_calling_nodes(): m = _error_argument_count(funcdef, len(unpacked_va)) analysis.add(parent_context, 'type-error-too-few-arguments', node, message=m) for key, lazy_context in non_matching_keys.items(): m = "TypeError: %s() got an unexpected keyword argument '%s'." \ % (funcdef.name, key) _add_argument_issue( parent_context, 'type-error-keyword-argument', lazy_context, message=m ) remaining_arguments = list(var_arg_iterator) if remaining_arguments: m = _error_argument_count(funcdef, len(unpacked_va)) # Just report an error for the first param that is not needed (like # cPython). first_key, lazy_context = remaining_arguments[0] if var_args.get_calling_nodes(): # There might not be a valid calling node so check for that first. _add_argument_issue(parent_context, 'type-error-too-many-arguments', lazy_context, message=m) return result_params def _error_argument_count(funcdef, actual_count): params = funcdef.get_params() default_arguments = sum(1 for p in params if p.default or p.star_count) if default_arguments == 0: before = 'exactly ' else: before = 'from %s to ' % (len(params) - default_arguments) return ('TypeError: %s() takes %s%s arguments (%s given).' % (funcdef.name, before, len(params), actual_count)) def _create_default_param(execution_context, param): if param.star_count == 1: result_arg = LazyKnownContext( iterable.FakeSequence(execution_context.evaluator, 'tuple', []) ) elif param.star_count == 2: result_arg = LazyKnownContext( iterable.FakeDict(execution_context.evaluator, {}) ) elif param.default is None: result_arg = LazyUnknownContext() else: result_arg = LazyTreeContext(execution_context.parent_context, param.default) return ExecutedParam(execution_context, param, result_arg) def create_default_params(execution_context, funcdef): return [_create_default_param(execution_context, p) for p in funcdef.get_params()] jedi-0.11.1/jedi/evaluate/stdlib.py0000664000175000017500000002613713214571123016753 0ustar davedave00000000000000""" Implementations of standard library functions, because it's not possible to understand them with Jedi. To add a new implementation, create a function and add it to the ``_implemented`` dict at the bottom of this module. Note that this module exists only to implement very specific functionality in the standard library. The usual way to understand the standard library is the compiled module that returns the types for C-builtins. """ import collections import re from jedi import debug from jedi.evaluate.arguments import ValuesArguments from jedi.evaluate import analysis from jedi.evaluate import compiled from jedi.evaluate.context.instance import InstanceFunctionExecution, \ AbstractInstanceContext, CompiledInstance, BoundMethod, \ AnonymousInstanceFunctionExecution from jedi.evaluate.base_context import ContextualizedNode, \ NO_CONTEXTS, ContextSet from jedi.evaluate.context import ClassContext, ModuleContext from jedi.evaluate.context import iterable from jedi.evaluate.lazy_context import LazyTreeContext from jedi.evaluate.syntax_tree import is_string # Now this is all part of fake tuples in Jedi. However super doesn't work on # __init__ and __new__ doesn't work at all. So adding this to nametuples is # just the easiest way. _NAMEDTUPLE_INIT = """ def __init__(_cls, {arg_list}): 'A helper function for namedtuple.' self.__iterable = ({arg_list}) def __iter__(self): for i in self.__iterable: yield i def __getitem__(self, y): return self.__iterable[y] """ class NotInStdLib(LookupError): pass def execute(evaluator, obj, arguments): if isinstance(obj, BoundMethod): raise NotInStdLib() try: obj_name = obj.name.string_name except AttributeError: pass else: if obj.parent_context == evaluator.BUILTINS: module_name = 'builtins' elif isinstance(obj.parent_context, ModuleContext): module_name = obj.parent_context.name.string_name else: module_name = '' # for now we just support builtin functions. try: func = _implemented[module_name][obj_name] except KeyError: pass else: return func(evaluator, obj, arguments) raise NotInStdLib() def _follow_param(evaluator, arguments, index): try: key, lazy_context = list(arguments.unpack())[index] except IndexError: return NO_CONTEXTS else: return lazy_context.infer() def argument_clinic(string, want_obj=False, want_context=False, want_arguments=False): """ Works like Argument Clinic (PEP 436), to validate function params. """ clinic_args = [] allow_kwargs = False optional = False while string: # Optional arguments have to begin with a bracket. And should always be # at the end of the arguments. This is therefore not a proper argument # clinic implementation. `range()` for exmple allows an optional start # value at the beginning. match = re.match('(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string) string = string[len(match.group(0)):] if not match.group(2): # A slash -> allow named arguments allow_kwargs = True continue optional = optional or bool(match.group(1)) word = match.group(2) clinic_args.append((word, optional, allow_kwargs)) def f(func): def wrapper(evaluator, obj, arguments): debug.dbg('builtin start %s' % obj, color='MAGENTA') try: lst = list(arguments.eval_argument_clinic(clinic_args)) except ValueError: return NO_CONTEXTS else: kwargs = {} if want_context: kwargs['context'] = arguments.context if want_obj: kwargs['obj'] = obj if want_arguments: kwargs['arguments'] = arguments return func(evaluator, *lst, **kwargs) finally: debug.dbg('builtin end', color='MAGENTA') return wrapper return f @argument_clinic('iterator[, default], /') def builtins_next(evaluator, iterators, defaults): """ TODO this function is currently not used. It's a stab at implementing next in a different way than fake objects. This would be a bit more flexible. """ if evaluator.python_version[0] == 2: name = 'next' else: name = '__next__' context_set = NO_CONTEXTS for iterator in iterators: if isinstance(iterator, AbstractInstanceContext): context_set = ContextSet.from_sets( n.infer() for filter in iterator.get_filters(include_self_names=True) for n in filter.get(name) ).execute_evaluated() if context_set: return context_set return defaults @argument_clinic('object, name[, default], /') def builtins_getattr(evaluator, objects, names, defaults=None): # follow the first param for obj in objects: for name in names: if is_string(name): return obj.py__getattribute__(name.obj) else: debug.warning('getattr called without str') continue return NO_CONTEXTS @argument_clinic('object[, bases, dict], /') def builtins_type(evaluator, objects, bases, dicts): if bases or dicts: # It's a type creation... maybe someday... return NO_CONTEXTS else: return objects.py__class__() class SuperInstance(AbstractInstanceContext): """To be used like the object ``super`` returns.""" def __init__(self, evaluator, cls): su = cls.py_mro()[1] super().__init__(evaluator, su and su[0] or self) @argument_clinic('[type[, obj]], /', want_context=True) def builtins_super(evaluator, types, objects, context): # TODO make this able to detect multiple inheritance super if isinstance(context, (InstanceFunctionExecution, AnonymousInstanceFunctionExecution)): su = context.instance.py__class__().py__bases__() return su[0].infer().execute_evaluated() return NO_CONTEXTS @argument_clinic('sequence, /', want_obj=True, want_arguments=True) def builtins_reversed(evaluator, sequences, obj, arguments): # While we could do without this variable (just by using sequences), we # want static analysis to work well. Therefore we need to generated the # values again. key, lazy_context = next(arguments.unpack()) cn = None if isinstance(lazy_context, LazyTreeContext): # TODO access private cn = ContextualizedNode(lazy_context._context, lazy_context.data) ordered = list(sequences.iterate(cn)) rev = list(reversed(ordered)) # Repack iterator values and then run it the normal way. This is # necessary, because `reversed` is a function and autocompletion # would fail in certain cases like `reversed(x).__iter__` if we # just returned the result directly. seq = iterable.FakeSequence(evaluator, 'list', rev) arguments = ValuesArguments([ContextSet(seq)]) return ContextSet(CompiledInstance(evaluator, evaluator.BUILTINS, obj, arguments)) @argument_clinic('obj, type, /', want_arguments=True) def builtins_isinstance(evaluator, objects, types, arguments): bool_results = set() for o in objects: try: mro_func = o.py__class__().py__mro__ except AttributeError: # This is temporary. Everything should have a class attribute in # Python?! Maybe we'll leave it here, because some numpy objects or # whatever might not. return ContextSet(compiled.create(True), compiled.create(False)) mro = mro_func() for cls_or_tup in types: if cls_or_tup.is_class(): bool_results.add(cls_or_tup in mro) elif cls_or_tup.name.string_name == 'tuple' \ and cls_or_tup.get_root_context() == evaluator.BUILTINS: # Check for tuples. classes = ContextSet.from_sets( lazy_context.infer() for lazy_context in cls_or_tup.iterate() ) bool_results.add(any(cls in mro for cls in classes)) else: _, lazy_context = list(arguments.unpack())[1] if isinstance(lazy_context, LazyTreeContext): node = lazy_context.data message = 'TypeError: isinstance() arg 2 must be a ' \ 'class, type, or tuple of classes and types, ' \ 'not %s.' % cls_or_tup analysis.add(lazy_context._context, 'type-error-isinstance', node, message) return ContextSet.from_iterable(compiled.create(evaluator, x) for x in bool_results) def collections_namedtuple(evaluator, obj, arguments): """ Implementation of the namedtuple function. This has to be done by processing the namedtuple class template and evaluating the result. .. note:: |jedi| only supports namedtuples on Python >2.6. """ # Namedtuples are not supported on Python 2.6 if not hasattr(collections, '_class_template'): return NO_CONTEXTS # Process arguments # TODO here we only use one of the types, we should use all. name = list(_follow_param(evaluator, arguments, 0))[0].obj _fields = list(_follow_param(evaluator, arguments, 1))[0] if isinstance(_fields, compiled.CompiledObject): fields = _fields.obj.replace(',', ' ').split() elif isinstance(_fields, iterable.AbstractIterable): fields = [ v.obj for lazy_context in _fields.py__iter__() for v in lazy_context.infer() if hasattr(v, 'obj') ] else: return NO_CONTEXTS base = collections._class_template base += _NAMEDTUPLE_INIT # Build source source = base.format( typename=name, field_names=tuple(fields), num_fields=len(fields), arg_list = repr(tuple(fields)).replace("'", "")[1:-1], repr_fmt=', '.join(collections._repr_template.format(name=name) for name in fields), field_defs='\n'.join(collections._field_template.format(index=index, name=name) for index, name in enumerate(fields)) ) # Parse source module = evaluator.grammar.parse(source) generated_class = next(module.iter_classdefs()) parent_context = ModuleContext(evaluator, module, '') return ContextSet(ClassContext(evaluator, parent_context, generated_class)) @argument_clinic('first, /') def _return_first_param(evaluator, firsts): return firsts _implemented = { 'builtins': { 'getattr': builtins_getattr, 'type': builtins_type, 'super': builtins_super, 'reversed': builtins_reversed, 'isinstance': builtins_isinstance, }, 'copy': { 'copy': _return_first_param, 'deepcopy': _return_first_param, }, 'json': { 'load': lambda *args: NO_CONTEXTS, 'loads': lambda *args: NO_CONTEXTS, }, 'collections': { 'namedtuple': collections_namedtuple, }, } jedi-0.11.1/jedi/evaluate/project.py0000664000175000017500000000177713214571123017143 0ustar davedave00000000000000import os import sys from jedi.evaluate.sys_path import get_venv_path, detect_additional_paths from jedi.cache import underscore_memoization class Project(object): def __init__(self, sys_path=None): if sys_path is not None: self._sys_path = sys_path venv = os.getenv('VIRTUAL_ENV') if venv: sys_path = get_venv_path(venv) if sys_path is None: sys_path = sys.path base_sys_path = list(sys_path) try: base_sys_path.remove('') except ValueError: pass self._base_sys_path = base_sys_path def add_script_path(self, script_path): self._script_path = script_path def add_evaluator(self, evaluator): self._evaluator = evaluator @property @underscore_memoization def sys_path(self): if self._script_path is None: return self._base_sys_path return self._base_sys_path + detect_additional_paths(self._evaluator, self._script_path) jedi-0.11.1/jedi/evaluate/cache.py0000664000175000017500000000463513214571123016534 0ustar davedave00000000000000""" - the popular ``_memoize_default`` works like a typical memoize and returns the default otherwise. - ``CachedMetaClass`` uses ``_memoize_default`` to do the same with classes. """ _NO_DEFAULT = object() def _memoize_default(default=_NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False): """ This is a typical memoization decorator, BUT there is one difference: To prevent recursion it sets defaults. Preventing recursion is in this case the much bigger use than speed. I don't think, that there is a big speed difference, but there are many cases where recursion could happen (think about a = b; b = a). """ def func(function): def wrapper(obj, *args, **kwargs): # TODO These checks are kind of ugly and slow. if evaluator_is_first_arg: cache = obj.memoize_cache elif second_arg_is_evaluator: cache = args[0].memoize_cache # needed for meta classes else: cache = obj.evaluator.memoize_cache try: memo = cache[function] except KeyError: memo = {} cache[function] = memo key = (obj, args, frozenset(kwargs.items())) if key in memo: return memo[key] else: if default is not _NO_DEFAULT: memo[key] = default rv = function(obj, *args, **kwargs) memo[key] = rv return rv return wrapper return func def evaluator_function_cache(default=_NO_DEFAULT): def decorator(func): return _memoize_default(default=default, evaluator_is_first_arg=True)(func) return decorator def evaluator_method_cache(default=_NO_DEFAULT): def decorator(func): return _memoize_default(default=default)(func) return decorator def _memoize_meta_class(): def decorator(call): return _memoize_default(second_arg_is_evaluator=True)(call) return decorator class CachedMetaClass(type): """ This is basically almost the same than the decorator above, it just caches class initializations. Either you do it this way or with decorators, but with decorators you lose class access (isinstance, etc). """ @_memoize_meta_class() def __call__(self, *args, **kwargs): return super(CachedMetaClass, self).__call__(*args, **kwargs) jedi-0.11.1/jedi/evaluate/parser_cache.py0000664000175000017500000000025413214571123020101 0ustar davedave00000000000000from jedi.evaluate.cache import evaluator_function_cache @evaluator_function_cache() def get_yield_exprs(evaluator, funcdef): return list(funcdef.iter_yield_exprs()) jedi-0.11.1/jedi/evaluate/compiled/0000775000175000017500000000000013214571377016716 5ustar davedave00000000000000jedi-0.11.1/jedi/evaluate/compiled/fake/0000775000175000017500000000000013214571377017624 5ustar davedave00000000000000jedi-0.11.1/jedi/evaluate/compiled/fake/posix.pym0000664000175000017500000000007213214571123021501 0ustar davedave00000000000000def getcwd(): return '' def getcwdu(): return '' jedi-0.11.1/jedi/evaluate/compiled/fake/io.pym0000664000175000017500000000031513214571123020746 0ustar davedave00000000000000class TextIOWrapper(): def __next__(self): return str() def __iter__(self): yield str() def readlines(self): return [''] def __enter__(self): return self jedi-0.11.1/jedi/evaluate/compiled/fake/_sqlite3.pym0000664000175000017500000000075113214571123022066 0ustar davedave00000000000000def connect(database, timeout=None, isolation_level=None, detect_types=None, factory=None): return Connection() class Connection(): def cursor(self): return Cursor() class Cursor(): def cursor(self): return Cursor() def fetchone(self): return Row() def fetchmany(self, size=cursor.arraysize): return [self.fetchone()] def fetchall(self): return [self.fetchone()] class Row(): def keys(self): return [''] jedi-0.11.1/jedi/evaluate/compiled/fake/_functools.pym0000664000175000017500000000050513214571123022513 0ustar davedave00000000000000class partial(): def __init__(self, func, *args, **keywords): self.__func = func self.__args = args self.__keywords = keywords def __call__(self, *args, **kwargs): # TODO should be **dict(self.__keywords, **kwargs) return self.__func(*(self.__args + args), **self.__keywords) jedi-0.11.1/jedi/evaluate/compiled/fake/datetime.pym0000664000175000017500000000011513214571123022131 0ustar davedave00000000000000class datetime(): @staticmethod def now(): return datetime() jedi-0.11.1/jedi/evaluate/compiled/fake/_weakref.pym0000664000175000017500000000030613214571123022122 0ustar davedave00000000000000def proxy(object, callback=None): return object class ref(): def __init__(self, object, callback=None): self.__object = object def __call__(self): return self.__object jedi-0.11.1/jedi/evaluate/compiled/fake/_sre.pym0000664000175000017500000000571313214571123021276 0ustar davedave00000000000000def compile(): class SRE_Match(): endpos = int() lastgroup = int() lastindex = int() pos = int() string = str() regs = ((int(), int()),) def __init__(self, pattern): self.re = pattern def start(self): return int() def end(self): return int() def span(self): return int(), int() def expand(self): return str() def group(self, nr): return str() def groupdict(self): return {str(): str()} def groups(self): return (str(),) class SRE_Pattern(): flags = int() groupindex = {} groups = int() pattern = str() def findall(self, string, pos=None, endpos=None): """ findall(string[, pos[, endpos]]) --> list. Return a list of all non-overlapping matches of pattern in string. """ return [str()] def finditer(self, string, pos=None, endpos=None): """ finditer(string[, pos[, endpos]]) --> iterator. Return an iterator over all non-overlapping matches for the RE pattern in string. For each match, the iterator returns a match object. """ yield SRE_Match(self) def match(self, string, pos=None, endpos=None): """ match(string[, pos[, endpos]]) --> match object or None. Matches zero or more characters at the beginning of the string pattern """ return SRE_Match(self) def scanner(self, string, pos=None, endpos=None): pass def search(self, string, pos=None, endpos=None): """ search(string[, pos[, endpos]]) --> match object or None. Scan through string looking for a match, and return a corresponding MatchObject instance. Return None if no position in the string matches. """ return SRE_Match(self) def split(self, string, maxsplit=0]): """ split(string[, maxsplit = 0]) --> list. Split string by the occurrences of pattern. """ return [str()] def sub(self, repl, string, count=0): """ sub(repl, string[, count = 0]) --> newstring Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. """ return str() def subn(self, repl, string, count=0): """ subn(repl, string[, count = 0]) --> (newstring, number of subs) Return the tuple (new_string, number_of_subs_made) found by replacing the leftmost non-overlapping occurrences of pattern with the replacement repl. """ return (str(), int()) return SRE_Pattern() jedi-0.11.1/jedi/evaluate/compiled/fake/operator.pym0000664000175000017500000000176713214571123022206 0ustar davedave00000000000000# Just copied this code from Python 3.6. class itemgetter: """ Return a callable object that fetches the given item(s) from its operand. After f = itemgetter(2), the call f(r) returns r[2]. After g = itemgetter(2, 5, 3), the call g(r) returns (r[2], r[5], r[3]) """ __slots__ = ('_items', '_call') def __init__(self, item, *items): if not items: self._items = (item,) def func(obj): return obj[item] self._call = func else: self._items = items = (item,) + items def func(obj): return tuple(obj[i] for i in items) self._call = func def __call__(self, obj): return self._call(obj) def __repr__(self): return '%s.%s(%s)' % (self.__class__.__module__, self.__class__.__name__, ', '.join(map(repr, self._items))) def __reduce__(self): return self.__class__, self._items jedi-0.11.1/jedi/evaluate/compiled/fake/builtins.pym0000664000175000017500000001267013214571123022177 0ustar davedave00000000000000""" Pure Python implementation of some builtins. This code is not going to be executed anywhere. These implementations are not always correct, but should work as good as possible for the auto completion. """ def next(iterator, default=None): if random.choice([0, 1]): if hasattr("next"): return iterator.next() else: return iterator.__next__() else: if default is not None: return default def iter(collection, sentinel=None): if sentinel: yield collection() else: for c in collection: yield c def range(start, stop=None, step=1): return [0] class file(): def __iter__(self): yield '' def next(self): return '' def readlines(self): return [''] def __enter__(self): return self class xrange(): # Attention: this function doesn't exist in Py3k (there it is range). def __iter__(self): yield 1 def count(self): return 1 def index(self): return 1 def open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True): import io return io.TextIOWrapper(file, mode, buffering, encoding, errors, newline, closefd) def open_python2(name, mode=None, buffering=None): return file(name, mode, buffering) #-------------------------------------------------------- # descriptors #-------------------------------------------------------- class property(): def __init__(self, fget, fset=None, fdel=None, doc=None): self.fget = fget self.fset = fset self.fdel = fdel self.__doc__ = doc def __get__(self, obj, cls): return self.fget(obj) def __set__(self, obj, value): self.fset(obj, value) def __delete__(self, obj): self.fdel(obj) def setter(self, func): self.fset = func return self def getter(self, func): self.fget = func return self def deleter(self, func): self.fdel = func return self class staticmethod(): def __init__(self, func): self.__func = func def __get__(self, obj, cls): return self.__func class classmethod(): def __init__(self, func): self.__func = func def __get__(self, obj, cls): def _method(*args, **kwargs): return self.__func(cls, *args, **kwargs) return _method #-------------------------------------------------------- # array stuff #-------------------------------------------------------- class list(): def __init__(self, iterable=[]): self.__iterable = [] for i in iterable: self.__iterable += [i] def __iter__(self): for i in self.__iterable: yield i def __getitem__(self, y): return self.__iterable[y] def pop(self): return self.__iterable[int()] class tuple(): def __init__(self, iterable=[]): self.__iterable = [] for i in iterable: self.__iterable += [i] def __iter__(self): for i in self.__iterable: yield i def __getitem__(self, y): return self.__iterable[y] def index(self): return 1 def count(self): return 1 class set(): def __init__(self, iterable=[]): self.__iterable = iterable def __iter__(self): for i in self.__iterable: yield i def pop(self): return list(self.__iterable)[-1] def copy(self): return self def difference(self, other): return self - other def intersection(self, other): return self & other def symmetric_difference(self, other): return self ^ other def union(self, other): return self | other class frozenset(): def __init__(self, iterable=[]): self.__iterable = iterable def __iter__(self): for i in self.__iterable: yield i def copy(self): return self class dict(): def __init__(self, **elements): self.__elements = elements def clear(self): # has a strange docstr pass def get(self, k, d=None): # TODO implement try: #return self.__elements[k] pass except KeyError: return d def values(self): return self.__elements.values() def setdefault(self, k, d): # TODO maybe also return the content return d class enumerate(): def __init__(self, sequence, start=0): self.__sequence = sequence def __iter__(self): for i in self.__sequence: yield 1, i def __next__(self): return next(self.__iter__()) def next(self): return next(self.__iter__()) class reversed(): def __init__(self, sequence): self.__sequence = sequence def __iter__(self): for i in self.__sequence: yield i def __next__(self): return next(self.__iter__()) def next(self): return next(self.__iter__()) def sorted(iterable, cmp=None, key=None, reverse=False): return iterable #-------------------------------------------------------- # basic types #-------------------------------------------------------- class int(): def __init__(self, x, base=None): pass class str(): def __init__(self, obj): pass def strip(self): return str() def split(self): return [str()] class type(): def mro(): return [object] jedi-0.11.1/jedi/evaluate/compiled/mixed.py0000664000175000017500000001742313214571123020372 0ustar davedave00000000000000""" Used only for REPL Completion. """ import inspect import os from jedi import settings from jedi.evaluate import compiled from jedi.cache import underscore_memoization from jedi.evaluate import imports from jedi.evaluate.base_context import Context, ContextSet from jedi.evaluate.context import ModuleContext from jedi.evaluate.cache import evaluator_function_cache from jedi.evaluate.compiled.getattr_static import getattr_static class MixedObject(object): """ A ``MixedObject`` is used in two ways: 1. It uses the default logic of ``parser.python.tree`` objects, 2. except for getattr calls. The names dicts are generated in a fashion like ``CompiledObject``. This combined logic makes it possible to provide more powerful REPL completion. It allows side effects that are not noticable with the default parser structure to still be completeable. The biggest difference from CompiledObject to MixedObject is that we are generally dealing with Python code and not with C code. This will generate fewer special cases, because we in Python you don't have the same freedoms to modify the runtime. """ def __init__(self, evaluator, parent_context, compiled_object, tree_context): self.evaluator = evaluator self.parent_context = parent_context self.compiled_object = compiled_object self._context = tree_context self.obj = compiled_object.obj # We have to overwrite everything that has to do with trailers, name # lookups and filters to make it possible to route name lookups towards # compiled objects and the rest towards tree node contexts. def py__getattribute__(*args, **kwargs): return Context.py__getattribute__(*args, **kwargs) def get_filters(self, *args, **kwargs): yield MixedObjectFilter(self.evaluator, self) def __repr__(self): return '<%s: %s>' % (type(self).__name__, repr(self.obj)) def __getattr__(self, name): return getattr(self._context, name) class MixedName(compiled.CompiledName): """ The ``CompiledName._compiled_object`` is our MixedObject. """ @property def start_pos(self): contexts = list(self.infer()) if not contexts: # This means a start_pos that doesn't exist (compiled objects). return (0, 0) return contexts[0].name.start_pos @start_pos.setter def start_pos(self, value): # Ignore the __init__'s start_pos setter call. pass @underscore_memoization def infer(self): obj = self.parent_context.obj try: # TODO use logic from compiled.CompiledObjectFilter obj = getattr(obj, self.string_name) except AttributeError: # Happens e.g. in properties of # PyQt4.QtGui.QStyleOptionComboBox.currentText # -> just set it to None obj = None return ContextSet( _create(self._evaluator, obj, parent_context=self.parent_context) ) @property def api_type(self): return next(iter(self.infer())).api_type class MixedObjectFilter(compiled.CompiledObjectFilter): name_class = MixedName def __init__(self, evaluator, mixed_object, is_instance=False): super(MixedObjectFilter, self).__init__( evaluator, mixed_object, is_instance) self._mixed_object = mixed_object #def _create(self, name): #return MixedName(self._evaluator, self._compiled_object, name) @evaluator_function_cache() def _load_module(evaluator, path, python_object): module = evaluator.grammar.parse( path=path, cache=True, diff_cache=True, cache_path=settings.cache_directory ).get_root_node() python_module = inspect.getmodule(python_object) evaluator.modules[python_module.__name__] = module return module def _get_object_to_check(python_object): """Check if inspect.getfile has a chance to find the source.""" if (inspect.ismodule(python_object) or inspect.isclass(python_object) or inspect.ismethod(python_object) or inspect.isfunction(python_object) or inspect.istraceback(python_object) or inspect.isframe(python_object) or inspect.iscode(python_object)): return python_object try: return python_object.__class__ except AttributeError: raise TypeError # Prevents computation of `repr` within inspect. def find_syntax_node_name(evaluator, python_object): try: python_object = _get_object_to_check(python_object) path = inspect.getsourcefile(python_object) except TypeError: # The type might not be known (e.g. class_with_dict.__weakref__) return None, None if path is None or not os.path.exists(path): # The path might not exist or be e.g. . return None, None module = _load_module(evaluator, path, python_object) if inspect.ismodule(python_object): # We don't need to check names for modules, because there's not really # a way to write a module in a module in Python (and also __name__ can # be something like ``email.utils``). return module, path try: name_str = python_object.__name__ except AttributeError: # Stuff like python_function.__code__. return None, None if name_str == '': return None, None # It's too hard to find lambdas. # Doesn't always work (e.g. os.stat_result) try: names = module.get_used_names()[name_str] except KeyError: return None, None names = [n for n in names if n.is_definition()] try: code = python_object.__code__ # By using the line number of a code object we make the lookup in a # file pretty easy. There's still a possibility of people defining # stuff like ``a = 3; foo(a); a = 4`` on the same line, but if people # do so we just don't care. line_nr = code.co_firstlineno except AttributeError: pass else: line_names = [name for name in names if name.start_pos[0] == line_nr] # There's a chance that the object is not available anymore, because # the code has changed in the background. if line_names: return line_names[-1].parent, path # It's really hard to actually get the right definition, here as a last # resort we just return the last one. This chance might lead to odd # completions at some points but will lead to mostly correct type # inference, because people tend to define a public name in a module only # once. return names[-1].parent, path @compiled.compiled_objects_cache('mixed_cache') def _create(evaluator, obj, parent_context=None, *args): tree_node, path = find_syntax_node_name(evaluator, obj) compiled_object = compiled.create( evaluator, obj, parent_context=parent_context.compiled_object) if tree_node is None: return compiled_object module_node = tree_node.get_root_node() if parent_context.tree_node.get_root_node() == module_node: module_context = parent_context.get_root_context() else: module_context = ModuleContext(evaluator, module_node, path=path) # TODO this __name__ is probably wrong. name = compiled_object.get_root_context().py__name__() imports.add_module(evaluator, name, module_context) tree_context = module_context.create_context( tree_node, node_is_context=True, node_is_object=True ) if tree_node.type == 'classdef': if not inspect.isclass(obj): # Is an instance, not a class. tree_context, = tree_context.execute_evaluated() return MixedObject( evaluator, parent_context, compiled_object, tree_context=tree_context ) jedi-0.11.1/jedi/evaluate/compiled/__init__.py0000664000175000017500000005214513214571123021023 0ustar davedave00000000000000""" Imitate the parser representation. """ import inspect import re import sys import os import types from functools import partial from jedi._compatibility import builtins as _builtins, unicode, py_version from jedi import debug from jedi.cache import underscore_memoization, memoize_method from jedi.evaluate.filters import AbstractFilter, AbstractNameDefinition, \ ContextNameMixin from jedi.evaluate.base_context import Context, ContextSet from jedi.evaluate.lazy_context import LazyKnownContext from jedi.evaluate.compiled.getattr_static import getattr_static from . import fake _sep = os.path.sep if os.path.altsep is not None: _sep += os.path.altsep _path_re = re.compile('(?:\.[^{0}]+|[{0}]__init__\.py)$'.format(re.escape(_sep))) del _sep # Those types don't exist in typing. MethodDescriptorType = type(str.replace) WrapperDescriptorType = type(set.__iter__) # `object.__subclasshook__` is an already executed descriptor. object_class_dict = type.__dict__["__dict__"].__get__(object) ClassMethodDescriptorType = type(object_class_dict['__subclasshook__']) ALLOWED_DESCRIPTOR_ACCESS = ( types.FunctionType, types.GetSetDescriptorType, types.MemberDescriptorType, MethodDescriptorType, WrapperDescriptorType, ClassMethodDescriptorType, staticmethod, classmethod, ) class CheckAttribute(object): """Raises an AttributeError if the attribute X isn't available.""" def __init__(self, func): self.func = func # Remove the py in front of e.g. py__call__. self.check_name = func.__name__[2:] def __get__(self, instance, owner): # This might raise an AttributeError. That's wanted. if self.check_name == '__iter__': # Python iterators are a bit strange, because there's no need for # the __iter__ function as long as __getitem__ is defined (it will # just start with __getitem__(0). This is especially true for # Python 2 strings, where `str.__iter__` is not even defined. try: iter(instance.obj) except TypeError: raise AttributeError else: getattr(instance.obj, self.check_name) return partial(self.func, instance) class CompiledObject(Context): path = None # modules have this attribute - set it to None. used_names = lambda self: {} # To be consistent with modules. def __init__(self, evaluator, obj, parent_context=None, faked_class=None): super(CompiledObject, self).__init__(evaluator, parent_context) self.obj = obj # This attribute will not be set for most classes, except for fakes. self.tree_node = faked_class def get_root_node(self): # To make things a bit easier with filters we add this method here. return self.get_root_context() @CheckAttribute def py__call__(self, params): if inspect.isclass(self.obj): from jedi.evaluate.context import CompiledInstance return ContextSet(CompiledInstance(self.evaluator, self.parent_context, self, params)) else: return ContextSet.from_iterable(self._execute_function(params)) @CheckAttribute def py__class__(self): return create(self.evaluator, self.obj.__class__) @CheckAttribute def py__mro__(self): return (self,) + tuple(create(self.evaluator, cls) for cls in self.obj.__mro__[1:]) @CheckAttribute def py__bases__(self): return tuple(create(self.evaluator, cls) for cls in self.obj.__bases__) def py__bool__(self): return bool(self.obj) def py__file__(self): try: return self.obj.__file__ except AttributeError: return None def is_class(self): return inspect.isclass(self.obj) def py__doc__(self, include_call_signature=False): return inspect.getdoc(self.obj) or '' def get_param_names(self): obj = self.obj try: if py_version < 33: raise ValueError("inspect.signature was introduced in 3.3") if py_version == 34: # In 3.4 inspect.signature are wrong for str and int. This has # been fixed in 3.5. The signature of object is returned, # because no signature was found for str. Here we imitate 3.5 # logic and just ignore the signature if the magic methods # don't match object. # 3.3 doesn't even have the logic and returns nothing for str # and classes that inherit from object. user_def = inspect._signature_get_user_defined_method if (inspect.isclass(obj) and not user_def(type(obj), '__init__') and not user_def(type(obj), '__new__') and (obj.__init__ != object.__init__ or obj.__new__ != object.__new__)): raise ValueError signature = inspect.signature(obj) except ValueError: # Has no signature params_str, ret = self._parse_function_doc() tokens = params_str.split(',') if inspect.ismethoddescriptor(obj): tokens.insert(0, 'self') for p in tokens: parts = p.strip().split('=') yield UnresolvableParamName(self, parts[0]) else: for signature_param in signature.parameters.values(): yield SignatureParamName(self, signature_param) def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, repr(self.obj)) @underscore_memoization def _parse_function_doc(self): doc = self.py__doc__() if doc is None: return '', '' return _parse_function_doc(doc) @property def api_type(self): obj = self.obj if inspect.isclass(obj): return 'class' elif inspect.ismodule(obj): return 'module' elif inspect.isbuiltin(obj) or inspect.ismethod(obj) \ or inspect.ismethoddescriptor(obj) or inspect.isfunction(obj): return 'function' # Everything else... return 'instance' @property def type(self): """Imitate the tree.Node.type values.""" cls = self._get_class() if inspect.isclass(cls): return 'classdef' elif inspect.ismodule(cls): return 'file_input' elif inspect.isbuiltin(cls) or inspect.ismethod(cls) or \ inspect.ismethoddescriptor(cls): return 'funcdef' @underscore_memoization def _cls(self): """ We used to limit the lookups for instantiated objects like list(), but this is not the case anymore. Python itself """ # Ensures that a CompiledObject is returned that is not an instance (like list) return self def _get_class(self): if not fake.is_class_instance(self.obj) or \ inspect.ismethoddescriptor(self.obj): # slots return self.obj try: return self.obj.__class__ except AttributeError: # happens with numpy.core.umath._UFUNC_API (you get it # automatically by doing `import numpy`. return type def get_filters(self, search_global=False, is_instance=False, until_position=None, origin_scope=None): yield self._ensure_one_filter(is_instance) @memoize_method def _ensure_one_filter(self, is_instance): """ search_global shouldn't change the fact that there's one dict, this way there's only one `object`. """ return CompiledObjectFilter(self.evaluator, self, is_instance) @CheckAttribute def py__getitem__(self, index): if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict): # Get rid of side effects, we won't call custom `__getitem__`s. return ContextSet() return ContextSet(create(self.evaluator, self.obj[index])) @CheckAttribute def py__iter__(self): if type(self.obj) not in (str, list, tuple, unicode, bytes, bytearray, dict): # Get rid of side effects, we won't call custom `__getitem__`s. return for i, part in enumerate(self.obj): if i > 20: # Should not go crazy with large iterators break yield LazyKnownContext(create(self.evaluator, part)) def py__name__(self): try: return self._get_class().__name__ except AttributeError: return None @property def name(self): try: name = self._get_class().__name__ except AttributeError: name = repr(self.obj) return CompiledContextName(self, name) def _execute_function(self, params): from jedi.evaluate import docstrings if self.type != 'funcdef': return for name in self._parse_function_doc()[1].split(): try: bltn_obj = getattr(_builtins, name) except AttributeError: continue else: if bltn_obj is None: # We want to evaluate everything except None. # TODO do we? continue bltn_obj = create(self.evaluator, bltn_obj) for result in bltn_obj.execute(params): yield result for type_ in docstrings.infer_return_types(self): yield type_ def get_self_attributes(self): return [] # Instance compatibility def get_imports(self): return [] # Builtins don't have imports def dict_values(self): return ContextSet.from_iterable( create(self.evaluator, v) for v in self.obj.values() ) class CompiledName(AbstractNameDefinition): def __init__(self, evaluator, parent_context, name): self._evaluator = evaluator self.parent_context = parent_context self.string_name = name def __repr__(self): try: name = self.parent_context.name # __name__ is not defined all the time except AttributeError: name = None return '<%s: (%s).%s>' % (self.__class__.__name__, name, self.string_name) @property def api_type(self): return next(iter(self.infer())).api_type @underscore_memoization def infer(self): module = self.parent_context.get_root_context() return ContextSet(_create_from_name( self._evaluator, module, self.parent_context, self.string_name )) class SignatureParamName(AbstractNameDefinition): api_type = 'param' def __init__(self, compiled_obj, signature_param): self.parent_context = compiled_obj.parent_context self._signature_param = signature_param @property def string_name(self): return self._signature_param.name def infer(self): p = self._signature_param evaluator = self.parent_context.evaluator contexts = ContextSet() if p.default is not p.empty: contexts = ContextSet(create(evaluator, p.default)) if p.annotation is not p.empty: annotation = create(evaluator, p.annotation) contexts |= annotation.execute_evaluated() return contexts class UnresolvableParamName(AbstractNameDefinition): api_type = 'param' def __init__(self, compiled_obj, name): self.parent_context = compiled_obj.parent_context self.string_name = name def infer(self): return ContextSet() class CompiledContextName(ContextNameMixin, AbstractNameDefinition): def __init__(self, context, name): self.string_name = name self._context = context self.parent_context = context.parent_context class EmptyCompiledName(AbstractNameDefinition): """ Accessing some names will raise an exception. To avoid not having any completions, just give Jedi the option to return this object. It infers to nothing. """ def __init__(self, evaluator, name): self.parent_context = evaluator.BUILTINS self.string_name = name def infer(self): return ContextSet() class CompiledObjectFilter(AbstractFilter): name_class = CompiledName def __init__(self, evaluator, compiled_object, is_instance=False): self._evaluator = evaluator self._compiled_object = compiled_object self._is_instance = is_instance @memoize_method def get(self, name): name = str(name) obj = self._compiled_object.obj try: attr, is_get_descriptor = getattr_static(obj, name) except AttributeError: return [] else: if is_get_descriptor \ and not type(attr) in ALLOWED_DESCRIPTOR_ACCESS: # In case of descriptors that have get methods we cannot return # it's value, because that would mean code execution. return [EmptyCompiledName(self._evaluator, name)] if self._is_instance and name not in dir(obj): return [] return [self._create_name(name)] def values(self): obj = self._compiled_object.obj names = [] for name in dir(obj): names += self.get(name) is_instance = self._is_instance or fake.is_class_instance(obj) # ``dir`` doesn't include the type names. if not inspect.ismodule(obj) and (obj is not type) and not is_instance: for filter in create(self._evaluator, type).get_filters(): names += filter.values() return names def _create_name(self, name): return self.name_class(self._evaluator, self._compiled_object, name) def dotted_from_fs_path(fs_path, sys_path): """ Changes `/usr/lib/python3.4/email/utils.py` to `email.utils`. I.e. compares the path with sys.path and then returns the dotted_path. If the path is not in the sys.path, just returns None. """ if os.path.basename(fs_path).startswith('__init__.'): # We are calculating the path. __init__ files are not interesting. fs_path = os.path.dirname(fs_path) # prefer # - UNIX # /path/to/pythonX.Y/lib-dynload # /path/to/pythonX.Y/site-packages # - Windows # C:\path\to\DLLs # C:\path\to\Lib\site-packages # over # - UNIX # /path/to/pythonX.Y # - Windows # C:\path\to\Lib path = '' for s in sys_path: if (fs_path.startswith(s) and len(path) < len(s)): path = s # - Window # X:\path\to\lib-dynload/datetime.pyd => datetime module_path = fs_path[len(path):].lstrip(os.path.sep).lstrip('/') # - Window # Replace like X:\path\to\something/foo/bar.py return _path_re.sub('', module_path).replace(os.path.sep, '.').replace('/', '.') def load_module(evaluator, path=None, name=None): sys_path = list(evaluator.project.sys_path) if path is not None: dotted_path = dotted_from_fs_path(path, sys_path=sys_path) else: dotted_path = name temp, sys.path = sys.path, sys_path try: __import__(dotted_path) except RuntimeError: if 'PySide' in dotted_path or 'PyQt' in dotted_path: # RuntimeError: the PyQt4.QtCore and PyQt5.QtCore modules both wrap # the QObject class. # See https://github.com/davidhalter/jedi/pull/483 return None raise except ImportError: # If a module is "corrupt" or not really a Python module or whatever. debug.warning('Module %s not importable in path %s.', dotted_path, path) return None finally: sys.path = temp # Just access the cache after import, because of #59 as well as the very # complicated import structure of Python. module = sys.modules[dotted_path] return create(evaluator, module) docstr_defaults = { 'floating point number': 'float', 'character': 'str', 'integer': 'int', 'dictionary': 'dict', 'string': 'str', } def _parse_function_doc(doc): """ Takes a function and returns the params and return value as a tuple. This is nothing more than a docstring parser. TODO docstrings like utime(path, (atime, mtime)) and a(b [, b]) -> None TODO docstrings like 'tuple of integers' """ # parse round parentheses: def func(a, (b,c)) try: count = 0 start = doc.index('(') for i, s in enumerate(doc[start:]): if s == '(': count += 1 elif s == ')': count -= 1 if count == 0: end = start + i break param_str = doc[start + 1:end] except (ValueError, UnboundLocalError): # ValueError for doc.index # UnboundLocalError for undefined end in last line debug.dbg('no brackets found - no param') end = 0 param_str = '' else: # remove square brackets, that show an optional param ( = None) def change_options(m): args = m.group(1).split(',') for i, a in enumerate(args): if a and '=' not in a: args[i] += '=None' return ','.join(args) while True: param_str, changes = re.subn(r' ?\[([^\[\]]+)\]', change_options, param_str) if changes == 0: break param_str = param_str.replace('-', '_') # see: isinstance.__doc__ # parse return value r = re.search('-[>-]* ', doc[end:end + 7]) if r is None: ret = '' else: index = end + r.end() # get result type, which can contain newlines pattern = re.compile(r'(,\n|[^\n-])+') ret_str = pattern.match(doc, index).group(0).strip() # New object -> object() ret_str = re.sub(r'[nN]ew (.*)', r'\1()', ret_str) ret = docstr_defaults.get(ret_str, ret_str) return param_str, ret def _create_from_name(evaluator, module, compiled_object, name): obj = compiled_object.obj faked = None try: faked = fake.get_faked(evaluator, module, obj, parent_context=compiled_object, name=name) if faked.type == 'funcdef': from jedi.evaluate.context.function import FunctionContext return FunctionContext(evaluator, compiled_object, faked) except fake.FakeDoesNotExist: pass try: obj = getattr(obj, name) except AttributeError: # Happens e.g. in properties of # PyQt4.QtGui.QStyleOptionComboBox.currentText # -> just set it to None obj = None return create(evaluator, obj, parent_context=compiled_object, faked=faked) def builtin_from_name(evaluator, string): bltn_obj = getattr(_builtins, string) return create(evaluator, bltn_obj) def _a_generator(foo): """Used to have an object to return for generators.""" yield 42 yield foo _SPECIAL_OBJECTS = { 'FUNCTION_CLASS': type(load_module), 'METHOD_CLASS': type(CompiledObject.is_class), 'MODULE_CLASS': type(os), 'GENERATOR_OBJECT': _a_generator(1.0), 'BUILTINS': _builtins, } def get_special_object(evaluator, identifier): obj = _SPECIAL_OBJECTS[identifier] return create(evaluator, obj, parent_context=create(evaluator, _builtins)) def compiled_objects_cache(attribute_name): def decorator(func): """ This decorator caches just the ids, oopposed to caching the object itself. Caching the id has the advantage that an object doesn't need to be hashable. """ def wrapper(evaluator, obj, parent_context=None, module=None, faked=None): cache = getattr(evaluator, attribute_name) # Do a very cheap form of caching here. key = id(obj), id(parent_context) try: return cache[key][0] except KeyError: # TODO this whole decorator is way too ugly result = func(evaluator, obj, parent_context, module, faked) # Need to cache all of them, otherwise the id could be overwritten. cache[key] = result, obj, parent_context, module, faked return result return wrapper return decorator @compiled_objects_cache('compiled_cache') def create(evaluator, obj, parent_context=None, module=None, faked=None): """ A very weird interface class to this module. The more options provided the more acurate loading compiled objects is. """ if inspect.ismodule(obj): if parent_context is not None: # Modules don't have parents, be careful with caching: recurse. return create(evaluator, obj) else: if parent_context is None and obj is not _builtins: return create(evaluator, obj, create(evaluator, _builtins)) try: faked = fake.get_faked(evaluator, module, obj, parent_context=parent_context) if faked.type == 'funcdef': from jedi.evaluate.context.function import FunctionContext return FunctionContext(evaluator, parent_context, faked) except fake.FakeDoesNotExist: pass return CompiledObject(evaluator, obj, parent_context, faked) jedi-0.11.1/jedi/evaluate/compiled/fake.py0000664000175000017500000001460613214571123020172 0ustar davedave00000000000000""" Loads functions that are mixed in to the standard library. E.g. builtins are written in C (binaries), but my autocompletion only understands Python code. By mixing in Python code, the autocompletion should work much better for builtins. """ import os import inspect import types from itertools import chain from parso.python import tree from jedi._compatibility import is_py3, builtins, unicode, is_py34 modules = {} MethodDescriptorType = type(str.replace) # These are not considered classes and access is granted even though they have # a __class__ attribute. NOT_CLASS_TYPES = ( types.BuiltinFunctionType, types.CodeType, types.FrameType, types.FunctionType, types.GeneratorType, types.GetSetDescriptorType, types.LambdaType, types.MemberDescriptorType, types.MethodType, types.ModuleType, types.TracebackType, MethodDescriptorType ) if is_py3: NOT_CLASS_TYPES += ( types.MappingProxyType, types.SimpleNamespace ) if is_py34: NOT_CLASS_TYPES += (types.DynamicClassAttribute,) class FakeDoesNotExist(Exception): pass def _load_faked_module(grammar, module): module_name = module.__name__ if module_name == '__builtin__' and not is_py3: module_name = 'builtins' try: return modules[module_name] except KeyError: path = os.path.dirname(os.path.abspath(__file__)) try: with open(os.path.join(path, 'fake', module_name) + '.pym') as f: source = f.read() except IOError: modules[module_name] = None return modules[module_name] = m = grammar.parse(unicode(source)) if module_name == 'builtins' and not is_py3: # There are two implementations of `open` for either python 2/3. # -> Rename the python2 version (`look at fake/builtins.pym`). open_func = _search_scope(m, 'open') open_func.children[1].value = 'open_python3' open_func = _search_scope(m, 'open_python2') open_func.children[1].value = 'open' return m def _search_scope(scope, obj_name): for s in chain(scope.iter_classdefs(), scope.iter_funcdefs()): if s.name.value == obj_name: return s def get_module(obj): if inspect.ismodule(obj): return obj try: obj = obj.__objclass__ except AttributeError: pass try: imp_plz = obj.__module__ except AttributeError: # Unfortunately in some cases like `int` there's no __module__ return builtins else: if imp_plz is None: # Happens for example in `(_ for _ in []).send.__module__`. return builtins else: try: return __import__(imp_plz) except ImportError: # __module__ can be something arbitrary that doesn't exist. return builtins def _faked(grammar, module, obj, name): # Crazy underscore actions to try to escape all the internal madness. if module is None: module = get_module(obj) faked_mod = _load_faked_module(grammar, module) if faked_mod is None: return None, None # Having the module as a `parser.python.tree.Module`, we need to scan # for methods. if name is None: if inspect.isbuiltin(obj) or inspect.isclass(obj): return _search_scope(faked_mod, obj.__name__), faked_mod elif not inspect.isclass(obj): # object is a method or descriptor try: objclass = obj.__objclass__ except AttributeError: return None, None else: cls = _search_scope(faked_mod, objclass.__name__) if cls is None: return None, None return _search_scope(cls, obj.__name__), faked_mod else: if obj is module: return _search_scope(faked_mod, name), faked_mod else: try: cls_name = obj.__name__ except AttributeError: return None, None cls = _search_scope(faked_mod, cls_name) if cls is None: return None, None return _search_scope(cls, name), faked_mod return None, None def memoize_faked(obj): """ A typical memoize function that ignores issues with non hashable results. """ cache = obj.cache = {} def memoizer(*args, **kwargs): key = (obj, args, frozenset(kwargs.items())) try: result = cache[key] except (TypeError, ValueError): return obj(*args, **kwargs) except KeyError: result = obj(*args, **kwargs) if result is not None: cache[key] = obj(*args, **kwargs) return result else: return result return memoizer @memoize_faked def _get_faked(grammar, module, obj, name=None): result, fake_module = _faked(grammar, module, obj, name) if result is None: # We're not interested in classes. What we want is functions. raise FakeDoesNotExist elif result.type == 'classdef': return result, fake_module else: # Set the docstr which was previously not set (faked modules don't # contain it). assert result.type == 'funcdef' doc = '"""%s"""' % obj.__doc__ # TODO need escapes. suite = result.children[-1] string = tree.String(doc, (0, 0), '') new_line = tree.Newline('\n', (0, 0)) docstr_node = tree.PythonNode('simple_stmt', [string, new_line]) suite.children.insert(1, docstr_node) return result, fake_module def get_faked(evaluator, module, obj, name=None, parent_context=None): if parent_context and parent_context.tree_node is not None: # Try to search in already clearly defined stuff. found = _search_scope(parent_context.tree_node, name) if found is not None: return found else: raise FakeDoesNotExist faked, fake_module = _get_faked(evaluator.latest_grammar, module and module.obj, obj, name) if module is not None: module.get_used_names = fake_module.get_used_names return faked def is_class_instance(obj): """Like inspect.* methods.""" try: cls = obj.__class__ except AttributeError: return False else: return cls != type and not issubclass(cls, NOT_CLASS_TYPES) jedi-0.11.1/jedi/evaluate/compiled/getattr_static.py0000664000175000017500000001321413214571123022277 0ustar davedave00000000000000""" A static version of getattr. This is a backport of the Python 3 code with a little bit of additional information returned to enable Jedi to make decisions. """ import types from jedi._compatibility import py_version _sentinel = object() def _check_instance(obj, attr): instance_dict = {} try: instance_dict = object.__getattribute__(obj, "__dict__") except AttributeError: pass return dict.get(instance_dict, attr, _sentinel) def _check_class(klass, attr): for entry in _static_getmro(klass): if _shadowed_dict(type(entry)) is _sentinel: try: return entry.__dict__[attr] except KeyError: pass return _sentinel def _is_type(obj): try: _static_getmro(obj) except TypeError: return False return True def _shadowed_dict_newstyle(klass): dict_attr = type.__dict__["__dict__"] for entry in _static_getmro(klass): try: class_dict = dict_attr.__get__(entry)["__dict__"] except KeyError: pass else: if not (type(class_dict) is types.GetSetDescriptorType and class_dict.__name__ == "__dict__" and class_dict.__objclass__ is entry): return class_dict return _sentinel def _static_getmro_newstyle(klass): return type.__dict__['__mro__'].__get__(klass) if py_version >= 30: _shadowed_dict = _shadowed_dict_newstyle _get_type = type _static_getmro = _static_getmro_newstyle else: def _shadowed_dict(klass): """ In Python 2 __dict__ is not overwritable: class Foo(object): pass setattr(Foo, '__dict__', 4) Traceback (most recent call last): File "", line 1, in TypeError: __dict__ must be a dictionary object It applies to both newstyle and oldstyle classes: class Foo(object): pass setattr(Foo, '__dict__', 4) Traceback (most recent call last): File "", line 1, in AttributeError: attribute '__dict__' of 'type' objects is not writable It also applies to instances of those objects. However to keep things straight forward, newstyle classes always use the complicated way of accessing it while oldstyle classes just use getattr. """ if type(klass) is _oldstyle_class_type: return getattr(klass, '__dict__', _sentinel) return _shadowed_dict_newstyle(klass) class _OldStyleClass(): pass _oldstyle_instance_type = type(_OldStyleClass()) _oldstyle_class_type = type(_OldStyleClass) def _get_type(obj): type_ = object.__getattribute__(obj, '__class__') if type_ is _oldstyle_instance_type: # Somehow for old style classes we need to access it directly. return obj.__class__ return type_ def _static_getmro(klass): if type(klass) is _oldstyle_class_type: def oldstyle_mro(klass): """ Oldstyle mro is a really simplistic way of look up mro: https://stackoverflow.com/questions/54867/what-is-the-difference-between-old-style-and-new-style-classes-in-python """ yield klass for base in klass.__bases__: for yield_from in oldstyle_mro(base): yield yield_from return oldstyle_mro(klass) return _static_getmro_newstyle(klass) def _safe_hasattr(obj, name): return _check_class(_get_type(obj), name) is not _sentinel def _safe_is_data_descriptor(obj): return (_safe_hasattr(obj, '__set__') or _safe_hasattr(obj, '__delete__')) def getattr_static(obj, attr, default=_sentinel): """Retrieve attributes without triggering dynamic lookup via the descriptor protocol, __getattr__ or __getattribute__. Note: this function may not be able to retrieve all attributes that getattr can fetch (like dynamically created attributes) and may find attributes that getattr can't (like descriptors that raise AttributeError). It can also return descriptor objects instead of instance members in some cases. See the documentation for details. Returns a tuple `(attr, is_get_descriptor)`. is_get_descripter means that the attribute is a descriptor that has a `__get__` attribute. """ instance_result = _sentinel if not _is_type(obj): klass = _get_type(obj) dict_attr = _shadowed_dict(klass) if (dict_attr is _sentinel or type(dict_attr) is types.MemberDescriptorType): instance_result = _check_instance(obj, attr) else: klass = obj klass_result = _check_class(klass, attr) if instance_result is not _sentinel and klass_result is not _sentinel: if _safe_hasattr(klass_result, '__get__') \ and _safe_is_data_descriptor(klass_result): # A get/set descriptor has priority over everything. return klass_result, True if instance_result is not _sentinel: return instance_result, False if klass_result is not _sentinel: return klass_result, _safe_hasattr(klass_result, '__get__') if obj is klass: # for types we check the metaclass too for entry in _static_getmro(type(klass)): if _shadowed_dict(type(entry)) is _sentinel: try: return entry.__dict__[attr], False except KeyError: pass if default is not _sentinel: return default, False raise AttributeError(attr) jedi-0.11.1/jedi/cache.py0000664000175000017500000000702013214571123014715 0ustar davedave00000000000000""" This caching is very important for speed and memory optimizations. There's nothing really spectacular, just some decorators. The following cache types are available: - ``time_cache`` can be used to cache something for just a limited time span, which can be useful if there's user interaction and the user cannot react faster than a certain time. This module is one of the reasons why |jedi| is not thread-safe. As you can see there are global variables, which are holding the cache information. Some of these variables are being cleaned after every API usage. """ import time from jedi import settings from parso.cache import parser_cache _time_caches = {} def underscore_memoization(func): """ Decorator for methods:: class A(object): def x(self): if self._x: self._x = 10 return self._x Becomes:: class A(object): @underscore_memoization def x(self): return 10 A now has an attribute ``_x`` written by this decorator. """ name = '_' + func.__name__ def wrapper(self): try: return getattr(self, name) except AttributeError: result = func(self) setattr(self, name, result) return result return wrapper def clear_time_caches(delete_all=False): """ Jedi caches many things, that should be completed after each completion finishes. :param delete_all: Deletes also the cache that is normally not deleted, like parser cache, which is important for faster parsing. """ global _time_caches if delete_all: for cache in _time_caches.values(): cache.clear() parser_cache.clear() else: # normally just kill the expired entries, not all for tc in _time_caches.values(): # check time_cache for expired entries for key, (t, value) in list(tc.items()): if t < time.time(): # delete expired entries del tc[key] def time_cache(time_add_setting): """ This decorator works as follows: Call it with a setting and after that use the function with a callable that returns the key. But: This function is only called if the key is not available. After a certain amount of time (`time_add_setting`) the cache is invalid. If the given key is None, the function will not be cached. """ def _temp(key_func): dct = {} _time_caches[time_add_setting] = dct def wrapper(*args, **kwargs): generator = key_func(*args, **kwargs) key = next(generator) try: expiry, value = dct[key] if expiry > time.time(): return value except KeyError: pass value = next(generator) time_add = getattr(settings, time_add_setting) if key is not None: dct[key] = time.time() + time_add, value return value return wrapper return _temp def memoize_method(method): """A normal memoize function.""" def wrapper(self, *args, **kwargs): cache_dict = self.__dict__.setdefault('_memoize_method_dct', {}) dct = cache_dict.setdefault(method, {}) key = (args, frozenset(kwargs.items())) try: return dct[key] except KeyError: result = method(self, *args, **kwargs) dct[key] = result return result return wrapper jedi-0.11.1/setup.cfg0000664000175000017500000000010313214571377014214 0ustar davedave00000000000000[bdist_wheel] universal = 1 [egg_info] tag_build = tag_date = 0 jedi-0.11.1/conftest.py0000664000175000017500000000417313214571123014572 0ustar davedave00000000000000import tempfile import shutil import pytest import jedi collect_ignore = ["setup.py"] # The following hooks (pytest_configure, pytest_unconfigure) are used # to modify `jedi.settings.cache_directory` because `clean_jedi_cache` # has no effect during doctests. Without these hooks, doctests uses # user's cache (e.g., ~/.cache/jedi/). We should remove this # workaround once the problem is fixed in py.test. # # See: # - https://github.com/davidhalter/jedi/pull/168 # - https://bitbucket.org/hpk42/pytest/issue/275/ jedi_cache_directory_orig = None jedi_cache_directory_temp = None def pytest_addoption(parser): parser.addoption("--jedi-debug", "-D", action='store_true', help="Enables Jedi's debug output.") parser.addoption("--warning-is-error", action='store_true', help="Warnings are treated as errors.") def pytest_configure(config): global jedi_cache_directory_orig, jedi_cache_directory_temp jedi_cache_directory_orig = jedi.settings.cache_directory jedi_cache_directory_temp = tempfile.mkdtemp(prefix='jedi-test-') jedi.settings.cache_directory = jedi_cache_directory_temp if config.option.jedi_debug: jedi.set_debug_function() if config.option.warning_is_error: import warnings warnings.simplefilter("error") def pytest_unconfigure(config): global jedi_cache_directory_orig, jedi_cache_directory_temp jedi.settings.cache_directory = jedi_cache_directory_orig shutil.rmtree(jedi_cache_directory_temp) @pytest.fixture(scope='session') def clean_jedi_cache(request): """ Set `jedi.settings.cache_directory` to a temporary directory during test. Note that you can't use built-in `tmpdir` and `monkeypatch` fixture here because their scope is 'function', which is not used in 'session' scope fixture. This fixture is activated in ../pytest.ini. """ from jedi import settings old = settings.cache_directory tmp = tempfile.mkdtemp(prefix='jedi-test-') settings.cache_directory = tmp @request.addfinalizer def restore(): settings.cache_directory = old shutil.rmtree(tmp)