Pygments-2.1/0000755000175000017500000000000012646734115012575 5ustar dmitrydmitryPygments-2.1/AUTHORS0000644000175000017500000001623612645467227013663 0ustar dmitrydmitryPygments is written and maintained by Georg Brandl . Major developers are Tim Hatch and Armin Ronacher . Other contributors, listed alphabetically, are: * Sam Aaron -- Ioke lexer * Ali Afshar -- image formatter * Thomas Aglassinger -- Easytrieve, JCL and Rexx lexers * Muthiah Annamalai -- Ezhil lexer * Kumar Appaiah -- Debian control lexer * Andreas Amann -- AppleScript lexer * Timothy Armstrong -- Dart lexer fixes * Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers * Jeremy Ashkenas -- CoffeeScript lexer * José Joaquín Atria -- Praat lexer * Stefan Matthias Aust -- Smalltalk lexer * Lucas Bajolet -- Nit lexer * Ben Bangert -- Mako lexers * Max Battcher -- Darcs patch lexer * Thomas Baruchel -- APL lexer * Tim Baumann -- (Literate) Agda lexer * Paul Baumgart, 280 North, Inc. -- Objective-J lexer * Michael Bayer -- Myghty lexers * Thomas Beale -- Archetype lexers * John Benediktsson -- Factor lexer * Trevor Bergeron -- mIRC formatter * Vincent Bernat -- LessCSS lexer * Christopher Bertels -- Fancy lexer * Sébastien Bigaret -- QVT Operational lexer * Jarrett Billingsley -- MiniD lexer * Adam Blinkinsop -- Haskell, Redcode lexers * Frits van Bommel -- assembler lexers * Pierre Bourdon -- bugfixes * chebee7i -- Python traceback lexer improvements * Hiram Chirino -- Scaml and Jade lexers * Ian Cooper -- VGL lexer * David Corbett -- Inform, Jasmin, and TADS 3 lexers * Leaf Corcoran -- MoonScript lexer * Christopher Creutzig -- MuPAD lexer * Daniël W. Crompton -- Pike lexer * Pete Curry -- bugfixes * Bryan Davis -- EBNF lexer * Bruno Deferrari -- Shen lexer * Giedrius Dubinskas -- HTML formatter improvements * Owen Durni -- Haxe lexer * Alexander Dutton, Oxford University Computing Services -- SPARQL lexer * James Edwards -- Terraform lexer * Nick Efford -- Python 3 lexer * Sven Efftinge -- Xtend lexer * Artem Egorkine -- terminal256 formatter * Matthew Fernandez -- CAmkES lexer * Michael Ficarra -- CPSA lexer * James H. Fisher -- PostScript lexer * William S. Fulton -- SWIG lexer * Carlos Galdino -- Elixir and Elixir Console lexers * Michael Galloy -- IDL lexer * Naveen Garg -- Autohotkey lexer * Laurent Gautier -- R/S lexer * Alex Gaynor -- PyPy log lexer * Richard Gerkin -- Igor Pro lexer * Alain Gilbert -- TypeScript lexer * Alex Gilding -- BlitzBasic lexer * Bertrand Goetzmann -- Groovy lexer * Krzysiek Goj -- Scala lexer * Matt Good -- Genshi, Cheetah lexers * Michał Górny -- vim modeline support * Alex Gosse -- TrafficScript lexer * Patrick Gotthardt -- PHP namespaces support * Olivier Guibe -- Asymptote lexer * Jordi Gutiérrez Hermoso -- Octave lexer * Florian Hahn -- Boogie lexer * Martin Harriman -- SNOBOL lexer * Matthew Harrison -- SVG formatter * Steven Hazel -- Tcl lexer * Dan Michael Heggø -- Turtle lexer * Aslak Hellesøy -- Gherkin lexer * Greg Hendershott -- Racket lexer * Justin Hendrick -- ParaSail lexer * David Hess, Fish Software, Inc. -- Objective-J lexer * Varun Hiremath -- Debian control lexer * Rob Hoelz -- Perl 6 lexer * Doug Hogan -- Mscgen lexer * Ben Hollis -- Mason lexer * Max Horn -- GAP lexer * Alastair Houghton -- Lexer inheritance facility * Tim Howard -- BlitzMax lexer * Dustin Howett -- Logos lexer * Ivan Inozemtsev -- Fantom lexer * Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session, MSDOS session, BC * Brian R. 
Jackson -- Tea lexer * Christian Jann -- ShellSession lexer * Dennis Kaarsemaker -- sources.list lexer * Dmitri Kabak -- Inferno Limbo lexer * Igor Kalnitsky -- vhdl lexer * Alexander Kit -- MaskJS lexer * Pekka Klärck -- Robot Framework lexer * Gerwin Klein -- Isabelle lexer * Eric Knibbe -- Lasso lexer * Stepan Koltsov -- Clay lexer * Adam Koprowski -- Opa lexer * Benjamin Kowarsch -- Modula-2 lexer * Domen Kožar -- Nix lexer * Oleh Krekel -- Emacs Lisp lexer * Alexander Kriegisch -- Kconfig and AspectJ lexers * Marek Kubica -- Scheme lexer * Jochen Kupperschmidt -- Markdown processor * Gerd Kurzbach -- Modelica lexer * Jon Larimer, Google Inc. -- Smali lexer * Olov Lassus -- Dart lexer * Matt Layman -- TAP lexer * Sylvestre Ledru -- Scilab lexer * Mark Lee -- Vala lexer * Valentin Lorentz -- C++ lexer improvements * Ben Mabey -- Gherkin lexer * Angus MacArthur -- QML lexer * Louis Mandel -- X10 lexer * Louis Marchand -- Eiffel lexer * Simone Margaritelli -- Hybris lexer * Kirk McDonald -- D lexer * Gordon McGregor -- SystemVerilog lexer * Stephen McKamey -- Duel/JBST lexer * Brian McKenna -- F# lexer * Charles McLaughlin -- Puppet lexer * Lukas Meuser -- BBCode formatter, Lua lexer * Cat Miller -- Pig lexer * Paul Miller -- LiveScript lexer * Hong Minhee -- HTTP lexer * Michael Mior -- Awk lexer * Bruce Mitchener -- Dylan lexer rewrite * Reuben Morais -- SourcePawn lexer * Jon Morton -- Rust lexer * Paulo Moura -- Logtalk lexer * Mher Movsisyan -- DTD lexer * Dejan Muhamedagic -- Crmsh lexer * Ana Nelson -- Ragel, ANTLR, R console lexers * Nam T. Nguyen -- Monokai style * Jesper Noehr -- HTML formatter "anchorlinenos" * Mike Nolta -- Julia lexer * Jonas Obrist -- BBCode lexer * Edward O'Callaghan -- Cryptol lexer * David Oliva -- Rebol lexer * Pat Pannuto -- nesC lexer * Jon Parise -- Protocol buffers and Thrift lexers * Benjamin Peterson -- Test suite refactoring * Ronny Pfannschmidt -- BBCode lexer * Dominik Picheta -- Nimrod lexer * Andrew Pinkham -- RTF Formatter Refactoring * Clément Prévost -- UrbiScript lexer * Elias Rabel -- Fortran fixed form lexer * raichoo -- Idris lexer * Kashif Rasul -- CUDA lexer * Justin Reidy -- MXML lexer * Norman Richards -- JSON lexer * Corey Richardson -- Rust lexer updates * Lubomir Rintel -- GoodData MAQL and CL lexers * Andre Roberge -- Tango style * Konrad Rudolph -- LaTeX formatter enhancements * Mario Ruggier -- Evoque lexers * Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements * Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers * Matteo Sasso -- Common Lisp lexer * Joe Schafer -- Ada lexer * Ken Schutte -- Matlab lexers * Tassilo Schweyer -- Io, MOOCode lexers * Ted Shaw -- AutoIt lexer * Joerg Sieker -- ABAP lexer * Robert Simmons -- Standard ML lexer * Kirill Simonov -- YAML lexer * Alexander Smishlajev -- Visual FoxPro lexer * Steve Spigarelli -- XQuery lexer * Jerome St-Louis -- eC lexer * James Strachan -- Kotlin lexer * Tom Stuart -- Treetop lexer * Colin Sullivan -- SuperCollider lexer * Edoardo Tenani -- Arduino lexer * Tiberius Teng -- default style overhaul * Jeremy Thurgood -- Erlang, Squid config lexers * Brian Tiffin -- OpenCOBOL lexer * Bob Tolbert -- Hy lexer * Erick Tryzelaar -- Felix lexer * Alexander Udalov -- Kotlin lexer improvements * Thomas Van Doren -- Chapel lexer * Daniele Varrazzo -- PostgreSQL lexers * Abe Voelker -- OpenEdge ABL lexer * Pepijn de Vos -- HTML formatter CTags support * Matthias Vallentin -- Bro lexer * Linh Vu Hong -- RSL lexer * Nathan Weizenbaum -- Haml and Sass lexers * 
Nathan Whetsell -- Csound lexers * Dietmar Winkler -- Modelica lexer * Nils Winter -- Smalltalk lexer * Davy Wybiral -- Clojure lexer * Whitney Young -- ObjectiveC lexer * Diego Zamboni -- CFengine3 lexer * Enrique Zamudio -- Ceylon lexer * Alex Zimin -- Nemerle lexer * Rob Zimmerman -- Kal lexer * Vincent Zurczak -- Roboconf lexer Many thanks for all contributions! Pygments-2.1/Pygments.egg-info/0000755000175000017500000000000012646734115016075 5ustar dmitrydmitryPygments-2.1/Pygments.egg-info/PKG-INFO0000644000175000017500000000325612646734115017200 0ustar dmitrydmitryMetadata-Version: 1.1 Name: Pygments Version: 2.1 Summary: Pygments is a syntax highlighting package written in Python. Home-page: http://pygments.org/ Author: Georg Brandl Author-email: georg@python.org License: BSD License Description: Pygments ~~~~~~~~ Pygments is a syntax highlighting package written in Python. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are: * a wide range of over 300 languages and other text formats is supported * special attention is paid to details, increasing quality by a fair amount * support for new languages and formats are added easily * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences * it is usable as a command-line tool and as a library :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. Keywords: syntax highlighting Platform: any Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: Intended Audience :: System Administrators Classifier: Development Status :: 6 - Mature Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 3 Classifier: Operating System :: OS Independent Classifier: Topic :: Text Processing :: Filters Classifier: Topic :: Utilities Pygments-2.1/Pygments.egg-info/dependency_links.txt0000644000175000017500000000000112646734115022143 0ustar dmitrydmitry Pygments-2.1/Pygments.egg-info/top_level.txt0000644000175000017500000000001112646734115020617 0ustar dmitrydmitrypygments Pygments-2.1/Pygments.egg-info/entry_points.txt0000644000175000017500000000006612646734115021375 0ustar dmitrydmitry[console_scripts] pygmentize = pygments.cmdline:main Pygments-2.1/Pygments.egg-info/SOURCES.txt0000644000175000017500000005355612646734115017777 0ustar dmitrydmitryAUTHORS CHANGES LICENSE MANIFEST.in Makefile README.rst TODO ez_setup.py pygmentize setup.cfg setup.py Pygments.egg-info/PKG-INFO Pygments.egg-info/SOURCES.txt Pygments.egg-info/dependency_links.txt Pygments.egg-info/entry_points.txt Pygments.egg-info/not-zip-safe Pygments.egg-info/top_level.txt doc/Makefile doc/conf.py doc/download.rst doc/faq.rst doc/index.rst doc/languages.rst doc/make.bat doc/pygmentize.1 doc/_static/favicon.ico doc/_static/logo_new.png doc/_static/logo_only.png doc/_templates/docssidebar.html doc/_templates/indexsidebar.html doc/_themes/pygments14/layout.html doc/_themes/pygments14/theme.conf doc/_themes/pygments14/static/bodybg.png doc/_themes/pygments14/static/docbg.png doc/_themes/pygments14/static/listitem.png doc/_themes/pygments14/static/logo.png doc/_themes/pygments14/static/pocoo.png doc/_themes/pygments14/static/pygments14.css_t doc/docs/api.rst doc/docs/authors.rst doc/docs/changelog.rst 
doc/docs/cmdline.rst doc/docs/filterdevelopment.rst doc/docs/filters.rst doc/docs/formatterdevelopment.rst doc/docs/formatters.rst doc/docs/index.rst doc/docs/integrate.rst doc/docs/java.rst doc/docs/lexerdevelopment.rst doc/docs/lexers.rst doc/docs/moinmoin.rst doc/docs/plugins.rst doc/docs/quickstart.rst doc/docs/rstdirective.rst doc/docs/styles.rst doc/docs/tokens.rst doc/docs/unicode.rst external/autopygmentize external/lasso-builtins-generator-9.lasso external/markdown-processor.py external/moin-parser.py external/pygments.bashcomp external/rst-directive.py pygments/__init__.py pygments/cmdline.py pygments/console.py pygments/filter.py pygments/formatter.py pygments/lexer.py pygments/modeline.py pygments/plugin.py pygments/regexopt.py pygments/scanner.py pygments/sphinxext.py pygments/style.py pygments/token.py pygments/unistring.py pygments/util.py pygments/filters/__init__.py pygments/formatters/__init__.py pygments/formatters/_mapping.py pygments/formatters/bbcode.py pygments/formatters/html.py pygments/formatters/img.py pygments/formatters/irc.py pygments/formatters/latex.py pygments/formatters/other.py pygments/formatters/rtf.py pygments/formatters/svg.py pygments/formatters/terminal.py pygments/formatters/terminal256.py pygments/lexers/__init__.py pygments/lexers/_asy_builtins.py pygments/lexers/_cl_builtins.py pygments/lexers/_cocoa_builtins.py pygments/lexers/_csound_builtins.py pygments/lexers/_lasso_builtins.py pygments/lexers/_lua_builtins.py pygments/lexers/_mapping.py pygments/lexers/_mql_builtins.py pygments/lexers/_openedge_builtins.py pygments/lexers/_php_builtins.py pygments/lexers/_postgres_builtins.py pygments/lexers/_scilab_builtins.py pygments/lexers/_sourcemod_builtins.py pygments/lexers/_stan_builtins.py pygments/lexers/_vim_builtins.py pygments/lexers/actionscript.py pygments/lexers/agile.py pygments/lexers/algebra.py pygments/lexers/ambient.py pygments/lexers/apl.py pygments/lexers/archetype.py pygments/lexers/asm.py pygments/lexers/automation.py pygments/lexers/basic.py pygments/lexers/business.py pygments/lexers/c_cpp.py pygments/lexers/c_like.py pygments/lexers/chapel.py pygments/lexers/compiled.py pygments/lexers/configs.py pygments/lexers/console.py pygments/lexers/csound.py pygments/lexers/css.py pygments/lexers/d.py pygments/lexers/dalvik.py pygments/lexers/data.py pygments/lexers/diff.py pygments/lexers/dotnet.py pygments/lexers/dsls.py pygments/lexers/dylan.py pygments/lexers/ecl.py pygments/lexers/eiffel.py pygments/lexers/elm.py pygments/lexers/erlang.py pygments/lexers/esoteric.py pygments/lexers/ezhil.py pygments/lexers/factor.py pygments/lexers/fantom.py pygments/lexers/felix.py pygments/lexers/fortran.py pygments/lexers/foxpro.py pygments/lexers/functional.py pygments/lexers/go.py pygments/lexers/grammar_notation.py pygments/lexers/graph.py pygments/lexers/graphics.py pygments/lexers/haskell.py pygments/lexers/haxe.py pygments/lexers/hdl.py pygments/lexers/hexdump.py pygments/lexers/html.py pygments/lexers/idl.py pygments/lexers/igor.py pygments/lexers/inferno.py pygments/lexers/installers.py pygments/lexers/int_fiction.py pygments/lexers/iolang.py pygments/lexers/j.py pygments/lexers/javascript.py pygments/lexers/julia.py pygments/lexers/jvm.py pygments/lexers/lisp.py pygments/lexers/make.py pygments/lexers/markup.py pygments/lexers/math.py pygments/lexers/matlab.py pygments/lexers/ml.py pygments/lexers/modeling.py pygments/lexers/modula2.py pygments/lexers/nimrod.py pygments/lexers/nit.py pygments/lexers/nix.py pygments/lexers/oberon.py 
pygments/lexers/objective.py pygments/lexers/ooc.py pygments/lexers/other.py pygments/lexers/parasail.py pygments/lexers/parsers.py pygments/lexers/pascal.py pygments/lexers/pawn.py pygments/lexers/perl.py pygments/lexers/php.py pygments/lexers/praat.py pygments/lexers/prolog.py pygments/lexers/python.py pygments/lexers/qvt.py pygments/lexers/r.py pygments/lexers/rdf.py pygments/lexers/rebol.py pygments/lexers/resource.py pygments/lexers/roboconf.py pygments/lexers/robotframework.py pygments/lexers/ruby.py pygments/lexers/rust.py pygments/lexers/scripting.py pygments/lexers/shell.py pygments/lexers/smalltalk.py pygments/lexers/snobol.py pygments/lexers/special.py pygments/lexers/sql.py pygments/lexers/supercollider.py pygments/lexers/tcl.py pygments/lexers/templates.py pygments/lexers/testing.py pygments/lexers/text.py pygments/lexers/textedit.py pygments/lexers/textfmts.py pygments/lexers/theorem.py pygments/lexers/trafficscript.py pygments/lexers/urbi.py pygments/lexers/web.py pygments/lexers/webmisc.py pygments/lexers/x10.py pygments/styles/__init__.py pygments/styles/algol.py pygments/styles/algol_nu.py pygments/styles/arduino.py pygments/styles/autumn.py pygments/styles/borland.py pygments/styles/bw.py pygments/styles/colorful.py pygments/styles/default.py pygments/styles/emacs.py pygments/styles/friendly.py pygments/styles/fruity.py pygments/styles/igor.py pygments/styles/lovelace.py pygments/styles/manni.py pygments/styles/monokai.py pygments/styles/murphy.py pygments/styles/native.py pygments/styles/paraiso_dark.py pygments/styles/paraiso_light.py pygments/styles/pastie.py pygments/styles/perldoc.py pygments/styles/rrt.py pygments/styles/tango.py pygments/styles/trac.py pygments/styles/vim.py pygments/styles/vs.py pygments/styles/xcode.py scripts/check_sources.py scripts/debug_lexer.py scripts/detect_missing_analyse_text.py scripts/epydoc.css scripts/find_error.py scripts/get_vimkw.py scripts/pylintrc scripts/vim2pygments.py tests/.coverage tests/run.py tests/string_asserts.py tests/string_asserts.pyc tests/support.py tests/support.pyc tests/test_basic_api.py tests/test_basic_api.pyc tests/test_cfm.py tests/test_cfm.pyc tests/test_clexer.py tests/test_clexer.pyc tests/test_cmdline.py tests/test_cmdline.pyc tests/test_examplefiles.py tests/test_examplefiles.pyc tests/test_ezhil.py tests/test_ezhil.pyc tests/test_html_formatter.py tests/test_html_formatter.pyc tests/test_inherit.py tests/test_inherit.pyc tests/test_irc_formatter.py tests/test_irc_formatter.pyc tests/test_java.py tests/test_java.pyc tests/test_latex_formatter.py tests/test_latex_formatter.pyc tests/test_lexers_other.py tests/test_lexers_other.pyc tests/test_objectiveclexer.py tests/test_objectiveclexer.pyc tests/test_perllexer.py tests/test_perllexer.pyc tests/test_qbasiclexer.py tests/test_qbasiclexer.pyc tests/test_regexlexer.py tests/test_regexlexer.pyc tests/test_regexopt.py tests/test_regexopt.pyc tests/test_rtf_formatter.py tests/test_rtf_formatter.pyc tests/test_ruby.py tests/test_ruby.pyc tests/test_shell.py tests/test_shell.pyc tests/test_smarty.py tests/test_smarty.pyc tests/test_string_asserts.py tests/test_string_asserts.pyc tests/test_terminal_formatter.py tests/test_terminal_formatter.pyc tests/test_textfmts.py tests/test_textfmts.pyc tests/test_token.py tests/test_token.pyc tests/test_unistring.py tests/test_unistring.pyc tests/test_using_api.py tests/test_using_api.pyc tests/test_util.py tests/test_util.pyc tests/__pycache__/string_asserts.cpython-33.pyc tests/__pycache__/support.cpython-33.pyc 
tests/__pycache__/test_basic_api.cpython-33.pyc tests/__pycache__/test_cfm.cpython-33.pyc tests/__pycache__/test_clexer.cpython-33.pyc tests/__pycache__/test_cmdline.cpython-33.pyc tests/__pycache__/test_examplefiles.cpython-33.pyc tests/__pycache__/test_html_formatter.cpython-33.pyc tests/__pycache__/test_inherit.cpython-33.pyc tests/__pycache__/test_java.cpython-33.pyc tests/__pycache__/test_latex_formatter.cpython-33.pyc tests/__pycache__/test_lexers_other.cpython-33.pyc tests/__pycache__/test_objectiveclexer.cpython-33.pyc tests/__pycache__/test_perllexer.cpython-33.pyc tests/__pycache__/test_qbasiclexer.cpython-33.pyc tests/__pycache__/test_regexlexer.cpython-33.pyc tests/__pycache__/test_regexopt.cpython-33.pyc tests/__pycache__/test_rtf_formatter.cpython-33.pyc tests/__pycache__/test_ruby.cpython-33.pyc tests/__pycache__/test_shell.cpython-33.pyc tests/__pycache__/test_smarty.cpython-33.pyc tests/__pycache__/test_string_asserts.cpython-33.pyc tests/__pycache__/test_textfmts.cpython-33.pyc tests/__pycache__/test_token.cpython-33.pyc tests/__pycache__/test_unistring.cpython-33.pyc tests/__pycache__/test_using_api.cpython-33.pyc tests/__pycache__/test_util.cpython-33.pyc tests/cover/coverage_html.js tests/cover/jquery.hotkeys.js tests/cover/jquery.isonscreen.js tests/cover/jquery.min.js tests/cover/jquery.tablesorter.min.js tests/cover/keybd_closed.png tests/cover/keybd_open.png tests/cover/status.dat tests/cover/style.css tests/dtds/HTML4-f.dtd tests/dtds/HTML4-s.dtd tests/dtds/HTML4.dcl tests/dtds/HTML4.dtd tests/dtds/HTML4.soc tests/dtds/HTMLlat1.ent tests/dtds/HTMLspec.ent tests/dtds/HTMLsym.ent tests/examplefiles/99_bottles_of_beer.chpl tests/examplefiles/AcidStateAdvanced.hs tests/examplefiles/AlternatingGroup.mu tests/examplefiles/BOM.js tests/examplefiles/Blink.ino tests/examplefiles/CPDictionary.j tests/examplefiles/Config.in.cache tests/examplefiles/Constants.mo tests/examplefiles/DancingSudoku.lhs tests/examplefiles/Deflate.fs tests/examplefiles/Error.pmod tests/examplefiles/Errors.scala tests/examplefiles/FakeFile.pike tests/examplefiles/Get-CommandDefinitionHtml.ps1 tests/examplefiles/IPDispatchC.nc tests/examplefiles/IPDispatchP.nc tests/examplefiles/Intro.java tests/examplefiles/Makefile tests/examplefiles/Object.st tests/examplefiles/OrderedMap.hx tests/examplefiles/RoleQ.pm6 tests/examplefiles/SmallCheck.hs tests/examplefiles/Sorting.mod tests/examplefiles/Sudoku.lhs tests/examplefiles/abnf_example1.abnf tests/examplefiles/abnf_example2.abnf tests/examplefiles/addressbook.proto tests/examplefiles/ahcon.f tests/examplefiles/all.nit tests/examplefiles/antlr_ANTLRv3.g tests/examplefiles/antlr_throws tests/examplefiles/apache2.conf tests/examplefiles/as3_test.as tests/examplefiles/as3_test2.as tests/examplefiles/as3_test3.as tests/examplefiles/aspx-cs_example tests/examplefiles/autoit_submit.au3 tests/examplefiles/automake.mk tests/examplefiles/badcase.java tests/examplefiles/bigtest.nsi tests/examplefiles/bnf_example1.bnf tests/examplefiles/boot-9.scm tests/examplefiles/ca65_example tests/examplefiles/cbmbas_example tests/examplefiles/cells.ps tests/examplefiles/ceval.c tests/examplefiles/char.scala tests/examplefiles/cheetah_example.html tests/examplefiles/classes.dylan tests/examplefiles/clojure-weird-keywords.clj tests/examplefiles/condensed_ruby.rb tests/examplefiles/coq_RelationClasses tests/examplefiles/core.cljs tests/examplefiles/database.pytb tests/examplefiles/de.MoinMoin.po tests/examplefiles/demo.ahk tests/examplefiles/demo.cfm tests/examplefiles/demo.css.in 
tests/examplefiles/demo.hbs tests/examplefiles/demo.js.in tests/examplefiles/demo.thrift tests/examplefiles/demo.xul.in tests/examplefiles/django_sample.html+django tests/examplefiles/docker.docker tests/examplefiles/dwarf.cw tests/examplefiles/eg_example1.eg tests/examplefiles/ember.handlebars tests/examplefiles/erl_session tests/examplefiles/es6.js tests/examplefiles/escape_semicolon.clj tests/examplefiles/eval.rs tests/examplefiles/evil_regex.js tests/examplefiles/example.Rd tests/examplefiles/example.als tests/examplefiles/example.bat tests/examplefiles/example.bc tests/examplefiles/example.bug tests/examplefiles/example.c tests/examplefiles/example.ceylon tests/examplefiles/example.chai tests/examplefiles/example.clay tests/examplefiles/example.cls tests/examplefiles/example.cob tests/examplefiles/example.coffee tests/examplefiles/example.cpp tests/examplefiles/example.e tests/examplefiles/example.elm tests/examplefiles/example.ezt tests/examplefiles/example.f90 tests/examplefiles/example.feature tests/examplefiles/example.fish tests/examplefiles/example.gd tests/examplefiles/example.gi tests/examplefiles/example.golo tests/examplefiles/example.groovy tests/examplefiles/example.gs tests/examplefiles/example.gst tests/examplefiles/example.hs tests/examplefiles/example.hx tests/examplefiles/example.i6t tests/examplefiles/example.i7x tests/examplefiles/example.j tests/examplefiles/example.jag tests/examplefiles/example.java tests/examplefiles/example.jcl tests/examplefiles/example.jsonld tests/examplefiles/example.kal tests/examplefiles/example.kt tests/examplefiles/example.lagda tests/examplefiles/example.liquid tests/examplefiles/example.lua tests/examplefiles/example.ma tests/examplefiles/example.mac tests/examplefiles/example.monkey tests/examplefiles/example.moo tests/examplefiles/example.moon tests/examplefiles/example.mq4 tests/examplefiles/example.mqh tests/examplefiles/example.msc tests/examplefiles/example.ni tests/examplefiles/example.nim tests/examplefiles/example.nix tests/examplefiles/example.ns2 tests/examplefiles/example.pas tests/examplefiles/example.pcmk tests/examplefiles/example.pp tests/examplefiles/example.praat tests/examplefiles/example.prg tests/examplefiles/example.rb tests/examplefiles/example.red tests/examplefiles/example.reds tests/examplefiles/example.reg tests/examplefiles/example.rexx tests/examplefiles/example.rhtml tests/examplefiles/example.rkt tests/examplefiles/example.rpf tests/examplefiles/example.rts tests/examplefiles/example.scd tests/examplefiles/example.sh tests/examplefiles/example.sh-session tests/examplefiles/example.shell-session tests/examplefiles/example.slim tests/examplefiles/example.sls tests/examplefiles/example.sml tests/examplefiles/example.snobol tests/examplefiles/example.stan tests/examplefiles/example.tap tests/examplefiles/example.tea tests/examplefiles/example.tf tests/examplefiles/example.thy tests/examplefiles/example.todotxt tests/examplefiles/example.ts tests/examplefiles/example.ttl tests/examplefiles/example.u tests/examplefiles/example.weechatlog tests/examplefiles/example.x10 tests/examplefiles/example.xhtml tests/examplefiles/example.xtend tests/examplefiles/example.yaml tests/examplefiles/example1.cadl tests/examplefiles/example2.aspx tests/examplefiles/example2.msc tests/examplefiles/exampleScript.cfc tests/examplefiles/exampleTag.cfc tests/examplefiles/example_coq.v tests/examplefiles/example_elixir.ex tests/examplefiles/example_file.fy tests/examplefiles/ezhil_primefactors.n tests/examplefiles/firefox.mak 
tests/examplefiles/flipflop.sv tests/examplefiles/foo.sce tests/examplefiles/format.ml tests/examplefiles/fucked_up.rb tests/examplefiles/function.mu tests/examplefiles/functional.rst tests/examplefiles/garcia-wachs.kk tests/examplefiles/genclass.clj tests/examplefiles/genshi_example.xml+genshi tests/examplefiles/genshitext_example.genshitext tests/examplefiles/glsl.frag tests/examplefiles/glsl.vert tests/examplefiles/grammar-test.p6 tests/examplefiles/hash_syntax.rb tests/examplefiles/hello.at tests/examplefiles/hello.golo tests/examplefiles/hello.lsl tests/examplefiles/hello.smali tests/examplefiles/hello.sp tests/examplefiles/hexdump_debugexe tests/examplefiles/hexdump_hd tests/examplefiles/hexdump_hexcat tests/examplefiles/hexdump_hexdump tests/examplefiles/hexdump_od tests/examplefiles/hexdump_xxd tests/examplefiles/html+php_faulty.php tests/examplefiles/http_request_example tests/examplefiles/http_response_example tests/examplefiles/hybris_File.hy tests/examplefiles/idl_sample.pro tests/examplefiles/iex_example tests/examplefiles/inet_pton6.dg tests/examplefiles/inform6_example tests/examplefiles/interp.scala tests/examplefiles/intro.ik tests/examplefiles/ints.php tests/examplefiles/intsyn.fun tests/examplefiles/intsyn.sig tests/examplefiles/irb_heredoc tests/examplefiles/irc.lsp tests/examplefiles/java.properties tests/examplefiles/jbst_example1.jbst tests/examplefiles/jbst_example2.jbst tests/examplefiles/jinjadesignerdoc.rst tests/examplefiles/json.lasso tests/examplefiles/json.lasso9 tests/examplefiles/language.hy tests/examplefiles/lighttpd_config.conf tests/examplefiles/limbo.b tests/examplefiles/linecontinuation.py tests/examplefiles/livescript-demo.ls tests/examplefiles/logos_example.xm tests/examplefiles/ltmain.sh tests/examplefiles/main.cmake tests/examplefiles/markdown.lsp tests/examplefiles/matlab_noreturn tests/examplefiles/matlab_sample tests/examplefiles/matlabsession_sample.txt tests/examplefiles/metagrammar.treetop tests/examplefiles/minehunt.qml tests/examplefiles/minimal.ns2 tests/examplefiles/modula2_test_cases.def tests/examplefiles/moin_SyntaxReference.txt tests/examplefiles/multiline_regexes.rb tests/examplefiles/nanomsg.intr tests/examplefiles/nasm_aoutso.asm tests/examplefiles/nasm_objexe.asm tests/examplefiles/nemerle_sample.n tests/examplefiles/nginx_nginx.conf tests/examplefiles/noexcept.cpp tests/examplefiles/numbers.c tests/examplefiles/objc_example.m tests/examplefiles/openedge_example tests/examplefiles/pacman.conf tests/examplefiles/pacman.ijs tests/examplefiles/pawn_example tests/examplefiles/perl_misc tests/examplefiles/perl_perl5db tests/examplefiles/perl_regex-delims tests/examplefiles/perlfunc.1 tests/examplefiles/phpMyAdmin.spec tests/examplefiles/phpcomplete.vim tests/examplefiles/pkgconfig_example.pc tests/examplefiles/pleac.in.rb tests/examplefiles/postgresql_test.txt tests/examplefiles/pppoe.applescript tests/examplefiles/psql_session.txt tests/examplefiles/py3_test.txt tests/examplefiles/py3tb_test.py3tb tests/examplefiles/pycon_ctrlc_traceback tests/examplefiles/pycon_test.pycon tests/examplefiles/pytb_test2.pytb tests/examplefiles/pytb_test3.pytb tests/examplefiles/python25-bsd.mak tests/examplefiles/qbasic_example tests/examplefiles/qsort.prolog tests/examplefiles/r-console-transcript.Rout tests/examplefiles/r6rs-comments.scm tests/examplefiles/ragel-cpp_rlscan tests/examplefiles/ragel-cpp_snippet tests/examplefiles/regex.js tests/examplefiles/resourcebundle_demo tests/examplefiles/reversi.lsp tests/examplefiles/roboconf.graph 
tests/examplefiles/roboconf.instances tests/examplefiles/robotframework_test.txt tests/examplefiles/rql-queries.rql tests/examplefiles/ruby_func_def.rb tests/examplefiles/sample.qvto tests/examplefiles/scilab.sci tests/examplefiles/scope.cirru tests/examplefiles/session.dylan-console tests/examplefiles/sibling.prolog tests/examplefiles/simple.camkes tests/examplefiles/simple.croc tests/examplefiles/smarty_example.html tests/examplefiles/source.lgt tests/examplefiles/sources.list tests/examplefiles/sparql.rq tests/examplefiles/sphere.pov tests/examplefiles/sqlite3.sqlite3-console tests/examplefiles/squid.conf tests/examplefiles/string.jl tests/examplefiles/string_delimiters.d tests/examplefiles/stripheredoc.sh tests/examplefiles/subr.el tests/examplefiles/swig_java.swg tests/examplefiles/swig_std_vector.i tests/examplefiles/tads3_example.t tests/examplefiles/termcap tests/examplefiles/terminfo tests/examplefiles/test-3.0.xq tests/examplefiles/test-exist-update.xq tests/examplefiles/test.R tests/examplefiles/test.adb tests/examplefiles/test.adls tests/examplefiles/test.agda tests/examplefiles/test.apl tests/examplefiles/test.asy tests/examplefiles/test.awk tests/examplefiles/test.bb tests/examplefiles/test.bmx tests/examplefiles/test.boo tests/examplefiles/test.bpl tests/examplefiles/test.bro tests/examplefiles/test.cadl tests/examplefiles/test.cs tests/examplefiles/test.csd tests/examplefiles/test.css tests/examplefiles/test.cu tests/examplefiles/test.cyp tests/examplefiles/test.d tests/examplefiles/test.dart tests/examplefiles/test.dtd tests/examplefiles/test.ebnf tests/examplefiles/test.ec tests/examplefiles/test.eh tests/examplefiles/test.erl tests/examplefiles/test.evoque tests/examplefiles/test.fan tests/examplefiles/test.flx tests/examplefiles/test.gdc tests/examplefiles/test.gradle tests/examplefiles/test.groovy tests/examplefiles/test.html tests/examplefiles/test.idr tests/examplefiles/test.ini tests/examplefiles/test.java tests/examplefiles/test.jsp tests/examplefiles/test.lean tests/examplefiles/test.maql tests/examplefiles/test.mask tests/examplefiles/test.mod tests/examplefiles/test.moo tests/examplefiles/test.myt tests/examplefiles/test.nim tests/examplefiles/test.odin tests/examplefiles/test.opa tests/examplefiles/test.orc tests/examplefiles/test.p6 tests/examplefiles/test.pan tests/examplefiles/test.pas tests/examplefiles/test.php tests/examplefiles/test.pig tests/examplefiles/test.plot tests/examplefiles/test.ps1 tests/examplefiles/test.psl tests/examplefiles/test.pwn tests/examplefiles/test.pypylog tests/examplefiles/test.r3 tests/examplefiles/test.rb tests/examplefiles/test.rhtml tests/examplefiles/test.rsl tests/examplefiles/test.scaml tests/examplefiles/test.sco tests/examplefiles/test.shen tests/examplefiles/test.ssp tests/examplefiles/test.swift tests/examplefiles/test.tcsh tests/examplefiles/test.vb tests/examplefiles/test.vhdl tests/examplefiles/test.xqy tests/examplefiles/test.xsl tests/examplefiles/test.zep tests/examplefiles/test2.odin tests/examplefiles/test2.pypylog tests/examplefiles/test_basic.adls tests/examplefiles/truncated.pytb tests/examplefiles/twig_test tests/examplefiles/type.lisp tests/examplefiles/underscore.coffee tests/examplefiles/unicode.applescript tests/examplefiles/unicode.go tests/examplefiles/unicode.js tests/examplefiles/unicodedoc.py tests/examplefiles/unix-io.lid tests/examplefiles/vbnet_test.bas tests/examplefiles/vctreestatus_hg tests/examplefiles/vimrc tests/examplefiles/vpath.mk tests/examplefiles/webkit-transition.css 
tests/examplefiles/while.pov tests/examplefiles/wiki.factor tests/examplefiles/xml_example tests/examplefiles/yahalom.cpsa tests/examplefiles/zmlrpc.f90 tests/support/tagsPygments-2.1/Pygments.egg-info/not-zip-safe0000644000175000017500000000000112646734102020317 0ustar dmitrydmitry Pygments-2.1/scripts/0000755000175000017500000000000012646734115014264 5ustar dmitrydmitryPygments-2.1/scripts/detect_missing_analyse_text.py0000644000175000017500000000170312642443625022417 0ustar dmitrydmitryfrom __future__ import print_function import sys from pygments.lexers import get_all_lexers, find_lexer_class from pygments.lexer import Lexer def main(): uses = {} for name, aliases, filenames, mimetypes in get_all_lexers(): cls = find_lexer_class(name) if not cls.aliases: print(cls, "has no aliases") for f in filenames: if f not in uses: uses[f] = [] uses[f].append(cls) ret = 0 for k, v in uses.items(): if len(v) > 1: #print "Multiple for", k, v for i in v: if i.analyse_text is None: print(i, "has a None analyse_text") ret |= 1 elif Lexer.analyse_text.__doc__ == i.analyse_text.__doc__: print(i, "needs analyse_text, multiple lexers for", k) ret |= 2 return ret if __name__ == '__main__': sys.exit(main()) Pygments-2.1/scripts/find_error.py0000777000175000017500000000000012642443625021622 2debug_lexer.pyustar dmitrydmitryPygments-2.1/scripts/get_vimkw.py0000644000175000017500000000421512645467227016642 0ustar dmitrydmitryfrom __future__ import print_function import re from pygments.util import format_lines r_line = re.compile(r"^(syn keyword vimCommand contained|syn keyword vimOption " r"contained|syn keyword vimAutoEvent contained)\s+(.*)") r_item = re.compile(r"(\w+)(?:\[(\w+)\])?") HEADER = '''\ # -*- coding: utf-8 -*- """ pygments.lexers._vim_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This file is autogenerated by scripts/get_vimkw.py :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # Split up in multiple functions so it's importable by jython, which has a # per-method size limit. ''' METHOD = '''\ def _get%(key)s(): %(body)s return var %(key)s = _get%(key)s() ''' def getkw(input, output): out = file(output, 'w') # Copy template from an existing file. 
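    # (the "template" above is the inline HEADER string, not a file on disk;
    # it is written first, and one autogenerated _get<key>() keyword table per
    # keyword class follows below)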
print(HEADER, file=out) output_info = {'command': [], 'option': [], 'auto': []} for line in file(input): m = r_line.match(line) if m: # Decide which output gets mapped to d if 'vimCommand' in m.group(1): d = output_info['command'] elif 'AutoEvent' in m.group(1): d = output_info['auto'] else: d = output_info['option'] # Extract all the shortened versions for i in r_item.finditer(m.group(2)): d.append('(%r,%r)' % (i.group(1), "%s%s" % (i.group(1), i.group(2) or ''))) output_info['option'].append("('nnoremap','nnoremap')") output_info['option'].append("('inoremap','inoremap')") output_info['option'].append("('vnoremap','vnoremap')") for key, keywordlist in output_info.items(): keywordlist.sort() body = format_lines('var', keywordlist, raw=True, indent_level=1) print(METHOD % locals(), file=out) def is_keyword(w, keywords): for i in range(len(w), 0, -1): if w[:i] in keywords: return keywords[w[:i]][:len(w)] == w return False if __name__ == "__main__": getkw("/usr/share/vim/vim74/syntax/vim.vim", "pygments/lexers/_vim_builtins.py") Pygments-2.1/scripts/debug_lexer.py0000755000175000017500000002067012645467227017141 0ustar dmitrydmitry#!/usr/bin/python # -*- coding: utf-8 -*- """ Lexing error finder ~~~~~~~~~~~~~~~~~~~ For the source files given on the command line, display the text where Error tokens are being generated, along with some context. :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import os import sys # always prefer Pygments from source if exists srcpath = os.path.join(os.path.dirname(__file__), '..') if os.path.isdir(os.path.join(srcpath, 'pygments')): sys.path.insert(0, srcpath) from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \ ProfilingRegexLexer, ProfilingRegexLexerMeta from pygments.lexers import get_lexer_by_name, find_lexer_class, \ find_lexer_class_for_filename from pygments.token import Error, Text, _TokenType from pygments.cmdline import _parse_options class DebuggingRegexLexer(ExtendedRegexLexer): """Make the state stack, position and current match instance attributes.""" def get_tokens_unprocessed(self, text, stack=('root',)): """ Split ``text`` into (tokentype, text) pairs. ``stack`` is the inital stack (default: ``['root']``) """ tokendefs = self._tokens self.ctx = ctx = LexerContext(text, 0) ctx.stack = list(stack) statetokens = tokendefs[ctx.stack[-1]] while 1: for rexmatch, action, new_state in statetokens: self.m = m = rexmatch(text, ctx.pos, ctx.end) if m: if action is not None: if type(action) is _TokenType: yield ctx.pos, action, m.group() ctx.pos = m.end() else: if not isinstance(self, ExtendedRegexLexer): for item in action(self, m): yield item ctx.pos = m.end() else: for item in action(self, m, ctx): yield item if not new_state: # altered the state stack? 
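                            # a token callback may have pushed or popped states
                            # through ctx, so re-read the current state's rules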
statetokens = tokendefs[ctx.stack[-1]] if new_state is not None: # state transition if isinstance(new_state, tuple): for state in new_state: if state == '#pop': ctx.stack.pop() elif state == '#push': ctx.stack.append(ctx.stack[-1]) else: ctx.stack.append(state) elif isinstance(new_state, int): # pop del ctx.stack[new_state:] elif new_state == '#push': ctx.stack.append(ctx.stack[-1]) else: assert False, 'wrong state def: %r' % new_state statetokens = tokendefs[ctx.stack[-1]] break else: try: if ctx.pos >= ctx.end: break if text[ctx.pos] == '\n': # at EOL, reset state to 'root' ctx.stack = ['root'] statetokens = tokendefs['root'] yield ctx.pos, Text, u'\n' ctx.pos += 1 continue yield ctx.pos, Error, text[ctx.pos] ctx.pos += 1 except IndexError: break def main(fn, lexer=None, options={}): if lexer is not None: lxcls = get_lexer_by_name(lexer).__class__ else: lxcls = find_lexer_class_for_filename(os.path.basename(fn)) if lxcls is None: name, rest = fn.split('_', 1) lxcls = find_lexer_class(name) if lxcls is None: raise AssertionError('no lexer found for file %r' % fn) debug_lexer = False # if profile: # # does not work for e.g. ExtendedRegexLexers # if lxcls.__bases__ == (RegexLexer,): # # yes we can! (change the metaclass) # lxcls.__class__ = ProfilingRegexLexerMeta # lxcls.__bases__ = (ProfilingRegexLexer,) # lxcls._prof_sort_index = profsort # else: # if lxcls.__bases__ == (RegexLexer,): # lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True # elif lxcls.__bases__ == (DebuggingRegexLexer,): # # already debugged before # debug_lexer = True # else: # # HACK: ExtendedRegexLexer subclasses will only partially work here. # lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True lx = lxcls(**options) lno = 1 if fn == '-': text = sys.stdin.read() else: with open(fn, 'rb') as fp: text = fp.read().decode('utf-8') text = text.strip('\n') + '\n' tokens = [] states = [] def show_token(tok, state): reprs = list(map(repr, tok)) print(' ' + reprs[1] + ' ' + ' ' * (29-len(reprs[1])) + reprs[0], end=' ') if debug_lexer: print(' ' + ' ' * (29-len(reprs[0])) + ' : '.join(state) if state else '', end=' ') print() for type, val in lx.get_tokens(text): lno += val.count('\n') if type == Error and not ignerror: print('Error parsing', fn, 'on line', lno) if not showall: print('Previous tokens' + (debug_lexer and ' and states' or '') + ':') for i in range(max(len(tokens) - num, 0), len(tokens)): if debug_lexer: show_token(tokens[i], states[i]) else: show_token(tokens[i], None) print('Error token:') l = len(repr(val)) print(' ' + repr(val), end=' ') if debug_lexer and hasattr(lx, 'ctx'): print(' ' * (60-l) + ' : '.join(lx.ctx.stack), end=' ') print() print() return 1 tokens.append((type, val)) if debug_lexer: if hasattr(lx, 'ctx'): states.append(lx.ctx.stack[:]) else: states.append(None) if showall: show_token((type, val), states[-1] if debug_lexer else None) return 0 def print_help(): print('''\ Pygments development helper to quickly debug lexers. scripts/debug_lexer.py [options] file ... Give one or more filenames to lex them and display possible error tokens and/or profiling info. Files are assumed to be encoded in UTF-8. 
Selecting lexer and options: -l NAME use lexer named NAME (default is to guess from the given filenames) -O OPTIONSTR use lexer options parsed from OPTIONSTR Debugging lexing errors: -n N show the last N tokens on error -a always show all lexed tokens (default is only to show them when an error occurs) -e do not stop on error tokens Profiling: -p use the ProfilingRegexLexer to profile regexes instead of the debugging lexer -s N sort profiling output by column N (default is column 4, the time per call) ''') num = 10 showall = False ignerror = False lexer = None options = {} profile = False profsort = 4 if __name__ == '__main__': import getopt opts, args = getopt.getopt(sys.argv[1:], 'n:l:aepO:s:h') for opt, val in opts: if opt == '-n': num = int(val) elif opt == '-a': showall = True elif opt == '-e': ignerror = True elif opt == '-l': lexer = val elif opt == '-p': profile = True elif opt == '-s': profsort = int(val) elif opt == '-O': options = _parse_options([val]) elif opt == '-h': print_help() sys.exit(0) ret = 0 if not args: print_help() for f in args: ret += main(f, lexer, options) sys.exit(bool(ret)) Pygments-2.1/scripts/vim2pygments.py0000755000175000017500000006331412642443625017313 0ustar dmitrydmitry#!/usr/bin/env python # -*- coding: utf-8 -*- """ Vim Colorscheme Converter ~~~~~~~~~~~~~~~~~~~~~~~~~ This script converts vim colorscheme files to valid pygments style classes meant for putting into modules. :copyright 2006 by Armin Ronacher. :license: BSD, see LICENSE for details. """ from __future__ import print_function import sys import re from os import path from io import StringIO split_re = re.compile(r'(? 2 and \ len(parts[0]) >= 2 and \ 'highlight'.startswith(parts[0]): token = parts[1].lower() if token not in TOKENS: continue for item in parts[2:]: p = item.split('=', 1) if not len(p) == 2: continue key, value = p if key in ('ctermfg', 'guifg'): color = get_vim_color(value) if color: set('color', color) elif key in ('ctermbg', 'guibg'): color = get_vim_color(value) if color: set('bgcolor', color) elif key in ('term', 'cterm', 'gui'): items = value.split(',') for item in items: item = item.lower() if item == 'none': set('noinherit', True) elif item == 'bold': set('bold', True) elif item == 'underline': set('underline', True) elif item == 'italic': set('italic', True) if bg_color is not None and not colors['Normal'].get('bgcolor'): colors['Normal']['bgcolor'] = bg_color color_map = {} for token, styles in colors.items(): if token in TOKENS: tmp = [] if styles.get('noinherit'): tmp.append('noinherit') if 'color' in styles: tmp.append(styles['color']) if 'bgcolor' in styles: tmp.append('bg:' + styles['bgcolor']) if styles.get('bold'): tmp.append('bold') if styles.get('italic'): tmp.append('italic') if styles.get('underline'): tmp.append('underline') tokens = TOKENS[token] if not isinstance(tokens, tuple): tokens = (tokens,) for token in tokens: color_map[token] = ' '.join(tmp) default_token = color_map.pop('') return default_token, color_map class StyleWriter(object): def __init__(self, code, name): self.code = code self.name = name.lower() def write_header(self, out): out.write('# -*- coding: utf-8 -*-\n"""\n') out.write(' %s Colorscheme\n' % self.name.title()) out.write(' %s\n\n' % ('~' * (len(self.name) + 12))) out.write(' Converted by %s\n' % SCRIPT_NAME) out.write('"""\nfrom pygments.style import Style\n') out.write('from pygments.token import Token, %s\n\n' % ', '.join(TOKEN_TYPES)) out.write('class %sStyle(Style):\n\n' % self.name.title()) def write(self, out): 
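        # emit the class header first, then the style map with the shortest
        # (i.e. parent) token names sorted to the front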
        self.write_header(out)
        default_token, tokens = find_colors(self.code)
        tokens = list(tokens.items())
        # sort shorter (parent) token names before their subtokens
        tokens.sort(key=lambda x: len(x[0]))
        bg_color = [x[3:] for x in default_token.split() if x.startswith('bg:')]
        if bg_color:
            out.write('    background_color = %r\n' % bg_color[0])
        out.write('    styles = {\n')
        out.write('        %-20s%r,\n' % ('Token:', default_token))
        for token, definition in tokens:
            if definition:
                out.write('        %-20s%r,\n' % (token + ':', definition))
        out.write('    }')

    def __repr__(self):
        out = StringIO()
        self.write(out)
        return out.getvalue()


def convert(filename, stream=None):
    name = path.basename(filename)
    if name.endswith('.vim'):
        name = name[:-4]
    f = open(filename)
    code = f.read()
    f.close()
    writer = StyleWriter(code, name)
    if stream is not None:
        out = stream
    else:
        out = StringIO()
    writer.write(out)
    if stream is None:
        return out.getvalue()


def main():
    if len(sys.argv) != 2 or sys.argv[1] in ('-h', '--help'):
        print('Usage: %s <filename>' % sys.argv[0])
        return 2
    if sys.argv[1] in ('-v', '--version'):
        print('%s %s' % (SCRIPT_NAME, SCRIPT_VERSION))
        return
    filename = sys.argv[1]
    if not (path.exists(filename) and path.isfile(filename)):
        print('Error: %s not found' % filename)
        return 1
    convert(filename, sys.stdout)
    sys.stdout.write('\n')


if __name__ == '__main__':
    sys.exit(main() or 0)

Pygments-2.1/scripts/check_sources.py0000755000175000017500000001411312645467227017467 0ustar dmitrydmitry#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
    Checker for file headers
    ~~~~~~~~~~~~~~~~~~~~~~~~

    Make sure each Python file has a correct file header
    including copyright and license information.

    :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import io
import os
import re
import sys
import getopt
from os.path import join, splitext, abspath

checkers = {}


def checker(*suffixes, **kwds):
    only_pkg = kwds.pop('only_pkg', False)
    def deco(func):
        for suffix in suffixes:
            checkers.setdefault(suffix, []).append(func)
        func.only_pkg = only_pkg
        return func
    return deco


name_mail_re = r'[\w ]+(<.*?>)?'
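# name_mail_re matches a single author entry: a name, optionally followed by
# an email address in angle brackets; copyright_2_re below builds on it to
# validate continuation lines of the copyright field.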
copyright_re = re.compile(r'^ :copyright: Copyright 2006-2015 by ' r'the Pygments team, see AUTHORS\.$', re.UNICODE) copyright_2_re = re.compile(r'^ %s(, %s)*[,.]$' % (name_mail_re, name_mail_re), re.UNICODE) is_const_re = re.compile(r'if.*?==\s+(None|False|True)\b') misspellings = ["developement", "adress", "verificate", # ALLOW-MISSPELLING "informations", "unlexer"] # ALLOW-MISSPELLING @checker('.py') def check_syntax(fn, lines): if '#!/' in lines[0]: lines = lines[1:] if 'coding:' in lines[0]: lines = lines[1:] try: compile('\n'.join(lines), fn, "exec") except SyntaxError as err: yield 0, "not compilable: %s" % err @checker('.py') def check_style_and_encoding(fn, lines): for lno, line in enumerate(lines): if len(line) > 110: yield lno+1, "line too long" if is_const_re.search(line): yield lno+1, 'using == None/True/False' @checker('.py', only_pkg=True) def check_fileheader(fn, lines): # line number correction c = 1 if lines[0:1] == ['#!/usr/bin/env python']: lines = lines[1:] c = 2 llist = [] docopen = False for lno, l in enumerate(lines): llist.append(l) if lno == 0: if l != '# -*- coding: utf-8 -*-': yield 1, "missing coding declaration" elif lno == 1: if l != '"""' and l != 'r"""': yield 2, 'missing docstring begin (""")' else: docopen = True elif docopen: if l == '"""': # end of docstring if lno <= 4: yield lno+c, "missing module name in docstring" break if l != "" and l[:4] != ' ' and docopen: yield lno+c, "missing correct docstring indentation" if lno == 2: # if not in package, don't check the module name modname = fn[:-3].replace('/', '.').replace('.__init__', '') while modname: if l.lower()[4:] == modname: break modname = '.'.join(modname.split('.')[1:]) else: yield 3, "wrong module name in docstring heading" modnamelen = len(l.strip()) elif lno == 3: if l.strip() != modnamelen * "~": yield 4, "wrong module name underline, should be ~~~...~" else: yield 0, "missing end and/or start of docstring..." # check for copyright and license fields license = llist[-2:-1] if license != [" :license: BSD, see LICENSE for details."]: yield 0, "no correct license info" ci = -3 copyright = llist[ci:ci+1] while copyright and copyright_2_re.match(copyright[0]): ci -= 1 copyright = llist[ci:ci+1] if not copyright or not copyright_re.match(copyright[0]): yield 0, "no correct copyright info" def main(argv): try: gopts, args = getopt.getopt(argv[1:], "vi:") except getopt.GetoptError: print("Usage: %s [-v] [-i ignorepath]* [path]" % argv[0]) return 2 opts = {} for opt, val in gopts: if opt == '-i': val = abspath(val) opts.setdefault(opt, []).append(val) if len(args) == 0: path = '.' elif len(args) == 1: path = args[0] else: print("Usage: %s [-v] [-i ignorepath]* [path]" % argv[0]) return 2 verbose = '-v' in opts num = 0 out = io.StringIO() # TODO: replace os.walk run with iteration over output of # `svn list -R`. for root, dirs, files in os.walk(path): if '.hg' in dirs: dirs.remove('.hg') if 'examplefiles' in dirs: dirs.remove('examplefiles') if '-i' in opts and abspath(root) in opts['-i']: del dirs[:] continue # XXX: awkward: for the Makefile call: don't check non-package # files for file headers in_pygments_pkg = root.startswith('./pygments') for fn in files: fn = join(root, fn) if fn[:2] == './': fn = fn[2:] if '-i' in opts and abspath(fn) in opts['-i']: continue ext = splitext(fn)[1] checkerlist = checkers.get(ext, None) if not checkerlist: continue if verbose: print("Checking %s..." 
% fn) try: lines = open(fn, 'rb').read().decode('utf-8').splitlines() except (IOError, OSError) as err: print("%s: cannot open: %s" % (fn, err)) num += 1 continue for checker in checkerlist: if not in_pygments_pkg and checker.only_pkg: continue for lno, msg in checker(fn, lines): print(u"%s:%d: %s" % (fn, lno, msg), file=out) num += 1 if verbose: print() if num == 0: print("No errors found.") else: print(out.getvalue().rstrip('\n')) print("%d error%s found." % (num, num > 1 and "s" or "")) return int(num > 0) if __name__ == '__main__': sys.exit(main(sys.argv)) Pygments-2.1/scripts/epydoc.css0000644000175000017500000003277411713467263016277 0ustar dmitrydmitry /* Epydoc CSS Stylesheet * * This stylesheet can be used to customize the appearance of epydoc's * HTML output. * */ /* Adapted for Pocoo API docs by Georg Brandl */ /* Default Colors & Styles * - Set the default foreground & background color with 'body'; and * link colors with 'a:link' and 'a:visited'. * - Use bold for decision list terms. * - The heading styles defined here are used for headings *within* * docstring descriptions. All headings used by epydoc itself use * either class='epydoc' or class='toc' (CSS styles for both * defined below). */ body { background: #ffffff; color: #000000; font-family: Trebuchet MS,Tahoma,sans-serif; font-size: 0.9em; line-height: 140%; margin: 0; padding: 0 1.2em 1.2em 1.2em; } a:link { color: #C87900; text-decoration: none; border-bottom: 1px solid #C87900; } a:visited { color: #C87900; text-decoration: none; border-bottom: 1px dotted #C87900; } a:hover { color: #F8A900; border-bottom-color: #F8A900; } dt { font-weight: bold; } h1 { font-size: +180%; font-style: italic; font-weight: bold; margin-top: 1.5em; } h2 { font-size: +140%; font-style: italic; font-weight: bold; } h3 { font-size: +110%; font-style: italic; font-weight: normal; } p { margin-top: .5em; margin-bottom: .5em; } hr { margin-top: 1.5em; margin-bottom: 1.5em; border: 1px solid #BBB; } tt.literal { background: #F5FFD0; padding: 2px; font-size: 110%; } table.rst-docutils { border: 0; } table.rst-docutils td { border: 0; padding: 5px 20px 5px 0px; } /* Page Header & Footer * - The standard page header consists of a navigation bar (with * pointers to standard pages such as 'home' and 'trees'); a * breadcrumbs list, which can be used to navigate to containing * classes or modules; options links, to show/hide private * variables and to show/hide frames; and a page title (using *
<h1>
). The page title may be followed by a link to the * corresponding source code (using 'span.codelink'). * - The footer consists of a navigation bar, a timestamp, and a * pointer to epydoc's homepage. */ h1.epydoc { margin-top: .4em; margin-bottom: .4em; font-size: +180%; font-weight: bold; font-style: normal; } h2.epydoc { font-size: +130%; font-weight: bold; font-style: normal; } h3.epydoc { font-size: +115%; font-weight: bold; font-style: normal; } table.navbar { background: #E6F8A0; color: #000000; border-top: 1px solid #c0d0d0; border-bottom: 1px solid #c0d0d0; margin: -1px -1.2em 1em -1.2em; } table.navbar th { padding: 2px 7px 2px 0px; } th.navbar-select { background-color: transparent; } th.navbar-select:before { content: ">" } th.navbar-select:after { content: "<" } table.navbar a { border: 0; } span.breadcrumbs { font-size: 95%; font-weight: bold; } span.options { font-size: 80%; } span.codelink { font-size: 85%; } td.footer { font-size: 85%; } /* Table Headers * - Each summary table and details section begins with a 'header' * row. This row contains a section title (marked by * 'span.table-header') as well as a show/hide private link * (marked by 'span.options', defined above). * - Summary tables that contain user-defined groups mark those * groups using 'group header' rows. */ td.table-header { background: #B6C870; color: #000000; border-bottom: 1px solid #FFF; } span.table-header { font-size: 110%; font-weight: bold; } th.group-header { text-align: left; font-style: italic; font-size: 110%; } td.spacer { width: 5%; } /* Summary Tables (functions, variables, etc) * - Each object is described by a single row of the table with * two cells. The left cell gives the object's type, and is * marked with 'code.summary-type'. The right cell gives the * object's name and a summary description. * - CSS styles for the table's header and group headers are * defined above, under 'Table Headers' */ table.summary { border-collapse: collapse; background: #E6F8A0; color: #000000; margin: 1em 0 .5em 0; border: 0; } table.summary tr { border-bottom: 1px solid #BBB; } td.summary a { font-weight: bold; } code.summary-type { font-size: 85%; } /* Details Tables (functions, variables, etc) * - Each object is described in its own single-celled table. * - A single-row summary table w/ table-header is used as * a header for each details section (CSS style for table-header * is defined above, under 'Table Headers'). */ table.detsummary { margin-top: 2em; } table.details { border-collapse: collapse; background: #E6F8A0; color: #000000; border-bottom: 1px solid #BBB; margin: 0; } table.details td { padding: .2em .2em .2em .5em; } table.details table td { padding: 0; } table.details h3 { margin: 5px 0 5px 0; font-size: 105%; font-style: normal; } table.details dd { display: inline; margin-left: 5px; } table.details dl { margin-left: 5px; } /* Index tables (identifier index, term index, etc) * - link-index is used for indices containing lists of links * (namely, the identifier index & term index). * - index-where is used in link indices for the text indicating * the container/source for each link. * - metadata-index is used for indices containing metadata * extracted from fields (namely, the bug index & todo index). 
*/ table.link-index { border-collapse: collapse; background: #F6FFB0; color: #000000; border: 1px solid #608090; } td.link-index { border-width: 0px; } span.index-where { font-size: 70%; } table.metadata-index { border-collapse: collapse; background: #F6FFB0; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; } td.metadata-index { border-width: 1px; border-style: solid; } /* Function signatures * - sig* is used for the signature in the details section. * - .summary-sig* is used for the signature in the summary * table, and when listing property accessor functions. * */ .sig-name { color: #006080; } .sig-arg { color: #008060; } .sig-default { color: #602000; } .summary-sig-name { font-weight: bold; } .summary-sig-arg { color: #006040; } .summary-sig-default { color: #501800; } /* Variable values * - In the 'variable details' sections, each varaible's value is * listed in a 'pre.variable' box. The width of this box is * restricted to 80 chars; if the value's repr is longer than * this it will be wrapped, using a backslash marked with * class 'variable-linewrap'. If the value's repr is longer * than 3 lines, the rest will be ellided; and an ellipsis * marker ('...' marked with 'variable-ellipsis') will be used. * - If the value is a string, its quote marks will be marked * with 'variable-quote'. * - If the variable is a regexp, it is syntax-highlighted using * the re* CSS classes. */ pre.variable { padding: .5em; margin: 0; background-color: #dce4ec; border: 1px solid #708890; } .variable-linewrap { display: none; } .variable-ellipsis { color: #604000; font-weight: bold; } .variable-quote { color: #604000; font-weight: bold; } .re { color: #000000; } .re-char { color: #006030; } .re-op { color: #600000; } .re-group { color: #003060; } .re-ref { color: #404040; } /* Base tree * - Used by class pages to display the base class hierarchy. */ pre.base-tree { font-size: 90%; margin: 1em 0 2em 0; line-height: 100%;} /* Frames-based table of contents headers * - Consists of two frames: one for selecting modules; and * the other listing the contents of the selected module. * - h1.toc is used for each frame's heading * - h2.toc is used for subheadings within each frame. */ h1.toc { text-align: center; font-size: 105%; margin: 0; font-weight: bold; padding: 0; } h2.toc { font-size: 100%; font-weight: bold; margin: 0.5em 0 0 -0.3em; } /* Syntax Highlighting for Source Code * - doctest examples are displayed in a 'pre.py-doctest' block. * If the example is in a details table entry, then it will use * the colors specified by the 'table pre.py-doctest' line. * - Source code listings are displayed in a 'pre.py-src' block. * Each line is marked with 'span.py-line' (used to draw a line * down the left margin, separating the code from the line * numbers). Line numbers are displayed with 'span.py-lineno'. * The expand/collapse block toggle button is displayed with * 'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not * modify the font size of the text.) * - If a source code page is opened with an anchor, then the * corresponding code block will be highlighted. The code * block's header is highlighted with 'py-highlight-hdr'; and * the code block's body is highlighted with 'py-highlight'. * - The remaining py-* classes are used to perform syntax * highlighting (py-string for string literals, py-name for names, * etc.) 
*/ pre.rst-literal-block, pre.py-doctest { margin-left: 1em; margin-right: 1.5em; line-height: 150%; background-color: #F5FFD0; padding: .5em; border: 1px solid #B6C870; font-size: 110%; } pre.py-src { border: 1px solid #BBB; margin-top: 3em; background: #f0f0f0; color: #000000; line-height: 150%; } span.py-line { margin-left: .2em; padding-left: .4em; } span.py-lineno { border-right: 1px solid #BBB; padding: .3em .5em .3em .5em; font-style: italic; font-size: 90%; } a.py-toggle { text-decoration: none; } div.py-highlight-hdr { border-top: 1px solid #BBB; background: #d0e0e0; } div.py-highlight { border-bottom: 1px solid #BBB; background: #d0e0e0; } .py-prompt { color: #005050; font-weight: bold;} .py-string { color: #006030; } .py-comment { color: #003060; } .py-keyword { color: #600000; } .py-output { color: #404040; } .py-name { color: #000050; } .py-name:link { color: #000050; } .py-name:visited { color: #000050; } .py-number { color: #005000; } .py-def-name { color: #000060; font-weight: bold; } .py-base-class { color: #000060; } .py-param { color: #000060; } .py-docstring { color: #006030; } .py-decorator { color: #804020; } /* Use this if you don't want links to names underlined: */ /*a.py-name { text-decoration: none; }*/ /* Graphs & Diagrams * - These CSS styles are used for graphs & diagrams generated using * Graphviz dot. 'img.graph-without-title' is used for bare * diagrams (to remove the border created by making the image * clickable). */ img.graph-without-title { border: none; } img.graph-with-title { border: 1px solid #000000; } span.graph-title { font-weight: bold; } span.graph-caption { } /* General-purpose classes * - 'p.indent-wrapped-lines' defines a paragraph whose first line * is not indented, but whose subsequent lines are. * - The 'nomargin-top' class is used to remove the top margin (e.g. * from lists). The 'nomargin' class is used to remove both the * top and bottom margin (but not the left or right margin -- * for lists, that would cause the bullets to disappear.) */ p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em; margin: 0; } .nomargin-top { margin-top: 0; } .nomargin { margin-top: 0; margin-bottom: 0; } Pygments-2.1/scripts/pylintrc0000644000175000017500000002114711713467263016061 0ustar dmitrydmitry# lint Python modules using external checkers. # # This is the main checker controling the other ones and the reports # generation. It is itself both a raw checker and an astng checker in order # to: # * handle message activation / deactivation at the module level # * handle some basic but necessary stats'data (number of classes, methods...) # [MASTER] # Specify a configuration file. #rcfile= # Profiled execution. profile=no # Add to the black list. It should be a base name, not a # path. You may set this option multiple times. ignore=.svn # Pickle collected data for later comparisons. persistent=yes # Set the cache size for astng objects. cache-size=500 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable only checker(s) with the given id(s). This option conflict with the # disable-checker option #enable-checker= # Enable all checker(s) except those with the given id(s). This option conflict # with the disable-checker option #disable-checker= # Enable all messages in the listed categories. #enable-msg-cat= # Disable all messages in the listed categories. #disable-msg-cat= # Enable the message(s) with the given id(s). 
#enable-msg= # Disable the message(s) with the given id(s). disable-msg=C0323,W0142,C0301,C0103,C0111,E0213,C0302,C0203,W0703,R0201 [REPORTS] # set the output format. Available formats are text, parseable, colorized and # html output-format=colorized # Include message's id in output include-ids=yes # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells wether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note).You have access to the variables errors warning, statement which # respectivly contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (R0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (R0004). comment=no # Enable the report(s) with the given id(s). #enable-report= # Disable the report(s) with the given id(s). #disable-report= # checks for # * unused variables / imports # * undefined variables # * redefinition of variable from builtins or from an outer scope # * use of variable before assigment # [VARIABLES] # Tells wether we should check for unused import in __init__ files. init-import=no # A regular expression matching names used for dummy variables (i.e. not used). dummy-variables-rgx=_|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= # try to find bugs in the code using type inference # [TYPECHECK] # Tells wether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # When zope mode is activated, consider the acquired-members option to ignore # access to some undefined attributes. zope=no # List of members which are usually get through zope's acquisition mecanism and # so shouldn't trigger E0201 when accessed (need zope=yes to be considered). 
acquired-members=REQUEST,acl_users,aq_parent # checks for : # * doc strings # * modules / classes / functions / methods / arguments / variables name # * number of arguments, local variables, branchs, returns and statements in # functions, methods # * required module attributes # * dangerous default values as arguments # * redefinition of function / method / class # * uses of the global statement # [BASIC] # Required attributes for module, separated by a comma required-attributes= # Regular expression which should only match functions or classes name which do # not require a docstring no-docstring-rgx=__.*__ # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=(([A-Z_][A-Z1-9_]*)|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct instance attribute names attr-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # List of builtins function names that should not be used, separated by a comma bad-functions=apply,input # checks for sign of poor/misdesign: # * number of methods, attributes, local variables... # * size, complexity of functions, methods # [DESIGN] # Maximum number of arguments for function / method max-args=12 # Maximum number of locals for function / method body max-locals=30 # Maximum number of return / yield for function / method body max-returns=12 # Maximum number of branch for function / method body max-branchs=30 # Maximum number of statements in function / method body max-statements=60 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=20 # Minimum number of public methods for a class (see R0903). min-public-methods=0 # Maximum number of public methods for a class (see R0904). max-public-methods=20 # checks for # * external modules dependencies # * relative / wildcard imports # * cyclic imports # * uses of deprecated modules # [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,string,TERMIOS,Bastion,rexec # Create a graph of every (i.e. 
internal and external) dependencies in the # given file (report R0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report R0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report R0402 must # not be disabled) int-import-graph= # checks for : # * methods without self as first argument # * overridden methods signature # * access only to existant members via self # * attributes not defined in the __init__ method # * supported interfaces implementation # * unreachable code # [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # checks for similarities and duplicated code. This computation may be # memory / CPU intensive, so you should disable it if you experiments some # problems. # [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=10 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # checks for: # * warning notes in the code like FIXME, XXX # * PEP 263: source code with non ascii character but no encoding declaration # [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO # checks for : # * unauthorized constructions # * strict indentation # * line length # * use of <> instead of != # [FORMAT] # Maximum number of characters on a single line. max-line-length=90 # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' Pygments-2.1/PKG-INFO0000644000175000017500000000325612646734115013700 0ustar dmitrydmitryMetadata-Version: 1.1 Name: Pygments Version: 2.1 Summary: Pygments is a syntax highlighting package written in Python. Home-page: http://pygments.org/ Author: Georg Brandl Author-email: georg@python.org License: BSD License Description: Pygments ~~~~~~~~ Pygments is a syntax highlighting package written in Python. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are: * a wide range of over 300 languages and other text formats is supported * special attention is paid to details, increasing quality by a fair amount * support for new languages and formats are added easily * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences * it is usable as a command-line tool and as a library :copyright: Copyright 2006-2015 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
Keywords: syntax highlighting Platform: any Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: Intended Audience :: System Administrators Classifier: Development Status :: 6 - Mature Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 3 Classifier: Operating System :: OS Independent Classifier: Topic :: Text Processing :: Filters Classifier: Topic :: Utilities Pygments-2.1/README.rst0000644000175000017500000000206012645467227014270 0ustar dmitrydmitryREADME for Pygments =================== This is the source of Pygments. It is a generic syntax highlighter that supports over 300 languages and text formats, for use in code hosting, forums, wikis or other applications that need to prettify source code. Installing ---------- ... works as usual, use ``python setup.py install``. Documentation ------------- ... can be found online at http://pygments.org/ or created by :: cd doc make html Development ----------- ... takes place on `Bitbucket `_, where the Mercurial repository, tickets and pull requests can be viewed. Continuous testing runs on drone.io: .. image:: https://drone.io/bitbucket.org/birkenfeld/pygments-main/status.png :target: https://drone.io/bitbucket.org/birkenfeld/pygments-main The authors ----------- Pygments is maintained by **Georg Brandl**, e-mail address *georg*\ *@*\ *python.org*. Many lexers and fixes have been contributed by **Armin Ronacher**, the rest of the `Pocoo `_ team and **Tim Hatch**. Pygments-2.1/LICENSE0000644000175000017500000000246312645467227013615 0ustar dmitrydmitryCopyright (c) 2006-2015 by the respective authors (see AUTHORS file). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Pygments-2.1/setup.cfg0000644000175000017500000000021412646734115014413 0ustar dmitrydmitry[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [aliases] release = egg_info -RDb '' upload = upload --sign --identity=36580288 Pygments-2.1/CHANGES0000644000175000017500000007067512646734033013606 0ustar dmitrydmitryPygments changelog ================== Issue numbers refer to the tracker at , pull request numbers to the requests at . 
Version 2.1 ----------- (released Jan 17, 2016) - Added lexers: * Emacs Lisp (PR#431) * Arduino (PR#442) * Modula-2 with multi-dialect support (#1090) * Fortran fixed format (PR#213) * Archetype Definition language (PR#483) * Terraform (PR#432) * Jcl, Easytrieve (PR#208) * ParaSail (PR#381) * Boogie (PR#420) * Turtle (PR#425) * Fish Shell (PR#422) * Roboconf (PR#449) * Test Anything Protocol (PR#428) * Shen (PR#385) * Component Pascal (PR#437) * SuperCollider (PR#472) * Shell consoles (Tcsh, PowerShell, MSDOS) (PR#479) * Elm and J (PR#452) * Crmsh (PR#440) * Praat (PR#492) * CSound (PR#494) * Ezhil (PR#443) * Thrift (PR#469) * QVT Operational (PR#204) * Hexdump (PR#508) * CAmkES Configuration (PR#462) - Added styles: * Lovelace (PR#456) * Algol and Algol-nu (#1090) - Added formatters: * IRC (PR#458) * True color (24-bit) terminal ANSI sequences (#1142) (formatter alias: "16m") - New "filename" option for HTML formatter (PR#527). - Improved performance of the HTML formatter for long lines (PR#504). - Updated autopygmentize script (PR#445). - Fixed style inheritance for non-standard token types in HTML output. - Added support for async/await to Python 3 lexer. - Rewrote linenos option for TerminalFormatter (it's better, but slightly different output than before) (#1147). - Javascript lexer now supports most of ES6 (#1100). - Cocoa builtins updated for iOS 8.1 (PR#433). - Combined BashSessionLexer and ShellSessionLexer, new version should support the prompt styles of either. - Added option to pygmentize to show a full traceback on exceptions. - Fixed incomplete output on Windows and Python 3 (e.g. when using iPython Notebook) (#1153). - Allowed more traceback styles in Python console lexer (PR#253). - Added decorators to TypeScript (PR#509). - Fix highlighting of certain IRC logs formats (#1076). Version 2.0.2 ------------- (released Jan 20, 2015) - Fix Python tracebacks getting duplicated in the console lexer (#1068). - Backquote-delimited identifiers are now recognized in F# (#1062). Version 2.0.1 ------------- (released Nov 10, 2014) - Fix an encoding issue when using ``pygmentize`` with the ``-o`` option. Version 2.0 ----------- (released Nov 9, 2014) - Default lexer encoding is now "guess", i.e. UTF-8 / Locale / Latin1 is tried in that order. - Major update to Swift lexer (PR#410). - Multiple fixes to lexer guessing in conflicting cases: * recognize HTML5 by doctype * recognize XML by XML declaration * don't recognize C/C++ as SystemVerilog - Simplified regexes and builtin lists. Version 2.0rc1 -------------- (released Oct 16, 2014) - Dropped Python 2.4 and 2.5 compatibility. This is in favor of single-source compatibility between Python 2.6, 2.7 and 3.3+. - New website and documentation based on Sphinx (finally!) 
- Lexers added: * APL (#969) * Agda and Literate Agda (PR#203) * Alloy (PR#355) * AmbientTalk * BlitzBasic (PR#197) * ChaiScript (PR#24) * Chapel (PR#256) * Cirru (PR#275) * Clay (PR#184) * ColdFusion CFC (PR#283) * Cryptol and Literate Cryptol (PR#344) * Cypher (PR#257) * Docker config files * EBNF (PR#193) * Eiffel (PR#273) * GAP (PR#311) * Golo (PR#309) * Handlebars (PR#186) * Hy (PR#238) * Idris and Literate Idris (PR#210) * Igor Pro (PR#172) * Inform 6/7 (PR#281) * Intel objdump (PR#279) * Isabelle (PR#386) * Jasmin (PR#349) * JSON-LD (PR#289) * Kal (PR#233) * Lean (PR#399) * LSL (PR#296) * Limbo (PR#291) * Liquid (#977) * MQL (PR#285) * MaskJS (PR#280) * Mozilla preprocessors * Mathematica (PR#245) * NesC (PR#166) * Nit (PR#375) * Nix (PR#267) * Pan * Pawn (PR#211) * Perl 6 (PR#181) * Pig (PR#304) * Pike (PR#237) * QBasic (PR#182) * Red (PR#341) * ResourceBundle (#1038) * Rexx (PR#199) * Rql (PR#251) * Rsl * SPARQL (PR#78) * Slim (PR#366) * Swift (PR#371) * Swig (PR#168) * TADS 3 (PR#407) * Todo.txt todo lists * Twig (PR#404) - Added a helper to "optimize" regular expressions that match one of many literal words; this can save 20% and more lexing time with lexers that highlight many keywords or builtins. - New styles: "xcode" and "igor", similar to the default highlighting of the respective IDEs. - The command-line "pygmentize" tool now tries a little harder to find the correct encoding for files and the terminal (#979). - Added "inencoding" option for lexers to override "encoding" analogous to "outencoding" (#800). - Added line-by-line "streaming" mode for pygmentize with the "-s" option. (PR#165) Only fully works for lexers that have no constructs spanning lines! - Added an "envname" option to the LaTeX formatter to select a replacement verbatim environment (PR#235). - Updated the Makefile lexer to yield a little more useful highlighting. - Lexer aliases passed to ``get_lexer_by_name()`` are now case-insensitive. - File name matching in lexers and formatters will now use a regex cache for speed (PR#205). - Pygments will now recognize "vim" modelines when guessing the lexer for a file based on content (PR#118). - Major restructure of the ``pygments.lexers`` module namespace. There are now many more modules with less lexers per module. Old modules are still around and re-export the lexers they previously contained. - The NameHighlightFilter now works with any Name.* token type (#790). - Python 3 lexer: add new exceptions from PEP 3151. - Opa lexer: add new keywords (PR#170). - Julia lexer: add keywords and underscore-separated number literals (PR#176). - Lasso lexer: fix method highlighting, update builtins. Fix guessing so that plain XML isn't always taken as Lasso (PR#163). - Objective C/C++ lexers: allow "@" prefixing any expression (#871). - Ruby lexer: fix lexing of Name::Space tokens (#860) and of symbols in hashes (#873). - Stan lexer: update for version 2.4.0 of the language (PR#162, PR#255, PR#377). - JavaScript lexer: add the "yield" keyword (PR#196). - HTTP lexer: support for PATCH method (PR#190). - Koka lexer: update to newest language spec (PR#201). - Haxe lexer: rewrite and support for Haxe 3 (PR#174). - Prolog lexer: add different kinds of numeric literals (#864). - F# lexer: rewrite with newest spec for F# 3.0 (#842), fix a bug with dotted chains (#948). - Kotlin lexer: general update (PR#271). - Rebol lexer: fix comment detection and analyse_text (PR#261). - LLVM lexer: update keywords to v3.4 (PR#258). - PHP lexer: add new keywords and binary literals (PR#222). 
- external/markdown-processor.py updated to newest python-markdown (PR#221). - CSS lexer: some highlighting order fixes (PR#231). - Ceylon lexer: fix parsing of nested multiline comments (#915). - C family lexers: fix parsing of indented preprocessor directives (#944). - Rust lexer: update to 0.9 language version (PR#270, PR#388). - Elixir lexer: update to 0.15 language version (PR#392). - Fix swallowing incomplete tracebacks in Python console lexer (#874). Version 1.6 ----------- (released Feb 3, 2013) - Lexers added: * Dylan console (PR#149) * Logos (PR#150) * Shell sessions (PR#158) - Fix guessed lexers not receiving lexer options (#838). - Fix unquoted HTML attribute lexing in Opa (#841). - Fixes to the Dart lexer (PR#160). Version 1.6rc1 -------------- (released Jan 9, 2013) - Lexers added: * AspectJ (PR#90) * AutoIt (PR#122) * BUGS-like languages (PR#89) * Ceylon (PR#86) * Croc (new name for MiniD) * CUDA (PR#75) * Dg (PR#116) * IDL (PR#115) * Jags (PR#89) * Julia (PR#61) * Kconfig (#711) * Lasso (PR#95, PR#113) * LiveScript (PR#84) * Monkey (PR#117) * Mscgen (PR#80) * NSIS scripts (PR#136) * OpenCOBOL (PR#72) * QML (PR#123) * Puppet (PR#133) * Racket (PR#94) * Rdoc (PR#99) * Robot Framework (PR#137) * RPM spec files (PR#124) * Rust (PR#67) * Smali (Dalvik assembly) * SourcePawn (PR#39) * Stan (PR#89) * Treetop (PR#125) * TypeScript (PR#114) * VGL (PR#12) * Visual FoxPro (#762) * Windows Registry (#819) * Xtend (PR#68) - The HTML formatter now supports linking to tags using CTags files, when the python-ctags package is installed (PR#87). - The HTML formatter now has a "linespans" option that wraps every line in a tag with a specific id (PR#82). - When deriving a lexer from another lexer with token definitions, definitions for states not in the child lexer are now inherited. If you override a state in the child lexer, an "inherit" keyword has been added to insert the base state at that position (PR#141). - The C family lexers now inherit token definitions from a common base class, removing code duplication (PR#141). - Use "colorama" on Windows for console color output (PR#142). - Fix Template Haskell highlighting (PR#63). - Fix some S/R lexer errors (PR#91). - Fix a bug in the Prolog lexer with names that start with 'is' (#810). - Rewrite Dylan lexer, add Dylan LID lexer (PR#147). - Add a Java quickstart document (PR#146). - Add a "external/autopygmentize" file that can be used as .lessfilter (#802). Version 1.5 ----------- (codename Zeitdilatation, released Mar 10, 2012) - Lexers added: * Awk (#630) * Fancy (#633) * PyPy Log * eC * Nimrod * Nemerle (#667) * F# (#353) * Groovy (#501) * PostgreSQL (#660) * DTD * Gosu (#634) * Octave (PR#22) * Standard ML (PR#14) * CFengine3 (#601) * Opa (PR#37) * HTTP sessions (PR#42) * JSON (PR#31) * SNOBOL (PR#30) * MoonScript (PR#43) * ECL (PR#29) * Urbiscript (PR#17) * OpenEdge ABL (PR#27) * SystemVerilog (PR#35) * Coq (#734) * PowerShell (#654) * Dart (#715) * Fantom (PR#36) * Bro (PR#5) * NewLISP (PR#26) * VHDL (PR#45) * Scilab (#740) * Elixir (PR#57) * Tea (PR#56) * Kotlin (PR#58) - Fix Python 3 terminal highlighting with pygmentize (#691). - In the LaTeX formatter, escape special &, < and > chars (#648). - In the LaTeX formatter, fix display problems for styles with token background colors (#670). - Enhancements to the Squid conf lexer (#664). - Several fixes to the reStructuredText lexer (#636). - Recognize methods in the ObjC lexer (#638). - Fix Lua "class" highlighting: it does not have classes (#665). 
- Fix degenerate regex in Scala lexer (#671) and highlighting bugs (#713, 708). - Fix number pattern order in Ocaml lexer (#647). - Fix generic type highlighting in ActionScript 3 (#666). - Fixes to the Clojure lexer (PR#9). - Fix degenerate regex in Nemerle lexer (#706). - Fix infinite looping in CoffeeScript lexer (#729). - Fix crashes and analysis with ObjectiveC lexer (#693, #696). - Add some Fortran 2003 keywords. - Fix Boo string regexes (#679). - Add "rrt" style (#727). - Fix infinite looping in Darcs Patch lexer. - Lots of misc fixes to character-eating bugs and ordering problems in many different lexers. Version 1.4 ----------- (codename Unschärfe, released Jan 03, 2011) - Lexers added: * Factor (#520) * PostScript (#486) * Verilog (#491) * BlitzMax Basic (#478) * Ioke (#465) * Java properties, split out of the INI lexer (#445) * Scss (#509) * Duel/JBST * XQuery (#617) * Mason (#615) * GoodData (#609) * SSP (#473) * Autohotkey (#417) * Google Protocol Buffers * Hybris (#506) - Do not fail in analyse_text methods (#618). - Performance improvements in the HTML formatter (#523). - With the ``noclasses`` option in the HTML formatter, some styles present in the stylesheet were not added as inline styles. - Four fixes to the Lua lexer (#480, #481, #482, #497). - More context-sensitive Gherkin lexer with support for more i18n translations. - Support new OO keywords in Matlab lexer (#521). - Small fix in the CoffeeScript lexer (#519). - A bugfix for backslashes in ocaml strings (#499). - Fix unicode/raw docstrings in the Python lexer (#489). - Allow PIL to work without PIL.pth (#502). - Allow seconds as a unit in CSS (#496). - Support ``application/javascript`` as a JavaScript mime type (#504). - Support `Offload `_ C++ Extensions as keywords in the C++ lexer (#484). - Escape more characters in LaTeX output (#505). - Update Haml/Sass lexers to version 3 (#509). - Small PHP lexer string escaping fix (#515). - Support comments before preprocessor directives, and unsigned/ long long literals in C/C++ (#613, #616). - Support line continuations in the INI lexer (#494). - Fix lexing of Dylan string and char literals (#628). - Fix class/procedure name highlighting in VB.NET lexer (#624). Version 1.3.1 ------------- (bugfix release, released Mar 05, 2010) - The ``pygmentize`` script was missing from the distribution. Version 1.3 ----------- (codename Schneeglöckchen, released Mar 01, 2010) - Added the ``ensurenl`` lexer option, which can be used to suppress the automatic addition of a newline to the lexer input. - Lexers added: * Ada * Coldfusion * Modula-2 * Haxe * R console * Objective-J * Haml and Sass * CoffeeScript - Enhanced reStructuredText highlighting. - Added support for PHP 5.3 namespaces in the PHP lexer. - Added a bash completion script for `pygmentize`, to the external/ directory (#466). - Fixed a bug in `do_insertions()` used for multi-lexer languages. - Fixed a Ruby regex highlighting bug (#476). - Fixed regex highlighting bugs in Perl lexer (#258). - Add small enhancements to the C lexer (#467) and Bash lexer (#469). - Small fixes for the Tcl, Debian control file, Nginx config, Smalltalk, Objective-C, Clojure, Lua lexers. - Gherkin lexer: Fixed single apostrophe bug and added new i18n keywords. Version 1.2.2 ------------- (bugfix release, released Jan 02, 2010) * Removed a backwards incompatibility in the LaTeX formatter that caused Sphinx to produce invalid commands when writing LaTeX output (#463). * Fixed a forever-backtracking regex in the BashLexer (#462). 
Version 1.2.1 ------------- (bugfix release, released Jan 02, 2010) * Fixed mishandling of an ellipsis in place of the frames in a Python console traceback, resulting in clobbered output. Version 1.2 ----------- (codename Neujahr, released Jan 01, 2010) - Dropped Python 2.3 compatibility. - Lexers added: * Asymptote * Go * Gherkin (Cucumber) * CMake * Ooc * Coldfusion * Haxe * R console - Added options for rendering LaTeX in source code comments in the LaTeX formatter (#461). - Updated the Logtalk lexer. - Added `line_number_start` option to image formatter (#456). - Added `hl_lines` and `hl_color` options to image formatter (#457). - Fixed the HtmlFormatter's handling of noclasses=True to not output any classes (#427). - Added the Monokai style (#453). - Fixed LLVM lexer identifier syntax and added new keywords (#442). - Fixed the PythonTracebackLexer to handle non-traceback data in header or trailer, and support more partial tracebacks that start on line 2 (#437). - Fixed the CLexer to not highlight ternary statements as labels. - Fixed lexing of some Ruby quoting peculiarities (#460). - A few ASM lexer fixes (#450). Version 1.1.1 ------------- (bugfix release, released Sep 15, 2009) - Fixed the BBCode lexer (#435). - Added support for new Jinja2 keywords. - Fixed test suite failures. - Added Gentoo-specific suffixes to Bash lexer. Version 1.1 ----------- (codename Brillouin, released Sep 11, 2009) - Ported Pygments to Python 3. This needed a few changes in the way encodings are handled; they may affect corner cases when used with Python 2 as well. - Lexers added: * Antlr/Ragel, thanks to Ana Nelson * (Ba)sh shell * Erlang shell * GLSL * Prolog * Evoque * Modelica * Rebol * MXML * Cython * ABAP * ASP.net (VB/C#) * Vala * Newspeak - Fixed the LaTeX formatter's output so that output generated for one style can be used with the style definitions of another (#384). - Added "anchorlinenos" and "noclobber_cssfile" (#396) options to HTML formatter. - Support multiline strings in Lua lexer. - Rewrite of the JavaScript lexer by Pumbaa80 to better support regular expression literals (#403). - When pygmentize is asked to highlight a file for which multiple lexers match the filename, use the analyse_text guessing engine to determine the winner (#355). - Fixed minor bugs in the JavaScript lexer (#383), the Matlab lexer (#378), the Scala lexer (#392), the INI lexer (#391), the Clojure lexer (#387) and the AS3 lexer (#389). - Fixed three Perl heredoc lexing bugs (#379, #400, #422). - Fixed a bug in the image formatter which misdetected lines (#380). - Fixed bugs lexing extended Ruby strings and regexes. - Fixed a bug when lexing git diffs. - Fixed a bug lexing the empty commit in the PHP lexer (#405). - Fixed a bug causing Python numbers to be mishighlighted as floats (#397). - Fixed a bug when backslashes are used in odd locations in Python (#395). - Fixed various bugs in Matlab and S-Plus lexers, thanks to Winston Chang (#410, #411, #413, #414) and fmarc (#419). - Fixed a bug in Haskell single-line comment detection (#426). - Added new-style reStructuredText directive for docutils 0.5+ (#428). Version 1.0 ----------- (codename Dreiundzwanzig, released Nov 23, 2008) - Don't use join(splitlines()) when converting newlines to ``\n``, because that doesn't keep all newlines at the end when the ``stripnl`` lexer option is False. - Added ``-N`` option to command-line interface to get a lexer name for a given filename. - Added Tango style, written by Andre Roberge for the Crunchy project. 
- Added Python3TracebackLexer and ``python3`` option to PythonConsoleLexer. - Fixed a few bugs in the Haskell lexer. - Fixed PythonTracebackLexer to be able to recognize SyntaxError and KeyboardInterrupt (#360). - Provide one formatter class per image format, so that surprises like:: pygmentize -f gif -o foo.gif foo.py creating a PNG file are avoided. - Actually use the `font_size` option of the image formatter. - Fixed numpy lexer that it doesn't listen for `*.py` any longer. - Fixed HTML formatter so that text options can be Unicode strings (#371). - Unified Diff lexer supports the "udiff" alias now. - Fixed a few issues in Scala lexer (#367). - RubyConsoleLexer now supports simple prompt mode (#363). - JavascriptLexer is smarter about what constitutes a regex (#356). - Add Applescript lexer, thanks to Andreas Amann (#330). - Make the codetags more strict about matching words (#368). - NginxConfLexer is a little more accurate on mimetypes and variables (#370). Version 0.11.1 -------------- (released Aug 24, 2008) - Fixed a Jython compatibility issue in pygments.unistring (#358). Version 0.11 ------------ (codename Straußenei, released Aug 23, 2008) Many thanks go to Tim Hatch for writing or integrating most of the bug fixes and new features. - Lexers added: * Nasm-style assembly language, thanks to delroth * YAML, thanks to Kirill Simonov * ActionScript 3, thanks to Pierre Bourdon * Cheetah/Spitfire templates, thanks to Matt Good * Lighttpd config files * Nginx config files * Gnuplot plotting scripts * Clojure * POV-Ray scene files * Sqlite3 interactive console sessions * Scala source files, thanks to Krzysiek Goj - Lexers improved: * C lexer highlights standard library functions now and supports C99 types. * Bash lexer now correctly highlights heredocs without preceding whitespace. * Vim lexer now highlights hex colors properly and knows a couple more keywords. * Irc logs lexer now handles xchat's default time format (#340) and correctly highlights lines ending in ``>``. * Support more delimiters for perl regular expressions (#258). * ObjectiveC lexer now supports 2.0 features. - Added "Visual Studio" style. - Updated markdown processor to Markdown 1.7. - Support roman/sans/mono style defs and use them in the LaTeX formatter. - The RawTokenFormatter is no longer registered to ``*.raw`` and it's documented that tokenization with this lexer may raise exceptions. - New option ``hl_lines`` to HTML formatter, to highlight certain lines. - New option ``prestyles`` to HTML formatter. - New option *-g* to pygmentize, to allow lexer guessing based on filetext (can be slowish, so file extensions are still checked first). - ``guess_lexer()`` now makes its decision much faster due to a cache of whether data is xml-like (a check which is used in several versions of ``analyse_text()``. Several lexers also have more accurate ``analyse_text()`` now. Version 0.10 ------------ (codename Malzeug, released May 06, 2008) - Lexers added: * Io * Smalltalk * Darcs patches * Tcl * Matlab * Matlab sessions * FORTRAN * XSLT * tcsh * NumPy * Python 3 * S, S-plus, R statistics languages * Logtalk - In the LatexFormatter, the *commandprefix* option is now by default 'PY' instead of 'C', since the latter resulted in several collisions with other packages. Also, the special meaning of the *arg* argument to ``get_style_defs()`` was removed. - Added ImageFormatter, to format code as PNG, JPG, GIF or BMP. (Needs the Python Imaging Library.) - Support doc comments in the PHP lexer. 
- Handle format specifications in the Perl lexer. - Fix comment handling in the Batch lexer. - Add more file name extensions for the C++, INI and XML lexers. - Fixes in the IRC and MuPad lexers. - Fix function and interface name highlighting in the Java lexer. - Fix at-rule handling in the CSS lexer. - Handle KeyboardInterrupts gracefully in pygmentize. - Added BlackWhiteStyle. - Bash lexer now correctly highlights math, does not require whitespace after semicolons, and correctly highlights boolean operators. - Makefile lexer is now capable of handling BSD and GNU make syntax. Version 0.9 ----------- (codename Herbstzeitlose, released Oct 14, 2007) - Lexers added: * Erlang * ActionScript * Literate Haskell * Common Lisp * Various assembly languages * Gettext catalogs * Squid configuration * Debian control files * MySQL-style SQL * MOOCode - Lexers improved: * Greatly improved the Haskell and OCaml lexers. * Improved the Bash lexer's handling of nested constructs. * The C# and Java lexers exhibited abysmal performance with some input code; this should now be fixed. * The IRC logs lexer is now able to colorize weechat logs too. * The Lua lexer now recognizes multi-line comments. * Fixed bugs in the D and MiniD lexer. - The encoding handling of the command line mode (pygmentize) was enhanced. You shouldn't get UnicodeErrors from it anymore if you don't give an encoding option. - Added a ``-P`` option to the command line mode which can be used to give options whose values contain commas or equals signs. - Added 256-color terminal formatter. - Added an experimental SVG formatter. - Added the ``lineanchors`` option to the HTML formatter, thanks to Ian Charnas for the idea. - Gave the line numbers table a CSS class in the HTML formatter. - Added a Vim 7-like style. Version 0.8.1 ------------- (released Jun 27, 2007) - Fixed POD highlighting in the Ruby lexer. - Fixed Unicode class and namespace name highlighting in the C# lexer. - Fixed Unicode string prefix highlighting in the Python lexer. - Fixed a bug in the D and MiniD lexers. - Fixed the included MoinMoin parser. Version 0.8 ----------- (codename Maikäfer, released May 30, 2007) - Lexers added: * Haskell, thanks to Adam Blinkinsop * Redcode, thanks to Adam Blinkinsop * D, thanks to Kirk McDonald * MuPad, thanks to Christopher Creutzig * MiniD, thanks to Jarrett Billingsley * Vim Script, by Tim Hatch - The HTML formatter now has a second line-numbers mode in which it will just integrate the numbers in the same ``
<pre>`` tag as the
  code.

- The `CSharpLexer` is now Unicode-aware: it has an option that can be
  set so that it correctly lexes the Unicode identifiers allowed by the
  C# specs.

- Added a `RaiseOnErrorTokenFilter` that raises an exception when the
  lexer generates an error token, and a `VisibleWhitespaceFilter` that
  converts whitespace (spaces, tabs, newlines) into visible
  characters.

- Fixed the `do_insertions()` helper function to yield correct
  indices.

- The ReST lexer now automatically highlights source code blocks in
  ".. sourcecode:: language" and ".. code:: language" directive
  blocks.

- Improved the default style (thanks to Tiberius Teng). The old
  default is still available as the "emacs" style (which was an alias
  before).

- The `get_style_defs` method of HTML formatters now uses the
  `cssclass` option as the default selector if it was given.

- Improved the ReST and Bash lexers a bit.

- Fixed a few bugs in the Makefile and Bash lexers, thanks to Tim
  Hatch.

- Fixed a bug in the command line code that disallowed ``-O`` options
  when using the ``-S`` option.

- Fixed a bug in the `RawTokenFormatter`.


Version 0.7.1
-------------
(released Feb 15, 2007)

- Fixed little highlighting bugs in the Python, Java, Scheme and
  Apache Config lexers.

- Updated the included manpage.

- Included a built version of the documentation in the source tarball.


Version 0.7
-----------
(codename Faschingskrapfn, released Feb 14, 2007)

- Added a MoinMoin parser that uses Pygments. With it, you get
  Pygments highlighting in Moin Wiki pages.

- Changed the exception raised if no suitable lexer, formatter etc. is
  found in one of the `get_*_by_*` functions to a custom exception,
  `pygments.util.ClassNotFound`. It is, however, a subclass of
  `ValueError` in order to retain backwards compatibility.

- Added a `-H` command line option which can be used to get the
  docstring of a lexer, formatter or filter.

- Made the handling of lexers and formatters more consistent. The
  aliases and filename patterns of formatters are now attributes on
  them.

- Added an OCaml lexer, thanks to Adam Blinkinsop.

- Made the HTML formatter more flexible and easily subclassable, so that
  custom wrappers (e.g. alternate line number markup) are easy to
  implement. See the documentation.

- Added an `outencoding` option to all formatters, making it possible
  to override the `encoding` (which is used by lexers and formatters)
  when using the command line interface. Also, if using the terminal
  formatter and the output file is a terminal and has an encoding
  attribute, use it if no encoding is given.

- Made it possible to just drop style modules into the `styles`
  subpackage of the Pygments installation.

- Added a "state" keyword argument to the `using` helper.

- Added a `commandprefix` option to the `LatexFormatter` which makes it
  possible to control how the command names are constructed.

- Added quite a few new lexers, thanks to Tim Hatch:

  * Java Server Pages
  * Windows batch files
  * Trac Wiki markup
  * Python tracebacks
  * ReStructuredText
  * Dylan
  * and the Befunge esoteric programming language (yay!)

- Added Mako lexers by Ben Bangert.

- Added the "fruity" style, another dark-background theme, originally
  based on a vim theme.

- Added sources.list lexer by Dennis Kaarsemaker.

- Added token stream filters, and a pygmentize option to use them.

- Changed the behavior of the `in` operator for tokens.

- Added mimetypes for all lexers.

- Fixed some problems lexing Python strings.

- Fixed tickets: #167, #178, #179, #180, #185, #201.


Version 0.6
-----------
(codename Zimtstern, released Dec 20, 2006)

- Added option for the HTML formatter to write the CSS to an external
  file in "full document" mode.

- Added RTF formatter.

- Added Bash and Apache configuration lexers (thanks to Tim Hatch).

- Improved guessing methods for various lexers.

- Added `@media` support to CSS lexer (thanks to Tim Hatch).

- Added a Groff lexer (thanks to Tim Hatch).

- License change to BSD.

- Added lexers for the Myghty template language.

- Added a Scheme lexer (thanks to Marek Kubica).

- Added some functions to iterate over existing lexers, formatters and
  styles.

- The HtmlFormatter's `get_style_defs()` can now take a list as an
  argument to generate CSS with multiple prefixes.

- Support for guessing input encoding added.

- Encoding support added: all processing is now done with Unicode
  strings, input and output are converted from and optionally to byte
  strings (see the ``encoding`` option of lexers and formatters).

- Some improvements to the C(++) lexers' handling of comments and line
  continuations.


Version 0.5.1
-------------
(released Oct 30, 2006)

- Fixed traceback in ``pygmentize -L`` (thanks to Piotr Ozarowski).


Version 0.5
-----------
(codename PyKleur, released Oct 30, 2006)

- Initial public release.
Pygments-2.1/doc/0000755000175000017500000000000012646734115013342 5ustar  dmitrydmitryPygments-2.1/doc/docs/0000755000175000017500000000000012646734115014272 5ustar  dmitrydmitryPygments-2.1/doc/docs/quickstart.rst0000644000175000017500000001477312642443625017231 0ustar  dmitrydmitry.. -*- mode: rst -*-

===========================
Introduction and Quickstart
===========================


Welcome to Pygments! This document explains the basic concepts and terms and
gives a few examples of how to use the library.


Architecture
============

There are four types of components that work together to highlight a piece
of code:

* A **lexer** splits the source into tokens, fragments of the source that
  have a token type that determines what the text represents semantically
  (e.g., keyword, string, or comment). There is a lexer for every language
  or markup format that Pygments supports.
* The token stream can be piped through **filters**, which usually modify
  the token types or text fragments, e.g. uppercasing all keywords; a short
  sketch of this step follows the list.
* A **formatter** then takes the token stream and writes it to an output
  file, in a format such as HTML, LaTeX or RTF.
* While writing the output, a **style** determines how to highlight all the
  different token types. It maps them to attributes like "red and bold".


Example
=======

Here is a small example of highlighting Python code:

.. sourcecode:: python

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import HtmlFormatter

    code = 'print "Hello World"'
    print highlight(code, PythonLexer(), HtmlFormatter())

which prints something like this:

.. sourcecode:: html

    
    <div class="highlight">
    <pre><span class="k">print</span> <span class="s">&quot;Hello World&quot;</span></pre>
    </div>
As you can see, Pygments uses CSS classes (by default, but you can change that) instead of inline styles in order to avoid outputting redundant style information over and over. A CSS stylesheet that contains all CSS classes possibly used in the output can be produced by: .. sourcecode:: python print HtmlFormatter().get_style_defs('.highlight') The argument to :func:`get_style_defs` is used as an additional CSS selector: the output may look like this: .. sourcecode:: css .highlight .k { color: #AA22FF; font-weight: bold } .highlight .s { color: #BB4444 } ... Options ======= The :func:`highlight()` function supports a fourth argument called *outfile*, it must be a file object if given. The formatted output will then be written to this file instead of being returned as a string. Lexers and formatters both support options. They are given to them as keyword arguments either to the class or to the lookup method: .. sourcecode:: python from pygments import highlight from pygments.lexers import get_lexer_by_name from pygments.formatters import HtmlFormatter lexer = get_lexer_by_name("python", stripall=True) formatter = HtmlFormatter(linenos=True, cssclass="source") result = highlight(code, lexer, formatter) This makes the lexer strip all leading and trailing whitespace from the input (`stripall` option), lets the formatter output line numbers (`linenos` option), and sets the wrapping ``
<pre>
``'s class to ``source`` (instead of ``highlight``). Important options include: `encoding` : for lexers and formatters Since Pygments uses Unicode strings internally, this determines which encoding will be used to convert to or from byte strings. `style` : for formatters The name of the style to use when writing the output. For an overview of builtin lexers and formatters and their options, visit the :doc:`lexer ` and :doc:`formatters ` lists. For a documentation on filters, see :doc:`this page `. Lexer and formatter lookup ========================== If you want to lookup a built-in lexer by its alias or a filename, you can use one of the following methods: .. sourcecode:: pycon >>> from pygments.lexers import (get_lexer_by_name, ... get_lexer_for_filename, get_lexer_for_mimetype) >>> get_lexer_by_name('python') >>> get_lexer_for_filename('spam.rb') >>> get_lexer_for_mimetype('text/x-perl') All these functions accept keyword arguments; they will be passed to the lexer as options. A similar API is available for formatters: use :func:`.get_formatter_by_name()` and :func:`.get_formatter_for_filename()` from the :mod:`pygments.formatters` module for this purpose. Guessing lexers =============== If you don't know the content of the file, or you want to highlight a file whose extension is ambiguous, such as ``.html`` (which could contain plain HTML or some template tags), use these functions: .. sourcecode:: pycon >>> from pygments.lexers import guess_lexer, guess_lexer_for_filename >>> guess_lexer('#!/usr/bin/python\nprint "Hello World!"') >>> guess_lexer_for_filename('test.py', 'print "Hello World!"') :func:`.guess_lexer()` passes the given content to the lexer classes' :meth:`analyse_text()` method and returns the one for which it returns the highest number. All lexers have two different filename pattern lists: the primary and the secondary one. The :func:`.get_lexer_for_filename()` function only uses the primary list, whose entries are supposed to be unique among all lexers. :func:`.guess_lexer_for_filename()`, however, will first loop through all lexers and look at the primary and secondary filename patterns if the filename matches. If only one lexer matches, it is returned, else the guessing mechanism of :func:`.guess_lexer()` is used with the matching lexers. As usual, keyword arguments to these functions are given to the created lexer as options. Command line usage ================== You can use Pygments from the command line, using the :program:`pygmentize` script:: $ pygmentize test.py will highlight the Python file test.py using ANSI escape sequences (a.k.a. terminal colors) and print the result to standard output. To output HTML, use the ``-f`` option:: $ pygmentize -f html -o test.html test.py to write an HTML-highlighted version of test.py to the file test.html. Note that it will only be a snippet of HTML, if you want a full HTML document, use the "full" option:: $ pygmentize -f html -O full -o test.html test.py This will produce a full HTML document with included stylesheet. A style can be selected with ``-O style=``. If you need a stylesheet for an existing HTML file using Pygments CSS classes, it can be created with:: $ pygmentize -S default -f html > style.css where ``default`` is the style name. More options and tricks and be found in the :doc:`command line reference `. Pygments-2.1/doc/docs/filterdevelopment.rst0000644000175000017500000000453112645467227020565 0ustar dmitrydmitry.. -*- mode: rst -*- ===================== Write your own filter ===================== .. 
versionadded:: 0.7 Writing own filters is very easy. All you have to do is to subclass the `Filter` class and override the `filter` method. Additionally a filter is instantiated with some keyword arguments you can use to adjust the behavior of your filter. Subclassing Filters =================== As an example, we write a filter that converts all `Name.Function` tokens to normal `Name` tokens to make the output less colorful. .. sourcecode:: python from pygments.util import get_bool_opt from pygments.token import Name from pygments.filter import Filter class UncolorFilter(Filter): def __init__(self, **options): Filter.__init__(self, **options) self.class_too = get_bool_opt(options, 'classtoo') def filter(self, lexer, stream): for ttype, value in stream: if ttype is Name.Function or (self.class_too and ttype is Name.Class): ttype = Name yield ttype, value Some notes on the `lexer` argument: that can be quite confusing since it doesn't need to be a lexer instance. If a filter was added by using the `add_filter()` function of lexers, that lexer is registered for the filter. In that case `lexer` will refer to the lexer that has registered the filter. It *can* be used to access options passed to a lexer. Because it could be `None` you always have to check for that case if you access it. Using a decorator ================= You can also use the `simplefilter` decorator from the `pygments.filter` module: .. sourcecode:: python from pygments.util import get_bool_opt from pygments.token import Name from pygments.filter import simplefilter @simplefilter def uncolor(self, lexer, stream, options): class_too = get_bool_opt(options, 'classtoo') for ttype, value in stream: if ttype is Name.Function or (class_too and ttype is Name.Class): ttype = Name yield ttype, value The decorator automatically subclasses an internal filter class and uses the decorated function as a method for filtering. (That's why there is a `self` argument that you probably won't end up using in the method.) Pygments-2.1/doc/docs/unicode.rst0000644000175000017500000000441712645467227016466 0ustar dmitrydmitry===================== Unicode and Encodings ===================== Since Pygments 0.6, all lexers use unicode strings internally. Because of that you might encounter the occasional :exc:`UnicodeDecodeError` if you pass strings with the wrong encoding. Per default all lexers have their input encoding set to `guess`. This means that the following encodings are tried: * UTF-8 (including BOM handling) * The locale encoding (i.e. the result of `locale.getpreferredencoding()`) * As a last resort, `latin1` If you pass a lexer a byte string object (not unicode), it tries to decode the data using this encoding. You can override the encoding using the `encoding` or `inencoding` lexer options. If you have the `chardet`_ library installed and set the encoding to ``chardet`` if will analyse the text and use the encoding it thinks is the right one automatically: .. sourcecode:: python from pygments.lexers import PythonLexer lexer = PythonLexer(encoding='chardet') The best way is to pass Pygments unicode objects. In that case you can't get unexpected output. The formatters now send Unicode objects to the stream if you don't set the output encoding. You can do so by passing the formatters an `encoding` option: .. 
sourcecode:: python from pygments.formatters import HtmlFormatter f = HtmlFormatter(encoding='utf-8') **You will have to set this option if you have non-ASCII characters in the source and the output stream does not accept Unicode written to it!** This is the case for all regular files and for terminals. Note: The Terminal formatter tries to be smart: if its output stream has an `encoding` attribute, and you haven't set the option, it will encode any Unicode string with this encoding before writing it. This is the case for `sys.stdout`, for example. The other formatters don't have that behavior. Another note: If you call Pygments via the command line (`pygmentize`), encoding is handled differently, see :doc:`the command line docs `. .. versionadded:: 0.7 The formatters now also accept an `outencoding` option which will override the `encoding` option if given. This makes it possible to use a single options dict with lexers and formatters, and still have different input and output encodings. .. _chardet: http://chardet.feedparser.org/ Pygments-2.1/doc/docs/integrate.rst0000644000175000017500000000147112645467227017017 0ustar dmitrydmitry.. -*- mode: rst -*- =================================== Using Pygments in various scenarios =================================== Markdown -------- Since Pygments 0.9, the distribution ships Markdown_ preprocessor sample code that uses Pygments to render source code in :file:`external/markdown-processor.py`. You can copy and adapt it to your liking. .. _Markdown: http://www.freewisdom.org/projects/python-markdown/ TextMate -------- Antonio Cangiano has created a Pygments bundle for TextMate that allows to colorize code via a simple menu option. It can be found here_. .. _here: http://antoniocangiano.com/2008/10/28/pygments-textmate-bundle/ Bash completion --------------- The source distribution contains a file ``external/pygments.bashcomp`` that sets up completion for the ``pygmentize`` command in bash. Pygments-2.1/doc/docs/filters.rst0000644000175000017500000000227612642443625016502 0ustar dmitrydmitry.. -*- mode: rst -*- ======= Filters ======= .. versionadded:: 0.7 You can filter token streams coming from lexers to improve or annotate the output. For example, you can highlight special words in comments, convert keywords to upper or lowercase to enforce a style guide etc. To apply a filter, you can use the `add_filter()` method of a lexer: .. sourcecode:: pycon >>> from pygments.lexers import PythonLexer >>> l = PythonLexer() >>> # add a filter given by a string and options >>> l.add_filter('codetagify', case='lower') >>> l.filters [] >>> from pygments.filters import KeywordCaseFilter >>> # or give an instance >>> l.add_filter(KeywordCaseFilter(case='lower')) The `add_filter()` method takes keyword arguments which are forwarded to the constructor of the filter. To get a list of all registered filters by name, you can use the `get_all_filters()` function from the `pygments.filters` module that returns an iterable for all known filters. If you want to write your own filter, have a look at :doc:`Write your own filter `. Builtin Filters =============== .. pygmentsdoc:: filters Pygments-2.1/doc/docs/api.rst0000644000175000017500000002415712642443625015605 0ustar dmitrydmitry.. -*- mode: rst -*- ===================== The full Pygments API ===================== This page describes the Pygments API. High-level API ============== .. module:: pygments Functions from the :mod:`pygments` module: .. 
function:: lex(code, lexer) Lex `code` with the `lexer` (must be a `Lexer` instance) and return an iterable of tokens. Currently, this only calls `lexer.get_tokens()`. .. function:: format(tokens, formatter, outfile=None) Format a token stream (iterable of tokens) `tokens` with the `formatter` (must be a `Formatter` instance). The result is written to `outfile`, or if that is ``None``, returned as a string. .. function:: highlight(code, lexer, formatter, outfile=None) This is the most high-level highlighting function. It combines `lex` and `format` in one function. .. module:: pygments.lexers Functions from :mod:`pygments.lexers`: .. function:: get_lexer_by_name(alias, **options) Return an instance of a `Lexer` subclass that has `alias` in its aliases list. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is found. .. function:: get_lexer_for_filename(fn, **options) Return a `Lexer` subclass instance that has a filename pattern matching `fn`. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no lexer for that filename is found. .. function:: get_lexer_for_mimetype(mime, **options) Return a `Lexer` subclass instance that has `mime` in its mimetype list. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if not lexer for that mimetype is found. .. function:: guess_lexer(text, **options) Return a `Lexer` subclass instance that's guessed from the text in `text`. For that, the :meth:`.analyse_text()` method of every known lexer class is called with the text as argument, and the lexer which returned the highest value will be instantiated and returned. :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can handle the content. .. function:: guess_lexer_for_filename(filename, text, **options) As :func:`guess_lexer()`, but only lexers which have a pattern in `filenames` or `alias_filenames` that matches `filename` are taken into consideration. :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can handle the content. .. function:: get_all_lexers() Return an iterable over all registered lexers, yielding tuples in the format:: (longname, tuple of aliases, tuple of filename patterns, tuple of mimetypes) .. versionadded:: 0.6 .. module:: pygments.formatters Functions from :mod:`pygments.formatters`: .. function:: get_formatter_by_name(alias, **options) Return an instance of a :class:`.Formatter` subclass that has `alias` in its aliases list. The formatter is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no formatter with that alias is found. .. function:: get_formatter_for_filename(fn, **options) Return a :class:`.Formatter` subclass instance that has a filename pattern matching `fn`. The formatter is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no formatter for that filename is found. .. module:: pygments.styles Functions from :mod:`pygments.styles`: .. function:: get_style_by_name(name) Return a style class by its short name. The names of the builtin styles are listed in :data:`pygments.styles.STYLE_MAP`. Will raise :exc:`pygments.util.ClassNotFound` if no style of that name is found. .. function:: get_all_styles() Return an iterable over all registered styles, yielding their names. .. versionadded:: 0.6 .. 
module:: pygments.lexer Lexers ====== The base lexer class from which all lexers are derived is: .. class:: Lexer(**options) The constructor takes a \*\*keywords dictionary of options. Every subclass must first process its own options and then call the `Lexer` constructor, since it processes the `stripnl`, `stripall` and `tabsize` options. An example looks like this: .. sourcecode:: python def __init__(self, **options): self.compress = options.get('compress', '') Lexer.__init__(self, **options) As these options must all be specifiable as strings (due to the command line usage), there are various utility functions available to help with that, see `Option processing`_. .. method:: get_tokens(text) This method is the basic interface of a lexer. It is called by the `highlight()` function. It must process the text and return an iterable of ``(tokentype, value)`` pairs from `text`. Normally, you don't need to override this method. The default implementation processes the `stripnl`, `stripall` and `tabsize` options and then yields all tokens from `get_tokens_unprocessed()`, with the ``index`` dropped. .. method:: get_tokens_unprocessed(text) This method should process the text and return an iterable of ``(index, tokentype, value)`` tuples where ``index`` is the starting position of the token within the input text. This method must be overridden by subclasses. .. staticmethod:: analyse_text(text) A static method which is called for lexer guessing. It should analyse the text and return a float in the range from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer will not be selected as the most probable one, if it returns ``1.0``, it will be selected immediately. .. note:: You don't have to add ``@staticmethod`` to the definition of this method, this will be taken care of by the Lexer's metaclass. For a list of known tokens have a look at the :doc:`tokens` page. A lexer also can have the following attributes (in fact, they are mandatory except `alias_filenames`) that are used by the builtin lookup mechanism. .. attribute:: name Full name for the lexer, in human-readable form. .. attribute:: aliases A list of short, unique identifiers that can be used to lookup the lexer from a list, e.g. using `get_lexer_by_name()`. .. attribute:: filenames A list of `fnmatch` patterns that match filenames which contain content for this lexer. The patterns in this list should be unique among all lexers. .. attribute:: alias_filenames A list of `fnmatch` patterns that match filenames which may or may not contain content for this lexer. This list is used by the :func:`.guess_lexer_for_filename()` function, to determine which lexers are then included in guessing the correct one. That means that e.g. every lexer for HTML and a template language should include ``\*.html`` in this list. .. attribute:: mimetypes A list of MIME types for content that can be lexed with this lexer. .. module:: pygments.formatter Formatters ========== A formatter is derived from this class: .. class:: Formatter(**options) As with lexers, this constructor processes options and then must call the base class :meth:`__init__`. The :class:`Formatter` class recognizes the options `style`, `full` and `title`. It is up to the formatter class whether it uses them. .. method:: get_style_defs(arg='') This method must return statements or declarations suitable to define the current style for subsequent highlighted text (e.g. CSS classes in the `HTMLFormatter`). 
The optional argument `arg` can be used to modify the generation and is
    formatter dependent (it is standardized because it can be given on the
    command line).

    This method is called by the ``-S`` :doc:`command-line option <cmdline>`,
    the `arg` is then given by the ``-a`` option.

.. method:: format(tokensource, outfile)

    This method must format the tokens from the `tokensource` iterable and
    write the formatted version to the file object `outfile`.

    Formatter options can control how exactly the tokens are converted.

    .. versionadded:: 0.7

A formatter must have the following attributes that are used by the builtin
lookup mechanism.

.. attribute:: name

    Full name for the formatter, in human-readable form.

.. attribute:: aliases

    A list of short, unique identifiers that can be used to look up the
    formatter from a list, e.g. using :func:`.get_formatter_by_name()`.

.. attribute:: filenames

    A list of :mod:`fnmatch` patterns that match filenames for which this
    formatter can produce output. The patterns in this list should be unique
    among all formatters.

.. module:: pygments.util

Option processing
=================

The :mod:`pygments.util` module has some utility functions usable for option
processing:

.. exception:: OptionError

    This exception will be raised by all option processing functions if the
    type or value of the argument is not correct.

.. function:: get_bool_opt(options, optname, default=None)

    Interpret the key `optname` from the dictionary `options` as a boolean and
    return it. Return `default` if `optname` is not in `options`.

    The valid string values for ``True`` are ``1``, ``yes``, ``true`` and
    ``on``, the ones for ``False`` are ``0``, ``no``, ``false`` and ``off``
    (matched case-insensitively).

.. function:: get_int_opt(options, optname, default=None)

    As :func:`get_bool_opt`, but interpret the value as an integer.

.. function:: get_list_opt(options, optname, default=None)

    If the key `optname` from the dictionary `options` is a string, split it
    at whitespace and return it. If it is already a list or a tuple, it is
    returned as a list.

.. function:: get_choice_opt(options, optname, allowed, default=None)

    If the key `optname` from the dictionary is not in the sequence `allowed`,
    raise an error, otherwise return it.

    .. versionadded:: 0.8

Pygments-2.1/doc/docs/formatterdevelopment.rst0000644000175000017500000001404712642443625021277 0ustar dmitrydmitry.. -*- mode: rst -*-

========================
Write your own formatter
========================

As well as creating :doc:`your own lexer <lexerdevelopment>`, writing a new
formatter for Pygments is easy and straightforward.

A formatter is a class that is initialized with some keyword arguments (the
formatter options) and that must provide a `format()` method. Additionally, a
formatter should provide a `get_style_defs()` method that returns the style
definitions from the style in a form usable for the formatter's output format.

Quickstart
==========

The most basic formatter shipped with Pygments is the `NullFormatter`. It just
sends the value of a token to the output stream:

.. sourcecode:: python

    from pygments.formatter import Formatter

    class NullFormatter(Formatter):
        def format(self, tokensource, outfile):
            for ttype, value in tokensource:
                outfile.write(value)

As you can see, the `format()` method is passed two parameters: `tokensource`
and `outfile`. The first is an iterable of ``(token_type, value)`` tuples, the
latter a file-like object with a `write()` method.

Because the formatter is that basic, it doesn't override the
`get_style_defs()` method.
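You can try a formatter directly with `highlight()`; here is a minimal sketch
(the input snippet and lexer are arbitrary choices, and the ``u'...'`` repr
shown assumes Python 2, which this documentation targets):

.. sourcecode:: pycon

    >>> from pygments import highlight
    >>> from pygments.lexers import PythonLexer
    >>> from pygments.formatters import NullFormatter
    >>> # with no outfile given, highlight() returns the formatted string;
    >>> # the NullFormatter just concatenates the token values again
    >>> highlight(u'print 1\n', PythonLexer(), NullFormatter())
    u'print 1\n'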
Styles
======

Styles aren't instantiated but their metaclass provides some class functions
so that you can access the style definitions easily.

Styles are iterable and yield tuples in the form ``(ttype, d)`` where `ttype`
is a token and `d` is a dict with the following keys:

``'color'``
    Hexadecimal color value (eg: ``'ff0000'`` for red) or `None` if not
    defined.

``'bold'``
    `True` if the value should be bold

``'italic'``
    `True` if the value should be italic

``'underline'``
    `True` if the value should be underlined

``'bgcolor'``
    Hexadecimal color value for the background (eg: ``'eeeeee'`` for light
    gray) or `None` if not defined.

``'border'``
    Hexadecimal color value for the border (eg: ``'0000aa'`` for a dark blue)
    or `None` for no border.

Additional keys might appear in the future; formatters should ignore all keys
they don't support.

HTML 3.2 Formatter
==================

For a more complex example, let's implement an HTML 3.2 formatter. We don't
use CSS but inline markup (``<u>``, ``<font>``, etc). Because this isn't good
style, this formatter isn't in the standard library ;-)

.. sourcecode:: python

    from pygments.formatter import Formatter

    class OldHtmlFormatter(Formatter):

        def __init__(self, **options):
            Formatter.__init__(self, **options)

            # create a dict of (start, end) tuples that wrap the
            # value of a token so that we can use it in the format
            # method later
            self.styles = {}

            # we iterate over the `_styles` attribute of a style item
            # that contains the parsed style values.
            for token, style in self.style:
                start = end = ''
                # a style item is a tuple in the following form:
                # colors are readily specified in hex: 'RRGGBB'
                if style['color']:
                    start += '<font color="#%s">' % style['color']
                    end = '</font>' + end
                if style['bold']:
                    start += '<b>'
                    end = '</b>' + end
                if style['italic']:
                    start += '<i>'
                    end = '</i>' + end
                if style['underline']:
                    start += '<u>'
                    end = '</u>' + end
                self.styles[token] = (start, end)

        def format(self, tokensource, outfile):
            # lastval is a string we use for caching
            # because it's possible that a lexer yields a number
            # of consecutive tokens with the same token type.
            # to minimize the size of the generated html markup we
            # try to join the values of same-type tokens here
            lastval = ''
            lasttype = None

            # wrap the whole output with <pre>
            outfile.write('<pre>')

            for ttype, value in tokensource:
                # if the token type doesn't exist in the stylemap
                # we try it with the parent of the token type
                # eg: parent of Token.Literal.String.Double is
                # Token.Literal.String
                while ttype not in self.styles:
                    ttype = ttype.parent
                if ttype == lasttype:
                    # the current token type is the same as in the last
                    # iteration. cache it
                    lastval += value
                else:
                    # not the same token as last iteration, but we
                    # have some data in the buffer. wrap it with the
                    # defined style and write it to the output file
                    if lastval:
                        stylebegin, styleend = self.styles[lasttype]
                        outfile.write(stylebegin + lastval + styleend)
                    # set lastval/lasttype to current values
                    lastval = value
                    lasttype = ttype

            # if something is left in the buffer, write it to the
            # output file, then close the opened <pre> tag
            if lastval:
                stylebegin, styleend = self.styles[lasttype]
                outfile.write(stylebegin + lastval + styleend)
            outfile.write('</pre>\n')

The comments should explain it. Again, this formatter doesn't override the
`get_style_defs()` method. If we had used CSS classes instead of inline HTML
markup, we would need to generate the CSS first. For that purpose the
`get_style_defs()` method exists:

Generating Style Definitions
============================

Some formatters like the `LatexFormatter` and the `HtmlFormatter` don't output
inline markup but reference either macros or css classes. Because the
definitions of those are not part of the output, the `get_style_defs()` method
exists. It is passed one parameter (if it's used and how it's used is up to
the formatter) and has to return a string or ``None``.

Pygments-2.1/doc/docs/tokens.rst0000644000175000017500000002342012645467227016336 0ustar dmitrydmitry.. -*- mode: rst -*-

==============
Builtin Tokens
==============

.. module:: pygments.token

In the :mod:`pygments.token` module, there is a special object called `Token`
that is used to create token types. You can create a new token type by
accessing an attribute of `Token`:

.. sourcecode:: pycon

    >>> from pygments.token import Token
    >>> Token.String
    Token.String
    >>> Token.String is Token.String
    True

Note that tokens are singletons so you can use the ``is`` operator for
comparing token types.

As of Pygments 0.7 you can also use the ``in`` operator to perform set tests:

.. sourcecode:: pycon

    >>> from pygments.token import Comment
    >>> Comment.Single in Comment
    True
    >>> Comment in Comment.Multi
    False

This can be useful in :doc:`filters <filters>` and if you write lexers on your
own without using the base lexers.

You can also split a token type into a hierarchy, and get the parent of it:

.. sourcecode:: pycon

    >>> String.split()
    [Token, Token.Literal, Token.Literal.String]
    >>> String.parent
    Token.Literal

In principle, you can create an unlimited number of token types but nobody can
guarantee that a style would define style rules for a token type. Because of
that, Pygments proposes some global token types defined in the
`pygments.token.STANDARD_TYPES` dict.

For some tokens aliases are already defined:

.. sourcecode:: pycon

    >>> from pygments.token import String
    >>> String
    Token.Literal.String

Inside the :mod:`pygments.token` module the following aliases are defined:

============= ============================ ======================================
`Text`        `Token.Text`                 for any type of text data
`Whitespace`  `Token.Text.Whitespace`      for specially highlighted whitespace
`Error`       `Token.Error`                represents lexer errors
`Other`       `Token.Other`                special token for data not matched by
                                           a parser (e.g. HTML markup in PHP
                                           code)
`Keyword`     `Token.Keyword`              any kind of keywords
`Name`        `Token.Name`                 variable/function names
`Literal`     `Token.Literal`              Any literals
`String`      `Token.Literal.String`       string literals
`Number`      `Token.Literal.Number`       number literals
`Operator`    `Token.Operator`             operators (``+``, ``not``...)
`Punctuation` `Token.Punctuation`          punctuation (``[``, ``(``...)
`Comment`     `Token.Comment`              any kind of comments
`Generic`     `Token.Generic`              generic tokens (have a look at the
                                           explanation below)
============= ============================ ======================================

The `Whitespace` token type is new in Pygments 0.8. It is used only by the
`VisibleWhitespaceFilter` currently.

Normally you just create token types using the already defined aliases.
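To see what a standard token maps to, you can look it up in that dict; a quick
sketch (the short strings are the identifiers the `HtmlFormatter` uses as CSS
class names):

.. sourcecode:: pycon

    >>> from pygments.token import STANDARD_TYPES, String
    >>> # every standard token type has a short identifier assigned
    >>> STANDARD_TYPES[String]
    's'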
For each of those token aliases, a number of subtypes exists (excluding the
special tokens `Token.Text`, `Token.Error` and `Token.Other`).

The `is_token_subtype()` function in the `pygments.token` module can be used
to test if a token type is a subtype of another (such as `Name.Tag` and
`Name`). (This is the same as ``Name.Tag in Name``. The overloaded `in`
operator was introduced in Pygments 0.7, the function still exists for
backwards compatibility.)

With Pygments 0.7, it's also possible to convert strings to token types (for
example if you want to supply a token from the command line):

.. sourcecode:: pycon

    >>> from pygments.token import String, string_to_tokentype
    >>> string_to_tokentype("String")
    Token.Literal.String
    >>> string_to_tokentype("Token.Literal.String")
    Token.Literal.String
    >>> string_to_tokentype(String)
    Token.Literal.String

Keyword Tokens
==============

`Keyword`
    For any kind of keyword (especially if it doesn't match any of the
    subtypes of course).

`Keyword.Constant`
    For keywords that are constants (e.g. ``None`` in future Python versions).

`Keyword.Declaration`
    For keywords used for variable declaration (e.g. ``var`` in some
    programming languages like JavaScript).

`Keyword.Namespace`
    For keywords used for namespace declarations (e.g. ``import`` in Python
    and Java and ``package`` in Java).

`Keyword.Pseudo`
    For keywords that aren't really keywords (e.g. ``None`` in old Python
    versions).

`Keyword.Reserved`
    For reserved keywords.

`Keyword.Type`
    For builtin types that can't be used as identifiers (e.g. ``int``,
    ``char`` etc. in C).

Name Tokens
===========

`Name`
    For any name (variable names, function names, classes).

`Name.Attribute`
    For all attributes (e.g. in HTML tags).

`Name.Builtin`
    Builtin names; names that are available in the global namespace.

`Name.Builtin.Pseudo`
    Builtin names that are implicit (e.g. ``self`` in Ruby, ``this`` in Java).

`Name.Class`
    Class names. Because no lexer can know if a name is a class or a function
    or something else, this token is meant for class declarations.

`Name.Constant`
    Token type for constants. In some languages you can recognise a token by
    the way it's defined (the value after a ``const`` keyword for example). In
    other languages constants are uppercase by definition (Ruby).

`Name.Decorator`
    Token type for decorators. Decorators are syntactic elements in the Python
    language. Similar syntax elements exist in C# and Java.

`Name.Entity`
    Token type for special entities (e.g. ``&nbsp;`` in HTML).

`Name.Exception`
    Token type for exception names (e.g. ``RuntimeError`` in Python). Some
    languages define exceptions in the function signature (Java). You can then
    highlight the name of that exception using this token.

`Name.Function`
    Token type for function names.

`Name.Label`
    Token type for label names (e.g. in languages that support ``goto``).

`Name.Namespace`
    Token type for namespaces (e.g. import paths in Java/Python), names
    following the ``module``/``namespace`` keyword in other languages.

`Name.Other`
    Other names. Normally unused.

`Name.Tag`
    Tag names (in HTML/XML markup or configuration files).

`Name.Variable`
    Token type for variables. Some languages have prefixes for variable names
    (PHP, Ruby, Perl). You can highlight them using this token.

`Name.Variable.Class`
    same as `Name.Variable` but for class variables (also static variables).

`Name.Variable.Global`
    same as `Name.Variable` but for global variables (used in Ruby, for
    example).

`Name.Variable.Instance`
    same as `Name.Variable` but for instance variables.
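As a quick illustration of the subtype test described above, using names from
this section:

.. sourcecode:: pycon

    >>> from pygments.token import Name, is_token_subtype
    >>> # equivalent to ``Name.Tag in Name``
    >>> is_token_subtype(Name.Tag, Name)
    True
    >>> Name.Variable.Instance in Name.Variable
    True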
Literals ======== `Literal` For any literal (if not further defined). `Literal.Date` for date literals (e.g. ``42d`` in Boo). `String` For any string literal. `String.Backtick` Token type for strings enclosed in backticks. `String.Char` Token type for single characters (e.g. Java, C). `String.Doc` Token type for documentation strings (for example Python). `String.Double` Double quoted strings. `String.Escape` Token type for escape sequences in strings. `String.Heredoc` Token type for "heredoc" strings (e.g. in Ruby or Perl). `String.Interpol` Token type for interpolated parts in strings (e.g. ``#{foo}`` in Ruby). `String.Other` Token type for any other strings (for example ``%q{foo}`` string constructs in Ruby). `String.Regex` Token type for regular expression literals (e.g. ``/foo/`` in JavaScript). `String.Single` Token type for single quoted strings. `String.Symbol` Token type for symbols (e.g. ``:foo`` in LISP or Ruby). `Number` Token type for any number literal. `Number.Bin` Token type for binary literals (e.g. ``0b101010``). `Number.Float` Token type for float literals (e.g. ``42.0``). `Number.Hex` Token type for hexadecimal number literals (e.g. ``0xdeadbeef``). `Number.Integer` Token type for integer literals (e.g. ``42``). `Number.Integer.Long` Token type for long integer literals (e.g. ``42L`` in Python). `Number.Oct` Token type for octal literals. Operators ========= `Operator` For any punctuation operator (e.g. ``+``, ``-``). `Operator.Word` For any operator that is a word (e.g. ``not``). Punctuation =========== .. versionadded:: 0.7 `Punctuation` For any punctuation which is not an operator (e.g. ``[``, ``(``...) Comments ======== `Comment` Token type for any comment. `Comment.Hashbang` Token type for hashbang comments (i.e. first lines of files that start with ``#!``). `Comment.Multiline` Token type for multiline comments. `Comment.Preproc` Token type for preprocessor comments (also ```_ is used to guess the encoding of the input. .. versionadded:: 0.6 The "Short Names" field lists the identifiers that can be used with the `get_lexer_by_name()` function. These lexers are builtin and can be imported from `pygments.lexers`: .. pygmentsdoc:: lexers Iterating over all lexers ------------------------- .. versionadded:: 0.6 To get all lexers (both the builtin and the plugin ones), you can use the `get_all_lexers()` function from the `pygments.lexers` module: .. sourcecode:: pycon >>> from pygments.lexers import get_all_lexers >>> i = get_all_lexers() >>> i.next() ('Diff', ('diff',), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')) >>> i.next() ('Delphi', ('delphi', 'objectpascal', 'pas', 'pascal'), ('*.pas',), ('text/x-pascal',)) >>> i.next() ('XML+Ruby', ('xml+erb', 'xml+ruby'), (), ()) As you can see, the return value is an iterator which yields tuples in the form ``(name, aliases, filetypes, mimetypes)``. Pygments-2.1/doc/docs/styles.rst0000644000175000017500000001005312642443625016345 0ustar dmitrydmitry.. -*- mode: rst -*- ====== Styles ====== Pygments comes with some builtin styles that work for both the HTML and LaTeX formatter. The builtin styles can be looked up with the `get_style_by_name` function: .. sourcecode:: pycon >>> from pygments.styles import get_style_by_name >>> get_style_by_name('colorful') You can pass a instance of a `Style` class to a formatter as the `style` option in form of a string: .. 
sourcecode:: pycon >>> from pygments.styles import get_style_by_name >>> from pygments.formatters import HtmlFormatter >>> HtmlFormatter(style='colorful').style Or you can also import your own style (which must be a subclass of `pygments.style.Style`) and pass it to the formatter: .. sourcecode:: pycon >>> from yourapp.yourmodule import YourStyle >>> from pygments.formatters import HtmlFormatter >>> HtmlFormatter(style=YourStyle).style Creating Own Styles =================== So, how to create a style? All you have to do is to subclass `Style` and define some styles: .. sourcecode:: python from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic class YourStyle(Style): default_style = "" styles = { Comment: 'italic #888', Keyword: 'bold #005', Name: '#f00', Name.Function: '#0f0', Name.Class: 'bold #0f0', String: 'bg:#eee #111' } That's it. There are just a few rules. When you define a style for `Name` the style automatically also affects `Name.Function` and so on. If you defined ``'bold'`` and you don't want boldface for a subtoken use ``'nobold'``. (Philosophy: the styles aren't written in CSS syntax since this way they can be used for a variety of formatters.) `default_style` is the style inherited by all token types. To make the style usable for Pygments, you must * either register it as a plugin (see :doc:`the plugin docs `) * or drop it into the `styles` subpackage of your Pygments distribution one style class per style, where the file name is the style name and the class name is `StylenameClass`. For example, if your style should be called ``"mondrian"``, name the class `MondrianStyle`, put it into the file ``mondrian.py`` and this file into the ``pygments.styles`` subpackage directory. Style Rules =========== Here a small overview of all allowed styles: ``bold`` render text as bold ``nobold`` don't render text as bold (to prevent subtokens being highlighted bold) ``italic`` render text italic ``noitalic`` don't render text as italic ``underline`` render text underlined ``nounderline`` don't render text underlined ``bg:`` transparent background ``bg:#000000`` background color (black) ``border:`` no border ``border:#ffffff`` border color (white) ``#ff0000`` text color (red) ``noinherit`` don't inherit styles from supertoken Note that there may not be a space between ``bg:`` and the color value since the style definition string is split at whitespace. Also, using named colors is not allowed since the supported color names vary for different formatters. Furthermore, not all lexers might support every style. Builtin Styles ============== Pygments ships some builtin styles which are maintained by the Pygments team. To get a list of known styles you can use this snippet: .. sourcecode:: pycon >>> from pygments.styles import STYLE_MAP >>> STYLE_MAP.keys() ['default', 'emacs', 'friendly', 'colorful'] Getting a list of available styles ================================== .. versionadded:: 0.6 Because it could be that a plugin registered a style, there is a way to iterate over all styles: .. sourcecode:: pycon >>> from pygments.styles import get_all_styles >>> styles = list(get_all_styles()) Pygments-2.1/doc/docs/changelog.rst0000644000175000017500000000003312642443625016746 0ustar dmitrydmitry.. include:: ../../CHANGES Pygments-2.1/doc/docs/formatters.rst0000644000175000017500000000312012642443625017205 0ustar dmitrydmitry.. 
-*- mode: rst -*-

====================
Available formatters
====================

This page lists all builtin formatters.

Common options
==============

All formatters support these options:

`encoding`
    If given, must be an encoding name (such as ``"utf-8"``). This will be
    used to convert the token strings (which are Unicode strings) to byte
    strings in the output (default: ``None``). It will also be written in an
    encoding declaration suitable for the document format if the `full`
    option is given (e.g. a ``meta content-type`` directive in HTML or an
    invocation of the `inputenc` package in LaTeX).

    If this is ``""`` or ``None``, Unicode strings will be written to the
    output file, which most file-like objects do not support. For example,
    `pygments.highlight()` will return a Unicode string if called with no
    `outfile` argument and a formatter that has `encoding` set to ``None``
    because it uses a `StringIO.StringIO` object that supports Unicode
    arguments to `write()`. Using a regular file object wouldn't work.

    .. versionadded:: 0.6

`outencoding`
    When using Pygments from the command line, any `encoding` option given is
    passed to the lexer and the formatter. This is sometimes not desirable,
    for example if you want to set the input encoding to ``"guess"``.
    Therefore, `outencoding` has been introduced which overrides `encoding`
    for the formatter if given.

    .. versionadded:: 0.7

Formatter classes
=================

All these classes are importable from :mod:`pygments.formatters`.

.. pygmentsdoc:: formatters

Pygments-2.1/doc/docs/java.rst0000644000175000017500000000443112645467227015755 0ustar dmitrydmitry=====================
Use Pygments in Java
=====================

Thanks to `Jython <http://www.jython.org>`_ it is possible to use Pygments in
Java. This page is a simple tutorial to get an idea of how this works. You can
then look at the `Jython documentation <http://www.jython.org/docs/>`_ for
more advanced uses.

Since version 1.5, Pygments is deployed on `Maven Central
<http://repo1.maven.org/maven2/org/pygments/pygments/>`_ as a JAR, as is
Jython, which makes it a lot easier to create a Java project.

Here is an example of a `Maven <http://maven.apache.org/>`_ ``pom.xml`` file
for a project running Pygments:

.. sourcecode:: xml

    <?xml version="1.0" encoding="UTF-8"?>
    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>example</groupId>
      <artifactId>example</artifactId>
      <version>1.0-SNAPSHOT</version>
      <dependencies>
        <dependency>
          <groupId>org.python</groupId>
          <artifactId>jython-standalone</artifactId>
          <version>2.5.3</version>
        </dependency>
        <dependency>
          <groupId>org.pygments</groupId>
          <artifactId>pygments</artifactId>
          <version>1.5</version>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </project>

The following Java example:

.. sourcecode:: java

    PythonInterpreter interpreter = new PythonInterpreter();

    // Set a variable with the content you want to work with
    interpreter.set("code", code);

    // Simply use Pygments as you would in Python
    interpreter.exec("from pygments import highlight\n"
        + "from pygments.lexers import PythonLexer\n"
        + "from pygments.formatters import HtmlFormatter\n"
        + "\nresult = highlight(code, PythonLexer(), HtmlFormatter())");

    // Get the result that has been set in a variable
    System.out.println(interpreter.get("result", String.class));

will print something like:

.. sourcecode:: html
print "Hello World"
Pygments-2.1/doc/docs/cmdline.rst0000644000175000017500000001171312642443625016441 0ustar dmitrydmitry.. -*- mode: rst -*- ====================== Command Line Interface ====================== You can use Pygments from the shell, provided you installed the :program:`pygmentize` script:: $ pygmentize test.py print "Hello World" will print the file test.py to standard output, using the Python lexer (inferred from the file name extension) and the terminal formatter (because you didn't give an explicit formatter name). If you want HTML output:: $ pygmentize -f html -l python -o test.html test.py As you can see, the -l option explicitly selects a lexer. As seen above, if you give an input file name and it has an extension that Pygments recognizes, you can omit this option. The ``-o`` option gives an output file name. If it is not given, output is written to stdout. The ``-f`` option selects a formatter (as with ``-l``, it can also be omitted if an output file name is given and has a supported extension). If no output file name is given and ``-f`` is omitted, the :class:`.TerminalFormatter` is used. The above command could therefore also be given as:: $ pygmentize -o test.html test.py To create a full HTML document, including line numbers and stylesheet (using the "emacs" style), highlighting the Python file ``test.py`` to ``test.html``:: $ pygmentize -O full,style=emacs -o test.html test.py Options and filters ------------------- Lexer and formatter options can be given using the ``-O`` option:: $ pygmentize -f html -O style=colorful,linenos=1 -l python test.py Be sure to enclose the option string in quotes if it contains any special shell characters, such as spaces or expansion wildcards like ``*``. If an option expects a list value, separate the list entries with spaces (you'll have to quote the option value in this case too, so that the shell doesn't split it). Since the ``-O`` option argument is split at commas and expects the split values to be of the form ``name=value``, you can't give an option value that contains commas or equals signs. Therefore, an option ``-P`` is provided (as of Pygments 0.9) that works like ``-O`` but can only pass one option per ``-P``. Its value can then contain all characters:: $ pygmentize -P "heading=Pygments, the Python highlighter" ... Filters are added to the token stream using the ``-F`` option:: $ pygmentize -f html -l pascal -F keywordcase:case=upper main.pas As you see, options for the filter are given after a colon. As for ``-O``, the filter name and options must be one shell word, so there may not be any spaces around the colon. Generating styles ----------------- Formatters normally don't output full style information. For example, the HTML formatter by default only outputs ```` tags with ``class`` attributes. Therefore, there's a special ``-S`` option for generating style definitions. Usage is as follows:: $ pygmentize -f html -S colorful -a .syntax generates a CSS style sheet (because you selected the HTML formatter) for the "colorful" style prepending a ".syntax" selector to all style rules. For an explanation what ``-a`` means for :doc:`a particular formatter `, look for the `arg` argument for the formatter's :meth:`.get_style_defs()` method. Getting lexer names ------------------- .. versionadded:: 1.0 The ``-N`` option guesses a lexer name for a given filename, so that :: $ pygmentize -N setup.py will print out ``python``. It won't highlight anything yet. If no specific lexer is known for that filename, ``text`` is printed. 
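The same lookup is available from Python if you need it programmatically; a
small sketch of the equivalent API call (note that `get_lexer_for_filename()`
raises `ClassNotFound` instead of falling back to ``text``):

.. sourcecode:: pycon

    >>> from pygments.lexers import get_lexer_for_filename
    >>> # the first alias is the short name that ``-N`` prints
    >>> get_lexer_for_filename('setup.py').aliases[0]
    'python'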
Getting help ------------ The ``-L`` option lists lexers, formatters, along with their short names and supported file name extensions, styles and filters. If you want to see only one category, give it as an argument:: $ pygmentize -L filters will list only all installed filters. The ``-H`` option will give you detailed information (the same that can be found in this documentation) about a lexer, formatter or filter. Usage is as follows:: $ pygmentize -H formatter html will print the help for the HTML formatter, while :: $ pygmentize -H lexer python will print the help for the Python lexer, etc. A note on encodings ------------------- .. versionadded:: 0.9 Pygments tries to be smart regarding encodings in the formatting process: * If you give an ``encoding`` option, it will be used as the input and output encoding. * If you give an ``outencoding`` option, it will override ``encoding`` as the output encoding. * If you give an ``inencoding`` option, it will override ``encoding`` as the input encoding. * If you don't give an encoding and have given an output file, the default encoding for lexer and formatter is the terminal encoding or the default locale encoding of the system. As a last resort, ``latin1`` is used (which will pass through all non-ASCII characters). * If you don't give an encoding and haven't given an output file (that means output is written to the console), the default encoding for lexer and formatter is the terminal encoding (``sys.stdout.encoding``). Pygments-2.1/doc/docs/lexerdevelopment.rst0000644000175000017500000006311112645467227020416 0ustar dmitrydmitry.. -*- mode: rst -*- .. highlight:: python ==================== Write your own lexer ==================== If a lexer for your favorite language is missing in the Pygments package, you can easily write your own and extend Pygments. All you need can be found inside the :mod:`pygments.lexer` module. As you can read in the :doc:`API documentation `, a lexer is a class that is initialized with some keyword arguments (the lexer options) and that provides a :meth:`.get_tokens_unprocessed()` method which is given a string or unicode object with the data to lex. The :meth:`.get_tokens_unprocessed()` method must return an iterator or iterable containing tuples in the form ``(index, token, value)``. Normally you don't need to do this since there are base lexers that do most of the work and that you can subclass. RegexLexer ========== The lexer base class used by almost all of Pygments' lexers is the :class:`RegexLexer`. This class allows you to define lexing rules in terms of *regular expressions* for different *states*. States are groups of regular expressions that are matched against the input string at the *current position*. If one of these expressions matches, a corresponding action is performed (such as yielding a token with a specific type, or changing state), the current position is set to where the last match ended and the matching process continues with the first regex of the current state. Lexer states are kept on a stack: each time a new state is entered, the new state is pushed onto the stack. The most basic lexers (like the `DiffLexer`) just need one state. Each state is defined as a list of tuples in the form (`regex`, `action`, `new_state`) where the last item is optional. In the most basic form, `action` is a token type (like `Name.Builtin`). That means: When `regex` matches, emit a token with the match text and type `tokentype` and push `new_state` on the state stack. 
If the new state is ``'#pop'``, the topmost state is popped from the stack instead. To pop more than one state, use ``'#pop:2'`` and so on. ``'#push'`` is a synonym for pushing the current state on the stack. The following example shows the `DiffLexer` from the builtin lexers. Note that it contains some additional attributes `name`, `aliases` and `filenames` which aren't required for a lexer. They are used by the builtin lexer lookup functions. :: from pygments.lexer import RegexLexer from pygments.token import * class DiffLexer(RegexLexer): name = 'Diff' aliases = ['diff'] filenames = ['*.diff'] tokens = { 'root': [ (r' .*\n', Text), (r'\+.*\n', Generic.Inserted), (r'-.*\n', Generic.Deleted), (r'@.*\n', Generic.Subheading), (r'Index.*\n', Generic.Heading), (r'=.*\n', Generic.Heading), (r'.*\n', Text), ] } As you can see this lexer only uses one state. When the lexer starts scanning the text, it first checks if the current character is a space. If this is true it scans everything until newline and returns the data as a `Text` token (which is the "no special highlighting" token). If this rule doesn't match, it checks if the current char is a plus sign. And so on. If no rule matches at the current position, the current char is emitted as an `Error` token that indicates a lexing error, and the position is increased by one. Adding and testing a new lexer ============================== To make Pygments aware of your new lexer, you have to perform the following steps: First, change to the current directory containing the Pygments source code: .. code-block:: console $ cd .../pygments-main Select a matching module under ``pygments/lexers``, or create a new module for your lexer class. Next, make sure the lexer is known from outside of the module. All modules in the ``pygments.lexers`` specify ``__all__``. For example, ``esoteric.py`` sets:: __all__ = ['BrainfuckLexer', 'BefungeLexer', ...] Simply add the name of your lexer class to this list. Finally the lexer can be made publicly known by rebuilding the lexer mapping: .. code-block:: console $ make mapfiles To test the new lexer, store an example file with the proper extension in ``tests/examplefiles``. For example, to test your ``DiffLexer``, add a ``tests/examplefiles/example.diff`` containing a sample diff output. Now you can use pygmentize to render your example to HTML: .. code-block:: console $ ./pygmentize -O full -f html -o /tmp/example.html tests/examplefiles/example.diff Note that this explicitly calls the ``pygmentize`` in the current directory by preceding it with ``./``. This ensures your modifications are used. Otherwise a possibly already installed, unmodified version without your new lexer would have been called from the system search path (``$PATH``). To view the result, open ``/tmp/example.html`` in your browser. Once the example renders as expected, you should run the complete test suite: .. code-block:: console $ make test It also tests that your lexer fulfills the lexer API and certain invariants, such as that the concatenation of all token text is the same as the input text. Regex Flags =========== You can either define regex flags locally in the regex (``r'(?x)foo bar'``) or globally by adding a `flags` attribute to your lexer class. If no attribute is defined, it defaults to `re.MULTILINE`. For more information about regular expression flags see the page about `regular expressions`_ in the Python documentation. .. 
_regular expressions: http://docs.python.org/library/re.html#regular-expression-syntax Scanning multiple tokens at once ================================ So far, the `action` element in the rule tuple of regex, action and state has been a single token type. Now we look at the first of several other possible values. Here is a more complex lexer that highlights INI files. INI files consist of sections, comments and ``key = value`` pairs:: from pygments.lexer import RegexLexer, bygroups from pygments.token import * class IniLexer(RegexLexer): name = 'INI' aliases = ['ini', 'cfg'] filenames = ['*.ini', '*.cfg'] tokens = { 'root': [ (r'\s+', Text), (r';.*?$', Comment), (r'\[.*?\]$', Keyword), (r'(.*?)(\s*)(=)(\s*)(.*?)$', bygroups(Name.Attribute, Text, Operator, Text, String)) ] } The lexer first looks for whitespace, comments and section names. Later it looks for a line that looks like a key, value pair, separated by an ``'='`` sign, and optional whitespace. The `bygroups` helper yields each capturing group in the regex with a different token type. First the `Name.Attribute` token, then a `Text` token for the optional whitespace, after that a `Operator` token for the equals sign. Then a `Text` token for the whitespace again. The rest of the line is returned as `String`. Note that for this to work, every part of the match must be inside a capturing group (a ``(...)``), and there must not be any nested capturing groups. If you nevertheless need a group, use a non-capturing group defined using this syntax: ``(?:some|words|here)`` (note the ``?:`` after the beginning parenthesis). If you find yourself needing a capturing group inside the regex which shouldn't be part of the output but is used in the regular expressions for backreferencing (eg: ``r'(<(foo|bar)>)(.*?)()'``), you can pass `None` to the bygroups function and that group will be skipped in the output. Changing states =============== Many lexers need multiple states to work as expected. For example, some languages allow multiline comments to be nested. Since this is a recursive pattern it's impossible to lex just using regular expressions. Here is a lexer that recognizes C++ style comments (multi-line with ``/* */`` and single-line with ``//`` until end of line):: from pygments.lexer import RegexLexer from pygments.token import * class CppCommentLexer(RegexLexer): name = 'Example Lexer with states' tokens = { 'root': [ (r'[^/]+', Text), (r'/\*', Comment.Multiline, 'comment'), (r'//.*?$', Comment.Singleline), (r'/', Text) ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ] } This lexer starts lexing in the ``'root'`` state. It tries to match as much as possible until it finds a slash (``'/'``). If the next character after the slash is an asterisk (``'*'``) the `RegexLexer` sends those two characters to the output stream marked as `Comment.Multiline` and continues lexing with the rules defined in the ``'comment'`` state. If there wasn't an asterisk after the slash, the `RegexLexer` checks if it's a Singleline comment (i.e. followed by a second slash). If this also wasn't the case it must be a single slash, which is not a comment starter (the separate regex for a single slash must also be given, else the slash would be marked as an error token). Inside the ``'comment'`` state, we do the same thing again. Scan until the lexer finds a star or slash. 
If it's the opening of a multiline comment, push the ``'comment'`` state on the stack and continue scanning, again in the ``'comment'`` state. Else, check if it's the end of the multiline comment. If yes, pop one state from the stack. Note: If you pop from an empty stack you'll get an `IndexError`. (There is an easy way to prevent this from happening: don't ``'#pop'`` in the root state). If the `RegexLexer` encounters a newline that is flagged as an error token, the stack is emptied and the lexer continues scanning in the ``'root'`` state. This can help producing error-tolerant highlighting for erroneous input, e.g. when a single-line string is not closed. Advanced state tricks ===================== There are a few more things you can do with states: - You can push multiple states onto the stack if you give a tuple instead of a simple string as the third item in a rule tuple. For example, if you want to match a comment containing a directive, something like: .. code-block:: text /* rest of comment */ you can use this rule:: tokens = { 'root': [ (r'/\* <', Comment, ('comment', 'directive')), ... ], 'directive': [ (r'[^>]*', Comment.Directive), (r'>', Comment, '#pop'), ], 'comment': [ (r'[^*]+', Comment), (r'\*/', Comment, '#pop'), (r'\*', Comment), ] } When this encounters the above sample, first ``'comment'`` and ``'directive'`` are pushed onto the stack, then the lexer continues in the directive state until it finds the closing ``>``, then it continues in the comment state until the closing ``*/``. Then, both states are popped from the stack again and lexing continues in the root state. .. versionadded:: 0.9 The tuple can contain the special ``'#push'`` and ``'#pop'`` (but not ``'#pop:n'``) directives. - You can include the rules of a state in the definition of another. This is done by using `include` from `pygments.lexer`:: from pygments.lexer import RegexLexer, bygroups, include from pygments.token import * class ExampleLexer(RegexLexer): tokens = { 'comments': [ (r'/\*.*?\*/', Comment), (r'//.*?\n', Comment), ], 'root': [ include('comments'), (r'(function )(\w+)( {)', bygroups(Keyword, Name, Keyword), 'function'), (r'.', Text), ], 'function': [ (r'[^}/]+', Text), include('comments'), (r'/', Text), (r'\}', Keyword, '#pop'), ] } This is a hypothetical lexer for a language that consist of functions and comments. Because comments can occur at toplevel and in functions, we need rules for comments in both states. As you can see, the `include` helper saves repeating rules that occur more than once (in this example, the state ``'comment'`` will never be entered by the lexer, as it's only there to be included in ``'root'`` and ``'function'``). - Sometimes, you may want to "combine" a state from existing ones. This is possible with the `combined` helper from `pygments.lexer`. If you, instead of a new state, write ``combined('state1', 'state2')`` as the third item of a rule tuple, a new anonymous state will be formed from state1 and state2 and if the rule matches, the lexer will enter this state. This is not used very often, but can be helpful in some cases, such as the `PythonLexer`'s string literal processing. 
- If you want your lexer to start lexing in a different state you can modify the stack by overriding the `get_tokens_unprocessed()` method:: from pygments.lexer import RegexLexer class ExampleLexer(RegexLexer): tokens = {...} def get_tokens_unprocessed(self, text, stack=('root', 'otherstate')): for item in RegexLexer.get_tokens_unprocessed(text, stack): yield item Some lexers like the `PhpLexer` use this to make the leading ``', Name.Tag), ], 'script-content': [ (r'(.+?)(<\s*/\s*script\s*>)', bygroups(using(JavascriptLexer), Name.Tag), '#pop'), ] } Here the content of a ```` end tag is processed by the `JavascriptLexer`, while the end tag is yielded as a normal token with the `Name.Tag` type. Also note the ``(r'<\s*script\s*', Name.Tag, ('script-content', 'tag'))`` rule. Here, two states are pushed onto the state stack, ``'script-content'`` and ``'tag'``. That means that first ``'tag'`` is processed, which will lex attributes and the closing ``>``, then the ``'tag'`` state is popped and the next state on top of the stack will be ``'script-content'``. Since you cannot refer to the class currently being defined, use `this` (imported from `pygments.lexer`) to refer to the current lexer class, i.e. ``using(this)``. This construct may seem unnecessary, but this is often the most obvious way of lexing arbitrary syntax between fixed delimiters without introducing deeply nested states. The `using()` helper has a special keyword argument, `state`, which works as follows: if given, the lexer to use initially is not in the ``"root"`` state, but in the state given by this argument. This does not work with advanced `RegexLexer` subclasses such as `ExtendedRegexLexer` (see below). Any other keywords arguments passed to `using()` are added to the keyword arguments used to create the lexer. Delegating Lexer ================ Another approach for nested lexers is the `DelegatingLexer` which is for example used for the template engine lexers. It takes two lexers as arguments on initialisation: a `root_lexer` and a `language_lexer`. The input is processed as follows: First, the whole text is lexed with the `language_lexer`. All tokens yielded with the special type of ``Other`` are then concatenated and given to the `root_lexer`. The language tokens of the `language_lexer` are then inserted into the `root_lexer`'s token stream at the appropriate positions. :: from pygments.lexer import DelegatingLexer from pygments.lexers.web import HtmlLexer, PhpLexer class HtmlPhpLexer(DelegatingLexer): def __init__(self, **options): super(HtmlPhpLexer, self).__init__(HtmlLexer, PhpLexer, **options) This procedure ensures that e.g. HTML with template tags in it is highlighted correctly even if the template tags are put into HTML tags or attributes. If you want to change the needle token ``Other`` to something else, you can give the lexer another token type as the third parameter:: DelegatingLexer.__init__(MyLexer, OtherLexer, Text, **options) Callbacks ========= Sometimes the grammar of a language is so complex that a lexer would be unable to process it just by using regular expressions and stacks. For this, the `RegexLexer` allows callbacks to be given in rule tuples, instead of token types (`bygroups` and `using` are nothing else but preimplemented callbacks). 
The callback must be a function taking two arguments: * the lexer itself * the match object for the last matched rule The callback must then return an iterable of (or simply yield) ``(index, tokentype, value)`` tuples, which are then just passed through by `get_tokens_unprocessed()`. The ``index`` here is the position of the token in the input string, ``tokentype`` is the normal token type (like `Name.Builtin`), and ``value`` the associated part of the input string. You can see an example here:: from pygments.lexer import RegexLexer from pygments.token import Generic class HypotheticLexer(RegexLexer): def headline_callback(lexer, match): equal_signs = match.group(1) text = match.group(2) yield match.start(), Generic.Headline, equal_signs + text + equal_signs tokens = { 'root': [ (r'(=+)(.*?)(\1)', headline_callback) ] } If the regex for the `headline_callback` matches, the function is called with the match object. Note that after the callback is done, processing continues normally, that is, after the end of the previous match. The callback has no possibility to influence the position. There are not really any simple examples for lexer callbacks, but you can see them in action e.g. in the `SMLLexer` class in `ml.py`_. .. _ml.py: http://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ml.py The ExtendedRegexLexer class ============================ The `RegexLexer`, even with callbacks, unfortunately isn't powerful enough for the funky syntax rules of languages such as Ruby. But fear not; even then you don't have to abandon the regular expression approach: Pygments has a subclass of `RegexLexer`, the `ExtendedRegexLexer`. All features known from RegexLexers are available here too, and the tokens are specified in exactly the same way, *except* for one detail: The `get_tokens_unprocessed()` method holds its internal state data not as local variables, but in an instance of the `pygments.lexer.LexerContext` class, and that instance is passed to callbacks as a third argument. This means that you can modify the lexer state in callbacks. The `LexerContext` class has the following members: * `text` -- the input text * `pos` -- the current starting position that is used for matching regexes * `stack` -- a list containing the state stack * `end` -- the maximum position to which regexes are matched, this defaults to the length of `text` Additionally, the `get_tokens_unprocessed()` method can be given a `LexerContext` instead of a string and will then process this context instead of creating a new one for the string argument. Note that because you can set the current position to anything in the callback, it won't be automatically be set by the caller after the callback is finished. For example, this is how the hypothetical lexer above would be written with the `ExtendedRegexLexer`:: from pygments.lexer import ExtendedRegexLexer from pygments.token import Generic class ExHypotheticLexer(ExtendedRegexLexer): def headline_callback(lexer, match, ctx): equal_signs = match.group(1) text = match.group(2) yield match.start(), Generic.Headline, equal_signs + text + equal_signs ctx.pos = match.end() tokens = { 'root': [ (r'(=+)(.*?)(\1)', headline_callback) ] } This might sound confusing (and it can really be). But it is needed, and for an example look at the Ruby lexer in `ruby.py`_. .. 
_ruby.py: https://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ruby.py Handling Lists of Keywords ========================== For a relatively short list (hundreds) you can construct an optimized regular expression directly using ``words()`` (longer lists, see next section). This function handles a few things for you automatically, including escaping metacharacters and Python's first-match rather than longest-match in alternations. Feel free to put the lists themselves in ``pygments/lexers/_$lang_builtins.py`` (see examples there), and generated by code if possible. An example of using ``words()`` is something like:: from pygments.lexer import RegexLexer, words, Name class MyLexer(RegexLexer): tokens = { 'root': [ (words(('else', 'elseif'), suffix=r'\b'), Name.Builtin), (r'\w+', Name), ], } As you can see, you can add ``prefix`` and ``suffix`` parts to the constructed regex. Modifying Token Streams ======================= Some languages ship a lot of builtin functions (for example PHP). The total amount of those functions differs from system to system because not everybody has every extension installed. In the case of PHP there are over 3000 builtin functions. That's an incredibly huge amount of functions, much more than you want to put into a regular expression. But because only `Name` tokens can be function names this is solvable by overriding the ``get_tokens_unprocessed()`` method. The following lexer subclasses the `PythonLexer` so that it highlights some additional names as pseudo keywords:: from pygments.lexers.python import PythonLexer from pygments.token import Name, Keyword class MyPythonLexer(PythonLexer): EXTRA_KEYWORDS = set(('foo', 'bar', 'foobar', 'barfoo', 'spam', 'eggs')) def get_tokens_unprocessed(self, text): for index, token, value in PythonLexer.get_tokens_unprocessed(self, text): if token is Name and value in self.EXTRA_KEYWORDS: yield index, Keyword.Pseudo, value else: yield index, token, value The `PhpLexer` and `LuaLexer` use this method to resolve builtin functions. Pygments-2.1/doc/docs/authors.rst0000644000175000017500000000011012642443625016500 0ustar dmitrydmitryFull contributor list ===================== .. include:: ../../AUTHORS Pygments-2.1/doc/docs/plugins.rst0000644000175000017500000000502512642443625016506 0ustar dmitrydmitry================ Register Plugins ================ If you want to extend Pygments without hacking the sources, but want to use the lexer/formatter/style/filter lookup functions (`lexers.get_lexer_by_name` et al.), you can use `setuptools`_ entrypoints to add new lexers, formatters or styles as if they were in the Pygments core. .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools That means you can use your highlighter modules with the `pygmentize` script, which relies on the mentioned functions. Entrypoints =========== Here is a list of setuptools entrypoints that Pygments understands: `pygments.lexers` This entrypoint is used for adding new lexers to the Pygments core. The name of the entrypoint values doesn't really matter, Pygments extracts required metadata from the class definition: .. sourcecode:: ini [pygments.lexers] yourlexer = yourmodule:YourLexer Note that you have to define ``name``, ``aliases`` and ``filename`` attributes so that you can use the highlighter from the command line: .. sourcecode:: python class YourLexer(...): name = 'Name Of Your Lexer' aliases = ['alias'] filenames = ['*.ext'] `pygments.formatters` You can use this entrypoint to add new formatters to Pygments. 
The name of an entrypoint item is the name of the formatter. If you prefix the name with a slash it's used as a filename pattern: .. sourcecode:: ini [pygments.formatters] yourformatter = yourmodule:YourFormatter /.ext = yourmodule:YourFormatter `pygments.styles` To add a new style you can use this entrypoint. The name of the entrypoint is the name of the style: .. sourcecode:: ini [pygments.styles] yourstyle = yourmodule:YourStyle `pygments.filters` Use this entrypoint to register a new filter. The name of the entrypoint is the name of the filter: .. sourcecode:: ini [pygments.filters] yourfilter = yourmodule:YourFilter How To Use Entrypoints ====================== This documentation doesn't explain how to use those entrypoints because this is covered in the `setuptools documentation`_. That page should cover everything you need to write a plugin. .. _setuptools documentation: http://peak.telecommunity.com/DevCenter/setuptools Extending The Core ================== If you have written a Pygments plugin that is open source, please inform us about that. There is a high chance that we'll add it to the Pygments distribution. Pygments-2.1/doc/docs/moinmoin.rst0000644000175000017500000000267012642443625016655 0ustar dmitrydmitry.. -*- mode: rst -*- ============================ Using Pygments with MoinMoin ============================ From Pygments 0.7, the source distribution ships a `Moin`_ parser plugin that can be used to get Pygments highlighting in Moin wiki pages. To use it, copy the file `external/moin-parser.py` from the Pygments distribution to the `data/plugin/parser` subdirectory of your Moin instance. Edit the options at the top of the file (currently ``ATTACHMENTS`` and ``INLINESTYLES``) and rename the file to the name that the parser directive should have. For example, if you name the file ``code.py``, you can get a highlighted Python code sample with this Wiki markup:: {{{ #!code python [...] }}} where ``python`` is the Pygments name of the lexer to use. Additionally, if you set the ``ATTACHMENTS`` option to True, Pygments will also be called for all attachments for whose filenames there is no other parser registered. You are responsible for including CSS rules that will map the Pygments CSS classes to colors. You can output a stylesheet file with `pygmentize`, put it into the `htdocs` directory of your Moin instance and then include it in the `stylesheets` configuration option in the Moin config, e.g.:: stylesheets = [('screen', '/htdocs/pygments.css')] If you do not want to do that and are willing to accept larger HTML output, you can set the ``INLINESTYLES`` option to True. .. _Moin: http://moinmoin.wikiwikiweb.de/ Pygments-2.1/doc/docs/index.rst0000644000175000017500000000157412642443625016141 0ustar dmitrydmitryPygments documentation ====================== **Starting with Pygments** .. toctree:: :maxdepth: 1 ../download quickstart cmdline **Builtin components** .. toctree:: :maxdepth: 1 lexers filters formatters styles **Reference** .. toctree:: :maxdepth: 1 unicode tokens api **Hacking for Pygments** .. toctree:: :maxdepth: 1 lexerdevelopment formatterdevelopment filterdevelopment plugins **Hints and tricks** .. toctree:: :maxdepth: 1 rstdirective moinmoin java integrate **About Pygments** .. toctree:: :maxdepth: 1 changelog authors If you find bugs or have suggestions for the documentation, please look :ref:`here ` for info on how to contact the team. .. XXX You can download an offline version of this documentation from the :doc:`download page `. 
Pygments-2.1/doc/docs/index.rst0000644000175000017500000000157412642443625016141 0ustar dmitrydmitryPygments documentation
======================

**Starting with Pygments**

.. toctree::
   :maxdepth: 1

   ../download
   quickstart
   cmdline

**Builtin components**

.. toctree::
   :maxdepth: 1

   lexers
   filters
   formatters
   styles

**Reference**

.. toctree::
   :maxdepth: 1

   unicode
   tokens
   api

**Hacking for Pygments**

.. toctree::
   :maxdepth: 1

   lexerdevelopment
   formatterdevelopment
   filterdevelopment
   plugins

**Hints and tricks**

.. toctree::
   :maxdepth: 1

   rstdirective
   moinmoin
   java
   integrate

**About Pygments**

.. toctree::
   :maxdepth: 1

   changelog
   authors

If you find bugs or have suggestions for the documentation, please look
:ref:`here ` for info on how to contact the team.

.. XXX You can download an offline version of this documentation from the
   :doc:`download page `.
Pygments-2.1/doc/docs/rstdirective.rst0000644000175000017500000000155112642443625017534 0ustar dmitrydmitry.. -*- mode: rst -*-

================================
Using Pygments in ReST documents
================================

Many Python people use `ReST`_ for documenting their source code, programs,
scripts et cetera. This also means that documentation often includes source
code samples or snippets. You can easily enable Pygments support for your
ReST texts using a custom directive -- this is also how this documentation
displays source code.

From Pygments 0.9, the directive is shipped in the distribution as
`external/rst-directive.py`. You can copy and adapt this code to your liking.

.. removed -- too confusing
   *Loosely related note:* The ReST lexer now recognizes ``.. sourcecode::``
   and ``.. code::`` directives and highlights the contents in the specified
   language if the `handlecodeblocks` option is true.

.. _ReST: http://docutils.sf.net/rst.html
Pygments-2.1/doc/conf.py0000644000175000017500000001706612645467227014651 0ustar dmitrydmitry# -*- coding: utf-8 -*-
#
# Pygments documentation build configuration file
#

import sys, os

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))

import pygments

# -- General configuration -----------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'pygments.sphinxext']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Pygments'
copyright = u'2015, Georg Brandl'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = pygments.__version__
# The full version, including alpha/beta/rc tags.
release = version

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
#pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# -- Options for HTML output ---------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'pygments14'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['_themes']

# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = '_static/favicon.ico'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
html_sidebars = {'index': 'indexsidebar.html',
                 'docs/*': 'docssidebar.html'}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'Pygmentsdoc'

# -- Options for LaTeX output --------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
    ('index', 'Pygments.tex', u'Pygments Documentation',
     u'Georg Brandl', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output --------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'pygments', u'Pygments Documentation',
     [u'Georg Brandl'], 1)
]

# If true, show URL addresses after external links.
#man_show_urls = False

# -- Options for Texinfo output ------------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'Pygments', u'Pygments Documentation',
     u'Georg Brandl', 'Pygments', 'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
#texinfo_appendices = []

# If false, no module index is generated.
#texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
Pygments-2.1/doc/_themes/0000755000175000017500000000000012646734115014766 5ustar dmitrydmitryPygments-2.1/doc/_themes/pygments14/0000755000175000017500000000000012646734115017001 5ustar dmitrydmitryPygments-2.1/doc/_themes/pygments14/layout.html0000644000175000017500000000633612645467227021212 0ustar dmitrydmitry{# sphinxdoc/layout.html
   ~~~~~~~~~~~~~~~~~~~~~

   Sphinx layout template for the sphinxdoc theme.

   :copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
   :license: BSD, see LICENSE for details.
#}
{%- extends "basic/layout.html" %}

{# put the sidebar before the body #}
{% block sidebar1 %}{{ sidebar() }}{% endblock %}
{% block sidebar2 %}{% endblock %}

{% block relbar1 %}{% endblock %}
{% block relbar2 %}{% endblock %}

{% block extrahead %}
    {{ super() }}
    {%- if not embedded %}
    {%- endif %}
{% endblock %}

{% block header %}
{% endblock %} {% block footer %}
{# closes "outerwrapper" div #}
{% endblock %}

{% block sidebarrel %}
{% endblock %}

{% block sidebarsourcelink %}
{% endblock %}
Pygments-2.1/doc/_themes/pygments14/theme.conf0000644000175000017500000000040112642443625020744 0ustar dmitrydmitry[theme]
inherit = basic
stylesheet = pygments14.css
pygments_style = friendly

[options]
green = #66b55e
darkgreen = #36852e
darkgray = #666666
border = #66b55e
yellow = #f4cd00
darkyellow = #d4ad00
lightyellow = #fffbe3
background = #f9f9f9
font = PT Sans
Pygments-2.1/doc/_themes/pygments14/static/0000755000175000017500000000000012646734115020270 5ustar dmitrydmitryPygments-2.1/doc/_themes/pygments14/static/logo.png0000644000175000017500000006446512642443625021744 0ustar dmitrydmitry[binary PNG image data omitted]
[... remaining binary data of logo.png omitted ...]
Pygments-2.1/doc/_themes/pygments14/static/docbg.png [tar header garbled in source; binary PNG image data omitted]
[... remaining binary data of docbg.png omitted ...]
Pygments-2.1/doc/_themes/pygments14/static/pygments14.css_t [tar header garbled in source; the surviving stylesheet content follows, with its opening rules truncated]

d h3:first-child { margin-top: 0.5em; border: none; }  /* selector truncated in the source */
div.sphinxsidebar h3 a { color: #333; }
div.sphinxsidebar ul { color: #444; margin-top: 7px; padding: 0; line-height: 130%; }
div.sphinxsidebar ul ul { margin-left: 20px; list-style-image: url(listitem.png); }
div.footer { color: {{ theme_darkgray }}; text-shadow: 0 0 .2px rgba(255, 255, 255, 0.8); padding: 2em; text-align: center; clear: both; font-size: 0.8em; }

/* -- body styles ----------------------------------------------------------- */

p { margin: 0.8em 0 0.5em 0; }
a { color: {{ theme_darkgreen }}; text-decoration: none; }
a:hover { color: {{ theme_darkyellow }}; }
div.body a { text-decoration: underline; }
h1 { margin: 10px 0 0 0; font-size: 2.4em; color: {{ theme_darkgray }}; font-weight: 300; }
h2 { margin: 1.em 0 0.2em 0; font-size: 1.5em; font-weight: 300; padding: 0; color: {{ theme_darkgreen }}; }
h3 { margin: 1em 0 -0.3em 0; font-size: 1.3em; font-weight: 300; }
div.body h1 a, div.body h2 a, div.body h3 a, div.body h4 a, div.body h5 a, div.body h6 a { text-decoration: none; }
div.body h1 a tt, div.body h2 a tt, div.body h3 a tt, div.body h4 a tt, div.body h5 a tt, div.body h6 a tt { color: {{ theme_darkgreen }} !important; font-size: inherit !important; }
a.headerlink { color: {{ theme_green }} !important; font-size: 12px; margin-left: 6px; padding: 0 4px 0 4px; text-decoration: none !important; float: right; }
a.headerlink:hover { background-color: #ccc; color: white!important; }
cite, code, tt { font-family: 'Consolas', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 14px; letter-spacing: -0.02em; }
tt { background-color: #f2f2f2; border: 1px solid #ddd; border-radius: 2px; color: #333; padding: 1px; }
tt.descname, tt.descclassname, tt.xref { border: 0; }
hr { border: 1px solid #abc; margin: 2em; }
a tt { border: 0; color: {{ theme_darkgreen }}; }
a tt:hover { color: {{ theme_darkyellow }}; }
pre { font-family: 'Consolas', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 13px; letter-spacing: 0.015em; line-height: 120%; padding: 0.5em; border: 1px solid #ccc; border-radius: 2px; background-color: #f8f8f8; }
pre a { color: inherit; text-decoration: underline; }
td.linenos pre { padding: 0.5em 0; }
div.quotebar { background-color: #f8f8f8; max-width: 250px; float: right; padding: 0px 7px; border: 1px solid #ccc; margin-left: 1em; }
div.topic { background-color: #f8f8f8; }
table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; }
table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; }
div.admonition, div.warning { font-size: 0.9em; margin: 1em 0 1em 0; border: 1px solid #86989B; border-radius: 2px; background-color: #f7f7f7; padding: 0; }
div.admonition p, div.warning p { margin: 0.5em 1em 0.5em 1em; padding: 0; }
div.admonition pre, div.warning pre { margin: 0.4em 1em 0.4em 1em; }
div.admonition p.admonition-title, div.warning p.admonition-title { margin-top: 1em; padding-top: 0.5em; font-weight: bold; }
div.warning { border: 1px solid #940000; /* background-color: #FFCCCF;*/ }
div.warning p.admonition-title { }
div.admonition ul, div.admonition ol, div.warning ul, div.warning ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; }
.viewcode-back { font-family: {{ theme_font }}, 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; }
div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; }
Pygments-2.1/doc/_themes/pygments14/static/listitem.png [binary PNG image data omitted]
Pygments-2.1/doc/_themes/pygments14/static/pocoo.png [binary PNG image data omitted]
Pygments-2.1/doc/_themes/pygments14/static/bodybg.png [binary PNG image data omitted]
Pygments-2.1/doc/_templates/0000755000175000017500000000000012646734115015477 5ustar dmitrydmitryPygments-2.1/doc/_templates/docssidebar.html0000644000175000017500000000020312642443625020641 0ustar dmitrydmitry{% if pagename != 'docs/index' %}
« Back to docs index
{% endif %}
Pygments-2.1/doc/_templates/indexsidebar.html0000644000175000017500000000175312642443625021033 0ustar dmitrydmitry

Download

{% if version.endswith('(hg)') %}

This documentation is for version {{ version }}, which is not released yet.

You can use it from the Mercurial repo or look for released versions in the Python Package Index.

{% else %}

Current version: {{ version }}

Get Pygments from the Python Package Index, or install it with:

pip install Pygments
{% endif %}

Questions? Suggestions?

Clone at Bitbucket or come to the #pocoo channel on FreeNode.

You can also open an issue at the tracker.

Pygments-2.1/doc/download.rst0000644000175000017500000000263412642443625015707 0ustar dmitrydmitryDownload and installation
=========================

The current release is version |version|.

Packaged versions
-----------------

You can download it `from the Python Package Index `_. For installation
of packages from PyPI, we recommend `Pip `_, which works on all major
platforms.

Under Linux, most distributions include a package for Pygments, usually called
``pygments`` or ``python-pygments``. You can install it with the package
manager as usual.

Development sources
-------------------

We're using the `Mercurial `_ version control system. You can get the
development source using this command::

    hg clone http://bitbucket.org/birkenfeld/pygments-main pygments

Development takes place at `Bitbucket `_, you can browse the source
online `here `_.

The latest changes in the development source code are listed in the
`changelog `_.

.. Documentation
   -------------

   .. XXX todo

   You can download the documentation either as a bunch of rst files from
   the Mercurial repository, see above, or as a tar.gz containing rendered
   HTML files:

pygmentsdocs.tar.gz

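Once installed, whichever route you took, a quick way to check that
everything works is to run a tiny highlight from Python; this sketch uses
only the standard Pygments API:

.. sourcecode:: python

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import TerminalFormatter

    # Prints ANSI-colored Python source to the terminal.
    print(highlight('print("Hello, Pygments!")',
                    PythonLexer(), TerminalFormatter()))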
Pygments-2.1/doc/pygmentize.10000644000175000017500000000665012642443625015623 0ustar dmitrydmitry.TH PYGMENTIZE 1 "February 15, 2007"
.SH NAME
pygmentize \- highlights the input file
.SH SYNOPSIS
.B \fBpygmentize\fP
.RI [-l\ \fI\fP]\ [-F\ \fI\fP[:\fI\fP]]\ [-f\ \fI\fP]
.RI [-O\ \fI\fP]\ [-P\ \fI\fP]\ [-o\ \fI\fP]\ [\fI\fP]
.br
.B \fBpygmentize\fP
.RI -S\ \fI

%(title)s

''' DOC_HEADER_EXTERNALCSS = '''\ %(title)s

%(title)s

''' DOC_FOOTER = '''\ ''' class HtmlFormatter(Formatter): r""" Format tokens as HTML 4 ```` tags within a ``
`` tag, wrapped
    in a ``
`` tag. The ``
``'s CSS class can be set by the `cssclass` option. If the `linenos` option is set to ``"table"``, the ``
`` is
    additionally wrapped inside a ```` which has one row and two
    cells: one containing the line numbers and one containing the code.
    Example:

    .. sourcecode:: html

        
1
            2
def foo(bar):
              pass
            
(whitespace added to improve clarity). Wrapping can be disabled using the `nowrap` option. A list of lines can be specified using the `hl_lines` option to make these lines highlighted (as of Pygments 0.11). With the `full` option, a complete HTML 4 document is output, including the style definitions inside a ``