Pygments-2.3.1/0000755000175000017500000000000013405476655012416 5ustar piotrpiotrPygments-2.3.1/Pygments.egg-info/0000755000175000017500000000000013405476652015713 5ustar piotrpiotrPygments-2.3.1/Pygments.egg-info/SOURCES.txt0000644000175000017500000006364413405476652017614 0ustar piotrpiotrAUTHORS CHANGES LICENSE MANIFEST.in Makefile README.rst TODO pygmentize setup.cfg setup.py Pygments.egg-info/PKG-INFO Pygments.egg-info/SOURCES.txt Pygments.egg-info/dependency_links.txt Pygments.egg-info/entry_points.txt Pygments.egg-info/not-zip-safe Pygments.egg-info/top_level.txt doc/Makefile doc/conf.py doc/download.rst doc/faq.rst doc/index.rst doc/languages.rst doc/make.bat doc/pygmentize.1 doc/_build/doctrees/download.doctree doc/_build/doctrees/environment.pickle doc/_build/doctrees/faq.doctree doc/_build/doctrees/index.doctree doc/_build/doctrees/languages.doctree doc/_build/doctrees/docs/api.doctree doc/_build/doctrees/docs/authors.doctree doc/_build/doctrees/docs/changelog.doctree doc/_build/doctrees/docs/cmdline.doctree doc/_build/doctrees/docs/filterdevelopment.doctree doc/_build/doctrees/docs/filters.doctree doc/_build/doctrees/docs/formatterdevelopment.doctree doc/_build/doctrees/docs/formatters.doctree doc/_build/doctrees/docs/index.doctree doc/_build/doctrees/docs/integrate.doctree doc/_build/doctrees/docs/java.doctree doc/_build/doctrees/docs/lexerdevelopment.doctree doc/_build/doctrees/docs/lexers.doctree doc/_build/doctrees/docs/moinmoin.doctree doc/_build/doctrees/docs/plugins.doctree doc/_build/doctrees/docs/quickstart.doctree doc/_build/doctrees/docs/rstdirective.doctree doc/_build/doctrees/docs/styles.doctree doc/_build/doctrees/docs/tokens.doctree doc/_build/doctrees/docs/unicode.doctree doc/_build/html/.buildinfo doc/_build/html/download.html doc/_build/html/faq.html doc/_build/html/genindex.html doc/_build/html/index.html doc/_build/html/languages.html doc/_build/html/objects.inv doc/_build/html/py-modindex.html doc/_build/html/search.html doc/_build/html/searchindex.js doc/_build/html/_sources/download.rst.txt doc/_build/html/_sources/faq.rst.txt doc/_build/html/_sources/index.rst.txt doc/_build/html/_sources/languages.rst.txt doc/_build/html/_sources/docs/api.rst.txt doc/_build/html/_sources/docs/authors.rst.txt doc/_build/html/_sources/docs/changelog.rst.txt doc/_build/html/_sources/docs/cmdline.rst.txt doc/_build/html/_sources/docs/filterdevelopment.rst.txt doc/_build/html/_sources/docs/filters.rst.txt doc/_build/html/_sources/docs/formatterdevelopment.rst.txt doc/_build/html/_sources/docs/formatters.rst.txt doc/_build/html/_sources/docs/index.rst.txt doc/_build/html/_sources/docs/integrate.rst.txt doc/_build/html/_sources/docs/java.rst.txt doc/_build/html/_sources/docs/lexerdevelopment.rst.txt doc/_build/html/_sources/docs/lexers.rst.txt doc/_build/html/_sources/docs/moinmoin.rst.txt doc/_build/html/_sources/docs/plugins.rst.txt doc/_build/html/_sources/docs/quickstart.rst.txt doc/_build/html/_sources/docs/rstdirective.rst.txt doc/_build/html/_sources/docs/styles.rst.txt doc/_build/html/_sources/docs/tokens.rst.txt doc/_build/html/_sources/docs/unicode.rst.txt doc/_build/html/_static/ajax-loader.gif doc/_build/html/_static/basic.css doc/_build/html/_static/bodybg.png doc/_build/html/_static/comment-bright.png doc/_build/html/_static/comment-close.png doc/_build/html/_static/comment.png doc/_build/html/_static/docbg.png doc/_build/html/_static/doctools.js doc/_build/html/_static/documentation_options.js doc/_build/html/_static/down-pressed.png 
doc/_build/html/_static/down.png doc/_build/html/_static/favicon.ico doc/_build/html/_static/file.png doc/_build/html/_static/jquery-3.2.1.js doc/_build/html/_static/jquery.js doc/_build/html/_static/listitem.png doc/_build/html/_static/logo.png doc/_build/html/_static/logo_new.png doc/_build/html/_static/logo_only.png doc/_build/html/_static/minus.png doc/_build/html/_static/plus.png doc/_build/html/_static/pocoo.png doc/_build/html/_static/pygments.css doc/_build/html/_static/pygments14.css doc/_build/html/_static/searchtools.js doc/_build/html/_static/underscore-1.3.1.js doc/_build/html/_static/underscore.js doc/_build/html/_static/up-pressed.png doc/_build/html/_static/up.png doc/_build/html/_static/websupport.js doc/_build/html/docs/api.html doc/_build/html/docs/authors.html doc/_build/html/docs/changelog.html doc/_build/html/docs/cmdline.html doc/_build/html/docs/filterdevelopment.html doc/_build/html/docs/filters.html doc/_build/html/docs/formatterdevelopment.html doc/_build/html/docs/formatters.html doc/_build/html/docs/index.html doc/_build/html/docs/integrate.html doc/_build/html/docs/java.html doc/_build/html/docs/lexerdevelopment.html doc/_build/html/docs/lexers.html doc/_build/html/docs/moinmoin.html doc/_build/html/docs/plugins.html doc/_build/html/docs/quickstart.html doc/_build/html/docs/rstdirective.html doc/_build/html/docs/styles.html doc/_build/html/docs/tokens.html doc/_build/html/docs/unicode.html doc/_static/favicon.ico doc/_static/logo_new.png doc/_static/logo_only.png doc/_templates/docssidebar.html doc/_templates/indexsidebar.html doc/_themes/pygments14/layout.html doc/_themes/pygments14/theme.conf doc/_themes/pygments14/static/bodybg.png doc/_themes/pygments14/static/docbg.png doc/_themes/pygments14/static/listitem.png doc/_themes/pygments14/static/logo.png doc/_themes/pygments14/static/pocoo.png doc/_themes/pygments14/static/pygments14.css_t doc/docs/api.rst doc/docs/authors.rst doc/docs/changelog.rst doc/docs/cmdline.rst doc/docs/filterdevelopment.rst doc/docs/filters.rst doc/docs/formatterdevelopment.rst doc/docs/formatters.rst doc/docs/index.rst doc/docs/integrate.rst doc/docs/java.rst doc/docs/lexerdevelopment.rst doc/docs/lexers.rst doc/docs/moinmoin.rst doc/docs/plugins.rst doc/docs/quickstart.rst doc/docs/rstdirective.rst doc/docs/styles.rst doc/docs/tokens.rst doc/docs/unicode.rst external/autopygmentize external/lasso-builtins-generator-9.lasso external/markdown-processor.py external/moin-parser.py external/pygments.bashcomp external/rst-directive.py pygments/__init__.py pygments/cmdline.py pygments/console.py pygments/filter.py pygments/formatter.py pygments/lexer.py pygments/modeline.py pygments/plugin.py pygments/regexopt.py pygments/scanner.py pygments/sphinxext.py pygments/style.py pygments/token.py pygments/unistring.py pygments/util.py pygments/filters/__init__.py pygments/formatters/__init__.py pygments/formatters/_mapping.py pygments/formatters/bbcode.py pygments/formatters/html.py pygments/formatters/img.py pygments/formatters/irc.py pygments/formatters/latex.py pygments/formatters/other.py pygments/formatters/rtf.py pygments/formatters/svg.py pygments/formatters/terminal.py pygments/formatters/terminal256.py pygments/lexers/__init__.py pygments/lexers/_asy_builtins.py pygments/lexers/_cl_builtins.py pygments/lexers/_cocoa_builtins.py pygments/lexers/_csound_builtins.py pygments/lexers/_lasso_builtins.py pygments/lexers/_lua_builtins.py pygments/lexers/_mapping.py pygments/lexers/_mql_builtins.py pygments/lexers/_openedge_builtins.py 
pygments/lexers/_php_builtins.py pygments/lexers/_postgres_builtins.py pygments/lexers/_scilab_builtins.py pygments/lexers/_sourcemod_builtins.py pygments/lexers/_stan_builtins.py pygments/lexers/_stata_builtins.py pygments/lexers/_tsql_builtins.py pygments/lexers/_vim_builtins.py pygments/lexers/actionscript.py pygments/lexers/agile.py pygments/lexers/algebra.py pygments/lexers/ambient.py pygments/lexers/ampl.py pygments/lexers/apl.py pygments/lexers/archetype.py pygments/lexers/asm.py pygments/lexers/automation.py pygments/lexers/basic.py pygments/lexers/bibtex.py pygments/lexers/business.py pygments/lexers/c_cpp.py pygments/lexers/c_like.py pygments/lexers/capnproto.py pygments/lexers/chapel.py pygments/lexers/clean.py pygments/lexers/compiled.py pygments/lexers/configs.py pygments/lexers/console.py pygments/lexers/crystal.py pygments/lexers/csound.py pygments/lexers/css.py pygments/lexers/d.py pygments/lexers/dalvik.py pygments/lexers/data.py pygments/lexers/diff.py pygments/lexers/dotnet.py pygments/lexers/dsls.py pygments/lexers/dylan.py pygments/lexers/ecl.py pygments/lexers/eiffel.py pygments/lexers/elm.py pygments/lexers/erlang.py pygments/lexers/esoteric.py pygments/lexers/ezhil.py pygments/lexers/factor.py pygments/lexers/fantom.py pygments/lexers/felix.py pygments/lexers/forth.py pygments/lexers/fortran.py pygments/lexers/foxpro.py pygments/lexers/functional.py pygments/lexers/go.py pygments/lexers/grammar_notation.py pygments/lexers/graph.py pygments/lexers/graphics.py pygments/lexers/haskell.py pygments/lexers/haxe.py pygments/lexers/hdl.py pygments/lexers/hexdump.py pygments/lexers/html.py pygments/lexers/idl.py pygments/lexers/igor.py pygments/lexers/inferno.py pygments/lexers/installers.py pygments/lexers/int_fiction.py pygments/lexers/iolang.py pygments/lexers/j.py pygments/lexers/javascript.py pygments/lexers/julia.py pygments/lexers/jvm.py pygments/lexers/lisp.py pygments/lexers/make.py pygments/lexers/markup.py pygments/lexers/math.py pygments/lexers/matlab.py pygments/lexers/ml.py pygments/lexers/modeling.py pygments/lexers/modula2.py pygments/lexers/monte.py pygments/lexers/ncl.py pygments/lexers/nimrod.py pygments/lexers/nit.py pygments/lexers/nix.py pygments/lexers/oberon.py pygments/lexers/objective.py pygments/lexers/ooc.py pygments/lexers/other.py pygments/lexers/parasail.py pygments/lexers/parsers.py pygments/lexers/pascal.py pygments/lexers/pawn.py pygments/lexers/perl.py pygments/lexers/php.py pygments/lexers/praat.py pygments/lexers/prolog.py pygments/lexers/python.py pygments/lexers/qvt.py pygments/lexers/r.py pygments/lexers/rdf.py pygments/lexers/rebol.py pygments/lexers/resource.py pygments/lexers/rnc.py pygments/lexers/roboconf.py pygments/lexers/robotframework.py pygments/lexers/ruby.py pygments/lexers/rust.py pygments/lexers/sas.py pygments/lexers/scripting.py pygments/lexers/shell.py pygments/lexers/smalltalk.py pygments/lexers/smv.py pygments/lexers/snobol.py pygments/lexers/special.py pygments/lexers/sql.py pygments/lexers/stata.py pygments/lexers/supercollider.py pygments/lexers/tcl.py pygments/lexers/templates.py pygments/lexers/testing.py pygments/lexers/text.py pygments/lexers/textedit.py pygments/lexers/textfmts.py pygments/lexers/theorem.py pygments/lexers/trafficscript.py pygments/lexers/typoscript.py pygments/lexers/urbi.py pygments/lexers/varnish.py pygments/lexers/verification.py pygments/lexers/web.py pygments/lexers/webmisc.py pygments/lexers/whiley.py pygments/lexers/x10.py pygments/lexers/xorg.py pygments/styles/__init__.py 
pygments/styles/abap.py pygments/styles/algol.py pygments/styles/algol_nu.py pygments/styles/arduino.py pygments/styles/autumn.py pygments/styles/borland.py pygments/styles/bw.py pygments/styles/colorful.py pygments/styles/default.py pygments/styles/emacs.py pygments/styles/friendly.py pygments/styles/fruity.py pygments/styles/igor.py pygments/styles/lovelace.py pygments/styles/manni.py pygments/styles/monokai.py pygments/styles/murphy.py pygments/styles/native.py pygments/styles/paraiso_dark.py pygments/styles/paraiso_light.py pygments/styles/pastie.py pygments/styles/perldoc.py pygments/styles/rainbow_dash.py pygments/styles/rrt.py pygments/styles/sas.py pygments/styles/stata.py pygments/styles/tango.py pygments/styles/trac.py pygments/styles/vim.py pygments/styles/vs.py pygments/styles/xcode.py scripts/check_sources.py scripts/debug_lexer.py scripts/detect_missing_analyse_text.py scripts/epydoc.css scripts/find_error.py scripts/get_vimkw.py scripts/pylintrc scripts/release-checklist scripts/vim2pygments.py tests/run.py tests/string_asserts.py tests/support.py tests/test_basic_api.py tests/test_bibtex.py tests/test_cfm.py tests/test_clexer.py tests/test_cmdline.py tests/test_cpp.py tests/test_crystal.py tests/test_csound.py tests/test_data.py tests/test_examplefiles.py tests/test_ezhil.py tests/test_html_formatter.py tests/test_inherit.py tests/test_irc_formatter.py tests/test_java.py tests/test_javascript.py tests/test_julia.py tests/test_latex_formatter.py tests/test_lexers_other.py tests/test_markdown_lexer.py tests/test_modeline.py tests/test_objectiveclexer.py tests/test_perllexer.py tests/test_php.py tests/test_praat.py tests/test_properties.py tests/test_python.py tests/test_qbasiclexer.py tests/test_r.py tests/test_regexlexer.py tests/test_regexopt.py tests/test_rtf_formatter.py tests/test_ruby.py tests/test_shell.py tests/test_smarty.py tests/test_sql.py tests/test_string_asserts.py tests/test_terminal_formatter.py tests/test_textfmts.py tests/test_token.py tests/test_unistring.py tests/test_using_api.py tests/test_util.py tests/test_whiley.py tests/dtds/HTML4-f.dtd tests/dtds/HTML4-s.dtd tests/dtds/HTML4.dcl tests/dtds/HTML4.dtd tests/dtds/HTML4.soc tests/dtds/HTMLlat1.ent tests/dtds/HTMLspec.ent tests/dtds/HTMLsym.ent tests/examplefiles/99_bottles_of_beer.chpl tests/examplefiles/AcidStateAdvanced.hs tests/examplefiles/AlternatingGroup.mu tests/examplefiles/BOM.js tests/examplefiles/Blink.ino tests/examplefiles/CPDictionary.j tests/examplefiles/Config.in.cache tests/examplefiles/Constants.mo tests/examplefiles/DancingSudoku.lhs tests/examplefiles/Deflate.fs tests/examplefiles/Error.pmod tests/examplefiles/Errors.scala tests/examplefiles/FakeFile.pike tests/examplefiles/Get-CommandDefinitionHtml.ps1 tests/examplefiles/IPDispatchC.nc tests/examplefiles/IPDispatchP.nc tests/examplefiles/Intro.java tests/examplefiles/Makefile tests/examplefiles/Object.st tests/examplefiles/OrderedMap.hx tests/examplefiles/RoleQ.pm6 tests/examplefiles/SmallCheck.hs tests/examplefiles/Sorting.mod tests/examplefiles/StdGeneric.icl tests/examplefiles/Sudoku.lhs tests/examplefiles/abnf_example1.abnf tests/examplefiles/abnf_example2.abnf tests/examplefiles/addressbook.proto tests/examplefiles/ahcon.f tests/examplefiles/all.nit tests/examplefiles/antlr_ANTLRv3.g tests/examplefiles/antlr_throws tests/examplefiles/apache2.conf tests/examplefiles/as3_test.as tests/examplefiles/as3_test2.as tests/examplefiles/as3_test3.as tests/examplefiles/aspx-cs_example tests/examplefiles/autoit_submit.au3 
tests/examplefiles/automake.mk tests/examplefiles/badcase.java tests/examplefiles/bigtest.nsi tests/examplefiles/bnf_example1.bnf tests/examplefiles/boot-9.scm tests/examplefiles/ca65_example tests/examplefiles/capdl_example.cdl tests/examplefiles/cbmbas_example tests/examplefiles/cells.ps tests/examplefiles/ceval.c tests/examplefiles/char.scala tests/examplefiles/cheetah_example.html tests/examplefiles/classes.dylan tests/examplefiles/clojure-weird-keywords.clj tests/examplefiles/condensed_ruby.rb tests/examplefiles/coq_RelationClasses tests/examplefiles/core.cljs tests/examplefiles/database.pytb tests/examplefiles/de.MoinMoin.po tests/examplefiles/demo.ahk tests/examplefiles/demo.cfm tests/examplefiles/demo.css.in tests/examplefiles/demo.frt tests/examplefiles/demo.hbs tests/examplefiles/demo.js.in tests/examplefiles/demo.thrift tests/examplefiles/demo.xul.in tests/examplefiles/django_sample.html+django tests/examplefiles/docker.docker tests/examplefiles/durexmania.aheui tests/examplefiles/dwarf.cw tests/examplefiles/eg_example1.eg tests/examplefiles/ember.handlebars tests/examplefiles/erl_session tests/examplefiles/es6.js tests/examplefiles/escape_semicolon.clj tests/examplefiles/eval.rs tests/examplefiles/evil_regex.js tests/examplefiles/example.Rd tests/examplefiles/example.als tests/examplefiles/example.bat tests/examplefiles/example.bc tests/examplefiles/example.bug tests/examplefiles/example.c tests/examplefiles/example.ceylon tests/examplefiles/example.chai tests/examplefiles/example.clay tests/examplefiles/example.cls tests/examplefiles/example.cob tests/examplefiles/example.coffee tests/examplefiles/example.cpp tests/examplefiles/example.e tests/examplefiles/example.elm tests/examplefiles/example.ezt tests/examplefiles/example.f90 tests/examplefiles/example.feature tests/examplefiles/example.fish tests/examplefiles/example.gd tests/examplefiles/example.gi tests/examplefiles/example.golo tests/examplefiles/example.groovy tests/examplefiles/example.gs tests/examplefiles/example.gst tests/examplefiles/example.hlsl tests/examplefiles/example.hs tests/examplefiles/example.hx tests/examplefiles/example.i6t tests/examplefiles/example.i7x tests/examplefiles/example.j tests/examplefiles/example.jag tests/examplefiles/example.java tests/examplefiles/example.jcl tests/examplefiles/example.jsgf tests/examplefiles/example.jsonld tests/examplefiles/example.juttle tests/examplefiles/example.kal tests/examplefiles/example.kt tests/examplefiles/example.lagda tests/examplefiles/example.liquid tests/examplefiles/example.lua tests/examplefiles/example.ma tests/examplefiles/example.mac tests/examplefiles/example.md tests/examplefiles/example.monkey tests/examplefiles/example.moo tests/examplefiles/example.moon tests/examplefiles/example.mq4 tests/examplefiles/example.mqh tests/examplefiles/example.msc tests/examplefiles/example.ng2 tests/examplefiles/example.ni tests/examplefiles/example.nim tests/examplefiles/example.nix tests/examplefiles/example.ns2 tests/examplefiles/example.pas tests/examplefiles/example.pcmk tests/examplefiles/example.pp tests/examplefiles/example.praat tests/examplefiles/example.prg tests/examplefiles/example.rb tests/examplefiles/example.red tests/examplefiles/example.reds tests/examplefiles/example.reg tests/examplefiles/example.rexx tests/examplefiles/example.rhtml tests/examplefiles/example.rkt tests/examplefiles/example.rpf tests/examplefiles/example.rts tests/examplefiles/example.sbl tests/examplefiles/example.scd tests/examplefiles/example.sh 
tests/examplefiles/example.sh-session tests/examplefiles/example.shell-session tests/examplefiles/example.slim tests/examplefiles/example.sls tests/examplefiles/example.sml tests/examplefiles/example.snobol tests/examplefiles/example.stan tests/examplefiles/example.tap tests/examplefiles/example.tasm tests/examplefiles/example.tea tests/examplefiles/example.tf tests/examplefiles/example.thy tests/examplefiles/example.todotxt tests/examplefiles/example.ttl tests/examplefiles/example.u tests/examplefiles/example.weechatlog tests/examplefiles/example.whiley tests/examplefiles/example.x10 tests/examplefiles/example.xhtml tests/examplefiles/example.xtend tests/examplefiles/example.xtm tests/examplefiles/example.yaml tests/examplefiles/example1.cadl tests/examplefiles/example2.aspx tests/examplefiles/example2.cpp tests/examplefiles/example2.msc tests/examplefiles/exampleScript.cfc tests/examplefiles/exampleTag.cfc tests/examplefiles/example_coq.v tests/examplefiles/example_elixir.ex tests/examplefiles/example_file.fy tests/examplefiles/ezhil_primefactors.n tests/examplefiles/fennelview.fnl tests/examplefiles/fibonacci.tokigun.aheui tests/examplefiles/firefox.mak tests/examplefiles/flatline_example tests/examplefiles/flipflop.sv tests/examplefiles/foo.sce tests/examplefiles/format.ml tests/examplefiles/fucked_up.rb tests/examplefiles/function.mu tests/examplefiles/functional.rst tests/examplefiles/garcia-wachs.kk tests/examplefiles/genclass.clj tests/examplefiles/genshi_example.xml+genshi tests/examplefiles/genshitext_example.genshitext tests/examplefiles/glsl.frag tests/examplefiles/glsl.vert tests/examplefiles/grammar-test.p6 tests/examplefiles/guidance.smv tests/examplefiles/hash_syntax.rb tests/examplefiles/hello-world.puzzlet.aheui tests/examplefiles/hello.at tests/examplefiles/hello.golo tests/examplefiles/hello.lsl tests/examplefiles/hello.smali tests/examplefiles/hello.sp tests/examplefiles/hexdump_debugexe tests/examplefiles/hexdump_hd tests/examplefiles/hexdump_hexcat tests/examplefiles/hexdump_hexdump tests/examplefiles/hexdump_od tests/examplefiles/hexdump_xxd tests/examplefiles/html+php_faulty.php tests/examplefiles/http_request_example tests/examplefiles/http_response_example tests/examplefiles/hybris_File.hy tests/examplefiles/idl_sample.pro tests/examplefiles/iex_example tests/examplefiles/inet_pton6.dg tests/examplefiles/inform6_example tests/examplefiles/interp.scala tests/examplefiles/intro.ik tests/examplefiles/ints.php tests/examplefiles/intsyn.fun tests/examplefiles/intsyn.sig tests/examplefiles/irb_heredoc tests/examplefiles/irc.lsp tests/examplefiles/java.properties tests/examplefiles/jbst_example1.jbst tests/examplefiles/jbst_example2.jbst tests/examplefiles/jinjadesignerdoc.rst tests/examplefiles/json.lasso tests/examplefiles/json.lasso9 tests/examplefiles/language.hy tests/examplefiles/lighttpd_config.conf tests/examplefiles/limbo.b tests/examplefiles/linecontinuation.py tests/examplefiles/livescript-demo.ls tests/examplefiles/logos_example.xm tests/examplefiles/ltmain.sh tests/examplefiles/main.cmake tests/examplefiles/markdown.lsp tests/examplefiles/matlab_noreturn tests/examplefiles/matlab_sample tests/examplefiles/matlabsession_sample.txt tests/examplefiles/metagrammar.treetop tests/examplefiles/minehunt.qml tests/examplefiles/minimal.ns2 tests/examplefiles/modula2_test_cases.def tests/examplefiles/moin_SyntaxReference.txt tests/examplefiles/multiline_regexes.rb tests/examplefiles/nanomsg.intr tests/examplefiles/nasm_aoutso.asm tests/examplefiles/nasm_objexe.asm 
tests/examplefiles/nemerle_sample.n tests/examplefiles/nginx_nginx.conf tests/examplefiles/noexcept.cpp tests/examplefiles/numbers.c tests/examplefiles/objc_example.m tests/examplefiles/openedge_example tests/examplefiles/pacman.conf tests/examplefiles/pacman.ijs tests/examplefiles/pawn_example tests/examplefiles/perl_misc tests/examplefiles/perl_perl5db tests/examplefiles/perl_regex-delims tests/examplefiles/perlfunc.1 tests/examplefiles/phpMyAdmin.spec tests/examplefiles/phpcomplete.vim tests/examplefiles/pkgconfig_example.pc tests/examplefiles/plain.bst tests/examplefiles/pleac.in.rb tests/examplefiles/postgresql_test.txt tests/examplefiles/pppoe.applescript tests/examplefiles/psql_session.txt tests/examplefiles/py3_test.txt tests/examplefiles/py3tb_test.py3tb tests/examplefiles/pycon_ctrlc_traceback tests/examplefiles/pycon_test.pycon tests/examplefiles/pytb_test2.pytb tests/examplefiles/pytb_test3.pytb tests/examplefiles/python25-bsd.mak tests/examplefiles/qbasic_example tests/examplefiles/qsort.prolog tests/examplefiles/r-console-transcript.Rout tests/examplefiles/r6rs-comments.scm tests/examplefiles/ragel-cpp_rlscan tests/examplefiles/ragel-cpp_snippet tests/examplefiles/regex.js tests/examplefiles/resourcebundle_demo tests/examplefiles/reversi.lsp tests/examplefiles/rnc_example.rnc tests/examplefiles/roboconf.graph tests/examplefiles/roboconf.instances tests/examplefiles/robotframework_test.txt tests/examplefiles/rql-queries.rql tests/examplefiles/ruby_func_def.rb tests/examplefiles/sample.qvto tests/examplefiles/scilab.sci tests/examplefiles/scope.cirru tests/examplefiles/session.dylan-console tests/examplefiles/sibling.prolog tests/examplefiles/simple.camkes tests/examplefiles/simple.croc tests/examplefiles/smarty_example.html tests/examplefiles/source.lgt tests/examplefiles/sources.list tests/examplefiles/sparql.rq tests/examplefiles/sphere.pov tests/examplefiles/sqlite3.sqlite3-console tests/examplefiles/squid.conf tests/examplefiles/string.jl tests/examplefiles/string_delimiters.d tests/examplefiles/stripheredoc.sh tests/examplefiles/subr.el tests/examplefiles/swig_java.swg tests/examplefiles/swig_std_vector.i tests/examplefiles/tads3_example.t tests/examplefiles/termcap tests/examplefiles/terminfo tests/examplefiles/test-3.0.xq tests/examplefiles/test-exist-update.xq tests/examplefiles/test.R tests/examplefiles/test.adb tests/examplefiles/test.adls tests/examplefiles/test.agda tests/examplefiles/test.apl tests/examplefiles/test.asy tests/examplefiles/test.awk tests/examplefiles/test.bb tests/examplefiles/test.bib tests/examplefiles/test.bmx tests/examplefiles/test.boo tests/examplefiles/test.bpl tests/examplefiles/test.bro tests/examplefiles/test.cadl tests/examplefiles/test.cr tests/examplefiles/test.cs tests/examplefiles/test.csd tests/examplefiles/test.css tests/examplefiles/test.cu tests/examplefiles/test.cyp tests/examplefiles/test.d tests/examplefiles/test.dart tests/examplefiles/test.dtd tests/examplefiles/test.ebnf tests/examplefiles/test.ec tests/examplefiles/test.eh tests/examplefiles/test.erl tests/examplefiles/test.escript tests/examplefiles/test.evoque tests/examplefiles/test.fan tests/examplefiles/test.flx tests/examplefiles/test.gdc tests/examplefiles/test.gradle tests/examplefiles/test.groovy tests/examplefiles/test.hsail tests/examplefiles/test.html tests/examplefiles/test.idr tests/examplefiles/test.ini tests/examplefiles/test.java tests/examplefiles/test.jsp tests/examplefiles/test.lean tests/examplefiles/test.maql tests/examplefiles/test.mask 
tests/examplefiles/test.mod tests/examplefiles/test.moo tests/examplefiles/test.mt tests/examplefiles/test.myt tests/examplefiles/test.ncl tests/examplefiles/test.nim tests/examplefiles/test.odin tests/examplefiles/test.opa tests/examplefiles/test.orc tests/examplefiles/test.p6 tests/examplefiles/test.pan tests/examplefiles/test.pas tests/examplefiles/test.php tests/examplefiles/test.pig tests/examplefiles/test.plot tests/examplefiles/test.ps1 tests/examplefiles/test.psl tests/examplefiles/test.pwn tests/examplefiles/test.pypylog tests/examplefiles/test.r3 tests/examplefiles/test.rb tests/examplefiles/test.rhtml tests/examplefiles/test.rsl tests/examplefiles/test.scaml tests/examplefiles/test.sco tests/examplefiles/test.shen tests/examplefiles/test.sil tests/examplefiles/test.ssp tests/examplefiles/test.swift tests/examplefiles/test.tcsh tests/examplefiles/test.vb tests/examplefiles/test.vhdl tests/examplefiles/test.xqy tests/examplefiles/test.xsl tests/examplefiles/test.zep tests/examplefiles/test2.odin tests/examplefiles/test2.pypylog tests/examplefiles/test_basic.adls tests/examplefiles/truncated.pytb tests/examplefiles/tsql_example.sql tests/examplefiles/twig_test tests/examplefiles/type.lisp tests/examplefiles/typescript_example tests/examplefiles/typoscript_example tests/examplefiles/underscore.coffee tests/examplefiles/unicode.applescript tests/examplefiles/unicode.go tests/examplefiles/unicode.js tests/examplefiles/unicodedoc.py tests/examplefiles/unix-io.lid tests/examplefiles/varnish.vcl tests/examplefiles/vbnet_test.bas tests/examplefiles/vctreestatus_hg tests/examplefiles/vimrc tests/examplefiles/vpath.mk tests/examplefiles/wdiff_example1.wdiff tests/examplefiles/wdiff_example3.wdiff tests/examplefiles/webkit-transition.css tests/examplefiles/while.pov tests/examplefiles/wiki.factor tests/examplefiles/xml_example tests/examplefiles/xorg.conf tests/examplefiles/yahalom.cpsa tests/examplefiles/zmlrpc.f90 tests/support/empty.py tests/support/html_formatter.py tests/support/python_lexer.py tests/support/tagsPygments-2.3.1/Pygments.egg-info/entry_points.txt0000644000175000017500000000006613405476652021213 0ustar piotrpiotr[console_scripts] pygmentize = pygments.cmdline:main Pygments-2.3.1/Pygments.egg-info/not-zip-safe0000644000175000017500000000000113376277601020141 0ustar piotrpiotr Pygments-2.3.1/Pygments.egg-info/top_level.txt0000644000175000017500000000001113405476652020435 0ustar piotrpiotrpygments Pygments-2.3.1/Pygments.egg-info/dependency_links.txt0000644000175000017500000000000113405476652021761 0ustar piotrpiotr Pygments-2.3.1/Pygments.egg-info/PKG-INFO0000644000175000017500000000326013405476652017011 0ustar piotrpiotrMetadata-Version: 1.1 Name: Pygments Version: 2.3.1 Summary: Pygments is a syntax highlighting package written in Python. Home-page: http://pygments.org/ Author: Georg Brandl Author-email: georg@python.org License: BSD License Description: Pygments ~~~~~~~~ Pygments is a syntax highlighting package written in Python. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. 

Highlights are:

* a wide range of over 300 languages and other text formats is supported
* special attention is paid to details, increasing quality by a fair amount
* support for new languages and formats is added easily
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
  formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library

:copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.

Keywords: syntax highlighting
Platform: any
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: End Users/Desktop
Classifier: Intended Audience :: System Administrators
Classifier: Development Status :: 6 - Mature
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Classifier: Topic :: Text Processing :: Filters
Classifier: Topic :: Utilities

Pygments-2.3.1/TODO

Todo
====

- lexers that need work:

  * review perl lexer (numerous bugs, but so far no one had complaints ;)
  * readd property support for C# lexer? that is, find a regex that doesn't
    backtrack to death...
  * add support for function name highlighting to C++ lexer

- allow "overlay" token types to highlight specials: nth line, a word etc.

- pygmentize option presets, more sophisticated method to output styles?

Pygments-2.3.1/pygmentize

#!/usr/bin/env python2
import sys
import pygments.cmdline
try:
    sys.exit(pygments.cmdline.main(sys.argv))
except KeyboardInterrupt:
    sys.exit(1)

Pygments-2.3.1/Makefile

#
# Makefile for Pygments
# ~~~~~~~~~~~~~~~~~~~~~
#
# Combines scripts for common tasks.
#
# :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
# :license: BSD, see LICENSE for details.
#

PYTHON ?= python

export PYTHONPATH = $(shell echo "$$PYTHONPATH"):$(shell python -c 'import os; print ":".join(os.path.abspath(line.strip()) for line in file("PYTHONPATH"))' 2>/dev/null)

.PHONY: all check clean clean-pyc codetags docs mapfiles \
	pylint reindent test test-coverage

all: clean-pyc check test

check:
	@$(PYTHON) scripts/detect_missing_analyse_text.py || true
	@pyflakes pygments | grep -v 'but unused' || true
	@$(PYTHON) scripts/check_sources.py -i build -i dist -i pygments/lexers/_mapping.py \
		-i docs/build -i pygments/formatters/_mapping.py -i pygments/unistring.py

clean: clean-pyc
	-rm -rf build
	-rm -f codetags.html

clean-pyc:
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '*~' -exec rm -f {} +

codetags:
	@$(PYTHON) scripts/find_codetags.py -i tests/examplefiles -i scripts/pylintrc \
		-i scripts/find_codetags.py -o codetags.html .

docs:
	make -C doc html

mapfiles:
	(cd pygments/formatters; $(PYTHON) _mapping.py)
	(cd pygments/lexers; $(PYTHON) _mapping.py)

pylint:
	@pylint --rcfile scripts/pylintrc pygments

reindent:
	@$(PYTHON) scripts/reindent.py -r -B .

test:
	@$(PYTHON) tests/run.py -d $(TEST)

test-coverage:
	@$(PYTHON) tests/run.py -d --with-coverage --cover-package=pygments --cover-erase $(TEST)

test-examplefiles:
	nosetests tests/test_examplefiles.py

tox-test:
	@tox -- $(TEST)

tox-test-coverage:
	@tox -- --with-coverage --cover-package=pygments --cover-erase $(TEST)

RLMODULES = pygments.lexers

regexlint:
	@if [ -z "$(REGEXLINT)" ]; then echo "Please set REGEXLINT=checkout path"; exit 1; fi
	PYTHONPATH=`pwd`:$(REGEXLINT) $(REGEXLINT)/regexlint/cmdline.py $(RLMODULES)

Pygments-2.3.1/CHANGES

Pygments changelog
==================

Issue numbers refer to the tracker at , pull request numbers to the
requests at .

Version 2.3.1
-------------
(released Dec 16, 2018)

- Updated lexers:

  * ASM (PR#784)
  * Chapel (PR#735)
  * Clean (PR#621)
  * CSound (PR#684)
  * Elm (PR#744)
  * Fortran (PR#747)
  * GLSL (PR#740)
  * Haskell (PR#745)
  * Hy (PR#754)
  * Igor Pro (PR#764)
  * PowerShell (PR#705)
  * Python (PR#720, #1299, PR#715)
  * SLexer (PR#680)
  * YAML (PR#762, PR#724)

- Fix invalid string escape sequences
- Fix `FutureWarning` introduced by regex changes in Python 3.7

Version 2.3.0
-------------
(released Nov 25, 2018)

- Added lexers:

  * Fennel (PR#783)
  * HLSL (PR#675)

- Updated lexers:

  * Dockerfile (PR#714)

- Minimum Python versions changed to 2.7 and 3.5
- Added support for Python 3.7 generator changes (PR#772)
- Fix incorrect token type in SCSS for single-quote strings (#1322)
- Use `terminal256` formatter if `TERM` contains `256` (PR#666)
- Fix incorrect handling of GitHub style fences in Markdown (PR#741, #1389)
- Fix `%a` not being highlighted in Python3 strings (PR#727)

Version 2.2.0
-------------
(released Jan 22, 2017)

- Added lexers:

  * AMPL
  * TypoScript (#1173)
  * Varnish config (PR#554)
  * Clean (PR#503)
  * WDiff (PR#513)
  * Flatline (PR#551)
  * Silver (PR#537)
  * HSAIL (PR#518)
  * JSGF (PR#546)
  * NCAR command language (PR#536)
  * Extempore (PR#530)
  * Cap'n Proto (PR#595)
  * Whiley (PR#573)
  * Monte (PR#592)
  * Crystal (PR#576)
  * Snowball (PR#589)
  * CapDL (PR#579)
  * NuSMV (PR#564)
  * SAS, Stata (PR#593)

- Added the ability to load lexer and formatter classes directly from files
  with the `-x` command line option and the `lexers.load_lexer_from_file()`
  and `formatters.load_formatter_from_file()` functions. (PR#559)
- Added `lexers.find_lexer_class_by_name()`. (#1203)
- Added new token types and lexing for magic methods and variables in Python
  and PHP.
- Added a new token type for string affixes and lexing for them in Python,
  C++ and Postgresql lexers.
- Added a new token type for heredoc (and similar) string delimiters and
  lexing for them in C++, Perl, PHP, Postgresql and Ruby lexers.
- Styles can now define colors with ANSI colors for use in the 256-color
  terminal formatter. (PR#531)
- Improved the CSS lexer. (#1083, #1130)
- Added "Rainbow Dash" style. (PR#623)
- Delay loading `pkg_resources`, which takes a long while to import.
(PR#690) Version 2.1.3 ------------- (released Mar 2, 2016) - Fixed regression in Bash lexer (PR#563) Version 2.1.2 ------------- (released Feb 29, 2016) - Fixed Python 3 regression in image formatter (#1215) - Fixed regression in Bash lexer (PR#562) Version 2.1.1 ------------- (relased Feb 14, 2016) - Fixed Jython compatibility (#1205) - Fixed HTML formatter output with leading empty lines (#1111) - Added a mapping table for LaTeX encodings and added utf8 (#1152) - Fixed image formatter font searching on Macs (#1188) - Fixed deepcopy-ing of Token instances (#1168) - Fixed Julia string interpolation (#1170) - Fixed statefulness of HttpLexer between get_tokens calls - Many smaller fixes to various lexers Version 2.1 ----------- (released Jan 17, 2016) - Added lexers: * Emacs Lisp (PR#431) * Arduino (PR#442) * Modula-2 with multi-dialect support (#1090) * Fortran fixed format (PR#213) * Archetype Definition language (PR#483) * Terraform (PR#432) * Jcl, Easytrieve (PR#208) * ParaSail (PR#381) * Boogie (PR#420) * Turtle (PR#425) * Fish Shell (PR#422) * Roboconf (PR#449) * Test Anything Protocol (PR#428) * Shen (PR#385) * Component Pascal (PR#437) * SuperCollider (PR#472) * Shell consoles (Tcsh, PowerShell, MSDOS) (PR#479) * Elm and J (PR#452) * Crmsh (PR#440) * Praat (PR#492) * CSound (PR#494) * Ezhil (PR#443) * Thrift (PR#469) * QVT Operational (PR#204) * Hexdump (PR#508) * CAmkES Configuration (PR#462) - Added styles: * Lovelace (PR#456) * Algol and Algol-nu (#1090) - Added formatters: * IRC (PR#458) * True color (24-bit) terminal ANSI sequences (#1142) (formatter alias: "16m") - New "filename" option for HTML formatter (PR#527). - Improved performance of the HTML formatter for long lines (PR#504). - Updated autopygmentize script (PR#445). - Fixed style inheritance for non-standard token types in HTML output. - Added support for async/await to Python 3 lexer. - Rewrote linenos option for TerminalFormatter (it's better, but slightly different output than before) (#1147). - Javascript lexer now supports most of ES6 (#1100). - Cocoa builtins updated for iOS 8.1 (PR#433). - Combined BashSessionLexer and ShellSessionLexer, new version should support the prompt styles of either. - Added option to pygmentize to show a full traceback on exceptions. - Fixed incomplete output on Windows and Python 3 (e.g. when using iPython Notebook) (#1153). - Allowed more traceback styles in Python console lexer (PR#253). - Added decorators to TypeScript (PR#509). - Fix highlighting of certain IRC logs formats (#1076). Version 2.0.2 ------------- (released Jan 20, 2015) - Fix Python tracebacks getting duplicated in the console lexer (#1068). - Backquote-delimited identifiers are now recognized in F# (#1062). Version 2.0.1 ------------- (released Nov 10, 2014) - Fix an encoding issue when using ``pygmentize`` with the ``-o`` option. Version 2.0 ----------- (released Nov 9, 2014) - Default lexer encoding is now "guess", i.e. UTF-8 / Locale / Latin1 is tried in that order. - Major update to Swift lexer (PR#410). - Multiple fixes to lexer guessing in conflicting cases: * recognize HTML5 by doctype * recognize XML by XML declaration * don't recognize C/C++ as SystemVerilog - Simplified regexes and builtin lists. Version 2.0rc1 -------------- (released Oct 16, 2014) - Dropped Python 2.4 and 2.5 compatibility. This is in favor of single-source compatibility between Python 2.6, 2.7 and 3.3+. - New website and documentation based on Sphinx (finally!) 
- Lexers added: * APL (#969) * Agda and Literate Agda (PR#203) * Alloy (PR#355) * AmbientTalk * BlitzBasic (PR#197) * ChaiScript (PR#24) * Chapel (PR#256) * Cirru (PR#275) * Clay (PR#184) * ColdFusion CFC (PR#283) * Cryptol and Literate Cryptol (PR#344) * Cypher (PR#257) * Docker config files * EBNF (PR#193) * Eiffel (PR#273) * GAP (PR#311) * Golo (PR#309) * Handlebars (PR#186) * Hy (PR#238) * Idris and Literate Idris (PR#210) * Igor Pro (PR#172) * Inform 6/7 (PR#281) * Intel objdump (PR#279) * Isabelle (PR#386) * Jasmin (PR#349) * JSON-LD (PR#289) * Kal (PR#233) * Lean (PR#399) * LSL (PR#296) * Limbo (PR#291) * Liquid (#977) * MQL (PR#285) * MaskJS (PR#280) * Mozilla preprocessors * Mathematica (PR#245) * NesC (PR#166) * Nit (PR#375) * Nix (PR#267) * Pan * Pawn (PR#211) * Perl 6 (PR#181) * Pig (PR#304) * Pike (PR#237) * QBasic (PR#182) * Red (PR#341) * ResourceBundle (#1038) * Rexx (PR#199) * Rql (PR#251) * Rsl * SPARQL (PR#78) * Slim (PR#366) * Swift (PR#371) * Swig (PR#168) * TADS 3 (PR#407) * Todo.txt todo lists * Twig (PR#404) - Added a helper to "optimize" regular expressions that match one of many literal words; this can save 20% and more lexing time with lexers that highlight many keywords or builtins. - New styles: "xcode" and "igor", similar to the default highlighting of the respective IDEs. - The command-line "pygmentize" tool now tries a little harder to find the correct encoding for files and the terminal (#979). - Added "inencoding" option for lexers to override "encoding" analogous to "outencoding" (#800). - Added line-by-line "streaming" mode for pygmentize with the "-s" option. (PR#165) Only fully works for lexers that have no constructs spanning lines! - Added an "envname" option to the LaTeX formatter to select a replacement verbatim environment (PR#235). - Updated the Makefile lexer to yield a little more useful highlighting. - Lexer aliases passed to ``get_lexer_by_name()`` are now case-insensitive. - File name matching in lexers and formatters will now use a regex cache for speed (PR#205). - Pygments will now recognize "vim" modelines when guessing the lexer for a file based on content (PR#118). - Major restructure of the ``pygments.lexers`` module namespace. There are now many more modules with less lexers per module. Old modules are still around and re-export the lexers they previously contained. - The NameHighlightFilter now works with any Name.* token type (#790). - Python 3 lexer: add new exceptions from PEP 3151. - Opa lexer: add new keywords (PR#170). - Julia lexer: add keywords and underscore-separated number literals (PR#176). - Lasso lexer: fix method highlighting, update builtins. Fix guessing so that plain XML isn't always taken as Lasso (PR#163). - Objective C/C++ lexers: allow "@" prefixing any expression (#871). - Ruby lexer: fix lexing of Name::Space tokens (#860) and of symbols in hashes (#873). - Stan lexer: update for version 2.4.0 of the language (PR#162, PR#255, PR#377). - JavaScript lexer: add the "yield" keyword (PR#196). - HTTP lexer: support for PATCH method (PR#190). - Koka lexer: update to newest language spec (PR#201). - Haxe lexer: rewrite and support for Haxe 3 (PR#174). - Prolog lexer: add different kinds of numeric literals (#864). - F# lexer: rewrite with newest spec for F# 3.0 (#842), fix a bug with dotted chains (#948). - Kotlin lexer: general update (PR#271). - Rebol lexer: fix comment detection and analyse_text (PR#261). - LLVM lexer: update keywords to v3.4 (PR#258). - PHP lexer: add new keywords and binary literals (PR#222). 
- external/markdown-processor.py updated to newest python-markdown (PR#221). - CSS lexer: some highlighting order fixes (PR#231). - Ceylon lexer: fix parsing of nested multiline comments (#915). - C family lexers: fix parsing of indented preprocessor directives (#944). - Rust lexer: update to 0.9 language version (PR#270, PR#388). - Elixir lexer: update to 0.15 language version (PR#392). - Fix swallowing incomplete tracebacks in Python console lexer (#874). Version 1.6 ----------- (released Feb 3, 2013) - Lexers added: * Dylan console (PR#149) * Logos (PR#150) * Shell sessions (PR#158) - Fix guessed lexers not receiving lexer options (#838). - Fix unquoted HTML attribute lexing in Opa (#841). - Fixes to the Dart lexer (PR#160). Version 1.6rc1 -------------- (released Jan 9, 2013) - Lexers added: * AspectJ (PR#90) * AutoIt (PR#122) * BUGS-like languages (PR#89) * Ceylon (PR#86) * Croc (new name for MiniD) * CUDA (PR#75) * Dg (PR#116) * IDL (PR#115) * Jags (PR#89) * Julia (PR#61) * Kconfig (#711) * Lasso (PR#95, PR#113) * LiveScript (PR#84) * Monkey (PR#117) * Mscgen (PR#80) * NSIS scripts (PR#136) * OpenCOBOL (PR#72) * QML (PR#123) * Puppet (PR#133) * Racket (PR#94) * Rdoc (PR#99) * Robot Framework (PR#137) * RPM spec files (PR#124) * Rust (PR#67) * Smali (Dalvik assembly) * SourcePawn (PR#39) * Stan (PR#89) * Treetop (PR#125) * TypeScript (PR#114) * VGL (PR#12) * Visual FoxPro (#762) * Windows Registry (#819) * Xtend (PR#68) - The HTML formatter now supports linking to tags using CTags files, when the python-ctags package is installed (PR#87). - The HTML formatter now has a "linespans" option that wraps every line in a tag with a specific id (PR#82). - When deriving a lexer from another lexer with token definitions, definitions for states not in the child lexer are now inherited. If you override a state in the child lexer, an "inherit" keyword has been added to insert the base state at that position (PR#141). - The C family lexers now inherit token definitions from a common base class, removing code duplication (PR#141). - Use "colorama" on Windows for console color output (PR#142). - Fix Template Haskell highlighting (PR#63). - Fix some S/R lexer errors (PR#91). - Fix a bug in the Prolog lexer with names that start with 'is' (#810). - Rewrite Dylan lexer, add Dylan LID lexer (PR#147). - Add a Java quickstart document (PR#146). - Add a "external/autopygmentize" file that can be used as .lessfilter (#802). Version 1.5 ----------- (codename Zeitdilatation, released Mar 10, 2012) - Lexers added: * Awk (#630) * Fancy (#633) * PyPy Log * eC * Nimrod * Nemerle (#667) * F# (#353) * Groovy (#501) * PostgreSQL (#660) * DTD * Gosu (#634) * Octave (PR#22) * Standard ML (PR#14) * CFengine3 (#601) * Opa (PR#37) * HTTP sessions (PR#42) * JSON (PR#31) * SNOBOL (PR#30) * MoonScript (PR#43) * ECL (PR#29) * Urbiscript (PR#17) * OpenEdge ABL (PR#27) * SystemVerilog (PR#35) * Coq (#734) * PowerShell (#654) * Dart (#715) * Fantom (PR#36) * Bro (PR#5) * NewLISP (PR#26) * VHDL (PR#45) * Scilab (#740) * Elixir (PR#57) * Tea (PR#56) * Kotlin (PR#58) - Fix Python 3 terminal highlighting with pygmentize (#691). - In the LaTeX formatter, escape special &, < and > chars (#648). - In the LaTeX formatter, fix display problems for styles with token background colors (#670). - Enhancements to the Squid conf lexer (#664). - Several fixes to the reStructuredText lexer (#636). - Recognize methods in the ObjC lexer (#638). - Fix Lua "class" highlighting: it does not have classes (#665). 
- Fix degenerate regex in Scala lexer (#671) and highlighting bugs (#713, 708). - Fix number pattern order in Ocaml lexer (#647). - Fix generic type highlighting in ActionScript 3 (#666). - Fixes to the Clojure lexer (PR#9). - Fix degenerate regex in Nemerle lexer (#706). - Fix infinite looping in CoffeeScript lexer (#729). - Fix crashes and analysis with ObjectiveC lexer (#693, #696). - Add some Fortran 2003 keywords. - Fix Boo string regexes (#679). - Add "rrt" style (#727). - Fix infinite looping in Darcs Patch lexer. - Lots of misc fixes to character-eating bugs and ordering problems in many different lexers. Version 1.4 ----------- (codename Unschärfe, released Jan 03, 2011) - Lexers added: * Factor (#520) * PostScript (#486) * Verilog (#491) * BlitzMax Basic (#478) * Ioke (#465) * Java properties, split out of the INI lexer (#445) * Scss (#509) * Duel/JBST * XQuery (#617) * Mason (#615) * GoodData (#609) * SSP (#473) * Autohotkey (#417) * Google Protocol Buffers * Hybris (#506) - Do not fail in analyse_text methods (#618). - Performance improvements in the HTML formatter (#523). - With the ``noclasses`` option in the HTML formatter, some styles present in the stylesheet were not added as inline styles. - Four fixes to the Lua lexer (#480, #481, #482, #497). - More context-sensitive Gherkin lexer with support for more i18n translations. - Support new OO keywords in Matlab lexer (#521). - Small fix in the CoffeeScript lexer (#519). - A bugfix for backslashes in ocaml strings (#499). - Fix unicode/raw docstrings in the Python lexer (#489). - Allow PIL to work without PIL.pth (#502). - Allow seconds as a unit in CSS (#496). - Support ``application/javascript`` as a JavaScript mime type (#504). - Support `Offload `_ C++ Extensions as keywords in the C++ lexer (#484). - Escape more characters in LaTeX output (#505). - Update Haml/Sass lexers to version 3 (#509). - Small PHP lexer string escaping fix (#515). - Support comments before preprocessor directives, and unsigned/ long long literals in C/C++ (#613, #616). - Support line continuations in the INI lexer (#494). - Fix lexing of Dylan string and char literals (#628). - Fix class/procedure name highlighting in VB.NET lexer (#624). Version 1.3.1 ------------- (bugfix release, released Mar 05, 2010) - The ``pygmentize`` script was missing from the distribution. Version 1.3 ----------- (codename Schneeglöckchen, released Mar 01, 2010) - Added the ``ensurenl`` lexer option, which can be used to suppress the automatic addition of a newline to the lexer input. - Lexers added: * Ada * Coldfusion * Modula-2 * Haxe * R console * Objective-J * Haml and Sass * CoffeeScript - Enhanced reStructuredText highlighting. - Added support for PHP 5.3 namespaces in the PHP lexer. - Added a bash completion script for `pygmentize`, to the external/ directory (#466). - Fixed a bug in `do_insertions()` used for multi-lexer languages. - Fixed a Ruby regex highlighting bug (#476). - Fixed regex highlighting bugs in Perl lexer (#258). - Add small enhancements to the C lexer (#467) and Bash lexer (#469). - Small fixes for the Tcl, Debian control file, Nginx config, Smalltalk, Objective-C, Clojure, Lua lexers. - Gherkin lexer: Fixed single apostrophe bug and added new i18n keywords. Version 1.2.2 ------------- (bugfix release, released Jan 02, 2010) * Removed a backwards incompatibility in the LaTeX formatter that caused Sphinx to produce invalid commands when writing LaTeX output (#463). * Fixed a forever-backtracking regex in the BashLexer (#462). 
Version 1.2.1 ------------- (bugfix release, released Jan 02, 2010) * Fixed mishandling of an ellipsis in place of the frames in a Python console traceback, resulting in clobbered output. Version 1.2 ----------- (codename Neujahr, released Jan 01, 2010) - Dropped Python 2.3 compatibility. - Lexers added: * Asymptote * Go * Gherkin (Cucumber) * CMake * Ooc * Coldfusion * Haxe * R console - Added options for rendering LaTeX in source code comments in the LaTeX formatter (#461). - Updated the Logtalk lexer. - Added `line_number_start` option to image formatter (#456). - Added `hl_lines` and `hl_color` options to image formatter (#457). - Fixed the HtmlFormatter's handling of noclasses=True to not output any classes (#427). - Added the Monokai style (#453). - Fixed LLVM lexer identifier syntax and added new keywords (#442). - Fixed the PythonTracebackLexer to handle non-traceback data in header or trailer, and support more partial tracebacks that start on line 2 (#437). - Fixed the CLexer to not highlight ternary statements as labels. - Fixed lexing of some Ruby quoting peculiarities (#460). - A few ASM lexer fixes (#450). Version 1.1.1 ------------- (bugfix release, released Sep 15, 2009) - Fixed the BBCode lexer (#435). - Added support for new Jinja2 keywords. - Fixed test suite failures. - Added Gentoo-specific suffixes to Bash lexer. Version 1.1 ----------- (codename Brillouin, released Sep 11, 2009) - Ported Pygments to Python 3. This needed a few changes in the way encodings are handled; they may affect corner cases when used with Python 2 as well. - Lexers added: * Antlr/Ragel, thanks to Ana Nelson * (Ba)sh shell * Erlang shell * GLSL * Prolog * Evoque * Modelica * Rebol * MXML * Cython * ABAP * ASP.net (VB/C#) * Vala * Newspeak - Fixed the LaTeX formatter's output so that output generated for one style can be used with the style definitions of another (#384). - Added "anchorlinenos" and "noclobber_cssfile" (#396) options to HTML formatter. - Support multiline strings in Lua lexer. - Rewrite of the JavaScript lexer by Pumbaa80 to better support regular expression literals (#403). - When pygmentize is asked to highlight a file for which multiple lexers match the filename, use the analyse_text guessing engine to determine the winner (#355). - Fixed minor bugs in the JavaScript lexer (#383), the Matlab lexer (#378), the Scala lexer (#392), the INI lexer (#391), the Clojure lexer (#387) and the AS3 lexer (#389). - Fixed three Perl heredoc lexing bugs (#379, #400, #422). - Fixed a bug in the image formatter which misdetected lines (#380). - Fixed bugs lexing extended Ruby strings and regexes. - Fixed a bug when lexing git diffs. - Fixed a bug lexing the empty commit in the PHP lexer (#405). - Fixed a bug causing Python numbers to be mishighlighted as floats (#397). - Fixed a bug when backslashes are used in odd locations in Python (#395). - Fixed various bugs in Matlab and S-Plus lexers, thanks to Winston Chang (#410, #411, #413, #414) and fmarc (#419). - Fixed a bug in Haskell single-line comment detection (#426). - Added new-style reStructuredText directive for docutils 0.5+ (#428). Version 1.0 ----------- (codename Dreiundzwanzig, released Nov 23, 2008) - Don't use join(splitlines()) when converting newlines to ``\n``, because that doesn't keep all newlines at the end when the ``stripnl`` lexer option is False. - Added ``-N`` option to command-line interface to get a lexer name for a given filename. - Added Tango style, written by Andre Roberge for the Crunchy project. 
- Added Python3TracebackLexer and ``python3`` option to PythonConsoleLexer. - Fixed a few bugs in the Haskell lexer. - Fixed PythonTracebackLexer to be able to recognize SyntaxError and KeyboardInterrupt (#360). - Provide one formatter class per image format, so that surprises like:: pygmentize -f gif -o foo.gif foo.py creating a PNG file are avoided. - Actually use the `font_size` option of the image formatter. - Fixed numpy lexer that it doesn't listen for `*.py` any longer. - Fixed HTML formatter so that text options can be Unicode strings (#371). - Unified Diff lexer supports the "udiff" alias now. - Fixed a few issues in Scala lexer (#367). - RubyConsoleLexer now supports simple prompt mode (#363). - JavascriptLexer is smarter about what constitutes a regex (#356). - Add Applescript lexer, thanks to Andreas Amann (#330). - Make the codetags more strict about matching words (#368). - NginxConfLexer is a little more accurate on mimetypes and variables (#370). Version 0.11.1 -------------- (released Aug 24, 2008) - Fixed a Jython compatibility issue in pygments.unistring (#358). Version 0.11 ------------ (codename Straußenei, released Aug 23, 2008) Many thanks go to Tim Hatch for writing or integrating most of the bug fixes and new features. - Lexers added: * Nasm-style assembly language, thanks to delroth * YAML, thanks to Kirill Simonov * ActionScript 3, thanks to Pierre Bourdon * Cheetah/Spitfire templates, thanks to Matt Good * Lighttpd config files * Nginx config files * Gnuplot plotting scripts * Clojure * POV-Ray scene files * Sqlite3 interactive console sessions * Scala source files, thanks to Krzysiek Goj - Lexers improved: * C lexer highlights standard library functions now and supports C99 types. * Bash lexer now correctly highlights heredocs without preceding whitespace. * Vim lexer now highlights hex colors properly and knows a couple more keywords. * Irc logs lexer now handles xchat's default time format (#340) and correctly highlights lines ending in ``>``. * Support more delimiters for perl regular expressions (#258). * ObjectiveC lexer now supports 2.0 features. - Added "Visual Studio" style. - Updated markdown processor to Markdown 1.7. - Support roman/sans/mono style defs and use them in the LaTeX formatter. - The RawTokenFormatter is no longer registered to ``*.raw`` and it's documented that tokenization with this lexer may raise exceptions. - New option ``hl_lines`` to HTML formatter, to highlight certain lines. - New option ``prestyles`` to HTML formatter. - New option *-g* to pygmentize, to allow lexer guessing based on filetext (can be slowish, so file extensions are still checked first). - ``guess_lexer()`` now makes its decision much faster due to a cache of whether data is xml-like (a check which is used in several versions of ``analyse_text()``. Several lexers also have more accurate ``analyse_text()`` now. Version 0.10 ------------ (codename Malzeug, released May 06, 2008) - Lexers added: * Io * Smalltalk * Darcs patches * Tcl * Matlab * Matlab sessions * FORTRAN * XSLT * tcsh * NumPy * Python 3 * S, S-plus, R statistics languages * Logtalk - In the LatexFormatter, the *commandprefix* option is now by default 'PY' instead of 'C', since the latter resulted in several collisions with other packages. Also, the special meaning of the *arg* argument to ``get_style_defs()`` was removed. - Added ImageFormatter, to format code as PNG, JPG, GIF or BMP. (Needs the Python Imaging Library.) - Support doc comments in the PHP lexer. 

- Handle format specifications in the Perl lexer.

- Fix comment handling in the Batch lexer.

- Add more file name extensions for the C++, INI and XML lexers.

- Fixes in the IRC and MuPad lexers.

- Fix function and interface name highlighting in the Java lexer.

- Fix at-rule handling in the CSS lexer.

- Handle KeyboardInterrupts gracefully in pygmentize.

- Added BlackWhiteStyle.

- Bash lexer now correctly highlights math, does not require
  whitespace after semicolons, and correctly highlights boolean
  operators.

- Makefile lexer is now capable of handling BSD and GNU make syntax.


Version 0.9
-----------
(codename Herbstzeitlose, released Oct 14, 2007)

- Lexers added:

  * Erlang
  * ActionScript
  * Literate Haskell
  * Common Lisp
  * Various assembly languages
  * Gettext catalogs
  * Squid configuration
  * Debian control files
  * MySQL-style SQL
  * MOOCode

- Lexers improved:

  * Greatly improved the Haskell and OCaml lexers.
  * Improved the Bash lexer's handling of nested constructs.
  * The C# and Java lexers exhibited abysmal performance with some
    input code; this should now be fixed.
  * The IRC logs lexer is now able to colorize weechat logs too.
  * The Lua lexer now recognizes multi-line comments.
  * Fixed bugs in the D and MiniD lexer.

- The encoding handling of the command line mode (pygmentize) was
  enhanced. You shouldn't get UnicodeErrors from it anymore if you
  don't give an encoding option.

- Added a ``-P`` option to the command line mode which can be used to
  give options whose values contain commas or equals signs.

- Added 256-color terminal formatter.

- Added an experimental SVG formatter.

- Added the ``lineanchors`` option to the HTML formatter, thanks to
  Ian Charnas for the idea.

- Gave the line numbers table a CSS class in the HTML formatter.

- Added a Vim 7-like style.


Version 0.8.1
-------------
(released Jun 27, 2007)

- Fixed POD highlighting in the Ruby lexer.

- Fixed Unicode class and namespace name highlighting in the C# lexer.

- Fixed Unicode string prefix highlighting in the Python lexer.

- Fixed a bug in the D and MiniD lexers.

- Fixed the included MoinMoin parser.


Version 0.8
-----------
(codename Maikäfer, released May 30, 2007)

- Lexers added:

  * Haskell, thanks to Adam Blinkinsop
  * Redcode, thanks to Adam Blinkinsop
  * D, thanks to Kirk McDonald
  * MuPad, thanks to Christopher Creutzig
  * MiniD, thanks to Jarrett Billingsley
  * Vim Script, by Tim Hatch

- The HTML formatter now has a second line-numbers mode in which it
  will just integrate the numbers in the same ``<pre>`` tag as the
  code.

- The `CSharpLexer` now is Unicode-aware, which means that it has an
  option that can be set so that it correctly lexes Unicode
  identifiers allowed by the C# specs.

- Added a `RaiseOnErrorTokenFilter` that raises an exception when the
  lexer generates an error token, and a `VisibleWhitespaceFilter` that
  converts whitespace (spaces, tabs, newlines) into visible
  characters.
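
  A minimal sketch of how such filters can be attached (illustrative
  only; assumes the standard ``add_filter()`` API and current filter
  option names)::

      from pygments.lexers import PythonLexer
      from pygments.filters import VisibleWhitespaceFilter

      lexer = PythonLexer()
      # render spaces and tabs as visible characters in the token stream
      lexer.add_filter(VisibleWhitespaceFilter(spaces=True, tabs=True))
      # or abort on broken input by raising on error tokens
      lexer.add_filter('raiseonerror')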

- Fixed the `do_insertions()` helper function to yield correct
  indices.

- The ReST lexer now automatically highlights source code blocks in
  ".. sourcecode:: language" and ".. code:: language" directive
  blocks.

- Improved the default style (thanks to Tiberius Teng). The old
  default is still available as the "emacs" style (which was an alias
  before).

- The `get_style_defs` method of HTML formatters now uses the
  `cssclass` option as the default selector if it was given.
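
  For example (an illustrative sketch, not part of the original
  entry)::

      from pygments.formatters import HtmlFormatter

      # CSS rules are emitted under ".syntax" instead of the default selector
      print(HtmlFormatter(cssclass='syntax').get_style_defs())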

- Improved the ReST and Bash lexers a bit.

- Fixed a few bugs in the Makefile and Bash lexers, thanks to Tim
  Hatch.

- Fixed a bug in the command line code that disallowed ``-O`` options
  when using the ``-S`` option.

- Fixed a bug in the `RawTokenFormatter`.


Version 0.7.1
-------------
(released Feb 15, 2007)

- Fixed little highlighting bugs in the Python, Java, Scheme and
  Apache Config lexers.

- Updated the included manpage.

- Included a built version of the documentation in the source tarball.


Version 0.7
-----------
(codename Faschingskrapfn, released Feb 14, 2007)

- Added a MoinMoin parser that uses Pygments. With it, you get
  Pygments highlighting in Moin Wiki pages.

- Changed the exception raised if no suitable lexer, formatter etc. is
  found in one of the `get_*_by_*` functions to a custom exception,
  `pygments.util.ClassNotFound`. It is, however, a subclass of
  `ValueError` in order to retain backwards compatibility.
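
  A hedged usage sketch (the lexer name below is deliberately bogus)::

      from pygments.lexers import get_lexer_by_name
      from pygments.util import ClassNotFound

      try:
          lexer = get_lexer_by_name('no-such-language')
      except ClassNotFound:
          # older code that catches ValueError keeps working as well
          lexer = None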

- Added a `-H` command line option which can be used to get the
  docstring of a lexer, formatter or filter.
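
  For example (illustrative invocations)::

      pygmentize -H lexer python
      pygmentize -H formatter html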

- Made the handling of lexers and formatters more consistent. The
  aliases and filename patterns of formatters are now attributes on
  them.

- Added an OCaml lexer, thanks to Adam Blinkinsop.

- Made the HTML formatter more flexible, and easily subclassable in
  order to make it easy to implement custom wrappers, e.g. alternate
  line number markup. See the documentation.

- Added an `outencoding` option to all formatters, making it possible
  to override the `encoding` (which is used by lexers and formatters)
  when using the command line interface. Also, if using the terminal
  formatter and the output file is a terminal and has an encoding
  attribute, use it if no encoding is given.
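
  A small sketch of the intended behaviour (illustrative only)::

      from pygments.formatters import HtmlFormatter

      # `outencoding` wins over `encoding` for the emitted output
      fmt = HtmlFormatter(encoding='latin1', outencoding='utf-8')
      print(fmt.encoding)  # -> 'utf-8'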

- Made it possible to just drop style modules into the `styles`
  subpackage of the Pygments installation.

- Added a "state" keyword argument to the `using` helper.

- Added a `commandprefix` option to the `LatexFormatter`, which allows
  you to control how the command names are constructed.
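
  For example (an illustrative sketch; the prefix name is arbitrary)::

      from pygments.formatters import LatexFormatter

      # style commands are emitted as \MY... instead of the default prefix
      fmt = LatexFormatter(commandprefix='MY')
      print(fmt.get_style_defs())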

- Added quite a few new lexers, thanks to Tim Hatch:

  * Java Server Pages
  * Windows batch files
  * Trac Wiki markup
  * Python tracebacks
  * ReStructuredText
  * Dylan
  * and the Befunge esoteric programming language (yay!)

- Added Mako lexers by Ben Bangert.

- Added "fruity" style, another dark background originally vim-based
  theme.

- Added sources.list lexer by Dennis Kaarsemaker.

- Added token stream filters, and a pygmentize option to use them.
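
  A minimal sketch of using a bundled filter by name (illustrative
  only; the option name follows the current filter documentation)::

      from pygments.lexers import PythonLexer

      lexer = PythonLexer()
      # upper-case all keywords in the token stream
      lexer.add_filter('keywordcase', case='upper')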

- Changed the behavior of the `in` operator for tokens.

- Added mimetypes for all lexers.

- Fixed some problems lexing Python strings.

- Fixed tickets: #167, #178, #179, #180, #185, #201.


Version 0.6
-----------
(codename Zimtstern, released Dec 20, 2006)

- Added option for the HTML formatter to write the CSS to an external
  file in "full document" mode.

- Added RTF formatter.

- Added Bash and Apache configuration lexers (thanks to Tim Hatch).

- Improved guessing methods for various lexers.

- Added `@media` support to CSS lexer (thanks to Tim Hatch).

- Added a Groff lexer (thanks to Tim Hatch).

- License change to BSD.

- Added lexers for the Myghty template language.

- Added a Scheme lexer (thanks to Marek Kubica).

- Added some functions to iterate over existing lexers, formatters and
  filters.

- The HtmlFormatter's `get_style_defs()` can now take a list as an
  argument to generate CSS with multiple prefixes.
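
  For example (illustrative only)::

      from pygments.formatters import HtmlFormatter

      # the same rules, prefixed with both selectors
      css = HtmlFormatter().get_style_defs(['.highlight', '.syntax'])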

- Support for guessing input encoding added.

- Encoding support added: all processing is now done with Unicode
  strings, input and output are converted from and optionally to byte
  strings (see the ``encoding`` option of lexers and formatters).
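
  A small sketch of the resulting round trip (illustrative only)::

      from pygments import highlight
      from pygments.lexers import PythonLexer
      from pygments.formatters import HtmlFormatter

      code = b'x = "caf\xe9"\n'   # latin-1 encoded input bytes
      html = highlight(code,
                       PythonLexer(encoding='latin1'),    # decode input
                       HtmlFormatter(encoding='utf-8'))   # encode output
      # ``html`` is a UTF-8 encoded byte string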

- Some improvements to the C(++) lexers' handling of comments and line
  continuations.


Version 0.5.1
-------------
(released Oct 30, 2006)

- Fixed traceback in ``pygmentize -L`` (thanks to Piotr Ozarowski).


Version 0.5
-----------
(codename PyKleur, released Oct 30, 2006)

- Initial public release.
Pygments-2.3.1/MANIFEST.in0000644000175000017500000000024213376260540014141 0ustar  piotrpiotrinclude pygmentize
include external/*
include Makefile CHANGES LICENSE AUTHORS TODO
recursive-include tests *
recursive-include doc *
recursive-include scripts *
Pygments-2.3.1/tests/0000755000175000017500000000000013405476653013556 5ustar  piotrpiotrPygments-2.3.1/tests/test_textfmts.py0000644000175000017500000000222013376260540017032 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic Tests for textfmts
    ~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Operator, Number, Text, Token
from pygments.lexers.textfmts import HttpLexer


class HttpLexerTest(unittest.TestCase):

    def setUp(self):
        self.lexer = HttpLexer()
        self.maxDiff = None

    def testApplicationXml(self):
        fragment = u'GET / HTTP/1.0\nContent-Type: application/xml\n\n<foo>\n'
        tokens = [
            (Token.Name.Tag, u'<foo'),
            (Token.Name.Tag, u'>'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(
            tokens, list(self.lexer.get_tokens(fragment))[-len(tokens):])

    def testApplicationCalendarXml(self):
        fragment = u'GET / HTTP/1.0\nContent-Type: application/calendar+xml\n\n<foo>\n'
        tokens = [
            (Token.Name.Tag, u'<foo'),
            (Token.Name.Tag, u'>'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(
            tokens, list(self.lexer.get_tokens(fragment))[-len(tokens):])

Pygments-2.3.1/tests/test_regexopt.py0000644000175000017500000000672613376260540017030 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Tests for pygments.regexopt
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import random
import unittest
import itertools

from pygments.regexopt import regex_opt

ALPHABET = ['a', 'b', 'c', 'd', 'e']

try:
    from itertools import combinations_with_replacement
    N_TRIES = 15
except ImportError:
    # Python 2.6
    def combinations_with_replacement(iterable, r):
        pool = tuple(iterable)
        n = len(pool)
        for indices in itertools.product(range(n), repeat=r):
            if sorted(indices) == list(indices):
                yield tuple(pool[i] for i in indices)
    N_TRIES = 9


class RegexOptTestCase(unittest.TestCase):

    def generate_keywordlist(self, length):
        return [''.join(p) for p in
                combinations_with_replacement(ALPHABET, length)]

    def test_randomly(self):
        # generate a list of all possible keywords of a certain length using
        # a restricted alphabet, then choose some to match and make sure only
        # those do
        for n in range(3, N_TRIES):
            kwlist = self.generate_keywordlist(n)
            to_match = random.sample(kwlist,
                                     random.randint(1, len(kwlist) - 1))
            no_match = set(kwlist) - set(to_match)
            rex = re.compile(regex_opt(to_match))
            self.assertEqual(rex.groups, 1)
            for w in to_match:
                self.assertTrue(rex.match(w))
            for w in no_match:
                self.assertFalse(rex.match(w))

    def test_prefix(self):
        opt = regex_opt(('a', 'b'), prefix=r':{1,2}')
        print(opt)
        rex = re.compile(opt)
        self.assertFalse(rex.match('a'))
        self.assertTrue(rex.match('::a'))
        self.assertFalse(rex.match(':::')) # fullmatch

    def test_suffix(self):
        opt = regex_opt(('a', 'b'), suffix=r':{1,2}')
        print(opt)
        rex = re.compile(opt)
        self.assertFalse(rex.match('a'))
        self.assertTrue(rex.match('a::'))
        self.assertFalse(rex.match(':::')) # fullmatch

    def test_suffix_opt(self):
        # test that detected suffixes remain sorted.
        opt = regex_opt(('afoo', 'abfoo'))
        print(opt)
        rex = re.compile(opt)
        m = rex.match('abfoo')
        self.assertEqual(5, m.end())

    def test_different_length_grouping(self):
        opt = regex_opt(('a', 'xyz'))
        print(opt)
        rex = re.compile(opt)
        self.assertTrue(rex.match('a'))
        self.assertTrue(rex.match('xyz'))
        self.assertFalse(rex.match('b'))
        self.assertEqual(1, rex.groups)

    def test_same_length_grouping(self):
        opt = regex_opt(('a', 'b'))
        print(opt)
        rex = re.compile(opt)
        self.assertTrue(rex.match('a'))
        self.assertTrue(rex.match('b'))
        self.assertFalse(rex.match('x'))

        self.assertEqual(1, rex.groups)
        groups = rex.match('a').groups()
        self.assertEqual(('a',), groups)

    def test_same_length_suffix_grouping(self):
        opt = regex_opt(('a', 'b'), suffix='(m)')
        print(opt)
        rex = re.compile(opt)
        self.assertTrue(rex.match('am'))
        self.assertTrue(rex.match('bm'))
        self.assertFalse(rex.match('xm'))
        self.assertFalse(rex.match('ax'))
        self.assertEqual(2, rex.groups)
        groups = rex.match('am').groups()
        self.assertEqual(('a', 'm'), groups)
Pygments-2.3.1/tests/test_properties.py0000644000175000017500000000527113376260540017361 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Properties Tests
    ~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers.configs import PropertiesLexer
from pygments.token import Token


class PropertiesTest(unittest.TestCase):
    def setUp(self):
        self.lexer = PropertiesLexer()

    def test_comments(self):
        """
        Assures lines lead by either # or ! are recognized as a comment
        """
        fragment = '! a comment\n# also a comment\n'
        tokens = [
            (Token.Comment, '! a comment'),
            (Token.Text, '\n'),
            (Token.Comment, '# also a comment'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_leading_whitespace_comments(self):
        fragment = '    # comment\n'
        tokens = [
            (Token.Text, '    '),
            (Token.Comment, '# comment'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_escaped_space_in_key(self):
        fragment = 'key = value\n'
        tokens = [
            (Token.Name.Attribute, 'key'),
            (Token.Text, ' '),
            (Token.Operator, '='),
            (Token.Text, ' '),
            (Token.Literal.String, 'value'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_escaped_space_in_value(self):
        fragment = 'key = doubleword\\ value\n'
        tokens = [
            (Token.Name.Attribute, 'key'),
            (Token.Text, ' '),
            (Token.Operator, '='),
            (Token.Text, ' '),
            (Token.Literal.String, 'doubleword\\ value'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_space_delimited_kv_pair(self):
        fragment = 'key value\n'
        tokens = [
            (Token.Name.Attribute, 'key'),
            (Token.Text, ' '),
            (Token.Literal.String, 'value\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_just_key(self):
        fragment = 'justkey\n'
        tokens = [
            (Token.Name.Attribute, 'justkey'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_just_key_with_space(self):
        fragment = 'just\\ key\n'
        tokens = [
            (Token.Name.Attribute, 'just\\ key'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_irc_formatter.py0000644000175000017500000000135613376260540020025 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments IRC formatter tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import re
import unittest

from pygments.util import StringIO
from pygments.lexers import PythonLexer
from pygments.formatters import IRCFormatter

import support

tokensource = list(PythonLexer().get_tokens("lambda x: 123"))

class IRCFormatterTest(unittest.TestCase):
    def test_correct_output(self):
        hfmt = IRCFormatter()
        houtfile = StringIO()
        hfmt.format(tokensource, houtfile)

        self.assertEqual(u'\x0302lambda\x03 x: \x0302123\x03\n', houtfile.getvalue())

Pygments-2.3.1/tests/test_latex_formatter.py0000644000175000017500000000277213376260540020370 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments LaTeX formatter tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import os
import unittest
import tempfile

from pygments.formatters import LatexFormatter
from pygments.lexers import PythonLexer

import support

TESTFILE, TESTDIR = support.location(__file__)


class LatexFormatterTest(unittest.TestCase):

    def test_valid_output(self):
        with open(TESTFILE) as fp:
            tokensource = list(PythonLexer().get_tokens(fp.read()))
        fmt = LatexFormatter(full=True, encoding='latin1')

        handle, pathname = tempfile.mkstemp('.tex')
        # place all output files in /tmp too
        old_wd = os.getcwd()
        os.chdir(os.path.dirname(pathname))
        tfile = os.fdopen(handle, 'wb')
        fmt.format(tokensource, tfile)
        tfile.close()
        try:
            import subprocess
            po = subprocess.Popen(['latex', '-interaction=nonstopmode',
                                   pathname], stdout=subprocess.PIPE)
            ret = po.wait()
            output = po.stdout.read()
            po.stdout.close()
        except OSError as e:
            # latex not available
            raise support.SkipTest(e)
        else:
            if ret:
                print(output)
            self.assertFalse(ret, 'latex run reported errors')

        os.unlink(pathname)
        os.chdir(old_wd)
Pygments-2.3.1/tests/test_ezhil.py0000644000175000017500000001423013376260540016273 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic EzhilLexer Test
    ~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2015 Muthiah Annamalai 
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Operator, Number, Text, Token
from pygments.lexers import EzhilLexer


class EzhilTest(unittest.TestCase):

    def setUp(self):
        self.lexer = EzhilLexer()
        self.maxDiff = None
    
    def testSum(self):
        fragment = u'1+3\n'
        tokens = [
            (Number.Integer, u'1'),
            (Operator, u'+'),
            (Number.Integer, u'3'),
            (Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        
    def testGCDExpr(self):
        fragment = u'1^3+(5-5)*gcd(a,b)\n'
        tokens = [
            (Token.Number.Integer,u'1'),
            (Token.Operator,u'^'),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Operator, u'+'),
            (Token.Punctuation, u'('),
            (Token.Literal.Number.Integer, u'5'),
            (Token.Operator, u'-'),
            (Token.Literal.Number.Integer, u'5'),
            (Token.Punctuation, u')'),
            (Token.Operator, u'*'),
            (Token.Name, u'gcd'),
            (Token.Punctuation, u'('),
            (Token.Name, u'a'),
            (Token.Operator, u','),
            (Token.Name, u'b'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n')
            ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testIfStatement(self):
        fragment = u"""@( 0 > 3 ) ஆனால்
	பதிப்பி "wont print"	
முடி"""
        tokens = [          
            (Token.Operator, u'@'),
            (Token.Punctuation, u'('),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer,u'0'),
            (Token.Text, u' '),
            (Token.Operator,u'>'),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Text, u' '),
            (Token.Punctuation, u')'),
            (Token.Text, u' '),
            (Token.Keyword, u'ஆனால்'),
            (Token.Text, u'\n'),
            (Token.Text, u'\t'),
            (Token.Keyword, u'பதிப்பி'),
            (Token.Text, u' '),
            (Token.Literal.String, u'"wont print"'),
            (Token.Text, u'\t'),
            (Token.Text, u'\n'),
            (Token.Keyword, u'முடி'),
            (Token.Text, u'\n')
            ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testFunction(self):
        fragment = u"""# (C) முத்தையா அண்ணாமலை 2013, 2015
நிரல்பாகம்  gcd ( x, y )
    மு = max(x,y)
     q = min(x,y)

    @( q == 0 ) ஆனால்
           பின்கொடு  மு
    முடி
    பின்கொடு  gcd( மு - q , q )
முடி\n"""
        tokens = [
            (Token.Comment.Single,
             u'# (C) \u0bae\u0bc1\u0ba4\u0bcd\u0ba4\u0bc8\u0baf\u0bbe \u0b85'
             u'\u0ba3\u0bcd\u0ba3\u0bbe\u0bae\u0bb2\u0bc8 2013, 2015\n'),
            (Token.Keyword,u'நிரல்பாகம்'),
            (Token.Text, u'  '),
            (Token.Name, u'gcd'),
            (Token.Text, u' '),
            (Token.Punctuation, u'('),
            (Token.Text, u' '),
            (Token.Name, u'x'),
            (Token.Operator, u','),
            (Token.Text, u' '),
            (Token.Name, u'y'),
            (Token.Text, u' '),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
            (Token.Text, u'    '),
            (Token.Name, u'\u0bae\u0bc1'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Name.Builtin, u'max'),
            (Token.Punctuation, u'('),
            (Token.Name, u'x'),
            (Token.Operator, u','),
            (Token.Name, u'y'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
            (Token.Text, u'     '),
            (Token.Name, u'q'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Name.Builtin, u'min'),
            (Token.Punctuation, u'('),
            (Token.Name, u'x'),
            (Token.Operator, u','),
            (Token.Name, u'y'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
            (Token.Text, u'\n'),
            (Token.Text, u'    '),
            (Token.Operator, u'@'),
            (Token.Punctuation, u'('),
            (Token.Text, u' '),
            (Token.Name, u'q'),
            (Token.Text, u' '),
            (Token.Operator, u'=='),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'0'),
            (Token.Text, u' '),
            (Token.Punctuation, u')'),
            (Token.Text, u' '),
            (Token.Keyword, u'ஆனால்'),
            (Token.Text, u'\n'),
            (Token.Text, u'           '),
            (Token.Keyword, u'பின்கொடு'),
            (Token.Text, u'  '),
            (Token.Name, u'\u0bae\u0bc1'),
            (Token.Text, u'\n'),
            (Token.Text, u'    '),
            (Token.Keyword, u'முடி'),
            (Token.Text, u'\n'),
            (Token.Text, u'    '),
            (Token.Keyword, u'\u0baa\u0bbf\u0ba9\u0bcd\u0b95\u0bca\u0b9f\u0bc1'),
            (Token.Text, u'  '),
            (Token.Name, u'gcd'),
            (Token.Punctuation, u'('),
            (Token.Text, u' '),
            (Token.Name, u'\u0bae\u0bc1'),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Text, u' '),
            (Token.Name, u'q'),
            (Token.Text, u' '),
            (Token.Operator, u','),
            (Token.Text, u' '),
            (Token.Name, u'q'),
            (Token.Text, u' '),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
            (Token.Keyword, u'முடி'), #u'\u0bae\u0bc1\u0b9f\u0bbf'),
            (Token.Text, u'\n')
            ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        
if __name__ == "__main__":
    unittest.main()
Pygments-2.3.1/tests/test_lexers_other.py0000644000175000017500000000561213376260540017667 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Tests for other lexers
    ~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
import glob
import os
import unittest

from pygments.lexers import guess_lexer
from pygments.lexers.scripting import EasytrieveLexer, JclLexer, RexxLexer


def _exampleFilePath(filename):
    return os.path.join(os.path.dirname(__file__), 'examplefiles', filename)


class AnalyseTextTest(unittest.TestCase):
    def _testCanRecognizeAndGuessExampleFiles(self, lexer):
        assert lexer is not None

        for pattern in lexer.filenames:
            exampleFilesPattern = _exampleFilePath(pattern)
            for exampleFilePath in glob.glob(exampleFilesPattern):
                with open(exampleFilePath, 'rb') as fp:
                    text = fp.read().decode('utf-8')
                probability = lexer.analyse_text(text)
                self.assertTrue(probability > 0,
                                '%s must recognize %r' % (
                                    lexer.name, exampleFilePath))
                guessedLexer = guess_lexer(text)
                self.assertEqual(guessedLexer.name, lexer.name)

    def testCanRecognizeAndGuessExampleFiles(self):
        LEXERS_TO_TEST = [
            EasytrieveLexer,
            JclLexer,
            RexxLexer,
        ]
        for lexerToTest in LEXERS_TO_TEST:
            self._testCanRecognizeAndGuessExampleFiles(lexerToTest)


class EasyTrieveLexerTest(unittest.TestCase):
    def testCanGuessFromText(self):
        self.assertTrue(EasytrieveLexer.analyse_text('MACRO'))
        self.assertTrue(EasytrieveLexer.analyse_text('\nMACRO'))
        self.assertTrue(EasytrieveLexer.analyse_text(' \nMACRO'))
        self.assertTrue(EasytrieveLexer.analyse_text(' \n MACRO'))
        self.assertTrue(EasytrieveLexer.analyse_text('*\nMACRO'))
        self.assertTrue(EasytrieveLexer.analyse_text(
            '*\n *\n\n \n*\n MACRO'))


class RexxLexerTest(unittest.TestCase):
    def testCanGuessFromText(self):
        self.assertAlmostEqual(0.01, RexxLexer.analyse_text('/* */'))
        self.assertAlmostEqual(1.0,
                               RexxLexer.analyse_text('''/* Rexx */
                say "hello world"'''))
        val = RexxLexer.analyse_text('/* */\n'
                                     'hello:pRoceduRe\n'
                                     '  say "hello world"')
        self.assertTrue(val > 0.5, val)
        val = RexxLexer.analyse_text('''/* */
                if 1 > 0 then do
                    say "ok"
                end
                else do
                    say "huh?"
                end''')
        self.assertTrue(val > 0.2, val)
        val = RexxLexer.analyse_text('''/* */
                greeting = "hello world!"
                parse value greeting "hello" name "!"
                say name''')
        self.assertTrue(val > 0.2, val)
Pygments-2.3.1/tests/test_examplefiles.py0000644000175000017500000001117413376260540017642 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments tests with example files
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import os
import pprint
import difflib
import pickle

from pygments.lexers import get_lexer_for_filename, get_lexer_by_name
from pygments.token import Error
from pygments.util import ClassNotFound

import support

STORE_OUTPUT = False

STATS = {}

TESTDIR = os.path.dirname(__file__)

# Jython generates a StackOverflowError for repetitions of the form (a|b)+,
# which are commonly used in string patterns, when matching more than about 1000
# chars.  These tests do not complete.  See http://bugs.jython.org/issue1965
BAD_FILES_FOR_JYTHON = ('Object.st', 'all.nit', 'genclass.clj',
                        'ragel-cpp_rlscan')

def test_example_files():
    global STATS
    STATS = {}
    outdir = os.path.join(TESTDIR, 'examplefiles', 'output')
    if STORE_OUTPUT and not os.path.isdir(outdir):
        os.makedirs(outdir)
    for fn in os.listdir(os.path.join(TESTDIR, 'examplefiles')):
        if fn.startswith('.') or fn.endswith('#'):
            continue

        absfn = os.path.join(TESTDIR, 'examplefiles', fn)
        if not os.path.isfile(absfn):
            continue

        extension = os.getenv('TEST_EXT')
        if extension and not absfn.endswith(extension):
            continue

        print(absfn)
        with open(absfn, 'rb') as f:
            code = f.read()
        try:
            code = code.decode('utf-8')
        except UnicodeError:
            code = code.decode('latin1')

        lx = None
        if '_' in fn:
            try:
                lx = get_lexer_by_name(fn.split('_')[0])
            except ClassNotFound:
                pass
        if lx is None:
            try:
                lx = get_lexer_for_filename(absfn, code=code)
            except ClassNotFound:
                raise AssertionError('file %r has no registered extension, '
                                     'nor is of the form <lexer>_filename '
                                     'for overriding, thus no lexer found.'
                                     % fn)
        yield check_lexer, lx, fn

    N = 7
    stats = list(STATS.items())
    stats.sort(key=lambda x: x[1][1])
    print('\nExample files that took longest absolute time:')
    for fn, t in stats[-N:]:
        print('%-30s  %6d chars  %8.2f ms  %7.3f ms/char' % ((fn,) + t))
    print()
    stats.sort(key=lambda x: x[1][2])
    print('\nExample files that took longest relative time:')
    for fn, t in stats[-N:]:
        print('%-30s  %6d chars  %8.2f ms  %7.3f ms/char' % ((fn,) + t))


def check_lexer(lx, fn):
    if os.name == 'java' and fn in BAD_FILES_FOR_JYTHON:
        raise support.SkipTest('%s is a known bad file on Jython' % fn)
    absfn = os.path.join(TESTDIR, 'examplefiles', fn)
    with open(absfn, 'rb') as fp:
        text = fp.read()
    text = text.replace(b'\r\n', b'\n')
    text = text.strip(b'\n') + b'\n'
    try:
        text = text.decode('utf-8')
        if text.startswith(u'\ufeff'):
            text = text[len(u'\ufeff'):]
    except UnicodeError:
        text = text.decode('latin1')
    ntext = []
    tokens = []
    import time
    t1 = time.time()
    for type, val in lx.get_tokens(text):
        ntext.append(val)
        assert type != Error, \
            'lexer %s generated error token for %s: %r at position %d' % \
            (lx, absfn, val, len(u''.join(ntext)))
        tokens.append((type, val))
    t2 = time.time()
    STATS[os.path.basename(absfn)] = (len(text),
                                      1000 * (t2 - t1), 1000 * (t2 - t1) / len(text))
    if u''.join(ntext) != text:
        print('\n'.join(difflib.unified_diff(u''.join(ntext).splitlines(),
                                             text.splitlines())))
        raise AssertionError('round trip failed for ' + absfn)

    # check output against previous run if enabled
    if STORE_OUTPUT:
        # no previous output -- store it
        outfn = os.path.join(TESTDIR, 'examplefiles', 'output', fn)
        if not os.path.isfile(outfn):
            with open(outfn, 'wb') as fp:
                pickle.dump(tokens, fp)
            return
        # otherwise load it and compare
        with open(outfn, 'rb') as fp:
            stored_tokens = pickle.load(fp)
        if stored_tokens != tokens:
            f1 = pprint.pformat(stored_tokens)
            f2 = pprint.pformat(tokens)
            print('\n'.join(difflib.unified_diff(f1.splitlines(),
                                                 f2.splitlines())))
            assert False, absfn
Pygments-2.3.1/tests/test_terminal_formatter.py0000644000175000017500000000573613376260540021071 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments terminal formatter tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import unittest
import re

from pygments.util import StringIO
from pygments.lexers.sql import PlPgsqlLexer
from pygments.formatters import TerminalFormatter, Terminal256Formatter, \
    HtmlFormatter, LatexFormatter

from pygments.style import Style
from pygments.token import Token
from pygments.lexers import Python3Lexer
from pygments import highlight

DEMO_TEXT = '''\
-- comment
select
* from bar;
'''
DEMO_LEXER = PlPgsqlLexer
DEMO_TOKENS = list(DEMO_LEXER().get_tokens(DEMO_TEXT))

ANSI_RE = re.compile(r'\x1b[\w\W]*?m')


def strip_ansi(x):
    return ANSI_RE.sub('', x)


class TerminalFormatterTest(unittest.TestCase):
    def test_reasonable_output(self):
        out = StringIO()
        TerminalFormatter().format(DEMO_TOKENS, out)
        plain = strip_ansi(out.getvalue())
        self.assertEqual(DEMO_TEXT.count('\n'), plain.count('\n'))
        print(repr(plain))

        for a, b in zip(DEMO_TEXT.splitlines(), plain.splitlines()):
            self.assertEqual(a, b)

    def test_reasonable_output_lineno(self):
        out = StringIO()
        TerminalFormatter(linenos=True).format(DEMO_TOKENS, out)
        plain = strip_ansi(out.getvalue())
        self.assertEqual(DEMO_TEXT.count('\n') + 1, plain.count('\n'))
        print(repr(plain))

        for a, b in zip(DEMO_TEXT.splitlines(), plain.splitlines()):
            self.assertTrue(a in b)


class MyStyle(Style):
    styles = {
        Token.Comment:    '#ansidarkgray',
        Token.String:     '#ansiblue bg:#ansidarkred',
        Token.Number:     '#ansigreen bg:#ansidarkgreen',
        Token.Number.Hex: '#ansidarkgreen bg:#ansired',
    }


class Terminal256FormatterTest(unittest.TestCase):
    code = '''
# this should be a comment
print("Hello World")
async def function(a,b,c, *d, **kwarg:Bool)->Bool:
    pass
    return 123, 0xb3e3

'''

    def test_style_html(self):
        style = HtmlFormatter(style=MyStyle).get_style_defs()
        self.assertTrue('#555555' in style,
                        "ansigray for comment not html css style")

    def test_others_work(self):
        """check other formatters don't crash"""
        highlight(self.code, Python3Lexer(), LatexFormatter(style=MyStyle))
        highlight(self.code, Python3Lexer(), HtmlFormatter(style=MyStyle))

    def test_256esc_seq(self):
        """
        test that a few escape sequences are actually used when using #ansi<> color codes
        """
        def termtest(x):
            return highlight(x, Python3Lexer(),
                             Terminal256Formatter(style=MyStyle))

        self.assertTrue('32;41' in termtest('0x123'))
        self.assertTrue('32;42' in termtest('123'))
        self.assertTrue('30;01' in termtest('#comment'))
        self.assertTrue('34;41' in termtest('"String"'))
Pygments-2.3.1/tests/test_regexlexer.py0000644000175000017500000000272213376260540017335 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments regex lexer tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Text
from pygments.lexer import RegexLexer
from pygments.lexer import bygroups
from pygments.lexer import default


class TestLexer(RegexLexer):
    """Test tuple state transitions including #pop."""
    tokens = {
        'root': [
            ('a', Text.Root, 'rag'),
            ('e', Text.Root),
            default(('beer', 'beer'))
        ],
        'beer': [
            ('d', Text.Beer, ('#pop', '#pop')),
        ],
        'rag': [
            ('b', Text.Rag, '#push'),
            ('c', Text.Rag, ('#pop', 'beer')),
        ],
    }


class TupleTransTest(unittest.TestCase):
    def test(self):
        lx = TestLexer()
        toks = list(lx.get_tokens_unprocessed('abcde'))
        self.assertEqual(toks,
           [(0, Text.Root, 'a'), (1, Text.Rag, 'b'), (2, Text.Rag, 'c'),
            (3, Text.Beer, 'd'), (4, Text.Root, 'e')])

    def test_multiline(self):
        lx = TestLexer()
        toks = list(lx.get_tokens_unprocessed('a\ne'))
        self.assertEqual(toks,
           [(0, Text.Root, 'a'), (1, Text, u'\n'),
            (2, Text.Root, 'e')])

    def test_default(self):
        lx = TestLexer()
        toks = list(lx.get_tokens_unprocessed('d'))
        self.assertEqual(toks, [(0, Text.Beer, 'd')])
Pygments-2.3.1/tests/test_bibtex.py0000644000175000017500000001723313376260540016443 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    BibTeX Test
    ~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import textwrap
import unittest

from pygments.lexers import BibTeXLexer, BSTLexer
from pygments.token import Token


class BibTeXTest(unittest.TestCase):
    def setUp(self):
        self.lexer = BibTeXLexer()

    def testPreamble(self):
        data = u'@PREAMBLE{"% some LaTeX code here"}'
        tokens = [
            (Token.Name.Class, u'@PREAMBLE'),
            (Token.Punctuation, u'{'),
            (Token.String, u'"'),
            (Token.String, u'% some LaTeX code here'),
            (Token.String, u'"'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(data)), tokens)

    def testString(self):
        data = u'@STRING(SCI = "Science")'
        tokens = [
            (Token.Name.Class, u'@STRING'),
            (Token.Punctuation, u'('),
            (Token.Name.Attribute, u'SCI'),
            (Token.Text, u' '),
            (Token.Punctuation, u'='),
            (Token.Text, u' '),
            (Token.String, u'"'),
            (Token.String, u'Science'),
            (Token.String, u'"'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(data)), tokens)

    def testEntry(self):
        data = u"""
            This is a comment.

            @ARTICLE{ruckenstein-diffusion,
                author = "Liu, Hongquin" # and # "Ruckenstein, Eli",
                year = 1997,
                month = JAN,
                pages = "888-895"
            }
        """

        tokens = [
            (Token.Comment, u'This is a comment.'),
            (Token.Text, u'\n\n'),
            (Token.Name.Class, u'@ARTICLE'),
            (Token.Punctuation, u'{'),
            (Token.Name.Label, u'ruckenstein-diffusion'),
            (Token.Punctuation, u','),
            (Token.Text, u'\n    '),
            (Token.Name.Attribute, u'author'),
            (Token.Text, u' '),
            (Token.Punctuation, u'='),
            (Token.Text, u' '),
            (Token.String, u'"'),
            (Token.String, u'Liu, Hongquin'),
            (Token.String, u'"'),
            (Token.Text, u' '),
            (Token.Punctuation, u'#'),
            (Token.Text, u' '),
            (Token.Name.Variable, u'and'),
            (Token.Text, u' '),
            (Token.Punctuation, u'#'),
            (Token.Text, u' '),
            (Token.String, u'"'),
            (Token.String, u'Ruckenstein, Eli'),
            (Token.String, u'"'),
            (Token.Punctuation, u','),
            (Token.Text, u'\n    '),
            (Token.Name.Attribute, u'year'),
            (Token.Text, u' '),
            (Token.Punctuation, u'='),
            (Token.Text, u' '),
            (Token.Number, u'1997'),
            (Token.Punctuation, u','),
            (Token.Text, u'\n    '),
            (Token.Name.Attribute, u'month'),
            (Token.Text, u' '),
            (Token.Punctuation, u'='),
            (Token.Text, u' '),
            (Token.Name.Variable, u'JAN'),
            (Token.Punctuation, u','),
            (Token.Text, u'\n    '),
            (Token.Name.Attribute, u'pages'),
            (Token.Text, u' '),
            (Token.Punctuation, u'='),
            (Token.Text, u' '),
            (Token.String, u'"'),
            (Token.String, u'888-895'),
            (Token.String, u'"'),
            (Token.Text, u'\n'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(textwrap.dedent(data))), tokens)

    def testComment(self):
        data = '@COMMENT{test}'
        tokens = [
            (Token.Comment, u'@COMMENT'),
            (Token.Comment, u'{test}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(data)), tokens)

    def testMissingBody(self):
        data = '@ARTICLE xxx'
        tokens = [
            (Token.Name.Class, u'@ARTICLE'),
            (Token.Text, u' '),
            (Token.Error, u'x'),
            (Token.Error, u'x'),
            (Token.Error, u'x'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(data)), tokens)

    def testMismatchedBrace(self):
        data = '@PREAMBLE(""}'
        tokens = [
            (Token.Name.Class, u'@PREAMBLE'),
            (Token.Punctuation, u'('),
            (Token.String, u'"'),
            (Token.String, u'"'),
            (Token.Error, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(data)), tokens)


class BSTTest(unittest.TestCase):
    def setUp(self):
        self.lexer = BSTLexer()

    def testBasicBST(self):
        data = """
            % BibTeX standard bibliography style `plain'

            INTEGERS { output.state before.all }

            FUNCTION {sort.format.title}
            { 't :=
            "A " #2
                "An " #3
                "The " #4 t chop.word
                chop.word
            chop.word
            sortify
            #1 global.max$ substring$
            }

            ITERATE {call.type$}
        """
        tokens = [
            (Token.Comment.SingleLine, "% BibTeX standard bibliography style `plain'"),
            (Token.Text, u'\n\n'),
            (Token.Keyword, u'INTEGERS'),
            (Token.Text, u' '),
            (Token.Punctuation, u'{'),
            (Token.Text, u' '),
            (Token.Name.Variable, u'output.state'),
            (Token.Text, u' '),
            (Token.Name.Variable, u'before.all'),
            (Token.Text, u' '),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n\n'),
            (Token.Keyword, u'FUNCTION'),
            (Token.Text, u' '),
            (Token.Punctuation, u'{'),
            (Token.Name.Variable, u'sort.format.title'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n'),
            (Token.Punctuation, u'{'),
            (Token.Text, u' '),
            (Token.Name.Function, u"'t"),
            (Token.Text, u' '),
            (Token.Name.Variable, u':='),
            (Token.Text, u'\n'),
            (Token.Literal.String, u'"A "'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'#2'),
            (Token.Text, u'\n    '),
            (Token.Literal.String, u'"An "'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'#3'),
            (Token.Text, u'\n    '),
            (Token.Literal.String, u'"The "'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'#4'),
            (Token.Text, u' '),
            (Token.Name.Variable, u't'),
            (Token.Text, u' '),
            (Token.Name.Variable, u'chop.word'),
            (Token.Text, u'\n    '),
            (Token.Name.Variable, u'chop.word'),
            (Token.Text, u'\n'),
            (Token.Name.Variable, u'chop.word'),
            (Token.Text, u'\n'),
            (Token.Name.Variable, u'sortify'),
            (Token.Text, u'\n'),
            (Token.Literal.Number, u'#1'),
            (Token.Text, u' '),
            (Token.Name.Builtin, u'global.max$'),
            (Token.Text, u' '),
            (Token.Name.Builtin, u'substring$'),
            (Token.Text, u'\n'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n\n'),
            (Token.Keyword, u'ITERATE'),
            (Token.Text, u' '),
            (Token.Punctuation, u'{'),
            (Token.Name.Builtin, u'call.type$'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(list(self.lexer.get_tokens(textwrap.dedent(data))), tokens)
Pygments-2.3.1/tests/test_java.py0000644000175000017500000000434613376260540016110 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic JavaLexer Test
    ~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Text, Name, Operator, Keyword, Number
from pygments.lexers import JavaLexer


class JavaTest(unittest.TestCase):

    def setUp(self):
        self.lexer = JavaLexer()
        self.maxDiff = None

    def testEnhancedFor(self):
        fragment = u'label:\nfor(String var2: var1) {}\n'
        tokens = [
            (Name.Label, u'label:'),
            (Text, u'\n'),
            (Keyword, u'for'),
            (Operator, u'('),
            (Name, u'String'),
            (Text, u' '),
            (Name, u'var2'),
            (Operator, u':'),
            (Text, u' '),
            (Name, u'var1'),
            (Operator, u')'),
            (Text, u' '),
            (Operator, u'{'),
            (Operator, u'}'),
            (Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testNumericLiterals(self):
        fragment = '0 5L 9__542_72l 0xbEEf 0X9_A 0_35 01 0b0___101_0'
        fragment += ' 0. .7_17F 3e-1_3d 1f 6_01.9e+3 0x.1Fp3 0XEP8D\n'
        tokens = [
            (Number.Integer, '0'),
            (Text, ' '),
            (Number.Integer, '5L'),
            (Text, ' '),
            (Number.Integer, '9__542_72l'),
            (Text, ' '),
            (Number.Hex, '0xbEEf'),
            (Text, ' '),
            (Number.Hex, '0X9_A'),
            (Text, ' '),
            (Number.Oct, '0_35'),
            (Text, ' '),
            (Number.Oct, '01'),
            (Text, ' '),
            (Number.Bin, '0b0___101_0'),
            (Text, ' '),
            (Number.Float, '0.'),
            (Text, ' '),
            (Number.Float, '.7_17F'),
            (Text, ' '),
            (Number.Float, '3e-1_3d'),
            (Text, ' '),
            (Number.Float, '1f'),
            (Text, ' '),
            (Number.Float, '6_01.9e+3'),
            (Text, ' '),
            (Number.Float, '0x.1Fp3'),
            (Text, ' '),
            (Number.Float, '0XEP8D'),
            (Text, '\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_rtf_formatter.py0000644000175000017500000001006613402534110020024 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments RTF formatter tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest
from string_asserts import StringTests

from pygments.util import StringIO
from pygments.formatters import RtfFormatter
from pygments.lexers.special import TextLexer

class RtfFormatterTest(StringTests, unittest.TestCase):
    foot = (r'\par' '\n' r'}')

    def _escape(self, string):
        return(string.replace("\n", r"\n"))

    def _build_message(self, *args, **kwargs):
        string = kwargs.get('string', None)
        t = self._escape(kwargs.get('t', ''))
        expected = self._escape(kwargs.get('expected', ''))
        result = self._escape(kwargs.get('result', ''))

        if string is None:
            string = (u"The expected output of '{t}'\n"
                      u"\t\tShould be '{expected}'\n"
                      u"\t\tActually outputs '{result}'\n"
                      u"\t(WARNING: Partial Output of Result!)")

        end = -(len(self._escape(self.foot)))
        start = end-len(expected)

        return string.format(t=t,
                             result = result[start:end],
                             expected = expected)

    def format_rtf(self, t):
        tokensource = list(TextLexer().get_tokens(t))
        fmt = RtfFormatter()
        buf = StringIO()
        fmt.format(tokensource, buf)
        result = buf.getvalue()
        buf.close()
        return result

    def test_rtf_header(self):
        t = u''
        result = self.format_rtf(t)
        expected = r'{\rtf1\ansi\uc0'
        msg = (u"RTF documents are expected to start with '{expected}'\n"
               u"\t\tStarts intead with '{result}'\n"
               u"\t(WARNING: Partial Output of Result!)".format(
                   expected = expected,
                   result = result[:len(expected)]))
        self.assertStartsWith(result, expected, msg)

    def test_rtf_footer(self):
        t = u''
        result = self.format_rtf(t)
        expected = self.foot
        msg = (u"RTF documents are expected to end with '{expected}'\n"
               u"\t\tEnds intead with '{result}'\n"
               u"\t(WARNING: Partial Output of Result!)".format(
                   expected = self._escape(expected),
                   result = self._escape(result[-len(expected):])))
        self.assertEndsWith(result, expected, msg)

    def test_ascii_characters(self):
        t = u'a b c d ~'
        result = self.format_rtf(t)
        expected = (r'a b c d ~')
        if not result.endswith(self.foot):
            return(unittest.skip('RTF Footer incorrect'))
        msg = self._build_message(t=t, result=result, expected=expected)
        self.assertEndsWith(result, expected+self.foot, msg)

    def test_escape_characters(self):
        t = u'\\ {{'
        result = self.format_rtf(t)
        expected = (r'\\ \{\{')
        if not result.endswith(self.foot):
            return(unittest.skip('RTF Footer incorrect'))
        msg = self._build_message(t=t, result=result, expected=expected)
        self.assertEndsWith(result, expected+self.foot, msg)

    def test_single_characters(self):
        t = u'â € ¤ каждой'
        result = self.format_rtf(t)
        expected = (r'{\u226} {\u8364} {\u164} '
                    r'{\u1082}{\u1072}{\u1078}{\u1076}{\u1086}{\u1081}')
        if not result.endswith(self.foot):
            return(unittest.skip('RTF Footer incorrect'))
        msg = self._build_message(t=t, result=result, expected=expected)
        self.assertEndsWith(result, expected+self.foot, msg)

    def test_double_characters(self):
        t = u'က 힣 ↕ ↕︎ 鼖'
        result = self.format_rtf(t)
        expected = (r'{\u4096} {\u55203} {\u8597} '
                    r'{\u8597}{\u65038} {\u55422}{\u56859}')
        if not result.endswith(self.foot):
            return(unittest.skip('RTF Footer incorrect'))
        msg = self._build_message(t=t, result=result, expected=expected)
        self.assertEndsWith(result, expected+self.foot, msg)
Pygments-2.3.1/tests/test_r.py0000644000175000017500000000366213402534110015413 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    R Tests
    ~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import SLexer
from pygments.token import Token, Name, Punctuation


class RTest(unittest.TestCase):
    def setUp(self):
        self.lexer = SLexer()

    def testCall(self):
        fragment = u'f(1, a)\n'
        tokens = [
            (Name.Function, u'f'),
            (Punctuation, u'('),
            (Token.Literal.Number, u'1'),
            (Punctuation, u','),
            (Token.Text, u' '),
            (Token.Name, u'a'),
            (Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testName1(self):
        fragment = u'._a_2.c'
        tokens = [
            (Name, u'._a_2.c'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testName2(self):
        # Invalid names are valid if backticks are used
        fragment = u'`.1 blah`'
        tokens = [
            (Name, u'`.1 blah`'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testName3(self):
        # Internal backticks can be escaped
        fragment = u'`.1 \\` blah`'
        tokens = [
            (Name, u'`.1 \\` blah`'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testCustomOperator(self):
        fragment = u'7 % and % 8'
        tokens = [
            (Token.Literal.Number, u'7'),
            (Token.Text, u' '),
            (Token.Operator, u'% and %'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'8'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_string_asserts.py0000644000175000017500000000222213376260540020230 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments string assert utility tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest
from string_asserts import StringTests

class TestStringTests(StringTests, unittest.TestCase):

    def test_startswith_correct(self):
        self.assertStartsWith("AAA", "A")

    # @unittest.expectedFailure not supported by nose
    def test_startswith_incorrect(self):
        self.assertRaises(AssertionError, self.assertStartsWith, "AAA", "B")

    # @unittest.expectedFailure not supported by nose
    def test_startswith_short(self):
        self.assertRaises(AssertionError, self.assertStartsWith, "A", "AA")

    def test_endswith_correct(self):
        self.assertEndsWith("AAA", "A")

    # @unittest.expectedFailure not supported by nose
    def test_endswith_incorrect(self):
        self.assertRaises(AssertionError, self.assertEndsWith, "AAA", "B")

    # @unittest.expectedFailure not supported by nose
    def test_endswith_short(self):
        self.assertRaises(AssertionError, self.assertEndsWith, "A", "AA")
Pygments-2.3.1/tests/test_using_api.py0000644000175000017500000000210013376260540017127 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments tests for using()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexer import using, bygroups, this, RegexLexer
from pygments.token import String, Text, Keyword

class TestLexer(RegexLexer):
    tokens = {
        'root': [
            (r'#.*',
             using(this, state='invalid')),
            (r'(")(.+?)(")',
             bygroups(String, using(this, state='string'), String)),
            (r'[^"]+', Text),
        ],
        'string': [
            (r'.+', Keyword),
        ],
    }


class UsingStateTest(unittest.TestCase):
    def test_basic(self):
        expected = [(Text, 'a'), (String, '"'), (Keyword, 'bcd'),
                    (String, '"'), (Text, 'e\n')]
        t = list(TestLexer().get_tokens('a"bcd"e'))
        self.assertEqual(t, expected)

    def test_error(self):
        def gen():
            return list(TestLexer().get_tokens('#a'))
        self.assertRaises(KeyError, gen)
Pygments-2.3.1/tests/string_asserts.py0000644000175000017500000000133213376260540017172 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments string assert utility
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

class StringTests(object):

    def assertStartsWith(self, haystack, needle, msg=None):
        if msg is None:
            msg = "'{0}' does not start with '{1}'".format(haystack, needle)
        if not haystack.startswith(needle):
            raise(AssertionError(msg))

    def assertEndsWith(self, haystack, needle, msg=None):
        if msg is None:
            msg = "'{0}' does not end with '{1}'".format(haystack, needle)
        if not haystack.endswith(needle):
            raise(AssertionError(msg))
Pygments-2.3.1/tests/test_cmdline.py0000644000175000017500000002611713376260540016602 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Command line test
    ~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import io
import os
import re
import sys
import tempfile
import unittest

import support
from pygments import cmdline, highlight
from pygments.util import BytesIO, StringIO


TESTFILE, TESTDIR = support.location(__file__)
TESTCODE = '''\
def func(args):
    pass
'''


def run_cmdline(*args, **kwds):
    saved_stdin = sys.stdin
    saved_stdout = sys.stdout
    saved_stderr = sys.stderr
    if sys.version_info > (3,):
        stdin_buffer = BytesIO()
        stdout_buffer = BytesIO()
        stderr_buffer = BytesIO()
        new_stdin = sys.stdin = io.TextIOWrapper(stdin_buffer, 'utf-8')
        new_stdout = sys.stdout = io.TextIOWrapper(stdout_buffer, 'utf-8')
        new_stderr = sys.stderr = io.TextIOWrapper(stderr_buffer, 'utf-8')
    else:
        stdin_buffer = new_stdin = sys.stdin = StringIO()
        stdout_buffer = new_stdout = sys.stdout = StringIO()
        stderr_buffer = new_stderr = sys.stderr = StringIO()
    new_stdin.write(kwds.get('stdin', ''))
    new_stdin.seek(0, 0)
    try:
        ret = cmdline.main(['pygmentize'] + list(args))
    finally:
        sys.stdin = saved_stdin
        sys.stdout = saved_stdout
        sys.stderr = saved_stderr
    new_stdout.flush()
    new_stderr.flush()
    out, err = stdout_buffer.getvalue().decode('utf-8'), \
        stderr_buffer.getvalue().decode('utf-8')
    return (ret, out, err)


class CmdLineTest(unittest.TestCase):

    def check_success(self, *cmdline, **kwds):
        code, out, err = run_cmdline(*cmdline, **kwds)
        self.assertEqual(code, 0)
        self.assertEqual(err, '')
        return out

    def check_failure(self, *cmdline, **kwds):
        expected_code = kwds.pop('code', 1)
        code, out, err = run_cmdline(*cmdline, **kwds)
        self.assertEqual(code, expected_code)
        self.assertEqual(out, '')
        return err

    def test_normal(self):
        # test that cmdline gives the same output as library api
        from pygments.lexers import PythonLexer
        from pygments.formatters import HtmlFormatter
        filename = TESTFILE
        with open(filename, 'rb') as fp:
            code = fp.read()

        output = highlight(code, PythonLexer(), HtmlFormatter())

        o = self.check_success('-lpython', '-fhtml', filename)
        self.assertEqual(o, output)

    def test_stdin(self):
        o = self.check_success('-lpython', '-fhtml', stdin=TESTCODE)
        o = re.sub('<[^>]*>', '', o)
        # rstrip is necessary since the HTML output ends with a trailing newline
        self.assertEqual(o.rstrip(), TESTCODE.rstrip())

        # guess if no lexer given
        o = self.check_success('-fhtml', stdin=TESTCODE)
        o = re.sub('<[^>]*>', '', o)
        # rstrip is necessary since the HTML output ends with a trailing newline
        self.assertEqual(o.rstrip(), TESTCODE.rstrip())

    def test_outfile(self):
        # test that output file works with and without encoding
        fd, name = tempfile.mkstemp()
        os.close(fd)
        for opts in [['-fhtml', '-o', name, TESTFILE],
                     ['-flatex', '-o', name, TESTFILE],
                     ['-fhtml', '-o', name, '-O', 'encoding=utf-8', TESTFILE]]:
            try:
                self.check_success(*opts)
            finally:
                os.unlink(name)

    def test_load_from_file(self):
        lexer_file = os.path.join(TESTDIR, 'support', 'python_lexer.py')
        formatter_file = os.path.join(TESTDIR, 'support', 'html_formatter.py')

        # By default, use CustomLexer
        o = self.check_success('-l', lexer_file, '-f', 'html',
                               '-x', stdin=TESTCODE)
        o = re.sub('<[^>]*>', '', o)
        # rstrip is necessary since the HTML output ends with a trailing newline
        self.assertEqual(o.rstrip(), TESTCODE.rstrip())

        # If user specifies a name, use it
        o = self.check_success('-f', 'html', '-x', '-l',
                               lexer_file + ':LexerWrapper', stdin=TESTCODE)
        o = re.sub('<[^>]*>', '', o)
        # rstrip is necessary since the HTML output ends with a trailing newline
        self.assertEqual(o.rstrip(), TESTCODE.rstrip())

        # Should also work for formatters
        o = self.check_success('-lpython', '-f',
                               formatter_file + ':HtmlFormatterWrapper',
                               '-x', stdin=TESTCODE)
        o = re.sub('<[^>]*>', '', o)
        # rstrip is necessary since the HTML output ends with a trailing newline
        self.assertEqual(o.rstrip(), TESTCODE.rstrip())

    def test_stream_opt(self):
        o = self.check_success('-lpython', '-s', '-fterminal', stdin=TESTCODE)
        o = re.sub(r'\x1b\[.*?m', '', o)
        self.assertEqual(o.replace('\r\n', '\n'), TESTCODE)

    def test_h_opt(self):
        o = self.check_success('-h')
        self.assertTrue('Usage:' in o)

    def test_L_opt(self):
        o = self.check_success('-L')
        self.assertTrue('Lexers' in o and 'Formatters' in o and
                        'Filters' in o and 'Styles' in o)
        o = self.check_success('-L', 'lexer')
        self.assertTrue('Lexers' in o and 'Formatters' not in o)
        self.check_success('-L', 'lexers')

    def test_O_opt(self):
        filename = TESTFILE
        o = self.check_success('-Ofull=1,linenos=true,foo=bar',
                               '-fhtml', filename)
        self.assertTrue('foo, bar=baz=,' in o)

    def test_F_opt(self):
        filename = TESTFILE
        o = self.check_success('-Fhighlight:tokentype=Name.Blubb,'
                               'names=TESTFILE filename',
                               '-fhtml', filename)
        self.assertTrue('\n'
        tokens = [
            (Token.Comment.Preproc, u'#'),
            (Token.Comment.Preproc, u'include'),
            (Token.Text, u' '),
            (Token.Comment.PreprocFile, u''),
            (Token.Comment.Preproc, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testPreprocFile2(self):
        fragment = u'#include "foo.h"\n'
        tokens = [
            (Token.Comment.Preproc, u'#'),
            (Token.Comment.Preproc, u'include'),
            (Token.Text, u' '),
            (Token.Comment.PreprocFile, u'"foo.h"'),
            (Token.Comment.Preproc, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

Pygments-2.3.1/tests/dtds/0000755000175000017500000000000013405476653014514 5ustar  piotrpiotrPygments-2.3.1/tests/dtds/HTML4.dtd0000644000175000017500000013114113376260540016033 0ustar  piotrpiotr
    Typical usage:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"
            "http://www.w3.org/TR/REC-html40/loose.dtd">
    <html>
    <head>
    ...
    </head>
    <body>
    ...
    </body>
    </html>

    The URI used as a system identifier with the public identifier allows
    the user agent to download the DTD and entity sets as needed.

    The FPI for the Strict HTML 4.0 DTD is:

        "-//W3C//DTD HTML 4.0//EN"

    and its URI is:

        http://www.w3.org/TR/REC-html40/strict.dtd

    Authors should use the Strict DTD unless they need the
    presentation control for user agents that don't (adequately)
    support style sheets.

    If you are writing a document that includes frames, use 
    the following FPI:

        "-//W3C//DTD HTML 4.0 Frameset//EN"

    with the URI:

        http://www.w3.org/TR/REC-html40/frameset.dtd

    The following URIs are supported in relation to HTML 4.0

    "http://www.w3.org/TR/REC-html40/strict.dtd" (Strict DTD)
    "http://www.w3.org/TR/REC-html40/loose.dtd" (Loose DTD)
    "http://www.w3.org/TR/REC-html40/frameset.dtd" (Frameset DTD)
    "http://www.w3.org/TR/REC-html40/HTMLlat1.ent" (Latin-1 entities)
    "http://www.w3.org/TR/REC-html40/HTMLsymbol.ent" (Symbol entities)
    "http://www.w3.org/TR/REC-html40/HTMLspecial.ent" (Special entities)

    These URIs point to the latest version of each file. To reference
    this specific revision use the following URIs:

    "http://www.w3.org/TR/REC-html40-971218/strict.dtd"
    "http://www.w3.org/TR/REC-html40-971218/loose.dtd"
    "http://www.w3.org/TR/REC-html40-971218/frameset.dtd"
    "http://www.w3.org/TR/REC-html40-971218/HTMLlat1.ent"
    "http://www.w3.org/TR/REC-html40-971218/HTMLsymbol.ent"
    "http://www.w3.org/TR/REC-html40-971218/HTMLspecial.ent"

-->

%HTMLlat1;

%HTMLsymbol;

%HTMLspecial;

Pygments-2.3.1/tests/dtds/HTML4-s.dtd0000644000175000017500000010422513376260540016276 0ustar  piotrpiotr
%HTMLlat1;

%HTMLsymbol;

%HTMLspecial;

Pygments-2.3.1/tests/dtds/HTML4.dcl0000644000175000017500000000547613376260540016035 0ustar  piotrpiotrPygments-2.3.1/tests/dtds/HTML4-f.dtd0000644000175000017500000000175713376260540016267 0ustar  piotrpiotr
    Typical usage:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN"
            "http://www.w3.org/TR/REC-html40/frameset.dtd">
    <html>
    <head>
    ...
    </head>
    <frameset>
    ...
    </frameset>
    </html>
-->



%HTML4.dtd;Pygments-2.3.1/tests/dtds/HTMLlat1.ent0000644000175000017500000002736713376260540016562 0ustar  piotrpiotr
Pygments-2.3.1/tests/dtds/HTMLspec.ent0000644000175000017500000001002413376260540016631 0ustar  piotrpiotr
Pygments-2.3.1/tests/dtds/HTMLsym.ent0000644000175000017500000003415613376260540016523 0ustar  piotrpiotr
Pygments-2.3.1/tests/dtds/HTML4.soc0000644000175000017500000000057613376260540016053 0ustar  piotrpiotrOVERRIDE YES
SGMLDECL HTML4.dcl
DOCTYPE HTML HTML4.dtd
PUBLIC "-//W3C//DTD HTML 4.0//EN" HTML4-s.dtd
PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" HTML4.dtd
PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN" HTML4-f.dtd
PUBLIC "-//W3C//ENTITIES Latin1//EN//HTML" HTMLlat1.ent
PUBLIC "-//W3C//ENTITIES Special//EN//HTML" HTMLspec.ent
PUBLIC "-//W3C//ENTITIES Symbols//EN//HTML" HTMLsym.ent
Pygments-2.3.1/tests/test_inherit.py0000644000175000017500000000412513376260540016624 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Tests for inheritance in RegexLexer
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexer import RegexLexer, inherit
from pygments.token import Text


class InheritTest(unittest.TestCase):
    def test_single_inheritance_position(self):
        t = Two()
        pats = [x[0].__self__.pattern for x in t._tokens['root']]
        self.assertEqual(['x', 'a', 'b', 'y'], pats)
    def test_multi_inheritance_beginning(self):
        t = Beginning()
        pats = [x[0].__self__.pattern for x in t._tokens['root']]
        self.assertEqual(['x', 'a', 'b', 'y', 'm'], pats)
    def test_multi_inheritance_end(self):
        t = End()
        pats = [x[0].__self__.pattern for x in t._tokens['root']]
        self.assertEqual(['m', 'x', 'a', 'b', 'y'], pats)

    def test_multi_inheritance_position(self):
        t = Three()
        pats = [x[0].__self__.pattern for x in t._tokens['root']]
        self.assertEqual(['i', 'x', 'a', 'b', 'y', 'j'], pats)

    def test_single_inheritance_with_skip(self):
        t = Skipped()
        pats = [x[0].__self__.pattern for x in t._tokens['root']]
        self.assertEqual(['x', 'a', 'b', 'y'], pats)


class One(RegexLexer):
    tokens = {
        'root': [
            ('a', Text),
            ('b', Text),
        ],
    }

class Two(One):
    tokens = {
        'root': [
            ('x', Text),
            inherit,
            ('y', Text),
        ],
    }

class Three(Two):
    tokens = {
        'root': [
            ('i', Text),
            inherit,
            ('j', Text),
        ],
    }

class Beginning(Two):
    tokens = {
        'root': [
            inherit,
            ('m', Text),
        ],
    }

class End(Two):
    tokens = {
        'root': [
            ('m', Text),
            inherit,
        ],
    }

class Empty(One):
    tokens = {}

class Skipped(Empty):
    tokens = {
        'root': [
            ('x', Text),
            inherit,
            ('y', Text),
        ],
    }
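
# --- Illustrative sketch (not part of the original file) ---
# `inherit` marks the spot where the parent lexer's rules for the same state
# are spliced in.  Reusing the fixtures above, Two's 'root' state matches
# 'x' first, then One's 'a' and 'b', then 'y'.  The helper name below is
# hypothetical.
def demo_inherit_order():
    two = Two()
    patterns = [rule[0].__self__.pattern for rule in two._tokens['root']]
    assert patterns == ['x', 'a', 'b', 'y']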

Pygments-2.3.1/tests/test_data.py0000644000175000017500000000635313376260540016100 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Data Tests
    ~~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import JsonLexer, JsonBareObjectLexer
from pygments.token import Token


class JsonTest(unittest.TestCase):
    def setUp(self):
        self.lexer = JsonLexer()

    def testBasic(self):
        fragment = u'{"foo": "bar", "foo2": [1, 2, 3]}\n'
        tokens = [
            (Token.Punctuation, u'{'),
            (Token.Name.Tag, u'"foo"'),
            (Token.Punctuation, u':'),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"bar"'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Name.Tag, u'"foo2"'),
            (Token.Punctuation, u':'),
            (Token.Text, u' '),
            (Token.Punctuation, u'['),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Punctuation, u']'),
            (Token.Punctuation, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

class JsonBareObjectTest(unittest.TestCase):
    def setUp(self):
        self.lexer = JsonBareObjectLexer()

    def testBasic(self):
        # This is the same as testBasic for JsonLexer above, except the
        # enclosing curly braces are removed.
        fragment = u'"foo": "bar", "foo2": [1, 2, 3]\n'
        tokens = [
            (Token.Name.Tag, u'"foo"'),
            (Token.Punctuation, u':'),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"bar"'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Name.Tag, u'"foo2"'),
            (Token.Punctuation, u':'),
            (Token.Text, u' '),
            (Token.Punctuation, u'['),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Punctuation, u']'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testClosingCurly(self):
        # This can be an Error token, but should not be a can't-pop-from-stack
        # exception.
        fragment = '}"a"\n'
        tokens = [
            (Token.Error, '}'),
            (Token.Name.Tag, '"a"'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testClosingCurlyInValue(self):
        fragment = '"": ""}\n'
        tokens = [
            (Token.Name.Tag, '""'),
            (Token.Punctuation, ':'),
            (Token.Text, ' '),
            (Token.Literal.String.Double, '""'),
            (Token.Error, '}'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
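
# --- Illustrative sketch (not part of the original file) ---
# A property these lexer tests rely on: for fragments like the ones above
# (no leading newlines, exactly one trailing newline), the token values from
# get_tokens() concatenate back to the input text -- the lexer splits the
# text but never drops characters.  The helper name below is hypothetical.
def assert_roundtrip(lexer, text):
    tokens = list(lexer.get_tokens(text))
    assert u''.join(value for _, value in tokens) == text

# Example: assert_roundtrip(JsonLexer(), u'{"foo": "bar", "foo2": [1, 2, 3]}\n')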

Pygments-2.3.1/tests/test_qbasiclexer.py0000644000175000017500000000245113376260540017464 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Tests for QBasic
    ~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import glob
import os
import unittest

from pygments.token import Token
from pygments.lexers.basic import QBasicLexer


class QBasicTest(unittest.TestCase):
    def setUp(self):
        self.lexer = QBasicLexer()
        self.maxDiff = None

    def testKeywordsWithDollar(self):
        fragment = u'DIM x\nx = RIGHT$("abc", 1)\n'
        expected = [
            (Token.Keyword.Declaration, u'DIM'),
            (Token.Text.Whitespace, u' '),
            (Token.Name.Variable.Global, u'x'),
            (Token.Text, u'\n'),
            (Token.Name.Variable.Global, u'x'),
            (Token.Text.Whitespace, u' '),
            (Token.Operator, u'='),
            (Token.Text.Whitespace, u' '),
            (Token.Keyword.Reserved, u'RIGHT$'),
            (Token.Punctuation, u'('),
            (Token.Literal.String.Double, u'"abc"'),
            (Token.Punctuation, u','),
            (Token.Text.Whitespace, u' '),
            (Token.Literal.Number.Integer.Long, u'1'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/support.py0000644000175000017500000000053513376260540015640 0ustar  piotrpiotr# coding: utf-8
"""
Support for Pygments tests
"""

import os

from nose import SkipTest


def location(mod_name):
    """
    Return the file and directory that the code for *mod_name* is in.
    """
    source = mod_name.endswith("pyc") and mod_name[:-1] or mod_name
    source = os.path.abspath(source)
    return source, os.path.dirname(source)
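
# Usage note (illustrative addition, not part of the original module):
# test modules call location(__file__) to find both their own path and the
# tests directory, e.g. test_cmdline.py does
#
#     TESTFILE, TESTDIR = support.location(__file__)
#
# and then highlights TESTFILE as sample input or joins TESTDIR with
# 'support' to reach the helper lexer/formatter files.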
Pygments-2.3.1/tests/test_python.py0000644000175000017500000001036013377321324016500 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Python Tests
    ~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import PythonLexer, Python3Lexer
from pygments.token import Token


class PythonTest(unittest.TestCase):
    def setUp(self):
        self.lexer = PythonLexer()

    def test_cls_builtin(self):
        """
        Tests that a cls token gets interpreted as a Token.Name.Builtin.Pseudo

        """
        fragment = 'class TestClass():\n    @classmethod\n    def hello(cls):\n        pass\n'
        tokens = [
            (Token.Keyword, 'class'),
            (Token.Text, ' '),
            (Token.Name.Class, 'TestClass'),
            (Token.Punctuation, '('),
            (Token.Punctuation, ')'),
            (Token.Punctuation, ':'),
            (Token.Text, '\n'),
            (Token.Text, '    '),
            (Token.Name.Decorator, '@classmethod'),
            (Token.Text, '\n'),
            (Token.Text, '    '),
            (Token.Keyword, 'def'),
            (Token.Text, ' '),
            (Token.Name.Function, 'hello'),
            (Token.Punctuation, '('),
            (Token.Name.Builtin.Pseudo, 'cls'),
            (Token.Punctuation, ')'),
            (Token.Punctuation, ':'),
            (Token.Text, '\n'),
            (Token.Text, '        '),
            (Token.Keyword, 'pass'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))


class Python3Test(unittest.TestCase):
    def setUp(self):
        self.lexer = Python3Lexer()
        
    def testNeedsName(self):
        """
        Tests that '@' is recognized as an Operator
        """
        fragment = u'S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)\n'
        tokens = [
            (Token.Name, u'S'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Punctuation, u'('),
            (Token.Name, u'H'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Name, u'beta'),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Text, u' '),
            (Token.Name, u'r'),
            (Token.Punctuation, u')'),
            (Token.Operator, u'.'),
            (Token.Name, u'T'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Name, u'inv'),
            (Token.Punctuation, u'('),
            (Token.Name, u'H'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Name, u'V'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Name, u'H'),
            (Token.Operator, u'.'),
            (Token.Name, u'T'),
            (Token.Punctuation, u')'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Punctuation, u'('),
            (Token.Name, u'H'),
            (Token.Text, u' '),
            (Token.Operator, u'@'),
            (Token.Text, u' '),
            (Token.Name, u'beta'),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Text, u' '),
            (Token.Name, u'r'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def test_pep_515(self):
        """
        Tests that the lexer can parse numeric literals with underscores
        """
        fragments = (
            (Token.Literal.Number.Integer, u'1_000_000'),
            (Token.Literal.Number.Float, u'1_000.000_001'),
            (Token.Literal.Number.Float, u'1_000e1_000j'),
            (Token.Literal.Number.Hex, u'0xCAFE_F00D'),
            (Token.Literal.Number.Bin, u'0b_0011_1111_0100_1110'),
            (Token.Literal.Number.Oct, u'0o_777_123'),
        )

        for token, fragment in fragments:
            tokens = [
                (token, fragment),
                (Token.Text, u'\n'),
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_praat.py0000644000175000017500000001037013376260540016270 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Praat lexer tests
    ~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Token
from pygments.lexers import PraatLexer

class PraatTest(unittest.TestCase):

    def setUp(self):
        self.lexer = PraatLexer()
        self.maxDiff = None

    def testNumericAssignment(self):
        fragment = u'var = -15e4\n'
        tokens = [
            (Token.Text, u'var'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Literal.Number, u'15e4'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testStringAssignment(self):
        fragment = u'var$ = "foo"\n'
        tokens = [
            (Token.Text, u'var$'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'foo'),
            (Token.Literal.String, u'"'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testStringEscapedQuotes(self):
        fragment = u'"it said ""foo"""\n'
        tokens = [
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'it said '),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'foo'),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'"'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testFunctionCall(self):
        fragment = u'selected("Sound", i+(a*b))\n'
        tokens = [
            (Token.Name.Function, u'selected'),
            (Token.Punctuation, u'('),
            (Token.Literal.String, u'"'),
            (Token.Literal.String, u'Sound'),
            (Token.Literal.String, u'"'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Text, u'i'),
            (Token.Operator, u'+'),
            (Token.Text, u'('),
            (Token.Text, u'a'),
            (Token.Operator, u'*'),
            (Token.Text, u'b'),
            (Token.Text, u')'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testBrokenUnquotedString(self):
        fragment = u'printline string\n... \'interpolated\' string\n'
        tokens = [
            (Token.Keyword, u'printline'),
            (Token.Text, u' '),
            (Token.Literal.String, u'string'),
            (Token.Text, u'\n'),
            (Token.Punctuation, u'...'),
            (Token.Text, u' '),
            (Token.Literal.String.Interpol, u"'"),
            (Token.Literal.String.Interpol, u'interpolated'),
            (Token.Literal.String.Interpol, u"'"),
            (Token.Text, u' '),
            (Token.Literal.String, u'string'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testInlineIf(self):
        fragment = u'var = if true == 1 then -1 else 0 fi'
        tokens = [
            (Token.Text, u'var'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Keyword, u'if'),
            (Token.Text, u' '),
            (Token.Text, u'true'),
            (Token.Text, u' '),
            (Token.Operator, u'=='),
            (Token.Text, u' '),
            (Token.Literal.Number, u'1'),
            (Token.Text, u' '),
            (Token.Keyword, u'then'),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Literal.Number, u'1'),
            (Token.Text, u' '),
            (Token.Keyword, u'else'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'0'),
            (Token.Text, u' '),
            (Token.Keyword, u'fi'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_crystal.py0000644000175000017500000002271713376260540016652 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic CrystalLexer Test
    ~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import unicode_literals
import unittest

from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Number, Punctuation, Error
from pygments.lexers import CrystalLexer


class CrystalTest(unittest.TestCase):

    def setUp(self):
        self.lexer = CrystalLexer()
        self.maxDiff = None

    def testRangeSyntax1(self):
        fragment = '1...3\n'
        tokens = [
            (Number.Integer, '1'),
            (Operator, '...'),
            (Number.Integer, '3'),
            (Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testRangeSyntax2(self):
        fragment = '1 .. 3\n'
        tokens = [
            (Number.Integer, '1'),
            (Text, ' '),
            (Operator, '..'),
            (Text, ' '),
            (Number.Integer, '3'),
            (Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testInterpolationNestedCurly(self):
        fragment = (
            '"A#{ (3..5).group_by { |x| x/2}.map '
            'do |k,v| "#{k}" end.join }" + "Z"\n')
        tokens = [
            (String.Double, '"'),
            (String.Double, 'A'),
            (String.Interpol, '#{'),
            (Text, ' '),
            (Punctuation, '('),
            (Number.Integer, '3'),
            (Operator, '..'),
            (Number.Integer, '5'),
            (Punctuation, ')'),
            (Operator, '.'),
            (Name, 'group_by'),
            (Text, ' '),
            (String.Interpol, '{'),
            (Text, ' '),
            (Operator, '|'),
            (Name, 'x'),
            (Operator, '|'),
            (Text, ' '),
            (Name, 'x'),
            (Operator, '/'),
            (Number.Integer, '2'),
            (String.Interpol, '}'),
            (Operator, '.'),
            (Name, 'map'),
            (Text, ' '),
            (Keyword, 'do'),
            (Text, ' '),
            (Operator, '|'),
            (Name, 'k'),
            (Punctuation, ','),
            (Name, 'v'),
            (Operator, '|'),
            (Text, ' '),
            (String.Double, '"'),
            (String.Interpol, '#{'),
            (Name, 'k'),
            (String.Interpol, '}'),
            (String.Double, '"'),
            (Text, ' '),
            (Keyword, 'end'),
            (Operator, '.'),
            (Name, 'join'),
            (Text, ' '),
            (String.Interpol, '}'),
            (String.Double, '"'),
            (Text, ' '),
            (Operator, '+'),
            (Text, ' '),
            (String.Double, '"'),
            (String.Double, 'Z'),
            (String.Double, '"'),
            (Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testOperatorMethods(self):
        fragment = '([] of Int32).[]?(5)\n'
        tokens = [
            (Punctuation, '('),
            (Operator, '['),
            (Operator, ']'),
            (Text, ' '),
            (Keyword, 'of'),
            (Text, ' '),
            (Name.Builtin, 'Int32'),
            (Punctuation, ')'),
            (Operator, '.'),
            (Name.Operator, '[]?'),
            (Punctuation, '('),
            (Number.Integer, '5'),
            (Punctuation, ')'),
            (Text, '\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
    
    def testArrayAccess(self):
        fragment = '[5][5]?\n'
        tokens = [
            (Operator, '['),
            (Number.Integer, '5'),
            (Operator, ']'),
            (Operator, '['),
            (Number.Integer, '5'),
            (Operator, ']?'),
            (Text, '\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testNumbers(self):
        for kind, testset in [
            (Number.Integer, '0  1  1_000_000  1u8  11231231231121312i64'),
            (Number.Float, '0.0  1.0_f32  1_f32  0f64  1e+4  1e111  1_234.567_890'),
            (Number.Bin, '0b1001_0110  0b0u8'),
            (Number.Oct, '0o17  0o7_i32'),
            (Number.Hex, '0xdeadBEEF'),
        ]:
            for fragment in testset.split():
                self.assertEqual([(kind, fragment), (Text, '\n')],
                                 list(self.lexer.get_tokens(fragment + '\n')))

        for fragment in '01  0b2  0x129g2  0o12358'.split():
            self.assertEqual(next(self.lexer.get_tokens(fragment + '\n'))[0],
                             Error)

    def testChars(self):
        for fragment in ["'a'", "'я'", "'\\u{1234}'", "'\n'"]:
            self.assertEqual([(String.Char, fragment), (Text, '\n')],
                             list(self.lexer.get_tokens(fragment + '\n')))
        self.assertEqual(next(self.lexer.get_tokens("'abc'"))[0], Error)

    def testMacro(self):
        fragment = (
            'def<=>(other : self) : Int\n'
            '{%for field in %w(first_name middle_name last_name)%}\n'
            'cmp={{field.id}}<=>other.{{field.id}}\n'
            'return cmp if cmp!=0\n'
            '{%end%}\n'
            '0\n'
            'end\n')
        tokens = [
            (Keyword, 'def'),
            (Name.Function, '<=>'),
            (Punctuation, '('),
            (Name, 'other'),
            (Text, ' '),
            (Punctuation, ':'),
            (Text, ' '),
            (Keyword.Pseudo, 'self'),
            (Punctuation, ')'),
            (Text, ' '),
            (Punctuation, ':'),
            (Text, ' '),
            (Name.Builtin, 'Int'),
            (Text, '\n'),
            (String.Interpol, '{%'),
            (Keyword, 'for'),
            (Text, ' '),
            (Name, 'field'),
            (Text, ' '),
            (Keyword, 'in'),
            (Text, ' '),
            (String.Other, '%w('),
            (String.Other, 'first_name middle_name last_name'),
            (String.Other, ')'),
            (String.Interpol, '%}'),
            (Text, '\n'),
            (Name, 'cmp'),
            (Operator, '='),
            (String.Interpol, '{{'),
            (Name, 'field'),
            (Operator, '.'),
            (Name, 'id'),
            (String.Interpol, '}}'),
            (Operator, '<=>'),
            (Name, 'other'),
            (Operator, '.'),
            (String.Interpol, '{{'),
            (Name, 'field'),
            (Operator, '.'),
            (Name, 'id'),
            (String.Interpol, '}}'),
            (Text, '\n'),
            (Keyword, 'return'),
            (Text, ' '),
            (Name, 'cmp'),
            (Text, ' '),
            (Keyword, 'if'),
            (Text, ' '),
            (Name, 'cmp'),
            (Operator, '!='),
            (Number.Integer, '0'),
            (Text, '\n'),
            (String.Interpol, '{%'),
            (Keyword, 'end'),
            (String.Interpol, '%}'),
            (Text, '\n'),
            (Number.Integer, '0'),
            (Text, '\n'),
            (Keyword, 'end'),
            (Text, '\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testLib(self):
        fragment = (
            '@[Link("some")]\nlib LibSome\n'
            '@[CallConvention("X86_StdCall")]\nfun foo="some.foo"(thing : Void*) : LibC::Int\n'
            'end\n')
        tokens = [
            (Operator, '@['),
            (Name.Decorator, 'Link'),
            (Punctuation, '('),
            (String.Double, '"'),
            (String.Double, 'some'),
            (String.Double, '"'),
            (Punctuation, ')'),
            (Operator, ']'),
            (Text, '\n'),
            (Keyword, 'lib'),
            (Text, ' '),
            (Name.Namespace, 'LibSome'),
            (Text, '\n'),
            (Operator, '@['),
            (Name.Decorator, 'CallConvention'),
            (Punctuation, '('),
            (String.Double, '"'),
            (String.Double, 'X86_StdCall'),
            (String.Double, '"'),
            (Punctuation, ')'),
            (Operator, ']'),
            (Text, '\n'),
            (Keyword, 'fun'),
            (Text, ' '),
            (Name.Function, 'foo'),
            (Operator, '='),
            (String.Double, '"'),
            (String.Double, 'some.foo'),
            (String.Double, '"'),
            (Punctuation, '('),
            (Name, 'thing'),
            (Text, ' '),
            (Punctuation, ':'),
            (Text, ' '),
            (Name.Builtin, 'Void'),
            (Operator, '*'),
            (Punctuation, ')'),
            (Text, ' '),
            (Punctuation, ':'),
            (Text, ' '),
            (Name, 'LibC'),
            (Operator, '::'),
            (Name.Builtin, 'Int'),
            (Text, '\n'),
            (Keyword, 'end'),
            (Text, '\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testEscapedBracestring(self):
        fragment = 'str.gsub(%r{\\\\\\\\}, "/")\n'
        tokens = [
            (Name, 'str'),
            (Operator, '.'),
            (Name, 'gsub'),
            (Punctuation, '('),
            (String.Regex, '%r{'),
            (String.Regex, '\\\\'),
            (String.Regex, '\\\\'),
            (String.Regex, '}'),
            (Punctuation, ','),
            (Text, ' '),
            (String.Double, '"'),
            (String.Double, '/'),
            (String.Double, '"'),
            (Punctuation, ')'),
            (Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_cpp.py0000644000175000017500000000146613376260540015751 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    CPP Tests
    ~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import CppLexer
from pygments.token import Token


class CppTest(unittest.TestCase):
    def setUp(self):
        self.lexer = CppLexer()

    def testGoodComment(self):
        fragment = u'/* foo */\n'
        tokens = [
            (Token.Comment.Multiline, u'/* foo */'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testOpenComment(self):
        fragment = u'/* foo\n'
        tokens = [
            (Token.Comment.Multiline, u'/* foo\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_ruby.py0000644000175000017500000001141613376260540016144 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic RubyLexer Test
    ~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Operator, Number, Text, Token
from pygments.lexers import RubyLexer


class RubyTest(unittest.TestCase):

    def setUp(self):
        self.lexer = RubyLexer()
        self.maxDiff = None

    def testRangeSyntax1(self):
        fragment = u'1..3\n'
        tokens = [
            (Number.Integer, u'1'),
            (Operator, u'..'),
            (Number.Integer, u'3'),
            (Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testRangeSyntax2(self):
        fragment = u'1...3\n'
        tokens = [
            (Number.Integer, u'1'),
            (Operator, u'...'),
            (Number.Integer, u'3'),
            (Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testRangeSyntax3(self):
        fragment = u'1 .. 3\n'
        tokens = [
            (Number.Integer, u'1'),
            (Text, u' '),
            (Operator, u'..'),
            (Text, u' '),
            (Number.Integer, u'3'),
            (Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testInterpolationNestedCurly(self):
        fragment = (
            u'"A#{ (3..5).group_by { |x| x/2}.map '
            u'do |k,v| "#{k}" end.join }" + "Z"\n')

        tokens = [
            (Token.Literal.String.Double, u'"'),
            (Token.Literal.String.Double, u'A'),
            (Token.Literal.String.Interpol, u'#{'),
            (Token.Text, u' '),
            (Token.Punctuation, u'('),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Operator, u'..'),
            (Token.Literal.Number.Integer, u'5'),
            (Token.Punctuation, u')'),
            (Token.Operator, u'.'),
            (Token.Name, u'group_by'),
            (Token.Text, u' '),
            (Token.Literal.String.Interpol, u'{'),
            (Token.Text, u' '),
            (Token.Operator, u'|'),
            (Token.Name, u'x'),
            (Token.Operator, u'|'),
            (Token.Text, u' '),
            (Token.Name, u'x'),
            (Token.Operator, u'/'),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Literal.String.Interpol, u'}'),
            (Token.Operator, u'.'),
            (Token.Name, u'map'),
            (Token.Text, u' '),
            (Token.Keyword, u'do'),
            (Token.Text, u' '),
            (Token.Operator, u'|'),
            (Token.Name, u'k'),
            (Token.Punctuation, u','),
            (Token.Name, u'v'),
            (Token.Operator, u'|'),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"'),
            (Token.Literal.String.Interpol, u'#{'),
            (Token.Name, u'k'),
            (Token.Literal.String.Interpol, u'}'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u' '),
            (Token.Keyword, u'end'),
            (Token.Operator, u'.'),
            (Token.Name, u'join'),
            (Token.Text, u' '),
            (Token.Literal.String.Interpol, u'}'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u' '),
            (Token.Operator, u'+'),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"'),
            (Token.Literal.String.Double, u'Z'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testOperatorMethods(self):
        fragment = u'x.==4\n'
        tokens = [
            (Token.Name, u'x'),
            (Token.Operator, u'.'),
            (Token.Name.Operator, u'=='),
            (Token.Literal.Number.Integer, u'4'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testEscapedBracestring(self):
        fragment = u'str.gsub(%r{\\\\\\\\}, "/")\n'
        tokens = [
            (Token.Name, u'str'),
            (Token.Operator, u'.'),
            (Token.Name, u'gsub'),
            (Token.Punctuation, u'('),
            (Token.Literal.String.Regex, u'%r{'),
            (Token.Literal.String.Regex, u'\\\\'),
            (Token.Literal.String.Regex, u'\\\\'),
            (Token.Literal.String.Regex, u'}'),
            (Token.Punctuation, u','),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"'),
            (Token.Literal.String.Double, u'/'),
            (Token.Literal.String.Double, u'"'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_util.py0000644000175000017500000001757513376260540016154 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Test suite for the util module
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import unittest

from pygments import util, console


class FakeLexer(object):
    def analyse(text):
        return text
    analyse = util.make_analysator(analyse)


class UtilTest(unittest.TestCase):

    def test_getoptions(self):
        raises = self.assertRaises
        equals = self.assertEqual

        equals(util.get_bool_opt({}, 'a', True), True)
        equals(util.get_bool_opt({}, 'a', 1), True)
        equals(util.get_bool_opt({}, 'a', 'true'), True)
        equals(util.get_bool_opt({}, 'a', 'no'), False)
        raises(util.OptionError, util.get_bool_opt, {}, 'a', [])
        raises(util.OptionError, util.get_bool_opt, {}, 'a', 'foo')

        equals(util.get_int_opt({}, 'a', 1), 1)
        raises(util.OptionError, util.get_int_opt, {}, 'a', [])
        raises(util.OptionError, util.get_int_opt, {}, 'a', 'bar')

        equals(util.get_list_opt({}, 'a', [1]), [1])
        equals(util.get_list_opt({}, 'a', '1 2'), ['1', '2'])
        raises(util.OptionError, util.get_list_opt, {}, 'a', 1)

        equals(util.get_choice_opt({}, 'a', ['foo', 'bar'], 'bar'), 'bar')
        equals(util.get_choice_opt({}, 'a', ['foo', 'bar'], 'Bar', True), 'bar')
        raises(util.OptionError, util.get_choice_opt, {}, 'a',
               ['foo', 'bar'], 'baz')

    def test_docstring_headline(self):
        def f1():
            """
            docstring headline

            other text
            """
        def f2():
            """
            docstring
            headline

            other text
            """
        def f3():
            pass

        self.assertEqual(util.docstring_headline(f1), 'docstring headline')
        self.assertEqual(util.docstring_headline(f2), 'docstring headline')
        self.assertEqual(util.docstring_headline(f3), '')

    def test_analysator_returns_float(self):
        # If an analysator wrapped by make_analysator returns a floating point
        # number, then that number will be returned by the wrapper.
        self.assertEqual(FakeLexer.analyse('0.5'), 0.5)

    def test_analysator_returns_boolean(self):
        # If an analysator wrapped by make_analysator returns a boolean value,
        # then the wrapper will return 1.0 if the boolean was True or 0.0 if
        # it was False.
        self.assertEqual(FakeLexer.analyse(True), 1.0)
        self.assertEqual(FakeLexer.analyse(False), 0.0)

    def test_analysator_raises_exception(self):
        # If an analysator wrapped by make_analysator raises an exception,
        # then the wrapper will return 0.0.
        class ErrorLexer(object):
            def analyse(text):
                raise RuntimeError('something bad happened')
            analyse = util.make_analysator(analyse)
        self.assertEqual(ErrorLexer.analyse(''), 0.0)

    def test_analysator_value_error(self):
        # When converting the analysator's return value to a float a
        # ValueError may occur.  If that happens 0.0 is returned instead.
        self.assertEqual(FakeLexer.analyse('bad input'), 0.0)

    def test_analysator_type_error(self):
        # When converting the analysator's return value to a float a
        # TypeError may occur.  If that happens 0.0 is returned instead.
        self.assertEqual(FakeLexer.analyse('xxx'), 0.0)

    def test_shebang_matches(self):
        self.assertTrue(util.shebang_matches('#!/usr/bin/env python\n', r'python(2\.\d)?'))
        self.assertTrue(util.shebang_matches('#!/usr/bin/python2.4', r'python(2\.\d)?'))
        self.assertTrue(util.shebang_matches('#!/usr/bin/startsomethingwith python',
                                             r'python(2\.\d)?'))
        self.assertTrue(util.shebang_matches('#!C:\\Python2.4\\Python.exe',
                                             r'python(2\.\d)?'))

        self.assertFalse(util.shebang_matches('#!/usr/bin/python-ruby',
                                              r'python(2\.\d)?'))
        self.assertFalse(util.shebang_matches('#!/usr/bin/python/ruby',
                                              r'python(2\.\d)?'))
        self.assertFalse(util.shebang_matches('#!', r'python'))

    def test_doctype_matches(self):
        self.assertTrue(util.doctype_matches(
            ' ', 'html.*'))
        self.assertFalse(util.doctype_matches(
            '  ', 'html.*'))
        self.assertTrue(util.html_doctype_matches(
            ''))

    def test_xml(self):
        self.assertTrue(util.looks_like_xml(
            ''))
        self.assertTrue(util.looks_like_xml('abc'))
        self.assertFalse(util.looks_like_xml(''))

    def test_unirange(self):
        first_non_bmp = u'\U00010000'
        r = re.compile(util.unirange(0x10000, 0x20000))
        m = r.match(first_non_bmp)
        self.assertTrue(m)
        self.assertEqual(m.end(), len(first_non_bmp))
        self.assertFalse(r.match(u'\uffff'))
        self.assertFalse(r.match(u'xxx'))
        # Tests that end is inclusive
        r = re.compile(util.unirange(0x10000, 0x10000) + '+')
        # Tests that the plus works for the entire unicode point, if narrow
        # build
        m = r.match(first_non_bmp * 2)
        self.assertTrue(m)
        self.assertEqual(m.end(), len(first_non_bmp) * 2)

    def test_format_lines(self):
        lst = ['cat', 'dog']
        output = util.format_lines('var', lst)
        d = {}
        exec(output, d)
        self.assertTrue(isinstance(d['var'], tuple))
        self.assertEqual(('cat', 'dog'), d['var'])

    def test_duplicates_removed_seq_types(self):
        # tuple
        x = util.duplicates_removed(('a', 'a', 'b'))
        self.assertEqual(['a', 'b'], x)
        # list
        x = util.duplicates_removed(['a', 'a', 'b'])
        self.assertEqual(['a', 'b'], x)
        # iterator
        x = util.duplicates_removed(iter(('a', 'a', 'b')))
        self.assertEqual(['a', 'b'], x)

    def test_duplicates_removed_nonconsecutive(self):
        # keeps first
        x = util.duplicates_removed(('a', 'b', 'a'))
        self.assertEqual(['a', 'b'], x)

    def test_guess_decode(self):
        # UTF-8 should be decoded as UTF-8
        s = util.guess_decode(u'\xff'.encode('utf-8'))
        self.assertEqual(s, (u'\xff', 'utf-8'))

        # otherwise, it could be latin1 or the locale encoding...
        import locale
        s = util.guess_decode(b'\xff')
        self.assertTrue(s[1] in ('latin1', locale.getpreferredencoding()))

    def test_guess_decode_from_terminal(self):
        class Term:
            encoding = 'utf-7'

        s = util.guess_decode_from_terminal(u'\xff'.encode('utf-7'), Term)
        self.assertEqual(s, (u'\xff', 'utf-7'))

        s = util.guess_decode_from_terminal(u'\xff'.encode('utf-8'), Term)
        self.assertEqual(s, (u'\xff', 'utf-8'))

    def test_add_metaclass(self):
        class Meta(type):
            pass

        @util.add_metaclass(Meta)
        class Cls:
            pass

        self.assertEqual(type(Cls), Meta)


class ConsoleTest(unittest.TestCase):

    def test_ansiformat(self):
        f = console.ansiformat
        c = console.codes
        all_attrs = f('+*_blue_*+', 'text')
        self.assertTrue(c['blue'] in all_attrs and c['blink'] in all_attrs
                        and c['bold'] in all_attrs and c['underline'] in all_attrs
                        and c['reset'] in all_attrs)
        self.assertRaises(KeyError, f, '*mauve*', 'text')

    def test_functions(self):
        self.assertEqual(console.reset_color(), console.codes['reset'])
        self.assertEqual(console.colorize('blue', 'text'),
                         console.codes['blue'] + 'text' + console.codes['reset'])
Pygments-2.3.1/tests/support/0000755000175000017500000000000013405476655015274 5ustar  piotrpiotrPygments-2.3.1/tests/support/tags0000644000175000017500000000354113376260540016147 0ustar  piotrpiotr!_TAG_FILE_FORMAT	2	/extended format; --format=1 will not append ;" to lines/
!_TAG_FILE_SORTED	1	/0=unsorted, 1=sorted, 2=foldcase/
!_TAG_PROGRAM_AUTHOR	Darren Hiebert	/dhiebert@users.sourceforge.net/
!_TAG_PROGRAM_NAME	Exuberant Ctags	//
!_TAG_PROGRAM_URL	http://ctags.sourceforge.net	/official site/
!_TAG_PROGRAM_VERSION	5.8	//
HtmlFormatter	test_html_formatter.py	19;"	i
HtmlFormatterTest	test_html_formatter.py	34;"	c
NullFormatter	test_html_formatter.py	19;"	i
PythonLexer	test_html_formatter.py	18;"	i
StringIO	test_html_formatter.py	13;"	i
dirname	test_html_formatter.py	16;"	i
escape_html	test_html_formatter.py	20;"	i
fp	test_html_formatter.py	27;"	v
inspect	test_html_formatter.py	15;"	i
isfile	test_html_formatter.py	16;"	i
join	test_html_formatter.py	16;"	i
os	test_html_formatter.py	10;"	i
re	test_html_formatter.py	11;"	i
subprocess	test_html_formatter.py	125;"	i
support	test_html_formatter.py	23;"	i
tempfile	test_html_formatter.py	14;"	i
test_all_options	test_html_formatter.py	72;"	m	class:HtmlFormatterTest
test_correct_output	test_html_formatter.py	35;"	m	class:HtmlFormatterTest
test_ctags	test_html_formatter.py	165;"	m	class:HtmlFormatterTest
test_external_css	test_html_formatter.py	48;"	m	class:HtmlFormatterTest
test_get_style_defs	test_html_formatter.py	141;"	m	class:HtmlFormatterTest
test_lineanchors	test_html_formatter.py	98;"	m	class:HtmlFormatterTest
test_lineanchors_with_startnum	test_html_formatter.py	106;"	m	class:HtmlFormatterTest
test_linenos	test_html_formatter.py	82;"	m	class:HtmlFormatterTest
test_linenos_with_startnum	test_html_formatter.py	90;"	m	class:HtmlFormatterTest
test_unicode_options	test_html_formatter.py	155;"	m	class:HtmlFormatterTest
test_valid_output	test_html_formatter.py	114;"	m	class:HtmlFormatterTest
tokensource	test_html_formatter.py	29;"	v
uni_open	test_html_formatter.py	21;"	i
unittest	test_html_formatter.py	12;"	i
Pygments-2.3.1/tests/support/html_formatter.py0000644000175000017500000000021413376260540020661 0ustar  piotrpiotr# -*- coding: utf-8 -*-
from pygments.formatters import HtmlFormatter


class HtmlFormatterWrapper(HtmlFormatter):
    name = 'HtmlWrapper'
Pygments-2.3.1/tests/support/python_lexer.py0000644000175000017500000000041313376260540020353 0ustar  piotrpiotr# -*- coding: utf-8 -*-
# pygments.lexers.python (as CustomLexer) for test_cmdline.py

from pygments.lexers import PythonLexer


class CustomLexer(PythonLexer):
    name = 'PythonLexerWrapper'


class LexerWrapper(CustomLexer):
    name = 'PythonLexerWrapperWrapper'
Pygments-2.3.1/tests/support/empty.py0000644000175000017500000000003013376260540016764 0ustar  piotrpiotr# -*- coding: utf-8 -*-
Pygments-2.3.1/tests/test_basic_api.py0000644000175000017500000002664213376260540017104 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments basic API tests
    ~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import random
import unittest

from pygments import lexers, formatters, lex, format
from pygments.token import _TokenType, Text
from pygments.lexer import RegexLexer
from pygments.formatters.img import FontNotFound
from pygments.util import text_type, StringIO, BytesIO, xrange, ClassNotFound

import support

TESTFILE, TESTDIR = support.location(__file__)

test_content = [chr(i) for i in xrange(33, 128)] * 5
random.shuffle(test_content)
test_content = ''.join(test_content) + '\n'


def test_lexer_instantiate_all():
    # instantiate every lexer, to see if the token type defs are correct
    def verify(name):
        getattr(lexers, name)
    for x in lexers.LEXERS:
        yield verify, x


def test_lexer_classes():
    # test that every lexer class has the correct public API
    def verify(cls):
        assert type(cls.name) is str
        for attr in 'aliases', 'filenames', 'alias_filenames', 'mimetypes':
            assert hasattr(cls, attr)
            assert type(getattr(cls, attr)) is list, \
                "%s: %s attribute wrong" % (cls, attr)
        result = cls.analyse_text("abc")
        assert isinstance(result, float) and 0.0 <= result <= 1.0
        result = cls.analyse_text(".abc")
        assert isinstance(result, float) and 0.0 <= result <= 1.0

        assert all(al.lower() == al for al in cls.aliases)

        inst = cls(opt1="val1", opt2="val2")
        if issubclass(cls, RegexLexer):
            if not hasattr(cls, '_tokens'):
                # if there's no "_tokens", the lexer has to be one with
                # multiple tokendef variants
                assert cls.token_variants
                for variant in cls.tokens:
                    assert 'root' in cls.tokens[variant]
            else:
                assert 'root' in cls._tokens, \
                       '%s has no root state' % cls

        if cls.name in ['XQuery', 'Opa']:   # XXX temporary
            return

        try:
            tokens = list(inst.get_tokens(test_content))
        except KeyboardInterrupt:
            raise KeyboardInterrupt(
                'interrupted %s.get_tokens(): test_content=%r' %
                (cls.__name__, test_content))
        txt = ""
        for token in tokens:
            assert isinstance(token, tuple)
            assert isinstance(token[0], _TokenType)
            assert isinstance(token[1], text_type)
            txt += token[1]
        assert txt == test_content, "%s lexer roundtrip failed: %r != %r" % \
            (cls.name, test_content, txt)

    for lexer in lexers._iter_lexerclasses(plugins=False):
        yield verify, lexer


def test_lexer_options():
    # test that the basic options work
    def ensure(tokens, output):
        concatenated = ''.join(token[1] for token in tokens)
        assert concatenated == output, \
            '%s: %r != %r' % (lexer, concatenated, output)

    def verify(cls):
        inst = cls(stripnl=False)
        ensure(inst.get_tokens('a\nb'), 'a\nb\n')
        ensure(inst.get_tokens('\n\n\n'), '\n\n\n')
        inst = cls(stripall=True)
        ensure(inst.get_tokens('   \n  b\n\n\n'), 'b\n')
        # some lexers require full lines in input
        if ('ConsoleLexer' not in cls.__name__ and
            'SessionLexer' not in cls.__name__ and
            not cls.__name__.startswith('Literate') and
            cls.__name__ not in ('ErlangShellLexer', 'RobotFrameworkLexer')):
            inst = cls(ensurenl=False)
            ensure(inst.get_tokens('a\nb'), 'a\nb')
            inst = cls(ensurenl=False, stripall=True)
            ensure(inst.get_tokens('a\nb\n\n'), 'a\nb')

    for lexer in lexers._iter_lexerclasses(plugins=False):
        if lexer.__name__ == 'RawTokenLexer':
            # this one is special
            continue
        yield verify, lexer


def test_get_lexers():
    # test that the lexers functions work
    def verify(func, args):
        x = func(opt='val', *args)
        assert isinstance(x, lexers.PythonLexer)
        assert x.options["opt"] == "val"

    for func, args in [(lexers.get_lexer_by_name, ("python",)),
                       (lexers.get_lexer_for_filename, ("test.py",)),
                       (lexers.get_lexer_for_mimetype, ("text/x-python",)),
                       (lexers.guess_lexer, ("#!/usr/bin/python -O\nprint",)),
                       (lexers.guess_lexer_for_filename, ("a.py", "<%= @foo %>"))
                       ]:
        yield verify, func, args

    for cls, (_, lname, aliases, _, mimetypes) in lexers.LEXERS.items():
        assert cls == lexers.find_lexer_class(lname).__name__

        for alias in aliases:
            assert cls == lexers.get_lexer_by_name(alias).__class__.__name__

        for mimetype in mimetypes:
            assert cls == lexers.get_lexer_for_mimetype(mimetype).__class__.__name__

    try:
        lexers.get_lexer_by_name(None)
    except ClassNotFound:
        pass
    else:
        raise Exception


def test_formatter_public_api():
    # test that every formatter class has the correct public API
    ts = list(lexers.PythonLexer().get_tokens("def f(): pass"))
    string_out = StringIO()
    bytes_out = BytesIO()

    def verify(formatter):
        info = formatters.FORMATTERS[formatter.__name__]
        assert len(info) == 5
        assert info[1], "missing formatter name"
        assert info[2], "missing formatter aliases"
        assert info[4], "missing formatter docstring"

        try:
            inst = formatter(opt1="val1")
        except (ImportError, FontNotFound) as e:
            raise support.SkipTest(e)

        try:
            inst.get_style_defs()
        except NotImplementedError:
            # may be raised by formatters for which it doesn't make sense
            pass

        if formatter.unicodeoutput:
            inst.format(ts, string_out)
        else:
            inst.format(ts, bytes_out)

    for name in formatters.FORMATTERS:
        formatter = getattr(formatters, name)
        yield verify, formatter


def test_formatter_encodings():
    from pygments.formatters import HtmlFormatter

    # unicode output
    fmt = HtmlFormatter()
    tokens = [(Text, u"ä")]
    out = format(tokens, fmt)
    assert type(out) is text_type
    assert u"ä" in out

    # encoding option
    fmt = HtmlFormatter(encoding="latin1")
    tokens = [(Text, u"ä")]
    assert u"ä".encode("latin1") in format(tokens, fmt)

    # encoding and outencoding option
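    # (outencoding overrides encoding for the output step, so the result below
    # is UTF-8 even though the formatter's encoding option says latin1)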
    fmt = HtmlFormatter(encoding="latin1", outencoding="utf8")
    tokens = [(Text, u"ä")]
    assert u"ä".encode("utf8") in format(tokens, fmt)


def test_formatter_unicode_handling():
    # test that the formatter supports encoding and Unicode
    tokens = list(lexers.PythonLexer(encoding='utf-8').
                  get_tokens("def f(): 'ä'"))

    def verify(formatter):
        try:
            inst = formatter(encoding=None)
        except (ImportError, FontNotFound) as e:
            # some dependency or font not installed
            raise support.SkipTest(e)

        if formatter.name != 'Raw tokens':
            out = format(tokens, inst)
            if formatter.unicodeoutput:
                assert type(out) is text_type, '%s: %r' % (formatter, out)

            inst = formatter(encoding='utf-8')
            out = format(tokens, inst)
            assert type(out) is bytes, '%s: %r' % (formatter, out)
            # Cannot test for encoding, since formatters may have to escape
            # non-ASCII characters.
        else:
            inst = formatter()
            out = format(tokens, inst)
            assert type(out) is bytes, '%s: %r' % (formatter, out)

    for formatter, info in formatters.FORMATTERS.items():
        # this tests the automatic importing as well
        fmter = getattr(formatters, formatter)
        yield verify, fmter


def test_get_formatters():
    # test that the formatters functions work
    x = formatters.get_formatter_by_name("html", opt="val")
    assert isinstance(x, formatters.HtmlFormatter)
    assert x.options["opt"] == "val"

    x = formatters.get_formatter_for_filename("a.html", opt="val")
    assert isinstance(x, formatters.HtmlFormatter)
    assert x.options["opt"] == "val"


def test_styles():
    # minimal style test
    from pygments.formatters import HtmlFormatter
    HtmlFormatter(style="pastie")


def test_bare_class_handler():
    from pygments.formatters import HtmlFormatter
    from pygments.lexers import PythonLexer
    try:
        lex('test\n', PythonLexer)
    except TypeError as e:
        assert 'lex() argument must be a lexer instance' in str(e)
    else:
        assert False, 'nothing raised'
    try:
        format([], HtmlFormatter)
    except TypeError as e:
        assert 'format() argument must be a formatter instance' in str(e)
    else:
        assert False, 'nothing raised'


class FiltersTest(unittest.TestCase):

    def test_basic(self):
        filters_args = [
            ('whitespace', {'spaces': True, 'tabs': True, 'newlines': True}),
            ('whitespace', {'wstokentype': False, 'spaces': True}),
            ('highlight', {'names': ['isinstance', 'lexers', 'x']}),
            ('codetagify', {'codetags': 'API'}),
            ('keywordcase', {'case': 'capitalize'}),
            ('raiseonerror', {}),
            ('gobble', {'n': 4}),
            ('tokenmerge', {}),
        ]
        for x, args in filters_args:
            lx = lexers.PythonLexer()
            lx.add_filter(x, **args)
            with open(TESTFILE, 'rb') as fp:
                text = fp.read().decode('utf-8')
            tokens = list(lx.get_tokens(text))
            self.assertTrue(all(isinstance(t[1], text_type)
                                for t in tokens),
                            '%s filter did not return Unicode' % x)
            roundtext = ''.join([t[1] for t in tokens])
            if x not in ('whitespace', 'keywordcase', 'gobble'):
                # these filters change the text
                self.assertEqual(roundtext, text,
                                 "lexer roundtrip with %s filter failed" % x)

    def test_raiseonerror(self):
        lx = lexers.PythonLexer()
        lx.add_filter('raiseonerror', excclass=RuntimeError)
        self.assertRaises(RuntimeError, list, lx.get_tokens('$'))

    def test_whitespace(self):
        lx = lexers.PythonLexer()
        lx.add_filter('whitespace', spaces='%')
        with open(TESTFILE, 'rb') as fp:
            text = fp.read().decode('utf-8')
        lxtext = ''.join([t[1] for t in list(lx.get_tokens(text))])
        self.assertFalse(' ' in lxtext)

    def test_keywordcase(self):
        lx = lexers.PythonLexer()
        lx.add_filter('keywordcase', case='capitalize')
        with open(TESTFILE, 'rb') as fp:
            text = fp.read().decode('utf-8')
        lxtext = ''.join([t[1] for t in list(lx.get_tokens(text))])
        self.assertTrue('Def' in lxtext and 'Class' in lxtext)

    def test_codetag(self):
        lx = lexers.PythonLexer()
        lx.add_filter('codetagify')
        text = u'# BUG: text'
        tokens = list(lx.get_tokens(text))
        self.assertEqual('# ', tokens[0][1])
        self.assertEqual('BUG', tokens[1][1])

    def test_codetag_boundary(self):
        # ticket #368
        lx = lexers.PythonLexer()
        lx.add_filter('codetagify')
        text = u'# DEBUG: text'
        tokens = list(lx.get_tokens(text))
        self.assertEqual('# DEBUG: text', tokens[0][1])
Pygments-2.3.1/tests/test_unistring.py0000644000175000017500000000265513376260540017212 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Test suite for the unistring module
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import unittest
import random

from pygments import unistring as uni
from pygments.util import unichr


class UnistringTest(unittest.TestCase):
    def test_cats_exist_and_compilable(self):
        for cat in uni.cats:
            s = getattr(uni, cat)
            if s == '':  # Probably Cs on Jython
                continue
            print("%s %r" % (cat, s))
            re.compile('[%s]' % s)

    def _cats_that_match(self, c):
        matching_cats = []
        for cat in uni.cats:
            s = getattr(uni, cat)
            if s == '':  # Probably Cs on Jython
                continue
            if re.compile('[%s]' % s).match(c):
                matching_cats.append(cat)
        return matching_cats

    def test_spot_check_types(self):
        # Each char should match one, and precisely one, category
        random.seed(0)
        for i in range(1000):
            o = random.randint(0, 65535)
            c = unichr(o)
            if o > 0xd800 and o <= 0xdfff and not uni.Cs:
                continue  # Bah, Jython.
            print(hex(o))
            cats = self._cats_that_match(c)
            self.assertEqual(len(cats), 1,
                             "%d (%s): %s" % (o, c, cats))
Pygments-2.3.1/tests/test_markdown_lexer.py0000644000175000017500000000157513376303451020210 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments Markdown lexer tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
import unittest

from pygments.lexers.markup import MarkdownLexer


class SameTextTests(unittest.TestCase):

    lexer = MarkdownLexer()

    def assert_same_text(self, text):
        """Show that lexed markdown does not remove any content. """
        tokens = list(self.lexer.get_tokens_unprocessed(text))
        output = ''.join(t[2] for t in tokens)
        self.assertEqual(text, output)

    def test_code_fence(self):
        self.assert_same_text(r'```\nfoo\n```\n')

    def test_code_fence_gsm(self):
        self.assert_same_text(r'```markdown\nfoo\n```\n')

    def test_code_fence_gsm_with_no_lexer(self):
        self.assert_same_text(r'```invalid-lexer\nfoo\n```\n')
Pygments-2.3.1/tests/test_perllexer.py0000644000175000017500000001424113376260540017164 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments regex lexer tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import time
import unittest

from pygments.token import Keyword, Name, String, Text
from pygments.lexers.perl import PerlLexer


class RunawayRegexTest(unittest.TestCase):
    # A previous version of the Perl lexer would spend a great deal of
    # time backtracking when given particular strings.  These tests show that
    # the runaway backtracking doesn't happen any more (at least for the given
    # cases).
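    # A typical pathological input is "'" + "\\" * 999 (an opening delimiter
    # followed by hundreds of backslashes and no closing delimiter), which can
    # make an unguarded (\\.|[^'])* style pattern backtrack exponentially;
    # assert_fast_tokenization() below guards against such regressions with a
    # coarse time limit.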

    lexer = PerlLexer()

    ### Test helpers.

    def assert_single_token(self, s, token):
        """Show that a given string generates only one token."""
        tokens = list(self.lexer.get_tokens_unprocessed(s))
        self.assertEqual(len(tokens), 1, tokens)
        self.assertEqual(s, tokens[0][2])
        self.assertEqual(token, tokens[0][1])

    def assert_tokens(self, strings, expected_tokens):
        """Show that a given string generates the expected tokens."""
        tokens = list(self.lexer.get_tokens_unprocessed(''.join(strings)))
        self.assertEqual(len(tokens), len(expected_tokens), tokens)
        for index, s in enumerate(strings):
            self.assertEqual(s, tokens[index][2])
            self.assertEqual(expected_tokens[index], tokens[index][1])

    def assert_fast_tokenization(self, s):
        """Show that a given string is tokenized quickly."""
        start = time.time()
        tokens = list(self.lexer.get_tokens_unprocessed(s))
        end = time.time()
        # Isn't 10 seconds kind of a long time?  Yes, but we don't want false
        # positives when the tests are starved for CPU time.
        if end-start > 10:
            self.fail('tokenization took too long')
        return tokens

    ### Strings.

    def test_single_quote_strings(self):
        self.assert_single_token(r"'foo\tbar\\\'baz'", String)
        self.assert_fast_tokenization("'" + '\\'*999)

    def test_double_quote_strings(self):
        self.assert_single_token(r'"foo\tbar\\\"baz"', String)
        self.assert_fast_tokenization('"' + '\\'*999)

    def test_backtick_strings(self):
        self.assert_single_token(r'`foo\tbar\\\`baz`', String.Backtick)
        self.assert_fast_tokenization('`' + '\\'*999)

    ### Regex matches with various delimiters.

    def test_match(self):
        self.assert_single_token(r'/aa\tbb/', String.Regex)
        self.assert_fast_tokenization('/' + '\\'*999)

    def test_match_with_slash(self):
        self.assert_tokens(['m', '/\n\\t\\\\/'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m/xxx\n' + '\\'*999)

    def test_match_with_bang(self):
        self.assert_tokens(['m', r'!aa\t\!bb!'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m!' + '\\'*999)

    def test_match_with_brace(self):
        self.assert_tokens(['m', r'{aa\t\}bb}'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m{' + '\\'*999)

    def test_match_with_angle_brackets(self):
        self.assert_tokens(['m', r'<aa\t\>bb>'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m<' + '\\'*999)

    def test_match_with_parenthesis(self):
        self.assert_tokens(['m', r'(aa\t\)bb)'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m(' + '\\'*999)

    def test_match_with_at_sign(self):
        self.assert_tokens(['m', r'@aa\t\@bb@'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m@' + '\\'*999)

    def test_match_with_percent_sign(self):
        self.assert_tokens(['m', r'%aa\t\%bb%'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m%' + '\\'*999)

    def test_match_with_dollar_sign(self):
        self.assert_tokens(['m', r'$aa\t\$bb$'], [String.Regex, String.Regex])
        self.assert_fast_tokenization('m$' + '\\'*999)

    ### Regex substitutions with various delimiters.

    def test_substitution_with_slash(self):
        self.assert_single_token('s/aaa/bbb/g', String.Regex)
        self.assert_fast_tokenization('s/foo/' + '\\'*999)

    def test_substitution_with_at_sign(self):
        self.assert_single_token(r's@aaa@bbb@g', String.Regex)
        self.assert_fast_tokenization('s@foo@' + '\\'*999)

    def test_substitution_with_percent_sign(self):
        self.assert_single_token(r's%aaa%bbb%g', String.Regex)
        self.assert_fast_tokenization('s%foo%' + '\\'*999)

    def test_substitution_with_brace(self):
        self.assert_single_token(r's{aaa}', String.Regex)
        self.assert_fast_tokenization('s{' + '\\'*999)

    def test_substitution_with_angle_bracket(self):
        self.assert_single_token(r's<aaa>', String.Regex)
        self.assert_fast_tokenization('s<' + '\\'*999)

    def test_substitution_with_square_bracket(self):
        self.assert_single_token(r's[aaa]', String.Regex)
        self.assert_fast_tokenization('s[' + '\\'*999)

    def test_substitution_with_parenthesis(self):
        self.assert_single_token(r's(aaa)', String.Regex)
        self.assert_fast_tokenization('s(' + '\\'*999)

    ### Namespaces/modules

    def test_package_statement(self):
        self.assert_tokens(['package', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
        self.assert_tokens(['package', '  ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])

    def test_use_statement(self):
        self.assert_tokens(['use', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
        self.assert_tokens(['use', '  ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])

    def test_no_statement(self):
        self.assert_tokens(['no', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
        self.assert_tokens(['no', '  ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])

    def test_require_statement(self):
        self.assert_tokens(['require', ' ', 'Foo'], [Keyword, Text, Name.Namespace])
        self.assert_tokens(['require', '  ', 'Foo::Bar'], [Keyword, Text, Name.Namespace])
        self.assert_tokens(['require', ' ', '"Foo/Bar.pm"'], [Keyword, Text, String])

Pygments-2.3.1/tests/test_whiley.py0000644000175000017500000000137113376260540016463 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Whiley Test
    ~~~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import WhileyLexer
from pygments.token import Token


class WhileyTest(unittest.TestCase):
    def setUp(self):
        self.lexer = WhileyLexer()

    def testWhileyOperator(self):
        fragment = u'123 \u2200 x\n'
        tokens = [
            (Token.Literal.Number.Integer, u'123'),
            (Token.Text, u' '),
            (Token.Operator, u'\u2200'),
            (Token.Text, u' '),
            (Token.Name, u'x'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_csound.py0000644000175000017500000003774713377321073016475 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Csound lexer tests
    ~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest
from textwrap import dedent

from pygments.token import Comment, Error, Keyword, Name, Number, Operator, Punctuation, \
    String, Text
from pygments.lexers import CsoundOrchestraLexer


class CsoundOrchestraTest(unittest.TestCase):

    def setUp(self):
        self.lexer = CsoundOrchestraLexer()
        self.maxDiff = None

    def testComments(self):
        fragment = dedent('''\
            /*
             * comment
             */
            ; comment
            // comment
        ''')
        tokens = [
            (Comment.Multiline, u'/*\n * comment\n */'),
            (Text, u'\n'),
            (Comment.Single, u'; comment'),
            (Text, u'\n'),
            (Comment.Single, u'// comment'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testInstrumentBlocks(self):
        fragment = dedent('''\
            instr/**/1,/**/N_a_M_e_,/**/+Name/**///
              iDuration = p3
              outc:a(aSignal)
            endin
        ''')
        tokens = [
            (Keyword.Declaration, u'instr'),
            (Comment.Multiline, u'/**/'),
            (Name.Function, u'1'),
            (Punctuation, u','),
            (Comment.Multiline, u'/**/'),
            (Name.Function, u'N_a_M_e_'),
            (Punctuation, u','),
            (Comment.Multiline, u'/**/'),
            (Punctuation, u'+'),
            (Name.Function, u'Name'),
            (Comment.Multiline, u'/**/'),
            (Comment.Single, u'//'),
            (Text, u'\n'),
            (Text, u'  '),
            (Keyword.Type, u'i'),
            (Name, u'Duration'),
            (Text, u' '),
            (Operator, u'='),
            (Text, u' '),
            (Name.Variable.Instance, u'p3'),
            (Text, u'\n'),
            (Text, u'  '),
            (Name.Builtin, u'outc'),
            (Punctuation, u':'),
            (Keyword.Type, u'a'),
            (Punctuation, u'('),
            (Keyword.Type, u'a'),
            (Name, u'Signal'),
            (Punctuation, u')'),
            (Text, u'\n'),
            (Keyword.Declaration, u'endin'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testUserDefinedOpcodes(self):
        fragment = dedent('''\
            opcode/**/aUDO,/**/i[],/**/aik//
              aUDO
            endop
        ''')
        tokens = [
            (Keyword.Declaration, u'opcode'),
            (Comment.Multiline, u'/**/'),
            (Name.Function, u'aUDO'),
            (Punctuation, u','),
            (Comment.Multiline, u'/**/'),
            (Keyword.Type, u'i[]'),
            (Punctuation, u','),
            (Comment.Multiline, u'/**/'),
            (Keyword.Type, u'aik'),
            (Comment.Single, u'//'),
            (Text, u'\n'),
            (Text, u'  '),
            (Name.Function, u'aUDO'),
            (Text, u'\n'),
            (Keyword.Declaration, u'endop'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testNumbers(self):
        fragment = '123 0123456789'
        tokens = [
            (Number.Integer, u'123'),
            (Text, u' '),
            (Number.Integer, u'0123456789'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        fragment = '0xabcdef0123456789 0XABCDEF'
        tokens = [
            (Keyword.Type, u'0x'),
            (Number.Hex, u'abcdef0123456789'),
            (Text, u' '),
            (Keyword.Type, u'0X'),
            (Number.Hex, u'ABCDEF'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        fragments = ['1e2', '3e+4', '5e-6', '7E8', '9E+0', '1E-2', '3.', '4.56', '.789']
        for fragment in fragments:
            tokens = [
                (Number.Float, fragment),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testQuotedStrings(self):
        fragment = '"characters$MACRO."'
        tokens = [
            (String, u'"'),
            (String, u'characters'),
            (Comment.Preproc, u'$MACRO.'),
            (String, u'"'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testBracedStrings(self):
        fragment = dedent('''\
            {{
            characters$MACRO.
            }}
        ''')
        tokens = [
            (String, u'{{'),
            (String, u'\ncharacters$MACRO.\n'),
            (String, u'}}'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testEscapeSequences(self):
        for character in ['\\', 'a', 'b', 'n', 'r', 't', '"', '012', '345', '67']:
            escapedCharacter = '\\' + character
            fragment = '"' + escapedCharacter + '"'
            tokens = [
                (String, u'"'),
                (String.Escape, escapedCharacter),
                (String, u'"'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
            fragment = '{{' + escapedCharacter + '}}'
            tokens = [
                (String, u'{{'),
                (String.Escape, escapedCharacter),
                (String, u'}}'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testOperators(self):
        fragments = ['+', '-', '~', u'¬', '!', '*', '/', '^', '%', '<<', '>>', '<', '>',
                     '<=', '>=', '==', '!=', '&', '#', '|', '&&', '||', '?', ':', '+=',
                     '-=', '*=', '/=']
        for fragment in fragments:
            tokens = [
                (Operator, fragment),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testGlobalValueIdentifiers(self):
        for fragment in ['0dbfs', 'A4', 'kr', 'ksmps', 'nchnls', 'nchnls_i', 'sr']:
            tokens = [
                (Name.Variable.Global, fragment),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testKeywords(self):
        fragments = ['do', 'else', 'elseif', 'endif', 'enduntil', 'fi', 'if', 'ithen',
                     'kthen', 'od', 'then', 'until', 'while']
        for fragment in fragments:
            tokens = [
                (Keyword, fragment),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        for fragment in ['return', 'rireturn']:
            tokens = [
                (Keyword.Pseudo, fragment),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testLabels(self):
        fragment = dedent('''\
            aLabel:
             label2:
        ''')
        tokens = [
            (Name.Label, u'aLabel'),
            (Punctuation, u':'),
            (Text, u'\n'),
            (Text, u' '),
            (Name.Label, u'label2'),
            (Punctuation, u':'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testPrintksAndPrintsEscapeSequences(self):
        escapedCharacters = ['%!', '%%', '%n', '%N', '%r', '%R', '%t', '%T', '\\\\a',
                             '\\\\A', '\\\\b', '\\\\B', '\\\\n', '\\\\N', '\\\\r',
                             '\\\\R', '\\\\t', '\\\\T']
        for opcode in ['printks', 'prints']:
            for escapedCharacter in escapedCharacters:
                fragment = opcode + ' "' + escapedCharacter + '"'
                tokens = [
                    (Name.Builtin, opcode),
                    (Text, u' '),
                    (String, u'"'),
                    (String.Escape, escapedCharacter),
                    (String, u'"'),
                    (Text, u'\n')
                ]
                self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testGotoStatements(self):
        for keyword in ['goto', 'igoto', 'kgoto']:
            fragment = keyword + ' aLabel'
            tokens = [
                (Keyword, keyword),
                (Text, u' '),
                (Name.Label, u'aLabel'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        for opcode in ['reinit', 'rigoto', 'tigoto']:
            fragment = opcode + ' aLabel'
            tokens = [
                (Keyword.Pseudo, opcode),
                (Text, u' '),
                (Name.Label, u'aLabel'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        for opcode in ['cggoto', 'cigoto', 'cingoto', 'ckgoto', 'cngoto', 'cnkgoto']:
            fragment = opcode + ' 1==0, aLabel'
            tokens = [
                (Keyword.Pseudo, opcode),
                (Text, u' '),
                (Number.Integer, u'1'),
                (Operator, u'=='),
                (Number.Integer, u'0'),
                (Punctuation, u','),
                (Text, u' '),
                (Name.Label, u'aLabel'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        fragment = 'timout 0, 0, aLabel'
        tokens = [
            (Keyword.Pseudo, 'timout'),
            (Text, u' '),
            (Number.Integer, u'0'),
            (Punctuation, u','),
            (Text, u' '),
            (Number.Integer, u'0'),
            (Punctuation, u','),
            (Text, u' '),
            (Name.Label, u'aLabel'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
        for opcode in ['loop_ge', 'loop_gt', 'loop_le', 'loop_lt']:
            fragment = opcode + ' 0, 0, 0, aLabel'
            tokens = [
                (Keyword.Pseudo, opcode),
                (Text, u' '),
                (Number.Integer, u'0'),
                (Punctuation, u','),
                (Text, u' '),
                (Number.Integer, u'0'),
                (Punctuation, u','),
                (Text, u' '),
                (Number.Integer, u'0'),
                (Punctuation, u','),
                (Text, u' '),
                (Name.Label, u'aLabel'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testIncludeDirectives(self):
        for character in ['"', '|']:
            fragment = '#include/**/' + character + 'file.udo' + character
            tokens = [
                (Comment.Preproc, u'#include'),
                (Comment.Multiline, u'/**/'),
                (String, character + u'file.udo' + character),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testObjectLikeMacroDefinitions(self):
        fragment = dedent('''\
            # \tdefine MACRO#macro_body#
            #define/**/
            MACRO/**/
            #\\#macro
            body\\##
        ''')
        tokens = [
            (Comment.Preproc, u'# \tdefine'),
            (Text, u' '),
            (Comment.Preproc, u'MACRO'),
            (Punctuation, u'#'),
            (Comment.Preproc, u'macro_body'),
            (Punctuation, u'#'),
            (Text, u'\n'),
            (Comment.Preproc, u'#define'),
            (Comment.Multiline, u'/**/'),
            (Text, u'\n'),
            (Comment.Preproc, u'MACRO'),
            (Comment.Multiline, u'/**/'),
            (Text, u'\n'),
            (Punctuation, u'#'),
            (Comment.Preproc, u'\\#'),
            (Comment.Preproc, u'macro\nbody'),
            (Comment.Preproc, u'\\#'),
            (Punctuation, u'#'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testFunctionLikeMacroDefinitions(self):
        fragment = dedent('''\
            #define MACRO(ARG1#ARG2) #macro_body#
            #define/**/
            MACRO(ARG1'ARG2' ARG3)/**/
            #\\#macro
            body\\##
        ''')
        tokens = [
            (Comment.Preproc, u'#define'),
            (Text, u' '),
            (Comment.Preproc, u'MACRO'),
            (Punctuation, u'('),
            (Comment.Preproc, u'ARG1'),
            (Punctuation, u'#'),
            (Comment.Preproc, u'ARG2'),
            (Punctuation, u')'),
            (Text, u' '),
            (Punctuation, u'#'),
            (Comment.Preproc, u'macro_body'),
            (Punctuation, u'#'),
            (Text, u'\n'),
            (Comment.Preproc, u'#define'),
            (Comment.Multiline, u'/**/'),
            (Text, u'\n'),
            (Comment.Preproc, u'MACRO'),
            (Punctuation, u'('),
            (Comment.Preproc, u'ARG1'),
            (Punctuation, u"'"),
            (Comment.Preproc, u'ARG2'),
            (Punctuation, u"'"),
            (Text, u' '),
            (Comment.Preproc, u'ARG3'),
            (Punctuation, u')'),
            (Comment.Multiline, u'/**/'),
            (Text, u'\n'),
            (Punctuation, u'#'),
            (Comment.Preproc, u'\\#'),
            (Comment.Preproc, u'macro\nbody'),
            (Comment.Preproc, u'\\#'),
            (Punctuation, u'#'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testMacroPreprocessorDirectives(self):
        for directive in ['#ifdef', '#ifndef', '#undef']:
            fragment = directive + ' MACRO'
            tokens = [
                (Comment.Preproc, directive),
                (Text, u' '),
                (Comment.Preproc, u'MACRO'),
                (Text, u'\n')
            ]
            self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testOtherPreprocessorDirectives(self):
        fragment = dedent('''\
            #else
            #end
            #endif
            ###
            @ \t12345
            @@ \t67890
        ''')
        tokens = [
            (Comment.Preproc, u'#else'),
            (Text, u'\n'),
            (Comment.Preproc, u'#end'),
            (Text, u'\n'),
            (Comment.Preproc, u'#endif'),
            (Text, u'\n'),
            (Comment.Preproc, u'###'),
            (Text, u'\n'),
            (Comment.Preproc, u'@ \t12345'),
            (Text, u'\n'),
            (Comment.Preproc, u'@@ \t67890'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testFunctionLikeMacros(self):
        fragment = "$MACRO.(((x#y\\)))' \"(#'x)\\)x\\))\"# {{x\\))x)\\)(#'}});"
        tokens = [
            (Comment.Preproc, u'$MACRO.'),
            (Punctuation, u'('),
            (Comment.Preproc, u'('),
            (Comment.Preproc, u'('),
            (Comment.Preproc, u'x#y\\)'),
            (Comment.Preproc, u')'),
            (Comment.Preproc, u')'),
            (Punctuation, u"'"),
            (Comment.Preproc, u' '),
            (String, u'"'),
            (Error, u'('),
            (Error, u'#'),
            (Error, u"'"),
            (String, u'x'),
            (Error, u')'),
            (Comment.Preproc, u'\\)'),
            (String, u'x'),
            (Comment.Preproc, u'\\)'),
            (Error, u')'),
            (String, u'"'),
            (Punctuation, u'#'),
            (Comment.Preproc, u' '),
            (String, u'{{'),
            (String, u'x'),
            (Comment.Preproc, u'\\)'),
            (Error, u')'),
            (String, u'x'),
            (Error, u')'),
            (Comment.Preproc, u'\\)'),
            (Error, u'('),
            (Error, u'#'),
            (Error, u"'"),
            (String, u'}}'),
            (Punctuation, u')'),
            (Comment.Single, u';'),
            (Text, u'\n')
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_javascript.py0000644000175000017500000000506213376260540017331 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Javascript tests
    ~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import CoffeeScriptLexer
from pygments.token import Token

COFFEE_SLASH_GOLDEN = [
    # input_str, slashes_are_regex_here
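    # Whether a '/' starts a regex literal or is a division operator depends
    # on what precedes it, so each entry records the expected reading for all
    # slashes in that snippet.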
    (r'/\\/', True),
    (r'/\\/i', True),
    (r'/\//', True),
    (r'/(\s)/', True),
    ('/a{2,8}/', True),
    ('/b*c?d+/', True),
    ('/(capture-match)/', True),
    ('/(?:do-not-capture-match)/', True),
    ('/this|or|that/', True),
    ('/[char-set]/', True),
    ('/[^neg-char_st]/', True),
    ('/^.*$/', True),
    (r'/\n(\f)\0\1\d\b\cm\u1234/', True),
    (r'/^.?([^/\\\n\w]*)a\1+$/.something(or_other) # something more complex', True),
    ("foo = (str) ->\n  /'|\"/.test str", True),
    ('a = a / b / c', False),
    ('a = a/b/c', False),
    ('a = a/b/ c', False),
    ('a = a /b/c', False),
    ('a = 1 + /d/.test(a)', True),
]

def test_coffee_slashes():
    for input_str, slashes_are_regex_here in COFFEE_SLASH_GOLDEN:
        yield coffee_runner, input_str, slashes_are_regex_here

def coffee_runner(input_str, slashes_are_regex_here):
    lex = CoffeeScriptLexer()
    output = list(lex.get_tokens(input_str))
    print(output)
    for t, s in output:
        if '/' in s:
            is_regex = t is Token.String.Regex
            assert is_regex == slashes_are_regex_here, (t, s)

class CoffeeTest(unittest.TestCase):
    def setUp(self):
        self.lexer = CoffeeScriptLexer()

    def testMixedSlashes(self):
        fragment = u'a?/foo/:1/2;\n'
        tokens = [
            (Token.Name.Other, u'a'),
            (Token.Operator, u'?'),
            (Token.Literal.String.Regex, u'/foo/'),
            (Token.Operator, u':'),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Operator, u'/'),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testBewareInfiniteLoop(self):
        # This demonstrates the case that "This isn't really guarding" comment
        # refers to.
        fragment = '/a/x;\n'
        tokens = [
            (Token.Text, ''),
            (Token.Operator, '/'),
            (Token.Name.Other, 'a'),
            (Token.Operator, '/'),
            (Token.Name.Other, 'x'),
            (Token.Punctuation, ';'),
            (Token.Text, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_objectiveclexer.py0000644000175000017500000000473213376260540020343 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic ObjectiveCLexer Test
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest
import os

from pygments.token import Token
from pygments.lexers import ObjectiveCLexer


class ObjectiveCLexerTest(unittest.TestCase):

    def setUp(self):
        self.lexer = ObjectiveCLexer()

    def testLiteralNumberInt(self):
        fragment = u'@(1);\n'
        expected = [
            (Token.Literal, u'@('),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Literal, u')'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))

    def testLiteralNumberExpression(self):
        fragment = u'@(1+2);\n'
        expected = [
            (Token.Literal, u'@('),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Operator, u'+'),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Literal, u')'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))

    def testLiteralNumberNestedExpression(self):
        fragment = u'@(1+(2+3));\n'
        expected = [
            (Token.Literal, u'@('),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Operator, u'+'),
            (Token.Punctuation, u'('),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Operator, u'+'),
            (Token.Literal.Number.Integer, u'3'),
            (Token.Punctuation, u')'),
            (Token.Literal, u')'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))

    def testLiteralNumberBool(self):
        fragment = u'@NO;\n'
        expected = [
            (Token.Literal.Number, u'@NO'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))

    def testLiteralNumberBoolExpression(self):
        fragment = u'@(YES);\n'
        expected = [
            (Token.Literal, u'@('),
            (Token.Name.Builtin, u'YES'),
            (Token.Literal, u')'),
            (Token.Punctuation, u';'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_token.py0000644000175000017500000000322413376260540016301 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Test suite for the token module
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import copy
import unittest

from pygments import token


class TokenTest(unittest.TestCase):

    def test_tokentype(self):
        e = self.assertEqual

        t = token.String

        e(t.split(), [token.Token, token.Literal, token.String])

        e(t.__class__, token._TokenType)

    def test_functions(self):
        self.assertTrue(token.is_token_subtype(token.String, token.String))
        self.assertTrue(token.is_token_subtype(token.String, token.Literal))
        self.assertFalse(token.is_token_subtype(token.Literal, token.String))

        self.assertTrue(token.string_to_tokentype(token.String) is token.String)
        self.assertTrue(token.string_to_tokentype('') is token.Token)
        self.assertTrue(token.string_to_tokentype('String') is token.String)

    def test_sanity_check(self):
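        # STANDARD_TYPES maps token types to the short class names used e.g.
        # by the HTML formatter; if two types shared a name they would be
        # indistinguishable in formatted output, so the values must be unique.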
        stp = token.STANDARD_TYPES.copy()
        stp[token.Token] = '---' # Token and Text do conflict, that is okay
        t = {}
        for k, v in stp.items():
            t.setdefault(v, []).append(k)
        if len(t) == len(stp):
            return # Okay

        for k, v in t.items():
            if len(v) > 1:
                self.fail("%r has more than one key: %r" % (k, v))

    def test_copying(self):
        # Token instances are supposed to be singletons, so copying or even
        # deepcopying should return themselves
        t = token.String
        self.assertIs(t, copy.copy(t))
        self.assertIs(t, copy.deepcopy(t))
Pygments-2.3.1/tests/test_modeline.py0000644000175000017500000000127613376260540016762 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Tests for the vim modeline feature
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

from pygments import modeline


def test_lexer_classes():
    def verify(buf):
        assert modeline.get_filetype_from_buffer(buf) == 'python'

    for buf in [
            'vi: ft=python' + '\n' * 8,
            'vi: ft=python' + '\n' * 8,
            '\n\n\n\nvi=8: syntax=python' + '\n' * 8,
            '\n' * 8 + 'ex: filetype=python',
            '\n' * 8 + 'vim: some,other,syn=python\n\n\n\n'
    ]:
        yield verify, buf
Pygments-2.3.1/tests/test_html_formatter.py0000644000175000017500000001601013402534110020170 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments HTML formatter tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import io
import os
import re
import unittest
import tempfile
from os.path import join, dirname, isfile

from pygments.util import StringIO
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter, NullFormatter
from pygments.formatters.html import escape_html

import support

TESTFILE, TESTDIR = support.location(__file__)

with io.open(TESTFILE, encoding='utf-8') as fp:
    tokensource = list(PythonLexer().get_tokens(fp.read()))


class HtmlFormatterTest(unittest.TestCase):
    def test_correct_output(self):
        hfmt = HtmlFormatter(nowrap=True)
        houtfile = StringIO()
        hfmt.format(tokensource, houtfile)

        nfmt = NullFormatter()
        noutfile = StringIO()
        nfmt.format(tokensource, noutfile)

        stripped_html = re.sub('<.*?>', '', houtfile.getvalue())
        escaped_text = escape_html(noutfile.getvalue())
        self.assertEqual(stripped_html, escaped_text)

    def test_external_css(self):
        # test correct behavior
        # CSS should be in /tmp directory
        fmt1 = HtmlFormatter(full=True, cssfile='fmt1.css', outencoding='utf-8')
        # CSS should be in TESTDIR (TESTDIR is absolute)
        fmt2 = HtmlFormatter(full=True, cssfile=join(TESTDIR, 'fmt2.css'),
                             outencoding='utf-8')
        tfile = tempfile.NamedTemporaryFile(suffix='.html')
        fmt1.format(tokensource, tfile)
        try:
            fmt2.format(tokensource, tfile)
            self.assertTrue(isfile(join(TESTDIR, 'fmt2.css')))
        except IOError:
            # test directory not writable
            pass
        tfile.close()

        self.assertTrue(isfile(join(dirname(tfile.name), 'fmt1.css')))
        os.unlink(join(dirname(tfile.name), 'fmt1.css'))
        try:
            os.unlink(join(TESTDIR, 'fmt2.css'))
        except OSError:
            pass

    def test_all_options(self):
        def check(optdict):
            outfile = StringIO()
            fmt = HtmlFormatter(**optdict)
            fmt.format(tokensource, outfile)

        for optdict in [
            dict(nowrap=True),
            dict(linenos=True, full=True),
            dict(linenos=True, linespans='L'),
            dict(hl_lines=[1, 5, 10, 'xxx']),
            dict(hl_lines=[1, 5, 10], noclasses=True),
        ]:
            check(optdict)

        for linenos in [False, 'table', 'inline']:
            for noclasses in [False, True]:
                for linenospecial in [0, 5]:
                    for anchorlinenos in [False, True]:
                        optdict = dict(
                            linenos=linenos,
                            noclasses=noclasses,
                            linenospecial=linenospecial,
                            anchorlinenos=anchorlinenos,
                        )
                        check(optdict)

    def test_linenos(self):
        optdict = dict(linenos=True)
        outfile = StringIO()
        fmt = HtmlFormatter(**optdict)
        fmt.format(tokensource, outfile)
        html = outfile.getvalue()
        self.assertTrue(re.search(r"<pre>\s+1\s+2\s+3", html))

    def test_linenos_with_startnum(self):
        optdict = dict(linenos=True, linenostart=5)
        outfile = StringIO()
        fmt = HtmlFormatter(**optdict)
        fmt.format(tokensource, outfile)
        html = outfile.getvalue()
        self.assertTrue(re.search(r"<pre>\s+5\s+6\s+7", html))

    def test_lineanchors(self):
        optdict = dict(lineanchors="foo")
        outfile = StringIO()
        fmt = HtmlFormatter(**optdict)
        fmt.format(tokensource, outfile)
        html = outfile.getvalue()
        self.assertTrue(re.search("<pre><span></span><a name=\"foo-1\">", html))

    def test_lineanchors_with_startnum(self):
        optdict = dict(lineanchors="foo", linenostart=5)
        outfile = StringIO()
        fmt = HtmlFormatter(**optdict)
        fmt.format(tokensource, outfile)
        html = outfile.getvalue()
        self.assertTrue(re.search("<pre><span></span><a name=\"foo-5\">", html))

    def test_valid_output(self):
        # test all available wrappers
        fmt = HtmlFormatter(full=True, linenos=True, noclasses=True,
                            outencoding='utf-8')

        handle, pathname = tempfile.mkstemp('.html')
        tfile = os.fdopen(handle, 'w+b')
        fmt.format(tokensource, tfile)
        tfile.close()
        catname = os.path.join(TESTDIR, 'dtds', 'HTML4.soc')
        try:
            import subprocess
            po = subprocess.Popen(['nsgmls', '-s', '-c', catname, pathname],
                                  stdout=subprocess.PIPE)
            ret = po.wait()
            output = po.stdout.read()
            po.stdout.close()
        except OSError:
            # nsgmls not available
            pass
        else:
            if ret:
                print(output)
            self.assertFalse(ret, 'nsgmls run reported errors')

        os.unlink(pathname)

    def test_get_style_defs(self):
        fmt = HtmlFormatter()
        sd = fmt.get_style_defs()
        self.assertTrue(sd.startswith('.'))

        fmt = HtmlFormatter(cssclass='foo')
        sd = fmt.get_style_defs()
        self.assertTrue(sd.startswith('.foo'))
        sd = fmt.get_style_defs('.bar')
        self.assertTrue(sd.startswith('.bar'))
        sd = fmt.get_style_defs(['.bar', '.baz'])
        fl = sd.splitlines()[0]
        self.assertTrue('.bar' in fl and '.baz' in fl)

    def test_unicode_options(self):
        fmt = HtmlFormatter(title=u'Föö',
                            cssclass=u'bär',
                            cssstyles=u'div:before { content: \'bäz\' }',
                            encoding='utf-8')
        handle, pathname = tempfile.mkstemp('.html')
        tfile = os.fdopen(handle, 'w+b')
        fmt.format(tokensource, tfile)
        tfile.close()

    def test_ctags(self):
        try:
            import ctags
        except ImportError:
            # we can't check without the ctags module, but at least check the exception
            self.assertRaises(RuntimeError, HtmlFormatter, tagsfile='support/tags')
        else:
            # this tagfile says that test_ctags() is on line 165, even if it isn't
            # anymore in the actual source
            fmt = HtmlFormatter(tagsfile='support/tags', lineanchors='L',
                                tagurlformat='%(fname)s%(fext)s')
            outfile = StringIO()
            fmt.format(tokensource, outfile)
            self.assertTrue('<a href="test_html_formatter.py#L-165">test_ctags</a>'
                            in outfile.getvalue())

    def test_filename(self):
        optdict = dict(filename="test.py")
        outfile = StringIO()
        fmt = HtmlFormatter(**optdict)
        fmt.format(tokensource, outfile)
        html = outfile.getvalue()
        self.assertTrue(re.search("test.py
", html))
Pygments-2.3.1/tests/test_sql.py0000644000175000017500000000530413376260540015761 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments SQL lexers tests
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2016 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
import unittest

from pygments.lexers.sql import TransactSqlLexer
from pygments.token import Comment, Name, Number, Punctuation, Whitespace


class TransactSqlLexerTest(unittest.TestCase):

    def setUp(self):
        self.lexer = TransactSqlLexer()

    def _assertAreTokensOfType(self, examples, expected_token_type):
        for test_number, example in enumerate(examples.split(), 1):
            token_count = 0
            for token_type, token_value in self.lexer.get_tokens(example):
                if token_type != Whitespace:
                    token_count += 1
                    self.assertEqual(
                        token_type, expected_token_type,
                        'token_type #%d for %s is %s but must be %s' %
                        (test_number, token_value, token_type, expected_token_type))
            self.assertEqual(
                token_count, 1,
                '%s must yield exactly 1 token instead of %d' %
                (example, token_count))

    def _assertTokensMatch(self, text, expected_tokens_without_trailing_newline):
        actual_tokens = tuple(self.lexer.get_tokens(text))
        if (len(actual_tokens) >= 1) and (actual_tokens[-1] == (Whitespace, '\n')):
            actual_tokens = tuple(actual_tokens[:-1])
        self.assertEqual(
            expected_tokens_without_trailing_newline, actual_tokens,
            'text must yield expected tokens: %s' % text)

    def test_can_lex_float(self):
        self._assertAreTokensOfType(
            '1. 1.e1 .1 1.2 1.2e3 1.2e+3 1.2e-3 1e2', Number.Float)
        self._assertTokensMatch(
            '1e2.1e2',
            ((Number.Float, '1e2'), (Number.Float, '.1e2'))
        )

    def test_can_reject_almost_float(self):
        self._assertTokensMatch(
            '.e1',
            ((Punctuation, '.'), (Name, 'e1')))

    def test_can_lex_integer(self):
        self._assertAreTokensOfType(
            '1 23 456', Number.Integer)

    def test_can_lex_names(self):
        self._assertAreTokensOfType(
            u'thingy thingy123 _thingy _ _123 Ähnliches Müll #temp1 ##temp2', Name)

    def test_can_lex_comments(self):
        self._assertTokensMatch('--\n', ((Comment.Single, '--\n'),))
        self._assertTokensMatch('/**/', (
            (Comment.Multiline, '/*'), (Comment.Multiline, '*/')
        ))
        self._assertTokensMatch('/*/**/*/', (
            (Comment.Multiline, '/*'),
            (Comment.Multiline, '/*'),
            (Comment.Multiline, '*/'),
            (Comment.Multiline, '*/'),
        ))
Pygments-2.3.1/tests/run.py0000644000175000017500000000307313402534110014713 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Pygments unit tests
    ~~~~~~~~~~~~~~~~~~

    Usage::

        python run.py [testfile ...]


    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from __future__ import print_function

import os
import sys
import warnings

# only find tests in this directory
if os.path.dirname(__file__):
    os.chdir(os.path.dirname(__file__))

# make FutureWarnings (coming from Regex syntax most likely) and
# DeprecationWarnings due to non-raw strings an error
warnings.filterwarnings("error", module=r"pygments\..*",
                        category=FutureWarning)
warnings.filterwarnings("error", module=r".*pygments.*",
                        category=DeprecationWarning)


try:
    import nose
except ImportError:
    print('nose is required to run the Pygments test suite')
    sys.exit(1)

# make sure the current source is first on sys.path
sys.path.insert(0, '..')

if '--with-coverage' not in sys.argv:
    # if running with coverage, pygments should not be imported before coverage
    # is started, otherwise it will count already executed lines as uncovered
    try:
        import pygments
    except ImportError as err:
        print('Cannot find Pygments to test: %s' % err)
        sys.exit(1)
    else:
        print('Pygments %s test suite running (Python %s)...' %
              (pygments.__version__, sys.version.split()[0]),
              file=sys.stderr)
else:
    print('Pygments test suite running (Python %s)...' % sys.version.split()[0],
          file=sys.stderr)

nose.main()
Pygments-2.3.1/tests/test_julia.py0000644000175000017500000000337713376260540016276 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Julia Tests
    ~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import JuliaLexer
from pygments.token import Token


class JuliaTests(unittest.TestCase):
    def setUp(self):
        self.lexer = JuliaLexer()

    def test_unicode(self):
        """
        Test that the Unicode character √ in an expression is recognized.
        """
        fragment = u's = \u221a((1/n) * sum(count .^ 2) - mu .^2)\n'
        tokens = [
            (Token.Name, u's'),
            (Token.Text, u' '),
            (Token.Operator, u'='),
            (Token.Text, u' '),
            (Token.Operator, u'\u221a'),
            (Token.Punctuation, u'('),
            (Token.Punctuation, u'('),
            (Token.Literal.Number.Integer, u'1'),
            (Token.Operator, u'/'),
            (Token.Name, u'n'),
            (Token.Punctuation, u')'),
            (Token.Text, u' '),
            (Token.Operator, u'*'),
            (Token.Text, u' '),
            (Token.Name, u'sum'),
            (Token.Punctuation, u'('),
            (Token.Name, u'count'),
            (Token.Text, u' '),
            (Token.Operator, u'.^'),
            (Token.Text, u' '),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Punctuation, u')'),
            (Token.Text, u' '),
            (Token.Operator, u'-'),
            (Token.Text, u' '),
            (Token.Name, u'mu'),
            (Token.Text, u' '),
            (Token.Operator, u'.^'),
            (Token.Literal.Number.Integer, u'2'),
            (Token.Punctuation, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_cfm.py0000644000175000017500000000265313376260540015733 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic ColdfusionHtmlLexer Test
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest
import os

from pygments.token import Token
from pygments.lexers import ColdfusionHtmlLexer


class ColdfusionHtmlLexerTest(unittest.TestCase):

    def setUp(self):
        self.lexer = ColdfusionHtmlLexer()

    def testBasicComment(self):
        fragment = u'<!--- cfcomment --->'
        expected = [
            (Token.Text, u''),
            (Token.Comment.Multiline, u'<!--- cfcomment --->'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))

    def testNestedComment(self):
        fragment = u'<!--- nested <!--- cfcomment ---> --->'
        expected = [
            (Token.Text, u''),
            (Token.Comment.Multiline, u'<!--- nested '),
            (Token.Comment.Multiline, u'<!--- cfcomment --->'),
            (Token.Comment.Multiline, u' --->'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(expected, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_php.py0000644000175000017500000000200313376260540015742 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    PHP Tests
    ~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.lexers import PhpLexer
from pygments.token import Token


class PhpTest(unittest.TestCase):
    def setUp(self):
        self.lexer = PhpLexer()

    def testStringEscapingRun(self):
        fragment = '<?php $x="{\\""; ?>\n'
        tokens = [
            (Token.Comment.Preproc, '<?php'),
            (Token.Text, ' '),
            (Token.Name.Variable, '$x'),
            (Token.Operator, '='),
            (Token.Literal.String.Double, '"'),
            (Token.Literal.String.Double, '{'),
            (Token.Literal.String.Escape, '\\"'),
            (Token.Literal.String.Double, '"'),
            (Token.Punctuation, ';'),
            (Token.Text, ' '),
            (Token.Comment.Preproc, '?>'),
            (Token.Other, '\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
Pygments-2.3.1/tests/test_shell.py0000644000175000017500000001114513376260540016271 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic Shell Tests
    ~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Token
from pygments.lexers import BashLexer, BashSessionLexer


class BashTest(unittest.TestCase):

    def setUp(self):
        self.lexer = BashLexer()
        self.maxDiff = None

    def testCurlyNoEscapeAndQuotes(self):
        fragment = u'echo "${a//["b"]/}"\n'
        tokens = [
            (Token.Name.Builtin, u'echo'),
            (Token.Text, u' '),
            (Token.Literal.String.Double, u'"'),
            (Token.String.Interpol, u'${'),
            (Token.Name.Variable, u'a'),
            (Token.Punctuation, u'//['),
            (Token.Literal.String.Double, u'"b"'),
            (Token.Punctuation, u']/'),
            (Token.String.Interpol, u'}'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testCurlyWithEscape(self):
        fragment = u'echo ${a//[\\"]/}\n'
        tokens = [
            (Token.Name.Builtin, u'echo'),
            (Token.Text, u' '),
            (Token.String.Interpol, u'${'),
            (Token.Name.Variable, u'a'),
            (Token.Punctuation, u'//['),
            (Token.Literal.String.Escape, u'\\"'),
            (Token.Punctuation, u']/'),
            (Token.String.Interpol, u'}'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testParsedSingle(self):
        fragment = u"a=$'abc\\''\n"
        tokens = [
            (Token.Name.Variable, u'a'),
            (Token.Operator, u'='),
            (Token.Literal.String.Single, u"$'abc\\''"),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testShortVariableNames(self):
        fragment = u'x="$"\ny="$_"\nz="$abc"\n'
        tokens = [
            # single lone $
            (Token.Name.Variable, u'x'),
            (Token.Operator, u'='),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'$'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'\n'),
            # single letter shell var
            (Token.Name.Variable, u'y'),
            (Token.Operator, u'='),
            (Token.Literal.String.Double, u'"'),
            (Token.Name.Variable, u'$_'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'\n'),
            # multi-letter user var
            (Token.Name.Variable, u'z'),
            (Token.Operator, u'='),
            (Token.Literal.String.Double, u'"'),
            (Token.Name.Variable, u'$abc'),
            (Token.Literal.String.Double, u'"'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testArrayNums(self):
        fragment = u'a=(1 2 3)\n'
        tokens = [
            (Token.Name.Variable, u'a'),
            (Token.Operator, u'='),
            (Token.Operator, u'('),
            (Token.Literal.Number, u'1'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'2'),
            (Token.Text, u' '),
            (Token.Literal.Number, u'3'),
            (Token.Operator, u')'),
            (Token.Text, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))

    def testEndOfLineNums(self):
        fragment = u'a=1\nb=2 # comment\n'
        tokens = [
            (Token.Name.Variable, u'a'),
            (Token.Operator, u'='),
            (Token.Literal.Number, u'1'),
            (Token.Text, u'\n'),
            (Token.Name.Variable, u'b'),
            (Token.Operator, u'='),
            (Token.Literal.Number, u'2'),
            (Token.Text, u' '),
            (Token.Comment.Single, u'# comment\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))


class BashSessionTest(unittest.TestCase):

    def setUp(self):
        self.lexer = BashSessionLexer()
        self.maxDiff = None

    def testNeedsName(self):
        fragment = u'$ echo \\\nhi\nhi\n'
        tokens = [
            (Token.Text, u''),
            (Token.Generic.Prompt, u'$'),
            (Token.Text, u' '),
            (Token.Name.Builtin, u'echo'),
            (Token.Text, u' '),
            (Token.Literal.String.Escape, u'\\\n'),
            (Token.Text, u'hi'),
            (Token.Text, u'\n'),
            (Token.Generic.Output, u'hi\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
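# A minimal sketch of the same lexer used interactively, assuming Pygments is
# importable; it shows how the session lexer splits a transcript into prompt,
# command and output tokens (the transcript literal is illustrative only):
#
#     from pygments.lexers import BashSessionLexer
#     for ttype, value in BashSessionLexer().get_tokens(u'$ echo hi\nhi\n'):
#         print(ttype, repr(value))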

Pygments-2.3.1/tests/test_smarty.py0000644000175000017500000000231313376260540016476 0ustar  piotrpiotr# -*- coding: utf-8 -*-
"""
    Basic SmartyLexer Test
    ~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import unittest

from pygments.token import Operator, Number, Text, Token
from pygments.lexers import SmartyLexer


class SmartyTest(unittest.TestCase):

    def setUp(self):
        self.lexer = SmartyLexer()

    def testNestedCurly(self):
        fragment = u'{templateFunction param={anotherFunction} param2=$something}\n'
        tokens = [
            (Token.Comment.Preproc, u'{'),
            (Token.Name.Function, u'templateFunction'),
            (Token.Text, u' '),
            (Token.Name.Attribute, u'param'),
            (Token.Operator, u'='),
            (Token.Comment.Preproc, u'{'),
            (Token.Name.Attribute, u'anotherFunction'),
            (Token.Comment.Preproc, u'}'),
            (Token.Text, u' '),
            (Token.Name.Attribute, u'param2'),
            (Token.Operator, u'='),
            (Token.Name.Variable, u'$something'),
            (Token.Comment.Preproc, u'}'),
            (Token.Other, u'\n'),
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
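# A minimal sketch of rendering a Smarty fragment to HTML rather than comparing
# raw tokens, assuming Pygments is importable (the fragment is illustrative only):
#
#     from pygments import highlight
#     from pygments.lexers import SmartyLexer
#     from pygments.formatters import HtmlFormatter
#     print(highlight(u'{assign var="x" value=1}', SmartyLexer(), HtmlFormatter()))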

Pygments-2.3.1/setup.cfg0000644000175000017500000000022313405476655014234 0ustar  piotrpiotr[egg_info]
tag_build = 
tag_date = 0

[aliases]
release = egg_info -Db ''
upload = upload --sign --identity=36580288

[bdist_wheel]
universal = 1
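# Usage sketch (assumption: setuptools and the wheel package are installed):
# with the [aliases] section above, a release build can be produced with
#     python setup.py release sdist bdist_wheel
# and universal = 1 tags the resulting wheel as py2.py3 compatible.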

Pygments-2.3.1/doc/0000755000175000017500000000000013405476652013160 5ustar  piotrpiotrPygments-2.3.1/doc/make.bat0000644000175000017500000001175413376260540014567 0ustar  piotrpiotr@ECHO OFF

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^<target^>` where ^<target^> is one of
	echo.  html       to make standalone HTML files
	echo.  dirhtml    to make HTML files named index.html in directories
	echo.  singlehtml to make a single large HTML file
	echo.  pickle     to make pickle files
	echo.  json       to make JSON files
	echo.  htmlhelp   to make HTML files and a HTML help project
	echo.  qthelp     to make HTML files and a qthelp project
	echo.  devhelp    to make HTML files and a Devhelp project
	echo.  epub       to make an epub
	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  text       to make text files
	echo.  man        to make manual pages
	echo.  texinfo    to make Texinfo files
	echo.  gettext    to make PO message catalogs
	echo.  changes    to make an overview over all changed/added/deprecated items
	echo.  linkcheck  to check all external links for integrity
	echo.  doctest    to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
	del /q /s %BUILDDIR%\*
	goto end
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
	goto end
)

if "%1" == "singlehtml" (
	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Pygments.qhcp
	echo.To view the help file:
	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Pygments.qhc
	goto end
)

if "%1" == "devhelp" (
	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished.
	goto end
)

if "%1" == "epub" (
	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The epub file is in %BUILDDIR%/epub.
	goto end
)

if "%1" == "latex" (
	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
	goto end
)

if "%1" == "text" (
	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The text files are in %BUILDDIR%/text.
	goto end
)

if "%1" == "man" (
	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The manual pages are in %BUILDDIR%/man.
	goto end
)

if "%1" == "texinfo" (
	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
	goto end
)

if "%1" == "gettext" (
	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
	goto end
)

if "%1" == "changes" (
	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
	if errorlevel 1 exit /b 1
	echo.
	echo.The overview file is in %BUILDDIR%/changes.
	goto end
)

if "%1" == "linkcheck" (
	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
	if errorlevel 1 exit /b 1
	echo.
	echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
	goto end
)

if "%1" == "doctest" (
	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
	if errorlevel 1 exit /b 1
	echo.
	echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
	goto end
)

:end
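REM Usage sketch, assuming sphinx-build is on PATH or SPHINXBUILD points to it:
REM     make.bat html
REM To build LaTeX output with a specific paper size, set PAPER first:
REM     set PAPER=a4
REM     make.bat latex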
Pygments-2.3.1/doc/_templates/0000755000175000017500000000000013405476652015315 5ustar  piotrpiotrPygments-2.3.1/doc/_templates/indexsidebar.html0000644000175000017500000000175313376260540020644 0ustar  piotrpiotr

Download

{% if version.endswith('(hg)') %}

This documentation is for version {{ version }}, which is not released yet.

You can use it from the Mercurial repo or look for released versions in the Python Package Index.

{% else %}

Current version: {{ version }}

Get Pygments from the Python Package Index, or install it with:

pip install Pygments
{% endif %}

Questions? Suggestions?

Clone at Bitbucket or come to the #pocoo channel on FreeNode.

You can also open an issue at the tracker.
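{# A sketch only: to install the exact release this documentation describes,
   the version can be pinned explicitly, e.g.
   pip install Pygments==2.3.1 #}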

Pygments-2.3.1/doc/_templates/docssidebar.html0000644000175000017500000000020313376260540020452 0ustar piotrpiotr{% if pagename != 'docs/index' %} « Back to docs index {% endif %} Pygments-2.3.1/doc/Makefile0000644000175000017500000001272213376260540014616 0ustar piotrpiotr# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = PYTHONPATH=.. sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Pygments.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Pygments.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." 
@echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Pygments" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Pygments" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." Pygments-2.3.1/doc/docs/0000755000175000017500000000000013405476652014110 5ustar piotrpiotrPygments-2.3.1/doc/docs/formatters.rst0000644000175000017500000000312013376260540017016 0ustar piotrpiotr.. -*- mode: rst -*- ==================== Available formatters ==================== This page lists all builtin formatters. Common options ============== All formatters support these options: `encoding` If given, must be an encoding name (such as ``"utf-8"``). This will be used to convert the token strings (which are Unicode strings) to byte strings in the output (default: ``None``). It will also be written in an encoding declaration suitable for the document format if the `full` option is given (e.g. a ``meta content-type`` directive in HTML or an invocation of the `inputenc` package in LaTeX). If this is ``""`` or ``None``, Unicode strings will be written to the output file, which most file-like objects do not support. For example, `pygments.highlight()` will return a Unicode string if called with no `outfile` argument and a formatter that has `encoding` set to ``None`` because it uses a `StringIO.StringIO` object that supports Unicode arguments to `write()`. Using a regular file object wouldn't work. .. 
versionadded:: 0.6 `outencoding` When using Pygments from the command line, any `encoding` option given is passed to the lexer and the formatter. This is sometimes not desirable, for example if you want to set the input encoding to ``"guess"``. Therefore, `outencoding` has been introduced which overrides `encoding` for the formatter if given. .. versionadded:: 0.7 Formatter classes ================= All these classes are importable from :mod:`pygments.formatters`. .. pygmentsdoc:: formatters Pygments-2.3.1/doc/docs/unicode.rst0000644000175000017500000000441313376260540016264 0ustar piotrpiotr===================== Unicode and Encodings ===================== Since Pygments 0.6, all lexers use unicode strings internally. Because of that you might encounter the occasional :exc:`UnicodeDecodeError` if you pass strings with the wrong encoding. Per default all lexers have their input encoding set to `guess`. This means that the following encodings are tried: * UTF-8 (including BOM handling) * The locale encoding (i.e. the result of `locale.getpreferredencoding()`) * As a last resort, `latin1` If you pass a lexer a byte string object (not unicode), it tries to decode the data using this encoding. You can override the encoding using the `encoding` or `inencoding` lexer options. If you have the `chardet`_ library installed and set the encoding to ``chardet`` if will analyse the text and use the encoding it thinks is the right one automatically: .. sourcecode:: python from pygments.lexers import PythonLexer lexer = PythonLexer(encoding='chardet') The best way is to pass Pygments unicode objects. In that case you can't get unexpected output. The formatters now send Unicode objects to the stream if you don't set the output encoding. You can do so by passing the formatters an `encoding` option: .. sourcecode:: python from pygments.formatters import HtmlFormatter f = HtmlFormatter(encoding='utf-8') **You will have to set this option if you have non-ASCII characters in the source and the output stream does not accept Unicode written to it!** This is the case for all regular files and for terminals. Note: The Terminal formatter tries to be smart: if its output stream has an `encoding` attribute, and you haven't set the option, it will encode any Unicode string with this encoding before writing it. This is the case for `sys.stdout`, for example. The other formatters don't have that behavior. Another note: If you call Pygments via the command line (`pygmentize`), encoding is handled differently, see :doc:`the command line docs `. .. versionadded:: 0.7 The formatters now also accept an `outencoding` option which will override the `encoding` option if given. This makes it possible to use a single options dict with lexers and formatters, and still have different input and output encodings. .. _chardet: https://chardet.github.io/ Pygments-2.3.1/doc/docs/integrate.rst0000644000175000017500000000232013376260540016613 0ustar piotrpiotr.. -*- mode: rst -*- =================================== Using Pygments in various scenarios =================================== Markdown -------- Since Pygments 0.9, the distribution ships Markdown_ preprocessor sample code that uses Pygments to render source code in :file:`external/markdown-processor.py`. You can copy and adapt it to your liking. .. _Markdown: http://www.freewisdom.org/projects/python-markdown/ TextMate -------- Antonio Cangiano has created a Pygments bundle for TextMate that allows to colorize code via a simple menu option. It can be found here_. .. 
_here: http://antoniocangiano.com/2008/10/28/pygments-textmate-bundle/ Bash completion --------------- The source distribution contains a file ``external/pygments.bashcomp`` that sets up completion for the ``pygmentize`` command in bash. Wrappers for other languages ---------------------------- These libraries provide Pygments highlighting for users of other languages than Python: * `pygments.rb `_, a pygments wrapper for Ruby * `Clygments `_, a pygments wrapper for Clojure * `PHPygments `_, a pygments wrapper for PHP Pygments-2.3.1/doc/docs/filterdevelopment.rst0000644000175000017500000000453113376260540020367 0ustar piotrpiotr.. -*- mode: rst -*- ===================== Write your own filter ===================== .. versionadded:: 0.7 Writing own filters is very easy. All you have to do is to subclass the `Filter` class and override the `filter` method. Additionally a filter is instantiated with some keyword arguments you can use to adjust the behavior of your filter. Subclassing Filters =================== As an example, we write a filter that converts all `Name.Function` tokens to normal `Name` tokens to make the output less colorful. .. sourcecode:: python from pygments.util import get_bool_opt from pygments.token import Name from pygments.filter import Filter class UncolorFilter(Filter): def __init__(self, **options): Filter.__init__(self, **options) self.class_too = get_bool_opt(options, 'classtoo') def filter(self, lexer, stream): for ttype, value in stream: if ttype is Name.Function or (self.class_too and ttype is Name.Class): ttype = Name yield ttype, value Some notes on the `lexer` argument: that can be quite confusing since it doesn't need to be a lexer instance. If a filter was added by using the `add_filter()` function of lexers, that lexer is registered for the filter. In that case `lexer` will refer to the lexer that has registered the filter. It *can* be used to access options passed to a lexer. Because it could be `None` you always have to check for that case if you access it. Using a decorator ================= You can also use the `simplefilter` decorator from the `pygments.filter` module: .. sourcecode:: python from pygments.util import get_bool_opt from pygments.token import Name from pygments.filter import simplefilter @simplefilter def uncolor(self, lexer, stream, options): class_too = get_bool_opt(options, 'classtoo') for ttype, value in stream: if ttype is Name.Function or (class_too and ttype is Name.Class): ttype = Name yield ttype, value The decorator automatically subclasses an internal filter class and uses the decorated function as a method for filtering. (That's why there is a `self` argument that you probably won't end up using in the method.) Pygments-2.3.1/doc/docs/tokens.rst0000644000175000017500000002465313376260540016151 0ustar piotrpiotr.. -*- mode: rst -*- ============== Builtin Tokens ============== .. module:: pygments.token In the :mod:`pygments.token` module, there is a special object called `Token` that is used to create token types. You can create a new token type by accessing an attribute of `Token`: .. sourcecode:: pycon >>> from pygments.token import Token >>> Token.String Token.String >>> Token.String is Token.String True Note that tokens are singletons so you can use the ``is`` operator for comparing token types. As of Pygments 0.7 you can also use the ``in`` operator to perform set tests: .. 
sourcecode:: pycon >>> from pygments.token import Comment >>> Comment.Single in Comment True >>> Comment in Comment.Multi False This can be useful in :doc:`filters ` and if you write lexers on your own without using the base lexers. You can also split a token type into a hierarchy, and get the parent of it: .. sourcecode:: pycon >>> String.split() [Token, Token.Literal, Token.Literal.String] >>> String.parent Token.Literal In principle, you can create an unlimited number of token types but nobody can guarantee that a style would define style rules for a token type. Because of that, Pygments proposes some global token types defined in the `pygments.token.STANDARD_TYPES` dict. For some tokens aliases are already defined: .. sourcecode:: pycon >>> from pygments.token import String >>> String Token.Literal.String Inside the :mod:`pygments.token` module the following aliases are defined: ============= ============================ ==================================== `Text` `Token.Text` for any type of text data `Whitespace` `Token.Text.Whitespace` for specially highlighted whitespace `Error` `Token.Error` represents lexer errors `Other` `Token.Other` special token for data not matched by a parser (e.g. HTML markup in PHP code) `Keyword` `Token.Keyword` any kind of keywords `Name` `Token.Name` variable/function names `Literal` `Token.Literal` Any literals `String` `Token.Literal.String` string literals `Number` `Token.Literal.Number` number literals `Operator` `Token.Operator` operators (``+``, ``not``...) `Punctuation` `Token.Punctuation` punctuation (``[``, ``(``...) `Comment` `Token.Comment` any kind of comments `Generic` `Token.Generic` generic tokens (have a look at the explanation below) ============= ============================ ==================================== The `Whitespace` token type is new in Pygments 0.8. It is used only by the `VisibleWhitespaceFilter` currently. Normally you just create token types using the already defined aliases. For each of those token aliases, a number of subtypes exists (excluding the special tokens `Token.Text`, `Token.Error` and `Token.Other`) The `is_token_subtype()` function in the `pygments.token` module can be used to test if a token type is a subtype of another (such as `Name.Tag` and `Name`). (This is the same as ``Name.Tag in Name``. The overloaded `in` operator was newly introduced in Pygments 0.7, the function still exists for backwards compatibility.) With Pygments 0.7, it's also possible to convert strings to token types (for example if you want to supply a token from the command line): .. sourcecode:: pycon >>> from pygments.token import String, string_to_tokentype >>> string_to_tokentype("String") Token.Literal.String >>> string_to_tokentype("Token.Literal.String") Token.Literal.String >>> string_to_tokentype(String) Token.Literal.String Keyword Tokens ============== `Keyword` For any kind of keyword (especially if it doesn't match any of the subtypes of course). `Keyword.Constant` For keywords that are constants (e.g. ``None`` in future Python versions). `Keyword.Declaration` For keywords used for variable declaration (e.g. ``var`` in some programming languages like JavaScript). `Keyword.Namespace` For keywords used for namespace declarations (e.g. ``import`` in Python and Java and ``package`` in Java). `Keyword.Pseudo` For keywords that aren't really keywords (e.g. ``None`` in old Python versions). `Keyword.Reserved` For reserved keywords. `Keyword.Type` For builtin types that can't be used as identifiers (e.g. ``int``, ``char`` etc. 
in C). Name Tokens =========== `Name` For any name (variable names, function names, classes). `Name.Attribute` For all attributes (e.g. in HTML tags). `Name.Builtin` Builtin names; names that are available in the global namespace. `Name.Builtin.Pseudo` Builtin names that are implicit (e.g. ``self`` in Ruby, ``this`` in Java). `Name.Class` Class names. Because no lexer can know if a name is a class or a function or something else this token is meant for class declarations. `Name.Constant` Token type for constants. In some languages you can recognise a token by the way it's defined (the value after a ``const`` keyword for example). In other languages constants are uppercase by definition (Ruby). `Name.Decorator` Token type for decorators. Decorators are syntactic elements in the Python language. Similar syntax elements exist in C# and Java. `Name.Entity` Token type for special entities. (e.g. `` `` in HTML). `Name.Exception` Token type for exception names (e.g. ``RuntimeError`` in Python). Some languages define exceptions in the function signature (Java). You can highlight the name of that exception using this token then. `Name.Function` Token type for function names. `Name.Function.Magic` same as `Name.Function` but for special function names that have an implicit use in a language (e.g. ``__init__`` method in Python). `Name.Label` Token type for label names (e.g. in languages that support ``goto``). `Name.Namespace` Token type for namespaces. (e.g. import paths in Java/Python), names following the ``module``/``namespace`` keyword in other languages. `Name.Other` Other names. Normally unused. `Name.Tag` Tag names (in HTML/XML markup or configuration files). `Name.Variable` Token type for variables. Some languages have prefixes for variable names (PHP, Ruby, Perl). You can highlight them using this token. `Name.Variable.Class` same as `Name.Variable` but for class variables (also static variables). `Name.Variable.Global` same as `Name.Variable` but for global variables (used in Ruby, for example). `Name.Variable.Instance` same as `Name.Variable` but for instance variables. `Name.Variable.Magic` same as `Name.Variable` but for special variable names that have an implicit use in a language (e.g. ``__doc__`` in Python). Literals ======== `Literal` For any literal (if not further defined). `Literal.Date` for date literals (e.g. ``42d`` in Boo). `String` For any string literal. `String.Affix` Token type for affixes that further specify the type of the string they're attached to (e.g. the prefixes ``r`` and ``u8`` in ``r"foo"`` and ``u8"foo"``). `String.Backtick` Token type for strings enclosed in backticks. `String.Char` Token type for single characters (e.g. Java, C). `String.Delimiter` Token type for delimiting identifiers in "heredoc", raw and other similar strings (e.g. the word ``END`` in Perl code ``print <<'END';``). `String.Doc` Token type for documentation strings (for example Python). `String.Double` Double quoted strings. `String.Escape` Token type for escape sequences in strings. `String.Heredoc` Token type for "heredoc" strings (e.g. in Ruby or Perl). `String.Interpol` Token type for interpolated parts in strings (e.g. ``#{foo}`` in Ruby). `String.Other` Token type for any other strings (for example ``%q{foo}`` string constructs in Ruby). `String.Regex` Token type for regular expression literals (e.g. ``/foo/`` in JavaScript). `String.Single` Token type for single quoted strings. `String.Symbol` Token type for symbols (e.g. ``:foo`` in LISP or Ruby). 
`Number` Token type for any number literal. `Number.Bin` Token type for binary literals (e.g. ``0b101010``). `Number.Float` Token type for float literals (e.g. ``42.0``). `Number.Hex` Token type for hexadecimal number literals (e.g. ``0xdeadbeef``). `Number.Integer` Token type for integer literals (e.g. ``42``). `Number.Integer.Long` Token type for long integer literals (e.g. ``42L`` in Python). `Number.Oct` Token type for octal literals. Operators ========= `Operator` For any punctuation operator (e.g. ``+``, ``-``). `Operator.Word` For any operator that is a word (e.g. ``not``). Punctuation =========== .. versionadded:: 0.7 `Punctuation` For any punctuation which is not an operator (e.g. ``[``, ``(``...) Comments ======== `Comment` Token type for any comment. `Comment.Hashbang` Token type for hashbang comments (i.e. first lines of files that start with ``#!``). `Comment.Multiline` Token type for multiline comments. `Comment.Preproc` Token type for preprocessor comments (also ``
print "Hello World"
As you can see, Pygments uses CSS classes (by default, but you can change that) instead of inline styles in order to avoid outputting redundant style information over and over. A CSS stylesheet that contains all CSS classes possibly used in the output can be produced by: .. sourcecode:: python print(HtmlFormatter().get_style_defs('.highlight')) The argument to :func:`get_style_defs` is used as an additional CSS selector: the output may look like this: .. sourcecode:: css .highlight .k { color: #AA22FF; font-weight: bold } .highlight .s { color: #BB4444 } ... Options ======= The :func:`highlight()` function supports a fourth argument called *outfile*, it must be a file object if given. The formatted output will then be written to this file instead of being returned as a string. Lexers and formatters both support options. They are given to them as keyword arguments either to the class or to the lookup method: .. sourcecode:: python from pygments import highlight from pygments.lexers import get_lexer_by_name from pygments.formatters import HtmlFormatter lexer = get_lexer_by_name("python", stripall=True) formatter = HtmlFormatter(linenos=True, cssclass="source") result = highlight(code, lexer, formatter) This makes the lexer strip all leading and trailing whitespace from the input (`stripall` option), lets the formatter output line numbers (`linenos` option), and sets the wrapping ``
``'s class to ``source`` (instead of ``highlight``). Important options include: `encoding` : for lexers and formatters Since Pygments uses Unicode strings internally, this determines which encoding will be used to convert to or from byte strings. `style` : for formatters The name of the style to use when writing the output. For an overview of builtin lexers and formatters and their options, visit the :doc:`lexer ` and :doc:`formatters ` lists. For a documentation on filters, see :doc:`this page `. Lexer and formatter lookup ========================== If you want to lookup a built-in lexer by its alias or a filename, you can use one of the following methods: .. sourcecode:: pycon >>> from pygments.lexers import (get_lexer_by_name, ... get_lexer_for_filename, get_lexer_for_mimetype) >>> get_lexer_by_name('python') >>> get_lexer_for_filename('spam.rb') >>> get_lexer_for_mimetype('text/x-perl') All these functions accept keyword arguments; they will be passed to the lexer as options. A similar API is available for formatters: use :func:`.get_formatter_by_name()` and :func:`.get_formatter_for_filename()` from the :mod:`pygments.formatters` module for this purpose. Guessing lexers =============== If you don't know the content of the file, or you want to highlight a file whose extension is ambiguous, such as ``.html`` (which could contain plain HTML or some template tags), use these functions: .. sourcecode:: pycon >>> from pygments.lexers import guess_lexer, guess_lexer_for_filename >>> guess_lexer('#!/usr/bin/python\nprint "Hello World!"') >>> guess_lexer_for_filename('test.py', 'print "Hello World!"') :func:`.guess_lexer()` passes the given content to the lexer classes' :meth:`analyse_text()` method and returns the one for which it returns the highest number. All lexers have two different filename pattern lists: the primary and the secondary one. The :func:`.get_lexer_for_filename()` function only uses the primary list, whose entries are supposed to be unique among all lexers. :func:`.guess_lexer_for_filename()`, however, will first loop through all lexers and look at the primary and secondary filename patterns if the filename matches. If only one lexer matches, it is returned, else the guessing mechanism of :func:`.guess_lexer()` is used with the matching lexers. As usual, keyword arguments to these functions are given to the created lexer as options. Command line usage ================== You can use Pygments from the command line, using the :program:`pygmentize` script:: $ pygmentize test.py will highlight the Python file test.py using ANSI escape sequences (a.k.a. terminal colors) and print the result to standard output. To output HTML, use the ``-f`` option:: $ pygmentize -f html -o test.html test.py to write an HTML-highlighted version of test.py to the file test.html. Note that it will only be a snippet of HTML, if you want a full HTML document, use the "full" option:: $ pygmentize -f html -O full -o test.html test.py This will produce a full HTML document with included stylesheet. A style can be selected with ``-O style=``. If you need a stylesheet for an existing HTML file using Pygments CSS classes, it can be created with:: $ pygmentize -S default -f html > style.css where ``default`` is the style name. More options and tricks and be found in the :doc:`command line reference `. Pygments-2.3.1/doc/docs/filters.rst0000644000175000017500000000227613376260540016313 0ustar piotrpiotr.. -*- mode: rst -*- ======= Filters ======= .. 
versionadded:: 0.7 You can filter token streams coming from lexers to improve or annotate the output. For example, you can highlight special words in comments, convert keywords to upper or lowercase to enforce a style guide etc. To apply a filter, you can use the `add_filter()` method of a lexer: .. sourcecode:: pycon >>> from pygments.lexers import PythonLexer >>> l = PythonLexer() >>> # add a filter given by a string and options >>> l.add_filter('codetagify', case='lower') >>> l.filters [] >>> from pygments.filters import KeywordCaseFilter >>> # or give an instance >>> l.add_filter(KeywordCaseFilter(case='lower')) The `add_filter()` method takes keyword arguments which are forwarded to the constructor of the filter. To get a list of all registered filters by name, you can use the `get_all_filters()` function from the `pygments.filters` module that returns an iterable for all known filters. If you want to write your own filter, have a look at :doc:`Write your own filter `. Builtin Filters =============== .. pygmentsdoc:: filters Pygments-2.3.1/doc/docs/cmdline.rst0000644000175000017500000001300113376260540016242 0ustar piotrpiotr.. -*- mode: rst -*- ====================== Command Line Interface ====================== You can use Pygments from the shell, provided you installed the :program:`pygmentize` script:: $ pygmentize test.py print "Hello World" will print the file test.py to standard output, using the Python lexer (inferred from the file name extension) and the terminal formatter (because you didn't give an explicit formatter name). If you want HTML output:: $ pygmentize -f html -l python -o test.html test.py As you can see, the -l option explicitly selects a lexer. As seen above, if you give an input file name and it has an extension that Pygments recognizes, you can omit this option. The ``-o`` option gives an output file name. If it is not given, output is written to stdout. The ``-f`` option selects a formatter (as with ``-l``, it can also be omitted if an output file name is given and has a supported extension). If no output file name is given and ``-f`` is omitted, the :class:`.TerminalFormatter` is used. The above command could therefore also be given as:: $ pygmentize -o test.html test.py To create a full HTML document, including line numbers and stylesheet (using the "emacs" style), highlighting the Python file ``test.py`` to ``test.html``:: $ pygmentize -O full,style=emacs -o test.html test.py Options and filters ------------------- Lexer and formatter options can be given using the ``-O`` option:: $ pygmentize -f html -O style=colorful,linenos=1 -l python test.py Be sure to enclose the option string in quotes if it contains any special shell characters, such as spaces or expansion wildcards like ``*``. If an option expects a list value, separate the list entries with spaces (you'll have to quote the option value in this case too, so that the shell doesn't split it). Since the ``-O`` option argument is split at commas and expects the split values to be of the form ``name=value``, you can't give an option value that contains commas or equals signs. Therefore, an option ``-P`` is provided (as of Pygments 0.9) that works like ``-O`` but can only pass one option per ``-P``. Its value can then contain all characters:: $ pygmentize -P "heading=Pygments, the Python highlighter" ... Filters are added to the token stream using the ``-F`` option:: $ pygmentize -f html -l pascal -F keywordcase:case=upper main.pas As you see, options for the filter are given after a colon. 
As for ``-O``, the filter name and options must be one shell word, so there may not be any spaces around the colon. Generating styles ----------------- Formatters normally don't output full style information. For example, the HTML formatter by default only outputs ```` tags with ``class`` attributes. Therefore, there's a special ``-S`` option for generating style definitions. Usage is as follows:: $ pygmentize -f html -S colorful -a .syntax generates a CSS style sheet (because you selected the HTML formatter) for the "colorful" style prepending a ".syntax" selector to all style rules. For an explanation what ``-a`` means for :doc:`a particular formatter `, look for the `arg` argument for the formatter's :meth:`.get_style_defs()` method. Getting lexer names ------------------- .. versionadded:: 1.0 The ``-N`` option guesses a lexer name for a given filename, so that :: $ pygmentize -N setup.py will print out ``python``. It won't highlight anything yet. If no specific lexer is known for that filename, ``text`` is printed. Custom Lexers and Formatters ---------------------------- .. versionadded:: 2.2 The ``-x`` flag enables custom lexers and formatters to be loaded from files relative to the current directory. Create a file with a class named CustomLexer or CustomFormatter, then specify it on the command line:: $ pygmentize -l your_lexer.py -f your_formatter.py -x You can also specify the name of your class with a colon:: $ pygmentize -l your_lexer.py:SomeLexer -x For more information, see :doc:`the Pygments documentation on Lexer development `. Getting help ------------ The ``-L`` option lists lexers, formatters, along with their short names and supported file name extensions, styles and filters. If you want to see only one category, give it as an argument:: $ pygmentize -L filters will list only all installed filters. The ``-H`` option will give you detailed information (the same that can be found in this documentation) about a lexer, formatter or filter. Usage is as follows:: $ pygmentize -H formatter html will print the help for the HTML formatter, while :: $ pygmentize -H lexer python will print the help for the Python lexer, etc. A note on encodings ------------------- .. versionadded:: 0.9 Pygments tries to be smart regarding encodings in the formatting process: * If you give an ``encoding`` option, it will be used as the input and output encoding. * If you give an ``outencoding`` option, it will override ``encoding`` as the output encoding. * If you give an ``inencoding`` option, it will override ``encoding`` as the input encoding. * If you don't give an encoding and have given an output file, the default encoding for lexer and formatter is the terminal encoding or the default locale encoding of the system. As a last resort, ``latin1`` is used (which will pass through all non-ASCII characters). * If you don't give an encoding and haven't given an output file (that means output is written to the console), the default encoding for lexer and formatter is the terminal encoding (``sys.stdout.encoding``). Pygments-2.3.1/doc/docs/index.rst0000644000175000017500000000157413376260540015752 0ustar piotrpiotrPygments documentation ====================== **Starting with Pygments** .. toctree:: :maxdepth: 1 ../download quickstart cmdline **Builtin components** .. toctree:: :maxdepth: 1 lexers filters formatters styles **Reference** .. toctree:: :maxdepth: 1 unicode tokens api **Hacking for Pygments** .. 
toctree:: :maxdepth: 1 lexerdevelopment formatterdevelopment filterdevelopment plugins **Hints and tricks** .. toctree:: :maxdepth: 1 rstdirective moinmoin java integrate **About Pygments** .. toctree:: :maxdepth: 1 changelog authors If you find bugs or have suggestions for the documentation, please look :ref:`here ` for info on how to contact the team. .. XXX You can download an offline version of this documentation from the :doc:`download page `. Pygments-2.3.1/doc/docs/lexers.rst0000644000175000017500000000376113376260540016145 0ustar piotrpiotr.. -*- mode: rst -*- ================ Available lexers ================ This page lists all available builtin lexers and the options they take. Currently, **all lexers** support these options: `stripnl` Strip leading and trailing newlines from the input (default: ``True``) `stripall` Strip all leading and trailing whitespace from the input (default: ``False``). `ensurenl` Make sure that the input ends with a newline (default: ``True``). This is required for some lexers that consume input linewise. .. versionadded:: 1.3 `tabsize` If given and greater than 0, expand tabs in the input (default: ``0``). `encoding` If given, must be an encoding name (such as ``"utf-8"``). This encoding will be used to convert the input string to Unicode (if it is not already a Unicode string). The default is ``"guess"``. If this option is set to ``"guess"``, a simple UTF-8 vs. Latin-1 detection is used, if it is set to ``"chardet"``, the `chardet library `_ is used to guess the encoding of the input. .. versionadded:: 0.6 The "Short Names" field lists the identifiers that can be used with the `get_lexer_by_name()` function. These lexers are builtin and can be imported from `pygments.lexers`: .. pygmentsdoc:: lexers Iterating over all lexers ------------------------- .. versionadded:: 0.6 To get all lexers (both the builtin and the plugin ones), you can use the `get_all_lexers()` function from the `pygments.lexers` module: .. sourcecode:: pycon >>> from pygments.lexers import get_all_lexers >>> i = get_all_lexers() >>> i.next() ('Diff', ('diff',), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')) >>> i.next() ('Delphi', ('delphi', 'objectpascal', 'pas', 'pascal'), ('*.pas',), ('text/x-pascal',)) >>> i.next() ('XML+Ruby', ('xml+erb', 'xml+ruby'), (), ()) As you can see, the return value is an iterator which yields tuples in the form ``(name, aliases, filetypes, mimetypes)``. Pygments-2.3.1/doc/docs/java.rst0000644000175000017500000000443113376260540015557 0ustar piotrpiotr===================== Use Pygments in Java ===================== Thanks to `Jython `_ it is possible to use Pygments in Java. This page is a simple tutorial to get an idea of how this works. You can then look at the `Jython documentation `_ for more advanced uses. Since version 1.5, Pygments is deployed on `Maven Central `_ as a JAR, as is Jython which makes it a lot easier to create a Java project. Here is an example of a `Maven `_ ``pom.xml`` file for a project running Pygments: .. sourcecode:: xml 4.0.0 example example 1.0-SNAPSHOT org.python jython-standalone 2.5.3 org.pygments pygments 1.5 runtime The following Java example: .. 
sourcecode:: java PythonInterpreter interpreter = new PythonInterpreter(); // Set a variable with the content you want to work with interpreter.set("code", code); // Simple use Pygments as you would in Python interpreter.exec("from pygments import highlight\n" + "from pygments.lexers import PythonLexer\n" + "from pygments.formatters import HtmlFormatter\n" + "\nresult = highlight(code, PythonLexer(), HtmlFormatter())"); // Get the result that has been set in a variable System.out.println(interpreter.get("result", String.class)); will print something like: .. sourcecode:: html
print "Hello World"
Pygments-2.3.1/doc/docs/changelog.rst0000644000175000017500000000003313376260540016557 0ustar piotrpiotr.. include:: ../../CHANGES Pygments-2.3.1/doc/docs/plugins.rst0000644000175000017500000000502513376260540016317 0ustar piotrpiotr================ Register Plugins ================ If you want to extend Pygments without hacking the sources, but want to use the lexer/formatter/style/filter lookup functions (`lexers.get_lexer_by_name` et al.), you can use `setuptools`_ entrypoints to add new lexers, formatters or styles as if they were in the Pygments core. .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools That means you can use your highlighter modules with the `pygmentize` script, which relies on the mentioned functions. Entrypoints =========== Here is a list of setuptools entrypoints that Pygments understands: `pygments.lexers` This entrypoint is used for adding new lexers to the Pygments core. The name of the entrypoint values doesn't really matter, Pygments extracts required metadata from the class definition: .. sourcecode:: ini [pygments.lexers] yourlexer = yourmodule:YourLexer Note that you have to define ``name``, ``aliases`` and ``filename`` attributes so that you can use the highlighter from the command line: .. sourcecode:: python class YourLexer(...): name = 'Name Of Your Lexer' aliases = ['alias'] filenames = ['*.ext'] `pygments.formatters` You can use this entrypoint to add new formatters to Pygments. The name of an entrypoint item is the name of the formatter. If you prefix the name with a slash it's used as a filename pattern: .. sourcecode:: ini [pygments.formatters] yourformatter = yourmodule:YourFormatter /.ext = yourmodule:YourFormatter `pygments.styles` To add a new style you can use this entrypoint. The name of the entrypoint is the name of the style: .. sourcecode:: ini [pygments.styles] yourstyle = yourmodule:YourStyle `pygments.filters` Use this entrypoint to register a new filter. The name of the entrypoint is the name of the filter: .. sourcecode:: ini [pygments.filters] yourfilter = yourmodule:YourFilter How To Use Entrypoints ====================== This documentation doesn't explain how to use those entrypoints because this is covered in the `setuptools documentation`_. That page should cover everything you need to write a plugin. .. _setuptools documentation: http://peak.telecommunity.com/DevCenter/setuptools Extending The Core ================== If you have written a Pygments plugin that is open source, please inform us about that. There is a high chance that we'll add it to the Pygments distribution. Pygments-2.3.1/doc/docs/formatterdevelopment.rst0000644000175000017500000001404713376260540021110 0ustar piotrpiotr.. -*- mode: rst -*- ======================== Write your own formatter ======================== As well as creating :doc:`your own lexer `, writing a new formatter for Pygments is easy and straightforward. A formatter is a class that is initialized with some keyword arguments (the formatter options) and that must provides a `format()` method. Additionally a formatter should provide a `get_style_defs()` method that returns the style definitions from the style in a form usable for the formatter's output format. Quickstart ========== The most basic formatter shipped with Pygments is the `NullFormatter`. It just sends the value of a token to the output stream: .. 
sourcecode:: python from pygments.formatter import Formatter class NullFormatter(Formatter): def format(self, tokensource, outfile): for ttype, value in tokensource: outfile.write(value) As you can see, the `format()` method is passed two parameters: `tokensource` and `outfile`. The first is an iterable of ``(token_type, value)`` tuples, the latter a file like object with a `write()` method. Because the formatter is that basic it doesn't overwrite the `get_style_defs()` method. Styles ====== Styles aren't instantiated but their metaclass provides some class functions so that you can access the style definitions easily. Styles are iterable and yield tuples in the form ``(ttype, d)`` where `ttype` is a token and `d` is a dict with the following keys: ``'color'`` Hexadecimal color value (eg: ``'ff0000'`` for red) or `None` if not defined. ``'bold'`` `True` if the value should be bold ``'italic'`` `True` if the value should be italic ``'underline'`` `True` if the value should be underlined ``'bgcolor'`` Hexadecimal color value for the background (eg: ``'eeeeeee'`` for light gray) or `None` if not defined. ``'border'`` Hexadecimal color value for the border (eg: ``'0000aa'`` for a dark blue) or `None` for no border. Additional keys might appear in the future, formatters should ignore all keys they don't support. HTML 3.2 Formatter ================== For an more complex example, let's implement a HTML 3.2 Formatter. We don't use CSS but inline markup (````, ````, etc). Because this isn't good style this formatter isn't in the standard library ;-) .. sourcecode:: python from pygments.formatter import Formatter class OldHtmlFormatter(Formatter): def __init__(self, **options): Formatter.__init__(self, **options) # create a dict of (start, end) tuples that wrap the # value of a token so that we can use it in the format # method later self.styles = {} # we iterate over the `_styles` attribute of a style item # that contains the parsed style values. for token, style in self.style: start = end = '' # a style item is a tuple in the following form: # colors are readily specified in hex: 'RRGGBB' if style['color']: start += '' % style['color'] end = '' + end if style['bold']: start += '' end = '' + end if style['italic']: start += '' end = '' + end if style['underline']: start += '' end = '' + end self.styles[token] = (start, end) def format(self, tokensource, outfile): # lastval is a string we use for caching # because it's possible that an lexer yields a number # of consecutive tokens with the same token type. # to minimize the size of the generated html markup we # try to join the values of same-type tokens here lastval = '' lasttype = None # wrap the whole output with
            outfile.write('<pre>')

            for ttype, value in tokensource:
                # if the token type doesn't exist in the stylemap
                # we try it with the parent of the token type
                # eg: parent of Token.Literal.String.Double is
                # Token.Literal.String
                while ttype not in self.styles:
                    ttype = ttype.parent
                if ttype == lasttype:
                    # the current token type is the same of the last
                    # iteration. cache it
                    lastval += value
                else:
                    # not the same token as last iteration, but we
                    # have some data in the buffer. wrap it with the
                    # defined style and write it to the output file
                    if lastval:
                        stylebegin, styleend = self.styles[lasttype]
                        outfile.write(stylebegin + lastval + styleend)
                    # set lastval/lasttype to current values
                    lastval = value
                    lasttype = ttype

            # if something is left in the buffer, write it to the
            # output file, then close the opened 
 tag
            if lastval:
                stylebegin, styleend = self.styles[lasttype]
                outfile.write(stylebegin + lastval + styleend)
            outfile.write('
\n') The comments should explain it. Again, this formatter doesn't override the `get_style_defs()` method. If we would have used CSS classes instead of inline HTML markup, we would need to generate the CSS first. For that purpose the `get_style_defs()` method exists: Generating Style Definitions ============================ Some formatters like the `LatexFormatter` and the `HtmlFormatter` don't output inline markup but reference either macros or css classes. Because the definitions of those are not part of the output, the `get_style_defs()` method exists. It is passed one parameter (if it's used and how it's used is up to the formatter) and has to return a string or ``None``. Pygments-2.3.1/doc/docs/rstdirective.rst0000644000175000017500000000155113376260540017345 0ustar piotrpiotr.. -*- mode: rst -*- ================================ Using Pygments in ReST documents ================================ Many Python people use `ReST`_ for documentation their sourcecode, programs, scripts et cetera. This also means that documentation often includes sourcecode samples or snippets. You can easily enable Pygments support for your ReST texts using a custom directive -- this is also how this documentation displays source code. From Pygments 0.9, the directive is shipped in the distribution as `external/rst-directive.py`. You can copy and adapt this code to your liking. .. removed -- too confusing *Loosely related note:* The ReST lexer now recognizes ``.. sourcecode::`` and ``.. code::`` directives and highlights the contents in the specified language if the `handlecodeblocks` option is true. .. _ReST: http://docutils.sf.net/rst.html Pygments-2.3.1/doc/docs/moinmoin.rst0000644000175000017500000000267013376260540016466 0ustar piotrpiotr.. -*- mode: rst -*- ============================ Using Pygments with MoinMoin ============================ From Pygments 0.7, the source distribution ships a `Moin`_ parser plugin that can be used to get Pygments highlighting in Moin wiki pages. To use it, copy the file `external/moin-parser.py` from the Pygments distribution to the `data/plugin/parser` subdirectory of your Moin instance. Edit the options at the top of the file (currently ``ATTACHMENTS`` and ``INLINESTYLES``) and rename the file to the name that the parser directive should have. For example, if you name the file ``code.py``, you can get a highlighted Python code sample with this Wiki markup:: {{{ #!code python [...] }}} where ``python`` is the Pygments name of the lexer to use. Additionally, if you set the ``ATTACHMENTS`` option to True, Pygments will also be called for all attachments for whose filenames there is no other parser registered. You are responsible for including CSS rules that will map the Pygments CSS classes to colors. You can output a stylesheet file with `pygmentize`, put it into the `htdocs` directory of your Moin instance and then include it in the `stylesheets` configuration option in the Moin config, e.g.:: stylesheets = [('screen', '/htdocs/pygments.css')] If you do not want to do that and are willing to accept larger HTML output, you can set the ``INLINESTYLES`` option to True. .. _Moin: http://moinmoin.wikiwikiweb.de/ Pygments-2.3.1/doc/docs/styles.rst0000644000175000017500000001427713376260540016172 0ustar piotrpiotr.. -*- mode: rst -*- ====== Styles ====== Pygments comes with some builtin styles that work for both the HTML and LaTeX formatter. The builtin styles can be looked up with the `get_style_by_name` function: .. 
sourcecode:: pycon >>> from pygments.styles import get_style_by_name >>> get_style_by_name('colorful') You can pass a instance of a `Style` class to a formatter as the `style` option in form of a string: .. sourcecode:: pycon >>> from pygments.styles import get_style_by_name >>> from pygments.formatters import HtmlFormatter >>> HtmlFormatter(style='colorful').style Or you can also import your own style (which must be a subclass of `pygments.style.Style`) and pass it to the formatter: .. sourcecode:: pycon >>> from yourapp.yourmodule import YourStyle >>> from pygments.formatters import HtmlFormatter >>> HtmlFormatter(style=YourStyle).style Creating Own Styles =================== So, how to create a style? All you have to do is to subclass `Style` and define some styles: .. sourcecode:: python from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic class YourStyle(Style): default_style = "" styles = { Comment: 'italic #888', Keyword: 'bold #005', Name: '#f00', Name.Function: '#0f0', Name.Class: 'bold #0f0', String: 'bg:#eee #111' } That's it. There are just a few rules. When you define a style for `Name` the style automatically also affects `Name.Function` and so on. If you defined ``'bold'`` and you don't want boldface for a subtoken use ``'nobold'``. (Philosophy: the styles aren't written in CSS syntax since this way they can be used for a variety of formatters.) `default_style` is the style inherited by all token types. To make the style usable for Pygments, you must * either register it as a plugin (see :doc:`the plugin docs `) * or drop it into the `styles` subpackage of your Pygments distribution one style class per style, where the file name is the style name and the class name is `StylenameClass`. For example, if your style should be called ``"mondrian"``, name the class `MondrianStyle`, put it into the file ``mondrian.py`` and this file into the ``pygments.styles`` subpackage directory. Style Rules =========== Here a small overview of all allowed styles: ``bold`` render text as bold ``nobold`` don't render text as bold (to prevent subtokens being highlighted bold) ``italic`` render text italic ``noitalic`` don't render text as italic ``underline`` render text underlined ``nounderline`` don't render text underlined ``bg:`` transparent background ``bg:#000000`` background color (black) ``border:`` no border ``border:#ffffff`` border color (white) ``#ff0000`` text color (red) ``noinherit`` don't inherit styles from supertoken Note that there may not be a space between ``bg:`` and the color value since the style definition string is split at whitespace. Also, using named colors is not allowed since the supported color names vary for different formatters. Furthermore, not all lexers might support every style. Builtin Styles ============== Pygments ships some builtin styles which are maintained by the Pygments team. To get a list of known styles you can use this snippet: .. sourcecode:: pycon >>> from pygments.styles import STYLE_MAP >>> STYLE_MAP.keys() ['default', 'emacs', 'friendly', 'colorful'] Getting a list of available styles ================================== .. versionadded:: 0.6 Because it could be that a plugin registered a style, there is a way to iterate over all styles: .. sourcecode:: pycon >>> from pygments.styles import get_all_styles >>> styles = list(get_all_styles()) .. _AnsiTerminalStyle: Terminal Styles =============== .. 
versionadded:: 2.2 Custom styles used with the 256-color terminal formatter can also map colors to use the 8 default ANSI colors. To do so, use ``#ansigreen``, ``#ansired`` or any other colors defined in :attr:`pygments.style.ansicolors`. Foreground ANSI colors will be mapped to the corresponding `escape codes 30 to 37 `_ thus respecting any custom color mapping and themes provided by many terminal emulators. Light variants are treated as foreground color with and an added bold flag. ``bg:#ansi`` will also be respected, except the light variant will be the same shade as their dark variant. See the following example where the color of the string ``"hello world"`` is governed by the escape sequence ``\x1b[34;01m`` (Ansi Blue, Bold, 41 being red background) instead of an extended foreground & background color. .. sourcecode:: pycon >>> from pygments import highlight >>> from pygments.style import Style >>> from pygments.token import Token >>> from pygments.lexers import Python3Lexer >>> from pygments.formatters import Terminal256Formatter >>> class MyStyle(Style): styles = { Token.String: '#ansiblue bg:#ansired', } >>> code = 'print("Hello World")' >>> result = highlight(code, Python3Lexer(), Terminal256Formatter(style=MyStyle)) >>> print(result.encode()) b'\x1b[34;41;01m"\x1b[39;49;00m\x1b[34;41;01mHello World\x1b[39;49;00m\x1b[34;41;01m"\x1b[39;49;00m' Colors specified using ``#ansi*`` are converted to a default set of RGB colors when used with formatters other than the terminal-256 formatter. By definition of ANSI, the following colors are considered "light" colors, and will be rendered by most terminals as bold: - "darkgray", "red", "green", "yellow", "blue", "fuchsia", "turquoise", "white" The following are considered "dark" colors and will be rendered as non-bold: - "black", "darkred", "darkgreen", "brown", "darkblue", "purple", "teal", "lightgray" Exact behavior might depends on the terminal emulator you are using, and its settings. Pygments-2.3.1/doc/docs/lexerdevelopment.rst0000644000175000017500000006610613376260540020227 0ustar piotrpiotr.. -*- mode: rst -*- .. highlight:: python ==================== Write your own lexer ==================== If a lexer for your favorite language is missing in the Pygments package, you can easily write your own and extend Pygments. All you need can be found inside the :mod:`pygments.lexer` module. As you can read in the :doc:`API documentation `, a lexer is a class that is initialized with some keyword arguments (the lexer options) and that provides a :meth:`.get_tokens_unprocessed()` method which is given a string or unicode object with the data to lex. The :meth:`.get_tokens_unprocessed()` method must return an iterator or iterable containing tuples in the form ``(index, token, value)``. Normally you don't need to do this since there are base lexers that do most of the work and that you can subclass. RegexLexer ========== The lexer base class used by almost all of Pygments' lexers is the :class:`RegexLexer`. This class allows you to define lexing rules in terms of *regular expressions* for different *states*. States are groups of regular expressions that are matched against the input string at the *current position*. If one of these expressions matches, a corresponding action is performed (such as yielding a token with a specific type, or changing state), the current position is set to where the last match ended and the matching process continues with the first regex of the current state. 
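If you want to see the resulting ``(index, token, value)`` tuples for yourself, you can feed a small input to any existing lexer and print what ``get_tokens_unprocessed()`` yields. This is only an illustration and uses the builtin `DiffLexer` discussed below:

.. code-block:: python

    from pygments.lexers import DiffLexer

    lexer = DiffLexer()
    # each tuple is (start position, token type, matched text)
    for index, tokentype, value in lexer.get_tokens_unprocessed('+new line\n-old line\n'):
        print(index, tokentype, repr(value))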
Lexer states are kept on a stack: each time a new state is entered, the new state is pushed onto the stack. The most basic lexers (like the `DiffLexer`) just need one state. Each state is defined as a list of tuples in the form (`regex`, `action`, `new_state`) where the last item is optional. In the most basic form, `action` is a token type (like `Name.Builtin`). That means: When `regex` matches, emit a token with the match text and type `tokentype` and push `new_state` on the state stack. If the new state is ``'#pop'``, the topmost state is popped from the stack instead. To pop more than one state, use ``'#pop:2'`` and so on. ``'#push'`` is a synonym for pushing the current state on the stack. The following example shows the `DiffLexer` from the builtin lexers. Note that it contains some additional attributes `name`, `aliases` and `filenames` which aren't required for a lexer. They are used by the builtin lexer lookup functions. :: from pygments.lexer import RegexLexer from pygments.token import * class DiffLexer(RegexLexer): name = 'Diff' aliases = ['diff'] filenames = ['*.diff'] tokens = { 'root': [ (r' .*\n', Text), (r'\+.*\n', Generic.Inserted), (r'-.*\n', Generic.Deleted), (r'@.*\n', Generic.Subheading), (r'Index.*\n', Generic.Heading), (r'=.*\n', Generic.Heading), (r'.*\n', Text), ] } As you can see this lexer only uses one state. When the lexer starts scanning the text, it first checks if the current character is a space. If this is true it scans everything until newline and returns the data as a `Text` token (which is the "no special highlighting" token). If this rule doesn't match, it checks if the current char is a plus sign. And so on. If no rule matches at the current position, the current char is emitted as an `Error` token that indicates a lexing error, and the position is increased by one. Adding and testing a new lexer ============================== The easiest way to use a new lexer is to use Pygments' support for loading the lexer from a file relative to your current directory. First, change the name of your lexer class to CustomLexer: .. code-block:: python from pygments.lexer import RegexLexer from pygments.token import * class CustomLexer(RegexLexer): """All your lexer code goes here!""" Then you can load the lexer from the command line with the additional flag ``-x``: .. code-block:: console $ pygmentize -l your_lexer_file.py -x To specify a class name other than CustomLexer, append it with a colon: .. code-block:: console $ pygmentize -l your_lexer.py:SomeLexer -x Or, using the Python API: .. code-block:: python # For a lexer named CustomLexer your_lexer = load_lexer_from_file(filename, **options) # For a lexer named MyNewLexer your_named_lexer = load_lexer_from_file(filename, "MyNewLexer", **options) When loading custom lexers and formatters, be extremely careful to use only trusted files; Pygments will perform the equivalent of ``eval`` on them. If you only want to use your lexer with the Pygments API, you can import and instantiate the lexer yourself, then pass it to :func:`pygments.highlight`. To prepare your new lexer for inclusion in the Pygments distribution, so that it will be found when passing filenames or lexer aliases from the command line, you have to perform the following steps. First, change to the current directory containing the Pygments source code. You will need to have either an unpacked source tarball, or (preferably) a copy cloned from BitBucket. .. 
code-block:: console $ cd .../pygments-main Select a matching module under ``pygments/lexers``, or create a new module for your lexer class. Next, make sure the lexer is known from outside of the module. All modules in the ``pygments.lexers`` package specify ``__all__``. For example, ``esoteric.py`` sets:: __all__ = ['BrainfuckLexer', 'BefungeLexer', ...] Add the name of your lexer class to this list (or create the list if your lexer is the only class in the module). Finally the lexer can be made publicly known by rebuilding the lexer mapping: .. code-block:: console $ make mapfiles To test the new lexer, store an example file with the proper extension in ``tests/examplefiles``. For example, to test your ``DiffLexer``, add a ``tests/examplefiles/example.diff`` containing a sample diff output. Now you can use pygmentize to render your example to HTML: .. code-block:: console $ ./pygmentize -O full -f html -o /tmp/example.html tests/examplefiles/example.diff Note that this explicitly calls the ``pygmentize`` in the current directory by preceding it with ``./``. This ensures your modifications are used. Otherwise a possibly already installed, unmodified version without your new lexer would have been called from the system search path (``$PATH``). To view the result, open ``/tmp/example.html`` in your browser. Once the example renders as expected, you should run the complete test suite: .. code-block:: console $ make test It also tests that your lexer fulfills the lexer API and certain invariants, such as that the concatenation of all token text is the same as the input text. Regex Flags =========== You can either define regex flags locally in the regex (``r'(?x)foo bar'``) or globally by adding a `flags` attribute to your lexer class. If no attribute is defined, it defaults to `re.MULTILINE`. For more information about regular expression flags see the page about `regular expressions`_ in the Python documentation. .. _regular expressions: http://docs.python.org/library/re.html#regular-expression-syntax Scanning multiple tokens at once ================================ So far, the `action` element in the rule tuple of regex, action and state has been a single token type. Now we look at the first of several other possible values. Here is a more complex lexer that highlights INI files. INI files consist of sections, comments and ``key = value`` pairs:: from pygments.lexer import RegexLexer, bygroups from pygments.token import * class IniLexer(RegexLexer): name = 'INI' aliases = ['ini', 'cfg'] filenames = ['*.ini', '*.cfg'] tokens = { 'root': [ (r'\s+', Text), (r';.*?$', Comment), (r'\[.*?\]$', Keyword), (r'(.*?)(\s*)(=)(\s*)(.*?)$', bygroups(Name.Attribute, Text, Operator, Text, String)) ] } The lexer first looks for whitespace, comments and section names. Later it looks for a line that looks like a key, value pair, separated by an ``'='`` sign, and optional whitespace. The `bygroups` helper yields each capturing group in the regex with a different token type. First the `Name.Attribute` token, then a `Text` token for the optional whitespace, after that a `Operator` token for the equals sign. Then a `Text` token for the whitespace again. The rest of the line is returned as `String`. Note that for this to work, every part of the match must be inside a capturing group (a ``(...)``), and there must not be any nested capturing groups. If you nevertheless need a group, use a non-capturing group defined using this syntax: ``(?:some|words|here)`` (note the ``?:`` after the beginning parenthesis). 
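A quick way to check that the capturing groups really line up with the token types passed to ``bygroups`` is to run a lexer over a one-line sample and look at the emitted tokens. The sketch below uses the builtin ``IniLexer`` that ships with Pygments, which behaves like the example above:

.. code-block:: python

    from pygments.lexers import IniLexer

    for index, tokentype, value in IniLexer().get_tokens_unprocessed('key = value\n'):
        print(index, tokentype, repr(value))

    # Expect roughly: Name.Attribute for 'key', Text for the whitespace,
    # Operator for '=', and String for 'value' -- one token per capturing group.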
If you find yourself needing a capturing group inside the regex which shouldn't be part of the output but is used in the regular expressions for backreferencing (eg: ``r'(<(foo|bar)>)(.*?)()'``), you can pass `None` to the bygroups function and that group will be skipped in the output. Changing states =============== Many lexers need multiple states to work as expected. For example, some languages allow multiline comments to be nested. Since this is a recursive pattern it's impossible to lex just using regular expressions. Here is a lexer that recognizes C++ style comments (multi-line with ``/* */`` and single-line with ``//`` until end of line):: from pygments.lexer import RegexLexer from pygments.token import * class CppCommentLexer(RegexLexer): name = 'Example Lexer with states' tokens = { 'root': [ (r'[^/]+', Text), (r'/\*', Comment.Multiline, 'comment'), (r'//.*?$', Comment.Singleline), (r'/', Text) ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ] } This lexer starts lexing in the ``'root'`` state. It tries to match as much as possible until it finds a slash (``'/'``). If the next character after the slash is an asterisk (``'*'``) the `RegexLexer` sends those two characters to the output stream marked as `Comment.Multiline` and continues lexing with the rules defined in the ``'comment'`` state. If there wasn't an asterisk after the slash, the `RegexLexer` checks if it's a Singleline comment (i.e. followed by a second slash). If this also wasn't the case it must be a single slash, which is not a comment starter (the separate regex for a single slash must also be given, else the slash would be marked as an error token). Inside the ``'comment'`` state, we do the same thing again. Scan until the lexer finds a star or slash. If it's the opening of a multiline comment, push the ``'comment'`` state on the stack and continue scanning, again in the ``'comment'`` state. Else, check if it's the end of the multiline comment. If yes, pop one state from the stack. Note: If you pop from an empty stack you'll get an `IndexError`. (There is an easy way to prevent this from happening: don't ``'#pop'`` in the root state). If the `RegexLexer` encounters a newline that is flagged as an error token, the stack is emptied and the lexer continues scanning in the ``'root'`` state. This can help producing error-tolerant highlighting for erroneous input, e.g. when a single-line string is not closed. Advanced state tricks ===================== There are a few more things you can do with states: - You can push multiple states onto the stack if you give a tuple instead of a simple string as the third item in a rule tuple. For example, if you want to match a comment containing a directive, something like: .. code-block:: text /* rest of comment */ you can use this rule:: tokens = { 'root': [ (r'/\* <', Comment, ('comment', 'directive')), ... ], 'directive': [ (r'[^>]*', Comment.Directive), (r'>', Comment, '#pop'), ], 'comment': [ (r'[^*]+', Comment), (r'\*/', Comment, '#pop'), (r'\*', Comment), ] } When this encounters the above sample, first ``'comment'`` and ``'directive'`` are pushed onto the stack, then the lexer continues in the directive state until it finds the closing ``>``, then it continues in the comment state until the closing ``*/``. Then, both states are popped from the stack again and lexing continues in the root state. .. 
versionadded:: 0.9 The tuple can contain the special ``'#push'`` and ``'#pop'`` (but not ``'#pop:n'``) directives. - You can include the rules of a state in the definition of another. This is done by using `include` from `pygments.lexer`:: from pygments.lexer import RegexLexer, bygroups, include from pygments.token import * class ExampleLexer(RegexLexer): tokens = { 'comments': [ (r'/\*.*?\*/', Comment), (r'//.*?\n', Comment), ], 'root': [ include('comments'), (r'(function )(\w+)( {)', bygroups(Keyword, Name, Keyword), 'function'), (r'.', Text), ], 'function': [ (r'[^}/]+', Text), include('comments'), (r'/', Text), (r'\}', Keyword, '#pop'), ] } This is a hypothetical lexer for a language that consist of functions and comments. Because comments can occur at toplevel and in functions, we need rules for comments in both states. As you can see, the `include` helper saves repeating rules that occur more than once (in this example, the state ``'comment'`` will never be entered by the lexer, as it's only there to be included in ``'root'`` and ``'function'``). - Sometimes, you may want to "combine" a state from existing ones. This is possible with the `combined` helper from `pygments.lexer`. If you, instead of a new state, write ``combined('state1', 'state2')`` as the third item of a rule tuple, a new anonymous state will be formed from state1 and state2 and if the rule matches, the lexer will enter this state. This is not used very often, but can be helpful in some cases, such as the `PythonLexer`'s string literal processing. - If you want your lexer to start lexing in a different state you can modify the stack by overriding the `get_tokens_unprocessed()` method:: from pygments.lexer import RegexLexer class ExampleLexer(RegexLexer): tokens = {...} def get_tokens_unprocessed(self, text, stack=('root', 'otherstate')): for item in RegexLexer.get_tokens_unprocessed(self, text, stack): yield item Some lexers like the `PhpLexer` use this to make the leading ``', Name.Tag), ], 'script-content': [ (r'(.+?)(<\s*/\s*script\s*>)', bygroups(using(JavascriptLexer), Name.Tag), '#pop'), ] } Here the content of a ```` end tag is processed by the `JavascriptLexer`, while the end tag is yielded as a normal token with the `Name.Tag` type. Also note the ``(r'<\s*script\s*', Name.Tag, ('script-content', 'tag'))`` rule. Here, two states are pushed onto the state stack, ``'script-content'`` and ``'tag'``. That means that first ``'tag'`` is processed, which will lex attributes and the closing ``>``, then the ``'tag'`` state is popped and the next state on top of the stack will be ``'script-content'``. Since you cannot refer to the class currently being defined, use `this` (imported from `pygments.lexer`) to refer to the current lexer class, i.e. ``using(this)``. This construct may seem unnecessary, but this is often the most obvious way of lexing arbitrary syntax between fixed delimiters without introducing deeply nested states. The `using()` helper has a special keyword argument, `state`, which works as follows: if given, the lexer to use initially is not in the ``"root"`` state, but in the state given by this argument. This does not work with advanced `RegexLexer` subclasses such as `ExtendedRegexLexer` (see below). Any other keywords arguments passed to `using()` are added to the keyword arguments used to create the lexer. Delegating Lexer ================ Another approach for nested lexers is the `DelegatingLexer` which is for example used for the template engine lexers. 
It takes two lexers as arguments on initialisation: a `root_lexer` and a `language_lexer`. The input is processed as follows: First, the whole text is lexed with the `language_lexer`. All tokens yielded with the special type of ``Other`` are then concatenated and given to the `root_lexer`. The language tokens of the `language_lexer` are then inserted into the `root_lexer`'s token stream at the appropriate positions. :: from pygments.lexer import DelegatingLexer from pygments.lexers.web import HtmlLexer, PhpLexer class HtmlPhpLexer(DelegatingLexer): def __init__(self, **options): super(HtmlPhpLexer, self).__init__(HtmlLexer, PhpLexer, **options) This procedure ensures that e.g. HTML with template tags in it is highlighted correctly even if the template tags are put into HTML tags or attributes. If you want to change the needle token ``Other`` to something else, you can give the lexer another token type as the third parameter:: DelegatingLexer.__init__(MyLexer, OtherLexer, Text, **options) Callbacks ========= Sometimes the grammar of a language is so complex that a lexer would be unable to process it just by using regular expressions and stacks. For this, the `RegexLexer` allows callbacks to be given in rule tuples, instead of token types (`bygroups` and `using` are nothing else but preimplemented callbacks). The callback must be a function taking two arguments: * the lexer itself * the match object for the last matched rule The callback must then return an iterable of (or simply yield) ``(index, tokentype, value)`` tuples, which are then just passed through by `get_tokens_unprocessed()`. The ``index`` here is the position of the token in the input string, ``tokentype`` is the normal token type (like `Name.Builtin`), and ``value`` the associated part of the input string. You can see an example here:: from pygments.lexer import RegexLexer from pygments.token import Generic class HypotheticLexer(RegexLexer): def headline_callback(lexer, match): equal_signs = match.group(1) text = match.group(2) yield match.start(), Generic.Headline, equal_signs + text + equal_signs tokens = { 'root': [ (r'(=+)(.*?)(\1)', headline_callback) ] } If the regex for the `headline_callback` matches, the function is called with the match object. Note that after the callback is done, processing continues normally, that is, after the end of the previous match. The callback has no possibility to influence the position. There are not really any simple examples for lexer callbacks, but you can see them in action e.g. in the `SMLLexer` class in `ml.py`_. .. _ml.py: http://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ml.py The ExtendedRegexLexer class ============================ The `RegexLexer`, even with callbacks, unfortunately isn't powerful enough for the funky syntax rules of languages such as Ruby. But fear not; even then you don't have to abandon the regular expression approach: Pygments has a subclass of `RegexLexer`, the `ExtendedRegexLexer`. All features known from RegexLexers are available here too, and the tokens are specified in exactly the same way, *except* for one detail: The `get_tokens_unprocessed()` method holds its internal state data not as local variables, but in an instance of the `pygments.lexer.LexerContext` class, and that instance is passed to callbacks as a third argument. This means that you can modify the lexer state in callbacks. 
The `LexerContext` class has the following members: * `text` -- the input text * `pos` -- the current starting position that is used for matching regexes * `stack` -- a list containing the state stack * `end` -- the maximum position to which regexes are matched, this defaults to the length of `text` Additionally, the `get_tokens_unprocessed()` method can be given a `LexerContext` instead of a string and will then process this context instead of creating a new one for the string argument. Note that because you can set the current position to anything in the callback, it won't be automatically be set by the caller after the callback is finished. For example, this is how the hypothetical lexer above would be written with the `ExtendedRegexLexer`:: from pygments.lexer import ExtendedRegexLexer from pygments.token import Generic class ExHypotheticLexer(ExtendedRegexLexer): def headline_callback(lexer, match, ctx): equal_signs = match.group(1) text = match.group(2) yield match.start(), Generic.Headline, equal_signs + text + equal_signs ctx.pos = match.end() tokens = { 'root': [ (r'(=+)(.*?)(\1)', headline_callback) ] } This might sound confusing (and it can really be). But it is needed, and for an example look at the Ruby lexer in `ruby.py`_. .. _ruby.py: https://bitbucket.org/birkenfeld/pygments-main/src/tip/pygments/lexers/ruby.py Handling Lists of Keywords ========================== For a relatively short list (hundreds) you can construct an optimized regular expression directly using ``words()`` (longer lists, see next section). This function handles a few things for you automatically, including escaping metacharacters and Python's first-match rather than longest-match in alternations. Feel free to put the lists themselves in ``pygments/lexers/_$lang_builtins.py`` (see examples there), and generated by code if possible. An example of using ``words()`` is something like:: from pygments.lexer import RegexLexer, words, Name class MyLexer(RegexLexer): tokens = { 'root': [ (words(('else', 'elseif'), suffix=r'\b'), Name.Builtin), (r'\w+', Name), ], } As you can see, you can add ``prefix`` and ``suffix`` parts to the constructed regex. Modifying Token Streams ======================= Some languages ship a lot of builtin functions (for example PHP). The total amount of those functions differs from system to system because not everybody has every extension installed. In the case of PHP there are over 3000 builtin functions. That's an incredibly huge amount of functions, much more than you want to put into a regular expression. But because only `Name` tokens can be function names this is solvable by overriding the ``get_tokens_unprocessed()`` method. The following lexer subclasses the `PythonLexer` so that it highlights some additional names as pseudo keywords:: from pygments.lexers.python import PythonLexer from pygments.token import Name, Keyword class MyPythonLexer(PythonLexer): EXTRA_KEYWORDS = set(('foo', 'bar', 'foobar', 'barfoo', 'spam', 'eggs')) def get_tokens_unprocessed(self, text): for index, token, value in PythonLexer.get_tokens_unprocessed(self, text): if token is Name and value in self.EXTRA_KEYWORDS: yield index, Keyword.Pseudo, value else: yield index, token, value The `PhpLexer` and `LuaLexer` use this method to resolve builtin functions. Pygments-2.3.1/doc/docs/api.rst0000644000175000017500000002723513376260540015416 0ustar piotrpiotr.. -*- mode: rst -*- ===================== The full Pygments API ===================== This page describes the Pygments API. 
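As a quick orientation, the functions documented below are typically combined like this (a minimal sketch using the builtin Python lexer and HTML formatter):

.. sourcecode:: python

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import HtmlFormatter

    code = 'print("Hello, world")'
    # highlight() combines lex() and format(); without an outfile it returns a string
    print(highlight(code, PythonLexer(), HtmlFormatter()))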
High-level API ============== .. module:: pygments Functions from the :mod:`pygments` module: .. function:: lex(code, lexer) Lex `code` with the `lexer` (must be a `Lexer` instance) and return an iterable of tokens. Currently, this only calls `lexer.get_tokens()`. .. function:: format(tokens, formatter, outfile=None) Format a token stream (iterable of tokens) `tokens` with the `formatter` (must be a `Formatter` instance). The result is written to `outfile`, or if that is ``None``, returned as a string. .. function:: highlight(code, lexer, formatter, outfile=None) This is the most high-level highlighting function. It combines `lex` and `format` in one function. .. module:: pygments.lexers Functions from :mod:`pygments.lexers`: .. function:: get_lexer_by_name(alias, **options) Return an instance of a `Lexer` subclass that has `alias` in its aliases list. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is found. .. function:: get_lexer_for_filename(fn, **options) Return a `Lexer` subclass instance that has a filename pattern matching `fn`. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no lexer for that filename is found. .. function:: get_lexer_for_mimetype(mime, **options) Return a `Lexer` subclass instance that has `mime` in its mimetype list. The lexer is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if not lexer for that mimetype is found. .. function:: load_lexer_from_file(filename, lexername="CustomLexer", **options) Return a `Lexer` subclass instance loaded from the provided file, relative to the current directory. The file is expected to contain a Lexer class named `lexername` (by default, CustomLexer). Users should be very careful with the input, because this method is equivalent to running eval on the input file. The lexer is given the `options` at its instantiation. :exc:`ClassNotFound` is raised if there are any errors loading the Lexer .. versionadded:: 2.2 .. function:: guess_lexer(text, **options) Return a `Lexer` subclass instance that's guessed from the text in `text`. For that, the :meth:`.analyse_text()` method of every known lexer class is called with the text as argument, and the lexer which returned the highest value will be instantiated and returned. :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can handle the content. .. function:: guess_lexer_for_filename(filename, text, **options) As :func:`guess_lexer()`, but only lexers which have a pattern in `filenames` or `alias_filenames` that matches `filename` are taken into consideration. :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can handle the content. .. function:: get_all_lexers() Return an iterable over all registered lexers, yielding tuples in the format:: (longname, tuple of aliases, tuple of filename patterns, tuple of mimetypes) .. versionadded:: 0.6 .. function:: find_lexer_class_by_name(alias) Return the `Lexer` subclass that has `alias` in its aliases list, without instantiating it. Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is found. .. versionadded:: 2.2 .. function:: find_lexer_class(name) Return the `Lexer` subclass that with the *name* attribute as given by the *name* argument. .. module:: pygments.formatters Functions from :mod:`pygments.formatters`: .. 
function:: get_formatter_by_name(alias, **options) Return an instance of a :class:`.Formatter` subclass that has `alias` in its aliases list. The formatter is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no formatter with that alias is found. .. function:: get_formatter_for_filename(fn, **options) Return a :class:`.Formatter` subclass instance that has a filename pattern matching `fn`. The formatter is given the `options` at its instantiation. Will raise :exc:`pygments.util.ClassNotFound` if no formatter for that filename is found. .. function:: load_formatter_from_file(filename, formattername="CustomFormatter", **options) Return a `Formatter` subclass instance loaded from the provided file, relative to the current directory. The file is expected to contain a Formatter class named ``formattername`` (by default, CustomFormatter). Users should be very careful with the input, because this method is equivalent to running eval on the input file. The formatter is given the `options` at its instantiation. :exc:`ClassNotFound` is raised if there are any errors loading the Formatter .. versionadded:: 2.2 .. module:: pygments.styles Functions from :mod:`pygments.styles`: .. function:: get_style_by_name(name) Return a style class by its short name. The names of the builtin styles are listed in :data:`pygments.styles.STYLE_MAP`. Will raise :exc:`pygments.util.ClassNotFound` if no style of that name is found. .. function:: get_all_styles() Return an iterable over all registered styles, yielding their names. .. versionadded:: 0.6 .. module:: pygments.lexer Lexers ====== The base lexer class from which all lexers are derived is: .. class:: Lexer(**options) The constructor takes a \*\*keywords dictionary of options. Every subclass must first process its own options and then call the `Lexer` constructor, since it processes the `stripnl`, `stripall` and `tabsize` options. An example looks like this: .. sourcecode:: python def __init__(self, **options): self.compress = options.get('compress', '') Lexer.__init__(self, **options) As these options must all be specifiable as strings (due to the command line usage), there are various utility functions available to help with that, see `Option processing`_. .. method:: get_tokens(text) This method is the basic interface of a lexer. It is called by the `highlight()` function. It must process the text and return an iterable of ``(tokentype, value)`` pairs from `text`. Normally, you don't need to override this method. The default implementation processes the `stripnl`, `stripall` and `tabsize` options and then yields all tokens from `get_tokens_unprocessed()`, with the ``index`` dropped. .. method:: get_tokens_unprocessed(text) This method should process the text and return an iterable of ``(index, tokentype, value)`` tuples where ``index`` is the starting position of the token within the input text. This method must be overridden by subclasses. .. staticmethod:: analyse_text(text) A static method which is called for lexer guessing. It should analyse the text and return a float in the range from ``0.0`` to ``1.0``. If it returns ``0.0``, the lexer will not be selected as the most probable one, if it returns ``1.0``, it will be selected immediately. .. note:: You don't have to add ``@staticmethod`` to the definition of this method, this will be taken care of by the Lexer's metaclass. For a list of known tokens have a look at the :doc:`tokens` page. 
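For illustration, ``analyse_text()`` for a hypothetical lexer whose files conventionally start with a ``#!myshell`` line could be sketched like this (the class name, alias, filename pattern and score are invented for the example):

.. sourcecode:: python

    from pygments.lexer import RegexLexer
    from pygments.token import Text

    class MyShellLexer(RegexLexer):
        """Hypothetical lexer, only meant to illustrate analyse_text()."""
        name = 'MyShell'
        aliases = ['myshell']
        filenames = ['*.msh']
        tokens = {'root': [(r'.+\n?', Text)]}

        def analyse_text(text):
            # no @staticmethod needed -- the Lexer metaclass takes care of that
            return 0.9 if text.startswith('#!myshell') else 0.0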
A lexer also can have the following attributes (in fact, they are mandatory except `alias_filenames`) that are used by the builtin lookup mechanism. .. attribute:: name Full name for the lexer, in human-readable form. .. attribute:: aliases A list of short, unique identifiers that can be used to lookup the lexer from a list, e.g. using `get_lexer_by_name()`. .. attribute:: filenames A list of `fnmatch` patterns that match filenames which contain content for this lexer. The patterns in this list should be unique among all lexers. .. attribute:: alias_filenames A list of `fnmatch` patterns that match filenames which may or may not contain content for this lexer. This list is used by the :func:`.guess_lexer_for_filename()` function, to determine which lexers are then included in guessing the correct one. That means that e.g. every lexer for HTML and a template language should include ``\*.html`` in this list. .. attribute:: mimetypes A list of MIME types for content that can be lexed with this lexer. .. module:: pygments.formatter Formatters ========== A formatter is derived from this class: .. class:: Formatter(**options) As with lexers, this constructor processes options and then must call the base class :meth:`__init__`. The :class:`Formatter` class recognizes the options `style`, `full` and `title`. It is up to the formatter class whether it uses them. .. method:: get_style_defs(arg='') This method must return statements or declarations suitable to define the current style for subsequent highlighted text (e.g. CSS classes in the `HTMLFormatter`). The optional argument `arg` can be used to modify the generation and is formatter dependent (it is standardized because it can be given on the command line). This method is called by the ``-S`` :doc:`command-line option `, the `arg` is then given by the ``-a`` option. .. method:: format(tokensource, outfile) This method must format the tokens from the `tokensource` iterable and write the formatted version to the file object `outfile`. Formatter options can control how exactly the tokens are converted. .. versionadded:: 0.7 A formatter must have the following attributes that are used by the builtin lookup mechanism. .. attribute:: name Full name for the formatter, in human-readable form. .. attribute:: aliases A list of short, unique identifiers that can be used to lookup the formatter from a list, e.g. using :func:`.get_formatter_by_name()`. .. attribute:: filenames A list of :mod:`fnmatch` patterns that match filenames for which this formatter can produce output. The patterns in this list should be unique among all formatters. .. module:: pygments.util Option processing ================= The :mod:`pygments.util` module has some utility functions usable for option processing: .. exception:: OptionError This exception will be raised by all option processing functions if the type or value of the argument is not correct. .. function:: get_bool_opt(options, optname, default=None) Interpret the key `optname` from the dictionary `options` as a boolean and return it. Return `default` if `optname` is not in `options`. The valid string values for ``True`` are ``1``, ``yes``, ``true`` and ``on``, the ones for ``False`` are ``0``, ``no``, ``false`` and ``off`` (matched case-insensitively). .. function:: get_int_opt(options, optname, default=None) As :func:`get_bool_opt`, but interpret the value as an integer. .. 
function:: get_list_opt(options, optname, default=None) If the key `optname` from the dictionary `options` is a string, split it at whitespace and return it. If it is already a list or a tuple, it is returned as a list. .. function:: get_choice_opt(options, optname, allowed, default=None) If the key `optname` from the dictionary is not in the sequence `allowed`, raise an error, otherwise return it. .. versionadded:: 0.8 Pygments-2.3.1/doc/index.rst0000644000175000017500000000364013376260540015016 0ustar piotrpiotrWelcome! ======== This is the home of Pygments. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are: * a wide range of over 300 languages and other text formats is supported * special attention is paid to details that increase highlighting quality * support for new languages and formats are added easily; most languages use a simple regex-based lexing mechanism * a number of output formats is available, among them HTML, RTF, LaTeX and ANSI sequences * it is usable as a command-line tool and as a library * ... and it highlights even Perl 6! Read more in the :doc:`FAQ list ` or the :doc:`documentation `, or `download the latest release `_. .. _contribute: Contribute ---------- Like every open-source project, we are always looking for volunteers to help us with programming. Python knowledge is required, but don't fear: Python is a very clear and easy to learn language. Development takes place on `Bitbucket `_, where the Mercurial repository, tickets and pull requests can be viewed. Our primary communication instrument is the IRC channel **#pocoo** on the Freenode network. To join it, let your IRC client connect to ``irc.freenode.net`` and do ``/join #pocoo``. If you found a bug, just open a ticket in the Bitbucket tracker. Be sure to log in to be notified when the issue is fixed -- development is not fast-paced as the library is quite stable. You can also send an e-mail to the developers, see below. The authors ----------- Pygments is maintained by **Georg Brandl**, e-mail address *georg*\ *@*\ *python.org*. Many lexers and fixes have been contributed by **Armin Ronacher**, the rest of the `Pocoo `_ team and **Tim Hatch**. .. toctree:: :maxdepth: 1 :hidden: docs/index Pygments-2.3.1/doc/conf.py0000644000175000017500000001706613376260540014463 0ustar piotrpiotr# -*- coding: utf-8 -*- # # Pygments documentation build configuration file # import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. sys.path.insert(0, os.path.abspath('..')) import pygments # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'pygments.sphinxext'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. 
project = u'Pygments' copyright = u'2015, Georg Brandl' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = pygments.__version__ # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. #pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'pygments14' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. html_theme_path = ['_themes'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = '_static/favicon.ico' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = {'index': 'indexsidebar.html', 'docs/*': 'docssidebar.html'} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. 
#html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'Pygmentsdoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'Pygments.tex', u'Pygments Documentation', u'Georg Brandl', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'pygments', u'Pygments Documentation', [u'Georg Brandl'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'Pygments', u'Pygments Documentation', u'Georg Brandl', 'Pygments', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # Example configuration for intersphinx: refer to the Python standard library. #intersphinx_mapping = {'http://docs.python.org/': None} Pygments-2.3.1/doc/faq.rst0000644000175000017500000001371313376303352014457 0ustar piotrpiotr:orphan: Pygments FAQ ============= What is Pygments? ----------------- Pygments is a syntax highlighting engine written in Python. That means, it will take source code (or other markup) in a supported language and output a processed version (in different formats) containing syntax highlighting markup. 
Its features include: * a wide range of common :doc:`languages and markup formats ` is supported * new languages and formats are added easily * a number of output formats is available, including: - HTML - ANSI sequences (console output) - LaTeX - RTF * it is usable as a command-line tool and as a library * parsing and formatting is fast Pygments is licensed under the BSD license. Where does the name Pygments come from? --------------------------------------- *Py* of course stands for Python, while *pigments* are used for coloring paint, and in this case, source code! What are the system requirements? --------------------------------- Pygments only needs a standard Python install, version 2.7 or higher or version 3.5 or higher for Python 3. No additional libraries are needed. How can I use Pygments? ----------------------- Pygments is usable as a command-line tool as well as a library. From the command-line, usage looks like this (assuming the pygmentize script is properly installed):: pygmentize -f html /path/to/file.py This will print a HTML-highlighted version of /path/to/file.py to standard output. For a complete help, please run ``pygmentize -h``. Usage as a library is thoroughly demonstrated in the Documentation section. How do I make a new style? -------------------------- Please see the :doc:`documentation on styles `. How can I report a bug or suggest a feature? -------------------------------------------- Please report bugs and feature wishes in the tracker at Bitbucket. You can also e-mail the author or use IRC, see the contact details. I want this support for this language! -------------------------------------- Instead of waiting for others to include language support, why not write it yourself? All you have to know is :doc:`outlined in the docs `. Can I use Pygments for programming language processing? ------------------------------------------------------- The Pygments lexing machinery is quite powerful can be used to build lexers for basically all languages. However, parsing them is not possible, though some lexers go some steps in this direction in order to e.g. highlight function names differently. Also, error reporting is not the scope of Pygments. It focuses on correctly highlighting syntactically valid documents, not finding and compensating errors. Who uses Pygments? ------------------ This is an (incomplete) list of projects and sites known to use the Pygments highlighter. 
* `Wikipedia `_ * `BitBucket `_, a Mercurial and Git hosting site * `The Sphinx documentation builder `_, for embedded source examples * `rst2pdf `_, a reStructuredText to PDF converter * `Codecov `_, a code coverage CI service * `Trac `_, the universal project management tool * `AsciiDoc `_, a text-based documentation generator * `ActiveState Code `_, the Python Cookbook successor * `ViewVC `_, a web-based version control repository browser * `BzrFruit `_, a Bazaar branch viewer * `QBzr `_, a cross-platform Qt-based GUI front end for Bazaar * `Review Board `_, a collaborative code reviewing tool * `Diamanda `_, a Django powered wiki system with support for Pygments * `Progopedia `_ (`English `_), an encyclopedia of programming languages * `Bruce `_, a reStructuredText presentation tool * `PIDA `_, a universal IDE written in Python * `BPython `_, a curses-based intelligent Python shell * `PuDB `_, a console Python debugger * `XWiki `_, a wiki-based development framework in Java, using Jython * `roux `_, a script for running R scripts and creating beautiful output including graphs * `hurl `_, a web service for making HTTP requests * `wxHTMLPygmentizer `_ is a GUI utility, used to make code-colorization easier * `Postmarkup `_, a BBCode to XHTML generator * `WpPygments `_, and `WPygments `_, highlighter plugins for WordPress * `Siafoo `_, a tool for sharing and storing useful code and programming experience * `D source `_, a community for the D programming language * `dpaste.com `_, another Django pastebin * `Django snippets `_, a pastebin for Django code * `Fayaa `_, a Chinese pastebin * `Incollo.com `_, a free collaborative debugging tool * `PasteBox `_, a pastebin focused on privacy * `hilite.me `_, a site to highlight code snippets * `patx.me `_, a pastebin * `Fluidic `_, an experiment in integrating shells with a GUI * `pygments.rb `_, a pygments wrapper for Ruby * `Clygments `_, a pygments wrapper for Clojure * `PHPygments `_, a pygments wrapper for PHP If you have a project or web site using Pygments, drop me a line, and I'll add a link here. Pygments-2.3.1/doc/languages.rst0000644000175000017500000000541713376303352015660 0ustar piotrpiotr:orphan: Supported languages =================== Pygments supports an ever-growing range of languages. Watch this space... Programming languages --------------------- * ActionScript * Ada * ANTLR * AppleScript * Assembly (various) * Asymptote * Awk * Befunge * Boo * BrainFuck * C, C++ * C# * Clojure * CoffeeScript * ColdFusion * Common Lisp * Coq * Cryptol (incl. Literate Cryptol) * `Crystal `_ * `Cython `_ * `D `_ * Dart * Delphi * Dylan * `Elm `_ * Erlang * `Ezhil `_ Ezhil - A Tamil programming language * Factor * Fancy * `Fennel `_ * Fortran * F# * GAP * Gherkin (Cucumber) * GL shaders * Groovy * `Haskell `_ (incl. Literate Haskell) * HLSL * IDL * Io * Java * JavaScript * Lasso * LLVM * Logtalk * `Lua `_ * Matlab * MiniD * Modelica * Modula-2 * MuPad * Nemerle * Nimrod * Objective-C * Objective-J * Octave * OCaml * PHP * `Perl `_ * PovRay * PostScript * PowerShell * Prolog * `Python `_ 2.x and 3.x (incl. console sessions and tracebacks) * `REBOL `_ * `Red `_ * Redcode * `Ruby `_ (incl. 
irb sessions) * Rust * S, S-Plus, R * Scala * Scheme * Scilab * Smalltalk * SNOBOL * Tcl * Vala * Verilog * VHDL * Visual Basic.NET * Visual FoxPro * XQuery * Zephir Template languages ------------------ * Cheetah templates * `Django `_ / `Jinja `_ templates * ERB (Ruby templating) * `Genshi `_ (the Trac template language) * JSP (Java Server Pages) * `Myghty `_ (the HTML::Mason based framework) * `Mako `_ (the Myghty successor) * `Smarty `_ templates (PHP templating) * Tea Other markup ------------ * Apache config files * Bash shell scripts * BBCode * CMake * CSS * Debian control files * Diff files * DTD * Gettext catalogs * Gnuplot script * Groff markup * HTML * HTTP sessions * INI-style config files * IRC logs (irssi style) * Lighttpd config files * Makefiles * MoinMoin/Trac Wiki markup * MySQL * Nginx config files * POV-Ray scenes * Ragel * Redcode * ReST * Robot Framework * RPM spec files * SQL, also MySQL, SQLite * Squid configuration * TeX * tcsh * Vim Script * Windows batch files * XML * XSLT * YAML ... that's all? --------------- Well, why not write your own? Contributing to Pygments is easy and fun. Take a look at the :doc:`docs on lexer development ` and :ref:`contact details `. Note: the languages listed here are supported in the development version. The latest release may lack a few of them. Pygments-2.3.1/doc/download.rst0000644000175000017500000000263413376260540015520 0ustar piotrpiotrDownload and installation ========================= The current release is version |version|. Packaged versions ----------------- You can download it `from the Python Package Index `_. For installation of packages from PyPI, we recommend `Pip `_, which works on all major platforms. Under Linux, most distributions include a package for Pygments, usually called ``pygments`` or ``python-pygments``. You can install it with the package manager as usual. Development sources ------------------- We're using the `Mercurial `_ version control system. You can get the development source using this command:: hg clone http://bitbucket.org/birkenfeld/pygments-main pygments Development takes place at `Bitbucket `_, you can browse the source online `here `_. The latest changes in the development source code are listed in the `changelog `_. .. Documentation ------------- .. XXX todo You can download the documentation either as a bunch of rst files from the Mercurial repository, see above, or as a tar.gz containing rendered HTML files:

pygmentsdocs.tar.gz

Pygments-2.3.1/doc/_static/0000755000175000017500000000000013405476652014606 5ustar piotrpiotrPygments-2.3.1/doc/_static/logo_only.png0000644000175000017500000004005013376260540017306 0ustar piotrpiotr[binary PNG image data omitted]
ydzluXpy, Jn~/j幏W5̪"F:;;tiFfZǒ$P($IuarަqKӤiچC3x $Iiwox|$I P($l4Mt:M...2&`lz ]iaaEݺ񮷯"ЄW#<9o몪0DщvM2,)c6h4l4q7he3׏v^+4M={nh4-)lĨn>V w>KUUZXXpqHN^\o?LfFdgyT&uf!ȳlryϡ  <;u?yf ^iLDӉ<j=x|^g1r$Eh˖-lhhu&''>vɖ$٨HH6Qd\!"bu#Y5Mx<Z֭[:U\)]g{YYyюkW^4=4E^Ωs?Ho[j^gx/_t0ݪ3Ή8/`aFo ^ZǨgW\ Ӳ򯕖-zeYzlkZƼΦx=5k7ж7}Ϋ*uxF?W.L 륟2쉏[ lu}N*UUf&\TU]6c1777Lu;77GKp ONND"nF-,,z755S&)GˈPJ$`I(Ji>33ý!ݸ~-kE2̵(xSL&;~ľx<>H_ҍx37"Cޙ[qez"Ԁ3o,ȲMw[X?^O!7~εitAgR'um*lTK-EGYE/|~0DX6d:m=EYfGӓNxE&*j$L:gӼ^/UUizzF;r% >==Ɠvyg'=(V޽szEQOMMq W1`Cx \=g4Fވ,Qy:|+._m1/kVQŶ_9@7~a r݃zob֔oʣߌHxJ_\c;vY&/]&,xT^CdoR9oc+; x7~]:/J [/NW#pj"Q5}6TFQ]js5:؉ރ64WzVkH<"h4DN,=5Dv}ٶ `0H֭'ϚieNiΉA<$n'4Z-L ) 1I7t st[k:.ID4d2IN7SI2E#vMUӴ Nj\fggY˜sS?C~ h9nD3:cV3dgD _: 8 Z )#4??/*,=7H40MhvL$Ip8"HQKLAfLȓ$Ricϊ<&GHd,Oje_a̘R 'DKĜF,P($\w8n b&t]t:M J Ȉ+FI(leUU<0/:w[h4BPKѨv׌}4L;uhŘ?nٲ9]Cu¶'I_, 4tu {mֱLtn >Oϼ_+K'JZJ̜Dkșℬ"9"دύeXZY ׵W\^]+OǏ~?잊y];*.g(3?GԷnl5DgH433#nk]6ɖn0ckE QѴ;wnᴃd2Iw:Fl†LHey,NS߬[s|2d79F,R'~WUUinnUr;t]Q'Su]hoөiRUձ422kG'1ey2R*rs%R"u{C@ @ìu[ϳ;Zb_m߾cN|}r$W+IN(*U:*eše^Q70ӶM,mf"gf/_EWB:O<+}]"xw. UZD@yH<{fggұΝ;[26ht^ivL]i׮]EȲ<6444Ifi~~~)>=1FlrRQjJyww86D±'CX,6+ | ֭[fIQG~G'eBmϦaq;{^9u]'EQ:tZAN|}s#:T3fbk^b[L5պRJlycǎ\#I(I wtr)/8ljFrE}VY[ Zw*#*cإ{!bЦ .#N?'g!IT4j7E-t온*Mm4u<:^Q9nI; B^|xhse[yx'8щҭ'dܜ'xrM,k\/D+fݪX㞗n喦P(ԱrH9 dgW IGfWO%h/_CZ=.g%s8WZQuO)6DSDwH {`qRըWbk~ZT^CDw9~-7y`S {-[m+NebKrޖ֐wԘ3ȓ{<t]iD}"M`@u܋QQ ntzBj^4vg8hh4 @["N )eYHDjGK j\takH hᵈn7|i깲#r ckQ(NC7;:rʏ.ge^cZ+Eޕw6]9"_g>CWxle 5Qc˻ό#D>ydY]v9ҕ\.GTscJa+XATH$B4j %G[lX 9^W(zFJ=>V5MNԶYB,^*!:Ec:O:$/~ bGrKǿī]~L`*mYd^Q9!,\ؙ\sQy}ϡa 6hλY_N8ۜfyG D=ג` jk(EQק99);^$V;6 Goژ OДe |> Oxm\Bp4^7iǎhzablΝ,t3AUvԥ2?7eQ@ eM kme{d:ǹ\Lsk0&|}=jkl!B4R$We;bOEt] R4eɮhm>c̼d=EZ % ,Yɩx^WJ{ZJߖvMEf~I|  _:.{B?Y[ $ GGϳ ʻΚ;V<ӄUg۲=kTC|Bqv_?m+ "΢WYuWO%M祠3w5; 龃oŀa AeD"azz)H$x;e0DC+zh`  u]6_ !O_$J)"#](1hQ:& Q+NFs.hÑf7KL&x]1 9:=CDĊkhuCxYFx+(VM[F$+=fĉ3"oȉ68K ^7/@kkx+M}/OU?;FU^/T:M˙\|j˖j@Nx&\cַ| >)eYͼ'YDҲTUpH:(DMNЄh*`2?<<F-K^O4 !!ǩh*m (Vč¢qFB$9rNǥRQY/hSϖ 2#[9W}t^Y-/O%hYQu_~7B^!ΪҴ#V]٧ْm7ryW.]*;f/<=&~"h41u\+EUU}JvDFʧ^-U4wa IX$ EǏSQуBciچ^]\zমK#:ncJ{% EYVZ7πYRmyU:7C +v}K_{g[38o*DU4k>jel;׌3gA<2|>r-lxx7𢓼T*ɍ7dR8uJ42 Фj!J@'ZrF#M0L }1厴v BWKkݺuQ)9ADgϞ؃ͱ,%^^3?z̲ͲTLk4:~-/g2w fx׾ïM-V;nQFDݕ]lyenV榉 Ǣ>-cbNO:ܜ~Վ^=zIL͙LFh,z嘴k'v^)C%ьT*IQ&Xvt9ZkNG5jKɼ7_GkoBl9֭?}]|29ygi,bzȽ7_G^9s~l׏[mЭIq,c;v`nT1TjuR&uەv&ک2 R/4Dk%:Dz@T ֏s=`ԗ@`_/ iK$.cH$x+uJ Үk:"‘YNlFFFȲ,:毕V"4{ inOz1˫rB4T96=J7ҫ_8yVqX6tYZ8_ʒsZ5W?նȼ}]Asa?"x٬zQwV'&/pF^$5piRQΝSd|#F7n߾X,6p8,\IQ @+"o9L!;H$IzuL{!]Ѩ#5:q~e#ToPC3DFEQXXX?<<ܵHu~.P tB8CwC6\.zVs t n# Lfffx0!yg%^6~Ȟ =n7bZV"<`&yGljvUx`9ȋs̱_:@}Yg:*ʖgDGg1◭E IDAT&v FDr9RUռC4Q8nZ\\ZכZ Ipf x~\@eJ&R&iȼ@ @CCC,/kiۓyw}1&jjek6èg<^6vU \[q>yvުUԾ5kݑMgZuMe\Zm:6+pdZ =@кtZXX/vތF-dDk5.bvS<瓓<.N=' Ϡzy;Py+VCk} ,G"GU+ޕ7VND5YڱV6Dޙ3G1`YF4Mnԩ*nV]F!J@f nB&zO۶mcNdzy1>55'''ܜ+ulNKL3O jzͯ5*/ϡo _+X8pOS>z~U2Ԣ4YUWc+VA˒H$B@@LQX,6+ ItO $ hIhǎlzz~y8??I$e=;_?oU^mD͸Euo\q^zW(pE쫌ڜ]wJ;;Nd5h%X2TP+Mހ@.9??!,c:%cD' $hOtڥFj{]z˚'Կ} pe_ef.[~.9*[]#o= Z\mP(Jus*"$E"E ":t~tdi'?I>;;WUUf=O=/FͳS®2<_k xT#tWk.HMyWw"d!2,c"u$$R2d&$VT*ixywV.׌|/|! d/}Q}iL4\'jˊ~J&/lFBt]H-T*%B47NO;v`mkxQ!̌bxVG5&ҝNu./c>456 "JiMD9)jqQzMy`!Ip@UUGD#dYssjnA##܉Ft뭷nww6bx1:s/KA.Av^oT$Z>8Bx4kmn4"6M婽<\J;UUuO7M-yBvXڱc۾};s6󦧧x|)G7xF%[+7Wz/j<Z56_Z]릴+gy`!˲pdX;oD# $,3>JB!ڹs'byLynyxy~:QBM.^;_c)!p皫1C5m'*9#Ι[<2 M EiKl6K 5,cDv-Q>6i4[OH{ 5/DqJUBV]O\T]tJU̙_DX.  ߈%I?7> -` J @dY.ʲ<ϓdhffvgiK%֗{N |^V|+ì_OW" DYΩH=˄@ @7t:~FFvюʑ$mۆ  (M|eì%HTURXWK3 W99WOtF֭-xE6=뿾\->n&}mffwᅫ$^^K'H\6^/JA]V8>fͶQIaǚxE<ۓI(B煪Bp" PP($t$I,9zCܹb1 5ԁtAUw2=mxogklzF-#T]̙lsD"e2jB|> 86IeڶmfmjTjFnv TuF9 ?ne_;DiWtGxnJ? 
"@кhFE"1/v&t/b\C^~EQڹs'۾};eyPe2u/.1^INv_Cw~AN֯]xGju/2yFB~h&CCC]ŷ @$7|a2^x +gғm9< p)2IJB -dYfnW@44p]<\C^b-ݻ36CG%7x6PUNk_T^^)UUUEұU̙=B4jL&Iyh6j\d'"iDD֭ۈk`WR+"OӴw {I\tR5N^mgߏ#￞BWc}+kU5D$I2k4塙=3,~7'!ٙN@$Y/Lr=FH'SklݾDwMD^tR5DN7lɤPav6NEoD | d2ԉB$XlH-P""M6]ef?Z: @o_OW^{xͭrΎjͨȲ,Tt]ߖR)$ Oxۅhw\.GdIHtBۉBOhTh*wСlmxShi낭99'#1eg_F1zW9&gB 47ZVMgi7-R(0@(J/[XX=s  Zmx>m\zl#8![BLX| ^l"Zi!'J.{?~ӸFKUUtZ0#Y:zP(YdL&B4@@(B$&񟎞:DKB@Se8X^y#D9ri7Z, u`UU93F&Br9} 4D$^wnn!ZFMux\$ ˛+Χ+ Z'ʙ3Gg>hS?8C`e~'j,gFxp8,2J6y"7ȁ@@\'D")L,bѫdY@Bx54⢫>O2x<>:==ͧy"KM*Dp\N j|s*j^6x}ͧz9vn *,~ҽ"A4| 9Oggg9">EpKzg6i{nx&L&C쟙ᳳ{~fnԁǟnvV .|ͧٹ;71Lt/UUy*j:Q8 x<>dp,4IUe/vF0hܐޙfiff{J6xT*%^+ʍ_Y:q윦/Vf4W?oM]zyS_Fx@ @`5YHd1O%Ib,{금㣢]DQ6??[8$Iݓv377GEJDhsH"d2fFCiF333|KI B׸umljC:!EcQxgoV\W@Ü-Է|#"e>Fx}٩\~{^ ؖSƦW;DNSSS|~~"=_4</sfiF;𗝒$hgs\uhߪpHD[fy B rgY<ڀo\GJvUUO$"IҘzbNiZs+i{}FggԱӉhNF9MxnN333]Z#Ԕ'&{8oTb(t@5$Id2](Z.Q<DËX@_:.Xz!tꗸOD:-@D"?PdYkenv=L6)#3F['P󣅅,V!xWFZ.Jmduatm!llt\P/Nx Nf˧jj+V=1lURBeϻVZ߮' ݞN{w͛MIp8ωm:>555NO$MHv:%]$ɑݻwiלFV;GK=ƗN 3k=AOF'ڸmO[nu|We,Ѯ]"ebͭ~``w{rNizz̴ћfqMUi AUUǻԗLxw+2OuSSS-띪pեgw'cW@M^<2ѻ{9 :@ IAjUرVSJ\4d2 Usss444ڝҬ:-,,(K~blݎM}0h4:Q R*"EQMBbTeppOR0T!JRyd(B,(tTUuUqXK Bto<D//޿]WN_tZk7,8z*ɲI6B!h(D4Z\\$M(Ns*044D#t.+^Kik֭(Iҁ@ PzfIӴ4mS;ʲ<"}MNo$iL _~?ر9_'Ѻu6|)Ns"v7|>ںuN/xu@HgS Zs|gbcqw5IKliD@ę C| tGu'3e":AI]'Vg9ɘ\v"רuõn8E$Έ)SZEZ *K /M!k\.:z(3::Z ;wB`aaA.sY"rl~Nݿ?AձRοGzxvvvݿ>w9uQx45;;;l,ґ#GzkݧըEB<6Ȧg"7/<N!mAm_lu/IYѣGk2Ck}MXLpA_kϞ=-pzzˆ#sNU7sN3Lϰ6kC!TP?`VظĻ5YY9I\.z饗jj zX9R7K/1<XcV՘~?KL{{{֑[ɰ!^S.v:|r!\.?Ծ7(ĵx UKmQMOn(u7Y\Wkk+ 1ޛ=d*N8tvvrݚiѣGzʫڇѣGZC\M~6w?\M޳=3!2]`AoՏ]bi9-9@;j&p8/8uhϞ=L,᪅XG([z; jX)U_/~}.Tg/uT##~X*ndizzڒl˥m7Y&N3m1xUmzR60|FmǝFJ. {ًMOOjV~s\444ٳgmL}}}wttTO'A핃+Z#S}}}#vd:L?QGGS۪ceY#_a^kk+߿zbzgA^b>": ua۟WNPkss3Wŵ8qٷoy*y ,K~N89rvڼ]eDaB죓kC1lb0 :wzTEJߨ3ݶi{.={诖v"ezu 8qB^e@V inn={ێ_~eͧv:z%ѥ%*5}=PUGVj蜂:WjyZXX89???]yt^쬊+YY:w?.ԹWUp)x8K=Ļ\K -b*beY۳gsX,f밦EZZZ:4lEyss3\.`:::L!^1^9^jsE/;9 DRsW\ ƕ1yʤ6_m`ӳ?b -Qy&yؐp<-}9Vz AյEww7zZ85w> A5 v}̌[7*t"khN_FunuufgguMQ` v y"++~-ZѰF`73_~!V3#khN8ԻSXN˲,5HA9Y{ύֆ~ nf-_~!`FgPY5mfFeC]uA >i].~ 5i UWAP9R'nfWyhDݹsG\.&LKKKP P-- RAycֆ~<6c uV?yM\68й?Nx={VYߏ-R@@%4}~_! Z< ,_~AiY9B83ئ'䘝5Yss3uwwꪑ9P`;A1fӳ {w_ iye|x93|֔6OӮ{x9{g׺\. 
XiBS6؋+j})n0?7dN W!6OΧ8q;* Ko/<""%Xu͇*Z>h ;Ϭy8v%:<I@]}\}XAᡷ&TDC5} .|_ o?=)K(3 {%Xu5"Y1\BB`̃ݛUbv Ov[/< OS]۱h}a*?t0-pܽݫOyx2P p6=Pܫ׶8Adzzܹ}}} J~5].ViΆ#DDkC?ַrDk wLS8RJP 13&)9h"hPD~!z$2dY۳gs5=ΙHDݍs0TLS5xmACn#P q@\(0+:-RXQyZC6+wyh o||\Uݹs~ 򖖖t/z"ϡ@%5iM ibpOf3_'ݦ4 υt=uGex~y'Qx,--Fs/YM }1*/M_s: Gôlő 5'7?9%ԕe74n/#xdiiiqqb`N8^U2 ;cZPrQGG^u O]]]/]Tr O6/Lؓu/j|AԄ_ogurtfxxPou+++DDtNZuuu9zUryJu[Tkkȝ;wvGrJ~`;u ̜|I($r*R娯Ԝxz ا)ih܎ʆ<;==Qza >o_x{uw*e+9 ȃi!jZWEfqqy睊]h NNƱՠC<""!kCa}A q;⽛Uv_BkRG(c4}5}G$yI/ r0{Aؒ~V> 5!z/[~P iUZ!zAE7hZ5Gt5V_WWWݧFZ[[QX]]j @@a7,*3VA{ܱ-v%\Տ~,e%J  0?ò\htvvr~t@4;;[V/E~Dx5!^cHgTdJu-U5CV@O< :ΌOIί#'͌y#]m*U6(_Z+ <0nfgO}$Z2BtZz;'3+(ʴڢ]m'4u~ x39 <> 3tgNc1-[F & 6]{ɟ*!AACl;|vy֭/w3&y#"Za!mxD=ڊ[h<-=c{@t y"d29_B k\3̣(HLucRM("yv}W1>UkC>~=꘸~n]y1 GSb4nWXQc44nGWd2I ;,9Nx<(P5Z8ueeEqDe♌ 4,݊lݨ#ߠ>EbTzx!wY<ϲm"eQ` -Ϳ̚# ôl-L[}\ŶotZÄu&7E9:|^0 ;od AW&MO#j όۄ A^oЭ⃜Ob OB[[=Cb<?a$`7L8#mC@-prn6[ ŵ[٭f-nqG7(ȠkI= {gZszuP8M:G"kxh4E"s^!+wgO__x+]y9=_T1ͺpn}3+"csGW<r$iA(NR_4%áPh .m*J i-0{i|ȳ _'$N8Z.\DxT\x#bX%C`cǘ`0x$q @Q*BET1ed2o B<0-[yZBvu_Xqy5B xJEυ DWӅo!& qrrR8 P7h vl ΅!JNv}_ <xanv5ym3<)3/y~G:L&7,q)Bhˠ$P Ͳ,,K@`w @^p.bRwTwa327~%'ilGWIChXԢؑeы'嚝ϡȅx*"yhJv+e ΅ =*iJS6\藗[Bn?Jx*|?w!j"(ƻr@WA-U'mn!ȫ"gݹ>v = +o[ ¾ΏgѿnJsw/D;@fcYNgL&!ijyJ|I\E}؇'Q#xŞSA<(ñ,߶!ij3-?HW-]$umhtA5QU*+Yq;3auZaÁ"0pKxֆ~]+q2OZaV԰Oq;=4OAt:MnX-{d2n-?d빏y<6""MǒzH$®t:K1p:sNšzJ(L^/u6fB0'"ဘPWW@ooY!OR"c}NM===LWW}w7 &V6F$I,-sI===cmnie[N433sxnnn\1,UR|x<.JD"ћT*ESSS t:{SvS^xZ͕j늟N3:yJ|/Ϸc8J$"Q0dbY+9Ǐ3j#"Jع_Siݮo2oԶU:\h;5;Lj Q 6`N+rB[_~HD$G^y7ޥ7ϗZ+}Zq;*,;#i@. u4;fͶ47qZE-~8 Eev!*t+'wx^ [=p8&rrW:eu)0қecR@";^esG*Jf%MIDATgre%"cn?p:WDQQ -/D"!z<=8#6,m's|jn,^_vM1hѨhƐZ8N}.T{xv-o3!WbAٽ: N-aNnIQHDCza/M&ߠE]ހ4H׈9SQx< 7H$7b 0\ElWW@PILnAE֛+`Xg2D"wQJd_[t+6U"j$qf$q17DtJ{ n3 R)AL& Tc,g[ |===w& ̓%{cǘ"7W3r{9x4nf'IR aE'~㸼R  OXQԉυuҹD*o[h~"/4kT*uX^V{nrB<0%VxFjymwvVZtL)2xzY!i.*M^` ^KAћ;dy=̞7vU8N:tP<ϋ>dp#J>xًt|L4nP <  B%1?pc vh;vrd2,KP5P(4z'&''7L8+ǹy>uӖ08 MeTqr0'^h9?|iZpMcZT'*LfyaV~T U~C5z+?cjdJUz{{ ===y7T꺑ϒwj_aYt!/ &C239pHK,ZCb&V6&-aZP(n+jO5ǿ]<8p \oȕ^L&Ǔdz_~^fY:VB$)zV |u9eP'vd2\*o@ 7 fPeAxu !^iF0dFcꙧEZݗT,+ŊiJT&''/yu"}b^|>߀rK@`|{89#:TA8Q)[^WqHc2|^uR]ut tw *SJ WڇUiJ:)_GytS,1ij녗"|.t<={YhE϶9@[ܣ@t:#j#ŽR7n[Uoכ7cfffW0^>)ἕ`˲z<=,T*2y߀х?z{{ܹzxza<īBRZ"k<тo'Z{nxԶv7w3%Kk =9@[xh&4dDj<+Ms^z[og5ON mi2z> PXԄ =2Y k8M~+ |ee*,^7}qmSil޼zDDSSSʇcj<>AP:ܹS=wݡNT'6k#&~PC/̉WZF(~R}}^xs,m}eTtR^AXq3mvx`˲y=CT^p8&zIإ2ιJ W*YVSڬ9SrLT*r<ϲ|D"!Pooo&۶c '|~h4Ѕ]CǓyW=IDmhw7˲Wڿg8-~ B1ȫx&“[zN qi]:X(mn{f\@-G"|gffg(z;ݸ @ѡCڢѨ(d2D"A>oj 'NeonAa[+oZ E1iyjuRF1ī1N2:x*r6q37mF){EC0NzM?77Wu9+̡@+D'PJޯ6LwU{EpfYȝ+NoHޗo鹊҄Jr: &̈NΛ2Z{#G2BCVEn׵L>33cJ (% H$B!LUƂ YBZIy7_ѰvKYkMgb%+To)yLv_mP Sa'L{cT,˞C)<* i#=^)J 4H:ڎ`UtZqL&盚zXL&=τEei"`GN[o83LVơD"_XDe+f= >!,U*JmG)>=$PinuP488;v  @GvwJmjpp兰Jl9 ڬՈuRkv4u fۆu㪊65O| ]X򯬬PuވǏW\#;T0NډWqHq\b(b^oL={=$SP' )B:E3^ L텗}KDWdxPǼ^oޯ!LO9}1_ݐJ7zѡT&t'^^7Ϙ P:(`N꽼WVVdՑA3whxcb K\w*f|CL&CM[) dPbÍpWW@{ؑݝoAC# &,`1'C۱/#=RqwE= mz)?orrRİC<OLVa#{ !^bZ#F63qF犴2MSjHCj1R>M[☆MOQz&]LCdVkF?vǰahL.na@| 455K~5>=0uR%=PNQ2`{A})1yOoA)MO3rȭ1Qi9^oޅc*d2yEv\yت}9xaWtA'ZOh"566&&]F3WC۱-T&<2t:o uw ˲ c B.R0 j,@ Nt.#?2<xuawF!3k *nFӪ,ܫFz>0Ǔw3H$&uo3 ܸW}9杖 <r= M&qܕߕh4jLz جn;vc-+>L^[ P4p7PH'J+'  ԳH}= /쮜 ubuҜfK8W^yE_^oF"q%r @dDbDD`aŎl)FddR)Fhme۱3r hSj36&5Fb{PR:.^o,.P(Ԧv$8N7xCL&%{O:PFVZ1{Ck{6EYNRaM,.|6׊P(CA(HDn7y<,^]_WyS?ZBZ9D"R켃tp8佺vt:M%ge۱mcdhrrRdYny~\AxX{LMMd,Kv8pPdrLq`D+++yǃfϡyC:~ڵkavS !zL:pEQQ:&ǣx7O:/^K<=N(* tϏ"\A^zxWQ?.Q\a~|鹪4\ =@ |d/wfffzK?H$"uR:BiUY`Y S;u01NG(8WtA |7D~ߣ6t4jizLU{ExPQJ7,˞v::)i`0xU>iܵk״euZ|ppWeСCeA& <ϲcU۱{}mDDHh@ {ppaYV9W86 z uPh" j^7D3?sU͏Zzi>k΅@vw|}(on:0-[ <|zta۫GiOv*Ŷvp8:ˢBVNG3}P'֜ ՜eؖ? 
7YP*.[xBTm^Qev5qI1O4nEAF7,,?~@56ʦgY/sBn_wv5x|;̦otп"_0ψߦv "0I7ӆ]]]!vʹdP'EZHa߾|oO]HvS>EmӸh_?M/h7k1@x`]_]_MbU*lz=׈D"|[ 4h;UMoG`:}TJ¹<rm\.8N:x c48n yD> (d24997 zX-JqW6Ft:{dr|~6APtD@`7zXMb 466&ri? P"vE1(( /gffv)v)0P69Na^/Z8&Nq>XA ϠN@?xeq<={1C"ZD" &@t:MhT,$g9<(|S !@$IR iJ{zn Bh0V(]zc0@;p488dA(e)0>NQ'`()*z1׏ɤvJT*uLIv6E0B<2y4=tjm 䭚* DE@y!!9x6B<C`s?ॊIENDB`Pygments-2.3.1/doc/_static/favicon.ico0000644000175000017500000004107613376260540016731 0ustar piotrpiotr@@ (B(@      $6FMMMF5""5FMMMF6$  2OmiH*)HimP2  :`58(t :wJJw!&hv8A a:  <fHDFEEA .jj Vdg<  <g#[KIHGFDB ~~h=  <g VLJIHGFED)~~k}h>  <h/uMJKIIHGEE ~~  kC& =h:NMLKJJHGE;#{z q uO.  >h4NONMLKJIH=b_X3 &Dl:QQOOMMLKI8 rA;f^8  /OuL"RRQPOOMML-yK%<g w  d;  7\ CSSRRQPOON9sK) <gizg<  ;d FVUTTSRRPP,qjB% <g drh=  <g Y"VVVVUTTRS&gh= <giuh=  <g LXXXWVUUTU2wg<  <gGQ -0h=  <g"TZZYYXXWWU Fg<  <gBJ*-h=  <g /^[[ZZZYYXW!Og<  <gAFh=  <g.V\\\[[[ZZY%Og<  <g 7<FKh=  <g%F[^^]]]]\\U&g<  <g" >Ah=  :f%L`_^^^^^]]Y 7g<  <g ;>h< 2`% U__````__^S +g<  <f (+QVf:#O0f`aaaa````S g<  :aY]^/4j/^ecbbbbbaaW g<  4V aeE?{ heeedddddH h<  -R"$Q?{5^ghhhhhhh^m> 1]HJQ?{5^jkkkkkkkd [3  >l ILQ?{q mnnnnoopd\8  <h  Q4j1]rpqqqrrrrm  d<  <gSUE#N8etttuuvvvvdg<  <g 00EG^/2`:wwxxyyz{{u h=  <g++JKf: :f+Nyz{{}}}! +h=  <g'&LLg<  <g.M }~{'h=  <gON**g<  <g ,}!/h=  <g;9ƿ **g<  <g5{ 0Fh=  <g E@ ʼʾ''g<  <g&# $0Ch=  <hh]Dzɷ ɹ ʼʾg<  <g !  !"%'/@h=  =hOCƪǭDZȴ ɷ ɺ ʻʾ c;  <f#x !"$%'*,DUh=  ?icO#ğĤƨƬǯȲȶ ɷ ʺ\7  ;c |"!#%&(*,/2Z)ih= "Bl){_/)™%ß ģŧƫƮDZȳ~U2  8^}%%')+-/257S"\h< Bo&mM93.)%Þ"ġťƩƬrK,  5] n!)+-/257;=Q Tf:>l 2|SC<83.*&Ý"áĤ{jjA$ 5\h-0257:>AC@~^1;gF_MwG}B<83.*&Üydh= 1Qv u)257:=@CFHm5fI] HWToPuJyE~@;73/#s^g<  'Cjn/69\fP]\ [-F\ \fI\fP[:\fI\fP]]\ [-f\ \fI\fP] .RI [-O\ \fI\fP]\ [-P\ \fI\fP]\ [-o\ \fI\fP]\ [\fI\fP] .br .B \fBpygmentize\fP .RI -S\ \fI {%- endif %} {% endblock %} {% block header %}
{% endblock %} {% block footer %}
{# closes "outerwrapper" div #} {% endblock %} {% block sidebarrel %} {% endblock %} {% block sidebarsourcelink %} {% endblock %} Pygments-2.3.1/doc/_themes/pygments14/theme.conf0000644000175000017500000000040113376260540020555 0ustar piotrpiotr[theme] inherit = basic stylesheet = pygments14.css pygments_style = friendly [options] green = #66b55e darkgreen = #36852e darkgray = #666666 border = #66b55e yellow = #f4cd00 darkyellow = #d4ad00 lightyellow = #fffbe3 background = #f9f9f9 font = PT Sans Pygments-2.3.1/scripts/0000755000175000017500000000000013405476653014103 5ustar piotrpiotrPygments-2.3.1/scripts/detect_missing_analyse_text.py0000644000175000017500000000170313376260540022230 0ustar piotrpiotrfrom __future__ import print_function import sys from pygments.lexers import get_all_lexers, find_lexer_class from pygments.lexer import Lexer def main(): uses = {} for name, aliases, filenames, mimetypes in get_all_lexers(): cls = find_lexer_class(name) if not cls.aliases: print(cls, "has no aliases") for f in filenames: if f not in uses: uses[f] = [] uses[f].append(cls) ret = 0 for k, v in uses.items(): if len(v) > 1: #print "Multiple for", k, v for i in v: if i.analyse_text is None: print(i, "has a None analyse_text") ret |= 1 elif Lexer.analyse_text.__doc__ == i.analyse_text.__doc__: print(i, "needs analyse_text, multiple lexers for", k) ret |= 2 return ret if __name__ == '__main__': sys.exit(main()) Pygments-2.3.1/scripts/find_error.py0000755000175000017500000002107513376260540016607 0ustar piotrpiotr#!/usr/bin/python # -*- coding: utf-8 -*- """ Lexing error finder ~~~~~~~~~~~~~~~~~~~ For the source files given on the command line, display the text where Error tokens are being generated, along with some context. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import os import sys # always prefer Pygments from source if exists srcpath = os.path.join(os.path.dirname(__file__), '..') if os.path.isdir(os.path.join(srcpath, 'pygments')): sys.path.insert(0, srcpath) from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \ ProfilingRegexLexer, ProfilingRegexLexerMeta from pygments.lexers import get_lexer_by_name, find_lexer_class, \ find_lexer_class_for_filename from pygments.token import Error, Text, _TokenType from pygments.cmdline import _parse_options class DebuggingRegexLexer(ExtendedRegexLexer): """Make the state stack, position and current match instance attributes.""" def get_tokens_unprocessed(self, text, stack=('root',)): """ Split ``text`` into (tokentype, text) pairs. ``stack`` is the inital stack (default: ``['root']``) """ tokendefs = self._tokens self.ctx = ctx = LexerContext(text, 0) ctx.stack = list(stack) statetokens = tokendefs[ctx.stack[-1]] while 1: for rexmatch, action, new_state in statetokens: self.m = m = rexmatch(text, ctx.pos, ctx.end) if m: if action is not None: if type(action) is _TokenType: yield ctx.pos, action, m.group() ctx.pos = m.end() else: if not isinstance(self, ExtendedRegexLexer): for item in action(self, m): yield item ctx.pos = m.end() else: for item in action(self, m, ctx): yield item if not new_state: # altered the state stack? 
statetokens = tokendefs[ctx.stack[-1]] if new_state is not None: # state transition if isinstance(new_state, tuple): for state in new_state: if state == '#pop': ctx.stack.pop() elif state == '#push': ctx.stack.append(ctx.stack[-1]) else: ctx.stack.append(state) elif isinstance(new_state, int): # pop del ctx.stack[new_state:] elif new_state == '#push': ctx.stack.append(ctx.stack[-1]) else: assert False, 'wrong state def: %r' % new_state statetokens = tokendefs[ctx.stack[-1]] break else: try: if ctx.pos >= ctx.end: break if text[ctx.pos] == '\n': # at EOL, reset state to 'root' ctx.stack = ['root'] statetokens = tokendefs['root'] yield ctx.pos, Text, u'\n' ctx.pos += 1 continue yield ctx.pos, Error, text[ctx.pos] ctx.pos += 1 except IndexError: break def main(fn, lexer=None, options={}): if lexer is not None: lxcls = get_lexer_by_name(lexer).__class__ else: lxcls = find_lexer_class_for_filename(os.path.basename(fn)) if lxcls is None: name, rest = fn.split('_', 1) lxcls = find_lexer_class(name) if lxcls is None: raise AssertionError('no lexer found for file %r' % fn) print('Using lexer: %s (%s.%s)' % (lxcls.name, lxcls.__module__, lxcls.__name__)) debug_lexer = False # if profile: # # does not work for e.g. ExtendedRegexLexers # if lxcls.__bases__ == (RegexLexer,): # # yes we can! (change the metaclass) # lxcls.__class__ = ProfilingRegexLexerMeta # lxcls.__bases__ = (ProfilingRegexLexer,) # lxcls._prof_sort_index = profsort # else: # if lxcls.__bases__ == (RegexLexer,): # lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True # elif lxcls.__bases__ == (DebuggingRegexLexer,): # # already debugged before # debug_lexer = True # else: # # HACK: ExtendedRegexLexer subclasses will only partially work here. # lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True lx = lxcls(**options) lno = 1 if fn == '-': text = sys.stdin.read() else: with open(fn, 'rb') as fp: text = fp.read().decode('utf-8') text = text.strip('\n') + '\n' tokens = [] states = [] def show_token(tok, state): reprs = list(map(repr, tok)) print(' ' + reprs[1] + ' ' + ' ' * (29-len(reprs[1])) + reprs[0], end=' ') if debug_lexer: print(' ' + ' ' * (29-len(reprs[0])) + ' : '.join(state) if state else '', end=' ') print() for type, val in lx.get_tokens(text): lno += val.count('\n') if type == Error and not ignerror: print('Error parsing', fn, 'on line', lno) if not showall: print('Previous tokens' + (debug_lexer and ' and states' or '') + ':') for i in range(max(len(tokens) - num, 0), len(tokens)): if debug_lexer: show_token(tokens[i], states[i]) else: show_token(tokens[i], None) print('Error token:') l = len(repr(val)) print(' ' + repr(val), end=' ') if debug_lexer and hasattr(lx, 'ctx'): print(' ' * (60-l) + ' : '.join(lx.ctx.stack), end=' ') print() print() return 1 tokens.append((type, val)) if debug_lexer: if hasattr(lx, 'ctx'): states.append(lx.ctx.stack[:]) else: states.append(None) if showall: show_token((type, val), states[-1] if debug_lexer else None) return 0 def print_help(): print('''\ Pygments development helper to quickly debug lexers. scripts/debug_lexer.py [options] file ... Give one or more filenames to lex them and display possible error tokens and/or profiling info. Files are assumed to be encoded in UTF-8. 
Selecting lexer and options: -l NAME use lexer named NAME (default is to guess from the given filenames) -O OPTIONSTR use lexer options parsed from OPTIONSTR Debugging lexing errors: -n N show the last N tokens on error -a always show all lexed tokens (default is only to show them when an error occurs) -e do not stop on error tokens Profiling: -p use the ProfilingRegexLexer to profile regexes instead of the debugging lexer -s N sort profiling output by column N (default is column 4, the time per call) ''') num = 10 showall = False ignerror = False lexer = None options = {} profile = False profsort = 4 if __name__ == '__main__': import getopt opts, args = getopt.getopt(sys.argv[1:], 'n:l:aepO:s:h') for opt, val in opts: if opt == '-n': num = int(val) elif opt == '-a': showall = True elif opt == '-e': ignerror = True elif opt == '-l': lexer = val elif opt == '-p': profile = True elif opt == '-s': profsort = int(val) elif opt == '-O': options = _parse_options([val]) elif opt == '-h': print_help() sys.exit(0) ret = 0 if not args: print_help() for f in args: ret += main(f, lexer, options) sys.exit(bool(ret)) Pygments-2.3.1/scripts/get_vimkw.py0000644000175000017500000000421513376260540016444 0ustar piotrpiotrfrom __future__ import print_function import re from pygments.util import format_lines r_line = re.compile(r"^(syn keyword vimCommand contained|syn keyword vimOption " r"contained|syn keyword vimAutoEvent contained)\s+(.*)") r_item = re.compile(r"(\w+)(?:\[(\w+)\])?") HEADER = '''\ # -*- coding: utf-8 -*- """ pygments.lexers._vim_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This file is autogenerated by scripts/get_vimkw.py :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # Split up in multiple functions so it's importable by jython, which has a # per-method size limit. ''' METHOD = '''\ def _get%(key)s(): %(body)s return var %(key)s = _get%(key)s() ''' def getkw(input, output): out = file(output, 'w') # Copy template from an existing file. print(HEADER, file=out) output_info = {'command': [], 'option': [], 'auto': []} for line in file(input): m = r_line.match(line) if m: # Decide which output gets mapped to d if 'vimCommand' in m.group(1): d = output_info['command'] elif 'AutoEvent' in m.group(1): d = output_info['auto'] else: d = output_info['option'] # Extract all the shortened versions for i in r_item.finditer(m.group(2)): d.append('(%r,%r)' % (i.group(1), "%s%s" % (i.group(1), i.group(2) or ''))) output_info['option'].append("('nnoremap','nnoremap')") output_info['option'].append("('inoremap','inoremap')") output_info['option'].append("('vnoremap','vnoremap')") for key, keywordlist in output_info.items(): keywordlist.sort() body = format_lines('var', keywordlist, raw=True, indent_level=1) print(METHOD % locals(), file=out) def is_keyword(w, keywords): for i in range(len(w), 0, -1): if w[:i] in keywords: return keywords[w[:i]][:len(w)] == w return False if __name__ == "__main__": getkw("/usr/share/vim/vim74/syntax/vim.vim", "pygments/lexers/_vim_builtins.py") Pygments-2.3.1/scripts/check_sources.py0000755000175000017500000001411313376305454017275 0ustar piotrpiotr#!/usr/bin/env python # -*- coding: utf-8 -*- """ Checker for file headers ~~~~~~~~~~~~~~~~~~~~~~~~ Make sure each Python file has a correct file header including copyright and license information. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" from __future__ import print_function import io import os import re import sys import getopt from os.path import join, splitext, abspath checkers = {} def checker(*suffixes, **kwds): only_pkg = kwds.pop('only_pkg', False) def deco(func): for suffix in suffixes: checkers.setdefault(suffix, []).append(func) func.only_pkg = only_pkg return func return deco name_mail_re = r'[\w ]+(<.*?>)?' copyright_re = re.compile(r'^ :copyright: Copyright 2006-2017 by ' r'the Pygments team, see AUTHORS\.$', re.UNICODE) copyright_2_re = re.compile(r'^ %s(, %s)*[,.]$' % (name_mail_re, name_mail_re), re.UNICODE) is_const_re = re.compile(r'if.*?==\s+(None|False|True)\b') misspellings = ["developement", "adress", "verificate", # ALLOW-MISSPELLING "informations", "unlexer"] # ALLOW-MISSPELLING @checker('.py') def check_syntax(fn, lines): if '#!/' in lines[0]: lines = lines[1:] if 'coding:' in lines[0]: lines = lines[1:] try: compile('\n'.join(lines), fn, "exec") except SyntaxError as err: yield 0, "not compilable: %s" % err @checker('.py') def check_style_and_encoding(fn, lines): for lno, line in enumerate(lines): if len(line) > 110: yield lno+1, "line too long" if is_const_re.search(line): yield lno+1, 'using == None/True/False' @checker('.py', only_pkg=True) def check_fileheader(fn, lines): # line number correction c = 1 if lines[0:1] == ['#!/usr/bin/env python']: lines = lines[1:] c = 2 llist = [] docopen = False for lno, l in enumerate(lines): llist.append(l) if lno == 0: if l != '# -*- coding: utf-8 -*-': yield 1, "missing coding declaration" elif lno == 1: if l != '"""' and l != 'r"""': yield 2, 'missing docstring begin (""")' else: docopen = True elif docopen: if l == '"""': # end of docstring if lno <= 4: yield lno+c, "missing module name in docstring" break if l != "" and l[:4] != ' ' and docopen: yield lno+c, "missing correct docstring indentation" if lno == 2: # if not in package, don't check the module name modname = fn[:-3].replace('/', '.').replace('.__init__', '') while modname: if l.lower()[4:] == modname: break modname = '.'.join(modname.split('.')[1:]) else: yield 3, "wrong module name in docstring heading" modnamelen = len(l.strip()) elif lno == 3: if l.strip() != modnamelen * "~": yield 4, "wrong module name underline, should be ~~~...~" else: yield 0, "missing end and/or start of docstring..." # check for copyright and license fields license = llist[-2:-1] if license != [" :license: BSD, see LICENSE for details."]: yield 0, "no correct license info" ci = -3 copyright = llist[ci:ci+1] while copyright and copyright_2_re.match(copyright[0]): ci -= 1 copyright = llist[ci:ci+1] if not copyright or not copyright_re.match(copyright[0]): yield 0, "no correct copyright info" def main(argv): try: gopts, args = getopt.getopt(argv[1:], "vi:") except getopt.GetoptError: print("Usage: %s [-v] [-i ignorepath]* [path]" % argv[0]) return 2 opts = {} for opt, val in gopts: if opt == '-i': val = abspath(val) opts.setdefault(opt, []).append(val) if len(args) == 0: path = '.' elif len(args) == 1: path = args[0] else: print("Usage: %s [-v] [-i ignorepath]* [path]" % argv[0]) return 2 verbose = '-v' in opts num = 0 out = io.StringIO() # TODO: replace os.walk run with iteration over output of # `svn list -R`. 
for root, dirs, files in os.walk(path): if '.hg' in dirs: dirs.remove('.hg') if 'examplefiles' in dirs: dirs.remove('examplefiles') if '-i' in opts and abspath(root) in opts['-i']: del dirs[:] continue # XXX: awkward: for the Makefile call: don't check non-package # files for file headers in_pygments_pkg = root.startswith('./pygments') for fn in files: fn = join(root, fn) if fn[:2] == './': fn = fn[2:] if '-i' in opts and abspath(fn) in opts['-i']: continue ext = splitext(fn)[1] checkerlist = checkers.get(ext, None) if not checkerlist: continue if verbose: print("Checking %s..." % fn) try: lines = open(fn, 'rb').read().decode('utf-8').splitlines() except (IOError, OSError) as err: print("%s: cannot open: %s" % (fn, err)) num += 1 continue for checker in checkerlist: if not in_pygments_pkg and checker.only_pkg: continue for lno, msg in checker(fn, lines): print(u"%s:%d: %s" % (fn, lno, msg), file=out) num += 1 if verbose: print() if num == 0: print("No errors found.") else: print(out.getvalue().rstrip('\n')) print("%d error%s found." % (num, num > 1 and "s" or "")) return int(num > 0) if __name__ == '__main__': sys.exit(main(sys.argv)) Pygments-2.3.1/scripts/release-checklist0000644000175000017500000000151513376476371017423 0ustar piotrpiotrRelease checklist ================= * Check ``hg status`` * ``make check`` * LATER when configured properly: ``make pylint`` * ``tox`` * Update version info in ``setup.py/__init__.py`` * Check setup.py metadata: long description, trove classifiers * Update release date/code name in ``CHANGES`` * ``hg commit`` * ``make clean`` * ``python2 setup.py release bdist_wheel`` * ``python3 setup.py release bdist_wheel sdist`` * ``twine upload dist/Pygments-$NEWVER*`` * Check PyPI release page for obvious errors * ``hg tag`` * Merge default into stable if this was a ``x.y.0`` * Update homepage (release info), regenerate docs (+printable!) * Add new version/milestone to tracker categories * Write announcement and send to mailing list/python-announce * Update version info, add new ``CHANGES`` entry for next version * ``hg commit`` * ``hg push`` Pygments-2.3.1/scripts/vim2pygments.py0000755000175000017500000006331413376260540017124 0ustar piotrpiotr#!/usr/bin/env python # -*- coding: utf-8 -*- """ Vim Colorscheme Converter ~~~~~~~~~~~~~~~~~~~~~~~~~ This script converts vim colorscheme files to valid pygments style classes meant for putting into modules. :copyright 2006 by Armin Ronacher. :license: BSD, see LICENSE for details. """ from __future__ import print_function import sys import re from os import path from io import StringIO split_re = re.compile(r'(? 
2 and \ len(parts[0]) >= 2 and \ 'highlight'.startswith(parts[0]): token = parts[1].lower() if token not in TOKENS: continue for item in parts[2:]: p = item.split('=', 1) if not len(p) == 2: continue key, value = p if key in ('ctermfg', 'guifg'): color = get_vim_color(value) if color: set('color', color) elif key in ('ctermbg', 'guibg'): color = get_vim_color(value) if color: set('bgcolor', color) elif key in ('term', 'cterm', 'gui'): items = value.split(',') for item in items: item = item.lower() if item == 'none': set('noinherit', True) elif item == 'bold': set('bold', True) elif item == 'underline': set('underline', True) elif item == 'italic': set('italic', True) if bg_color is not None and not colors['Normal'].get('bgcolor'): colors['Normal']['bgcolor'] = bg_color color_map = {} for token, styles in colors.items(): if token in TOKENS: tmp = [] if styles.get('noinherit'): tmp.append('noinherit') if 'color' in styles: tmp.append(styles['color']) if 'bgcolor' in styles: tmp.append('bg:' + styles['bgcolor']) if styles.get('bold'): tmp.append('bold') if styles.get('italic'): tmp.append('italic') if styles.get('underline'): tmp.append('underline') tokens = TOKENS[token] if not isinstance(tokens, tuple): tokens = (tokens,) for token in tokens: color_map[token] = ' '.join(tmp) default_token = color_map.pop('') return default_token, color_map class StyleWriter(object): def __init__(self, code, name): self.code = code self.name = name.lower() def write_header(self, out): out.write('# -*- coding: utf-8 -*-\n"""\n') out.write(' %s Colorscheme\n' % self.name.title()) out.write(' %s\n\n' % ('~' * (len(self.name) + 12))) out.write(' Converted by %s\n' % SCRIPT_NAME) out.write('"""\nfrom pygments.style import Style\n') out.write('from pygments.token import Token, %s\n\n' % ', '.join(TOKEN_TYPES)) out.write('class %sStyle(Style):\n\n' % self.name.title()) def write(self, out): self.write_header(out) default_token, tokens = find_colors(self.code) tokens = list(tokens.items()) tokens.sort(lambda a, b: cmp(len(a[0]), len(a[1]))) bg_color = [x[3:] for x in default_token.split() if x.startswith('bg:')] if bg_color: out.write(' background_color = %r\n' % bg_color[0]) out.write(' styles = {\n') out.write(' %-20s%r,\n' % ('Token:', default_token)) for token, definition in tokens: if definition: out.write(' %-20s%r,\n' % (token + ':', definition)) out.write(' }') def __repr__(self): out = StringIO() self.write_style(out) return out.getvalue() def convert(filename, stream=None): name = path.basename(filename) if name.endswith('.vim'): name = name[:-4] f = file(filename) code = f.read() f.close() writer = StyleWriter(code, name) if stream is not None: out = stream else: out = StringIO() writer.write(out) if stream is None: return out.getvalue() def main(): if len(sys.argv) != 2 or sys.argv[1] in ('-h', '--help'): print('Usage: %s ' % sys.argv[0]) return 2 if sys.argv[1] in ('-v', '--version'): print('%s %s' % (SCRIPT_NAME, SCRIPT_VERSION)) return filename = sys.argv[1] if not (path.exists(filename) and path.isfile(filename)): print('Error: %s not found' % filename) return 1 convert(filename, sys.stdout) sys.stdout.write('\n') if __name__ == '__main__': sys.exit(main() or 0) Pygments-2.3.1/scripts/pylintrc0000644000175000017500000002114713376260540015670 0ustar piotrpiotr# lint Python modules using external checkers. # # This is the main checker controling the other ones and the reports # generation. 
It is itself both a raw checker and an astng checker in order # to: # * handle message activation / deactivation at the module level # * handle some basic but necessary stats'data (number of classes, methods...) # [MASTER] # Specify a configuration file. #rcfile= # Profiled execution. profile=no # Add to the black list. It should be a base name, not a # path. You may set this option multiple times. ignore=.svn # Pickle collected data for later comparisons. persistent=yes # Set the cache size for astng objects. cache-size=500 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable only checker(s) with the given id(s). This option conflict with the # disable-checker option #enable-checker= # Enable all checker(s) except those with the given id(s). This option conflict # with the disable-checker option #disable-checker= # Enable all messages in the listed categories. #enable-msg-cat= # Disable all messages in the listed categories. #disable-msg-cat= # Enable the message(s) with the given id(s). #enable-msg= # Disable the message(s) with the given id(s). disable-msg=C0323,W0142,C0301,C0103,C0111,E0213,C0302,C0203,W0703,R0201 [REPORTS] # set the output format. Available formats are text, parseable, colorized and # html output-format=colorized # Include message's id in output include-ids=yes # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells wether to display a full report or only the messages reports=yes # Python expression which should return a note less than 10 (10 is the highest # note).You have access to the variables errors warning, statement which # respectivly contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (R0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (R0004). comment=no # Enable the report(s) with the given id(s). #enable-report= # Disable the report(s) with the given id(s). #disable-report= # checks for # * unused variables / imports # * undefined variables # * redefinition of variable from builtins or from an outer scope # * use of variable before assigment # [VARIABLES] # Tells wether we should check for unused import in __init__ files. init-import=no # A regular expression matching names used for dummy variables (i.e. not used). dummy-variables-rgx=_|dummy # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= # try to find bugs in the code using type inference # [TYPECHECK] # Tells wether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # When zope mode is activated, consider the acquired-members option to ignore # access to some undefined attributes. zope=no # List of members which are usually get through zope's acquisition mecanism and # so shouldn't trigger E0201 when accessed (need zope=yes to be considered). 
acquired-members=REQUEST,acl_users,aq_parent # checks for : # * doc strings # * modules / classes / functions / methods / arguments / variables name # * number of arguments, local variables, branchs, returns and statements in # functions, methods # * required module attributes # * dangerous default values as arguments # * redefinition of function / method / class # * uses of the global statement # [BASIC] # Required attributes for module, separated by a comma required-attributes= # Regular expression which should only match functions or classes name which do # not require a docstring no-docstring-rgx=__.*__ # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=(([A-Z_][A-Z1-9_]*)|(__.*__))$ # Regular expression which should only match correct class names class-rgx=[A-Z_][a-zA-Z0-9]+$ # Regular expression which should only match correct function names function-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct method names method-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct instance attribute names attr-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct argument names argument-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct variable names variable-rgx=[a-z_][a-z0-9_]{2,30}$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=i,j,k,ex,Run,_ # Bad variable names which should always be refused, separated by a comma bad-names=foo,bar,baz,toto,tutu,tata # List of builtins function names that should not be used, separated by a comma bad-functions=apply,input # checks for sign of poor/misdesign: # * number of methods, attributes, local variables... # * size, complexity of functions, methods # [DESIGN] # Maximum number of arguments for function / method max-args=12 # Maximum number of locals for function / method body max-locals=30 # Maximum number of return / yield for function / method body max-returns=12 # Maximum number of branch for function / method body max-branchs=30 # Maximum number of statements in function / method body max-statements=60 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=20 # Minimum number of public methods for a class (see R0903). min-public-methods=0 # Maximum number of public methods for a class (see R0904). max-public-methods=20 # checks for # * external modules dependencies # * relative / wildcard imports # * cyclic imports # * uses of deprecated modules # [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,string,TERMIOS,Bastion,rexec # Create a graph of every (i.e. 
internal and external) dependencies in the # given file (report R0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report R0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report R0402 must # not be disabled) int-import-graph= # checks for : # * methods without self as first argument # * overridden methods signature # * access only to existant members via self # * attributes not defined in the __init__ method # * supported interfaces implementation # * unreachable code # [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # checks for similarities and duplicated code. This computation may be # memory / CPU intensive, so you should disable it if you experiments some # problems. # [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=10 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # checks for: # * warning notes in the code like FIXME, XXX # * PEP 263: source code with non ascii character but no encoding declaration # [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes=FIXME,XXX,TODO # checks for : # * unauthorized constructions # * strict indentation # * line length # * use of <> instead of != # [FORMAT] # Maximum number of characters on a single line. max-line-length=90 # Maximum number of lines in a module max-module-lines=1000 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' Pygments-2.3.1/scripts/epydoc.css0000644000175000017500000003277413376260540016106 0ustar piotrpiotr /* Epydoc CSS Stylesheet * * This stylesheet can be used to customize the appearance of epydoc's * HTML output. * */ /* Adapted for Pocoo API docs by Georg Brandl */ /* Default Colors & Styles * - Set the default foreground & background color with 'body'; and * link colors with 'a:link' and 'a:visited'. * - Use bold for decision list terms. * - The heading styles defined here are used for headings *within* * docstring descriptions. All headings used by epydoc itself use * either class='epydoc' or class='toc' (CSS styles for both * defined below). 
*/ body { background: #ffffff; color: #000000; font-family: Trebuchet MS,Tahoma,sans-serif; font-size: 0.9em; line-height: 140%; margin: 0; padding: 0 1.2em 1.2em 1.2em; } a:link { color: #C87900; text-decoration: none; border-bottom: 1px solid #C87900; } a:visited { color: #C87900; text-decoration: none; border-bottom: 1px dotted #C87900; } a:hover { color: #F8A900; border-bottom-color: #F8A900; } dt { font-weight: bold; } h1 { font-size: +180%; font-style: italic; font-weight: bold; margin-top: 1.5em; } h2 { font-size: +140%; font-style: italic; font-weight: bold; } h3 { font-size: +110%; font-style: italic; font-weight: normal; } p { margin-top: .5em; margin-bottom: .5em; } hr { margin-top: 1.5em; margin-bottom: 1.5em; border: 1px solid #BBB; } tt.literal { background: #F5FFD0; padding: 2px; font-size: 110%; } table.rst-docutils { border: 0; } table.rst-docutils td { border: 0; padding: 5px 20px 5px 0px; } /* Page Header & Footer * - The standard page header consists of a navigation bar (with * pointers to standard pages such as 'home' and 'trees'); a * breadcrumbs list, which can be used to navigate to containing * classes or modules; options links, to show/hide private * variables and to show/hide frames; and a page title (using *

). The page title may be followed by a link to the * corresponding source code (using 'span.codelink'). * - The footer consists of a navigation bar, a timestamp, and a * pointer to epydoc's homepage. */ h1.epydoc { margin-top: .4em; margin-bottom: .4em; font-size: +180%; font-weight: bold; font-style: normal; } h2.epydoc { font-size: +130%; font-weight: bold; font-style: normal; } h3.epydoc { font-size: +115%; font-weight: bold; font-style: normal; } table.navbar { background: #E6F8A0; color: #000000; border-top: 1px solid #c0d0d0; border-bottom: 1px solid #c0d0d0; margin: -1px -1.2em 1em -1.2em; } table.navbar th { padding: 2px 7px 2px 0px; } th.navbar-select { background-color: transparent; } th.navbar-select:before { content: ">" } th.navbar-select:after { content: "<" } table.navbar a { border: 0; } span.breadcrumbs { font-size: 95%; font-weight: bold; } span.options { font-size: 80%; } span.codelink { font-size: 85%; } td.footer { font-size: 85%; } /* Table Headers * - Each summary table and details section begins with a 'header' * row. This row contains a section title (marked by * 'span.table-header') as well as a show/hide private link * (marked by 'span.options', defined above). * - Summary tables that contain user-defined groups mark those * groups using 'group header' rows. */ td.table-header { background: #B6C870; color: #000000; border-bottom: 1px solid #FFF; } span.table-header { font-size: 110%; font-weight: bold; } th.group-header { text-align: left; font-style: italic; font-size: 110%; } td.spacer { width: 5%; } /* Summary Tables (functions, variables, etc) * - Each object is described by a single row of the table with * two cells. The left cell gives the object's type, and is * marked with 'code.summary-type'. The right cell gives the * object's name and a summary description. * - CSS styles for the table's header and group headers are * defined above, under 'Table Headers' */ table.summary { border-collapse: collapse; background: #E6F8A0; color: #000000; margin: 1em 0 .5em 0; border: 0; } table.summary tr { border-bottom: 1px solid #BBB; } td.summary a { font-weight: bold; } code.summary-type { font-size: 85%; } /* Details Tables (functions, variables, etc) * - Each object is described in its own single-celled table. * - A single-row summary table w/ table-header is used as * a header for each details section (CSS style for table-header * is defined above, under 'Table Headers'). */ table.detsummary { margin-top: 2em; } table.details { border-collapse: collapse; background: #E6F8A0; color: #000000; border-bottom: 1px solid #BBB; margin: 0; } table.details td { padding: .2em .2em .2em .5em; } table.details table td { padding: 0; } table.details h3 { margin: 5px 0 5px 0; font-size: 105%; font-style: normal; } table.details dd { display: inline; margin-left: 5px; } table.details dl { margin-left: 5px; } /* Index tables (identifier index, term index, etc) * - link-index is used for indices containing lists of links * (namely, the identifier index & term index). * - index-where is used in link indices for the text indicating * the container/source for each link. * - metadata-index is used for indices containing metadata * extracted from fields (namely, the bug index & todo index). 
*/ table.link-index { border-collapse: collapse; background: #F6FFB0; color: #000000; border: 1px solid #608090; } td.link-index { border-width: 0px; } span.index-where { font-size: 70%; } table.metadata-index { border-collapse: collapse; background: #F6FFB0; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; } td.metadata-index { border-width: 1px; border-style: solid; } /* Function signatures * - sig* is used for the signature in the details section. * - .summary-sig* is used for the signature in the summary * table, and when listing property accessor functions. * */ .sig-name { color: #006080; } .sig-arg { color: #008060; } .sig-default { color: #602000; } .summary-sig-name { font-weight: bold; } .summary-sig-arg { color: #006040; } .summary-sig-default { color: #501800; } /* Variable values * - In the 'variable details' sections, each varaible's value is * listed in a 'pre.variable' box. The width of this box is * restricted to 80 chars; if the value's repr is longer than * this it will be wrapped, using a backslash marked with * class 'variable-linewrap'. If the value's repr is longer * than 3 lines, the rest will be ellided; and an ellipsis * marker ('...' marked with 'variable-ellipsis') will be used. * - If the value is a string, its quote marks will be marked * with 'variable-quote'. * - If the variable is a regexp, it is syntax-highlighted using * the re* CSS classes. */ pre.variable { padding: .5em; margin: 0; background-color: #dce4ec; border: 1px solid #708890; } .variable-linewrap { display: none; } .variable-ellipsis { color: #604000; font-weight: bold; } .variable-quote { color: #604000; font-weight: bold; } .re { color: #000000; } .re-char { color: #006030; } .re-op { color: #600000; } .re-group { color: #003060; } .re-ref { color: #404040; } /* Base tree * - Used by class pages to display the base class hierarchy. */ pre.base-tree { font-size: 90%; margin: 1em 0 2em 0; line-height: 100%;} /* Frames-based table of contents headers * - Consists of two frames: one for selecting modules; and * the other listing the contents of the selected module. * - h1.toc is used for each frame's heading * - h2.toc is used for subheadings within each frame. */ h1.toc { text-align: center; font-size: 105%; margin: 0; font-weight: bold; padding: 0; } h2.toc { font-size: 100%; font-weight: bold; margin: 0.5em 0 0 -0.3em; } /* Syntax Highlighting for Source Code * - doctest examples are displayed in a 'pre.py-doctest' block. * If the example is in a details table entry, then it will use * the colors specified by the 'table pre.py-doctest' line. * - Source code listings are displayed in a 'pre.py-src' block. * Each line is marked with 'span.py-line' (used to draw a line * down the left margin, separating the code from the line * numbers). Line numbers are displayed with 'span.py-lineno'. * The expand/collapse block toggle button is displayed with * 'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not * modify the font size of the text.) * - If a source code page is opened with an anchor, then the * corresponding code block will be highlighted. The code * block's header is highlighted with 'py-highlight-hdr'; and * the code block's body is highlighted with 'py-highlight'. * - The remaining py-* classes are used to perform syntax * highlighting (py-string for string literals, py-name for names, * etc.) 
*/ pre.rst-literal-block, pre.py-doctest { margin-left: 1em; margin-right: 1.5em; line-height: 150%; background-color: #F5FFD0; padding: .5em; border: 1px solid #B6C870; font-size: 110%; } pre.py-src { border: 1px solid #BBB; margin-top: 3em; background: #f0f0f0; color: #000000; line-height: 150%; } span.py-line { margin-left: .2em; padding-left: .4em; } span.py-lineno { border-right: 1px solid #BBB; padding: .3em .5em .3em .5em; font-style: italic; font-size: 90%; } a.py-toggle { text-decoration: none; } div.py-highlight-hdr { border-top: 1px solid #BBB; background: #d0e0e0; } div.py-highlight { border-bottom: 1px solid #BBB; background: #d0e0e0; } .py-prompt { color: #005050; font-weight: bold;} .py-string { color: #006030; } .py-comment { color: #003060; } .py-keyword { color: #600000; } .py-output { color: #404040; } .py-name { color: #000050; } .py-name:link { color: #000050; } .py-name:visited { color: #000050; } .py-number { color: #005000; } .py-def-name { color: #000060; font-weight: bold; } .py-base-class { color: #000060; } .py-param { color: #000060; } .py-docstring { color: #006030; } .py-decorator { color: #804020; } /* Use this if you don't want links to names underlined: */ /*a.py-name { text-decoration: none; }*/ /* Graphs & Diagrams * - These CSS styles are used for graphs & diagrams generated using * Graphviz dot. 'img.graph-without-title' is used for bare * diagrams (to remove the border created by making the image * clickable). */ img.graph-without-title { border: none; } img.graph-with-title { border: 1px solid #000000; } span.graph-title { font-weight: bold; } span.graph-caption { } /* General-purpose classes * - 'p.indent-wrapped-lines' defines a paragraph whose first line * is not indented, but whose subsequent lines are. * - The 'nomargin-top' class is used to remove the top margin (e.g. * from lists). The 'nomargin' class is used to remove both the * top and bottom margin (but not the left or right margin -- * for lists, that would cause the bullets to disappear.) */ p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em; margin: 0; } .nomargin-top { margin-top: 0; } .nomargin { margin-top: 0; margin-bottom: 0; } Pygments-2.3.1/scripts/debug_lexer.py0000755000175000017500000002107513376260540016743 0ustar piotrpiotr#!/usr/bin/python # -*- coding: utf-8 -*- """ Lexing error finder ~~~~~~~~~~~~~~~~~~~ For the source files given on the command line, display the text where Error tokens are being generated, along with some context. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import os import sys # always prefer Pygments from source if exists srcpath = os.path.join(os.path.dirname(__file__), '..') if os.path.isdir(os.path.join(srcpath, 'pygments')): sys.path.insert(0, srcpath) from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \ ProfilingRegexLexer, ProfilingRegexLexerMeta from pygments.lexers import get_lexer_by_name, find_lexer_class, \ find_lexer_class_for_filename from pygments.token import Error, Text, _TokenType from pygments.cmdline import _parse_options class DebuggingRegexLexer(ExtendedRegexLexer): """Make the state stack, position and current match instance attributes.""" def get_tokens_unprocessed(self, text, stack=('root',)): """ Split ``text`` into (tokentype, text) pairs. 
``stack`` is the inital stack (default: ``['root']``) """ tokendefs = self._tokens self.ctx = ctx = LexerContext(text, 0) ctx.stack = list(stack) statetokens = tokendefs[ctx.stack[-1]] while 1: for rexmatch, action, new_state in statetokens: self.m = m = rexmatch(text, ctx.pos, ctx.end) if m: if action is not None: if type(action) is _TokenType: yield ctx.pos, action, m.group() ctx.pos = m.end() else: if not isinstance(self, ExtendedRegexLexer): for item in action(self, m): yield item ctx.pos = m.end() else: for item in action(self, m, ctx): yield item if not new_state: # altered the state stack? statetokens = tokendefs[ctx.stack[-1]] if new_state is not None: # state transition if isinstance(new_state, tuple): for state in new_state: if state == '#pop': ctx.stack.pop() elif state == '#push': ctx.stack.append(ctx.stack[-1]) else: ctx.stack.append(state) elif isinstance(new_state, int): # pop del ctx.stack[new_state:] elif new_state == '#push': ctx.stack.append(ctx.stack[-1]) else: assert False, 'wrong state def: %r' % new_state statetokens = tokendefs[ctx.stack[-1]] break else: try: if ctx.pos >= ctx.end: break if text[ctx.pos] == '\n': # at EOL, reset state to 'root' ctx.stack = ['root'] statetokens = tokendefs['root'] yield ctx.pos, Text, u'\n' ctx.pos += 1 continue yield ctx.pos, Error, text[ctx.pos] ctx.pos += 1 except IndexError: break def main(fn, lexer=None, options={}): if lexer is not None: lxcls = get_lexer_by_name(lexer).__class__ else: lxcls = find_lexer_class_for_filename(os.path.basename(fn)) if lxcls is None: name, rest = fn.split('_', 1) lxcls = find_lexer_class(name) if lxcls is None: raise AssertionError('no lexer found for file %r' % fn) print('Using lexer: %s (%s.%s)' % (lxcls.name, lxcls.__module__, lxcls.__name__)) debug_lexer = False # if profile: # # does not work for e.g. ExtendedRegexLexers # if lxcls.__bases__ == (RegexLexer,): # # yes we can! (change the metaclass) # lxcls.__class__ = ProfilingRegexLexerMeta # lxcls.__bases__ = (ProfilingRegexLexer,) # lxcls._prof_sort_index = profsort # else: # if lxcls.__bases__ == (RegexLexer,): # lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True # elif lxcls.__bases__ == (DebuggingRegexLexer,): # # already debugged before # debug_lexer = True # else: # # HACK: ExtendedRegexLexer subclasses will only partially work here. 
# lxcls.__bases__ = (DebuggingRegexLexer,) # debug_lexer = True lx = lxcls(**options) lno = 1 if fn == '-': text = sys.stdin.read() else: with open(fn, 'rb') as fp: text = fp.read().decode('utf-8') text = text.strip('\n') + '\n' tokens = [] states = [] def show_token(tok, state): reprs = list(map(repr, tok)) print(' ' + reprs[1] + ' ' + ' ' * (29-len(reprs[1])) + reprs[0], end=' ') if debug_lexer: print(' ' + ' ' * (29-len(reprs[0])) + ' : '.join(state) if state else '', end=' ') print() for type, val in lx.get_tokens(text): lno += val.count('\n') if type == Error and not ignerror: print('Error parsing', fn, 'on line', lno) if not showall: print('Previous tokens' + (debug_lexer and ' and states' or '') + ':') for i in range(max(len(tokens) - num, 0), len(tokens)): if debug_lexer: show_token(tokens[i], states[i]) else: show_token(tokens[i], None) print('Error token:') l = len(repr(val)) print(' ' + repr(val), end=' ') if debug_lexer and hasattr(lx, 'ctx'): print(' ' * (60-l) + ' : '.join(lx.ctx.stack), end=' ') print() print() return 1 tokens.append((type, val)) if debug_lexer: if hasattr(lx, 'ctx'): states.append(lx.ctx.stack[:]) else: states.append(None) if showall: show_token((type, val), states[-1] if debug_lexer else None) return 0 def print_help(): print('''\ Pygments development helper to quickly debug lexers. scripts/debug_lexer.py [options] file ... Give one or more filenames to lex them and display possible error tokens and/or profiling info. Files are assumed to be encoded in UTF-8. Selecting lexer and options: -l NAME use lexer named NAME (default is to guess from the given filenames) -O OPTIONSTR use lexer options parsed from OPTIONSTR Debugging lexing errors: -n N show the last N tokens on error -a always show all lexed tokens (default is only to show them when an error occurs) -e do not stop on error tokens Profiling: -p use the ProfilingRegexLexer to profile regexes instead of the debugging lexer -s N sort profiling output by column N (default is column 4, the time per call) ''') num = 10 showall = False ignerror = False lexer = None options = {} profile = False profsort = 4 if __name__ == '__main__': import getopt opts, args = getopt.getopt(sys.argv[1:], 'n:l:aepO:s:h') for opt, val in opts: if opt == '-n': num = int(val) elif opt == '-a': showall = True elif opt == '-e': ignerror = True elif opt == '-l': lexer = val elif opt == '-p': profile = True elif opt == '-s': profsort = int(val) elif opt == '-O': options = _parse_options([val]) elif opt == '-h': print_help() sys.exit(0) ret = 0 if not args: print_help() for f in args: ret += main(f, lexer, options) sys.exit(bool(ret)) Pygments-2.3.1/README.rst0000644000175000017500000000206013376260540014072 0ustar piotrpiotrREADME for Pygments =================== This is the source of Pygments. It is a generic syntax highlighter that supports over 300 languages and text formats, for use in code hosting, forums, wikis or other applications that need to prettify source code. Installing ---------- ... works as usual, use ``python setup.py install``. Documentation ------------- ... can be found online at http://pygments.org/ or created by :: cd doc make html Development ----------- ... takes place on `Bitbucket `_, where the Mercurial repository, tickets and pull requests can be viewed. Continuous testing runs on drone.io: .. 
image:: https://drone.io/bitbucket.org/birkenfeld/pygments-main/status.png :target: https://drone.io/bitbucket.org/birkenfeld/pygments-main The authors ----------- Pygments is maintained by **Georg Brandl**, e-mail address *georg*\ *@*\ *python.org*. Many lexers and fixes have been contributed by **Armin Ronacher**, the rest of the `Pocoo `_ team and **Tim Hatch**. Pygments-2.3.1/setup.py0000755000175000017500000000461613405476077014140 0ustar piotrpiotr#!/usr/bin/env python # -*- coding: utf-8 -*- """Pygments ~~~~~~~~ Pygments is a syntax highlighting package written in Python. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are: * a wide range of over 300 languages and other text formats is supported * special attention is paid to details, increasing quality by a fair amount * support for new languages and formats are added easily * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image \ formats that PIL supports and ANSI sequences * it is usable as a command-line tool and as a library :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ try: from setuptools import setup, find_packages have_setuptools = True except ImportError: from distutils.core import setup def find_packages(*args, **kwargs): return [ 'pygments', 'pygments.lexers', 'pygments.formatters', 'pygments.styles', 'pygments.filters', ] have_setuptools = False if have_setuptools: add_keywords = dict( entry_points = { 'console_scripts': ['pygmentize = pygments.cmdline:main'], }, ) else: add_keywords = dict( scripts = ['pygmentize'], ) setup( name = 'Pygments', version = '2.3.1', url = 'http://pygments.org/', license = 'BSD License', author = 'Georg Brandl', author_email = 'georg@python.org', description = 'Pygments is a syntax highlighting package written in Python.', long_description = __doc__, keywords = 'syntax highlighting', packages = find_packages(), platforms = 'any', zip_safe = False, include_package_data = True, classifiers = [ 'License :: OSI Approved :: BSD License', 'Intended Audience :: Developers', 'Intended Audience :: End Users/Desktop', 'Intended Audience :: System Administrators', 'Development Status :: 6 - Mature', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 3', 'Operating System :: OS Independent', 'Topic :: Text Processing :: Filters', 'Topic :: Utilities', ], **add_keywords ) Pygments-2.3.1/AUTHORS0000644000175000017500000001766213376303352013470 0ustar piotrpiotrPygments is written and maintained by Georg Brandl . Major developers are Tim Hatch and Armin Ronacher . Other contributors, listed alphabetically, are: * Sam Aaron -- Ioke lexer * Ali Afshar -- image formatter * Thomas Aglassinger -- Easytrieve, JCL, Rexx and Transact-SQL lexers * Muthiah Annamalai -- Ezhil lexer * Kumar Appaiah -- Debian control lexer * Andreas Amann -- AppleScript lexer * Timothy Armstrong -- Dart lexer fixes * Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers * Jeremy Ashkenas -- CoffeeScript lexer * José Joaquín Atria -- Praat lexer * Stefan Matthias Aust -- Smalltalk lexer * Lucas Bajolet -- Nit lexer * Ben Bangert -- Mako lexers * Max Battcher -- Darcs patch lexer * Thomas Baruchel -- APL lexer * Tim Baumann -- (Literate) Agda lexer * Paul Baumgart, 280 North, Inc. 
-- Objective-J lexer * Michael Bayer -- Myghty lexers * Thomas Beale -- Archetype lexers * John Benediktsson -- Factor lexer * Trevor Bergeron -- mIRC formatter * Vincent Bernat -- LessCSS lexer * Christopher Bertels -- Fancy lexer * Sébastien Bigaret -- QVT Operational lexer * Jarrett Billingsley -- MiniD lexer * Adam Blinkinsop -- Haskell, Redcode lexers * Frits van Bommel -- assembler lexers * Pierre Bourdon -- bugfixes * Matthias Bussonnier -- ANSI style handling for terminal-256 formatter * chebee7i -- Python traceback lexer improvements * Hiram Chirino -- Scaml and Jade lexers * Mauricio Caceres -- SAS and Stata lexers. * Ian Cooper -- VGL lexer * David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers * Leaf Corcoran -- MoonScript lexer * Christopher Creutzig -- MuPAD lexer * Daniël W. Crompton -- Pike lexer * Pete Curry -- bugfixes * Bryan Davis -- EBNF lexer * Bruno Deferrari -- Shen lexer * Giedrius Dubinskas -- HTML formatter improvements * Owen Durni -- Haxe lexer * Alexander Dutton, Oxford University Computing Services -- SPARQL lexer * James Edwards -- Terraform lexer * Nick Efford -- Python 3 lexer * Sven Efftinge -- Xtend lexer * Artem Egorkine -- terminal256 formatter * Matthew Fernandez -- CAmkES lexer * Michael Ficarra -- CPSA lexer * James H. Fisher -- PostScript lexer * William S. Fulton -- SWIG lexer * Carlos Galdino -- Elixir and Elixir Console lexers * Michael Galloy -- IDL lexer * Naveen Garg -- Autohotkey lexer * Laurent Gautier -- R/S lexer * Alex Gaynor -- PyPy log lexer * Richard Gerkin -- Igor Pro lexer * Alain Gilbert -- TypeScript lexer * Alex Gilding -- BlitzBasic lexer * Bertrand Goetzmann -- Groovy lexer * Krzysiek Goj -- Scala lexer * Andrey Golovizin -- BibTeX lexers * Matt Good -- Genshi, Cheetah lexers * Michał Górny -- vim modeline support * Alex Gosse -- TrafficScript lexer * Patrick Gotthardt -- PHP namespaces support * Olivier Guibe -- Asymptote lexer * Phil Hagelberg -- Fennel lexer * Florian Hahn -- Boogie lexer * Martin Harriman -- SNOBOL lexer * Matthew Harrison -- SVG formatter * Steven Hazel -- Tcl lexer * Dan Michael Heggø -- Turtle lexer * Aslak Hellesøy -- Gherkin lexer * Greg Hendershott -- Racket lexer * Justin Hendrick -- ParaSail lexer * Jordi Gutiérrez Hermoso -- Octave lexer * David Hess, Fish Software, Inc. -- Objective-J lexer * Varun Hiremath -- Debian control lexer * Rob Hoelz -- Perl 6 lexer * Doug Hogan -- Mscgen lexer * Ben Hollis -- Mason lexer * Max Horn -- GAP lexer * Alastair Houghton -- Lexer inheritance facility * Tim Howard -- BlitzMax lexer * Dustin Howett -- Logos lexer * Ivan Inozemtsev -- Fantom lexer * Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session, MSDOS session, BC, WDiff * Brian R. Jackson -- Tea lexer * Christian Jann -- ShellSession lexer * Dennis Kaarsemaker -- sources.list lexer * Dmitri Kabak -- Inferno Limbo lexer * Igor Kalnitsky -- vhdl lexer * Alexander Kit -- MaskJS lexer * Pekka Klärck -- Robot Framework lexer * Gerwin Klein -- Isabelle lexer * Eric Knibbe -- Lasso lexer * Stepan Koltsov -- Clay lexer * Adam Koprowski -- Opa lexer * Benjamin Kowarsch -- Modula-2 lexer * Domen Kožar -- Nix lexer * Oleh Krekel -- Emacs Lisp lexer * Alexander Kriegisch -- Kconfig and AspectJ lexers * Marek Kubica -- Scheme lexer * Jochen Kupperschmidt -- Markdown processor * Gerd Kurzbach -- Modelica lexer * Jon Larimer, Google Inc. 
-- Smali lexer * Olov Lassus -- Dart lexer * Matt Layman -- TAP lexer * Kristian Lyngstøl -- Varnish lexers * Sylvestre Ledru -- Scilab lexer * Chee Sing Lee -- Flatline lexer * Mark Lee -- Vala lexer * Valentin Lorentz -- C++ lexer improvements * Ben Mabey -- Gherkin lexer * Angus MacArthur -- QML lexer * Louis Mandel -- X10 lexer * Louis Marchand -- Eiffel lexer * Simone Margaritelli -- Hybris lexer * Kirk McDonald -- D lexer * Gordon McGregor -- SystemVerilog lexer * Stephen McKamey -- Duel/JBST lexer * Brian McKenna -- F# lexer * Charles McLaughlin -- Puppet lexer * Lukas Meuser -- BBCode formatter, Lua lexer * Cat Miller -- Pig lexer * Paul Miller -- LiveScript lexer * Hong Minhee -- HTTP lexer * Michael Mior -- Awk lexer * Bruce Mitchener -- Dylan lexer rewrite * Reuben Morais -- SourcePawn lexer * Jon Morton -- Rust lexer * Paulo Moura -- Logtalk lexer * Mher Movsisyan -- DTD lexer * Dejan Muhamedagic -- Crmsh lexer * Ana Nelson -- Ragel, ANTLR, R console lexers * Kurt Neufeld -- Markdown lexer * Nam T. Nguyen -- Monokai style * Jesper Noehr -- HTML formatter "anchorlinenos" * Mike Nolta -- Julia lexer * Jonas Obrist -- BBCode lexer * Edward O'Callaghan -- Cryptol lexer * David Oliva -- Rebol lexer * Pat Pannuto -- nesC lexer * Jon Parise -- Protocol buffers and Thrift lexers * Benjamin Peterson -- Test suite refactoring * Ronny Pfannschmidt -- BBCode lexer * Dominik Picheta -- Nimrod lexer * Andrew Pinkham -- RTF Formatter Refactoring * Clément Prévost -- UrbiScript lexer * Tanner Prynn -- cmdline -x option and loading lexers from files * Oleh Prypin -- Crystal lexer (based on Ruby lexer) * Elias Rabel -- Fortran fixed form lexer * raichoo -- Idris lexer * Kashif Rasul -- CUDA lexer * Nathan Reed -- HLSL lexer * Justin Reidy -- MXML lexer * Norman Richards -- JSON lexer * Corey Richardson -- Rust lexer updates * Lubomir Rintel -- GoodData MAQL and CL lexers * Andre Roberge -- Tango style * Georg Rollinger -- HSAIL lexer * Michiel Roos -- TypoScript lexer * Konrad Rudolph -- LaTeX formatter enhancements * Mario Ruggier -- Evoque lexers * Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements * Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers * Matteo Sasso -- Common Lisp lexer * Joe Schafer -- Ada lexer * Ken Schutte -- Matlab lexers * René Schwaiger -- Rainbow Dash style * Sebastian Schweizer -- Whiley lexer * Tassilo Schweyer -- Io, MOOCode lexers * Ted Shaw -- AutoIt lexer * Joerg Sieker -- ABAP lexer * Robert Simmons -- Standard ML lexer * Kirill Simonov -- YAML lexer * Corbin Simpson -- Monte lexer * Alexander Smishlajev -- Visual FoxPro lexer * Steve Spigarelli -- XQuery lexer * Jerome St-Louis -- eC lexer * Camil Staps -- Clean and NuSMV lexers * James Strachan -- Kotlin lexer * Tom Stuart -- Treetop lexer * Colin Sullivan -- SuperCollider lexer * Ben Swift -- Extempore lexer * Edoardo Tenani -- Arduino lexer * Tiberius Teng -- default style overhaul * Jeremy Thurgood -- Erlang, Squid config lexers * Brian Tiffin -- OpenCOBOL lexer * Bob Tolbert -- Hy lexer * Matthias Trute -- Forth lexer * Erick Tryzelaar -- Felix lexer * Alexander Udalov -- Kotlin lexer improvements * Thomas Van Doren -- Chapel lexer * Daniele Varrazzo -- PostgreSQL lexers * Abe Voelker -- OpenEdge ABL lexer * Pepijn de Vos -- HTML formatter CTags support * Matthias Vallentin -- Bro lexer * Benoît Vinot -- AMPL lexer * Linh Vu Hong -- RSL lexer * Nathan Weizenbaum -- Haml and Sass lexers * Nathan Whetsell -- Csound lexers * Dietmar Winkler -- Modelica lexer * Nils Winter -- Smalltalk 
lexer * Davy Wybiral -- Clojure lexer * Whitney Young -- ObjectiveC lexer * Diego Zamboni -- CFengine3 lexer * Enrique Zamudio -- Ceylon lexer * Alex Zimin -- Nemerle lexer * Rob Zimmerman -- Kal lexer * Vincent Zurczak -- Roboconf lexer Many thanks for all contributions! Pygments-2.3.1/LICENSE0000644000175000017500000000246313376260540013417 0ustar piotrpiotrCopyright (c) 2006-2017 by the respective authors (see AUTHORS file). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Pygments-2.3.1/PKG-INFO0000644000175000017500000000326013405476655013514 0ustar piotrpiotrMetadata-Version: 1.1 Name: Pygments Version: 2.3.1 Summary: Pygments is a syntax highlighting package written in Python. Home-page: http://pygments.org/ Author: Georg Brandl Author-email: georg@python.org License: BSD License Description: Pygments ~~~~~~~~ Pygments is a syntax highlighting package written in Python. It is a generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code. Highlights are: * a wide range of over 300 languages and other text formats is supported * special attention is paid to details, increasing quality by a fair amount * support for new languages and formats are added easily * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports and ANSI sequences * it is usable as a command-line tool and as a library :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
Keywords: syntax highlighting Platform: any Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: Intended Audience :: System Administrators Classifier: Development Status :: 6 - Mature Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 3 Classifier: Operating System :: OS Independent Classifier: Topic :: Text Processing :: Filters Classifier: Topic :: Utilities Pygments-2.3.1/pygments/0000755000175000017500000000000013405476652014261 5ustar piotrpiotrPygments-2.3.1/pygments/sphinxext.py0000644000175000017500000001106113376303353016656 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.sphinxext ~~~~~~~~~~~~~~~~~~ Sphinx extension to generate automatic documentation of lexers, formatters and filters. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import sys from docutils import nodes from docutils.statemachine import ViewList from docutils.parsers.rst import Directive from sphinx.util.nodes import nested_parse_with_titles MODULEDOC = ''' .. module:: %s %s %s ''' LEXERDOC = ''' .. class:: %s :Short names: %s :Filenames: %s :MIME types: %s %s ''' FMTERDOC = ''' .. class:: %s :Short names: %s :Filenames: %s %s ''' FILTERDOC = ''' .. class:: %s :Name: %s %s ''' class PygmentsDoc(Directive): """ A directive to collect all lexers/formatters/filters and generate autoclass directives for them. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): self.filenames = set() if self.arguments[0] == 'lexers': out = self.document_lexers() elif self.arguments[0] == 'formatters': out = self.document_formatters() elif self.arguments[0] == 'filters': out = self.document_filters() else: raise Exception('invalid argument for "pygmentsdoc" directive') node = nodes.compound() vl = ViewList(out.split('\n'), source='') nested_parse_with_titles(self.state, vl, node) for fn in self.filenames: self.state.document.settings.record_dependencies.add(fn) return node.children def document_lexers(self): from pygments.lexers._mapping import LEXERS out = [] modules = {} moduledocstrings = {} for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]): module = data[0] mod = __import__(module, None, None, [classname]) self.filenames.add(mod.__file__) cls = getattr(mod, classname) if not cls.__doc__: print("Warning: %s does not have a docstring." 
% classname) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') modules.setdefault(module, []).append(( classname, ', '.join(data[2]) or 'None', ', '.join(data[3]).replace('*', '\\*').replace('_', '\\') or 'None', ', '.join(data[4]) or 'None', docstring)) if module not in moduledocstrings: moddoc = mod.__doc__ if isinstance(moddoc, bytes): moddoc = moddoc.decode('utf8') moduledocstrings[module] = moddoc for module, lexers in sorted(modules.items(), key=lambda x: x[0]): if moduledocstrings[module] is None: raise Exception("Missing docstring for %s" % (module,)) heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.') out.append(MODULEDOC % (module, heading, '-'*len(heading))) for data in lexers: out.append(LEXERDOC % data) return ''.join(out) def document_formatters(self): from pygments.formatters import FORMATTERS out = [] for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]): module = data[0] mod = __import__(module, None, None, [classname]) self.filenames.add(mod.__file__) cls = getattr(mod, classname) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') heading = cls.__name__ out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None', ', '.join(data[3]).replace('*', '\\*') or 'None', docstring)) return ''.join(out) def document_filters(self): from pygments.filters import FILTERS out = [] for name, cls in FILTERS.items(): self.filenames.add(sys.modules[cls.__module__].__file__) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') out.append(FILTERDOC % (cls.__name__, name, docstring)) return ''.join(out) def setup(app): app.add_directive('pygmentsdoc', PygmentsDoc) Pygments-2.3.1/pygments/plugin.py0000644000175000017500000000330613376303353016125 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.plugin ~~~~~~~~~~~~~~~ Pygments setuptools plugin interface. The methods defined here also work if setuptools isn't installed but they just return nothing. lexer plugins:: [pygments.lexers] yourlexer = yourmodule:YourLexer formatter plugins:: [pygments.formatters] yourformatter = yourformatter:YourFormatter /.ext = yourformatter:YourFormatter As you can see, you can define extensions for the formatter with a leading slash. syntax plugins:: [pygments.styles] yourstyle = yourstyle:YourStyle filter plugin:: [pygments.filter] yourfilter = yourfilter:YourFilter :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
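    The entry points above are declared in a plugin package's ``setup()`` call.
    A minimal sketch (the project name here is made up; the module and class
    names match the examples above)::

        from setuptools import setup

        setup(
            name='pygments-yourplugin',
            py_modules=['yourmodule'],
            entry_points={
                'pygments.lexers': [
                    'yourlexer = yourmodule:YourLexer',
                ],
            },
        )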
""" LEXER_ENTRY_POINT = 'pygments.lexers' FORMATTER_ENTRY_POINT = 'pygments.formatters' STYLE_ENTRY_POINT = 'pygments.styles' FILTER_ENTRY_POINT = 'pygments.filters' def iter_entry_points(group_name): try: import pkg_resources except (ImportError, IOError): return [] return pkg_resources.iter_entry_points(group_name) def find_plugin_lexers(): for entrypoint in iter_entry_points(LEXER_ENTRY_POINT): yield entrypoint.load() def find_plugin_formatters(): for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT): yield entrypoint.name, entrypoint.load() def find_plugin_styles(): for entrypoint in iter_entry_points(STYLE_ENTRY_POINT): yield entrypoint.name, entrypoint.load() def find_plugin_filters(): for entrypoint in iter_entry_points(FILTER_ENTRY_POINT): yield entrypoint.name, entrypoint.load() Pygments-2.3.1/pygments/lexers/0000755000175000017500000000000013405476653015564 5ustar piotrpiotrPygments-2.3.1/pygments/lexers/ezhil.py0000644000175000017500000000571413376260540017251 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.ezhil ~~~~~~~~~~~~~~~~~~~~~ Pygments lexers for Ezhil language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, words from pygments.token import Keyword, Text, Comment, Name from pygments.token import String, Number, Punctuation, Operator __all__ = ['EzhilLexer'] class EzhilLexer(RegexLexer): """ Lexer for `Ezhil, a Tamil script-based programming language `_ .. versionadded:: 2.1 """ name = 'Ezhil' aliases = ['ezhil'] filenames = ['*.n'] mimetypes = ['text/x-ezhil'] flags = re.MULTILINE | re.UNICODE # Refer to tamil.utf8.tamil_letters from open-tamil for a stricter version of this. # This much simpler version is close enough, and includes combining marks. _TALETTERS = u'[a-zA-Z_]|[\u0b80-\u0bff]' tokens = { 'root': [ include('keywords'), (r'#.*\n', Comment.Single), (r'[@+/*,^\-%]|[!<>=]=?|&&?|\|\|?', Operator), (u'இல்', Operator.Word), (words((u'assert', u'max', u'min', u'நீளம்', u'சரம்_இடமாற்று', u'சரம்_கண்டுபிடி', u'பட்டியல்', u'பின்இணை', u'வரிசைப்படுத்து', u'எடு', u'தலைகீழ்', u'நீட்டிக்க', u'நுழைக்க', u'வை', u'கோப்பை_திற', u'கோப்பை_எழுது', u'கோப்பை_மூடு', u'pi', u'sin', u'cos', u'tan', u'sqrt', u'hypot', u'pow', u'exp', u'log', u'log10', u'exit', ), suffix=r'\b'), Name.Builtin), (r'(True|False)\b', Keyword.Constant), (r'[^\S\n]+', Text), include('identifier'), include('literal'), (r'[(){}\[\]:;.]', Punctuation), ], 'keywords': [ (u'பதிப்பி|தேர்ந்தெடு|தேர்வு|ஏதேனில்|ஆனால்|இல்லைஆனால்|இல்லை|ஆக|ஒவ்வொன்றாக|இல்|வரை|செய்|முடியேனில்|பின்கொடு|முடி|நிரல்பாகம்|தொடர்|நிறுத்து|நிரல்பாகம்', Keyword), ], 'identifier': [ (u'(?:'+_TALETTERS+u')(?:[0-9]|'+_TALETTERS+u')*', Name), ], 'literal': [ (r'".*?"', String), (r'(?u)\d+((\.\d*)?[eE][+-]?\d+|\.\d*)', Number.Float), (r'(?u)\d+', Number.Integer), ] } def __init__(self, **options): super(EzhilLexer, self).__init__(**options) self.encoding = options.get('encoding', 'utf-8') Pygments-2.3.1/pygments/lexers/csound.py0000644000175000017500000004023613402534107017420 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.csound ~~~~~~~~~~~~~~~~~~~~~~ Lexers for Csound languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import re from pygments.lexer import RegexLexer, bygroups, default, include, using, words from pygments.token import Comment, Error, Keyword, Name, Number, Operator, Punctuation, \ String, Text, Whitespace from pygments.lexers._csound_builtins import OPCODES, DEPRECATED_OPCODES from pygments.lexers.html import HtmlLexer from pygments.lexers.python import PythonLexer from pygments.lexers.scripting import LuaLexer __all__ = ['CsoundScoreLexer', 'CsoundOrchestraLexer', 'CsoundDocumentLexer'] newline = (r'((?:(?:;|//).*)*)(\n)', bygroups(Comment.Single, Text)) class CsoundLexer(RegexLexer): tokens = { 'whitespace': [ (r'[ \t]+', Text), (r'/[*](?:.|\n)*?[*]/', Comment.Multiline), (r'(?:;|//).*$', Comment.Single), (r'(\\)(\n)', bygroups(Whitespace, Text)) ], 'preprocessor directives': [ (r'#(?:e(?:nd(?:if)?|lse)\b|##)|@@?[ \t]*\d+', Comment.Preproc), (r'#include', Comment.Preproc, 'include directive'), (r'#[ \t]*define', Comment.Preproc, 'define directive'), (r'#(?:ifn?def|undef)\b', Comment.Preproc, 'macro directive') ], 'include directive': [ include('whitespace'), (r'([^ \t]).*?\1', String, '#pop') ], 'define directive': [ (r'\n', Text), include('whitespace'), (r'([A-Z_a-z]\w*)(\()', bygroups(Comment.Preproc, Punctuation), ('#pop', 'macro parameter name list')), (r'[A-Z_a-z]\w*', Comment.Preproc, ('#pop', 'before macro body')) ], 'macro parameter name list': [ include('whitespace'), (r'[A-Z_a-z]\w*', Comment.Preproc), (r"['#]", Punctuation), (r'\)', Punctuation, ('#pop', 'before macro body')) ], 'before macro body': [ (r'\n', Text), include('whitespace'), (r'#', Punctuation, ('#pop', 'macro body')) ], 'macro body': [ (r'(?:\\(?!#)|[^#\\]|\n)+', Comment.Preproc), (r'\\#', Comment.Preproc), (r'(?`_ scores. .. versionadded:: 2.1 """ name = 'Csound Score' aliases = ['csound-score', 'csound-sco'] filenames = ['*.sco'] tokens = { 'root': [ (r'\n', Text), include('whitespace and macro uses'), include('preprocessor directives'), (r'[abCdefiqstvxy]', Keyword), # There is also a w statement that is generated internally and should not be # used; see https://github.com/csound/csound/issues/750. (r'z', Keyword.Constant), # z is a constant equal to 800,000,000,000. 800 billion seconds is about # 25,367.8 years. See also # https://csound.github.io/docs/manual/ScoreTop.html and # https://github.com/csound/csound/search?q=stof+path%3AEngine+filename%3Asread.c. (r'([nNpP][pP])(\d+)', bygroups(Keyword, Number.Integer)), (r'[mn]', Keyword, 'mark statement'), include('numbers'), (r'[!+\-*/^%&|<>#~.]', Operator), (r'[()\[\]]', Punctuation), (r'"', String, 'quoted string'), (r'\{', Comment.Preproc, 'loop after left brace'), ], 'mark statement': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', Name.Label), (r'\n', Text, '#pop') ], 'quoted string': [ (r'"', String, '#pop'), (r'[^"$]+', String), include('macro uses'), (r'[$]', String) ], 'loop after left brace': [ include('whitespace and macro uses'), (r'\d+', Number.Integer, ('#pop', 'loop after repeat count')), ], 'loop after repeat count': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', Comment.Preproc, ('#pop', 'loop')) ], 'loop': [ (r'\}', Comment.Preproc, '#pop'), include('root') ], # Braced strings are not allowed in Csound scores, but this is needed # because the superclass includes it. 'braced string': [ (r'\}\}', String, '#pop'), (r'[^}]|\}(?!\})', String) ] } class CsoundOrchestraLexer(CsoundLexer): """ For `Csound `_ orchestras. .. 
versionadded:: 2.1 """ name = 'Csound Orchestra' aliases = ['csound', 'csound-orc'] filenames = ['*.orc', '*.udo'] user_defined_opcodes = set() def opcode_name_callback(lexer, match): opcode = match.group(0) lexer.user_defined_opcodes.add(opcode) yield match.start(), Name.Function, opcode def name_callback(lexer, match): name = match.group(1) if name in OPCODES or name in DEPRECATED_OPCODES: yield match.start(), Name.Builtin, name if match.group(2): yield match.start(2), Punctuation, match.group(2) yield match.start(3), Keyword.Type, match.group(3) elif name in lexer.user_defined_opcodes: yield match.start(), Name.Function, name else: nameMatch = re.search(r'^(g?[afikSw])(\w+)', name) if nameMatch: yield nameMatch.start(1), Keyword.Type, nameMatch.group(1) yield nameMatch.start(2), Name, nameMatch.group(2) else: yield match.start(), Name, name if match.group(2): yield match.start(2), Punctuation, match.group(2) yield match.start(3), Name, match.group(3) tokens = { 'root': [ (r'\n', Text), (r'^([ \t]*)(\w+)(:)(?:[ \t]+|$)', bygroups(Text, Name.Label, Punctuation)), include('whitespace and macro uses'), include('preprocessor directives'), (r'\binstr\b', Keyword.Declaration, 'instrument numbers and identifiers'), (r'\bopcode\b', Keyword.Declaration, 'after opcode keyword'), (r'\b(?:end(?:in|op))\b', Keyword.Declaration), include('partial statements') ], 'partial statements': [ (r'\b(?:0dbfs|A4|k(?:r|smps)|nchnls(?:_i)?|sr)\b', Name.Variable.Global), include('numbers'), (r'\+=|-=|\*=|/=|<<|>>|<=|>=|==|!=|&&|\|\||[~¬]|[=!+\-*/^%&|<>#?:]', Operator), (r'[(),\[\]]', Punctuation), (r'"', String, 'quoted string'), (r'\{\{', String, 'braced string'), (words(( 'do', 'else', 'elseif', 'endif', 'enduntil', 'fi', 'if', 'ithen', 'kthen', 'od', 'then', 'until', 'while', ), prefix=r'\b', suffix=r'\b'), Keyword), (words(('return', 'rireturn'), prefix=r'\b', suffix=r'\b'), Keyword.Pseudo), (r'\b[ik]?goto\b', Keyword, 'goto label'), (r'\b(r(?:einit|igoto)|tigoto)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), 'goto label'), (r'\b(c(?:g|in?|k|nk?)goto)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument')), (r'\b(timout)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument', 'goto argument')), (r'\b(loop_[gl][et])(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument', 'goto argument', 'goto argument')), (r'\bprintk?s\b', Name.Builtin, 'prints opcode'), (r'\b(?:readscore|scoreline(?:_i)?)\b', Name.Builtin, 'Csound score opcode'), (r'\bpyl?run[it]?\b', Name.Builtin, 'Python opcode'), (r'\blua_(?:exec|opdef)\b', Name.Builtin, 'Lua opcode'), (r'\bp\d+\b', Name.Variable.Instance), (r'\b([A-Z_a-z]\w*)(?:(:)([A-Za-z]))?\b', name_callback) ], 'instrument numbers and identifiers': [ include('whitespace and macro uses'), (r'\d+|[A-Z_a-z]\w*', Name.Function), (r'[+,]', Punctuation), (r'\n', Text, '#pop') ], 'after opcode keyword': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', opcode_name_callback, ('#pop', 'opcode type signatures')), (r'\n', Text, '#pop') ], 'opcode type signatures': [ include('whitespace and macro uses'), # https://github.com/csound/csound/search?q=XIDENT+path%3AEngine+filename%3Acsound_orc.lex (r'0|[afijkKoOpPStV\[\]]+', Keyword.Type), (r',', Punctuation), (r'\n', Text, '#pop') ], 'quoted string': [ (r'"', String, '#pop'), (r'[^\\"$%)]+', String), include('macro uses'), include('escape sequences'), include('format specifiers'), (r'[\\$%)]', String) ], 'braced string': [ (r'\}\}', String, '#pop'), 
(r'(?:[^\\%)}]|\}(?!\}))+', String), include('escape sequences'), include('format specifiers'), (r'[\\%)]', String) ], 'escape sequences': [ # https://github.com/csound/csound/search?q=unquote_string+path%3AEngine+filename%3Acsound_orc_compile.c (r'\\(?:[\\abnrt"]|[0-7]{1,3})', String.Escape) ], # Format specifiers are highlighted in all strings, even though only # fprintks https://csound.github.io/docs/manual/fprintks.html # fprints https://csound.github.io/docs/manual/fprints.html # printf/printf_i https://csound.github.io/docs/manual/printf.html # printks https://csound.github.io/docs/manual/printks.html # prints https://csound.github.io/docs/manual/prints.html # sprintf https://csound.github.io/docs/manual/sprintf.html # sprintfk https://csound.github.io/docs/manual/sprintfk.html # work with strings that contain format specifiers. In addition, these # opcodes’ handling of format specifiers is inconsistent: # - fprintks, fprints, printks, and prints do accept %a and %A # specifiers, but can’t accept %s specifiers. # - printf, printf_i, sprintf, and sprintfk don’t accept %a and %A # specifiers, but can accept %s specifiers. # See https://github.com/csound/csound/issues/747 for more information. 'format specifiers': [ (r'%[#0\- +]*\d*(?:\.\d+)?[diuoxXfFeEgGaAcs]', String.Interpol), (r'%%', String.Escape) ], 'goto argument': [ include('whitespace and macro uses'), (r',', Punctuation, '#pop'), include('partial statements') ], 'goto label': [ include('whitespace and macro uses'), (r'\w+', Name.Label, '#pop'), default('#pop') ], 'prints opcode': [ include('whitespace and macro uses'), (r'"', String, 'prints quoted string'), default('#pop') ], 'prints quoted string': [ (r'\\\\[aAbBnNrRtT]', String.Escape), (r'%[!nNrRtT]|[~^]{1,2}', String.Escape), include('quoted string') ], 'Csound score opcode': [ include('whitespace and macro uses'), (r'\{\{', String, 'Csound score'), (r'\n', Text, '#pop') ], 'Csound score': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(CsoundScoreLexer)) ], 'Python opcode': [ include('whitespace and macro uses'), (r'\{\{', String, 'Python'), (r'\n', Text, '#pop') ], 'Python': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(PythonLexer)) ], 'Lua opcode': [ include('whitespace and macro uses'), (r'\{\{', String, 'Lua'), (r'\n', Text, '#pop') ], 'Lua': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(LuaLexer)) ] } class CsoundDocumentLexer(RegexLexer): """ For `Csound `_ documents. .. versionadded:: 2.1 """ name = 'Csound Document' aliases = ['csound-document', 'csound-csd'] filenames = ['*.csd'] # These tokens are based on those in XmlLexer in pygments/lexers/html.py. Making # CsoundDocumentLexer a subclass of XmlLexer rather than RegexLexer may seem like a # better idea, since Csound Document files look like XML files. However, Csound # Documents can contain Csound comments (preceded by //, for example) before and # after the root element, unescaped bitwise AND & and less than < operators, etc. In # other words, while Csound Document files look like XML files, they may not actually # be XML files. 
tokens = { 'root': [ (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'(?:;|//).*$', Comment.Single), (r'[^/;<]+|/(?!/)', Text), (r'<\s*CsInstruments', Name.Tag, ('orchestra', 'tag')), (r'<\s*CsScore', Name.Tag, ('score', 'tag')), (r'<\s*[Hh][Tt][Mm][Ll]', Name.Tag, ('HTML', 'tag')), (r'<\s*[\w:.-]+', Name.Tag, 'tag'), (r'<\s*/\s*[\w:.-]+\s*>', Name.Tag) ], 'orchestra': [ (r'<\s*/\s*CsInstruments\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*CsInstruments\s*>)', using(CsoundOrchestraLexer)) ], 'score': [ (r'<\s*/\s*CsScore\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*CsScore\s*>)', using(CsoundScoreLexer)) ], 'HTML': [ (r'<\s*/\s*[Hh][Tt][Mm][Ll]\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*[Hh][Tt][Mm][Ll]\s*>)', using(HtmlLexer)) ], 'tag': [ (r'\s+', Text), (r'[\w.:-]+\s*=', Name.Attribute, 'attr'), (r'/?\s*>', Name.Tag, '#pop') ], 'attr': [ (r'\s+', Text), (r'".*?"', String, '#pop'), (r"'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop') ] } Pygments-2.3.1/pygments/lexers/supercollider.py0000644000175000017500000000667413376260540021020 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.supercollider ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for SuperCollider :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, words, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['SuperColliderLexer'] class SuperColliderLexer(RegexLexer): """ For `SuperCollider `_ source code. .. versionadded:: 2.1 """ name = 'SuperCollider' aliases = ['sc', 'supercollider'] filenames = ['*.sc', '*.scd'] mimetypes = ['application/supercollider', 'text/supercollider', ] flags = re.DOTALL | re.MULTILINE tokens = { 'commentsandwhitespace': [ (r'\s+', Text), (r'', Comment), (r'', Name.Builtin, 'cfoutput'), (r'(?s)()(.+?)()', bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)), # negative lookbehind is for strings with embedded > (r'(?s)()', bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)), ], 'cfoutput': [ (r'[^#<]+', Other), (r'(#)(.*?)(#)', bygroups(Punctuation, using(ColdfusionLexer), Punctuation)), # (r'', Name.Builtin, '#push'), (r'', Name.Builtin, '#pop'), include('tags'), (r'(?s)<[^<>]*', Other), (r'#', Other), ], 'cfcomment': [ (r'', Comment.Multiline, '#pop'), (r'([^<-]|<(?!!---)|-(?!-->))+', Comment.Multiline), ], } class ColdfusionHtmlLexer(DelegatingLexer): """ Coldfusion markup in html """ name = 'Coldfusion HTML' aliases = ['cfm'] filenames = ['*.cfm', '*.cfml'] mimetypes = ['application/x-coldfusion'] def __init__(self, **options): super(ColdfusionHtmlLexer, self).__init__(HtmlLexer, ColdfusionMarkupLexer, **options) class ColdfusionCFCLexer(DelegatingLexer): """ Coldfusion markup/script components .. versionadded:: 2.0 """ name = 'Coldfusion CFC' aliases = ['cfc'] filenames = ['*.cfc'] mimetypes = [] def __init__(self, **options): super(ColdfusionCFCLexer, self).__init__(ColdfusionHtmlLexer, ColdfusionLexer, **options) class SspLexer(DelegatingLexer): """ Lexer for Scalate Server Pages. .. 
versionadded:: 1.4 """ name = 'Scalate Server Page' aliases = ['ssp'] filenames = ['*.ssp'] mimetypes = ['application/x-ssp'] def __init__(self, **options): super(SspLexer, self).__init__(XmlLexer, JspRootLexer, **options) def analyse_text(text): rv = 0.0 if re.search(r'val \w+\s*:', text): rv += 0.6 if looks_like_xml(text): rv += 0.2 if '<%' in text and '%>' in text: rv += 0.1 return rv class TeaTemplateRootLexer(RegexLexer): """ Base for the `TeaTemplateLexer`. Yields `Token.Other` for area outside of code blocks. .. versionadded:: 1.5 """ tokens = { 'root': [ (r'<%\S?', Keyword, 'sec'), (r'[^<]+', Other), (r'<', Other), ], 'sec': [ (r'%>', Keyword, '#pop'), # note: '\w\W' != '.' without DOTALL. (r'[\w\W]+?(?=%>|\Z)', using(TeaLangLexer)), ], } class TeaTemplateLexer(DelegatingLexer): """ Lexer for `Tea Templates `_. .. versionadded:: 1.5 """ name = 'Tea' aliases = ['tea'] filenames = ['*.tea'] mimetypes = ['text/x-tea'] def __init__(self, **options): super(TeaTemplateLexer, self).__init__(XmlLexer, TeaTemplateRootLexer, **options) def analyse_text(text): rv = TeaLangLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 if '<%' in text and '%>' in text: rv += 0.1 return rv class LassoHtmlLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `HtmlLexer`. Nested JavaScript and CSS is also highlighted. .. versionadded:: 1.6 """ name = 'HTML+Lasso' aliases = ['html+lasso'] alias_filenames = ['*.html', '*.htm', '*.xhtml', '*.lasso', '*.lasso[89]', '*.incl', '*.inc', '*.las'] mimetypes = ['text/html+lasso', 'application/x-httpd-lasso', 'application/x-httpd-lasso[89]'] def __init__(self, **options): super(LassoHtmlLexer, self).__init__(HtmlLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): # same as HTML lexer rv += 0.5 return rv class LassoXmlLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `XmlLexer`. .. versionadded:: 1.6 """ name = 'XML+Lasso' aliases = ['xml+lasso'] alias_filenames = ['*.xml', '*.lasso', '*.lasso[89]', '*.incl', '*.inc', '*.las'] mimetypes = ['application/xml+lasso'] def __init__(self, **options): super(LassoXmlLexer, self).__init__(XmlLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class LassoCssLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `CssLexer`. .. versionadded:: 1.6 """ name = 'CSS+Lasso' aliases = ['css+lasso'] alias_filenames = ['*.css'] mimetypes = ['text/css+lasso'] def __init__(self, **options): options['requiredelimiters'] = True super(LassoCssLexer, self).__init__(CssLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.05 if re.search(r'\w+:.+?;', text): rv += 0.1 if 'padding:' in text: rv += 0.1 return rv class LassoJavascriptLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `JavascriptLexer`. .. 
versionadded:: 1.6 """ name = 'JavaScript+Lasso' aliases = ['js+lasso', 'javascript+lasso'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso'] def __init__(self, **options): options['requiredelimiters'] = True super(LassoJavascriptLexer, self).__init__(JavascriptLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.05 return rv class HandlebarsLexer(RegexLexer): """ Generic `handlebars ` template lexer. Highlights only the Handlebars template tags (stuff between `{{` and `}}`). Everything else is left for a delegating lexer. .. versionadded:: 2.0 """ name = "Handlebars" aliases = ['handlebars'] tokens = { 'root': [ (r'[^{]+', Other), (r'\{\{!.*\}\}', Comment), (r'(\{\{\{)(\s*)', bygroups(Comment.Special, Text), 'tag'), (r'(\{\{)(\s*)', bygroups(Comment.Preproc, Text), 'tag'), ], 'tag': [ (r'\s+', Text), (r'\}\}\}', Comment.Special, '#pop'), (r'\}\}', Comment.Preproc, '#pop'), # Handlebars (r'([#/]*)(each|if|unless|else|with|log|in(line)?)', bygroups(Keyword, Keyword)), (r'#\*inline', Keyword), # General {{#block}} (r'([#/])([\w-]+)', bygroups(Name.Function, Name.Function)), # {{opt=something}} (r'([\w-]+)(=)', bygroups(Name.Attribute, Operator)), # Partials {{> ...}} (r'(>)(\s*)(@partial-block)', bygroups(Keyword, Text, Keyword)), (r'(#?>)(\s*)([\w-]+)', bygroups(Keyword, Text, Name.Variable)), (r'(>)(\s*)(\()', bygroups(Keyword, Text, Punctuation), 'dynamic-partial'), include('generic'), ], 'dynamic-partial': [ (r'\s+', Text), (r'\)', Punctuation, '#pop'), (r'(lookup)(\s+)(\.|this)(\s+)', bygroups(Keyword, Text, Name.Variable, Text)), (r'(lookup)(\s+)(\S+)', bygroups(Keyword, Text, using(this, state='variable'))), (r'[\w-]+', Name.Function), include('generic'), ], 'variable': [ (r'[a-zA-Z][\w-]*', Name.Variable), (r'\.[\w-]+', Name.Variable), (r'(this\/|\.\/|(\.\.\/)+)[\w-]+', Name.Variable), ], 'generic': [ include('variable'), # borrowed from DjangoLexer (r':?"(\\\\|\\"|[^"])*"', String.Double), (r":?'(\\\\|\\'|[^'])*'", String.Single), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), ] } class HandlebarsHtmlLexer(DelegatingLexer): """ Subclass of the `HandlebarsLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML+Handlebars" aliases = ["html+handlebars"] filenames = ['*.handlebars', '*.hbs'] mimetypes = ['text/html+handlebars', 'text/x-handlebars-template'] def __init__(self, **options): super(HandlebarsHtmlLexer, self).__init__(HtmlLexer, HandlebarsLexer, **options) class YamlJinjaLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `YamlLexer`. Commonly used in Saltstack salt states. .. versionadded:: 2.0 """ name = 'YAML+Jinja' aliases = ['yaml+jinja', 'salt', 'sls'] filenames = ['*.sls'] mimetypes = ['text/x-yaml+jinja', 'text/x-sls'] def __init__(self, **options): super(YamlJinjaLexer, self).__init__(YamlLexer, DjangoLexer, **options) class LiquidLexer(RegexLexer): """ Lexer for `Liquid templates `_. .. 
versionadded:: 2.0 """ name = 'liquid' aliases = ['liquid'] filenames = ['*.liquid'] tokens = { 'root': [ (r'[^{]+', Text), # tags and block tags (r'(\{%)(\s*)', bygroups(Punctuation, Whitespace), 'tag-or-block'), # output tags (r'(\{\{)(\s*)([^\s}]+)', bygroups(Punctuation, Whitespace, using(this, state = 'generic')), 'output'), (r'\{', Text) ], 'tag-or-block': [ # builtin logic blocks (r'(if|unless|elsif|case)(?=\s+)', Keyword.Reserved, 'condition'), (r'(when)(\s+)', bygroups(Keyword.Reserved, Whitespace), combined('end-of-block', 'whitespace', 'generic')), (r'(else)(\s*)(%\})', bygroups(Keyword.Reserved, Whitespace, Punctuation), '#pop'), # other builtin blocks (r'(capture)(\s+)([^\s%]+)(\s*)(%\})', bygroups(Name.Tag, Whitespace, using(this, state = 'variable'), Whitespace, Punctuation), '#pop'), (r'(comment)(\s*)(%\})', bygroups(Name.Tag, Whitespace, Punctuation), 'comment'), (r'(raw)(\s*)(%\})', bygroups(Name.Tag, Whitespace, Punctuation), 'raw'), # end of block (r'(end(case|unless|if))(\s*)(%\})', bygroups(Keyword.Reserved, None, Whitespace, Punctuation), '#pop'), (r'(end([^\s%]+))(\s*)(%\})', bygroups(Name.Tag, None, Whitespace, Punctuation), '#pop'), # builtin tags (assign and include are handled together with usual tags) (r'(cycle)(\s+)(?:([^\s:]*)(:))?(\s*)', bygroups(Name.Tag, Whitespace, using(this, state='generic'), Punctuation, Whitespace), 'variable-tag-markup'), # other tags or blocks (r'([^\s%]+)(\s*)', bygroups(Name.Tag, Whitespace), 'tag-markup') ], 'output': [ include('whitespace'), (r'\}\}', Punctuation, '#pop'), # end of output (r'\|', Punctuation, 'filters') ], 'filters': [ include('whitespace'), (r'\}\}', Punctuation, ('#pop', '#pop')), # end of filters and output (r'([^\s|:]+)(:?)(\s*)', bygroups(Name.Function, Punctuation, Whitespace), 'filter-markup') ], 'filter-markup': [ (r'\|', Punctuation, '#pop'), include('end-of-tag'), include('default-param-markup') ], 'condition': [ include('end-of-block'), include('whitespace'), (r'([^\s=!><]+)(\s*)([=!><]=?)(\s*)(\S+)(\s*)(%\})', bygroups(using(this, state = 'generic'), Whitespace, Operator, Whitespace, using(this, state = 'generic'), Whitespace, Punctuation)), (r'\b!', Operator), (r'\bnot\b', Operator.Word), (r'([\w.\'"]+)(\s+)(contains)(\s+)([\w.\'"]+)', bygroups(using(this, state = 'generic'), Whitespace, Operator.Word, Whitespace, using(this, state = 'generic'))), include('generic'), include('whitespace') ], 'generic-value': [ include('generic'), include('end-at-whitespace') ], 'operator': [ (r'(\s*)((=|!|>|<)=?)(\s*)', bygroups(Whitespace, Operator, None, Whitespace), '#pop'), (r'(\s*)(\bcontains\b)(\s*)', bygroups(Whitespace, Operator.Word, Whitespace), '#pop'), ], 'end-of-tag': [ (r'\}\}', Punctuation, '#pop') ], 'end-of-block': [ (r'%\}', Punctuation, ('#pop', '#pop')) ], 'end-at-whitespace': [ (r'\s+', Whitespace, '#pop') ], # states for unknown markup 'param-markup': [ include('whitespace'), # params with colons or equals (r'([^\s=:]+)(\s*)(=|:)', bygroups(Name.Attribute, Whitespace, Operator)), # explicit variables (r'(\{\{)(\s*)([^\s}])(\s*)(\}\})', bygroups(Punctuation, Whitespace, using(this, state = 'variable'), Whitespace, Punctuation)), include('string'), include('number'), include('keyword'), (r',', Punctuation) ], 'default-param-markup': [ include('param-markup'), (r'.', Text) # fallback for switches / variables / un-quoted strings / ... 
], 'variable-param-markup': [ include('param-markup'), include('variable'), (r'.', Text) # fallback ], 'tag-markup': [ (r'%\}', Punctuation, ('#pop', '#pop')), # end of tag include('default-param-markup') ], 'variable-tag-markup': [ (r'%\}', Punctuation, ('#pop', '#pop')), # end of tag include('variable-param-markup') ], # states for different values types 'keyword': [ (r'\b(false|true)\b', Keyword.Constant) ], 'variable': [ (r'[a-zA-Z_]\w*', Name.Variable), (r'(?<=\w)\.(?=\w)', Punctuation) ], 'string': [ (r"'[^']*'", String.Single), (r'"[^"]*"', String.Double) ], 'number': [ (r'\d+\.\d+', Number.Float), (r'\d+', Number.Integer) ], 'generic': [ # decides for variable, string, keyword or number include('keyword'), include('string'), include('number'), include('variable') ], 'whitespace': [ (r'[ \t]+', Whitespace) ], # states for builtin blocks 'comment': [ (r'(\{%)(\s*)(endcomment)(\s*)(%\})', bygroups(Punctuation, Whitespace, Name.Tag, Whitespace, Punctuation), ('#pop', '#pop')), (r'.', Comment) ], 'raw': [ (r'[^{]+', Text), (r'(\{%)(\s*)(endraw)(\s*)(%\})', bygroups(Punctuation, Whitespace, Name.Tag, Whitespace, Punctuation), '#pop'), (r'\{', Text) ], } class TwigLexer(RegexLexer): """ `Twig `_ template lexer. It just highlights Twig code between the preprocessor directives, other data is left untouched by the lexer. .. versionadded:: 2.0 """ name = 'Twig' aliases = ['twig'] mimetypes = ['application/x-twig'] flags = re.M | re.S # Note that a backslash is included in the following two patterns # PHP uses a backslash as a namespace separator _ident_char = r'[\\\w-]|[^\x00-\x7f]' _ident_begin = r'(?:[\\_a-z]|[^\x00-\x7f])' _ident_end = r'(?:' + _ident_char + ')*' _ident_inner = _ident_begin + _ident_end tokens = { 'root': [ (r'[^{]+', Other), (r'\{\{', Comment.Preproc, 'var'), # twig comments (r'\{\#.*?\#\}', Comment), # raw twig blocks (r'(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)' r'(\{%)(-?\s*)(endraw)(\s*-?)(%\})', bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc, Other, Comment.Preproc, Text, Keyword, Text, Comment.Preproc)), (r'(\{%)(-?\s*)(verbatim)(\s*-?)(%\})(.*?)' r'(\{%)(-?\s*)(endverbatim)(\s*-?)(%\})', bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc, Other, Comment.Preproc, Text, Keyword, Text, Comment.Preproc)), # filter blocks (r'(\{%%)(-?\s*)(filter)(\s+)(%s)' % _ident_inner, bygroups(Comment.Preproc, Text, Keyword, Text, Name.Function), 'tag'), (r'(\{%)(-?\s*)([a-zA-Z_]\w*)', bygroups(Comment.Preproc, Text, Keyword), 'tag'), (r'\{', Other), ], 'varnames': [ (r'(\|)(\s*)(%s)' % _ident_inner, bygroups(Operator, Text, Name.Function)), (r'(is)(\s+)(not)?(\s*)(%s)' % _ident_inner, bygroups(Keyword, Text, Keyword, Text, Name.Function)), (r'(?i)(true|false|none|null)\b', Keyword.Pseudo), (r'(in|not|and|b-and|or|b-or|b-xor|is' r'if|elseif|else|import' r'constant|defined|divisibleby|empty|even|iterable|odd|sameas' r'matches|starts\s+with|ends\s+with)\b', Keyword), (r'(loop|block|parent)\b', Name.Builtin), (_ident_inner, Name.Variable), (r'\.' 
+ _ident_inner, Name.Variable), (r'\.[0-9]+', Number), (r':?"(\\\\|\\"|[^"])*"', String.Double), (r":?'(\\\\|\\'|[^'])*'", String.Single), (r'([{}()\[\]+\-*/,:~%]|\.\.|\?|:|\*\*|\/\/|!=|[><=]=?)', Operator), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), ], 'var': [ (r'\s+', Text), (r'(-?)(\}\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames') ], 'tag': [ (r'\s+', Text), (r'(-?)(%\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames'), (r'.', Punctuation), ], } class TwigHtmlLexer(DelegatingLexer): """ Subclass of the `TwigLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML+Twig" aliases = ["html+twig"] filenames = ['*.twig'] mimetypes = ['text/html+twig'] def __init__(self, **options): super(TwigHtmlLexer, self).__init__(HtmlLexer, TwigLexer, **options) class Angular2Lexer(RegexLexer): """ Generic `angular2 `_ template lexer. Highlights only the Angular template tags (stuff between `{{` and `}}` and special attributes: '(event)=', '[property]=', '[(twoWayBinding)]='). Everything else is left for a delegating lexer. .. versionadded:: 2.1 """ name = "Angular2" aliases = ['ng2'] tokens = { 'root': [ (r'[^{([*#]+', Other), # {{meal.name}} (r'(\{\{)(\s*)', bygroups(Comment.Preproc, Text), 'ngExpression'), # (click)="deleteOrder()"; [value]="test"; [(twoWayTest)]="foo.bar" (r'([([]+)([\w:.-]+)([\])]+)(\s*)(=)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation, Text, Operator, Text), 'attr'), (r'([([]+)([\w:.-]+)([\])]+)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation, Text)), # *ngIf="..."; #f="ngForm" (r'([*#])([\w:.-]+)(\s*)(=)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation, Operator), 'attr'), (r'([*#])([\w:.-]+)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation)), ], 'ngExpression': [ (r'\s+(\|\s+)?', Text), (r'\}\}', Comment.Preproc, '#pop'), # Literals (r':?(true|false)', String.Boolean), (r':?"(\\\\|\\"|[^"])*"', String.Double), (r":?'(\\\\|\\'|[^'])*'", String.Single), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), # Variabletext (r'[a-zA-Z][\w-]*(\(.*\))?', Name.Variable), (r'\.[\w-]+(\(.*\))?', Name.Variable), # inline If (r'(\?)(\s*)([^}\s]+)(\s*)(:)(\s*)([^}\s]+)(\s*)', bygroups(Operator, Text, String, Text, Operator, Text, String, Text)), ], 'attr': [ ('".*?"', String, '#pop'), ("'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop'), ], } class Angular2HtmlLexer(DelegatingLexer): """ Subclass of the `Angular2Lexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML + Angular2" aliases = ["html+ng2"] filenames = ['*.ng2'] def __init__(self, **options): super(Angular2HtmlLexer, self).__init__(HtmlLexer, Angular2Lexer, **options) Pygments-2.3.1/pygments/lexers/hdl.py0000644000175000017500000004441313376260540016704 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.hdl ~~~~~~~~~~~~~~~~~~~ Lexers for hardware descriptor languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, include, using, this, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Error __all__ = ['VerilogLexer', 'SystemVerilogLexer', 'VhdlLexer'] class VerilogLexer(RegexLexer): """ For verilog source code with preprocessor directives. .. 
versionadded:: 1.4 """ name = 'verilog' aliases = ['verilog', 'v'] filenames = ['*.v'] mimetypes = ['text/x-verilog'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/[*].*?[*]/)+' tokens = { 'root': [ (r'^\s*`define', Comment.Preproc, 'macro'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'[{}#@]', Punctuation), (r'L?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'([0-9]+)|(\'h)[0-9a-fA-F]+', Number.Hex), (r'([0-9]+)|(\'b)[01]+', Number.Bin), (r'([0-9]+)|(\'d)[0-9]+', Number.Integer), (r'([0-9]+)|(\'o)[0-7]+', Number.Oct), (r'\'[01xz]', Number), (r'\d+[Ll]?', Number.Integer), (r'\*/', Error), (r'[~!%^&*+=|?:<>/-]', Operator), (r'[()\[\],.;\']', Punctuation), (r'`[a-zA-Z_]\w*', Name.Constant), (r'^(\s*)(package)(\s+)', bygroups(Text, Keyword.Namespace, Text)), (r'^(\s*)(import)(\s+)', bygroups(Text, Keyword.Namespace, Text), 'import'), (words(( 'always', 'always_comb', 'always_ff', 'always_latch', 'and', 'assign', 'automatic', 'begin', 'break', 'buf', 'bufif0', 'bufif1', 'case', 'casex', 'casez', 'cmos', 'const', 'continue', 'deassign', 'default', 'defparam', 'disable', 'do', 'edge', 'else', 'end', 'endcase', 'endfunction', 'endgenerate', 'endmodule', 'endpackage', 'endprimitive', 'endspecify', 'endtable', 'endtask', 'enum', 'event', 'final', 'for', 'force', 'forever', 'fork', 'function', 'generate', 'genvar', 'highz0', 'highz1', 'if', 'initial', 'inout', 'input', 'integer', 'join', 'large', 'localparam', 'macromodule', 'medium', 'module', 'nand', 'negedge', 'nmos', 'nor', 'not', 'notif0', 'notif1', 'or', 'output', 'packed', 'parameter', 'pmos', 'posedge', 'primitive', 'pull0', 'pull1', 'pulldown', 'pullup', 'rcmos', 'ref', 'release', 'repeat', 'return', 'rnmos', 'rpmos', 'rtran', 'rtranif0', 'rtranif1', 'scalared', 'signed', 'small', 'specify', 'specparam', 'strength', 'string', 'strong0', 'strong1', 'struct', 'table', 'task', 'tran', 'tranif0', 'tranif1', 'type', 'typedef', 'unsigned', 'var', 'vectored', 'void', 'wait', 'weak0', 'weak1', 'while', 'xnor', 'xor'), suffix=r'\b'), Keyword), (words(( 'accelerate', 'autoexpand_vectornets', 'celldefine', 'default_nettype', 'else', 'elsif', 'endcelldefine', 'endif', 'endprotect', 'endprotected', 'expand_vectornets', 'ifdef', 'ifndef', 'include', 'noaccelerate', 'noexpand_vectornets', 'noremove_gatenames', 'noremove_netnames', 'nounconnected_drive', 'protect', 'protected', 'remove_gatenames', 'remove_netnames', 'resetall', 'timescale', 'unconnected_drive', 'undef'), prefix=r'`', suffix=r'\b'), Comment.Preproc), (words(( 'bits', 'bitstoreal', 'bitstoshortreal', 'countdrivers', 'display', 'fclose', 'fdisplay', 'finish', 'floor', 'fmonitor', 'fopen', 'fstrobe', 'fwrite', 'getpattern', 'history', 'incsave', 'input', 'itor', 'key', 'list', 'log', 'monitor', 'monitoroff', 'monitoron', 'nokey', 'nolog', 'printtimescale', 'random', 'readmemb', 'readmemh', 'realtime', 'realtobits', 'reset', 'reset_count', 'reset_value', 'restart', 'rtoi', 'save', 'scale', 'scope', 'shortrealtobits', 'showscopes', 'showvariables', 'showvars', 'sreadmemb', 'sreadmemh', 'stime', 'stop', 'strobe', 'time', 'timeformat', 'write'), prefix=r'\$', suffix=r'\b'), Name.Builtin), (words(( 'byte', 'shortint', 'int', 'longint', 'integer', 'time', 'bit', 'logic', 'reg', 'supply0', 'supply1', 
'tri', 'triand', 'trior', 'tri0', 'tri1', 'trireg', 'uwire', 'wire', 'wand', 'wo' 'shortreal', 'real', 'realtime'), suffix=r'\b'), Keyword.Type), (r'[a-zA-Z_]\w*:(?!:)', Name.Label), (r'\$?[a-zA-Z_]\w*', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation (r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'//.*?\n', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Comment.Preproc, '#pop'), ], 'import': [ (r'[\w:]+\*?', Name.Namespace, '#pop') ] } def get_tokens_unprocessed(self, text): for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text): # Convention: mark all upper case names as constants if token is Name: if value.isupper(): token = Name.Constant yield index, token, value class SystemVerilogLexer(RegexLexer): """ Extends verilog lexer to recognise all SystemVerilog keywords from IEEE 1800-2009 standard. .. versionadded:: 1.5 """ name = 'systemverilog' aliases = ['systemverilog', 'sv'] filenames = ['*.sv', '*.svh'] mimetypes = ['text/x-systemverilog'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/[*].*?[*]/)+' tokens = { 'root': [ (r'^\s*`define', Comment.Preproc, 'macro'), (r'^(\s*)(package)(\s+)', bygroups(Text, Keyword.Namespace, Text)), (r'^(\s*)(import)(\s+)', bygroups(Text, Keyword.Namespace, Text), 'import'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'[{}#@]', Punctuation), (r'L?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'([0-9]+)|(\'h)[0-9a-fA-F]+', Number.Hex), (r'([0-9]+)|(\'b)[01]+', Number.Bin), (r'([0-9]+)|(\'d)[0-9]+', Number.Integer), (r'([0-9]+)|(\'o)[0-7]+', Number.Oct), (r'\'[01xz]', Number), (r'\d+[Ll]?', Number.Integer), (r'\*/', Error), (r'[~!%^&*+=|?:<>/-]', Operator), (r'[()\[\],.;\']', Punctuation), (r'`[a-zA-Z_]\w*', Name.Constant), (words(( 'accept_on', 'alias', 'always', 'always_comb', 'always_ff', 'always_latch', 'and', 'assert', 'assign', 'assume', 'automatic', 'before', 'begin', 'bind', 'bins', 'binsof', 'bit', 'break', 'buf', 'bufif0', 'bufif1', 'byte', 'case', 'casex', 'casez', 'cell', 'chandle', 'checker', 'class', 'clocking', 'cmos', 'config', 'const', 'constraint', 'context', 'continue', 'cover', 'covergroup', 'coverpoint', 'cross', 'deassign', 'default', 'defparam', 'design', 'disable', 'dist', 'do', 'edge', 'else', 'end', 'endcase', 'endchecker', 'endclass', 'endclocking', 'endconfig', 'endfunction', 'endgenerate', 'endgroup', 'endinterface', 'endmodule', 'endpackage', 'endprimitive', 'endprogram', 'endproperty', 'endsequence', 'endspecify', 'endtable', 'endtask', 'enum', 'event', 'eventually', 'expect', 'export', 'extends', 'extern', 'final', 'first_match', 'for', 'force', 'foreach', 'forever', 'fork', 'forkjoin', 'function', 'generate', 'genvar', 'global', 'highz0', 'highz1', 'if', 'iff', 'ifnone', 'ignore_bins', 'illegal_bins', 'implies', 'import', 'incdir', 'include', 'initial', 'inout', 'input', 'inside', 'instance', 'int', 'integer', 'interface', 'intersect', 'join', 'join_any', 'join_none', 'large', 'let', 'liblist', 'library', 'local', 'localparam', 'logic', 
'longint', 'macromodule', 'matches', 'medium', 'modport', 'module', 'nand', 'negedge', 'new', 'nexttime', 'nmos', 'nor', 'noshowcancelled', 'not', 'notif0', 'notif1', 'null', 'or', 'output', 'package', 'packed', 'parameter', 'pmos', 'posedge', 'primitive', 'priority', 'program', 'property', 'protected', 'pull0', 'pull1', 'pulldown', 'pullup', 'pulsestyle_ondetect', 'pulsestyle_onevent', 'pure', 'rand', 'randc', 'randcase', 'randsequence', 'rcmos', 'real', 'realtime', 'ref', 'reg', 'reject_on', 'release', 'repeat', 'restrict', 'return', 'rnmos', 'rpmos', 'rtran', 'rtranif0', 'rtranif1', 's_always', 's_eventually', 's_nexttime', 's_until', 's_until_with', 'scalared', 'sequence', 'shortint', 'shortreal', 'showcancelled', 'signed', 'small', 'solve', 'specify', 'specparam', 'static', 'string', 'strong', 'strong0', 'strong1', 'struct', 'super', 'supply0', 'supply1', 'sync_accept_on', 'sync_reject_on', 'table', 'tagged', 'task', 'this', 'throughout', 'time', 'timeprecision', 'timeunit', 'tran', 'tranif0', 'tranif1', 'tri', 'tri0', 'tri1', 'triand', 'trior', 'trireg', 'type', 'typedef', 'union', 'unique', 'unique0', 'unsigned', 'until', 'until_with', 'untyped', 'use', 'uwire', 'var', 'vectored', 'virtual', 'void', 'wait', 'wait_order', 'wand', 'weak', 'weak0', 'weak1', 'while', 'wildcard', 'wire', 'with', 'within', 'wor', 'xnor', 'xor'), suffix=r'\b'), Keyword), (words(( '`__FILE__', '`__LINE__', '`begin_keywords', '`celldefine', '`default_nettype', '`define', '`else', '`elsif', '`end_keywords', '`endcelldefine', '`endif', '`ifdef', '`ifndef', '`include', '`line', '`nounconnected_drive', '`pragma', '`resetall', '`timescale', '`unconnected_drive', '`undef', '`undefineall'), suffix=r'\b'), Comment.Preproc), (words(( '$display', '$displayb', '$displayh', '$displayo', '$dumpall', '$dumpfile', '$dumpflush', '$dumplimit', '$dumpoff', '$dumpon', '$dumpports', '$dumpportsall', '$dumpportsflush', '$dumpportslimit', '$dumpportsoff', '$dumpportson', '$dumpvars', '$fclose', '$fdisplay', '$fdisplayb', '$fdisplayh', '$fdisplayo', '$feof', '$ferror', '$fflush', '$fgetc', '$fgets', '$finish', '$fmonitor', '$fmonitorb', '$fmonitorh', '$fmonitoro', '$fopen', '$fread', '$fscanf', '$fseek', '$fstrobe', '$fstrobeb', '$fstrobeh', '$fstrobeo', '$ftell', '$fwrite', '$fwriteb', '$fwriteh', '$fwriteo', '$monitor', '$monitorb', '$monitorh', '$monitoro', '$monitoroff', '$monitoron', '$plusargs', '$random', '$readmemb', '$readmemh', '$rewind', '$sformat', '$sformatf', '$sscanf', '$strobe', '$strobeb', '$strobeh', '$strobeo', '$swrite', '$swriteb', '$swriteh', '$swriteo', '$test', '$ungetc', '$value$plusargs', '$write', '$writeb', '$writeh', '$writememb', '$writememh', '$writeo'), suffix=r'\b'), Name.Builtin), (r'(class)(\s+)', bygroups(Keyword, Text), 'classname'), (words(( 'byte', 'shortint', 'int', 'longint', 'integer', 'time', 'bit', 'logic', 'reg', 'supply0', 'supply1', 'tri', 'triand', 'trior', 'tri0', 'tri1', 'trireg', 'uwire', 'wire', 'wand', 'wo' 'shortreal', 'real', 'realtime'), suffix=r'\b'), Keyword.Type), (r'[a-zA-Z_]\w*:(?!:)', Name.Label), (r'\$?[a-zA-Z_]\w*', Name), ], 'classname': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop'), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation (r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'//.*?\n', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', 
Comment.Preproc), (r'\n', Comment.Preproc, '#pop'), ], 'import': [ (r'[\w:]+\*?', Name.Namespace, '#pop') ] } def get_tokens_unprocessed(self, text): for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text): # Convention: mark all upper case names as constants if token is Name: if value.isupper(): token = Name.Constant yield index, token, value class VhdlLexer(RegexLexer): """ For VHDL source code. .. versionadded:: 1.5 """ name = 'vhdl' aliases = ['vhdl'] filenames = ['*.vhdl', '*.vhd'] mimetypes = ['text/x-vhdl'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'--.*?$', Comment.Single), (r"'(U|X|0|1|Z|W|L|H|-)'", String.Char), (r'[~!%^&*+=|?:<>/-]', Operator), (r"'[a-z_]\w*", Name.Attribute), (r'[()\[\],.;\']', Punctuation), (r'"[^\n\\"]*"', String), (r'(library)(\s+)([a-z_]\w*)', bygroups(Keyword, Text, Name.Namespace)), (r'(use)(\s+)(entity)', bygroups(Keyword, Text, Keyword)), (r'(use)(\s+)([a-z_][\w.]*\.)(all)', bygroups(Keyword, Text, Name.Namespace, Keyword)), (r'(use)(\s+)([a-z_][\w.]*)', bygroups(Keyword, Text, Name.Namespace)), (r'(std|ieee)(\.[a-z_]\w*)', bygroups(Name.Namespace, Name.Namespace)), (words(('std', 'ieee', 'work'), suffix=r'\b'), Name.Namespace), (r'(entity|component)(\s+)([a-z_]\w*)', bygroups(Keyword, Text, Name.Class)), (r'(architecture|configuration)(\s+)([a-z_]\w*)(\s+)' r'(of)(\s+)([a-z_]\w*)(\s+)(is)', bygroups(Keyword, Text, Name.Class, Text, Keyword, Text, Name.Class, Text, Keyword)), (r'([a-z_]\w*)(:)(\s+)(process|for)', bygroups(Name.Class, Operator, Text, Keyword)), (r'(end)(\s+)', bygroups(using(this), Text), 'endblock'), include('types'), include('keywords'), include('numbers'), (r'[a-z_]\w*', Name), ], 'endblock': [ include('keywords'), (r'[a-z_]\w*', Name.Class), (r'(\s+)', Text), (r';', Punctuation, '#pop'), ], 'types': [ (words(( 'boolean', 'bit', 'character', 'severity_level', 'integer', 'time', 'delay_length', 'natural', 'positive', 'string', 'bit_vector', 'file_open_kind', 'file_open_status', 'std_ulogic', 'std_ulogic_vector', 'std_logic', 'std_logic_vector', 'signed', 'unsigned'), suffix=r'\b'), Keyword.Type), ], 'keywords': [ (words(( 'abs', 'access', 'after', 'alias', 'all', 'and', 'architecture', 'array', 'assert', 'attribute', 'begin', 'block', 'body', 'buffer', 'bus', 'case', 'component', 'configuration', 'constant', 'disconnect', 'downto', 'else', 'elsif', 'end', 'entity', 'exit', 'file', 'for', 'function', 'generate', 'generic', 'group', 'guarded', 'if', 'impure', 'in', 'inertial', 'inout', 'is', 'label', 'library', 'linkage', 'literal', 'loop', 'map', 'mod', 'nand', 'new', 'next', 'nor', 'not', 'null', 'of', 'on', 'open', 'or', 'others', 'out', 'package', 'port', 'postponed', 'procedure', 'process', 'pure', 'range', 'record', 'register', 'reject', 'rem', 'return', 'rol', 'ror', 'select', 'severity', 'signal', 'shared', 'sla', 'sll', 'sra', 'srl', 'subtype', 'then', 'to', 'transport', 'type', 'units', 'until', 'use', 'variable', 'wait', 'when', 'while', 'with', 'xnor', 'xor'), suffix=r'\b'), Keyword), ], 'numbers': [ (r'\d{1,2}#[0-9a-f_]+#?', Number.Integer), (r'\d+', Number.Integer), (r'(\d+\.\d*|\.\d+|\d+)E[+-]?\d+', Number.Float), (r'X"[0-9a-f_]+"', Number.Hex), (r'O"[0-7_]+"', Number.Oct), (r'B"[01_]+"', Number.Bin), ], } Pygments-2.3.1/pygments/lexers/c_like.py0000644000175000017500000005707213402534107017361 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.c_like ~~~~~~~~~~~~~~~~~~~~~~ Lexers for other C-like languages. 
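
    A minimal usage sketch; the sample source string and formatter choice below
    are illustrative only, and any lexer exported in ``__all__`` can stand in
    for ``ArduinoLexer``::

        from pygments import highlight
        from pygments.formatters import HtmlFormatter
        from pygments.lexers.c_like import ArduinoLexer

        code = "void setup() { pinMode(LED_BUILTIN, OUTPUT); }"
        print(highlight(code, ArduinoLexer(), HtmlFormatter()))
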
:copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, inherit, words, \ default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation from pygments.lexers.c_cpp import CLexer, CppLexer from pygments.lexers import _mql_builtins __all__ = ['PikeLexer', 'NesCLexer', 'ClayLexer', 'ECLexer', 'ValaLexer', 'CudaLexer', 'SwigLexer', 'MqlLexer', 'ArduinoLexer'] class PikeLexer(CppLexer): """ For `Pike `_ source code. .. versionadded:: 2.0 """ name = 'Pike' aliases = ['pike'] filenames = ['*.pike', '*.pmod'] mimetypes = ['text/x-pike'] tokens = { 'statements': [ (words(( 'catch', 'new', 'private', 'protected', 'public', 'gauge', 'throw', 'throws', 'class', 'interface', 'implement', 'abstract', 'extends', 'from', 'this', 'super', 'constant', 'final', 'static', 'import', 'use', 'extern', 'inline', 'proto', 'break', 'continue', 'if', 'else', 'for', 'while', 'do', 'switch', 'case', 'as', 'in', 'version', 'return', 'true', 'false', 'null', '__VERSION__', '__MAJOR__', '__MINOR__', '__BUILD__', '__REAL_VERSION__', '__REAL_MAJOR__', '__REAL_MINOR__', '__REAL_BUILD__', '__DATE__', '__TIME__', '__FILE__', '__DIR__', '__LINE__', '__AUTO_BIGNUM__', '__NT__', '__PIKE__', '__amigaos__', '_Pragma', 'static_assert', 'defined', 'sscanf'), suffix=r'\b'), Keyword), (r'(bool|int|long|float|short|double|char|string|object|void|mapping|' r'array|multiset|program|function|lambda|mixed|' r'[a-z_][a-z0-9_]*_t)\b', Keyword.Type), (r'(class)(\s+)', bygroups(Keyword, Text), 'classname'), (r'[~!%^&*+=|?:<>/@-]', Operator), inherit, ], 'classname': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop'), # template specification (r'\s*(?=>)', Text, '#pop'), ], } class NesCLexer(CLexer): """ For `nesC `_ source code with preprocessor directives. .. versionadded:: 2.0 """ name = 'nesC' aliases = ['nesc'] filenames = ['*.nc'] mimetypes = ['text/x-nescsrc'] tokens = { 'statements': [ (words(( 'abstract', 'as', 'async', 'atomic', 'call', 'command', 'component', 'components', 'configuration', 'event', 'extends', 'generic', 'implementation', 'includes', 'interface', 'module', 'new', 'norace', 'post', 'provides', 'signal', 'task', 'uses'), suffix=r'\b'), Keyword), (words(('nx_struct', 'nx_union', 'nx_int8_t', 'nx_int16_t', 'nx_int32_t', 'nx_int64_t', 'nx_uint8_t', 'nx_uint16_t', 'nx_uint32_t', 'nx_uint64_t'), suffix=r'\b'), Keyword.Type), inherit, ], } class ClayLexer(RegexLexer): """ For `Clay `_ source. .. 
versionadded:: 2.0 """ name = 'Clay' filenames = ['*.clay'] aliases = ['clay'] mimetypes = ['text/x-clay'] tokens = { 'root': [ (r'\s', Text), (r'//.*?$', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'\b(public|private|import|as|record|variant|instance' r'|define|overload|default|external|alias' r'|rvalue|ref|forward|inline|noinline|forceinline' r'|enum|var|and|or|not|if|else|goto|return|while' r'|switch|case|break|continue|for|in|true|false|try|catch|throw' r'|finally|onerror|staticassert|eval|when|newtype' r'|__FILE__|__LINE__|__COLUMN__|__ARG__' r')\b', Keyword), (r'[~!%^&*+=|:<>/-]', Operator), (r'[#(){}\[\],;.]', Punctuation), (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex), (r'\d+[LlUu]*', Number.Integer), (r'\b(true|false)\b', Name.Builtin), (r'(?i)[a-z_?][\w?]*', Name), (r'"""', String, 'tdqs'), (r'"', String, 'dqs'), ], 'strings': [ (r'(?i)\\(x[0-9a-f]{2}|.)', String.Escape), (r'.', String), ], 'nl': [ (r'\n', String), ], 'dqs': [ (r'"', String, '#pop'), include('strings'), ], 'tdqs': [ (r'"""', String, '#pop'), include('strings'), include('nl'), ], } class ECLexer(CLexer): """ For eC source code with preprocessor directives. .. versionadded:: 1.5 """ name = 'eC' aliases = ['ec'] filenames = ['*.ec', '*.eh'] mimetypes = ['text/x-echdr', 'text/x-ecsrc'] tokens = { 'statements': [ (words(( 'virtual', 'class', 'private', 'public', 'property', 'import', 'delete', 'new', 'new0', 'renew', 'renew0', 'define', 'get', 'set', 'remote', 'dllexport', 'dllimport', 'stdcall', 'subclass', '__on_register_module', 'namespace', 'using', 'typed_object', 'any_object', 'incref', 'register', 'watch', 'stopwatching', 'firewatchers', 'watchable', 'class_designer', 'class_fixed', 'class_no_expansion', 'isset', 'class_default_property', 'property_category', 'class_data', 'class_property', 'thisclass', 'dbtable', 'dbindex', 'database_open', 'dbfield'), suffix=r'\b'), Keyword), (words(('uint', 'uint16', 'uint32', 'uint64', 'bool', 'byte', 'unichar', 'int64'), suffix=r'\b'), Keyword.Type), (r'(class)(\s+)', bygroups(Keyword, Text), 'classname'), (r'(null|value|this)\b', Name.Builtin), inherit, ], 'classname': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop'), # template specification (r'\s*(?=>)', Text, '#pop'), ], } class ValaLexer(RegexLexer): """ For Vala source code with preprocessor directives. .. 
versionadded:: 1.1 """ name = 'Vala' aliases = ['vala', 'vapi'] filenames = ['*.vala', '*.vapi'] mimetypes = ['text/x-vala'] tokens = { 'whitespace': [ (r'^\s*#if\s+0', Comment.Preproc, 'if0'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'//(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), ], 'statements': [ (r'[L@]?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(?s)""".*?"""', String), # verbatim strings (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'0x[0-9a-fA-F]+[Ll]?', Number.Hex), (r'0[0-7]+[Ll]?', Number.Oct), (r'\d+[Ll]?', Number.Integer), (r'[~!%^&*+=|?:<>/-]', Operator), (r'(\[)(Compact|Immutable|(?:Boolean|Simple)Type)(\])', bygroups(Punctuation, Name.Decorator, Punctuation)), # TODO: "correctly" parse complex code attributes (r'(\[)(CCode|(?:Integer|Floating)Type)', bygroups(Punctuation, Name.Decorator)), (r'[()\[\],.]', Punctuation), (words(( 'as', 'base', 'break', 'case', 'catch', 'construct', 'continue', 'default', 'delete', 'do', 'else', 'enum', 'finally', 'for', 'foreach', 'get', 'if', 'in', 'is', 'lock', 'new', 'out', 'params', 'return', 'set', 'sizeof', 'switch', 'this', 'throw', 'try', 'typeof', 'while', 'yield'), suffix=r'\b'), Keyword), (words(( 'abstract', 'const', 'delegate', 'dynamic', 'ensures', 'extern', 'inline', 'internal', 'override', 'owned', 'private', 'protected', 'public', 'ref', 'requires', 'signal', 'static', 'throws', 'unowned', 'var', 'virtual', 'volatile', 'weak', 'yields'), suffix=r'\b'), Keyword.Declaration), (r'(namespace|using)(\s+)', bygroups(Keyword.Namespace, Text), 'namespace'), (r'(class|errordomain|interface|struct)(\s+)', bygroups(Keyword.Declaration, Text), 'class'), (r'(\.)([a-zA-Z_]\w*)', bygroups(Operator, Name.Attribute)), # void is an actual keyword, others are in glib-2.0.vapi (words(( 'void', 'bool', 'char', 'double', 'float', 'int', 'int8', 'int16', 'int32', 'int64', 'long', 'short', 'size_t', 'ssize_t', 'string', 'time_t', 'uchar', 'uint', 'uint8', 'uint16', 'uint32', 'uint64', 'ulong', 'unichar', 'ushort'), suffix=r'\b'), Keyword.Type), (r'(true|false|null)\b', Name.Builtin), (r'[a-zA-Z_]\w*', Name), ], 'root': [ include('whitespace'), default('statement'), ], 'statement': [ include('whitespace'), include('statements'), ('[{}]', Punctuation), (';', Punctuation, '#pop'), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation (r'\\', String), # stray backslash ], 'if0': [ (r'^\s*#if.*?(?`_ source. .. 
versionadded:: 1.6 """ name = 'CUDA' filenames = ['*.cu', '*.cuh'] aliases = ['cuda', 'cu'] mimetypes = ['text/x-cuda'] function_qualifiers = set(('__device__', '__global__', '__host__', '__noinline__', '__forceinline__')) variable_qualifiers = set(('__device__', '__constant__', '__shared__', '__restrict__')) vector_types = set(('char1', 'uchar1', 'char2', 'uchar2', 'char3', 'uchar3', 'char4', 'uchar4', 'short1', 'ushort1', 'short2', 'ushort2', 'short3', 'ushort3', 'short4', 'ushort4', 'int1', 'uint1', 'int2', 'uint2', 'int3', 'uint3', 'int4', 'uint4', 'long1', 'ulong1', 'long2', 'ulong2', 'long3', 'ulong3', 'long4', 'ulong4', 'longlong1', 'ulonglong1', 'longlong2', 'ulonglong2', 'float1', 'float2', 'float3', 'float4', 'double1', 'double2', 'dim3')) variables = set(('gridDim', 'blockIdx', 'blockDim', 'threadIdx', 'warpSize')) functions = set(('__threadfence_block', '__threadfence', '__threadfence_system', '__syncthreads', '__syncthreads_count', '__syncthreads_and', '__syncthreads_or')) execution_confs = set(('<<<', '>>>')) def get_tokens_unprocessed(self, text): for index, token, value in CLexer.get_tokens_unprocessed(self, text): if token is Name: if value in self.variable_qualifiers: token = Keyword.Type elif value in self.vector_types: token = Keyword.Type elif value in self.variables: token = Name.Builtin elif value in self.execution_confs: token = Keyword.Pseudo elif value in self.function_qualifiers: token = Keyword.Reserved elif value in self.functions: token = Name.Function yield index, token, value class SwigLexer(CppLexer): """ For `SWIG `_ source code. .. versionadded:: 2.0 """ name = 'SWIG' aliases = ['swig'] filenames = ['*.swg', '*.i'] mimetypes = ['text/swig'] priority = 0.04 # Lower than C/C++ and Objective C/C++ tokens = { 'statements': [ # SWIG directives (r'(%[a-z_][a-z0-9_]*)', Name.Function), # Special variables (r'\$\**\&?\w+', Name), # Stringification / additional preprocessor directives (r'##*[a-zA-Z_]\w*', Comment.Preproc), inherit, ], } # This is a far from complete set of SWIG directives swig_directives = set(( # Most common directives '%apply', '%define', '%director', '%enddef', '%exception', '%extend', '%feature', '%fragment', '%ignore', '%immutable', '%import', '%include', '%inline', '%insert', '%module', '%newobject', '%nspace', '%pragma', '%rename', '%shared_ptr', '%template', '%typecheck', '%typemap', # Less common directives '%arg', '%attribute', '%bang', '%begin', '%callback', '%catches', '%clear', '%constant', '%copyctor', '%csconst', '%csconstvalue', '%csenum', '%csmethodmodifiers', '%csnothrowexception', '%default', '%defaultctor', '%defaultdtor', '%defined', '%delete', '%delobject', '%descriptor', '%exceptionclass', '%exceptionvar', '%extend_smart_pointer', '%fragments', '%header', '%ifcplusplus', '%ignorewarn', '%implicit', '%implicitconv', '%init', '%javaconst', '%javaconstvalue', '%javaenum', '%javaexception', '%javamethodmodifiers', '%kwargs', '%luacode', '%mutable', '%naturalvar', '%nestedworkaround', '%perlcode', '%pythonabc', '%pythonappend', '%pythoncallback', '%pythoncode', '%pythondynamic', '%pythonmaybecall', '%pythonnondynamic', '%pythonprepend', '%refobject', '%shadow', '%sizeof', '%trackobjects', '%types', '%unrefobject', '%varargs', '%warn', '%warnfilter')) def analyse_text(text): rv = 0 # Search for SWIG directives, which are conventionally at the beginning of # a line. The probability of them being within a line is low, so let another # lexer win in this case. 
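        # For example, an interface file opening with "%module example" or
        # "%include <std_string.i>" hits a known directive and is scored 0.98
        # on the first match; a %-prefixed word that is not in swig_directives
        # only reaches 0.91, just above MatlabLexer, so genuinely ambiguous
        # input can still be claimed by a more specific lexer.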
matches = re.findall(r'^\s*(%[a-z_][a-z0-9_]*)', text, re.M) for m in matches: if m in SwigLexer.swig_directives: rv = 0.98 break else: rv = 0.91 # Fraction higher than MatlabLexer return rv class MqlLexer(CppLexer): """ For `MQL4 `_ and `MQL5 `_ source code. .. versionadded:: 2.0 """ name = 'MQL' aliases = ['mql', 'mq4', 'mq5', 'mql4', 'mql5'] filenames = ['*.mq4', '*.mq5', '*.mqh'] mimetypes = ['text/x-mql'] tokens = { 'statements': [ (words(_mql_builtins.keywords, suffix=r'\b'), Keyword), (words(_mql_builtins.c_types, suffix=r'\b'), Keyword.Type), (words(_mql_builtins.types, suffix=r'\b'), Name.Function), (words(_mql_builtins.constants, suffix=r'\b'), Name.Constant), (words(_mql_builtins.colors, prefix='(clr)?', suffix=r'\b'), Name.Constant), inherit, ], } class ArduinoLexer(CppLexer): """ For `Arduino(tm) `_ source. This is an extension of the CppLexer, as the Arduino® Language is a superset of C++ .. versionadded:: 2.1 """ name = 'Arduino' aliases = ['arduino'] filenames = ['*.ino'] mimetypes = ['text/x-arduino'] # Language sketch main structure functions structure = set(('setup', 'loop')) # Language operators operators = set(('not', 'or', 'and', 'xor')) # Language 'variables' variables = set(( 'DIGITAL_MESSAGE', 'FIRMATA_STRING', 'ANALOG_MESSAGE', 'REPORT_DIGITAL', 'REPORT_ANALOG', 'INPUT_PULLUP', 'SET_PIN_MODE', 'INTERNAL2V56', 'SYSTEM_RESET', 'LED_BUILTIN', 'INTERNAL1V1', 'SYSEX_START', 'INTERNAL', 'EXTERNAL', 'HIGH', 'LOW', 'INPUT', 'OUTPUT', 'INPUT_PULLUP', 'LED_BUILTIN', 'true', 'false', 'void', 'boolean', 'char', 'unsigned char', 'byte', 'int', 'unsigned int', 'word', 'long', 'unsigned long', 'short', 'float', 'double', 'string', 'String', 'array', 'static', 'volatile', 'const', 'boolean', 'byte', 'word', 'string', 'String', 'array', 'int', 'float', 'private', 'char', 'virtual', 'operator', 'sizeof', 'uint8_t', 'uint16_t', 'uint32_t', 'uint64_t', 'int8_t', 'int16_t', 'int32_t', 'int64_t', 'dynamic_cast', 'typedef', 'const_cast', 'const', 'struct', 'static_cast', 'union', 'unsigned', 'long', 'volatile', 'static', 'protected', 'bool', 'public', 'friend', 'auto', 'void', 'enum', 'extern', 'class', 'short', 'reinterpret_cast', 'double', 'register', 'explicit', 'signed', 'inline', 'delete', '_Bool', 'complex', '_Complex', '_Imaginary', 'atomic_bool', 'atomic_char', 'atomic_schar', 'atomic_uchar', 'atomic_short', 'atomic_ushort', 'atomic_int', 'atomic_uint', 'atomic_long', 'atomic_ulong', 'atomic_llong', 'atomic_ullong', 'PROGMEM')) # Language shipped functions and class ( ) functions = set(( 'KeyboardController', 'MouseController', 'SoftwareSerial', 'EthernetServer', 'EthernetClient', 'LiquidCrystal', 'RobotControl', 'GSMVoiceCall', 'EthernetUDP', 'EsploraTFT', 'HttpClient', 'RobotMotor', 'WiFiClient', 'GSMScanner', 'FileSystem', 'Scheduler', 'GSMServer', 'YunClient', 'YunServer', 'IPAddress', 'GSMClient', 'GSMModem', 'Keyboard', 'Ethernet', 'Console', 'GSMBand', 'Esplora', 'Stepper', 'Process', 'WiFiUDP', 'GSM_SMS', 'Mailbox', 'USBHost', 'Firmata', 'PImage', 'Client', 'Server', 'GSMPIN', 'FileIO', 'Bridge', 'Serial', 'EEPROM', 'Stream', 'Mouse', 'Audio', 'Servo', 'File', 'Task', 'GPRS', 'WiFi', 'Wire', 'TFT', 'GSM', 'SPI', 'SD', 'runShellCommandAsynchronously', 'analogWriteResolution', 'retrieveCallingNumber', 'printFirmwareVersion', 'analogReadResolution', 'sendDigitalPortPair', 'noListenOnLocalhost', 'readJoystickButton', 'setFirmwareVersion', 'readJoystickSwitch', 'scrollDisplayRight', 'getVoiceCallStatus', 'scrollDisplayLeft', 'writeMicroseconds', 'delayMicroseconds', 
'beginTransmission', 'getSignalStrength', 'runAsynchronously', 'getAsynchronously', 'listenOnLocalhost', 'getCurrentCarrier', 'readAccelerometer', 'messageAvailable', 'sendDigitalPorts', 'lineFollowConfig', 'countryNameWrite', 'runShellCommand', 'readStringUntil', 'rewindDirectory', 'readTemperature', 'setClockDivider', 'readLightSensor', 'endTransmission', 'analogReference', 'detachInterrupt', 'countryNameRead', 'attachInterrupt', 'encryptionType', 'readBytesUntil', 'robotNameWrite', 'readMicrophone', 'robotNameRead', 'cityNameWrite', 'userNameWrite', 'readJoystickY', 'readJoystickX', 'mouseReleased', 'openNextFile', 'scanNetworks', 'noInterrupts', 'digitalWrite', 'beginSpeaker', 'mousePressed', 'isActionDone', 'mouseDragged', 'displayLogos', 'noAutoscroll', 'addParameter', 'remoteNumber', 'getModifiers', 'keyboardRead', 'userNameRead', 'waitContinue', 'processInput', 'parseCommand', 'printVersion', 'readNetworks', 'writeMessage', 'blinkVersion', 'cityNameRead', 'readMessage', 'setDataMode', 'parsePacket', 'isListening', 'setBitOrder', 'beginPacket', 'isDirectory', 'motorsWrite', 'drawCompass', 'digitalRead', 'clearScreen', 'serialEvent', 'rightToLeft', 'setTextSize', 'leftToRight', 'requestFrom', 'keyReleased', 'compassRead', 'analogWrite', 'interrupts', 'WiFiServer', 'disconnect', 'playMelody', 'parseFloat', 'autoscroll', 'getPINUsed', 'setPINUsed', 'setTimeout', 'sendAnalog', 'readSlider', 'analogRead', 'beginWrite', 'createChar', 'motorsStop', 'keyPressed', 'tempoWrite', 'readButton', 'subnetMask', 'debugPrint', 'macAddress', 'writeGreen', 'randomSeed', 'attachGPRS', 'readString', 'sendString', 'remotePort', 'releaseAll', 'mouseMoved', 'background', 'getXChange', 'getYChange', 'answerCall', 'getResult', 'voiceCall', 'endPacket', 'constrain', 'getSocket', 'writeJSON', 'getButton', 'available', 'connected', 'findUntil', 'readBytes', 'exitValue', 'readGreen', 'writeBlue', 'startLoop', 'IPAddress', 'isPressed', 'sendSysex', 'pauseMode', 'gatewayIP', 'setCursor', 'getOemKey', 'tuneWrite', 'noDisplay', 'loadImage', 'switchPIN', 'onRequest', 'onReceive', 'changePIN', 'playFile', 'noBuffer', 'parseInt', 'overflow', 'checkPIN', 'knobRead', 'beginTFT', 'bitClear', 'updateIR', 'bitWrite', 'position', 'writeRGB', 'highByte', 'writeRed', 'setSpeed', 'readBlue', 'noStroke', 'remoteIP', 'transfer', 'shutdown', 'hangCall', 'beginSMS', 'endWrite', 'attached', 'maintain', 'noCursor', 'checkReg', 'checkPUK', 'shiftOut', 'isValid', 'shiftIn', 'pulseIn', 'connect', 'println', 'localIP', 'pinMode', 'getIMEI', 'display', 'noBlink', 'process', 'getBand', 'running', 'beginSD', 'drawBMP', 'lowByte', 'setBand', 'release', 'bitRead', 'prepare', 'pointTo', 'readRed', 'setMode', 'noFill', 'remove', 'listen', 'stroke', 'detach', 'attach', 'noTone', 'exists', 'buffer', 'height', 'bitSet', 'circle', 'config', 'cursor', 'random', 'IRread', 'setDNS', 'endSMS', 'getKey', 'micros', 'millis', 'begin', 'print', 'write', 'ready', 'flush', 'width', 'isPIN', 'blink', 'clear', 'press', 'mkdir', 'rmdir', 'close', 'point', 'yield', 'image', 'BSSID', 'click', 'delay', 'read', 'text', 'move', 'peek', 'beep', 'rect', 'line', 'open', 'seek', 'fill', 'size', 'turn', 'stop', 'home', 'find', 'step', 'tone', 'sqrt', 'RSSI', 'SSID', 'end', 'bit', 'tan', 'cos', 'sin', 'pow', 'map', 'abs', 'max', 'min', 'get', 'run', 'put', 'isAlphaNumeric', 'isAlpha', 'isAscii', 'isWhitespace', 'isControl', 'isDigit', 'isGraph', 'isLowerCase', 'isPrintable', 'isPunct', 'isSpace', 'isUpperCase', 'isHexadecimalDigit')) # do not highlight suppress_highlight 
= set(( 'namespace', 'template', 'mutable', 'using', 'asm', 'typeid', 'typename', 'this', 'alignof', 'constexpr', 'decltype', 'noexcept', 'static_assert', 'thread_local', 'restrict')) def get_tokens_unprocessed(self, text): for index, token, value in CppLexer.get_tokens_unprocessed(self, text): if value in self.structure: yield index, Name.Builtin, value elif value in self.operators: yield index, Operator, value elif value in self.variables: yield index, Keyword.Reserved, value elif value in self.suppress_highlight: yield index, Name, value elif value in self.functions: yield index, Name.Function, value else: yield index, token, value Pygments-2.3.1/pygments/lexers/_stata_builtins.py0000644000175000017500000006106313376260540021321 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers._stata_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Builtins for Stata :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ builtins_base = ( "if", "else", "in", "foreach", "for", "forv", "forva", "forval", "forvalu", "forvalue", "forvalues", "by", "bys", "bysort", "quietly", "qui", "about", "ac", "ac_7", "acprplot", "acprplot_7", "adjust", "ado", "adopath", "adoupdate", "alpha", "ameans", "an", "ano", "anov", "anova", "anova_estat", "anova_terms", "anovadef", "aorder", "ap", "app", "appe", "appen", "append", "arch", "arch_dr", "arch_estat", "arch_p", "archlm", "areg", "areg_p", "args", "arima", "arima_dr", "arima_estat", "arima_p", "as", "asmprobit", "asmprobit_estat", "asmprobit_lf", "asmprobit_mfx__dlg", "asmprobit_p", "ass", "asse", "asser", "assert", "avplot", "avplot_7", "avplots", "avplots_7", "bcskew0", "bgodfrey", "binreg", "bip0_lf", "biplot", "bipp_lf", "bipr_lf", "bipr_p", "biprobit", "bitest", "bitesti", "bitowt", "blogit", "bmemsize", "boot", "bootsamp", "bootstrap", "bootstrap_8", "boxco_l", "boxco_p", "boxcox", "boxcox_6", "boxcox_p", "bprobit", "br", "break", "brier", "bro", "brow", "brows", "browse", "brr", "brrstat", "bs", "bs_7", "bsampl_w", "bsample", "bsample_7", "bsqreg", "bstat", "bstat_7", "bstat_8", "bstrap", "bstrap_7", "ca", "ca_estat", "ca_p", "cabiplot", "camat", "canon", "canon_8", "canon_8_p", "canon_estat", "canon_p", "cap", "caprojection", "capt", "captu", "captur", "capture", "cat", "cc", "cchart", "cchart_7", "cci", "cd", "censobs_table", "centile", "cf", "char", "chdir", "checkdlgfiles", "checkestimationsample", "checkhlpfiles", "checksum", "chelp", "ci", "cii", "cl", "class", "classutil", "clear", "cli", "clis", "clist", "clo", "clog", "clog_lf", "clog_p", "clogi", "clogi_sw", "clogit", "clogit_lf", "clogit_p", "clogitp", "clogl_sw", "cloglog", "clonevar", "clslistarray", "cluster", "cluster_measures", "cluster_stop", "cluster_tree", "cluster_tree_8", "clustermat", "cmdlog", "cnr", "cnre", "cnreg", "cnreg_p", "cnreg_sw", "cnsreg", "codebook", "collaps4", "collapse", "colormult_nb", "colormult_nw", "compare", "compress", "conf", "confi", "confir", "confirm", "conren", "cons", "const", "constr", "constra", "constrai", "constrain", "constraint", "continue", "contract", "copy", "copyright", "copysource", "cor", "corc", "corr", "corr2data", "corr_anti", "corr_kmo", "corr_smc", "corre", "correl", "correla", "correlat", "correlate", "corrgram", "cou", "coun", "count", "cox", "cox_p", "cox_sw", "coxbase", "coxhaz", "coxvar", "cprplot", "cprplot_7", "crc", "cret", "cretu", "cretur", "creturn", "cross", "cs", "cscript", "cscript_log", "csi", "ct", "ct_is", "ctset", "ctst_5", "ctst_st", "cttost", "cumsp", "cumsp_7", 
"cumul", "cusum", "cusum_7", "cutil", "d", "datasig", "datasign", "datasigna", "datasignat", "datasignatu", "datasignatur", "datasignature", "datetof", "db", "dbeta", "de", "dec", "deco", "decod", "decode", "deff", "des", "desc", "descr", "descri", "describ", "describe", "destring", "dfbeta", "dfgls", "dfuller", "di", "di_g", "dir", "dirstats", "dis", "discard", "disp", "disp_res", "disp_s", "displ", "displa", "display", "distinct", "do", "doe", "doed", "doedi", "doedit", "dotplot", "dotplot_7", "dprobit", "drawnorm", "drop", "ds", "ds_util", "dstdize", "duplicates", "durbina", "dwstat", "dydx", "e", "ed", "edi", "edit", "egen", "eivreg", "emdef", "en", "enc", "enco", "encod", "encode", "eq", "erase", "ereg", "ereg_lf", "ereg_p", "ereg_sw", "ereghet", "ereghet_glf", "ereghet_glf_sh", "ereghet_gp", "ereghet_ilf", "ereghet_ilf_sh", "ereghet_ip", "eret", "eretu", "eretur", "ereturn", "err", "erro", "error", "est", "est_cfexist", "est_cfname", "est_clickable", "est_expand", "est_hold", "est_table", "est_unhold", "est_unholdok", "estat", "estat_default", "estat_summ", "estat_vce_only", "esti", "estimates", "etodow", "etof", "etomdy", "ex", "exi", "exit", "expand", "expandcl", "fac", "fact", "facto", "factor", "factor_estat", "factor_p", "factor_pca_rotated", "factor_rotate", "factormat", "fcast", "fcast_compute", "fcast_graph", "fdades", "fdadesc", "fdadescr", "fdadescri", "fdadescrib", "fdadescribe", "fdasav", "fdasave", "fdause", "fh_st", "open", "read", "close", "file", "filefilter", "fillin", "find_hlp_file", "findfile", "findit", "findit_7", "fit", "fl", "fli", "flis", "flist", "for5_0", "form", "forma", "format", "fpredict", "frac_154", "frac_adj", "frac_chk", "frac_cox", "frac_ddp", "frac_dis", "frac_dv", "frac_in", "frac_mun", "frac_pp", "frac_pq", "frac_pv", "frac_wgt", "frac_xo", "fracgen", "fracplot", "fracplot_7", "fracpoly", "fracpred", "fron_ex", "fron_hn", "fron_p", "fron_tn", "fron_tn2", "frontier", "ftodate", "ftoe", "ftomdy", "ftowdate", "g", "gamhet_glf", "gamhet_gp", "gamhet_ilf", "gamhet_ip", "gamma", "gamma_d2", "gamma_p", "gamma_sw", "gammahet", "gdi_hexagon", "gdi_spokes", "ge", "gen", "gene", "gener", "genera", "generat", "generate", "genrank", "genstd", "genvmean", "gettoken", "gl", "gladder", "gladder_7", "glim_l01", "glim_l02", "glim_l03", "glim_l04", "glim_l05", "glim_l06", "glim_l07", "glim_l08", "glim_l09", "glim_l10", "glim_l11", "glim_l12", "glim_lf", "glim_mu", "glim_nw1", "glim_nw2", "glim_nw3", "glim_p", "glim_v1", "glim_v2", "glim_v3", "glim_v4", "glim_v5", "glim_v6", "glim_v7", "glm", "glm_6", "glm_p", "glm_sw", "glmpred", "glo", "glob", "globa", "global", "glogit", "glogit_8", "glogit_p", "gmeans", "gnbre_lf", "gnbreg", "gnbreg_5", "gnbreg_p", "gomp_lf", "gompe_sw", "gomper_p", "gompertz", "gompertzhet", "gomphet_glf", "gomphet_glf_sh", "gomphet_gp", "gomphet_ilf", "gomphet_ilf_sh", "gomphet_ip", "gphdot", "gphpen", "gphprint", "gprefs", "gprobi_p", "gprobit", "gprobit_8", "gr", "gr7", "gr_copy", "gr_current", "gr_db", "gr_describe", "gr_dir", "gr_draw", "gr_draw_replay", "gr_drop", "gr_edit", "gr_editviewopts", "gr_example", "gr_example2", "gr_export", "gr_print", "gr_qscheme", "gr_query", "gr_read", "gr_rename", "gr_replay", "gr_save", "gr_set", "gr_setscheme", "gr_table", "gr_undo", "gr_use", "graph", "graph7", "grebar", "greigen", "greigen_7", "greigen_8", "grmeanby", "grmeanby_7", "gs_fileinfo", "gs_filetype", "gs_graphinfo", "gs_stat", "gsort", "gwood", "h", "hadimvo", "hareg", "hausman", "haver", "he", "heck_d2", "heckma_p", "heckman", "heckp_lf", 
"heckpr_p", "heckprob", "hel", "help", "hereg", "hetpr_lf", "hetpr_p", "hetprob", "hettest", "hexdump", "hilite", "hist", "hist_7", "histogram", "hlogit", "hlu", "hmeans", "hotel", "hotelling", "hprobit", "hreg", "hsearch", "icd9", "icd9_ff", "icd9p", "iis", "impute", "imtest", "inbase", "include", "inf", "infi", "infil", "infile", "infix", "inp", "inpu", "input", "ins", "insheet", "insp", "inspe", "inspec", "inspect", "integ", "inten", "intreg", "intreg_7", "intreg_p", "intrg2_ll", "intrg_ll", "intrg_ll2", "ipolate", "iqreg", "ir", "irf", "irf_create", "irfm", "iri", "is_svy", "is_svysum", "isid", "istdize", "ivprob_1_lf", "ivprob_lf", "ivprobit", "ivprobit_p", "ivreg", "ivreg_footnote", "ivtob_1_lf", "ivtob_lf", "ivtobit", "ivtobit_p", "jackknife", "jacknife", "jknife", "jknife_6", "jknife_8", "jkstat", "joinby", "kalarma1", "kap", "kap_3", "kapmeier", "kappa", "kapwgt", "kdensity", "kdensity_7", "keep", "ksm", "ksmirnov", "ktau", "kwallis", "l", "la", "lab", "labe", "label", "labelbook", "ladder", "levels", "levelsof", "leverage", "lfit", "lfit_p", "li", "lincom", "line", "linktest", "lis", "list", "lloghet_glf", "lloghet_glf_sh", "lloghet_gp", "lloghet_ilf", "lloghet_ilf_sh", "lloghet_ip", "llogi_sw", "llogis_p", "llogist", "llogistic", "llogistichet", "lnorm_lf", "lnorm_sw", "lnorma_p", "lnormal", "lnormalhet", "lnormhet_glf", "lnormhet_glf_sh", "lnormhet_gp", "lnormhet_ilf", "lnormhet_ilf_sh", "lnormhet_ip", "lnskew0", "loadingplot", "loc", "loca", "local", "log", "logi", "logis_lf", "logistic", "logistic_p", "logit", "logit_estat", "logit_p", "loglogs", "logrank", "loneway", "lookfor", "lookup", "lowess", "lowess_7", "lpredict", "lrecomp", "lroc", "lroc_7", "lrtest", "ls", "lsens", "lsens_7", "lsens_x", "lstat", "ltable", "ltable_7", "ltriang", "lv", "lvr2plot", "lvr2plot_7", "m", "ma", "mac", "macr", "macro", "makecns", "man", "manova", "manova_estat", "manova_p", "manovatest", "mantel", "mark", "markin", "markout", "marksample", "mat", "mat_capp", "mat_order", "mat_put_rr", "mat_rapp", "mata", "mata_clear", "mata_describe", "mata_drop", "mata_matdescribe", "mata_matsave", "mata_matuse", "mata_memory", "mata_mlib", "mata_mosave", "mata_rename", "mata_which", "matalabel", "matcproc", "matlist", "matname", "matr", "matri", "matrix", "matrix_input__dlg", "matstrik", "mcc", "mcci", "md0_", "md1_", "md1debug_", "md2_", "md2debug_", "mds", "mds_estat", "mds_p", "mdsconfig", "mdslong", "mdsmat", "mdsshepard", "mdytoe", "mdytof", "me_derd", "mean", "means", "median", "memory", "memsize", "meqparse", "mer", "merg", "merge", "mfp", "mfx", "mhelp", "mhodds", "minbound", "mixed_ll", "mixed_ll_reparm", "mkassert", "mkdir", "mkmat", "mkspline", "ml", "ml_5", "ml_adjs", "ml_bhhhs", "ml_c_d", "ml_check", "ml_clear", "ml_cnt", "ml_debug", "ml_defd", "ml_e0", "ml_e0_bfgs", "ml_e0_cycle", "ml_e0_dfp", "ml_e0i", "ml_e1", "ml_e1_bfgs", "ml_e1_bhhh", "ml_e1_cycle", "ml_e1_dfp", "ml_e2", "ml_e2_cycle", "ml_ebfg0", "ml_ebfr0", "ml_ebfr1", "ml_ebh0q", "ml_ebhh0", "ml_ebhr0", "ml_ebr0i", "ml_ecr0i", "ml_edfp0", "ml_edfr0", "ml_edfr1", "ml_edr0i", "ml_eds", "ml_eer0i", "ml_egr0i", "ml_elf", "ml_elf_bfgs", "ml_elf_bhhh", "ml_elf_cycle", "ml_elf_dfp", "ml_elfi", "ml_elfs", "ml_enr0i", "ml_enrr0", "ml_erdu0", "ml_erdu0_bfgs", "ml_erdu0_bhhh", "ml_erdu0_bhhhq", "ml_erdu0_cycle", "ml_erdu0_dfp", "ml_erdu0_nrbfgs", "ml_exde", "ml_footnote", "ml_geqnr", "ml_grad0", "ml_graph", "ml_hbhhh", "ml_hd0", "ml_hold", "ml_init", "ml_inv", "ml_log", "ml_max", "ml_mlout", "ml_mlout_8", "ml_model", "ml_nb0", "ml_opt", "ml_p", 
"ml_plot", "ml_query", "ml_rdgrd", "ml_repor", "ml_s_e", "ml_score", "ml_searc", "ml_technique", "ml_unhold", "mleval", "mlf_", "mlmatbysum", "mlmatsum", "mlog", "mlogi", "mlogit", "mlogit_footnote", "mlogit_p", "mlopts", "mlsum", "mlvecsum", "mnl0_", "mor", "more", "mov", "move", "mprobit", "mprobit_lf", "mprobit_p", "mrdu0_", "mrdu1_", "mvdecode", "mvencode", "mvreg", "mvreg_estat", "n", "nbreg", "nbreg_al", "nbreg_lf", "nbreg_p", "nbreg_sw", "nestreg", "net", "newey", "newey_7", "newey_p", "news", "nl", "nl_7", "nl_9", "nl_9_p", "nl_p", "nl_p_7", "nlcom", "nlcom_p", "nlexp2", "nlexp2_7", "nlexp2a", "nlexp2a_7", "nlexp3", "nlexp3_7", "nlgom3", "nlgom3_7", "nlgom4", "nlgom4_7", "nlinit", "nllog3", "nllog3_7", "nllog4", "nllog4_7", "nlog_rd", "nlogit", "nlogit_p", "nlogitgen", "nlogittree", "nlpred", "no", "nobreak", "noi", "nois", "noisi", "noisil", "noisily", "note", "notes", "notes_dlg", "nptrend", "numlabel", "numlist", "odbc", "old_ver", "olo", "olog", "ologi", "ologi_sw", "ologit", "ologit_p", "ologitp", "on", "one", "onew", "onewa", "oneway", "op_colnm", "op_comp", "op_diff", "op_inv", "op_str", "opr", "opro", "oprob", "oprob_sw", "oprobi", "oprobi_p", "oprobit", "oprobitp", "opts_exclusive", "order", "orthog", "orthpoly", "ou", "out", "outf", "outfi", "outfil", "outfile", "outs", "outsh", "outshe", "outshee", "outsheet", "ovtest", "pac", "pac_7", "palette", "parse", "parse_dissim", "pause", "pca", "pca_8", "pca_display", "pca_estat", "pca_p", "pca_rotate", "pcamat", "pchart", "pchart_7", "pchi", "pchi_7", "pcorr", "pctile", "pentium", "pergram", "pergram_7", "permute", "permute_8", "personal", "peto_st", "pkcollapse", "pkcross", "pkequiv", "pkexamine", "pkexamine_7", "pkshape", "pksumm", "pksumm_7", "pl", "plo", "plot", "plugin", "pnorm", "pnorm_7", "poisgof", "poiss_lf", "poiss_sw", "poisso_p", "poisson", "poisson_estat", "post", "postclose", "postfile", "postutil", "pperron", "pr", "prais", "prais_e", "prais_e2", "prais_p", "predict", "predictnl", "preserve", "print", "pro", "prob", "probi", "probit", "probit_estat", "probit_p", "proc_time", "procoverlay", "procrustes", "procrustes_estat", "procrustes_p", "profiler", "prog", "progr", "progra", "program", "prop", "proportion", "prtest", "prtesti", "pwcorr", "pwd", "q", "s", "qby", "qbys", "qchi", "qchi_7", "qladder", "qladder_7", "qnorm", "qnorm_7", "qqplot", "qqplot_7", "qreg", "qreg_c", "qreg_p", "qreg_sw", "qu", "quadchk", "quantile", "quantile_7", "que", "quer", "query", "range", "ranksum", "ratio", "rchart", "rchart_7", "rcof", "recast", "reclink", "recode", "reg", "reg3", "reg3_p", "regdw", "regr", "regre", "regre_p2", "regres", "regres_p", "regress", "regress_estat", "regriv_p", "remap", "ren", "rena", "renam", "rename", "renpfix", "repeat", "replace", "report", "reshape", "restore", "ret", "retu", "retur", "return", "rm", "rmdir", "robvar", "roccomp", "roccomp_7", "roccomp_8", "rocf_lf", "rocfit", "rocfit_8", "rocgold", "rocplot", "rocplot_7", "roctab", "roctab_7", "rolling", "rologit", "rologit_p", "rot", "rota", "rotat", "rotate", "rotatemat", "rreg", "rreg_p", "ru", "run", "runtest", "rvfplot", "rvfplot_7", "rvpplot", "rvpplot_7", "sa", "safesum", "sample", "sampsi", "sav", "save", "savedresults", "saveold", "sc", "sca", "scal", "scala", "scalar", "scatter", "scm_mine", "sco", "scob_lf", "scob_p", "scobi_sw", "scobit", "scor", "score", "scoreplot", "scoreplot_help", "scree", "screeplot", "screeplot_help", "sdtest", "sdtesti", "se", "search", "separate", "seperate", "serrbar", "serrbar_7", "serset", "set", 
"set_defaults", "sfrancia", "sh", "she", "shel", "shell", "shewhart", "shewhart_7", "signestimationsample", "signrank", "signtest", "simul", "simul_7", "simulate", "simulate_8", "sktest", "sleep", "slogit", "slogit_d2", "slogit_p", "smooth", "snapspan", "so", "sor", "sort", "spearman", "spikeplot", "spikeplot_7", "spikeplt", "spline_x", "split", "sqreg", "sqreg_p", "sret", "sretu", "sretur", "sreturn", "ssc", "st", "st_ct", "st_hc", "st_hcd", "st_hcd_sh", "st_is", "st_issys", "st_note", "st_promo", "st_set", "st_show", "st_smpl", "st_subid", "stack", "statsby", "statsby_8", "stbase", "stci", "stci_7", "stcox", "stcox_estat", "stcox_fr", "stcox_fr_ll", "stcox_p", "stcox_sw", "stcoxkm", "stcoxkm_7", "stcstat", "stcurv", "stcurve", "stcurve_7", "stdes", "stem", "stepwise", "stereg", "stfill", "stgen", "stir", "stjoin", "stmc", "stmh", "stphplot", "stphplot_7", "stphtest", "stphtest_7", "stptime", "strate", "strate_7", "streg", "streg_sw", "streset", "sts", "sts_7", "stset", "stsplit", "stsum", "sttocc", "sttoct", "stvary", "stweib", "su", "suest", "suest_8", "sum", "summ", "summa", "summar", "summari", "summariz", "summarize", "sunflower", "sureg", "survcurv", "survsum", "svar", "svar_p", "svmat", "svy", "svy_disp", "svy_dreg", "svy_est", "svy_est_7", "svy_estat", "svy_get", "svy_gnbreg_p", "svy_head", "svy_header", "svy_heckman_p", "svy_heckprob_p", "svy_intreg_p", "svy_ivreg_p", "svy_logistic_p", "svy_logit_p", "svy_mlogit_p", "svy_nbreg_p", "svy_ologit_p", "svy_oprobit_p", "svy_poisson_p", "svy_probit_p", "svy_regress_p", "svy_sub", "svy_sub_7", "svy_x", "svy_x_7", "svy_x_p", "svydes", "svydes_8", "svygen", "svygnbreg", "svyheckman", "svyheckprob", "svyintreg", "svyintreg_7", "svyintrg", "svyivreg", "svylc", "svylog_p", "svylogit", "svymarkout", "svymarkout_8", "svymean", "svymlog", "svymlogit", "svynbreg", "svyolog", "svyologit", "svyoprob", "svyoprobit", "svyopts", "svypois", "svypois_7", "svypoisson", "svyprobit", "svyprobt", "svyprop", "svyprop_7", "svyratio", "svyreg", "svyreg_p", "svyregress", "svyset", "svyset_7", "svyset_8", "svytab", "svytab_7", "svytest", "svytotal", "sw", "sw_8", "swcnreg", "swcox", "swereg", "swilk", "swlogis", "swlogit", "swologit", "swoprbt", "swpois", "swprobit", "swqreg", "swtobit", "swweib", "symmetry", "symmi", "symplot", "symplot_7", "syntax", "sysdescribe", "sysdir", "sysuse", "szroeter", "ta", "tab", "tab1", "tab2", "tab_or", "tabd", "tabdi", "tabdis", "tabdisp", "tabi", "table", "tabodds", "tabodds_7", "tabstat", "tabu", "tabul", "tabula", "tabulat", "tabulate", "te", "tempfile", "tempname", "tempvar", "tes", "test", "testnl", "testparm", "teststd", "tetrachoric", "time_it", "timer", "tis", "tob", "tobi", "tobit", "tobit_p", "tobit_sw", "token", "tokeni", "tokeniz", "tokenize", "tostring", "total", "translate", "translator", "transmap", "treat_ll", "treatr_p", "treatreg", "trim", "trnb_cons", "trnb_mean", "trpoiss_d2", "trunc_ll", "truncr_p", "truncreg", "tsappend", "tset", "tsfill", "tsline", "tsline_ex", "tsreport", "tsrevar", "tsrline", "tsset", "tssmooth", "tsunab", "ttest", "ttesti", "tut_chk", "tut_wait", "tutorial", "tw", "tware_st", "two", "twoway", "twoway__fpfit_serset", "twoway__function_gen", "twoway__histogram_gen", "twoway__ipoint_serset", "twoway__ipoints_serset", "twoway__kdensity_gen", "twoway__lfit_serset", "twoway__normgen_gen", "twoway__pci_serset", "twoway__qfit_serset", "twoway__scatteri_serset", "twoway__sunflower_gen", "twoway_ksm_serset", "ty", "typ", "type", "typeof", "u", "unab", "unabbrev", "unabcmd", "update", "us", "use", 
"uselabel", "var", "var_mkcompanion", "var_p", "varbasic", "varfcast", "vargranger", "varirf", "varirf_add", "varirf_cgraph", "varirf_create", "varirf_ctable", "varirf_describe", "varirf_dir", "varirf_drop", "varirf_erase", "varirf_graph", "varirf_ograph", "varirf_rename", "varirf_set", "varirf_table", "varlist", "varlmar", "varnorm", "varsoc", "varstable", "varstable_w", "varstable_w2", "varwle", "vce", "vec", "vec_fevd", "vec_mkphi", "vec_p", "vec_p_w", "vecirf_create", "veclmar", "veclmar_w", "vecnorm", "vecnorm_w", "vecrank", "vecstable", "verinst", "vers", "versi", "versio", "version", "view", "viewsource", "vif", "vwls", "wdatetof", "webdescribe", "webseek", "webuse", "weib1_lf", "weib2_lf", "weib_lf", "weib_lf0", "weibhet_glf", "weibhet_glf_sh", "weibhet_glfa", "weibhet_glfa_sh", "weibhet_gp", "weibhet_ilf", "weibhet_ilf_sh", "weibhet_ilfa", "weibhet_ilfa_sh", "weibhet_ip", "weibu_sw", "weibul_p", "weibull", "weibull_c", "weibull_s", "weibullhet", "wh", "whelp", "whi", "which", "whil", "while", "wilc_st", "wilcoxon", "win", "wind", "windo", "window", "winexec", "wntestb", "wntestb_7", "wntestq", "xchart", "xchart_7", "xcorr", "xcorr_7", "xi", "xi_6", "xmlsav", "xmlsave", "xmluse", "xpose", "xsh", "xshe", "xshel", "xshell", "xt_iis", "xt_tis", "xtab_p", "xtabond", "xtbin_p", "xtclog", "xtcloglog", "xtcloglog_8", "xtcloglog_d2", "xtcloglog_pa_p", "xtcloglog_re_p", "xtcnt_p", "xtcorr", "xtdata", "xtdes", "xtfront_p", "xtfrontier", "xtgee", "xtgee_elink", "xtgee_estat", "xtgee_makeivar", "xtgee_p", "xtgee_plink", "xtgls", "xtgls_p", "xthaus", "xthausman", "xtht_p", "xthtaylor", "xtile", "xtint_p", "xtintreg", "xtintreg_8", "xtintreg_d2", "xtintreg_p", "xtivp_1", "xtivp_2", "xtivreg", "xtline", "xtline_ex", "xtlogit", "xtlogit_8", "xtlogit_d2", "xtlogit_fe_p", "xtlogit_pa_p", "xtlogit_re_p", "xtmixed", "xtmixed_estat", "xtmixed_p", "xtnb_fe", "xtnb_lf", "xtnbreg", "xtnbreg_pa_p", "xtnbreg_refe_p", "xtpcse", "xtpcse_p", "xtpois", "xtpoisson", "xtpoisson_d2", "xtpoisson_pa_p", "xtpoisson_refe_p", "xtpred", "xtprobit", "xtprobit_8", "xtprobit_d2", "xtprobit_re_p", "xtps_fe", "xtps_lf", "xtps_ren", "xtps_ren_8", "xtrar_p", "xtrc", "xtrc_p", "xtrchh", "xtrefe_p", "xtreg", "xtreg_be", "xtreg_fe", "xtreg_ml", "xtreg_pa_p", "xtreg_re", "xtregar", "xtrere_p", "xtset", "xtsf_ll", "xtsf_llti", "xtsum", "xttab", "xttest0", "xttobit", "xttobit_8", "xttobit_p", "xttrans", "yx", "yxview__barlike_draw", "yxview_area_draw", "yxview_bar_draw", "yxview_dot_draw", "yxview_dropline_draw", "yxview_function_draw", "yxview_iarrow_draw", "yxview_ilabels_draw", "yxview_normal_draw", "yxview_pcarrow_draw", "yxview_pcbarrow_draw", "yxview_pccapsym_draw", "yxview_pcscatter_draw", "yxview_pcspike_draw", "yxview_rarea_draw", "yxview_rbar_draw", "yxview_rbarm_draw", "yxview_rcap_draw", "yxview_rcapsym_draw", "yxview_rconnected_draw", "yxview_rline_draw", "yxview_rscatter_draw", "yxview_rspike_draw", "yxview_spike_draw", "yxview_sunflower_draw", "zap_s", "zinb", "zinb_llf", "zinb_plf", "zip", "zip_llf", "zip_p", "zip_plf", "zt_ct_5", "zt_hc_5", "zt_hcd_5", "zt_is_5", "zt_iss_5", "zt_sho_5", "zt_smp_5", "ztbase_5", "ztcox_5", "ztdes_5", "ztereg_5", "ztfill_5", "ztgen_5", "ztir_5", "ztjoin_5", "ztnb", "ztnb_p", "ztp", "ztp_p", "zts_5", "ztset_5", "ztspli_5", "ztsum_5", "zttoct_5", "ztvary_5", "ztweib_5" ) builtins_functions = ( "Cdhms", "Chms", "Clock", "Cmdyhms", "Cofc", "Cofd", "F", "Fden", "Ftail", "I", "J", "_caller", "abbrev", "abs", "acos", "acosh", "asin", "asinh", "atan", "atan2", "atanh", "autocode", "betaden", 
"binomial", "binomialp", "binomialtail", "binormal", "bofd", "byteorder", "c", "ceil", "char", "chi2", "chi2den", "chi2tail", "cholesky", "chop", "clip", "clock", "cloglog", "cofC", "cofd", "colnumb", "colsof", "comb", "cond", "corr", "cos", "cosh", "d", "daily", "date", "day", "det", "dgammapda", "dgammapdada", "dgammapdadx", "dgammapdx", "dgammapdxdx", "dhms", "diag", "diag0cnt", "digamma", "dofC", "dofb", "dofc", "dofh", "dofm", "dofq", "dofw", "dofy", "dow", "doy", "dunnettprob", "e", "el", "epsdouble", "epsfloat", "exp", "fileexists", "fileread", "filereaderror", "filewrite", "float", "floor", "fmtwidth", "gammaden", "gammap", "gammaptail", "get", "group", "h", "hadamard", "halfyear", "halfyearly", "has_eprop", "hh", "hhC", "hms", "hofd", "hours", "hypergeometric", "hypergeometricp", "ibeta", "ibetatail", "index", "indexnot", "inlist", "inrange", "int", "inv", "invF", "invFtail", "invbinomial", "invbinomialtail", "invchi2", "invchi2tail", "invcloglog", "invdunnettprob", "invgammap", "invgammaptail", "invibeta", "invibetatail", "invlogit", "invnFtail", "invnbinomial", "invnbinomialtail", "invnchi2", "invnchi2tail", "invnibeta", "invnorm", "invnormal", "invnttail", "invpoisson", "invpoissontail", "invsym", "invt", "invttail", "invtukeyprob", "irecode", "issym", "issymmetric", "itrim", "length", "ln", "lnfact", "lnfactorial", "lngamma", "lnnormal", "lnnormalden", "log", "log10", "logit", "lower", "ltrim", "m", "match", "matmissing", "matrix", "matuniform", "max", "maxbyte", "maxdouble", "maxfloat", "maxint", "maxlong", "mdy", "mdyhms", "mi", "min", "minbyte", "mindouble", "minfloat", "minint", "minlong", "minutes", "missing", "mm", "mmC", "mod", "mofd", "month", "monthly", "mreldif", "msofhours", "msofminutes", "msofseconds", "nF", "nFden", "nFtail", "nbetaden", "nbinomial", "nbinomialp", "nbinomialtail", "nchi2", "nchi2den", "nchi2tail", "nibeta", "norm", "normal", "normalden", "normd", "npnF", "npnchi2", "npnt", "nt", "ntden", "nttail", "nullmat", "plural", "poisson", "poissonp", "poissontail", "proper", "q", "qofd", "quarter", "quarterly", "r", "rbeta", "rbinomial", "rchi2", "real", "recode", "regexm", "regexr", "regexs", "reldif", "replay", "return", "reverse", "rgamma", "rhypergeometric", "rnbinomial", "rnormal", "round", "rownumb", "rowsof", "rpoisson", "rt", "rtrim", "runiform", "s", "scalar", "seconds", "sign", "sin", "sinh", "smallestdouble", "soundex", "soundex_nara", "sqrt", "ss", "ssC", "strcat", "strdup", "string", "strlen", "strlower", "strltrim", "strmatch", "strofreal", "strpos", "strproper", "strreverse", "strrtrim", "strtoname", "strtrim", "strupper", "subinstr", "subinword", "substr", "sum", "sweep", "syminv", "t", "tC", "tan", "tanh", "tc", "td", "tden", "th", "tin", "tm", "tq", "trace", "trigamma", "trim", "trunc", "ttail", "tukeyprob", "tw", "twithin", "uniform", "upper", "vec", "vecdiag", "w", "week", "weekly", "wofd", "word", "wordcount", "year", "yearly", "yh", "ym", "yofd", "yq", "yw" ) Pygments-2.3.1/pygments/lexers/ampl.py0000644000175000017500000001003313376303352017054 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.ampl ~~~~~~~~~~~~~~~~~~~~ Lexers for the AMPL language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups, using, this, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['AmplLexer'] class AmplLexer(RegexLexer): """ For `AMPL `_ source code. .. 
versionadded:: 2.2 """ name = 'Ampl' aliases = ['ampl'] filenames = ['*.run'] tokens = { 'root': [ (r'\n', Text), (r'\s+', Text.Whitespace), (r'#.*?\n', Comment.Single), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (words(( 'call', 'cd', 'close', 'commands', 'data', 'delete', 'display', 'drop', 'end', 'environ', 'exit', 'expand', 'include', 'load', 'model', 'objective', 'option', 'problem', 'purge', 'quit', 'redeclare', 'reload', 'remove', 'reset', 'restore', 'shell', 'show', 'solexpand', 'solution', 'solve', 'update', 'unload', 'xref', 'coeff', 'coef', 'cover', 'obj', 'interval', 'default', 'from', 'to', 'to_come', 'net_in', 'net_out', 'dimen', 'dimension', 'check', 'complements', 'write', 'function', 'pipe', 'format', 'if', 'then', 'else', 'in', 'while', 'repeat', 'for'), suffix=r'\b'), Keyword.Reserved), (r'(integer|binary|symbolic|ordered|circular|reversed|INOUT|IN|OUT|LOCAL)', Keyword.Type), (r'\".*?\"', String.Double), (r'\'.*?\'', String.Single), (r'[()\[\]{},;:]+', Punctuation), (r'\b(\w+)(\.)(astatus|init0|init|lb0|lb1|lb2|lb|lrc|' r'lslack|rc|relax|slack|sstatus|status|ub0|ub1|ub2|' r'ub|urc|uslack|val)', bygroups(Name.Variable, Punctuation, Keyword.Reserved)), (r'(set|param|var|arc|minimize|maximize|subject to|s\.t\.|subj to|' r'node|table|suffix|read table|write table)(\s+)(\w+)', bygroups(Keyword.Declaration, Text, Name.Variable)), (r'(param)(\s*)(:)(\s*)(\w+)(\s*)(:)(\s*)((\w|\s)+)', bygroups(Keyword.Declaration, Text, Punctuation, Text, Name.Variable, Text, Punctuation, Text, Name.Variable)), (r'(let|fix|unfix)(\s*)((?:\{.*\})?)(\s*)(\w+)', bygroups(Keyword.Declaration, Text, using(this), Text, Name.Variable)), (words(( 'abs', 'acos', 'acosh', 'alias', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'ctime', 'cos', 'exp', 'floor', 'log', 'log10', 'max', 'min', 'precision', 'round', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'time', 'trunc', 'Beta', 'Cauchy', 'Exponential', 'Gamma', 'Irand224', 'Normal', 'Normal01', 'Poisson', 'Uniform', 'Uniform01', 'num', 'num0', 'ichar', 'char', 'length', 'substr', 'sprintf', 'match', 'sub', 'gsub', 'print', 'printf', 'next', 'nextw', 'prev', 'prevw', 'first', 'last', 'ord', 'ord0', 'card', 'arity', 'indexarity'), prefix=r'\b', suffix=r'\b'), Name.Builtin), (r'(\+|\-|\*|/|\*\*|=|<=|>=|==|\||\^|<|>|\!|\.\.|:=|\&|\!=|<<|>>)', Operator), (words(( 'or', 'exists', 'forall', 'and', 'in', 'not', 'within', 'union', 'diff', 'difference', 'symdiff', 'inter', 'intersect', 'intersection', 'cross', 'setof', 'by', 'less', 'sum', 'prod', 'product', 'div', 'mod'), suffix=r'\b'), Keyword.Reserved), # Operator.Name but not enough emphasized with that (r'(\d+\.(?!\.)\d*|\.(?!.)\d+)([eE][+-]?\d+)?', Number.Float), (r'\d+([eE][+-]?\d+)?', Number.Integer), (r'[+-]?Infinity', Number.Integer), (r'(\w+|(\.(?!\.)))', Text) ] } Pygments-2.3.1/pygments/lexers/modeling.py0000644000175000017500000003104113376260540017724 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.modeling ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for modeling languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation from pygments.lexers.html import HtmlLexer from pygments.lexers import _stan_builtins __all__ = ['ModelicaLexer', 'BugsLexer', 'JagsLexer', 'StanLexer'] class ModelicaLexer(RegexLexer): """ For `Modelica `_ source code. .. 
versionadded:: 1.1 """ name = 'Modelica' aliases = ['modelica'] filenames = ['*.mo'] mimetypes = ['text/x-modelica'] flags = re.DOTALL | re.MULTILINE _name = r"(?:'(?:[^\\']|\\.)+'|[a-zA-Z_]\w*)" tokens = { 'whitespace': [ (u'[\\s\ufeff]+', Text), (r'//[^\n]*\n?', Comment.Single), (r'/\*.*?\*/', Comment.Multiline) ], 'root': [ include('whitespace'), (r'"', String.Double, 'string'), (r'[()\[\]{},;]+', Punctuation), (r'\.?[*^/+-]|\.|<>|[<>:=]=?', Operator), (r'\d+(\.?\d*[eE][-+]?\d+|\.\d*)', Number.Float), (r'\d+', Number.Integer), (r'(abs|acos|actualStream|array|asin|assert|AssertionLevel|atan|' r'atan2|backSample|Boolean|cardinality|cat|ceil|change|Clock|' r'Connections|cos|cosh|cross|delay|diagonal|div|edge|exp|' r'ExternalObject|fill|floor|getInstanceName|hold|homotopy|' r'identity|inStream|integer|Integer|interval|inverse|isPresent|' r'linspace|log|log10|matrix|max|min|mod|ndims|noClock|noEvent|' r'ones|outerProduct|pre|previous|product|Real|reinit|rem|rooted|' r'sample|scalar|semiLinear|shiftSample|sign|sin|sinh|size|skew|' r'smooth|spatialDistribution|sqrt|StateSelect|String|subSample|' r'sum|superSample|symmetric|tan|tanh|terminal|terminate|time|' r'transpose|vector|zeros)\b', Name.Builtin), (r'(algorithm|annotation|break|connect|constant|constrainedby|der|' r'discrete|each|else|elseif|elsewhen|encapsulated|enumeration|' r'equation|exit|expandable|extends|external|final|flow|for|if|' r'import|impure|in|initial|inner|input|loop|nondiscrete|outer|' r'output|parameter|partial|protected|public|pure|redeclare|' r'replaceable|return|stream|then|when|while)\b', Keyword.Reserved), (r'(and|not|or)\b', Operator.Word), (r'(block|class|connector|end|function|model|operator|package|' r'record|type)\b', Keyword.Reserved, 'class'), (r'(false|true)\b', Keyword.Constant), (r'within\b', Keyword.Reserved, 'package-prefix'), (_name, Name) ], 'class': [ include('whitespace'), (r'(function|record)\b', Keyword.Reserved), (r'(if|for|when|while)\b', Keyword.Reserved, '#pop'), (_name, Name.Class, '#pop'), default('#pop') ], 'package-prefix': [ include('whitespace'), (_name, Name.Namespace, '#pop'), default('#pop') ], 'string': [ (r'"', String.Double, '#pop'), (r'\\[\'"?\\abfnrtv]', String.Escape), (r'(?i)<\s*html\s*>([^\\"]|\\.)+?(<\s*/\s*html\s*>|(?="))', using(HtmlLexer)), (r'<|\\?[^"\\<]+', String.Double) ] } class BugsLexer(RegexLexer): """ Pygments Lexer for `OpenBugs `_ and WinBugs models. .. versionadded:: 1.6 """ name = 'BUGS' aliases = ['bugs', 'winbugs', 'openbugs'] filenames = ['*.bug'] _FUNCTIONS = ( # Scalar functions 'abs', 'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctanh', 'cloglog', 'cos', 'cosh', 'cumulative', 'cut', 'density', 'deviance', 'equals', 'expr', 'gammap', 'ilogit', 'icloglog', 'integral', 'log', 'logfact', 'loggam', 'logit', 'max', 'min', 'phi', 'post.p.value', 'pow', 'prior.p.value', 'probit', 'replicate.post', 'replicate.prior', 'round', 'sin', 'sinh', 'solution', 'sqrt', 'step', 'tan', 'tanh', 'trunc', # Vector functions 'inprod', 'interp.lin', 'inverse', 'logdet', 'mean', 'eigen.vals', 'ode', 'prod', 'p.valueM', 'rank', 'ranked', 'replicate.postM', 'sd', 'sort', 'sum', # Special 'D', 'I', 'F', 'T', 'C') """ OpenBUGS built-in functions From http://www.openbugs.info/Manuals/ModelSpecification.html#ContentsAII This also includes - T, C, I : Truncation and censoring. ``T`` and ``C`` are in OpenBUGS. ``I`` in WinBUGS. 
- D : ODE - F : Functional http://www.openbugs.info/Examples/Functionals.html """ _DISTRIBUTIONS = ('dbern', 'dbin', 'dcat', 'dnegbin', 'dpois', 'dhyper', 'dbeta', 'dchisqr', 'ddexp', 'dexp', 'dflat', 'dgamma', 'dgev', 'df', 'dggamma', 'dgpar', 'dloglik', 'dlnorm', 'dlogis', 'dnorm', 'dpar', 'dt', 'dunif', 'dweib', 'dmulti', 'ddirch', 'dmnorm', 'dmt', 'dwish') """ OpenBUGS built-in distributions Functions from http://www.openbugs.info/Manuals/ModelSpecification.html#ContentsAI """ tokens = { 'whitespace': [ (r"\s+", Text), ], 'comments': [ # Comments (r'#.*$', Comment.Single), ], 'root': [ # Comments include('comments'), include('whitespace'), # Block start (r'(model)(\s+)(\{)', bygroups(Keyword.Namespace, Text, Punctuation)), # Reserved Words (r'(for|in)(?![\w.])', Keyword.Reserved), # Built-in Functions (r'(%s)(?=\s*\()' % r'|'.join(_FUNCTIONS + _DISTRIBUTIONS), Name.Builtin), # Regular variable names (r'[A-Za-z][\w.]*', Name), # Number Literals (r'[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?', Number), # Punctuation (r'\[|\]|\(|\)|:|,|;', Punctuation), # Assignment operators # SLexer makes these tokens Operators. (r'<-|~', Operator), # Infix and prefix operators (r'\+|-|\*|/', Operator), # Block (r'[{}]', Punctuation), ] } def analyse_text(text): if re.search(r"^\s*model\s*{", text, re.M): return 0.7 else: return 0.0 class JagsLexer(RegexLexer): """ Pygments Lexer for JAGS. .. versionadded:: 1.6 """ name = 'JAGS' aliases = ['jags'] filenames = ['*.jag', '*.bug'] # JAGS _FUNCTIONS = ( 'abs', 'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctanh', 'cos', 'cosh', 'cloglog', 'equals', 'exp', 'icloglog', 'ifelse', 'ilogit', 'log', 'logfact', 'loggam', 'logit', 'phi', 'pow', 'probit', 'round', 'sin', 'sinh', 'sqrt', 'step', 'tan', 'tanh', 'trunc', 'inprod', 'interp.lin', 'logdet', 'max', 'mean', 'min', 'prod', 'sum', 'sd', 'inverse', 'rank', 'sort', 't', 'acos', 'acosh', 'asin', 'asinh', 'atan', # Truncation/Censoring (should I include) 'T', 'I') # Distributions with density, probability and quartile functions _DISTRIBUTIONS = tuple('[dpq]%s' % x for x in ('bern', 'beta', 'dchiqsqr', 'ddexp', 'dexp', 'df', 'gamma', 'gen.gamma', 'logis', 'lnorm', 'negbin', 'nchisqr', 'norm', 'par', 'pois', 'weib')) # Other distributions without density and probability _OTHER_DISTRIBUTIONS = ( 'dt', 'dunif', 'dbetabin', 'dbern', 'dbin', 'dcat', 'dhyper', 'ddirch', 'dmnorm', 'dwish', 'dmt', 'dmulti', 'dbinom', 'dchisq', 'dnbinom', 'dweibull', 'ddirich') tokens = { 'whitespace': [ (r"\s+", Text), ], 'names': [ # Regular variable names (r'[a-zA-Z][\w.]*\b', Name), ], 'comments': [ # do not use stateful comments (r'(?s)/\*.*?\*/', Comment.Multiline), # Comments (r'#.*$', Comment.Single), ], 'root': [ # Comments include('comments'), include('whitespace'), # Block start (r'(model|data)(\s+)(\{)', bygroups(Keyword.Namespace, Text, Punctuation)), (r'var(?![\w.])', Keyword.Declaration), # Reserved Words (r'(for|in)(?![\w.])', Keyword.Reserved), # Builtins # Need to use lookahead because . 
is a valid char (r'(%s)(?=\s*\()' % r'|'.join(_FUNCTIONS + _DISTRIBUTIONS + _OTHER_DISTRIBUTIONS), Name.Builtin), # Names include('names'), # Number Literals (r'[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?', Number), (r'\[|\]|\(|\)|:|,|;', Punctuation), # Assignment operators (r'<-|~', Operator), # # JAGS includes many more than OpenBUGS (r'\+|-|\*|\/|\|\|[&]{2}|[<>=]=?|\^|%.*?%', Operator), (r'[{}]', Punctuation), ] } def analyse_text(text): if re.search(r'^\s*model\s*\{', text, re.M): if re.search(r'^\s*data\s*\{', text, re.M): return 0.9 elif re.search(r'^\s*var', text, re.M): return 0.9 else: return 0.3 else: return 0 class StanLexer(RegexLexer): """Pygments Lexer for Stan models. The Stan modeling language is specified in the *Stan Modeling Language User's Guide and Reference Manual, v2.8.0*, `pdf `__. .. versionadded:: 1.6 """ name = 'Stan' aliases = ['stan'] filenames = ['*.stan'] tokens = { 'whitespace': [ (r"\s+", Text), ], 'comments': [ (r'(?s)/\*.*?\*/', Comment.Multiline), # Comments (r'(//|#).*$', Comment.Single), ], 'root': [ # Stan is more restrictive on strings than this regex (r'"[^"]*"', String), # Comments include('comments'), # block start include('whitespace'), # Block start (r'(%s)(\s*)(\{)' % r'|'.join(('functions', 'data', r'transformed\s+?data', 'parameters', r'transformed\s+parameters', 'model', r'generated\s+quantities')), bygroups(Keyword.Namespace, Text, Punctuation)), # Reserved Words (r'(%s)\b' % r'|'.join(_stan_builtins.KEYWORDS), Keyword), # Truncation (r'T(?=\s*\[)', Keyword), # Data types (r'(%s)\b' % r'|'.join(_stan_builtins.TYPES), Keyword.Type), # Punctuation (r"[;:,\[\]()]", Punctuation), # Builtin (r'(%s)(?=\s*\()' % r'|'.join(_stan_builtins.FUNCTIONS + _stan_builtins.DISTRIBUTIONS), Name.Builtin), # Special names ending in __, like lp__ (r'[A-Za-z]\w*__\b', Name.Builtin.Pseudo), (r'(%s)\b' % r'|'.join(_stan_builtins.RESERVED), Keyword.Reserved), # user-defined functions (r'[A-Za-z]\w*(?=\s*\()]', Name.Function), # Regular variable names (r'[A-Za-z]\w*\b', Name), # Real Literals (r'-?[0-9]+(\.[0-9]+)?[eE]-?[0-9]+', Number.Float), (r'-?[0-9]*\.[0-9]*', Number.Float), # Integer Literals (r'-?[0-9]+', Number.Integer), # Assignment operators # SLexer makes these tokens Operators. (r'<-|~', Operator), # Infix, prefix and postfix operators (and = ) (r"\+|-|\.?\*|\.?/|\\|'|\^|==?|!=?|<=?|>=?|\|\||&&", Operator), # Block delimiters (r'[{}]', Punctuation), ] } def analyse_text(text): if re.search(r'^\s*parameters\s*\{', text, re.M): return 1.0 else: return 0.0 Pygments-2.3.1/pygments/lexers/inferno.py0000644000175000017500000000605513402534107017566 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.inferno ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for Inferno os and all the related stuff. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, default from pygments.token import Punctuation, Text, Comment, Operator, Keyword, \ Name, String, Number __all__ = ['LimboLexer'] class LimboLexer(RegexLexer): """ Lexer for `Limbo programming language `_ TODO: - maybe implement better var declaration highlighting - some simple syntax error highlighting .. 
versionadded:: 2.0 """ name = 'Limbo' aliases = ['limbo'] filenames = ['*.b'] mimetypes = ['text/limbo'] tokens = { 'whitespace': [ (r'^(\s*)([a-zA-Z_]\w*:(\s*)\n)', bygroups(Text, Name.Label)), (r'\n', Text), (r'\s+', Text), (r'#(\n|(.|\n)*?[^\\]\n)', Comment.Single), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|' r'u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\', String), # stray backslash ], 'statements': [ (r'"', String, 'string'), (r"'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])', Number.Float), (r'16r[0-9a-fA-F]+', Number.Hex), (r'8r[0-7]+', Number.Oct), (r'((([1-3]\d)|([2-9]))r)?(\d+)', Number.Integer), (r'[()\[\],.]', Punctuation), (r'[~!%^&*+=|?:<>/-]|(->)|(<-)|(=>)|(::)', Operator), (r'(alt|break|case|continue|cyclic|do|else|exit' r'for|hd|if|implement|import|include|len|load|or' r'pick|return|spawn|tagof|tl|to|while)\b', Keyword), (r'(byte|int|big|real|string|array|chan|list|adt' r'|fn|ref|of|module|self|type)\b', Keyword.Type), (r'(con|iota|nil)\b', Keyword.Constant), (r'[a-zA-Z_]\w*', Name), ], 'statement' : [ include('whitespace'), include('statements'), ('[{}]', Punctuation), (';', Punctuation, '#pop'), ], 'root': [ include('whitespace'), default('statement'), ], } def analyse_text(text): # Any limbo module implements something if re.search(r'^implement \w+;', text, re.MULTILINE): return 0.7 # TODO: # - Make lexers for: # - asm sources # - man pages # - mkfiles # - module definitions # - namespace definitions # - shell scripts # - maybe keyfiles and fonts # they all seem to be quite similar to their equivalents # from unix world, so there should not be a lot of problems Pygments-2.3.1/pygments/lexers/math.py0000644000175000017500000000127413376260540017064 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.math ~~~~~~~~~~~~~~~~~~~~ Just export lexers that were contained in this module. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexers.python import NumPyLexer from pygments.lexers.matlab import MatlabLexer, MatlabSessionLexer, \ OctaveLexer, ScilabLexer from pygments.lexers.julia import JuliaLexer, JuliaConsoleLexer from pygments.lexers.r import RConsoleLexer, SLexer, RdLexer from pygments.lexers.modeling import BugsLexer, JagsLexer, StanLexer from pygments.lexers.idl import IDLLexer from pygments.lexers.algebra import MuPADLexer __all__ = [] Pygments-2.3.1/pygments/lexers/scripting.py0000644000175000017500000020426413402534107020132 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.scripting ~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for scripting and embedded languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, default, combined, \ words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Error, Whitespace, Other from pygments.util import get_bool_opt, get_list_opt, iteritems __all__ = ['LuaLexer', 'MoonScriptLexer', 'ChaiscriptLexer', 'LSLLexer', 'AppleScriptLexer', 'RexxLexer', 'MOOCodeLexer', 'HybrisLexer', 'EasytrieveLexer', 'JclLexer'] class LuaLexer(RegexLexer): """ For `Lua `_ source code. 
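A minimal usage sketch (illustrative only — the Lua snippet and the choice of
formatter are placeholders, not part of the lexer's API):

    .. sourcecode:: pycon

        >>> from pygments import highlight
        >>> from pygments.lexers import LuaLexer
        >>> from pygments.formatters import TerminalFormatter
        >>> code = 'local greeting = "hello"  -- a comment'
        >>> print(highlight(code, LuaLexer(), TerminalFormatter()))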
Additional options accepted: `func_name_highlighting` If given and ``True``, highlight builtin function names (default: ``True``). `disabled_modules` If given, must be a list of module names whose function names should not be highlighted. By default all modules are highlighted. To get a list of allowed modules have a look into the `_lua_builtins` module: .. sourcecode:: pycon >>> from pygments.lexers._lua_builtins import MODULES >>> MODULES.keys() ['string', 'coroutine', 'modules', 'io', 'basic', ...] """ name = 'Lua' aliases = ['lua'] filenames = ['*.lua', '*.wlua'] mimetypes = ['text/x-lua', 'application/x-lua'] _comment_multiline = r'(?:--\[(?P=*)\[[\w\W]*?\](?P=level)\])' _comment_single = r'(?:--.*$)' _space = r'(?:\s+)' _s = r'(?:%s|%s|%s)' % (_comment_multiline, _comment_single, _space) _name = r'(?:[^\W\d]\w*)' tokens = { 'root': [ # Lua allows a file to start with a shebang. (r'#!.*', Comment.Preproc), default('base'), ], 'ws': [ (_comment_multiline, Comment.Multiline), (_comment_single, Comment.Single), (_space, Text), ], 'base': [ include('ws'), (r'(?i)0x[\da-f]*(\.[\da-f]*)?(p[+-]?\d+)?', Number.Hex), (r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+e[+-]?\d+', Number.Float), (r'\d+', Number.Integer), # multiline strings (r'(?s)\[(=*)\[.*?\]\1\]', String), (r'::', Punctuation, 'label'), (r'\.{3}', Punctuation), (r'[=<>|~&+\-*/%#^]+|\.\.', Operator), (r'[\[\]{}().,:;]', Punctuation), (r'(and|or|not)\b', Operator.Word), ('(break|do|else|elseif|end|for|if|in|repeat|return|then|until|' r'while)\b', Keyword.Reserved), (r'goto\b', Keyword.Reserved, 'goto'), (r'(local)\b', Keyword.Declaration), (r'(true|false|nil)\b', Keyword.Constant), (r'(function)\b', Keyword.Reserved, 'funcname'), (r'[A-Za-z_]\w*(\.[A-Za-z_]\w*)?', Name), ("'", String.Single, combined('stringescape', 'sqs')), ('"', String.Double, combined('stringescape', 'dqs')) ], 'funcname': [ include('ws'), (r'[.:]', Punctuation), (r'%s(?=%s*[.:])' % (_name, _s), Name.Class), (_name, Name.Function, '#pop'), # inline function (r'\(', Punctuation, '#pop'), ], 'goto': [ include('ws'), (_name, Name.Label, '#pop'), ], 'label': [ include('ws'), (r'::', Punctuation, '#pop'), (_name, Name.Label), ], 'stringescape': [ (r'\\([abfnrtv\\"\']|[\r\n]{1,2}|z\s*|x[0-9a-fA-F]{2}|\d{1,3}|' r'u\{[0-9a-fA-F]+\})', String.Escape), ], 'sqs': [ (r"'", String.Single, '#pop'), (r"[^\\']+", String.Single), ], 'dqs': [ (r'"', String.Double, '#pop'), (r'[^\\"]+', String.Double), ] } def __init__(self, **options): self.func_name_highlighting = get_bool_opt( options, 'func_name_highlighting', True) self.disabled_modules = get_list_opt(options, 'disabled_modules', []) self._functions = set() if self.func_name_highlighting: from pygments.lexers._lua_builtins import MODULES for mod, func in iteritems(MODULES): if mod not in self.disabled_modules: self._functions.update(func) RegexLexer.__init__(self, **options) def get_tokens_unprocessed(self, text): for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text): if token is Name: if value in self._functions: yield index, Name.Builtin, value continue elif '.' in value: a, b = value.split('.') yield index, Name, a yield index + len(a), Punctuation, u'.' yield index + len(a) + 1, Name, b continue yield index, token, value class MoonScriptLexer(LuaLexer): """ For `MoonScript `_ source code. .. 
versionadded:: 1.5 """ name = "MoonScript" aliases = ["moon", "moonscript"] filenames = ["*.moon"] mimetypes = ['text/x-moonscript', 'application/x-moonscript'] tokens = { 'root': [ (r'#!(.*?)$', Comment.Preproc), default('base'), ], 'base': [ ('--.*$', Comment.Single), (r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+e[+-]?\d+', Number.Float), (r'(?i)0x[0-9a-f]*', Number.Hex), (r'\d+', Number.Integer), (r'\n', Text), (r'[^\S\n]+', Text), (r'(?s)\[(=*)\[.*?\]\1\]', String), (r'(->|=>)', Name.Function), (r':[a-zA-Z_]\w*', Name.Variable), (r'(==|!=|~=|<=|>=|\.\.\.|\.\.|[=+\-*/%^<>#!.\\:])', Operator), (r'[;,]', Punctuation), (r'[\[\]{}()]', Keyword.Type), (r'[a-zA-Z_]\w*:', Name.Variable), (words(( 'class', 'extends', 'if', 'then', 'super', 'do', 'with', 'import', 'export', 'while', 'elseif', 'return', 'for', 'in', 'from', 'when', 'using', 'else', 'and', 'or', 'not', 'switch', 'break'), suffix=r'\b'), Keyword), (r'(true|false|nil)\b', Keyword.Constant), (r'(and|or|not)\b', Operator.Word), (r'(self)\b', Name.Builtin.Pseudo), (r'@@?([a-zA-Z_]\w*)?', Name.Variable.Class), (r'[A-Z]\w*', Name.Class), # proper name (r'[A-Za-z_]\w*(\.[A-Za-z_]\w*)?', Name), ("'", String.Single, combined('stringescape', 'sqs')), ('"', String.Double, combined('stringescape', 'dqs')) ], 'stringescape': [ (r'''\\([abfnrtv\\"']|\d{1,3})''', String.Escape) ], 'sqs': [ ("'", String.Single, '#pop'), (".", String) ], 'dqs': [ ('"', String.Double, '#pop'), (".", String) ] } def get_tokens_unprocessed(self, text): # set . as Operator instead of Punctuation for index, token, value in LuaLexer.get_tokens_unprocessed(self, text): if token == Punctuation and value == ".": token = Operator yield index, token, value class ChaiscriptLexer(RegexLexer): """ For `ChaiScript `_ source code. .. versionadded:: 2.0 """ name = 'ChaiScript' aliases = ['chai', 'chaiscript'] filenames = ['*.chai'] mimetypes = ['text/x-chaiscript', 'application/x-chaiscript'] flags = re.DOTALL | re.MULTILINE tokens = { 'commentsandwhitespace': [ (r'\s+', Text), (r'//.*?\n', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'^\#.*?\n', Comment.Single) ], 'slashstartsregex': [ include('commentsandwhitespace'), (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/' r'([gim]+\b|\B)', String.Regex, '#pop'), (r'(?=/)', Text, ('#pop', 'badregex')), default('#pop') ], 'badregex': [ (r'\n', Text, '#pop') ], 'root': [ include('commentsandwhitespace'), (r'\n', Text), (r'[^\S\n]+', Text), (r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|\.\.' r'(<<|>>>?|==?|!=?|[-<>+*%&|^/])=?', Operator, 'slashstartsregex'), (r'[{(\[;,]', Punctuation, 'slashstartsregex'), (r'[})\].]', Punctuation), (r'[=+\-*/]', Operator), (r'(for|in|while|do|break|return|continue|if|else|' r'throw|try|catch' r')\b', Keyword, 'slashstartsregex'), (r'(var)\b', Keyword.Declaration, 'slashstartsregex'), (r'(attr|def|fun)\b', Keyword.Reserved), (r'(true|false)\b', Keyword.Constant), (r'(eval|throw)\b', Name.Builtin), (r'`\S+`', Name.Builtin), (r'[$a-zA-Z_]\w*', Name.Other), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), (r'"', String.Double, 'dqstring'), (r"'(\\\\|\\'|[^'])*'", String.Single), ], 'dqstring': [ (r'\$\{[^"}]+?\}', String.Interpol), (r'\$', String.Double), (r'\\\\', String.Double), (r'\\"', String.Double), (r'[^\\"$]+', String.Double), (r'"', String.Double, '#pop'), ], } class LSLLexer(RegexLexer): """ For Second Life's Linden Scripting Language source code. .. 
versionadded:: 2.0 """ name = 'LSL' aliases = ['lsl'] filenames = ['*.lsl'] mimetypes = ['text/x-lsl'] flags = re.MULTILINE lsl_keywords = r'\b(?:do|else|for|if|jump|return|while)\b' lsl_types = r'\b(?:float|integer|key|list|quaternion|rotation|string|vector)\b' lsl_states = r'\b(?:(?:state)\s+\w+|default)\b' lsl_events = r'\b(?:state_(?:entry|exit)|touch(?:_(?:start|end))?|(?:land_)?collision(?:_(?:start|end))?|timer|listen|(?:no_)?sensor|control|(?:not_)?at_(?:rot_)?target|money|email|run_time_permissions|changed|attach|dataserver|moving_(?:start|end)|link_message|(?:on|object)_rez|remote_data|http_re(?:sponse|quest)|path_update|transaction_result)\b' lsl_functions_builtin = r'\b(?:ll(?:ReturnObjectsBy(?:ID|Owner)|Json(?:2List|[GS]etValue|ValueType)|Sin|Cos|Tan|Atan2|Sqrt|Pow|Abs|Fabs|Frand|Floor|Ceil|Round|Vec(?:Mag|Norm|Dist)|Rot(?:Between|2(?:Euler|Fwd|Left|Up))|(?:Euler|Axes)2Rot|Whisper|(?:Region|Owner)?Say|Shout|Listen(?:Control|Remove)?|Sensor(?:Repeat|Remove)?|Detected(?:Name|Key|Owner|Type|Pos|Vel|Grab|Rot|Group|LinkNumber)|Die|Ground|Wind|(?:[GS]et)(?:AnimationOverride|MemoryLimit|PrimMediaParams|ParcelMusicURL|Object(?:Desc|Name)|PhysicsMaterial|Status|Scale|Color|Alpha|Texture|Pos|Rot|Force|Torque)|ResetAnimationOverride|(?:Scale|Offset|Rotate)Texture|(?:Rot)?Target(?:Remove)?|(?:Stop)?MoveToTarget|Apply(?:Rotational)?Impulse|Set(?:KeyframedMotion|ContentType|RegionPos|(?:Angular)?Velocity|Buoyancy|HoverHeight|ForceAndTorque|TimerEvent|ScriptState|Damage|TextureAnim|Sound(?:Queueing|Radius)|Vehicle(?:Type|(?:Float|Vector|Rotation)Param)|(?:Touch|Sit)?Text|Camera(?:Eye|At)Offset|PrimitiveParams|ClickAction|Link(?:Alpha|Color|PrimitiveParams(?:Fast)?|Texture(?:Anim)?|Camera|Media)|RemoteScriptAccessPin|PayPrice|LocalRot)|ScaleByFactor|Get(?:(?:Max|Min)ScaleFactor|ClosestNavPoint|StaticPath|SimStats|Env|PrimitiveParams|Link(?:PrimitiveParams|Number(?:OfSides)?|Key|Name|Media)|HTTPHeader|FreeURLs|Object(?:Details|PermMask|PrimCount)|Parcel(?:MaxPrims|Details|Prim(?:Count|Owners))|Attached|(?:SPMax|Free|Used)Memory|Region(?:Name|TimeDilation|FPS|Corner|AgentCount)|Root(?:Position|Rotation)|UnixTime|(?:Parcel|Region)Flags|(?:Wall|GMT)clock|SimulatorHostname|BoundingBox|GeometricCenter|Creator|NumberOf(?:Prims|NotecardLines|Sides)|Animation(?:List)?|(?:Camera|Local)(?:Pos|Rot)|Vel|Accel|Omega|Time(?:stamp|OfDay)|(?:Object|CenterOf)?Mass|MassMKS|Energy|Owner|(?:Owner)?Key|SunDirection|Texture(?:Offset|Scale|Rot)|Inventory(?:Number|Name|Key|Type|Creator|PermMask)|Permissions(?:Key)?|StartParameter|List(?:Length|EntryType)|Date|Agent(?:Size|Info|Language|List)|LandOwnerAt|NotecardLine|Script(?:Name|State))|(?:Get|Reset|GetAndReset)Time|PlaySound(?:Slave)?|LoopSound(?:Master|Slave)?|(?:Trigger|Stop|Preload)Sound|(?:(?:Get|Delete)Sub|Insert)String|To(?:Upper|Lower)|Give(?:InventoryList|Money)|RezObject|(?:Stop)?LookAt|Sleep|CollisionFilter|(?:Take|Release)Controls|DetachFromAvatar|AttachToAvatar(?:Temp)?|InstantMessage|(?:GetNext)?Email|StopHover|MinEventDelay|RotLookAt|String(?:Length|Trim)|(?:Start|Stop)Animation|TargetOmega|RequestPermissions|(?:Create|Break)Link|BreakAllLinks|(?:Give|Remove)Inventory|Water|PassTouches|Request(?:Agent|Inventory)Data|TeleportAgent(?:Home|GlobalCoords)?|ModifyLand|CollisionSound|ResetScript|MessageLinked|PushObject|PassCollisions|AxisAngle2Rot|Rot2(?:Axis|Angle)|A(?:cos|sin)|AngleBetween|AllowInventoryDrop|SubStringIndex|List2(?:CSV|Integer|Json|Float|String|Key|Vector|Rot|List(?:Strided)?)|DeleteSubList|List(?:Statistics|Sort|Randomize|(?:Insert|Find|Re
place)List)|EdgeOfWorld|AdjustSoundVolume|Key2Name|TriggerSoundLimited|EjectFromLand|(?:CSV|ParseString)2List|OverMyLand|SameGroup|UnSit|Ground(?:Slope|Normal|Contour)|GroundRepel|(?:Set|Remove)VehicleFlags|(?:AvatarOn)?(?:Link)?SitTarget|Script(?:Danger|Profiler)|Dialog|VolumeDetect|ResetOtherScript|RemoteLoadScriptPin|(?:Open|Close)RemoteDataChannel|SendRemoteData|RemoteDataReply|(?:Integer|String)ToBase64|XorBase64|Log(?:10)?|Base64To(?:String|Integer)|ParseStringKeepNulls|RezAtRoot|RequestSimulatorData|ForceMouselook|(?:Load|Release|(?:E|Une)scape)URL|ParcelMedia(?:CommandList|Query)|ModPow|MapDestination|(?:RemoveFrom|AddTo|Reset)Land(?:Pass|Ban)List|(?:Set|Clear)CameraParams|HTTP(?:Request|Response)|TextBox|DetectedTouch(?:UV|Face|Pos|(?:N|Bin)ormal|ST)|(?:MD5|SHA1|DumpList2)String|Request(?:Secure)?URL|Clear(?:Prim|Link)Media|(?:Link)?ParticleSystem|(?:Get|Request)(?:Username|DisplayName)|RegionSayTo|CastRay|GenerateKey|TransferLindenDollars|ManageEstateAccess|(?:Create|Delete)Character|ExecCharacterCmd|Evade|FleeFrom|NavigateTo|PatrolPoints|Pursue|UpdateCharacter|WanderWithin))\b' lsl_constants_float = r'\b(?:DEG_TO_RAD|PI(?:_BY_TWO)?|RAD_TO_DEG|SQRT2|TWO_PI)\b' lsl_constants_integer = r'\b(?:JSON_APPEND|STATUS_(?:PHYSICS|ROTATE_[XYZ]|PHANTOM|SANDBOX|BLOCK_GRAB(?:_OBJECT)?|(?:DIE|RETURN)_AT_EDGE|CAST_SHADOWS|OK|MALFORMED_PARAMS|TYPE_MISMATCH|BOUNDS_ERROR|NOT_(?:FOUND|SUPPORTED)|INTERNAL_ERROR|WHITELIST_FAILED)|AGENT(?:_(?:BY_(?:LEGACY_|USER)NAME|FLYING|ATTACHMENTS|SCRIPTED|MOUSELOOK|SITTING|ON_OBJECT|AWAY|WALKING|IN_AIR|TYPING|CROUCHING|BUSY|ALWAYS_RUN|AUTOPILOT|LIST_(?:PARCEL(?:_OWNER)?|REGION)))?|CAMERA_(?:PITCH|DISTANCE|BEHINDNESS_(?:ANGLE|LAG)|(?:FOCUS|POSITION)(?:_(?:THRESHOLD|LOCKED|LAG))?|FOCUS_OFFSET|ACTIVE)|ANIM_ON|LOOP|REVERSE|PING_PONG|SMOOTH|ROTATE|SCALE|ALL_SIDES|LINK_(?:ROOT|SET|ALL_(?:OTHERS|CHILDREN)|THIS)|ACTIVE|PASSIVE|SCRIPTED|CONTROL_(?:FWD|BACK|(?:ROT_)?(?:LEFT|RIGHT)|UP|DOWN|(?:ML_)?LBUTTON)|PERMISSION_(?:RETURN_OBJECTS|DEBIT|OVERRIDE_ANIMATIONS|SILENT_ESTATE_MANAGEMENT|TAKE_CONTROLS|TRIGGER_ANIMATION|ATTACH|CHANGE_LINKS|(?:CONTROL|TRACK)_CAMERA|TELEPORT)|INVENTORY_(?:TEXTURE|SOUND|OBJECT|SCRIPT|LANDMARK|CLOTHING|NOTECARD|BODYPART|ANIMATION|GESTURE|ALL|NONE)|CHANGED_(?:INVENTORY|COLOR|SHAPE|SCALE|TEXTURE|LINK|ALLOWED_DROP|OWNER|REGION(?:_START)?|TELEPORT|MEDIA)|OBJECT_(?:(?:PHYSICS|SERVER|STREAMING)_COST|UNKNOWN_DETAIL|CHARACTER_TIME|PHANTOM|PHYSICS|TEMP_ON_REZ|NAME|DESC|POS|PRIM_EQUIVALENCE|RETURN_(?:PARCEL(?:_OWNER)?|REGION)|ROO?T|VELOCITY|OWNER|GROUP|CREATOR|ATTACHED_POINT|RENDER_WEIGHT|PATHFINDING_TYPE|(?:RUNNING|TOTAL)_SCRIPT_COUNT|SCRIPT_(?:MEMORY|TIME))|TYPE_(?:INTEGER|FLOAT|STRING|KEY|VECTOR|ROTATION|INVALID)|(?:DEBUG|PUBLIC)_CHANNEL|ATTACH_(?:AVATAR_CENTER|CHEST|HEAD|BACK|PELVIS|MOUTH|CHIN|NECK|NOSE|BELLY|[LR](?:SHOULDER|HAND|FOOT|EAR|EYE|[UL](?:ARM|LEG)|HIP)|(?:LEFT|RIGHT)_PEC|HUD_(?:CENTER_[12]|TOP_(?:RIGHT|CENTER|LEFT)|BOTTOM(?:_(?:RIGHT|LEFT))?))|LAND_(?:LEVEL|RAISE|LOWER|SMOOTH|NOISE|REVERT)|DATA_(?:ONLINE|NAME|BORN|SIM_(?:POS|STATUS|RATING)|PAYINFO)|PAYMENT_INFO_(?:ON_FILE|USED)|REMOTE_DATA_(?:CHANNEL|REQUEST|REPLY)|PSYS_(?:PART_(?:BF_(?:ZERO|ONE(?:_MINUS_(?:DEST_COLOR|SOURCE_(ALPHA|COLOR)))?|DEST_COLOR|SOURCE_(ALPHA|COLOR))|BLEND_FUNC_(DEST|SOURCE)|FLAGS|(?:START|END)_(?:COLOR|ALPHA|SCALE|GLOW)|MAX_AGE|(?:RIBBON|WIND|INTERP_(?:COLOR|SCALE)|BOUNCE|FOLLOW_(?:SRC|VELOCITY)|TARGET_(?:POS|LINEAR)|EMISSIVE)_MASK)|SRC_(?:MAX_AGE|PATTERN|ANGLE_(?:BEGIN|END)|BURST_(?:RATE|PART_COUNT|RADIUS|SPEED_(?:MIN|MAX))|ACCEL|TEXTURE|TARGET_KEY|OMEGA|PATTERN_(?:DROP
|EXPLODE|ANGLE(?:_CONE(?:_EMPTY)?)?)))|VEHICLE_(?:REFERENCE_FRAME|TYPE_(?:NONE|SLED|CAR|BOAT|AIRPLANE|BALLOON)|(?:LINEAR|ANGULAR)_(?:FRICTION_TIMESCALE|MOTOR_DIRECTION)|LINEAR_MOTOR_OFFSET|HOVER_(?:HEIGHT|EFFICIENCY|TIMESCALE)|BUOYANCY|(?:LINEAR|ANGULAR)_(?:DEFLECTION_(?:EFFICIENCY|TIMESCALE)|MOTOR_(?:DECAY_)?TIMESCALE)|VERTICAL_ATTRACTION_(?:EFFICIENCY|TIMESCALE)|BANKING_(?:EFFICIENCY|MIX|TIMESCALE)|FLAG_(?:NO_DEFLECTION_UP|LIMIT_(?:ROLL_ONLY|MOTOR_UP)|HOVER_(?:(?:WATER|TERRAIN|UP)_ONLY|GLOBAL_HEIGHT)|MOUSELOOK_(?:STEER|BANK)|CAMERA_DECOUPLED))|PRIM_(?:TYPE(?:_(?:BOX|CYLINDER|PRISM|SPHERE|TORUS|TUBE|RING|SCULPT))?|HOLE_(?:DEFAULT|CIRCLE|SQUARE|TRIANGLE)|MATERIAL(?:_(?:STONE|METAL|GLASS|WOOD|FLESH|PLASTIC|RUBBER))?|SHINY_(?:NONE|LOW|MEDIUM|HIGH)|BUMP_(?:NONE|BRIGHT|DARK|WOOD|BARK|BRICKS|CHECKER|CONCRETE|TILE|STONE|DISKS|GRAVEL|BLOBS|SIDING|LARGETILE|STUCCO|SUCTION|WEAVE)|TEXGEN_(?:DEFAULT|PLANAR)|SCULPT_(?:TYPE_(?:SPHERE|TORUS|PLANE|CYLINDER|MASK)|FLAG_(?:MIRROR|INVERT))|PHYSICS(?:_(?:SHAPE_(?:CONVEX|NONE|PRIM|TYPE)))?|(?:POS|ROT)_LOCAL|SLICE|TEXT|FLEXIBLE|POINT_LIGHT|TEMP_ON_REZ|PHANTOM|POSITION|SIZE|ROTATION|TEXTURE|NAME|OMEGA|DESC|LINK_TARGET|COLOR|BUMP_SHINY|FULLBRIGHT|TEXGEN|GLOW|MEDIA_(?:ALT_IMAGE_ENABLE|CONTROLS|(?:CURRENT|HOME)_URL|AUTO_(?:LOOP|PLAY|SCALE|ZOOM)|FIRST_CLICK_INTERACT|(?:WIDTH|HEIGHT)_PIXELS|WHITELIST(?:_ENABLE)?|PERMS_(?:INTERACT|CONTROL)|PARAM_MAX|CONTROLS_(?:STANDARD|MINI)|PERM_(?:NONE|OWNER|GROUP|ANYONE)|MAX_(?:URL_LENGTH|WHITELIST_(?:SIZE|COUNT)|(?:WIDTH|HEIGHT)_PIXELS)))|MASK_(?:BASE|OWNER|GROUP|EVERYONE|NEXT)|PERM_(?:TRANSFER|MODIFY|COPY|MOVE|ALL)|PARCEL_(?:MEDIA_COMMAND_(?:STOP|PAUSE|PLAY|LOOP|TEXTURE|URL|TIME|AGENT|UNLOAD|AUTO_ALIGN|TYPE|SIZE|DESC|LOOP_SET)|FLAG_(?:ALLOW_(?:FLY|(?:GROUP_)?SCRIPTS|LANDMARK|TERRAFORM|DAMAGE|CREATE_(?:GROUP_)?OBJECTS)|USE_(?:ACCESS_(?:GROUP|LIST)|BAN_LIST|LAND_PASS_LIST)|LOCAL_SOUND_ONLY|RESTRICT_PUSHOBJECT|ALLOW_(?:GROUP|ALL)_OBJECT_ENTRY)|COUNT_(?:TOTAL|OWNER|GROUP|OTHER|SELECTED|TEMP)|DETAILS_(?:NAME|DESC|OWNER|GROUP|AREA|ID|SEE_AVATARS))|LIST_STAT_(?:MAX|MIN|MEAN|MEDIAN|STD_DEV|SUM(?:_SQUARES)?|NUM_COUNT|GEOMETRIC_MEAN|RANGE)|PAY_(?:HIDE|DEFAULT)|REGION_FLAG_(?:ALLOW_DAMAGE|FIXED_SUN|BLOCK_TERRAFORM|SANDBOX|DISABLE_(?:COLLISIONS|PHYSICS)|BLOCK_FLY|ALLOW_DIRECT_TELEPORT|RESTRICT_PUSHOBJECT)|HTTP_(?:METHOD|MIMETYPE|BODY_(?:MAXLENGTH|TRUNCATED)|CUSTOM_HEADER|PRAGMA_NO_CACHE|VERBOSE_THROTTLE|VERIFY_CERT)|STRING_(?:TRIM(?:_(?:HEAD|TAIL))?)|CLICK_ACTION_(?:NONE|TOUCH|SIT|BUY|PAY|OPEN(?:_MEDIA)?|PLAY|ZOOM)|TOUCH_INVALID_FACE|PROFILE_(?:NONE|SCRIPT_MEMORY)|RC_(?:DATA_FLAGS|DETECT_PHANTOM|GET_(?:LINK_NUM|NORMAL|ROOT_KEY)|MAX_HITS|REJECT_(?:TYPES|AGENTS|(?:NON)?PHYSICAL|LAND))|RCERR_(?:CAST_TIME_EXCEEDED|SIM_PERF_LOW|UNKNOWN)|ESTATE_ACCESS_(?:ALLOWED_(?:AGENT|GROUP)_(?:ADD|REMOVE)|BANNED_AGENT_(?:ADD|REMOVE))|DENSITY|FRICTION|RESTITUTION|GRAVITY_MULTIPLIER|KFM_(?:COMMAND|CMD_(?:PLAY|STOP|PAUSE|SET_MODE)|MODE|FORWARD|LOOP|PING_PONG|REVERSE|DATA|ROTATION|TRANSLATION)|ERR_(?:GENERIC|PARCEL_PERMISSIONS|MALFORMED_PARAMS|RUNTIME_PERMISSIONS|THROTTLED)|CHARACTER_(?:CMD_(?:(?:SMOOTH_)?STOP|JUMP)|DESIRED_(?:TURN_)?SPEED|RADIUS|STAY_WITHIN_PARCEL|LENGTH|ORIENTATION|ACCOUNT_FOR_SKIPPED_FRAMES|AVOIDANCE_MODE|TYPE(?:_(?:[A-D]|NONE))?|MAX_(?:DECEL|TURN_RADIUS|(?:ACCEL|SPEED)))|PURSUIT_(?:OFFSET|FUZZ_FACTOR|GOAL_TOLERANCE|INTERCEPT)|REQUIRE_LINE_OF_SIGHT|FORCE_DIRECT_PATH|VERTICAL|HORIZONTAL|AVOID_(?:CHARACTERS|DYNAMIC_OBSTACLES|NONE)|PU_(?:EVADE_(?:HIDDEN|SPOTTED)|FAILURE_(?:DYNAMIC_PATHFINDING_DISABLED|INVALID_(?:GOAL|START)|NO_(?:NAVMESH|VALID_DES
TINATION)|OTHER|TARGET_GONE|(?:PARCEL_)?UNREACHABLE)|(?:GOAL|SLOWDOWN_DISTANCE)_REACHED)|TRAVERSAL_TYPE(?:_(?:FAST|NONE|SLOW))?|CONTENT_TYPE_(?:ATOM|FORM|HTML|JSON|LLSD|RSS|TEXT|XHTML|XML)|GCNP_(?:RADIUS|STATIC)|(?:PATROL|WANDER)_PAUSE_AT_WAYPOINTS|OPT_(?:AVATAR|CHARACTER|EXCLUSION_VOLUME|LEGACY_LINKSET|MATERIAL_VOLUME|OTHER|STATIC_OBSTACLE|WALKABLE)|SIM_STAT_PCT_CHARS_STEPPED)\b' lsl_constants_integer_boolean = r'\b(?:FALSE|TRUE)\b' lsl_constants_rotation = r'\b(?:ZERO_ROTATION)\b' lsl_constants_string = r'\b(?:EOF|JSON_(?:ARRAY|DELETE|FALSE|INVALID|NULL|NUMBER|OBJECT|STRING|TRUE)|NULL_KEY|TEXTURE_(?:BLANK|DEFAULT|MEDIA|PLYWOOD|TRANSPARENT)|URL_REQUEST_(?:GRANTED|DENIED))\b' lsl_constants_vector = r'\b(?:TOUCH_INVALID_(?:TEXCOORD|VECTOR)|ZERO_VECTOR)\b' lsl_invalid_broken = r'\b(?:LAND_(?:LARGE|MEDIUM|SMALL)_BRUSH)\b' lsl_invalid_deprecated = r'\b(?:ATTACH_[LR]PEC|DATA_RATING|OBJECT_ATTACHMENT_(?:GEOMETRY_BYTES|SURFACE_AREA)|PRIM_(?:CAST_SHADOWS|MATERIAL_LIGHT|TYPE_LEGACY)|PSYS_SRC_(?:INNER|OUTER)ANGLE|VEHICLE_FLAG_NO_FLY_UP|ll(?:Cloud|Make(?:Explosion|Fountain|Smoke|Fire)|RemoteDataSetRegion|Sound(?:Preload)?|XorBase64Strings(?:Correct)?))\b' lsl_invalid_illegal = r'\b(?:event)\b' lsl_invalid_unimplemented = r'\b(?:CHARACTER_(?:MAX_ANGULAR_(?:ACCEL|SPEED)|TURN_SPEED_MULTIPLIER)|PERMISSION_(?:CHANGE_(?:JOINTS|PERMISSIONS)|RELEASE_OWNERSHIP|REMAP_CONTROLS)|PRIM_PHYSICS_MATERIAL|PSYS_SRC_OBJ_REL_MASK|ll(?:CollisionSprite|(?:Stop)?PointAt|(?:(?:Refresh|Set)Prim)URL|(?:Take|Release)Camera|RemoteLoadScript))\b' lsl_reserved_godmode = r'\b(?:ll(?:GodLikeRezObject|Set(?:Inventory|Object)PermMask))\b' lsl_reserved_log = r'\b(?:print)\b' lsl_operators = r'\+\+|\-\-|<<|>>|&&?|\|\|?|\^|~|[!%<>=*+\-/]=?' tokens = { 'root': [ (r'//.*?\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (r'"', String.Double, 'string'), (lsl_keywords, Keyword), (lsl_types, Keyword.Type), (lsl_states, Name.Class), (lsl_events, Name.Builtin), (lsl_functions_builtin, Name.Function), (lsl_constants_float, Keyword.Constant), (lsl_constants_integer, Keyword.Constant), (lsl_constants_integer_boolean, Keyword.Constant), (lsl_constants_rotation, Keyword.Constant), (lsl_constants_string, Keyword.Constant), (lsl_constants_vector, Keyword.Constant), (lsl_invalid_broken, Error), (lsl_invalid_deprecated, Error), (lsl_invalid_illegal, Error), (lsl_invalid_unimplemented, Error), (lsl_reserved_godmode, Keyword.Reserved), (lsl_reserved_log, Keyword.Reserved), (r'\b([a-zA-Z_]\w*)\b', Name.Variable), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d*', Number.Float), (r'(\d+\.\d*|\.\d+)', Number.Float), (r'0[xX][0-9a-fA-F]+', Number.Hex), (r'\d+', Number.Integer), (lsl_operators, Operator), (r':=?', Error), (r'[,;{}()\[\]]', Punctuation), (r'\n+', Whitespace), (r'\s+', Whitespace) ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'string': [ (r'\\([nt"\\])', String.Escape), (r'"', String.Double, '#pop'), (r'\\.', Error), (r'[^"\\]+', String.Double), ] } class AppleScriptLexer(RegexLexer): """ For `AppleScript source code `_, including `AppleScript Studio `_. Contributed by Andreas Amann . .. 
versionadded:: 1.0 """ name = 'AppleScript' aliases = ['applescript'] filenames = ['*.applescript'] flags = re.MULTILINE | re.DOTALL Identifiers = r'[a-zA-Z]\w*' # XXX: use words() for all of these Literals = ('AppleScript', 'current application', 'false', 'linefeed', 'missing value', 'pi', 'quote', 'result', 'return', 'space', 'tab', 'text item delimiters', 'true', 'version') Classes = ('alias ', 'application ', 'boolean ', 'class ', 'constant ', 'date ', 'file ', 'integer ', 'list ', 'number ', 'POSIX file ', 'real ', 'record ', 'reference ', 'RGB color ', 'script ', 'text ', 'unit types', '(?:Unicode )?text', 'string') BuiltIn = ('attachment', 'attribute run', 'character', 'day', 'month', 'paragraph', 'word', 'year') HandlerParams = ('about', 'above', 'against', 'apart from', 'around', 'aside from', 'at', 'below', 'beneath', 'beside', 'between', 'for', 'given', 'instead of', 'on', 'onto', 'out of', 'over', 'since') Commands = ('ASCII (character|number)', 'activate', 'beep', 'choose URL', 'choose application', 'choose color', 'choose file( name)?', 'choose folder', 'choose from list', 'choose remote application', 'clipboard info', 'close( access)?', 'copy', 'count', 'current date', 'delay', 'delete', 'display (alert|dialog)', 'do shell script', 'duplicate', 'exists', 'get eof', 'get volume settings', 'info for', 'launch', 'list (disks|folder)', 'load script', 'log', 'make', 'mount volume', 'new', 'offset', 'open( (for access|location))?', 'path to', 'print', 'quit', 'random number', 'read', 'round', 'run( script)?', 'say', 'scripting components', 'set (eof|the clipboard to|volume)', 'store script', 'summarize', 'system attribute', 'system info', 'the clipboard', 'time to GMT', 'write', 'quoted form') References = ('(in )?back of', '(in )?front of', '[0-9]+(st|nd|rd|th)', 'first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh', 'eighth', 'ninth', 'tenth', 'after', 'back', 'before', 'behind', 'every', 'front', 'index', 'last', 'middle', 'some', 'that', 'through', 'thru', 'where', 'whose') Operators = ("and", "or", "is equal", "equals", "(is )?equal to", "is not", "isn't", "isn't equal( to)?", "is not equal( to)?", "doesn't equal", "does not equal", "(is )?greater than", "comes after", "is not less than or equal( to)?", "isn't less than or equal( to)?", "(is )?less than", "comes before", "is not greater than or equal( to)?", "isn't greater than or equal( to)?", "(is )?greater than or equal( to)?", "is not less than", "isn't less than", "does not come before", "doesn't come before", "(is )?less than or equal( to)?", "is not greater than", "isn't greater than", "does not come after", "doesn't come after", "starts? with", "begins? with", "ends? 
with", "contains?", "does not contain", "doesn't contain", "is in", "is contained by", "is not in", "is not contained by", "isn't contained by", "div", "mod", "not", "(a )?(ref( to)?|reference to)", "is", "does") Control = ('considering', 'else', 'error', 'exit', 'from', 'if', 'ignoring', 'in', 'repeat', 'tell', 'then', 'times', 'to', 'try', 'until', 'using terms from', 'while', 'whith', 'with timeout( of)?', 'with transaction', 'by', 'continue', 'end', 'its?', 'me', 'my', 'return', 'of', 'as') Declarations = ('global', 'local', 'prop(erty)?', 'set', 'get') Reserved = ('but', 'put', 'returning', 'the') StudioClasses = ('action cell', 'alert reply', 'application', 'box', 'browser( cell)?', 'bundle', 'button( cell)?', 'cell', 'clip view', 'color well', 'color-panel', 'combo box( item)?', 'control', 'data( (cell|column|item|row|source))?', 'default entry', 'dialog reply', 'document', 'drag info', 'drawer', 'event', 'font(-panel)?', 'formatter', 'image( (cell|view))?', 'matrix', 'menu( item)?', 'item', 'movie( view)?', 'open-panel', 'outline view', 'panel', 'pasteboard', 'plugin', 'popup button', 'progress indicator', 'responder', 'save-panel', 'scroll view', 'secure text field( cell)?', 'slider', 'sound', 'split view', 'stepper', 'tab view( item)?', 'table( (column|header cell|header view|view))', 'text( (field( cell)?|view))?', 'toolbar( item)?', 'user-defaults', 'view', 'window') StudioEvents = ('accept outline drop', 'accept table drop', 'action', 'activated', 'alert ended', 'awake from nib', 'became key', 'became main', 'begin editing', 'bounds changed', 'cell value', 'cell value changed', 'change cell value', 'change item value', 'changed', 'child of item', 'choose menu item', 'clicked', 'clicked toolbar item', 'closed', 'column clicked', 'column moved', 'column resized', 'conclude drop', 'data representation', 'deminiaturized', 'dialog ended', 'document nib name', 'double clicked', 'drag( (entered|exited|updated))?', 'drop', 'end editing', 'exposed', 'idle', 'item expandable', 'item value', 'item value changed', 'items changed', 'keyboard down', 'keyboard up', 'launched', 'load data representation', 'miniaturized', 'mouse down', 'mouse dragged', 'mouse entered', 'mouse exited', 'mouse moved', 'mouse up', 'moved', 'number of browser rows', 'number of items', 'number of rows', 'open untitled', 'opened', 'panel ended', 'parameters updated', 'plugin loaded', 'prepare drop', 'prepare outline drag', 'prepare outline drop', 'prepare table drag', 'prepare table drop', 'read from file', 'resigned active', 'resigned key', 'resigned main', 'resized( sub views)?', 'right mouse down', 'right mouse dragged', 'right mouse up', 'rows changed', 'scroll wheel', 'selected tab view item', 'selection changed', 'selection changing', 'should begin editing', 'should close', 'should collapse item', 'should end editing', 'should expand item', 'should open( untitled)?', 'should quit( after last window closed)?', 'should select column', 'should select item', 'should select row', 'should select tab view item', 'should selection change', 'should zoom', 'shown', 'update menu item', 'update parameters', 'update toolbar item', 'was hidden', 'was miniaturized', 'will become active', 'will close', 'will dismiss', 'will display browser cell', 'will display cell', 'will display item cell', 'will display outline cell', 'will finish launching', 'will hide', 'will miniaturize', 'will move', 'will open', 'will pop up', 'will quit', 'will resign active', 'will resize( sub views)?', 'will select tab view item', 'will show', 
'will zoom', 'write to file', 'zoomed') StudioCommands = ('animate', 'append', 'call method', 'center', 'close drawer', 'close panel', 'display', 'display alert', 'display dialog', 'display panel', 'go', 'hide', 'highlight', 'increment', 'item for', 'load image', 'load movie', 'load nib', 'load panel', 'load sound', 'localized string', 'lock focus', 'log', 'open drawer', 'path for', 'pause', 'perform action', 'play', 'register', 'resume', 'scroll', 'select( all)?', 'show', 'size to fit', 'start', 'step back', 'step forward', 'stop', 'synchronize', 'unlock focus', 'update') StudioProperties = ('accepts arrow key', 'action method', 'active', 'alignment', 'allowed identifiers', 'allows branch selection', 'allows column reordering', 'allows column resizing', 'allows column selection', 'allows customization', 'allows editing text attributes', 'allows empty selection', 'allows mixed state', 'allows multiple selection', 'allows reordering', 'allows undo', 'alpha( value)?', 'alternate image', 'alternate increment value', 'alternate title', 'animation delay', 'associated file name', 'associated object', 'auto completes', 'auto display', 'auto enables items', 'auto repeat', 'auto resizes( outline column)?', 'auto save expanded items', 'auto save name', 'auto save table columns', 'auto saves configuration', 'auto scroll', 'auto sizes all columns to fit', 'auto sizes cells', 'background color', 'bezel state', 'bezel style', 'bezeled', 'border rect', 'border type', 'bordered', 'bounds( rotation)?', 'box type', 'button returned', 'button type', 'can choose directories', 'can choose files', 'can draw', 'can hide', 'cell( (background color|size|type))?', 'characters', 'class', 'click count', 'clicked( data)? column', 'clicked data item', 'clicked( data)? row', 'closeable', 'collating', 'color( (mode|panel))', 'command key down', 'configuration', 'content(s| (size|view( margins)?))?', 'context', 'continuous', 'control key down', 'control size', 'control tint', 'control view', 'controller visible', 'coordinate system', 'copies( on scroll)?', 'corner view', 'current cell', 'current column', 'current( field)? editor', 'current( menu)? item', 'current row', 'current tab view item', 'data source', 'default identifiers', 'delta (x|y|z)', 'destination window', 'directory', 'display mode', 'displayed cell', 'document( (edited|rect|view))?', 'double value', 'dragged column', 'dragged distance', 'dragged items', 'draws( cell)? background', 'draws grid', 'dynamically scrolls', 'echos bullets', 'edge', 'editable', 'edited( data)? column', 'edited data item', 'edited( data)? 
row', 'enabled', 'enclosing scroll view', 'ending page', 'error handling', 'event number', 'event type', 'excluded from windows menu', 'executable path', 'expanded', 'fax number', 'field editor', 'file kind', 'file name', 'file type', 'first responder', 'first visible column', 'flipped', 'floating', 'font( panel)?', 'formatter', 'frameworks path', 'frontmost', 'gave up', 'grid color', 'has data items', 'has horizontal ruler', 'has horizontal scroller', 'has parent data item', 'has resize indicator', 'has shadow', 'has sub menu', 'has vertical ruler', 'has vertical scroller', 'header cell', 'header view', 'hidden', 'hides when deactivated', 'highlights by', 'horizontal line scroll', 'horizontal page scroll', 'horizontal ruler view', 'horizontally resizable', 'icon image', 'id', 'identifier', 'ignores multiple clicks', 'image( (alignment|dims when disabled|frame style|scaling))?', 'imports graphics', 'increment value', 'indentation per level', 'indeterminate', 'index', 'integer value', 'intercell spacing', 'item height', 'key( (code|equivalent( modifier)?|window))?', 'knob thickness', 'label', 'last( visible)? column', 'leading offset', 'leaf', 'level', 'line scroll', 'loaded', 'localized sort', 'location', 'loop mode', 'main( (bunde|menu|window))?', 'marker follows cell', 'matrix mode', 'maximum( content)? size', 'maximum visible columns', 'menu( form representation)?', 'miniaturizable', 'miniaturized', 'minimized image', 'minimized title', 'minimum column width', 'minimum( content)? size', 'modal', 'modified', 'mouse down state', 'movie( (controller|file|rect))?', 'muted', 'name', 'needs display', 'next state', 'next text', 'number of tick marks', 'only tick mark values', 'opaque', 'open panel', 'option key down', 'outline table column', 'page scroll', 'pages across', 'pages down', 'palette label', 'pane splitter', 'parent data item', 'parent window', 'pasteboard', 'path( (names|separator))?', 'playing', 'plays every frame', 'plays selection only', 'position', 'preferred edge', 'preferred type', 'pressure', 'previous text', 'prompt', 'properties', 'prototype cell', 'pulls down', 'rate', 'released when closed', 'repeated', 'requested print time', 'required file type', 'resizable', 'resized column', 'resource path', 'returns records', 'reuses columns', 'rich text', 'roll over', 'row height', 'rulers visible', 'save panel', 'scripts path', 'scrollable', 'selectable( identifiers)?', 'selected cell', 'selected( data)? columns?', 'selected data items?', 'selected( data)? 
rows?', 'selected item identifier', 'selection by rect', 'send action on arrow key', 'sends action when done editing', 'separates columns', 'separator item', 'sequence number', 'services menu', 'shared frameworks path', 'shared support path', 'sheet', 'shift key down', 'shows alpha', 'shows state by', 'size( mode)?', 'smart insert delete enabled', 'sort case sensitivity', 'sort column', 'sort order', 'sort type', 'sorted( data rows)?', 'sound', 'source( mask)?', 'spell checking enabled', 'starting page', 'state', 'string value', 'sub menu', 'super menu', 'super view', 'tab key traverses cells', 'tab state', 'tab type', 'tab view', 'table view', 'tag', 'target( printer)?', 'text color', 'text container insert', 'text container origin', 'text returned', 'tick mark position', 'time stamp', 'title(d| (cell|font|height|position|rect))?', 'tool tip', 'toolbar', 'trailing offset', 'transparent', 'treat packages as directories', 'truncated labels', 'types', 'unmodified characters', 'update views', 'use sort indicator', 'user defaults', 'uses data source', 'uses ruler', 'uses threaded animation', 'uses title from previous column', 'value wraps', 'version', 'vertical( (line scroll|page scroll|ruler view))?', 'vertically resizable', 'view', 'visible( document rect)?', 'volume', 'width', 'window', 'windows menu', 'wraps', 'zoomable', 'zoomed') tokens = { 'root': [ (r'\s+', Text), (u'¬\\n', String.Escape), (r"'s\s+", Text), # This is a possessive, consider moving (r'(--|#).*?$', Comment), (r'\(\*', Comment.Multiline, 'comment'), (r'[(){}!,.:]', Punctuation), (u'(«)([^»]+)(»)', bygroups(Text, Name.Builtin, Text)), (r'\b((?:considering|ignoring)\s*)' r'(application responses|case|diacriticals|hyphens|' r'numeric strings|punctuation|white space)', bygroups(Keyword, Name.Builtin)), (u'(-|\\*|\\+|&|≠|>=?|<=?|=|≥|≤|/|÷|\\^)', Operator), (r"\b(%s)\b" % '|'.join(Operators), Operator.Word), (r'^(\s*(?:on|end)\s+)' r'(%s)' % '|'.join(StudioEvents[::-1]), bygroups(Keyword, Name.Function)), (r'^(\s*)(in|on|script|to)(\s+)', bygroups(Text, Keyword, Text)), (r'\b(as )(%s)\b' % '|'.join(Classes), bygroups(Keyword, Name.Class)), (r'\b(%s)\b' % '|'.join(Literals), Name.Constant), (r'\b(%s)\b' % '|'.join(Commands), Name.Builtin), (r'\b(%s)\b' % '|'.join(Control), Keyword), (r'\b(%s)\b' % '|'.join(Declarations), Keyword), (r'\b(%s)\b' % '|'.join(Reserved), Name.Builtin), (r'\b(%s)s?\b' % '|'.join(BuiltIn), Name.Builtin), (r'\b(%s)\b' % '|'.join(HandlerParams), Name.Builtin), (r'\b(%s)\b' % '|'.join(StudioProperties), Name.Attribute), (r'\b(%s)s?\b' % '|'.join(StudioClasses), Name.Builtin), (r'\b(%s)\b' % '|'.join(StudioCommands), Name.Builtin), (r'\b(%s)\b' % '|'.join(References), Name.Builtin), (r'"(\\\\|\\"|[^"])*"', String.Double), (r'\b(%s)\b' % Identifiers, Name.Variable), (r'[-+]?(\d+\.\d*|\d*\.\d+)(E[-+][0-9]+)?', Number.Float), (r'[-+]?\d+', Number.Integer), ], 'comment': [ (r'\(\*', Comment.Multiline, '#push'), (r'\*\)', Comment.Multiline, '#pop'), ('[^*(]+', Comment.Multiline), ('[*(]', Comment.Multiline), ], } class RexxLexer(RegexLexer): """ `Rexx `_ is a scripting language available for a wide range of different platforms with its roots found on mainframe systems. It is popular for I/O- and data based tasks and can act as glue language to bind different applications together. .. 
versionadded:: 2.0 """ name = 'Rexx' aliases = ['rexx', 'arexx'] filenames = ['*.rexx', '*.rex', '*.rx', '*.arexx'] mimetypes = ['text/x-rexx'] flags = re.IGNORECASE tokens = { 'root': [ (r'\s', Whitespace), (r'/\*', Comment.Multiline, 'comment'), (r'"', String, 'string_double'), (r"'", String, 'string_single'), (r'[0-9]+(\.[0-9]+)?(e[+-]?[0-9])?', Number), (r'([a-z_]\w*)(\s*)(:)(\s*)(procedure)\b', bygroups(Name.Function, Whitespace, Operator, Whitespace, Keyword.Declaration)), (r'([a-z_]\w*)(\s*)(:)', bygroups(Name.Label, Whitespace, Operator)), include('function'), include('keyword'), include('operator'), (r'[a-z_]\w*', Text), ], 'function': [ (words(( 'abbrev', 'abs', 'address', 'arg', 'b2x', 'bitand', 'bitor', 'bitxor', 'c2d', 'c2x', 'center', 'charin', 'charout', 'chars', 'compare', 'condition', 'copies', 'd2c', 'd2x', 'datatype', 'date', 'delstr', 'delword', 'digits', 'errortext', 'form', 'format', 'fuzz', 'insert', 'lastpos', 'left', 'length', 'linein', 'lineout', 'lines', 'max', 'min', 'overlay', 'pos', 'queued', 'random', 'reverse', 'right', 'sign', 'sourceline', 'space', 'stream', 'strip', 'substr', 'subword', 'symbol', 'time', 'trace', 'translate', 'trunc', 'value', 'verify', 'word', 'wordindex', 'wordlength', 'wordpos', 'words', 'x2b', 'x2c', 'x2d', 'xrange'), suffix=r'(\s*)(\()'), bygroups(Name.Builtin, Whitespace, Operator)), ], 'keyword': [ (r'(address|arg|by|call|do|drop|else|end|exit|for|forever|if|' r'interpret|iterate|leave|nop|numeric|off|on|options|parse|' r'pull|push|queue|return|say|select|signal|to|then|trace|until|' r'while)\b', Keyword.Reserved), ], 'operator': [ (r'(-|//|/|\(|\)|\*\*|\*|\\<<|\\<|\\==|\\=|\\>>|\\>|\\|\|\||\||' r'&&|&|%|\+|<<=|<<|<=|<>|<|==|=|><|>=|>>=|>>|>|¬<<|¬<|¬==|¬=|' r'¬>>|¬>|¬|\.|,)', Operator), ], 'string_double': [ (r'[^"\n]+', String), (r'""', String), (r'"', String, '#pop'), (r'\n', Text, '#pop'), # Stray linefeed also terminates strings. ], 'string_single': [ (r'[^\'\n]', String), (r'\'\'', String), (r'\'', String, '#pop'), (r'\n', Text, '#pop'), # Stray linefeed also terminates strings. ], 'comment': [ (r'[^*]+', Comment.Multiline), (r'\*/', Comment.Multiline, '#pop'), (r'\*', Comment.Multiline), ] } _c = lambda s: re.compile(s, re.MULTILINE) _ADDRESS_COMMAND_PATTERN = _c(r'^\s*address\s+command\b') _ADDRESS_PATTERN = _c(r'^\s*address\s+') _DO_WHILE_PATTERN = _c(r'^\s*do\s+while\b') _IF_THEN_DO_PATTERN = _c(r'^\s*if\b.+\bthen\s+do\s*$') _PROCEDURE_PATTERN = _c(r'^\s*([a-z_]\w*)(\s*)(:)(\s*)(procedure)\b') _ELSE_DO_PATTERN = _c(r'\belse\s+do\s*$') _PARSE_ARG_PATTERN = _c(r'^\s*parse\s+(upper\s+)?(arg|value)\b') PATTERNS_AND_WEIGHTS = ( (_ADDRESS_COMMAND_PATTERN, 0.2), (_ADDRESS_PATTERN, 0.05), (_DO_WHILE_PATTERN, 0.1), (_ELSE_DO_PATTERN, 0.1), (_IF_THEN_DO_PATTERN, 0.1), (_PROCEDURE_PATTERN, 0.5), (_PARSE_ARG_PATTERN, 0.2), ) def analyse_text(text): """ Check for inital comment and patterns that distinguish Rexx from other C-like languages. """ if re.search(r'/\*\**\s*rexx', text, re.IGNORECASE): # Header matches MVS Rexx requirements, this is certainly a Rexx # script. return 1.0 elif text.startswith('/*'): # Header matches general Rexx requirements; the source code might # still be any language using C comments such as C++, C# or Java. lowerText = text.lower() result = sum(weight for (pattern, weight) in RexxLexer.PATTERNS_AND_WEIGHTS if pattern.search(lowerText)) + 0.01 return min(result, 1.0) class MOOCodeLexer(RegexLexer): """ For `MOOCode `_ (the MOO scripting language). .. 
versionadded:: 0.9 """ name = 'MOOCode' filenames = ['*.moo'] aliases = ['moocode', 'moo'] mimetypes = ['text/x-moocode'] tokens = { 'root': [ # Numbers (r'(0|[1-9][0-9_]*)', Number.Integer), # Strings (r'"(\\\\|\\"|[^"])*"', String), # exceptions (r'(E_PERM|E_DIV)', Name.Exception), # db-refs (r'((#[-0-9]+)|(\$\w+))', Name.Entity), # Keywords (r'\b(if|else|elseif|endif|for|endfor|fork|endfork|while' r'|endwhile|break|continue|return|try' r'|except|endtry|finally|in)\b', Keyword), # builtins (r'(random|length)', Name.Builtin), # special variables (r'(player|caller|this|args)', Name.Variable.Instance), # skip whitespace (r'\s+', Text), (r'\n', Text), # other operators (r'([!;=,{}&|:.\[\]@()<>?]+)', Operator), # function call (r'(\w+)(\()', bygroups(Name.Function, Operator)), # variables (r'(\w+)', Text), ] } class HybrisLexer(RegexLexer): """ For `Hybris `_ source code. .. versionadded:: 1.4 """ name = 'Hybris' aliases = ['hybris', 'hy'] filenames = ['*.hy', '*.hyb'] mimetypes = ['text/x-hybris', 'application/x-hybris'] flags = re.MULTILINE | re.DOTALL tokens = { 'root': [ # method names (r'^(\s*(?:function|method|operator\s+)+?)' r'([a-zA-Z_]\w*)' r'(\s*)(\()', bygroups(Keyword, Name.Function, Text, Operator)), (r'[^\S\n]+', Text), (r'//.*?\n', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'@[a-zA-Z_][\w.]*', Name.Decorator), (r'(break|case|catch|next|default|do|else|finally|for|foreach|of|' r'unless|if|new|return|switch|me|throw|try|while)\b', Keyword), (r'(extends|private|protected|public|static|throws|function|method|' r'operator)\b', Keyword.Declaration), (r'(true|false|null|__FILE__|__LINE__|__VERSION__|__LIB_PATH__|' r'__INC_PATH__)\b', Keyword.Constant), (r'(class|struct)(\s+)', bygroups(Keyword.Declaration, Text), 'class'), (r'(import|include)(\s+)', bygroups(Keyword.Namespace, Text), 'import'), (words(( 'gc_collect', 'gc_mm_items', 'gc_mm_usage', 'gc_collect_threshold', 'urlencode', 'urldecode', 'base64encode', 'base64decode', 'sha1', 'crc32', 'sha2', 'md5', 'md5_file', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'exp', 'fabs', 'floor', 'fmod', 'log', 'log10', 'pow', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'isint', 'isfloat', 'ischar', 'isstring', 'isarray', 'ismap', 'isalias', 'typeof', 'sizeof', 'toint', 'tostring', 'fromxml', 'toxml', 'binary', 'pack', 'load', 'eval', 'var_names', 'var_values', 'user_functions', 'dyn_functions', 'methods', 'call', 'call_method', 'mknod', 'mkfifo', 'mount', 'umount2', 'umount', 'ticks', 'usleep', 'sleep', 'time', 'strtime', 'strdate', 'dllopen', 'dlllink', 'dllcall', 'dllcall_argv', 'dllclose', 'env', 'exec', 'fork', 'getpid', 'wait', 'popen', 'pclose', 'exit', 'kill', 'pthread_create', 'pthread_create_argv', 'pthread_exit', 'pthread_join', 'pthread_kill', 'smtp_send', 'http_get', 'http_post', 'http_download', 'socket', 'bind', 'listen', 'accept', 'getsockname', 'getpeername', 'settimeout', 'connect', 'server', 'recv', 'send', 'close', 'print', 'println', 'printf', 'input', 'readline', 'serial_open', 'serial_fcntl', 'serial_get_attr', 'serial_get_ispeed', 'serial_get_ospeed', 'serial_set_attr', 'serial_set_ispeed', 'serial_set_ospeed', 'serial_write', 'serial_read', 'serial_close', 'xml_load', 'xml_parse', 'fopen', 'fseek', 'ftell', 'fsize', 'fread', 'fwrite', 'fgets', 'fclose', 'file', 'readdir', 'pcre_replace', 'size', 'pop', 'unmap', 'has', 'keys', 'values', 'length', 'find', 'substr', 'replace', 'split', 'trim', 'remove', 'contains', 'join'), suffix=r'\b'), Name.Builtin), (words(( 'MethodReference', 'Runner', 'Dll', 
'Thread', 'Pipe', 'Process', 'Runnable', 'CGI', 'ClientSocket', 'Socket', 'ServerSocket', 'File', 'Console', 'Directory', 'Exception'), suffix=r'\b'), Keyword.Type), (r'"(\\\\|\\"|[^"])*"', String), (r"'\\.'|'[^\\]'|'\\u[0-9a-f]{4}'", String.Char), (r'(\.)([a-zA-Z_]\w*)', bygroups(Operator, Name.Attribute)), (r'[a-zA-Z_]\w*:', Name.Label), (r'[a-zA-Z_$]\w*', Name), (r'[~^*!%&\[\](){}<>|+=:;,./?\-@]+', Operator), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-f]+', Number.Hex), (r'[0-9]+L?', Number.Integer), (r'\n', Text), ], 'class': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop') ], 'import': [ (r'[\w.]+\*?', Name.Namespace, '#pop') ], } class EasytrieveLexer(RegexLexer): """ Easytrieve Plus is a programming language for extracting, filtering and converting sequential data. Furthermore it can layout data for reports. It is mainly used on mainframe platforms and can access several of the mainframe's native file formats. It is somewhat comparable to awk. .. versionadded:: 2.1 """ name = 'Easytrieve' aliases = ['easytrieve'] filenames = ['*.ezt', '*.mac'] mimetypes = ['text/x-easytrieve'] flags = 0 # Note: We cannot use r'\b' at the start and end of keywords because # Easytrieve Plus delimiter characters are: # # * space ( ) # * apostrophe (') # * period (.) # * comma (,) # * paranthesis ( and ) # * colon (:) # # Additionally words end once a '*' appears, indicatins a comment. _DELIMITERS = r' \'.,():\n' _DELIMITERS_OR_COMENT = _DELIMITERS + '*' _DELIMITER_PATTERN = '[' + _DELIMITERS + ']' _DELIMITER_PATTERN_CAPTURE = '(' + _DELIMITER_PATTERN + ')' _NON_DELIMITER_OR_COMMENT_PATTERN = '[^' + _DELIMITERS_OR_COMENT + ']' _OPERATORS_PATTERN = u'[.+\\-/=\\[\\](){}<>;,&%¬]' _KEYWORDS = [ 'AFTER-BREAK', 'AFTER-LINE', 'AFTER-SCREEN', 'AIM', 'AND', 'ATTR', 'BEFORE', 'BEFORE-BREAK', 'BEFORE-LINE', 'BEFORE-SCREEN', 'BUSHU', 'BY', 'CALL', 'CASE', 'CHECKPOINT', 'CHKP', 'CHKP-STATUS', 'CLEAR', 'CLOSE', 'COL', 'COLOR', 'COMMIT', 'CONTROL', 'COPY', 'CURSOR', 'D', 'DECLARE', 'DEFAULT', 'DEFINE', 'DELETE', 'DENWA', 'DISPLAY', 'DLI', 'DO', 'DUPLICATE', 'E', 'ELSE', 'ELSE-IF', 'END', 'END-CASE', 'END-DO', 'END-IF', 'END-PROC', 'ENDPAGE', 'ENDTABLE', 'ENTER', 'EOF', 'EQ', 'ERROR', 'EXIT', 'EXTERNAL', 'EZLIB', 'F1', 'F10', 'F11', 'F12', 'F13', 'F14', 'F15', 'F16', 'F17', 'F18', 'F19', 'F2', 'F20', 'F21', 'F22', 'F23', 'F24', 'F25', 'F26', 'F27', 'F28', 'F29', 'F3', 'F30', 'F31', 'F32', 'F33', 'F34', 'F35', 'F36', 'F4', 'F5', 'F6', 'F7', 'F8', 'F9', 'FETCH', 'FILE-STATUS', 'FILL', 'FINAL', 'FIRST', 'FIRST-DUP', 'FOR', 'GE', 'GET', 'GO', 'GOTO', 'GQ', 'GR', 'GT', 'HEADING', 'HEX', 'HIGH-VALUES', 'IDD', 'IDMS', 'IF', 'IN', 'INSERT', 'JUSTIFY', 'KANJI-DATE', 'KANJI-DATE-LONG', 'KANJI-TIME', 'KEY', 'KEY-PRESSED', 'KOKUGO', 'KUN', 'LAST-DUP', 'LE', 'LEVEL', 'LIKE', 'LINE', 'LINE-COUNT', 'LINE-NUMBER', 'LINK', 'LIST', 'LOW-VALUES', 'LQ', 'LS', 'LT', 'MACRO', 'MASK', 'MATCHED', 'MEND', 'MESSAGE', 'MOVE', 'MSTART', 'NE', 'NEWPAGE', 'NOMASK', 'NOPRINT', 'NOT', 'NOTE', 'NOVERIFY', 'NQ', 'NULL', 'OF', 'OR', 'OTHERWISE', 'PA1', 'PA2', 'PA3', 'PAGE-COUNT', 'PAGE-NUMBER', 'PARM-REGISTER', 'PATH-ID', 'PATTERN', 'PERFORM', 'POINT', 'POS', 'PRIMARY', 'PRINT', 'PROCEDURE', 'PROGRAM', 'PUT', 'READ', 'RECORD', 'RECORD-COUNT', 'RECORD-LENGTH', 'REFRESH', 'RELEASE', 'RENUM', 'REPEAT', 'REPORT', 'REPORT-INPUT', 'RESHOW', 'RESTART', 'RETRIEVE', 'RETURN-CODE', 'ROLLBACK', 'ROW', 'S', 'SCREEN', 'SEARCH', 'SECONDARY', 'SELECT', 'SEQUENCE', 'SIZE', 'SKIP', 'SOKAKU', 'SORT', 'SQL', 'STOP', 'SUM', 'SYSDATE', 'SYSDATE-LONG', 
'SYSIN', 'SYSIPT', 'SYSLST', 'SYSPRINT', 'SYSSNAP', 'SYSTIME', 'TALLY', 'TERM-COLUMNS', 'TERM-NAME', 'TERM-ROWS', 'TERMINATION', 'TITLE', 'TO', 'TRANSFER', 'TRC', 'UNIQUE', 'UNTIL', 'UPDATE', 'UPPERCASE', 'USER', 'USERID', 'VALUE', 'VERIFY', 'W', 'WHEN', 'WHILE', 'WORK', 'WRITE', 'X', 'XDM', 'XRST' ] tokens = { 'root': [ (r'\*.*\n', Comment.Single), (r'\n+', Whitespace), # Macro argument (r'&' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+\.', Name.Variable, 'after_macro_argument'), # Macro call (r'%' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name.Variable), (r'(FILE|MACRO|REPORT)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'after_declaration'), (r'(JOB|PARM)' + r'(' + _DELIMITER_PATTERN + r')', bygroups(Keyword.Declaration, Operator)), (words(_KEYWORDS, suffix=_DELIMITER_PATTERN_CAPTURE), bygroups(Keyword.Reserved, Operator)), (_OPERATORS_PATTERN, Operator), # Procedure declaration (r'(' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+)(\s*)(\.?)(\s*)(PROC)(\s*\n)', bygroups(Name.Function, Whitespace, Operator, Whitespace, Keyword.Declaration, Whitespace)), (r'[0-9]+\.[0-9]*', Number.Float), (r'[0-9]+', Number.Integer), (r"'(''|[^'])*'", String), (r'\s+', Whitespace), # Everything else just belongs to a name (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name), ], 'after_declaration': [ (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name.Function), default('#pop'), ], 'after_macro_argument': [ (r'\*.*\n', Comment.Single, '#pop'), (r'\s+', Whitespace, '#pop'), (_OPERATORS_PATTERN, Operator, '#pop'), (r"'(''|[^'])*'", String, '#pop'), # Everything else just belongs to a name (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name), ], } _COMMENT_LINE_REGEX = re.compile(r'^\s*\*') _MACRO_HEADER_REGEX = re.compile(r'^\s*MACRO') def analyse_text(text): """ Perform a structural analysis for basic Easytrieve constructs. """ result = 0.0 lines = text.split('\n') hasEndProc = False hasHeaderComment = False hasFile = False hasJob = False hasProc = False hasParm = False hasReport = False def isCommentLine(line): return EasytrieveLexer._COMMENT_LINE_REGEX.match(lines[0]) is not None def isEmptyLine(line): return not bool(line.strip()) # Remove possible empty lines and header comments. while lines and (isEmptyLine(lines[0]) or isCommentLine(lines[0])): if not isEmptyLine(lines[0]): hasHeaderComment = True del lines[0] if EasytrieveLexer._MACRO_HEADER_REGEX.match(lines[0]): # Looks like an Easytrieve macro. result = 0.4 if hasHeaderComment: result += 0.4 else: # Scan the source for lines starting with indicators. for line in lines: words = line.split() if (len(words) >= 2): firstWord = words[0] if not hasReport: if not hasJob: if not hasFile: if not hasParm: if firstWord == 'PARM': hasParm = True if firstWord == 'FILE': hasFile = True if firstWord == 'JOB': hasJob = True elif firstWord == 'PROC': hasProc = True elif firstWord == 'END-PROC': hasEndProc = True elif firstWord == 'REPORT': hasReport = True # Weight the findings. if hasJob and (hasProc == hasEndProc): if hasHeaderComment: result += 0.1 if hasParm: if hasProc: # Found PARM, JOB and PROC/END-PROC: # pretty sure this is Easytrieve. result += 0.8 else: # Found PARAM and JOB: probably this is Easytrieve result += 0.5 else: # Found JOB and possibly other keywords: might be Easytrieve result += 0.11 if hasParm: # Note: PARAM is not a proper English word, so this is # regarded a much better indicator for Easytrieve than # the other words. 
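                # (These increments are heuristic weights chosen so that the
                # accumulated score stays within 0.0..1.0, which the assert at
                # the end of this method double-checks.)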
result += 0.2 if hasFile: result += 0.01 if hasReport: result += 0.01 assert 0.0 <= result <= 1.0 return result class JclLexer(RegexLexer): """ `Job Control Language (JCL) `_ is a scripting language used on mainframe platforms to instruct the system on how to run a batch job or start a subsystem. It is somewhat comparable to MS DOS batch and Unix shell scripts. .. versionadded:: 2.1 """ name = 'JCL' aliases = ['jcl'] filenames = ['*.jcl'] mimetypes = ['text/x-jcl'] flags = re.IGNORECASE tokens = { 'root': [ (r'//\*.*\n', Comment.Single), (r'//', Keyword.Pseudo, 'statement'), (r'/\*', Keyword.Pseudo, 'jes2_statement'), # TODO: JES3 statement (r'.*\n', Other) # Input text or inline code in any language. ], 'statement': [ (r'\s*\n', Whitespace, '#pop'), (r'([a-z]\w*)(\s+)(exec|job)(\s*)', bygroups(Name.Label, Whitespace, Keyword.Reserved, Whitespace), 'option'), (r'[a-z]\w*', Name.Variable, 'statement_command'), (r'\s+', Whitespace, 'statement_command'), ], 'statement_command': [ (r'\s+(command|cntl|dd|endctl|endif|else|include|jcllib|' r'output|pend|proc|set|then|xmit)\s+', Keyword.Reserved, 'option'), include('option') ], 'jes2_statement': [ (r'\s*\n', Whitespace, '#pop'), (r'\$', Keyword, 'option'), (r'\b(jobparam|message|netacct|notify|output|priority|route|' r'setup|signoff|xeq|xmit)\b', Keyword, 'option'), ], 'option': [ # (r'\n', Text, 'root'), (r'\*', Name.Builtin), (r'[\[\](){}<>;,]', Punctuation), (r'[-+*/=&%]', Operator), (r'[a-z_]\w*', Name), (r'\d+\.\d*', Number.Float), (r'\.\d+', Number.Float), (r'\d+', Number.Integer), (r"'", String, 'option_string'), (r'[ \t]+', Whitespace, 'option_comment'), (r'\.', Punctuation), ], 'option_string': [ (r"(\n)(//)", bygroups(Text, Keyword.Pseudo)), (r"''", String), (r"[^']", String), (r"'", String, '#pop'), ], 'option_comment': [ # (r'\n', Text, 'root'), (r'.+', Comment.Single), ] } _JOB_HEADER_PATTERN = re.compile(r'^//[a-z#$@][a-z0-9#$@]{0,7}\s+job(\s+.*)?$', re.IGNORECASE) def analyse_text(text): """ Recognize JCL job by header. """ result = 0.0 lines = text.split('\n') if len(lines) > 0: if JclLexer._JOB_HEADER_PATTERN.match(lines[0]): result = 1.0 assert 0.0 <= result <= 1.0 return result Pygments-2.3.1/pygments/lexers/rnc.py0000644000175000017500000000370613376260540016717 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.rnc ~~~~~~~~~~~~~~~~~~~ Lexer for Relax-NG Compact syntax :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Punctuation __all__ = ['RNCCompactLexer'] class RNCCompactLexer(RegexLexer): """ For `RelaxNG-compact `_ syntax. .. 
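(Editor's illustrative addition, not part of the original docstring: a typical compact-syntax fragment this lexer targets is ``element addressBook { element card { element name { text }, element email { text } }* }``, i.e. nested ``element``/``attribute`` declarations with datatype keywords and occurrence operators.)

..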
versionadded:: 2.2 """ name = 'Relax-NG Compact' aliases = ['rnc', 'rng-compact'] filenames = ['*.rnc'] tokens = { 'root': [ (r'namespace\b', Keyword.Namespace), (r'(?:default|datatypes)\b', Keyword.Declaration), (r'##.*$', Comment.Preproc), (r'#.*$', Comment.Single), (r'"[^"]*"', String.Double), # TODO single quoted strings and escape sequences outside of # double-quoted strings (r'(?:element|attribute|mixed)\b', Keyword.Declaration, 'variable'), (r'(text\b|xsd:[^ ]+)', Keyword.Type, 'maybe_xsdattributes'), (r'[,?&*=|~]|>>', Operator), (r'[(){}]', Punctuation), (r'.', Text), ], # a variable has been declared using `element` or `attribute` 'variable': [ (r'[^{]+', Name.Variable), (r'\{', Punctuation, '#pop'), ], # after an xsd: declaration there may be attributes 'maybe_xsdattributes': [ (r'\{', Punctuation, 'xsdattributes'), (r'\}', Punctuation, '#pop'), (r'.', Text), ], # attributes take the form { key1 = value1 key2 = value2 ... } 'xsdattributes': [ (r'[^ =}]', Name.Attribute), (r'=', Operator), (r'"[^"]*"', String.Double), (r'\}', Punctuation, '#pop'), (r'.', Text), ], } Pygments-2.3.1/pygments/lexers/_openedge_builtins.py0000644000175000017500000013635213376260540021777 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers._openedge_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Builtin list for the OpenEdgeLexer. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ OPENEDGEKEYWORDS = ( 'ABSOLUTE', 'ABS', 'ABSO', 'ABSOL', 'ABSOLU', 'ABSOLUT', 'ACCELERATOR', 'ACCUMULATE', 'ACCUM', 'ACCUMU', 'ACCUMUL', 'ACCUMULA', 'ACCUMULAT', 'ACTIVE-FORM', 'ACTIVE-WINDOW', 'ADD', 'ADD-BUFFER', 'ADD-CALC-COLUMN', 'ADD-COLUMNS-FROM', 'ADD-EVENTS-PROCEDURE', 'ADD-FIELDS-FROM', 'ADD-FIRST', 'ADD-INDEX-FIELD', 'ADD-LAST', 'ADD-LIKE-COLUMN', 'ADD-LIKE-FIELD', 'ADD-LIKE-INDEX', 'ADD-NEW-FIELD', 'ADD-NEW-INDEX', 'ADD-SCHEMA-LOCATION', 'ADD-SUPER-PROCEDURE', 'ADM-DATA', 'ADVISE', 'ALERT-BOX', 'ALIAS', 'ALL', 'ALLOW-COLUMN-SEARCHING', 'ALLOW-REPLICATION', 'ALTER', 'ALWAYS-ON-TOP', 'AMBIGUOUS', 'AMBIG', 'AMBIGU', 'AMBIGUO', 'AMBIGUOU', 'ANALYZE', 'ANALYZ', 'AND', 'ANSI-ONLY', 'ANY', 'ANYWHERE', 'APPEND', 'APPL-ALERT-BOXES', 'APPL-ALERT', 'APPL-ALERT-', 'APPL-ALERT-B', 'APPL-ALERT-BO', 'APPL-ALERT-BOX', 'APPL-ALERT-BOXE', 'APPL-CONTEXT-ID', 'APPLICATION', 'APPLY', 'APPSERVER-INFO', 'APPSERVER-PASSWORD', 'APPSERVER-USERID', 'ARRAY-MESSAGE', 'AS', 'ASC', 'ASCENDING', 'ASCE', 'ASCEN', 'ASCEND', 'ASCENDI', 'ASCENDIN', 'ASK-OVERWRITE', 'ASSEMBLY', 'ASSIGN', 'ASYNCHRONOUS', 'ASYNC-REQUEST-COUNT', 'ASYNC-REQUEST-HANDLE', 'AT', 'ATTACHED-PAIRLIST', 'ATTR-SPACE', 'ATTR', 'ATTRI', 'ATTRIB', 'ATTRIBU', 'ATTRIBUT', 'AUDIT-CONTROL', 'AUDIT-ENABLED', 'AUDIT-EVENT-CONTEXT', 'AUDIT-POLICY', 'AUTHENTICATION-FAILED', 'AUTHORIZATION', 'AUTO-COMPLETION', 'AUTO-COMP', 'AUTO-COMPL', 'AUTO-COMPLE', 'AUTO-COMPLET', 'AUTO-COMPLETI', 'AUTO-COMPLETIO', 'AUTO-ENDKEY', 'AUTO-END-KEY', 'AUTO-GO', 'AUTO-INDENT', 'AUTO-IND', 'AUTO-INDE', 'AUTO-INDEN', 'AUTOMATIC', 'AUTO-RESIZE', 'AUTO-RETURN', 'AUTO-RET', 'AUTO-RETU', 'AUTO-RETUR', 'AUTO-SYNCHRONIZE', 'AUTO-ZAP', 'AUTO-Z', 'AUTO-ZA', 'AVAILABLE', 'AVAIL', 'AVAILA', 'AVAILAB', 'AVAILABL', 'AVAILABLE-FORMATS', 'AVERAGE', 'AVE', 'AVER', 'AVERA', 'AVERAG', 'AVG', 'BACKGROUND', 'BACK', 'BACKG', 'BACKGR', 'BACKGRO', 'BACKGROU', 'BACKGROUN', 'BACKWARDS', 'BACKWARD', 'BASE64-DECODE', 'BASE64-ENCODE', 'BASE-ADE', 'BASE-KEY', 'BATCH-MODE', 'BATCH', 'BATCH-', 'BATCH-M', 'BATCH-MO', 'BATCH-MOD', 'BATCH-SIZE', 'BEFORE-HIDE', 'BEFORE-H', 
'BEFORE-HI', 'BEFORE-HID', 'BEGIN-EVENT-GROUP', 'BEGINS', 'BELL', 'BETWEEN', 'BGCOLOR', 'BGC', 'BGCO', 'BGCOL', 'BGCOLO', 'BIG-ENDIAN', 'BINARY', 'BIND', 'BIND-WHERE', 'BLANK', 'BLOCK-ITERATION-DISPLAY', 'BORDER-BOTTOM-CHARS', 'BORDER-B', 'BORDER-BO', 'BORDER-BOT', 'BORDER-BOTT', 'BORDER-BOTTO', 'BORDER-BOTTOM-PIXELS', 'BORDER-BOTTOM-P', 'BORDER-BOTTOM-PI', 'BORDER-BOTTOM-PIX', 'BORDER-BOTTOM-PIXE', 'BORDER-BOTTOM-PIXEL', 'BORDER-LEFT-CHARS', 'BORDER-L', 'BORDER-LE', 'BORDER-LEF', 'BORDER-LEFT', 'BORDER-LEFT-', 'BORDER-LEFT-C', 'BORDER-LEFT-CH', 'BORDER-LEFT-CHA', 'BORDER-LEFT-CHAR', 'BORDER-LEFT-PIXELS', 'BORDER-LEFT-P', 'BORDER-LEFT-PI', 'BORDER-LEFT-PIX', 'BORDER-LEFT-PIXE', 'BORDER-LEFT-PIXEL', 'BORDER-RIGHT-CHARS', 'BORDER-R', 'BORDER-RI', 'BORDER-RIG', 'BORDER-RIGH', 'BORDER-RIGHT', 'BORDER-RIGHT-', 'BORDER-RIGHT-C', 'BORDER-RIGHT-CH', 'BORDER-RIGHT-CHA', 'BORDER-RIGHT-CHAR', 'BORDER-RIGHT-PIXELS', 'BORDER-RIGHT-P', 'BORDER-RIGHT-PI', 'BORDER-RIGHT-PIX', 'BORDER-RIGHT-PIXE', 'BORDER-RIGHT-PIXEL', 'BORDER-TOP-CHARS', 'BORDER-T', 'BORDER-TO', 'BORDER-TOP', 'BORDER-TOP-', 'BORDER-TOP-C', 'BORDER-TOP-CH', 'BORDER-TOP-CHA', 'BORDER-TOP-CHAR', 'BORDER-TOP-PIXELS', 'BORDER-TOP-P', 'BORDER-TOP-PI', 'BORDER-TOP-PIX', 'BORDER-TOP-PIXE', 'BORDER-TOP-PIXEL', 'BOX', 'BOX-SELECTABLE', 'BOX-SELECT', 'BOX-SELECTA', 'BOX-SELECTAB', 'BOX-SELECTABL', 'BREAK', 'BROWSE', 'BUFFER', 'BUFFER-CHARS', 'BUFFER-COMPARE', 'BUFFER-COPY', 'BUFFER-CREATE', 'BUFFER-DELETE', 'BUFFER-FIELD', 'BUFFER-HANDLE', 'BUFFER-LINES', 'BUFFER-NAME', 'BUFFER-RELEASE', 'BUFFER-VALUE', 'BUTTON', 'BUTTONS', 'BY', 'BY-POINTER', 'BY-VARIANT-POINTER', 'CACHE', 'CACHE-SIZE', 'CALL', 'CALL-NAME', 'CALL-TYPE', 'CANCEL-BREAK', 'CANCEL-BUTTON', 'CAN-CREATE', 'CAN-DELETE', 'CAN-DO', 'CAN-FIND', 'CAN-QUERY', 'CAN-READ', 'CAN-SET', 'CAN-WRITE', 'CAPS', 'CAREFUL-PAINT', 'CASE', 'CASE-SENSITIVE', 'CASE-SEN', 'CASE-SENS', 'CASE-SENSI', 'CASE-SENSIT', 'CASE-SENSITI', 'CASE-SENSITIV', 'CAST', 'CATCH', 'CDECL', 'CENTERED', 'CENTER', 'CENTERE', 'CHAINED', 'CHARACTER_LENGTH', 'CHARSET', 'CHECK', 'CHECKED', 'CHOOSE', 'CHR', 'CLASS', 'CLASS-TYPE', 'CLEAR', 'CLEAR-APPL-CONTEXT', 'CLEAR-LOG', 'CLEAR-SELECTION', 'CLEAR-SELECT', 'CLEAR-SELECTI', 'CLEAR-SELECTIO', 'CLEAR-SORT-ARROWS', 'CLEAR-SORT-ARROW', 'CLIENT-CONNECTION-ID', 'CLIENT-PRINCIPAL', 'CLIENT-TTY', 'CLIENT-TYPE', 'CLIENT-WORKSTATION', 'CLIPBOARD', 'CLOSE', 'CLOSE-LOG', 'CODE', 'CODEBASE-LOCATOR', 'CODEPAGE', 'CODEPAGE-CONVERT', 'COLLATE', 'COL-OF', 'COLON', 'COLON-ALIGNED', 'COLON-ALIGN', 'COLON-ALIGNE', 'COLOR', 'COLOR-TABLE', 'COLUMN', 'COL', 'COLU', 'COLUM', 'COLUMN-BGCOLOR', 'COLUMN-DCOLOR', 'COLUMN-FGCOLOR', 'COLUMN-FONT', 'COLUMN-LABEL', 'COLUMN-LAB', 'COLUMN-LABE', 'COLUMN-MOVABLE', 'COLUMN-OF', 'COLUMN-PFCOLOR', 'COLUMN-READ-ONLY', 'COLUMN-RESIZABLE', 'COLUMNS', 'COLUMN-SCROLLING', 'COMBO-BOX', 'COMMAND', 'COMPARES', 'COMPILE', 'COMPILER', 'COMPLETE', 'COM-SELF', 'CONFIG-NAME', 'CONNECT', 'CONNECTED', 'CONSTRUCTOR', 'CONTAINS', 'CONTENTS', 'CONTEXT', 'CONTEXT-HELP', 'CONTEXT-HELP-FILE', 'CONTEXT-HELP-ID', 'CONTEXT-POPUP', 'CONTROL', 'CONTROL-BOX', 'CONTROL-FRAME', 'CONVERT', 'CONVERT-3D-COLORS', 'CONVERT-TO-OFFSET', 'CONVERT-TO-OFFS', 'CONVERT-TO-OFFSE', 'COPY-DATASET', 'COPY-LOB', 'COPY-SAX-ATTRIBUTES', 'COPY-TEMP-TABLE', 'COUNT', 'COUNT-OF', 'CPCASE', 'CPCOLL', 'CPINTERNAL', 'CPLOG', 'CPPRINT', 'CPRCODEIN', 'CPRCODEOUT', 'CPSTREAM', 'CPTERM', 'CRC-VALUE', 'CREATE', 'CREATE-LIKE', 'CREATE-LIKE-SEQUENTIAL', 'CREATE-NODE-NAMESPACE', 'CREATE-RESULT-LIST-ENTRY', 'CREATE-TEST-FILE', 
'CURRENT', 'CURRENT_DATE', 'CURRENT-CHANGED', 'CURRENT-COLUMN', 'CURRENT-ENVIRONMENT', 'CURRENT-ENV', 'CURRENT-ENVI', 'CURRENT-ENVIR', 'CURRENT-ENVIRO', 'CURRENT-ENVIRON', 'CURRENT-ENVIRONM', 'CURRENT-ENVIRONME', 'CURRENT-ENVIRONMEN', 'CURRENT-ITERATION', 'CURRENT-LANGUAGE', 'CURRENT-LANG', 'CURRENT-LANGU', 'CURRENT-LANGUA', 'CURRENT-LANGUAG', 'CURRENT-QUERY', 'CURRENT-RESULT-ROW', 'CURRENT-ROW-MODIFIED', 'CURRENT-VALUE', 'CURRENT-WINDOW', 'CURSOR', 'CURS', 'CURSO', 'CURSOR-CHAR', 'CURSOR-LINE', 'CURSOR-OFFSET', 'DATABASE', 'DATA-BIND', 'DATA-ENTRY-RETURN', 'DATA-ENTRY-RET', 'DATA-ENTRY-RETU', 'DATA-ENTRY-RETUR', 'DATA-RELATION', 'DATA-REL', 'DATA-RELA', 'DATA-RELAT', 'DATA-RELATI', 'DATA-RELATIO', 'DATASERVERS', 'DATASET', 'DATASET-HANDLE', 'DATA-SOURCE', 'DATA-SOURCE-COMPLETE-MAP', 'DATA-SOURCE-MODIFIED', 'DATA-SOURCE-ROWID', 'DATA-TYPE', 'DATA-T', 'DATA-TY', 'DATA-TYP', 'DATE-FORMAT', 'DATE-F', 'DATE-FO', 'DATE-FOR', 'DATE-FORM', 'DATE-FORMA', 'DAY', 'DBCODEPAGE', 'DBCOLLATION', 'DBNAME', 'DBPARAM', 'DB-REFERENCES', 'DBRESTRICTIONS', 'DBREST', 'DBRESTR', 'DBRESTRI', 'DBRESTRIC', 'DBRESTRICT', 'DBRESTRICTI', 'DBRESTRICTIO', 'DBRESTRICTION', 'DBTASKID', 'DBTYPE', 'DBVERSION', 'DBVERS', 'DBVERSI', 'DBVERSIO', 'DCOLOR', 'DDE', 'DDE-ERROR', 'DDE-ID', 'DDE-I', 'DDE-ITEM', 'DDE-NAME', 'DDE-TOPIC', 'DEBLANK', 'DEBUG', 'DEBU', 'DEBUG-ALERT', 'DEBUGGER', 'DEBUG-LIST', 'DECIMALS', 'DECLARE', 'DECLARE-NAMESPACE', 'DECRYPT', 'DEFAULT', 'DEFAULT-BUFFER-HANDLE', 'DEFAULT-BUTTON', 'DEFAUT-B', 'DEFAUT-BU', 'DEFAUT-BUT', 'DEFAUT-BUTT', 'DEFAUT-BUTTO', 'DEFAULT-COMMIT', 'DEFAULT-EXTENSION', 'DEFAULT-EX', 'DEFAULT-EXT', 'DEFAULT-EXTE', 'DEFAULT-EXTEN', 'DEFAULT-EXTENS', 'DEFAULT-EXTENSI', 'DEFAULT-EXTENSIO', 'DEFAULT-NOXLATE', 'DEFAULT-NOXL', 'DEFAULT-NOXLA', 'DEFAULT-NOXLAT', 'DEFAULT-VALUE', 'DEFAULT-WINDOW', 'DEFINED', 'DEFINE-USER-EVENT-MANAGER', 'DELETE', 'DEL', 'DELE', 'DELET', 'DELETE-CHARACTER', 'DELETE-CHAR', 'DELETE-CHARA', 'DELETE-CHARAC', 'DELETE-CHARACT', 'DELETE-CHARACTE', 'DELETE-CURRENT-ROW', 'DELETE-LINE', 'DELETE-RESULT-LIST-ENTRY', 'DELETE-SELECTED-ROW', 'DELETE-SELECTED-ROWS', 'DELIMITER', 'DESC', 'DESCENDING', 'DESCE', 'DESCEN', 'DESCEND', 'DESCENDI', 'DESCENDIN', 'DESELECT-FOCUSED-ROW', 'DESELECTION', 'DESELECT-ROWS', 'DESELECT-SELECTED-ROW', 'DESTRUCTOR', 'DIALOG-BOX', 'DICTIONARY', 'DICT', 'DICTI', 'DICTIO', 'DICTION', 'DICTIONA', 'DICTIONAR', 'DIR', 'DISABLE', 'DISABLE-AUTO-ZAP', 'DISABLED', 'DISABLE-DUMP-TRIGGERS', 'DISABLE-LOAD-TRIGGERS', 'DISCONNECT', 'DISCON', 'DISCONN', 'DISCONNE', 'DISCONNEC', 'DISP', 'DISPLAY', 'DISPL', 'DISPLA', 'DISPLAY-MESSAGE', 'DISPLAY-TYPE', 'DISPLAY-T', 'DISPLAY-TY', 'DISPLAY-TYP', 'DISTINCT', 'DO', 'DOMAIN-DESCRIPTION', 'DOMAIN-NAME', 'DOMAIN-TYPE', 'DOS', 'DOUBLE', 'DOWN', 'DRAG-ENABLED', 'DROP', 'DROP-DOWN', 'DROP-DOWN-LIST', 'DROP-FILE-NOTIFY', 'DROP-TARGET', 'DUMP', 'DYNAMIC', 'DYNAMIC-FUNCTION', 'EACH', 'ECHO', 'EDGE-CHARS', 'EDGE', 'EDGE-', 'EDGE-C', 'EDGE-CH', 'EDGE-CHA', 'EDGE-CHAR', 'EDGE-PIXELS', 'EDGE-P', 'EDGE-PI', 'EDGE-PIX', 'EDGE-PIXE', 'EDGE-PIXEL', 'EDIT-CAN-PASTE', 'EDIT-CAN-UNDO', 'EDIT-CLEAR', 'EDIT-COPY', 'EDIT-CUT', 'EDITING', 'EDITOR', 'EDIT-PASTE', 'EDIT-UNDO', 'ELSE', 'EMPTY', 'EMPTY-TEMP-TABLE', 'ENABLE', 'ENABLED-FIELDS', 'ENCODE', 'ENCRYPT', 'ENCRYPT-AUDIT-MAC-KEY', 'ENCRYPTION-SALT', 'END', 'END-DOCUMENT', 'END-ELEMENT', 'END-EVENT-GROUP', 'END-FILE-DROP', 'ENDKEY', 'END-KEY', 'END-MOVE', 'END-RESIZE', 'END-ROW-RESIZE', 'END-USER-PROMPT', 'ENTERED', 'ENTRY', 'EQ', 'ERROR', 'ERROR-COLUMN', 'ERROR-COL', 'ERROR-COLU', 
'ERROR-COLUM', 'ERROR-ROW', 'ERROR-STACK-TRACE', 'ERROR-STATUS', 'ERROR-STAT', 'ERROR-STATU', 'ESCAPE', 'ETIME', 'EVENT-GROUP-ID', 'EVENT-PROCEDURE', 'EVENT-PROCEDURE-CONTEXT', 'EVENTS', 'EVENT', 'EVENT-TYPE', 'EVENT-T', 'EVENT-TY', 'EVENT-TYP', 'EXCEPT', 'EXCLUSIVE-ID', 'EXCLUSIVE-LOCK', 'EXCLUSIVE', 'EXCLUSIVE-', 'EXCLUSIVE-L', 'EXCLUSIVE-LO', 'EXCLUSIVE-LOC', 'EXCLUSIVE-WEB-USER', 'EXECUTE', 'EXISTS', 'EXP', 'EXPAND', 'EXPANDABLE', 'EXPLICIT', 'EXPORT', 'EXPORT-PRINCIPAL', 'EXTENDED', 'EXTENT', 'EXTERNAL', 'FALSE', 'FETCH', 'FETCH-SELECTED-ROW', 'FGCOLOR', 'FGC', 'FGCO', 'FGCOL', 'FGCOLO', 'FIELD', 'FIELDS', 'FILE', 'FILE-CREATE-DATE', 'FILE-CREATE-TIME', 'FILE-INFORMATION', 'FILE-INFO', 'FILE-INFOR', 'FILE-INFORM', 'FILE-INFORMA', 'FILE-INFORMAT', 'FILE-INFORMATI', 'FILE-INFORMATIO', 'FILE-MOD-DATE', 'FILE-MOD-TIME', 'FILENAME', 'FILE-NAME', 'FILE-OFFSET', 'FILE-OFF', 'FILE-OFFS', 'FILE-OFFSE', 'FILE-SIZE', 'FILE-TYPE', 'FILL', 'FILLED', 'FILL-IN', 'FILTERS', 'FINAL', 'FINALLY', 'FIND', 'FIND-BY-ROWID', 'FIND-CASE-SENSITIVE', 'FIND-CURRENT', 'FINDER', 'FIND-FIRST', 'FIND-GLOBAL', 'FIND-LAST', 'FIND-NEXT-OCCURRENCE', 'FIND-PREV-OCCURRENCE', 'FIND-SELECT', 'FIND-UNIQUE', 'FIND-WRAP-AROUND', 'FIRST', 'FIRST-ASYNCH-REQUEST', 'FIRST-CHILD', 'FIRST-COLUMN', 'FIRST-FORM', 'FIRST-OBJECT', 'FIRST-OF', 'FIRST-PROCEDURE', 'FIRST-PROC', 'FIRST-PROCE', 'FIRST-PROCED', 'FIRST-PROCEDU', 'FIRST-PROCEDUR', 'FIRST-SERVER', 'FIRST-TAB-ITEM', 'FIRST-TAB-I', 'FIRST-TAB-IT', 'FIRST-TAB-ITE', 'FIT-LAST-COLUMN', 'FIXED-ONLY', 'FLAT-BUTTON', 'FLOAT', 'FOCUS', 'FOCUSED-ROW', 'FOCUSED-ROW-SELECTED', 'FONT', 'FONT-TABLE', 'FOR', 'FORCE-FILE', 'FOREGROUND', 'FORE', 'FOREG', 'FOREGR', 'FOREGRO', 'FOREGROU', 'FOREGROUN', 'FORM', 'FORMAT', 'FORMA', 'FORMATTED', 'FORMATTE', 'FORM-LONG-INPUT', 'FORWARD', 'FORWARDS', 'FRAGMENT', 'FRAGMEN', 'FRAME', 'FRAM', 'FRAME-COL', 'FRAME-DB', 'FRAME-DOWN', 'FRAME-FIELD', 'FRAME-FILE', 'FRAME-INDEX', 'FRAME-INDE', 'FRAME-LINE', 'FRAME-NAME', 'FRAME-ROW', 'FRAME-SPACING', 'FRAME-SPA', 'FRAME-SPAC', 'FRAME-SPACI', 'FRAME-SPACIN', 'FRAME-VALUE', 'FRAME-VAL', 'FRAME-VALU', 'FRAME-X', 'FRAME-Y', 'FREQUENCY', 'FROM', 'FROM-CHARS', 'FROM-C', 'FROM-CH', 'FROM-CHA', 'FROM-CHAR', 'FROM-CURRENT', 'FROM-CUR', 'FROM-CURR', 'FROM-CURRE', 'FROM-CURREN', 'FROM-PIXELS', 'FROM-P', 'FROM-PI', 'FROM-PIX', 'FROM-PIXE', 'FROM-PIXEL', 'FULL-HEIGHT-CHARS', 'FULL-HEIGHT', 'FULL-HEIGHT-', 'FULL-HEIGHT-C', 'FULL-HEIGHT-CH', 'FULL-HEIGHT-CHA', 'FULL-HEIGHT-CHAR', 'FULL-HEIGHT-PIXELS', 'FULL-HEIGHT-P', 'FULL-HEIGHT-PI', 'FULL-HEIGHT-PIX', 'FULL-HEIGHT-PIXE', 'FULL-HEIGHT-PIXEL', 'FULL-PATHNAME', 'FULL-PATHN', 'FULL-PATHNA', 'FULL-PATHNAM', 'FULL-WIDTH-CHARS', 'FULL-WIDTH', 'FULL-WIDTH-', 'FULL-WIDTH-C', 'FULL-WIDTH-CH', 'FULL-WIDTH-CHA', 'FULL-WIDTH-CHAR', 'FULL-WIDTH-PIXELS', 'FULL-WIDTH-P', 'FULL-WIDTH-PI', 'FULL-WIDTH-PIX', 'FULL-WIDTH-PIXE', 'FULL-WIDTH-PIXEL', 'FUNCTION', 'FUNCTION-CALL-TYPE', 'GATEWAYS', 'GATEWAY', 'GE', 'GENERATE-MD5', 'GENERATE-PBE-KEY', 'GENERATE-PBE-SALT', 'GENERATE-RANDOM-KEY', 'GENERATE-UUID', 'GET', 'GET-ATTR-CALL-TYPE', 'GET-ATTRIBUTE-NODE', 'GET-BINARY-DATA', 'GET-BLUE-VALUE', 'GET-BLUE', 'GET-BLUE-', 'GET-BLUE-V', 'GET-BLUE-VA', 'GET-BLUE-VAL', 'GET-BLUE-VALU', 'GET-BROWSE-COLUMN', 'GET-BUFFER-HANDLEGETBYTE', 'GET-BYTE', 'GET-CALLBACK-PROC-CONTEXT', 'GET-CALLBACK-PROC-NAME', 'GET-CGI-LIST', 'GET-CGI-LONG-VALUE', 'GET-CGI-VALUE', 'GET-CODEPAGES', 'GET-COLLATIONS', 'GET-CONFIG-VALUE', 'GET-CURRENT', 'GET-DOUBLE', 'GET-DROPPED-FILE', 'GET-DYNAMIC', 'GET-ERROR-COLUMN', 
'GET-ERROR-ROW', 'GET-FILE', 'GET-FILE-NAME', 'GET-FILE-OFFSET', 'GET-FILE-OFFSE', 'GET-FIRST', 'GET-FLOAT', 'GET-GREEN-VALUE', 'GET-GREEN', 'GET-GREEN-', 'GET-GREEN-V', 'GET-GREEN-VA', 'GET-GREEN-VAL', 'GET-GREEN-VALU', 'GET-INDEX-BY-NAMESPACE-NAME', 'GET-INDEX-BY-QNAME', 'GET-INT64', 'GET-ITERATION', 'GET-KEY-VALUE', 'GET-KEY-VAL', 'GET-KEY-VALU', 'GET-LAST', 'GET-LOCALNAME-BY-INDEX', 'GET-LONG', 'GET-MESSAGE', 'GET-NEXT', 'GET-NUMBER', 'GET-POINTER-VALUE', 'GET-PREV', 'GET-PRINTERS', 'GET-PROPERTY', 'GET-QNAME-BY-INDEX', 'GET-RED-VALUE', 'GET-RED', 'GET-RED-', 'GET-RED-V', 'GET-RED-VA', 'GET-RED-VAL', 'GET-RED-VALU', 'GET-REPOSITIONED-ROW', 'GET-RGB-VALUE', 'GET-SELECTED-WIDGET', 'GET-SELECTED', 'GET-SELECTED-', 'GET-SELECTED-W', 'GET-SELECTED-WI', 'GET-SELECTED-WID', 'GET-SELECTED-WIDG', 'GET-SELECTED-WIDGE', 'GET-SHORT', 'GET-SIGNATURE', 'GET-SIZE', 'GET-STRING', 'GET-TAB-ITEM', 'GET-TEXT-HEIGHT-CHARS', 'GET-TEXT-HEIGHT', 'GET-TEXT-HEIGHT-', 'GET-TEXT-HEIGHT-C', 'GET-TEXT-HEIGHT-CH', 'GET-TEXT-HEIGHT-CHA', 'GET-TEXT-HEIGHT-CHAR', 'GET-TEXT-HEIGHT-PIXELS', 'GET-TEXT-HEIGHT-P', 'GET-TEXT-HEIGHT-PI', 'GET-TEXT-HEIGHT-PIX', 'GET-TEXT-HEIGHT-PIXE', 'GET-TEXT-HEIGHT-PIXEL', 'GET-TEXT-WIDTH-CHARS', 'GET-TEXT-WIDTH', 'GET-TEXT-WIDTH-', 'GET-TEXT-WIDTH-C', 'GET-TEXT-WIDTH-CH', 'GET-TEXT-WIDTH-CHA', 'GET-TEXT-WIDTH-CHAR', 'GET-TEXT-WIDTH-PIXELS', 'GET-TEXT-WIDTH-P', 'GET-TEXT-WIDTH-PI', 'GET-TEXT-WIDTH-PIX', 'GET-TEXT-WIDTH-PIXE', 'GET-TEXT-WIDTH-PIXEL', 'GET-TYPE-BY-INDEX', 'GET-TYPE-BY-NAMESPACE-NAME', 'GET-TYPE-BY-QNAME', 'GET-UNSIGNED-LONG', 'GET-UNSIGNED-SHORT', 'GET-URI-BY-INDEX', 'GET-VALUE-BY-INDEX', 'GET-VALUE-BY-NAMESPACE-NAME', 'GET-VALUE-BY-QNAME', 'GET-WAIT-STATE', 'GLOBAL', 'GO-ON', 'GO-PENDING', 'GO-PEND', 'GO-PENDI', 'GO-PENDIN', 'GRANT', 'GRAPHIC-EDGE', 'GRAPHIC-E', 'GRAPHIC-ED', 'GRAPHIC-EDG', 'GRID-FACTOR-HORIZONTAL', 'GRID-FACTOR-H', 'GRID-FACTOR-HO', 'GRID-FACTOR-HOR', 'GRID-FACTOR-HORI', 'GRID-FACTOR-HORIZ', 'GRID-FACTOR-HORIZO', 'GRID-FACTOR-HORIZON', 'GRID-FACTOR-HORIZONT', 'GRID-FACTOR-HORIZONTA', 'GRID-FACTOR-VERTICAL', 'GRID-FACTOR-V', 'GRID-FACTOR-VE', 'GRID-FACTOR-VER', 'GRID-FACTOR-VERT', 'GRID-FACTOR-VERTI', 'GRID-FACTOR-VERTIC', 'GRID-FACTOR-VERTICA', 'GRID-SNAP', 'GRID-UNIT-HEIGHT-CHARS', 'GRID-UNIT-HEIGHT', 'GRID-UNIT-HEIGHT-', 'GRID-UNIT-HEIGHT-C', 'GRID-UNIT-HEIGHT-CH', 'GRID-UNIT-HEIGHT-CHA', 'GRID-UNIT-HEIGHT-PIXELS', 'GRID-UNIT-HEIGHT-P', 'GRID-UNIT-HEIGHT-PI', 'GRID-UNIT-HEIGHT-PIX', 'GRID-UNIT-HEIGHT-PIXE', 'GRID-UNIT-HEIGHT-PIXEL', 'GRID-UNIT-WIDTH-CHARS', 'GRID-UNIT-WIDTH', 'GRID-UNIT-WIDTH-', 'GRID-UNIT-WIDTH-C', 'GRID-UNIT-WIDTH-CH', 'GRID-UNIT-WIDTH-CHA', 'GRID-UNIT-WIDTH-CHAR', 'GRID-UNIT-WIDTH-PIXELS', 'GRID-UNIT-WIDTH-P', 'GRID-UNIT-WIDTH-PI', 'GRID-UNIT-WIDTH-PIX', 'GRID-UNIT-WIDTH-PIXE', 'GRID-UNIT-WIDTH-PIXEL', 'GRID-VISIBLE', 'GROUP', 'GT', 'GUID', 'HANDLER', 'HAS-RECORDS', 'HAVING', 'HEADER', 'HEIGHT-CHARS', 'HEIGHT', 'HEIGHT-', 'HEIGHT-C', 'HEIGHT-CH', 'HEIGHT-CHA', 'HEIGHT-CHAR', 'HEIGHT-PIXELS', 'HEIGHT-P', 'HEIGHT-PI', 'HEIGHT-PIX', 'HEIGHT-PIXE', 'HEIGHT-PIXEL', 'HELP', 'HEX-DECODE', 'HEX-ENCODE', 'HIDDEN', 'HIDE', 'HORIZONTAL', 'HORI', 'HORIZ', 'HORIZO', 'HORIZON', 'HORIZONT', 'HORIZONTA', 'HOST-BYTE-ORDER', 'HTML-CHARSET', 'HTML-END-OF-LINE', 'HTML-END-OF-PAGE', 'HTML-FRAME-BEGIN', 'HTML-FRAME-END', 'HTML-HEADER-BEGIN', 'HTML-HEADER-END', 'HTML-TITLE-BEGIN', 'HTML-TITLE-END', 'HWND', 'ICON', 'IF', 'IMAGE', 'IMAGE-DOWN', 'IMAGE-INSENSITIVE', 'IMAGE-SIZE', 'IMAGE-SIZE-CHARS', 'IMAGE-SIZE-C', 'IMAGE-SIZE-CH', 'IMAGE-SIZE-CHA', 
'IMAGE-SIZE-CHAR', 'IMAGE-SIZE-PIXELS', 'IMAGE-SIZE-P', 'IMAGE-SIZE-PI', 'IMAGE-SIZE-PIX', 'IMAGE-SIZE-PIXE', 'IMAGE-SIZE-PIXEL', 'IMAGE-UP', 'IMMEDIATE-DISPLAY', 'IMPLEMENTS', 'IMPORT', 'IMPORT-PRINCIPAL', 'IN', 'INCREMENT-EXCLUSIVE-ID', 'INDEX', 'INDEXED-REPOSITION', 'INDEX-HINT', 'INDEX-INFORMATION', 'INDICATOR', 'INFORMATION', 'INFO', 'INFOR', 'INFORM', 'INFORMA', 'INFORMAT', 'INFORMATI', 'INFORMATIO', 'IN-HANDLE', 'INHERIT-BGCOLOR', 'INHERIT-BGC', 'INHERIT-BGCO', 'INHERIT-BGCOL', 'INHERIT-BGCOLO', 'INHERIT-FGCOLOR', 'INHERIT-FGC', 'INHERIT-FGCO', 'INHERIT-FGCOL', 'INHERIT-FGCOLO', 'INHERITS', 'INITIAL', 'INIT', 'INITI', 'INITIA', 'INITIAL-DIR', 'INITIAL-FILTER', 'INITIALIZE-DOCUMENT-TYPE', 'INITIATE', 'INNER-CHARS', 'INNER-LINES', 'INPUT', 'INPUT-OUTPUT', 'INPUT-O', 'INPUT-OU', 'INPUT-OUT', 'INPUT-OUTP', 'INPUT-OUTPU', 'INPUT-VALUE', 'INSERT', 'INSERT-ATTRIBUTE', 'INSERT-BACKTAB', 'INSERT-B', 'INSERT-BA', 'INSERT-BAC', 'INSERT-BACK', 'INSERT-BACKT', 'INSERT-BACKTA', 'INSERT-FILE', 'INSERT-ROW', 'INSERT-STRING', 'INSERT-TAB', 'INSERT-T', 'INSERT-TA', 'INTERFACE', 'INTERNAL-ENTRIES', 'INTO', 'INVOKE', 'IS', 'IS-ATTR-SPACE', 'IS-ATTR', 'IS-ATTR-', 'IS-ATTR-S', 'IS-ATTR-SP', 'IS-ATTR-SPA', 'IS-ATTR-SPAC', 'IS-CLASS', 'IS-CLAS', 'IS-LEAD-BYTE', 'IS-OPEN', 'IS-PARAMETER-SET', 'IS-ROW-SELECTED', 'IS-SELECTED', 'ITEM', 'ITEMS-PER-ROW', 'JOIN', 'JOIN-BY-SQLDB', 'KBLABEL', 'KEEP-CONNECTION-OPEN', 'KEEP-FRAME-Z-ORDER', 'KEEP-FRAME-Z', 'KEEP-FRAME-Z-', 'KEEP-FRAME-Z-O', 'KEEP-FRAME-Z-OR', 'KEEP-FRAME-Z-ORD', 'KEEP-FRAME-Z-ORDE', 'KEEP-MESSAGES', 'KEEP-SECURITY-CACHE', 'KEEP-TAB-ORDER', 'KEY', 'KEYCODE', 'KEY-CODE', 'KEYFUNCTION', 'KEYFUNC', 'KEYFUNCT', 'KEYFUNCTI', 'KEYFUNCTIO', 'KEY-FUNCTION', 'KEY-FUNC', 'KEY-FUNCT', 'KEY-FUNCTI', 'KEY-FUNCTIO', 'KEYLABEL', 'KEY-LABEL', 'KEYS', 'KEYWORD', 'KEYWORD-ALL', 'LABEL', 'LABEL-BGCOLOR', 'LABEL-BGC', 'LABEL-BGCO', 'LABEL-BGCOL', 'LABEL-BGCOLO', 'LABEL-DCOLOR', 'LABEL-DC', 'LABEL-DCO', 'LABEL-DCOL', 'LABEL-DCOLO', 'LABEL-FGCOLOR', 'LABEL-FGC', 'LABEL-FGCO', 'LABEL-FGCOL', 'LABEL-FGCOLO', 'LABEL-FONT', 'LABEL-PFCOLOR', 'LABEL-PFC', 'LABEL-PFCO', 'LABEL-PFCOL', 'LABEL-PFCOLO', 'LABELS', 'LANDSCAPE', 'LANGUAGES', 'LANGUAGE', 'LARGE', 'LARGE-TO-SMALL', 'LAST', 'LAST-ASYNCH-REQUEST', 'LAST-BATCH', 'LAST-CHILD', 'LAST-EVENT', 'LAST-EVEN', 'LAST-FORM', 'LASTKEY', 'LAST-KEY', 'LAST-OBJECT', 'LAST-OF', 'LAST-PROCEDURE', 'LAST-PROCE', 'LAST-PROCED', 'LAST-PROCEDU', 'LAST-PROCEDUR', 'LAST-SERVER', 'LAST-TAB-ITEM', 'LAST-TAB-I', 'LAST-TAB-IT', 'LAST-TAB-ITE', 'LC', 'LDBNAME', 'LE', 'LEAVE', 'LEFT-ALIGNED', 'LEFT-ALIGN', 'LEFT-ALIGNE', 'LEFT-TRIM', 'LENGTH', 'LIBRARY', 'LIKE', 'LIKE-SEQUENTIAL', 'LINE', 'LINE-COUNTER', 'LINE-COUNT', 'LINE-COUNTE', 'LIST-EVENTS', 'LISTING', 'LISTI', 'LISTIN', 'LIST-ITEM-PAIRS', 'LIST-ITEMS', 'LIST-PROPERTY-NAMES', 'LIST-QUERY-ATTRS', 'LIST-SET-ATTRS', 'LIST-WIDGETS', 'LITERAL-QUESTION', 'LITTLE-ENDIAN', 'LOAD', 'LOAD-DOMAINS', 'LOAD-ICON', 'LOAD-IMAGE', 'LOAD-IMAGE-DOWN', 'LOAD-IMAGE-INSENSITIVE', 'LOAD-IMAGE-UP', 'LOAD-MOUSE-POINTER', 'LOAD-MOUSE-P', 'LOAD-MOUSE-PO', 'LOAD-MOUSE-POI', 'LOAD-MOUSE-POIN', 'LOAD-MOUSE-POINT', 'LOAD-MOUSE-POINTE', 'LOAD-PICTURE', 'LOAD-SMALL-ICON', 'LOCAL-NAME', 'LOCATOR-COLUMN-NUMBER', 'LOCATOR-LINE-NUMBER', 'LOCATOR-PUBLIC-ID', 'LOCATOR-SYSTEM-ID', 'LOCATOR-TYPE', 'LOCKED', 'LOCK-REGISTRATION', 'LOG', 'LOG-AUDIT-EVENT', 'LOGIN-EXPIRATION-TIMESTAMP', 'LOGIN-HOST', 'LOGIN-STATE', 'LOG-MANAGER', 'LOGOUT', 'LOOKAHEAD', 'LOOKUP', 'LT', 'MACHINE-CLASS', 'MANDATORY', 'MANUAL-HIGHLIGHT', 'MAP', 'MARGIN-EXTRA', 
'MARGIN-HEIGHT-CHARS', 'MARGIN-HEIGHT', 'MARGIN-HEIGHT-', 'MARGIN-HEIGHT-C', 'MARGIN-HEIGHT-CH', 'MARGIN-HEIGHT-CHA', 'MARGIN-HEIGHT-CHAR', 'MARGIN-HEIGHT-PIXELS', 'MARGIN-HEIGHT-P', 'MARGIN-HEIGHT-PI', 'MARGIN-HEIGHT-PIX', 'MARGIN-HEIGHT-PIXE', 'MARGIN-HEIGHT-PIXEL', 'MARGIN-WIDTH-CHARS', 'MARGIN-WIDTH', 'MARGIN-WIDTH-', 'MARGIN-WIDTH-C', 'MARGIN-WIDTH-CH', 'MARGIN-WIDTH-CHA', 'MARGIN-WIDTH-CHAR', 'MARGIN-WIDTH-PIXELS', 'MARGIN-WIDTH-P', 'MARGIN-WIDTH-PI', 'MARGIN-WIDTH-PIX', 'MARGIN-WIDTH-PIXE', 'MARGIN-WIDTH-PIXEL', 'MARK-NEW', 'MARK-ROW-STATE', 'MATCHES', 'MAX-BUTTON', 'MAX-CHARS', 'MAX-DATA-GUESS', 'MAX-HEIGHT', 'MAX-HEIGHT-CHARS', 'MAX-HEIGHT-C', 'MAX-HEIGHT-CH', 'MAX-HEIGHT-CHA', 'MAX-HEIGHT-CHAR', 'MAX-HEIGHT-PIXELS', 'MAX-HEIGHT-P', 'MAX-HEIGHT-PI', 'MAX-HEIGHT-PIX', 'MAX-HEIGHT-PIXE', 'MAX-HEIGHT-PIXEL', 'MAXIMIZE', 'MAXIMUM', 'MAX', 'MAXI', 'MAXIM', 'MAXIMU', 'MAXIMUM-LEVEL', 'MAX-ROWS', 'MAX-SIZE', 'MAX-VALUE', 'MAX-VAL', 'MAX-VALU', 'MAX-WIDTH-CHARS', 'MAX-WIDTH', 'MAX-WIDTH-', 'MAX-WIDTH-C', 'MAX-WIDTH-CH', 'MAX-WIDTH-CHA', 'MAX-WIDTH-CHAR', 'MAX-WIDTH-PIXELS', 'MAX-WIDTH-P', 'MAX-WIDTH-PI', 'MAX-WIDTH-PIX', 'MAX-WIDTH-PIXE', 'MAX-WIDTH-PIXEL', 'MD5-DIGEST', 'MEMBER', 'MEMPTR-TO-NODE-VALUE', 'MENU', 'MENUBAR', 'MENU-BAR', 'MENU-ITEM', 'MENU-KEY', 'MENU-K', 'MENU-KE', 'MENU-MOUSE', 'MENU-M', 'MENU-MO', 'MENU-MOU', 'MENU-MOUS', 'MERGE-BY-FIELD', 'MESSAGE', 'MESSAGE-AREA', 'MESSAGE-AREA-FONT', 'MESSAGE-LINES', 'METHOD', 'MIN-BUTTON', 'MIN-COLUMN-WIDTH-CHARS', 'MIN-COLUMN-WIDTH-C', 'MIN-COLUMN-WIDTH-CH', 'MIN-COLUMN-WIDTH-CHA', 'MIN-COLUMN-WIDTH-CHAR', 'MIN-COLUMN-WIDTH-PIXELS', 'MIN-COLUMN-WIDTH-P', 'MIN-COLUMN-WIDTH-PI', 'MIN-COLUMN-WIDTH-PIX', 'MIN-COLUMN-WIDTH-PIXE', 'MIN-COLUMN-WIDTH-PIXEL', 'MIN-HEIGHT-CHARS', 'MIN-HEIGHT', 'MIN-HEIGHT-', 'MIN-HEIGHT-C', 'MIN-HEIGHT-CH', 'MIN-HEIGHT-CHA', 'MIN-HEIGHT-CHAR', 'MIN-HEIGHT-PIXELS', 'MIN-HEIGHT-P', 'MIN-HEIGHT-PI', 'MIN-HEIGHT-PIX', 'MIN-HEIGHT-PIXE', 'MIN-HEIGHT-PIXEL', 'MINIMUM', 'MIN', 'MINI', 'MINIM', 'MINIMU', 'MIN-SIZE', 'MIN-VALUE', 'MIN-VAL', 'MIN-VALU', 'MIN-WIDTH-CHARS', 'MIN-WIDTH', 'MIN-WIDTH-', 'MIN-WIDTH-C', 'MIN-WIDTH-CH', 'MIN-WIDTH-CHA', 'MIN-WIDTH-CHAR', 'MIN-WIDTH-PIXELS', 'MIN-WIDTH-P', 'MIN-WIDTH-PI', 'MIN-WIDTH-PIX', 'MIN-WIDTH-PIXE', 'MIN-WIDTH-PIXEL', 'MODIFIED', 'MODULO', 'MOD', 'MODU', 'MODUL', 'MONTH', 'MOUSE', 'MOUSE-POINTER', 'MOUSE-P', 'MOUSE-PO', 'MOUSE-POI', 'MOUSE-POIN', 'MOUSE-POINT', 'MOUSE-POINTE', 'MOVABLE', 'MOVE-AFTER-TAB-ITEM', 'MOVE-AFTER', 'MOVE-AFTER-', 'MOVE-AFTER-T', 'MOVE-AFTER-TA', 'MOVE-AFTER-TAB', 'MOVE-AFTER-TAB-', 'MOVE-AFTER-TAB-I', 'MOVE-AFTER-TAB-IT', 'MOVE-AFTER-TAB-ITE', 'MOVE-BEFORE-TAB-ITEM', 'MOVE-BEFOR', 'MOVE-BEFORE', 'MOVE-BEFORE-', 'MOVE-BEFORE-T', 'MOVE-BEFORE-TA', 'MOVE-BEFORE-TAB', 'MOVE-BEFORE-TAB-', 'MOVE-BEFORE-TAB-I', 'MOVE-BEFORE-TAB-IT', 'MOVE-BEFORE-TAB-ITE', 'MOVE-COLUMN', 'MOVE-COL', 'MOVE-COLU', 'MOVE-COLUM', 'MOVE-TO-BOTTOM', 'MOVE-TO-B', 'MOVE-TO-BO', 'MOVE-TO-BOT', 'MOVE-TO-BOTT', 'MOVE-TO-BOTTO', 'MOVE-TO-EOF', 'MOVE-TO-TOP', 'MOVE-TO-T', 'MOVE-TO-TO', 'MPE', 'MULTI-COMPILE', 'MULTIPLE', 'MULTIPLE-KEY', 'MULTITASKING-INTERVAL', 'MUST-EXIST', 'NAME', 'NAMESPACE-PREFIX', 'NAMESPACE-URI', 'NATIVE', 'NE', 'NEEDS-APPSERVER-PROMPT', 'NEEDS-PROMPT', 'NEW', 'NEW-INSTANCE', 'NEW-ROW', 'NEXT', 'NEXT-COLUMN', 'NEXT-PROMPT', 'NEXT-ROWID', 'NEXT-SIBLING', 'NEXT-TAB-ITEM', 'NEXT-TAB-I', 'NEXT-TAB-IT', 'NEXT-TAB-ITE', 'NEXT-VALUE', 'NO', 'NO-APPLY', 'NO-ARRAY-MESSAGE', 'NO-ASSIGN', 'NO-ATTR-LIST', 'NO-ATTR', 'NO-ATTR-', 'NO-ATTR-L', 'NO-ATTR-LI', 'NO-ATTR-LIS', 
'NO-ATTR-SPACE', 'NO-ATTR-S', 'NO-ATTR-SP', 'NO-ATTR-SPA', 'NO-ATTR-SPAC', 'NO-AUTO-VALIDATE', 'NO-BIND-WHERE', 'NO-BOX', 'NO-CONSOLE', 'NO-CONVERT', 'NO-CONVERT-3D-COLORS', 'NO-CURRENT-VALUE', 'NO-DEBUG', 'NODE-VALUE-TO-MEMPTR', 'NO-DRAG', 'NO-ECHO', 'NO-EMPTY-SPACE', 'NO-ERROR', 'NO-FILL', 'NO-F', 'NO-FI', 'NO-FIL', 'NO-FOCUS', 'NO-HELP', 'NO-HIDE', 'NO-INDEX-HINT', 'NO-INHERIT-BGCOLOR', 'NO-INHERIT-BGC', 'NO-INHERIT-BGCO', 'NO-INHERIT-FGCOLOR', 'NO-INHERIT-FGC', 'NO-INHERIT-FGCO', 'NO-INHERIT-FGCOL', 'NO-INHERIT-FGCOLO', 'NO-JOIN-BY-SQLDB', 'NO-LABELS', 'NO-LABE', 'NO-LOBS', 'NO-LOCK', 'NO-LOOKAHEAD', 'NO-MAP', 'NO-MESSAGE', 'NO-MES', 'NO-MESS', 'NO-MESSA', 'NO-MESSAG', 'NONAMESPACE-SCHEMA-LOCATION', 'NONE', 'NO-PAUSE', 'NO-PREFETCH', 'NO-PREFE', 'NO-PREFET', 'NO-PREFETC', 'NORMALIZE', 'NO-ROW-MARKERS', 'NO-SCROLLBAR-VERTICAL', 'NO-SEPARATE-CONNECTION', 'NO-SEPARATORS', 'NOT', 'NO-TAB-STOP', 'NOT-ACTIVE', 'NO-UNDERLINE', 'NO-UND', 'NO-UNDE', 'NO-UNDER', 'NO-UNDERL', 'NO-UNDERLI', 'NO-UNDERLIN', 'NO-UNDO', 'NO-VALIDATE', 'NO-VAL', 'NO-VALI', 'NO-VALID', 'NO-VALIDA', 'NO-VALIDAT', 'NOW', 'NO-WAIT', 'NO-WORD-WRAP', 'NULL', 'NUM-ALIASES', 'NUM-ALI', 'NUM-ALIA', 'NUM-ALIAS', 'NUM-ALIASE', 'NUM-BUFFERS', 'NUM-BUTTONS', 'NUM-BUT', 'NUM-BUTT', 'NUM-BUTTO', 'NUM-BUTTON', 'NUM-COLUMNS', 'NUM-COL', 'NUM-COLU', 'NUM-COLUM', 'NUM-COLUMN', 'NUM-COPIES', 'NUM-DBS', 'NUM-DROPPED-FILES', 'NUM-ENTRIES', 'NUMERIC', 'NUMERIC-FORMAT', 'NUMERIC-F', 'NUMERIC-FO', 'NUMERIC-FOR', 'NUMERIC-FORM', 'NUMERIC-FORMA', 'NUM-FIELDS', 'NUM-FORMATS', 'NUM-ITEMS', 'NUM-ITERATIONS', 'NUM-LINES', 'NUM-LOCKED-COLUMNS', 'NUM-LOCKED-COL', 'NUM-LOCKED-COLU', 'NUM-LOCKED-COLUM', 'NUM-LOCKED-COLUMN', 'NUM-MESSAGES', 'NUM-PARAMETERS', 'NUM-REFERENCES', 'NUM-REPLACED', 'NUM-RESULTS', 'NUM-SELECTED-ROWS', 'NUM-SELECTED-WIDGETS', 'NUM-SELECTED', 'NUM-SELECTED-', 'NUM-SELECTED-W', 'NUM-SELECTED-WI', 'NUM-SELECTED-WID', 'NUM-SELECTED-WIDG', 'NUM-SELECTED-WIDGE', 'NUM-SELECTED-WIDGET', 'NUM-TABS', 'NUM-TO-RETAIN', 'NUM-VISIBLE-COLUMNS', 'OCTET-LENGTH', 'OF', 'OFF', 'OK', 'OK-CANCEL', 'OLD', 'ON', 'ON-FRAME-BORDER', 'ON-FRAME', 'ON-FRAME-', 'ON-FRAME-B', 'ON-FRAME-BO', 'ON-FRAME-BOR', 'ON-FRAME-BORD', 'ON-FRAME-BORDE', 'OPEN', 'OPSYS', 'OPTION', 'OR', 'ORDERED-JOIN', 'ORDINAL', 'OS-APPEND', 'OS-COMMAND', 'OS-COPY', 'OS-CREATE-DIR', 'OS-DELETE', 'OS-DIR', 'OS-DRIVES', 'OS-DRIVE', 'OS-ERROR', 'OS-GETENV', 'OS-RENAME', 'OTHERWISE', 'OUTPUT', 'OVERLAY', 'OVERRIDE', 'OWNER', 'PAGE', 'PAGE-BOTTOM', 'PAGE-BOT', 'PAGE-BOTT', 'PAGE-BOTTO', 'PAGED', 'PAGE-NUMBER', 'PAGE-NUM', 'PAGE-NUMB', 'PAGE-NUMBE', 'PAGE-SIZE', 'PAGE-TOP', 'PAGE-WIDTH', 'PAGE-WID', 'PAGE-WIDT', 'PARAMETER', 'PARAM', 'PARAME', 'PARAMET', 'PARAMETE', 'PARENT', 'PARSE-STATUS', 'PARTIAL-KEY', 'PASCAL', 'PASSWORD-FIELD', 'PATHNAME', 'PAUSE', 'PBE-HASH-ALGORITHM', 'PBE-HASH-ALG', 'PBE-HASH-ALGO', 'PBE-HASH-ALGOR', 'PBE-HASH-ALGORI', 'PBE-HASH-ALGORIT', 'PBE-HASH-ALGORITH', 'PBE-KEY-ROUNDS', 'PDBNAME', 'PERSISTENT', 'PERSIST', 'PERSISTE', 'PERSISTEN', 'PERSISTENT-CACHE-DISABLED', 'PFCOLOR', 'PFC', 'PFCO', 'PFCOL', 'PFCOLO', 'PIXELS', 'PIXELS-PER-COLUMN', 'PIXELS-PER-COL', 'PIXELS-PER-COLU', 'PIXELS-PER-COLUM', 'PIXELS-PER-ROW', 'POPUP-MENU', 'POPUP-M', 'POPUP-ME', 'POPUP-MEN', 'POPUP-ONLY', 'POPUP-O', 'POPUP-ON', 'POPUP-ONL', 'PORTRAIT', 'POSITION', 'PRECISION', 'PREFER-DATASET', 'PREPARED', 'PREPARE-STRING', 'PREPROCESS', 'PREPROC', 'PREPROCE', 'PREPROCES', 'PRESELECT', 'PRESEL', 'PRESELE', 'PRESELEC', 'PREV', 'PREV-COLUMN', 'PREV-SIBLING', 'PREV-TAB-ITEM', 'PREV-TAB-I', 
'PREV-TAB-IT', 'PREV-TAB-ITE', 'PRIMARY', 'PRINTER', 'PRINTER-CONTROL-HANDLE', 'PRINTER-HDC', 'PRINTER-NAME', 'PRINTER-PORT', 'PRINTER-SETUP', 'PRIVATE', 'PRIVATE-DATA', 'PRIVATE-D', 'PRIVATE-DA', 'PRIVATE-DAT', 'PRIVILEGES', 'PROCEDURE', 'PROCE', 'PROCED', 'PROCEDU', 'PROCEDUR', 'PROCEDURE-CALL-TYPE', 'PROCESS', 'PROC-HANDLE', 'PROC-HA', 'PROC-HAN', 'PROC-HAND', 'PROC-HANDL', 'PROC-STATUS', 'PROC-ST', 'PROC-STA', 'PROC-STAT', 'PROC-STATU', 'proc-text', 'proc-text-buffe', 'PROFILER', 'PROGRAM-NAME', 'PROGRESS', 'PROGRESS-SOURCE', 'PROGRESS-S', 'PROGRESS-SO', 'PROGRESS-SOU', 'PROGRESS-SOUR', 'PROGRESS-SOURC', 'PROMPT', 'PROMPT-FOR', 'PROMPT-F', 'PROMPT-FO', 'PROMSGS', 'PROPATH', 'PROPERTY', 'PROTECTED', 'PROVERSION', 'PROVERS', 'PROVERSI', 'PROVERSIO', 'PROXY', 'PROXY-PASSWORD', 'PROXY-USERID', 'PUBLIC', 'PUBLIC-ID', 'PUBLISH', 'PUBLISHED-EVENTS', 'PUT', 'PUTBYTE', 'PUT-BYTE', 'PUT-DOUBLE', 'PUT-FLOAT', 'PUT-INT64', 'PUT-KEY-VALUE', 'PUT-KEY-VAL', 'PUT-KEY-VALU', 'PUT-LONG', 'PUT-SHORT', 'PUT-STRING', 'PUT-UNSIGNED-LONG', 'QUERY', 'QUERY-CLOSE', 'QUERY-OFF-END', 'QUERY-OPEN', 'QUERY-PREPARE', 'QUERY-TUNING', 'QUESTION', 'QUIT', 'QUOTER', 'RADIO-BUTTONS', 'RADIO-SET', 'RANDOM', 'RAW-TRANSFER', 'RCODE-INFORMATION', 'RCODE-INFO', 'RCODE-INFOR', 'RCODE-INFORM', 'RCODE-INFORMA', 'RCODE-INFORMAT', 'RCODE-INFORMATI', 'RCODE-INFORMATIO', 'READ-AVAILABLE', 'READ-EXACT-NUM', 'READ-FILE', 'READKEY', 'READ-ONLY', 'READ-XML', 'READ-XMLSCHEMA', 'REAL', 'RECORD-LENGTH', 'RECTANGLE', 'RECT', 'RECTA', 'RECTAN', 'RECTANG', 'RECTANGL', 'RECURSIVE', 'REFERENCE-ONLY', 'REFRESH', 'REFRESHABLE', 'REFRESH-AUDIT-POLICY', 'REGISTER-DOMAIN', 'RELEASE', 'REMOTE', 'REMOVE-EVENTS-PROCEDURE', 'REMOVE-SUPER-PROCEDURE', 'REPEAT', 'REPLACE', 'REPLACE-SELECTION-TEXT', 'REPOSITION', 'REPOSITION-BACKWARD', 'REPOSITION-FORWARD', 'REPOSITION-MODE', 'REPOSITION-TO-ROW', 'REPOSITION-TO-ROWID', 'REQUEST', 'RESET', 'RESIZABLE', 'RESIZA', 'RESIZAB', 'RESIZABL', 'RESIZE', 'RESTART-ROW', 'RESTART-ROWID', 'RETAIN', 'RETAIN-SHAPE', 'RETRY', 'RETRY-CANCEL', 'RETURN', 'RETURN-INSERTED', 'RETURN-INS', 'RETURN-INSE', 'RETURN-INSER', 'RETURN-INSERT', 'RETURN-INSERTE', 'RETURNS', 'RETURN-TO-START-DIR', 'RETURN-TO-START-DI', 'RETURN-VALUE', 'RETURN-VAL', 'RETURN-VALU', 'RETURN-VALUE-DATA-TYPE', 'REVERSE-FROM', 'REVERT', 'REVOKE', 'RGB-VALUE', 'RIGHT-ALIGNED', 'RETURN-ALIGN', 'RETURN-ALIGNE', 'RIGHT-TRIM', 'R-INDEX', 'ROLES', 'ROUND', 'ROUTINE-LEVEL', 'ROW', 'ROW-HEIGHT-CHARS', 'ROW-HEIGHT-PIXELS', 'ROW-MARKERS', 'ROW-OF', 'ROW-RESIZABLE', 'RULE', 'RUN', 'RUN-PROCEDURE', 'SAVE', 'SAVE-AS', 'SAVE-FILE', 'SAX-COMPLETE', 'SAX-COMPLE', 'SAX-COMPLET', 'SAX-PARSE', 'SAX-PARSE-FIRST', 'SAX-PARSE-NEXT', 'SAX-PARSER-ERROR', 'SAX-RUNNING', 'SAX-UNINITIALIZED', 'SAX-WRITE-BEGIN', 'SAX-WRITE-COMPLETE', 'SAX-WRITE-CONTENT', 'SAX-WRITE-ELEMENT', 'SAX-WRITE-ERROR', 'SAX-WRITE-IDLE', 'SAX-WRITER', 'SAX-WRITE-TAG', 'SCHEMA', 'SCHEMA-LOCATION', 'SCHEMA-MARSHAL', 'SCHEMA-PATH', 'SCREEN', 'SCREEN-IO', 'SCREEN-LINES', 'SCREEN-VALUE', 'SCREEN-VAL', 'SCREEN-VALU', 'SCROLL', 'SCROLLABLE', 'SCROLLBAR-HORIZONTAL', 'SCROLLBAR-H', 'SCROLLBAR-HO', 'SCROLLBAR-HOR', 'SCROLLBAR-HORI', 'SCROLLBAR-HORIZ', 'SCROLLBAR-HORIZO', 'SCROLLBAR-HORIZON', 'SCROLLBAR-HORIZONT', 'SCROLLBAR-HORIZONTA', 'SCROLL-BARS', 'SCROLLBAR-VERTICAL', 'SCROLLBAR-V', 'SCROLLBAR-VE', 'SCROLLBAR-VER', 'SCROLLBAR-VERT', 'SCROLLBAR-VERTI', 'SCROLLBAR-VERTIC', 'SCROLLBAR-VERTICA', 'SCROLL-DELTA', 'SCROLLED-ROW-POSITION', 'SCROLLED-ROW-POS', 'SCROLLED-ROW-POSI', 'SCROLLED-ROW-POSIT', 'SCROLLED-ROW-POSITI', 
'SCROLLED-ROW-POSITIO', 'SCROLLING', 'SCROLL-OFFSET', 'SCROLL-TO-CURRENT-ROW', 'SCROLL-TO-ITEM', 'SCROLL-TO-I', 'SCROLL-TO-IT', 'SCROLL-TO-ITE', 'SCROLL-TO-SELECTED-ROW', 'SDBNAME', 'SEAL', 'SEAL-TIMESTAMP', 'SEARCH', 'SEARCH-SELF', 'SEARCH-TARGET', 'SECTION', 'SECURITY-POLICY', 'SEEK', 'SELECT', 'SELECTABLE', 'SELECT-ALL', 'SELECTED', 'SELECT-FOCUSED-ROW', 'SELECTION', 'SELECTION-END', 'SELECTION-LIST', 'SELECTION-START', 'SELECTION-TEXT', 'SELECT-NEXT-ROW', 'SELECT-PREV-ROW', 'SELECT-ROW', 'SELF', 'SEND', 'send-sql-statement', 'send-sql', 'SENSITIVE', 'SEPARATE-CONNECTION', 'SEPARATOR-FGCOLOR', 'SEPARATORS', 'SERVER', 'SERVER-CONNECTION-BOUND', 'SERVER-CONNECTION-BOUND-REQUEST', 'SERVER-CONNECTION-CONTEXT', 'SERVER-CONNECTION-ID', 'SERVER-OPERATING-MODE', 'SESSION', 'SESSION-ID', 'SET', 'SET-APPL-CONTEXT', 'SET-ATTR-CALL-TYPE', 'SET-ATTRIBUTE-NODE', 'SET-BLUE-VALUE', 'SET-BLUE', 'SET-BLUE-', 'SET-BLUE-V', 'SET-BLUE-VA', 'SET-BLUE-VAL', 'SET-BLUE-VALU', 'SET-BREAK', 'SET-BUFFERS', 'SET-CALLBACK', 'SET-CLIENT', 'SET-COMMIT', 'SET-CONTENTS', 'SET-CURRENT-VALUE', 'SET-DB-CLIENT', 'SET-DYNAMIC', 'SET-EVENT-MANAGER-OPTION', 'SET-GREEN-VALUE', 'SET-GREEN', 'SET-GREEN-', 'SET-GREEN-V', 'SET-GREEN-VA', 'SET-GREEN-VAL', 'SET-GREEN-VALU', 'SET-INPUT-SOURCE', 'SET-OPTION', 'SET-OUTPUT-DESTINATION', 'SET-PARAMETER', 'SET-POINTER-VALUE', 'SET-PROPERTY', 'SET-RED-VALUE', 'SET-RED', 'SET-RED-', 'SET-RED-V', 'SET-RED-VA', 'SET-RED-VAL', 'SET-RED-VALU', 'SET-REPOSITIONED-ROW', 'SET-RGB-VALUE', 'SET-ROLLBACK', 'SET-SELECTION', 'SET-SIZE', 'SET-SORT-ARROW', 'SETUSERID', 'SETUSER', 'SETUSERI', 'SET-WAIT-STATE', 'SHA1-DIGEST', 'SHARED', 'SHARE-LOCK', 'SHARE', 'SHARE-', 'SHARE-L', 'SHARE-LO', 'SHARE-LOC', 'SHOW-IN-TASKBAR', 'SHOW-STATS', 'SHOW-STAT', 'SIDE-LABEL-HANDLE', 'SIDE-LABEL-H', 'SIDE-LABEL-HA', 'SIDE-LABEL-HAN', 'SIDE-LABEL-HAND', 'SIDE-LABEL-HANDL', 'SIDE-LABELS', 'SIDE-LAB', 'SIDE-LABE', 'SIDE-LABEL', 'SILENT', 'SIMPLE', 'SINGLE', 'SIZE', 'SIZE-CHARS', 'SIZE-C', 'SIZE-CH', 'SIZE-CHA', 'SIZE-CHAR', 'SIZE-PIXELS', 'SIZE-P', 'SIZE-PI', 'SIZE-PIX', 'SIZE-PIXE', 'SIZE-PIXEL', 'SKIP', 'SKIP-DELETED-RECORD', 'SLIDER', 'SMALL-ICON', 'SMALLINT', 'SMALL-TITLE', 'SOME', 'SORT', 'SORT-ASCENDING', 'SORT-NUMBER', 'SOURCE', 'SOURCE-PROCEDURE', 'SPACE', 'SQL', 'SQRT', 'SSL-SERVER-NAME', 'STANDALONE', 'START', 'START-DOCUMENT', 'START-ELEMENT', 'START-MOVE', 'START-RESIZE', 'START-ROW-RESIZE', 'STATE-DETAIL', 'STATIC', 'STATUS', 'STATUS-AREA', 'STATUS-AREA-FONT', 'STDCALL', 'STOP', 'STOP-PARSING', 'STOPPED', 'STOPPE', 'STORED-PROCEDURE', 'STORED-PROC', 'STORED-PROCE', 'STORED-PROCED', 'STORED-PROCEDU', 'STORED-PROCEDUR', 'STREAM', 'STREAM-HANDLE', 'STREAM-IO', 'STRETCH-TO-FIT', 'STRICT', 'STRING', 'STRING-VALUE', 'STRING-XREF', 'SUB-AVERAGE', 'SUB-AVE', 'SUB-AVER', 'SUB-AVERA', 'SUB-AVERAG', 'SUB-COUNT', 'SUB-MAXIMUM', 'SUM-MAX', 'SUM-MAXI', 'SUM-MAXIM', 'SUM-MAXIMU', 'SUB-MENU', 'SUBSUB-', 'SUB-MIN', 'SUBSCRIBE', 'SUBSTITUTE', 'SUBST', 'SUBSTI', 'SUBSTIT', 'SUBSTITU', 'SUBSTITUT', 'SUBSTRING', 'SUBSTR', 'SUBSTRI', 'SUBSTRIN', 'SUB-TOTAL', 'SUBTYPE', 'SUM', 'SUPER', 'SUPER-PROCEDURES', 'SUPPRESS-NAMESPACE-PROCESSING', 'SUPPRESS-WARNINGS', 'SUPPRESS-W', 'SUPPRESS-WA', 'SUPPRESS-WAR', 'SUPPRESS-WARN', 'SUPPRESS-WARNI', 'SUPPRESS-WARNIN', 'SUPPRESS-WARNING', 'SYMMETRIC-ENCRYPTION-ALGORITHM', 'SYMMETRIC-ENCRYPTION-IV', 'SYMMETRIC-ENCRYPTION-KEY', 'SYMMETRIC-SUPPORT', 'SYSTEM-ALERT-BOXES', 'SYSTEM-ALERT', 'SYSTEM-ALERT-', 'SYSTEM-ALERT-B', 'SYSTEM-ALERT-BO', 'SYSTEM-ALERT-BOX', 'SYSTEM-ALERT-BOXE', 'SYSTEM-DIALOG', 
'SYSTEM-HELP', 'SYSTEM-ID', 'TABLE', 'TABLE-HANDLE', 'TABLE-NUMBER', 'TAB-POSITION', 'TAB-STOP', 'TARGET', 'TARGET-PROCEDURE', 'TEMP-DIRECTORY', 'TEMP-DIR', 'TEMP-DIRE', 'TEMP-DIREC', 'TEMP-DIRECT', 'TEMP-DIRECTO', 'TEMP-DIRECTOR', 'TEMP-TABLE', 'TEMP-TABLE-PREPARE', 'TERM', 'TERMINAL', 'TERMI', 'TERMIN', 'TERMINA', 'TERMINATE', 'TEXT', 'TEXT-CURSOR', 'TEXT-SEG-GROW', 'TEXT-SELECTED', 'THEN', 'THIS-OBJECT', 'THIS-PROCEDURE', 'THREE-D', 'THROW', 'THROUGH', 'THRU', 'TIC-MARKS', 'TIME', 'TIME-SOURCE', 'TITLE', 'TITLE-BGCOLOR', 'TITLE-BGC', 'TITLE-BGCO', 'TITLE-BGCOL', 'TITLE-BGCOLO', 'TITLE-DCOLOR', 'TITLE-DC', 'TITLE-DCO', 'TITLE-DCOL', 'TITLE-DCOLO', 'TITLE-FGCOLOR', 'TITLE-FGC', 'TITLE-FGCO', 'TITLE-FGCOL', 'TITLE-FGCOLO', 'TITLE-FONT', 'TITLE-FO', 'TITLE-FON', 'TO', 'TODAY', 'TOGGLE-BOX', 'TOOLTIP', 'TOOLTIPS', 'TOPIC', 'TOP-NAV-QUERY', 'TOP-ONLY', 'TO-ROWID', 'TOTAL', 'TRAILING', 'TRANS', 'TRANSACTION', 'TRANSACTION-MODE', 'TRANS-INIT-PROCEDURE', 'TRANSPARENT', 'TRIGGER', 'TRIGGERS', 'TRIM', 'TRUE', 'TRUNCATE', 'TRUNC', 'TRUNCA', 'TRUNCAT', 'TYPE', 'TYPE-OF', 'UNBOX', 'UNBUFFERED', 'UNBUFF', 'UNBUFFE', 'UNBUFFER', 'UNBUFFERE', 'UNDERLINE', 'UNDERL', 'UNDERLI', 'UNDERLIN', 'UNDO', 'UNFORMATTED', 'UNFORM', 'UNFORMA', 'UNFORMAT', 'UNFORMATT', 'UNFORMATTE', 'UNION', 'UNIQUE', 'UNIQUE-ID', 'UNIQUE-MATCH', 'UNIX', 'UNLESS-HIDDEN', 'UNLOAD', 'UNSIGNED-LONG', 'UNSUBSCRIBE', 'UP', 'UPDATE', 'UPDATE-ATTRIBUTE', 'URL', 'URL-DECODE', 'URL-ENCODE', 'URL-PASSWORD', 'URL-USERID', 'USE', 'USE-DICT-EXPS', 'USE-FILENAME', 'USE-INDEX', 'USER', 'USE-REVVIDEO', 'USERID', 'USER-ID', 'USE-TEXT', 'USE-UNDERLINE', 'USE-WIDGET-POOL', 'USING', 'V6DISPLAY', 'V6FRAME', 'VALIDATE', 'VALIDATE-EXPRESSION', 'VALIDATE-MESSAGE', 'VALIDATE-SEAL', 'VALIDATION-ENABLED', 'VALID-EVENT', 'VALID-HANDLE', 'VALID-OBJECT', 'VALUE', 'VALUE-CHANGED', 'VALUES', 'VARIABLE', 'VAR', 'VARI', 'VARIA', 'VARIAB', 'VARIABL', 'VERBOSE', 'VERSION', 'VERTICAL', 'VERT', 'VERTI', 'VERTIC', 'VERTICA', 'VIEW', 'VIEW-AS', 'VIEW-FIRST-COLUMN-ON-REOPEN', 'VIRTUAL-HEIGHT-CHARS', 'VIRTUAL-HEIGHT', 'VIRTUAL-HEIGHT-', 'VIRTUAL-HEIGHT-C', 'VIRTUAL-HEIGHT-CH', 'VIRTUAL-HEIGHT-CHA', 'VIRTUAL-HEIGHT-CHAR', 'VIRTUAL-HEIGHT-PIXELS', 'VIRTUAL-HEIGHT-P', 'VIRTUAL-HEIGHT-PI', 'VIRTUAL-HEIGHT-PIX', 'VIRTUAL-HEIGHT-PIXE', 'VIRTUAL-HEIGHT-PIXEL', 'VIRTUAL-WIDTH-CHARS', 'VIRTUAL-WIDTH', 'VIRTUAL-WIDTH-', 'VIRTUAL-WIDTH-C', 'VIRTUAL-WIDTH-CH', 'VIRTUAL-WIDTH-CHA', 'VIRTUAL-WIDTH-CHAR', 'VIRTUAL-WIDTH-PIXELS', 'VIRTUAL-WIDTH-P', 'VIRTUAL-WIDTH-PI', 'VIRTUAL-WIDTH-PIX', 'VIRTUAL-WIDTH-PIXE', 'VIRTUAL-WIDTH-PIXEL', 'VISIBLE', 'VOID', 'WAIT', 'WAIT-FOR', 'WARNING', 'WEB-CONTEXT', 'WEEKDAY', 'WHEN', 'WHERE', 'WHILE', 'WIDGET', 'WIDGET-ENTER', 'WIDGET-E', 'WIDGET-EN', 'WIDGET-ENT', 'WIDGET-ENTE', 'WIDGET-ID', 'WIDGET-LEAVE', 'WIDGET-L', 'WIDGET-LE', 'WIDGET-LEA', 'WIDGET-LEAV', 'WIDGET-POOL', 'WIDTH-CHARS', 'WIDTH', 'WIDTH-', 'WIDTH-C', 'WIDTH-CH', 'WIDTH-CHA', 'WIDTH-CHAR', 'WIDTH-PIXELS', 'WIDTH-P', 'WIDTH-PI', 'WIDTH-PIX', 'WIDTH-PIXE', 'WIDTH-PIXEL', 'WINDOW', 'WINDOW-MAXIMIZED', 'WINDOW-MAXIM', 'WINDOW-MAXIMI', 'WINDOW-MAXIMIZ', 'WINDOW-MAXIMIZE', 'WINDOW-MINIMIZED', 'WINDOW-MINIM', 'WINDOW-MINIMI', 'WINDOW-MINIMIZ', 'WINDOW-MINIMIZE', 'WINDOW-NAME', 'WINDOW-NORMAL', 'WINDOW-STATE', 'WINDOW-STA', 'WINDOW-STAT', 'WINDOW-SYSTEM', 'WITH', 'WORD-INDEX', 'WORD-WRAP', 'WORK-AREA-HEIGHT-PIXELS', 'WORK-AREA-WIDTH-PIXELS', 'WORK-AREA-X', 'WORK-AREA-Y', 'WORKFILE', 'WORK-TABLE', 'WORK-TAB', 'WORK-TABL', 'WRITE', 'WRITE-CDATA', 'WRITE-CHARACTERS', 'WRITE-COMMENT', 
'WRITE-DATA-ELEMENT', 'WRITE-EMPTY-ELEMENT', 'WRITE-ENTITY-REF', 'WRITE-EXTERNAL-DTD', 'WRITE-FRAGMENT', 'WRITE-MESSAGE', 'WRITE-PROCESSING-INSTRUCTION', 'WRITE-STATUS', 'WRITE-XML', 'WRITE-XMLSCHEMA', 'X', 'XCODE', 'XML-DATA-TYPE', 'XML-NODE-TYPE', 'XML-SCHEMA-PATH', 'XML-SUPPRESS-NAMESPACE-PROCESSING', 'X-OF', 'XREF', 'XREF-XML', 'Y', 'YEAR', 'YEAR-OFFSET', 'YES', 'YES-NO', 'YES-NO-CANCEL', 'Y-OF' ) Pygments-2.3.1/pygments/lexers/configs.py0000644000175000017500000007011113402534107017550 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.configs ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for configuration file formats. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, default, words, bygroups, include, using from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace, Literal from pygments.lexers.shell import BashLexer from pygments.lexers.data import JsonLexer __all__ = ['IniLexer', 'RegeditLexer', 'PropertiesLexer', 'KconfigLexer', 'Cfengine3Lexer', 'ApacheConfLexer', 'SquidConfLexer', 'NginxConfLexer', 'LighttpdConfLexer', 'DockerLexer', 'TerraformLexer', 'TermcapLexer', 'TerminfoLexer', 'PkgConfigLexer', 'PacmanConfLexer'] class IniLexer(RegexLexer): """ Lexer for configuration files in INI style. """ name = 'INI' aliases = ['ini', 'cfg', 'dosini'] filenames = ['*.ini', '*.cfg', '*.inf'] mimetypes = ['text/x-ini', 'text/inf'] tokens = { 'root': [ (r'\s+', Text), (r'[;#].*', Comment.Single), (r'\[.*?\]$', Keyword), (r'(.*?)([ \t]*)(=)([ \t]*)(.*(?:\n[ \t].+)*)', bygroups(Name.Attribute, Text, Operator, Text, String)), # standalone option, supported by some INI parsers (r'(.+?)$', Name.Attribute), ], } def analyse_text(text): npos = text.find('\n') if npos < 3: return False return text[0] == '[' and text[npos-1] == ']' class RegeditLexer(RegexLexer): """ Lexer for `Windows Registry `_ files produced by regedit. .. versionadded:: 1.6 """ name = 'reg' aliases = ['registry'] filenames = ['*.reg'] mimetypes = ['text/x-windows-registry'] tokens = { 'root': [ (r'Windows Registry Editor.*', Text), (r'\s+', Text), (r'[;#].*', Comment.Single), (r'(\[)(-?)(HKEY_[A-Z_]+)(.*?\])$', bygroups(Keyword, Operator, Name.Builtin, Keyword)), # String keys, which obey somewhat normal escaping (r'("(?:\\"|\\\\|[^"])+")([ \t]*)(=)([ \t]*)', bygroups(Name.Attribute, Text, Operator, Text), 'value'), # Bare keys (includes @) (r'(.*?)([ \t]*)(=)([ \t]*)', bygroups(Name.Attribute, Text, Operator, Text), 'value'), ], 'value': [ (r'-', Operator, '#pop'), # delete value (r'(dword|hex(?:\([0-9a-fA-F]\))?)(:)([0-9a-fA-F,]+)', bygroups(Name.Variable, Punctuation, Number), '#pop'), # As far as I know, .reg files do not support line continuation. (r'.+', String, '#pop'), default('#pop'), ] } def analyse_text(text): return text.startswith('Windows Registry Editor') class PropertiesLexer(RegexLexer): """ Lexer for configuration files in Java's properties format. Note: trailing whitespace counts as part of the value as per spec .. 
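(Editor's illustrative addition, not part of the original docstring: the rules below accept ``key=value``, ``key: value`` and ``key value`` pairs as well as bare keys, with ``#`` or ``!`` introducing comments and a trailing backslash continuing a value onto the next line.)

..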
versionadded:: 1.4 """ name = 'Properties' aliases = ['properties', 'jproperties'] filenames = ['*.properties'] mimetypes = ['text/x-java-properties'] tokens = { 'root': [ (r'^(\w+)([ \t])(\w+\s*)$', bygroups(Name.Attribute, Text, String)), (r'^\w+(\\[ \t]\w*)*$', Name.Attribute), (r'(^ *)([#!].*)', bygroups(Text, Comment)), # More controversial comments (r'(^ *)((?:;|//).*)', bygroups(Text, Comment)), (r'(.*?)([ \t]*)([=:])([ \t]*)(.*(?:(?<=\\)\n.*)*)', bygroups(Name.Attribute, Text, Operator, Text, String)), (r'\s', Text), ], } def _rx_indent(level): # Kconfig *always* interprets a tab as 8 spaces, so this is the default. # Edit this if you are in an environment where KconfigLexer gets expanded # input (tabs expanded to spaces) and the expansion tab width is != 8, # e.g. in connection with Trac (trac.ini, [mimeviewer], tab_width). # Value range here is 2 <= {tab_width} <= 8. tab_width = 8 # Regex matching a given indentation {level}, assuming that indentation is # a multiple of {tab_width}. In other cases there might be problems. if tab_width == 2: space_repeat = '+' else: space_repeat = '{1,%d}' % (tab_width - 1) if level == 1: level_repeat = '' else: level_repeat = '{%s}' % level return r'(?:\t| %s\t| {%s})%s.*\n' % (space_repeat, tab_width, level_repeat) class KconfigLexer(RegexLexer): """ For Linux-style Kconfig files. .. versionadded:: 1.6 """ name = 'Kconfig' aliases = ['kconfig', 'menuconfig', 'linux-config', 'kernel-config'] # Adjust this if new kconfig file names appear in your environment filenames = ['Kconfig', '*Config.in*', 'external.in*', 'standard-modules.in'] mimetypes = ['text/x-kconfig'] # No re.MULTILINE, indentation-aware help text needs line-by-line handling flags = 0 def call_indent(level): # If indentation >= {level} is detected, enter state 'indent{level}' return (_rx_indent(level), String.Doc, 'indent%s' % level) def do_indent(level): # Print paragraphs of indentation level >= {level} as String.Doc, # ignoring blank lines. Then return to 'root' state. return [ (_rx_indent(level), String.Doc), (r'\s*\n', Text), default('#pop:2') ] tokens = { 'root': [ (r'\s+', Text), (r'#.*?\n', Comment.Single), (words(( 'mainmenu', 'config', 'menuconfig', 'choice', 'endchoice', 'comment', 'menu', 'endmenu', 'visible if', 'if', 'endif', 'source', 'prompt', 'select', 'depends on', 'default', 'range', 'option'), suffix=r'\b'), Keyword), (r'(---help---|help)[\t ]*\n', Keyword, 'help'), (r'(bool|tristate|string|hex|int|defconfig_list|modules|env)\b', Name.Builtin), (r'[!=&|]', Operator), (r'[()]', Punctuation), (r'[0-9]+', Number.Integer), (r"'(''|[^'])*'", String.Single), (r'"(""|[^"])*"', String.Double), (r'\S+', Text), ], # Help text is indented, multi-line and ends when a lower indentation # level is detected. 'help': [ # Skip blank lines after help token, if any (r'\s*\n', Text), # Determine the first help line's indentation level heuristically(!). # Attention: this is not perfect, but works for 99% of "normal" # indentation schemes up to a max. indentation level of 7. call_indent(7), call_indent(6), call_indent(5), call_indent(4), call_indent(3), call_indent(2), call_indent(1), default('#pop'), # for incomplete help sections without text ], # Handle text for indentation levels 7 to 1 'indent7': do_indent(7), 'indent6': do_indent(6), 'indent5': do_indent(5), 'indent4': do_indent(4), 'indent3': do_indent(3), 'indent2': do_indent(2), 'indent1': do_indent(1), } class Cfengine3Lexer(RegexLexer): """ Lexer for `CFEngine3 `_ policy files. .. 
versionadded:: 1.5 """ name = 'CFEngine3' aliases = ['cfengine3', 'cf3'] filenames = ['*.cf'] mimetypes = [] tokens = { 'root': [ (r'#.*?\n', Comment), (r'(body)(\s+)(\S+)(\s+)(control)', bygroups(Keyword, Text, Keyword, Text, Keyword)), (r'(body|bundle)(\s+)(\S+)(\s+)(\w+)(\()', bygroups(Keyword, Text, Keyword, Text, Name.Function, Punctuation), 'arglist'), (r'(body|bundle)(\s+)(\S+)(\s+)(\w+)', bygroups(Keyword, Text, Keyword, Text, Name.Function)), (r'(")([^"]+)(")(\s+)(string|slist|int|real)(\s*)(=>)(\s*)', bygroups(Punctuation, Name.Variable, Punctuation, Text, Keyword.Type, Text, Operator, Text)), (r'(\S+)(\s*)(=>)(\s*)', bygroups(Keyword.Reserved, Text, Operator, Text)), (r'"', String, 'string'), (r'(\w+)(\()', bygroups(Name.Function, Punctuation)), (r'([\w.!&|()]+)(::)', bygroups(Name.Class, Punctuation)), (r'(\w+)(:)', bygroups(Keyword.Declaration, Punctuation)), (r'@[{(][^)}]+[})]', Name.Variable), (r'[(){},;]', Punctuation), (r'=>', Operator), (r'->', Operator), (r'\d+\.\d+', Number.Float), (r'\d+', Number.Integer), (r'\w+', Name.Function), (r'\s+', Text), ], 'string': [ (r'\$[{(]', String.Interpol, 'interpol'), (r'\\.', String.Escape), (r'"', String, '#pop'), (r'\n', String), (r'.', String), ], 'interpol': [ (r'\$[{(]', String.Interpol, '#push'), (r'[})]', String.Interpol, '#pop'), (r'[^${()}]+', String.Interpol), ], 'arglist': [ (r'\)', Punctuation, '#pop'), (r',', Punctuation), (r'\w+', Name.Variable), (r'\s+', Text), ], } class ApacheConfLexer(RegexLexer): """ Lexer for configuration files following the Apache config file format. .. versionadded:: 0.6 """ name = 'ApacheConf' aliases = ['apacheconf', 'aconf', 'apache'] filenames = ['.htaccess', 'apache.conf', 'apache2.conf'] mimetypes = ['text/x-apacheconf'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'\s+', Text), (r'(#.*?)$', Comment), (r'(<[^\s>]+)(?:(\s+)(.*?))?(>)', bygroups(Name.Tag, Text, String, Name.Tag)), (r'([a-z]\w*)(\s+)', bygroups(Name.Builtin, Text), 'value'), (r'\.+', Text), ], 'value': [ (r'\\\n', Text), (r'$', Text, '#pop'), (r'\\', Text), (r'[^\S\n]+', Text), (r'\d+\.\d+\.\d+\.\d+(?:/\d+)?', Number), (r'\d+', Number), (r'/([a-z0-9][\w./-]+)', String.Other), (r'(on|off|none|any|all|double|email|dns|min|minimal|' r'os|productonly|full|emerg|alert|crit|error|warn|' r'notice|info|debug|registry|script|inetd|standalone|' r'user|group)\b', Keyword), (r'"([^"\\]*(?:\\.[^"\\]*)*)"', String.Double), (r'[^\s"\\]+', Text) ], } class SquidConfLexer(RegexLexer): """ Lexer for `squid `_ configuration files. .. 
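(Editor's illustrative addition, not part of the original docstring: typical directives this lexer highlights are lines such as ``acl localnet src 192.168.0.0/16`` and ``http_access allow localnet``, built from the keyword, option and ACL-type lists defined below.)

..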
versionadded:: 0.9 """ name = 'SquidConf' aliases = ['squidconf', 'squid.conf', 'squid'] filenames = ['squid.conf'] mimetypes = ['text/x-squidconf'] flags = re.IGNORECASE keywords = ( "access_log", "acl", "always_direct", "announce_host", "announce_period", "announce_port", "announce_to", "anonymize_headers", "append_domain", "as_whois_server", "auth_param_basic", "authenticate_children", "authenticate_program", "authenticate_ttl", "broken_posts", "buffered_logs", "cache_access_log", "cache_announce", "cache_dir", "cache_dns_program", "cache_effective_group", "cache_effective_user", "cache_host", "cache_host_acl", "cache_host_domain", "cache_log", "cache_mem", "cache_mem_high", "cache_mem_low", "cache_mgr", "cachemgr_passwd", "cache_peer", "cache_peer_access", "cahce_replacement_policy", "cache_stoplist", "cache_stoplist_pattern", "cache_store_log", "cache_swap", "cache_swap_high", "cache_swap_log", "cache_swap_low", "client_db", "client_lifetime", "client_netmask", "connect_timeout", "coredump_dir", "dead_peer_timeout", "debug_options", "delay_access", "delay_class", "delay_initial_bucket_level", "delay_parameters", "delay_pools", "deny_info", "dns_children", "dns_defnames", "dns_nameservers", "dns_testnames", "emulate_httpd_log", "err_html_text", "fake_user_agent", "firewall_ip", "forwarded_for", "forward_snmpd_port", "fqdncache_size", "ftpget_options", "ftpget_program", "ftp_list_width", "ftp_passive", "ftp_user", "half_closed_clients", "header_access", "header_replace", "hierarchy_stoplist", "high_response_time_warning", "high_page_fault_warning", "hosts_file", "htcp_port", "http_access", "http_anonymizer", "httpd_accel", "httpd_accel_host", "httpd_accel_port", "httpd_accel_uses_host_header", "httpd_accel_with_proxy", "http_port", "http_reply_access", "icp_access", "icp_hit_stale", "icp_port", "icp_query_timeout", "ident_lookup", "ident_lookup_access", "ident_timeout", "incoming_http_average", "incoming_icp_average", "inside_firewall", "ipcache_high", "ipcache_low", "ipcache_size", "local_domain", "local_ip", "logfile_rotate", "log_fqdn", "log_icp_queries", "log_mime_hdrs", "maximum_object_size", "maximum_single_addr_tries", "mcast_groups", "mcast_icp_query_timeout", "mcast_miss_addr", "mcast_miss_encode_key", "mcast_miss_port", "memory_pools", "memory_pools_limit", "memory_replacement_policy", "mime_table", "min_http_poll_cnt", "min_icp_poll_cnt", "minimum_direct_hops", "minimum_object_size", "minimum_retry_timeout", "miss_access", "negative_dns_ttl", "negative_ttl", "neighbor_timeout", "neighbor_type_domain", "netdb_high", "netdb_low", "netdb_ping_period", "netdb_ping_rate", "never_direct", "no_cache", "passthrough_proxy", "pconn_timeout", "pid_filename", "pinger_program", "positive_dns_ttl", "prefer_direct", "proxy_auth", "proxy_auth_realm", "query_icmp", "quick_abort", "quick_abort_max", "quick_abort_min", "quick_abort_pct", "range_offset_limit", "read_timeout", "redirect_children", "redirect_program", "redirect_rewrites_host_header", "reference_age", "refresh_pattern", "reload_into_ims", "request_body_max_size", "request_size", "request_timeout", "shutdown_lifetime", "single_parent_bypass", "siteselect_timeout", "snmp_access", "snmp_incoming_address", "snmp_port", "source_ping", "ssl_proxy", "store_avg_object_size", "store_objects_per_bucket", "strip_query_terms", "swap_level1_dirs", "swap_level2_dirs", "tcp_incoming_address", "tcp_outgoing_address", "tcp_recv_bufsize", "test_reachability", "udp_hit_obj", "udp_hit_obj_size", "udp_incoming_address", "udp_outgoing_address", 
"unique_hostname", "unlinkd_program", "uri_whitespace", "useragent_log", "visible_hostname", "wais_relay", "wais_relay_host", "wais_relay_port", ) opts = ( "proxy-only", "weight", "ttl", "no-query", "default", "round-robin", "multicast-responder", "on", "off", "all", "deny", "allow", "via", "parent", "no-digest", "heap", "lru", "realm", "children", "q1", "q2", "credentialsttl", "none", "disable", "offline_toggle", "diskd", ) actions = ( "shutdown", "info", "parameter", "server_list", "client_list", r'squid.conf', ) actions_stats = ( "objects", "vm_objects", "utilization", "ipcache", "fqdncache", "dns", "redirector", "io", "reply_headers", "filedescriptors", "netdb", ) actions_log = ("status", "enable", "disable", "clear") acls = ( "url_regex", "urlpath_regex", "referer_regex", "port", "proto", "req_mime_type", "rep_mime_type", "method", "browser", "user", "src", "dst", "time", "dstdomain", "ident", "snmp_community", ) ip_re = ( r'(?:(?:(?:[3-9]\d?|2(?:5[0-5]|[0-4]?\d)?|1\d{0,2}|0x0*[0-9a-f]{1,2}|' r'0+[1-3]?[0-7]{0,2})(?:\.(?:[3-9]\d?|2(?:5[0-5]|[0-4]?\d)?|1\d{0,2}|' r'0x0*[0-9a-f]{1,2}|0+[1-3]?[0-7]{0,2})){3})|(?!.*::.*::)(?:(?!:)|' r':(?=:))(?:[0-9a-f]{0,4}(?:(?<=::)|(?`_ configuration files. .. versionadded:: 0.11 """ name = 'Nginx configuration file' aliases = ['nginx'] filenames = ['nginx.conf'] mimetypes = ['text/x-nginx-conf'] tokens = { 'root': [ (r'(include)(\s+)([^\s;]+)', bygroups(Keyword, Text, Name)), (r'[^\s;#]+', Keyword, 'stmt'), include('base'), ], 'block': [ (r'\}', Punctuation, '#pop:2'), (r'[^\s;#]+', Keyword.Namespace, 'stmt'), include('base'), ], 'stmt': [ (r'\{', Punctuation, 'block'), (r';', Punctuation, '#pop'), include('base'), ], 'base': [ (r'#.*\n', Comment.Single), (r'on|off', Name.Constant), (r'\$[^\s;#()]+', Name.Variable), (r'([a-z0-9.-]+)(:)([0-9]+)', bygroups(Name, Punctuation, Number.Integer)), (r'[a-z-]+/[a-z-+]+', String), # mimetype # (r'[a-zA-Z._-]+', Keyword), (r'[0-9]+[km]?\b', Number.Integer), (r'(~)(\s*)([^\s{]+)', bygroups(Punctuation, Text, String.Regex)), (r'[:=~]', Punctuation), (r'[^\s;#{}$]+', String), # catch all (r'/[^\s;#]*', Name), # pathname (r'\s+', Text), (r'[$;]', Text), # leftover characters ], } class LighttpdConfLexer(RegexLexer): """ Lexer for `Lighttpd `_ configuration files. .. versionadded:: 0.11 """ name = 'Lighttpd configuration file' aliases = ['lighty', 'lighttpd'] filenames = [] mimetypes = ['text/x-lighttpd-conf'] tokens = { 'root': [ (r'#.*\n', Comment.Single), (r'/\S*', Name), # pathname (r'[a-zA-Z._-]+', Keyword), (r'\d+\.\d+\.\d+\.\d+(?:/\d+)?', Number), (r'[0-9]+', Number), (r'=>|=~|\+=|==|=|\+', Operator), (r'\$[A-Z]+', Name.Builtin), (r'[(){}\[\],]', Punctuation), (r'"([^"\\]*(?:\\.[^"\\]*)*)"', String.Double), (r'\s+', Text), ], } class DockerLexer(RegexLexer): """ Lexer for `Docker `_ configuration files. .. 
versionadded:: 2.0 """ name = 'Docker' aliases = ['docker', 'dockerfile'] filenames = ['Dockerfile', '*.docker'] mimetypes = ['text/x-dockerfile-config'] _keywords = (r'(?:FROM|MAINTAINER|EXPOSE|WORKDIR|USER|STOPSIGNAL)') _bash_keywords = (r'(?:RUN|CMD|ENTRYPOINT|ENV|ARG|LABEL|ADD|COPY)') _lb = r'(?:\s*\\?\s*)' # dockerfile line break regex flags = re.IGNORECASE | re.MULTILINE tokens = { 'root': [ (r'#.*', Comment), (r'(ONBUILD)(%s)' % (_lb,), bygroups(Keyword, using(BashLexer))), (r'(HEALTHCHECK)((%s--\w+=\w+%s)*)' % (_lb, _lb), bygroups(Keyword, using(BashLexer))), (r'(VOLUME|ENTRYPOINT|CMD|SHELL)(%s)(\[.*?\])' % (_lb,), bygroups(Keyword, using(BashLexer), using(JsonLexer))), (r'(LABEL|ENV|ARG)((%s\w+=\w+%s)*)' % (_lb, _lb), bygroups(Keyword, using(BashLexer))), (r'(%s|VOLUME)\b(.*)' % (_keywords), bygroups(Keyword, String)), (r'(%s)' % (_bash_keywords,), Keyword), (r'(.*\\\n)*.+', using(BashLexer)), ] } class TerraformLexer(RegexLexer): """ Lexer for `terraformi .tf files `_. .. versionadded:: 2.1 """ name = 'Terraform' aliases = ['terraform', 'tf'] filenames = ['*.tf'] mimetypes = ['application/x-tf', 'application/x-terraform'] tokens = { 'root': [ include('string'), include('punctuation'), include('curly'), include('basic'), include('whitespace'), (r'[0-9]+', Number), ], 'basic': [ (words(('true', 'false'), prefix=r'\b', suffix=r'\b'), Keyword.Type), (r'\s*/\*', Comment.Multiline, 'comment'), (r'\s*#.*\n', Comment.Single), (r'(.*?)(\s*)(=)', bygroups(Name.Attribute, Text, Operator)), (words(('variable', 'resource', 'provider', 'provisioner', 'module'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved, 'function'), (words(('ingress', 'egress', 'listener', 'default', 'connection', 'alias'), prefix=r'\b', suffix=r'\b'), Keyword.Declaration), (r'\$\{', String.Interpol, 'var_builtin'), ], 'function': [ (r'(\s+)(".*")(\s+)', bygroups(Text, String, Text)), include('punctuation'), include('curly'), ], 'var_builtin': [ (r'\$\{', String.Interpol, '#push'), (words(('concat', 'file', 'join', 'lookup', 'element'), prefix=r'\b', suffix=r'\b'), Name.Builtin), include('string'), include('punctuation'), (r'\s+', Text), (r'\}', String.Interpol, '#pop'), ], 'string': [ (r'(".*")', bygroups(String.Double)), ], 'punctuation': [ (r'[\[\](),.]', Punctuation), ], # Keep this seperate from punctuation - we sometimes want to use different # Tokens for { } 'curly': [ (r'\{', Text.Punctuation), (r'\}', Text.Punctuation), ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'whitespace': [ (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), ], } class TermcapLexer(RegexLexer): """ Lexer for termcap database source. This is very simple and minimal. .. 
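# --- usage sketch (illustrative; not part of the distributed module) -------
# The DockerLexer above registers the filenames 'Dockerfile' and '*.docker',
# so filename-based lookup should resolve to it.  The Dockerfile content
# below is made up; only an installed Pygments is assumed.
from pygments.lexers import guess_lexer_for_filename

_dockerfile = (
    'FROM python:3.7-slim\n'
    'COPY . /app\n'
    'RUN pip install -r /app/requirements.txt\n'
    'CMD ["python", "/app/main.py"]\n'
)
print(guess_lexer_for_filename('Dockerfile', _dockerfile).name)  # expected: 'Docker'
# ---------------------------------------------------------------------------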
versionadded:: 2.1 """ name = 'Termcap' aliases = ['termcap'] filenames = ['termcap', 'termcap.src'] mimetypes = [] # NOTE: # * multiline with trailing backslash # * separator is ':' # * to embed colon as data, we must use \072 # * space after separator is not allowed (mayve) tokens = { 'root': [ (r'^#.*$', Comment), (r'^[^\s#:|]+', Name.Tag, 'names'), ], 'names': [ (r'\n', Text, '#pop'), (r':', Punctuation, 'defs'), (r'\|', Punctuation), (r'[^:|]+', Name.Attribute), ], 'defs': [ (r'\\\n[ \t]*', Text), (r'\n[ \t]*', Text, '#pop:2'), (r'(#)([0-9]+)', bygroups(Operator, Number)), (r'=', Operator, 'data'), (r':', Punctuation), (r'[^\s:=#]+', Name.Class), ], 'data': [ (r'\\072', Literal), (r':', Punctuation, '#pop'), (r'[^:\\]+', Literal), # for performance (r'.', Literal), ], } class TerminfoLexer(RegexLexer): """ Lexer for terminfo database source. This is very simple and minimal. .. versionadded:: 2.1 """ name = 'Terminfo' aliases = ['terminfo'] filenames = ['terminfo', 'terminfo.src'] mimetypes = [] # NOTE: # * multiline with leading whitespace # * separator is ',' # * to embed comma as data, we can use \, # * space after separator is allowed tokens = { 'root': [ (r'^#.*$', Comment), (r'^[^\s#,|]+', Name.Tag, 'names'), ], 'names': [ (r'\n', Text, '#pop'), (r'(,)([ \t]*)', bygroups(Punctuation, Text), 'defs'), (r'\|', Punctuation), (r'[^,|]+', Name.Attribute), ], 'defs': [ (r'\n[ \t]+', Text), (r'\n', Text, '#pop:2'), (r'(#)([0-9]+)', bygroups(Operator, Number)), (r'=', Operator, 'data'), (r'(,)([ \t]*)', bygroups(Punctuation, Text)), (r'[^\s,=#]+', Name.Class), ], 'data': [ (r'\\[,\\]', Literal), (r'(,)([ \t]*)', bygroups(Punctuation, Text), '#pop'), (r'[^\\,]+', Literal), # for performance (r'.', Literal), ], } class PkgConfigLexer(RegexLexer): """ Lexer for `pkg-config `_ (see also `manual page `_). .. versionadded:: 2.1 """ name = 'PkgConfig' aliases = ['pkgconfig'] filenames = ['*.pc'] mimetypes = [] tokens = { 'root': [ (r'#.*$', Comment.Single), # variable definitions (r'^(\w+)(=)', bygroups(Name.Attribute, Operator)), # keyword lines (r'^([\w.]+)(:)', bygroups(Name.Tag, Punctuation), 'spvalue'), # variable references include('interp'), # fallback (r'[^${}#=:\n.]+', Text), (r'.', Text), ], 'interp': [ # you can escape literal "$" as "$$" (r'\$\$', Text), # variable references (r'\$\{', String.Interpol, 'curly'), ], 'curly': [ (r'\}', String.Interpol, '#pop'), (r'\w+', Name.Attribute), ], 'spvalue': [ include('interp'), (r'#.*$', Comment.Single, '#pop'), (r'\n', Text, '#pop'), # fallback (r'[^${}#\n]+', Text), (r'.', Text), ], } class PacmanConfLexer(RegexLexer): """ Lexer for `pacman.conf `_. Actually, IniLexer works almost fine for this format, but it yield error token. It is because pacman.conf has a form without assignment like: UseSyslog Color TotalDownload CheckSpace VerbosePkgLists These are flags to switch on. .. versionadded:: 2.1 """ name = 'PacmanConf' aliases = ['pacmanconf'] filenames = ['pacman.conf'] mimetypes = [] tokens = { 'root': [ # comment (r'#.*$', Comment.Single), # section header (r'^\s*\[.*?\]\s*$', Keyword), # variable definitions # (Leading space is allowed...) 
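# --- usage sketch (illustrative; not part of the distributed module) -------
# The PkgConfigLexer defined above routes '${var}' references through its
# 'interp'/'curly' states, so the '${' and '}' delimiters should come back
# as String.Interpol and the enclosed variable name as Name.Attribute.
# The .pc fragment below is made up; only an installed Pygments is assumed.
from pygments.lexers import PkgConfigLexer
from pygments.token import String

_pc_sample = (
    "prefix=/usr\n"
    "libdir=${prefix}/lib\n"
    "Name: foo\n"
    "Libs: -L${libdir} -lfoo\n"
)
print([v for t, v in PkgConfigLexer().get_tokens(_pc_sample)
       if t is String.Interpol])   # expected: ['${', '}', '${', '}']
# ---------------------------------------------------------------------------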
(r'(\w+)(\s*)(=)', bygroups(Name.Attribute, Text, Operator)), # flags to on (r'^(\s*)(\w+)(\s*)$', bygroups(Text, Name.Attribute, Text)), # built-in special values (words(( '$repo', # repository '$arch', # architecture '%o', # outfile '%u', # url ), suffix=r'\b'), Name.Variable), # fallback (r'.', Text), ], } Pygments-2.3.1/pygments/lexers/archetype.py0000644000175000017500000002560013376260540020116 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.archetype ~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for Archetype-related syntaxes, including: - ODIN syntax - ADL syntax - cADL sub-syntax of ADL For uses of this syntax, see the openEHR archetypes Contributed by Thomas Beale , . :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, bygroups, using, default from pygments.token import Text, Comment, Name, Literal, Number, String, \ Punctuation, Keyword, Operator, Generic __all__ = ['OdinLexer', 'CadlLexer', 'AdlLexer'] class AtomsLexer(RegexLexer): """ Lexer for Values used in ADL and ODIN. .. versionadded:: 2.1 """ tokens = { # ----- pseudo-states for inclusion ----- 'whitespace': [ (r'\n', Text), (r'\s+', Text), (r'[ \t]*--.*$', Comment), ], 'archetype_id': [ (r'[ \t]*([a-zA-Z]\w+(\.[a-zA-Z]\w+)*::)?[a-zA-Z]\w+(-[a-zA-Z]\w+){2}' r'\.\w+[\w-]*\.v\d+(\.\d+){,2}((-[a-z]+)(\.\d+)?)?', Name.Decorator), ], 'date_constraints': [ # ISO 8601-based date/time constraints (r'[Xx?YyMmDdHhSs\d]{2,4}([:-][Xx?YyMmDdHhSs\d]{2}){2}', Literal.Date), # ISO 8601-based duration constraints + optional trailing slash (r'(P[YyMmWwDd]+(T[HhMmSs]+)?|PT[HhMmSs]+)/?', Literal.Date), ], 'ordered_values': [ # ISO 8601 date with optional 'T' ligature (r'\d{4}-\d{2}-\d{2}T?', Literal.Date), # ISO 8601 time (r'\d{2}:\d{2}:\d{2}(\.\d+)?([+-]\d{4}|Z)?', Literal.Date), # ISO 8601 duration (r'P((\d*(\.\d+)?[YyMmWwDd]){1,3}(T(\d*(\.\d+)?[HhMmSs]){,3})?|' r'T(\d*(\.\d+)?[HhMmSs]){,3})', Literal.Date), (r'[+-]?(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+', Number.Float), (r'[+-]?(\d+)*\.\d+%?', Number.Float), (r'0x[0-9a-fA-F]+', Number.Hex), (r'[+-]?\d+%?', Number.Integer), ], 'values': [ include('ordered_values'), (r'([Tt]rue|[Ff]alse)', Literal), (r'"', String, 'string'), (r"'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'[a-z][a-z0-9+.-]*:', Literal, 'uri'), # term code (r'(\[)(\w[\w-]*(?:\([^)\n]+\))?)(::)(\w[\w-]*)(\])', bygroups(Punctuation, Name.Decorator, Punctuation, Name.Decorator, Punctuation)), (r'\|', Punctuation, 'interval'), # list continuation (r'\.\.\.', Punctuation), ], 'constraint_values': [ (r'(\[)(\w[\w-]*(?:\([^)\n]+\))?)(::)', bygroups(Punctuation, Name.Decorator, Punctuation), 'adl14_code_constraint'), # ADL 1.4 ordinal constraint (r'(\d*)(\|)(\[\w[\w-]*::\w[\w-]*\])((?:[,;])?)', bygroups(Number, Punctuation, Name.Decorator, Punctuation)), include('date_constraints'), include('values'), ], # ----- real states ----- 'string': [ ('"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|' r'u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})', String.Escape), # all other characters (r'[^\\"]+', String), # stray backslash (r'\\', String), ], 'uri': [ # effective URI terminators (r'[,>\s]', Punctuation, '#pop'), (r'[^>\s,]+', Literal), ], 'interval': [ (r'\|', Punctuation, '#pop'), include('ordered_values'), (r'\.\.', Punctuation), (r'[<>=] *', Punctuation), # handle +/- (r'\+/-', Punctuation), (r'\s+', Text), ], 'any_code': [ include('archetype_id'), # if it is a code (r'[a-z_]\w*[0-9.]+(@[^\]]+)?', 
Name.Decorator), # if it is tuple with attribute names (r'[a-z_]\w*', Name.Class), # if it is an integer, i.e. Xpath child index (r'[0-9]+', Text), (r'\|', Punctuation, 'code_rubric'), (r'\]', Punctuation, '#pop'), # handle use_archetype statement (r'\s*,\s*', Punctuation), ], 'code_rubric': [ (r'\|', Punctuation, '#pop'), (r'[^|]+', String), ], 'adl14_code_constraint': [ (r'\]', Punctuation, '#pop'), (r'\|', Punctuation, 'code_rubric'), (r'(\w[\w-]*)([;,]?)', bygroups(Name.Decorator, Punctuation)), include('whitespace'), ], } class OdinLexer(AtomsLexer): """ Lexer for ODIN syntax. .. versionadded:: 2.1 """ name = 'ODIN' aliases = ['odin'] filenames = ['*.odin'] mimetypes = ['text/odin'] tokens = { 'path': [ (r'>', Punctuation, '#pop'), # attribute name (r'[a-z_]\w*', Name.Class), (r'/', Punctuation), (r'\[', Punctuation, 'key'), (r'\s*,\s*', Punctuation, '#pop'), (r'\s+', Text, '#pop'), ], 'key': [ include('values'), (r'\]', Punctuation, '#pop'), ], 'type_cast': [ (r'\)', Punctuation, '#pop'), (r'[^)]+', Name.Class), ], 'root': [ include('whitespace'), (r'([Tt]rue|[Ff]alse)', Literal), include('values'), # x-ref path (r'/', Punctuation, 'path'), # x-ref path starting with key (r'\[', Punctuation, 'key'), # attribute name (r'[a-z_]\w*', Name.Class), (r'=', Operator), (r'\(', Punctuation, 'type_cast'), (r',', Punctuation), (r'<', Punctuation), (r'>', Punctuation), (r';', Punctuation), ], } class CadlLexer(AtomsLexer): """ Lexer for cADL syntax. .. versionadded:: 2.1 """ name = 'cADL' aliases = ['cadl'] filenames = ['*.cadl'] tokens = { 'path': [ # attribute name (r'[a-z_]\w*', Name.Class), (r'/', Punctuation), (r'\[', Punctuation, 'any_code'), (r'\s+', Punctuation, '#pop'), ], 'root': [ include('whitespace'), (r'(cardinality|existence|occurrences|group|include|exclude|' r'allow_archetype|use_archetype|use_node)\W', Keyword.Type), (r'(and|or|not|there_exists|xor|implies|for_all)\W', Keyword.Type), (r'(after|before|closed)\W', Keyword.Type), (r'(not)\W', Operator), (r'(matches|is_in)\W', Operator), # is_in / not is_in char (u'(\u2208|\u2209)', Operator), # there_exists / not there_exists / for_all / and / or (u'(\u2203|\u2204|\u2200|\u2227|\u2228|\u22BB|\223C)', Operator), # regex in slot or as string constraint (r'(\{)(\s*/[^}]+/\s*)(\})', bygroups(Punctuation, String.Regex, Punctuation)), # regex in slot or as string constraint (r'(\{)(\s*\^[^}]+\^\s*)(\})', bygroups(Punctuation, String.Regex, Punctuation)), (r'/', Punctuation, 'path'), # for cardinality etc (r'(\{)((?:\d+\.\.)?(?:\d+|\*))' r'((?:\s*;\s*(?:ordered|unordered|unique)){,2})(\})', bygroups(Punctuation, Number, Number, Punctuation)), # [{ is start of a tuple value (r'\[\{', Punctuation), (r'\}\]', Punctuation), (r'\{', Punctuation), (r'\}', Punctuation), include('constraint_values'), # type name (r'[A-Z]\w+(<[A-Z]\w+([A-Za-z_<>]*)>)?', Name.Class), # attribute name (r'[a-z_]\w*', Name.Class), (r'\[', Punctuation, 'any_code'), (r'(~|//|\\\\|\+|-|/|\*|\^|!=|=|<=|>=|<|>]?)', Operator), (r'\(', Punctuation), (r'\)', Punctuation), # for lists of values (r',', Punctuation), (r'"', String, 'string'), # for assumed value (r';', Punctuation), ], } class AdlLexer(AtomsLexer): """ Lexer for ADL syntax. .. 
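# --- usage sketch (illustrative; not part of the distributed module) -------
# Highlighting a tiny, made-up ODIN fragment with the OdinLexer defined
# above; the snippet is not a real openEHR archetype, and only an installed
# Pygments is assumed.
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import OdinLexer

_odin_sample = (
    'description = <\n'
    '    original_author = <\n'
    '        ["name"] = <"Thomas Beale">\n'
    '    >\n'
    '>\n'
)
print(highlight(_odin_sample, OdinLexer(), TerminalFormatter()))
# ---------------------------------------------------------------------------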
versionadded:: 2.1 """ name = 'ADL' aliases = ['adl'] filenames = ['*.adl', '*.adls', '*.adlf', '*.adlx'] tokens = { 'whitespace': [ # blank line ends (r'\s*\n', Text), # comment-only line (r'^[ \t]*--.*$', Comment), ], 'odin_section': [ # repeating the following two rules from the root state enable multi-line # strings that start in the first column to be dealt with (r'^(language|description|ontology|terminology|annotations|' r'component_terminologies|revision_history)[ \t]*\n', Generic.Heading), (r'^(definition)[ \t]*\n', Generic.Heading, 'cadl_section'), (r'^([ \t]*|[ \t]+.*)\n', using(OdinLexer)), (r'^([^"]*")(>[ \t]*\n)', bygroups(String, Punctuation)), # template overlay delimiter (r'^----------*\n', Text, '#pop'), (r'^.*\n', String), default('#pop'), ], 'cadl_section': [ (r'^([ \t]*|[ \t]+.*)\n', using(CadlLexer)), default('#pop'), ], 'rules_section': [ (r'^[ \t]+.*\n', using(CadlLexer)), default('#pop'), ], 'metadata': [ (r'\)', Punctuation, '#pop'), (r';', Punctuation), (r'([Tt]rue|[Ff]alse)', Literal), # numbers and version ids (r'\d+(\.\d+)*', Literal), # Guids (r'(\d|[a-fA-F])+(-(\d|[a-fA-F])+){3,}', Literal), (r'\w+', Name.Class), (r'"', String, 'string'), (r'=', Operator), (r'[ \t]+', Text), default('#pop'), ], 'root': [ (r'^(archetype|template_overlay|operational_template|template|' r'speciali[sz]e)', Generic.Heading), (r'^(language|description|ontology|terminology|annotations|' r'component_terminologies|revision_history)[ \t]*\n', Generic.Heading, 'odin_section'), (r'^(definition)[ \t]*\n', Generic.Heading, 'cadl_section'), (r'^(rules)[ \t]*\n', Generic.Heading, 'rules_section'), include('archetype_id'), (r'[ \t]*\(', Punctuation, 'metadata'), include('whitespace'), ], } Pygments-2.3.1/pygments/lexers/dylan.py0000644000175000017500000002426613402534107017241 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.dylan ~~~~~~~~~~~~~~~~~~~~~ Lexers for the Dylan language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, bygroups, do_insertions, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Literal __all__ = ['DylanLexer', 'DylanConsoleLexer', 'DylanLidLexer'] class DylanLexer(RegexLexer): """ For the `Dylan `_ language. .. 
versionadded:: 0.7 """ name = 'Dylan' aliases = ['dylan'] filenames = ['*.dylan', '*.dyl', '*.intr'] mimetypes = ['text/x-dylan'] flags = re.IGNORECASE builtins = set(( 'subclass', 'abstract', 'block', 'concrete', 'constant', 'class', 'compiler-open', 'compiler-sideways', 'domain', 'dynamic', 'each-subclass', 'exception', 'exclude', 'function', 'generic', 'handler', 'inherited', 'inline', 'inline-only', 'instance', 'interface', 'import', 'keyword', 'library', 'macro', 'method', 'module', 'open', 'primary', 'required', 'sealed', 'sideways', 'singleton', 'slot', 'thread', 'variable', 'virtual')) keywords = set(( 'above', 'afterwards', 'begin', 'below', 'by', 'case', 'cleanup', 'create', 'define', 'else', 'elseif', 'end', 'export', 'finally', 'for', 'from', 'if', 'in', 'let', 'local', 'otherwise', 'rename', 'select', 'signal', 'then', 'to', 'unless', 'until', 'use', 'when', 'while')) operators = set(( '~', '+', '-', '*', '|', '^', '=', '==', '~=', '~==', '<', '<=', '>', '>=', '&', '|')) functions = set(( 'abort', 'abs', 'add', 'add!', 'add-method', 'add-new', 'add-new!', 'all-superclasses', 'always', 'any?', 'applicable-method?', 'apply', 'aref', 'aref-setter', 'as', 'as-lowercase', 'as-lowercase!', 'as-uppercase', 'as-uppercase!', 'ash', 'backward-iteration-protocol', 'break', 'ceiling', 'ceiling/', 'cerror', 'check-type', 'choose', 'choose-by', 'complement', 'compose', 'concatenate', 'concatenate-as', 'condition-format-arguments', 'condition-format-string', 'conjoin', 'copy-sequence', 'curry', 'default-handler', 'dimension', 'dimensions', 'direct-subclasses', 'direct-superclasses', 'disjoin', 'do', 'do-handlers', 'element', 'element-setter', 'empty?', 'error', 'even?', 'every?', 'false-or', 'fill!', 'find-key', 'find-method', 'first', 'first-setter', 'floor', 'floor/', 'forward-iteration-protocol', 'function-arguments', 'function-return-values', 'function-specializers', 'gcd', 'generic-function-mandatory-keywords', 'generic-function-methods', 'head', 'head-setter', 'identity', 'initialize', 'instance?', 'integral?', 'intersection', 'key-sequence', 'key-test', 'last', 'last-setter', 'lcm', 'limited', 'list', 'logand', 'logbit?', 'logior', 'lognot', 'logxor', 'make', 'map', 'map-as', 'map-into', 'max', 'member?', 'merge-hash-codes', 'min', 'modulo', 'negative', 'negative?', 'next-method', 'object-class', 'object-hash', 'odd?', 'one-of', 'pair', 'pop', 'pop-last', 'positive?', 'push', 'push-last', 'range', 'rank', 'rcurry', 'reduce', 'reduce1', 'remainder', 'remove', 'remove!', 'remove-duplicates', 'remove-duplicates!', 'remove-key!', 'remove-method', 'replace-elements!', 'replace-subsequence!', 'restart-query', 'return-allowed?', 'return-description', 'return-query', 'reverse', 'reverse!', 'round', 'round/', 'row-major-index', 'second', 'second-setter', 'shallow-copy', 'signal', 'singleton', 'size', 'size-setter', 'slot-initialized?', 'sort', 'sort!', 'sorted-applicable-methods', 'subsequence-position', 'subtype?', 'table-protocol', 'tail', 'tail-setter', 'third', 'third-setter', 'truncate', 'truncate/', 'type-error-expected-type', 'type-error-value', 'type-for-copy', 'type-union', 'union', 'values', 'vector', 'zero?')) valid_name = '\\\\?[\\w!&*<>|^$%@\\-+~?/=]+' def get_tokens_unprocessed(self, text): for index, token, value in RegexLexer.get_tokens_unprocessed(self, text): if token is Name: lowercase_value = value.lower() if lowercase_value in self.builtins: yield index, Name.Builtin, value continue if lowercase_value in self.keywords: yield index, Keyword, value continue if 
lowercase_value in self.functions: yield index, Name.Builtin, value continue if lowercase_value in self.operators: yield index, Operator, value continue yield index, token, value tokens = { 'root': [ # Whitespace (r'\s+', Text), # single line comment (r'//.*?\n', Comment.Single), # lid header (r'([a-z0-9-]+)(:)([ \t]*)(.*(?:\n[ \t].+)*)', bygroups(Name.Attribute, Operator, Text, String)), default('code') # no header match, switch to code ], 'code': [ # Whitespace (r'\s+', Text), # single line comment (r'//.*?\n', Comment.Single), # multi-line comment (r'/\*', Comment.Multiline, 'comment'), # strings and characters (r'"', String, 'string'), (r"'(\\.|\\[0-7]{1,3}|\\x[a-f0-9]{1,2}|[^\\\'\n])'", String.Char), # binary integer (r'#b[01]+', Number.Bin), # octal integer (r'#o[0-7]+', Number.Oct), # floating point (r'[-+]?(\d*\.\d+(e[-+]?\d+)?|\d+(\.\d*)?e[-+]?\d+)', Number.Float), # decimal integer (r'[-+]?\d+', Number.Integer), # hex integer (r'#x[0-9a-f]+', Number.Hex), # Macro parameters (r'(\?' + valid_name + ')(:)' r'(token|name|variable|expression|body|case-body|\*)', bygroups(Name.Tag, Operator, Name.Builtin)), (r'(\?)(:)(token|name|variable|expression|body|case-body|\*)', bygroups(Name.Tag, Operator, Name.Builtin)), (r'\?' + valid_name, Name.Tag), # Punctuation (r'(=>|::|#\(|#\[|##|\?\?|\?=|\?|[(){}\[\],.;])', Punctuation), # Most operators are picked up as names and then re-flagged. # This one isn't valid in a name though, so we pick it up now. (r':=', Operator), # Pick up #t / #f before we match other stuff with #. (r'#[tf]', Literal), # #"foo" style keywords (r'#"', String.Symbol, 'keyword'), # #rest, #key, #all-keys, etc. (r'#[a-z0-9-]+', Keyword), # required-init-keyword: style keywords. (valid_name + ':', Keyword), # class names ('<' + valid_name + '>', Name.Class), # define variable forms. (r'\*' + valid_name + r'\*', Name.Variable.Global), # define constant forms. (r'\$' + valid_name, Name.Constant), # everything else. We re-flag some of these in the method above. (valid_name, Name), ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'keyword': [ (r'"', String.Symbol, '#pop'), (r'[^\\"]+', String.Symbol), # all other characters ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-f0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation (r'\\', String), # stray backslash ] } class DylanLidLexer(RegexLexer): """ For Dylan LID (Library Interchange Definition) files. .. versionadded:: 1.6 """ name = 'DylanLID' aliases = ['dylan-lid', 'lid'] filenames = ['*.lid', '*.hdp'] mimetypes = ['text/x-dylan-lid'] flags = re.IGNORECASE tokens = { 'root': [ # Whitespace (r'\s+', Text), # single line comment (r'//.*?\n', Comment.Single), # lid header (r'(.*?)(:)([ \t]*)(.*(?:\n[ \t].+)*)', bygroups(Name.Attribute, Operator, Text, String)), ] } class DylanConsoleLexer(Lexer): """ For Dylan interactive console output like: .. sourcecode:: dylan-console ? let a = 1; => 1 ? a => 1 This is based on a copy of the RubyConsoleLexer. .. 
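# --- usage sketch (illustrative; not part of the distributed module) -------
# DylanLexer.get_tokens_unprocessed() above re-flags plain Name tokens whose
# lowercased text appears in the builtins/keywords/functions/operators sets,
# so words such as 'define' and 'end' should come back as Keyword even
# though the regex grammar only matched them as names.  The snippet below is
# made up; only an installed Pygments is assumed.
from pygments.lexers import DylanLexer
from pygments.token import Keyword

_dylan_sample = "define method double (x) 2 * x end;\n"
print([v for t, v in DylanLexer().get_tokens(_dylan_sample)
       if t in Keyword])   # expected to include 'define' and 'end'
# ---------------------------------------------------------------------------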
versionadded:: 1.6 """ name = 'Dylan session' aliases = ['dylan-console', 'dylan-repl'] filenames = ['*.dylan-console'] mimetypes = ['text/x-dylan-console'] _line_re = re.compile('.*?\n') _prompt_re = re.compile(r'\?| ') def get_tokens_unprocessed(self, text): dylexer = DylanLexer(**self.options) curcode = '' insertions = [] for match in self._line_re.finditer(text): line = match.group() m = self._prompt_re.match(line) if m is not None: end = m.end() insertions.append((len(curcode), [(0, Generic.Prompt, line[:end])])) curcode += line[end:] else: if curcode: for item in do_insertions(insertions, dylexer.get_tokens_unprocessed(curcode)): yield item curcode = '' insertions = [] yield match.start(), Generic.Output, line if curcode: for item in do_insertions(insertions, dylexer.get_tokens_unprocessed(curcode)): yield item Pygments-2.3.1/pygments/lexers/julia.py0000644000175000017500000003341113402534107017226 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.julia ~~~~~~~~~~~~~~~~~~~~~ Lexers for the Julia language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, bygroups, do_insertions, \ words, include from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic from pygments.util import shebang_matches, unirange __all__ = ['JuliaLexer', 'JuliaConsoleLexer'] allowed_variable = ( u'(?:[a-zA-Z_\u00A1-\uffff]|%s)(?:[a-zA-Z_0-9\u00A1-\uffff]|%s)*!*' % ((unirange(0x10000, 0x10ffff),) * 2)) class JuliaLexer(RegexLexer): """ For `Julia `_ source code. .. versionadded:: 1.6 """ name = 'Julia' aliases = ['julia', 'jl'] filenames = ['*.jl'] mimetypes = ['text/x-julia', 'application/x-julia'] flags = re.MULTILINE | re.UNICODE tokens = { 'root': [ (r'\n', Text), (r'[^\S\n]+', Text), (r'#=', Comment.Multiline, "blockcomment"), (r'#.*$', Comment), (r'[\[\]{}(),;]', Punctuation), # keywords (r'in\b', Keyword.Pseudo), (r'(true|false)\b', Keyword.Constant), (r'(local|global|const)\b', Keyword.Declaration), (words([ 'function', 'type', 'typealias', 'abstract', 'immutable', 'baremodule', 'begin', 'bitstype', 'break', 'catch', 'ccall', 'continue', 'do', 'else', 'elseif', 'end', 'export', 'finally', 'for', 'if', 'import', 'importall', 'let', 'macro', 'module', 'quote', 'return', 'try', 'using', 'while'], suffix=r'\b'), Keyword), # NOTE # Patterns below work only for definition sites and thus hardly reliable. 
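# --- usage sketch (illustrative; not part of the distributed module) -------
# The DylanConsoleLexer defined above splits each prompt line into a
# Generic.Prompt token plus source text that is buffered and re-lexed with
# DylanLexer through do_insertions(); lines without a prompt become
# Generic.Output.  The transcript below is made up; only an installed
# Pygments is assumed.
from pygments.lexers import DylanConsoleLexer
from pygments.token import Generic

_session = "? let a = 1;\n=> 1\n? a + 1\n=> 2\n"
for _tok, _val in DylanConsoleLexer().get_tokens(_session):
    if _tok in Generic:
        print(_tok, repr(_val))
# ---------------------------------------------------------------------------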
# # functions # (r'(function)(\s+)(' + allowed_variable + ')', # bygroups(Keyword, Text, Name.Function)), # # types # (r'(type|typealias|abstract|immutable)(\s+)(' + allowed_variable + ')', # bygroups(Keyword, Text, Name.Class)), # type names (words([ 'ANY', 'ASCIIString', 'AbstractArray', 'AbstractChannel', 'AbstractFloat', 'AbstractMatrix', 'AbstractRNG', 'AbstractSparseArray', 'AbstractSparseMatrix', 'AbstractSparseVector', 'AbstractString', 'AbstractVecOrMat', 'AbstractVector', 'Any', 'ArgumentError', 'Array', 'AssertionError', 'Associative', 'Base64DecodePipe', 'Base64EncodePipe', 'Bidiagonal', 'BigFloat', 'BigInt', 'BitArray', 'BitMatrix', 'BitVector', 'Bool', 'BoundsError', 'Box', 'BufferStream', 'CapturedException', 'CartesianIndex', 'CartesianRange', 'Cchar', 'Cdouble', 'Cfloat', 'Channel', 'Char', 'Cint', 'Cintmax_t', 'Clong', 'Clonglong', 'ClusterManager', 'Cmd', 'Coff_t', 'Colon', 'Complex', 'Complex128', 'Complex32', 'Complex64', 'CompositeException', 'Condition', 'Cptrdiff_t', 'Cshort', 'Csize_t', 'Cssize_t', 'Cstring', 'Cuchar', 'Cuint', 'Cuintmax_t', 'Culong', 'Culonglong', 'Cushort', 'Cwchar_t', 'Cwstring', 'DataType', 'Date', 'DateTime', 'DenseArray', 'DenseMatrix', 'DenseVecOrMat', 'DenseVector', 'Diagonal', 'Dict', 'DimensionMismatch', 'Dims', 'DirectIndexString', 'Display', 'DivideError', 'DomainError', 'EOFError', 'EachLine', 'Enum', 'Enumerate', 'ErrorException', 'Exception', 'Expr', 'Factorization', 'FileMonitor', 'FileOffset', 'Filter', 'Float16', 'Float32', 'Float64', 'FloatRange', 'Function', 'GenSym', 'GlobalRef', 'GotoNode', 'HTML', 'Hermitian', 'IO', 'IOBuffer', 'IOStream', 'IPv4', 'IPv6', 'InexactError', 'InitError', 'Int', 'Int128', 'Int16', 'Int32', 'Int64', 'Int8', 'IntSet', 'Integer', 'InterruptException', 'IntrinsicFunction', 'InvalidStateException', 'Irrational', 'KeyError', 'LabelNode', 'LambdaStaticData', 'LinSpace', 'LineNumberNode', 'LoadError', 'LocalProcess', 'LowerTriangular', 'MIME', 'Matrix', 'MersenneTwister', 'Method', 'MethodError', 'MethodTable', 'Module', 'NTuple', 'NewvarNode', 'NullException', 'Nullable', 'Number', 'ObjectIdDict', 'OrdinalRange', 'OutOfMemoryError', 'OverflowError', 'Pair', 'ParseError', 'PartialQuickSort', 'Pipe', 'PollingFileWatcher', 'ProcessExitedException', 'ProcessGroup', 'Ptr', 'QuoteNode', 'RandomDevice', 'Range', 'Rational', 'RawFD', 'ReadOnlyMemoryError', 'Real', 'ReentrantLock', 'Ref', 'Regex', 'RegexMatch', 'RemoteException', 'RemoteRef', 'RepString', 'RevString', 'RopeString', 'RoundingMode', 'SegmentationFault', 'SerializationState', 'Set', 'SharedArray', 'SharedMatrix', 'SharedVector', 'Signed', 'SimpleVector', 'SparseMatrixCSC', 'StackOverflowError', 'StatStruct', 'StepRange', 'StridedArray', 'StridedMatrix', 'StridedVecOrMat', 'StridedVector', 'SubArray', 'SubString', 'SymTridiagonal', 'Symbol', 'SymbolNode', 'Symmetric', 'SystemError', 'TCPSocket', 'Task', 'Text', 'TextDisplay', 'Timer', 'TopNode', 'Tridiagonal', 'Tuple', 'Type', 'TypeConstructor', 'TypeError', 'TypeName', 'TypeVar', 'UDPSocket', 'UInt', 'UInt128', 'UInt16', 'UInt32', 'UInt64', 'UInt8', 'UTF16String', 'UTF32String', 'UTF8String', 'UndefRefError', 'UndefVarError', 'UnicodeError', 'UniformScaling', 'Union', 'UnitRange', 'Unsigned', 'UpperTriangular', 'Val', 'Vararg', 'VecOrMat', 'Vector', 'VersionNumber', 'Void', 'WString', 'WeakKeyDict', 'WeakRef', 'WorkerConfig', 'Zip'], suffix=r'\b'), Keyword.Type), # builtins (words([ u'ARGS', u'CPU_CORES', u'C_NULL', u'DevNull', u'ENDIAN_BOM', u'ENV', u'I', u'Inf', u'Inf16', u'Inf32', u'Inf64', 
u'InsertionSort', u'JULIA_HOME', u'LOAD_PATH', u'MergeSort', u'NaN', u'NaN16', u'NaN32', u'NaN64', u'OS_NAME', u'QuickSort', u'RoundDown', u'RoundFromZero', u'RoundNearest', u'RoundNearestTiesAway', u'RoundNearestTiesUp', u'RoundToZero', u'RoundUp', u'STDERR', u'STDIN', u'STDOUT', u'VERSION', u'WORD_SIZE', u'catalan', u'e', u'eu', u'eulergamma', u'golden', u'im', u'nothing', u'pi', u'γ', u'π', u'φ'], suffix=r'\b'), Name.Builtin), # operators # see: https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm (words([ # prec-assignment u'=', u':=', u'+=', u'-=', u'*=', u'/=', u'//=', u'.//=', u'.*=', u'./=', u'\\=', u'.\\=', u'^=', u'.^=', u'÷=', u'.÷=', u'%=', u'.%=', u'|=', u'&=', u'$=', u'=>', u'<<=', u'>>=', u'>>>=', u'~', u'.+=', u'.-=', # prec-conditional u'?', # prec-arrow u'--', u'-->', # prec-lazy-or u'||', # prec-lazy-and u'&&', # prec-comparison u'>', u'<', u'>=', u'≥', u'<=', u'≤', u'==', u'===', u'≡', u'!=', u'≠', u'!==', u'≢', u'.>', u'.<', u'.>=', u'.≥', u'.<=', u'.≤', u'.==', u'.!=', u'.≠', u'.=', u'.!', u'<:', u'>:', u'∈', u'∉', u'∋', u'∌', u'⊆', u'⊈', u'⊂', u'⊄', u'⊊', # prec-pipe u'|>', u'<|', # prec-colon u':', # prec-plus u'+', u'-', u'.+', u'.-', u'|', u'∪', u'$', # prec-bitshift u'<<', u'>>', u'>>>', u'.<<', u'.>>', u'.>>>', # prec-times u'*', u'/', u'./', u'÷', u'.÷', u'%', u'⋅', u'.%', u'.*', u'\\', u'.\\', u'&', u'∩', # prec-rational u'//', u'.//', # prec-power u'^', u'.^', # prec-decl u'::', # prec-dot u'.', # unary op u'+', u'-', u'!', u'√', u'∛', u'∜' ]), Operator), # chars (r"'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,3}|\\u[a-fA-F0-9]{1,4}|" r"\\U[a-fA-F0-9]{1,6}|[^\\\'\n])'", String.Char), # try to match trailing transpose (r'(?<=[.\w)\]])\'+', Operator), # strings (r'"""', String, 'tqstring'), (r'"', String, 'string'), # regular expressions (r'r"""', String.Regex, 'tqregex'), (r'r"', String.Regex, 'regex'), # backticks (r'`', String.Backtick, 'command'), # names (allowed_variable, Name), (r'@' + allowed_variable, Name.Decorator), # numbers (r'(\d+(_\d+)+\.\d*|\d*\.\d+(_\d+)+)([eEf][+-]?[0-9]+)?', Number.Float), (r'(\d+\.\d*|\d*\.\d+)([eEf][+-]?[0-9]+)?', Number.Float), (r'\d+(_\d+)+[eEf][+-]?[0-9]+', Number.Float), (r'\d+[eEf][+-]?[0-9]+', Number.Float), (r'0b[01]+(_[01]+)+', Number.Bin), (r'0b[01]+', Number.Bin), (r'0o[0-7]+(_[0-7]+)+', Number.Oct), (r'0o[0-7]+', Number.Oct), (r'0x[a-fA-F0-9]+(_[a-fA-F0-9]+)+', Number.Hex), (r'0x[a-fA-F0-9]+', Number.Hex), (r'\d+(_\d+)+', Number.Integer), (r'\d+', Number.Integer) ], "blockcomment": [ (r'[^=#]', Comment.Multiline), (r'#=', Comment.Multiline, '#push'), (r'=#', Comment.Multiline, '#pop'), (r'[=#]', Comment.Multiline), ], 'string': [ (r'"', String, '#pop'), # FIXME: This escape pattern is not perfect. (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape), # Interpolation is defined as "$" followed by the shortest full # expression, which is something we can't parse. # Include the most common cases here: $word, and $(paren'd expr). 
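# --- usage sketch (illustrative; not part of the distributed module) -------
# The two interpolation rules that follow implement exactly the cases named
# in the comment above: '$name' becomes String.Interpol, and '$(' switches
# into the 'in-intp' state so the parenthesised expression is lexed as
# ordinary Julia code.  Made-up input; only an installed Pygments is assumed.
from pygments.lexers import JuliaLexer
from pygments.token import String

_julia_sample = 'msg = "n is $n, twice that is $(2n)"\n'
print([v for t, v in JuliaLexer().get_tokens(_julia_sample)
       if t is String.Interpol])   # expected: ['$n', '$']
# ---------------------------------------------------------------------------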
(r'\$' + allowed_variable, String.Interpol), # (r'\$[a-zA-Z_]+', String.Interpol), (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'), # @printf and @sprintf formats (r'%[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]', String.Interpol), (r'.|\s', String), ], 'tqstring': [ (r'"""', String, '#pop'), (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape), (r'\$' + allowed_variable, String.Interpol), (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'), (r'.|\s', String), ], 'regex': [ (r'"', String.Regex, '#pop'), (r'\\"', String.Regex), (r'.|\s', String.Regex), ], 'tqregex': [ (r'"""', String.Regex, '#pop'), (r'.|\s', String.Regex), ], 'command': [ (r'`', String.Backtick, '#pop'), (r'\$' + allowed_variable, String.Interpol), (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'), (r'.|\s', String.Backtick) ], 'in-intp': [ (r'\(', Punctuation, '#push'), (r'\)', Punctuation, '#pop'), include('root'), ] } def analyse_text(text): return shebang_matches(text, r'julia') class JuliaConsoleLexer(Lexer): """ For Julia console sessions. Modeled after MatlabSessionLexer. .. versionadded:: 1.6 """ name = 'Julia console' aliases = ['jlcon'] def get_tokens_unprocessed(self, text): jllexer = JuliaLexer(**self.options) start = 0 curcode = '' insertions = [] output = False error = False for line in text.splitlines(True): if line.startswith('julia>'): insertions.append((len(curcode), [(0, Generic.Prompt, line[:6])])) curcode += line[6:] output = False error = False elif line.startswith('help?>') or line.startswith('shell>'): yield start, Generic.Prompt, line[:6] yield start + 6, Text, line[6:] output = False error = False elif line.startswith(' ') and not output: insertions.append((len(curcode), [(0, Text, line[:6])])) curcode += line[6:] else: if curcode: for item in do_insertions( insertions, jllexer.get_tokens_unprocessed(curcode)): yield item curcode = '' insertions = [] if line.startswith('ERROR: ') or error: yield start, Generic.Error, line error = True else: yield start, Generic.Output, line output = True start += len(line) if curcode: for item in do_insertions( insertions, jllexer.get_tokens_unprocessed(curcode)): yield item Pygments-2.3.1/pygments/lexers/diff.py0000644000175000017500000001141113376260540017035 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.diff ~~~~~~~~~~~~~~~~~~~~ Lexers for diff/patch formats. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups from pygments.token import Text, Comment, Operator, Keyword, Name, Generic, \ Literal __all__ = ['DiffLexer', 'DarcsPatchLexer', 'WDiffLexer'] class DiffLexer(RegexLexer): """ Lexer for unified or context-style diffs or patches. """ name = 'Diff' aliases = ['diff', 'udiff'] filenames = ['*.diff', '*.patch'] mimetypes = ['text/x-diff', 'text/x-patch'] tokens = { 'root': [ (r' .*\n', Text), (r'\+.*\n', Generic.Inserted), (r'-.*\n', Generic.Deleted), (r'!.*\n', Generic.Strong), (r'@.*\n', Generic.Subheading), (r'([Ii]ndex|diff).*\n', Generic.Heading), (r'=.*\n', Generic.Heading), (r'.*\n', Text), ] } def analyse_text(text): if text[:7] == 'Index: ': return True if text[:5] == 'diff ': return True if text[:4] == '--- ': return 0.9 class DarcsPatchLexer(RegexLexer): """ DarcsPatchLexer is a lexer for the various versions of the darcs patch format. Examples of this format are derived by commands such as ``darcs annotate --patch`` and ``darcs send``. .. 
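# --- usage sketch (illustrative; not part of the distributed module) -------
# DiffLexer.analyse_text() above scores typical patch headers highly, which
# lets guess_lexer() pick the lexer without a filename.  The inputs below
# are made up; only an installed Pygments is assumed.
from pygments.lexers import DiffLexer, guess_lexer

print(DiffLexer.analyse_text('--- a/hello.py\n+++ b/hello.py\n'))  # expected: 0.9
print(DiffLexer.analyse_text('diff --git a/x b/x\n'))              # expected: 1.0
_patch = (
    "--- a/hello.py\n"
    "+++ b/hello.py\n"
    "@@ -1 +1 @@\n"
    "-print('hello')\n"
    "+print('hello, world')\n"
)
print(guess_lexer(_patch).name)   # should be 'Diff' for input like this
# ---------------------------------------------------------------------------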
versionadded:: 0.10 """ name = 'Darcs Patch' aliases = ['dpatch'] filenames = ['*.dpatch', '*.darcspatch'] DPATCH_KEYWORDS = ('hunk', 'addfile', 'adddir', 'rmfile', 'rmdir', 'move', 'replace') tokens = { 'root': [ (r'<', Operator), (r'>', Operator), (r'\{', Operator), (r'\}', Operator), (r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)(\])', bygroups(Operator, Keyword, Name, Text, Name, Operator, Literal.Date, Text, Operator)), (r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)', bygroups(Operator, Keyword, Name, Text, Name, Operator, Literal.Date, Text), 'comment'), (r'New patches:', Generic.Heading), (r'Context:', Generic.Heading), (r'Patch bundle hash:', Generic.Heading), (r'(\s*)(%s)(.*\n)' % '|'.join(DPATCH_KEYWORDS), bygroups(Text, Keyword, Text)), (r'\+', Generic.Inserted, "insert"), (r'-', Generic.Deleted, "delete"), (r'.*\n', Text), ], 'comment': [ (r'[^\]].*\n', Comment), (r'\]', Operator, "#pop"), ], 'specialText': [ # darcs add [_CODE_] special operators for clarity (r'\n', Text, "#pop"), # line-based (r'\[_[^_]*_]', Operator), ], 'insert': [ include('specialText'), (r'\[', Generic.Inserted), (r'[^\n\[]+', Generic.Inserted), ], 'delete': [ include('specialText'), (r'\[', Generic.Deleted), (r'[^\n\[]+', Generic.Deleted), ], } class WDiffLexer(RegexLexer): """ A `wdiff `_ lexer. Note that: * only to normal output (without option like -l). * if target files of wdiff contain "[-", "-]", "{+", "+}", especially they are unbalanced, this lexer will get confusing. .. versionadded:: 2.2 """ name = 'WDiff' aliases = ['wdiff'] filenames = ['*.wdiff'] mimetypes = [] flags = re.MULTILINE | re.DOTALL # We can only assume "[-" after "[-" before "-]" is `nested`, # for instance wdiff to wdiff outputs. We have no way to # distinct these marker is of wdiff output from original text. ins_op = r"\{\+" ins_cl = r"\+\}" del_op = r"\[\-" del_cl = r"\-\]" normal = r'[^{}[\]+-]+' # for performance tokens = { 'root': [ (ins_op, Generic.Inserted, 'inserted'), (del_op, Generic.Deleted, 'deleted'), (normal, Text), (r'.', Text), ], 'inserted': [ (ins_op, Generic.Inserted, '#push'), (del_op, Generic.Inserted, '#push'), (del_cl, Generic.Inserted, '#pop'), (ins_cl, Generic.Inserted, '#pop'), (normal, Generic.Inserted), (r'.', Generic.Inserted), ], 'deleted': [ (del_op, Generic.Deleted, '#push'), (ins_op, Generic.Deleted, '#push'), (ins_cl, Generic.Deleted, '#pop'), (del_cl, Generic.Deleted, '#pop'), (normal, Generic.Deleted), (r'.', Generic.Deleted), ], } Pygments-2.3.1/pygments/lexers/int_fiction.py0000644000175000017500000015474313402534107020443 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.int_fiction ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for interactive fiction languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, \ this, default, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Error, Generic __all__ = ['Inform6Lexer', 'Inform6TemplateLexer', 'Inform7Lexer', 'Tads3Lexer'] class Inform6Lexer(RegexLexer): """ For `Inform 6 `_ source code. .. versionadded:: 2.0 """ name = 'Inform 6' aliases = ['inform6', 'i6'] filenames = ['*.inf'] flags = re.MULTILINE | re.DOTALL | re.UNICODE _name = r'[a-zA-Z_]\w*' # Inform 7 maps these four character classes to their ASCII # equivalents. To support Inform 6 inclusions within Inform 7, # Inform6Lexer maps them too. 
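# --- usage sketch (illustrative; not part of the distributed module) -------
# The WDiffLexer defined above only understands the default wdiff markers:
# deletions wrapped in "[-...-]" and insertions wrapped in "{+...+}".
# Made-up input; only an installed Pygments is assumed.
from pygments.lexers import WDiffLexer
from pygments.token import Generic

_wdiff_sample = "The [-quick-] {+slow+} brown fox\n"
for _tok, _val in WDiffLexer().get_tokens(_wdiff_sample):
    if _tok in Generic:
        print(_tok, repr(_val))
# expected: Generic.Deleted pieces for "[-quick-]", Generic.Inserted for "{+slow+}"
# ---------------------------------------------------------------------------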
_dash = u'\\-\u2010-\u2014' _dquote = u'"\u201c\u201d' _squote = u"'\u2018\u2019" _newline = u'\\n\u0085\u2028\u2029' tokens = { 'root': [ (r'\A(!%%[^%s]*[%s])+' % (_newline, _newline), Comment.Preproc, 'directive'), default('directive') ], '_whitespace': [ (r'\s+', Text), (r'![^%s]*' % _newline, Comment.Single) ], 'default': [ include('_whitespace'), (r'\[', Punctuation, 'many-values'), # Array initialization (r':|(?=;)', Punctuation, '#pop'), (r'<', Punctuation), # Second angle bracket in an action statement default(('expression', '_expression')) ], # Expressions '_expression': [ include('_whitespace'), (r'(?=sp\b)', Text, '#pop'), (r'(?=[%s%s$0-9#a-zA-Z_])' % (_dquote, _squote), Text, ('#pop', 'value')), (r'\+\+|[%s]{1,2}(?!>)|~~?' % _dash, Operator), (r'(?=[()\[%s,?@{:;])' % _dash, Text, '#pop') ], 'expression': [ include('_whitespace'), (r'\(', Punctuation, ('expression', '_expression')), (r'\)', Punctuation, '#pop'), (r'\[', Punctuation, ('#pop', 'statements', 'locals')), (r'>(?=(\s+|(![^%s]*))*[>;])' % _newline, Punctuation), (r'\+\+|[%s]{2}(?!>)' % _dash, Operator), (r',', Punctuation, '_expression'), (r'&&?|\|\|?|[=~><]?=|[%s]{1,2}>?|\.\.?[&#]?|::|[<>+*/%%]' % _dash, Operator, '_expression'), (r'(has|hasnt|in|notin|ofclass|or|provides)\b', Operator.Word, '_expression'), (r'sp\b', Name), (r'\?~?', Name.Label, 'label?'), (r'[@{]', Error), default('#pop') ], '_assembly-expression': [ (r'\(', Punctuation, ('#push', '_expression')), (r'[\[\]]', Punctuation), (r'[%s]>' % _dash, Punctuation, '_expression'), (r'sp\b', Keyword.Pseudo), (r';', Punctuation, '#pop:3'), include('expression') ], '_for-expression': [ (r'\)', Punctuation, '#pop:2'), (r':', Punctuation, '#pop'), include('expression') ], '_keyword-expression': [ (r'(from|near|to)\b', Keyword, '_expression'), include('expression') ], '_list-expression': [ (r',', Punctuation, '#pop'), include('expression') ], '_object-expression': [ (r'has\b', Keyword.Declaration, '#pop'), include('_list-expression') ], # Values 'value': [ include('_whitespace'), # Strings (r'[%s][^@][%s]' % (_squote, _squote), String.Char, '#pop'), (r'([%s])(@\{[0-9a-fA-F]{1,4}\})([%s])' % (_squote, _squote), bygroups(String.Char, String.Escape, String.Char), '#pop'), (r'([%s])(@.{2})([%s])' % (_squote, _squote), bygroups(String.Char, String.Escape, String.Char), '#pop'), (r'[%s]' % _squote, String.Single, ('#pop', 'dictionary-word')), (r'[%s]' % _dquote, String.Double, ('#pop', 'string')), # Numbers (r'\$[+%s][0-9]*\.?[0-9]*([eE][+%s]?[0-9]+)?' 
% (_dash, _dash), Number.Float, '#pop'), (r'\$[0-9a-fA-F]+', Number.Hex, '#pop'), (r'\$\$[01]+', Number.Bin, '#pop'), (r'[0-9]+', Number.Integer, '#pop'), # Values prefixed by hashes (r'(##|#a\$)(%s)' % _name, bygroups(Operator, Name), '#pop'), (r'(#g\$)(%s)' % _name, bygroups(Operator, Name.Variable.Global), '#pop'), (r'#[nw]\$', Operator, ('#pop', 'obsolete-dictionary-word')), (r'(#r\$)(%s)' % _name, bygroups(Operator, Name.Function), '#pop'), (r'#', Name.Builtin, ('#pop', 'system-constant')), # System functions (words(( 'child', 'children', 'elder', 'eldest', 'glk', 'indirect', 'metaclass', 'parent', 'random', 'sibling', 'younger', 'youngest'), suffix=r'\b'), Name.Builtin, '#pop'), # Metaclasses (r'(?i)(Class|Object|Routine|String)\b', Name.Builtin, '#pop'), # Veneer routines (words(( 'Box__Routine', 'CA__Pr', 'CDefArt', 'CInDefArt', 'Cl__Ms', 'Copy__Primitive', 'CP__Tab', 'DA__Pr', 'DB__Pr', 'DefArt', 'Dynam__String', 'EnglishNumber', 'Glk__Wrap', 'IA__Pr', 'IB__Pr', 'InDefArt', 'Main__', 'Meta__class', 'OB__Move', 'OB__Remove', 'OC__Cl', 'OP__Pr', 'Print__Addr', 'Print__PName', 'PrintShortName', 'RA__Pr', 'RA__Sc', 'RL__Pr', 'R_Process', 'RT__ChG', 'RT__ChGt', 'RT__ChLDB', 'RT__ChLDW', 'RT__ChPR', 'RT__ChPrintA', 'RT__ChPrintC', 'RT__ChPrintO', 'RT__ChPrintS', 'RT__ChPS', 'RT__ChR', 'RT__ChSTB', 'RT__ChSTW', 'RT__ChT', 'RT__Err', 'RT__TrPS', 'RV__Pr', 'Symb__Tab', 'Unsigned__Compare', 'WV__Pr', 'Z__Region'), prefix='(?i)', suffix=r'\b'), Name.Builtin, '#pop'), # Other built-in symbols (words(( 'call', 'copy', 'create', 'DEBUG', 'destroy', 'DICT_CHAR_SIZE', 'DICT_ENTRY_BYTES', 'DICT_IS_UNICODE', 'DICT_WORD_SIZE', 'false', 'FLOAT_INFINITY', 'FLOAT_NAN', 'FLOAT_NINFINITY', 'GOBJFIELD_CHAIN', 'GOBJFIELD_CHILD', 'GOBJFIELD_NAME', 'GOBJFIELD_PARENT', 'GOBJFIELD_PROPTAB', 'GOBJFIELD_SIBLING', 'GOBJ_EXT_START', 'GOBJ_TOTAL_LENGTH', 'Grammar__Version', 'INDIV_PROP_START', 'INFIX', 'infix__watching', 'MODULE_MODE', 'name', 'nothing', 'NUM_ATTR_BYTES', 'print', 'print_to_array', 'recreate', 'remaining', 'self', 'sender', 'STRICT_MODE', 'sw__var', 'sys__glob0', 'sys__glob1', 'sys__glob2', 'sys_statusline_flag', 'TARGET_GLULX', 'TARGET_ZCODE', 'temp__global2', 'temp__global3', 'temp__global4', 'temp_global', 'true', 'USE_MODULES', 'WORDSIZE'), prefix='(?i)', suffix=r'\b'), Name.Builtin, '#pop'), # Other values (_name, Name, '#pop') ], # Strings 'dictionary-word': [ (r'[~^]+', String.Escape), (r'[^~^\\@({%s]+' % _squote, String.Single), (r'[({]', String.Single), (r'@\{[0-9a-fA-F]{,4}\}', String.Escape), (r'@.{2}', String.Escape), (r'[%s]' % _squote, String.Single, '#pop') ], 'string': [ (r'[~^]+', String.Escape), (r'[^~^\\@({%s]+' % _dquote, String.Double), (r'[({]', String.Double), (r'\\', String.Escape), (r'@(\\\s*[%s]\s*)*@((\\\s*[%s]\s*)*[0-9])*' % (_newline, _newline), String.Escape), (r'@(\\\s*[%s]\s*)*\{((\\\s*[%s]\s*)*[0-9a-fA-F]){,4}' r'(\\\s*[%s]\s*)*\}' % (_newline, _newline, _newline), String.Escape), (r'@(\\\s*[%s]\s*)*.(\\\s*[%s]\s*)*.' 
% (_newline, _newline), String.Escape), (r'[%s]' % _dquote, String.Double, '#pop') ], 'plain-string': [ (r'[^~^\\({\[\]%s]+' % _dquote, String.Double), (r'[~^({\[\]]', String.Double), (r'\\', String.Escape), (r'[%s]' % _dquote, String.Double, '#pop') ], # Names '_constant': [ include('_whitespace'), (_name, Name.Constant, '#pop'), include('value') ], '_global': [ include('_whitespace'), (_name, Name.Variable.Global, '#pop'), include('value') ], 'label?': [ include('_whitespace'), (_name, Name.Label, '#pop'), default('#pop') ], 'variable?': [ include('_whitespace'), (_name, Name.Variable, '#pop'), default('#pop') ], # Values after hashes 'obsolete-dictionary-word': [ (r'\S\w*', String.Other, '#pop') ], 'system-constant': [ include('_whitespace'), (_name, Name.Builtin, '#pop') ], # Directives 'directive': [ include('_whitespace'), (r'#', Punctuation), (r';', Punctuation, '#pop'), (r'\[', Punctuation, ('default', 'statements', 'locals', 'routine-name?')), (words(( 'abbreviate', 'endif', 'dictionary', 'ifdef', 'iffalse', 'ifndef', 'ifnot', 'iftrue', 'ifv3', 'ifv5', 'release', 'serial', 'switches', 'system_file', 'version'), prefix='(?i)', suffix=r'\b'), Keyword, 'default'), (r'(?i)(array|global)\b', Keyword, ('default', 'directive-keyword?', '_global')), (r'(?i)attribute\b', Keyword, ('default', 'alias?', '_constant')), (r'(?i)class\b', Keyword, ('object-body', 'duplicates', 'class-name')), (r'(?i)(constant|default)\b', Keyword, ('default', 'expression', '_constant')), (r'(?i)(end\b)(.*)', bygroups(Keyword, Text)), (r'(?i)(extend|verb)\b', Keyword, 'grammar'), (r'(?i)fake_action\b', Keyword, ('default', '_constant')), (r'(?i)import\b', Keyword, 'manifest'), (r'(?i)(include|link)\b', Keyword, ('default', 'before-plain-string')), (r'(?i)(lowstring|undef)\b', Keyword, ('default', '_constant')), (r'(?i)message\b', Keyword, ('default', 'diagnostic')), (r'(?i)(nearby|object)\b', Keyword, ('object-body', '_object-head')), (r'(?i)property\b', Keyword, ('default', 'alias?', '_constant', 'property-keyword*')), (r'(?i)replace\b', Keyword, ('default', 'routine-name?', 'routine-name?')), (r'(?i)statusline\b', Keyword, ('default', 'directive-keyword?')), (r'(?i)stub\b', Keyword, ('default', 'routine-name?')), (r'(?i)trace\b', Keyword, ('default', 'trace-keyword?', 'trace-keyword?')), (r'(?i)zcharacter\b', Keyword, ('default', 'directive-keyword?', 'directive-keyword?')), (_name, Name.Class, ('object-body', '_object-head')) ], # [, Replace, Stub 'routine-name?': [ include('_whitespace'), (_name, Name.Function, '#pop'), default('#pop') ], 'locals': [ include('_whitespace'), (r';', Punctuation, '#pop'), (r'\*', Punctuation), (r'"', String.Double, 'plain-string'), (_name, Name.Variable) ], # Array 'many-values': [ include('_whitespace'), (r';', Punctuation), (r'\]', Punctuation, '#pop'), (r':', Error), default(('expression', '_expression')) ], # Attribute, Property 'alias?': [ include('_whitespace'), (r'alias\b', Keyword, ('#pop', '_constant')), default('#pop') ], # Class, Object, Nearby 'class-name': [ include('_whitespace'), (r'(?=[,;]|(class|has|private|with)\b)', Text, '#pop'), (_name, Name.Class, '#pop') ], 'duplicates': [ include('_whitespace'), (r'\(', Punctuation, ('#pop', 'expression', '_expression')), default('#pop') ], '_object-head': [ (r'[%s]>' % _dash, Punctuation), (r'(class|has|private|with)\b', Keyword.Declaration, '#pop'), include('_global') ], 'object-body': [ include('_whitespace'), (r';', Punctuation, '#pop:2'), (r',', Punctuation), (r'class\b', Keyword.Declaration, 'class-segment'), 
(r'(has|private|with)\b', Keyword.Declaration), (r':', Error), default(('_object-expression', '_expression')) ], 'class-segment': [ include('_whitespace'), (r'(?=[,;]|(class|has|private|with)\b)', Text, '#pop'), (_name, Name.Class), default('value') ], # Extend, Verb 'grammar': [ include('_whitespace'), (r'=', Punctuation, ('#pop', 'default')), (r'\*', Punctuation, ('#pop', 'grammar-line')), default('_directive-keyword') ], 'grammar-line': [ include('_whitespace'), (r';', Punctuation, '#pop'), (r'[/*]', Punctuation), (r'[%s]>' % _dash, Punctuation, 'value'), (r'(noun|scope)\b', Keyword, '=routine'), default('_directive-keyword') ], '=routine': [ include('_whitespace'), (r'=', Punctuation, 'routine-name?'), default('#pop') ], # Import 'manifest': [ include('_whitespace'), (r';', Punctuation, '#pop'), (r',', Punctuation), (r'(?i)global\b', Keyword, '_global'), default('_global') ], # Include, Link, Message 'diagnostic': [ include('_whitespace'), (r'[%s]' % _dquote, String.Double, ('#pop', 'message-string')), default(('#pop', 'before-plain-string', 'directive-keyword?')) ], 'before-plain-string': [ include('_whitespace'), (r'[%s]' % _dquote, String.Double, ('#pop', 'plain-string')) ], 'message-string': [ (r'[~^]+', String.Escape), include('plain-string') ], # Keywords used in directives '_directive-keyword!': [ include('_whitespace'), (words(( 'additive', 'alias', 'buffer', 'class', 'creature', 'data', 'error', 'fatalerror', 'first', 'has', 'held', 'initial', 'initstr', 'last', 'long', 'meta', 'multi', 'multiexcept', 'multiheld', 'multiinside', 'noun', 'number', 'only', 'private', 'replace', 'reverse', 'scope', 'score', 'special', 'string', 'table', 'terminating', 'time', 'topic', 'warning', 'with'), suffix=r'\b'), Keyword, '#pop'), (r'[%s]{1,2}>|[+=]' % _dash, Punctuation, '#pop') ], '_directive-keyword': [ include('_directive-keyword!'), include('value') ], 'directive-keyword?': [ include('_directive-keyword!'), default('#pop') ], 'property-keyword*': [ include('_whitespace'), (r'(additive|long)\b', Keyword), default('#pop') ], 'trace-keyword?': [ include('_whitespace'), (words(( 'assembly', 'dictionary', 'expressions', 'lines', 'linker', 'objects', 'off', 'on', 'symbols', 'tokens', 'verbs'), suffix=r'\b'), Keyword, '#pop'), default('#pop') ], # Statements 'statements': [ include('_whitespace'), (r'\]', Punctuation, '#pop'), (r'[;{}]', Punctuation), (words(( 'box', 'break', 'continue', 'default', 'give', 'inversion', 'new_line', 'quit', 'read', 'remove', 'return', 'rfalse', 'rtrue', 'spaces', 'string', 'until'), suffix=r'\b'), Keyword, 'default'), (r'(do|else)\b', Keyword), (r'(font|style)\b', Keyword, ('default', 'miscellaneous-keyword?')), (r'for\b', Keyword, ('for', '(?')), (r'(if|switch|while)', Keyword, ('expression', '_expression', '(?')), (r'(jump|save|restore)\b', Keyword, ('default', 'label?')), (r'objectloop\b', Keyword, ('_keyword-expression', 'variable?', '(?')), (r'print(_ret)?\b|(?=[%s])' % _dquote, Keyword, 'print-list'), (r'\.', Name.Label, 'label?'), (r'@', Keyword, 'opcode'), (r'#(?![agrnw]\$|#)', Punctuation, 'directive'), (r'<', Punctuation, 'default'), (r'move\b', Keyword, ('default', '_keyword-expression', '_expression')), default(('default', '_keyword-expression', '_expression')) ], 'miscellaneous-keyword?': [ include('_whitespace'), (r'(bold|fixed|from|near|off|on|reverse|roman|to|underline)\b', Keyword, '#pop'), (r'(a|A|an|address|char|name|number|object|property|string|the|' r'The)\b(?=(\s+|(![^%s]*))*\))' % _newline, Keyword.Pseudo, '#pop'), 
(r'%s(?=(\s+|(![^%s]*))*\))' % (_name, _newline), Name.Function, '#pop'), default('#pop') ], '(?': [ include('_whitespace'), (r'\(', Punctuation, '#pop'), default('#pop') ], 'for': [ include('_whitespace'), (r';', Punctuation, ('_for-expression', '_expression')), default(('_for-expression', '_expression')) ], 'print-list': [ include('_whitespace'), (r';', Punctuation, '#pop'), (r':', Error), default(('_list-expression', '_expression', '_list-expression', 'form')) ], 'form': [ include('_whitespace'), (r'\(', Punctuation, ('#pop', 'miscellaneous-keyword?')), default('#pop') ], # Assembly 'opcode': [ include('_whitespace'), (r'[%s]' % _dquote, String.Double, ('operands', 'plain-string')), (_name, Keyword, 'operands') ], 'operands': [ (r':', Error), default(('_assembly-expression', '_expression')) ] } def get_tokens_unprocessed(self, text): # 'in' is either a keyword or an operator. # If the token two tokens after 'in' is ')', 'in' is a keyword: # objectloop(a in b) # Otherwise, it is an operator: # objectloop(a in b && true) objectloop_queue = [] objectloop_token_count = -1 previous_token = None for index, token, value in RegexLexer.get_tokens_unprocessed(self, text): if previous_token is Name.Variable and value == 'in': objectloop_queue = [[index, token, value]] objectloop_token_count = 2 elif objectloop_token_count > 0: if token not in Comment and token not in Text: objectloop_token_count -= 1 objectloop_queue.append((index, token, value)) else: if objectloop_token_count == 0: if objectloop_queue[-1][2] == ')': objectloop_queue[0][1] = Keyword while objectloop_queue: yield objectloop_queue.pop(0) objectloop_token_count = -1 yield index, token, value if token not in Comment and token not in Text: previous_token = token while objectloop_queue: yield objectloop_queue.pop(0) class Inform7Lexer(RegexLexer): """ For `Inform 7 `_ source code. .. versionadded:: 2.0 """ name = 'Inform 7' aliases = ['inform7', 'i7'] filenames = ['*.ni', '*.i7x'] flags = re.MULTILINE | re.DOTALL | re.UNICODE _dash = Inform6Lexer._dash _dquote = Inform6Lexer._dquote _newline = Inform6Lexer._newline _start = r'\A|(?<=[%s])' % _newline # There are three variants of Inform 7, differing in how to # interpret at signs and braces in I6T. In top-level inclusions, at # signs in the first column are inweb syntax. In phrase definitions # and use options, tokens in braces are treated as I7. Use options # also interpret "{N}". 
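# --- usage sketch (illustrative; not part of the distributed module) -------
# Exercising the 'in' disambiguation implemented in
# Inform6Lexer.get_tokens_unprocessed() above: per the comments there, 'in'
# inside "objectloop(a in b)" should be re-flagged as a Keyword, while in
# "objectloop(a in b && true)" it should stay an operator.  The routines
# below are made up; only an installed Pygments is assumed.
from pygments.lexers import Inform6Lexer

for _src in ("[ Main; objectloop(a in b) rtrue; ];",
             "[ Main; objectloop(a in b && true) rtrue; ];"):
    _in_tokens = [tok for _, tok, val
                  in Inform6Lexer().get_tokens_unprocessed(_src)
                  if val == 'in']
    print(_src, '->', _in_tokens)
# ---------------------------------------------------------------------------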
tokens = {} token_variants = ['+i6t-not-inline', '+i6t-inline', '+i6t-use-option'] for level in token_variants: tokens[level] = { '+i6-root': list(Inform6Lexer.tokens['root']), '+i6t-root': [ # For Inform6TemplateLexer (r'[^%s]*' % Inform6Lexer._newline, Comment.Preproc, ('directive', '+p')) ], 'root': [ (r'(\|?\s)+', Text), (r'\[', Comment.Multiline, '+comment'), (r'[%s]' % _dquote, Generic.Heading, ('+main', '+titling', '+titling-string')), default(('+main', '+heading?')) ], '+titling-string': [ (r'[^%s]+' % _dquote, Generic.Heading), (r'[%s]' % _dquote, Generic.Heading, '#pop') ], '+titling': [ (r'\[', Comment.Multiline, '+comment'), (r'[^%s.;:|%s]+' % (_dquote, _newline), Generic.Heading), (r'[%s]' % _dquote, Generic.Heading, '+titling-string'), (r'[%s]{2}|(?<=[\s%s])\|[\s%s]' % (_newline, _dquote, _dquote), Text, ('#pop', '+heading?')), (r'[.;:]|(?<=[\s%s])\|' % _dquote, Text, '#pop'), (r'[|%s]' % _newline, Generic.Heading) ], '+main': [ (r'(?i)[^%s:a\[(|%s]+' % (_dquote, _newline), Text), (r'[%s]' % _dquote, String.Double, '+text'), (r':', Text, '+phrase-definition'), (r'(?i)\bas\b', Text, '+use-option'), (r'\[', Comment.Multiline, '+comment'), (r'(\([%s])(.*?)([%s]\))' % (_dash, _dash), bygroups(Punctuation, using(this, state=('+i6-root', 'directive'), i6t='+i6t-not-inline'), Punctuation)), (r'(%s|(?<=[\s;:.%s]))\|\s|[%s]{2,}' % (_start, _dquote, _newline), Text, '+heading?'), (r'(?i)[a(|%s]' % _newline, Text) ], '+phrase-definition': [ (r'\s+', Text), (r'\[', Comment.Multiline, '+comment'), (r'(\([%s])(.*?)([%s]\))' % (_dash, _dash), bygroups(Punctuation, using(this, state=('+i6-root', 'directive', 'default', 'statements'), i6t='+i6t-inline'), Punctuation), '#pop'), default('#pop') ], '+use-option': [ (r'\s+', Text), (r'\[', Comment.Multiline, '+comment'), (r'(\([%s])(.*?)([%s]\))' % (_dash, _dash), bygroups(Punctuation, using(this, state=('+i6-root', 'directive'), i6t='+i6t-use-option'), Punctuation), '#pop'), default('#pop') ], '+comment': [ (r'[^\[\]]+', Comment.Multiline), (r'\[', Comment.Multiline, '#push'), (r'\]', Comment.Multiline, '#pop') ], '+text': [ (r'[^\[%s]+' % _dquote, String.Double), (r'\[.*?\]', String.Interpol), (r'[%s]' % _dquote, String.Double, '#pop') ], '+heading?': [ (r'(\|?\s)+', Text), (r'\[', Comment.Multiline, '+comment'), (r'[%s]{4}\s+' % _dash, Text, '+documentation-heading'), (r'[%s]{1,3}' % _dash, Text), (r'(?i)(volume|book|part|chapter|section)\b[^%s]*' % _newline, Generic.Heading, '#pop'), default('#pop') ], '+documentation-heading': [ (r'\s+', Text), (r'\[', Comment.Multiline, '+comment'), (r'(?i)documentation\s+', Text, '+documentation-heading2'), default('#pop') ], '+documentation-heading2': [ (r'\s+', Text), (r'\[', Comment.Multiline, '+comment'), (r'[%s]{4}\s' % _dash, Text, '+documentation'), default('#pop:2') ], '+documentation': [ (r'(?i)(%s)\s*(chapter|example)\s*:[^%s]*' % (_start, _newline), Generic.Heading), (r'(?i)(%s)\s*section\s*:[^%s]*' % (_start, _newline), Generic.Subheading), (r'((%s)\t.*?[%s])+' % (_start, _newline), using(this, state='+main')), (r'[^%s\[]+|[%s\[]' % (_newline, _newline), Text), (r'\[', Comment.Multiline, '+comment'), ], '+i6t-not-inline': [ (r'(%s)@c( .*?)?([%s]|\Z)' % (_start, _newline), Comment.Preproc), (r'(%s)@([%s]+|Purpose:)[^%s]*' % (_start, _dash, _newline), Comment.Preproc), (r'(%s)@p( .*?)?([%s]|\Z)' % (_start, _newline), Generic.Heading, '+p') ], '+i6t-use-option': [ include('+i6t-not-inline'), (r'(\{)(N)(\})', bygroups(Punctuation, Text, Punctuation)) ], '+i6t-inline': [ (r'(\{)(\S[^}]*)?(\})', 
bygroups(Punctuation, using(this, state='+main'), Punctuation)) ], '+i6t': [ (r'(\{[%s])(![^}]*)(\}?)' % _dash, bygroups(Punctuation, Comment.Single, Punctuation)), (r'(\{[%s])(lines)(:)([^}]*)(\}?)' % _dash, bygroups(Punctuation, Keyword, Punctuation, Text, Punctuation), '+lines'), (r'(\{[%s])([^:}]*)(:?)([^}]*)(\}?)' % _dash, bygroups(Punctuation, Keyword, Punctuation, Text, Punctuation)), (r'(\(\+)(.*?)(\+\)|\Z)', bygroups(Punctuation, using(this, state='+main'), Punctuation)) ], '+p': [ (r'[^@]+', Comment.Preproc), (r'(%s)@c( .*?)?([%s]|\Z)' % (_start, _newline), Comment.Preproc, '#pop'), (r'(%s)@([%s]|Purpose:)' % (_start, _dash), Comment.Preproc), (r'(%s)@p( .*?)?([%s]|\Z)' % (_start, _newline), Generic.Heading), (r'@', Comment.Preproc) ], '+lines': [ (r'(%s)@c( .*?)?([%s]|\Z)' % (_start, _newline), Comment.Preproc), (r'(%s)@([%s]|Purpose:)[^%s]*' % (_start, _dash, _newline), Comment.Preproc), (r'(%s)@p( .*?)?([%s]|\Z)' % (_start, _newline), Generic.Heading, '+p'), (r'(%s)@\w*[ %s]' % (_start, _newline), Keyword), (r'![^%s]*' % _newline, Comment.Single), (r'(\{)([%s]endlines)(\})' % _dash, bygroups(Punctuation, Keyword, Punctuation), '#pop'), (r'[^@!{]+?([%s]|\Z)|.' % _newline, Text) ] } # Inform 7 can include snippets of Inform 6 template language, # so all of Inform6Lexer's states are copied here, with # modifications to account for template syntax. Inform7Lexer's # own states begin with '+' to avoid name conflicts. Some of # Inform6Lexer's states begin with '_': these are not modified. # They deal with template syntax either by including modified # states, or by matching r'' then pushing to modified states. for token in Inform6Lexer.tokens: if token == 'root': continue tokens[level][token] = list(Inform6Lexer.tokens[token]) if not token.startswith('_'): tokens[level][token][:0] = [include('+i6t'), include(level)] def __init__(self, **options): level = options.get('i6t', '+i6t-not-inline') if level not in self._all_tokens: self._tokens = self.__class__.process_tokendef(level) else: self._tokens = self._all_tokens[level] RegexLexer.__init__(self, **options) class Inform6TemplateLexer(Inform7Lexer): """ For `Inform 6 template `_ code. .. versionadded:: 2.0 """ name = 'Inform 6 template' aliases = ['i6t'] filenames = ['*.i6t'] def get_tokens_unprocessed(self, text, stack=('+i6t-root',)): return Inform7Lexer.get_tokens_unprocessed(self, text, stack) class Tads3Lexer(RegexLexer): """ For `TADS 3 `_ source code. 
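# --- usage sketch (illustrative; not part of the distributed module) -------
# Per __init__ above, Inform7Lexer builds one of three token-table variants
# selected by the 'i6t' option ('+i6t-not-inline' by default), and
# Inform6TemplateLexer reuses Inform7Lexer starting from the '+i6t-root'
# state.  Only an installed Pygments is assumed.
from pygments.lexers import Inform6TemplateLexer, Inform7Lexer

print(Inform7Lexer().name)                 # 'Inform 7'
print(Inform6TemplateLexer().name)         # 'Inform 6 template'
_inline = Inform7Lexer(i6t='+i6t-inline')  # variant used inside phrase definitions
print(sorted(_inline.aliases))             # ['i7', 'inform7']
# ---------------------------------------------------------------------------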
""" name = 'TADS 3' aliases = ['tads3'] filenames = ['*.t'] flags = re.DOTALL | re.MULTILINE _comment_single = r'(?://(?:[^\\\n]|\\+[\w\W])*$)' _comment_multiline = r'(?:/\*(?:[^*]|\*(?!/))*\*/)' _escape = (r'(?:\\(?:[\n\\<>"\'^v bnrt]|u[\da-fA-F]{,4}|x[\da-fA-F]{,2}|' r'[0-3]?[0-7]{1,2}))') _name = r'(?:[_a-zA-Z]\w*)' _no_quote = r'(?=\s|\\?>)' _operator = (r'(?:&&|\|\||\+\+|--|\?\?|::|[.,@\[\]~]|' r'(?:[=+\-*/%!&|^]|<>?>?)=?)') _ws = r'(?:\\|\s|%s|%s)' % (_comment_single, _comment_multiline) _ws_pp = r'(?:\\\n|[^\S\n]|%s|%s)' % (_comment_single, _comment_multiline) def _make_string_state(triple, double, verbatim=None, _escape=_escape): if verbatim: verbatim = ''.join(['(?:%s|%s)' % (re.escape(c.lower()), re.escape(c.upper())) for c in verbatim]) char = r'"' if double else r"'" token = String.Double if double else String.Single escaped_quotes = r'+|%s(?!%s{2})' % (char, char) if triple else r'' prefix = '%s%s' % ('t' if triple else '', 'd' if double else 's') tag_state_name = '%sqt' % prefix state = [] if triple: state += [ (r'%s{3,}' % char, token, '#pop'), (r'\\%s+' % char, String.Escape), (char, token) ] else: state.append((char, token, '#pop')) state += [ include('s/verbatim'), (r'[^\\<&{}%s]+' % char, token) ] if verbatim: # This regex can't use `(?i)` because escape sequences are # case-sensitive. `<\XMP>` works; `<\xmp>` doesn't. state.append((r'\\?<(/|\\\\|(?!%s)\\)%s(?=[\s=>])' % (_escape, verbatim), Name.Tag, ('#pop', '%sqs' % prefix, tag_state_name))) else: state += [ (r'\\?<\\%s]|<(?!<)|\\%s%s|%s|\\.)*>?' % (char, char, escaped_quotes, _escape), Comment.Multiline), (r'(?i)\\?]|\\>)', Name.Tag, ('#pop', '%sqs/listing' % prefix, tag_state_name)), (r'(?i)\\?]|\\>)', Name.Tag, ('#pop', '%sqs/xmp' % prefix, tag_state_name)), (r'\\?<([^\s=><\\%s]|<(?!<)|\\%s%s|%s|\\.)*' % (char, char, escaped_quotes, _escape), Name.Tag, tag_state_name), include('s/entity') ] state += [ include('s/escape'), (r'\{([^}<\\%s]|<(?!<)|\\%s%s|%s|\\.)*\}' % (char, char, escaped_quotes, _escape), String.Interpol), (r'[\\&{}<]', token) ] return state def _make_tag_state(triple, double, _escape=_escape): char = r'"' if double else r"'" quantifier = r'{3,}' if triple else r'' state_name = '%s%sqt' % ('t' if triple else '', 'd' if double else 's') token = String.Double if double else String.Single escaped_quotes = r'+|%s(?!%s{2})' % (char, char) if triple else r'' return [ (r'%s%s' % (char, quantifier), token, '#pop:2'), (r'(\s|\\\n)+', Text), (r'(=)(\\?")', bygroups(Punctuation, String.Double), 'dqs/%s' % state_name), (r"(=)(\\?')", bygroups(Punctuation, String.Single), 'sqs/%s' % state_name), (r'=', Punctuation, 'uqs/%s' % state_name), (r'\\?>', Name.Tag, '#pop'), (r'\{([^}<\\%s]|<(?!<)|\\%s%s|%s|\\.)*\}' % (char, char, escaped_quotes, _escape), String.Interpol), (r'([^\s=><\\%s]|<(?!<)|\\%s%s|%s|\\.)+' % (char, char, escaped_quotes, _escape), Name.Attribute), include('s/escape'), include('s/verbatim'), include('s/entity'), (r'[\\{}&]', Name.Attribute) ] def _make_attribute_value_state(terminator, host_triple, host_double, _escape=_escape): token = (String.Double if terminator == r'"' else String.Single if terminator == r"'" else String.Other) host_char = r'"' if host_double else r"'" host_quantifier = r'{3,}' if host_triple else r'' host_token = String.Double if host_double else String.Single escaped_quotes = (r'+|%s(?!%s{2})' % (host_char, host_char) if host_triple else r'') return [ (r'%s%s' % (host_char, host_quantifier), host_token, '#pop:3'), (r'%s%s' % (r'' if token is String.Other else r'\\?', 
terminator), token, '#pop'), include('s/verbatim'), include('s/entity'), (r'\{([^}<\\%s]|<(?!<)|\\%s%s|%s|\\.)*\}' % (host_char, host_char, escaped_quotes, _escape), String.Interpol), (r'([^\s"\'<%s{}\\&])+' % (r'>' if token is String.Other else r''), token), include('s/escape'), (r'["\'\s&{<}\\]', token) ] tokens = { 'root': [ (u'\ufeff', Text), (r'\{', Punctuation, 'object-body'), (r';+', Punctuation), (r'(?=(argcount|break|case|catch|continue|default|definingobj|' r'delegated|do|else|for|foreach|finally|goto|if|inherited|' r'invokee|local|nil|new|operator|replaced|return|self|switch|' r'targetobj|targetprop|throw|true|try|while)\b)', Text, 'block'), (r'(%s)(%s*)(\()' % (_name, _ws), bygroups(Name.Function, using(this, state='whitespace'), Punctuation), ('block?/root', 'more/parameters', 'main/parameters')), include('whitespace'), (r'\++', Punctuation), (r'[^\s!"%-(*->@-_a-z{-~]+', Error), # Averts an infinite loop (r'(?!\Z)', Text, 'main/root') ], 'main/root': [ include('main/basic'), default(('#pop', 'object-body/no-braces', 'classes', 'class')) ], 'object-body/no-braces': [ (r';', Punctuation, '#pop'), (r'\{', Punctuation, ('#pop', 'object-body')), include('object-body') ], 'object-body': [ (r';', Punctuation), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), (r':', Punctuation, ('classes', 'class')), (r'(%s?)(%s*)(\()' % (_name, _ws), bygroups(Name.Function, using(this, state='whitespace'), Punctuation), ('block?', 'more/parameters', 'main/parameters')), (r'(%s)(%s*)(\{)' % (_name, _ws), bygroups(Name.Function, using(this, state='whitespace'), Punctuation), 'block'), (r'(%s)(%s*)(:)' % (_name, _ws), bygroups(Name.Variable, using(this, state='whitespace'), Punctuation), ('object-body/no-braces', 'classes', 'class')), include('whitespace'), (r'->|%s' % _operator, Punctuation, 'main'), default('main/object-body') ], 'main/object-body': [ include('main/basic'), (r'(%s)(%s*)(=?)' % (_name, _ws), bygroups(Name.Variable, using(this, state='whitespace'), Punctuation), ('#pop', 'more', 'main')), default('#pop:2') ], 'block?/root': [ (r'\{', Punctuation, ('#pop', 'block')), include('whitespace'), (r'(?=[\[\'"<(:])', Text, # It might be a VerbRule macro. ('#pop', 'object-body/no-braces', 'grammar', 'grammar-rules')), # It might be a macro like DefineAction. 
default(('#pop', 'object-body/no-braces')) ], 'block?': [ (r'\{', Punctuation, ('#pop', 'block')), include('whitespace'), default('#pop') ], 'block/basic': [ (r'[;:]+', Punctuation), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), (r'default\b', Keyword.Reserved), (r'(%s)(%s*)(:)' % (_name, _ws), bygroups(Name.Label, using(this, state='whitespace'), Punctuation)), include('whitespace') ], 'block': [ include('block/basic'), (r'(?!\Z)', Text, ('more', 'main')) ], 'block/embed': [ (r'>>', String.Interpol, '#pop'), include('block/basic'), (r'(?!\Z)', Text, ('more/embed', 'main')) ], 'main/basic': [ include('whitespace'), (r'\(', Punctuation, ('#pop', 'more', 'main')), (r'\[', Punctuation, ('#pop', 'more/list', 'main')), (r'\{', Punctuation, ('#pop', 'more/inner', 'main/inner', 'more/parameters', 'main/parameters')), (r'\*|\.{3}', Punctuation, '#pop'), (r'(?i)0x[\da-f]+', Number.Hex, '#pop'), (r'(\d+\.(?!\.)\d*|\.\d+)([eE][-+]?\d+)?|\d+[eE][-+]?\d+', Number.Float, '#pop'), (r'0[0-7]+', Number.Oct, '#pop'), (r'\d+', Number.Integer, '#pop'), (r'"""', String.Double, ('#pop', 'tdqs')), (r"'''", String.Single, ('#pop', 'tsqs')), (r'"', String.Double, ('#pop', 'dqs')), (r"'", String.Single, ('#pop', 'sqs')), (r'R"""', String.Regex, ('#pop', 'tdqr')), (r"R'''", String.Regex, ('#pop', 'tsqr')), (r'R"', String.Regex, ('#pop', 'dqr')), (r"R'", String.Regex, ('#pop', 'sqr')), # Two-token keywords (r'(extern)(%s+)(object\b)' % _ws, bygroups(Keyword.Reserved, using(this, state='whitespace'), Keyword.Reserved)), (r'(function|method)(%s*)(\()' % _ws, bygroups(Keyword.Reserved, using(this, state='whitespace'), Punctuation), ('#pop', 'block?', 'more/parameters', 'main/parameters')), (r'(modify)(%s+)(grammar\b)' % _ws, bygroups(Keyword.Reserved, using(this, state='whitespace'), Keyword.Reserved), ('#pop', 'object-body/no-braces', ':', 'grammar')), (r'(new)(%s+(?=(?:function|method)\b))' % _ws, bygroups(Keyword.Reserved, using(this, state='whitespace'))), (r'(object)(%s+)(template\b)' % _ws, bygroups(Keyword.Reserved, using(this, state='whitespace'), Keyword.Reserved), ('#pop', 'template')), (r'(string)(%s+)(template\b)' % _ws, bygroups(Keyword, using(this, state='whitespace'), Keyword.Reserved), ('#pop', 'function-name')), # Keywords (r'(argcount|definingobj|invokee|replaced|targetobj|targetprop)\b', Name.Builtin, '#pop'), (r'(break|continue|goto)\b', Keyword.Reserved, ('#pop', 'label')), (r'(case|extern|if|intrinsic|return|static|while)\b', Keyword.Reserved), (r'catch\b', Keyword.Reserved, ('#pop', 'catch')), (r'class\b', Keyword.Reserved, ('#pop', 'object-body/no-braces', 'class')), (r'(default|do|else|finally|try)\b', Keyword.Reserved, '#pop'), (r'(dictionary|property)\b', Keyword.Reserved, ('#pop', 'constants')), (r'enum\b', Keyword.Reserved, ('#pop', 'enum')), (r'export\b', Keyword.Reserved, ('#pop', 'main')), (r'(for|foreach)\b', Keyword.Reserved, ('#pop', 'more/inner', 'main/inner')), (r'(function|method)\b', Keyword.Reserved, ('#pop', 'block?', 'function-name')), (r'grammar\b', Keyword.Reserved, ('#pop', 'object-body/no-braces', 'grammar')), (r'inherited\b', Keyword.Reserved, ('#pop', 'inherited')), (r'local\b', Keyword.Reserved, ('#pop', 'more/local', 'main/local')), (r'(modify|replace|switch|throw|transient)\b', Keyword.Reserved, '#pop'), (r'new\b', Keyword.Reserved, ('#pop', 'class')), (r'(nil|true)\b', Keyword.Constant, '#pop'), (r'object\b', Keyword.Reserved, ('#pop', 'object-body/no-braces')), (r'operator\b', Keyword.Reserved, ('#pop', 'operator')), (r'propertyset\b', Keyword.Reserved, 
('#pop', 'propertyset', 'main')), (r'self\b', Name.Builtin.Pseudo, '#pop'), (r'template\b', Keyword.Reserved, ('#pop', 'template')), # Operators (r'(__objref|defined)(%s*)(\()' % _ws, bygroups(Operator.Word, using(this, state='whitespace'), Operator), ('#pop', 'more/__objref', 'main')), (r'delegated\b', Operator.Word), # Compiler-defined macros and built-in properties (r'(__DATE__|__DEBUG|__LINE__|__FILE__|' r'__TADS_MACRO_FORMAT_VERSION|__TADS_SYS_\w*|__TADS_SYSTEM_NAME|' r'__TADS_VERSION_MAJOR|__TADS_VERSION_MINOR|__TADS3|__TIME__|' r'construct|finalize|grammarInfo|grammarTag|lexicalParent|' r'miscVocab|sourceTextGroup|sourceTextGroupName|' r'sourceTextGroupOrder|sourceTextOrder)\b', Name.Builtin, '#pop') ], 'main': [ include('main/basic'), (_name, Name, '#pop'), default('#pop') ], 'more/basic': [ (r'\(', Punctuation, ('more/list', 'main')), (r'\[', Punctuation, ('more', 'main')), (r'\.{3}', Punctuation), (r'->|\.\.', Punctuation, 'main'), (r'(?=;)|[:)\]]', Punctuation, '#pop'), include('whitespace'), (_operator, Operator, 'main'), (r'\?', Operator, ('main', 'more/conditional', 'main')), (r'(is|not)(%s+)(in\b)' % _ws, bygroups(Operator.Word, using(this, state='whitespace'), Operator.Word)), (r'[^\s!"%-_a-z{-~]+', Error) # Averts an infinite loop ], 'more': [ include('more/basic'), default('#pop') ], # Then expression (conditional operator) 'more/conditional': [ (r':(?!:)', Operator, '#pop'), include('more') ], # Embedded expressions 'more/embed': [ (r'>>', String.Interpol, '#pop:2'), include('more') ], # For/foreach loop initializer or short-form anonymous function 'main/inner': [ (r'\(', Punctuation, ('#pop', 'more/inner', 'main/inner')), (r'local\b', Keyword.Reserved, ('#pop', 'main/local')), include('main') ], 'more/inner': [ (r'\}', Punctuation, '#pop'), (r',', Punctuation, 'main/inner'), (r'(in|step)\b', Keyword, 'main/inner'), include('more') ], # Local 'main/local': [ (_name, Name.Variable, '#pop'), include('whitespace') ], 'more/local': [ (r',', Punctuation, 'main/local'), include('more') ], # List 'more/list': [ (r'[,:]', Punctuation, 'main'), include('more') ], # Parameter list 'main/parameters': [ (r'(%s)(%s*)(?=:)' % (_name, _ws), bygroups(Name.Variable, using(this, state='whitespace')), '#pop'), (r'(%s)(%s+)(%s)' % (_name, _ws, _name), bygroups(Name.Class, using(this, state='whitespace'), Name.Variable), '#pop'), (r'\[+', Punctuation), include('main/basic'), (_name, Name.Variable, '#pop'), default('#pop') ], 'more/parameters': [ (r'(:)(%s*(?=[?=,:)]))' % _ws, bygroups(Punctuation, using(this, state='whitespace'))), (r'[?\]]+', Punctuation), (r'[:)]', Punctuation, ('#pop', 'multimethod?')), (r',', Punctuation, 'main/parameters'), (r'=', Punctuation, ('more/parameter', 'main')), include('more') ], 'more/parameter': [ (r'(?=[,)])', Text, '#pop'), include('more') ], 'multimethod?': [ (r'multimethod\b', Keyword, '#pop'), include('whitespace'), default('#pop') ], # Statements and expressions 'more/__objref': [ (r',', Punctuation, 'mode'), (r'\)', Operator, '#pop'), include('more') ], 'mode': [ (r'(error|warn)\b', Keyword, '#pop'), include('whitespace') ], 'catch': [ (r'\(+', Punctuation), (_name, Name.Exception, ('#pop', 'variables')), include('whitespace') ], 'enum': [ include('whitespace'), (r'token\b', Keyword, ('#pop', 'constants')), default(('#pop', 'constants')) ], 'grammar': [ (r'\)+', Punctuation), (r'\(', Punctuation, 'grammar-tag'), (r':', Punctuation, 'grammar-rules'), (_name, Name.Class), include('whitespace') ], 'grammar-tag': [ include('whitespace'), 
(r'"""([^\\"<]|""?(?!")|\\"+|\\.|<(?!<))+("{3,}|<<)|' r'R"""([^\\"]|""?(?!")|\\"+|\\.)+"{3,}|' r"'''([^\\'<]|''?(?!')|\\'+|\\.|<(?!<))+('{3,}|<<)|" r"R'''([^\\']|''?(?!')|\\'+|\\.)+'{3,}|" r'"([^\\"<]|\\.|<(?!<))+("|<<)|R"([^\\"]|\\.)+"|' r"'([^\\'<]|\\.|<(?!<))+('|<<)|R'([^\\']|\\.)+'|" r"([^)\s\\/]|/(?![/*]))+|\)", String.Other, '#pop') ], 'grammar-rules': [ include('string'), include('whitespace'), (r'(\[)(%s*)(badness)' % _ws, bygroups(Punctuation, using(this, state='whitespace'), Keyword), 'main'), (r'->|%s|[()]' % _operator, Punctuation), (_name, Name.Constant), default('#pop:2') ], ':': [ (r':', Punctuation, '#pop') ], 'function-name': [ (r'(<<([^>]|>>>|>(?!>))*>>)+', String.Interpol), (r'(?=%s?%s*[({])' % (_name, _ws), Text, '#pop'), (_name, Name.Function, '#pop'), include('whitespace') ], 'inherited': [ (r'<', Punctuation, ('#pop', 'classes', 'class')), include('whitespace'), (_name, Name.Class, '#pop'), default('#pop') ], 'operator': [ (r'negate\b', Operator.Word, '#pop'), include('whitespace'), (_operator, Operator), default('#pop') ], 'propertyset': [ (r'\(', Punctuation, ('more/parameters', 'main/parameters')), (r'\{', Punctuation, ('#pop', 'object-body')), include('whitespace') ], 'template': [ (r'(?=;)', Text, '#pop'), include('string'), (r'inherited\b', Keyword.Reserved), include('whitespace'), (r'->|\?|%s' % _operator, Punctuation), (_name, Name.Variable) ], # Identifiers 'class': [ (r'\*|\.{3}', Punctuation, '#pop'), (r'object\b', Keyword.Reserved, '#pop'), (r'transient\b', Keyword.Reserved), (_name, Name.Class, '#pop'), include('whitespace'), default('#pop') ], 'classes': [ (r'[:,]', Punctuation, 'class'), include('whitespace'), (r'>', Punctuation, '#pop'), default('#pop') ], 'constants': [ (r',+', Punctuation), (r';', Punctuation, '#pop'), (r'property\b', Keyword.Reserved), (_name, Name.Constant), include('whitespace') ], 'label': [ (_name, Name.Label, '#pop'), include('whitespace'), default('#pop') ], 'variables': [ (r',+', Punctuation), (r'\)', Punctuation, '#pop'), include('whitespace'), (_name, Name.Variable) ], # Whitespace and comments 'whitespace': [ (r'^%s*#(%s|[^\n]|(?<=\\)\n)*\n?' % (_ws_pp, _comment_multiline), Comment.Preproc), (_comment_single, Comment.Single), (_comment_multiline, Comment.Multiline), (r'\\+\n+%s*#?|\n+|([^\S\n]|\\)+' % _ws_pp, Text) ], # Strings 'string': [ (r'"""', String.Double, 'tdqs'), (r"'''", String.Single, 'tsqs'), (r'"', String.Double, 'dqs'), (r"'", String.Single, 'sqs') ], 's/escape': [ (r'\{\{|\}\}|%s' % _escape, String.Escape) ], 's/verbatim': [ (r'<<\s*(as\s+decreasingly\s+likely\s+outcomes|cycling|else|end|' r'first\s+time|one\s+of|only|or|otherwise|' r'(sticky|(then\s+)?(purely\s+)?at)\s+random|stopping|' r'(then\s+)?(half\s+)?shuffled|\|\|)\s*>>', String.Interpol), (r'<<(%%(_(%s|\\?.)|[\-+ ,#]|\[\d*\]?)*\d*\.?\d*(%s|\\?.)|' r'\s*((else|otherwise)\s+)?(if|unless)\b)?' 
% (_escape, _escape), String.Interpol, ('block/embed', 'more/embed', 'main')) ], 's/entity': [ (r'(?i)&(#(x[\da-f]+|\d+)|[a-z][\da-z]*);?', Name.Entity) ], 'tdqs': _make_string_state(True, True), 'tsqs': _make_string_state(True, False), 'dqs': _make_string_state(False, True), 'sqs': _make_string_state(False, False), 'tdqs/listing': _make_string_state(True, True, 'listing'), 'tsqs/listing': _make_string_state(True, False, 'listing'), 'dqs/listing': _make_string_state(False, True, 'listing'), 'sqs/listing': _make_string_state(False, False, 'listing'), 'tdqs/xmp': _make_string_state(True, True, 'xmp'), 'tsqs/xmp': _make_string_state(True, False, 'xmp'), 'dqs/xmp': _make_string_state(False, True, 'xmp'), 'sqs/xmp': _make_string_state(False, False, 'xmp'), # Tags 'tdqt': _make_tag_state(True, True), 'tsqt': _make_tag_state(True, False), 'dqt': _make_tag_state(False, True), 'sqt': _make_tag_state(False, False), 'dqs/tdqt': _make_attribute_value_state(r'"', True, True), 'dqs/tsqt': _make_attribute_value_state(r'"', True, False), 'dqs/dqt': _make_attribute_value_state(r'"', False, True), 'dqs/sqt': _make_attribute_value_state(r'"', False, False), 'sqs/tdqt': _make_attribute_value_state(r"'", True, True), 'sqs/tsqt': _make_attribute_value_state(r"'", True, False), 'sqs/dqt': _make_attribute_value_state(r"'", False, True), 'sqs/sqt': _make_attribute_value_state(r"'", False, False), 'uqs/tdqt': _make_attribute_value_state(_no_quote, True, True), 'uqs/tsqt': _make_attribute_value_state(_no_quote, True, False), 'uqs/dqt': _make_attribute_value_state(_no_quote, False, True), 'uqs/sqt': _make_attribute_value_state(_no_quote, False, False), # Regular expressions 'tdqr': [ (r'[^\\"]+', String.Regex), (r'\\"*', String.Regex), (r'"{3,}', String.Regex, '#pop'), (r'"', String.Regex) ], 'tsqr': [ (r"[^\\']+", String.Regex), (r"\\'*", String.Regex), (r"'{3,}", String.Regex, '#pop'), (r"'", String.Regex) ], 'dqr': [ (r'[^\\"]+', String.Regex), (r'\\"?', String.Regex), (r'"', String.Regex, '#pop') ], 'sqr': [ (r"[^\\']+", String.Regex), (r"\\'?", String.Regex), (r"'", String.Regex, '#pop') ] } def get_tokens_unprocessed(self, text, **kwargs): pp = r'^%s*#%s*' % (self._ws_pp, self._ws_pp) if_false_level = 0 for index, token, value in ( RegexLexer.get_tokens_unprocessed(self, text, **kwargs)): if if_false_level == 0: # Not in a false #if if (token is Comment.Preproc and re.match(r'%sif%s+(0|nil)%s*$\n?' % (pp, self._ws_pp, self._ws_pp), value)): if_false_level = 1 else: # In a false #if if token is Comment.Preproc: if (if_false_level == 1 and re.match(r'%sel(if|se)\b' % pp, value)): if_false_level = 0 elif re.match(r'%sif' % pp, value): if_false_level += 1 elif re.match(r'%sendif\b' % pp, value): if_false_level -= 1 else: token = Comment yield index, token, value Pygments-2.3.1/pygments/lexers/lisp.py0000644000175000017500000043033113402534107017073 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.lisp ~~~~~~~~~~~~~~~~~~~~ Lexers for Lispy languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
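    A minimal usage sketch (assuming the standard ``pygments.highlight`` API)::

        from pygments import highlight
        from pygments.formatters import TerminalFormatter
        from pygments.lexers.lisp import SchemeLexer

        print(highlight('(define (square x) (* x x))',
                        SchemeLexer(), TerminalFormatter()))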
""" import re from pygments.lexer import RegexLexer, include, bygroups, words, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Literal, Error from pygments.lexers.python import PythonLexer __all__ = ['SchemeLexer', 'CommonLispLexer', 'HyLexer', 'RacketLexer', 'NewLispLexer', 'EmacsLispLexer', 'ShenLexer', 'CPSALexer', 'XtlangLexer', 'FennelLexer'] class SchemeLexer(RegexLexer): """ A Scheme lexer, parsing a stream and outputting the tokens needed to highlight scheme code. This lexer could be most probably easily subclassed to parse other LISP-Dialects like Common Lisp, Emacs Lisp or AutoLisp. This parser is checked with pastes from the LISP pastebin at http://paste.lisp.org/ to cover as much syntax as possible. It supports the full Scheme syntax as defined in R5RS. .. versionadded:: 0.6 """ name = 'Scheme' aliases = ['scheme', 'scm'] filenames = ['*.scm', '*.ss'] mimetypes = ['text/x-scheme', 'application/x-scheme'] # list of known keywords and builtins taken form vim 6.4 scheme.vim # syntax file. keywords = ( 'lambda', 'define', 'if', 'else', 'cond', 'and', 'or', 'case', 'let', 'let*', 'letrec', 'begin', 'do', 'delay', 'set!', '=>', 'quote', 'quasiquote', 'unquote', 'unquote-splicing', 'define-syntax', 'let-syntax', 'letrec-syntax', 'syntax-rules' ) builtins = ( '*', '+', '-', '/', '<', '<=', '=', '>', '>=', 'abs', 'acos', 'angle', 'append', 'apply', 'asin', 'assoc', 'assq', 'assv', 'atan', 'boolean?', 'caaaar', 'caaadr', 'caaar', 'caadar', 'caaddr', 'caadr', 'caar', 'cadaar', 'cadadr', 'cadar', 'caddar', 'cadddr', 'caddr', 'cadr', 'call-with-current-continuation', 'call-with-input-file', 'call-with-output-file', 'call-with-values', 'call/cc', 'car', 'cdaaar', 'cdaadr', 'cdaar', 'cdadar', 'cdaddr', 'cdadr', 'cdar', 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr', 'cdddr', 'cddr', 'cdr', 'ceiling', 'char->integer', 'char-alphabetic?', 'char-ci<=?', 'char-ci=?', 'char-ci>?', 'char-downcase', 'char-lower-case?', 'char-numeric?', 'char-ready?', 'char-upcase', 'char-upper-case?', 'char-whitespace?', 'char<=?', 'char=?', 'char>?', 'char?', 'close-input-port', 'close-output-port', 'complex?', 'cons', 'cos', 'current-input-port', 'current-output-port', 'denominator', 'display', 'dynamic-wind', 'eof-object?', 'eq?', 'equal?', 'eqv?', 'eval', 'even?', 'exact->inexact', 'exact?', 'exp', 'expt', 'floor', 'for-each', 'force', 'gcd', 'imag-part', 'inexact->exact', 'inexact?', 'input-port?', 'integer->char', 'integer?', 'interaction-environment', 'lcm', 'length', 'list', 'list->string', 'list->vector', 'list-ref', 'list-tail', 'list?', 'load', 'log', 'magnitude', 'make-polar', 'make-rectangular', 'make-string', 'make-vector', 'map', 'max', 'member', 'memq', 'memv', 'min', 'modulo', 'negative?', 'newline', 'not', 'null-environment', 'null?', 'number->string', 'number?', 'numerator', 'odd?', 'open-input-file', 'open-output-file', 'output-port?', 'pair?', 'peek-char', 'port?', 'positive?', 'procedure?', 'quotient', 'rational?', 'rationalize', 'read', 'read-char', 'real-part', 'real?', 'remainder', 'reverse', 'round', 'scheme-report-environment', 'set-car!', 'set-cdr!', 'sin', 'sqrt', 'string', 'string->list', 'string->number', 'string->symbol', 'string-append', 'string-ci<=?', 'string-ci=?', 'string-ci>?', 'string-copy', 'string-fill!', 'string-length', 'string-ref', 'string-set!', 'string<=?', 'string=?', 'string>?', 'string?', 'substring', 'symbol->string', 'symbol?', 'tan', 'transcript-off', 'transcript-on', 'truncate', 'values', 'vector', 
'vector->list', 'vector-fill!', 'vector-length', 'vector-ref', 'vector-set!', 'vector?', 'with-input-from-file', 'with-output-to-file', 'write', 'write-char', 'zero?' ) # valid names for identifiers # well, names can only not consist fully of numbers # but this should be good enough for now valid_name = r'[\w!$%&*+,/:<=>?@^~|-]+' tokens = { 'root': [ # the comments # and going to the end of the line (r';.*$', Comment.Single), # multi-line comment (r'#\|', Comment.Multiline, 'multiline-comment'), # commented form (entire sexpr folliwng) (r'#;\s*\(', Comment, 'commented-form'), # signifies that the program text that follows is written with the # lexical and datum syntax described in r6rs (r'#!r6rs', Comment), # whitespaces - usually not relevant (r'\s+', Text), # numbers (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), # support for uncommon kinds of numbers - # have to figure out what the characters mean # (r'(#e|#i|#b|#o|#d|#x)[\d.]+', Number), # strings, symbols and characters (r'"(\\\\|\\"|[^"])*"', String), (r"'" + valid_name, String.Symbol), (r"#\\([()/'\"._!§$%& ?=+-]|[a-zA-Z0-9]+)", String.Char), # constants (r'(#t|#f)', Name.Constant), # special operators (r"('|#|`|,@|,|\.)", Operator), # highlight the keywords ('(%s)' % '|'.join(re.escape(entry) + ' ' for entry in keywords), Keyword), # first variable in a quoted string like # '(this is syntactic sugar) (r"(?<='\()" + valid_name, Name.Variable), (r"(?<=#\()" + valid_name, Name.Variable), # highlight the builtins (r"(?<=\()(%s)" % '|'.join(re.escape(entry) + ' ' for entry in builtins), Name.Builtin), # the remaining functions (r'(?<=\()' + valid_name, Name.Function), # find the remaining variables (valid_name, Name.Variable), # the famous parentheses! (r'(\(|\))', Punctuation), (r'(\[|\])', Punctuation), ], 'multiline-comment': [ (r'#\|', Comment.Multiline, '#push'), (r'\|#', Comment.Multiline, '#pop'), (r'[^|#]+', Comment.Multiline), (r'[|#]', Comment.Multiline), ], 'commented-form': [ (r'\(', Comment, '#push'), (r'\)', Comment, '#pop'), (r'[^()]+', Comment), ], } class CommonLispLexer(RegexLexer): """ A Common Lisp lexer. .. versionadded:: 0.9 """ name = 'Common Lisp' aliases = ['common-lisp', 'cl', 'lisp'] filenames = ['*.cl', '*.lisp'] mimetypes = ['text/x-common-lisp'] flags = re.IGNORECASE | re.MULTILINE # couple of useful regexes # characters that are not macro-characters and can be used to begin a symbol nonmacro = r'\\.|[\w!$%&*+-/<=>?@\[\]^{}~]' constituent = nonmacro + '|[#.:]' terminated = r'(?=[ "()\'\n,;`])' # whitespace or terminating macro characters # symbol token, reverse-engineered from hyperspec # Take a deep breath... 
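    # For illustration (not exhaustive), the `symbol` pattern defined below
    # accepts either a pipe-quoted name or a run of constituent characters:
    #     |hello world|      ; pipe-quoted, may contain spaces
    #     *print-base*       ; ordinary symbol characters
    #     foo.bar:baz        ; '.', ':' and '#' may appear after the first char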
symbol = r'(\|[^|]+\||(?:%s)(?:%s)*)' % (nonmacro, constituent) def __init__(self, **options): from pygments.lexers._cl_builtins import BUILTIN_FUNCTIONS, \ SPECIAL_FORMS, MACROS, LAMBDA_LIST_KEYWORDS, DECLARATIONS, \ BUILTIN_TYPES, BUILTIN_CLASSES self.builtin_function = BUILTIN_FUNCTIONS self.special_forms = SPECIAL_FORMS self.macros = MACROS self.lambda_list_keywords = LAMBDA_LIST_KEYWORDS self.declarations = DECLARATIONS self.builtin_types = BUILTIN_TYPES self.builtin_classes = BUILTIN_CLASSES RegexLexer.__init__(self, **options) def get_tokens_unprocessed(self, text): stack = ['root'] for index, token, value in RegexLexer.get_tokens_unprocessed(self, text, stack): if token is Name.Variable: if value in self.builtin_function: yield index, Name.Builtin, value continue if value in self.special_forms: yield index, Keyword, value continue if value in self.macros: yield index, Name.Builtin, value continue if value in self.lambda_list_keywords: yield index, Keyword, value continue if value in self.declarations: yield index, Keyword, value continue if value in self.builtin_types: yield index, Keyword.Type, value continue if value in self.builtin_classes: yield index, Name.Class, value continue yield index, token, value tokens = { 'root': [ default('body'), ], 'multiline-comment': [ (r'#\|', Comment.Multiline, '#push'), # (cf. Hyperspec 2.4.8.19) (r'\|#', Comment.Multiline, '#pop'), (r'[^|#]+', Comment.Multiline), (r'[|#]', Comment.Multiline), ], 'commented-form': [ (r'\(', Comment.Preproc, '#push'), (r'\)', Comment.Preproc, '#pop'), (r'[^()]+', Comment.Preproc), ], 'body': [ # whitespace (r'\s+', Text), # single-line comment (r';.*$', Comment.Single), # multi-line comment (r'#\|', Comment.Multiline, 'multiline-comment'), # encoding comment (?) (r'#\d*Y.*$', Comment.Special), # strings and characters (r'"(\\.|\\\n|[^"\\])*"', String), # quoting (r":" + symbol, String.Symbol), (r"::" + symbol, String.Symbol), (r":#" + symbol, String.Symbol), (r"'" + symbol, String.Symbol), (r"'", Operator), (r"`", Operator), # decimal numbers (r'[-+]?\d+\.?' + terminated, Number.Integer), (r'[-+]?\d+/\d+' + terminated, Number), (r'[-+]?(\d*\.\d+([defls][-+]?\d+)?|\d+(\.\d*)?[defls][-+]?\d+)' + terminated, Number.Float), # sharpsign strings and characters (r"#\\." 
+ terminated, String.Char), (r"#\\" + symbol, String.Char), # vector (r'#\(', Operator, 'body'), # bitstring (r'#\d*\*[01]*', Literal.Other), # uninterned symbol (r'#:' + symbol, String.Symbol), # read-time and load-time evaluation (r'#[.,]', Operator), # function shorthand (r'#\'', Name.Function), # binary rational (r'#b[+-]?[01]+(/[01]+)?', Number.Bin), # octal rational (r'#o[+-]?[0-7]+(/[0-7]+)?', Number.Oct), # hex rational (r'#x[+-]?[0-9a-f]+(/[0-9a-f]+)?', Number.Hex), # radix rational (r'#\d+r[+-]?[0-9a-z]+(/[0-9a-z]+)?', Number), # complex (r'(#c)(\()', bygroups(Number, Punctuation), 'body'), # array (r'(#\d+a)(\()', bygroups(Literal.Other, Punctuation), 'body'), # structure (r'(#s)(\()', bygroups(Literal.Other, Punctuation), 'body'), # path (r'#p?"(\\.|[^"])*"', Literal.Other), # reference (r'#\d+=', Operator), (r'#\d+#', Operator), # read-time comment (r'#+nil' + terminated + r'\s*\(', Comment.Preproc, 'commented-form'), # read-time conditional (r'#[+-]', Operator), # special operators that should have been parsed already (r'(,@|,|\.)', Operator), # special constants (r'(t|nil)' + terminated, Name.Constant), # functions and variables (r'\*' + symbol + r'\*', Name.Variable.Global), (symbol, Name.Variable), # parentheses (r'\(', Punctuation, 'body'), (r'\)', Punctuation, '#pop'), ], } class HyLexer(RegexLexer): """ Lexer for `Hy `_ source code. .. versionadded:: 2.0 """ name = 'Hy' aliases = ['hylang'] filenames = ['*.hy'] mimetypes = ['text/x-hy', 'application/x-hy'] special_forms = ( 'cond', 'for', '->', '->>', 'car', 'cdr', 'first', 'rest', 'let', 'when', 'unless', 'import', 'do', 'progn', 'get', 'slice', 'assoc', 'with-decorator', ',', 'list_comp', 'kwapply', '~', 'is', 'in', 'is-not', 'not-in', 'quasiquote', 'unquote', 'unquote-splice', 'quote', '|', '<<=', '>>=', 'foreach', 'while', 'eval-and-compile', 'eval-when-compile' ) declarations = ( 'def', 'defn', 'defun', 'defmacro', 'defclass', 'lambda', 'fn', 'setv' ) hy_builtins = () hy_core = ( 'cycle', 'dec', 'distinct', 'drop', 'even?', 'filter', 'inc', 'instance?', 'iterable?', 'iterate', 'iterator?', 'neg?', 'none?', 'nth', 'numeric?', 'odd?', 'pos?', 'remove', 'repeat', 'repeatedly', 'take', 'take_nth', 'take_while', 'zero?' ) builtins = hy_builtins + hy_core # valid names for identifiers # well, names can only not consist fully of numbers # but this should be good enough for now valid_name = r'(?!#)[\w!$%*+<=>?/.#-:]+' def _multi_escape(entries): return words(entries, suffix=' ') tokens = { 'root': [ # the comments - always starting with semicolon # and going to the end of the line (r';.*$', Comment.Single), # whitespaces - usually not relevant (r'[,\s]+', Text), # numbers (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), (r'0[0-7]+j?', Number.Oct), (r'0[xX][a-fA-F0-9]+', Number.Hex), # strings, symbols and characters (r'"(\\\\|\\"|[^"])*"', String), (r"'" + valid_name, String.Symbol), (r"\\(.|[a-z]+)", String.Char), (r'^(\s*)([rRuU]{,2}"""(?:.|\n)*?""")', bygroups(Text, String.Doc)), (r"^(\s*)([rRuU]{,2}'''(?:.|\n)*?''')", bygroups(Text, String.Doc)), # keywords (r'::?' + valid_name, String.Symbol), # special operators (r'~@|[`\'#^~&@]', Operator), include('py-keywords'), include('py-builtins'), # highlight the special forms (_multi_escape(special_forms), Keyword), # Technically, only the special forms are 'keywords'. The problem # is that only treating them as keywords means that things like # 'defn' and 'ns' need to be highlighted as builtins. This is ugly # and weird for most styles. 
So, as a compromise we're going to # highlight them as Keyword.Declarations. (_multi_escape(declarations), Keyword.Declaration), # highlight the builtins (_multi_escape(builtins), Name.Builtin), # the remaining functions (r'(?<=\()' + valid_name, Name.Function), # find the remaining variables (valid_name, Name.Variable), # Hy accepts vector notation (r'(\[|\])', Punctuation), # Hy accepts map notation (r'(\{|\})', Punctuation), # the famous parentheses! (r'(\(|\))', Punctuation), ], 'py-keywords': PythonLexer.tokens['keywords'], 'py-builtins': PythonLexer.tokens['builtins'], } def analyse_text(text): if '(import ' in text or '(defn ' in text: return 0.9 class RacketLexer(RegexLexer): """ Lexer for `Racket `_ source code (formerly known as PLT Scheme). .. versionadded:: 1.6 """ name = 'Racket' aliases = ['racket', 'rkt'] filenames = ['*.rkt', '*.rktd', '*.rktl'] mimetypes = ['text/x-racket', 'application/x-racket'] # Generated by example.rkt _keywords = ( u'#%app', u'#%datum', u'#%declare', u'#%expression', u'#%module-begin', u'#%plain-app', u'#%plain-lambda', u'#%plain-module-begin', u'#%printing-module-begin', u'#%provide', u'#%require', u'#%stratified-body', u'#%top', u'#%top-interaction', u'#%variable-reference', u'->', u'->*', u'->*m', u'->d', u'->dm', u'->i', u'->m', u'...', u':do-in', u'==', u'=>', u'_', u'absent', u'abstract', u'all-defined-out', u'all-from-out', u'and', u'any', u'augment', u'augment*', u'augment-final', u'augment-final*', u'augride', u'augride*', u'begin', u'begin-for-syntax', u'begin0', u'case', u'case->', u'case->m', u'case-lambda', u'class', u'class*', u'class-field-accessor', u'class-field-mutator', u'class/c', u'class/derived', u'combine-in', u'combine-out', u'command-line', u'compound-unit', u'compound-unit/infer', u'cond', u'cons/dc', u'contract', u'contract-out', u'contract-struct', u'contracted', u'define', u'define-compound-unit', u'define-compound-unit/infer', u'define-contract-struct', u'define-custom-hash-types', u'define-custom-set-types', u'define-for-syntax', u'define-local-member-name', u'define-logger', u'define-match-expander', u'define-member-name', u'define-module-boundary-contract', u'define-namespace-anchor', u'define-opt/c', u'define-sequence-syntax', u'define-serializable-class', u'define-serializable-class*', u'define-signature', u'define-signature-form', u'define-struct', u'define-struct/contract', u'define-struct/derived', u'define-syntax', u'define-syntax-rule', u'define-syntaxes', u'define-unit', u'define-unit-binding', u'define-unit-from-context', u'define-unit/contract', u'define-unit/new-import-export', u'define-unit/s', u'define-values', u'define-values-for-export', u'define-values-for-syntax', u'define-values/invoke-unit', u'define-values/invoke-unit/infer', u'define/augment', u'define/augment-final', u'define/augride', u'define/contract', u'define/final-prop', u'define/match', u'define/overment', u'define/override', u'define/override-final', u'define/private', u'define/public', u'define/public-final', u'define/pubment', u'define/subexpression-pos-prop', u'define/subexpression-pos-prop/name', u'delay', u'delay/idle', u'delay/name', u'delay/strict', u'delay/sync', u'delay/thread', u'do', u'else', u'except', u'except-in', u'except-out', u'export', u'extends', u'failure-cont', u'false', u'false/c', u'field', u'field-bound?', u'file', u'flat-murec-contract', u'flat-rec-contract', u'for', u'for*', u'for*/and', u'for*/async', u'for*/first', u'for*/fold', u'for*/fold/derived', u'for*/hash', u'for*/hasheq', u'for*/hasheqv', u'for*/last', 
u'for*/list', u'for*/lists', u'for*/mutable-set', u'for*/mutable-seteq', u'for*/mutable-seteqv', u'for*/or', u'for*/product', u'for*/set', u'for*/seteq', u'for*/seteqv', u'for*/stream', u'for*/sum', u'for*/vector', u'for*/weak-set', u'for*/weak-seteq', u'for*/weak-seteqv', u'for-label', u'for-meta', u'for-syntax', u'for-template', u'for/and', u'for/async', u'for/first', u'for/fold', u'for/fold/derived', u'for/hash', u'for/hasheq', u'for/hasheqv', u'for/last', u'for/list', u'for/lists', u'for/mutable-set', u'for/mutable-seteq', u'for/mutable-seteqv', u'for/or', u'for/product', u'for/set', u'for/seteq', u'for/seteqv', u'for/stream', u'for/sum', u'for/vector', u'for/weak-set', u'for/weak-seteq', u'for/weak-seteqv', u'gen:custom-write', u'gen:dict', u'gen:equal+hash', u'gen:set', u'gen:stream', u'generic', u'get-field', u'hash/dc', u'if', u'implies', u'import', u'include', u'include-at/relative-to', u'include-at/relative-to/reader', u'include/reader', u'inherit', u'inherit-field', u'inherit/inner', u'inherit/super', u'init', u'init-depend', u'init-field', u'init-rest', u'inner', u'inspect', u'instantiate', u'interface', u'interface*', u'invariant-assertion', u'invoke-unit', u'invoke-unit/infer', u'lambda', u'lazy', u'let', u'let*', u'let*-values', u'let-syntax', u'let-syntaxes', u'let-values', u'let/cc', u'let/ec', u'letrec', u'letrec-syntax', u'letrec-syntaxes', u'letrec-syntaxes+values', u'letrec-values', u'lib', u'link', u'local', u'local-require', u'log-debug', u'log-error', u'log-fatal', u'log-info', u'log-warning', u'match', u'match*', u'match*/derived', u'match-define', u'match-define-values', u'match-lambda', u'match-lambda*', u'match-lambda**', u'match-let', u'match-let*', u'match-let*-values', u'match-let-values', u'match-letrec', u'match-letrec-values', u'match/derived', u'match/values', u'member-name-key', u'mixin', u'module', u'module*', u'module+', u'nand', u'new', u'nor', u'object-contract', u'object/c', u'only', u'only-in', u'only-meta-in', u'open', u'opt/c', u'or', u'overment', u'overment*', u'override', u'override*', u'override-final', u'override-final*', u'parameterize', u'parameterize*', u'parameterize-break', u'parametric->/c', u'place', u'place*', u'place/context', u'planet', u'prefix', u'prefix-in', u'prefix-out', u'private', u'private*', u'prompt-tag/c', u'protect-out', u'provide', u'provide-signature-elements', u'provide/contract', u'public', u'public*', u'public-final', u'public-final*', u'pubment', u'pubment*', u'quasiquote', u'quasisyntax', u'quasisyntax/loc', u'quote', u'quote-syntax', u'quote-syntax/prune', u'recontract-out', u'recursive-contract', u'relative-in', u'rename', u'rename-in', u'rename-inner', u'rename-out', u'rename-super', u'require', u'send', u'send*', u'send+', u'send-generic', u'send/apply', u'send/keyword-apply', u'set!', u'set!-values', u'set-field!', u'shared', u'stream', u'stream*', u'stream-cons', u'struct', u'struct*', u'struct-copy', u'struct-field-index', u'struct-out', u'struct/c', u'struct/ctc', u'struct/dc', u'submod', u'super', u'super-instantiate', u'super-make-object', u'super-new', u'syntax', u'syntax-case', u'syntax-case*', u'syntax-id-rules', u'syntax-rules', u'syntax/loc', u'tag', u'this', u'this%', u'thunk', u'thunk*', u'time', u'unconstrained-domain->', u'unit', u'unit-from-context', u'unit/c', u'unit/new-import-export', u'unit/s', u'unless', u'unquote', u'unquote-splicing', u'unsyntax', u'unsyntax-splicing', u'values/drop', u'when', u'with-continuation-mark', u'with-contract', u'with-contract-continuation-mark', 
u'with-handlers', u'with-handlers*', u'with-method', u'with-syntax', u'λ' ) # Generated by example.rkt _builtins = ( u'*', u'*list/c', u'+', u'-', u'/', u'<', u'', u'>/c', u'>=', u'>=/c', u'abort-current-continuation', u'abs', u'absolute-path?', u'acos', u'add-between', u'add1', u'alarm-evt', u'always-evt', u'and/c', u'andmap', u'angle', u'any/c', u'append', u'append*', u'append-map', u'apply', u'argmax', u'argmin', u'arithmetic-shift', u'arity-at-least', u'arity-at-least-value', u'arity-at-least?', u'arity-checking-wrapper', u'arity-includes?', u'arity=?', u'arrow-contract-info', u'arrow-contract-info-accepts-arglist', u'arrow-contract-info-chaperone-procedure', u'arrow-contract-info-check-first-order', u'arrow-contract-info?', u'asin', u'assf', u'assoc', u'assq', u'assv', u'atan', u'bad-number-of-results', u'banner', u'base->-doms/c', u'base->-rngs/c', u'base->?', u'between/c', u'bitwise-and', u'bitwise-bit-field', u'bitwise-bit-set?', u'bitwise-ior', u'bitwise-not', u'bitwise-xor', u'blame-add-car-context', u'blame-add-cdr-context', u'blame-add-context', u'blame-add-missing-party', u'blame-add-nth-arg-context', u'blame-add-range-context', u'blame-add-unknown-context', u'blame-context', u'blame-contract', u'blame-fmt->-string', u'blame-missing-party?', u'blame-negative', u'blame-original?', u'blame-positive', u'blame-replace-negative', u'blame-source', u'blame-swap', u'blame-swapped?', u'blame-update', u'blame-value', u'blame?', u'boolean=?', u'boolean?', u'bound-identifier=?', u'box', u'box-cas!', u'box-immutable', u'box-immutable/c', u'box/c', u'box?', u'break-enabled', u'break-parameterization?', u'break-thread', u'build-chaperone-contract-property', u'build-compound-type-name', u'build-contract-property', u'build-flat-contract-property', u'build-list', u'build-path', u'build-path/convention-type', u'build-string', u'build-vector', u'byte-pregexp', u'byte-pregexp?', u'byte-ready?', u'byte-regexp', u'byte-regexp?', u'byte?', u'bytes', u'bytes->immutable-bytes', u'bytes->list', u'bytes->path', u'bytes->path-element', u'bytes->string/latin-1', u'bytes->string/locale', u'bytes->string/utf-8', u'bytes-append', u'bytes-append*', u'bytes-close-converter', u'bytes-convert', u'bytes-convert-end', u'bytes-converter?', u'bytes-copy', u'bytes-copy!', u'bytes-environment-variable-name?', u'bytes-fill!', u'bytes-join', u'bytes-length', u'bytes-no-nuls?', u'bytes-open-converter', u'bytes-ref', u'bytes-set!', u'bytes-utf-8-index', u'bytes-utf-8-length', u'bytes-utf-8-ref', u'bytes?', u'bytes?', u'caaaar', u'caaadr', u'caaar', u'caadar', u'caaddr', u'caadr', u'caar', u'cadaar', u'cadadr', u'cadar', u'caddar', u'cadddr', u'caddr', u'cadr', u'call-in-nested-thread', u'call-with-atomic-output-file', u'call-with-break-parameterization', u'call-with-composable-continuation', u'call-with-continuation-barrier', u'call-with-continuation-prompt', u'call-with-current-continuation', u'call-with-default-reading-parameterization', u'call-with-escape-continuation', u'call-with-exception-handler', u'call-with-file-lock/timeout', u'call-with-immediate-continuation-mark', u'call-with-input-bytes', u'call-with-input-file', u'call-with-input-file*', u'call-with-input-string', u'call-with-output-bytes', u'call-with-output-file', u'call-with-output-file*', u'call-with-output-string', u'call-with-parameterization', u'call-with-semaphore', u'call-with-semaphore/enable-break', u'call-with-values', u'call/cc', u'call/ec', u'car', u'cartesian-product', u'cdaaar', u'cdaadr', u'cdaar', u'cdadar', u'cdaddr', u'cdadr', u'cdar', 
u'cddaar', u'cddadr', u'cddar', u'cdddar', u'cddddr', u'cdddr', u'cddr', u'cdr', u'ceiling', u'channel-get', u'channel-put', u'channel-put-evt', u'channel-put-evt?', u'channel-try-get', u'channel/c', u'channel?', u'chaperone-box', u'chaperone-channel', u'chaperone-continuation-mark-key', u'chaperone-contract-property?', u'chaperone-contract?', u'chaperone-evt', u'chaperone-hash', u'chaperone-hash-set', u'chaperone-of?', u'chaperone-procedure', u'chaperone-procedure*', u'chaperone-prompt-tag', u'chaperone-struct', u'chaperone-struct-type', u'chaperone-vector', u'chaperone?', u'char->integer', u'char-alphabetic?', u'char-blank?', u'char-ci<=?', u'char-ci=?', u'char-ci>?', u'char-downcase', u'char-foldcase', u'char-general-category', u'char-graphic?', u'char-in', u'char-in/c', u'char-iso-control?', u'char-lower-case?', u'char-numeric?', u'char-punctuation?', u'char-ready?', u'char-symbolic?', u'char-title-case?', u'char-titlecase', u'char-upcase', u'char-upper-case?', u'char-utf-8-length', u'char-whitespace?', u'char<=?', u'char=?', u'char>?', u'char?', u'check-duplicate-identifier', u'check-duplicates', u'checked-procedure-check-and-extract', u'choice-evt', u'class->interface', u'class-info', u'class-seal', u'class-unseal', u'class?', u'cleanse-path', u'close-input-port', u'close-output-port', u'coerce-chaperone-contract', u'coerce-chaperone-contracts', u'coerce-contract', u'coerce-contract/f', u'coerce-contracts', u'coerce-flat-contract', u'coerce-flat-contracts', u'collect-garbage', u'collection-file-path', u'collection-path', u'combinations', u'compile', u'compile-allow-set!-undefined', u'compile-context-preservation-enabled', u'compile-enforce-module-constants', u'compile-syntax', u'compiled-expression-recompile', u'compiled-expression?', u'compiled-module-expression?', u'complete-path?', u'complex?', u'compose', u'compose1', u'conjoin', u'conjugate', u'cons', u'cons/c', u'cons?', u'const', u'continuation-mark-key/c', u'continuation-mark-key?', u'continuation-mark-set->context', u'continuation-mark-set->list', u'continuation-mark-set->list*', u'continuation-mark-set-first', u'continuation-mark-set?', u'continuation-marks', u'continuation-prompt-available?', u'continuation-prompt-tag?', u'continuation?', u'contract-continuation-mark-key', u'contract-custom-write-property-proc', u'contract-exercise', u'contract-first-order', u'contract-first-order-passes?', u'contract-late-neg-projection', u'contract-name', u'contract-proc', u'contract-projection', u'contract-property?', u'contract-random-generate', u'contract-random-generate-fail', u'contract-random-generate-fail?', u'contract-random-generate-get-current-environment', u'contract-random-generate-stash', u'contract-random-generate/choose', u'contract-stronger?', u'contract-struct-exercise', u'contract-struct-generate', u'contract-struct-late-neg-projection', u'contract-struct-list-contract?', u'contract-val-first-projection', u'contract?', u'convert-stream', u'copy-directory/files', u'copy-file', u'copy-port', u'cos', u'cosh', u'count', u'current-blame-format', u'current-break-parameterization', u'current-code-inspector', u'current-command-line-arguments', u'current-compile', u'current-compiled-file-roots', u'current-continuation-marks', u'current-contract-region', u'current-custodian', u'current-directory', u'current-directory-for-user', u'current-drive', u'current-environment-variables', u'current-error-port', u'current-eval', u'current-evt-pseudo-random-generator', u'current-force-delete-permissions', u'current-future', 
u'current-gc-milliseconds', u'current-get-interaction-input-port', u'current-inexact-milliseconds', u'current-input-port', u'current-inspector', u'current-library-collection-links', u'current-library-collection-paths', u'current-load', u'current-load-extension', u'current-load-relative-directory', u'current-load/use-compiled', u'current-locale', u'current-logger', u'current-memory-use', u'current-milliseconds', u'current-module-declare-name', u'current-module-declare-source', u'current-module-name-resolver', u'current-module-path-for-load', u'current-namespace', u'current-output-port', u'current-parameterization', u'current-plumber', u'current-preserved-thread-cell-values', u'current-print', u'current-process-milliseconds', u'current-prompt-read', u'current-pseudo-random-generator', u'current-read-interaction', u'current-reader-guard', u'current-readtable', u'current-seconds', u'current-security-guard', u'current-subprocess-custodian-mode', u'current-thread', u'current-thread-group', u'current-thread-initial-stack-size', u'current-write-relative-directory', u'curry', u'curryr', u'custodian-box-value', u'custodian-box?', u'custodian-limit-memory', u'custodian-managed-list', u'custodian-memory-accounting-available?', u'custodian-require-memory', u'custodian-shutdown-all', u'custodian?', u'custom-print-quotable-accessor', u'custom-print-quotable?', u'custom-write-accessor', u'custom-write-property-proc', u'custom-write?', u'date', u'date*', u'date*-nanosecond', u'date*-time-zone-name', u'date*?', u'date-day', u'date-dst?', u'date-hour', u'date-minute', u'date-month', u'date-second', u'date-time-zone-offset', u'date-week-day', u'date-year', u'date-year-day', u'date?', u'datum->syntax', u'datum-intern-literal', u'default-continuation-prompt-tag', u'degrees->radians', u'delete-directory', u'delete-directory/files', u'delete-file', u'denominator', u'dict->list', u'dict-can-functional-set?', u'dict-can-remove-keys?', u'dict-clear', u'dict-clear!', u'dict-copy', u'dict-count', u'dict-empty?', u'dict-for-each', u'dict-has-key?', u'dict-implements/c', u'dict-implements?', u'dict-iter-contract', u'dict-iterate-first', u'dict-iterate-key', u'dict-iterate-next', u'dict-iterate-value', u'dict-key-contract', u'dict-keys', u'dict-map', u'dict-mutable?', u'dict-ref', u'dict-ref!', u'dict-remove', u'dict-remove!', u'dict-set', u'dict-set!', u'dict-set*', u'dict-set*!', u'dict-update', u'dict-update!', u'dict-value-contract', u'dict-values', u'dict?', u'directory-exists?', u'directory-list', u'disjoin', u'display', u'display-lines', u'display-lines-to-file', u'display-to-file', u'displayln', u'double-flonum?', u'drop', u'drop-common-prefix', u'drop-right', u'dropf', u'dropf-right', u'dump-memory-stats', u'dup-input-port', u'dup-output-port', u'dynamic->*', u'dynamic-get-field', u'dynamic-object/c', u'dynamic-place', u'dynamic-place*', u'dynamic-require', u'dynamic-require-for-syntax', u'dynamic-send', u'dynamic-set-field!', u'dynamic-wind', u'eighth', u'empty', u'empty-sequence', u'empty-stream', u'empty?', u'environment-variables-copy', u'environment-variables-names', u'environment-variables-ref', u'environment-variables-set!', u'environment-variables?', u'eof', u'eof-evt', u'eof-object?', u'ephemeron-value', u'ephemeron?', u'eprintf', u'eq-contract-val', u'eq-contract?', u'eq-hash-code', u'eq?', u'equal-contract-val', u'equal-contract?', u'equal-hash-code', u'equal-secondary-hash-code', u'equal<%>', u'equal?', u'equal?/recur', u'eqv-hash-code', u'eqv?', u'error', u'error-display-handler', 
u'error-escape-handler', u'error-print-context-length', u'error-print-source-location', u'error-print-width', u'error-value->string-handler', u'eval', u'eval-jit-enabled', u'eval-syntax', u'even?', u'evt/c', u'evt?', u'exact->inexact', u'exact-ceiling', u'exact-floor', u'exact-integer?', u'exact-nonnegative-integer?', u'exact-positive-integer?', u'exact-round', u'exact-truncate', u'exact?', u'executable-yield-handler', u'exit', u'exit-handler', u'exn', u'exn-continuation-marks', u'exn-message', u'exn:break', u'exn:break-continuation', u'exn:break:hang-up', u'exn:break:hang-up?', u'exn:break:terminate', u'exn:break:terminate?', u'exn:break?', u'exn:fail', u'exn:fail:contract', u'exn:fail:contract:arity', u'exn:fail:contract:arity?', u'exn:fail:contract:blame', u'exn:fail:contract:blame-object', u'exn:fail:contract:blame?', u'exn:fail:contract:continuation', u'exn:fail:contract:continuation?', u'exn:fail:contract:divide-by-zero', u'exn:fail:contract:divide-by-zero?', u'exn:fail:contract:non-fixnum-result', u'exn:fail:contract:non-fixnum-result?', u'exn:fail:contract:variable', u'exn:fail:contract:variable-id', u'exn:fail:contract:variable?', u'exn:fail:contract?', u'exn:fail:filesystem', u'exn:fail:filesystem:errno', u'exn:fail:filesystem:errno-errno', u'exn:fail:filesystem:errno?', u'exn:fail:filesystem:exists', u'exn:fail:filesystem:exists?', u'exn:fail:filesystem:missing-module', u'exn:fail:filesystem:missing-module-path', u'exn:fail:filesystem:missing-module?', u'exn:fail:filesystem:version', u'exn:fail:filesystem:version?', u'exn:fail:filesystem?', u'exn:fail:network', u'exn:fail:network:errno', u'exn:fail:network:errno-errno', u'exn:fail:network:errno?', u'exn:fail:network?', u'exn:fail:object', u'exn:fail:object?', u'exn:fail:out-of-memory', u'exn:fail:out-of-memory?', u'exn:fail:read', u'exn:fail:read-srclocs', u'exn:fail:read:eof', u'exn:fail:read:eof?', u'exn:fail:read:non-char', u'exn:fail:read:non-char?', u'exn:fail:read?', u'exn:fail:syntax', u'exn:fail:syntax-exprs', u'exn:fail:syntax:missing-module', u'exn:fail:syntax:missing-module-path', u'exn:fail:syntax:missing-module?', u'exn:fail:syntax:unbound', u'exn:fail:syntax:unbound?', u'exn:fail:syntax?', u'exn:fail:unsupported', u'exn:fail:unsupported?', u'exn:fail:user', u'exn:fail:user?', u'exn:fail?', u'exn:misc:match?', u'exn:missing-module-accessor', u'exn:missing-module?', u'exn:srclocs-accessor', u'exn:srclocs?', u'exn?', u'exp', u'expand', u'expand-once', u'expand-syntax', u'expand-syntax-once', u'expand-syntax-to-top-form', u'expand-to-top-form', u'expand-user-path', u'explode-path', u'expt', u'externalizable<%>', u'failure-result/c', u'false?', u'field-names', u'fifth', u'file->bytes', u'file->bytes-lines', u'file->lines', u'file->list', u'file->string', u'file->value', u'file-exists?', u'file-name-from-path', u'file-or-directory-identity', u'file-or-directory-modify-seconds', u'file-or-directory-permissions', u'file-position', u'file-position*', u'file-size', u'file-stream-buffer-mode', u'file-stream-port?', u'file-truncate', u'filename-extension', u'filesystem-change-evt', u'filesystem-change-evt-cancel', u'filesystem-change-evt?', u'filesystem-root-list', u'filter', u'filter-map', u'filter-not', u'filter-read-input-port', u'find-executable-path', u'find-files', u'find-library-collection-links', u'find-library-collection-paths', u'find-relative-path', u'find-system-path', u'findf', u'first', u'first-or/c', u'fixnum?', u'flat-contract', u'flat-contract-predicate', u'flat-contract-property?', u'flat-contract?', 
u'flat-named-contract', u'flatten', u'floating-point-bytes->real', u'flonum?', u'floor', u'flush-output', u'fold-files', u'foldl', u'foldr', u'for-each', u'force', u'format', u'fourth', u'fprintf', u'free-identifier=?', u'free-label-identifier=?', u'free-template-identifier=?', u'free-transformer-identifier=?', u'fsemaphore-count', u'fsemaphore-post', u'fsemaphore-try-wait?', u'fsemaphore-wait', u'fsemaphore?', u'future', u'future?', u'futures-enabled?', u'gcd', u'generate-member-key', u'generate-temporaries', u'generic-set?', u'generic?', u'gensym', u'get-output-bytes', u'get-output-string', u'get-preference', u'get/build-late-neg-projection', u'get/build-val-first-projection', u'getenv', u'global-port-print-handler', u'group-by', u'group-execute-bit', u'group-read-bit', u'group-write-bit', u'guard-evt', u'handle-evt', u'handle-evt?', u'has-blame?', u'has-contract?', u'hash', u'hash->list', u'hash-clear', u'hash-clear!', u'hash-copy', u'hash-copy-clear', u'hash-count', u'hash-empty?', u'hash-eq?', u'hash-equal?', u'hash-eqv?', u'hash-for-each', u'hash-has-key?', u'hash-iterate-first', u'hash-iterate-key', u'hash-iterate-key+value', u'hash-iterate-next', u'hash-iterate-pair', u'hash-iterate-value', u'hash-keys', u'hash-map', u'hash-placeholder?', u'hash-ref', u'hash-ref!', u'hash-remove', u'hash-remove!', u'hash-set', u'hash-set!', u'hash-set*', u'hash-set*!', u'hash-update', u'hash-update!', u'hash-values', u'hash-weak?', u'hash/c', u'hash?', u'hasheq', u'hasheqv', u'identifier-binding', u'identifier-binding-symbol', u'identifier-label-binding', u'identifier-prune-lexical-context', u'identifier-prune-to-source-module', u'identifier-remove-from-definition-context', u'identifier-template-binding', u'identifier-transformer-binding', u'identifier?', u'identity', u'if/c', u'imag-part', u'immutable?', u'impersonate-box', u'impersonate-channel', u'impersonate-continuation-mark-key', u'impersonate-hash', u'impersonate-hash-set', u'impersonate-procedure', u'impersonate-procedure*', u'impersonate-prompt-tag', u'impersonate-struct', u'impersonate-vector', u'impersonator-contract?', u'impersonator-ephemeron', u'impersonator-of?', u'impersonator-prop:application-mark', u'impersonator-prop:blame', u'impersonator-prop:contracted', u'impersonator-property-accessor-procedure?', u'impersonator-property?', u'impersonator?', u'implementation?', u'implementation?/c', u'in-bytes', u'in-bytes-lines', u'in-combinations', u'in-cycle', u'in-dict', u'in-dict-keys', u'in-dict-pairs', u'in-dict-values', u'in-directory', u'in-hash', u'in-hash-keys', u'in-hash-pairs', u'in-hash-values', u'in-immutable-hash', u'in-immutable-hash-keys', u'in-immutable-hash-pairs', u'in-immutable-hash-values', u'in-immutable-set', u'in-indexed', u'in-input-port-bytes', u'in-input-port-chars', u'in-lines', u'in-list', u'in-mlist', u'in-mutable-hash', u'in-mutable-hash-keys', u'in-mutable-hash-pairs', u'in-mutable-hash-values', u'in-mutable-set', u'in-naturals', u'in-parallel', u'in-permutations', u'in-port', u'in-producer', u'in-range', u'in-sequences', u'in-set', u'in-slice', u'in-stream', u'in-string', u'in-syntax', u'in-value', u'in-values*-sequence', u'in-values-sequence', u'in-vector', u'in-weak-hash', u'in-weak-hash-keys', u'in-weak-hash-pairs', u'in-weak-hash-values', u'in-weak-set', u'inexact->exact', u'inexact-real?', u'inexact?', u'infinite?', u'input-port-append', u'input-port?', u'inspector?', u'instanceof/c', u'integer->char', u'integer->integer-bytes', u'integer-bytes->integer', u'integer-in', u'integer-length', 
u'integer-sqrt', u'integer-sqrt/remainder', u'integer?', u'interface->method-names', u'interface-extension?', u'interface?', u'internal-definition-context-binding-identifiers', u'internal-definition-context-introduce', u'internal-definition-context-seal', u'internal-definition-context?', u'is-a?', u'is-a?/c', u'keyword->string', u'keyword-apply', u'keywordbytes', u'list->mutable-set', u'list->mutable-seteq', u'list->mutable-seteqv', u'list->set', u'list->seteq', u'list->seteqv', u'list->string', u'list->vector', u'list->weak-set', u'list->weak-seteq', u'list->weak-seteqv', u'list-contract?', u'list-prefix?', u'list-ref', u'list-set', u'list-tail', u'list-update', u'list/c', u'list?', u'listen-port-number?', u'listof', u'load', u'load-extension', u'load-on-demand-enabled', u'load-relative', u'load-relative-extension', u'load/cd', u'load/use-compiled', u'local-expand', u'local-expand/capture-lifts', u'local-transformer-expand', u'local-transformer-expand/capture-lifts', u'locale-string-encoding', u'log', u'log-all-levels', u'log-level-evt', u'log-level?', u'log-max-level', u'log-message', u'log-receiver?', u'logger-name', u'logger?', u'magnitude', u'make-arity-at-least', u'make-base-empty-namespace', u'make-base-namespace', u'make-bytes', u'make-channel', u'make-chaperone-contract', u'make-continuation-mark-key', u'make-continuation-prompt-tag', u'make-contract', u'make-custodian', u'make-custodian-box', u'make-custom-hash', u'make-custom-hash-types', u'make-custom-set', u'make-custom-set-types', u'make-date', u'make-date*', u'make-derived-parameter', u'make-directory', u'make-directory*', u'make-do-sequence', u'make-empty-namespace', u'make-environment-variables', u'make-ephemeron', u'make-exn', u'make-exn:break', u'make-exn:break:hang-up', u'make-exn:break:terminate', u'make-exn:fail', u'make-exn:fail:contract', u'make-exn:fail:contract:arity', u'make-exn:fail:contract:blame', u'make-exn:fail:contract:continuation', u'make-exn:fail:contract:divide-by-zero', u'make-exn:fail:contract:non-fixnum-result', u'make-exn:fail:contract:variable', u'make-exn:fail:filesystem', u'make-exn:fail:filesystem:errno', u'make-exn:fail:filesystem:exists', u'make-exn:fail:filesystem:missing-module', u'make-exn:fail:filesystem:version', u'make-exn:fail:network', u'make-exn:fail:network:errno', u'make-exn:fail:object', u'make-exn:fail:out-of-memory', u'make-exn:fail:read', u'make-exn:fail:read:eof', u'make-exn:fail:read:non-char', u'make-exn:fail:syntax', u'make-exn:fail:syntax:missing-module', u'make-exn:fail:syntax:unbound', u'make-exn:fail:unsupported', u'make-exn:fail:user', u'make-file-or-directory-link', u'make-flat-contract', u'make-fsemaphore', u'make-generic', u'make-handle-get-preference-locked', u'make-hash', u'make-hash-placeholder', u'make-hasheq', u'make-hasheq-placeholder', u'make-hasheqv', u'make-hasheqv-placeholder', u'make-immutable-custom-hash', u'make-immutable-hash', u'make-immutable-hasheq', u'make-immutable-hasheqv', u'make-impersonator-property', u'make-input-port', u'make-input-port/read-to-peek', u'make-inspector', u'make-keyword-procedure', u'make-known-char-range-list', u'make-limited-input-port', u'make-list', u'make-lock-file-name', u'make-log-receiver', u'make-logger', u'make-mixin-contract', u'make-mutable-custom-set', u'make-none/c', u'make-object', u'make-output-port', u'make-parameter', u'make-parent-directory*', u'make-phantom-bytes', u'make-pipe', u'make-pipe-with-specials', u'make-placeholder', u'make-plumber', u'make-polar', u'make-prefab-struct', u'make-primitive-class', 
u'make-proj-contract', u'make-pseudo-random-generator', u'make-reader-graph', u'make-readtable', u'make-rectangular', u'make-rename-transformer', u'make-resolved-module-path', u'make-security-guard', u'make-semaphore', u'make-set!-transformer', u'make-shared-bytes', u'make-sibling-inspector', u'make-special-comment', u'make-srcloc', u'make-string', u'make-struct-field-accessor', u'make-struct-field-mutator', u'make-struct-type', u'make-struct-type-property', u'make-syntax-delta-introducer', u'make-syntax-introducer', u'make-temporary-file', u'make-tentative-pretty-print-output-port', u'make-thread-cell', u'make-thread-group', u'make-vector', u'make-weak-box', u'make-weak-custom-hash', u'make-weak-custom-set', u'make-weak-hash', u'make-weak-hasheq', u'make-weak-hasheqv', u'make-will-executor', u'map', u'match-equality-test', u'matches-arity-exactly?', u'max', u'mcar', u'mcdr', u'mcons', u'member', u'member-name-key-hash-code', u'member-name-key=?', u'member-name-key?', u'memf', u'memq', u'memv', u'merge-input', u'method-in-interface?', u'min', u'mixin-contract', u'module->exports', u'module->imports', u'module->language-info', u'module->namespace', u'module-compiled-cross-phase-persistent?', u'module-compiled-exports', u'module-compiled-imports', u'module-compiled-language-info', u'module-compiled-name', u'module-compiled-submodules', u'module-declared?', u'module-path-index-join', u'module-path-index-resolve', u'module-path-index-split', u'module-path-index-submodule', u'module-path-index?', u'module-path?', u'module-predefined?', u'module-provide-protected?', u'modulo', u'mpair?', u'mutable-set', u'mutable-seteq', u'mutable-seteqv', u'n->th', u'nack-guard-evt', u'namespace-anchor->empty-namespace', u'namespace-anchor->namespace', u'namespace-anchor?', u'namespace-attach-module', u'namespace-attach-module-declaration', u'namespace-base-phase', u'namespace-mapped-symbols', u'namespace-module-identifier', u'namespace-module-registry', u'namespace-require', u'namespace-require/constant', u'namespace-require/copy', u'namespace-require/expansion-time', u'namespace-set-variable-value!', u'namespace-symbol->identifier', u'namespace-syntax-introduce', u'namespace-undefine-variable!', u'namespace-unprotect-module', u'namespace-variable-value', u'namespace?', u'nan?', u'natural-number/c', u'negate', u'negative?', u'never-evt', u'new-∀/c', u'new-∃/c', u'newline', u'ninth', u'non-empty-listof', u'non-empty-string?', u'none/c', u'normal-case-path', u'normalize-arity', u'normalize-path', u'normalized-arity?', u'not', u'not/c', u'null', u'null?', u'number->string', u'number?', u'numerator', u'object%', u'object->vector', u'object-info', u'object-interface', u'object-method-arity-includes?', u'object-name', u'object-or-false=?', u'object=?', u'object?', u'odd?', u'one-of/c', u'open-input-bytes', u'open-input-file', u'open-input-output-file', u'open-input-string', u'open-output-bytes', u'open-output-file', u'open-output-nowhere', u'open-output-string', u'or/c', u'order-of-magnitude', u'ormap', u'other-execute-bit', u'other-read-bit', u'other-write-bit', u'output-port?', u'pair?', u'parameter-procedure=?', u'parameter/c', u'parameter?', u'parameterization?', u'parse-command-line', u'partition', u'path->bytes', u'path->complete-path', u'path->directory-path', u'path->string', u'path-add-suffix', u'path-convention-type', u'path-element->bytes', u'path-element->string', u'path-element?', u'path-for-some-system?', u'path-list-string->path-list', u'path-only', u'path-replace-suffix', u'path-string?', 
u'pathbytes', u'port->bytes-lines', u'port->lines', u'port->list', u'port->string', u'port-closed-evt', u'port-closed?', u'port-commit-peeked', u'port-count-lines!', u'port-count-lines-enabled', u'port-counts-lines?', u'port-display-handler', u'port-file-identity', u'port-file-unlock', u'port-next-location', u'port-number?', u'port-print-handler', u'port-progress-evt', u'port-provides-progress-evts?', u'port-read-handler', u'port-try-file-lock?', u'port-write-handler', u'port-writes-atomic?', u'port-writes-special?', u'port?', u'positive?', u'predicate/c', u'prefab-key->struct-type', u'prefab-key?', u'prefab-struct-key', u'preferences-lock-file-mode', u'pregexp', u'pregexp?', u'pretty-display', u'pretty-format', u'pretty-print', u'pretty-print-.-symbol-without-bars', u'pretty-print-abbreviate-read-macros', u'pretty-print-columns', u'pretty-print-current-style-table', u'pretty-print-depth', u'pretty-print-exact-as-decimal', u'pretty-print-extend-style-table', u'pretty-print-handler', u'pretty-print-newline', u'pretty-print-post-print-hook', u'pretty-print-pre-print-hook', u'pretty-print-print-hook', u'pretty-print-print-line', u'pretty-print-remap-stylable', u'pretty-print-show-inexactness', u'pretty-print-size-hook', u'pretty-print-style-table?', u'pretty-printing', u'pretty-write', u'primitive-closure?', u'primitive-result-arity', u'primitive?', u'print', u'print-as-expression', u'print-boolean-long-form', u'print-box', u'print-graph', u'print-hash-table', u'print-mpair-curly-braces', u'print-pair-curly-braces', u'print-reader-abbreviations', u'print-struct', u'print-syntax-width', u'print-unreadable', u'print-vector-length', u'printable/c', u'printable<%>', u'printf', u'println', u'procedure->method', u'procedure-arity', u'procedure-arity-includes/c', u'procedure-arity-includes?', u'procedure-arity?', u'procedure-closure-contents-eq?', u'procedure-extract-target', u'procedure-keywords', u'procedure-reduce-arity', u'procedure-reduce-keyword-arity', u'procedure-rename', u'procedure-result-arity', u'procedure-specialize', u'procedure-struct-type?', u'procedure?', u'process', u'process*', u'process*/ports', u'process/ports', u'processor-count', u'progress-evt?', u'promise-forced?', u'promise-running?', u'promise/c', u'promise/name?', u'promise?', u'prop:arity-string', u'prop:arrow-contract', u'prop:arrow-contract-get-info', u'prop:arrow-contract?', u'prop:blame', u'prop:chaperone-contract', u'prop:checked-procedure', u'prop:contract', u'prop:contracted', u'prop:custom-print-quotable', u'prop:custom-write', u'prop:dict', u'prop:dict/contract', u'prop:equal+hash', u'prop:evt', u'prop:exn:missing-module', u'prop:exn:srclocs', u'prop:expansion-contexts', u'prop:flat-contract', u'prop:impersonator-of', u'prop:input-port', u'prop:liberal-define-context', u'prop:object-name', u'prop:opt-chaperone-contract', u'prop:opt-chaperone-contract-get-test', u'prop:opt-chaperone-contract?', u'prop:orc-contract', u'prop:orc-contract-get-subcontracts', u'prop:orc-contract?', u'prop:output-port', u'prop:place-location', u'prop:procedure', u'prop:recursive-contract', u'prop:recursive-contract-unroll', u'prop:recursive-contract?', u'prop:rename-transformer', u'prop:sequence', u'prop:set!-transformer', u'prop:stream', u'proper-subset?', u'pseudo-random-generator->vector', u'pseudo-random-generator-vector?', u'pseudo-random-generator?', u'put-preferences', u'putenv', u'quotient', u'quotient/remainder', u'radians->degrees', u'raise', u'raise-argument-error', u'raise-arguments-error', u'raise-arity-error', 
u'raise-blame-error', u'raise-contract-error', u'raise-mismatch-error', u'raise-not-cons-blame-error', u'raise-range-error', u'raise-result-error', u'raise-syntax-error', u'raise-type-error', u'raise-user-error', u'random', u'random-seed', u'range', u'rational?', u'rationalize', u'read', u'read-accept-bar-quote', u'read-accept-box', u'read-accept-compiled', u'read-accept-dot', u'read-accept-graph', u'read-accept-infix-dot', u'read-accept-lang', u'read-accept-quasiquote', u'read-accept-reader', u'read-byte', u'read-byte-or-special', u'read-bytes', u'read-bytes!', u'read-bytes!-evt', u'read-bytes-avail!', u'read-bytes-avail!*', u'read-bytes-avail!-evt', u'read-bytes-avail!/enable-break', u'read-bytes-evt', u'read-bytes-line', u'read-bytes-line-evt', u'read-case-sensitive', u'read-cdot', u'read-char', u'read-char-or-special', u'read-curly-brace-as-paren', u'read-curly-brace-with-tag', u'read-decimal-as-inexact', u'read-eval-print-loop', u'read-language', u'read-line', u'read-line-evt', u'read-on-demand-source', u'read-square-bracket-as-paren', u'read-square-bracket-with-tag', u'read-string', u'read-string!', u'read-string!-evt', u'read-string-evt', u'read-syntax', u'read-syntax/recursive', u'read/recursive', u'readtable-mapping', u'readtable?', u'real->decimal-string', u'real->double-flonum', u'real->floating-point-bytes', u'real->single-flonum', u'real-in', u'real-part', u'real?', u'reencode-input-port', u'reencode-output-port', u'regexp', u'regexp-match', u'regexp-match*', u'regexp-match-evt', u'regexp-match-exact?', u'regexp-match-peek', u'regexp-match-peek-immediate', u'regexp-match-peek-positions', u'regexp-match-peek-positions*', u'regexp-match-peek-positions-immediate', u'regexp-match-peek-positions-immediate/end', u'regexp-match-peek-positions/end', u'regexp-match-positions', u'regexp-match-positions*', u'regexp-match-positions/end', u'regexp-match/end', u'regexp-match?', u'regexp-max-lookbehind', u'regexp-quote', u'regexp-replace', u'regexp-replace*', u'regexp-replace-quote', u'regexp-replaces', u'regexp-split', u'regexp-try-match', u'regexp?', u'relative-path?', u'relocate-input-port', u'relocate-output-port', u'remainder', u'remf', u'remf*', u'remove', u'remove*', u'remove-duplicates', u'remq', u'remq*', u'remv', u'remv*', u'rename-contract', u'rename-file-or-directory', u'rename-transformer-target', u'rename-transformer?', u'replace-evt', u'reroot-path', u'resolve-path', u'resolved-module-path-name', u'resolved-module-path?', u'rest', u'reverse', u'round', u'second', u'seconds->date', u'security-guard?', u'semaphore-peek-evt', u'semaphore-peek-evt?', u'semaphore-post', u'semaphore-try-wait?', u'semaphore-wait', u'semaphore-wait/enable-break', u'semaphore?', u'sequence->list', u'sequence->stream', u'sequence-add-between', u'sequence-andmap', u'sequence-append', u'sequence-count', u'sequence-filter', u'sequence-fold', u'sequence-for-each', u'sequence-generate', u'sequence-generate*', u'sequence-length', u'sequence-map', u'sequence-ormap', u'sequence-ref', u'sequence-tail', u'sequence/c', u'sequence?', u'set', u'set!-transformer-procedure', u'set!-transformer?', u'set->list', u'set->stream', u'set-add', u'set-add!', u'set-box!', u'set-clear', u'set-clear!', u'set-copy', u'set-copy-clear', u'set-count', u'set-empty?', u'set-eq?', u'set-equal?', u'set-eqv?', u'set-first', u'set-for-each', u'set-implements/c', u'set-implements?', u'set-intersect', u'set-intersect!', u'set-map', u'set-mcar!', u'set-mcdr!', u'set-member?', u'set-mutable?', u'set-phantom-bytes!', 
u'set-port-next-location!', u'set-remove', u'set-remove!', u'set-rest', u'set-some-basic-contracts!', u'set-subtract', u'set-subtract!', u'set-symmetric-difference', u'set-symmetric-difference!', u'set-union', u'set-union!', u'set-weak?', u'set/c', u'set=?', u'set?', u'seteq', u'seteqv', u'seventh', u'sgn', u'shared-bytes', u'shell-execute', u'shrink-path-wrt', u'shuffle', u'simple-form-path', u'simplify-path', u'sin', u'single-flonum?', u'sinh', u'sixth', u'skip-projection-wrapper?', u'sleep', u'some-system-path->string', u'sort', u'special-comment-value', u'special-comment?', u'special-filter-input-port', u'split-at', u'split-at-right', u'split-common-prefix', u'split-path', u'splitf-at', u'splitf-at-right', u'sqr', u'sqrt', u'srcloc', u'srcloc->string', u'srcloc-column', u'srcloc-line', u'srcloc-position', u'srcloc-source', u'srcloc-span', u'srcloc?', u'stop-after', u'stop-before', u'stream->list', u'stream-add-between', u'stream-andmap', u'stream-append', u'stream-count', u'stream-empty?', u'stream-filter', u'stream-first', u'stream-fold', u'stream-for-each', u'stream-length', u'stream-map', u'stream-ormap', u'stream-ref', u'stream-rest', u'stream-tail', u'stream/c', u'stream?', u'string', u'string->bytes/latin-1', u'string->bytes/locale', u'string->bytes/utf-8', u'string->immutable-string', u'string->keyword', u'string->list', u'string->number', u'string->path', u'string->path-element', u'string->some-system-path', u'string->symbol', u'string->uninterned-symbol', u'string->unreadable-symbol', u'string-append', u'string-append*', u'string-ci<=?', u'string-ci=?', u'string-ci>?', u'string-contains?', u'string-copy', u'string-copy!', u'string-downcase', u'string-environment-variable-name?', u'string-fill!', u'string-foldcase', u'string-join', u'string-len/c', u'string-length', u'string-locale-ci?', u'string-locale-downcase', u'string-locale-upcase', u'string-locale?', u'string-no-nuls?', u'string-normalize-nfc', u'string-normalize-nfd', u'string-normalize-nfkc', u'string-normalize-nfkd', u'string-normalize-spaces', u'string-port?', u'string-prefix?', u'string-ref', u'string-replace', u'string-set!', u'string-split', u'string-suffix?', u'string-titlecase', u'string-trim', u'string-upcase', u'string-utf-8-length', u'string<=?', u'string=?', u'string>?', u'string?', u'struct->vector', u'struct-accessor-procedure?', u'struct-constructor-procedure?', u'struct-info', u'struct-mutator-procedure?', u'struct-predicate-procedure?', u'struct-type-info', u'struct-type-make-constructor', u'struct-type-make-predicate', u'struct-type-property-accessor-procedure?', u'struct-type-property/c', u'struct-type-property?', u'struct-type?', u'struct:arity-at-least', u'struct:arrow-contract-info', u'struct:date', u'struct:date*', u'struct:exn', u'struct:exn:break', u'struct:exn:break:hang-up', u'struct:exn:break:terminate', u'struct:exn:fail', u'struct:exn:fail:contract', u'struct:exn:fail:contract:arity', u'struct:exn:fail:contract:blame', u'struct:exn:fail:contract:continuation', u'struct:exn:fail:contract:divide-by-zero', u'struct:exn:fail:contract:non-fixnum-result', u'struct:exn:fail:contract:variable', u'struct:exn:fail:filesystem', u'struct:exn:fail:filesystem:errno', u'struct:exn:fail:filesystem:exists', u'struct:exn:fail:filesystem:missing-module', u'struct:exn:fail:filesystem:version', u'struct:exn:fail:network', u'struct:exn:fail:network:errno', u'struct:exn:fail:object', u'struct:exn:fail:out-of-memory', u'struct:exn:fail:read', u'struct:exn:fail:read:eof', u'struct:exn:fail:read:non-char', 
u'struct:exn:fail:syntax', u'struct:exn:fail:syntax:missing-module', u'struct:exn:fail:syntax:unbound', u'struct:exn:fail:unsupported', u'struct:exn:fail:user', u'struct:srcloc', u'struct:wrapped-extra-arg-arrow', u'struct?', u'sub1', u'subbytes', u'subclass?', u'subclass?/c', u'subprocess', u'subprocess-group-enabled', u'subprocess-kill', u'subprocess-pid', u'subprocess-status', u'subprocess-wait', u'subprocess?', u'subset?', u'substring', u'suggest/c', u'symbol->string', u'symbol-interned?', u'symbol-unreadable?', u'symboldatum', u'syntax->list', u'syntax-arm', u'syntax-column', u'syntax-debug-info', u'syntax-disarm', u'syntax-e', u'syntax-line', u'syntax-local-bind-syntaxes', u'syntax-local-certifier', u'syntax-local-context', u'syntax-local-expand-expression', u'syntax-local-get-shadower', u'syntax-local-identifier-as-binding', u'syntax-local-introduce', u'syntax-local-lift-context', u'syntax-local-lift-expression', u'syntax-local-lift-module', u'syntax-local-lift-module-end-declaration', u'syntax-local-lift-provide', u'syntax-local-lift-require', u'syntax-local-lift-values-expression', u'syntax-local-make-definition-context', u'syntax-local-make-delta-introducer', u'syntax-local-module-defined-identifiers', u'syntax-local-module-exports', u'syntax-local-module-required-identifiers', u'syntax-local-name', u'syntax-local-phase-level', u'syntax-local-submodules', u'syntax-local-transforming-module-provides?', u'syntax-local-value', u'syntax-local-value/immediate', u'syntax-original?', u'syntax-position', u'syntax-property', u'syntax-property-preserved?', u'syntax-property-symbol-keys', u'syntax-protect', u'syntax-rearm', u'syntax-recertify', u'syntax-shift-phase-level', u'syntax-source', u'syntax-source-module', u'syntax-span', u'syntax-taint', u'syntax-tainted?', u'syntax-track-origin', u'syntax-transforming-module-expression?', u'syntax-transforming-with-lifts?', u'syntax-transforming?', u'syntax/c', u'syntax?', u'system', u'system*', u'system*/exit-code', u'system-big-endian?', u'system-idle-evt', u'system-language+country', u'system-library-subpath', u'system-path-convention-type', u'system-type', u'system/exit-code', u'tail-marks-match?', u'take', u'take-common-prefix', u'take-right', u'takef', u'takef-right', u'tan', u'tanh', u'tcp-abandon-port', u'tcp-accept', u'tcp-accept-evt', u'tcp-accept-ready?', u'tcp-accept/enable-break', u'tcp-addresses', u'tcp-close', u'tcp-connect', u'tcp-connect/enable-break', u'tcp-listen', u'tcp-listener?', u'tcp-port?', u'tentative-pretty-print-port-cancel', u'tentative-pretty-print-port-transfer', u'tenth', u'terminal-port?', u'the-unsupplied-arg', u'third', u'thread', u'thread-cell-ref', u'thread-cell-set!', u'thread-cell-values?', u'thread-cell?', u'thread-dead-evt', u'thread-dead?', u'thread-group?', u'thread-receive', u'thread-receive-evt', u'thread-resume', u'thread-resume-evt', u'thread-rewind-receive', u'thread-running?', u'thread-send', u'thread-suspend', u'thread-suspend-evt', u'thread-try-receive', u'thread-wait', u'thread/suspend-to-kill', u'thread?', u'time-apply', u'touch', u'transplant-input-port', u'transplant-output-port', u'true', u'truncate', u'udp-addresses', u'udp-bind!', u'udp-bound?', u'udp-close', u'udp-connect!', u'udp-connected?', u'udp-multicast-interface', u'udp-multicast-join-group!', u'udp-multicast-leave-group!', u'udp-multicast-loopback?', u'udp-multicast-set-interface!', u'udp-multicast-set-loopback!', u'udp-multicast-set-ttl!', u'udp-multicast-ttl', u'udp-open-socket', u'udp-receive!', u'udp-receive!*', 
u'udp-receive!-evt', u'udp-receive!/enable-break', u'udp-receive-ready-evt', u'udp-send', u'udp-send*', u'udp-send-evt', u'udp-send-ready-evt', u'udp-send-to', u'udp-send-to*', u'udp-send-to-evt', u'udp-send-to/enable-break', u'udp-send/enable-break', u'udp?', u'unbox', u'uncaught-exception-handler', u'unit?', u'unspecified-dom', u'unsupplied-arg?', u'use-collection-link-paths', u'use-compiled-file-paths', u'use-user-specific-search-paths', u'user-execute-bit', u'user-read-bit', u'user-write-bit', u'value-blame', u'value-contract', u'values', u'variable-reference->empty-namespace', u'variable-reference->module-base-phase', u'variable-reference->module-declaration-inspector', u'variable-reference->module-path-index', u'variable-reference->module-source', u'variable-reference->namespace', u'variable-reference->phase', u'variable-reference->resolved-module-path', u'variable-reference-constant?', u'variable-reference?', u'vector', u'vector->immutable-vector', u'vector->list', u'vector->pseudo-random-generator', u'vector->pseudo-random-generator!', u'vector->values', u'vector-append', u'vector-argmax', u'vector-argmin', u'vector-copy', u'vector-copy!', u'vector-count', u'vector-drop', u'vector-drop-right', u'vector-fill!', u'vector-filter', u'vector-filter-not', u'vector-immutable', u'vector-immutable/c', u'vector-immutableof', u'vector-length', u'vector-map', u'vector-map!', u'vector-member', u'vector-memq', u'vector-memv', u'vector-ref', u'vector-set!', u'vector-set*!', u'vector-set-performance-stats!', u'vector-split-at', u'vector-split-at-right', u'vector-take', u'vector-take-right', u'vector/c', u'vector?', u'vectorof', u'version', u'void', u'void?', u'weak-box-value', u'weak-box?', u'weak-set', u'weak-seteq', u'weak-seteqv', u'will-execute', u'will-executor?', u'will-register', u'will-try-execute', u'with-input-from-bytes', u'with-input-from-file', u'with-input-from-string', u'with-output-to-bytes', u'with-output-to-file', u'with-output-to-string', u'would-be-future', u'wrap-evt', u'wrapped-extra-arg-arrow', u'wrapped-extra-arg-arrow-extra-neg-party-argument', u'wrapped-extra-arg-arrow-real-func', u'wrapped-extra-arg-arrow?', u'writable<%>', u'write', u'write-byte', u'write-bytes', u'write-bytes-avail', u'write-bytes-avail*', u'write-bytes-avail-evt', u'write-bytes-avail/enable-break', u'write-char', u'write-special', u'write-special-avail*', u'write-special-evt', u'write-string', u'write-to-file', u'writeln', u'xor', u'zero?', u'~.a', u'~.s', u'~.v', u'~a', u'~e', u'~r', u'~s', u'~v' ) _opening_parenthesis = r'[([{]' _closing_parenthesis = r'[)\]}]' _delimiters = r'()[\]{}",\'`;\s' _symbol = r'(?:\|[^|]*\||\\[\w\W]|[^|\\%s]+)+' % _delimiters _exact_decimal_prefix = r'(?:#e)?(?:#d)?(?:#e)?' 
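    # Illustrative aside (added comment, not in the original source): the raw
    # fragments above are composed via "%" into the number, string and symbol
    # rules of the tokens table below.  A hypothetical interactive check of
    # _symbol, assuming it is evaluated exactly as defined here, would behave
    # like:
    #
    #     >>> import re
    #     >>> re.match(_symbol, 'vector-ref x').group()
    #     'vector-ref'
    #     >>> re.match(_symbol, '|a b|rest').group()
    #     '|a b|rest'
    #
    # i.e. pipe-quoted spans may contain delimiters, while unquoted symbols
    # stop at the first character from _delimiters.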
_exponent = r'(?:[defls][-+]?\d+)' _inexact_simple_no_hashes = r'(?:\d+(?:/\d+|\.\d*)?|\.\d+)' _inexact_simple = (r'(?:%s|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|' r'\d+(?:\.\d*#+|/\d+#+)))' % _inexact_simple_no_hashes) _inexact_normal_no_hashes = r'(?:%s%s?)' % (_inexact_simple_no_hashes, _exponent) _inexact_normal = r'(?:%s%s?)' % (_inexact_simple, _exponent) _inexact_special = r'(?:(?:inf|nan)\.[0f])' _inexact_real = r'(?:[-+]?%s|[-+]%s)' % (_inexact_normal, _inexact_special) _inexact_unsigned = r'(?:%s|%s)' % (_inexact_normal, _inexact_special) tokens = { 'root': [ (_closing_parenthesis, Error), (r'(?!\Z)', Text, 'unquoted-datum') ], 'datum': [ (r'(?s)#;|#![ /]([^\\\n]|\\.)*', Comment), (u';[^\\n\\r\x85\u2028\u2029]*', Comment.Single), (r'#\|', Comment.Multiline, 'block-comment'), # Whitespaces (r'(?u)\s+', Text), # Numbers: Keep in mind Racket reader hash prefixes, which # can denote the base or the type. These don't map neatly # onto Pygments token types; some judgment calls here. # #d or no prefix (r'(?i)%s[-+]?\d+(?=[%s])' % (_exact_decimal_prefix, _delimiters), Number.Integer, '#pop'), (r'(?i)%s[-+]?(\d+(\.\d*)?|\.\d+)([deflst][-+]?\d+)?(?=[%s])' % (_exact_decimal_prefix, _delimiters), Number.Float, '#pop'), (r'(?i)%s[-+]?(%s([-+]%s?i)?|[-+]%s?i)(?=[%s])' % (_exact_decimal_prefix, _inexact_normal_no_hashes, _inexact_normal_no_hashes, _inexact_normal_no_hashes, _delimiters), Number, '#pop'), # Inexact without explicit #i (r'(?i)(#d)?(%s([-+]%s?i)?|[-+]%s?i|%s@%s)(?=[%s])' % (_inexact_real, _inexact_unsigned, _inexact_unsigned, _inexact_real, _inexact_real, _delimiters), Number.Float, '#pop'), # The remaining extflonums (r'(?i)(([-+]?%st[-+]?\d+)|[-+](inf|nan)\.t)(?=[%s])' % (_inexact_simple, _delimiters), Number.Float, '#pop'), # #b (r'(?iu)(#[ei])?#b%s' % _symbol, Number.Bin, '#pop'), # #o (r'(?iu)(#[ei])?#o%s' % _symbol, Number.Oct, '#pop'), # #x (r'(?iu)(#[ei])?#x%s' % _symbol, Number.Hex, '#pop'), # #i is always inexact, i.e. float (r'(?iu)(#d)?#i%s' % _symbol, Number.Float, '#pop'), # Strings and characters (r'#?"', String.Double, ('#pop', 'string')), (r'#<<(.+)\n(^(?!\1$).*$\n)*^\1$', String.Heredoc, '#pop'), (r'#\\(u[\da-fA-F]{1,4}|U[\da-fA-F]{1,8})', String.Char, '#pop'), (r'(?is)#\\([0-7]{3}|[a-z]+|.)', String.Char, '#pop'), (r'(?s)#[pr]x#?"(\\?.)*?"', String.Regex, '#pop'), # Constants (r'#(true|false|[tTfF])', Name.Constant, '#pop'), # Keyword argument names (e.g. #:keyword) (r'(?u)#:%s' % _symbol, Keyword.Declaration, '#pop'), # Reader extensions (r'(#lang |#!)(\S+)', bygroups(Keyword.Namespace, Name.Namespace)), (r'#reader', Keyword.Namespace, 'quoted-datum'), # Other syntax (r"(?i)\.(?=[%s])|#c[is]|#['`]|#,@?" 
% _delimiters, Operator), (r"'|#[s&]|#hash(eqv?)?|#\d*(?=%s)" % _opening_parenthesis, Operator, ('#pop', 'quoted-datum')) ], 'datum*': [ (r'`|,@?', Operator), (_symbol, String.Symbol, '#pop'), (r'[|\\]', Error), default('#pop') ], 'list': [ (_closing_parenthesis, Punctuation, '#pop') ], 'unquoted-datum': [ include('datum'), (r'quote(?=[%s])' % _delimiters, Keyword, ('#pop', 'quoted-datum')), (r'`', Operator, ('#pop', 'quasiquoted-datum')), (r'quasiquote(?=[%s])' % _delimiters, Keyword, ('#pop', 'quasiquoted-datum')), (_opening_parenthesis, Punctuation, ('#pop', 'unquoted-list')), (words(_keywords, prefix='(?u)', suffix='(?=[%s])' % _delimiters), Keyword, '#pop'), (words(_builtins, prefix='(?u)', suffix='(?=[%s])' % _delimiters), Name.Builtin, '#pop'), (_symbol, Name, '#pop'), include('datum*') ], 'unquoted-list': [ include('list'), (r'(?!\Z)', Text, 'unquoted-datum') ], 'quasiquoted-datum': [ include('datum'), (r',@?', Operator, ('#pop', 'unquoted-datum')), (r'unquote(-splicing)?(?=[%s])' % _delimiters, Keyword, ('#pop', 'unquoted-datum')), (_opening_parenthesis, Punctuation, ('#pop', 'quasiquoted-list')), include('datum*') ], 'quasiquoted-list': [ include('list'), (r'(?!\Z)', Text, 'quasiquoted-datum') ], 'quoted-datum': [ include('datum'), (_opening_parenthesis, Punctuation, ('#pop', 'quoted-list')), include('datum*') ], 'quoted-list': [ include('list'), (r'(?!\Z)', Text, 'quoted-datum') ], 'block-comment': [ (r'#\|', Comment.Multiline, '#push'), (r'\|#', Comment.Multiline, '#pop'), (r'[^#|]+|.', Comment.Multiline) ], 'string': [ (r'"', String.Double, '#pop'), (r'(?s)\\([0-7]{1,3}|x[\da-fA-F]{1,2}|u[\da-fA-F]{1,4}|' r'U[\da-fA-F]{1,8}|.)', String.Escape), (r'[^\\"]+', String.Double) ] } class NewLispLexer(RegexLexer): """ For `newLISP. `_ source code (version 10.3.0). .. 
versionadded:: 1.5 """ name = 'NewLisp' aliases = ['newlisp'] filenames = ['*.lsp', '*.nl', '*.kif'] mimetypes = ['text/x-newlisp', 'application/x-newlisp'] flags = re.IGNORECASE | re.MULTILINE | re.UNICODE # list of built-in functions for newLISP version 10.3 builtins = ( '^', '--', '-', ':', '!', '!=', '?', '@', '*', '/', '&', '%', '+', '++', '<', '<<', '<=', '=', '>', '>=', '>>', '|', '~', '$', '$0', '$1', '$10', '$11', '$12', '$13', '$14', '$15', '$2', '$3', '$4', '$5', '$6', '$7', '$8', '$9', '$args', '$idx', '$it', '$main-args', 'abort', 'abs', 'acos', 'acosh', 'add', 'address', 'amb', 'and', 'append-file', 'append', 'apply', 'args', 'array-list', 'array?', 'array', 'asin', 'asinh', 'assoc', 'atan', 'atan2', 'atanh', 'atom?', 'base64-dec', 'base64-enc', 'bayes-query', 'bayes-train', 'begin', 'beta', 'betai', 'bind', 'binomial', 'bits', 'callback', 'case', 'catch', 'ceil', 'change-dir', 'char', 'chop', 'Class', 'clean', 'close', 'command-event', 'cond', 'cons', 'constant', 'context?', 'context', 'copy-file', 'copy', 'cos', 'cosh', 'count', 'cpymem', 'crc32', 'crit-chi2', 'crit-z', 'current-line', 'curry', 'date-list', 'date-parse', 'date-value', 'date', 'debug', 'dec', 'def-new', 'default', 'define-macro', 'define', 'delete-file', 'delete-url', 'delete', 'destroy', 'det', 'device', 'difference', 'directory?', 'directory', 'div', 'do-until', 'do-while', 'doargs', 'dolist', 'dostring', 'dotimes', 'dotree', 'dump', 'dup', 'empty?', 'encrypt', 'ends-with', 'env', 'erf', 'error-event', 'eval-string', 'eval', 'exec', 'exists', 'exit', 'exp', 'expand', 'explode', 'extend', 'factor', 'fft', 'file-info', 'file?', 'filter', 'find-all', 'find', 'first', 'flat', 'float?', 'float', 'floor', 'flt', 'fn', 'for-all', 'for', 'fork', 'format', 'fv', 'gammai', 'gammaln', 'gcd', 'get-char', 'get-float', 'get-int', 'get-long', 'get-string', 'get-url', 'global?', 'global', 'if-not', 'if', 'ifft', 'import', 'inc', 'index', 'inf?', 'int', 'integer?', 'integer', 'intersect', 'invert', 'irr', 'join', 'lambda-macro', 'lambda?', 'lambda', 'last-error', 'last', 'legal?', 'length', 'let', 'letex', 'letn', 'list?', 'list', 'load', 'local', 'log', 'lookup', 'lower-case', 'macro?', 'main-args', 'MAIN', 'make-dir', 'map', 'mat', 'match', 'max', 'member', 'min', 'mod', 'module', 'mul', 'multiply', 'NaN?', 'net-accept', 'net-close', 'net-connect', 'net-error', 'net-eval', 'net-interface', 'net-ipv', 'net-listen', 'net-local', 'net-lookup', 'net-packet', 'net-peek', 'net-peer', 'net-ping', 'net-receive-from', 'net-receive-udp', 'net-receive', 'net-select', 'net-send-to', 'net-send-udp', 'net-send', 'net-service', 'net-sessions', 'new', 'nil?', 'nil', 'normal', 'not', 'now', 'nper', 'npv', 'nth', 'null?', 'number?', 'open', 'or', 'ostype', 'pack', 'parse-date', 'parse', 'peek', 'pipe', 'pmt', 'pop-assoc', 'pop', 'post-url', 'pow', 'prefix', 'pretty-print', 'primitive?', 'print', 'println', 'prob-chi2', 'prob-z', 'process', 'prompt-event', 'protected?', 'push', 'put-url', 'pv', 'quote?', 'quote', 'rand', 'random', 'randomize', 'read', 'read-char', 'read-expr', 'read-file', 'read-key', 'read-line', 'read-utf8', 'reader-event', 'real-path', 'receive', 'ref-all', 'ref', 'regex-comp', 'regex', 'remove-dir', 'rename-file', 'replace', 'reset', 'rest', 'reverse', 'rotate', 'round', 'save', 'search', 'seed', 'seek', 'select', 'self', 'semaphore', 'send', 'sequence', 'series', 'set-locale', 'set-ref-all', 'set-ref', 'set', 'setf', 'setq', 'sgn', 'share', 'signal', 'silent', 'sin', 'sinh', 'sleep', 'slice', 'sort', 'source', 
'spawn', 'sqrt', 'starts-with', 'string?', 'string', 'sub', 'swap', 'sym', 'symbol?', 'symbols', 'sync', 'sys-error', 'sys-info', 'tan', 'tanh', 'term', 'throw-error', 'throw', 'time-of-day', 'time', 'timer', 'title-case', 'trace-highlight', 'trace', 'transpose', 'Tree', 'trim', 'true?', 'true', 'unicode', 'unify', 'unique', 'unless', 'unpack', 'until', 'upper-case', 'utf8', 'utf8len', 'uuid', 'wait-pid', 'when', 'while', 'write', 'write-char', 'write-file', 'write-line', 'xfer-event', 'xml-error', 'xml-parse', 'xml-type-tags', 'zero?', ) # valid names valid_name = r'([\w!$%&*+.,/<=>?@^~|-])+|(\[.*?\])+' tokens = { 'root': [ # shebang (r'#!(.*?)$', Comment.Preproc), # comments starting with semicolon (r';.*$', Comment.Single), # comments starting with # (r'#.*$', Comment.Single), # whitespace (r'\s+', Text), # strings, symbols and characters (r'"(\\\\|\\"|[^"])*"', String), # braces (r'\{', String, "bracestring"), # [text] ... [/text] delimited strings (r'\[text\]*', String, "tagstring"), # 'special' operators... (r"('|:)", Operator), # highlight the builtins (words(builtins, suffix=r'\b'), Keyword), # the remaining functions (r'(?<=\()' + valid_name, Name.Variable), # the remaining variables (valid_name, String.Symbol), # parentheses (r'(\(|\))', Punctuation), ], # braced strings... 'bracestring': [ (r'\{', String, "#push"), (r'\}', String, "#pop"), ('[^{}]+', String), ], # tagged [text]...[/text] delimited strings... 'tagstring': [ (r'(?s)(.*?)(\[/text\])', String, '#pop'), ], } class EmacsLispLexer(RegexLexer): """ An ELisp lexer, parsing a stream and outputting the tokens needed to highlight elisp code. .. versionadded:: 2.1 """ name = 'EmacsLisp' aliases = ['emacs', 'elisp', 'emacs-lisp'] filenames = ['*.el'] mimetypes = ['text/x-elisp', 'application/x-elisp'] flags = re.MULTILINE # couple of useful regexes # characters that are not macro-characters and can be used to begin a symbol nonmacro = r'\\.|[\w!$%&*+-/<=>?@^{}~|]' constituent = nonmacro + '|[#.:]' terminated = r'(?=[ "()\]\'\n,;`])' # whitespace or terminating macro characters # symbol token, reverse-engineered from hyperspec # Take a deep breath... 
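    # Added illustration (not in the original source): a quick, hypothetical
    # check of the composed symbol pattern defined just below,
    #
    #     >>> import re
    #     >>> re.match(symbol, 'with-current-buffer)').group()
    #     'with-current-buffer'
    #
    # showing that a name token runs up to the first terminating macro
    # character or whitespace.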
symbol = r'((?:%s)(?:%s)*)' % (nonmacro, constituent) macros = set(( 'atomic-change-group', 'case', 'block', 'cl-block', 'cl-callf', 'cl-callf2', 'cl-case', 'cl-decf', 'cl-declaim', 'cl-declare', 'cl-define-compiler-macro', 'cl-defmacro', 'cl-defstruct', 'cl-defsubst', 'cl-deftype', 'cl-defun', 'cl-destructuring-bind', 'cl-do', 'cl-do*', 'cl-do-all-symbols', 'cl-do-symbols', 'cl-dolist', 'cl-dotimes', 'cl-ecase', 'cl-etypecase', 'eval-when', 'cl-eval-when', 'cl-flet', 'cl-flet*', 'cl-function', 'cl-incf', 'cl-labels', 'cl-letf', 'cl-letf*', 'cl-load-time-value', 'cl-locally', 'cl-loop', 'cl-macrolet', 'cl-multiple-value-bind', 'cl-multiple-value-setq', 'cl-progv', 'cl-psetf', 'cl-psetq', 'cl-pushnew', 'cl-remf', 'cl-return', 'cl-return-from', 'cl-rotatef', 'cl-shiftf', 'cl-symbol-macrolet', 'cl-tagbody', 'cl-the', 'cl-typecase', 'combine-after-change-calls', 'condition-case-unless-debug', 'decf', 'declaim', 'declare', 'declare-function', 'def-edebug-spec', 'defadvice', 'defclass', 'defcustom', 'defface', 'defgeneric', 'defgroup', 'define-advice', 'define-alternatives', 'define-compiler-macro', 'define-derived-mode', 'define-generic-mode', 'define-global-minor-mode', 'define-globalized-minor-mode', 'define-minor-mode', 'define-modify-macro', 'define-obsolete-face-alias', 'define-obsolete-function-alias', 'define-obsolete-variable-alias', 'define-setf-expander', 'define-skeleton', 'defmacro', 'defmethod', 'defsetf', 'defstruct', 'defsubst', 'deftheme', 'deftype', 'defun', 'defvar-local', 'delay-mode-hooks', 'destructuring-bind', 'do', 'do*', 'do-all-symbols', 'do-symbols', 'dolist', 'dont-compile', 'dotimes', 'dotimes-with-progress-reporter', 'ecase', 'ert-deftest', 'etypecase', 'eval-and-compile', 'eval-when-compile', 'flet', 'ignore-errors', 'incf', 'labels', 'lambda', 'letrec', 'lexical-let', 'lexical-let*', 'loop', 'multiple-value-bind', 'multiple-value-setq', 'noreturn', 'oref', 'oref-default', 'oset', 'oset-default', 'pcase', 'pcase-defmacro', 'pcase-dolist', 'pcase-exhaustive', 'pcase-let', 'pcase-let*', 'pop', 'psetf', 'psetq', 'push', 'pushnew', 'remf', 'return', 'rotatef', 'rx', 'save-match-data', 'save-selected-window', 'save-window-excursion', 'setf', 'setq-local', 'shiftf', 'track-mouse', 'typecase', 'unless', 'use-package', 'when', 'while-no-input', 'with-case-table', 'with-category-table', 'with-coding-priority', 'with-current-buffer', 'with-demoted-errors', 'with-eval-after-load', 'with-file-modes', 'with-local-quit', 'with-output-to-string', 'with-output-to-temp-buffer', 'with-parsed-tramp-file-name', 'with-selected-frame', 'with-selected-window', 'with-silent-modifications', 'with-slots', 'with-syntax-table', 'with-temp-buffer', 'with-temp-file', 'with-temp-message', 'with-timeout', 'with-tramp-connection-property', 'with-tramp-file-property', 'with-tramp-progress-reporter', 'with-wrapper-hook', 'load-time-value', 'locally', 'macrolet', 'progv', 'return-from', )) special_forms = set(( 'and', 'catch', 'cond', 'condition-case', 'defconst', 'defvar', 'function', 'if', 'interactive', 'let', 'let*', 'or', 'prog1', 'prog2', 'progn', 'quote', 'save-current-buffer', 'save-excursion', 'save-restriction', 'setq', 'setq-default', 'subr-arity', 'unwind-protect', 'while', )) builtin_function = set(( '%', '*', '+', '-', '/', '/=', '1+', '1-', '<', '<=', '=', '>', '>=', 'Snarf-documentation', 'abort-recursive-edit', 'abs', 'accept-process-output', 'access-file', 'accessible-keymaps', 'acos', 'active-minibuffer-window', 'add-face-text-property', 'add-name-to-file', 'add-text-properties', 
'all-completions', 'append', 'apply', 'apropos-internal', 'aref', 'arrayp', 'aset', 'ash', 'asin', 'assoc', 'assoc-string', 'assq', 'atan', 'atom', 'autoload', 'autoload-do-load', 'backtrace', 'backtrace--locals', 'backtrace-debug', 'backtrace-eval', 'backtrace-frame', 'backward-char', 'backward-prefix-chars', 'barf-if-buffer-read-only', 'base64-decode-region', 'base64-decode-string', 'base64-encode-region', 'base64-encode-string', 'beginning-of-line', 'bidi-find-overridden-directionality', 'bidi-resolved-levels', 'bitmap-spec-p', 'bobp', 'bolp', 'bool-vector', 'bool-vector-count-consecutive', 'bool-vector-count-population', 'bool-vector-exclusive-or', 'bool-vector-intersection', 'bool-vector-not', 'bool-vector-p', 'bool-vector-set-difference', 'bool-vector-subsetp', 'bool-vector-union', 'boundp', 'buffer-base-buffer', 'buffer-chars-modified-tick', 'buffer-enable-undo', 'buffer-file-name', 'buffer-has-markers-at', 'buffer-list', 'buffer-live-p', 'buffer-local-value', 'buffer-local-variables', 'buffer-modified-p', 'buffer-modified-tick', 'buffer-name', 'buffer-size', 'buffer-string', 'buffer-substring', 'buffer-substring-no-properties', 'buffer-swap-text', 'bufferp', 'bury-buffer-internal', 'byte-code', 'byte-code-function-p', 'byte-to-position', 'byte-to-string', 'byteorder', 'call-interactively', 'call-last-kbd-macro', 'call-process', 'call-process-region', 'cancel-kbd-macro-events', 'capitalize', 'capitalize-region', 'capitalize-word', 'car', 'car-less-than-car', 'car-safe', 'case-table-p', 'category-docstring', 'category-set-mnemonics', 'category-table', 'category-table-p', 'ccl-execute', 'ccl-execute-on-string', 'ccl-program-p', 'cdr', 'cdr-safe', 'ceiling', 'char-after', 'char-before', 'char-category-set', 'char-charset', 'char-equal', 'char-or-string-p', 'char-resolve-modifiers', 'char-syntax', 'char-table-extra-slot', 'char-table-p', 'char-table-parent', 'char-table-range', 'char-table-subtype', 'char-to-string', 'char-width', 'characterp', 'charset-after', 'charset-id-internal', 'charset-plist', 'charset-priority-list', 'charsetp', 'check-coding-system', 'check-coding-systems-region', 'clear-buffer-auto-save-failure', 'clear-charset-maps', 'clear-face-cache', 'clear-font-cache', 'clear-image-cache', 'clear-string', 'clear-this-command-keys', 'close-font', 'clrhash', 'coding-system-aliases', 'coding-system-base', 'coding-system-eol-type', 'coding-system-p', 'coding-system-plist', 'coding-system-priority-list', 'coding-system-put', 'color-distance', 'color-gray-p', 'color-supported-p', 'combine-after-change-execute', 'command-error-default-function', 'command-remapping', 'commandp', 'compare-buffer-substrings', 'compare-strings', 'compare-window-configurations', 'completing-read', 'compose-region-internal', 'compose-string-internal', 'composition-get-gstring', 'compute-motion', 'concat', 'cons', 'consp', 'constrain-to-field', 'continue-process', 'controlling-tty-p', 'coordinates-in-window-p', 'copy-alist', 'copy-category-table', 'copy-file', 'copy-hash-table', 'copy-keymap', 'copy-marker', 'copy-sequence', 'copy-syntax-table', 'copysign', 'cos', 'current-active-maps', 'current-bidi-paragraph-direction', 'current-buffer', 'current-case-table', 'current-column', 'current-global-map', 'current-idle-time', 'current-indentation', 'current-input-mode', 'current-local-map', 'current-message', 'current-minor-mode-maps', 'current-time', 'current-time-string', 'current-time-zone', 'current-window-configuration', 'cygwin-convert-file-name-from-windows', 'cygwin-convert-file-name-to-windows', 
'daemon-initialized', 'daemonp', 'dbus--init-bus', 'dbus-get-unique-name', 'dbus-message-internal', 'debug-timer-check', 'declare-equiv-charset', 'decode-big5-char', 'decode-char', 'decode-coding-region', 'decode-coding-string', 'decode-sjis-char', 'decode-time', 'default-boundp', 'default-file-modes', 'default-printer-name', 'default-toplevel-value', 'default-value', 'define-category', 'define-charset-alias', 'define-charset-internal', 'define-coding-system-alias', 'define-coding-system-internal', 'define-fringe-bitmap', 'define-hash-table-test', 'define-key', 'define-prefix-command', 'delete', 'delete-all-overlays', 'delete-and-extract-region', 'delete-char', 'delete-directory-internal', 'delete-field', 'delete-file', 'delete-frame', 'delete-other-windows-internal', 'delete-overlay', 'delete-process', 'delete-region', 'delete-terminal', 'delete-window-internal', 'delq', 'describe-buffer-bindings', 'describe-vector', 'destroy-fringe-bitmap', 'detect-coding-region', 'detect-coding-string', 'ding', 'directory-file-name', 'directory-files', 'directory-files-and-attributes', 'discard-input', 'display-supports-face-attributes-p', 'do-auto-save', 'documentation', 'documentation-property', 'downcase', 'downcase-region', 'downcase-word', 'draw-string', 'dump-colors', 'dump-emacs', 'dump-face', 'dump-frame-glyph-matrix', 'dump-glyph-matrix', 'dump-glyph-row', 'dump-redisplay-history', 'dump-tool-bar-row', 'elt', 'emacs-pid', 'encode-big5-char', 'encode-char', 'encode-coding-region', 'encode-coding-string', 'encode-sjis-char', 'encode-time', 'end-kbd-macro', 'end-of-line', 'eobp', 'eolp', 'eq', 'eql', 'equal', 'equal-including-properties', 'erase-buffer', 'error-message-string', 'eval', 'eval-buffer', 'eval-region', 'event-convert-list', 'execute-kbd-macro', 'exit-recursive-edit', 'exp', 'expand-file-name', 'expt', 'external-debugging-output', 'face-attribute-relative-p', 'face-attributes-as-vector', 'face-font', 'fboundp', 'fceiling', 'fetch-bytecode', 'ffloor', 'field-beginning', 'field-end', 'field-string', 'field-string-no-properties', 'file-accessible-directory-p', 'file-acl', 'file-attributes', 'file-attributes-lessp', 'file-directory-p', 'file-executable-p', 'file-exists-p', 'file-locked-p', 'file-modes', 'file-name-absolute-p', 'file-name-all-completions', 'file-name-as-directory', 'file-name-completion', 'file-name-directory', 'file-name-nondirectory', 'file-newer-than-file-p', 'file-readable-p', 'file-regular-p', 'file-selinux-context', 'file-symlink-p', 'file-system-info', 'file-system-info', 'file-writable-p', 'fillarray', 'find-charset-region', 'find-charset-string', 'find-coding-systems-region-internal', 'find-composition-internal', 'find-file-name-handler', 'find-font', 'find-operation-coding-system', 'float', 'float-time', 'floatp', 'floor', 'fmakunbound', 'following-char', 'font-at', 'font-drive-otf', 'font-face-attributes', 'font-family-list', 'font-get', 'font-get-glyphs', 'font-get-system-font', 'font-get-system-normal-font', 'font-info', 'font-match-p', 'font-otf-alternates', 'font-put', 'font-shape-gstring', 'font-spec', 'font-variation-glyphs', 'font-xlfd-name', 'fontp', 'fontset-font', 'fontset-info', 'fontset-list', 'fontset-list-all', 'force-mode-line-update', 'force-window-update', 'format', 'format-mode-line', 'format-network-address', 'format-time-string', 'forward-char', 'forward-comment', 'forward-line', 'forward-word', 'frame-border-width', 'frame-bottom-divider-width', 'frame-can-run-window-configuration-change-hook', 'frame-char-height', 'frame-char-width', 
'frame-face-alist', 'frame-first-window', 'frame-focus', 'frame-font-cache', 'frame-fringe-width', 'frame-list', 'frame-live-p', 'frame-or-buffer-changed-p', 'frame-parameter', 'frame-parameters', 'frame-pixel-height', 'frame-pixel-width', 'frame-pointer-visible-p', 'frame-right-divider-width', 'frame-root-window', 'frame-scroll-bar-height', 'frame-scroll-bar-width', 'frame-selected-window', 'frame-terminal', 'frame-text-cols', 'frame-text-height', 'frame-text-lines', 'frame-text-width', 'frame-total-cols', 'frame-total-lines', 'frame-visible-p', 'framep', 'frexp', 'fringe-bitmaps-at-pos', 'fround', 'fset', 'ftruncate', 'funcall', 'funcall-interactively', 'function-equal', 'functionp', 'gap-position', 'gap-size', 'garbage-collect', 'gc-status', 'generate-new-buffer-name', 'get', 'get-buffer', 'get-buffer-create', 'get-buffer-process', 'get-buffer-window', 'get-byte', 'get-char-property', 'get-char-property-and-overlay', 'get-file-buffer', 'get-file-char', 'get-internal-run-time', 'get-load-suffixes', 'get-pos-property', 'get-process', 'get-screen-color', 'get-text-property', 'get-unicode-property-internal', 'get-unused-category', 'get-unused-iso-final-char', 'getenv-internal', 'gethash', 'gfile-add-watch', 'gfile-rm-watch', 'global-key-binding', 'gnutls-available-p', 'gnutls-boot', 'gnutls-bye', 'gnutls-deinit', 'gnutls-error-fatalp', 'gnutls-error-string', 'gnutls-errorp', 'gnutls-get-initstage', 'gnutls-peer-status', 'gnutls-peer-status-warning-describe', 'goto-char', 'gpm-mouse-start', 'gpm-mouse-stop', 'group-gid', 'group-real-gid', 'handle-save-session', 'handle-switch-frame', 'hash-table-count', 'hash-table-p', 'hash-table-rehash-size', 'hash-table-rehash-threshold', 'hash-table-size', 'hash-table-test', 'hash-table-weakness', 'iconify-frame', 'identity', 'image-flush', 'image-mask-p', 'image-metadata', 'image-size', 'imagemagick-types', 'imagep', 'indent-to', 'indirect-function', 'indirect-variable', 'init-image-library', 'inotify-add-watch', 'inotify-rm-watch', 'input-pending-p', 'insert', 'insert-and-inherit', 'insert-before-markers', 'insert-before-markers-and-inherit', 'insert-buffer-substring', 'insert-byte', 'insert-char', 'insert-file-contents', 'insert-startup-screen', 'int86', 'integer-or-marker-p', 'integerp', 'interactive-form', 'intern', 'intern-soft', 'internal--track-mouse', 'internal-char-font', 'internal-complete-buffer', 'internal-copy-lisp-face', 'internal-default-process-filter', 'internal-default-process-sentinel', 'internal-describe-syntax-value', 'internal-event-symbol-parse-modifiers', 'internal-face-x-get-resource', 'internal-get-lisp-face-attribute', 'internal-lisp-face-attribute-values', 'internal-lisp-face-empty-p', 'internal-lisp-face-equal-p', 'internal-lisp-face-p', 'internal-make-lisp-face', 'internal-make-var-non-special', 'internal-merge-in-global-face', 'internal-set-alternative-font-family-alist', 'internal-set-alternative-font-registry-alist', 'internal-set-font-selection-order', 'internal-set-lisp-face-attribute', 'internal-set-lisp-face-attribute-from-resource', 'internal-show-cursor', 'internal-show-cursor-p', 'interrupt-process', 'invisible-p', 'invocation-directory', 'invocation-name', 'isnan', 'iso-charset', 'key-binding', 'key-description', 'keyboard-coding-system', 'keymap-parent', 'keymap-prompt', 'keymapp', 'keywordp', 'kill-all-local-variables', 'kill-buffer', 'kill-emacs', 'kill-local-variable', 'kill-process', 'last-nonminibuffer-frame', 'lax-plist-get', 'lax-plist-put', 'ldexp', 'length', 'libxml-parse-html-region', 
'libxml-parse-xml-region', 'line-beginning-position', 'line-end-position', 'line-pixel-height', 'list', 'list-fonts', 'list-system-processes', 'listp', 'load', 'load-average', 'local-key-binding', 'local-variable-if-set-p', 'local-variable-p', 'locale-info', 'locate-file-internal', 'lock-buffer', 'log', 'logand', 'logb', 'logior', 'lognot', 'logxor', 'looking-at', 'lookup-image', 'lookup-image-map', 'lookup-key', 'lower-frame', 'lsh', 'macroexpand', 'make-bool-vector', 'make-byte-code', 'make-category-set', 'make-category-table', 'make-char', 'make-char-table', 'make-directory-internal', 'make-frame-invisible', 'make-frame-visible', 'make-hash-table', 'make-indirect-buffer', 'make-keymap', 'make-list', 'make-local-variable', 'make-marker', 'make-network-process', 'make-overlay', 'make-serial-process', 'make-sparse-keymap', 'make-string', 'make-symbol', 'make-symbolic-link', 'make-temp-name', 'make-terminal-frame', 'make-variable-buffer-local', 'make-variable-frame-local', 'make-vector', 'makunbound', 'map-char-table', 'map-charset-chars', 'map-keymap', 'map-keymap-internal', 'mapatoms', 'mapc', 'mapcar', 'mapconcat', 'maphash', 'mark-marker', 'marker-buffer', 'marker-insertion-type', 'marker-position', 'markerp', 'match-beginning', 'match-data', 'match-end', 'matching-paren', 'max', 'max-char', 'md5', 'member', 'memory-info', 'memory-limit', 'memory-use-counts', 'memq', 'memql', 'menu-bar-menu-at-x-y', 'menu-or-popup-active-p', 'menu-or-popup-active-p', 'merge-face-attribute', 'message', 'message-box', 'message-or-box', 'min', 'minibuffer-completion-contents', 'minibuffer-contents', 'minibuffer-contents-no-properties', 'minibuffer-depth', 'minibuffer-prompt', 'minibuffer-prompt-end', 'minibuffer-selected-window', 'minibuffer-window', 'minibufferp', 'minor-mode-key-binding', 'mod', 'modify-category-entry', 'modify-frame-parameters', 'modify-syntax-entry', 'mouse-pixel-position', 'mouse-position', 'move-overlay', 'move-point-visually', 'move-to-column', 'move-to-window-line', 'msdos-downcase-filename', 'msdos-long-file-names', 'msdos-memget', 'msdos-memput', 'msdos-mouse-disable', 'msdos-mouse-enable', 'msdos-mouse-init', 'msdos-mouse-p', 'msdos-remember-default-colors', 'msdos-set-keyboard', 'msdos-set-mouse-buttons', 'multibyte-char-to-unibyte', 'multibyte-string-p', 'narrow-to-region', 'natnump', 'nconc', 'network-interface-info', 'network-interface-list', 'new-fontset', 'newline-cache-check', 'next-char-property-change', 'next-frame', 'next-overlay-change', 'next-property-change', 'next-read-file-uses-dialog-p', 'next-single-char-property-change', 'next-single-property-change', 'next-window', 'nlistp', 'nreverse', 'nth', 'nthcdr', 'null', 'number-or-marker-p', 'number-to-string', 'numberp', 'open-dribble-file', 'open-font', 'open-termscript', 'optimize-char-table', 'other-buffer', 'other-window-for-scrolling', 'overlay-buffer', 'overlay-end', 'overlay-get', 'overlay-lists', 'overlay-properties', 'overlay-put', 'overlay-recenter', 'overlay-start', 'overlayp', 'overlays-at', 'overlays-in', 'parse-partial-sexp', 'play-sound-internal', 'plist-get', 'plist-member', 'plist-put', 'point', 'point-marker', 'point-max', 'point-max-marker', 'point-min', 'point-min-marker', 'pos-visible-in-window-p', 'position-bytes', 'posix-looking-at', 'posix-search-backward', 'posix-search-forward', 'posix-string-match', 'posn-at-point', 'posn-at-x-y', 'preceding-char', 'prefix-numeric-value', 'previous-char-property-change', 'previous-frame', 'previous-overlay-change', 'previous-property-change', 
'previous-single-char-property-change', 'previous-single-property-change', 'previous-window', 'prin1', 'prin1-to-string', 'princ', 'print', 'process-attributes', 'process-buffer', 'process-coding-system', 'process-command', 'process-connection', 'process-contact', 'process-datagram-address', 'process-exit-status', 'process-filter', 'process-filter-multibyte-p', 'process-id', 'process-inherit-coding-system-flag', 'process-list', 'process-mark', 'process-name', 'process-plist', 'process-query-on-exit-flag', 'process-running-child-p', 'process-send-eof', 'process-send-region', 'process-send-string', 'process-sentinel', 'process-status', 'process-tty-name', 'process-type', 'processp', 'profiler-cpu-log', 'profiler-cpu-running-p', 'profiler-cpu-start', 'profiler-cpu-stop', 'profiler-memory-log', 'profiler-memory-running-p', 'profiler-memory-start', 'profiler-memory-stop', 'propertize', 'purecopy', 'put', 'put-text-property', 'put-unicode-property-internal', 'puthash', 'query-font', 'query-fontset', 'quit-process', 'raise-frame', 'random', 'rassoc', 'rassq', 're-search-backward', 're-search-forward', 'read', 'read-buffer', 'read-char', 'read-char-exclusive', 'read-coding-system', 'read-command', 'read-event', 'read-from-minibuffer', 'read-from-string', 'read-function', 'read-key-sequence', 'read-key-sequence-vector', 'read-no-blanks-input', 'read-non-nil-coding-system', 'read-string', 'read-variable', 'recent-auto-save-p', 'recent-doskeys', 'recent-keys', 'recenter', 'recursion-depth', 'recursive-edit', 'redirect-debugging-output', 'redirect-frame-focus', 'redisplay', 'redraw-display', 'redraw-frame', 'regexp-quote', 'region-beginning', 'region-end', 'register-ccl-program', 'register-code-conversion-map', 'remhash', 'remove-list-of-text-properties', 'remove-text-properties', 'rename-buffer', 'rename-file', 'replace-match', 'reset-this-command-lengths', 'resize-mini-window-internal', 'restore-buffer-modified-p', 'resume-tty', 'reverse', 'round', 'run-hook-with-args', 'run-hook-with-args-until-failure', 'run-hook-with-args-until-success', 'run-hook-wrapped', 'run-hooks', 'run-window-configuration-change-hook', 'run-window-scroll-functions', 'safe-length', 'scan-lists', 'scan-sexps', 'scroll-down', 'scroll-left', 'scroll-other-window', 'scroll-right', 'scroll-up', 'search-backward', 'search-forward', 'secure-hash', 'select-frame', 'select-window', 'selected-frame', 'selected-window', 'self-insert-command', 'send-string-to-terminal', 'sequencep', 'serial-process-configure', 'set', 'set-buffer', 'set-buffer-auto-saved', 'set-buffer-major-mode', 'set-buffer-modified-p', 'set-buffer-multibyte', 'set-case-table', 'set-category-table', 'set-char-table-extra-slot', 'set-char-table-parent', 'set-char-table-range', 'set-charset-plist', 'set-charset-priority', 'set-coding-system-priority', 'set-cursor-size', 'set-default', 'set-default-file-modes', 'set-default-toplevel-value', 'set-file-acl', 'set-file-modes', 'set-file-selinux-context', 'set-file-times', 'set-fontset-font', 'set-frame-height', 'set-frame-position', 'set-frame-selected-window', 'set-frame-size', 'set-frame-width', 'set-fringe-bitmap-face', 'set-input-interrupt-mode', 'set-input-meta-mode', 'set-input-mode', 'set-keyboard-coding-system-internal', 'set-keymap-parent', 'set-marker', 'set-marker-insertion-type', 'set-match-data', 'set-message-beep', 'set-minibuffer-window', 'set-mouse-pixel-position', 'set-mouse-position', 'set-network-process-option', 'set-output-flow-control', 'set-process-buffer', 'set-process-coding-system', 
'set-process-datagram-address', 'set-process-filter', 'set-process-filter-multibyte', 'set-process-inherit-coding-system-flag', 'set-process-plist', 'set-process-query-on-exit-flag', 'set-process-sentinel', 'set-process-window-size', 'set-quit-char', 'set-safe-terminal-coding-system-internal', 'set-screen-color', 'set-standard-case-table', 'set-syntax-table', 'set-terminal-coding-system-internal', 'set-terminal-local-value', 'set-terminal-parameter', 'set-text-properties', 'set-time-zone-rule', 'set-visited-file-modtime', 'set-window-buffer', 'set-window-combination-limit', 'set-window-configuration', 'set-window-dedicated-p', 'set-window-display-table', 'set-window-fringes', 'set-window-hscroll', 'set-window-margins', 'set-window-new-normal', 'set-window-new-pixel', 'set-window-new-total', 'set-window-next-buffers', 'set-window-parameter', 'set-window-point', 'set-window-prev-buffers', 'set-window-redisplay-end-trigger', 'set-window-scroll-bars', 'set-window-start', 'set-window-vscroll', 'setcar', 'setcdr', 'setplist', 'show-face-resources', 'signal', 'signal-process', 'sin', 'single-key-description', 'skip-chars-backward', 'skip-chars-forward', 'skip-syntax-backward', 'skip-syntax-forward', 'sleep-for', 'sort', 'sort-charsets', 'special-variable-p', 'split-char', 'split-window-internal', 'sqrt', 'standard-case-table', 'standard-category-table', 'standard-syntax-table', 'start-kbd-macro', 'start-process', 'stop-process', 'store-kbd-macro-event', 'string', 'string-as-multibyte', 'string-as-unibyte', 'string-bytes', 'string-collate-equalp', 'string-collate-lessp', 'string-equal', 'string-lessp', 'string-make-multibyte', 'string-make-unibyte', 'string-match', 'string-to-char', 'string-to-multibyte', 'string-to-number', 'string-to-syntax', 'string-to-unibyte', 'string-width', 'stringp', 'subr-name', 'subrp', 'subst-char-in-region', 'substitute-command-keys', 'substitute-in-file-name', 'substring', 'substring-no-properties', 'suspend-emacs', 'suspend-tty', 'suspicious-object', 'sxhash', 'symbol-function', 'symbol-name', 'symbol-plist', 'symbol-value', 'symbolp', 'syntax-table', 'syntax-table-p', 'system-groups', 'system-move-file-to-trash', 'system-name', 'system-users', 'tan', 'terminal-coding-system', 'terminal-list', 'terminal-live-p', 'terminal-local-value', 'terminal-name', 'terminal-parameter', 'terminal-parameters', 'terpri', 'test-completion', 'text-char-description', 'text-properties-at', 'text-property-any', 'text-property-not-all', 'this-command-keys', 'this-command-keys-vector', 'this-single-command-keys', 'this-single-command-raw-keys', 'time-add', 'time-less-p', 'time-subtract', 'tool-bar-get-system-style', 'tool-bar-height', 'tool-bar-pixel-width', 'top-level', 'trace-redisplay', 'trace-to-stderr', 'translate-region-internal', 'transpose-regions', 'truncate', 'try-completion', 'tty-display-color-cells', 'tty-display-color-p', 'tty-no-underline', 'tty-suppress-bold-inverse-default-colors', 'tty-top-frame', 'tty-type', 'type-of', 'undo-boundary', 'unencodable-char-position', 'unhandled-file-name-directory', 'unibyte-char-to-multibyte', 'unibyte-string', 'unicode-property-table-internal', 'unify-charset', 'unintern', 'unix-sync', 'unlock-buffer', 'upcase', 'upcase-initials', 'upcase-initials-region', 'upcase-region', 'upcase-word', 'use-global-map', 'use-local-map', 'user-full-name', 'user-login-name', 'user-real-login-name', 'user-real-uid', 'user-uid', 'variable-binding-locus', 'vconcat', 'vector', 'vector-or-char-table-p', 'vectorp', 'verify-visited-file-modtime', 
'vertical-motion', 'visible-frame-list', 'visited-file-modtime', 'w16-get-clipboard-data', 'w16-selection-exists-p', 'w16-set-clipboard-data', 'w32-battery-status', 'w32-default-color-map', 'w32-define-rgb-color', 'w32-display-monitor-attributes-list', 'w32-frame-menu-bar-size', 'w32-frame-rect', 'w32-get-clipboard-data', 'w32-get-codepage-charset', 'w32-get-console-codepage', 'w32-get-console-output-codepage', 'w32-get-current-locale-id', 'w32-get-default-locale-id', 'w32-get-keyboard-layout', 'w32-get-locale-info', 'w32-get-valid-codepages', 'w32-get-valid-keyboard-layouts', 'w32-get-valid-locale-ids', 'w32-has-winsock', 'w32-long-file-name', 'w32-reconstruct-hot-key', 'w32-register-hot-key', 'w32-registered-hot-keys', 'w32-selection-exists-p', 'w32-send-sys-command', 'w32-set-clipboard-data', 'w32-set-console-codepage', 'w32-set-console-output-codepage', 'w32-set-current-locale', 'w32-set-keyboard-layout', 'w32-set-process-priority', 'w32-shell-execute', 'w32-short-file-name', 'w32-toggle-lock-key', 'w32-unload-winsock', 'w32-unregister-hot-key', 'w32-window-exists-p', 'w32notify-add-watch', 'w32notify-rm-watch', 'waiting-for-user-input-p', 'where-is-internal', 'widen', 'widget-apply', 'widget-get', 'widget-put', 'window-absolute-pixel-edges', 'window-at', 'window-body-height', 'window-body-width', 'window-bottom-divider-width', 'window-buffer', 'window-combination-limit', 'window-configuration-frame', 'window-configuration-p', 'window-dedicated-p', 'window-display-table', 'window-edges', 'window-end', 'window-frame', 'window-fringes', 'window-header-line-height', 'window-hscroll', 'window-inside-absolute-pixel-edges', 'window-inside-edges', 'window-inside-pixel-edges', 'window-left-child', 'window-left-column', 'window-line-height', 'window-list', 'window-list-1', 'window-live-p', 'window-margins', 'window-minibuffer-p', 'window-mode-line-height', 'window-new-normal', 'window-new-pixel', 'window-new-total', 'window-next-buffers', 'window-next-sibling', 'window-normal-size', 'window-old-point', 'window-parameter', 'window-parameters', 'window-parent', 'window-pixel-edges', 'window-pixel-height', 'window-pixel-left', 'window-pixel-top', 'window-pixel-width', 'window-point', 'window-prev-buffers', 'window-prev-sibling', 'window-redisplay-end-trigger', 'window-resize-apply', 'window-resize-apply-total', 'window-right-divider-width', 'window-scroll-bar-height', 'window-scroll-bar-width', 'window-scroll-bars', 'window-start', 'window-system', 'window-text-height', 'window-text-pixel-size', 'window-text-width', 'window-top-child', 'window-top-line', 'window-total-height', 'window-total-width', 'window-use-time', 'window-valid-p', 'window-vscroll', 'windowp', 'write-char', 'write-region', 'x-backspace-delete-keys-p', 'x-change-window-property', 'x-change-window-property', 'x-close-connection', 'x-close-connection', 'x-create-frame', 'x-create-frame', 'x-delete-window-property', 'x-delete-window-property', 'x-disown-selection-internal', 'x-display-backing-store', 'x-display-backing-store', 'x-display-color-cells', 'x-display-color-cells', 'x-display-grayscale-p', 'x-display-grayscale-p', 'x-display-list', 'x-display-list', 'x-display-mm-height', 'x-display-mm-height', 'x-display-mm-width', 'x-display-mm-width', 'x-display-monitor-attributes-list', 'x-display-pixel-height', 'x-display-pixel-height', 'x-display-pixel-width', 'x-display-pixel-width', 'x-display-planes', 'x-display-planes', 'x-display-save-under', 'x-display-save-under', 'x-display-screens', 'x-display-screens', 
'x-display-visual-class', 'x-display-visual-class', 'x-family-fonts', 'x-file-dialog', 'x-file-dialog', 'x-file-dialog', 'x-focus-frame', 'x-frame-geometry', 'x-frame-geometry', 'x-get-atom-name', 'x-get-resource', 'x-get-selection-internal', 'x-hide-tip', 'x-hide-tip', 'x-list-fonts', 'x-load-color-file', 'x-menu-bar-open-internal', 'x-menu-bar-open-internal', 'x-open-connection', 'x-open-connection', 'x-own-selection-internal', 'x-parse-geometry', 'x-popup-dialog', 'x-popup-menu', 'x-register-dnd-atom', 'x-select-font', 'x-select-font', 'x-selection-exists-p', 'x-selection-owner-p', 'x-send-client-message', 'x-server-max-request-size', 'x-server-max-request-size', 'x-server-vendor', 'x-server-vendor', 'x-server-version', 'x-server-version', 'x-show-tip', 'x-show-tip', 'x-synchronize', 'x-synchronize', 'x-uses-old-gtk-dialog', 'x-window-property', 'x-window-property', 'x-wm-set-size-hint', 'xw-color-defined-p', 'xw-color-defined-p', 'xw-color-values', 'xw-color-values', 'xw-display-color-p', 'xw-display-color-p', 'yes-or-no-p', 'zlib-available-p', 'zlib-decompress-region', 'forward-point', )) builtin_function_highlighted = set(( 'defvaralias', 'provide', 'require', 'with-no-warnings', 'define-widget', 'with-electric-help', 'throw', 'defalias', 'featurep' )) lambda_list_keywords = set(( '&allow-other-keys', '&aux', '&body', '&environment', '&key', '&optional', '&rest', '&whole', )) error_keywords = set(( 'cl-assert', 'cl-check-type', 'error', 'signal', 'user-error', 'warn', )) def get_tokens_unprocessed(self, text): stack = ['root'] for index, token, value in RegexLexer.get_tokens_unprocessed(self, text, stack): if token is Name.Variable: if value in EmacsLispLexer.builtin_function: yield index, Name.Function, value continue if value in EmacsLispLexer.special_forms: yield index, Keyword, value continue if value in EmacsLispLexer.error_keywords: yield index, Name.Exception, value continue if value in EmacsLispLexer.builtin_function_highlighted: yield index, Name.Builtin, value continue if value in EmacsLispLexer.macros: yield index, Name.Builtin, value continue if value in EmacsLispLexer.lambda_list_keywords: yield index, Keyword.Pseudo, value continue yield index, token, value tokens = { 'root': [ default('body'), ], 'body': [ # whitespace (r'\s+', Text), # single-line comment (r';.*$', Comment.Single), # strings and characters (r'"', String, 'string'), (r'\?([^\\]|\\.)', String.Char), # quoting (r":" + symbol, Name.Builtin), (r"::" + symbol, String.Symbol), (r"'" + symbol, String.Symbol), (r"'", Operator), (r"`", Operator), # decimal numbers (r'[-+]?\d+\.?' 
+ terminated, Number.Integer), (r'[-+]?\d+/\d+' + terminated, Number), (r'[-+]?(\d*\.\d+([defls][-+]?\d+)?|\d+(\.\d*)?[defls][-+]?\d+)' + terminated, Number.Float), # vectors (r'\[|\]', Punctuation), # uninterned symbol (r'#:' + symbol, String.Symbol), # read syntax for char tables (r'#\^\^?', Operator), # function shorthand (r'#\'', Name.Function), # binary rational (r'#[bB][+-]?[01]+(/[01]+)?', Number.Bin), # octal rational (r'#[oO][+-]?[0-7]+(/[0-7]+)?', Number.Oct), # hex rational (r'#[xX][+-]?[0-9a-fA-F]+(/[0-9a-fA-F]+)?', Number.Hex), # radix rational (r'#\d+r[+-]?[0-9a-zA-Z]+(/[0-9a-zA-Z]+)?', Number), # reference (r'#\d+=', Operator), (r'#\d+#', Operator), # special operators that should have been parsed already (r'(,@|,|\.|:)', Operator), # special constants (r'(t|nil)' + terminated, Name.Constant), # functions and variables (r'\*' + symbol + r'\*', Name.Variable.Global), (symbol, Name.Variable), # parentheses (r'#\(', Operator, 'body'), (r'\(', Punctuation, 'body'), (r'\)', Punctuation, '#pop'), ], 'string': [ (r'[^"\\`]+', String), (r'`%s\'' % symbol, String.Symbol), (r'`', String), (r'\\.', String), (r'\\\n', String), (r'"', String, '#pop'), ], } class ShenLexer(RegexLexer): """ Lexer for `Shen `_ source code. .. versionadded:: 2.1 """ name = 'Shen' aliases = ['shen'] filenames = ['*.shen'] mimetypes = ['text/x-shen', 'application/x-shen'] DECLARATIONS = ( 'datatype', 'define', 'defmacro', 'defprolog', 'defcc', 'synonyms', 'declare', 'package', 'type', 'function', ) SPECIAL_FORMS = ( 'lambda', 'get', 'let', 'if', 'cases', 'cond', 'put', 'time', 'freeze', 'value', 'load', '$', 'protect', 'or', 'and', 'not', 'do', 'output', 'prolog?', 'trap-error', 'error', 'make-string', '/.', 'set', '@p', '@s', '@v', ) BUILTINS = ( '==', '=', '*', '+', '-', '/', '<', '>', '>=', '<=', '<-address', '<-vector', 'abort', 'absvector', 'absvector?', 'address->', 'adjoin', 'append', 'arity', 'assoc', 'bind', 'boolean?', 'bound?', 'call', 'cd', 'close', 'cn', 'compile', 'concat', 'cons', 'cons?', 'cut', 'destroy', 'difference', 'element?', 'empty?', 'enable-type-theory', 'error-to-string', 'eval', 'eval-kl', 'exception', 'explode', 'external', 'fail', 'fail-if', 'file', 'findall', 'fix', 'fst', 'fwhen', 'gensym', 'get-time', 'hash', 'hd', 'hdstr', 'hdv', 'head', 'identical', 'implementation', 'in', 'include', 'include-all-but', 'inferences', 'input', 'input+', 'integer?', 'intern', 'intersection', 'is', 'kill', 'language', 'length', 'limit', 'lineread', 'loaded', 'macro', 'macroexpand', 'map', 'mapcan', 'maxinferences', 'mode', 'n->string', 'nl', 'nth', 'null', 'number?', 'occurrences', 'occurs-check', 'open', 'os', 'out', 'port', 'porters', 'pos', 'pr', 'preclude', 'preclude-all-but', 'print', 'profile', 'profile-results', 'ps', 'quit', 'read', 'read+', 'read-byte', 'read-file', 'read-file-as-bytelist', 'read-file-as-string', 'read-from-string', 'release', 'remove', 'return', 'reverse', 'run', 'save', 'set', 'simple-error', 'snd', 'specialise', 'spy', 'step', 'stinput', 'stoutput', 'str', 'string->n', 'string->symbol', 'string?', 'subst', 'symbol?', 'systemf', 'tail', 'tc', 'tc?', 'thaw', 'tl', 'tlstr', 'tlv', 'track', 'tuple?', 'undefmacro', 'unify', 'unify!', 'union', 'unprofile', 'unspecialise', 'untrack', 'variable?', 'vector', 'vector->', 'vector?', 'verified', 'version', 'warn', 'when', 'write-byte', 'write-to-file', 'y-or-n?', ) BUILTINS_ANYWHERE = ('where', 'skip', '>>', '_', '!', '', '') MAPPINGS = dict((s, Keyword) for s in DECLARATIONS) MAPPINGS.update((s, Name.Builtin) for s in BUILTINS) 
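    # Symbols in head position that are not listed in MAPPINGS fall back to
    # Name.Function (see _process_symbols below).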
MAPPINGS.update((s, Keyword) for s in SPECIAL_FORMS) valid_symbol_chars = r'[\w!$%*+,<=>?/.\'@&#:-]' valid_name = '%s+' % valid_symbol_chars symbol_name = r'[a-z!$%%*+,<=>?/.\'@&#_-]%s*' % valid_symbol_chars variable = r'[A-Z]%s*' % valid_symbol_chars tokens = { 'string': [ (r'"', String, '#pop'), (r'c#\d{1,3};', String.Escape), (r'~[ARS%]', String.Interpol), (r'(?s).', String), ], 'root': [ (r'(?s)\\\*.*?\*\\', Comment.Multiline), # \* ... *\ (r'\\\\.*', Comment.Single), # \\ ... (r'\s+', Text), (r'_{5,}', Punctuation), (r'={5,}', Punctuation), (r'(;|:=|\||--?>|<--?)', Punctuation), (r'(:-|:|\{|\})', Literal), (r'[+-]*\d*\.\d+(e[+-]?\d+)?', Number.Float), (r'[+-]*\d+', Number.Integer), (r'"', String, 'string'), (variable, Name.Variable), (r'(true|false|<>|\[\])', Keyword.Pseudo), (symbol_name, Literal), (r'(\[|\]|\(|\))', Punctuation), ], } def get_tokens_unprocessed(self, text): tokens = RegexLexer.get_tokens_unprocessed(self, text) tokens = self._process_symbols(tokens) tokens = self._process_declarations(tokens) return tokens def _relevant(self, token): return token not in (Text, Comment.Single, Comment.Multiline) def _process_declarations(self, tokens): opening_paren = False for index, token, value in tokens: yield index, token, value if self._relevant(token): if opening_paren and token == Keyword and value in self.DECLARATIONS: declaration = value for index, token, value in \ self._process_declaration(declaration, tokens): yield index, token, value opening_paren = value == '(' and token == Punctuation def _process_symbols(self, tokens): opening_paren = False for index, token, value in tokens: if opening_paren and token in (Literal, Name.Variable): token = self.MAPPINGS.get(value, Name.Function) elif token == Literal and value in self.BUILTINS_ANYWHERE: token = Name.Builtin opening_paren = value == '(' and token == Punctuation yield index, token, value def _process_declaration(self, declaration, tokens): for index, token, value in tokens: if self._relevant(token): break yield index, token, value if declaration == 'datatype': prev_was_colon = False token = Keyword.Type if token == Literal else token yield index, token, value for index, token, value in tokens: if prev_was_colon and token == Literal: token = Keyword.Type yield index, token, value if self._relevant(token): prev_was_colon = token == Literal and value == ':' elif declaration == 'package': token = Name.Namespace if token == Literal else token yield index, token, value elif declaration == 'define': token = Name.Function if token == Literal else token yield index, token, value for index, token, value in tokens: if self._relevant(token): break yield index, token, value if value == '{' and token == Literal: yield index, Punctuation, value for index, token, value in self._process_signature(tokens): yield index, token, value else: yield index, token, value else: token = Name.Function if token == Literal else token yield index, token, value return def _process_signature(self, tokens): for index, token, value in tokens: if token == Literal and value == '}': yield index, Punctuation, value return elif token in (Literal, Name.Function): token = Name.Variable if value.istitle() else Keyword.Type yield index, token, value class CPSALexer(SchemeLexer): """ A CPSA lexer based on the CPSA language as of version 2.2.12 .. versionadded:: 2.1 """ name = 'CPSA' aliases = ['cpsa'] filenames = ['*.cpsa'] mimetypes = [] # list of known keywords and builtins taken form vim 6.4 scheme.vim # syntax file. 
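    # Illustrative usage sketch (not part of the original source; it assumes only
    # the public Pygments API: highlight(), a lexer instance and a formatter, and
    # uses a made-up toy CPSA snippet):
    #
    #     from pygments import highlight
    #     from pygments.lexers import CPSALexer
    #     from pygments.formatters import TerminalFormatter
    #
    #     code = '(defprotocol toy basic (defrole init (vars (a name)) (trace (send a))))'
    #     print(highlight(code, CPSALexer(), TerminalFormatter()))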
_keywords = ( 'herald', 'vars', 'defmacro', 'include', 'defprotocol', 'defrole', 'defskeleton', 'defstrand', 'deflistener', 'non-orig', 'uniq-orig', 'pen-non-orig', 'precedes', 'trace', 'send', 'recv', 'name', 'text', 'skey', 'akey', 'data', 'mesg', ) _builtins = ( 'cat', 'enc', 'hash', 'privk', 'pubk', 'invk', 'ltk', 'gen', 'exp', ) # valid names for identifiers # well, names can only not consist fully of numbers # but this should be good enough for now valid_name = r'[\w!$%&*+,/:<=>?@^~|-]+' tokens = { 'root': [ # the comments - always starting with semicolon # and going to the end of the line (r';.*$', Comment.Single), # whitespaces - usually not relevant (r'\s+', Text), # numbers (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), # support for uncommon kinds of numbers - # have to figure out what the characters mean # (r'(#e|#i|#b|#o|#d|#x)[\d.]+', Number), # strings, symbols and characters (r'"(\\\\|\\"|[^"])*"', String), (r"'" + valid_name, String.Symbol), (r"#\\([()/'\"._!§$%& ?=+-]|[a-zA-Z0-9]+)", String.Char), # constants (r'(#t|#f)', Name.Constant), # special operators (r"('|#|`|,@|,|\.)", Operator), # highlight the keywords (words(_keywords, suffix=r'\b'), Keyword), # first variable in a quoted string like # '(this is syntactic sugar) (r"(?<='\()" + valid_name, Name.Variable), (r"(?<=#\()" + valid_name, Name.Variable), # highlight the builtins (words(_builtins, prefix=r'(?<=\()', suffix=r'\b'), Name.Builtin), # the remaining functions (r'(?<=\()' + valid_name, Name.Function), # find the remaining variables (valid_name, Name.Variable), # the famous parentheses! (r'(\(|\))', Punctuation), (r'(\[|\])', Punctuation), ], } class XtlangLexer(RegexLexer): """An xtlang lexer for the `Extempore programming environment `_. This is a mixture of Scheme and xtlang, really. Keyword lists are taken from the Extempore Emacs mode (https://github.com/extemporelang/extempore-emacs-mode) .. 
versionadded:: 2.2 """ name = 'xtlang' aliases = ['extempore'] filenames = ['*.xtm'] mimetypes = [] common_keywords = ( 'lambda', 'define', 'if', 'else', 'cond', 'and', 'or', 'let', 'begin', 'set!', 'map', 'for-each', ) scheme_keywords = ( 'do', 'delay', 'quasiquote', 'unquote', 'unquote-splicing', 'eval', 'case', 'let*', 'letrec', 'quote', ) xtlang_bind_keywords = ( 'bind-func', 'bind-val', 'bind-lib', 'bind-type', 'bind-alias', 'bind-poly', 'bind-dylib', 'bind-lib-func', 'bind-lib-val', ) xtlang_keywords = ( 'letz', 'memzone', 'cast', 'convert', 'dotimes', 'doloop', ) common_functions = ( '*', '+', '-', '/', '<', '<=', '=', '>', '>=', '%', 'abs', 'acos', 'angle', 'append', 'apply', 'asin', 'assoc', 'assq', 'assv', 'atan', 'boolean?', 'caaaar', 'caaadr', 'caaar', 'caadar', 'caaddr', 'caadr', 'caar', 'cadaar', 'cadadr', 'cadar', 'caddar', 'cadddr', 'caddr', 'cadr', 'car', 'cdaaar', 'cdaadr', 'cdaar', 'cdadar', 'cdaddr', 'cdadr', 'cdar', 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr', 'cdddr', 'cddr', 'cdr', 'ceiling', 'cons', 'cos', 'floor', 'length', 'list', 'log', 'max', 'member', 'min', 'modulo', 'not', 'reverse', 'round', 'sin', 'sqrt', 'substring', 'tan', 'println', 'random', 'null?', 'callback', 'now', ) scheme_functions = ( 'call-with-current-continuation', 'call-with-input-file', 'call-with-output-file', 'call-with-values', 'call/cc', 'char->integer', 'char-alphabetic?', 'char-ci<=?', 'char-ci=?', 'char-ci>?', 'char-downcase', 'char-lower-case?', 'char-numeric?', 'char-ready?', 'char-upcase', 'char-upper-case?', 'char-whitespace?', 'char<=?', 'char=?', 'char>?', 'char?', 'close-input-port', 'close-output-port', 'complex?', 'current-input-port', 'current-output-port', 'denominator', 'display', 'dynamic-wind', 'eof-object?', 'eq?', 'equal?', 'eqv?', 'even?', 'exact->inexact', 'exact?', 'exp', 'expt', 'force', 'gcd', 'imag-part', 'inexact->exact', 'inexact?', 'input-port?', 'integer->char', 'integer?', 'interaction-environment', 'lcm', 'list->string', 'list->vector', 'list-ref', 'list-tail', 'list?', 'load', 'magnitude', 'make-polar', 'make-rectangular', 'make-string', 'make-vector', 'memq', 'memv', 'negative?', 'newline', 'null-environment', 'number->string', 'number?', 'numerator', 'odd?', 'open-input-file', 'open-output-file', 'output-port?', 'pair?', 'peek-char', 'port?', 'positive?', 'procedure?', 'quotient', 'rational?', 'rationalize', 'read', 'read-char', 'real-part', 'real?', 'remainder', 'scheme-report-environment', 'set-car!', 'set-cdr!', 'string', 'string->list', 'string->number', 'string->symbol', 'string-append', 'string-ci<=?', 'string-ci=?', 'string-ci>?', 'string-copy', 'string-fill!', 'string-length', 'string-ref', 'string-set!', 'string<=?', 'string=?', 'string>?', 'string?', 'symbol->string', 'symbol?', 'transcript-off', 'transcript-on', 'truncate', 'values', 'vector', 'vector->list', 'vector-fill!', 'vector-length', 'vector?', 'with-input-from-file', 'with-output-to-file', 'write', 'write-char', 'zero?', ) xtlang_functions = ( 'toString', 'afill!', 'pfill!', 'tfill!', 'tbind', 'vfill!', 'array-fill!', 'pointer-fill!', 'tuple-fill!', 'vector-fill!', 'free', 'array', 'tuple', 'list', '~', 'cset!', 'cref', '&', 'bor', 'ang-names', '<<', '>>', 'nil', 'printf', 'sprintf', 'null', 'now', 'pset!', 'pref-ptr', 'vset!', 'vref', 'aset!', 'aref', 'aref-ptr', 'tset!', 'tref', 'tref-ptr', 'salloc', 'halloc', 'zalloc', 'alloc', 'schedule', 'exp', 'log', 'sin', 'cos', 'tan', 'asin', 'acos', 'atan', 'sqrt', 'expt', 'floor', 'ceiling', 'truncate', 'round', 'llvm_printf', 
'push_zone', 'pop_zone', 'memzone', 'callback', 'llvm_sprintf', 'make-array', 'array-set!', 'array-ref', 'array-ref-ptr', 'pointer-set!', 'pointer-ref', 'pointer-ref-ptr', 'stack-alloc', 'heap-alloc', 'zone-alloc', 'make-tuple', 'tuple-set!', 'tuple-ref', 'tuple-ref-ptr', 'closure-set!', 'closure-ref', 'pref', 'pdref', 'impc_null', 'bitcast', 'void', 'ifret', 'ret->', 'clrun->', 'make-env-zone', 'make-env', '<>', 'dtof', 'ftod', 'i1tof', 'i1tod', 'i1toi8', 'i1toi32', 'i1toi64', 'i8tof', 'i8tod', 'i8toi1', 'i8toi32', 'i8toi64', 'i32tof', 'i32tod', 'i32toi1', 'i32toi8', 'i32toi64', 'i64tof', 'i64tod', 'i64toi1', 'i64toi8', 'i64toi32', ) # valid names for Scheme identifiers (names cannot consist fully # of numbers, but this should be good enough for now) valid_scheme_name = r'[\w!$%&*+,/:<=>?@^~|-]+' # valid characters in xtlang names & types valid_xtlang_name = r'[\w.!-]+' valid_xtlang_type = r'[]{}[\w<>,*/|!-]+' tokens = { # keep track of when we're exiting the xtlang form 'xtlang': [ (r'\(', Punctuation, '#push'), (r'\)', Punctuation, '#pop'), (r'(?<=bind-func\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-val\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-type\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-alias\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-poly\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-lib\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-dylib\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-lib-func\s)' + valid_xtlang_name, Name.Function), (r'(?<=bind-lib-val\s)' + valid_xtlang_name, Name.Function), # type annotations (r':' + valid_xtlang_type, Keyword.Type), # types (r'(<' + valid_xtlang_type + r'>|\|' + valid_xtlang_type + r'\||/' + valid_xtlang_type + r'/|' + valid_xtlang_type + r'\*)\**', Keyword.Type), # keywords (words(xtlang_keywords, prefix=r'(?<=\()'), Keyword), # builtins (words(xtlang_functions, prefix=r'(?<=\()'), Name.Function), include('common'), # variables (valid_xtlang_name, Name.Variable), ], 'scheme': [ # quoted symbols (r"'" + valid_scheme_name, String.Symbol), # char literals (r"#\\([()/'\"._!§$%& ?=+-]|[a-zA-Z0-9]+)", String.Char), # special operators (r"('|#|`|,@|,|\.)", Operator), # keywords (words(scheme_keywords, prefix=r'(?<=\()'), Keyword), # builtins (words(scheme_functions, prefix=r'(?<=\()'), Name.Function), include('common'), # variables (valid_scheme_name, Name.Variable), ], # common to both xtlang and Scheme 'common': [ # comments (r';.*$', Comment.Single), # whitespaces - usually not relevant (r'\s+', Text), # numbers (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), # binary/oct/hex literals (r'(#b|#o|#x)[\d.]+', Number), # strings (r'"(\\\\|\\"|[^"])*"', String), # true/false constants (r'(#t|#f)', Name.Constant), # keywords (words(common_keywords, prefix=r'(?<=\()'), Keyword), # builtins (words(common_functions, prefix=r'(?<=\()'), Name.Function), # the famous parentheses! (r'(\(|\))', Punctuation), ], 'root': [ # go into xtlang mode (words(xtlang_bind_keywords, prefix=r'(?<=\()', suffix=r'\b'), Keyword, 'xtlang'), include('scheme') ], } class FennelLexer(RegexLexer): """A lexer for the `Fennel programming language `_. Fennel compiles to Lua, so all the Lua builtins are recognized as well as the special forms that are particular to the Fennel compiler. .. 
versionadded:: 2.3 """ name = 'Fennel' aliases = ['fennel', 'fnl'] filenames = ['*.fnl'] # these two lists are taken from fennel-mode.el: # https://gitlab.com/technomancy/fennel-mode # this list is current as of Fennel version 0.1.0. special_forms = ( u'require-macros', u'eval-compiler', u'do', u'values', u'if', u'when', u'each', u'for', u'fn', u'lambda', u'λ', u'set', u'global', u'var', u'local', u'let', u'tset', u'doto', u'set-forcibly!', u'defn', u'partial', u'while', u'or', u'and', u'true', u'false', u'nil', u'.', u'+', u'..', u'^', u'-', u'*', u'%', u'/', u'>', u'<', u'>=', u'<=', u'=', u'~=', u'#', u'...', u':', u'->', u'->>', ) # Might be nicer to use the list from _lua_builtins.py but it's unclear how? builtins = ( u'_G', u'_VERSION', u'arg', u'assert', u'bit32', u'collectgarbage', u'coroutine', u'debug', u'dofile', u'error', u'getfenv', u'getmetatable', u'io', u'ipairs', u'load', u'loadfile', u'loadstring', u'math', u'next', u'os', u'package', u'pairs', u'pcall', u'print', u'rawequal', u'rawget', u'rawlen', u'rawset', u'require', u'select', u'setfenv', u'setmetatable', u'string', u'table', u'tonumber', u'tostring', u'type', u'unpack', u'xpcall' ) # based on the scheme definition, but disallowing leading digits and commas valid_name = r'[a-zA-Z_!$%&*+/:<=>?@^~|-][\w!$%&*+/:<=>?@^~|\.-]*' tokens = { 'root': [ # the only comment form is a semicolon; goes to the end of the line (r';.*$', Comment.Single), (r'[,\s]+', Text), (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), (r'"(\\\\|\\"|[^"])*"', String), (r"'(\\\\|\\'|[^'])*'", String), # these are technically strings, but it's worth visually # distinguishing them because their intent is different # from regular strings. (r':' + valid_name, String.Symbol), # special forms are keywords (words(special_forms, suffix=' '), Keyword), # lua standard library are builtins (words(builtins, suffix=' '), Name.Builtin), # special-case the vararg symbol (r'\.\.\.', Name.Variable), # regular identifiers (valid_name, Name.Variable), # all your normal paired delimiters for your programming enjoyment (r'(\(|\))', Punctuation), (r'(\[|\])', Punctuation), (r'(\{|\})', Punctuation), ] } Pygments-2.3.1/pygments/lexers/haxe.py0000644000175000017500000007435613402534107017064 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.haxe ~~~~~~~~~~~~~~~~~~~~ Lexers for Haxe and related stuff. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import ExtendedRegexLexer, RegexLexer, include, bygroups, \ default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Whitespace __all__ = ['HaxeLexer', 'HxmlLexer'] class HaxeLexer(ExtendedRegexLexer): """ For Haxe source code (http://haxe.org/). .. 
versionadded:: 1.3 """ name = 'Haxe' aliases = ['hx', 'haxe', 'hxsl'] filenames = ['*.hx', '*.hxsl'] mimetypes = ['text/haxe', 'text/x-haxe', 'text/x-hx'] # keywords extracted from lexer.mll in the haxe compiler source keyword = (r'(?:function|class|static|var|if|else|while|do|for|' r'break|return|continue|extends|implements|import|' r'switch|case|default|public|private|try|untyped|' r'catch|new|this|throw|extern|enum|in|interface|' r'cast|override|dynamic|typedef|package|' r'inline|using|null|true|false|abstract)\b') # idtype in lexer.mll typeid = r'_*[A-Z]\w*' # combined ident and dollar and idtype ident = r'(?:_*[a-z]\w*|_+[0-9]\w*|' + typeid + r'|_+|\$\w+)' binop = (r'(?:%=|&=|\|=|\^=|\+=|\-=|\*=|/=|<<=|>\s*>\s*=|>\s*>\s*>\s*=|==|' r'!=|<=|>\s*=|&&|\|\||<<|>>>|>\s*>|\.\.\.|<|>|%|&|\||\^|\+|\*|' r'/|\-|=>|=)') # ident except keywords ident_no_keyword = r'(?!' + keyword + ')' + ident flags = re.DOTALL | re.MULTILINE preproc_stack = [] def preproc_callback(self, match, ctx): proc = match.group(2) if proc == 'if': # store the current stack self.preproc_stack.append(ctx.stack[:]) elif proc in ['else', 'elseif']: # restore the stack back to right before #if if self.preproc_stack: ctx.stack = self.preproc_stack[-1][:] elif proc == 'end': # remove the saved stack of previous #if if self.preproc_stack: self.preproc_stack.pop() # #if and #elseif should follow by an expr if proc in ['if', 'elseif']: ctx.stack.append('preproc-expr') # #error can be optionally follow by the error msg if proc in ['error']: ctx.stack.append('preproc-error') yield match.start(), Comment.Preproc, '#' + proc ctx.pos = match.end() tokens = { 'root': [ include('spaces'), include('meta'), (r'(?:package)\b', Keyword.Namespace, ('semicolon', 'package')), (r'(?:import)\b', Keyword.Namespace, ('semicolon', 'import')), (r'(?:using)\b', Keyword.Namespace, ('semicolon', 'using')), (r'(?:extern|private)\b', Keyword.Declaration), (r'(?:abstract)\b', Keyword.Declaration, 'abstract'), (r'(?:class|interface)\b', Keyword.Declaration, 'class'), (r'(?:enum)\b', Keyword.Declaration, 'enum'), (r'(?:typedef)\b', Keyword.Declaration, 'typedef'), # top-level expression # although it is not supported in haxe, but it is common to write # expression in web pages the positive lookahead here is to prevent # an infinite loop at the EOF (r'(?=.)', Text, 'expr-statement'), ], # space/tab/comment/preproc 'spaces': [ (r'\s+', Text), (r'//[^\n\r]*', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'(#)(if|elseif|else|end|error)\b', preproc_callback), ], 'string-single-interpol': [ (r'\$\{', String.Interpol, ('string-interpol-close', 'expr')), (r'\$\$', String.Escape), (r'\$(?=' + ident + ')', String.Interpol, 'ident'), include('string-single'), ], 'string-single': [ (r"'", String.Single, '#pop'), (r'\\.', String.Escape), (r'.', String.Single), ], 'string-double': [ (r'"', String.Double, '#pop'), (r'\\.', String.Escape), (r'.', String.Double), ], 'string-interpol-close': [ (r'\$'+ident, String.Interpol), (r'\}', String.Interpol, '#pop'), ], 'package': [ include('spaces'), (ident, Name.Namespace), (r'\.', Punctuation, 'import-ident'), default('#pop'), ], 'import': [ include('spaces'), (ident, Name.Namespace), (r'\*', Keyword), # wildcard import (r'\.', Punctuation, 'import-ident'), (r'in', Keyword.Namespace, 'ident'), default('#pop'), ], 'import-ident': [ include('spaces'), (r'\*', Keyword, '#pop'), # wildcard import (ident, Name.Namespace, '#pop'), ], 'using': [ include('spaces'), (ident, Name.Namespace), (r'\.', Punctuation, 'import-ident'), 
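            # anything else ends the `using` clause and returns to the enclosing state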
default('#pop'), ], 'preproc-error': [ (r'\s+', Comment.Preproc), (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), default('#pop'), ], 'preproc-expr': [ (r'\s+', Comment.Preproc), (r'\!', Comment.Preproc), (r'\(', Comment.Preproc, ('#pop', 'preproc-parenthesis')), (ident, Comment.Preproc, '#pop'), # Float (r'\.[0-9]+', Number.Float), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float), (r'[0-9]+\.[0-9]+', Number.Float), (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float), # Int (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), # String (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), ], 'preproc-parenthesis': [ (r'\s+', Comment.Preproc), (r'\)', Comment.Preproc, '#pop'), default('preproc-expr-in-parenthesis'), ], 'preproc-expr-chain': [ (r'\s+', Comment.Preproc), (binop, Comment.Preproc, ('#pop', 'preproc-expr-in-parenthesis')), default('#pop'), ], # same as 'preproc-expr' but able to chain 'preproc-expr-chain' 'preproc-expr-in-parenthesis': [ (r'\s+', Comment.Preproc), (r'\!', Comment.Preproc), (r'\(', Comment.Preproc, ('#pop', 'preproc-expr-chain', 'preproc-parenthesis')), (ident, Comment.Preproc, ('#pop', 'preproc-expr-chain')), # Float (r'\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, ('#pop', 'preproc-expr-chain')), # Int (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'preproc-expr-chain')), (r'[0-9]+', Number.Integer, ('#pop', 'preproc-expr-chain')), # String (r"'", String.Single, ('#pop', 'preproc-expr-chain', 'string-single')), (r'"', String.Double, ('#pop', 'preproc-expr-chain', 'string-double')), ], 'abstract': [ include('spaces'), default(('#pop', 'abstract-body', 'abstract-relation', 'abstract-opaque', 'type-param-constraint', 'type-name')), ], 'abstract-body': [ include('spaces'), (r'\{', Punctuation, ('#pop', 'class-body')), ], 'abstract-opaque': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'type')), default('#pop'), ], 'abstract-relation': [ include('spaces'), (r'(?:to|from)', Keyword.Declaration, 'type'), (r',', Punctuation), default('#pop'), ], 'meta': [ include('spaces'), (r'@', Name.Decorator, ('meta-body', 'meta-ident', 'meta-colon')), ], # optional colon 'meta-colon': [ include('spaces'), (r':', Name.Decorator, '#pop'), default('#pop'), ], # same as 'ident' but set token as Name.Decorator instead of Name 'meta-ident': [ include('spaces'), (ident, Name.Decorator, '#pop'), ], 'meta-body': [ include('spaces'), (r'\(', Name.Decorator, ('#pop', 'meta-call')), default('#pop'), ], 'meta-call': [ include('spaces'), (r'\)', Name.Decorator, '#pop'), default(('#pop', 'meta-call-sep', 'expr')), ], 'meta-call-sep': [ include('spaces'), (r'\)', Name.Decorator, '#pop'), (r',', Punctuation, ('#pop', 'meta-call')), ], 'typedef': [ include('spaces'), default(('#pop', 'typedef-body', 'type-param-constraint', 'type-name')), ], 'typedef-body': [ include('spaces'), (r'=', Operator, ('#pop', 'optional-semicolon', 'type')), ], 'enum': [ include('spaces'), default(('#pop', 'enum-body', 'bracket-open', 'type-param-constraint', 'type-name')), ], 'enum-body': [ include('spaces'), include('meta'), (r'\}', Punctuation, '#pop'), 
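            # each enum constructor: type parameters are consumed first ('type-param-constraint'),
            # then 'enum-member' handles the optional argument list and trailing type flag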
(ident_no_keyword, Name, ('enum-member', 'type-param-constraint')), ], 'enum-member': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'semicolon', 'flag', 'function-param')), default(('#pop', 'semicolon', 'flag')), ], 'class': [ include('spaces'), default(('#pop', 'class-body', 'bracket-open', 'extends', 'type-param-constraint', 'type-name')), ], 'extends': [ include('spaces'), (r'(?:extends|implements)\b', Keyword.Declaration, 'type'), (r',', Punctuation), # the comma is made optional here, since haxe2 # requires the comma but haxe3 does not allow it default('#pop'), ], 'bracket-open': [ include('spaces'), (r'\{', Punctuation, '#pop'), ], 'bracket-close': [ include('spaces'), (r'\}', Punctuation, '#pop'), ], 'class-body': [ include('spaces'), include('meta'), (r'\}', Punctuation, '#pop'), (r'(?:static|public|private|override|dynamic|inline|macro)\b', Keyword.Declaration), default('class-member'), ], 'class-member': [ include('spaces'), (r'(var)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'var')), (r'(function)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'class-method')), ], # local function, anonymous or not 'function-local': [ include('spaces'), (ident_no_keyword, Name.Function, ('#pop', 'optional-expr', 'flag', 'function-param', 'parenthesis-open', 'type-param-constraint')), default(('#pop', 'optional-expr', 'flag', 'function-param', 'parenthesis-open', 'type-param-constraint')), ], 'optional-expr': [ include('spaces'), include('expr'), default('#pop'), ], 'class-method': [ include('spaces'), (ident, Name.Function, ('#pop', 'optional-expr', 'flag', 'function-param', 'parenthesis-open', 'type-param-constraint')), ], # function arguments 'function-param': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r'\?', Punctuation), (ident_no_keyword, Name, ('#pop', 'function-param-sep', 'assign', 'flag')), ], 'function-param-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'function-param')), ], 'prop-get-set': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'prop-get-set-opt', 'comma', 'prop-get-set-opt')), default('#pop'), ], 'prop-get-set-opt': [ include('spaces'), (r'(?:default|null|never|dynamic|get|set)\b', Keyword, '#pop'), (ident_no_keyword, Text, '#pop'), # custom getter/setter ], 'expr-statement': [ include('spaces'), # makes semicolon optional here, just to avoid checking the last # one is bracket or not. 
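            # note: states pushed later in the tuple end up on top of the stack, so
            # 'expr' is processed first and 'optional-semicolon' only afterwards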
default(('#pop', 'optional-semicolon', 'expr')), ], 'expr': [ include('spaces'), (r'@', Name.Decorator, ('#pop', 'optional-expr', 'meta-body', 'meta-ident', 'meta-colon')), (r'(?:\+\+|\-\-|~(?!/)|!|\-)', Operator), (r'\(', Punctuation, ('#pop', 'expr-chain', 'parenthesis')), (r'(?:static|public|private|override|dynamic|inline)\b', Keyword.Declaration), (r'(?:function)\b', Keyword.Declaration, ('#pop', 'expr-chain', 'function-local')), (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket')), (r'(?:true|false|null)\b', Keyword.Constant, ('#pop', 'expr-chain')), (r'(?:this)\b', Keyword, ('#pop', 'expr-chain')), (r'(?:cast)\b', Keyword, ('#pop', 'expr-chain', 'cast')), (r'(?:try)\b', Keyword, ('#pop', 'catch', 'expr')), (r'(?:var)\b', Keyword.Declaration, ('#pop', 'var')), (r'(?:new)\b', Keyword, ('#pop', 'expr-chain', 'new')), (r'(?:switch)\b', Keyword, ('#pop', 'switch')), (r'(?:if)\b', Keyword, ('#pop', 'if')), (r'(?:do)\b', Keyword, ('#pop', 'do')), (r'(?:while)\b', Keyword, ('#pop', 'while')), (r'(?:for)\b', Keyword, ('#pop', 'for')), (r'(?:untyped|throw)\b', Keyword), (r'(?:return)\b', Keyword, ('#pop', 'optional-expr')), (r'(?:macro)\b', Keyword, ('#pop', 'macro')), (r'(?:continue|break)\b', Keyword, '#pop'), (r'(?:\$\s*[a-z]\b|\$(?!'+ident+'))', Name, ('#pop', 'dollar')), (ident_no_keyword, Name, ('#pop', 'expr-chain')), # Float (r'\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, ('#pop', 'expr-chain')), # Int (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'expr-chain')), (r'[0-9]+', Number.Integer, ('#pop', 'expr-chain')), # String (r"'", String.Single, ('#pop', 'expr-chain', 'string-single-interpol')), (r'"', String.Double, ('#pop', 'expr-chain', 'string-double')), # EReg (r'~/(\\\\|\\/|[^/\n])*/[gimsu]*', String.Regex, ('#pop', 'expr-chain')), # Array (r'\[', Punctuation, ('#pop', 'expr-chain', 'array-decl')), ], 'expr-chain': [ include('spaces'), (r'(?:\+\+|\-\-)', Operator), (binop, Operator, ('#pop', 'expr')), (r'(?:in)\b', Keyword, ('#pop', 'expr')), (r'\?', Operator, ('#pop', 'expr', 'ternary', 'expr')), (r'(\.)(' + ident_no_keyword + ')', bygroups(Punctuation, Name)), (r'\[', Punctuation, 'array-access'), (r'\(', Punctuation, 'call'), default('#pop'), ], # macro reification 'macro': [ include('spaces'), include('meta'), (r':', Punctuation, ('#pop', 'type')), (r'(?:extern|private)\b', Keyword.Declaration), (r'(?:abstract)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'abstract')), (r'(?:class|interface)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'macro-class')), (r'(?:enum)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'enum')), (r'(?:typedef)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'typedef')), default(('#pop', 'expr')), ], 'macro-class': [ (r'\{', Punctuation, ('#pop', 'class-body')), include('class') ], # cast can be written as "cast expr" or "cast(expr, type)" 'cast': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'cast-type', 'expr')), default(('#pop', 'expr')), ], # optionally give a type as the 2nd argument of cast() 'cast-type': [ include('spaces'), (r',', Punctuation, ('#pop', 'type')), default('#pop'), ], 'catch': [ include('spaces'), (r'(?:catch)\b', Keyword, ('expr', 'function-param', 'parenthesis-open')), default('#pop'), ], # do-while loop 'do': [ 
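            # the loop body is parsed first; the trailing while(...) is then expected by 'do-while'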
include('spaces'), default(('#pop', 'do-while', 'expr')), ], # the while after do 'do-while': [ include('spaces'), (r'(?:while)\b', Keyword, ('#pop', 'parenthesis', 'parenthesis-open')), ], 'while': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), ], 'for': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), ], 'if': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'else', 'optional-semicolon', 'expr', 'parenthesis')), ], 'else': [ include('spaces'), (r'(?:else)\b', Keyword, ('#pop', 'expr')), default('#pop'), ], 'switch': [ include('spaces'), default(('#pop', 'switch-body', 'bracket-open', 'expr')), ], 'switch-body': [ include('spaces'), (r'(?:case|default)\b', Keyword, ('case-block', 'case')), (r'\}', Punctuation, '#pop'), ], 'case': [ include('spaces'), (r':', Punctuation, '#pop'), default(('#pop', 'case-sep', 'case-guard', 'expr')), ], 'case-sep': [ include('spaces'), (r':', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'case')), ], 'case-guard': [ include('spaces'), (r'(?:if)\b', Keyword, ('#pop', 'parenthesis', 'parenthesis-open')), default('#pop'), ], # optional multiple expr under a case 'case-block': [ include('spaces'), (r'(?!(?:case|default)\b|\})', Keyword, 'expr-statement'), default('#pop'), ], 'new': [ include('spaces'), default(('#pop', 'call', 'parenthesis-open', 'type')), ], 'array-decl': [ include('spaces'), (r'\]', Punctuation, '#pop'), default(('#pop', 'array-decl-sep', 'expr')), ], 'array-decl-sep': [ include('spaces'), (r'\]', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'array-decl')), ], 'array-access': [ include('spaces'), default(('#pop', 'array-access-close', 'expr')), ], 'array-access-close': [ include('spaces'), (r'\]', Punctuation, '#pop'), ], 'comma': [ include('spaces'), (r',', Punctuation, '#pop'), ], 'colon': [ include('spaces'), (r':', Punctuation, '#pop'), ], 'semicolon': [ include('spaces'), (r';', Punctuation, '#pop'), ], 'optional-semicolon': [ include('spaces'), (r';', Punctuation, '#pop'), default('#pop'), ], # identity that CAN be a Haxe keyword 'ident': [ include('spaces'), (ident, Name, '#pop'), ], 'dollar': [ include('spaces'), (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket-close', 'expr')), default(('#pop', 'expr-chain')), ], 'type-name': [ include('spaces'), (typeid, Name, '#pop'), ], 'type-full-name': [ include('spaces'), (r'\.', Punctuation, 'ident'), default('#pop'), ], 'type': [ include('spaces'), (r'\?', Punctuation), (ident, Name, ('#pop', 'type-check', 'type-full-name')), (r'\{', Punctuation, ('#pop', 'type-check', 'type-struct')), (r'\(', Punctuation, ('#pop', 'type-check', 'type-parenthesis')), ], 'type-parenthesis': [ include('spaces'), default(('#pop', 'parenthesis-close', 'type')), ], 'type-check': [ include('spaces'), (r'->', Punctuation, ('#pop', 'type')), (r'<(?!=)', Punctuation, 'type-param'), default('#pop'), ], 'type-struct': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r'\?', Punctuation), (r'>', Punctuation, ('comma', 'type')), (ident_no_keyword, Name, ('#pop', 'type-struct-sep', 'type', 'colon')), include('class-body'), ], 'type-struct-sep': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-struct')), ], # type-param can be a normal type or a constant literal... 'type-param-type': [ # Float (r'\.[0-9]+', Number.Float, '#pop'), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.(?!' 
+ ident + r'|\.\.)', Number.Float, '#pop'), # Int (r'0x[0-9a-fA-F]+', Number.Hex, '#pop'), (r'[0-9]+', Number.Integer, '#pop'), # String (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), # EReg (r'~/(\\\\|\\/|[^/\n])*/[gim]*', String.Regex, '#pop'), # Array (r'\[', Operator, ('#pop', 'array-decl')), include('type'), ], # type-param part of a type # ie. the path in Map 'type-param': [ include('spaces'), default(('#pop', 'type-param-sep', 'type-param-type')), ], 'type-param-sep': [ include('spaces'), (r'>', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-param')), ], # optional type-param that may include constraint # ie. 'type-param-constraint': [ include('spaces'), (r'<(?!=)', Punctuation, ('#pop', 'type-param-constraint-sep', 'type-param-constraint-flag', 'type-name')), default('#pop'), ], 'type-param-constraint-sep': [ include('spaces'), (r'>', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-param-constraint-sep', 'type-param-constraint-flag', 'type-name')), ], # the optional constraint inside type-param 'type-param-constraint-flag': [ include('spaces'), (r':', Punctuation, ('#pop', 'type-param-constraint-flag-type')), default('#pop'), ], 'type-param-constraint-flag-type': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'type-param-constraint-flag-type-sep', 'type')), default(('#pop', 'type')), ], 'type-param-constraint-flag-type-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, 'type'), ], # a parenthesis expr that contain exactly one expr 'parenthesis': [ include('spaces'), default(('#pop', 'parenthesis-close', 'flag', 'expr')), ], 'parenthesis-open': [ include('spaces'), (r'\(', Punctuation, '#pop'), ], 'parenthesis-close': [ include('spaces'), (r'\)', Punctuation, '#pop'), ], 'var': [ include('spaces'), (ident_no_keyword, Text, ('#pop', 'var-sep', 'assign', 'flag', 'prop-get-set')), ], # optional more var decl. 
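        # e.g. `var a = 1, b:Int = 2;` (each ',' re-enters 'var' for the next declaration)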
'var-sep': [ include('spaces'), (r',', Punctuation, ('#pop', 'var')), default('#pop'), ], # optional assignment 'assign': [ include('spaces'), (r'=', Operator, ('#pop', 'expr')), default('#pop'), ], # optional type flag 'flag': [ include('spaces'), (r':', Punctuation, ('#pop', 'type')), default('#pop'), ], # colon as part of a ternary operator (?:) 'ternary': [ include('spaces'), (r':', Operator, '#pop'), ], # function call 'call': [ include('spaces'), (r'\)', Punctuation, '#pop'), default(('#pop', 'call-sep', 'expr')), ], # after a call param 'call-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'call')), ], # bracket can be block or object 'bracket': [ include('spaces'), (r'(?!(?:\$\s*[a-z]\b|\$(?!'+ident+')))' + ident_no_keyword, Name, ('#pop', 'bracket-check')), (r"'", String.Single, ('#pop', 'bracket-check', 'string-single')), (r'"', String.Double, ('#pop', 'bracket-check', 'string-double')), default(('#pop', 'block')), ], 'bracket-check': [ include('spaces'), (r':', Punctuation, ('#pop', 'object-sep', 'expr')), # is object default(('#pop', 'block', 'optional-semicolon', 'expr-chain')), # is block ], # code block 'block': [ include('spaces'), (r'\}', Punctuation, '#pop'), default('expr-statement'), ], # object in key-value pairs 'object': [ include('spaces'), (r'\}', Punctuation, '#pop'), default(('#pop', 'object-sep', 'expr', 'colon', 'ident-or-string')) ], # a key of an object 'ident-or-string': [ include('spaces'), (ident_no_keyword, Name, '#pop'), (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), ], # after a key-value pair in object 'object-sep': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'object')), ], } def analyse_text(text): if re.match(r'\w+\s*:\s*\w', text): return 0.3 class HxmlLexer(RegexLexer): """ Lexer for `haXe build `_ files. .. versionadded:: 1.6 """ name = 'Hxml' aliases = ['haxeml', 'hxml'] filenames = ['*.hxml'] tokens = { 'root': [ # Seperator (r'(--)(next)', bygroups(Punctuation, Generic.Heading)), # Compiler switches with one dash (r'(-)(prompt|debug|v)', bygroups(Punctuation, Keyword.Keyword)), # Compilerswitches with two dashes (r'(--)(neko-source|flash-strict|flash-use-stage|no-opt|no-traces|' r'no-inline|times|no-output)', bygroups(Punctuation, Keyword)), # Targets and other options that take an argument (r'(-)(cpp|js|neko|x|as3|swf9?|swf-lib|php|xml|main|lib|D|resource|' r'cp|cmd)( +)(.+)', bygroups(Punctuation, Keyword, Whitespace, String)), # Options that take only numerical arguments (r'(-)(swf-version)( +)(\d+)', bygroups(Punctuation, Keyword, Number.Integer)), # An Option that defines the size, the fps and the background # color of an flash movie (r'(-)(swf-header)( +)(\d+)(:)(\d+)(:)(\d+)(:)([A-Fa-f0-9]{6})', bygroups(Punctuation, Keyword, Whitespace, Number.Integer, Punctuation, Number.Integer, Punctuation, Number.Integer, Punctuation, Number.Hex)), # options with two dashes that takes arguments (r'(--)(js-namespace|php-front|php-lib|remap|gen-hx-classes)( +)' r'(.+)', bygroups(Punctuation, Keyword, Whitespace, String)), # Single line comment, multiline ones are not allowed. (r'#.*', Comment.Single) ] } Pygments-2.3.1/pygments/lexers/foxpro.py0000644000175000017500000006317413376260540017457 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.foxpro ~~~~~~~~~~~~~~~~~~~~~~ Simple lexer for Microsoft Visual FoxPro source code. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. 
:license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer from pygments.token import Punctuation, Text, Comment, Operator, Keyword, \ Name, String __all__ = ['FoxProLexer'] class FoxProLexer(RegexLexer): """Lexer for Microsoft Visual FoxPro language. FoxPro syntax allows to shorten all keywords and function names to 4 characters. Shortened forms are not recognized by this lexer. .. versionadded:: 1.6 """ name = 'FoxPro' aliases = ['foxpro', 'vfp', 'clipper', 'xbase'] filenames = ['*.PRG', '*.prg'] mimetype = [] flags = re.IGNORECASE | re.MULTILINE tokens = { 'root': [ (r';\s*\n', Punctuation), # consume newline (r'(^|\n)\s*', Text, 'newline'), # Square brackets may be used for array indices # and for string literal. Look for arrays # before matching string literals. (r'(?<=\w)\[[0-9, ]+\]', Text), (r'\'[^\'\n]*\'|"[^"\n]*"|\[[^]*]\]', String), (r'(^\s*\*|&&|&&).*?\n', Comment.Single), (r'(ABS|ACLASS|ACOPY|ACOS|ADATABASES|ADBOBJECTS|ADDBS|' r'ADDPROPERTY|ADEL|ADIR|ADLLS|ADOCKSTATE|AELEMENT|AERROR|' r'AEVENTS|AFIELDS|AFONT|AGETCLASS|AGETFILEVERSION|AINS|' r'AINSTANCE|ALANGUAGE|ALEN|ALIAS|ALINES|ALLTRIM|' r'AMEMBERS|AMOUSEOBJ|ANETRESOURCES|APRINTERS|APROCINFO|' r'ASC|ASCAN|ASELOBJ|ASESSIONS|ASIN|ASORT|ASQLHANDLES|' r'ASTACKINFO|ASUBSCRIPT|AT|AT_C|ATAGINFO|ATAN|ATC|ATCC|' r'ATCLINE|ATLINE|ATN2|AUSED|AVCXCLASSES|BAR|BARCOUNT|' r'BARPROMPT|BETWEEN|BINDEVENT|BINTOC|BITAND|BITCLEAR|' r'BITLSHIFT|BITNOT|BITOR|BITRSHIFT|BITSET|BITTEST|BITXOR|' r'BOF|CANDIDATE|CAPSLOCK|CAST|CDOW|CDX|CEILING|CHR|CHRSAW|' r'CHRTRAN|CHRTRANC|CLEARRESULTSET|CMONTH|CNTBAR|CNTPAD|COL|' r'COM|Functions|COMARRAY|COMCLASSINFO|COMPOBJ|COMPROP|' r'COMRETURNERROR|COS|CPCONVERT|CPCURRENT|CPDBF|CREATEBINARY|' r'CREATEOBJECT|CREATEOBJECTEX|CREATEOFFLINE|CTOBIN|CTOD|' r'CTOT|CURDIR|CURSORGETPROP|CURSORSETPROP|CURSORTOXML|' r'CURVAL|DATE|DATETIME|DAY|DBC|DBF|DBGETPROP|DBSETPROP|' r'DBUSED|DDEAbortTrans|DDEAdvise|DDEEnabled|DDEExecute|' r'DDEInitiate|DDELastError|DDEPoke|DDERequest|DDESetOption|' r'DDESetService|DDESetTopic|DDETerminate|DEFAULTEXT|' r'DELETED|DESCENDING|DIFFERENCE|DIRECTORY|DISKSPACE|' r'DisplayPath|DMY|DODEFAULT|DOW|DRIVETYPE|DROPOFFLINE|' r'DTOC|DTOR|DTOS|DTOT|EDITSOURCE|EMPTY|EOF|ERROR|EVAL(UATE)?|' r'EVENTHANDLER|EVL|EXECSCRIPT|EXP|FCHSIZE|FCLOSE|FCOUNT|' r'FCREATE|FDATE|FEOF|FERROR|FFLUSH|FGETS|FIELD|FILE|' r'FILETOSTR|FILTER|FKLABEL|FKMAX|FLDLIST|FLOCK|FLOOR|' r'FONTMETRIC|FOPEN|FOR|FORCEEXT|FORCEPATH|FOUND|FPUTS|' r'FREAD|FSEEK|FSIZE|FTIME|FULLPATH|FV|FWRITE|' r'GETAUTOINCVALUE|GETBAR|GETCOLOR|GETCP|GETDIR|GETENV|' r'GETFILE|GETFLDSTATE|GETFONT|GETINTERFACE|' r'GETNEXTMODIFIED|GETOBJECT|GETPAD|GETPEM|GETPICT|' r'GETPRINTER|GETRESULTSET|GETWORDCOUNT|GETWORDNUM|' r'GETCURSORADAPTER|GOMONTH|HEADER|HOME|HOUR|ICASE|' r'IDXCOLLATE|IIF|IMESTATUS|INDBC|INDEXSEEK|INKEY|INLIST|' r'INPUTBOX|INSMODE|INT|ISALPHA|ISBLANK|ISCOLOR|ISDIGIT|' r'ISEXCLUSIVE|ISFLOCKED|ISLEADBYTE|ISLOWER|ISMEMOFETCHED|' r'ISMOUSE|ISNULL|ISPEN|ISREADONLY|ISRLOCKED|' r'ISTRANSACTABLE|ISUPPER|JUSTDRIVE|JUSTEXT|JUSTFNAME|' r'JUSTPATH|JUSTSTEM|KEY|KEYMATCH|LASTKEY|LEFT|LEFTC|LEN|' r'LENC|LIKE|LIKEC|LINENO|LOADPICTURE|LOCFILE|LOCK|LOG|' r'LOG10|LOOKUP|LOWER|LTRIM|LUPDATE|MAKETRANSACTABLE|MAX|' r'MCOL|MDOWN|MDX|MDY|MEMLINES|MEMORY|MENU|MESSAGE|' r'MESSAGEBOX|MIN|MINUTE|MLINE|MOD|MONTH|MRKBAR|MRKPAD|' r'MROW|MTON|MWINDOW|NDX|NEWOBJECT|NORMALIZE|NTOM|NUMLOCK|' r'NVL|OBJNUM|OBJTOCLIENT|OBJVAR|OCCURS|OEMTOANSI|OLDVAL|' r'ON|ORDER|OS|PAD|PADL|PARAMETERS|PAYMENT|PCOL|PCOUNT|' 
r'PEMSTATUS|PI|POPUP|PRIMARY|PRINTSTATUS|PRMBAR|PRMPAD|' r'PROGRAM|PROMPT|PROPER|PROW|PRTINFO|PUTFILE|PV|QUARTER|' r'RAISEEVENT|RAND|RAT|RATC|RATLINE|RDLEVEL|READKEY|RECCOUNT|' r'RECNO|RECSIZE|REFRESH|RELATION|REPLICATE|REQUERY|RGB|' r'RGBSCHEME|RIGHT|RIGHTC|RLOCK|ROUND|ROW|RTOD|RTRIM|' r'SAVEPICTURE|SCHEME|SCOLS|SEC|SECONDS|SEEK|SELECT|SET|' r'SETFLDSTATE|SETRESULTSET|SIGN|SIN|SKPBAR|SKPPAD|SOUNDEX|' r'SPACE|SQLCANCEL|SQLCOLUMNS|SQLCOMMIT|SQLCONNECT|' r'SQLDISCONNECT|SQLEXEC|SQLGETPROP|SQLIDLEDISCONNECT|' r'SQLMORERESULTS|SQLPREPARE|SQLROLLBACK|SQLSETPROP|' r'SQLSTRINGCONNECT|SQLTABLES|SQRT|SROWS|STR|STRCONV|' r'STREXTRACT|STRTOFILE|STRTRAN|STUFF|STUFFC|SUBSTR|' r'SUBSTRC|SYS|SYSMETRIC|TABLEREVERT|TABLEUPDATE|TAG|' r'TAGCOUNT|TAGNO|TAN|TARGET|TEXTMERGE|TIME|TRANSFORM|' r'TRIM|TTOC|TTOD|TXNLEVEL|TXTWIDTH|TYPE|UNBINDEVENTS|' r'UNIQUE|UPDATED|UPPER|USED|VAL|VARREAD|VARTYPE|VERSION|' r'WBORDER|WCHILD|WCOLS|WDOCKABLE|WEEK|WEXIST|WFONT|WLAST|' r'WLCOL|WLROW|WMAXIMUM|WMINIMUM|WONTOP|WOUTPUT|WPARENT|' r'WREAD|WROWS|WTITLE|WVISIBLE|XMLTOCURSOR|XMLUPDATEGRAM|' r'YEAR)(?=\s*\()', Name.Function), (r'_ALIGNMENT|_ASCIICOLS|_ASCIIROWS|_ASSIST|_BEAUTIFY|_BOX|' r'_BROWSER|_BUILDER|_CALCMEM|_CALCVALUE|_CLIPTEXT|_CONVERTER|' r'_COVERAGE|_CUROBJ|_DBLCLICK|_DIARYDATE|_DOS|_FOXDOC|_FOXREF|' r'_GALLERY|_GENGRAPH|_GENHTML|_GENMENU|_GENPD|_GENSCRN|' r'_GENXTAB|_GETEXPR|_INCLUDE|_INCSEEK|_INDENT|_LMARGIN|_MAC|' r'_MENUDESIGNER|_MLINE|_PADVANCE|_PAGENO|_PAGETOTAL|_PBPAGE|' r'_PCOLNO|_PCOPIES|_PDRIVER|_PDSETUP|_PECODE|_PEJECT|_PEPAGE|' r'_PLENGTH|_PLINENO|_PLOFFSET|_PPITCH|_PQUALITY|_PRETEXT|' r'_PSCODE|_PSPACING|_PWAIT|_RMARGIN|_REPORTBUILDER|' r'_REPORTOUTPUT|_REPORTPREVIEW|_SAMPLES|_SCCTEXT|_SCREEN|' r'_SHELL|_SPELLCHK|_STARTUP|_TABS|_TALLY|_TASKPANE|_TEXT|' r'_THROTTLE|_TOOLBOX|_TOOLTIPTIMEOUT|_TRANSPORT|_TRIGGERLEVEL|' r'_UNIX|_VFP|_WINDOWS|_WIZARD|_WRAP', Keyword.Pseudo), (r'THISFORMSET|THISFORM|THIS', Name.Builtin), (r'Application|CheckBox|Collection|Column|ComboBox|' r'CommandButton|CommandGroup|Container|Control|CursorAdapter|' r'Cursor|Custom|DataEnvironment|DataObject|EditBox|' r'Empty|Exception|Fields|Files|File|FormSet|Form|FoxCode|' r'Grid|Header|Hyperlink|Image|Label|Line|ListBox|Objects|' r'OptionButton|OptionGroup|PageFrame|Page|ProjectHook|Projects|' r'Project|Relation|ReportListener|Separator|Servers|Server|' r'Session|Shape|Spinner|Tables|TextBox|Timer|ToolBar|' r'XMLAdapter|XMLField|XMLTable', Name.Class), (r'm\.[a-z_]\w*', Name.Variable), (r'\.(F|T|AND|OR|NOT|NULL)\.|\b(AND|OR|NOT|NULL)\b', Operator.Word), (r'\.(ActiveColumn|ActiveControl|ActiveForm|ActivePage|' r'ActiveProject|ActiveRow|AddLineFeeds|ADOCodePage|Alias|' r'Alignment|Align|AllowAddNew|AllowAutoColumnFit|' r'AllowCellSelection|AllowDelete|AllowHeaderSizing|' r'AllowInsert|AllowModalMessages|AllowOutput|AllowRowSizing|' r'AllowSimultaneousFetch|AllowTabs|AllowUpdate|' r'AlwaysOnBottom|AlwaysOnTop|Anchor|Application|' r'AutoActivate|AutoCenter|AutoCloseTables|AutoComplete|' r'AutoCompSource|AutoCompTable|AutoHideScrollBar|' r'AutoIncrement|AutoOpenTables|AutoRelease|AutoSize|' r'AutoVerbMenu|AutoYield|BackColor|ForeColor|BackStyle|' r'BaseClass|BatchUpdateCount|BindControls|BorderColor|' r'BorderStyle|BorderWidth|BoundColumn|BoundTo|Bound|' r'BreakOnError|BufferModeOverride|BufferMode|' r'BuildDateTime|ButtonCount|Buttons|Cancel|Caption|' r'Centered|Century|ChildAlias|ChildOrder|ChildTable|' r'ClassLibrary|Class|ClipControls|Closable|CLSID|CodePage|' r'ColorScheme|ColorSource|ColumnCount|ColumnLines|' 
r'ColumnOrder|Columns|ColumnWidths|CommandClauses|' r'Comment|CompareMemo|ConflictCheckCmd|ConflictCheckType|' r'ContinuousScroll|ControlBox|ControlCount|Controls|' r'ControlSource|ConversionFunc|Count|CurrentControl|' r'CurrentDataSession|CurrentPass|CurrentX|CurrentY|' r'CursorSchema|CursorSource|CursorStatus|Curvature|' r'Database|DataSessionID|DataSession|DataSourceType|' r'DataSource|DataType|DateFormat|DateMark|Debug|' r'DeclareXMLPrefix|DEClassLibrary|DEClass|DefaultFilePath|' r'Default|DefOLELCID|DeleteCmdDataSourceType|DeleteCmdDataSource|' r'DeleteCmd|DeleteMark|Description|Desktop|' r'Details|DisabledBackColor|DisabledForeColor|' r'DisabledItemBackColor|DisabledItemForeColor|' r'DisabledPicture|DisableEncode|DisplayCount|' r'DisplayValue|Dockable|Docked|DockPosition|' r'DocumentFile|DownPicture|DragIcon|DragMode|DrawMode|' r'DrawStyle|DrawWidth|DynamicAlignment|DynamicBackColor|' r'DynamicForeColor|DynamicCurrentControl|DynamicFontBold|' r'DynamicFontItalic|DynamicFontStrikethru|' r'DynamicFontUnderline|DynamicFontName|DynamicFontOutline|' r'DynamicFontShadow|DynamicFontSize|DynamicInputMask|' r'DynamicLineHeight|EditorOptions|Enabled|' r'EnableHyperlinks|Encrypted|ErrorNo|Exclude|Exclusive|' r'FetchAsNeeded|FetchMemoCmdList|FetchMemoDataSourceType|' r'FetchMemoDataSource|FetchMemo|FetchSize|' r'FileClassLibrary|FileClass|FillColor|FillStyle|Filter|' r'FirstElement|FirstNestedTable|Flags|FontBold|FontItalic|' r'FontStrikethru|FontUnderline|FontCharSet|FontCondense|' r'FontExtend|FontName|FontOutline|FontShadow|FontSize|' r'ForceCloseTag|Format|FormCount|FormattedOutput|Forms|' r'FractionDigits|FRXDataSession|FullName|GDIPlusGraphics|' r'GridLineColor|GridLines|GridLineWidth|HalfHeightCaption|' r'HeaderClassLibrary|HeaderClass|HeaderHeight|Height|' r'HelpContextID|HideSelection|HighlightBackColor|' r'HighlightForeColor|HighlightStyle|HighlightRowLineWidth|' r'HighlightRow|Highlight|HomeDir|Hours|HostName|' r'HScrollSmallChange|hWnd|Icon|IncrementalSearch|Increment|' r'InitialSelectedAlias|InputMask|InsertCmdDataSourceType|' r'InsertCmdDataSource|InsertCmdRefreshCmd|' r'InsertCmdRefreshFieldList|InsertCmdRefreshKeyFieldList|' r'InsertCmd|Instancing|IntegralHeight|' r'Interval|IMEMode|IsAttribute|IsBase64|IsBinary|IsNull|' r'IsDiffGram|IsLoaded|ItemBackColor,|ItemData|ItemIDData|' r'ItemTips|IXMLDOMElement|KeyboardHighValue|KeyboardLowValue|' r'Keyfield|KeyFieldList|KeyPreview|KeySort|LanguageOptions|' r'LeftColumn|Left|LineContents|LineNo|LineSlant|LinkMaster|' r'ListCount|ListenerType|ListIndex|ListItemID|ListItem|' r'List|LockColumnsLeft|LockColumns|LockScreen|MacDesktop|' r'MainFile|MapN19_4ToCurrency|MapBinary|MapVarchar|Margin|' r'MaxButton|MaxHeight|MaxLeft|MaxLength|MaxRecords|MaxTop|' r'MaxWidth|MDIForm|MemberClassLibrary|MemberClass|' r'MemoWindow|Message|MinButton|MinHeight|MinWidth|' r'MouseIcon|MousePointer|Movable|MoverBars|MultiSelect|' r'Name|NestedInto|NewIndex|NewItemID|NextSiblingTable|' r'NoCpTrans|NoDataOnLoad|NoData|NullDisplay|' r'NumberOfElements|Object|OLEClass|OLEDragMode|' r'OLEDragPicture|OLEDropEffects|OLEDropHasData|' r'OLEDropMode|OLEDropTextInsertion|OLELCID|' r'OLERequestPendingTimeout|OLEServerBusyRaiseError|' r'OLEServerBusyTimeout|OLETypeAllowed|OneToMany|' r'OpenViews|OpenWindow|Optimize|OrderDirection|Order|' r'OutputPageCount|OutputType|PageCount|PageHeight|' r'PageNo|PageOrder|Pages|PageTotal|PageWidth|' r'PanelLink|Panel|ParentAlias|ParentClass|ParentTable|' r'Parent|Partition|PasswordChar|PictureMargin|' 
r'PicturePosition|PictureSpacing|PictureSelectionDisplay|' r'PictureVal|Picture|Prepared|' r'PolyPoints|PreserveWhiteSpace|PreviewContainer|' r'PrintJobName|Procedure|PROCESSID|ProgID|ProjectHookClass|' r'ProjectHookLibrary|ProjectHook|QuietMode|' r'ReadCycle|ReadLock|ReadMouse|ReadObject|ReadOnly|' r'ReadSave|ReadTimeout|RecordMark|RecordSourceType|' r'RecordSource|RefreshAlias|' r'RefreshCmdDataSourceType|RefreshCmdDataSource|RefreshCmd|' r'RefreshIgnoreFieldList|RefreshTimeStamp|RelationalExpr|' r'RelativeColumn|RelativeRow|ReleaseType|Resizable|' r'RespectCursorCP|RespectNesting|RightToLeft|RotateFlip|' r'Rotation|RowColChange|RowHeight|RowSourceType|' r'RowSource|ScaleMode|SCCProvider|SCCStatus|ScrollBars|' r'Seconds|SelectCmd|SelectedID|' r'SelectedItemBackColor|SelectedItemForeColor|Selected|' r'SelectionNamespaces|SelectOnEntry|SelLength|SelStart|' r'SelText|SendGDIPlusImage|SendUpdates|ServerClassLibrary|' r'ServerClass|ServerHelpFile|ServerName|' r'ServerProject|ShowTips|ShowInTaskbar|ShowWindow|' r'Sizable|SizeBox|SOM|Sorted|Sparse|SpecialEffect|' r'SpinnerHighValue|SpinnerLowValue|SplitBar|StackLevel|' r'StartMode|StatusBarText|StatusBar|Stretch|StrictDateEntry|' r'Style|TabIndex|Tables|TabOrientation|Tabs|TabStop|' r'TabStretch|TabStyle|Tag|TerminateRead|Text|Themes|' r'ThreadID|TimestampFieldList|TitleBar|ToolTipText|' r'TopIndex|TopItemID|Top|TwoPassProcess|TypeLibCLSID|' r'TypeLibDesc|TypeLibName|Type|Unicode|UpdatableFieldList|' r'UpdateCmdDataSourceType|UpdateCmdDataSource|' r'UpdateCmdRefreshCmd|UpdateCmdRefreshFieldList|' r'UpdateCmdRefreshKeyFieldList|UpdateCmd|' r'UpdateGramSchemaLocation|UpdateGram|UpdateNameList|UpdateType|' r'UseCodePage|UseCursorSchema|UseDeDataSource|UseMemoSize|' r'UserValue|UseTransactions|UTF8Encoded|Value|VersionComments|' r'VersionCompany|VersionCopyright|VersionDescription|' r'VersionNumber|VersionProduct|VersionTrademarks|Version|' r'VFPXMLProgID|ViewPortHeight|ViewPortLeft|' r'ViewPortTop|ViewPortWidth|VScrollSmallChange|View|Visible|' r'VisualEffect|WhatsThisButton|WhatsThisHelpID|WhatsThisHelp|' r'WhereType|Width|WindowList|WindowState|WindowType|WordWrap|' r'WrapCharInCDATA|WrapInCDATA|WrapMemoInCDATA|XMLAdapter|' r'XMLConstraints|XMLNameIsXPath|XMLNamespace|XMLName|' r'XMLPrefix|XMLSchemaLocation|XMLTable|XMLType|' r'XSDfractionDigits|XSDmaxLength|XSDtotalDigits|' r'XSDtype|ZoomBox)', Name.Attribute), (r'\.(ActivateCell|AddColumn|AddItem|AddListItem|AddObject|' r'AddProperty|AddTableSchema|AddToSCC|Add|' r'ApplyDiffgram|Attach|AutoFit|AutoOpen|Box|Build|' r'CancelReport|ChangesToCursor|CheckIn|CheckOut|Circle|' r'CleanUp|ClearData|ClearStatus|Clear|CloneObject|CloseTables|' r'Close|Cls|CursorAttach|CursorDetach|CursorFill|' r'CursorRefresh|DataToClip|DelayedMemoFetch|DeleteColumn|' r'Dock|DoMessage|DoScroll|DoStatus|DoVerb|Drag|Draw|Eval|' r'GetData|GetDockState|GetFormat|GetKey|GetLatestVersion|' r'GetPageHeight|GetPageWidth|Help|Hide|IncludePageInOutput|' r'IndexToItemID|ItemIDToIndex|Item|LoadXML|Line|Modify|' r'MoveItem|Move|Nest|OLEDrag|OnPreviewClose|OutputPage|' r'Point|Print|PSet|Quit|ReadExpression|ReadMethod|' r'RecordRefresh|Refresh|ReleaseXML|Release|RemoveFromSCC|' r'RemoveItem|RemoveListItem|RemoveObject|Remove|' r'Render|Requery|RequestData|ResetToDefault|Reset|Run|' r'SaveAsClass|SaveAs|SetAll|SetData|SetFocus|SetFormat|' r'SetMain|SetVar|SetViewPort|ShowWhatsThis|Show|' r'SupportsListenerType|TextHeight|TextWidth|ToCursor|' r'ToXML|UndoCheckOut|Unnest|UpdateStatus|WhatsThisMode|' 
r'WriteExpression|WriteMethod|ZOrder)', Name.Function), (r'\.(Activate|AdjustObjectSize|AfterBand|AfterBuild|' r'AfterCloseTables|AfterCursorAttach|AfterCursorClose|' r'AfterCursorDetach|AfterCursorFill|AfterCursorRefresh|' r'AfterCursorUpdate|AfterDelete|AfterInsert|' r'AfterRecordRefresh|AfterUpdate|AfterDock|AfterReport|' r'AfterRowColChange|BeforeBand|BeforeCursorAttach|' r'BeforeCursorClose|BeforeCursorDetach|BeforeCursorFill|' r'BeforeCursorRefresh|BeforeCursorUpdate|BeforeDelete|' r'BeforeInsert|BeforeDock|BeforeOpenTables|' r'BeforeRecordRefresh|BeforeReport|BeforeRowColChange|' r'BeforeUpdate|Click|dbc_Activate|dbc_AfterAddTable|' r'dbc_AfterAppendProc|dbc_AfterCloseTable|dbc_AfterCopyProc|' r'dbc_AfterCreateConnection|dbc_AfterCreateOffline|' r'dbc_AfterCreateTable|dbc_AfterCreateView|dbc_AfterDBGetProp|' r'dbc_AfterDBSetProp|dbc_AfterDeleteConnection|' r'dbc_AfterDropOffline|dbc_AfterDropTable|' r'dbc_AfterModifyConnection|dbc_AfterModifyProc|' r'dbc_AfterModifyTable|dbc_AfterModifyView|dbc_AfterOpenTable|' r'dbc_AfterRemoveTable|dbc_AfterRenameConnection|' r'dbc_AfterRenameTable|dbc_AfterRenameView|' r'dbc_AfterValidateData|dbc_BeforeAddTable|' r'dbc_BeforeAppendProc|dbc_BeforeCloseTable|' r'dbc_BeforeCopyProc|dbc_BeforeCreateConnection|' r'dbc_BeforeCreateOffline|dbc_BeforeCreateTable|' r'dbc_BeforeCreateView|dbc_BeforeDBGetProp|' r'dbc_BeforeDBSetProp|dbc_BeforeDeleteConnection|' r'dbc_BeforeDropOffline|dbc_BeforeDropTable|' r'dbc_BeforeModifyConnection|dbc_BeforeModifyProc|' r'dbc_BeforeModifyTable|dbc_BeforeModifyView|' r'dbc_BeforeOpenTable|dbc_BeforeRemoveTable|' r'dbc_BeforeRenameConnection|dbc_BeforeRenameTable|' r'dbc_BeforeRenameView|dbc_BeforeValidateData|' r'dbc_CloseData|dbc_Deactivate|dbc_ModifyData|dbc_OpenData|' r'dbc_PackData|DblClick|Deactivate|Deleted|Destroy|DoCmd|' r'DownClick|DragDrop|DragOver|DropDown|ErrorMessage|Error|' r'EvaluateContents|GotFocus|Init|InteractiveChange|KeyPress|' r'LoadReport|Load|LostFocus|Message|MiddleClick|MouseDown|' r'MouseEnter|MouseLeave|MouseMove|MouseUp|MouseWheel|Moved|' r'OLECompleteDrag|OLEDragOver|OLEGiveFeedback|OLESetData|' r'OLEStartDrag|OnMoveItem|Paint|ProgrammaticChange|' r'QueryAddFile|QueryModifyFile|QueryNewFile|QueryRemoveFile|' r'QueryRunFile|QueryUnload|RangeHigh|RangeLow|ReadActivate|' r'ReadDeactivate|ReadShow|ReadValid|ReadWhen|Resize|' r'RightClick|SCCInit|SCCDestroy|Scrolled|Timer|UIEnable|' r'UnDock|UnloadReport|Unload|UpClick|Valid|When)', Name.Function), (r'\s+', Text), # everything else is not colored (r'.', Text), ], 'newline': [ (r'\*.*?$', Comment.Single, '#pop'), (r'(ACCEPT|ACTIVATE\s*MENU|ACTIVATE\s*POPUP|ACTIVATE\s*SCREEN|' r'ACTIVATE\s*WINDOW|APPEND|APPEND\s*FROM|APPEND\s*FROM\s*ARRAY|' r'APPEND\s*GENERAL|APPEND\s*MEMO|ASSIST|AVERAGE|BLANK|BROWSE|' r'BUILD\s*APP|BUILD\s*EXE|BUILD\s*PROJECT|CALCULATE|CALL|' r'CANCEL|CHANGE|CLEAR|CLOSE|CLOSE\s*MEMO|COMPILE|CONTINUE|' r'COPY\s*FILE|COPY\s*INDEXES|COPY\s*MEMO|COPY\s*STRUCTURE|' r'COPY\s*STRUCTURE\s*EXTENDED|COPY\s*TAG|COPY\s*TO|' r'COPY\s*TO\s*ARRAY|COUNT|CREATE|CREATE\s*COLOR\s*SET|' r'CREATE\s*CURSOR|CREATE\s*FROM|CREATE\s*LABEL|CREATE\s*MENU|' r'CREATE\s*PROJECT|CREATE\s*QUERY|CREATE\s*REPORT|' r'CREATE\s*SCREEN|CREATE\s*TABLE|CREATE\s*VIEW|DDE|' r'DEACTIVATE\s*MENU|DEACTIVATE\s*POPUP|DEACTIVATE\s*WINDOW|' r'DECLARE|DEFINE\s*BAR|DEFINE\s*BOX|DEFINE\s*MENU|' r'DEFINE\s*PAD|DEFINE\s*POPUP|DEFINE\s*WINDOW|DELETE|' r'DELETE\s*FILE|DELETE\s*TAG|DIMENSION|DIRECTORY|DISPLAY|' r'DISPLAY\s*FILES|DISPLAY\s*MEMORY|DISPLAY\s*STATUS|' 
r'DISPLAY\s*STRUCTURE|DO|EDIT|EJECT|EJECT\s*PAGE|ERASE|' r'EXIT|EXPORT|EXTERNAL|FILER|FIND|FLUSH|FUNCTION|GATHER|' r'GETEXPR|GO|GOTO|HELP|HIDE\s*MENU|HIDE\s*POPUP|' r'HIDE\s*WINDOW|IMPORT|INDEX|INPUT|INSERT|JOIN|KEYBOARD|' r'LABEL|LIST|LOAD|LOCATE|LOOP|MENU|MENU\s*TO|MODIFY\s*COMMAND|' r'MODIFY\s*FILE|MODIFY\s*GENERAL|MODIFY\s*LABEL|MODIFY\s*MEMO|' r'MODIFY\s*MENU|MODIFY\s*PROJECT|MODIFY\s*QUERY|' r'MODIFY\s*REPORT|MODIFY\s*SCREEN|MODIFY\s*STRUCTURE|' r'MODIFY\s*WINDOW|MOVE\s*POPUP|MOVE\s*WINDOW|NOTE|' r'ON\s*APLABOUT|ON\s*BAR|ON\s*ERROR|ON\s*ESCAPE|' r'ON\s*EXIT\s*BAR|ON\s*EXIT\s*MENU|ON\s*EXIT\s*PAD|' r'ON\s*EXIT\s*POPUP|ON\s*KEY|ON\s*KEY\s*=|ON\s*KEY\s*LABEL|' r'ON\s*MACHELP|ON\s*PAD|ON\s*PAGE|ON\s*READERROR|' r'ON\s*SELECTION\s*BAR|ON\s*SELECTION\s*MENU|' r'ON\s*SELECTION\s*PAD|ON\s*SELECTION\s*POPUP|ON\s*SHUTDOWN|' r'PACK|PARAMETERS|PLAY\s*MACRO|POP\s*KEY|POP\s*MENU|' r'POP\s*POPUP|PRIVATE|PROCEDURE|PUBLIC|PUSH\s*KEY|' r'PUSH\s*MENU|PUSH\s*POPUP|QUIT|READ|READ\s*MENU|RECALL|' r'REINDEX|RELEASE|RELEASE\s*MODULE|RENAME|REPLACE|' r'REPLACE\s*FROM\s*ARRAY|REPORT|RESTORE\s*FROM|' r'RESTORE\s*MACROS|RESTORE\s*SCREEN|RESTORE\s*WINDOW|' r'RESUME|RETRY|RETURN|RUN|RUN\s*\/N"|RUNSCRIPT|' r'SAVE\s*MACROS|SAVE\s*SCREEN|SAVE\s*TO|SAVE\s*WINDOWS|' r'SCATTER|SCROLL|SEEK|SELECT|SET|SET\s*ALTERNATE|' r'SET\s*ANSI|SET\s*APLABOUT|SET\s*AUTOSAVE|SET\s*BELL|' r'SET\s*BLINK|SET\s*BLOCKSIZE|SET\s*BORDER|SET\s*BRSTATUS|' r'SET\s*CARRY|SET\s*CENTURY|SET\s*CLEAR|SET\s*CLOCK|' r'SET\s*COLLATE|SET\s*COLOR\s*OF|SET\s*COLOR\s*OF\s*SCHEME|' r'SET\s*COLOR\s*SET|SET\s*COLOR\s*TO|SET\s*COMPATIBLE|' r'SET\s*CONFIRM|SET\s*CONSOLE|SET\s*CURRENCY|SET\s*CURSOR|' r'SET\s*DATE|SET\s*DEBUG|SET\s*DECIMALS|SET\s*DEFAULT|' r'SET\s*DELETED|SET\s*DELIMITERS|SET\s*DEVELOPMENT|' r'SET\s*DEVICE|SET\s*DISPLAY|SET\s*DOHISTORY|SET\s*ECHO|' r'SET\s*ESCAPE|SET\s*EXACT|SET\s*EXCLUSIVE|SET\s*FIELDS|' r'SET\s*FILTER|SET\s*FIXED|SET\s*FORMAT|SET\s*FULLPATH|' r'SET\s*FUNCTION|SET\s*HEADINGS|SET\s*HELP|SET\s*HELPFILTER|' r'SET\s*HOURS|SET\s*INDEX|SET\s*INTENSITY|SET\s*KEY|' r'SET\s*KEYCOMP|SET\s*LIBRARY|SET\s*LOCK|SET\s*LOGERRORS|' r'SET\s*MACDESKTOP|SET\s*MACHELP|SET\s*MACKEY|SET\s*MARGIN|' r'SET\s*MARK\s*OF|SET\s*MARK\s*TO|SET\s*MEMOWIDTH|' r'SET\s*MESSAGE|SET\s*MOUSE|SET\s*MULTILOCKS|SET\s*NEAR|' r'SET\s*NOCPTRANS|SET\s*NOTIFY|SET\s*ODOMETER|SET\s*OPTIMIZE|' r'SET\s*ORDER|SET\s*PALETTE|SET\s*PATH|SET\s*PDSETUP|' r'SET\s*POINT|SET\s*PRINTER|SET\s*PROCEDURE|SET\s*READBORDER|' r'SET\s*REFRESH|SET\s*RELATION|SET\s*RELATION\s*OFF|' r'SET\s*REPROCESS|SET\s*RESOURCE|SET\s*SAFETY|SET\s*SCOREBOARD|' r'SET\s*SEPARATOR|SET\s*SHADOWS|SET\s*SKIP|SET\s*SKIP\s*OF|' r'SET\s*SPACE|SET\s*STATUS|SET\s*STATUS\s*BAR|SET\s*STEP|' r'SET\s*STICKY|SET\s*SYSMENU|SET\s*TALK|SET\s*TEXTMERGE|' r'SET\s*TEXTMERGE\s*DELIMITERS|SET\s*TOPIC|SET\s*TRBETWEEN|' r'SET\s*TYPEAHEAD|SET\s*UDFPARMS|SET\s*UNIQUE|SET\s*VIEW|' r'SET\s*VOLUME|SET\s*WINDOW\s*OF\s*MEMO|SET\s*XCMDFILE|' r'SHOW\s*GET|SHOW\s*GETS|SHOW\s*MENU|SHOW\s*OBJECT|' r'SHOW\s*POPUP|SHOW\s*WINDOW|SIZE\s*POPUP|SKIP|SORT|' r'STORE|SUM|SUSPEND|TOTAL|TYPE|UNLOCK|UPDATE|USE|WAIT|' r'ZAP|ZOOM\s*WINDOW|DO\s*CASE|CASE|OTHERWISE|ENDCASE|' r'DO\s*WHILE|ENDDO|FOR|ENDFOR|NEXT|IF|ELSE|ENDIF|PRINTJOB|' r'ENDPRINTJOB|SCAN|ENDSCAN|TEXT|ENDTEXT|=)', Keyword.Reserved, '#pop'), (r'#\s*(IF|ELIF|ELSE|ENDIF|DEFINE|IFDEF|IFNDEF|INCLUDE)', Comment.Preproc, '#pop'), (r'(m\.)?[a-z_]\w*', Name.Variable, '#pop'), (r'.', Text, '#pop'), ], } Pygments-2.3.1/pygments/lexers/_cl_builtins.py0000644000175000017500000003334513376260540020605 0ustar 
piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers._cl_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ANSI Common Lisp builtins. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ BUILTIN_FUNCTIONS = set(( # 638 functions '<', '<=', '=', '>', '>=', '-', '/', '/=', '*', '+', '1-', '1+', 'abort', 'abs', 'acons', 'acos', 'acosh', 'add-method', 'adjoin', 'adjustable-array-p', 'adjust-array', 'allocate-instance', 'alpha-char-p', 'alphanumericp', 'append', 'apply', 'apropos', 'apropos-list', 'aref', 'arithmetic-error-operands', 'arithmetic-error-operation', 'array-dimension', 'array-dimensions', 'array-displacement', 'array-element-type', 'array-has-fill-pointer-p', 'array-in-bounds-p', 'arrayp', 'array-rank', 'array-row-major-index', 'array-total-size', 'ash', 'asin', 'asinh', 'assoc', 'assoc-if', 'assoc-if-not', 'atan', 'atanh', 'atom', 'bit', 'bit-and', 'bit-andc1', 'bit-andc2', 'bit-eqv', 'bit-ior', 'bit-nand', 'bit-nor', 'bit-not', 'bit-orc1', 'bit-orc2', 'bit-vector-p', 'bit-xor', 'boole', 'both-case-p', 'boundp', 'break', 'broadcast-stream-streams', 'butlast', 'byte', 'byte-position', 'byte-size', 'caaaar', 'caaadr', 'caaar', 'caadar', 'caaddr', 'caadr', 'caar', 'cadaar', 'cadadr', 'cadar', 'caddar', 'cadddr', 'caddr', 'cadr', 'call-next-method', 'car', 'cdaaar', 'cdaadr', 'cdaar', 'cdadar', 'cdaddr', 'cdadr', 'cdar', 'cddaar', 'cddadr', 'cddar', 'cdddar', 'cddddr', 'cdddr', 'cddr', 'cdr', 'ceiling', 'cell-error-name', 'cerror', 'change-class', 'char', 'char<', 'char<=', 'char=', 'char>', 'char>=', 'char/=', 'character', 'characterp', 'char-code', 'char-downcase', 'char-equal', 'char-greaterp', 'char-int', 'char-lessp', 'char-name', 'char-not-equal', 'char-not-greaterp', 'char-not-lessp', 'char-upcase', 'cis', 'class-name', 'class-of', 'clear-input', 'clear-output', 'close', 'clrhash', 'code-char', 'coerce', 'compile', 'compiled-function-p', 'compile-file', 'compile-file-pathname', 'compiler-macro-function', 'complement', 'complex', 'complexp', 'compute-applicable-methods', 'compute-restarts', 'concatenate', 'concatenated-stream-streams', 'conjugate', 'cons', 'consp', 'constantly', 'constantp', 'continue', 'copy-alist', 'copy-list', 'copy-pprint-dispatch', 'copy-readtable', 'copy-seq', 'copy-structure', 'copy-symbol', 'copy-tree', 'cos', 'cosh', 'count', 'count-if', 'count-if-not', 'decode-float', 'decode-universal-time', 'delete', 'delete-duplicates', 'delete-file', 'delete-if', 'delete-if-not', 'delete-package', 'denominator', 'deposit-field', 'describe', 'describe-object', 'digit-char', 'digit-char-p', 'directory', 'directory-namestring', 'disassemble', 'documentation', 'dpb', 'dribble', 'echo-stream-input-stream', 'echo-stream-output-stream', 'ed', 'eighth', 'elt', 'encode-universal-time', 'endp', 'enough-namestring', 'ensure-directories-exist', 'ensure-generic-function', 'eq', 'eql', 'equal', 'equalp', 'error', 'eval', 'evenp', 'every', 'exp', 'export', 'expt', 'fboundp', 'fceiling', 'fdefinition', 'ffloor', 'fifth', 'file-author', 'file-error-pathname', 'file-length', 'file-namestring', 'file-position', 'file-string-length', 'file-write-date', 'fill', 'fill-pointer', 'find', 'find-all-symbols', 'find-class', 'find-if', 'find-if-not', 'find-method', 'find-package', 'find-restart', 'find-symbol', 'finish-output', 'first', 'float', 'float-digits', 'floatp', 'float-precision', 'float-radix', 'float-sign', 'floor', 'fmakunbound', 'force-output', 'format', 'fourth', 'fresh-line', 'fround', 'ftruncate', 'funcall', 'function-keywords', 
'function-lambda-expression', 'functionp', 'gcd', 'gensym', 'gentemp', 'get', 'get-decoded-time', 'get-dispatch-macro-character', 'getf', 'gethash', 'get-internal-real-time', 'get-internal-run-time', 'get-macro-character', 'get-output-stream-string', 'get-properties', 'get-setf-expansion', 'get-universal-time', 'graphic-char-p', 'hash-table-count', 'hash-table-p', 'hash-table-rehash-size', 'hash-table-rehash-threshold', 'hash-table-size', 'hash-table-test', 'host-namestring', 'identity', 'imagpart', 'import', 'initialize-instance', 'input-stream-p', 'inspect', 'integer-decode-float', 'integer-length', 'integerp', 'interactive-stream-p', 'intern', 'intersection', 'invalid-method-error', 'invoke-debugger', 'invoke-restart', 'invoke-restart-interactively', 'isqrt', 'keywordp', 'last', 'lcm', 'ldb', 'ldb-test', 'ldiff', 'length', 'lisp-implementation-type', 'lisp-implementation-version', 'list', 'list*', 'list-all-packages', 'listen', 'list-length', 'listp', 'load', 'load-logical-pathname-translations', 'log', 'logand', 'logandc1', 'logandc2', 'logbitp', 'logcount', 'logeqv', 'logical-pathname', 'logical-pathname-translations', 'logior', 'lognand', 'lognor', 'lognot', 'logorc1', 'logorc2', 'logtest', 'logxor', 'long-site-name', 'lower-case-p', 'machine-instance', 'machine-type', 'machine-version', 'macroexpand', 'macroexpand-1', 'macro-function', 'make-array', 'make-broadcast-stream', 'make-concatenated-stream', 'make-condition', 'make-dispatch-macro-character', 'make-echo-stream', 'make-hash-table', 'make-instance', 'make-instances-obsolete', 'make-list', 'make-load-form', 'make-load-form-saving-slots', 'make-package', 'make-pathname', 'make-random-state', 'make-sequence', 'make-string', 'make-string-input-stream', 'make-string-output-stream', 'make-symbol', 'make-synonym-stream', 'make-two-way-stream', 'makunbound', 'map', 'mapc', 'mapcan', 'mapcar', 'mapcon', 'maphash', 'map-into', 'mapl', 'maplist', 'mask-field', 'max', 'member', 'member-if', 'member-if-not', 'merge', 'merge-pathnames', 'method-combination-error', 'method-qualifiers', 'min', 'minusp', 'mismatch', 'mod', 'muffle-warning', 'name-char', 'namestring', 'nbutlast', 'nconc', 'next-method-p', 'nintersection', 'ninth', 'no-applicable-method', 'no-next-method', 'not', 'notany', 'notevery', 'nreconc', 'nreverse', 'nset-difference', 'nset-exclusive-or', 'nstring-capitalize', 'nstring-downcase', 'nstring-upcase', 'nsublis', 'nsubst', 'nsubst-if', 'nsubst-if-not', 'nsubstitute', 'nsubstitute-if', 'nsubstitute-if-not', 'nth', 'nthcdr', 'null', 'numberp', 'numerator', 'nunion', 'oddp', 'open', 'open-stream-p', 'output-stream-p', 'package-error-package', 'package-name', 'package-nicknames', 'packagep', 'package-shadowing-symbols', 'package-used-by-list', 'package-use-list', 'pairlis', 'parse-integer', 'parse-namestring', 'pathname', 'pathname-device', 'pathname-directory', 'pathname-host', 'pathname-match-p', 'pathname-name', 'pathnamep', 'pathname-type', 'pathname-version', 'peek-char', 'phase', 'plusp', 'position', 'position-if', 'position-if-not', 'pprint', 'pprint-dispatch', 'pprint-fill', 'pprint-indent', 'pprint-linear', 'pprint-newline', 'pprint-tab', 'pprint-tabular', 'prin1', 'prin1-to-string', 'princ', 'princ-to-string', 'print', 'print-object', 'probe-file', 'proclaim', 'provide', 'random', 'random-state-p', 'rassoc', 'rassoc-if', 'rassoc-if-not', 'rational', 'rationalize', 'rationalp', 'read', 'read-byte', 'read-char', 'read-char-no-hang', 'read-delimited-list', 'read-from-string', 'read-line', 'read-preserving-whitespace', 
'read-sequence', 'readtable-case', 'readtablep', 'realp', 'realpart', 'reduce', 'reinitialize-instance', 'rem', 'remhash', 'remove', 'remove-duplicates', 'remove-if', 'remove-if-not', 'remove-method', 'remprop', 'rename-file', 'rename-package', 'replace', 'require', 'rest', 'restart-name', 'revappend', 'reverse', 'room', 'round', 'row-major-aref', 'rplaca', 'rplacd', 'sbit', 'scale-float', 'schar', 'search', 'second', 'set', 'set-difference', 'set-dispatch-macro-character', 'set-exclusive-or', 'set-macro-character', 'set-pprint-dispatch', 'set-syntax-from-char', 'seventh', 'shadow', 'shadowing-import', 'shared-initialize', 'short-site-name', 'signal', 'signum', 'simple-bit-vector-p', 'simple-condition-format-arguments', 'simple-condition-format-control', 'simple-string-p', 'simple-vector-p', 'sin', 'sinh', 'sixth', 'sleep', 'slot-boundp', 'slot-exists-p', 'slot-makunbound', 'slot-missing', 'slot-unbound', 'slot-value', 'software-type', 'software-version', 'some', 'sort', 'special-operator-p', 'sqrt', 'stable-sort', 'standard-char-p', 'store-value', 'stream-element-type', 'stream-error-stream', 'stream-external-format', 'streamp', 'string', 'string<', 'string<=', 'string=', 'string>', 'string>=', 'string/=', 'string-capitalize', 'string-downcase', 'string-equal', 'string-greaterp', 'string-left-trim', 'string-lessp', 'string-not-equal', 'string-not-greaterp', 'string-not-lessp', 'stringp', 'string-right-trim', 'string-trim', 'string-upcase', 'sublis', 'subseq', 'subsetp', 'subst', 'subst-if', 'subst-if-not', 'substitute', 'substitute-if', 'substitute-if-not', 'subtypep','svref', 'sxhash', 'symbol-function', 'symbol-name', 'symbolp', 'symbol-package', 'symbol-plist', 'symbol-value', 'synonym-stream-symbol', 'syntax:', 'tailp', 'tan', 'tanh', 'tenth', 'terpri', 'third', 'translate-logical-pathname', 'translate-pathname', 'tree-equal', 'truename', 'truncate', 'two-way-stream-input-stream', 'two-way-stream-output-stream', 'type-error-datum', 'type-error-expected-type', 'type-of', 'typep', 'unbound-slot-instance', 'unexport', 'unintern', 'union', 'unread-char', 'unuse-package', 'update-instance-for-different-class', 'update-instance-for-redefined-class', 'upgraded-array-element-type', 'upgraded-complex-part-type', 'upper-case-p', 'use-package', 'user-homedir-pathname', 'use-value', 'values', 'values-list', 'vector', 'vectorp', 'vector-pop', 'vector-push', 'vector-push-extend', 'warn', 'wild-pathname-p', 'write', 'write-byte', 'write-char', 'write-line', 'write-sequence', 'write-string', 'write-to-string', 'yes-or-no-p', 'y-or-n-p', 'zerop', )) SPECIAL_FORMS = set(( 'block', 'catch', 'declare', 'eval-when', 'flet', 'function', 'go', 'if', 'labels', 'lambda', 'let', 'let*', 'load-time-value', 'locally', 'macrolet', 'multiple-value-call', 'multiple-value-prog1', 'progn', 'progv', 'quote', 'return-from', 'setq', 'symbol-macrolet', 'tagbody', 'the', 'throw', 'unwind-protect', )) MACROS = set(( 'and', 'assert', 'call-method', 'case', 'ccase', 'check-type', 'cond', 'ctypecase', 'decf', 'declaim', 'defclass', 'defconstant', 'defgeneric', 'define-compiler-macro', 'define-condition', 'define-method-combination', 'define-modify-macro', 'define-setf-expander', 'define-symbol-macro', 'defmacro', 'defmethod', 'defpackage', 'defparameter', 'defsetf', 'defstruct', 'deftype', 'defun', 'defvar', 'destructuring-bind', 'do', 'do*', 'do-all-symbols', 'do-external-symbols', 'dolist', 'do-symbols', 'dotimes', 'ecase', 'etypecase', 'formatter', 'handler-bind', 'handler-case', 'ignore-errors', 'incf', 'in-package', 
'lambda', 'loop', 'loop-finish', 'make-method', 'multiple-value-bind', 'multiple-value-list', 'multiple-value-setq', 'nth-value', 'or', 'pop', 'pprint-exit-if-list-exhausted', 'pprint-logical-block', 'pprint-pop', 'print-unreadable-object', 'prog', 'prog*', 'prog1', 'prog2', 'psetf', 'psetq', 'push', 'pushnew', 'remf', 'restart-bind', 'restart-case', 'return', 'rotatef', 'setf', 'shiftf', 'step', 'time', 'trace', 'typecase', 'unless', 'untrace', 'when', 'with-accessors', 'with-compilation-unit', 'with-condition-restarts', 'with-hash-table-iterator', 'with-input-from-string', 'with-open-file', 'with-open-stream', 'with-output-to-string', 'with-package-iterator', 'with-simple-restart', 'with-slots', 'with-standard-io-syntax', )) LAMBDA_LIST_KEYWORDS = set(( '&allow-other-keys', '&aux', '&body', '&environment', '&key', '&optional', '&rest', '&whole', )) DECLARATIONS = set(( 'dynamic-extent', 'ignore', 'optimize', 'ftype', 'inline', 'special', 'ignorable', 'notinline', 'type', )) BUILTIN_TYPES = set(( 'atom', 'boolean', 'base-char', 'base-string', 'bignum', 'bit', 'compiled-function', 'extended-char', 'fixnum', 'keyword', 'nil', 'signed-byte', 'short-float', 'single-float', 'double-float', 'long-float', 'simple-array', 'simple-base-string', 'simple-bit-vector', 'simple-string', 'simple-vector', 'standard-char', 'unsigned-byte', # Condition Types 'arithmetic-error', 'cell-error', 'condition', 'control-error', 'division-by-zero', 'end-of-file', 'error', 'file-error', 'floating-point-inexact', 'floating-point-overflow', 'floating-point-underflow', 'floating-point-invalid-operation', 'parse-error', 'package-error', 'print-not-readable', 'program-error', 'reader-error', 'serious-condition', 'simple-condition', 'simple-error', 'simple-type-error', 'simple-warning', 'stream-error', 'storage-condition', 'style-warning', 'type-error', 'unbound-variable', 'unbound-slot', 'undefined-function', 'warning', )) BUILTIN_CLASSES = set(( 'array', 'broadcast-stream', 'bit-vector', 'built-in-class', 'character', 'class', 'complex', 'concatenated-stream', 'cons', 'echo-stream', 'file-stream', 'float', 'function', 'generic-function', 'hash-table', 'integer', 'list', 'logical-pathname', 'method-combination', 'method', 'null', 'number', 'package', 'pathname', 'ratio', 'rational', 'readtable', 'real', 'random-state', 'restart', 'sequence', 'standard-class', 'standard-generic-function', 'standard-method', 'standard-object', 'string-stream', 'stream', 'string', 'structure-class', 'structure-object', 'symbol', 'synonym-stream', 't', 'two-way-stream', 'vector', )) Pygments-2.3.1/pygments/lexers/verification.py0000644000175000017500000000717113376260540020617 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.verification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for Intermediate Verification Languages (IVLs). :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, words from pygments.token import Comment, Operator, Keyword, Name, Number, \ Punctuation, Whitespace __all__ = ['BoogieLexer', 'SilverLexer'] class BoogieLexer(RegexLexer): """ For `Boogie `_ source code. .. 
versionadded:: 2.1 """ name = 'Boogie' aliases = ['boogie'] filenames = ['*.bpl'] tokens = { 'root': [ # Whitespace and Comments (r'\n', Whitespace), (r'\s+', Whitespace), (r'//[/!](.*?)\n', Comment.Doc), (r'//(.*?)\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (words(( 'axiom', 'break', 'call', 'ensures', 'else', 'exists', 'function', 'forall', 'if', 'invariant', 'modifies', 'procedure', 'requires', 'then', 'var', 'while'), suffix=r'\b'), Keyword), (words(('const',), suffix=r'\b'), Keyword.Reserved), (words(('bool', 'int', 'ref'), suffix=r'\b'), Keyword.Type), include('numbers'), (r"(>=|<=|:=|!=|==>|&&|\|\||[+/\-=>*<\[\]])", Operator), (r"([{}():;,.])", Punctuation), # Identifier (r'[a-zA-Z_]\w*', Name), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'numbers': [ (r'[0-9]+', Number.Integer), ], } class SilverLexer(RegexLexer): """ For `Silver `_ source code. .. versionadded:: 2.2 """ name = 'Silver' aliases = ['silver'] filenames = ['*.sil', '*.vpr'] tokens = { 'root': [ # Whitespace and Comments (r'\n', Whitespace), (r'\s+', Whitespace), (r'//[/!](.*?)\n', Comment.Doc), (r'//(.*?)\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (words(( 'result', 'true', 'false', 'null', 'method', 'function', 'predicate', 'program', 'domain', 'axiom', 'var', 'returns', 'field', 'define', 'requires', 'ensures', 'invariant', 'fold', 'unfold', 'inhale', 'exhale', 'new', 'assert', 'assume', 'goto', 'while', 'if', 'elseif', 'else', 'fresh', 'constraining', 'Seq', 'Set', 'Multiset', 'union', 'intersection', 'setminus', 'subset', 'unfolding', 'in', 'old', 'forall', 'exists', 'acc', 'wildcard', 'write', 'none', 'epsilon', 'perm', 'unique', 'apply', 'package', 'folding', 'label', 'forperm'), suffix=r'\b'), Keyword), (words(('Int', 'Perm', 'Bool', 'Ref'), suffix=r'\b'), Keyword.Type), include('numbers'), (r'[!%&*+=|?:<>/\-\[\]]', Operator), (r'([{}():;,.])', Punctuation), # Identifier (r'[\w$]\w*', Name), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'numbers': [ (r'[0-9]+', Number.Integer), ], } Pygments-2.3.1/pygments/lexers/markup.py0000644000175000017500000004774613376303451017447 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.markup ~~~~~~~~~~~~~~~~~~~~~~ Lexers for non-HTML markup languages. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexers.html import HtmlLexer, XmlLexer from pygments.lexers.javascript import JavascriptLexer from pygments.lexers.css import CssLexer from pygments.lexer import RegexLexer, DelegatingLexer, include, bygroups, \ using, this, do_insertions, default, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Other from pygments.util import get_bool_opt, ClassNotFound __all__ = ['BBCodeLexer', 'MoinWikiLexer', 'RstLexer', 'TexLexer', 'GroffLexer', 'MozPreprocHashLexer', 'MozPreprocPercentLexer', 'MozPreprocXulLexer', 'MozPreprocJavascriptLexer', 'MozPreprocCssLexer', 'MarkdownLexer'] class BBCodeLexer(RegexLexer): """ A lexer that highlights BBCode(-like) syntax. .. 
versionadded:: 0.6 """ name = 'BBCode' aliases = ['bbcode'] mimetypes = ['text/x-bbcode'] tokens = { 'root': [ (r'[^[]+', Text), # tag/end tag begin (r'\[/?\w+', Keyword, 'tag'), # stray bracket (r'\[', Text), ], 'tag': [ (r'\s+', Text), # attribute with value (r'(\w+)(=)("?[^\s"\]]+"?)', bygroups(Name.Attribute, Operator, String)), # tag argument (a la [color=green]) (r'(=)("?[^\s"\]]+"?)', bygroups(Operator, String)), # tag end (r'\]', Keyword, '#pop'), ], } class MoinWikiLexer(RegexLexer): """ For MoinMoin (and Trac) Wiki markup. .. versionadded:: 0.7 """ name = 'MoinMoin/Trac Wiki markup' aliases = ['trac-wiki', 'moin'] filenames = [] mimetypes = ['text/x-trac-wiki'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'^#.*$', Comment), (r'(!)(\S+)', bygroups(Keyword, Text)), # Ignore-next # Titles (r'^(=+)([^=]+)(=+)(\s*#.+)?$', bygroups(Generic.Heading, using(this), Generic.Heading, String)), # Literal code blocks, with optional shebang (r'(\{\{\{)(\n#!.+)?', bygroups(Name.Builtin, Name.Namespace), 'codeblock'), (r'(\'\'\'?|\|\||`|__|~~|\^|,,|::)', Comment), # Formatting # Lists (r'^( +)([.*-])( )', bygroups(Text, Name.Builtin, Text)), (r'^( +)([a-z]{1,5}\.)( )', bygroups(Text, Name.Builtin, Text)), # Other Formatting (r'\[\[\w+.*?\]\]', Keyword), # Macro (r'(\[[^\s\]]+)(\s+[^\]]+?)?(\])', bygroups(Keyword, String, Keyword)), # Link (r'^----+$', Keyword), # Horizontal rules (r'[^\n\'\[{!_~^,|]+', Text), (r'\n', Text), (r'.', Text), ], 'codeblock': [ (r'\}\}\}', Name.Builtin, '#pop'), # these blocks are allowed to be nested in Trac, but not MoinMoin (r'\{\{\{', Text, '#push'), (r'[^{}]+', Comment.Preproc), # slurp boring text (r'.', Comment.Preproc), # allow loose { or } ], } class RstLexer(RegexLexer): """ For `reStructuredText `_ markup. .. versionadded:: 0.7 Additional options accepted: `handlecodeblocks` Highlight the contents of ``.. sourcecode:: language``, ``.. code:: language`` and ``.. code-block:: language`` directives with a lexer for the given language (default: ``True``). .. versionadded:: 0.8 """ name = 'reStructuredText' aliases = ['rst', 'rest', 'restructuredtext'] filenames = ['*.rst', '*.rest'] mimetypes = ["text/x-rst", "text/prs.fallenstein.rst"] flags = re.MULTILINE def _handle_sourcecode(self, match): from pygments.lexers import get_lexer_by_name # section header yield match.start(1), Punctuation, match.group(1) yield match.start(2), Text, match.group(2) yield match.start(3), Operator.Word, match.group(3) yield match.start(4), Punctuation, match.group(4) yield match.start(5), Text, match.group(5) yield match.start(6), Keyword, match.group(6) yield match.start(7), Text, match.group(7) # lookup lexer if wanted and existing lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name(match.group(6).strip()) except ClassNotFound: pass indention = match.group(8) indention_size = len(indention) code = (indention + match.group(9) + match.group(10) + match.group(11)) # no lexer for this language. handle it like it was a code block if lexer is None: yield match.start(8), String, code return # highlight the lines with the lexer. ins = [] codelines = code.splitlines(True) code = '' for line in codelines: if len(line) > indention_size: ins.append((len(code), [(0, Text, line[:indention_size])])) code += line[indention_size:] else: code += line for item in do_insertions(ins, lexer.get_tokens_unprocessed(code)): yield item # from docutils.parsers.rst.states closers = u'\'")]}>\u2019\u201d\xbb!?' 
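    # The ``_handle_sourcecode`` callback above is what implements the
    # ``handlecodeblocks`` option described in the class docstring: the body of a
    # ``.. code-block:: <language>`` directive is re-lexed with the lexer
    # registered for that language, and falls back to a plain String token when
    # no such lexer is found.  A minimal usage sketch via the public pygments
    # entry points (the sample reST input is made up for illustration; the
    # ``highlight``/``HtmlFormatter`` calls are the standard API):
    #
    #     from pygments import highlight
    #     from pygments.lexers import RstLexer
    #     from pygments.formatters import HtmlFormatter
    #     rst_source = ".. code-block:: python\n\n    print('highlighted')\n"
    #     html = highlight(rst_source, RstLexer(handlecodeblocks=True),
    #                      HtmlFormatter())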
unicode_delimiters = u'\u2010\u2011\u2012\u2013\u2014\u00a0' end_string_suffix = (r'((?=$)|(?=[-/:.,; \n\x00%s%s]))' % (re.escape(unicode_delimiters), re.escape(closers))) tokens = { 'root': [ # Heading with overline (r'^(=+|-+|`+|:+|\.+|\'+|"+|~+|\^+|_+|\*+|\++|#+)([ \t]*\n)' r'(.+)(\n)(\1)(\n)', bygroups(Generic.Heading, Text, Generic.Heading, Text, Generic.Heading, Text)), # Plain heading (r'^(\S.*)(\n)(={3,}|-{3,}|`{3,}|:{3,}|\.{3,}|\'{3,}|"{3,}|' r'~{3,}|\^{3,}|_{3,}|\*{3,}|\+{3,}|#{3,})(\n)', bygroups(Generic.Heading, Text, Generic.Heading, Text)), # Bulleted lists (r'^(\s*)([-*+])( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), # Numbered lists (r'^(\s*)([0-9#ivxlcmIVXLCM]+\.)( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), (r'^(\s*)(\(?[0-9#ivxlcmIVXLCM]+\))( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), # Numbered, but keep words at BOL from becoming lists (r'^(\s*)([A-Z]+\.)( .+\n(?:\1 .+\n)+)', bygroups(Text, Number, using(this, state='inline'))), (r'^(\s*)(\(?[A-Za-z]+\))( .+\n(?:\1 .+\n)+)', bygroups(Text, Number, using(this, state='inline'))), # Line blocks (r'^(\s*)(\|)( .+\n(?:\| .+\n)*)', bygroups(Text, Operator, using(this, state='inline'))), # Sourcecode directives (r'^( *\.\.)(\s*)((?:source)?code(?:-block)?)(::)([ \t]*)([^\n]+)' r'(\n[ \t]*\n)([ \t]+)(.*)(\n)((?:(?:\8.*|)\n)+)', _handle_sourcecode), # A directive (r'^( *\.\.)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))', bygroups(Punctuation, Text, Operator.Word, Punctuation, Text, using(this, state='inline'))), # A reference target (r'^( *\.\.)(\s*)(_(?:[^:\\]|\\.)+:)(.*?)$', bygroups(Punctuation, Text, Name.Tag, using(this, state='inline'))), # A footnote/citation target (r'^( *\.\.)(\s*)(\[.+\])(.*?)$', bygroups(Punctuation, Text, Name.Tag, using(this, state='inline'))), # A substitution def (r'^( *\.\.)(\s*)(\|.+\|)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))', bygroups(Punctuation, Text, Name.Tag, Text, Operator.Word, Punctuation, Text, using(this, state='inline'))), # Comments (r'^ *\.\..*(\n( +.*\n|\n)+)?', Comment.Preproc), # Field list (r'^( *)(:[a-zA-Z-]+:)(\s*)$', bygroups(Text, Name.Class, Text)), (r'^( *)(:.*?:)([ \t]+)(.*?)$', bygroups(Text, Name.Class, Text, Name.Function)), # Definition list (r'^(\S.*(?)(`__?)', # reference with inline target bygroups(String, String.Interpol, String)), (r'`.+?`__?', String), # reference (r'(`.+?`)(:[a-zA-Z0-9:-]+?:)?', bygroups(Name.Variable, Name.Attribute)), # role (r'(:[a-zA-Z0-9:-]+?:)(`.+?`)', bygroups(Name.Attribute, Name.Variable)), # role (content first) (r'\*\*.+?\*\*', Generic.Strong), # Strong emphasis (r'\*.+?\*', Generic.Emph), # Emphasis (r'\[.*?\]_', String), # Footnote or citation (r'<.+?>', Name.Tag), # Hyperlink (r'[^\\\n\[*`:]+', Text), (r'.', Text), ], 'literal': [ (r'[^`]+', String), (r'``' + end_string_suffix, String, '#pop'), (r'`', String), ] } def __init__(self, **options): self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True) RegexLexer.__init__(self, **options) def analyse_text(text): if text[:2] == '..' and text[2:3] != '.': return 0.3 p1 = text.find("\n") p2 = text.find("\n", p1 + 1) if (p2 > -1 and # has two lines p1 * 2 + 1 == p2 and # they are the same length text[p1+1] in '-=' and # the next line both starts and ends with text[p1+1] == text[p2-1]): # ...a sufficiently high header return 0.5 class TexLexer(RegexLexer): """ Lexer for the TeX and LaTeX typesetting languages. 
""" name = 'TeX' aliases = ['tex', 'latex'] filenames = ['*.tex', '*.aux', '*.toc'] mimetypes = ['text/x-tex', 'text/x-latex'] tokens = { 'general': [ (r'%.*?\n', Comment), (r'[{}]', Name.Builtin), (r'[&_^]', Name.Builtin), ], 'root': [ (r'\\\[', String.Backtick, 'displaymath'), (r'\\\(', String, 'inlinemath'), (r'\$\$', String.Backtick, 'displaymath'), (r'\$', String, 'inlinemath'), (r'\\([a-zA-Z]+|.)', Keyword, 'command'), (r'\\$', Keyword), include('general'), (r'[^\\$%&_^{}]+', Text), ], 'math': [ (r'\\([a-zA-Z]+|.)', Name.Variable), include('general'), (r'[0-9]+', Number), (r'[-=!+*/()\[\]]', Operator), (r'[^=!+*/()\[\]\\$%&_^{}0-9-]+', Name.Builtin), ], 'inlinemath': [ (r'\\\)', String, '#pop'), (r'\$', String, '#pop'), include('math'), ], 'displaymath': [ (r'\\\]', String, '#pop'), (r'\$\$', String, '#pop'), (r'\$', Name.Builtin), include('math'), ], 'command': [ (r'\[.*?\]', Name.Attribute), (r'\*', Keyword), default('#pop'), ], } def analyse_text(text): for start in ("\\documentclass", "\\input", "\\documentstyle", "\\relax"): if text[:len(start)] == start: return True class GroffLexer(RegexLexer): """ Lexer for the (g)roff typesetting language, supporting groff extensions. Mainly useful for highlighting manpage sources. .. versionadded:: 0.6 """ name = 'Groff' aliases = ['groff', 'nroff', 'man'] filenames = ['*.[1234567]', '*.man'] mimetypes = ['application/x-troff', 'text/troff'] tokens = { 'root': [ (r'(\.)(\w+)', bygroups(Text, Keyword), 'request'), (r'\.', Punctuation, 'request'), # Regular characters, slurp till we find a backslash or newline (r'[^\\\n]+', Text, 'textline'), default('textline'), ], 'textline': [ include('escapes'), (r'[^\\\n]+', Text), (r'\n', Text, '#pop'), ], 'escapes': [ # groff has many ways to write escapes. (r'\\"[^\n]*', Comment), (r'\\[fn]\w', String.Escape), (r'\\\(.{2}', String.Escape), (r'\\.\[.*\]', String.Escape), (r'\\.', String.Escape), (r'\\\n', Text, 'request'), ], 'request': [ (r'\n', Text, '#pop'), include('escapes'), (r'"[^\n"]+"', String.Double), (r'\d+', Number), (r'\S+', String), (r'\s+', Text), ], } def analyse_text(text): if text[:1] != '.': return False if text[:3] == '.\\"': return True if text[:4] == '.TH ': return True if text[1:3].isalnum() and text[3].isspace(): return 0.9 class MozPreprocHashLexer(RegexLexer): """ Lexer for Mozilla Preprocessor files (with '#' as the marker). Other data is left untouched. .. versionadded:: 2.0 """ name = 'mozhashpreproc' aliases = [name] filenames = [] mimetypes = [] tokens = { 'root': [ (r'^#', Comment.Preproc, ('expr', 'exprstart')), (r'.+', Other), ], 'exprstart': [ (r'(literal)(.*)', bygroups(Comment.Preproc, Text), '#pop:2'), (words(( 'define', 'undef', 'if', 'ifdef', 'ifndef', 'else', 'elif', 'elifdef', 'elifndef', 'endif', 'expand', 'filter', 'unfilter', 'include', 'includesubst', 'error')), Comment.Preproc, '#pop'), ], 'expr': [ (words(('!', '!=', '==', '&&', '||')), Operator), (r'(defined)(\()', bygroups(Keyword, Punctuation)), (r'\)', Punctuation), (r'[0-9]+', Number.Decimal), (r'__\w+?__', Name.Variable), (r'@\w+?@', Name.Class), (r'\w+', Name), (r'\n', Text, '#pop'), (r'\s+', Text), (r'\S', Punctuation), ], } class MozPreprocPercentLexer(MozPreprocHashLexer): """ Lexer for Mozilla Preprocessor files (with '%' as the marker). Other data is left untouched. .. 
versionadded:: 2.0 """ name = 'mozpercentpreproc' aliases = [name] filenames = [] mimetypes = [] tokens = { 'root': [ (r'^%', Comment.Preproc, ('expr', 'exprstart')), (r'.+', Other), ], } class MozPreprocXulLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `XmlLexer`. .. versionadded:: 2.0 """ name = "XUL+mozpreproc" aliases = ['xul+mozpreproc'] filenames = ['*.xul.in'] mimetypes = [] def __init__(self, **options): super(MozPreprocXulLexer, self).__init__( XmlLexer, MozPreprocHashLexer, **options) class MozPreprocJavascriptLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `JavascriptLexer`. .. versionadded:: 2.0 """ name = "Javascript+mozpreproc" aliases = ['javascript+mozpreproc'] filenames = ['*.js.in'] mimetypes = [] def __init__(self, **options): super(MozPreprocJavascriptLexer, self).__init__( JavascriptLexer, MozPreprocHashLexer, **options) class MozPreprocCssLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `CssLexer`. .. versionadded:: 2.0 """ name = "CSS+mozpreproc" aliases = ['css+mozpreproc'] filenames = ['*.css.in'] mimetypes = [] def __init__(self, **options): super(MozPreprocCssLexer, self).__init__( CssLexer, MozPreprocPercentLexer, **options) class MarkdownLexer(RegexLexer): """ For `Markdown `_ markup. .. versionadded:: 2.2 """ name = 'markdown' aliases = ['md'] filenames = ['*.md'] mimetypes = ["text/x-markdown"] flags = re.MULTILINE def _handle_codeblock(self, match): """ match args: 1:backticks, 2:lang_name, 3:newline, 4:code, 5:backticks """ from pygments.lexers import get_lexer_by_name # section header yield match.start(1), String , match.group(1) yield match.start(2), String , match.group(2) yield match.start(3), Text , match.group(3) # lookup lexer if wanted and existing lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name( match.group(2).strip() ) except ClassNotFound: pass code = match.group(4) # no lexer for this language. handle it like it was a code block if lexer is None: yield match.start(4), String, code else: for item in do_insertions([], lexer.get_tokens_unprocessed(code)): yield item yield match.start(5), String , match.group(5) tokens = { 'root': [ # heading with pound prefix (r'^(#)([^#].+\n)', bygroups(Generic.Heading, Text)), (r'^(#{2,6})(.+\n)', bygroups(Generic.Subheading, Text)), # task list (r'^(\s*)([*-] )(\[[ xX]\])( .+\n)', bygroups(Text, Keyword, Keyword, using(this, state='inline'))), # bulleted lists (r'^(\s*)([*-])(\s)(.+\n)', bygroups(Text, Keyword, Text, using(this, state='inline'))), # numbered lists (r'^(\s*)([0-9]+\.)( .+\n)', bygroups(Text, Keyword, using(this, state='inline'))), # quote (r'^(\s*>\s)(.+\n)', bygroups(Keyword, Generic.Emph)), # text block (r'^(```\n)([\w\W]*?)(^```$)', bygroups(String, Text, String)), # code block with language (r'^(```)(\w+)(\n)([\w\W]*?)(^```$)', _handle_codeblock), include('inline'), ], 'inline': [ # escape (r'\\.', Text), # italics (r'(\s)([*_][^*_]+[*_])(\W|\n)', bygroups(Text, Generic.Emph, Text)), # bold # warning: the following rule eats internal tags. eg. 
**foo _bar_ baz** bar is not italics (r'(\s)((\*\*|__).*\3)((?=\W|\n))', bygroups(Text, Generic.Strong, None, Text)), # "proper way" (r'(\s)([*_]{2}[^*_]+[*_]{2})((?=\W|\n))', bygroups(Text, Generic.Strong, Text)), # strikethrough (r'(\s)(~~[^~]+~~)((?=\W|\n))', bygroups(Text, Generic.Deleted, Text)), # inline code (r'`[^`]+`', String.Backtick), # mentions and topics (twitter and github stuff) (r'[@#][\w/:]+', Name.Entity), # (image?) links eg: ![Image of Yaktocat](https://octodex.github.com/images/yaktocat.png) (r'(!?\[)([^]]+)(\])(\()([^)]+)(\))', bygroups(Text, Name.Tag, Text, Text, Name.Attribute, Text)), # general text, must come last! (r'[^\\\s]+', Text), (r'.', Text), ], } def __init__(self, **options): self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True) RegexLexer.__init__(self, **options) Pygments-2.3.1/pygments/lexers/webmisc.py0000644000175000017500000011574513402534107017566 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.webmisc ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for misc. web stuff. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, bygroups, \ default, using from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Literal from pygments.util import unirange from pygments.lexers.css import _indentation, _starts_block from pygments.lexers.html import HtmlLexer from pygments.lexers.javascript import JavascriptLexer from pygments.lexers.ruby import RubyLexer __all__ = ['DuelLexer', 'SlimLexer', 'XQueryLexer', 'QmlLexer', 'CirruLexer'] class DuelLexer(RegexLexer): """ Lexer for Duel Views Engine (formerly JBST) markup with JavaScript code blocks. See http://duelengine.org/. See http://jsonml.org/jbst/. .. versionadded:: 1.4 """ name = 'Duel' aliases = ['duel', 'jbst', 'jsonml+bst'] filenames = ['*.duel', '*.jbst'] mimetypes = ['text/x-duel', 'text/x-jbst'] flags = re.DOTALL tokens = { 'root': [ (r'(<%[@=#!:]?)(.*?)(%>)', bygroups(Name.Tag, using(JavascriptLexer), Name.Tag)), (r'(<%\$)(.*?)(:)(.*?)(%>)', bygroups(Name.Tag, Name.Function, Punctuation, String, Name.Tag)), (r'(<%--)(.*?)(--%>)', bygroups(Name.Tag, Comment.Multiline, Name.Tag)), (r'()(.*?)()', bygroups(using(HtmlLexer), using(JavascriptLexer), using(HtmlLexer))), (r'(.+?)(?=<)', using(HtmlLexer)), (r'.+', using(HtmlLexer)), ], } class XQueryLexer(ExtendedRegexLexer): """ An XQuery lexer, parsing a stream and outputting the tokens needed to highlight xquery code. .. 
versionadded:: 1.4 """ name = 'XQuery' aliases = ['xquery', 'xqy', 'xq', 'xql', 'xqm'] filenames = ['*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'] mimetypes = ['text/xquery', 'application/xquery'] xquery_parse_state = [] # FIX UNICODE LATER # ncnamestartchar = ( # ur"[A-Z]|_|[a-z]|[\u00C0-\u00D6]|[\u00D8-\u00F6]|[\u00F8-\u02FF]|" # ur"[\u0370-\u037D]|[\u037F-\u1FFF]|[\u200C-\u200D]|[\u2070-\u218F]|" # ur"[\u2C00-\u2FEF]|[\u3001-\uD7FF]|[\uF900-\uFDCF]|[\uFDF0-\uFFFD]|" # ur"[\u10000-\uEFFFF]" # ) ncnamestartchar = r"(?:[A-Z]|_|[a-z])" # FIX UNICODE LATER # ncnamechar = ncnamestartchar + (ur"|-|\.|[0-9]|\u00B7|[\u0300-\u036F]|" # ur"[\u203F-\u2040]") ncnamechar = r"(?:" + ncnamestartchar + r"|-|\.|[0-9])" ncname = "(?:%s+%s*)" % (ncnamestartchar, ncnamechar) pitarget_namestartchar = r"(?:[A-KN-WYZ]|_|:|[a-kn-wyz])" pitarget_namechar = r"(?:" + pitarget_namestartchar + r"|-|\.|[0-9])" pitarget = "%s+%s*" % (pitarget_namestartchar, pitarget_namechar) prefixedname = "%s:%s" % (ncname, ncname) unprefixedname = ncname qname = "(?:%s|%s)" % (prefixedname, unprefixedname) entityref = r'(?:&(?:lt|gt|amp|quot|apos|nbsp);)' charref = r'(?:&#[0-9]+;|&#x[0-9a-fA-F]+;)' stringdouble = r'(?:"(?:' + entityref + r'|' + charref + r'|""|[^&"])*")' stringsingle = r"(?:'(?:" + entityref + r"|" + charref + r"|''|[^&'])*')" # FIX UNICODE LATER # elementcontentchar = (ur'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|' # ur'[\u003d-\u007a]|\u007c|[\u007e-\u007F]') elementcontentchar = r'[A-Za-z]|\s|\d|[!"#$%()*+,\-./:;=?@\[\\\]^_\'`|~]' # quotattrcontentchar = (ur'\t|\r|\n|[\u0020-\u0021]|[\u0023-\u0025]|' # ur'[\u0027-\u003b]|[\u003d-\u007a]|\u007c|[\u007e-\u007F]') quotattrcontentchar = r'[A-Za-z]|\s|\d|[!#$%()*+,\-./:;=?@\[\\\]^_\'`|~]' # aposattrcontentchar = (ur'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|' # ur'[\u003d-\u007a]|\u007c|[\u007e-\u007F]') aposattrcontentchar = r'[A-Za-z]|\s|\d|[!"#$%()*+,\-./:;=?@\[\\\]^_`|~]' # CHAR elements - fix the above elementcontentchar, quotattrcontentchar, # aposattrcontentchar # x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] flags = re.DOTALL | re.MULTILINE | re.UNICODE def punctuation_root_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) # transition to root always - don't pop off stack ctx.stack = ['root'] ctx.pos = match.end() def operator_root_callback(lexer, match, ctx): yield match.start(), Operator, match.group(1) # transition to root always - don't pop off stack ctx.stack = ['root'] ctx.pos = match.end() def popstate_tag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) ctx.stack.append(lexer.xquery_parse_state.pop()) ctx.pos = match.end() def popstate_xmlcomment_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append(lexer.xquery_parse_state.pop()) ctx.pos = match.end() def popstate_kindtest_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) next_state = lexer.xquery_parse_state.pop() if next_state == 'occurrenceindicator': if re.match("[?*+]+", match.group(2)): yield match.start(), Punctuation, match.group(2) ctx.stack.append('operator') ctx.pos = match.end() else: ctx.stack.append('operator') ctx.pos = match.end(1) else: ctx.stack.append(next_state) ctx.pos = match.end(1) def popstate_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) # if we have run out of our state stack, pop whatever is on the pygments # state stack if len(lexer.xquery_parse_state) == 0: ctx.stack.pop() elif len(ctx.stack) > 1: 
ctx.stack.append(lexer.xquery_parse_state.pop()) else: # i don't know if i'll need this, but in case, default back to root ctx.stack = ['root'] ctx.pos = match.end() def pushstate_element_content_starttag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) lexer.xquery_parse_state.append('element_content') ctx.stack.append('start_tag') ctx.pos = match.end() def pushstate_cdata_section_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('cdata_section') lexer.xquery_parse_state.append(ctx.state.pop) ctx.pos = match.end() def pushstate_starttag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) lexer.xquery_parse_state.append(ctx.state.pop) ctx.stack.append('start_tag') ctx.pos = match.end() def pushstate_operator_order_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_map_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_root_validate(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_root_validate_withmode(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Keyword, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_processing_instruction_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('processing_instruction') lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_element_content_processing_instruction_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('processing_instruction') lexer.xquery_parse_state.append('element_content') ctx.pos = match.end() def pushstate_element_content_cdata_section_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('cdata_section') lexer.xquery_parse_state.append('element_content') ctx.pos = match.end() def pushstate_operator_cdata_section_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('cdata_section') lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_element_content_xmlcomment_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('xml_comment') lexer.xquery_parse_state.append('element_content') ctx.pos = match.end() def pushstate_operator_xmlcomment_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('xml_comment') lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_kindtest_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('kindtest') ctx.stack.append('kindtest') ctx.pos = match.end() def 
pushstate_operator_kindtestforpi_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('operator') ctx.stack.append('kindtestforpi') ctx.pos = match.end() def pushstate_operator_kindtest_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('operator') ctx.stack.append('kindtest') ctx.pos = match.end() def pushstate_occurrenceindicator_kindtest_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('occurrenceindicator') ctx.stack.append('kindtest') ctx.pos = match.end() def pushstate_operator_starttag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) lexer.xquery_parse_state.append('operator') ctx.stack.append('start_tag') ctx.pos = match.end() def pushstate_operator_root_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) lexer.xquery_parse_state.append('operator') ctx.stack = ['root'] ctx.pos = match.end() def pushstate_operator_root_construct_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('operator') ctx.stack = ['root'] ctx.pos = match.end() def pushstate_root_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) cur_state = ctx.stack.pop() lexer.xquery_parse_state.append(cur_state) ctx.stack = ['root'] ctx.pos = match.end() def pushstate_operator_attribute_callback(lexer, match, ctx): yield match.start(), Name.Attribute, match.group(1) ctx.stack.append('operator') ctx.pos = match.end() def pushstate_operator_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) lexer.xquery_parse_state.append('operator') ctx.pos = match.end() tokens = { 'comment': [ # xquery comments (r'(:\))', Comment, '#pop'), (r'(\(:)', Comment, '#push'), (r'[^:)]', Comment), (r'([^:)]|:|\))', Comment), ], 'whitespace': [ (r'\s+', Text), ], 'operator': [ include('whitespace'), (r'(\})', popstate_callback), (r'\(:', Comment, 'comment'), (r'(\{)', pushstate_root_callback), (r'then|else|external|at|div|except', Keyword, 'root'), (r'order by', Keyword, 'root'), (r'group by', Keyword, 'root'), (r'is|mod|order\s+by|stable\s+order\s+by', Keyword, 'root'), (r'and|or', Operator.Word, 'root'), (r'(eq|ge|gt|le|lt|ne|idiv|intersect|in)(?=\b)', Operator.Word, 'root'), (r'return|satisfies|to|union|where|count|preserve\s+strip', Keyword, 'root'), (r'(>=|>>|>|<=|<<|<|-|\*|!=|\+|\|\||\||:=|=|!)', operator_root_callback), (r'(::|:|;|\[|//|/|,)', punctuation_root_callback), (r'(castable|cast)(\s+)(as)\b', bygroups(Keyword, Text, Keyword), 'singletype'), (r'(instance)(\s+)(of)\b', bygroups(Keyword, Text, Keyword), 'itemtype'), (r'(treat)(\s+)(as)\b', bygroups(Keyword, Text, Keyword), 'itemtype'), (r'(case)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double), 'itemtype'), (r'(case)(\s+)(' + stringsingle + ')', bygroups(Keyword, Text, String.Single), 'itemtype'), (r'(case|as)\b', Keyword, 'itemtype'), (r'(\))(\s*)(as)', bygroups(Punctuation, Text, Keyword), 'itemtype'), 
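            # The pushstate_*/popstate_* callbacks defined above keep
            # ``xquery_parse_state`` as a return-address stack running parallel to
            # the normal Pygments context stack: entering an enclosed construct
            # records the state to resume, and the matching closer pops it back.
            # A stripped-down sketch of that pattern (illustrative only -- the
            # real callbacks also emit the token for the matched text and advance
            # ``ctx.pos``):
            #
            #     def push_root(lexer, match, ctx):   # e.g. on an opening '{'
            #         lexer.xquery_parse_state.append('operator')  # remember caller state
            #         ctx.stack = ['root']                         # lex the enclosed expression
            #
            #     def pop_back(lexer, match, ctx):    # on the matching '}'
            #         ctx.stack.append(lexer.xquery_parse_state.pop())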
(r'\$', Name.Variable, 'varname'), (r'(for|let|previous|next)(\s+)(\$)', bygroups(Keyword, Text, Name.Variable), 'varname'), (r'(for)(\s+)(tumbling|sliding)(\s+)(window)(\s+)(\$)', bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable), 'varname'), # (r'\)|\?|\]', Punctuation, '#push'), (r'\)|\?|\]', Punctuation), (r'(empty)(\s+)(greatest|least)', bygroups(Keyword, Text, Keyword)), (r'ascending|descending|default', Keyword, '#push'), (r'(allowing)(\s+)(empty)', bygroups(Keyword, Text, Keyword)), (r'external', Keyword), (r'(start|when|end)', Keyword, 'root'), (r'(only)(\s+)(end)', bygroups(Keyword, Text, Keyword), 'root'), (r'collation', Keyword, 'uritooperator'), # eXist specific XQUF (r'(into|following|preceding|with)', Keyword, 'root'), # support for current context on rhs of Simple Map Operator (r'\.', Operator), # finally catch all string literals and stay in operator state (stringdouble, String.Double), (stringsingle, String.Single), (r'(catch)(\s*)', bygroups(Keyword, Text), 'root'), ], 'uritooperator': [ (stringdouble, String.Double, '#pop'), (stringsingle, String.Single, '#pop'), ], 'namespacedecl': [ include('whitespace'), (r'\(:', Comment, 'comment'), (r'(at)(\s+)('+stringdouble+')', bygroups(Keyword, Text, String.Double)), (r"(at)(\s+)("+stringsingle+')', bygroups(Keyword, Text, String.Single)), (stringdouble, String.Double), (stringsingle, String.Single), (r',', Punctuation), (r'=', Operator), (r';', Punctuation, 'root'), (ncname, Name.Namespace), ], 'namespacekeyword': [ include('whitespace'), (r'\(:', Comment, 'comment'), (stringdouble, String.Double, 'namespacedecl'), (stringsingle, String.Single, 'namespacedecl'), (r'inherit|no-inherit', Keyword, 'root'), (r'namespace', Keyword, 'namespacedecl'), (r'(default)(\s+)(element)', bygroups(Keyword, Text, Keyword)), (r'preserve|no-preserve', Keyword), (r',', Punctuation), ], 'annotationname': [ (r'\(:', Comment, 'comment'), (qname, Name.Decorator), (r'(\()(' + stringdouble + ')', bygroups(Punctuation, String.Double)), (r'(\()(' + stringsingle + ')', bygroups(Punctuation, String.Single)), (r'(\,)(\s+)(' + stringdouble + ')', bygroups(Punctuation, Text, String.Double)), (r'(\,)(\s+)(' + stringsingle + ')', bygroups(Punctuation, Text, String.Single)), (r'\)', Punctuation), (r'(\s+)(\%)', bygroups(Text, Name.Decorator), 'annotationname'), (r'(\s+)(variable)(\s+)(\$)', bygroups(Text, Keyword.Declaration, Text, Name.Variable), 'varname'), (r'(\s+)(function)(\s+)', bygroups(Text, Keyword.Declaration, Text), 'root') ], 'varname': [ (r'\(:', Comment, 'comment'), (r'(' + qname + r')(\()?', bygroups(Name, Punctuation), 'operator'), ], 'singletype': [ include('whitespace'), (r'\(:', Comment, 'comment'), (ncname + r'(:\*)', Name.Variable, 'operator'), (qname, Name.Variable, 'operator'), ], 'itemtype': [ include('whitespace'), (r'\(:', Comment, 'comment'), (r'\$', Name.Variable, 'varname'), (r'(void)(\s*)(\()(\s*)(\))', bygroups(Keyword, Text, Punctuation, Text, Punctuation), 'operator'), (r'(element|attribute|schema-element|schema-attribute|comment|text|' r'node|binary|document-node|empty-sequence)(\s*)(\()', pushstate_occurrenceindicator_kindtest_callback), # Marklogic specific type? 
(r'(processing-instruction)(\s*)(\()', bygroups(Keyword, Text, Punctuation), ('occurrenceindicator', 'kindtestforpi')), (r'(item)(\s*)(\()(\s*)(\))(?=[*+?])', bygroups(Keyword, Text, Punctuation, Text, Punctuation), 'occurrenceindicator'), (r'(\(\#)(\s*)', bygroups(Punctuation, Text), 'pragma'), (r';', Punctuation, '#pop'), (r'then|else', Keyword, '#pop'), (r'(at)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double), 'namespacedecl'), (r'(at)(\s+)(' + stringsingle + ')', bygroups(Keyword, Text, String.Single), 'namespacedecl'), (r'except|intersect|in|is|return|satisfies|to|union|where|count', Keyword, 'root'), (r'and|div|eq|ge|gt|le|lt|ne|idiv|mod|or', Operator.Word, 'root'), (r':=|=|,|>=|>>|>|\[|\(|<=|<<|<|-|!=|\|\||\|', Operator, 'root'), (r'external|at', Keyword, 'root'), (r'(stable)(\s+)(order)(\s+)(by)', bygroups(Keyword, Text, Keyword, Text, Keyword), 'root'), (r'(castable|cast)(\s+)(as)', bygroups(Keyword, Text, Keyword), 'singletype'), (r'(treat)(\s+)(as)', bygroups(Keyword, Text, Keyword)), (r'(instance)(\s+)(of)', bygroups(Keyword, Text, Keyword)), (r'(case)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double), 'itemtype'), (r'(case)(\s+)(' + stringsingle + ')', bygroups(Keyword, Text, String.Single), 'itemtype'), (r'case|as', Keyword, 'itemtype'), (r'(\))(\s*)(as)', bygroups(Operator, Text, Keyword), 'itemtype'), (ncname + r':\*', Keyword.Type, 'operator'), (r'(function|map|array)(\()', bygroups(Keyword.Type, Punctuation)), (qname, Keyword.Type, 'occurrenceindicator'), ], 'kindtest': [ (r'\(:', Comment, 'comment'), (r'\{', Punctuation, 'root'), (r'(\))([*+?]?)', popstate_kindtest_callback), (r'\*', Name, 'closekindtest'), (qname, Name, 'closekindtest'), (r'(element|schema-element)(\s*)(\()', pushstate_kindtest_callback), ], 'kindtestforpi': [ (r'\(:', Comment, 'comment'), (r'\)', Punctuation, '#pop'), (ncname, Name.Variable), (stringdouble, String.Double), (stringsingle, String.Single), ], 'closekindtest': [ (r'\(:', Comment, 'comment'), (r'(\))', popstate_callback), (r',', Punctuation), (r'(\{)', pushstate_operator_root_callback), (r'\?', Punctuation), ], 'xml_comment': [ (r'(-->)', popstate_xmlcomment_callback), (r'[^-]{1,2}', Literal), (u'\\t|\\r|\\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|' + unirange(0x10000, 0x10ffff), Literal), ], 'processing_instruction': [ (r'\s+', Text, 'processing_instruction_content'), (r'\?>', String.Doc, '#pop'), (pitarget, Name), ], 'processing_instruction_content': [ (r'\?>', String.Doc, '#pop'), (u'\\t|\\r|\\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|' + unirange(0x10000, 0x10ffff), Literal), ], 'cdata_section': [ (r']]>', String.Doc, '#pop'), (u'\\t|\\r|\\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|' + unirange(0x10000, 0x10ffff), Literal), ], 'start_tag': [ include('whitespace'), (r'(/>)', popstate_tag_callback), (r'>', Name.Tag, 'element_content'), (r'"', Punctuation, 'quot_attribute_content'), (r"'", Punctuation, 'apos_attribute_content'), (r'=', Operator), (qname, Name.Tag), ], 'quot_attribute_content': [ (r'"', Punctuation, 'start_tag'), (r'(\{)', pushstate_root_callback), (r'""', Name.Attribute), (quotattrcontentchar, Name.Attribute), (entityref, Name.Attribute), (charref, Name.Attribute), (r'\{\{|\}\}', Name.Attribute), ], 'apos_attribute_content': [ (r"'", Punctuation, 'start_tag'), (r'\{', Punctuation, 'root'), (r"''", Name.Attribute), (aposattrcontentchar, Name.Attribute), (entityref, Name.Attribute), (charref, Name.Attribute), (r'\{\{|\}\}', Name.Attribute), ], 'element_content': [ (r')', popstate_tag_callback), (qname, Name.Tag), 
], 'xmlspace_decl': [ include('whitespace'), (r'\(:', Comment, 'comment'), (r'preserve|strip', Keyword, '#pop'), ], 'declareordering': [ (r'\(:', Comment, 'comment'), include('whitespace'), (r'ordered|unordered', Keyword, '#pop'), ], 'xqueryversion': [ include('whitespace'), (r'\(:', Comment, 'comment'), (stringdouble, String.Double), (stringsingle, String.Single), (r'encoding', Keyword), (r';', Punctuation, '#pop'), ], 'pragma': [ (qname, Name.Variable, 'pragmacontents'), ], 'pragmacontents': [ (r'#\)', Punctuation, 'operator'), (u'\\t|\\r|\\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|' + unirange(0x10000, 0x10ffff), Literal), (r'(\s+)', Text), ], 'occurrenceindicator': [ include('whitespace'), (r'\(:', Comment, 'comment'), (r'\*|\?|\+', Operator, 'operator'), (r':=', Operator, 'root'), default('operator'), ], 'option': [ include('whitespace'), (qname, Name.Variable, '#pop'), ], 'qname_braren': [ include('whitespace'), (r'(\{)', pushstate_operator_root_callback), (r'(\()', Punctuation, 'root'), ], 'element_qname': [ (qname, Name.Variable, 'root'), ], 'attribute_qname': [ (qname, Name.Variable, 'root'), ], 'root': [ include('whitespace'), (r'\(:', Comment, 'comment'), # handle operator state # order on numbers matters - handle most complex first (r'\d+(\.\d*)?[eE][+-]?\d+', Number.Float, 'operator'), (r'(\.\d+)[eE][+-]?\d+', Number.Float, 'operator'), (r'(\.\d+|\d+\.\d*)', Number.Float, 'operator'), (r'(\d+)', Number.Integer, 'operator'), (r'(\.\.|\.|\))', Punctuation, 'operator'), (r'(declare)(\s+)(construction)', bygroups(Keyword.Declaration, Text, Keyword.Declaration), 'operator'), (r'(declare)(\s+)(default)(\s+)(order)', bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text, Keyword.Declaration), 'operator'), (r'(declare)(\s+)(context)(\s+)(item)', bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text, Keyword.Declaration), 'operator'), (ncname + r':\*', Name, 'operator'), (r'\*:'+ncname, Name.Tag, 'operator'), (r'\*', Name.Tag, 'operator'), (stringdouble, String.Double, 'operator'), (stringsingle, String.Single, 'operator'), (r'(\}|\])', popstate_callback), # NAMESPACE DECL (r'(declare)(\s+)(default)(\s+)(collation)', bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text, Keyword.Declaration)), (r'(module|declare)(\s+)(namespace)', bygroups(Keyword.Declaration, Text, Keyword.Declaration), 'namespacedecl'), (r'(declare)(\s+)(base-uri)', bygroups(Keyword.Declaration, Text, Keyword.Declaration), 'namespacedecl'), # NAMESPACE KEYWORD (r'(declare)(\s+)(default)(\s+)(element|function)', bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text, Keyword.Declaration), 'namespacekeyword'), (r'(import)(\s+)(schema|module)', bygroups(Keyword.Pseudo, Text, Keyword.Pseudo), 'namespacekeyword'), (r'(declare)(\s+)(copy-namespaces)', bygroups(Keyword.Declaration, Text, Keyword.Declaration), 'namespacekeyword'), # VARNAMEs (r'(for|let|some|every)(\s+)(\$)', bygroups(Keyword, Text, Name.Variable), 'varname'), (r'(for)(\s+)(tumbling|sliding)(\s+)(window)(\s+)(\$)', bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable), 'varname'), (r'\$', Name.Variable, 'varname'), (r'(declare)(\s+)(variable)(\s+)(\$)', bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text, Name.Variable), 'varname'), # ANNOTATED GLOBAL VARIABLES AND FUNCTIONS (r'(declare)(\s+)(\%)', bygroups(Keyword.Declaration, Text, Name.Decorator), 'annotationname'), # ITEMTYPE (r'(\))(\s+)(as)', bygroups(Operator, Text, Keyword), 'itemtype'), (r'(element|attribute|schema-element|schema-attribute|comment|' 
r'text|node|document-node|empty-sequence)(\s+)(\()', pushstate_operator_kindtest_callback), (r'(processing-instruction)(\s+)(\()', pushstate_operator_kindtestforpi_callback), (r'(', Punctuation), (r'"(?:\\x[0-9a-fA-F]+\\|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|' r'\\[0-7]+\\|\\["\nabcefnrstv]|[^\\"])*"', String.Double), (r"'(?:''|[^'])*'", String.Atom), # quoted atom # Needs to not be followed by an atom. # (r'=(?=\s|[a-zA-Z\[])', Operator), (r'is\b', Operator), (r'(<|>|=<|>=|==|=:=|=|/|//|\*|\+|-)(?=\s|[a-zA-Z0-9\[])', Operator), (r'(mod|div|not)\b', Operator), (r'_', Keyword), # The don't-care variable (r'([a-z]+)(:)', bygroups(Name.Namespace, Punctuation)), (u'([a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]' u'[\\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*)' u'(\\s*)(:-|-->)', bygroups(Name.Function, Text, Operator)), # function defn (u'([a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]' u'[\\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*)' u'(\\s*)(\\()', bygroups(Name.Function, Text, Punctuation)), (u'[a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]' u'[\\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*', String.Atom), # atom, characters # This one includes ! (u'[#&*+\\-./:<=>?@\\\\^~\u00a1-\u00bf\u2010-\u303f]+', String.Atom), # atom, graphics (r'[A-Z_]\w*', Name.Variable), (u'\\s+|[\u2000-\u200f\ufff0-\ufffe\uffef]', Text), ], 'nested-comment': [ (r'\*/', Comment.Multiline, '#pop'), (r'/\*', Comment.Multiline, '#push'), (r'[^*/]+', Comment.Multiline), (r'[*/]', Comment.Multiline), ], } def analyse_text(text): return ':-' in text class LogtalkLexer(RegexLexer): """ For `Logtalk `_ source code. .. versionadded:: 0.10 """ name = 'Logtalk' aliases = ['logtalk'] filenames = ['*.lgt', '*.logtalk'] mimetypes = ['text/x-logtalk'] tokens = { 'root': [ # Directives (r'^\s*:-\s', Punctuation, 'directive'), # Comments (r'%.*?\n', Comment), (r'/\*(.|\n)*?\*/', Comment), # Whitespace (r'\n', Text), (r'\s+', Text), # Numbers (r"0'.", Number), (r'0b[01]+', Number.Bin), (r'0o[0-7]+', Number.Oct), (r'0x[0-9a-fA-F]+', Number.Hex), (r'\d+\.?\d*((e|E)(\+|-)?\d+)?', Number), # Variables (r'([A-Z_]\w*)', Name.Variable), # Event handlers (r'(after|before)(?=[(])', Keyword), # Message forwarding handler (r'forward(?=[(])', Keyword), # Execution-context methods (r'(parameter|this|se(lf|nder))(?=[(])', Keyword), # Reflection (r'(current_predicate|predicate_property)(?=[(])', Keyword), # DCGs and term expansion (r'(expand_(goal|term)|(goal|term)_expansion|phrase)(?=[(])', Keyword), # Entity (r'(abolish|c(reate|urrent))_(object|protocol|category)(?=[(])', Keyword), (r'(object|protocol|category)_property(?=[(])', Keyword), # Entity relations (r'co(mplements_object|nforms_to_protocol)(?=[(])', Keyword), (r'extends_(object|protocol|category)(?=[(])', Keyword), (r'imp(lements_protocol|orts_category)(?=[(])', Keyword), (r'(instantiat|specializ)es_class(?=[(])', Keyword), # Events (r'(current_event|(abolish|define)_events)(?=[(])', Keyword), # Flags (r'(current|set)_logtalk_flag(?=[(])', Keyword), # Compiling, loading, and library paths (r'logtalk_(compile|l(ibrary_path|oad|oad_context)|make)(?=[(])', Keyword), (r'\blogtalk_make\b', Keyword), # Database (r'(clause|retract(all)?)(?=[(])', Keyword), (r'a(bolish|ssert(a|z))(?=[(])', Keyword), # Control constructs (r'(ca(ll|tch)|throw)(?=[(])', Keyword), (r'(fa(il|lse)|true)\b', Keyword), # All solutions (r'((bag|set)of|f(ind|or)all)(?=[(])', Keyword), # Multi-threading meta-predicates (r'threaded(_(call|once|ignore|exit|peek|wait|notify))?(?=[(])', Keyword), # Term unification 
(r'(subsumes_term|unify_with_occurs_check)(?=[(])', Keyword), # Term creation and decomposition (r'(functor|arg|copy_term|numbervars|term_variables)(?=[(])', Keyword), # Evaluable functors (r'(div|rem|m(ax|in|od)|abs|sign)(?=[(])', Keyword), (r'float(_(integer|fractional)_part)?(?=[(])', Keyword), (r'(floor|t(an|runcate)|round|ceiling)(?=[(])', Keyword), # Other arithmetic functors (r'(cos|a(cos|sin|tan|tan2)|exp|log|s(in|qrt)|xor)(?=[(])', Keyword), # Term testing (r'(var|atom(ic)?|integer|float|c(allable|ompound)|n(onvar|umber)|' r'ground|acyclic_term)(?=[(])', Keyword), # Term comparison (r'compare(?=[(])', Keyword), # Stream selection and control (r'(curren|se)t_(in|out)put(?=[(])', Keyword), (r'(open|close)(?=[(])', Keyword), (r'flush_output(?=[(])', Keyword), (r'(at_end_of_stream|flush_output)\b', Keyword), (r'(stream_property|at_end_of_stream|set_stream_position)(?=[(])', Keyword), # Character and byte input/output (r'(nl|(get|peek|put)_(byte|c(har|ode)))(?=[(])', Keyword), (r'\bnl\b', Keyword), # Term input/output (r'read(_term)?(?=[(])', Keyword), (r'write(q|_(canonical|term))?(?=[(])', Keyword), (r'(current_)?op(?=[(])', Keyword), (r'(current_)?char_conversion(?=[(])', Keyword), # Atomic term processing (r'atom_(length|c(hars|o(ncat|des)))(?=[(])', Keyword), (r'(char_code|sub_atom)(?=[(])', Keyword), (r'number_c(har|ode)s(?=[(])', Keyword), # Implementation defined hooks functions (r'(se|curren)t_prolog_flag(?=[(])', Keyword), (r'\bhalt\b', Keyword), (r'halt(?=[(])', Keyword), # Message sending operators (r'(::|:|\^\^)', Operator), # External call (r'[{}]', Keyword), # Logic and control (r'(ignore|once)(?=[(])', Keyword), (r'\brepeat\b', Keyword), # Sorting (r'(key)?sort(?=[(])', Keyword), # Bitwise functors (r'(>>|<<|/\\|\\\\|\\)', Operator), # Predicate aliases (r'\bas\b', Operator), # Arithemtic evaluation (r'\bis\b', Keyword), # Arithemtic comparison (r'(=:=|=\\=|<|=<|>=|>)', Operator), # Term creation and decomposition (r'=\.\.', Operator), # Term unification (r'(=|\\=)', Operator), # Term comparison (r'(==|\\==|@=<|@<|@>=|@>)', Operator), # Evaluable functors (r'(//|[-+*/])', Operator), (r'\b(e|pi|div|mod|rem)\b', Operator), # Other arithemtic functors (r'\b\*\*\b', Operator), # DCG rules (r'-->', Operator), # Control constructs (r'([!;]|->)', Operator), # Logic and control (r'\\+', Operator), # Mode operators (r'[?@]', Operator), # Existential quantifier (r'\^', Operator), # Strings (r'"(\\\\|\\"|[^"])*"', String), # Ponctuation (r'[()\[\],.|]', Text), # Atoms (r"[a-z]\w*", Text), (r"'", String, 'quoted_atom'), ], 'quoted_atom': [ (r"''", String), (r"'", String, '#pop'), (r'\\([\\abfnrtv"\']|(x[a-fA-F0-9]+|[0-7]+)\\)', String.Escape), (r"[^\\'\n]+", String), (r'\\', String), ], 'directive': [ # Conditional compilation directives (r'(el)?if(?=[(])', Keyword, 'root'), (r'(e(lse|ndif))[.]', Keyword, 'root'), # Entity directives (r'(category|object|protocol)(?=[(])', Keyword, 'entityrelations'), (r'(end_(category|object|protocol))[.]', Keyword, 'root'), # Predicate scope directives (r'(public|protected|private)(?=[(])', Keyword, 'root'), # Other directives (r'e(n(coding|sure_loaded)|xport)(?=[(])', Keyword, 'root'), (r'in(clude|itialization|fo)(?=[(])', Keyword, 'root'), (r'(built_in|dynamic|synchronized|threaded)[.]', Keyword, 'root'), (r'(alias|d(ynamic|iscontiguous)|m(eta_(non_terminal|predicate)|ode|ultifile)|' r's(et_(logtalk|prolog)_flag|ynchronized))(?=[(])', Keyword, 'root'), (r'op(?=[(])', Keyword, 'root'), 
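            # A typical Logtalk snippet that this 'directive' state (together
            # with 'root' and 'entityrelations') is meant to handle, for
            # illustration only:
            #
            #     :- object(list, extends(collection)).
            #     :- public(member/2).
            #     :- dynamic.
            #     :- end_object.
            #
            # ':-' is matched in 'root' and switches here; entity-opening
            # directives such as object(...) continue in 'entityrelations',
            # while most other directives return to 'root'.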
(r'(c(alls|oinductive)|module|reexport|use(s|_module))(?=[(])', Keyword, 'root'), (r'[a-z]\w*(?=[(])', Text, 'root'), (r'[a-z]\w*[.]', Text, 'root'), ], 'entityrelations': [ (r'(complements|extends|i(nstantiates|mp(lements|orts))|specializes)(?=[(])', Keyword), # Numbers (r"0'.", Number), (r'0b[01]+', Number.Bin), (r'0o[0-7]+', Number.Oct), (r'0x[0-9a-fA-F]+', Number.Hex), (r'\d+\.?\d*((e|E)(\+|-)?\d+)?', Number), # Variables (r'([A-Z_]\w*)', Name.Variable), # Atoms (r"[a-z]\w*", Text), (r"'", String, 'quoted_atom'), # Strings (r'"(\\\\|\\"|[^"])*"', String), # End of entity-opening directive (r'([)]\.)', Text, 'root'), # Scope operator (r'(::)', Operator), # Ponctuation (r'[()\[\],.|]', Text), # Comments (r'%.*?\n', Comment), (r'/\*(.|\n)*?\*/', Comment), # Whitespace (r'\n', Text), (r'\s+', Text), ] } def analyse_text(text): if ':- object(' in text: return 1.0 elif ':- protocol(' in text: return 1.0 elif ':- category(' in text: return 1.0 elif re.search(r'^:-\s[a-z]', text, re.M): return 0.9 else: return 0.0 Pygments-2.3.1/pygments/lexers/data.py0000644000175000017500000004514313404525414017043 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.data ~~~~~~~~~~~~~~~~~~~~ Lexers for data file format. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, ExtendedRegexLexer, LexerContext, \ include, bygroups, inherit from pygments.token import Text, Comment, Keyword, Name, String, Number, \ Punctuation, Literal, Error __all__ = ['YamlLexer', 'JsonLexer', 'JsonBareObjectLexer', 'JsonLdLexer'] class YamlLexerContext(LexerContext): """Indentation context for the YAML lexer.""" def __init__(self, *args, **kwds): super(YamlLexerContext, self).__init__(*args, **kwds) self.indent_stack = [] self.indent = -1 self.next_indent = 0 self.block_scalar_indent = None class YamlLexer(ExtendedRegexLexer): """ Lexer for `YAML `_, a human-friendly data serialization language. .. 
versionadded:: 0.11 """ name = 'YAML' aliases = ['yaml'] filenames = ['*.yaml', '*.yml'] mimetypes = ['text/x-yaml'] def something(token_class): """Do not produce empty tokens.""" def callback(lexer, match, context): text = match.group() if not text: return yield match.start(), token_class, text context.pos = match.end() return callback def reset_indent(token_class): """Reset the indentation levels.""" def callback(lexer, match, context): text = match.group() context.indent_stack = [] context.indent = -1 context.next_indent = 0 context.block_scalar_indent = None yield match.start(), token_class, text context.pos = match.end() return callback def save_indent(token_class, start=False): """Save a possible indentation level.""" def callback(lexer, match, context): text = match.group() extra = '' if start: context.next_indent = len(text) if context.next_indent < context.indent: while context.next_indent < context.indent: context.indent = context.indent_stack.pop() if context.next_indent > context.indent: extra = text[context.indent:] text = text[:context.indent] else: context.next_indent += len(text) if text: yield match.start(), token_class, text if extra: yield match.start()+len(text), token_class.Error, extra context.pos = match.end() return callback def set_indent(token_class, implicit=False): """Set the previously saved indentation level.""" def callback(lexer, match, context): text = match.group() if context.indent < context.next_indent: context.indent_stack.append(context.indent) context.indent = context.next_indent if not implicit: context.next_indent += len(text) yield match.start(), token_class, text context.pos = match.end() return callback def set_block_scalar_indent(token_class): """Set an explicit indentation level for a block scalar.""" def callback(lexer, match, context): text = match.group() context.block_scalar_indent = None if not text: return increment = match.group(1) if increment: current_indent = max(context.indent, 0) increment = int(increment) context.block_scalar_indent = current_indent + increment if text: yield match.start(), token_class, text context.pos = match.end() return callback def parse_block_scalar_empty_line(indent_token_class, content_token_class): """Process an empty line in a block scalar.""" def callback(lexer, match, context): text = match.group() if (context.block_scalar_indent is None or len(text) <= context.block_scalar_indent): if text: yield match.start(), indent_token_class, text else: indentation = text[:context.block_scalar_indent] content = text[context.block_scalar_indent:] yield match.start(), indent_token_class, indentation yield (match.start()+context.block_scalar_indent, content_token_class, content) context.pos = match.end() return callback def parse_block_scalar_indent(token_class): """Process indentation spaces in a block scalar.""" def callback(lexer, match, context): text = match.group() if context.block_scalar_indent is None: if len(text) <= max(context.indent, 0): context.stack.pop() context.stack.pop() return context.block_scalar_indent = len(text) else: if len(text) < context.block_scalar_indent: context.stack.pop() context.stack.pop() return if text: yield match.start(), token_class, text context.pos = match.end() return callback def parse_plain_scalar_indent(token_class): """Process indentation spaces in a plain scalar.""" def callback(lexer, match, context): text = match.group() if len(text) <= context.indent: context.stack.pop() context.stack.pop() return if text: yield match.start(), token_class, text context.pos = 
match.end() return callback tokens = { # the root rules 'root': [ # ignored whitespaces (r'[ ]+(?=#|$)', Text), # line breaks (r'\n+', Text), # a comment (r'#[^\n]*', Comment.Single), # the '%YAML' directive (r'^%YAML(?=[ ]|$)', reset_indent(Name.Tag), 'yaml-directive'), # the %TAG directive (r'^%TAG(?=[ ]|$)', reset_indent(Name.Tag), 'tag-directive'), # document start and document end indicators (r'^(?:---|\.\.\.)(?=[ ]|$)', reset_indent(Name.Namespace), 'block-line'), # indentation spaces (r'[ ]*(?!\s|$)', save_indent(Text, start=True), ('block-line', 'indentation')), ], # trailing whitespaces after directives or a block scalar indicator 'ignored-line': [ # ignored whitespaces (r'[ ]+(?=#|$)', Text), # a comment (r'#[^\n]*', Comment.Single), # line break (r'\n', Text, '#pop:2'), ], # the %YAML directive 'yaml-directive': [ # the version number (r'([ ]+)([0-9]+\.[0-9]+)', bygroups(Text, Number), 'ignored-line'), ], # the %TAG directive 'tag-directive': [ # a tag handle and the corresponding prefix (r'([ ]+)(!|![\w-]*!)' r'([ ]+)(!|!?[\w;/?:@&=+$,.!~*\'()\[\]%-]+)', bygroups(Text, Keyword.Type, Text, Keyword.Type), 'ignored-line'), ], # block scalar indicators and indentation spaces 'indentation': [ # trailing whitespaces are ignored (r'[ ]*$', something(Text), '#pop:2'), # whitespaces preceding block collection indicators (r'[ ]+(?=[?:-](?:[ ]|$))', save_indent(Text)), # block collection indicators (r'[?:-](?=[ ]|$)', set_indent(Punctuation.Indicator)), # the beginning a block line (r'[ ]*', save_indent(Text), '#pop'), ], # an indented line in the block context 'block-line': [ # the line end (r'[ ]*(?=#|$)', something(Text), '#pop'), # whitespaces separating tokens (r'[ ]+', Text), # key with colon (r'([^,:?\[\]{}\n]+)(:)(?=[ ]|$)', bygroups(Name.Tag, set_indent(Punctuation, implicit=True))), # tags, anchors and aliases, include('descriptors'), # block collections and scalars include('block-nodes'), # flow collections and quoted scalars include('flow-nodes'), # a plain scalar (r'(?=[^\s?:,\[\]{}#&*!|>\'"%@`-]|[?:-]\S)', something(Name.Variable), 'plain-scalar-in-block-context'), ], # tags, anchors, aliases 'descriptors': [ # a full-form tag (r'!<[\w#;/?:@&=+$,.!~*\'()\[\]%-]+>', Keyword.Type), # a tag in the form '!', '!suffix' or '!handle!suffix' (r'!(?:[\w-]+!)?' 
r'[\w#;/?:@&=+$,.!~*\'()\[\]%-]*', Keyword.Type), # an anchor (r'&[\w-]+', Name.Label), # an alias (r'\*[\w-]+', Name.Variable), ], # block collections and scalars 'block-nodes': [ # implicit key (r':(?=[ ]|$)', set_indent(Punctuation.Indicator, implicit=True)), # literal and folded scalars (r'[|>]', Punctuation.Indicator, ('block-scalar-content', 'block-scalar-header')), ], # flow collections and quoted scalars 'flow-nodes': [ # a flow sequence (r'\[', Punctuation.Indicator, 'flow-sequence'), # a flow mapping (r'\{', Punctuation.Indicator, 'flow-mapping'), # a single-quoted scalar (r'\'', String, 'single-quoted-scalar'), # a double-quoted scalar (r'\"', String, 'double-quoted-scalar'), ], # the content of a flow collection 'flow-collection': [ # whitespaces (r'[ ]+', Text), # line breaks (r'\n+', Text), # a comment (r'#[^\n]*', Comment.Single), # simple indicators (r'[?:,]', Punctuation.Indicator), # tags, anchors and aliases include('descriptors'), # nested collections and quoted scalars include('flow-nodes'), # a plain scalar (r'(?=[^\s?:,\[\]{}#&*!|>\'"%@`])', something(Name.Variable), 'plain-scalar-in-flow-context'), ], # a flow sequence indicated by '[' and ']' 'flow-sequence': [ # include flow collection rules include('flow-collection'), # the closing indicator (r'\]', Punctuation.Indicator, '#pop'), ], # a flow mapping indicated by '{' and '}' 'flow-mapping': [ # key with colon (r'([^,:?\[\]{}\n]+)(:)(?=[ ]|$)', bygroups(Name.Tag, Punctuation)), # include flow collection rules include('flow-collection'), # the closing indicator (r'\}', Punctuation.Indicator, '#pop'), ], # block scalar lines 'block-scalar-content': [ # line break (r'\n', Text), # empty line (r'^[ ]+$', parse_block_scalar_empty_line(Text, Name.Constant)), # indentation spaces (we may leave the state here) (r'^[ ]*', parse_block_scalar_indent(Text)), # line content (r'[\S\t ]+', Name.Constant), ], # the content of a literal or folded scalar 'block-scalar-header': [ # indentation indicator followed by chomping flag (r'([1-9])?[+-]?(?=[ ]|$)', set_block_scalar_indent(Punctuation.Indicator), 'ignored-line'), # chomping flag followed by indentation indicator (r'[+-]?([1-9])?(?=[ ]|$)', set_block_scalar_indent(Punctuation.Indicator), 'ignored-line'), ], # ignored and regular whitespaces in quoted scalars 'quoted-scalar-whitespaces': [ # leading and trailing whitespaces are ignored (r'^[ ]+', Text), (r'[ ]+$', Text), # line breaks are ignored (r'\n+', Text), # other whitespaces are a part of the value (r'[ ]+', Name.Variable), ], # single-quoted scalars 'single-quoted-scalar': [ # include whitespace and line break rules include('quoted-scalar-whitespaces'), # escaping of the quote character (r'\'\'', String.Escape), # regular non-whitespace characters (r'[^\s\']+', String), # the closing quote (r'\'', String, '#pop'), ], # double-quoted scalars 'double-quoted-scalar': [ # include whitespace and line break rules include('quoted-scalar-whitespaces'), # escaping of special characters (r'\\[0abt\tn\nvfre "\\N_LP]', String), # escape codes (r'\\(?:x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})', String.Escape), # regular non-whitespace characters (r'[^\s"\\]+', String), # the closing quote (r'"', String, '#pop'), ], # the beginning of a new line while scanning a plain scalar 'plain-scalar-in-block-context-new-line': [ # empty lines (r'^[ ]+$', Text), # line breaks (r'\n+', Text), # document start and document end indicators (r'^(?=---|\.\.\.)', something(Name.Namespace), '#pop:3'), # indentation spaces (we may leave the block 
line state here) (r'^[ ]*', parse_plain_scalar_indent(Text), '#pop'), ], # a plain scalar in the block context 'plain-scalar-in-block-context': [ # the scalar ends with the ':' indicator (r'[ ]*(?=:[ ]|:$)', something(Text), '#pop'), # the scalar ends with whitespaces followed by a comment (r'[ ]+(?=#)', Text, '#pop'), # trailing whitespaces are ignored (r'[ ]+$', Text), # line breaks are ignored (r'\n+', Text, 'plain-scalar-in-block-context-new-line'), # other whitespaces are a part of the value (r'[ ]+', Literal.Scalar.Plain), # regular non-whitespace characters (r'(?::(?!\s)|[^\s:])+', Literal.Scalar.Plain), ], # a plain scalar is the flow context 'plain-scalar-in-flow-context': [ # the scalar ends with an indicator character (r'[ ]*(?=[,:?\[\]{}])', something(Text), '#pop'), # the scalar ends with a comment (r'[ ]+(?=#)', Text, '#pop'), # leading and trailing whitespaces are ignored (r'^[ ]+', Text), (r'[ ]+$', Text), # line breaks are ignored (r'\n+', Text), # other whitespaces are a part of the value (r'[ ]+', Name.Variable), # regular non-whitespace characters (r'[^\s,:?\[\]{}]+', Name.Variable), ], } def get_tokens_unprocessed(self, text=None, context=None): if context is None: context = YamlLexerContext(text, 0) return super(YamlLexer, self).get_tokens_unprocessed(text, context) class JsonLexer(RegexLexer): """ For JSON data structures. .. versionadded:: 1.5 """ name = 'JSON' aliases = ['json'] filenames = ['*.json'] mimetypes = ['application/json'] flags = re.DOTALL # integer part of a number int_part = r'-?(0|[1-9]\d*)' # fractional part of a number frac_part = r'\.\d+' # exponential part of a number exp_part = r'[eE](\+|-)?\d+' tokens = { 'whitespace': [ (r'\s+', Text), ], # represents a simple terminal value 'simplevalue': [ (r'(true|false|null)\b', Keyword.Constant), (('%(int_part)s(%(frac_part)s%(exp_part)s|' '%(exp_part)s|%(frac_part)s)') % vars(), Number.Float), (int_part, Number.Integer), (r'"(\\\\|\\"|[^"])*"', String.Double), ], # the right hand side of an object, after the attribute name 'objectattribute': [ include('value'), (r':', Punctuation), # comma terminates the attribute but expects more (r',', Punctuation, '#pop'), # a closing bracket terminates the entire object, so pop twice (r'\}', Punctuation, '#pop:2'), ], # a json object - { attr, attr, ... } 'objectvalue': [ include('whitespace'), (r'"(\\\\|\\"|[^"])*"', Name.Tag, 'objectattribute'), (r'\}', Punctuation, '#pop'), ], # json array - [ value, value, ... } 'arrayvalue': [ include('whitespace'), include('value'), (r',', Punctuation), (r'\]', Punctuation, '#pop'), ], # a json value - either a simple value or a complex value (object or array) 'value': [ include('whitespace'), include('simplevalue'), (r'\{', Punctuation, 'objectvalue'), (r'\[', Punctuation, 'arrayvalue'), ], # the root of a json document whould be a value 'root': [ include('value'), ], } class JsonBareObjectLexer(JsonLexer): """ For JSON data structures (with missing object curly braces). .. versionadded:: 2.2 """ name = 'JSONBareObject' aliases = ['json-object'] filenames = [] mimetypes = ['application/json-object'] tokens = { 'root': [ (r'\}', Error), include('objectvalue'), ], 'objectattribute': [ (r'\}', Error), inherit, ], } class JsonLdLexer(JsonLexer): """ For `JSON-LD `_ linked data. .. 
versionadded:: 2.0 """ name = 'JSON-LD' aliases = ['jsonld', 'json-ld'] filenames = ['*.jsonld'] mimetypes = ['application/ld+json'] tokens = { 'objectvalue': [ (r'"@(context|id|value|language|type|container|list|set|' r'reverse|index|base|vocab|graph)"', Name.Decorator, 'objectattribute'), inherit, ], } Pygments-2.3.1/pygments/lexers/ooc.py0000644000175000017500000000566713376260540016725 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.ooc ~~~~~~~~~~~~~~~~~~~ Lexers for the Ooc language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['OocLexer'] class OocLexer(RegexLexer): """ For `Ooc `_ source code .. versionadded:: 1.2 """ name = 'Ooc' aliases = ['ooc'] filenames = ['*.ooc'] mimetypes = ['text/x-ooc'] tokens = { 'root': [ (words(( 'class', 'interface', 'implement', 'abstract', 'extends', 'from', 'this', 'super', 'new', 'const', 'final', 'static', 'import', 'use', 'extern', 'inline', 'proto', 'break', 'continue', 'fallthrough', 'operator', 'if', 'else', 'for', 'while', 'do', 'switch', 'case', 'as', 'in', 'version', 'return', 'true', 'false', 'null'), prefix=r'\b', suffix=r'\b'), Keyword), (r'include\b', Keyword, 'include'), (r'(cover)([ \t]+)(from)([ \t]+)(\w+[*@]?)', bygroups(Keyword, Text, Keyword, Text, Name.Class)), (r'(func)((?:[ \t]|\\\n)+)(~[a-z_]\w*)', bygroups(Keyword, Text, Name.Function)), (r'\bfunc\b', Keyword), # Note: %= and ^= not listed on http://ooc-lang.org/syntax (r'//.*', Comment), (r'(?s)/\*.*?\*/', Comment.Multiline), (r'(==?|\+=?|-[=>]?|\*=?|/=?|:=|!=?|%=?|\?|>{1,3}=?|<{1,3}=?|\.\.|' r'&&?|\|\|?|\^=?)', Operator), (r'(\.)([ \t]*)([a-z]\w*)', bygroups(Operator, Text, Name.Function)), (r'[A-Z][A-Z0-9_]+', Name.Constant), (r'[A-Z]\w*([@*]|\[[ \t]*\])?', Name.Class), (r'([a-z]\w*(?:~[a-z]\w*)?)((?:[ \t]|\\\n)*)(?=\()', bygroups(Name.Function, Text)), (r'[a-z]\w*', Name.Variable), # : introduces types (r'[:(){}\[\];,]', Punctuation), (r'0x[0-9a-fA-F]+', Number.Hex), (r'0c[0-9]+', Number.Oct), (r'0b[01]+', Number.Bin), (r'[0-9_]\.[0-9_]*(?!\.)', Number.Float), (r'[0-9_]+', Number.Decimal), (r'"(?:\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\"])*"', String.Double), (r"'(?:\\.|\\[0-9]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'@', Punctuation), # pointer dereference (r'\.', Punctuation), # imports or chain operator (r'\\[ \t\n]', Text), (r'[ \t]+', Text), ], 'include': [ (r'[\w/]+', Name), (r',', Punctuation), (r'[ \t]', Text), (r'[;\n]', Text, '#pop'), ], } Pygments-2.3.1/pygments/lexers/iolang.py0000644000175000017500000000356113402534107017376 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.iolang ~~~~~~~~~~~~~~~~~~~~~~ Lexers for the Io language. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number __all__ = ['IoLexer'] class IoLexer(RegexLexer): """ For `Io `_ (a small, prototype-based programming language) source. .. 
versionadded:: 0.10 """ name = 'Io' filenames = ['*.io'] aliases = ['io'] mimetypes = ['text/x-iosrc'] tokens = { 'root': [ (r'\n', Text), (r'\s+', Text), # Comments (r'//(.*?)\n', Comment.Single), (r'#(.*?)\n', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'/\+', Comment.Multiline, 'nestedcomment'), # DoubleQuotedString (r'"(\\\\|\\"|[^"])*"', String), # Operators (r'::=|:=|=|\(|\)|;|,|\*|-|\+|>|<|@|!|/|\||\^|\.|%|&|\[|\]|\{|\}', Operator), # keywords (r'(clone|do|doFile|doString|method|for|if|else|elseif|then)\b', Keyword), # constants (r'(nil|false|true)\b', Name.Constant), # names (r'(Object|list|List|Map|args|Sequence|Coroutine|File)\b', Name.Builtin), (r'[a-zA-Z_]\w*', Name), # numbers (r'(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number.Float), (r'\d+', Number.Integer) ], 'nestedcomment': [ (r'[^+/]+', Comment.Multiline), (r'/\+', Comment.Multiline, '#push'), (r'\+/', Comment.Multiline, '#pop'), (r'[+/]', Comment.Multiline), ] } Pygments-2.3.1/pygments/lexers/_tsql_builtins.py0000644000175000017500000003617413376260540021175 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers._tsql_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ These are manually translated lists from https://msdn.microsoft.com. :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # See https://msdn.microsoft.com/en-us/library/ms174986.aspx. OPERATORS = ( '!<', '!=', '!>', '<', '<=', '<>', '=', '>', '>=', '+', '+=', '-', '-=', '*', '*=', '/', '/=', '%', '%=', '&', '&=', '|', '|=', '^', '^=', '~', '::', ) OPERATOR_WORDS = ( 'all', 'and', 'any', 'between', 'except', 'exists', 'in', 'intersect', 'like', 'not', 'or', 'some', 'union', ) _KEYWORDS_SERVER = ( 'add', 'all', 'alter', 'and', 'any', 'as', 'asc', 'authorization', 'backup', 'begin', 'between', 'break', 'browse', 'bulk', 'by', 'cascade', 'case', 'catch', 'check', 'checkpoint', 'close', 'clustered', 'coalesce', 'collate', 'column', 'commit', 'compute', 'constraint', 'contains', 'containstable', 'continue', 'convert', 'create', 'cross', 'current', 'current_date', 'current_time', 'current_timestamp', 'current_user', 'cursor', 'database', 'dbcc', 'deallocate', 'declare', 'default', 'delete', 'deny', 'desc', 'disk', 'distinct', 'distributed', 'double', 'drop', 'dump', 'else', 'end', 'errlvl', 'escape', 'except', 'exec', 'execute', 'exists', 'exit', 'external', 'fetch', 'file', 'fillfactor', 'for', 'foreign', 'freetext', 'freetexttable', 'from', 'full', 'function', 'goto', 'grant', 'group', 'having', 'holdlock', 'identity', 'identity_insert', 'identitycol', 'if', 'in', 'index', 'inner', 'insert', 'intersect', 'into', 'is', 'join', 'key', 'kill', 'left', 'like', 'lineno', 'load', 'merge', 'national', 'nocheck', 'nonclustered', 'not', 'null', 'nullif', 'of', 'off', 'offsets', 'on', 'open', 'opendatasource', 'openquery', 'openrowset', 'openxml', 'option', 'or', 'order', 'outer', 'over', 'percent', 'pivot', 'plan', 'precision', 'primary', 'print', 'proc', 'procedure', 'public', 'raiserror', 'read', 'readtext', 'reconfigure', 'references', 'replication', 'restore', 'restrict', 'return', 'revert', 'revoke', 'right', 'rollback', 'rowcount', 'rowguidcol', 'rule', 'save', 'schema', 'securityaudit', 'select', 'semantickeyphrasetable', 'semanticsimilaritydetailstable', 'semanticsimilaritytable', 'session_user', 'set', 'setuser', 'shutdown', 'some', 'statistics', 'system_user', 'table', 'tablesample', 'textsize', 'then', 'throw', 'to', 'top', 'tran', 'transaction', 'trigger', 'truncate', 'try', 
'try_convert', 'tsequal', 'union', 'unique', 'unpivot', 'update', 'updatetext', 'use', 'user', 'values', 'varying', 'view', 'waitfor', 'when', 'where', 'while', 'with', 'within', 'writetext', ) _KEYWORDS_FUTURE = ( 'absolute', 'action', 'admin', 'after', 'aggregate', 'alias', 'allocate', 'are', 'array', 'asensitive', 'assertion', 'asymmetric', 'at', 'atomic', 'before', 'binary', 'bit', 'blob', 'boolean', 'both', 'breadth', 'call', 'called', 'cardinality', 'cascaded', 'cast', 'catalog', 'char', 'character', 'class', 'clob', 'collation', 'collect', 'completion', 'condition', 'connect', 'connection', 'constraints', 'constructor', 'corr', 'corresponding', 'covar_pop', 'covar_samp', 'cube', 'cume_dist', 'current_catalog', 'current_default_transform_group', 'current_path', 'current_role', 'current_schema', 'current_transform_group_for_type', 'cycle', 'data', 'date', 'day', 'dec', 'decimal', 'deferrable', 'deferred', 'depth', 'deref', 'describe', 'descriptor', 'destroy', 'destructor', 'deterministic', 'diagnostics', 'dictionary', 'disconnect', 'domain', 'dynamic', 'each', 'element', 'end-exec', 'equals', 'every', 'exception', 'false', 'filter', 'first', 'float', 'found', 'free', 'fulltexttable', 'fusion', 'general', 'get', 'global', 'go', 'grouping', 'hold', 'host', 'hour', 'ignore', 'immediate', 'indicator', 'initialize', 'initially', 'inout', 'input', 'int', 'integer', 'intersection', 'interval', 'isolation', 'iterate', 'language', 'large', 'last', 'lateral', 'leading', 'less', 'level', 'like_regex', 'limit', 'ln', 'local', 'localtime', 'localtimestamp', 'locator', 'map', 'match', 'member', 'method', 'minute', 'mod', 'modifies', 'modify', 'module', 'month', 'multiset', 'names', 'natural', 'nchar', 'nclob', 'new', 'next', 'no', 'none', 'normalize', 'numeric', 'object', 'occurrences_regex', 'old', 'only', 'operation', 'ordinality', 'out', 'output', 'overlay', 'pad', 'parameter', 'parameters', 'partial', 'partition', 'path', 'percent_rank', 'percentile_cont', 'percentile_disc', 'position_regex', 'postfix', 'prefix', 'preorder', 'prepare', 'preserve', 'prior', 'privileges', 'range', 'reads', 'real', 'recursive', 'ref', 'referencing', 'regr_avgx', 'regr_avgy', 'regr_count', 'regr_intercept', 'regr_r2', 'regr_slope', 'regr_sxx', 'regr_sxy', 'regr_syy', 'relative', 'release', 'result', 'returns', 'role', 'rollup', 'routine', 'row', 'rows', 'savepoint', 'scope', 'scroll', 'search', 'second', 'section', 'sensitive', 'sequence', 'session', 'sets', 'similar', 'size', 'smallint', 'space', 'specific', 'specifictype', 'sql', 'sqlexception', 'sqlstate', 'sqlwarning', 'start', 'state', 'statement', 'static', 'stddev_pop', 'stddev_samp', 'structure', 'submultiset', 'substring_regex', 'symmetric', 'system', 'temporary', 'terminate', 'than', 'time', 'timestamp', 'timezone_hour', 'timezone_minute', 'trailing', 'translate_regex', 'translation', 'treat', 'true', 'uescape', 'under', 'unknown', 'unnest', 'usage', 'using', 'value', 'var_pop', 'var_samp', 'varchar', 'variable', 'whenever', 'width_bucket', 'window', 'within', 'without', 'work', 'write', 'xmlagg', 'xmlattributes', 'xmlbinary', 'xmlcast', 'xmlcomment', 'xmlconcat', 'xmldocument', 'xmlelement', 'xmlexists', 'xmlforest', 'xmliterate', 'xmlnamespaces', 'xmlparse', 'xmlpi', 'xmlquery', 'xmlserialize', 'xmltable', 'xmltext', 'xmlvalidate', 'year', 'zone', ) _KEYWORDS_ODBC = ( 'absolute', 'action', 'ada', 'add', 'all', 'allocate', 'alter', 'and', 'any', 'are', 'as', 'asc', 'assertion', 'at', 'authorization', 'avg', 'begin', 'between', 'bit', 'bit_length', 
'both', 'by', 'cascade', 'cascaded', 'case', 'cast', 'catalog', 'char', 'char_length', 'character', 'character_length', 'check', 'close', 'coalesce', 'collate', 'collation', 'column', 'commit', 'connect', 'connection', 'constraint', 'constraints', 'continue', 'convert', 'corresponding', 'count', 'create', 'cross', 'current', 'current_date', 'current_time', 'current_timestamp', 'current_user', 'cursor', 'date', 'day', 'deallocate', 'dec', 'decimal', 'declare', 'default', 'deferrable', 'deferred', 'delete', 'desc', 'describe', 'descriptor', 'diagnostics', 'disconnect', 'distinct', 'domain', 'double', 'drop', 'else', 'end', 'end-exec', 'escape', 'except', 'exception', 'exec', 'execute', 'exists', 'external', 'extract', 'false', 'fetch', 'first', 'float', 'for', 'foreign', 'fortran', 'found', 'from', 'full', 'get', 'global', 'go', 'goto', 'grant', 'group', 'having', 'hour', 'identity', 'immediate', 'in', 'include', 'index', 'indicator', 'initially', 'inner', 'input', 'insensitive', 'insert', 'int', 'integer', 'intersect', 'interval', 'into', 'is', 'isolation', 'join', 'key', 'language', 'last', 'leading', 'left', 'level', 'like', 'local', 'lower', 'match', 'max', 'min', 'minute', 'module', 'month', 'names', 'national', 'natural', 'nchar', 'next', 'no', 'none', 'not', 'null', 'nullif', 'numeric', 'octet_length', 'of', 'on', 'only', 'open', 'option', 'or', 'order', 'outer', 'output', 'overlaps', 'pad', 'partial', 'pascal', 'position', 'precision', 'prepare', 'preserve', 'primary', 'prior', 'privileges', 'procedure', 'public', 'read', 'real', 'references', 'relative', 'restrict', 'revoke', 'right', 'rollback', 'rows', 'schema', 'scroll', 'second', 'section', 'select', 'session', 'session_user', 'set', 'size', 'smallint', 'some', 'space', 'sql', 'sqlca', 'sqlcode', 'sqlerror', 'sqlstate', 'sqlwarning', 'substring', 'sum', 'system_user', 'table', 'temporary', 'then', 'time', 'timestamp', 'timezone_hour', 'timezone_minute', 'to', 'trailing', 'transaction', 'translate', 'translation', 'trim', 'true', 'union', 'unique', 'unknown', 'update', 'upper', 'usage', 'user', 'using', 'value', 'values', 'varchar', 'varying', 'view', 'when', 'whenever', 'where', 'with', 'work', 'write', 'year', 'zone', ) # See https://msdn.microsoft.com/en-us/library/ms189822.aspx. KEYWORDS = sorted(set(_KEYWORDS_FUTURE + _KEYWORDS_ODBC + _KEYWORDS_SERVER)) # See https://msdn.microsoft.com/en-us/library/ms187752.aspx. TYPES = ( 'bigint', 'binary', 'bit', 'char', 'cursor', 'date', 'datetime', 'datetime2', 'datetimeoffset', 'decimal', 'float', 'hierarchyid', 'image', 'int', 'money', 'nchar', 'ntext', 'numeric', 'nvarchar', 'real', 'smalldatetime', 'smallint', 'smallmoney', 'sql_variant', 'table', 'text', 'time', 'timestamp', 'tinyint', 'uniqueidentifier', 'varbinary', 'varchar', 'xml', ) # See https://msdn.microsoft.com/en-us/library/ms174318.aspx. 
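# The tuples in this module (OPERATORS, OPERATOR_WORDS, KEYWORDS, TYPES and
# FUNCTIONS below) are data only; they are turned into token rules by the
# T-SQL lexer in pygments/lexers/sql.py.  A hedged sketch of that usage, with
# illustrative token assignments that may not match TsqlLexer exactly:
#
#     from pygments.lexer import words
#     from pygments.lexers import _tsql_builtins
#
#     (words(_tsql_builtins.OPERATOR_WORDS, suffix=r'\b'), Operator.Word),
#     (words(_tsql_builtins.KEYWORDS, suffix=r'\b'), Keyword),
#     (words(_tsql_builtins.TYPES, suffix=r'\b'), Name.Class),
#     (words(_tsql_builtins.FUNCTIONS, suffix=r'\b'), Name.Function),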
FUNCTIONS = ( '$partition', 'abs', 'acos', 'app_name', 'applock_mode', 'applock_test', 'ascii', 'asin', 'assemblyproperty', 'atan', 'atn2', 'avg', 'binary_checksum', 'cast', 'ceiling', 'certencoded', 'certprivatekey', 'char', 'charindex', 'checksum', 'checksum_agg', 'choose', 'col_length', 'col_name', 'columnproperty', 'compress', 'concat', 'connectionproperty', 'context_info', 'convert', 'cos', 'cot', 'count', 'count_big', 'current_request_id', 'current_timestamp', 'current_transaction_id', 'current_user', 'cursor_status', 'database_principal_id', 'databasepropertyex', 'dateadd', 'datediff', 'datediff_big', 'datefromparts', 'datename', 'datepart', 'datetime2fromparts', 'datetimefromparts', 'datetimeoffsetfromparts', 'day', 'db_id', 'db_name', 'decompress', 'degrees', 'dense_rank', 'difference', 'eomonth', 'error_line', 'error_message', 'error_number', 'error_procedure', 'error_severity', 'error_state', 'exp', 'file_id', 'file_idex', 'file_name', 'filegroup_id', 'filegroup_name', 'filegroupproperty', 'fileproperty', 'floor', 'format', 'formatmessage', 'fulltextcatalogproperty', 'fulltextserviceproperty', 'get_filestream_transaction_context', 'getansinull', 'getdate', 'getutcdate', 'grouping', 'grouping_id', 'has_perms_by_name', 'host_id', 'host_name', 'iif', 'index_col', 'indexkey_property', 'indexproperty', 'is_member', 'is_rolemember', 'is_srvrolemember', 'isdate', 'isjson', 'isnull', 'isnumeric', 'json_modify', 'json_query', 'json_value', 'left', 'len', 'log', 'log10', 'lower', 'ltrim', 'max', 'min', 'min_active_rowversion', 'month', 'nchar', 'newid', 'newsequentialid', 'ntile', 'object_definition', 'object_id', 'object_name', 'object_schema_name', 'objectproperty', 'objectpropertyex', 'opendatasource', 'openjson', 'openquery', 'openrowset', 'openxml', 'original_db_name', 'original_login', 'parse', 'parsename', 'patindex', 'permissions', 'pi', 'power', 'pwdcompare', 'pwdencrypt', 'quotename', 'radians', 'rand', 'rank', 'replace', 'replicate', 'reverse', 'right', 'round', 'row_number', 'rowcount_big', 'rtrim', 'schema_id', 'schema_name', 'scope_identity', 'serverproperty', 'session_context', 'session_user', 'sign', 'sin', 'smalldatetimefromparts', 'soundex', 'sp_helplanguage', 'space', 'sqrt', 'square', 'stats_date', 'stdev', 'stdevp', 'str', 'string_escape', 'string_split', 'stuff', 'substring', 'sum', 'suser_id', 'suser_name', 'suser_sid', 'suser_sname', 'switchoffset', 'sysdatetime', 'sysdatetimeoffset', 'system_user', 'sysutcdatetime', 'tan', 'textptr', 'textvalid', 'timefromparts', 'todatetimeoffset', 'try_cast', 'try_convert', 'try_parse', 'type_id', 'type_name', 'typeproperty', 'unicode', 'upper', 'user_id', 'user_name', 'var', 'varp', 'xact_state', 'year', ) Pygments-2.3.1/pygments/lexers/sql.py0000644000175000017500000007104413402534107016725 0ustar piotrpiotr# -*- coding: utf-8 -*- """ pygments.lexers.sql ~~~~~~~~~~~~~~~~~~~ Lexers for various SQL dialects and related interactive sessions. Postgres specific lexers: `PostgresLexer` A SQL lexer for the PostgreSQL dialect. Differences w.r.t. the SQL lexer are: - keywords and data types list parsed from the PG docs (run the `_postgres_builtins` module to update them); - Content of $-strings parsed using a specific lexer, e.g. the content of a PL/Python function is parsed using the Python lexer; - parse PG specific constructs: E-strings, $-strings, U&-strings, different operators and punctuation. `PlPgsqlLexer` A lexer for the PL/pgSQL language. Adds a few specific construct on top of the PG SQL lexer (such as <