pygments-2.11.2/0000755000175000017500000000000014165547207013336 5ustar  carstencarstenpygments-2.11.2/description.rst0000644000175000017500000000133414165547207016414 0ustar  carstencarstenPygments
~~~~~~~~

Pygments is a syntax highlighting package written in Python.

It is a generic syntax highlighter suitable for use in code hosting, forums,
wikis or other applications that need to prettify source code. Highlights are:

* a wide range of over 500 languages and other text formats is supported
* special attention is paid to details, increasing quality by a fair amount
* support for new languages and formats is added easily
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
  formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library

Copyright 2006-2021 by the Pygments team, see ``AUTHORS``.
Licensed under the BSD, see ``LICENSE`` for details.pygments-2.11.2/AUTHORS0000644000175000017500000002241714165547207014414 0ustar  carstencarstenPygments is written and maintained by Georg Brandl <georg@python.org>. Major developers are Tim Hatch <tim@timhatch.com> and Armin Ronacher <armin.ronacher@active-4.com>. Other contributors, listed alphabetically, are: * Sam Aaron -- Ioke lexer * Jean Abou Samra -- LilyPond lexer * João Abecasis -- JSLT lexer * Ali Afshar -- image formatter * Thomas Aglassinger -- Easytrieve, JCL, Rexx, Transact-SQL and VBScript lexers * Muthiah Annamalai -- Ezhil lexer * Kumar Appaiah -- Debian control lexer * Andreas Amann -- AppleScript lexer * Timothy Armstrong -- Dart lexer fixes * Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers * Jeremy Ashkenas -- CoffeeScript lexer * José Joaquín Atria -- Praat lexer * Stefan Matthias Aust -- Smalltalk lexer * Lucas Bajolet -- Nit lexer * Ben Bangert -- Mako lexers * Max Battcher -- Darcs patch lexer * Thomas Baruchel -- APL lexer * Tim Baumann -- (Literate) Agda lexer * Paul Baumgart, 280 North, Inc. -- Objective-J lexer * Michael Bayer -- Myghty lexers * Thomas Beale -- Archetype lexers * John Benediktsson -- Factor lexer * Trevor Bergeron -- mIRC formatter * Vincent Bernat -- LessCSS lexer * Christopher Bertels -- Fancy lexer * Sébastien Bigaret -- QVT Operational lexer * Jarrett Billingsley -- MiniD lexer * Adam Blinkinsop -- Haskell, Redcode lexers * Stéphane Blondon -- Procfile, SGF and Sieve lexers * Frits van Bommel -- assembler lexers * Pierre Bourdon -- bugfixes * Martijn Braam -- Kernel log lexer, BARE lexer * Matthias Bussonnier -- ANSI style handling for terminal-256 formatter * chebee7i -- Python traceback lexer improvements * Hiram Chirino -- Scaml and Jade lexers * Mauricio Caceres -- SAS and Stata lexers. * Ian Cooper -- VGL lexer * David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers * Leaf Corcoran -- MoonScript lexer * Christopher Creutzig -- MuPAD lexer * Daniël W. Crompton -- Pike lexer * Pete Curry -- bugfixes * Bryan Davis -- EBNF lexer * Bruno Deferrari -- Shen lexer * Luke Drummond -- Meson lexer * Giedrius Dubinskas -- HTML formatter improvements * Owen Durni -- Haxe lexer * Alexander Dutton, Oxford University Computing Services -- SPARQL lexer * James Edwards -- Terraform lexer * Nick Efford -- Python 3 lexer * Sven Efftinge -- Xtend lexer * Artem Egorkine -- terminal256 formatter * Matthew Fernandez -- CAmkES lexer * Paweł Fertyk -- GDScript lexer, HTML formatter improvements * Michael Ficarra -- CPSA lexer * James H. Fisher -- PostScript lexer * William S.
Fulton -- SWIG lexer * Carlos Galdino -- Elixir and Elixir Console lexers * Michael Galloy -- IDL lexer * Naveen Garg -- Autohotkey lexer * Simon Garnotel -- FreeFem++ lexer * Laurent Gautier -- R/S lexer * Alex Gaynor -- PyPy log lexer * Richard Gerkin -- Igor Pro lexer * Alain Gilbert -- TypeScript lexer * Alex Gilding -- BlitzBasic lexer * GitHub, Inc -- DASM16, Augeas, TOML, and Slash lexers * Bertrand Goetzmann -- Groovy lexer * Krzysiek Goj -- Scala lexer * Rostyslav Golda -- FloScript lexer * Andrey Golovizin -- BibTeX lexers * Matt Good -- Genshi, Cheetah lexers * Michał Górny -- vim modeline support * Alex Gosse -- TrafficScript lexer * Patrick Gotthardt -- PHP namespaces support * Hubert Gruniaux -- C and C++ lexer improvements * Olivier Guibe -- Asymptote lexer * Phil Hagelberg -- Fennel lexer * Florian Hahn -- Boogie lexer * Martin Harriman -- SNOBOL lexer * Matthew Harrison -- SVG formatter * Steven Hazel -- Tcl lexer * Dan Michael Heggø -- Turtle lexer * Aslak Hellesøy -- Gherkin lexer * Greg Hendershott -- Racket lexer * Justin Hendrick -- ParaSail lexer * Jordi Gutiérrez Hermoso -- Octave lexer * David Hess, Fish Software, Inc. -- Objective-J lexer * Ken Hilton -- Typographic Number Theory and Arrow lexers * Varun Hiremath -- Debian control lexer * Rob Hoelz -- Perl 6 lexer * Doug Hogan -- Mscgen lexer * Ben Hollis -- Mason lexer * Max Horn -- GAP lexer * Fred Hornsey -- OMG IDL Lexer * Alastair Houghton -- Lexer inheritance facility * Tim Howard -- BlitzMax lexer * Dustin Howett -- Logos lexer * Ivan Inozemtsev -- Fantom lexer * Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session, MSDOS session, BC, WDiff * Brian R. Jackson -- Tea lexer * Christian Jann -- ShellSession lexer * Dennis Kaarsemaker -- sources.list lexer * Dmitri Kabak -- Inferno Limbo lexer * Igor Kalnitsky -- vhdl lexer * Colin Kennedy - USD lexer * Alexander Kit -- MaskJS lexer * Pekka Klärck -- Robot Framework lexer * Gerwin Klein -- Isabelle lexer * Eric Knibbe -- Lasso lexer * Stepan Koltsov -- Clay lexer * Oliver Kopp - Friendly grayscale style * Adam Koprowski -- Opa lexer * Benjamin Kowarsch -- Modula-2 lexer * Domen Kožar -- Nix lexer * Oleh Krekel -- Emacs Lisp lexer * Alexander Kriegisch -- Kconfig and AspectJ lexers * Marek Kubica -- Scheme lexer * Jochen Kupperschmidt -- Markdown processor * Gerd Kurzbach -- Modelica lexer * Jon Larimer, Google Inc. -- Smali lexer * Olov Lassus -- Dart lexer * Matt Layman -- TAP lexer * Kristian Lyngstøl -- Varnish lexers * Sylvestre Ledru -- Scilab lexer * Chee Sing Lee -- Flatline lexer * Mark Lee -- Vala lexer * Valentin Lorentz -- C++ lexer improvements * Ben Mabey -- Gherkin lexer * Angus MacArthur -- QML lexer * Louis Mandel -- X10 lexer * Louis Marchand -- Eiffel lexer * Simone Margaritelli -- Hybris lexer * Kirk McDonald -- D lexer * Gordon McGregor -- SystemVerilog lexer * Stephen McKamey -- Duel/JBST lexer * Brian McKenna -- F# lexer * Charles McLaughlin -- Puppet lexer * Kurt McKee -- Tera Term macro lexer, PostgreSQL updates, MySQL overhaul * Joe Eli McIlvain -- Savi lexer * Lukas Meuser -- BBCode formatter, Lua lexer * Cat Miller -- Pig lexer * Paul Miller -- LiveScript lexer * Hong Minhee -- HTTP lexer * Michael Mior -- Awk lexer * Bruce Mitchener -- Dylan lexer rewrite * Reuben Morais -- SourcePawn lexer * Jon Morton -- Rust lexer * Paulo Moura -- Logtalk lexer * Mher Movsisyan -- DTD lexer * Dejan Muhamedagic -- Crmsh lexer * Ana Nelson -- Ragel, ANTLR, R console lexers * Kurt Neufeld -- Markdown lexer * Nam T. 
Nguyen -- Monokai style * Jesper Noehr -- HTML formatter "anchorlinenos" * Mike Nolta -- Julia lexer * Avery Nortonsmith -- Pointless lexer * Jonas Obrist -- BBCode lexer * Edward O'Callaghan -- Cryptol lexer * David Oliva -- Rebol lexer * Pat Pannuto -- nesC lexer * Jon Parise -- Protocol buffers and Thrift lexers * Benjamin Peterson -- Test suite refactoring * Ronny Pfannschmidt -- BBCode lexer * Dominik Picheta -- Nimrod lexer * Andrew Pinkham -- RTF Formatter Refactoring * Clément Prévost -- UrbiScript lexer * Tanner Prynn -- cmdline -x option and loading lexers from files * Oleh Prypin -- Crystal lexer (based on Ruby lexer) * Xidorn Quan -- Web IDL lexer * Elias Rabel -- Fortran fixed form lexer * raichoo -- Idris lexer * Daniel Ramirez -- GDScript lexer * Kashif Rasul -- CUDA lexer * Nathan Reed -- HLSL lexer * Justin Reidy -- MXML lexer * Norman Richards -- JSON lexer * Corey Richardson -- Rust lexer updates * Lubomir Rintel -- GoodData MAQL and CL lexers * Andre Roberge -- Tango style * Georg Rollinger -- HSAIL lexer * Michiel Roos -- TypoScript lexer * Konrad Rudolph -- LaTeX formatter enhancements * Mario Ruggier -- Evoque lexers * Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements * Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers * Matteo Sasso -- Common Lisp lexer * Joe Schafer -- Ada lexer * Max Schillinger -- TiddlyWiki5 lexer * Ken Schutte -- Matlab lexers * René Schwaiger -- Rainbow Dash style * Sebastian Schweizer -- Whiley lexer * Tassilo Schweyer -- Io, MOOCode lexers * Pablo Seminario -- PromQL lexer * Ted Shaw -- AutoIt lexer * Joerg Sieker -- ABAP lexer * Robert Simmons -- Standard ML lexer * Kirill Simonov -- YAML lexer * Corbin Simpson -- Monte lexer * Ville Skyttä -- ASCII armored lexer * Alexander Smishlajev -- Visual FoxPro lexer * Steve Spigarelli -- XQuery lexer * Jerome St-Louis -- eC lexer * Camil Staps -- Clean and NuSMV lexers; Solarized style * James Strachan -- Kotlin lexer * Tom Stuart -- Treetop lexer * Colin Sullivan -- SuperCollider lexer * Ben Swift -- Extempore lexer * tatt61880 -- Kuin lexer * Edoardo Tenani -- Arduino lexer * Tiberius Teng -- default style overhaul * Jeremy Thurgood -- Erlang, Squid config lexers * Brian Tiffin -- OpenCOBOL lexer * Bob Tolbert -- Hy lexer * Matthias Trute -- Forth lexer * Tuoa Spi T4 -- Bdd lexer * Erick Tryzelaar -- Felix lexer * Alexander Udalov -- Kotlin lexer improvements * Thomas Van Doren -- Chapel lexer * Daniele Varrazzo -- PostgreSQL lexers * Abe Voelker -- OpenEdge ABL lexer * Pepijn de Vos -- HTML formatter CTags support * Matthias Vallentin -- Bro lexer * Benoît Vinot -- AMPL lexer * Linh Vu Hong -- RSL lexer * Immanuel Washington -- Smithy lexer * Nathan Weizenbaum -- Haml and Sass lexers * Nathan Whetsell -- Csound lexers * Dietmar Winkler -- Modelica lexer * Nils Winter -- Smalltalk lexer * Davy Wybiral -- Clojure lexer * Whitney Young -- ObjectiveC lexer * Diego Zamboni -- CFengine3 lexer * Enrique Zamudio -- Ceylon lexer * Alex Zimin -- Nemerle lexer * Rob Zimmerman -- Kal lexer * Vincent Zurczak -- Roboconf lexer * Hubert Gruniaux -- C and C++ lexer improvements * Thomas Symalla -- AMDGPU Lexer * 15b3 -- Image Formatter improvements * Fabian Neumann -- CDDL lexer * Thomas Duboucher -- CDDL lexer * Philipp Imhof -- Pango Markup formatter * Thomas Voss -- Sed lexer * Martin Fischer -- WCAG contrast testing * Marc Auberer -- Spice lexer Many thanks for all contributions! 
pygments-2.11.2/requirements.txt0000644000175000017500000000012414165547207016617 0ustar  carstencarstenpytest-cov
pytest-randomly
pytest>=6.0
pyflakes
pylint
tox
wcag-contrast-ratio
lxml
pygments-2.11.2/Contributing.md0000644000175000017500000000476314165547207016331 0ustar  carstencarstenLicensing
=========

The code is distributed under the BSD 2-clause license. Contributors making pull
requests must agree that they are able and willing to put their contributions
under that license.

Contribution checklist
======================

* Check the documentation for how to write
  [a new lexer](https://pygments.org/docs/lexerdevelopment/),
  [a new formatter](https://pygments.org/docs/formatterdevelopment/) or
  [a new filter](https://pygments.org/docs/filterdevelopment/)

* When writing rules, try to merge simple rules. For instance, combine:

  ```python
  _PUNCTUATION = [
      (r"\(", token.Punctuation),
      (r"\)", token.Punctuation),
      (r"\[", token.Punctuation),
      (r"\]", token.Punctuation),
      ("{", token.Punctuation),
      ("}", token.Punctuation),
  ]
  ```

  into:

  ```python
  (r"[\(\)\[\]{}]", token.Punctuation)
  ```

* Be careful with ``.*``. This matches greedily as much as it can. For instance,
  a rule like ``@.*@`` will match the whole string ``@first@ second @third@``,
  instead of matching ``@first@`` and ``@third@``. You can use ``@.*?@`` in this
  case to stop early. The ``?`` tries to match _as few times_ as possible.

* Don't add imports of your lexer anywhere in the codebase. (In case you're
  curious about ``compiled.py`` -- this file exists for backwards compatibility
  reasons.)

* Use the standard importing convention: ``from token import Punctuation``

* For test cases that assert on the tokens produced by a lexer, use the
  following tools:

  * You can use the ``testcase`` formatter to produce a piece of code that
    can be pasted into a unittest file:
    ``python -m pygments -l lua -f testcase <<< "local a = 5"``

  * Most snippets should instead be put as a sample file under
    ``tests/snippets/<lexer_alias>/*.txt``. These files are automatically
    picked up as individual tests, asserting that the input produces the
    expected tokens.

    To add a new test, create a file with just your code snippet under a
    subdirectory based on your lexer's main alias. Then run
    ``pytest --update-goldens <filename.txt>`` to auto-populate the currently
    expected tokens. Check that they look good and check in the file.

    Also run the same command whenever you need to update the test if the
    actual produced tokens change (assuming the change is expected).

  * Large test files should go in ``tests/examplefiles``. This works similarly
    to ``snippets``, but the token output is stored in a separate file. Output
    can also be regenerated with ``--update-goldens``.
pygments-2.11.2/setup.py0000755000175000017500000000007414165547207015054 0ustar  carstencarsten#!/usr/bin/env python
from setuptools import setup

setup()
pygments-2.11.2/external/0000755000175000017500000000000014165547207015160 5ustar  carstencarstenpygments-2.11.2/external/autopygmentize0000755000175000017500000000662614165547207020174 0ustar  carstencarsten#!/bin/bash
# Best effort auto-pygmentization with transparent decompression
# by Reuben Thomas 2008-2021
# This program is in the public domain.
# Strategy: first see if pygmentize can find a lexer; if not, ask file; if that finds nothing, fail
# Set the environment variable PYGMENTIZE_OPTS or pass options before the file path to configure pygments.
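# Example invocations (an illustrative sketch; the style name is arbitrary):
#   autopygmentize some_file.py
#   autopygmentize -O style=default some_file.py
#   PYGMENTIZE_OPTS="-O style=default" autopygmentize some_file.py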
# This program can be used as a .lessfilter for the less pager to auto-color less's output file="${!#}" # last argument options=${@:1:$(($#-1))} # handle others args as options to pass to pygmentize file_common_opts="--brief --dereference" case $(file --mime-type --uncompress $file_common_opts "$file") in application/xml|image/svg+xml) lexer=xml;; application/javascript) lexer=javascript;; application/json) lexer=json;; text/html) lexer=html;; text/troff) lexer=nroff;; text/x-asm) lexer=nasm;; text/x-awk) lexer=awk;; text/x-c) lexer=c;; text/x-c++) lexer=cpp;; text/x-crystal) lexer=crystal;; text/x-diff) lexer=diff;; text/x-fortran) lexer=fortran;; text/x-gawk) lexer=gawk;; text/x-java) lexer=java;; text/x-lisp) lexer=common-lisp;; text/x-lua) lexer=lua;; text/x-makefile) lexer=make;; text/x-msdos-batch) lexer=bat;; text/x-nawk) lexer=nawk;; text/x-pascal) lexer=pascal;; text/x-perl) lexer=perl;; text/x-php) lexer=php;; text/x-po) lexer=po;; text/x-python) lexer=python;; text/x-ruby) lexer=ruby;; text/x-shellscript) lexer=sh;; text/x-tcl) lexer=tcl;; text/x-tex|text/x-texinfo) lexer=latex;; # FIXME: texinfo really needs its own lexer # Types that file outputs which pygmentize didn't support as of file 5.20, pygments 2.0 # text/calendar # text/inf # text/PGP # text/rtf # text/texmacs # text/vnd.graphviz # text/x-bcpl # text/x-info # text/x-m4 # text/x-vcard # text/x-xmcd text/plain) # special filenames. TODO: insert more case $(basename "$file") in .zshrc) lexer=sh;; esac # pygmentize -N is much cheaper than file, but makes some bad guesses (e.g. # it guesses ".pl" is Prolog, not Perl) lexer=$(pygmentize -N "$file") ;; esac # Find a concatenator for compressed files concat=cat case $(file $file_common_opts --mime-type "$file") in application/gzip) concat=zcat;; application/x-bzip2) concat=bzcat;; application/x-xz) concat=xzcat;; esac # Find a suitable reader, preceded by a hex dump for binary files, # or fmt for text with very long lines prereader="" reader=cat encoding=$(file --mime-encoding --uncompress $file_common_opts "$file") # FIXME: need a way to switch between hex and text view, as file often # misdiagnoses files when they contain a few control characters # if [[ $encoding == "binary" ]]; then # prereader="od -x" # POSIX fallback # if [[ -n $(which hd) ]]; then # prereader=hd # preferred # fi # lexer=hexdump # encoding=latin1 #el # FIXME: Using fmt does not work well for system logs # if [[ "$lexer" == "text" ]]; then # if file "$file" | grep -ql "text, with very long lines"; then # reader=fmt # fi # fi if [[ "$lexer" != "text" ]]; then reader="pygmentize -O inencoding=$encoding $PYGMENTIZE_OPTS $options -l $lexer" fi # Run the reader if [[ -n "$prereader" ]]; then exec $concat "$file" | $prereader | $reader else exec $concat "$file" | $reader fi pygments-2.11.2/external/markdown-processor.py0000644000175000017500000000374114165547207021376 0ustar carstencarsten""" The Pygments Markdown Preprocessor ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This fragment is a Markdown_ preprocessor that renders source code to HTML via Pygments. To use it, invoke Markdown like so:: import markdown html = markdown.markdown(someText, extensions=[CodeBlockExtension()]) This uses CSS classes by default, so use ``pygmentize -S -f html > pygments.css`` to create a stylesheet to be added to the website. You can then highlight source code in your markdown markup:: [sourcecode:lexer] some code [/sourcecode] .. 
_Markdown: https://pypi.python.org/pypi/Markdown

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

# Options
# ~~~~~~~

# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False

import re

from markdown.preprocessors import Preprocessor
from markdown.extensions import Extension

from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name, TextLexer


class CodeBlockPreprocessor(Preprocessor):

    pattern = re.compile(r'\[sourcecode:(.+?)\](.+?)\[/sourcecode\]', re.S)

    formatter = HtmlFormatter(noclasses=INLINESTYLES)

    def run(self, lines):
        def repl(m):
            try:
                lexer = get_lexer_by_name(m.group(1))
            except ValueError:
                lexer = TextLexer()
            code = highlight(m.group(2), lexer, self.formatter)
            code = code.replace('\n\n', '\n&nbsp;\n').replace('\n', '<br />\n')
            return '\n\n<div class="code">%s</div>
\n\n' % code joined_lines = "\n".join(lines) joined_lines = self.pattern.sub(repl, joined_lines) return joined_lines.split("\n") class CodeBlockExtension(Extension): def extendMarkdown(self, md, md_globals): md.preprocessors.add('CodeBlockPreprocessor', CodeBlockPreprocessor(), '_begin') pygments-2.11.2/external/rst-directive.py0000644000175000017500000000501314165547207020315 0ustar carstencarsten""" The Pygments reStructuredText directive ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This fragment is a Docutils_ 0.5 directive that renders source code (to HTML only, currently) via Pygments. To use it, adjust the options below and copy the code into a module that you import on initialization. The code then automatically registers a ``sourcecode`` directive that you can use instead of normal code blocks like this:: .. sourcecode:: python My code goes here. If you want to have different code styles, e.g. one with line numbers and one without, add formatters with their names in the VARIANTS dict below. You can invoke them instead of the DEFAULT one by using a directive option:: .. sourcecode:: python :linenos: My code goes here. Look at the `directive documentation`_ to get all the gory details. .. _Docutils: https://docutils.sourceforge.io/ .. _directive documentation: https://docutils.sourceforge.io/docs/howto/rst-directives.html :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # Options # ~~~~~~~ # Set to True if you want inline CSS styles instead of classes INLINESTYLES = False from pygments.formatters import HtmlFormatter # The default formatter DEFAULT = HtmlFormatter(noclasses=INLINESTYLES) # Add name -> formatter pairs for every variant you want to use VARIANTS = { # 'linenos': HtmlFormatter(noclasses=INLINESTYLES, linenos=True), } from docutils import nodes from docutils.parsers.rst import directives, Directive from pygments import highlight from pygments.lexers import get_lexer_by_name, TextLexer class Pygments(Directive): """ Source code syntax hightlighting. """ required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {key: directives.flag for key in VARIANTS} has_content = True def run(self): self.assert_has_content() try: lexer = get_lexer_by_name(self.arguments[0]) except ValueError: # no lexer found - use the text one instead of an exception lexer = TextLexer() # take an arbitrary option if more than one is given formatter = self.options and VARIANTS[list(self.options)[0]] or DEFAULT parsed = highlight('\n'.join(self.content), lexer, formatter) return [nodes.raw('', parsed, format='html')] directives.register_directive('sourcecode', Pygments) pygments-2.11.2/external/lilypond-builtins-generator.ly0000644000175000017500000002166614165547207023206 0ustar carstencarsten%% Autogenerate a list of LilyPond keywords \version "2.23.6" #(use-modules (ice-9 receive) (ice-9 regex)) #(define port (open-output-file "../pygments/lexers/_lilypond_builtins.py")) #(define output-preamble "\"\"\" pygments.lexers._lilypond_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ LilyPond builtins. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. \"\"\" # Contents generated by the script lilypond-builtins-generator.ly # found in the external/ directory of the source tree. 
") #(format port "~a" output-preamble) #(define (dump-py-list name vals) (let* ((string-vals (map symbol->string vals)) (fixed-vals (filter-map (lambda (str) ; To avoid conflicts with Scheme builtins, ; a leading backslash is prepended to \<, ; \= and a few others. The lexer finds it ; itself, so remove it here. (cond ((equal? str "\\\\") #f) ((string-startswith str "\\") (string-drop str 1)) (else str))) string-vals)) (sorted-vals ; reproducibility ; Avoid duplicates (e.g., identical pitches ; in different languages) (uniq-list (sort fixed-vals stringsymbol (map car supported-clefs))) #(dump-py-list 'clefs all-clefs) %% SCALES #(define all-scales '(major minor ionian locrian aeolian mixolydian lydian phrygian dorian)) #(dump-py-list 'scales all-scales) %% REPEAT TYPES #(define all-repeat-types '(volta percent unfold segno)) #(dump-py-list 'repeat_types all-repeat-types) %% UNITS #(define all-units '(mm cm in pt staff-space)) #(dump-py-list 'units all-units) %% CHORD MODIFIERS #(define all-chord-modifiers '(m dim aug maj)) #(dump-py-list 'chord_modifiers all-chord-modifiers) %% PITCHES #(define all-pitch-language-names (map car language-pitch-names)) #(dump-py-list 'pitch_language_names all-pitch-language-names) #(define all-pitch-names (append ; We highlight rests just like pitches. '(r R) (map car (append-map cdr language-pitch-names)) ; Drum note names. (map car drumPitchNames))) #(dump-py-list 'pitches all-pitch-names) %% MUSIC FUNCTIONS AND SHORTCUTS % View these as music functions. #(define extra-music-functions '(set unset override revert tweak once undo temporary repeat alternative tempo change)) #(let* ((module (current-module)) (module-alist (ly:module->alist module)) (all-music-functions (filter (lambda (entry) (ly:music-function? (cdr entry))) module-alist)) (all-predefined-music-objects (filter (lambda (entry) (ly:music? (cdr entry))) module-alist))) (receive (articulations non-articulations) (partition (lambda (entry) (ly:event? (cdr entry))) all-predefined-music-objects) (receive (dynamics non-dynamic-articulations) (partition (lambda (entry) (any (lambda (type) (music-is-of-type? (cdr entry) type)) '(dynamic-event crescendo-event decrescendo-event))) articulations) (dump-py-list 'music_functions (append extra-music-functions (map car all-music-functions))) (dump-py-list 'dynamics (map car dynamics)) (dump-py-list 'articulations (map car non-dynamic-articulations)) (dump-py-list 'music_commands (map car non-articulations))))) %% MARKUP COMMANDS #(let* ((markup-name-regexp (make-regexp "(.*)-markup(-list)?")) (modules (cons (current-module) (map resolve-module '((lily) (lily accreg))))) (alist (apply append (map ly:module->alist modules))) (markup-commands (filter (lambda (entry) (or (markup-function? (cdr entry)) (markup-list-function? (cdr entry)))) alist)) (markup-command-names (map (lambda (entry) (let* ((string-name (symbol->string (car entry))) (match (regexp-exec markup-name-regexp string-name))) (string->symbol (match:substring match 1)))) markup-commands)) (markup-words (append '(markup markuplist) markup-command-names))) (dump-py-list 'markup_commands markup-words)) %% GROBS #(let ((grob-names (map car all-grob-descriptions))) (dump-py-list 'grobs grob-names)) %% CONTEXTS #(let* ((layout-module (ly:output-def-scope $defaultlayout)) (layout-alist (ly:module->alist layout-module)) (all-context-defs (filter (lambda (entry) (ly:context-def? 
(cdr entry))) layout-alist)) (context-def-names (map car all-context-defs))) (dump-py-list 'contexts context-def-names)) %% TRANSLATORS #(let* ((all-translators (ly:get-all-translators)) (translator-names (map ly:translator-name all-translators))) (dump-py-list 'translators translator-names)) %% SCHEME FUNCTIONS #(let* ((module (resolve-module '(lily))) (module-alist (ly:module->alist module)) (all-functions (filter (lambda (entry) (or (procedure? (cdr entry)) (macro? (cdr entry)))) module-alist)) (all-function-names (map car all-functions))) (dump-py-list 'scheme_functions all-function-names)) %% PROPERTIES #(dump-py-list 'context_properties all-translation-properties) #(dump-py-list 'grob_properties all-backend-properties) %% PAPER VARIABLES % Reference: https://lilypond.org/doc/v2.22/Documentation/notation/page-layout #(define all-paper-variables '(paper-height top-margin bottom-margin ragged-bottom ragged-last-bottom markup-system-spacing score-markup-spacing score-system-spacing system-system-spacing markup-markup-spacing last-bottom-spacing top-system-spacing top-markup-spacing paper-width line-width left-margin right-margin check-consistency ragged-right ragged-last two-sided inner-margin outer-margin binding-offset horizontal-shift indent short-indent max-systems-per-page min-systems-per-page systems-per-page system-count page-breaking page-breaking-system-system-spacing page-count blank-page-penalty blank-last-page-penalty auto-first-page-number first-page-number print-first-page-number page-number-type page-spacing-weight print-all-headers system-separator-markup footnote-separator-markup ; Let's view these four as \paper variables. basic-distance minimum-distance padding stretchability)) #(dump-py-list 'paper_variables all-paper-variables) %% HEADER VARIABLES % Reference: https://lilypond.org/doc/v2.22/Documentation/notation/creating-titles-headers-and-footers.html#default-layout-of-bookpart-and-score-titles #(define all-header-variables '(dedication title subtitle subsubtitle instrument poet composer meter arranger tagline copyright piece opus ; The following are used in LSR snippets and regression tests. lsrtags doctitle texidoc)) #(dump-py-list 'header_variables all-header-variables) #(close-port port) pygments-2.11.2/external/pygments.bashcomp0000644000175000017500000000204714165547207020547 0ustar carstencarsten#!bash # # Bash completion support for Pygments (the 'pygmentize' command). # _pygmentize() { local cur prev COMPREPLY=() cur=`_get_cword` prev=${COMP_WORDS[COMP_CWORD-1]} case "$prev" in -f) FORMATTERS=`pygmentize -L formatters | grep '* ' | cut -c3- | sed -e 's/,//g' -e 's/:$//'` COMPREPLY=( $( compgen -W '$FORMATTERS' -- "$cur" ) ) return 0 ;; -l) LEXERS=`pygmentize -L lexers | grep '* ' | cut -c3- | sed -e 's/,//g' -e 's/:$//'` COMPREPLY=( $( compgen -W '$LEXERS' -- "$cur" ) ) return 0 ;; -S) STYLES=`pygmentize -L styles | grep '* ' | cut -c3- | sed s/:$//` COMPREPLY=( $( compgen -W '$STYLES' -- "$cur" ) ) return 0 ;; esac if [[ "$cur" == -* ]]; then COMPREPLY=( $( compgen -W '-f -l -S -L -g -O -P -F \ -N -H -h -V -o' -- "$cur" ) ) return 0 fi } complete -F _pygmentize -o default pygmentize pygments-2.11.2/external/lasso-builtins-generator-9.lasso0000755000175000017500000000756614165547207023346 0ustar carstencarsten#!/usr/bin/lasso9 /* Builtins Generator for Lasso 9 This is the shell script that was used to extract Lasso 9's built-in keywords and generate most of the _lasso_builtins.py file. 
When run, it creates a file containing the types, traits, methods, and members of the currently-installed version of Lasso 9. A list of tags in Lasso 8 can be generated with this code: insert(string_removeleading(#i, -pattern='_global_')); /iterate; #l8tags->sort; iterate(#l8tags, local('i')); string_lowercase(#i)+"
"; /iterate; */ output("This output statement is required for a complete list of methods.") local(f) = file("_lasso_builtins-9.py") #f->doWithClose => { #f->openTruncate #f->writeString('# -*- coding: utf-8 -*- """ pygments.lexers._lasso_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Built-in Lasso types, traits, methods, and members. :copyright: Copyright 2006-'+date->year+' by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ ') // Load and register contents of $LASSO9_MASTER_HOME/LassoModules/ database_initialize // Load all of the libraries from builtins and lassoserver // This forces all possible available types and methods to be registered local(srcs = (: dir(sys_masterHomePath + '/LassoLibraries/builtins/')->eachFilePath, dir(sys_masterHomePath + '/LassoLibraries/lassoserver/')->eachFilePath ) ) with topLevelDir in delve(#srcs) where not #topLevelDir->lastComponent->beginsWith('.') do protect => { handle_error => { stdoutnl('Unable to load: ' + #topLevelDir + ' ' + error_msg) } library_thread_loader->loadLibrary(#topLevelDir) stdoutnl('Loaded: ' + #topLevelDir) } email_initialize log_initialize session_initialize local( typesList = set(), traitsList = set(), unboundMethodsList = set(), memberMethodsList = set() ) // types with type in sys_listTypes where not #type->asString->endsWith('$') // skip threads do { #typesList->insert(#type) } // traits with trait in sys_listTraits where not #trait->asString->beginsWith('$') // skip combined traits do { #traitsList->insert(#trait) } // member methods with type in #typesList do { with method in #type->getType->listMethods where #method->typeName == #type // skip inherited methods let name = #method->methodName where not #name->asString->endsWith('=') // skip setter methods where #name->asString->isAlpha(1) // skip unpublished methods do { #memberMethodsList->insert(#name) } } with trait in #traitsList do { with method in #trait->getType->provides where #method->typeName == #trait // skip inherited methods let name = #method->methodName where not #name->asString->endsWith('=') // skip setter methods where #name->asString->isAlpha(1) // skip unpublished methods do { #memberMethodsList->insert(#name) } } // unbound methods with method in sys_listUnboundMethods let name = #method->methodName where not #name->asString->endsWith('=') // skip setter methods where #name->asString->isAlpha(1) // skip unpublished methods where #typesList !>> #name where #traitsList !>> #name do { #unboundMethodsList->insert(#name) } // write to file with i in (: pair(#typesList, "BUILTINS = { 'Types': ( "), pair(#traitsList, " ), 'Traits': ( "), pair(#unboundMethodsList, " ), 'Unbound Methods': ( "), pair(#memberMethodsList, " ) } MEMBERS = { 'Member Methods': ( ") ) do { #f->writeString(#i->second) with t in (#i->first) let ts = #t->asString order by #ts do { #f->writeString(" '"+#ts->lowercase&asString+"',\n") } } #f->writeString(" ) } ") } pygments-2.11.2/external/moin-parser.py0000644000175000017500000000677014165547207020000 0ustar carstencarsten""" The Pygments MoinMoin Parser ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is a MoinMoin parser plugin that renders source code to HTML via Pygments; you need Pygments 0.7 or newer for this parser to work. To use it, set the options below to match your setup and put this file in the data/plugin/parser subdirectory of your Moin instance, and give it the name that the parser directive should have. 
For example, if you name the file ``code.py``, you can get a highlighted Python code sample with this Wiki markup:: {{{ #!code python [...] }}} Additionally, if you set ATTACHMENTS below to True, Pygments will also be called for all attachments for whose filenames there is no other parser registered. You are responsible for including CSS rules that will map the Pygments CSS classes to colors. You can output a stylesheet file with `pygmentize`, put it into the `htdocs` directory of your Moin instance and then include it in the `stylesheets` configuration option in the Moin config, e.g.:: stylesheets = [('screen', '/htdocs/pygments.css')] If you do not want to do that and are willing to accept larger HTML output, you can set the INLINESTYLES option below to True. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # Options # ~~~~~~~ # Set to True if you want to highlight attachments, in addition to # {{{ }}} blocks. ATTACHMENTS = True # Set to True if you want inline CSS styles instead of classes INLINESTYLES = False import sys from pygments import highlight from pygments.lexers import get_lexer_by_name, get_lexer_for_filename, TextLexer from pygments.formatters import HtmlFormatter from pygments.util import ClassNotFound # wrap lines in s so that the Moin-generated line numbers work class MoinHtmlFormatter(HtmlFormatter): def wrap(self, source, outfile): for line in source: yield 1, '' + line[1] + '' htmlformatter = MoinHtmlFormatter(noclasses=INLINESTYLES) textlexer = TextLexer() codeid = [0] class Parser: """ MoinMoin Pygments parser. """ if ATTACHMENTS: extensions = '*' else: extensions = [] Dependencies = [] def __init__(self, raw, request, **kw): self.raw = raw self.req = request if "format_args" in kw: # called from a {{{ }}} block try: self.lexer = get_lexer_by_name(kw['format_args'].strip()) except ClassNotFound: self.lexer = textlexer return if "filename" in kw: # called for an attachment filename = kw['filename'] else: # called for an attachment by an older moin # HACK: find out the filename by peeking into the execution # frame which might not always work try: frame = sys._getframe(1) filename = frame.f_locals['filename'] except: filename = 'x.txt' try: self.lexer = get_lexer_for_filename(filename) except ClassNotFound: self.lexer = textlexer def format(self, formatter): codeid[0] += 1 id = "pygments_%s" % codeid[0] w = self.req.write w(formatter.code_area(1, id, start=1, step=1)) w(formatter.rawHTML(highlight(self.raw, self.lexer, htmlformatter))) w(formatter.code_area(0, id)) pygments-2.11.2/.github/0000755000175000017500000000000014165547207014676 5ustar carstencarstenpygments-2.11.2/.github/actions/0000755000175000017500000000000014165547207016336 5ustar carstencarstenpygments-2.11.2/.github/actions/pyodide-package/0000755000175000017500000000000014165547207021364 5ustar carstencarstenpygments-2.11.2/.github/actions/pyodide-package/action.yml0000644000175000017500000000024414165547207023364 0ustar carstencarstenname: 'Update Pyodide package' description: 'Update the WASM compiled Pygments with Pyodide' runs: using: 'docker' image: 'birkenfeld/pyodide-pygments-builder' pygments-2.11.2/.github/workflows/0000755000175000017500000000000014165547207016733 5ustar carstencarstenpygments-2.11.2/.github/workflows/docs.yaml0000644000175000017500000000205214165547207020546 0ustar carstencarstenname: Docs on: push: branches: - master jobs: build: runs-on: ubuntu-latest steps: - name: Setup Python uses: actions/setup-python@v1 
with: python-version: 3.7 - name: Checkout Pygments uses: actions/checkout@v1 - name: Install Sphinx & WCAG contrast ratio run: pip install Sphinx wcag-contrast-ratio - name: Create Pyodide WASM package uses: ./.github/actions/pyodide-package - name: Sphinx build run: | cd doc WEBSITE_BUILD=1 make dirhtml cp -a ../pyodide _build/dirhtml/_static touch _build/dirhtml/.nojekyll echo -e 'pygments.org\nwww.pygments.org' > _build/dirhtml/CNAME echo 'Automated deployment of docs for GitHub pages.' > _build/dirhtml/README - name: Deploy to repo uses: peaceiris/actions-gh-pages@v2.5.0 env: ACTIONS_DEPLOY_KEY: ${{ secrets.ACTIONS_DEPLOY_KEY }} EXTERNAL_REPOSITORY: pygments/pygments.github.io PUBLISH_BRANCH: master PUBLISH_DIR: ./doc/_build/dirhtml pygments-2.11.2/.github/workflows/build.yaml0000644000175000017500000000323714165547207020723 0ustar carstencarstenname: Pygments on: [push, pull_request] jobs: build: runs-on: ${{ matrix.os }} strategy: matrix: os: [ubuntu-latest, windows-latest] python-version: ["3.5", "3.6", "3.7", "3.8", "3.9", "3.10"] max-parallel: 4 steps: - uses: actions/checkout@v2 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: Install package run: | python -m pip install --upgrade pip pip install -r requirements.txt pip install . - name: Test package run: make test TEST=-v if: runner.os == 'Linux' - name: Test package run: pytest check: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v2 - name: Run make check run: make check - name: Fail if the basic checks failed run: make check if: runner.os == 'Linux' check-mapfiles: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v2 - name: Regenerate mapfiles run: make mapfiles - name: Fail if mapfiles changed run: | if git ls-files -m | grep mapping; then echo 'Please run "make mapfiles" and add the changes to a commit.' exit 1 fi lint: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - uses: actions/setup-python@v2 with: python-version: 3.8 - name: Check out regexlint run: git clone https://github.com/pygments/regexlint - name: Run regexlint run: make regexlint REGEXLINT=`pwd`/regexlint pygments-2.11.2/MANIFEST.in0000644000175000017500000000021214165547207015067 0ustar carstencarsteninclude Makefile CHANGES LICENSE AUTHORS include external/* recursive-include tests * recursive-include doc * recursive-include scripts * pygments-2.11.2/doc/0000755000175000017500000000000014165547207014103 5ustar carstencarstenpygments-2.11.2/doc/make.bat0000644000175000017500000001175414165547207015520 0ustar carstencarsten@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. 
epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Pygments.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Pygments.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. 
goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end pygments-2.11.2/doc/download.rst0000644000175000017500000000236014165547207016445 0ustar  carstencarstenDownload and installation ========================= The current release is version |version|. Packaged versions ----------------- You can download it `from the Python Package Index <https://pypi.python.org/pypi/Pygments>`_. For installation of packages from PyPI, we recommend `Pip <https://pip.pypa.io/>`_, which works on all major platforms. Under Linux, most distributions include a package for Pygments, usually called ``pygments`` or ``python-pygments``. You can install it with the package manager as usual. Development sources ------------------- We're using the Git version control system. You can get the development source using this command:: git clone https://github.com/pygments/pygments Development takes place at `GitHub <https://github.com/pygments/pygments>`_. The latest changes in the development source code are listed in the `changelog <https://github.com/pygments/pygments/blob/master/CHANGES>`_. .. Documentation ------------- .. XXX todo You can download the documentation either as a bunch of rst files from the Git repository, see above, or as a tar.gz containing rendered HTML files:

pygmentsdocs.tar.gz

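As a quick sanity check after installing (a minimal sketch; any source file
works in place of ``example.py``)::

    python -m pip install Pygments
    pygmentize -l python example.py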
pygments-2.11.2/doc/examples/0000755000175000017500000000000014165547207015721 5ustar carstencarstenpygments-2.11.2/doc/examples/example.py0000644000175000017500000000047514165547207017734 0ustar carstencarstenfrom typing import Iterator # This is an example class Math: @staticmethod def fib(n: int) -> Iterator[int]: """ Fibonacci series up to n """ a, b = 0, 1 while a < n: yield a a, b = b, a + b result = sum(Math.fib(42)) print("The answer is {}".format(result)) pygments-2.11.2/doc/_themes/0000755000175000017500000000000014165547207015527 5ustar carstencarstenpygments-2.11.2/doc/_themes/pygments14/0000755000175000017500000000000014165547207017542 5ustar carstencarstenpygments-2.11.2/doc/_themes/pygments14/layout.html0000644000175000017500000000653714165547207021760 0ustar carstencarsten{# sphinxdoc/layout.html ~~~~~~~~~~~~~~~~~~~~~ Sphinx layout template for the sphinxdoc theme. :copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- extends "basic/layout.html" %} {# put the sidebar before the body #} {% block sidebar1 %}{{ sidebar() }}{% endblock %} {% block sidebar2 %}{% endblock %} {% block relbar1 %}{% endblock %} {% block relbar2 %}{% endblock %} {% block extrahead %} {{ super() }} {%- if not embedded %} {%- endif %} {% endblock %} {% block header %}
{% endblock %} {% block footer %}
{# closes "flexwrapper" div #}
{# closes "outerwrapper" div #} {% endblock %} {% block sidebarrel %} {% endblock %} {% block sidebarsourcelink %} {% endblock %} pygments-2.11.2/doc/_themes/pygments14/localtoc.html0000644000175000017500000000072014165547207022227 0ustar carstencarsten{# basic/localtoc.html ~~~~~~~~~~~~~~~~~~~ Sphinx sidebar template: local table of contents. This file can be removed once https://github.com/sphinx-doc/sphinx/pull/9815 has landed. :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- if display_toc %} {%- endif %} pygments-2.11.2/doc/_themes/pygments14/static/0000755000175000017500000000000014165547207021031 5ustar carstencarstenpygments-2.11.2/doc/_themes/pygments14/static/listitem.png0000644000175000017500000000031714165547207023372 0ustar carstencarstenPNG  IHDR \ pHYs A8tIME" knIDATc`X0H9:)-XAFQt{FFo0L| -:Hu3"(Ab30000h[v0RYIMé`$%IENDB`pygments-2.11.2/doc/_themes/pygments14/static/logo.png0000644000175000017500000006446514165547207022516 0ustar carstencarstenPNG  IHDRd<-υbKGDd_l pHYs  tIME" IDATx]w|~fZz!PC(7H$ HAED!i?nD"'@HR/#wrHξ;3ϼex6CQJmDwӯm9-mGpnk4DAAAdd„ ]bYD.qwwOy6yzz>*ʼ\fĎTZlٲ[呬,ѣ`Ylͭ@TYYY_owdذa|+W `k``?}t͈RaÆنO>|OăG5kL&G3fX OOW̫V s3㍷O 2D%Z!^006xgghWFB4Z@CN WQŪ^F K [&L()&UffςC(U)2o%M28SE;v\v}ccDs׮].\ *`nnCV}W* Σɏ]pm,(]BRGjVjn{{Qe|o3Fi 11eQ~Z! %d 7<oU:߿?]rn+OgB Jbcct҅UCUcb5GiM~,hYB@( cE˗OE~.zwvjp=pwwo˜K;2݅P&BKT,E_ѤڔAVWJw^s$ݶ7*be~tUy. 5XK5+EFI9'Yas.x+i]\\~8zFFoe xФoNy~g$;!#H”F!(8,p 8 ك4$`j*4Ћ o’%QJCEg<_6ajPFb.MR(QPZ"ػnbK (ZaJnjӪU+i6m CBB0hɐ___lݺFݿ۷c̘]7n, ?}Utt֭[7l2R^dmm*((O ܤQFչ̇yfpѣGo>2mذu߾}l`` q֭;e$>g߯]m۶5 ޼ysc(u3333e~ia֔Rŋ[4|1ܹG"Ȏ"p*SSSuwRǎ%d899СC.]h߾}ɨQ׮])Sȑ#8p `vM/_.SJK(\TT>Rijj׬Y.]&oE:TV2Bee߷oj3A^ۦ(]v9( Ǥ$V;R )))͛YZWTvE7vI?rC5Ib9RJL8j5m ]7jpgbL4g;r}8GYEO$CJDJseRKk,%5#U:yߢѾ$TOcm3N+V8]W[0gΜƍܭP(la.]+00Djjj+eF?~Ȅ }fffY2 cM)T*G.D"TlX\lkk{aȐ!_:;;ԗpO81ɓ3(4#))T%%%9r$ݾ}=Rl(Je>eYD"p'e&uۇ avɓa(FJs@ ˲qvvv ƌ <+(JB#Ph-+[ a9, 3ӧO&:Y\R9,^V~~>c`` 2uphaaa=:G&u %%%"g&|UV{joܸ$::P6U*v횤V3t82 svСK^СCݽ{wtII˲J @,C.gH$"'''NIQׅ^IO7bד İ:D8V8tief̼m;PA*;rm/' ZSAjXCֵ󫄌VƭviqhH:qz+WIJ4I4k__ߴ]|ԱeڭDvСy<;ܼyvaGPŋ;B*iST*딈88::fРAu@)uoA'ǡE}h"33ZF)qpttT?~_"8~x[n6sޚ(^~m\}3fee1;wQPPpB*՗>˲pss[йs-[̭}ׯ_Ç,˂8HfϞMo.(..^E]...{v޽Z#G\t)RP1 S J)ڶm`ܸq+_СCk׮6 A걉 ,--o~{WXxI>m.%>;NM\ҍI= Gٞ06n9Ta/n Nfcd4ږ6,kNi#\2O<kNo#phM~PPPCLZ) ЬO9 6ԡCF7da'=fddtYBF___R=z˗/o+,v{gΜtJMM^XX(4bH$H$ Ba@ PVaۆ8w\Ovz@H_ri:o֭- T:DTB0 ,--PJzzz 6LFI6''ㆉ ٚfۉBmVV@(bkk{jJ 4iR!!Du;}u;b\zu={΍1BJ)|SRRC(5j9''gk&M2{I$DJEddٙ3g >zhlQQQ ]y,[,S}̏t 퐑zX`2sbvvӟi`mm~jjO%##C|4,dرN@ff&8IBa9Q*Q`… gr'7>Dڦ<|0Օ WwرQxx8OHH/B-X|yhxx-Z(_Op}<<*d k㓢|RGp `d}zߛp-7xm:SG ||#Nk2"Қͩ+KT:LKVr8ٌnᱩڶm[;w.G1C؁SJnڴiS.X B{...9t0 O4Ѕn~+##nyӦMVS4͛7_V:(utt6&YXborrr#DH~fy>O&loo:|5==B(GcǎTUDC)['OzD"N2-k׮ѕ،r{5s&`ɒ%'{Zlwܸq'[Y@=G90['ŕΕ*Q}MV5y z6YBjp7yH@gBUnD[i -kgX;apa7jִC)%˗/YA999%ua]eemll: rUjjjC׷L{',tIa ƍK6o666 u-!iuPKKKxvBM u͸skccױ QQQ}esDZd2Ae_>>++ :O?o߾*eРA4$$Yf.Vq,,,ڶm|i>ܙGJ+꫖=6Ǩi=ͭ;,D^FC`ZTo: 34᧋h{ּS~ɪnV2DA˂(TF\ ЙME7݊UӯLZSIJJ2\j˲VO?q)))m eYN,䲣GN֟=(--Y.< !2OvvALL,}ŎcP{('M4(B<ă6NjQ*jQ]&j SN ptt8k֬u]lƍ3fI ?J$ N&JMu$ S 2 ,,LQWL:u ˲q;!!ڵk#jV`8;;w;wn6%!2!ӧϧA {}i۷o[C? !5Ϋ҆o߾&LXٶYfٞϛ7Çs/'er (\R m[,EXtuؔhU (%C6Zn7쌭wVʸ(t?v)%5АNuPdK7=q١UJ{B-, c6Ξ=RpB;JD❟_axuɓOUe``0#|R[A/`"sԩ'+h7mdq\9r޼yϟ_gAj-K>8zhv0qt++'Nsh~pp0w~kŊO=t;/CCC899MԒG] LMMw%1T,y bs&&&#GqFE˗.233{p>qܹfff999:}_-\C>ƌk? 
pygments-2.11.2/doc/_themes/pygments14/static/bodybg.png0000644000175000017500000014527714165547207023015 0ustar  carstencarsten[binary PNG data omitted; the archive dump ends inside this file]
l4xQHX3m` IJ?-e@H1C8x ,#m?5lF.E oB:"_3虾脻nAxZ)5iļ%.K 5K mQ,*BBÁ׃ӱ޴TOMvu[O.]:;Zi"Vd5Džg'x R(lP0H`TRS'LG !UQȽtp_m{5?a-,SA4[?;2F\6o5DM0gPMTs:,IPYZK Л/5kȵTͨ#--pYz,5gizg`lʜǧHݗj)ݢl?pgT+ R5g]v!e3Nf8IzmmǍlk 34l8|DU`,ّ;xO cP+(E+]Hw]|$?tS:"4}t|p|/< ߸0 pPZ*⼉"<7"yjɶ[m9 +vV_RRA Ѓb,i e$D fwC:jӆ#KFY'XM(>//<Ecy*XBgY'{ѴRt;;W„Lrh>9` QJA'X sh675cEP݅!%:lyJ07, D(II $H#вHr {4#.j(a|9XeʃX6Hc`ԫ4jxW;IP IR}d S~: R*́^YVYm\!z|& 嫌 [85;o5}8pec\@UNߑAE3/SPTY۸<apx\(дN 9 B]W|;5xmRPT%6ڙP(dzm%TC= '9&Rc)Iw <#>t+SH BF[ V-x`m҆Ǩ5x*kа~"GaQַeaA=M0!ٿ+G=#5!+\ A:2jˆ_KkGYruځz &.u2[,8BH?Lb:}h ;X(h:]`VY| %}XzqKq7$Q5)봀5D&0}.?ꏟK/>(Zzw>j#-/"H0}霖FF&NWIG:7 ~L+2[#ߠ~6]ƖlÁ5y-Pa$4jQ*2!ca4/V&yl5gYWNCh4EU37*…m-\@MBpTR2uLtÐܯ+g,Pg ֝#H{"n.֪Ǚ#v?3EJ2k lD $ oȯq@;}8`9%:ҽ3f Sq Z{3tuۭ} e(mSn1 8iq)ƴ򇹴Ta3Uc{Q$7D1oU,v 5K.4`YtRvVX^S$*K-IY-aϿQF֬Kn%mc2VY+1#8 %auU:L9MIҒrLBbrS)UMQ?⧥r,+ ȚeAtX6#6\1~vSS)lz xkҚؼXA~\?4 %]ModU;@YvͬMmZА]\ZdZ7mz h#/RDa*-yRPewlVe 76y f5|G_E ,۔h_%'icN3ߑ,t?¨ cig)ȲM1 x1Oޏ1yHk Q䛸1:-M>tZ}lA!9]ɐ_@m|q뫇wKRE,nˬ8&Qd/ɘu:tξa<0,tv$t(`jer8PJi9wS/Gm+cUw9SCwD'TÄR.d@BjRT^`Uk8 4bTÍڀaksY}ݠ%ITv0–A_$:*[\B5; - >4>(Ք|Yt͟NnD=?)o,Wne~.5ͿC8@UWR8m]zpk<OΧL BI#sq97o3TK`ra)gvτuv"[t3oJ5 09GЈ+t?{se6 0b P>QNt z`tМJ `eNFyKࡋƚ"GRD"szn_2ĘKhVJM-~tD5h&M^4<% T"qj];B/kybed +Z99HLeY%^U鋂qÜ $O `$Y3EL[6cFZ\ZzHV5;b`TS\X%,˒?4)s%hEViPڸlG|PɋzH/wS~ZszxZYo|P;{J TrS!AI=vh΄ f;6_)5Hxou:YB2M]TNNB%[?H-6n碗PA)+\Z]{V1RGw@ENE_W~޾ \X-&t3ty[[*O(?ԻXn CۋX4qz{A1.ߠ+@^Ց0VȪ^c,qqvfi^P.hl`q"aB"TFH rf|4?)Tx`Gr4&SzHlu@7k䎱_ݻ {E-BZ׵L4$mHN:AԌ `YQ4J+Զ(6tm 9KFy1C udobA(n'%n;jTmGV3-AJ ɸzO ;-H$E%y.Mm3>fqv-VKJ= hHUD/$LDeտdAGzWlD lZ^9grAא&qE\ r*ߓL3݄D[>_%ZsKRX.שM)YjlPe;+b;q(r#Tѹ0]GUa’1j^mF72,$ոW LAڸ8Eo3}T*ne+l RP9[V2UXerS ( Ip8b0RH$+9H3`Z.ak b|RKxb폑m$jef#yE)!ahX:Krʒa j A"V ,|lMz,0MԒޠhiSCC+yTdKW(?6<8ɕ3Q1Q6D_G_[ZzhsR/>d1Rȕ\A/'R)w\J&=,[^ ,Z<:Co +X?}Ny2*%$/Ooz^9Ԧ*6rXJۙY%mE(#@dAVYZje1MkըV:ۅs+=lY]al _ ~2uiS)ஞD-U&%sʾCm}[FT:[|gk[.fMK&' [eY% ljAU+5a$ޔ]ۻ e:Ʉk(~ qbMd`6.2' XNs4Xj[xdpY<߼Tnep#:I{ "J_A;;IVABhCe;/F L:ԮG1pvthx_J,$!U:2{th^j`VgHZԭwz\-]M%W!]T"1 6|׫=gQt{H%w(5gim!۹½2񦾒)ʚ.WYa.yga*\Bn9Y99tAAPk+}-Ѥ7󣇧Ya AG mddSϰe~3I(=Yu ˊ$BlRZ2ۂכff(vsLui~/_A0,)[k`W}`@W=B3-X +%%wbnxW[٠%69BqKVۋB L$!RȮ+V&8)DtwRg7@--B@FsEdΙZ= _A VYz`de~NPl̵F)Z"١-Rܨ(mԊ#6s :hG}B'XVur{X\X*V01e.0i9zNٮp'g@Ǒ""r+4VrE$'Q3U|[ FVKW)%i{~)OJGdx{viV!:1$cc)9;ZςmQA]JYsZq3reO,q\C%%mXQ|D*`]Aw~QcIi19 :]Drli.]'S^/姞[y |1 WEJã|6/%J'S\?CII; UM&1mgҶ򙆗Ub  oY2&µIWtSj1i6"i!P;#:d/}uv=)q5&+{s+/bGcC9LeL,R܀6,ܥdvp{c'd)>綃wjSgEhQszs0BӲ@Z&teFoȰ`HZψq0 d:WI>ZZUD\`ryleyaJ@YE$Jus ml ùB̵G)'rȮMtHOTJ@Tq%+{ma$8s_Pזҧ.6b1^q#E|֜IjQ(Wm\y,EIj?Y*]mose 3S;< tLje WrAM`N'5qzCzoz\=vp\93YUMda# v WRvsAY:Xryɠ0ZLYiqB} 'Ĭ-aY%\Uy9dD[8~d-TD=l$F=1J .PEy ;O=[ycwpqT="R*XU(:ںc&:jȼN< [*;!uߐ0sƓa7"QRꢓGhpFrK===0s=_Fn{)bXmopU6Bɚw(x(孄 \P2Z.3}7;h(.LKweB$wߊ[~r" `̝`wTx:3L[đKM GH>@?N|VDhf”-g/~Ǔ_NWf$y"n a6#ЈS / )^@ωykN0tލ Dg0uāh/SӸ#–Jt~PjxOa^}g!LR*"qw` P:WaKkJGL_uCaL&ɴ(0`'R<&aVVђ귝@FZf_a~'xU0 Gq,9_6Je^cX>cIǦieR^`e&G0V /Ƶi谵/')JޞȳGLm߅F8l#0gekry$fvFrWI8\EPEoC୅GhIf'WS#B(9u,s|!iZ H7AH #70Gm*8pXH֝1e?9ܪb]Ā/dc̭;I_K5Q2G@ȟ$Yxx? PMc@/%YC!k]M5;ZTdvtv$%{/gX1G[SZ]&sz<1n=lhfY4µrGUÀ֋b4$Ue/:5G PM%0;G )e^mADlζ/֭nY2oe+HaS <=WTUajLLP:fkⰘZ$}]ZV"yiw]lZf93ĬXGZ tyX|g_NLٜm:xPigԣ52ij]{,ߗl3O5tce!elH|3-zv>" cdQ8L䖴*A 3ɜy'\u`Dka-sF2Wy&S~̘-Ϭڟ_4$^ m F_HV*)t}5'_*<(ac1d$ŵTxˡ#az;$DeNZfN6iAݾn= #za5e7)MvEQ!0 C(Be8Ëj1%r5z,}O.bB.za޿HbmRX9PMf&Q7\)ؖGHm*;NܥKxƩjqnc̻k)y`ӺǐDxiW֝cGpR* 9 ̴b.tHڅ8è{&VJЫG~2 -ʶsƫGduT檳h"z^%3RdE%Nw|w|bjkW8}@cRDM4\bO"#;r:?ؗݫ۲fv ˑ>9 9HRcNd `_bT4U$e6@3EP"``#?Cn7yK84H'#!%e1LGcg|^zMA$?ȧC͌8JzXuT/nұ_ҬxCba7`*O`[8_ ;0 ȉ0KښdX=]^S "W'G =G*Tj8M$r ONH&yɄ$u?}~&ĭrٵY< '(s=FHP7guml5R('?(l璬`63 iVPpJF4 U\;tfݚ4"cW|(I3trVQϏXo`e)wu~Dd *~wx<*Gf4ʌWG+o|wnG2>! 
'yBA6[d~ag}v5ro: < *'dV8<-$q:GxJUw{[{@냖JY^>!syMcST]I7:MIqM.DZ4/\IJK2"n @UF". Yf0=؇ڍX 0Ò`?V_$|m6N(i4)HrH@YK'6lhaO,-Vlbu د ,IJRT6,!9RIm9ǶnDb6\^sҪS`5ޡX -SF@5C (K`Oo$ bOW) )ژDܪ0z1,<LG!2KHܚP.:tM=J8rNgs@q483 stL$/0Ep%G> 0Kߏ\adF Tj ~  vi%gYY&v~2\T"؉"\iw&V^#Fb1FK@:W8Z ˼J?Z67e8;{Jڢ?"0揮8B]4\! p l %9-;L,umCQG61|;m.,WS#z'/FHzp3 촁L E/|d q/ G@ZcF)YHfjy,FB2 >I5dM-Ԛ7 /_"D(dMܹ9VUnoL>4s ir>v|OT4jZhl>L oO\?8 T-#7r=0ΗH$:|6#woM0F XK@YI[y"59fN1@'lfKN*f)=U7삡.I zǒqR (8S=3:^2|gǷ/$Y*WRIᩀ ~pƧȑHҘeC2 ]}-Jte]v>d텶ũr7*MeU ̄@$,Y0,@Fd\|{&T2= Y0EpȾFd ;,&!O Q&K:Ie1uM_[*s4l@\ws|o= _JْTĕ(CbٝFc zK++89d'U{lc|3Hnk|ajt7wsѲx4k7"6 SOJRgWM5'ä)9]7 .ˮ1eh&Ѕ*VJ&_Q;T1̲p`D="|]HAWpgyI@ghgҗcmuR檱^< .Ru#P@^~%+rNt\DZ KtC9Wɣڮ/7 *l&.dTM΄ ou8D"sQ^<ȳS7/-ZxR_,29RC[GlSzb6vF^#ּ &F,cC#إ&1fᣘnEgnoPϋJ{G&#H`4/yyrc HZ|[Mq!n%PU.xjS-AUBRRp|"%"}B\K 4ޗ>?4ecYr# ʲO#^$?&`yY@q)uoJE}Zݫ]NځaZ }㽊U ʇM|@(CR$u`.Z.R ,GYl⯊'f8dvU hpi#\ZR 漆cr[r^ފX˟%H.)coGL{C}g@(i/a#w6ÈW&"=Sp[ᜈ^MOUu⎆bJJ({H2W] ˟8~ j oe7Qk{LPSq.C6N@ sίXlJb|d*uw A~/0<-<ŦmǗޒrolެ!{55~-,{as̮:R87A-*,_封^Ax('<0Pz#LP#ХAn3^(ziPq 5D;q88CdmS8 [nbdVJI5d(ana Pr# 1Z)>d|I#yOG^ rSN懥}v0`L=3?f,pX^WxEE\7ֆs#R+eDl"͹ ]1# b} WeHnOYs_(glƜnII GX˘k>$_a q:IQs\q>pʆ uvrI ,a*ݶ%vdYxqFt*$;qTz)%E!oS@Z>=>mL=g(Fǟ"n4CnoFͽvtոVYeI.uك2GiFnB Os)}l6d6 0Zߔޢjz郠-SH-pIe'gZ; E"1h%LKPeI=X܀k.Jy;J<,3F̋a~((1KFwIU?价C~Qxv::^ `/e{dxII\VG>9wG@+EQofsVF9F3GBV%%tEXtdate:create2014-01-18T09:53:19-08:00R%tEXtdate:modify2014-01-18T09:53:19-08:00#31IENDB`pygments-2.11.2/doc/_themes/pygments14/static/pocoo.png0000644000175000017500000000415214165547207022660 0ustar carstencarstenPNG  IHDRJ?G?^bKGD pHYs A8tIMEgIDATx{u?.e,Je1ıT, }L3r&D(bT 1NcZ^HMD,YR "X.ۙò={vgf}{y}y~,- u`x8%ӦӁ Ty {{ʮw`{xX >)ie٢Tg "oQ  IV&="qh"iutsxoqDt%m$j$7InjE-`pq!Z@>I}_x{#I=ZGp2 s.E:G"XOD)h hP$ [ T!^on If{ULOYe,YfZh U:2ΫW1!I(EmV&~5 v6RFĕ <.7~}Z0QkQDRg Jҵqt3SzIfG#%RWhDt[,&ǵz`W6$"!I BB߷m.-43 \+^eֱ!5>0:O-63%Z7YwtuxkKm o%"TNp3^E:8wgaI&v[{1 1f7\h@]dk+ihֹYlIz 5d3j[Oͫ96b? gر`vU^}^ʚڍ ֙3ぷb<\[n,w\WFt8-LDnvEK-v.9nIJh{uaw%z%DTMi$D$_)0Pn^nQiv%O Cg[(rT\y81rAۀޅTD*ѽڻHTlɴ] /7pm?s:. PEJs6B̲նzFM:fvZ;:5\e%=vqqHd-\Ie:3U\'7@#(2N)"*=-Ϡ'E^xbw:2+kJ(O#*eRI>Xo[ZqiVKuIZ@=ͨr7.A BfFRDij)td&5P@uDtAO>IK(Gf,-e]i"',͵|G`HW^6Ȫ+K(ӄj]42ejFc/tod&6z>)i{^r`"b~KDɔ,e)KY${RpdIENDB`pygments-2.11.2/doc/_themes/pygments14/static/docbg.png0000644000175000017500000016756014165547207022634 0ustar carstencarstenPNG  IHDRr pHYsHHFk> vpAgJ|IDATxp^uHJڍDsGɌw .X-whU2T@DYݪn%m>֎=16J! ?ə^9bZkmNh`JV?Qnd=&VlPr"K$~|q{s]F5;Uq?X}<Ky׉jX -?g|;.xͻ'i,ut;;Mq wvl-~ѵ?V# Pv<'X.w䯺5GH<k=ok vlh)>Fe!s|Dv:gw;rW>g+WrydLw{]φ>'>'^Ŏ<\l)|kt=»qස)wBDD08Q}}};-6|ۯũ↌vweܶ+w coG8|o6{8WHFo c(;[aǍ_Z "Yxdxyw# +qpͿ̗ oVu]C\wWJhVs[f 77|t`zpcs~xxx9YyssKv-e69yly'|1K,{~Ʒh7w|:-2xMw6ۜG~;:Ov;_s~ks}}<|q6T(rYTݦ)A} |ة`w  ^`ﳇ 3Ӻ~e;.'<`v< /{.~u}W^}5A6j[n[O |\|Lb[*Gs{bRe=> B+LEkfNUO |vLk,1m=gYpAӻpջo7`5WDog3H6UũjX hV݌;8soǓ4ȑ#;N}f[ mQлvXmīaMMƱ ^7](vybK<s9YLv;b?Q8k1 JGߎ{St^ƛ^yx6t7iGHhc}.[Ll-212%bΩJ"7Zf bw_ED ~-mH3&yxrkWi%yy9唟cXN 픳H U*HuR  ;q7^'iTy4_V aMp=>~+HuvrSNrXNi?VeMXoڕ7F>ۃ;r惽ծЊ7b;Lyʧh6=c~)ŋa&h,R o}t+{:k6|+Y^-86!۾-Xj\-4ws'1ߦU&7_/y& kt;XaM'>+`%o)Xle'fkGI7KqClÝv8{.&JheKy;?ۡݡ;9cΩt`([l <5&fQI#ڂڑG~;Υ*l󳧢[+!r!icZ)VcA2(ͩ*Eta1Hs1yeN ̑ϱ4? *ѫp;ˆJ2` ǭS*jNs:m`q'3- Wozj]2Clqj O>oG>fOaSiYQco늫ol,֖4@7eTF¦8nE־-`L hJ[Zu6wއ-HBnzd]`E?աL]iw+1lzkj/ s▌Y6(=+Hil+dTJuupC^MC }?a67/UaxwIf%2P-Zg b(;_`$Plմ!oNd8@UJ+`0 0O)nNٴQCEBp# 9m L=j`2P᧑oa 5.0 8 ,-geABU3(e:1' D &tBDU ߾W@s#MeaCqbleG٫R[/'YtWZegq$%/xAS> t@1{:.ȷ+e63%,J_2z^ ˥hEəuu&$ V=Re1dU (ɓ穿yŃ71֣LvWeTeI)0bhfo}ѵZyUj#iX%I0HLRGoɝQ;GbvZR0pȏ}bU. 
qW8L@32H:Xq3*<.NunpivOɅ(|zYO?TT>" N~'?+O֮GF=j9a NP(:LsYjix"ФEuPƶI3KQ@t &}P2k>^gE^ݺнIG(!|yRYW!7\9ce-7ܦAֱ?k.^LX:n.;ɞ+w[eITL&S &g؛Qʶ٠/Ƒ,9< I MLsڴru}TqZ >}pPN }h0le7eF((Q欬` V}5W~noidW%s @ڏ[/F+ʱ$Oa7N9nq?{&\FU2P튝 e1,kjŋXIQNlR pZ$.t'ٌo2 }ۿb%d$ B3r1[4 tħȈ"4Dž Z ݦf ;'D;,_db+Lp>]v#05,{$/%ŧ˕`>ӒixM]Y1SXG~8C%̚t.23|  팲-0k{0؉z{2]?zп?mw\~v~ e(W MI5r9|_)Pɷ'<>zͣ9Ca4mhfu0Hr}haZ<{촊Ь*?_zʍ ,ߟ=}!|v 98Y&1ƋgH H>l yyxdzjUN2[Sid;۠9UϺ7E`-6p2 ->QhX\鼧/,P@9¦0Y;.ƫq8W@j\)eCK~_v\ofr6.ʅE:+ w972RI7]6Id2)Fjśp3~8ܧuwۇT?Z*_mSD6>1X2LA6D /۵WZ( $n %Sa6)GnӑtցgiuڤVI/D>+a\ _Z<6ic) b[?WMY4V1 L U~g _ϠYd"Hm vȧ3wqI-p)9z Fq.+f[_>xꪀO=løkaQ!DlJXF6.=Z5(iS (hRhxdFik,fal2RV 9K46l#m Zw7/gW&R[ٴ 3p~{$󗑔6: E W0Ʀ$=#c )quTel;h\;&?=֪h؜ Ry|bX7ifT1E q9rNeo̥B8کa?WIÆȥlz 9 Gյ}bl\Ҩ:A5ec72,P(Ɵi7sFBBc`$c`V;/). FzRo[./!Ʒ5ThJ귓_B&L`=b:\1G(-c+,Jaɏ{rf¤:ܣm%E- !'h(ŔX;00O Y]*j|RW?5tfI;lqʶlXȩƌa܊ȅ D͜a$ 'O\%MKt?.vE}xrϹ0OZ,'ad+E!|j0vˉn+V]>OCtCt< 5XR$r7"ѽv9Rp1Nymڦ "{oDWp܍SPf\*_V>UV9Ix->9D[Xک?#d<OaԛqSGTce$XF }NHP|:wV)n%͆/cGB񤮇VKi2w#Ş+V+(!\τs YGvιOoއ`xXbL®Hb|) H=*OI0FgDsrWpDYv]T\SMROJǎ|y1Ej:67P^ JMFFOaK }8/q- S|N%CB 'ܸSaH rBY~WS*Ut˴DmZ-ƃ }={K |2 4IcU'ȷqAJXE+\({&Kk+lW29 6q.لΆݜvSHz~'ߡyZl}|G_{͑FV] 已Ĝ*fS0b`d;mv2v̖ wTk 3kǍz0߮…+t]uC/BG'/~~{:_|?TXWKpH\)TQSTfİ.(gf+i7qX|PclHS0XLC=&ݪYm?J%$=#=GNk Ť"C+<.+FH=ksXRcx0]*d S;HypTywPT 4V%(\j =iZ.K OhJWfK7I/X.9~]%6""-veV@p'b:-$C^H0xBⓢ9z^=9ES0yN8W eQfבR'~.BUTa#w~+=’ݫӼ״ox%tYJ#8p:{ I11d!yPS8pgVqE+K%"}#ʰUrN1ߊRvqݎa^}X<[?TzU>gbVy>+AcRN=a&܁fF UHPV%,gTK{l%@zJz\L^-,X TT')G4ʕ,0Td[7M"AtòdL3hy NPx.p6`ƖŹѼv=vH]DݙI<}h(2KVrp WFY`89RDS('%:/_I0/y\QOZK0gyyE0ֺWL&+*/s)uL߅$׽砐 GpT̯~vܧ6g?BcR]t{T "b@O)4& hςh({JUHBf<1:ˀ vL %WiTV8*w gQm Ov;.)^̜E;,Vcbp.I t(ZYG\D9llR{sPp\)Jzy|]U5Iɨ SQHhQYZrb OH+;~"PkxU2Xh\VbZCzlxh*E[ .\&_2LƱ0-f팸-;[qХ#< )~N0SXQ7B+Ӭ#IsGEn痕\^CRqxbܶSyB -# O9O+1KT٩^̡q)w69FIZ[-h#CbkwEx!j&Yg.ȞbV/Lki앰(* f]*+sbu fE0>?{OfbHDteWVNuMIN9ġ=O뿢 ?%Uĺ+J}2a9W)@Uq%{f#TUWJIٕb+y>M"!Ӡxu %  PWn)i bմ~a*.(DrBſ!3UE ucuO];#x6 8K9úyD}ؤhV0ޯJM,^]BQ o"g(,jxa6m[cI֬ `)=\̒Қ2h^2pFMV*5}JZ Sp|3 s#gػuK5jHuWģɓ76o; #^D~&{γ[R[*.Nգ9;]XʀM{ڕVz7w 'N W^*SS&"uz_0͖5VDk:9x[hPǒ)kzK!NME C k8F}yhb<"2R5 Ha=S _2ͫ:PxʊrTcEjv1C0[ vU⡈?>OS0p~2T*)>gMqnOⴅA@# Gÿb`M z0MubϡOP +Ǜ"3Dx6챏e L%tYШc,Ui =(+(ی;Of<< qgJْҵ._ r~|sT^+[3ً:m:Otc? 
CɃ8YWf2|:ڑ亳2Ճ'?CIsg O2fdN5 / -dLA`Fipm=6YSy6TJsmvu]dhBT=ZyJE3d?UvU tv7vJ9W.)R_)WrN~Mb7rūǹ U޼A~<]ULfߢKqS{Rzv,D;'LYc%"q'Ja"--"rnƯ%j:=U1 (Pb.nŷt!ȭlh)& QhrbE7Q䗔ʦwcTZWMwPȔȖlv}u4.򎍼-\@_I_,_L4CD$Hâ+]THj P=H4{~j[]C;gNSKN~6I'2)!2c{lnEDG `]PO)*+?G7ѱ z5edTi`MЃj>Y#@,Te%C'A(Ռ+dlS=EJDHG)m% >>(| kEjyNŊpsX#O @3%pPhڰ$ΓtΙāo ~Iybij֫~q]?u:$ /v esI&U  N&h00YPw@o 4V qScu!E힟Hf] 8ܨ,b*~S2i[rmx4y{-{U8%5XR۸&\Q7u]JĨU a% !kST y.{>/ Z7ȿ'] .皷0D1$mlʠQ6Pك wsiˠ Xpe7]ީ_4:FMjeƚ #fKmRM&KfԲDYDDr l*)1 bJ>H Fۣ%MD >Dz'+B "h3T2{UR^1V2VםC^*t T35ڱ@a V45Oh&9M8j 'Pi1QUU`Cslԟ \88C'|H2^OdX;#f8mߎn3*PQ-sDD2PD)(>:ƀE?q:A,^>x4Kٌ_d )x.q~۵s)?fT=>{n(CJ1DD K8T3O Y]5mk[AR=8q&VSSκڌ('r8;?ߥ8/`q;wg/y qytLz7h>owigàK10<< xpr~"L73x;WKΖrfWՉm[_Iz鵫VcF$čE|d|_3tӟ[` ?:5UMh+](vH\@ UA-2> Cw:xc]CVĩ'cdG쏥ٕأ"WYHjr$s C[z:*9meېV]_Tr+pLocn&*6[J@er *1:EUzzK)pDXl;25Y?maP"՘hHH2$NjnVAOPP*W) '3?"܏,g+>Qt([C+R7lš+Va&}6>V"ZcZ ]f֙zׄ#*Vzߴ+NV:i:V)[*ِʅiiV+o%tUmi[oZQUM]Fs",/І}6P֐Iއ"U=H@)@$M`ސC8pq[X/OCZlu x=4uk*ٱBKEw$?Gh|G<) ]sՆXyףxHXr[cYJ- [$ixR GnV~ݷ2Y~~ܯ1s-nl<;'ro3][?"A)N9P)Tznt*@gE8IMfc>lxVmzG̚*s?}jFOj*;9w)mh٩~ƷG!cUέe ~{hއUY*4fכ_3jOcq\(s唟y+5y1@*^n&+md$JĶJ@>jˠN[aoe*V+fn GD -?<hXθCs^*'_;z>'G*X~nu<,UTd<4Cfc9EBVpF=벚:*˻ưکfM?Bرُp Ar1X3ce.!rd)HH J TW"Q΄2~J9^I%M_p 엘DgrirI mf!#j@:(DP1wX6VxL:8/x.H˕@K4!b$hQ?~[y;b(4LU+Lj:>7y(]P>ś3d;bthSKB&秳簉/dK_k^r+-b330/dX*;T ) )`){+ClqlGig먲 <颻ؗG&m',QSu5)0*\71ed|/au(qla-..VYyV")Ż=2kCQ{:_r!(AA.%ҲHn jgɫ x6)&MCp3,14ahc2ҪPzSĪlw9Q%RCq63GzP=pa&u&vrDckb.0R3X䖳k[sb)?3Dg%"j&IjT+ k 4DBr HD3ehV+M[MIh;_r19aKWJX;)ǟgar~hC'0NnJ'"*]L#^? ,*+ëf /1 ZXy$ 47k=p1NV+)E&g`ӳAB<I,hPIi&ePFI"Kq$粼Oo[̴voWUc٠ +Toen#3%y6Q>3#W_/ﲜ̧sSs`%Q/e}QUc07C/:V$vxRD'ܸkv)E^l(m]LpIĭ]iGfҕI8 Q- M\mt5QCzJjbT*62zp٠ja=apv+TձQ/qc?Tίi \ PvgA0`OEdC mu7Nm򉬝r%G㩄p%O18n 15Y`snMtlճ̭0߶NF9!#:ʹU\ T7޷'\y)V/jjr\+Q㐧.X}.IvhkDlMLKu$$tZaǯMO;ҭm"3Im>BݞB(`/ Y= |]$2 k %OC@GOB̛ e9{kV=Tg=m,<4a5,~?Gc'b/, w ۂ .WQeCSh(SլLMEk/UoUG{^sX=_Bg0ol^J4 X]\َШ%>DZ.n͉tv YK R:ò?,]l_(úB-\d7| IC[SzSǐDn-0,{4Qф瘉~S&KOW{Z.>CB]Mv[e<sp69J |e>}q cPBkCSsݯGR@_AC .nW'Hqِ4Bۿ 6ji7ij#'DZ{^ v]ju*37AcqGacq/;N౽ۍJ4ԍ\bENa6pRTkfzq{:_5s2QE^dݩ9MeV,l[PA#r0hq AT4vWn`Û;`UjI9Ӵ }x`ۺ a(6Xv,Ń߹ } VO5#IaaZ8Y{,.|:5i$4Ց[& + ~@rJ̑bղꐉŹϋz[L7Uz*ny~,3xʋָ,SxN"{ XFvAb}@`^2 J 3wvw~e*^0/t6SUCײlV9'8hu;l/Mh61[toʂD@F|@%Dce40?!.GcJZ?tVnk#Gs83P $~DhƗe`\&YUn5UjTaW 8|Єja鲰4X$Akk8lÚȴh >4ha x3ukC,3U{\6SE8T(ςJY_fe)]S9Az4 ,emԅ)v6=PB?°1$x1hJ,PRv-L\M,/婰r~Vb JN1W dzxanHGS˜jbc* BZQMF="J'e34,Zuu*I4hp@ӵo]&Mzvȟ=ljmxd&I ;kǪ)4Aw+tk;Od,` 6Qj T96$fGip .c=&A2zxZТh2 Qaw-ܘ f 4jQKcYP=c>޶,aYaxSH.N D= (6F7CE[ģ.6Zc* 7?:v\z#_#L9%td_ qF^R>>zah lh,n=4|H&0"pqٌg%eF| `؈;{ol,Ϋ@ ن!FVzS cl/ WaTC5jNڲ8Rj~dɒ1ůUo6+|Tq#U+V+CXG/VR(?gr +aʑt,S ^]6Ulݖ߮͹^q3 c^7v;Y۟MM8hyVqD]-ΖV'rD"ÚD%%a!,qʃ?CS.bV'ܒvW b)dʏT~"N^xgAiRz@ r)RBk+s7ôE@J.u>}(x|-1U {#0qJAP[J TVM""$mj5tV;1XfHNd0[ʧz9RY层ȅ[Ҽ#I!$"hBcO;Rz顟=r~an'O$a6(~.k%)$[䯦8gɸO%#-Z^^,l_b1ce+͸SNL\+WPOxYQwqj[iS%>B 8鱽7xw-oW0s#HuKO_>HvR+N~'?ir1>.cG4rhU#& #YuISZu2j sUhZ xC:}}4En@3˘Wf0 oߦd0S[H]S8mZ]A`sԖ8; s!w [}"%57d$'CX^Rjc`[v T-*߄.mn;f;`/4 #긤B*C>Gª٠Q?ђ+bZM,0a:hYZ8mm~ M;/t>qxB @%%O4Tۈϲ5HJHVt=fZ G S7(cM5sO[ ,rF':}>Z0/~?!FneB‡A5wU9Jݶ5( /#O y!9Q`ߣ6ek4FNj$SGf:TXu\~[>ĉa_Ycjmtl>S*$)YĄahY";Π<962&Jߋʓ?) 9!4Z K݆ m֑  1#d[-ʖu\Up )lWSUW7=_I#92xT ~Kz26CHS6q^Ӟv]e~Wv=.D%SN "ܙl:|ht'˅bAEWmٖv~+8^ZZ(˷ N.}4vv;hFy_I#+\>qw.F:%Gl&b2zqY7Z~o݄ fVquBOӭ4/fc5#w6Zr( 4^-o6IKԦ5k%WUOq$7]Nj CPs{xmڠMJ@ǜkUl@ن&[Ugɵ]bgHt1T/8bٓ JD`UM *x97XL>Άt;{+{~~C9v'7ܪ[q;)!|(#?{27fֳ$],}8U(삄fZQmTtZF=TiM }@J@1k,b#hUSI uU$:X.Ή\6~XAv8oJ)8YV@mҐ;S\s"[w7)F 77~@\`yB]ѕ71sҪzX Z`Y e|EÃ4 >Zhtt#z)!jl}Go\`ZaV]RS> ne3/;? ?]G{[x?[xxcdxK84r %🇟;o_e]{Mw_qww3i<1 o_l;__Mڠߒy( ~Fѯ>d8~/n#yz?_Ŀ397g櫻w?w>w'e>@voY Mw>gyng<2tiwOstIr/wOrt ?~k-lٸ?_'7_8?K/q~3o?K-9t}߿?E/;/f/??? 
ewc8/+|x|V_3I4@jxcMǤp;!94.Oijnh]ϋb#b__}[/G_.89wxBJ?QJI8s5#Il?AaWaut5Q@F 0b0R !.XivF KAPݷ7o2.Q*0/#s _Wt-}'e5N]lkC..:}+1!ʙے \rm,GqQ}ˤS 4fE=O:LBA. Jׄ^<P@}%k:\L Y@ sb ]PЁܒMc゛nT-Nm5*_̈́I ͖su8tenXD ?_Jri*{rR&}tD=b.{cGL*;N x|)=z =;O6Z` 3GTJ v XF!fxfD q]q"h#  lsψùǠւzR꼧5NX= #tl\4SZDMș?#_B:Sdyχ\N&LOGdnq7ָRg009gr\q*$i$(ktLjhl1kgL _bTސɷ}\l{0”un9P@R;0~ca*w˕Jvf5O߃ك}џO/nvDO ҷ+q+2FRb]ٹbOd۹:^54t+r_=ݷItc7}4R] eɻ!GN] /ND; me?Qs 5 n˽r%DٺBLr'2?M]a$(&=/&*XsFD4,]$1:h]HugEr\)[VFɷuCNyԍeb ::8X=*V"⦵V./B_ʮ|;" TIDN^lUezȑӔ jqHD@aJECʈӾB6d2+$YeE)hVi Mwm?ʏ?0XBMH@o><$unYjM8ll7][UpcQ*d^74Dǩ&jUN'iwbݣb8Ut~̵jGэtD܍0R+>ƪ➥Lsa$<6ǰfy^hd. Q.F56Ri1hd1 6ڕox^+FJq#rS@+Vћ:j{ HKpVozG-/FcL./xS@NNy`f*)Z 2MH3^־PN{mW^a9[6 ʿ6G}IJ{cG ,X%n-;K5WO@:v!)[춦2"8W($si`et hJ;TvT!yljN~vm|KT0JdGSe;`-!,yU@*>YUmA z: ( uyUN؃gmsvhiQ3D2 |iRAJ>4NEP*8Op k$"'2߻w/a I$cJ) d6x|hmæVc"MtM]:V,v#(IQGUP-lk0nMx"!y~}#5a_k? 2Eg`|FHX<=ĦRI~Rmbn`i=RTg)ķc;4Cavmd༕3 *Rg(50 qal1#;zr i N*3YәRWaAWy\rB ~lv/ q_qOqr呲(?=&moX2q~gĿ;~/.t'ēBƙߩ)7|E9e~$<乎?G~_B#xco1A#U%t;[~E dJd~sWA++քRx.7C `V36U>NGO~z2\xծBQL&i,s&%ſ=88уiimpE`Xퟸ0}sZE$S_7!F^R\$Wx}F3qi=\fs[PIe~۞tBGs{biiDlTY {) Ș/Ҩt()2#唶]bKi!c!zmXB&تMUQx@:^`Am07gMCħq SxK!ߒbݡVʏXOJm PП粰/Ȥުa_sצ~Zm'3C@  )Ԕ_5 /=W0Mu>T~ʊ)*HBaS7"'EOVG&V|.E<ٙ|-{5|'w76GCiG[-'ձxgnVVUZ=~#`f,iyeJ/$Ooe _L!@C6;-FDT5D8|-Q^{k?ϊC }7%نָ96cS 5T=vKŁny ]a|IMXӺNpܵh1Khw,t$T27NVyBYA{%E3 0ܰD>aQYa|Wj<cM-Qz0 7\Zd m\#VܼCV4pTo)+l6eս_}]Cvd.1V6-dq4OE=N);PT^qD.cAbrWieL-rAhV6+SOIG'8Mb! D/ynҷ, h} Y7itDO;"~qx[p+nZ,(>D~#;xdeWޖFJ8)^շ?p{4Gb+ʶ9򥼝8Cr$nt)M1cy+)->$Jȿ7}⚳$ w:yRb{DԵ0f:M"J4CE8[tn߸H3&yx Wi%yy9ȟlyr)gˑ2ƚS>֜  ;q7^Lo1ģa\Ma+l5xZ'*)R^^,|+2އ&ƷEu#i ^?a5'iݍ0YvHd։Kp6dF"quY n'Lj~kb~'oVVJ a-3/eWhc rx(u] '$3=)4ܼ VdGU H|Jw]oW|6^hlS~Փ׉,n+ch,ۮ1)6[QPcQa$|yJ9^)|66U 7k)=]/I-oWXw_4e;7rJheKy;?ۡݡA͡`$hTj #*c,n?{O`"Ibkw(Lb1O~.k%NgҍQ6DB^?e7w3K6ȇ9f9+u1svկAf6Oq dO)zڴ&48J@Rt!_`oDpkAV. ܍wǹDv"/Eh>ܶ_8OYca+^OO,F+L $/aiK?8rt-!F(ʖJ7=|8z:ҽcLSg($k|ODoѷc l؇b]tX͆Kev 0e - 9ۚcq˲oW8L!ToS@Myaۯ0rJ!@ 3$6ݎ PP2s{=A;@ ٘5++ m y)]D.7+%ꉰL~ɡrLYi0y<F`?an]Cc}P5Xv{C h D;yg%󳧢[+!U(_VqEQpH4E*ς~4" #65LL"ܲpwN/84`,-eXJsPhoթ\ZAʞM Ya8USՙV 3Tn6UFs!·s(r:8|ܡnYנxĎal{Sg$Pc.uG͊6k4i gأ(p- 76U\〠TVz,]B"]z7?)ڔ#|U |gDT&L jsM|%ރtu gca=xƽK_\+W, 4jx'DS=jB"Kxd+b0+%uJD|ӑ7_Q'?g8T%60_nѷ+}ʯi"ʪb\Te_Ytr 7<'l1hcjp1 ' HoF]2cɫK:6G"6zSTa~wlV"+kL* ƒS=,^ ‹領I)FP&VշA16ȖOuKt1a8vjep5 ]n?UPbO!nȓ ݙ" X Ó'xK:JUM%Mz*i&\BQds'ƶxTjKiDkeKi !_Уu2IR7@YDbB4t!Ƒx++$@\/OU$:B0{p:XQq/۹n+gN>߰1?Ejpq߸=`B' ^g1r`{WeAhS|E bu_f5 tW "UVpʎ0$$ۢZϋNd,![MٖnS +쪱gFmpj e `&oPۣFجG~9KCZ`M4'gH/H @wK]8@*P䩒{"aEQ[$U  |iz)>p94L"&F`Zw״ l*”:|K9-"`v<:bU.GCC@2ч3C OB oi GI Zn%e63%,KY clBs5,-V7Ŵ{XfŁv?- q}~CR&,Rm-aK.! CA ٶ-}z-SJӐpPf˔6`Jֳ:'O0Ojsb? 
*6"ʁυໟ(kũ .C(Sb(> BZ< ;%B`qs&Ϟ^,ARYL=`ae%K(J6DY] h9Z+a52#; k4X+N< GbbjozUO&˗k|WD "V|fL 0XEʭ `&e`Y }1Y~;+`#3Zy;uBo GX/'q;f|1(ᦈR\hF f V g}:*BETscg5h7兙B4@wrRX[;bfSA8R,vF&gdClHKwWbspv/H H>lw4Yy.xcuRMʈ`&͓lpcFL+/Xw` S tw|e~zǸwl<&CI[z-6alux݇JT>籌<0Ϲ$z5[J:b PFm/O^R)ڷl3PREvv)(#v>oem!.v\+v;;_/Cɏv@ %TP_+>THŦ;)H9)x2p1Lq& YJ |,k" A7sjϺ7E`)sxUo:Ξ@#%ݡbjZ,x@u,dH4^qiD8fJofr6.[q;Nqmd HɽŊTAH~h#*[~_M`A=8V,3Ebfɼ@|xLz-7Ow5 om;˹p|yA1L&duXΕRnF9eq<lcW"-sMq^ cv~@ht'i"ޣa^ZUfhZZ %cvK S4G-;N\r7y'4Z~*L ٓon=`O܊á;o 8l[q;&f"n;V܎{ʨEsJwH/pD44#4%e7}p$ '*jhʼGK`VG+3_8V?xo_ȶzUksZF֎ l3_.wk/0)~#~M8^V蕳HOBI7IU~&kgǝb4*JVJ';y+}]wg:+}/@e^ޖڒMdY3Y%ÄsbhIܮողg4.oxR0zXu͆0m:R 8ׁg4uڤVI/D>+a\ _Z<)UtY-⃟+x~[tC1 9痍G*n㋸_ϠX bBxm8VG>XJ4xd105[@3dm7)+\-!C_46wcw`L*B#…/Kȣ˗d-|lڄˎC8?8m_1'%{lPւhn(f[_>xꪀO} Ց pVŐiЪض);ixy6=ȡVK-vF@|y݃$$2ȇCf ^'l$3bT2*6jGb%VzX&k2b| XgMn tM.{6]^&L/+bA;$%'A ڠXX@Jr<_wŦ$=#bc )ٜ*uTeK`{(A|\V}P}ߟ=}NkHǺ@xNs^/*NI7!\謰N l1$2v!vm6VՋϜ`I-n=Z`<#j|aUr𱓚ibK@jX.[q!W$r|8<=KG3+O_cLq.)N"wөd0ݫȑ†J_' *}%nBbV:'"J§ʟz>5<难'1h kY;UGq$'8q4nj}l1_h1ҹ OwΊHDrP%m'-P\)[,Pw}*;wmf}>,oL] @) 88g<[F70v8&~7U'%` ű8%ڤ/ő$[UonUuNJ?gbg( %9'#U#l0V㥆Tܠ}8/.sf?FB1:~nY. BI{F~u'WUS[OrF` ׽bKm~xJqaMvߵ &,F_nUqnQLz!FD UUl.ʺϓcaUe@u꼧5ɗt ؛raKVgPSTfM]\Sٙ|VyUE@r)v}-o'Fd]k FY`8܉Wzuw<[y8x K|Z':iZ#NQ-I8V%̓/KW;}gliJɷ@:߾xb*ql8xnC Ho~+H`ӤVV1(q.&s*J9Q(M3ʕ0hs5AM€v=/gདB>s.XLqd*KńF4O86q?<# 4 wXgbA h8lYA[:{ gBCp0-K^<򠬤P&:7*\J^*'٤7% q#Ke4>Pm|@-M ?ˮt9KǛTW| Ye7 Sьap6^6>8\Hz0}<_Y\=/ 5zfcK˚ ç_w@_$S~Z-v?% {`wLp ՋV!/4RWײe*C>ɸsWw.^ S\]^+nio5ș̏J) Vh8m^o h<5$Gdi))*+7M rE_Z^uK\h<Ɔk'"چ|f\ ->^)OAsSl-wmٻ|8G8Ŭe{+RFAC|ȂRS9ULbEf| ,Op&=jL;*r 7w8_=ʕU̠k#(HLshm;U'.$2˹x*\AFZ Mcc3K=Ndh"œJH ΍T v)I7]U] -) 3(>Ԓ`47N$X4 g%ԝDX)[4J#ڋDD\MyսW80v!L=yC | +ׂJ_d<b6sGs3 %D| "0H'H*-5% ~ٝp] 0G#œ{qL0z\Zf嗢ͭS&sDI|TUb~/vג*m8&Q}2avwPT$PU%g\I\g[xJi@t][l BvۙD`-w.' E{'H򿬖}vTitkUk8q/^:HQF w_X`V/L 4|ז%1Ȣ̪.}?Ο) TVԖCTZwQ25u~uVq&s"pmnjizedfAf (1DZ"6JP= ˏm&U ׼[p (r%sy<]iP&mO!9*Jڃn)pXq~;I!aRD7U?쀹?ˀ%c˅/Gp%C_%4Bm,0 ܐrW"{]h#E#x^]ycj H,-0=08%I|'T) V ,<Ii*lnc S=Wu9wh:Y8VhQT>Uxi{n1%'h,'3sM3[jeln.ڐDǖ SAB+ (J.ޏaDLc1`LC"V6g;-oefi_ȫ\6⟆Ό\uZ4bRdQw$_m]Ypuؖ.JG`bGm%ZVߖiPh[1foޕMefDFMv[䂫>ݨTC~p|3uX(Cix@*n K^E uwFj *Um Ȋ2yb{ ",WFgiY\K 9ZgD9={(ZL#j7OZY>mQw l5ؑv4y3;ȩ~Yۛ^_]0ha&:I &۟ADBTgMFY\Iherdڬf|ӻ@ L=<9߹%%л2z=Iwbgre%[`J=vo$n#OڣAV)xD{K BI ̶?4=ҿ񵄷'j(;>`;ݕha6o8{o&kˈ ԣ Ol^O-B4idm{0Œı+کlwyj 1rY҆oxwḰ}G3E[H8|:u5njU6'.UɜRI?k:Kws}ڐPp:aU1Ov -v6x>$wVhg|rmۛ 0OS'>l#'9AՍBbzXMtM;p-4j16DS4̲VzOCøYw10U)em4i&+=xqP(3}ʫ2p nH e`autljdo6b)[+?Tlucgd[aM已gut;tRUc bL?ρrB7N֎/mF;awѝ_=Xp[xS7z?ssx l0dX 忛4*Am!<6._q!^PT䇙wEʮB6tJeAKMU`%?FkHmRrbMFֳ삨Sooo 6խ\i!hn׽JłYZSV˱OwK,w# w.r]S%0p1R# ~A -\񪾸8Gcq&y+lxtΙ*Emw6NYwNЙ:N2ܸ5[:Wh1[t3~-Q=\ *sGpq?OgbluXf+mgΝnGK_'טZN*dȄ9U Lwyde|F"xNeõo'a4ގO 752/u^8SO g:g㓝yܒ\QHj&b?>|l4;]da76:4S{bڎ)4+a"^]&xf(zC"kq, x1d1C`Y[{MRdL&mKC )Ր]]|[1YAc&T" {_}X VT&8eXkv_crm4'dyAE E,&BW|u';|3L7U3Y5xTblV K)sDJ&Ue  NRC Lf,&Nd e`%%ppL+2hun7z4!7t*/d)i"2\*^fJ!k ƪPR&,$t>:J.cP{톉& !Y`ij JѴMrm%qTeC DAȁ|J}(] B42(\|5{0߀ċϥN~'A@b nE`~P<:z4k;n-6Z6"IA~ ] 2\'ƥMƱ .+bşۮSN87ٰߟerHDD:&"b\l'h ;%aYUO]ݦ񯺶_ KJ1p7i?b҃_p R@dIDO1>Dv?h-/;RP9(W lj0ʝmw֦ۅx9ϟRλ3E8 QI<#kHh܌:GJ166|J^av.r5@a3C gu[ҵ.`-,v\+'[ŕ|LE`"H;,t' ӧ]fa9Bnb_F`$7D/!Z$ȚmUovuQ%Ւw=-q;Hzyq'2)S)(>:rLƥ<_jhsDJ^. V$Mް\dC X!q%5mx[d|G8*4VFv]C:4sZmgt~O qmfOdIˈhJ'R)Vm+i1z(riHC2FdcKTS7@HY+@N0.[ļ[_̞֊k.ݦmGTWb+v>9r?N.-x[Q-or7ƪ:[-ajC>p?a/;6H:af|=t UxB9^(lеVWa |Tar3/~ll#;&4J6(*"BP6El'oT!0̊b,MY/s~qrAetYЙm&t3o! 
ʸWu- ~kKΓts,+0%f cM 􈮦^L ny-\W<~͍D森pn=}!wa5fzr6gD2ms4$a)ͱJ*˻ưکfM?qkd.n'ǛL^܊ZE;"0\f^yЧ,%Dl5ɱ^VAA l<Dn(`,&ۛUE 33P@F )t<Y0|o$k_>x['^[45ܮ&K ͺ] "!+8#ffkZt7>h,&GS<2m4p_m2 kRS*ix2oאϢ\m7S6+h tPL@CUj6s}}d/LiV5MbmQ3YSq `b}.z+TGPV(vSD~:{_ӡM-e %;Ξæ}xZwQ8F|JkdC9 "/'U P*5ڡoT•7${kB[,{%`"r=,qWﰉbL.My[ UȆay,n );j+<9}1sI X^{=Z`lQ?^͍[y;XJO| @y%hV!u`6eCxpGԬ ?Qe+aR2v; ȓ.~FTj 6h~Bzl77a6ִrRp,jb54uD,9<퉩^ѨS`۷c$W2nF%U<*F:V $c]3ZJ=@ٲ1f- cU*{ l0W!#pXф)sc2"܈=S *]NԐbP89]{ }&u&vrDckaf|(,Tv,n ( Tma-!9VY)g"PoS޺=5($œt,CFV!+`MМYgɫ bX^4ҍR&Bi2`x sYР@/ѓJ8sYh|2W}ۿR~;S|_U^ȏ/VS uQNg/hܬ0)Tq8Prb -;fEDEU( [NdgލIMpUۥȀTD{UE(Mw1%YbvvQI v\`;l1Z7DZ0J*ԕh 4DBr$6M&"Q".eL9Zdjt3 L\8\WĪ tI9? [t@|o<N)wŸvS t?AVb.d9VX:la:"Msh 50HIoP8[a?Ta/tsXٱl^Bm7("EZt+6 Dß 6˲ιx|$Hljv|j re!\H+JY7Ȉl^ eVmP!-[^l_(úqSmȰo4m{lG&v4l{ ea4[&!yq [={hsdZ?v/5Yz:d˩Sv؉JHlz0@kQK!%=<)%JPBPS Ʊ+w0_wP^(fi`M{.7P)z 5Q.{7|jGi5$~1&,èA( lɦXA5yC蕓Q>;M񴾡h24QaRw-ܸ8 Q%; u@YS)\W8[ !AniM}v J,2Ǐ7C2m6Jf Z1g;@_'k0c< J@ո&jU6 D0;^G7ѱB-wGaʿwEi+}jet .f"ߝɞ I̬w+S0?z Ͽ#. >Ԡll)ѩ fc+LCXQxD<$ VVy*e=j끃X(ydp۰+I !ۘh=X L7-kD1Tz& -9֨?wfF;E $_-wCҜ {*அ'MS(wkjZ5R\THyvL)cnAӄs~KC%;`L@zEaӒ-SPN{T&>@d N?mG̎g d 0!&4Xm5\ĸaa7J"8$6L+;5Ha+ò0"pqٌ d,ךm4%DcdSe)v!o6lĬZ<0#*^ί¨njsj,Fo"T5%cp ^ m ţũF>z^Ha;At |XoR%*&R_5u6:@Z&va6D_ e|om?#I%J^7rTwDz0)DDuW _T6G~Gɓ8e$d@TnL<dEw:[+ˋ?mK"J+[qmƝrJZ`q5J&rLqMzJj_a%L9rNޚ~Qr[~8ZCKtz.dn$[ˏ/̳zdm65 Ԣ |he;NDJQ؎ Bj+l)ok%{"G$b;<6JI|XD/c|~ G)wl1kz'ܒvW b)dʏ vrVj#&z5)= %U(014"S"6hƓ |?̿!gj0m:`>P!p3e8GH׳ NzX+I0TUQk J7llυ,vJl8Y Y򶋜2H$ ʤ|]^9 ~b;*:m>8 ߯n ]%`2m_1U=C+3) 3&ABA~O9i _ee>X{J~2IZo]#;,z\$Cco-1ajkRH4%Ǣbʨ ϱZH5bi2PInhşFe)։7zTXH-mj𹫳H7lNۣ'rpAb CHDS߼S 3*a[:.i騐+ϑc l,POdcLqΫL:[VI'+I~#Q)p=nϛ_2A1mCUDGi( خ-Y۸'SgWR -&2wQJ7 %>ǜZsGS{PU6ڮpD) {j هIΓX**paXqU F}k>ᘦJ+} ) Z$PD+@9 ty0MuRUH%LcE_>8*n ŸHq7hݘ4n#ܥnO M]}螒Ai՞-ș*yu:q`ў'aG6jB^$\ll9HCTR_k00>;WR|<˰? pa GQ|^8/%|kSajk0* drƂ"JbWTG9t|5NSQ%`-!1-j$E%'o!&Z+dSNQ?-.O;Vps'qԭS샹<f.<;2;|TԘ.>j_3 WjrZ;A{I{?mCvgZnq8`@ F^Im Z^1Vxko dziPKacɺxA$ŇZnshk_"n+؜7~!]A{a_$vf1sOgtͧ&ɒ̣/,s|lڏ|i} ~_hrsh/-]e$'w[ɟ;e]ku{0]%|2dkv$1Z봅f$/DCQG?sŬ7~>1nԽ_z3w.16C3mb^7e~Ɠ!cwQ< ~_*z`4[W^TF|=tv7*j߶M7m0Wdz)yƦ.5yI0WfsbH7e~9jy9__w/GKU|&4>fg#~}}֖"%Y>zڎ jssȜnsGGvdFa~^*yVgMv:;.uMn___y/HS쏮6Ƃ5Rggإ~/xddUOk8__섿B4b9F8{!dPt_uߋoy|hw1?n2_5K/m3ۈ]oE~/&;`ط}_;2-4jWdD4K7P3}hX[]l,vAmͱjdJq 6LNh&l ] >y:NcIB# JR,bd5ln|G1n  $ i*~[? bS( #>fh#'q:I;Hnd )DG"Jc[hNfPh+Tɑ&Oe։j q3DWzmg4]kFʺ[m=hoΨOeLmF1~}ʧ $͝ BRzb̰t &\M2׽Q1 ]KhanԴR:B_[]33@&A5ɉ\䱫vFyOTc?aRg_AgB&J[F_p+Uh$ꊺEh] w!i G 1m3:?]Jser_M$;.↳&HpfBD{ޟI `b^D)x,8`f )T IA]3ސ!=wMQraAR@4n6J`cvמ۾hmg<ɓVJ+HX+]=ukV_@3Z Be)6:r+'O=eb1auDl|͠+ dVRf6"hKARqcK<:\[J&G\:Aokzx R-dHpe~of鰣_9V/zrK*j&5Z3s9wǩR2,\iǦ;zN~P~cr?fHG}5#٦QuU r e)Wj*@ Ȝ>7I! P<(B9כ0=RpUru T2fnў8w!̰jRU}hP)]_A(@g]*%ĵ4ΥjΑHzYVk|(O7'l[5*ᘋw`R^z{nCD?q 3(i (#"1j#ة; ̩ds 1^ , ;c/Tf+uP$mEӾ^ajY%?֐>vUc)9kp5yK (AȺ#<;&ZLV_KAѭ{qB"t!%IBZb0YXċKXЍ}1\eQv( q%ULiK OM hٳ>5?~Lk l/bduL0LArb$1z '?wH0D1 Kt K h+QeY5 44(ާ5f\GwiGx7ý<\[/ P뢿]ni9+%&&ђ{|'?naz?#ֲft=g,gkB;.磣hy)/r_rJϖIAg/  #o`0!@k0LgY\-EDi/7ud?fg=k<8z|ۮ{`{ OA!<{aq^͟dۓq~lXgF${ rTJ #$|FO+WC*򩝹*y.8r\zgG@j|4Mѱ`C-e9[/9<*-&v\΅qs(v)Zg ڃ.IśxoyqE8(O-?<9GUoQ5M':˯pBʽSNFz .&O˩N*6 Ϣc8a'<`n(a*Gqbd:ʣX uJ)ʅvUX᧖y,Ty_@?tlĐA#CHG^F sύB#yϪ(UUBZf=,=`+T@ˮt݉x A6 Ns>~Em$Ap  U${G9e lE'[2Eٽ 3}U 8;x-u:BXCj6Pv[a%oŞ \{;k\ Q;lWTħ =*R+3EG0#1 vwtyAV" D!rR0Lb < ʀEל5܇=? 4҇KAZ_>b|1+]O`QdpYC;II]aJA+7[ @R6 8'NzW(Z6 C괽n֠ıRQFPKo[[8N9Hd2"h,Mn[H$3Orz,OKbH"|!ŋ% !6n*5esxR^7Ƽ>Oj?MRyKB&@qӉl]uk=4m:7DeeW$RiUzN1F[v#؞@,Ǚ)q0m qO7xCÒ\j)zj ^ r7UQDn֮",VC7lJf ].יּE{/*˰!7u`K bt^IJ[:w. ˣd@B8qY<)jk vҬAwp8*9zZ"0BljiI0gz*m^5ck51BeZhG]%di'2X\2\T)M` ,Q2dd'!8DԻ<67RFvfR`! 
YBt"!҄!3jȍ; D&"!?KFJa۞ yi*#qTt=i >JC `骣\rt(1/F6f񙛠͕~`fa]Xyx496LOoN^/@Ɍf,aWrc ;x,d * { flex: 1 1 0px; min-width: 200px; } div.sphinxsidebar .logo { font-size: 1.8em; color: #666; font-weight: 300; text-align: center; } div.sphinxsidebar .logo img { vertical-align: middle; } div.sphinxsidebar input { border: 1px solid #aaa; font-family: {{ theme_font }}, 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; font-size: 1em; } div.sphinxsidebar h3 { font-size: 1.5em; /* border-top: 1px solid {{ theme_border }}; */ margin-top: 0; margin-bottom: 0.5em; padding-top: 0.5em; } div.sphinxsidebar h4 { font-size: 1.2em; margin-bottom: 0; } div.sphinxsidebar h3, div.sphinxsidebar h4 { margin-left: -15px; padding-right: 14px; padding-left: 14px; color: #333; font-weight: 300; /*text-shadow: 0px 0px 0.5px rgba(0, 0, 0, 0.4);*/ } div.sphinxsidebarwrapper { padding: 0; display: flex; flex-wrap: wrap; gap: 15px; } div.sphinxsidebarwrapper > h3:first-child { margin-top: 0.5em; border: none; } div.sphinxsidebar h3 a { color: #333; } div.sphinxsidebar ul { color: #444; margin-top: 7px; padding: 0; line-height: 130%; } div.sphinxsidebar ul ul { margin-left: 20px; list-style-image: url(listitem.png); } div.footer { color: {{ theme_darkgray }}; text-shadow: 0 0 .2px rgba(255, 255, 255, 0.8); padding: 2em; text-align: center; clear: both; font-size: 0.8em; } /* -- body styles ----------------------------------------------------------- */ p { margin: 0.8em 0 0.5em 0; } a { color: {{ theme_darkgreen }}; text-decoration: none; } a:hover { color: {{ theme_darkyellow }}; } div.body a { text-decoration: underline; } h1 { margin: 10px 0 0 0; font-size: 2.4em; color: {{ theme_darkgray }}; font-weight: 300; } h2 { margin: 1.em 0 0.2em 0; font-size: 1.5em; font-weight: 300; padding: 0; color: {{ theme_darkgreen }}; } h3 { margin: 1em 0 -0.3em 0; font-size: 1.3em; font-weight: 300; } div.body h1 a, div.body h2 a, div.body h3 a, div.body h4 a, div.body h5 a, div.body h6 a { text-decoration: none; } div.body h1 a tt, div.body h2 a tt, div.body h3 a tt, div.body h4 a tt, div.body h5 a tt, div.body h6 a tt { color: {{ theme_darkgreen }} !important; font-size: inherit !important; } a.headerlink { color: {{ theme_green }} !important; font-size: 12px; margin-left: 6px; padding: 0 4px 0 4px; text-decoration: none !important; float: right; } a.headerlink:hover { background-color: #ccc; color: white!important; } cite, code, tt { font-family: 'Consolas', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 14px; letter-spacing: -0.02em; } tt { background-color: #f2f2f2; border: 1px solid #ddd; border-radius: 2px; color: #333; padding: 1px; } tt.descname, tt.descclassname, tt.xref { border: 0; } hr { border: 1px solid #abc; margin: 2em; } a tt { border: 0; color: {{ theme_darkgreen }}; } a tt:hover { color: {{ theme_darkyellow }}; } pre { font-family: 'Consolas', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 13px; letter-spacing: 0.015em; line-height: 120%; padding: 0.5em; border: 1px solid #ccc; border-radius: 2px; background-color: #f8f8f8; } pre a { color: inherit; text-decoration: underline; } td.linenos pre { padding: 0.5em 0; } div.quotebar { background-color: #f8f8f8; max-width: 250px; float: right; padding: 0px 7px; border: 1px solid #ccc; margin-left: 1em; } div.topic { background-color: #f8f8f8; } table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; } table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; } div.admonition, div.warning { font-size: 0.9em; 
  margin: 1em 0 1em 0; border: 1px solid #86989B; border-radius: 2px; background-color: #f7f7f7; padding: 0; padding-bottom: 0.5rem; }
div.admonition p, div.warning p { margin: 0.5em 1em 0.5em 1em; padding: 0; }
div.admonition pre, div.warning pre { margin: 0.4em 1em 0.4em 1em; }
div.admonition p.admonition-title, div.warning p.admonition-title { font-weight: bold; }
div.warning { border: 1px solid #940000; /* background-color: #FFCCCF;*/ }
div.warning p.admonition-title { }
div.admonition ul, div.admonition ol, div.warning ul, div.warning ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; }
.viewcode-back { font-family: {{ theme_font }}, 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; }
div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; }
pygments-2.11.2/doc/_themes/pygments14/relations.html0000644000175000017500000000132614165547207022432 0ustar carstencarsten{# basic/relations.html
   ~~~~~~~~~~~~~~~~~~~~

   Sphinx sidebar template: relation links.

   This file can be removed once https://github.com/sphinx-doc/sphinx/pull/9815
   has landed.

   :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS.
   :license: BSD, see LICENSE for details.
#}
{%- if prev %}

<h4>{{ _('Previous topic') }}</h4>
<p class="topless"><a href="{{ prev.link|e }}"
                      title="{{ _('previous chapter') }}">{{ prev.title }}</a></p>
{%- endif %}
{%- if next %}
<h4>{{ _('Next topic') }}</h4>
<p class="topless"><a href="{{ next.link|e }}"
                      title="{{ _('next chapter') }}">{{ next.title }}</a></p>

{%- endif %}
pygments-2.11.2/doc/_themes/pygments14/theme.conf0000644000175000017500000000046314165547207021516 0ustar carstencarsten[theme]
inherit = basic
stylesheet = pygments14.css
pygments_style = friendly

[options]
body_min_width = inherit
body_max_width = inherit
green = #66b55e
darkgreen = #36852e
darkgray = #666666
border = #66b55e
yellow = #f4cd00
darkyellow = #d4ad00
lightyellow = #fffbe3
background = #f9f9f9
font = PT Sans
pygments-2.11.2/doc/index.rst0000644000175000017500000000322014165547207015741 0ustar carstencarstenWelcome!
========

This is the home of Pygments. It is a generic syntax highlighter suitable for
use in code hosting, forums, wikis or other applications that need to prettify
source code. Highlights are:

* a wide range of |language_count| languages and other text formats is supported
* special attention is paid to details that increase highlighting quality
* support for new languages and formats is added easily; most languages use a
  simple regex-based lexing mechanism
* a number of output formats is available, among them HTML, RTF, LaTeX and ANSI
  sequences
* it is usable as a command-line tool and as a library

Read more in the :doc:`FAQ list <faq>` or the :doc:`documentation <docs/index>`,
or `download the latest release <https://pypi.python.org/pypi/Pygments>`_.

.. _contribute:

Contribute
----------

Like every open-source project, we are always looking for volunteers to help us
with programming. Python knowledge is required, but don't fear: Python is a very
clear and easy-to-learn language.

Development takes place on `GitHub <https://github.com/pygments/pygments>`_.

If you found a bug, just open a ticket in the GitHub tracker. Be sure to log in
to be notified when the issue is fixed -- development is not fast-paced as the
library is quite stable. You can also send an e-mail to the developers, see
below.

The authors
-----------

Pygments is maintained by **Georg Brandl**, e-mail address *georg*\ *@*\
*python.org* and **Matthäus Chajdas**.

Many lexers and fixes have been contributed by **Armin Ronacher**, the rest of
the `Pocoo <https://www.pocoo.org/>`_ team and **Tim Hatch**.

.. toctree::
   :maxdepth: 1
   :hidden:

   docs/index
pygments-2.11.2/doc/pygmentize.10000644000175000017500000000767014165547207016367 0ustar carstencarsten.TH PYGMENTIZE 1 "January 20, 2021"
.SH NAME
pygmentize \- highlights the input file
.SH SYNOPSIS
.B \fBpygmentize\fP
.RI [-l\ \fI<lexer>\fP\ |\ -g]\ [-F\ \fI<filter>\fP[:\fI<options>\fP]]\ [-f\ \fI<formatter>\fP]
.RI [-O\ \fI<options>\fP]\ [-P\ \fI<option=value>\fP]\ [-o\ \fI<outfile>\fP]\ [\fI<infile>\fP]
.br
.B \fBpygmentize\fP
.RI -S\ \fI
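The synopsis above pairs naturally with a concrete invocation; two typical
command lines (illustrative -- the file names are placeholders):

.. code-block:: console

    $ pygmentize -l python -f html -O full,style=friendly -o out.html input.py
    $ pygmentize -S friendly -f html > style.css

The first command renders ``input.py`` as a self-contained HTML page in the
``friendly`` style; the second dumps that style's CSS rules so they can be
served separately.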

The highlighting here is performed in-browser using a WebAssembly translation of the latest Pygments master branch, courtesy of Pyodide.

Your content is neither sent over the web nor stored anywhere.
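Conceptually, the demo runs something like the following Python inside Pyodide.
This is an illustrative sketch, not the template's actual wiring, and the input
string stands in for whatever the visitor typed:

.. code-block:: python

    from pygments import highlight
    from pygments.formatters import HtmlFormatter
    from pygments.lexers import guess_lexer

    code = 'print("hello")'    # stand-in for the visitor's input
    lexer = guess_lexer(code)  # or pick one explicitly with get_lexer_by_name()
    print(highlight(code, lexer, HtmlFormatter()))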

{% endblock %} pygments-2.11.2/doc/_templates/demo_sidebar.html0000644000175000017500000000004614165547207021543 0ustar carstencarsten

Back to top

pygments-2.11.2/doc/_templates/indexsidebar.html0000644000175000017500000000134014165547207021565 0ustar carstencarsten

Download

Current version: {{ version }}
Changelog

Get Pygments from the Python Package Index, or install it with:

pip install Pygments

Questions? Suggestions?

Clone at GitHub.

You can also open an issue at the tracker.
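Once the package is installed as described above, the same functionality is
available from Python; a minimal sketch of the library's core entry point (the
highlighted snippet is an arbitrary example):

.. code-block:: python

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import HtmlFormatter

    # Returns an HTML <div class="highlight"> block for the given source.
    html = highlight('print("Hello, world!")', PythonLexer(), HtmlFormatter())
    print(html)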

pygments-2.11.2/doc/_templates/docssidebar.html0000644000175000017500000000020314165547207021403 0ustar carstencarsten{% if pagename != 'docs/index' %}
« Back to docs index
{% endif %}
pygments-2.11.2/doc/_templates/styles.html0000644000175000017500000000243114165547207020451 0ustar carstencarsten{% extends "layout.html" %}
{% block htmltitle %}Styles{{ titlesuffix }}{% endblock %}
{% block body %}
{{ body }}

Styles

Pygments comes with the following builtin styles. For more information about styles refer to the documentation.

Styles with a lower contrast

The following styles do not meet the WCAG 2.1 AA contrast minimum, so they might be difficult to read for people with suboptimal vision. If you want your highlighted code to be easily readable for other people, you should use one of the earlier styles instead.
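The gallery rendered into this page can also be enumerated programmatically; a
small sketch using the public style registry (the printed values are examples
and vary by release):

.. code-block:: python

    from pygments.styles import get_all_styles, get_style_by_name

    # Short names of every style shipped with the installed Pygments.
    print(sorted(get_all_styles()))

    # Styles are classes; inspect one, e.g. its page background color.
    friendly = get_style_by_name('friendly')
    print(friendly.background_color)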

{% endblock %} pygments-2.11.2/doc/_templates/index_with_try.html0000644000175000017500000000000014165547207022154 0ustar carstencarstenpygments-2.11.2/doc/styles.rst0000644000175000017500000000033114165547207016155 0ustar carstencarsten:orphan: This file is overridden by _templates/styles.html and just exists to allow the Styles gallery to be reliably linked from the documentation (since its location varies between `make html` and `make dirhtml`). pygments-2.11.2/doc/Makefile0000644000175000017500000001272214165547207015547 0ustar carstencarsten# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = PYTHONPATH=.. sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." 
qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Pygments.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Pygments.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Pygments" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Pygments" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." pygments-2.11.2/doc/languages.rst0000644000175000017500000003244514165547207016613 0ustar carstencarsten:orphan: Supported languages =================== Pygments supports an ever-growing range of languages. Watch this space... Programming languages --------------------- * `ActionScript `_ * `Ada `_ * `Agda `_ (incl. literate) * `Alloy `_ * `AMPL `_ * `ANTLR `_ * `APL `_ * `AppleScript `_ * `Assembly `_ (various) * `Asymptote `_ * `Augeas `_ * `AutoIt `_ * `Awk `_ * `BARE `_ * `BBC Basic `_ * `Befunge `_ * `BlitzBasic `_ * `Boa `_ * `Boo `_ * `Boogie `_ * `BrainFuck `_ * `C `_, `C++ `_ (incl. dialects like Arduino) * `C# `_ * `Chapel `_ * `Charm++ CI `_ * `Cirru `_ * `Clay `_ * `Clean `_ * `Clojure `_ * `CoffeeScript `_ * `ColdFusion `_ * `Common Lisp `_ * `Component Pascal `_ * `Coq `_ * `Croc `_ (MiniD) * `Cryptol `_ (incl. 
Literate Cryptol) * `Crystal `_ * `Cypher `_ * `Cython `_ * `D `_ * `Dart `_ * DCPU-16 * `Delphi `_ * `DOT / Graphviz `_ * `Devicetree `_ * `Dylan `_ (incl. console) * `Eiffel `_ * `Elm `_ * `Elpi `_ * `Emacs Lisp `_ * Email * `Erlang `_ (incl. shell sessions) * `Ezhil `_ * `Execline `_ * `Factor `_ * `Fancy `_ * `Fantom `_ * `Fennel `_ * `FloScript `_ * `Fortran `_ * `FreeFEM++ `_ * `F# `_ * `F* `_ * `GAP `_ * `GDScript `_ * `Gherkin `_ (Cucumber) * `GLSL `_ shaders * `GnuCOBOL `_ (OpenCOBOL) * `Golo `_ * `Gosu `_ * `Groovy `_ * `Haskell `_ (incl. Literate Haskell) * `Haxe `_ * `HLSL `_ shaders * `HSpec `_ * `Hy `_ * `IDL `_ * `Idris `_ (incl. Literate Idris) * `Igor Pro `_ * `Io `_ * `Jags `_ * `Java `_ * `JavaScript `_ * `Jasmin `_ * `Jcl `_ * `Julia `_ * `JSLT `_ * `Kotlin `_ * `Lasso `_ (incl. templating) * `Limbo `_ * `LiveScript `_ * `LLVM MIR `_ * `Logtalk `_ * `Logos `_ * `Lua `_ * `Mathematica `_ * `Matlab `_ * `The Meson Build System `_ * `MiniScript `_ * `Modelica `_ * `Modula-2 `_ * `Monkey `_ * `Monte `_ * `MoonScript `_ * `Mosel `_ * `MuPad `_ * `NASM `_ * `Nemerle `_ * `NesC `_ * `NewLISP `_ * `Nim `_ * `Nit `_ * `Notmuch `_ * `NuSMV `_ * `Objective-C `_ * `Objective-J `_ * `Octave `_ * `OCaml `_ * `Opa `_ * `ParaSail `_ * `Pawn `_ * `PHP `_ * `Perl 5 `_ * `Pike `_ * `Pointless `_ * `Pony `_ * `PovRay `_ * `PostScript `_ * `PowerShell `_ * `Praat `_ * `Prolog `_ * `Python `_ 2.x and 3.x (incl. console sessions and tracebacks) * `QBasic `_ * `Racket `_ * `Raku `_ a.k.a. Perl 6 * `ReasonML `_ * `REBOL `_ * `Red `_ * `Redcode `_ * `Rexx `_ * `Ride `_ * `Ruby `_ (incl. irb sessions) * `Rust `_ * S, S-Plus, `R `_ * `Scala `_ * `Savi `_ * `Scdoc `_ * `Scheme `_ * `Scilab `_ * `SGF `_ * Shell scripts (`Bash `_, `Tcsh `_, `Fish `_) * `Shen `_ * `Silver `_ * `Slash `_ * `Slurm `_ * `Smalltalk `_ * `SNOBOL `_ * `Snowball `_ * `Solidity `_ * `SourcePawn `_ * `Spice `_ * `Stan `_ * `Standard ML `_ * `Stata `_ * `Swift `_ * `Swig `_ * `SuperCollider `_ * `Tcl `_ * `Tera Term language `_ * `TypeScript `_ * `TypoScript `_ * `USD `_ * `Unicon `_ * `Urbiscript `_ * `Vala `_ * `VBScript `_ * Verilog, `SystemVerilog `_ * `VHDL `_ * `Visual Basic.NET `_ * `Visual FoxPro `_ * `Whiley `_ * `Xtend `_ * `XQuery `_ * `Zeek `_ * `Zephir `_ * `Zig `_ Template languages ------------------ * `Angular templates `_ * `Cheetah templates `_ * `ColdFusion `_ * `Django `_ / `Jinja `_ templates * `ERB `_ (Ruby templating) * Evoque * `Genshi `_ (the Trac template language) * `Handlebars `_ * `JSP `_ (Java Server Pages) * `Liquid `_ * `Myghty `_ (the HTML::Mason based framework) * `Mako `_ (the Myghty successor) * `Slim `_ * `Smarty `_ templates (PHP templating) * `Tea `_ * `Twig `_ Other markup ------------ * Apache config files * Apache Pig * BBCode * Bdd * CapDL * `Cap'n Proto `_ * `CDDL `_ * CMake * `Csound `_ scores * CSS * Debian control files * Diff files * Dockerfiles * DTD * EBNF * E-mail headers * Extempore * Flatline * Gettext catalogs * Gnuplot script * Groff markup * `GSQL `_ * Hexdumps * HTML * HTTP sessions * IDL * Inform * INI-style config files * IRC logs (irssi style) * Isabelle * JSGF notation * JSON, JSON-LD * Lean theorem prover * Lighttpd config files * `LilyPond `_ * Linux kernel log (dmesg) * LLVM assembly * LSL scripts * Makefiles * MoinMoin/Trac Wiki markup * MQL * MySQL * NCAR command language * `NestedText `_ * Nginx config files * `Nix language `_ * NSIS scripts * Notmuch * `OMG IDL `_ * `PEG `_ * POV-Ray scenes * `Procfile `_ * `PromQL `_ * `Puppet `_ * QML * Ragel * Redcode * ReST 
* `Rita `_ * `Roboconf `_ * Robot Framework * RPM spec files * Rql * RSL * Scdoc * Sieve * Singularity * `Smithy `_ * `Sophia `_ * SPARQL * SQL, also MySQL, SQLite * Squid configuration * TADS 3 * Terraform * TeX * `Thrift `_ * `TNT `_ * `TOML `_ * Treetop grammars * USD (Universal Scene Description) * Varnish configs * VGL * Vim Script * WDiff * Web IDL * Windows batch files * XML * XSLT * YAML * YANG * Windows Registry files Interactive terminal/shell sessions ----------------------------------- To highlight an interactive terminal or shell session, prefix your code snippet with a specially formatted prompt. Supported shells with examples are shown below. In each example, prompt parts in brackets ``[any]`` represent optional parts of the prompt, and prompt parts without brackets or in parenthesis ``(any)`` represent required parts of the prompt. * **Bash Session** (console, shell-session): .. code-block:: console [any@any]$ ls -lh [any@any]# ls -lh [any@any]% ls -lh $ ls -lh # ls -lh % ls -lh > ls -lh * **MSDOS Session** (doscon): .. code-block:: doscon [any]> dir > dir More? dir * **Tcsh Session** (tcshcon): .. code-block:: tcshcon (any)> ls -lh ? ls -lh * **PowerShell Session** (ps1con): .. code-block:: ps1con PS[any]> Get-ChildItem PS> Get-ChildItem >> Get-ChildItem ... that's all? --------------- Well, why not write your own? Contributing to Pygments is easy and fun. Take a look at the :doc:`docs on lexer development `. Pull requests are welcome on `GitHub `_. Note: the languages listed here are supported in the development version. The latest release may lack a few of them. pygments-2.11.2/setup.cfg0000644000175000017500000000321114165547207015154 0ustar carstencarsten[metadata] name = Pygments version = attr: pygments.__version__ url = https://pygments.org/ license = BSD License author = Georg Brandl author_email = georg@python.org description = Pygments is a syntax highlighting package written in Python. 
long_description = file:description.rst platforms = any keywords = syntax highlighting classifiers = Development Status :: 6 - Mature Intended Audience :: Developers Intended Audience :: End Users/Desktop Intended Audience :: System Administrators License :: OSI Approved :: BSD License Operating System :: OS Independent Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Programming Language :: Python :: 3.10 Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: PyPy Topic :: Text Processing :: Filters Topic :: Utilities project_urls = Documentation = https://pygments.org/docs/ Source = https://github.com/pygments/pygments Bug Tracker = https://github.com/pygments/pygments/issues Changelog = https://github.com/pygments/pygments/blob/master/CHANGES [options] packages = find: zip_safe = false include_package_data = true python_requires = >=3.5 [options.packages.find] include = pygments pygments.* [options.entry_points] console_scripts = pygmentize = pygments.cmdline:main [aliases] release = egg_info -Db '' upload = upload --sign --identity=36580288 pygments-2.11.2/.gitignore0000644000175000017500000000026714165547207015333 0ustar carstencarsten*.egg *.pyc *.pyo .*.sw[op] /.pytest_cache/ /.idea/ /.project /.tags /.tox/ /.cache/ /Pygments.egg-info/* /TAGS /build/* /dist/* /doc/_build /.coverage /htmlcov /.vscode venv/ .venv/ pygments-2.11.2/.gitattributes0000644000175000017500000000006114165547207016226 0ustar carstencarstentests/examplefiles/*/*.output linguist-generated pygments-2.11.2/LICENSE0000644000175000017500000000246314165547207014350 0ustar carstencarstenCopyright (c) 2006-2021 by the respective authors (see AUTHORS file). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. pygments-2.11.2/.coveragerc0000644000175000017500000000003414165547207015454 0ustar carstencarsten[run] include = pygments/* pygments-2.11.2/pygments/0000755000175000017500000000000014165547207015204 5ustar carstencarstenpygments-2.11.2/pygments/lexer.py0000644000175000017500000007616414165547207016713 0ustar carstencarsten""" pygments.lexer ~~~~~~~~~~~~~~ Base lexer classes. 
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import sys import time from pygments.filter import apply_filters, Filter from pygments.filters import get_filter_by_name from pygments.token import Error, Text, Other, _TokenType from pygments.util import get_bool_opt, get_int_opt, get_list_opt, \ make_analysator, Future, guess_decode from pygments.regexopt import regex_opt __all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer', 'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this', 'default', 'words'] _encoding_map = [(b'\xef\xbb\xbf', 'utf-8'), (b'\xff\xfe\0\0', 'utf-32'), (b'\0\0\xfe\xff', 'utf-32be'), (b'\xff\xfe', 'utf-16'), (b'\xfe\xff', 'utf-16be')] _default_analyse = staticmethod(lambda x: 0.0) class LexerMeta(type): """ This metaclass automagically converts ``analyse_text`` methods into static methods which always return float values. """ def __new__(mcs, name, bases, d): if 'analyse_text' in d: d['analyse_text'] = make_analysator(d['analyse_text']) return type.__new__(mcs, name, bases, d) class Lexer(metaclass=LexerMeta): """ Lexer for a specific language. Basic options recognized: ``stripnl`` Strip leading and trailing newlines from the input (default: True). ``stripall`` Strip all leading and trailing whitespace from the input (default: False). ``ensurenl`` Make sure that the input ends with a newline (default: True). This is required for some lexers that consume input linewise. .. versionadded:: 1.3 ``tabsize`` If given and greater than 0, expand tabs in the input (default: 0). ``encoding`` If given, must be an encoding name. This encoding will be used to convert the input string to Unicode, if it is not already a Unicode string (default: ``'guess'``, which uses a simple UTF-8 / Locale / Latin1 detection. Can also be ``'chardet'`` to use the chardet library, if it is installed. ``inencoding`` Overrides the ``encoding`` if given. """ #: Name of the lexer name = None #: Shortcuts for the lexer aliases = [] #: File name globs filenames = [] #: Secondary file name globs alias_filenames = [] #: MIME types mimetypes = [] #: Priority, should multiple lexers match and no content is provided priority = 0 def __init__(self, **options): self.options = options self.stripnl = get_bool_opt(options, 'stripnl', True) self.stripall = get_bool_opt(options, 'stripall', False) self.ensurenl = get_bool_opt(options, 'ensurenl', True) self.tabsize = get_int_opt(options, 'tabsize', 0) self.encoding = options.get('encoding', 'guess') self.encoding = options.get('inencoding') or self.encoding self.filters = [] for filter_ in get_list_opt(options, 'filters', ()): self.add_filter(filter_) def __repr__(self): if self.options: return '' % (self.__class__.__name__, self.options) else: return '' % self.__class__.__name__ def add_filter(self, filter_, **options): """ Add a new stream filter to this lexer. """ if not isinstance(filter_, Filter): filter_ = get_filter_by_name(filter_, **options) self.filters.append(filter_) def analyse_text(text): """ Has to return a float between ``0`` and ``1`` that indicates if a lexer wants to highlight this text. Used by ``guess_lexer``. If this method returns ``0`` it won't highlight it in any case, if it returns ``1`` highlighting with this lexer is guaranteed. The `LexerMeta` metaclass automatically wraps this function so that it works like a static method (no ``self`` or ``cls`` parameter) and the return value is automatically converted to `float`. 
If the return value is an object that is boolean `False` it's the same as if the return values was ``0.0``. """ def get_tokens(self, text, unfiltered=False): """ Return an iterable of (tokentype, value) pairs generated from `text`. If `unfiltered` is set to `True`, the filtering mechanism is bypassed even if filters are defined. Also preprocess the text, i.e. expand tabs and strip it if wanted and applies registered filters. """ if not isinstance(text, str): if self.encoding == 'guess': text, _ = guess_decode(text) elif self.encoding == 'chardet': try: import chardet except ImportError as e: raise ImportError('To enable chardet encoding guessing, ' 'please install the chardet library ' 'from http://chardet.feedparser.org/') from e # check for BOM first decoded = None for bom, encoding in _encoding_map: if text.startswith(bom): decoded = text[len(bom):].decode(encoding, 'replace') break # no BOM found, so use chardet if decoded is None: enc = chardet.detect(text[:1024]) # Guess using first 1KB decoded = text.decode(enc.get('encoding') or 'utf-8', 'replace') text = decoded else: text = text.decode(self.encoding) if text.startswith('\ufeff'): text = text[len('\ufeff'):] else: if text.startswith('\ufeff'): text = text[len('\ufeff'):] # text now *is* a unicode string text = text.replace('\r\n', '\n') text = text.replace('\r', '\n') if self.stripall: text = text.strip() elif self.stripnl: text = text.strip('\n') if self.tabsize > 0: text = text.expandtabs(self.tabsize) if self.ensurenl and not text.endswith('\n'): text += '\n' def streamer(): for _, t, v in self.get_tokens_unprocessed(text): yield t, v stream = streamer() if not unfiltered: stream = apply_filters(stream, self.filters, self) return stream def get_tokens_unprocessed(self, text): """ Return an iterable of (index, tokentype, value) pairs where "index" is the starting position of the token within the input text. In subclasses, implement this method as a generator to maximize effectiveness. """ raise NotImplementedError class DelegatingLexer(Lexer): """ This lexer takes two lexer as arguments. A root lexer and a language lexer. First everything is scanned using the language lexer, afterwards all ``Other`` tokens are lexed using the root lexer. The lexers from the ``template`` lexer package use this base lexer. """ def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options): self.root_lexer = _root_lexer(**options) self.language_lexer = _language_lexer(**options) self.needle = _needle Lexer.__init__(self, **options) def get_tokens_unprocessed(self, text): buffered = '' insertions = [] lng_buffer = [] for i, t, v in self.language_lexer.get_tokens_unprocessed(text): if t is self.needle: if lng_buffer: insertions.append((len(buffered), lng_buffer)) lng_buffer = [] buffered += v else: lng_buffer.append((i, t, v)) if lng_buffer: insertions.append((len(buffered), lng_buffer)) return do_insertions(insertions, self.root_lexer.get_tokens_unprocessed(buffered)) # ------------------------------------------------------------------------------ # RegexLexer and ExtendedRegexLexer # class include(str): # pylint: disable=invalid-name """ Indicates that a state should include rules from another state. """ pass class _inherit: """ Indicates the a state should inherit from its superclass. """ def __repr__(self): return 'inherit' inherit = _inherit() # pylint: disable=invalid-name class combined(tuple): # pylint: disable=invalid-name """ Indicates a state combined from multiple states. 
""" def __new__(cls, *args): return tuple.__new__(cls, args) def __init__(self, *args): # tuple.__init__ doesn't do anything pass class _PseudoMatch: """ A pseudo match object constructed from a string. """ def __init__(self, start, text): self._text = text self._start = start def start(self, arg=None): return self._start def end(self, arg=None): return self._start + len(self._text) def group(self, arg=None): if arg: raise IndexError('No such group') return self._text def groups(self): return (self._text,) def groupdict(self): return {} def bygroups(*args): """ Callback that yields multiple actions for each group in the match. """ def callback(lexer, match, ctx=None): for i, action in enumerate(args): if action is None: continue elif type(action) is _TokenType: data = match.group(i + 1) if data: yield match.start(i + 1), action, data else: data = match.group(i + 1) if data is not None: if ctx: ctx.pos = match.start(i + 1) for item in action(lexer, _PseudoMatch(match.start(i + 1), data), ctx): if item: yield item if ctx: ctx.pos = match.end() return callback class _This: """ Special singleton used for indicating the caller class. Used by ``using``. """ this = _This() def using(_other, **kwargs): """ Callback that processes the match with a different lexer. The keyword arguments are forwarded to the lexer, except `state` which is handled separately. `state` specifies the state that the new lexer will start in, and can be an enumerable such as ('root', 'inline', 'string') or a simple string which is assumed to be on top of the root state. Note: For that to work, `_other` must not be an `ExtendedRegexLexer`. """ gt_kwargs = {} if 'state' in kwargs: s = kwargs.pop('state') if isinstance(s, (list, tuple)): gt_kwargs['stack'] = s else: gt_kwargs['stack'] = ('root', s) if _other is this: def callback(lexer, match, ctx=None): # if keyword arguments are given the callback # function has to create a new lexer instance if kwargs: # XXX: cache that somehow kwargs.update(lexer.options) lx = lexer.__class__(**kwargs) else: lx = lexer s = match.start() for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): yield i + s, t, v if ctx: ctx.pos = match.end() else: def callback(lexer, match, ctx=None): # XXX: cache that somehow kwargs.update(lexer.options) lx = _other(**kwargs) s = match.start() for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): yield i + s, t, v if ctx: ctx.pos = match.end() return callback class default: """ Indicates a state or state action (e.g. #pop) to apply. For example default('#pop') is equivalent to ('', Token, '#pop') Note that state tuples may be used as well. .. versionadded:: 2.0 """ def __init__(self, state): self.state = state class words(Future): """ Indicates a list of literal words that is transformed into an optimized regex that matches any of the words. .. versionadded:: 2.0 """ def __init__(self, words, prefix='', suffix=''): self.words = words self.prefix = prefix self.suffix = suffix def get(self): return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix) class RegexLexerMeta(LexerMeta): """ Metaclass for RegexLexer, creates the self._tokens attribute from self.tokens on the first instantiation. 
""" def _process_regex(cls, regex, rflags, state): """Preprocess the regular expression component of a token definition.""" if isinstance(regex, Future): regex = regex.get() return re.compile(regex, rflags).match def _process_token(cls, token): """Preprocess the token component of a token definition.""" assert type(token) is _TokenType or callable(token), \ 'token type must be simple type or callable, not %r' % (token,) return token def _process_new_state(cls, new_state, unprocessed, processed): """Preprocess the state transition action of a token definition.""" if isinstance(new_state, str): # an existing state if new_state == '#pop': return -1 elif new_state in unprocessed: return (new_state,) elif new_state == '#push': return new_state elif new_state[:5] == '#pop:': return -int(new_state[5:]) else: assert False, 'unknown new state %r' % new_state elif isinstance(new_state, combined): # combine a new state from existing ones tmp_state = '_tmp_%d' % cls._tmpname cls._tmpname += 1 itokens = [] for istate in new_state: assert istate != new_state, 'circular state ref %r' % istate itokens.extend(cls._process_state(unprocessed, processed, istate)) processed[tmp_state] = itokens return (tmp_state,) elif isinstance(new_state, tuple): # push more than one state for istate in new_state: assert (istate in unprocessed or istate in ('#pop', '#push')), \ 'unknown new state ' + istate return new_state else: assert False, 'unknown new state def %r' % new_state def _process_state(cls, unprocessed, processed, state): """Preprocess a single state definition.""" assert type(state) is str, "wrong state name %r" % state assert state[0] != '#', "invalid state name %r" % state if state in processed: return processed[state] tokens = processed[state] = [] rflags = cls.flags for tdef in unprocessed[state]: if isinstance(tdef, include): # it's a state reference assert tdef != state, "circular state reference %r" % state tokens.extend(cls._process_state(unprocessed, processed, str(tdef))) continue if isinstance(tdef, _inherit): # should be processed already, but may not in the case of: # 1. the state has no counterpart in any parent # 2. the state includes more than one 'inherit' continue if isinstance(tdef, default): new_state = cls._process_new_state(tdef.state, unprocessed, processed) tokens.append((re.compile('').match, None, new_state)) continue assert type(tdef) is tuple, "wrong rule def %r" % tdef try: rex = cls._process_regex(tdef[0], rflags, state) except Exception as err: raise ValueError("uncompilable regex %r in state %r of %r: %s" % (tdef[0], state, cls, err)) from err token = cls._process_token(tdef[1]) if len(tdef) == 2: new_state = None else: new_state = cls._process_new_state(tdef[2], unprocessed, processed) tokens.append((rex, token, new_state)) return tokens def process_tokendef(cls, name, tokendefs=None): """Preprocess a dictionary of token definitions.""" processed = cls._all_tokens[name] = {} tokendefs = tokendefs or cls.tokens[name] for state in list(tokendefs): cls._process_state(tokendefs, processed, state) return processed def get_tokendefs(cls): """ Merge tokens from superclasses in MRO order, returning a single tokendef dictionary. Any state that is not defined by a subclass will be inherited automatically. States that *are* defined by subclasses will, by default, override that state in the superclass. 
If a subclass wishes to inherit definitions from a superclass, it can use the special value "inherit", which will cause the superclass' state definition to be included at that point in the state. """ tokens = {} inheritable = {} for c in cls.__mro__: toks = c.__dict__.get('tokens', {}) for state, items in toks.items(): curitems = tokens.get(state) if curitems is None: # N.b. because this is assigned by reference, sufficiently # deep hierarchies are processed incrementally (e.g. for # A(B), B(C), C(RegexLexer), B will be premodified so X(B) # will not see any inherits in B). tokens[state] = items try: inherit_ndx = items.index(inherit) except ValueError: continue inheritable[state] = inherit_ndx continue inherit_ndx = inheritable.pop(state, None) if inherit_ndx is None: continue # Replace the "inherit" value with the items curitems[inherit_ndx:inherit_ndx+1] = items try: # N.b. this is the index in items (that is, the superclass # copy), so offset required when storing below. new_inh_ndx = items.index(inherit) except ValueError: pass else: inheritable[state] = inherit_ndx + new_inh_ndx return tokens def __call__(cls, *args, **kwds): """Instantiate cls after preprocessing its token definitions.""" if '_tokens' not in cls.__dict__: cls._all_tokens = {} cls._tmpname = 0 if hasattr(cls, 'token_variants') and cls.token_variants: # don't process yet pass else: cls._tokens = cls.process_tokendef('', cls.get_tokendefs()) return type.__call__(cls, *args, **kwds) class RegexLexer(Lexer, metaclass=RegexLexerMeta): """ Base for simple stateful regular expression-based lexers. Simplifies the lexing process so that you need only provide a list of states and regular expressions. """ #: Flags for compiling the regular expressions. #: Defaults to MULTILINE. flags = re.MULTILINE #: At all times there is a stack of states. Initially, the stack contains #: a single state 'root'. The top of the stack is called "the current state". #: #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}`` #: #: ``new_state`` can be omitted to signify no state transition. #: If ``new_state`` is a string, it is pushed on the stack. This ensures #: the new current state is ``new_state``. #: If ``new_state`` is a tuple of strings, all of those strings are pushed #: on the stack and the current state will be the last element of the list. #: ``new_state`` can also be ``combined('state1', 'state2', ...)`` #: to signify a new, anonymous state combined from the rules of two #: or more existing ones. #: Furthermore, it can be '#pop' to signify going back one step in #: the state stack, or '#push' to push the current state on the stack #: again. Note that if you push while in a combined state, the combined #: state itself is pushed, and not only the state in which the rule is #: defined. #: #: The tuple can also be replaced with ``include('state')``, in which #: case the rules from the state named by the string are included in the #: current one. tokens = {} def get_tokens_unprocessed(self, text, stack=('root',)): """ Split ``text`` into (tokentype, text) pairs.
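For example, a sketch of driving the lexer from a non-default state (the lexer class and its ``'string'`` state are assumed, not real)::

    lexer = SomeRegexLexer()
    for index, token, value in lexer.get_tokens_unprocessed(text, stack=('root', 'string')):
        print(index, token, value)
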
``stack`` is the initial stack (default: ``['root']``) """ pos = 0 tokendefs = self._tokens statestack = list(stack) statetokens = tokendefs[statestack[-1]] while 1: for rexmatch, action, new_state in statetokens: m = rexmatch(text, pos) if m: if action is not None: if type(action) is _TokenType: yield pos, action, m.group() else: yield from action(self, m) pos = m.end() if new_state is not None: # state transition if isinstance(new_state, tuple): for state in new_state: if state == '#pop': if len(statestack) > 1: statestack.pop() elif state == '#push': statestack.append(statestack[-1]) else: statestack.append(state) elif isinstance(new_state, int): # pop, but keep at least one state on the stack # (random code leading to unexpected pops should # not allow exceptions) if abs(new_state) >= len(statestack): del statestack[1:] else: del statestack[new_state:] elif new_state == '#push': statestack.append(statestack[-1]) else: assert False, "wrong state def: %r" % new_state statetokens = tokendefs[statestack[-1]] break else: # We are here only if all state tokens have been considered # and there was not a match on any of them. try: if text[pos] == '\n': # at EOL, reset state to "root" statestack = ['root'] statetokens = tokendefs['root'] yield pos, Text, '\n' pos += 1 continue yield pos, Error, text[pos] pos += 1 except IndexError: break class LexerContext: """ A helper object that holds lexer position data. """ def __init__(self, text, pos, stack=None, end=None): self.text = text self.pos = pos self.end = end or len(text) # end=0 not supported ;-) self.stack = stack or ['root'] def __repr__(self): return 'LexerContext(%r, %r, %r)' % ( self.text, self.pos, self.stack) class ExtendedRegexLexer(RegexLexer): """ A RegexLexer that uses a context object to store its state. """ def get_tokens_unprocessed(self, text=None, context=None): """ Split ``text`` into (tokentype, text) pairs. If ``context`` is given, use this lexer context instead. """ tokendefs = self._tokens if not context: ctx = LexerContext(text, 0) statetokens = tokendefs['root'] else: ctx = context statetokens = tokendefs[ctx.stack[-1]] text = ctx.text while 1: for rexmatch, action, new_state in statetokens: m = rexmatch(text, ctx.pos, ctx.end) if m: if action is not None: if type(action) is _TokenType: yield ctx.pos, action, m.group() ctx.pos = m.end() else: yield from action(self, m, ctx) if not new_state: # altered the state stack? statetokens = tokendefs[ctx.stack[-1]] # CAUTION: callback must set ctx.pos! if new_state is not None: # state transition if isinstance(new_state, tuple): for state in new_state: if state == '#pop': if len(ctx.stack) > 1: ctx.stack.pop() elif state == '#push': ctx.stack.append(ctx.stack[-1]) else: ctx.stack.append(state) elif isinstance(new_state, int): # see RegexLexer for why this check is made if abs(new_state) >= len(ctx.stack): del ctx.stack[1:] else: del ctx.stack[new_state:] elif new_state == '#push': ctx.stack.append(ctx.stack[-1]) else: assert False, "wrong state def: %r" % new_state statetokens = tokendefs[ctx.stack[-1]] break else: try: if ctx.pos >= ctx.end: break if text[ctx.pos] == '\n': # at EOL, reset state to "root" ctx.stack = ['root'] statetokens = tokendefs['root'] yield ctx.pos, Text, '\n' ctx.pos += 1 continue yield ctx.pos, Error, text[ctx.pos] ctx.pos += 1 except IndexError: break def do_insertions(insertions, tokens): """ Helper for lexers which must combine the results of several sublexers. ``insertions`` is a list of ``(index, itokens)`` pairs.
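A hypothetical sketch that splices a prompt token in front of lexed code (``pylexer`` and ``code`` are assumed to exist)::

    insertions = [(0, [(0, Generic.Prompt, '>>> ')])]
    stream = do_insertions(insertions, pylexer.get_tokens_unprocessed(code))
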
Each ``itokens`` iterable should be inserted at position ``index`` into the token stream given by the ``tokens`` argument. The result is a combined token stream. TODO: clean up the code here. """ insertions = iter(insertions) try: index, itokens = next(insertions) except StopIteration: # no insertions yield from tokens return realpos = None insleft = True # iterate over the token stream where we want to insert # the tokens from the insertion list. for i, t, v in tokens: # first iteration. store the position of first item if realpos is None: realpos = i oldi = 0 while insleft and i + len(v) >= index: tmpval = v[oldi:index - i] if tmpval: yield realpos, t, tmpval realpos += len(tmpval) for it_index, it_token, it_value in itokens: yield realpos, it_token, it_value realpos += len(it_value) oldi = index - i try: index, itokens = next(insertions) except StopIteration: insleft = False break # not strictly necessary if oldi < len(v): yield realpos, t, v[oldi:] realpos += len(v) - oldi # leftover tokens while insleft: # no normal tokens, set realpos to zero realpos = realpos or 0 for p, t, v in itokens: yield realpos, t, v realpos += len(v) try: index, itokens = next(insertions) except StopIteration: insleft = False break # not strictly necessary class ProfilingRegexLexerMeta(RegexLexerMeta): """Metaclass for ProfilingRegexLexer, collects regex timing info.""" def _process_regex(cls, regex, rflags, state): if isinstance(regex, words): rex = regex_opt(regex.words, prefix=regex.prefix, suffix=regex.suffix) else: rex = regex compiled = re.compile(rex, rflags) def match_func(text, pos, endpos=sys.maxsize): info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0]) t0 = time.time() res = compiled.match(text, pos, endpos) t1 = time.time() info[0] += 1 info[1] += t1 - t0 return res return match_func class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta): """Drop-in replacement for RegexLexer that does profiling of its regexes.""" _prof_data = [] _prof_sort_index = 4 # defaults to time per call def get_tokens_unprocessed(self, text, stack=('root',)): # this needs to be a stack, since using(this) will produce nested calls self.__class__._prof_data.append({}) yield from RegexLexer.get_tokens_unprocessed(self, text, stack) rawdata = self.__class__._prof_data.pop() data = sorted(((s, repr(r).strip('u\'').replace('\\\\', '\\')[:65], n, 1000 * t, 1000 * t / n) for ((s, r), (n, t)) in rawdata.items()), key=lambda x: x[self._prof_sort_index], reverse=True) sum_total = sum(x[3] for x in data) print() print('Profiling result for %s lexing %d chars in %.3f ms' % (self.__class__.__name__, len(text), sum_total)) print('=' * 110) print('%-20s %-64s ncalls tottime percall' % ('state', 'regex')) print('-' * 110) for d in data: print('%-20s %-65s %5d %8.4f %8.4f' % d) print('=' * 110) pygments-2.11.2/pygments/sphinxext.py0000644000175000017500000001076214165547207017616 0ustar carstencarsten""" pygments.sphinxext ~~~~~~~~~~~~~~~~~~ Sphinx extension to generate automatic documentation of lexers, formatters and filters. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import sys from docutils import nodes from docutils.statemachine import ViewList from docutils.parsers.rst import Directive from sphinx.util.nodes import nested_parse_with_titles MODULEDOC = ''' .. module:: %s %s %s ''' LEXERDOC = ''' .. class:: %s :Short names: %s :Filenames: %s :MIME types: %s %s ''' FMTERDOC = ''' ..
class:: %s :Short names: %s :Filenames: %s %s ''' FILTERDOC = ''' .. class:: %s :Name: %s %s ''' class PygmentsDoc(Directive): """ A directive to collect all lexers/formatters/filters and generate autoclass directives for them. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): self.filenames = set() if self.arguments[0] == 'lexers': out = self.document_lexers() elif self.arguments[0] == 'formatters': out = self.document_formatters() elif self.arguments[0] == 'filters': out = self.document_filters() else: raise Exception('invalid argument for "pygmentsdoc" directive') node = nodes.compound() vl = ViewList(out.split('\n'), source='') nested_parse_with_titles(self.state, vl, node) for fn in self.filenames: self.state.document.settings.record_dependencies.add(fn) return node.children def document_lexers(self): from pygments.lexers._mapping import LEXERS out = [] modules = {} moduledocstrings = {} for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]): module = data[0] mod = __import__(module, None, None, [classname]) self.filenames.add(mod.__file__) cls = getattr(mod, classname) if not cls.__doc__: print("Warning: %s does not have a docstring." % classname) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') modules.setdefault(module, []).append(( classname, ', '.join(data[2]) or 'None', ', '.join(data[3]).replace('*', '\\*').replace('_', '\\_') or 'None', ', '.join(data[4]) or 'None', docstring)) if module not in moduledocstrings: moddoc = mod.__doc__ if isinstance(moddoc, bytes): moddoc = moddoc.decode('utf8') moduledocstrings[module] = moddoc for module, lexers in sorted(modules.items(), key=lambda x: x[0]): if moduledocstrings[module] is None: raise Exception("Missing docstring for %s" % (module,)) heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.') out.append(MODULEDOC % (module, heading, '-'*len(heading))) for data in lexers: out.append(LEXERDOC % data) return ''.join(out) def document_formatters(self): from pygments.formatters import FORMATTERS out = [] for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]): module = data[0] mod = __import__(module, None, None, [classname]) self.filenames.add(mod.__file__) cls = getattr(mod, classname) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') heading = cls.__name__ out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None', ', '.join(data[3]).replace('*', '\\*') or 'None', docstring)) return ''.join(out) def document_filters(self): from pygments.filters import FILTERS out = [] for name, cls in FILTERS.items(): self.filenames.add(sys.modules[cls.__module__].__file__) docstring = cls.__doc__ if isinstance(docstring, bytes): docstring = docstring.decode('utf8') out.append(FILTERDOC % (cls.__name__, name, docstring)) return ''.join(out) def setup(app): app.add_directive('pygmentsdoc', PygmentsDoc) pygments-2.11.2/pygments/filter.py0000644000175000017500000000362214165547207017046 0ustar carstencarsten""" pygments.filter ~~~~~~~~~~~~~~~ Module that implements the default filter. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ def apply_filters(stream, filters, lexer=None): """ Use this function to apply an iterable of filters to a stream. If lexer is given it's forwarded to the filter, otherwise the filter receives `None`.
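For example (``KeywordCaseFilter`` ships with Pygments; ``lexer`` and ``code`` are assumed to exist)::

    from pygments.filters import KeywordCaseFilter
    stream = apply_filters(lexer.get_tokens(code), [KeywordCaseFilter(case='upper')])
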
""" def _apply(filter_, stream): yield from filter_.filter(lexer, stream) for filter_ in filters: stream = _apply(filter_, stream) return stream def simplefilter(f): """ Decorator that converts a function into a filter:: @simplefilter def lowercase(self, lexer, stream, options): for ttype, value in stream: yield ttype, value.lower() """ return type(f.__name__, (FunctionFilter,), { '__module__': getattr(f, '__module__'), '__doc__': f.__doc__, 'function': f, }) class Filter: """ Default filter. Subclass this class or use the `simplefilter` decorator to create own filters. """ def __init__(self, **options): self.options = options def filter(self, lexer, stream): raise NotImplementedError() class FunctionFilter(Filter): """ Abstract class used by `simplefilter` to create simple function filters on the fly. The `simplefilter` decorator automatically creates subclasses of this class for functions passed to it. """ function = None def __init__(self, **options): if not hasattr(self, 'function'): raise TypeError('%r used without bound function' % self.__class__.__name__) Filter.__init__(self, **options) def filter(self, lexer, stream): # pylint: disable=not-callable yield from self.function(lexer, stream, self.options) pygments-2.11.2/pygments/regexopt.py0000644000175000017500000000600014165547207017407 0ustar carstencarsten""" pygments.regexopt ~~~~~~~~~~~~~~~~~ An algorithm that generates optimized regexes for matching long lists of literal strings. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from re import escape from os.path import commonprefix from itertools import groupby from operator import itemgetter CS_ESCAPE = re.compile(r'[\[\^\\\-\]]') FIRST_ELEMENT = itemgetter(0) def make_charset(letters): return '[' + CS_ESCAPE.sub(lambda m: '\\' + m.group(), ''.join(letters)) + ']' def regex_opt_inner(strings, open_paren): """Return a regex that matches any string in the sorted list of strings.""" close_paren = open_paren and ')' or '' # print strings, repr(open_paren) if not strings: # print '-> nothing left' return '' first = strings[0] if len(strings) == 1: # print '-> only 1 string' return open_paren + escape(first) + close_paren if not first: # print '-> first string empty' return open_paren + regex_opt_inner(strings[1:], '(?:') \ + '?' + close_paren if len(first) == 1: # multiple one-char strings? make a charset oneletter = [] rest = [] for s in strings: if len(s) == 1: oneletter.append(s) else: rest.append(s) if len(oneletter) > 1: # do we have more than one oneletter string? if rest: # print '-> 1-character + rest' return open_paren + regex_opt_inner(rest, '') + '|' \ + make_charset(oneletter) + close_paren # print '-> only 1-character' return open_paren + make_charset(oneletter) + close_paren prefix = commonprefix(strings) if prefix: plen = len(prefix) # we have a prefix for all strings # print '-> prefix:', prefix return open_paren + escape(prefix) \ + regex_opt_inner([s[plen:] for s in strings], '(?:') \ + close_paren # is there a suffix? 
strings_rev = [s[::-1] for s in strings] suffix = commonprefix(strings_rev) if suffix: slen = len(suffix) # print '-> suffix:', suffix[::-1] return open_paren \ + regex_opt_inner(sorted(s[:-slen] for s in strings), '(?:') \ + escape(suffix[::-1]) + close_paren # recurse on common 1-string prefixes # print '-> last resort' return open_paren + \ '|'.join(regex_opt_inner(list(group[1]), '') for group in groupby(strings, lambda s: s[0] == first[0])) \ + close_paren def regex_opt(strings, prefix='', suffix=''): """Return a compiled regex that matches any string in the given list. The strings to match must be literal strings, not regexes. They will be regex-escaped. *prefix* and *suffix* are pre- and appended to the final regex. """ strings = sorted(strings) return prefix + regex_opt_inner(strings, '(') + suffix pygments-2.11.2/pygments/__main__.py0000644000175000017500000000053414165547207017300 0ustar carstencarsten""" pygments.__main__ ~~~~~~~~~~~~~~~~~ Main entry point for ``python -m pygments``. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import sys import pygments.cmdline try: sys.exit(pygments.cmdline.main(sys.argv)) except KeyboardInterrupt: sys.exit(1) pygments-2.11.2/pygments/formatter.py0000644000175000017500000000551514165547207017567 0ustar carstencarsten""" pygments.formatter ~~~~~~~~~~~~~~~~~~ Base formatter class. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import codecs from pygments.util import get_bool_opt from pygments.styles import get_style_by_name __all__ = ['Formatter'] def _lookup_style(style): if isinstance(style, str): return get_style_by_name(style) return style class Formatter: """ Converts a token stream to text. Options accepted: ``style`` The style to use, can be a string or a Style subclass (default: "default"). Not used by e.g. the TerminalFormatter. ``full`` Tells the formatter to output a "full" document, i.e. a complete self-contained document. This doesn't have any effect for some formatters (default: false). ``title`` If ``full`` is true, the title that should be used to caption the document (default: ''). ``encoding`` If given, must be an encoding name. This will be used to convert the Unicode token strings to byte strings in the output. If it is "" or None, Unicode strings will be written to the output file, which most file-like objects do not support (default: None). ``outencoding`` Overrides ``encoding`` if given. """ #: Name of the formatter name = None #: Shortcuts for the formatter aliases = [] #: fn match rules filenames = [] #: If True, this formatter outputs Unicode strings when no encoding #: option is given. unicodeoutput = True def __init__(self, **options): self.style = _lookup_style(options.get('style', 'default')) self.full = get_bool_opt(options, 'full', False) self.title = options.get('title', '') self.encoding = options.get('encoding', None) or None if self.encoding in ('guess', 'chardet'): # can happen for e.g. pygmentize -O encoding=guess self.encoding = 'utf-8' self.encoding = options.get('outencoding') or self.encoding self.options = options def get_style_defs(self, arg=''): """ Return the style definitions for the current style as a string. ``arg`` is an additional argument whose meaning depends on the formatter used. Note that ``arg`` can also be a list or tuple for some formatters like the html formatter. 
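For example, with the HTML formatter (a real formatter; the CSS selector is up to the caller)::

    from pygments.formatters import HtmlFormatter
    css = HtmlFormatter(style='default').get_style_defs('.highlight')
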
""" return '' def format(self, tokensource, outfile): """ Format ``tokensource``, an iterable of ``(tokentype, tokenstring)`` tuples and write it into ``outfile``. """ if self.encoding: # wrap the outfile in a StreamWriter outfile = codecs.lookup(self.encoding)[3](outfile) return self.format_unencoded(tokensource, outfile) pygments-2.11.2/pygments/plugin.py0000644000175000017500000000325614165547207017062 0ustar carstencarsten""" pygments.plugin ~~~~~~~~~~~~~~~ Pygments setuptools plugin interface. The methods defined here also work if setuptools isn't installed but they just return nothing. lexer plugins:: [pygments.lexers] yourlexer = yourmodule:YourLexer formatter plugins:: [pygments.formatters] yourformatter = yourformatter:YourFormatter /.ext = yourformatter:YourFormatter As you can see, you can define extensions for the formatter with a leading slash. syntax plugins:: [pygments.styles] yourstyle = yourstyle:YourStyle filter plugin:: [pygments.filter] yourfilter = yourfilter:YourFilter :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ LEXER_ENTRY_POINT = 'pygments.lexers' FORMATTER_ENTRY_POINT = 'pygments.formatters' STYLE_ENTRY_POINT = 'pygments.styles' FILTER_ENTRY_POINT = 'pygments.filters' def iter_entry_points(group_name): try: import pkg_resources except (ImportError, OSError): return [] return pkg_resources.iter_entry_points(group_name) def find_plugin_lexers(): for entrypoint in iter_entry_points(LEXER_ENTRY_POINT): yield entrypoint.load() def find_plugin_formatters(): for entrypoint in iter_entry_points(FORMATTER_ENTRY_POINT): yield entrypoint.name, entrypoint.load() def find_plugin_styles(): for entrypoint in iter_entry_points(STYLE_ENTRY_POINT): yield entrypoint.name, entrypoint.load() def find_plugin_filters(): for entrypoint in iter_entry_points(FILTER_ENTRY_POINT): yield entrypoint.name, entrypoint.load() pygments-2.11.2/pygments/util.py0000644000175000017500000002164314165547207016541 0ustar carstencarsten""" pygments.util ~~~~~~~~~~~~~ Utility functions. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import re from io import TextIOWrapper split_path_re = re.compile(r'[/\\ ]') doctype_lookup_re = re.compile(r''' ]*> ''', re.DOTALL | re.MULTILINE | re.VERBOSE) tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?', re.UNICODE | re.IGNORECASE | re.DOTALL | re.MULTILINE) xml_decl_re = re.compile(r'\s*<\?xml[^>]*\?>', re.I) class ClassNotFound(ValueError): """Raised if one of the lookup functions didn't find a matching class.""" class OptionError(Exception): pass def get_choice_opt(options, optname, allowed, default=None, normcase=False): string = options.get(optname, default) if normcase: string = string.lower() if string not in allowed: raise OptionError('Value for option %s must be one of %s' % (optname, ', '.join(map(str, allowed)))) return string def get_bool_opt(options, optname, default=None): string = options.get(optname, default) if isinstance(string, bool): return string elif isinstance(string, int): return bool(string) elif not isinstance(string, str): raise OptionError('Invalid type %r for option %s; use ' '1/0, yes/no, true/false, on/off' % ( string, optname)) elif string.lower() in ('1', 'yes', 'true', 'on'): return True elif string.lower() in ('0', 'no', 'false', 'off'): return False else: raise OptionError('Invalid value %r for option %s; use ' '1/0, yes/no, true/false, on/off' % ( string, optname)) def get_int_opt(options, optname, default=None): string = options.get(optname, default) try: return int(string) except TypeError: raise OptionError('Invalid type %r for option %s; you ' 'must give an integer value' % ( string, optname)) except ValueError: raise OptionError('Invalid value %r for option %s; you ' 'must give an integer value' % ( string, optname)) def get_list_opt(options, optname, default=None): val = options.get(optname, default) if isinstance(val, str): return val.split() elif isinstance(val, (list, tuple)): return list(val) else: raise OptionError('Invalid type %r for option %s; you ' 'must give a list value' % ( val, optname)) def docstring_headline(obj): if not obj.__doc__: return '' res = [] for line in obj.__doc__.strip().splitlines(): if line.strip(): res.append(" " + line.strip()) else: break return ''.join(res).lstrip() def make_analysator(f): """Return a static text analyser function that returns float values.""" def text_analyse(text): try: rv = f(text) except Exception: return 0.0 if not rv: return 0.0 try: return min(1.0, max(0.0, float(rv))) except (ValueError, TypeError): return 0.0 text_analyse.__doc__ = f.__doc__ return staticmethod(text_analyse) def shebang_matches(text, regex): r"""Check if the given regular expression matches the last part of the shebang if one exists. >>> from pygments.util import shebang_matches >>> shebang_matches('#!/usr/bin/env python', r'python(2\.\d)?') True >>> shebang_matches('#!/usr/bin/python2.4', r'python(2\.\d)?') True >>> shebang_matches('#!/usr/bin/python-ruby', r'python(2\.\d)?') False >>> shebang_matches('#!/usr/bin/python/ruby', r'python(2\.\d)?') False >>> shebang_matches('#!/usr/bin/startsomethingwith python', ... 
r'python(2\.\d)?') True It also checks for common windows executable file extensions:: >>> shebang_matches('#!C:\\Python2.4\\Python.exe', r'python(2\.\d)?') True Parameters (``'-f'`` or ``'--foo'`` are ignored so ``'perl'`` does the same as ``'perl -e'``) Note that this method automatically searches the whole string (eg: the regular expression is wrapped in ``'^$'``) """ index = text.find('\n') if index >= 0: first_line = text[:index].lower() else: first_line = text.lower() if first_line.startswith('#!'): try: found = [x for x in split_path_re.split(first_line[2:].strip()) if x and not x.startswith('-')][-1] except IndexError: return False regex = re.compile(r'^%s(\.(exe|cmd|bat|bin))?$' % regex, re.IGNORECASE) if regex.search(found) is not None: return True return False def doctype_matches(text, regex): """Check if the doctype matches a regular expression (if present). Note that this method only checks the first part of a DOCTYPE. eg: 'html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"' """ m = doctype_lookup_re.search(text) if m is None: return False doctype = m.group(1) return re.compile(regex, re.I).match(doctype.strip()) is not None def html_doctype_matches(text): """Check if the file looks like it has a html doctype.""" return doctype_matches(text, r'html') _looks_like_xml_cache = {} def looks_like_xml(text): """Check if a doctype exists or if we have some tags.""" if xml_decl_re.match(text): return True key = hash(text) try: return _looks_like_xml_cache[key] except KeyError: m = doctype_lookup_re.search(text) if m is not None: return True rv = tag_re.search(text[:1000]) is not None _looks_like_xml_cache[key] = rv return rv def surrogatepair(c): """Given a unicode character code with length greater than 16 bits, return the two 16 bit surrogate pair. """ # From example D28 of: # http://www.unicode.org/book/ch03.pdf return (0xd7c0 + (c >> 10), (0xdc00 + (c & 0x3ff))) def format_lines(var_name, seq, raw=False, indent_level=0): """Formats a sequence of strings for output.""" lines = [] base_indent = ' ' * indent_level * 4 inner_indent = ' ' * (indent_level + 1) * 4 lines.append(base_indent + var_name + ' = (') if raw: # These should be preformatted reprs of, say, tuples. for i in seq: lines.append(inner_indent + i + ',') else: for i in seq: # Force use of single quotes r = repr(i + '"') lines.append(inner_indent + r[:-2] + r[-1] + ',') lines.append(base_indent + ')') return '\n'.join(lines) def duplicates_removed(it, already_seen=()): """ Returns a list with duplicates removed from the iterable `it`. Order is preserved. """ lst = [] seen = set() for i in it: if i in seen or i in already_seen: continue lst.append(i) seen.add(i) return lst class Future: """Generic class to defer some work. Handled specially in RegexLexerMeta, to support regex string construction at first use. """ def get(self): raise NotImplementedError def guess_decode(text): """Decode *text* with guessed encoding. First try UTF-8; this should fail for non-UTF-8 encodings. Then try the preferred locale encoding. Fall back to latin-1, which always works. """ try: text = text.decode('utf-8') return text, 'utf-8' except UnicodeDecodeError: try: import locale prefencoding = locale.getpreferredencoding() text = text.decode(prefencoding) return text, prefencoding except (UnicodeDecodeError, LookupError): text = text.decode('latin1') return text, 'latin1' def guess_decode_from_terminal(text, term): """Decode *text* coming from terminal *term*. First try the terminal encoding, if given. Then try UTF-8. Then try the preferred locale encoding.
Fall back to latin-1, which always works. """ if getattr(term, 'encoding', None): try: text = text.decode(term.encoding) except UnicodeDecodeError: pass else: return text, term.encoding return guess_decode(text) def terminal_encoding(term): """Return our best guess of encoding for the given *term*.""" if getattr(term, 'encoding', None): return term.encoding import locale return locale.getpreferredencoding() class UnclosingTextIOWrapper(TextIOWrapper): # Don't close underlying buffer on destruction. def close(self): self.flush() pygments-2.11.2/pygments/modeline.py0000644000175000017500000000173214165547207017355 0ustar carstencarsten""" pygments.modeline ~~~~~~~~~~~~~~~~~ A simple modeline parser (based on pymodeline). :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re __all__ = ['get_filetype_from_buffer'] modeline_re = re.compile(r''' (?: vi | vim | ex ) (?: [<=>]? \d* )? : .* (?: ft | filetype | syn | syntax ) = ( [^:\s]+ ) ''', re.VERBOSE) def get_filetype_from_line(l): m = modeline_re.search(l) if m: return m.group(1) def get_filetype_from_buffer(buf, max_lines=5): """ Scan the buffer for modelines and return filetype if one is found. """ lines = buf.splitlines() for l in lines[-1:-max_lines-1:-1]: ret = get_filetype_from_line(l) if ret: return ret for i in range(max_lines, -1, -1): if i < len(lines): ret = get_filetype_from_line(lines[i]) if ret: return ret return None pygments-2.11.2/pygments/lexers/0000755000175000017500000000000014165547207016506 5ustar carstencarstenpygments-2.11.2/pygments/lexers/haskell.py0000644000175000017500000007775314165547207020526 0ustar carstencarsten""" pygments.lexers.haskell ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for Haskell and related languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, bygroups, do_insertions, \ default, include, inherit from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Whitespace from pygments import unistring as uni __all__ = ['HaskellLexer', 'HspecLexer', 'IdrisLexer', 'AgdaLexer', 'CryptolLexer', 'LiterateHaskellLexer', 'LiterateIdrisLexer', 'LiterateAgdaLexer', 'LiterateCryptolLexer', 'KokaLexer'] line_re = re.compile('.*?\n') class HaskellLexer(RegexLexer): """ A Haskell lexer based on the lexemes defined in the Haskell 98 Report. .. 
versionadded:: 0.8 """ name = 'Haskell' aliases = ['haskell', 'hs'] filenames = ['*.hs'] mimetypes = ['text/x-haskell'] flags = re.MULTILINE | re.UNICODE reserved = ('case', 'class', 'data', 'default', 'deriving', 'do', 'else', 'family', 'if', 'in', 'infix[lr]?', 'instance', 'let', 'newtype', 'of', 'then', 'type', 'where', '_') ascii = ('NUL', 'SOH', '[SE]TX', 'EOT', 'ENQ', 'ACK', 'BEL', 'BS', 'HT', 'LF', 'VT', 'FF', 'CR', 'S[OI]', 'DLE', 'DC[1-4]', 'NAK', 'SYN', 'ETB', 'CAN', 'EM', 'SUB', 'ESC', '[FGRU]S', 'SP', 'DEL') tokens = { 'root': [ # Whitespace: (r'\s+', Whitespace), # (r'--\s*|.*$', Comment.Doc), (r'--(?![!#$%&*+./<=>?@^|_~:\\]).*?$', Comment.Single), (r'\{-', Comment.Multiline, 'comment'), # Lexemes: # Identifiers (r'\bimport\b', Keyword.Reserved, 'import'), (r'\bmodule\b', Keyword.Reserved, 'module'), (r'\berror\b', Name.Exception), (r'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved), (r"'[^\\]'", String.Char), # this has to come before the TH quote (r'^[_' + uni.Ll + r'][\w\']*', Name.Function), (r"'?[_" + uni.Ll + r"][\w']*", Name), (r"('')?[" + uni.Lu + r"][\w\']*", Keyword.Type), (r"(')[" + uni.Lu + r"][\w\']*", Keyword.Type), (r"(')\[[^\]]*\]", Keyword.Type), # tuples and lists get special treatment in GHC (r"(')\([^)]*\)", Keyword.Type), # .. (r"(')[:!#$%&*+.\\/<=>?@^|~-]+", Keyword.Type), # promoted type operators # Operators (r'\\(?![:!#$%&*+.\\/<=>?@^|~-]+)', Name.Function), # lambda operator (r'(<-|::|->|=>|=)(?![:!#$%&*+.\\/<=>?@^|~-]+)', Operator.Word), # specials (r':[:!#$%&*+.\\/<=>?@^|~-]*', Keyword.Type), # Constructor operators (r'[:!#$%&*+.\\/<=>?@^|~-]+', Operator), # Other operators # Numbers (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*_*[pP][+-]?\d(_*\d)*', Number.Float), (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*\.[\da-fA-F](_*[\da-fA-F])*' r'(_*[pP][+-]?\d(_*\d)*)?', Number.Float), (r'\d(_*\d)*_*[eE][+-]?\d(_*\d)*', Number.Float), (r'\d(_*\d)*\.\d(_*\d)*(_*[eE][+-]?\d(_*\d)*)?', Number.Float), (r'0[bB]_*[01](_*[01])*', Number.Bin), (r'0[oO]_*[0-7](_*[0-7])*', Number.Oct), (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*', Number.Hex), (r'\d(_*\d)*', Number.Integer), # Character/String Literals (r"'", String.Char, 'character'), (r'"', String, 'string'), # Special (r'\[\]', Keyword.Type), (r'\(\)', Name.Builtin), (r'[][(),;`{}]', Punctuation), ], 'import': [ # Import statements (r'\s+', Whitespace), (r'"', String, 'string'), # after "funclist" state (r'\)', Punctuation, '#pop'), (r'qualified\b', Keyword), # import X as Y (r'([' + uni.Lu + r'][\w.]*)(\s+)(as)(\s+)([' + uni.Lu + r'][\w.]*)', bygroups(Name.Namespace, Whitespace, Keyword, Whitespace, Name), '#pop'), # import X hiding (functions) (r'([' + uni.Lu + r'][\w.]*)(\s+)(hiding)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Keyword, Whitespace, Punctuation), 'funclist'), # import X (functions) (r'([' + uni.Lu + r'][\w.]*)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Punctuation), 'funclist'), # import X (r'[\w.]+', Name.Namespace, '#pop'), ], 'module': [ (r'\s+', Whitespace), (r'([' + uni.Lu + r'][\w.]*)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Punctuation), 'funclist'), (r'[' + uni.Lu + r'][\w.]*', Name.Namespace, '#pop'), ], 'funclist': [ (r'\s+', Whitespace), (r'[' + uni.Lu + r']\w*', Keyword.Type), (r'(_[\w\']+|[' + uni.Ll + r'][\w\']*)', Name.Function), (r'--(?![!#$%&*+./<=>?@^|_~:\\]).*?$', Comment.Single), (r'\{-', Comment.Multiline, 'comment'), (r',', Punctuation), (r'[:!#$%&*+.\\/<=>?@^|~-]+', Operator), # (HACK, but it makes sense to push two instances, believe me) (r'\(', Punctuation, ('funclist', 
'funclist')), (r'\)', Punctuation, '#pop:2'), ], # NOTE: the next four states are shared in the AgdaLexer; make sure # any change is compatible with Agda as well or copy over and change 'comment': [ # Multiline Comments (r'[^-{}]+', Comment.Multiline), (r'\{-', Comment.Multiline, '#push'), (r'-\}', Comment.Multiline, '#pop'), (r'[-{}]', Comment.Multiline), ], 'character': [ # Allows multi-chars, incorrectly. (r"[^\\']'", String.Char, '#pop'), (r"\\", String.Escape, 'escape'), ("'", String.Char, '#pop'), ], 'string': [ (r'[^\\"]+', String), (r"\\", String.Escape, 'escape'), ('"', String, '#pop'), ], 'escape': [ (r'[abfnrtv"\'&\\]', String.Escape, '#pop'), (r'\^[][' + uni.Lu + r'@^_]', String.Escape, '#pop'), ('|'.join(ascii), String.Escape, '#pop'), (r'o[0-7]+', String.Escape, '#pop'), (r'x[\da-fA-F]+', String.Escape, '#pop'), (r'\d+', String.Escape, '#pop'), (r'(\s+)(\\)', bygroups(Whitespace, String.Escape), '#pop'), ], } class HspecLexer(HaskellLexer): """ A Haskell lexer with support for Hspec constructs. .. versionadded:: 2.4.0 """ name = 'Hspec' aliases = ['hspec'] filenames = [] mimetypes = [] tokens = { 'root': [ (r'(it)(\s*)("[^"]*")', bygroups(Text, Whitespace, String.Doc)), (r'(describe)(\s*)("[^"]*")', bygroups(Text, Whitespace, String.Doc)), (r'(context)(\s*)("[^"]*")', bygroups(Text, Whitespace, String.Doc)), inherit, ], } class IdrisLexer(RegexLexer): """ A lexer for the dependently typed programming language Idris. Based on the Haskell and Agda Lexer. .. versionadded:: 2.0 """ name = 'Idris' aliases = ['idris', 'idr'] filenames = ['*.idr'] mimetypes = ['text/x-idris'] reserved = ('case', 'class', 'data', 'default', 'using', 'do', 'else', 'if', 'in', 'infix[lr]?', 'instance', 'rewrite', 'auto', 'namespace', 'codata', 'mutual', 'private', 'public', 'abstract', 'total', 'partial', 'interface', 'implementation', 'export', 'covering', 'constructor', 'let', 'proof', 'of', 'then', 'static', 'where', '_', 'with', 'pattern', 'term', 'syntax', 'prefix', 'postulate', 'parameters', 'record', 'dsl', 'impossible', 'implicit', 'tactics', 'intros', 'intro', 'compute', 'refine', 'exact', 'trivial') ascii = ('NUL', 'SOH', '[SE]TX', 'EOT', 'ENQ', 'ACK', 'BEL', 'BS', 'HT', 'LF', 'VT', 'FF', 'CR', 'S[OI]', 'DLE', 'DC[1-4]', 'NAK', 'SYN', 'ETB', 'CAN', 'EM', 'SUB', 'ESC', '[FGRU]S', 'SP', 'DEL') directives = ('lib', 'link', 'flag', 'include', 'hide', 'freeze', 'access', 'default', 'logging', 'dynamic', 'name', 'error_handlers', 'language') tokens = { 'root': [ # Comments (r'^(\s*)(%%(%s))' % '|'.join(directives), bygroups(Whitespace, Keyword.Reserved)), (r'(\s*)(--(?![!#$%&*+./<=>?@^|_~:\\]).*?)$', bygroups(Whitespace, Comment.Single)), (r'(\s*)(\|{3}.*?)$', bygroups(Whitespace, Comment.Single)), (r'(\s*)(\{-)', bygroups(Whitespace, Comment.Multiline), 'comment'), # Declaration (r'^(\s*)([^\s(){}]+)(\s*)(:)(\s*)', bygroups(Whitespace, Name.Function, Whitespace, Operator.Word, Whitespace)), # Identifiers (r'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved), (r'(import|module)(\s+)', bygroups(Keyword.Reserved, Whitespace), 'module'), (r"('')?[A-Z][\w\']*", Keyword.Type), (r'[a-z][\w\']*', Text), # Special Symbols (r'(<-|::|->|=>|=)', Operator.Word), # specials (r'([(){}\[\]:!#$%&*+.\\/<=>?@^|~-]+)', Operator.Word), # specials # Numbers (r'\d+[eE][+-]?\d+', Number.Float), (r'\d+\.\d+([eE][+-]?\d+)?', Number.Float), (r'0[xX][\da-fA-F]+', Number.Hex), (r'\d+', Number.Integer), # Strings (r"'", String.Char, 'character'), (r'"', String, 'string'), (r'[^\s(){}]+', Text), (r'\s+?', Whitespace), # 
Whitespace ], 'module': [ (r'\s+', Whitespace), (r'([A-Z][\w.]*)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Punctuation), 'funclist'), (r'[A-Z][\w.]*', Name.Namespace, '#pop'), ], 'funclist': [ (r'\s+', Whitespace), (r'[A-Z]\w*', Keyword.Type), (r'(_[\w\']+|[a-z][\w\']*)', Name.Function), (r'--.*$', Comment.Single), (r'\{-', Comment.Multiline, 'comment'), (r',', Punctuation), (r'[:!#$%&*+.\\/<=>?@^|~-]+', Operator), # (HACK, but it makes sense to push two instances, believe me) (r'\(', Punctuation, ('funclist', 'funclist')), (r'\)', Punctuation, '#pop:2'), ], # NOTE: the next four states are shared in the AgdaLexer; make sure # any change is compatible with Agda as well or copy over and change 'comment': [ # Multiline Comments (r'[^-{}]+', Comment.Multiline), (r'\{-', Comment.Multiline, '#push'), (r'-\}', Comment.Multiline, '#pop'), (r'[-{}]', Comment.Multiline), ], 'character': [ # Allows multi-chars, incorrectly. (r"[^\\']", String.Char), (r"\\", String.Escape, 'escape'), ("'", String.Char, '#pop'), ], 'string': [ (r'[^\\"]+', String), (r"\\", String.Escape, 'escape'), ('"', String, '#pop'), ], 'escape': [ (r'[abfnrtv"\'&\\]', String.Escape, '#pop'), (r'\^[][A-Z@^_]', String.Escape, '#pop'), ('|'.join(ascii), String.Escape, '#pop'), (r'o[0-7]+', String.Escape, '#pop'), (r'x[\da-fA-F]+', String.Escape, '#pop'), (r'\d+', String.Escape, '#pop'), (r'(\s+)(\\)', bygroups(Whitespace, String.Escape), '#pop') ], } class AgdaLexer(RegexLexer): """ For the `Agda `_ dependently typed functional programming language and proof assistant. .. versionadded:: 2.0 """ name = 'Agda' aliases = ['agda'] filenames = ['*.agda'] mimetypes = ['text/x-agda'] reserved = ['abstract', 'codata', 'coinductive', 'constructor', 'data', 'field', 'forall', 'hiding', 'in', 'inductive', 'infix', 'infixl', 'infixr', 'instance', 'let', 'mutual', 'open', 'pattern', 'postulate', 'primitive', 'private', 'quote', 'quoteGoal', 'quoteTerm', 'record', 'renaming', 'rewrite', 'syntax', 'tactic', 'unquote', 'unquoteDecl', 'using', 'where', 'with'] tokens = { 'root': [ # Declaration (r'^(\s*)([^\s(){}]+)(\s*)(:)(\s*)', bygroups(Whitespace, Name.Function, Whitespace, Operator.Word, Whitespace)), # Comments (r'--(?![!#$%&*+./<=>?@^|_~:\\]).*?$', Comment.Single), (r'\{-', Comment.Multiline, 'comment'), # Holes (r'\{!', Comment.Directive, 'hole'), # Lexemes: # Identifiers (r'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved), (r'(import|module)(\s+)', bygroups(Keyword.Reserved, Whitespace), 'module'), (r'\b(Set|Prop)[\u2080-\u2089]*\b', Keyword.Type), # Special Symbols (r'(\(|\)|\{|\})', Operator), (r'(\.{1,3}|\||\u03BB|\u2200|\u2192|:|=|->)', Operator.Word), # Numbers (r'\d+[eE][+-]?\d+', Number.Float), (r'\d+\.\d+([eE][+-]?\d+)?', Number.Float), (r'0[xX][\da-fA-F]+', Number.Hex), (r'\d+', Number.Integer), # Strings (r"'", String.Char, 'character'), (r'"', String, 'string'), (r'[^\s(){}]+', Text), (r'\s+?', Whitespace), # Whitespace ], 'hole': [ # Holes (r'[^!{}]+', Comment.Directive), (r'\{!', Comment.Directive, '#push'), (r'!\}', Comment.Directive, '#pop'), (r'[!{}]', Comment.Directive), ], 'module': [ (r'\{-', Comment.Multiline, 'comment'), (r'[a-zA-Z][\w.]*', Name, '#pop'), (r'[\W0-9_]+', Text) ], 'comment': HaskellLexer.tokens['comment'], 'character': HaskellLexer.tokens['character'], 'string': HaskellLexer.tokens['string'], 'escape': HaskellLexer.tokens['escape'] } class CryptolLexer(RegexLexer): """ FIXME: A Cryptol2 lexer based on the lexemes defined in the Haskell 98 Report. .. 
versionadded:: 2.0 """ name = 'Cryptol' aliases = ['cryptol', 'cry'] filenames = ['*.cry'] mimetypes = ['text/x-cryptol'] reserved = ('Arith', 'Bit', 'Cmp', 'False', 'Inf', 'True', 'else', 'export', 'extern', 'fin', 'if', 'import', 'inf', 'lg2', 'max', 'min', 'module', 'newtype', 'pragma', 'property', 'then', 'type', 'where', 'width') ascii = ('NUL', 'SOH', '[SE]TX', 'EOT', 'ENQ', 'ACK', 'BEL', 'BS', 'HT', 'LF', 'VT', 'FF', 'CR', 'S[OI]', 'DLE', 'DC[1-4]', 'NAK', 'SYN', 'ETB', 'CAN', 'EM', 'SUB', 'ESC', '[FGRU]S', 'SP', 'DEL') tokens = { 'root': [ # Whitespace: (r'\s+', Whitespace), # (r'--\s*|.*$', Comment.Doc), (r'//.*$', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), # Lexemes: # Identifiers (r'\bimport\b', Keyword.Reserved, 'import'), (r'\bmodule\b', Keyword.Reserved, 'module'), (r'\berror\b', Name.Exception), (r'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved), (r'^[_a-z][\w\']*', Name.Function), (r"'?[_a-z][\w']*", Name), (r"('')?[A-Z][\w\']*", Keyword.Type), # Operators (r'\\(?![:!#$%&*+.\\/<=>?@^|~-]+)', Name.Function), # lambda operator (r'(<-|::|->|=>|=)(?![:!#$%&*+.\\/<=>?@^|~-]+)', Operator.Word), # specials (r':[:!#$%&*+.\\/<=>?@^|~-]*', Keyword.Type), # Constructor operators (r'[:!#$%&*+.\\/<=>?@^|~-]+', Operator), # Other operators # Numbers (r'\d+[eE][+-]?\d+', Number.Float), (r'\d+\.\d+([eE][+-]?\d+)?', Number.Float), (r'0[oO][0-7]+', Number.Oct), (r'0[xX][\da-fA-F]+', Number.Hex), (r'\d+', Number.Integer), # Character/String Literals (r"'", String.Char, 'character'), (r'"', String, 'string'), # Special (r'\[\]', Keyword.Type), (r'\(\)', Name.Builtin), (r'[][(),;`{}]', Punctuation), ], 'import': [ # Import statements (r'\s+', Whitespace), (r'"', String, 'string'), # after "funclist" state (r'\)', Punctuation, '#pop'), (r'qualified\b', Keyword), # import X as Y (r'([A-Z][\w.]*)(\s+)(as)(\s+)([A-Z][\w.]*)', bygroups(Name.Namespace, Whitespace, Keyword, Whitespace, Name), '#pop'), # import X hiding (functions) (r'([A-Z][\w.]*)(\s+)(hiding)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Keyword, Whitespace, Punctuation), 'funclist'), # import X (functions) (r'([A-Z][\w.]*)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Punctuation), 'funclist'), # import X (r'[\w.]+', Name.Namespace, '#pop'), ], 'module': [ (r'\s+', Whitespace), (r'([A-Z][\w.]*)(\s+)(\()', bygroups(Name.Namespace, Whitespace, Punctuation), 'funclist'), (r'[A-Z][\w.]*', Name.Namespace, '#pop'), ], 'funclist': [ (r'\s+', Whitespace), (r'[A-Z]\w*', Keyword.Type), (r'(_[\w\']+|[a-z][\w\']*)', Name.Function), # TODO: these don't match the comments in docs, remove. # (r'--(?![!#$%&*+./<=>?@^|_~:\\]).*?$', Comment.Single), # (r'{-', Comment.Multiline, 'comment'), (r',', Punctuation), (r'[:!#$%&*+.\\/<=>?@^|~-]+', Operator), # (HACK, but it makes sense to push two instances, believe me) (r'\(', Punctuation, ('funclist', 'funclist')), (r'\)', Punctuation, '#pop:2'), ], 'comment': [ # Multiline Comments (r'[^/*]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'character': [ # Allows multi-chars, incorrectly. 
(r"[^\\']'", String.Char, '#pop'), (r"\\", String.Escape, 'escape'), ("'", String.Char, '#pop'), ], 'string': [ (r'[^\\"]+', String), (r"\\", String.Escape, 'escape'), ('"', String, '#pop'), ], 'escape': [ (r'[abfnrtv"\'&\\]', String.Escape, '#pop'), (r'\^[][A-Z@^_]', String.Escape, '#pop'), ('|'.join(ascii), String.Escape, '#pop'), (r'o[0-7]+', String.Escape, '#pop'), (r'x[\da-fA-F]+', String.Escape, '#pop'), (r'\d+', String.Escape, '#pop'), (r'(\s+)(\\)', bygroups(Whitespace, String.Escape), '#pop'), ], } EXTRA_KEYWORDS = {'join', 'split', 'reverse', 'transpose', 'width', 'length', 'tail', '<<', '>>', '<<<', '>>>', 'const', 'reg', 'par', 'seq', 'ASSERT', 'undefined', 'error', 'trace'} def get_tokens_unprocessed(self, text): stack = ['root'] for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text, stack): if token is Name and value in self.EXTRA_KEYWORDS: yield index, Name.Builtin, value else: yield index, token, value class LiterateLexer(Lexer): """ Base class for lexers of literate file formats based on LaTeX or Bird-style (prefixing each code line with ">"). Additional options accepted: `litstyle` If given, must be ``"bird"`` or ``"latex"``. If not given, the style is autodetected: if the first non-whitespace character in the source is a backslash or percent character, LaTeX is assumed, else Bird. """ bird_re = re.compile(r'(>[ \t]*)(.*\n)') def __init__(self, baselexer, **options): self.baselexer = baselexer Lexer.__init__(self, **options) def get_tokens_unprocessed(self, text): style = self.options.get('litstyle') if style is None: style = (text.lstrip()[0:1] in '%\\') and 'latex' or 'bird' code = '' insertions = [] if style == 'bird': # bird-style for match in line_re.finditer(text): line = match.group() m = self.bird_re.match(line) if m: insertions.append((len(code), [(0, Comment.Special, m.group(1))])) code += m.group(2) else: insertions.append((len(code), [(0, Text, line)])) else: # latex-style from pygments.lexers.markup import TexLexer lxlexer = TexLexer(**self.options) codelines = 0 latex = '' for match in line_re.finditer(text): line = match.group() if codelines: if line.lstrip().startswith('\\end{code}'): codelines = 0 latex += line else: code += line elif line.lstrip().startswith('\\begin{code}'): codelines = 1 latex += line insertions.append((len(code), list(lxlexer.get_tokens_unprocessed(latex)))) latex = '' else: latex += line insertions.append((len(code), list(lxlexer.get_tokens_unprocessed(latex)))) yield from do_insertions(insertions, self.baselexer.get_tokens_unprocessed(code)) class LiterateHaskellLexer(LiterateLexer): """ For Literate Haskell (Bird-style or LaTeX) source. Additional options accepted: `litstyle` If given, must be ``"bird"`` or ``"latex"``. If not given, the style is autodetected: if the first non-whitespace character in the source is a backslash or percent character, LaTeX is assumed, else Bird. .. versionadded:: 0.9 """ name = 'Literate Haskell' aliases = ['literate-haskell', 'lhaskell', 'lhs'] filenames = ['*.lhs'] mimetypes = ['text/x-literate-haskell'] def __init__(self, **options): hslexer = HaskellLexer(**options) LiterateLexer.__init__(self, hslexer, **options) class LiterateIdrisLexer(LiterateLexer): """ For Literate Idris (Bird-style or LaTeX) source. Additional options accepted: `litstyle` If given, must be ``"bird"`` or ``"latex"``. If not given, the style is autodetected: if the first non-whitespace character in the source is a backslash or percent character, LaTeX is assumed, else Bird. .. 
versionadded:: 2.0 """ name = 'Literate Idris' aliases = ['literate-idris', 'lidris', 'lidr'] filenames = ['*.lidr'] mimetypes = ['text/x-literate-idris'] def __init__(self, **options): hslexer = IdrisLexer(**options) LiterateLexer.__init__(self, hslexer, **options) class LiterateAgdaLexer(LiterateLexer): """ For Literate Agda source. Additional options accepted: `litstyle` If given, must be ``"bird"`` or ``"latex"``. If not given, the style is autodetected: if the first non-whitespace character in the source is a backslash or percent character, LaTeX is assumed, else Bird. .. versionadded:: 2.0 """ name = 'Literate Agda' aliases = ['literate-agda', 'lagda'] filenames = ['*.lagda'] mimetypes = ['text/x-literate-agda'] def __init__(self, **options): agdalexer = AgdaLexer(**options) LiterateLexer.__init__(self, agdalexer, litstyle='latex', **options) class LiterateCryptolLexer(LiterateLexer): """ For Literate Cryptol (Bird-style or LaTeX) source. Additional options accepted: `litstyle` If given, must be ``"bird"`` or ``"latex"``. If not given, the style is autodetected: if the first non-whitespace character in the source is a backslash or percent character, LaTeX is assumed, else Bird. .. versionadded:: 2.0 """ name = 'Literate Cryptol' aliases = ['literate-cryptol', 'lcryptol', 'lcry'] filenames = ['*.lcry'] mimetypes = ['text/x-literate-cryptol'] def __init__(self, **options): crylexer = CryptolLexer(**options) LiterateLexer.__init__(self, crylexer, **options) class KokaLexer(RegexLexer): """ Lexer for the `Koka `_ language. .. versionadded:: 1.6 """ name = 'Koka' aliases = ['koka'] filenames = ['*.kk', '*.kki'] mimetypes = ['text/x-koka'] keywords = [ 'infix', 'infixr', 'infixl', 'type', 'cotype', 'rectype', 'alias', 'struct', 'con', 'fun', 'function', 'val', 'var', 'external', 'if', 'then', 'else', 'elif', 'return', 'match', 'private', 'public', 'private', 'module', 'import', 'as', 'include', 'inline', 'rec', 'try', 'yield', 'enum', 'interface', 'instance', ] # keywords that are followed by a type typeStartKeywords = [ 'type', 'cotype', 'rectype', 'alias', 'struct', 'enum', ] # keywords valid in a type typekeywords = [ 'forall', 'exists', 'some', 'with', ] # builtin names and special names builtin = [ 'for', 'while', 'repeat', 'foreach', 'foreach-indexed', 'error', 'catch', 'finally', 'cs', 'js', 'file', 'ref', 'assigned', ] # symbols that can be in an operator symbols = r'[$%&*+@!/\\^~=.:\-?|<>]+' # symbol boundary: an operator keyword should not be followed by any of these sboundary = '(?!' + symbols + ')' # name boundary: a keyword should not be followed by any of these boundary = r'(?![\w/])' # koka token abstractions tokenType = Name.Attribute tokenTypeDef = Name.Class tokenConstructor = Generic.Emph # main lexer tokens = { 'root': [ include('whitespace'), # go into type mode (r'::?' 
+ sboundary, tokenType, 'type'), (r'(alias)(\s+)([a-z]\w*)?', bygroups(Keyword, Whitespace, tokenTypeDef), 'alias-type'), (r'(struct)(\s+)([a-z]\w*)?', bygroups(Keyword, Whitespace, tokenTypeDef), 'struct-type'), ((r'(%s)' % '|'.join(typeStartKeywords)) + r'(\s+)([a-z]\w*)?', bygroups(Keyword, Whitespace, tokenTypeDef), 'type'), # special sequences of tokens (we use ?: for non-capturing group as # required by 'bygroups') (r'(module)(\s+)(interface(?=\s))?(\s+)?((?:[a-z]\w*/)*[a-z]\w*)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Namespace)), (r'(import)(\s+)((?:[a-z]\w*/)*[a-z]\w*)' r'(?:(\s*)(=)(\s*)(qualified)?(\s*)' r'((?:[a-z]\w*/)*[a-z]\w*))?', bygroups(Keyword, Whitespace, Name.Namespace, Whitespace, Keyword, Whitespace, Keyword, Whitespace, Name.Namespace)), (r'^(public|private)?(\s+)?(function|fun|val)' r'(\s+)([a-z]\w*|\((?:' + symbols + r'|/)\))', bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Function)), (r'^(?:(public|private)(?=\s+external))?((?<!^)\s+)?(external)(\s+)(inline(?=\s))?(\s+)?' r'([a-z]\w*|\((?:' + symbols + r'|/)\))', bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword, Whitespace, Name.Function)), # keywords (r'(%s)' % '|'.join(typekeywords) + boundary, Keyword.Type), (r'(%s)' % '|'.join(keywords) + boundary, Keyword), (r'(%s)' % '|'.join(builtin) + boundary, Keyword.Pseudo), (r'::?|:=|\->|[=.]' + sboundary, Keyword), # names (r'((?:[a-z]\w*/)*)([A-Z]\w*)', bygroups(Name.Namespace, tokenConstructor)), (r'((?:[a-z]\w*/)*)([a-z]\w*)', bygroups(Name.Namespace, Name)), (r'((?:[a-z]\w*/)*)(\((?:' + symbols + r'|/)\))', bygroups(Name.Namespace, Name)), (r'_\w*', Name.Variable), # literal string (r'@"', String.Double, 'litstring'), # operators (symbols + "|/(?![*/])", Operator), (r'`', Operator), (r'[{}()\[\];,]', Punctuation), # literals. No check for literal characters with len > 1 (r'[0-9]+\.[0-9]+([eE][\-+]?[0-9]+)?', Number.Float), (r'0[xX][0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), (r"'", String.Char, 'char'), (r'"', String.Double, 'string'), ], # type started by alias 'alias-type': [ (r'=', Keyword), include('type') ], # type started by struct 'struct-type': [ (r'(?=\((?!,*\)))', Punctuation, '#pop'), include('type') ], # type started by colon 'type': [ (r'[(\[<]', tokenType, 'type-nested'), include('type-content') ], # type nested in brackets: can contain parameters, comma etc. 'type-nested': [ (r'[)\]>]', tokenType, '#pop'), (r'[(\[<]', tokenType, 'type-nested'), (r',', tokenType), (r'([a-z]\w*)(\s*)(:)(?!:)', bygroups(Name, Whitespace, tokenType)), # parameter name include('type-content') ], # shared contents of a type 'type-content': [ include('whitespace'), # keywords (r'(%s)' % '|'.join(typekeywords) + boundary, Keyword), (r'(?=((%s)' % '|'.join(keywords) + boundary + '))', Keyword, '#pop'), # need to match because names overlap...
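# (That zero-width rule matches position only: it yields an empty Keyword
# token and pops back to 'root', which then lexes the keyword itself.  A
# minimal sketch of the same idiom, with a made-up keyword list:
#
#     (r'(?=(fun|val)\b)', Keyword, '#pop'),   # consume nothing, leave 'type'
#
# anything the type grammar cannot claim is handed back this way.)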
# kinds (r'[EPHVX]' + boundary, tokenType), # type names (r'[a-z][0-9]*(?![\w/])', tokenType), (r'_\w*', tokenType.Variable), # Generic.Emph (r'((?:[a-z]\w*/)*)([A-Z]\w*)', bygroups(Name.Namespace, tokenType)), (r'((?:[a-z]\w*/)*)([a-z]\w+)', bygroups(Name.Namespace, tokenType)), # type keyword operators (r'::|->|[.:|]', tokenType), # catchall default('#pop') ], # comments and literals 'whitespace': [ (r'(\n\s*)(#.*)$', bygroups(Whitespace, Comment.Preproc)), (r'\s+', Whitespace), (r'/\*', Comment.Multiline, 'comment'), (r'//.*$', Comment.Single) ], 'comment': [ (r'[^/*]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'litstring': [ (r'[^"]+', String.Double), (r'""', String.Escape), (r'"', String.Double, '#pop'), ], 'string': [ (r'[^\\"\n]+', String.Double), include('escape-sequence'), (r'["\n]', String.Double, '#pop'), ], 'char': [ (r'[^\\\'\n]+', String.Char), include('escape-sequence'), (r'[\'\n]', String.Char, '#pop'), ], 'escape-sequence': [ (r'\\[nrt\\"\']', String.Escape), (r'\\x[0-9a-fA-F]{2}', String.Escape), (r'\\u[0-9a-fA-F]{4}', String.Escape), # Yes, \U literals are 6 hex digits. (r'\\U[0-9a-fA-F]{6}', String.Escape) ] } pygments-2.11.2/pygments/lexers/julia.py0000644000175000017500000002577414165547207020203 0ustar carstencarsten""" pygments.lexers.julia ~~~~~~~~~~~~~~~~~~~~~ Lexers for the Julia language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, bygroups, do_insertions, \ words, include from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic from pygments.util import shebang_matches from pygments.lexers._julia_builtins import OPERATORS_LIST, DOTTED_OPERATORS_LIST, \ KEYWORD_LIST, BUILTIN_LIST, LITERAL_LIST __all__ = ['JuliaLexer', 'JuliaConsoleLexer'] # see https://docs.julialang.org/en/v1/manual/variables/#Allowed-Variable-Names allowed_variable = \ '(?:[a-zA-Z_\u00A1-\U0010ffff][a-zA-Z_0-9!\u00A1-\U0010ffff]*)' # see https://github.com/JuliaLang/julia/blob/master/src/flisp/julia_opsuffs.h operator_suffixes = r'[²³¹ʰʲʳʷʸˡˢˣᴬᴮᴰᴱᴳᴴᴵᴶᴷᴸᴹᴺᴼᴾᴿᵀᵁᵂᵃᵇᵈᵉᵍᵏᵐᵒᵖᵗᵘᵛᵝᵞᵟᵠᵡᵢᵣᵤᵥᵦᵧᵨᵩᵪᶜᶠᶥᶦᶫᶰᶸᶻᶿ′″‴‵‶‷⁗⁰ⁱ⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾ⁿ₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎ₐₑₒₓₕₖₗₘₙₚₛₜⱼⱽ]*' class JuliaLexer(RegexLexer): """ For `Julia `_ source code. .. 
versionadded:: 1.6 """ name = 'Julia' aliases = ['julia', 'jl'] filenames = ['*.jl'] mimetypes = ['text/x-julia', 'application/x-julia'] flags = re.MULTILINE | re.UNICODE tokens = { 'root': [ (r'\n', Text), (r'[^\S\n]+', Text), (r'#=', Comment.Multiline, "blockcomment"), (r'#.*$', Comment), (r'[\[\](),;]', Punctuation), # symbols # intercept range expressions first (r'(' + allowed_variable + r')(\s*)(:)(' + allowed_variable + ')', bygroups(Name, Text, Operator, Name)), # then match :name which does not follow closing brackets, digits, or the # ::, <:, and :> operators (r'(?<![\]):<>\d.])(:' + allowed_variable + ')', String.Symbol), # type assertions - excludes expressions like ::typeof(sin) and ::avec[1] (r'(?<=::)(\s*)(' + allowed_variable + r')\b(?![(\[])', bygroups(Text, Keyword.Type)), # type comparisons # - MyType <: A or MyType >: A ('(' + allowed_variable + r')(\s*)([<>]:)(\s*)(' + allowed_variable + r')\b(?![(\[])', bygroups(Keyword.Type, Text, Operator, Text, Keyword.Type)), # - <: B or >: B (r'([<>]:)(\s*)(' + allowed_variable + r')\b(?![(\[])', bygroups(Operator, Text, Keyword.Type)), # - A <: or A >: (r'\b(' + allowed_variable + r')(\s*)([<>]:)', bygroups(Keyword.Type, Text, Operator)), # operators # Suffixes aren't actually allowed on all operators, but we'll ignore that # since those cases are invalid Julia code. (words([*OPERATORS_LIST, *DOTTED_OPERATORS_LIST], suffix=operator_suffixes), Operator), (words(['.' + o for o in DOTTED_OPERATORS_LIST], suffix=operator_suffixes), Operator), (words(['...', '..']), Operator), # NOTE # Patterns below work only for definition sites and are thus hardly reliable. # # functions # (r'(function)(\s+)(' + allowed_variable + ')', # bygroups(Keyword, Text, Name.Function)), # chars (r"'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,3}|\\u[a-fA-F0-9]{1,4}|" r"\\U[a-fA-F0-9]{1,6}|[^\\\'\n])'", String.Char), # try to match trailing transpose (r'(?<=[.\w)\]])(\'' + operator_suffixes + ')+', Operator), # raw strings (r'(raw)(""")', bygroups(String.Affix, String), 'tqrawstring'), (r'(raw)(")', bygroups(String.Affix, String), 'rawstring'), # regular expressions (r'(r)(""")', bygroups(String.Affix, String.Regex), 'tqregex'), (r'(r)(")', bygroups(String.Affix, String.Regex), 'regex'), # other strings (r'(' + allowed_variable + ')?(""")', bygroups(String.Affix, String), 'tqstring'), (r'(' + allowed_variable + ')?(")', bygroups(String.Affix, String), 'string'), # backticks (r'(' + allowed_variable + ')?(```)', bygroups(String.Affix, String.Backtick), 'tqcommand'), (r'(' + allowed_variable + ')?(`)', bygroups(String.Affix, String.Backtick), 'command'), # type names # - names that begin a curly expression ('(' + allowed_variable + r')(\{)', bygroups(Keyword.Type, Punctuation), 'curly'), # - names as part of bare 'where' (r'(where)(\s+)(' + allowed_variable + ')', bygroups(Keyword, Text, Keyword.Type)), # - curly expressions in general (r'(\{)', Punctuation, 'curly'), # - names as part of type declaration (r'(abstract[ \t]+type|primitive[ \t]+type|mutable[ \t]+struct|struct)([\s()]+)(' + allowed_variable + r')', bygroups(Keyword, Text, Keyword.Type)), # macros (r'@' + allowed_variable, Name.Decorator), (words([*OPERATORS_LIST, '..', '.', *DOTTED_OPERATORS_LIST], prefix='@', suffix=operator_suffixes), Name.Decorator), # keywords (words(KEYWORD_LIST, suffix=r'\b'), Keyword), # builtin types (words(BUILTIN_LIST, suffix=r'\b'), Keyword.Type), # builtin literals (words(LITERAL_LIST, suffix=r'\b'), Name.Builtin), # names (allowed_variable, Name), # numbers
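# (Illustrative inputs for the rules below -- examples, not an exhaustive
# spec:
#
#     1.5e10   1_000.25   0x1.8p3   0b1010   0o777   1_000_000
#
# note the \.(?!\.) lookahead in the float rule, which keeps a range such
# as 1..10 from being read as the float "1." followed by ".10".)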
(r'(\d+((_\d+)+)?\.(?!\.)(\d+((_\d+)+)?)?|\.\d+((_\d+)+)?)([eEf][+-]?[0-9]+)?', Number.Float), (r'\d+((_\d+)+)?[eEf][+-]?[0-9]+', Number.Float), (r'0x[a-fA-F0-9]+((_[a-fA-F0-9]+)+)?(\.([a-fA-F0-9]+((_[a-fA-F0-9]+)+)?)?)?p[+-]?\d+', Number.Float), (r'0b[01]+((_[01]+)+)?', Number.Bin), (r'0o[0-7]+((_[0-7]+)+)?', Number.Oct), (r'0x[a-fA-F0-9]+((_[a-fA-F0-9]+)+)?', Number.Hex), (r'\d+((_\d+)+)?', Number.Integer), # single dot operator matched last to permit e.g. ".1" as a float (words(['.']), Operator), ], "blockcomment": [ (r'[^=#]', Comment.Multiline), (r'#=', Comment.Multiline, '#push'), (r'=#', Comment.Multiline, '#pop'), (r'[=#]', Comment.Multiline), ], 'curly': [ (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), (allowed_variable, Keyword.Type), include('root'), ], 'tqrawstring': [ (r'"""', String, '#pop'), (r'([^"]|"[^"][^"])+', String), ], 'rawstring': [ (r'"', String, '#pop'), (r'\\"', String.Escape), (r'([^"\\]|\\[^"])+', String), ], # Interpolation is defined as "$" followed by the shortest full expression, which is # something we can't parse. # Include the most common cases here: $word, and $(paren'd expr). 'interp': [ (r'\$' + allowed_variable, String.Interpol), (r'(\$)(\()', bygroups(String.Interpol, Punctuation), 'in-intp'), ], 'in-intp': [ (r'\(', Punctuation, '#push'), (r'\)', Punctuation, '#pop'), include('root'), ], 'string': [ (r'(")(' + allowed_variable + r'|\d+)?', bygroups(String, String.Affix), '#pop'), # FIXME: This escape pattern is not perfect. (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape), include('interp'), # @printf and @sprintf formats (r'%[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]', String.Interpol), (r'[^"$%\\]+', String), (r'.', String), ], 'tqstring': [ (r'(""")(' + allowed_variable + r'|\d+)?', bygroups(String, String.Affix), '#pop'), (r'\\([\\"\'$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)', String.Escape), include('interp'), (r'[^"$%\\]+', String), (r'.', String), ], 'regex': [ (r'(")([imsxa]*)?', bygroups(String.Regex, String.Affix), '#pop'), (r'\\"', String.Regex), (r'[^\\"]+', String.Regex), ], 'tqregex': [ (r'(""")([imsxa]*)?', bygroups(String.Regex, String.Affix), '#pop'), (r'[^"]+', String.Regex), ], 'command': [ (r'(`)(' + allowed_variable + r'|\d+)?', bygroups(String.Backtick, String.Affix), '#pop'), (r'\\[`$]', String.Escape), include('interp'), (r'[^\\`$]+', String.Backtick), (r'.', String.Backtick), ], 'tqcommand': [ (r'(```)(' + allowed_variable + r'|\d+)?', bygroups(String.Backtick, String.Affix), '#pop'), (r'\\\$', String.Escape), include('interp'), (r'[^\\`$]+', String.Backtick), (r'.', String.Backtick), ], } def analyse_text(text): return shebang_matches(text, r'julia') class JuliaConsoleLexer(Lexer): """ For Julia console sessions. Modeled after MatlabSessionLexer. .. 
versionadded:: 1.6 """ name = 'Julia console' aliases = ['jlcon', 'julia-repl'] def get_tokens_unprocessed(self, text): jllexer = JuliaLexer(**self.options) start = 0 curcode = '' insertions = [] output = False error = False for line in text.splitlines(True): if line.startswith('julia>'): insertions.append((len(curcode), [(0, Generic.Prompt, line[:6])])) curcode += line[6:] output = False error = False elif line.startswith('help?>') or line.startswith('shell>'): yield start, Generic.Prompt, line[:6] yield start + 6, Text, line[6:] output = False error = False elif line.startswith(' ') and not output: insertions.append((len(curcode), [(0, Text, line[:6])])) curcode += line[6:] else: if curcode: yield from do_insertions( insertions, jllexer.get_tokens_unprocessed(curcode)) curcode = '' insertions = [] if line.startswith('ERROR: ') or error: yield start, Generic.Error, line error = True else: yield start, Generic.Output, line output = True start += len(line) if curcode: yield from do_insertions( insertions, jllexer.get_tokens_unprocessed(curcode)) pygments-2.11.2/pygments/lexers/srcinfo.py0000644000175000017500000000320214165547207020520 0ustar carstencarsten""" pygments.lexers.srcinfo ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for .SRCINFO files used by Arch Linux Packages. The description of the format can be found in the wiki: https://wiki.archlinux.org/title/.SRCINFO :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, words from pygments.token import Text, Comment, Keyword, Name, Operator, Whitespace __all__ = ['SrcinfoLexer'] keywords = ( 'pkgbase', 'pkgname', 'pkgver', 'pkgrel', 'epoch', 'pkgdesc', 'url', 'install', 'changelog', 'arch', 'groups', 'license', 'noextract', 'options', 'backup', 'validpgpkeys', ) architecture_dependent_keywords = ( 'source', 'depends', 'checkdepends', 'makedepends', 'optdepends', 'provides', 'conflicts', 'replaces', 'md5sums', 'sha1sums', 'sha224sums', 'sha256sums', 'sha384sums', 'sha512sums' ) class SrcinfoLexer(RegexLexer): """Lexer for .SRCINFO files used by Arch Linux Packages. .. versionadded:: 2.11 """ name = 'Srcinfo' aliases = ['srcinfo'] filenames = ['.SRCINFO'] tokens = { 'root': [ (r'\s+', Whitespace), (r'#.*', Comment.Single), (words(keywords), Keyword, 'assignment'), (words(architecture_dependent_keywords, suffix=r'_\w+'), Keyword, 'assignment'), (r'\w+', Name.Variable, 'assignment'), ], 'assignment': [ (r' +', Whitespace), (r'=', Operator, 'value'), ], 'value': [ (r' +', Whitespace), (r'.*', Text, '#pop:2'), ], } pygments-2.11.2/pygments/lexers/r.py0000644000175000017500000001402714165547207017325 0ustar carstencarsten""" pygments.lexers.r ~~~~~~~~~~~~~~~~~ Lexers for the R/S languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, include, do_insertions from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic __all__ = ['RConsoleLexer', 'SLexer', 'RdLexer'] line_re = re.compile('.*?\n') class RConsoleLexer(Lexer): """ For R console transcripts or R CMD BATCH output files. 
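Lines beginning with the ``>`` prompt or ``+`` continuation are re-lexed
as R code and woven back in via ``do_insertions``; everything else is
emitted as ``Generic.Output``.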
""" name = 'RConsole' aliases = ['rconsole', 'rout'] filenames = ['*.Rout'] def get_tokens_unprocessed(self, text): slexer = SLexer(**self.options) current_code_block = '' insertions = [] for match in line_re.finditer(text): line = match.group() if line.startswith('>') or line.startswith('+'): # Colorize the prompt as such, # then put rest of line into current_code_block insertions.append((len(current_code_block), [(0, Generic.Prompt, line[:2])])) current_code_block += line[2:] else: # We have reached a non-prompt line! # If we have stored prompt lines, need to process them first. if current_code_block: # Weave together the prompts and highlight code. yield from do_insertions( insertions, slexer.get_tokens_unprocessed(current_code_block)) # Reset vars for next code block. current_code_block = '' insertions = [] # Now process the actual line itself, this is output from R. yield match.start(), Generic.Output, line # If we happen to end on a code block with nothing after it, need to # process the last code block. This is neither elegant nor DRY so # should be changed. if current_code_block: yield from do_insertions( insertions, slexer.get_tokens_unprocessed(current_code_block)) class SLexer(RegexLexer): """ For S, S-plus, and R source code. .. versionadded:: 0.10 """ name = 'S' aliases = ['splus', 's', 'r'] filenames = ['*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'] mimetypes = ['text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile'] valid_name = r'`[^`\\]*(?:\\.[^`\\]*)*`|(?:[a-zA-Z]|\.[A-Za-z_.])[\w.]*|\.' tokens = { 'comments': [ (r'#.*$', Comment.Single), ], 'valid_name': [ (valid_name, Name), ], 'punctuation': [ (r'\[{1,2}|\]{1,2}|\(|\)|;|,', Punctuation), ], 'keywords': [ (r'(if|else|for|while|repeat|in|next|break|return|switch|function)' r'(?![\w.])', Keyword.Reserved), ], 'operators': [ (r'<>?|-|==|<=|>=|<|>|&&?|!=|\|\|?|\?', Operator), (r'\*|\+|\^|/|!|%[^%]*%|=|~|\$|@|:{1,3}', Operator), ], 'builtin_symbols': [ (r'(NULL|NA(_(integer|real|complex|character)_)?|' r'letters|LETTERS|Inf|TRUE|FALSE|NaN|pi|\.\.(\.|[0-9]+))' r'(?![\w.])', Keyword.Constant), (r'(T|F)\b', Name.Builtin.Pseudo), ], 'numbers': [ # hex number (r'0[xX][a-fA-F0-9]+([pP][0-9]+)?[Li]?', Number.Hex), # decimal number (r'[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+|\.)([eE][+-]?[0-9]+)?[Li]?', Number), ], 'statements': [ include('comments'), # whitespaces (r'\s+', Text), (r'\'', String, 'string_squote'), (r'\"', String, 'string_dquote'), include('builtin_symbols'), include('valid_name'), include('numbers'), include('keywords'), include('punctuation'), include('operators'), ], 'root': [ # calls: (r'(%s)\s*(?=\()' % valid_name, Name.Function), include('statements'), # blocks: (r'\{|\}', Punctuation), # (r'\{', Punctuation, 'block'), (r'.', Text), ], # 'block': [ # include('statements'), # ('\{', Punctuation, '#push'), # ('\}', Punctuation, '#pop') # ], 'string_squote': [ (r'([^\'\\]|\\.)*\'', String, '#pop'), ], 'string_dquote': [ (r'([^"\\]|\\.)*"', String, '#pop'), ], } def analyse_text(text): if re.search(r'[a-z0-9_\])\s]<-(?!-)', text): return 0.11 class RdLexer(RegexLexer): """ Pygments Lexer for R documentation (Rd) files This is a very minimal implementation, highlighting little more than the macros. A description of Rd syntax is found in `Writing R Extensions `_ and `Parsing Rd files `_. .. 
versionadded:: 1.6 """ name = 'Rd' aliases = ['rd'] filenames = ['*.Rd'] mimetypes = ['text/x-r-doc'] # To account for verbatim / LaTeX-like / and R-like areas # would require parsing. tokens = { 'root': [ # catch escaped brackets and percent sign (r'\\[\\{}%]', String.Escape), # comments (r'%.*$', Comment), # special macros with no arguments (r'\\(?:cr|l?dots|R|tab)\b', Keyword.Constant), # macros (r'\\[a-zA-Z]+\b', Keyword), # special preprocessor macros (r'^\s*#(?:ifn?def|endif).*\b', Comment.Preproc), # non-escaped brackets (r'[{}]', Name.Builtin), # everything else (r'[^\\%\n{}]+', Text), (r'.', Text), ] } pygments-2.11.2/pygments/lexers/kuin.py0000644000175000017500000002500514165547207020030 0ustar carstencarsten""" pygments.lexers.kuin ~~~~~~~~~~~~~~~~~~~~ Lexers for the Kuin language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, using, this, bygroups, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, Number, Punctuation __all__ = ['KuinLexer'] class KuinLexer(RegexLexer): """ For `Kuin `_ source code .. versionadded:: 2.9 """ name = 'Kuin' aliases = ['kuin'] filenames = ['*.kn'] tokens = { 'root': [ include('statement'), ], 'statement': [ # Whitespace / Comment include('whitespace'), # Block-statement (r'(\+?[ \t]*\*?[ \t]*\bfunc)([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*)', bygroups(Keyword, using(this), Name.Function), 'func_'), (r'\b(class)([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*)', bygroups(Keyword, using(this), Name.Class), 'class_'), (r'\b(enum)([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*)', bygroups(Keyword, using(this), Name.Constant), 'enum_'), (r'\b(block)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'block_'), (r'\b(ifdef)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'ifdef_'), (r'\b(if)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'if_'), (r'\b(switch)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'switch_'), (r'\b(while)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'while_'), (r'\b(for)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'for_'), (r'\b(foreach)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'foreach_'), (r'\b(try)\b(?:([ \t]+(?:\n\s*\|)*[ \t]*)([a-zA-Z_][0-9a-zA-Z_]*))?', bygroups(Keyword, using(this), Name.Other), 'try_'), # Line-statement (r'\b(do)\b', Keyword, 'do'), (r'(\+?[ \t]*\bvar)\b', Keyword, 'var'), (r'\b(const)\b', Keyword, 'const'), (r'\b(ret)\b', Keyword, 'ret'), (r'\b(throw)\b', Keyword, 'throw'), (r'\b(alias)\b', Keyword, 'alias'), (r'\b(assert)\b', Keyword, 'assert'), (r'\|', Text, 'continued_line'), (r'[ \t]*\n', Text), ], # Whitespace / Comment 'whitespace': [ (r'^[ \t]*;.*', Comment.Single), (r'[ \t]+(?![; \t])', Text), (r'\{', Comment.Multiline, 'multiline_comment'), ], 'multiline_comment': [ (r'\{', Comment.Multiline, 'multiline_comment'), (r'(?:\s*;.*|[^{}\n]+)', Comment.Multiline), (r'\n', Comment.Multiline), (r'\}', Comment.Multiline, '#pop'), ], # Block-statement 'func_': [ include('expr'), (r'\n', Text, 'func'), ], 'func': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(func)\b', 
bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), ], 'class_': [ include('expr'), (r'\n', Text, 'class'), ], 'class': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(class)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), ], 'enum_': [ include('expr'), (r'\n', Text, 'enum'), ], 'enum': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(enum)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('expr'), (r'\n', Text), ], 'block_': [ include('expr'), (r'\n', Text, 'block'), ], 'block': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(block)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), include('break'), include('skip'), ], 'ifdef_': [ include('expr'), (r'\n', Text, 'ifdef'), ], 'ifdef': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(ifdef)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), (words(('rls', 'dbg'), prefix=r'\b', suffix=r'\b'), Keyword.Constant, 'ifdef_sp'), include('statement'), include('break'), include('skip'), ], 'ifdef_sp': [ include('expr'), (r'\n', Text, '#pop'), ], 'if_': [ include('expr'), (r'\n', Text, 'if'), ], 'if': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(if)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), (words(('elif', 'else'), prefix=r'\b', suffix=r'\b'), Keyword, 'if_sp'), include('statement'), include('break'), include('skip'), ], 'if_sp': [ include('expr'), (r'\n', Text, '#pop'), ], 'switch_': [ include('expr'), (r'\n', Text, 'switch'), ], 'switch': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(switch)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), (words(('case', 'default', 'to'), prefix=r'\b', suffix=r'\b'), Keyword, 'switch_sp'), include('statement'), include('break'), include('skip'), ], 'switch_sp': [ include('expr'), (r'\n', Text, '#pop'), ], 'while_': [ include('expr'), (r'\n', Text, 'while'), ], 'while': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(while)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), include('break'), include('skip'), ], 'for_': [ include('expr'), (r'\n', Text, 'for'), ], 'for': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(for)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), include('break'), include('skip'), ], 'foreach_': [ include('expr'), (r'\n', Text, 'foreach'), ], 'foreach': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(foreach)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), include('statement'), include('break'), include('skip'), ], 'try_': [ include('expr'), (r'\n', Text, 'try'), ], 'try': [ (r'\b(end)([ \t]+(?:\n\s*\|)*[ \t]*)(try)\b', bygroups(Keyword, using(this), Keyword), '#pop:2'), (words(('catch', 'finally', 'to'), prefix=r'\b', suffix=r'\b'), Keyword, 'try_sp'), include('statement'), include('break'), include('skip'), ], 'try_sp': [ include('expr'), (r'\n', Text, '#pop'), ], # Line-statement 'break': [ (r'\b(break)\b([ \t]+)([a-zA-Z_][0-9a-zA-Z_]*)', bygroups(Keyword, using(this), Name.Other)), ], 'skip': [ (r'\b(skip)\b([ \t]+)([a-zA-Z_][0-9a-zA-Z_]*)', bygroups(Keyword, using(this), Name.Other)), ], 'alias': [ include('expr'), (r'\n', Text, '#pop'), ], 'assert': [ include('expr'), (r'\n', Text, '#pop'), ], 'const': [ include('expr'), (r'\n', Text, '#pop'), ], 'do': [ include('expr'), (r'\n', Text, '#pop'), ], 'ret': [ include('expr'), (r'\n', Text, '#pop'), ], 'throw': [ include('expr'), (r'\n', Text, '#pop'), ], 'var': [ include('expr'), (r'\n', Text, '#pop'), ], 'continued_line': [ include('expr'), (r'\n', Text, '#pop'), ], 'expr': [ # Whitespace / Comment include('whitespace'), # Punctuation 
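# (Punctuation first, then keywords, literals and operators.  Every
# line-statement state above simply includes this shared 'expr' state and
# pops at the newline.  The number rules further down accept Kuin's radix
# literals, e.g. 2#1010, 8#777 or 16#FFb64 -- base#digits with an optional
# b8/b16/b32/b64 width suffix.)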
(r'\(', Punctuation,), (r'\)', Punctuation,), (r'\[', Punctuation,), (r'\]', Punctuation,), (r',', Punctuation), # Keyword (words(( 'true', 'false', 'null', 'inf' ), prefix=r'\b', suffix=r'\b'), Keyword.Constant), (words(( 'me' ), prefix=r'\b', suffix=r'\b'), Keyword), (words(( 'bit16', 'bit32', 'bit64', 'bit8', 'bool', 'char', 'class', 'dict', 'enum', 'float', 'func', 'int', 'list', 'queue', 'stack' ), prefix=r'\b', suffix=r'\b'), Keyword.Type), # Number (r'\b[0-9]\.[0-9]+(?!\.)(:?e[\+-][0-9]+)?\b', Number.Float), (r'\b2#[01]+(?:b(?:8|16|32|64))?\b', Number.Bin), (r'\b8#[0-7]+(?:b(?:8|16|32|64))?\b', Number.Oct), (r'\b16#[0-9A-F]+(?:b(?:8|16|32|64))?\b', Number.Hex), (r'\b[0-9]+(?:b(?:8|16|32|64))?\b', Number.Decimal), # String / Char (r'"', String.Double, 'string'), (r"'(?:\\.|.)+?'", String.Char), # Operator (r'(?:\.|\$(?:>|<)?)', Operator), (r'(?:\^)', Operator), (r'(?:\+|-|!|##?)', Operator), (r'(?:\*|/|%)', Operator), (r'(?:~)', Operator), (r'(?:(?:=|<>)(?:&|\$)?|<=?|>=?)', Operator), (r'(?:&)', Operator), (r'(?:\|)', Operator), (r'(?:\?)', Operator), (r'(?::(?::|\+|-|\*|/|%|\^|~)?)', Operator), # Identifier (r"\b([a-zA-Z_][0-9a-zA-Z_]*)(?=@)\b", Name), (r"(@)?\b([a-zA-Z_][0-9a-zA-Z_]*)\b", bygroups(Name.Other, Name.Variable)), ], # String 'string': [ (r'(?:\\[^{\n]|[^"\\])+', String.Double), (r'\\\{', String.Double, 'toStrInString'), (r'"', String.Double, '#pop'), ], 'toStrInString': [ include('expr'), (r'\}', String.Double, '#pop'), ], } pygments-2.11.2/pygments/lexers/eiffel.py0000644000175000017500000000514714165547207020321 0ustar carstencarsten""" pygments.lexers.eiffel ~~~~~~~~~~~~~~~~~~~~~~ Lexer for the Eiffel language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, words, bygroups from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['EiffelLexer'] class EiffelLexer(RegexLexer): """ For `Eiffel `_ source code. .. versionadded:: 2.0 """ name = 'Eiffel' aliases = ['eiffel'] filenames = ['*.e'] mimetypes = ['text/x-eiffel'] tokens = { 'root': [ (r'[^\S\n]+', Whitespace), (r'--.*?$', Comment.Single), (r'[^\S\n]+', Whitespace), # Please note that keyword and operator are case insensitive. 
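# (A quick illustration with the stdlib re module -- the inline (?i) flag
# below makes e.g. re.match(r'(?i)(true|false|void)\b', 'VOID') succeed,
# so TRUE, True and true all lex as Keyword.Constant.)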
(r'(?i)(true|false|void|current|result|precursor)\b', Keyword.Constant), (r'(?i)(not|xor|implies|or)\b', Operator.Word), (r'(?i)(and)(?:(\s+)(then))?\b', bygroups(Operator.Word, Whitespace, Operator.Word)), (r'(?i)(or)(?:(\s+)(else))?\b', bygroups(Operator.Word, Whitespace, Operator.Word)), (words(( 'across', 'agent', 'alias', 'all', 'as', 'assign', 'attached', 'attribute', 'check', 'class', 'convert', 'create', 'debug', 'deferred', 'detachable', 'do', 'else', 'elseif', 'end', 'ensure', 'expanded', 'export', 'external', 'feature', 'from', 'frozen', 'if', 'inherit', 'inspect', 'invariant', 'like', 'local', 'loop', 'none', 'note', 'obsolete', 'old', 'once', 'only', 'redefine', 'rename', 'require', 'rescue', 'retry', 'select', 'separate', 'then', 'undefine', 'until', 'variant', 'when'), prefix=r'(?i)\b', suffix=r'\b'), Keyword.Reserved), (r'"\[([^\]%]|%(.|\n)|\][^"])*?\]"', String), (r'"([^"%\n]|%.)*?"', String), include('numbers'), (r"'([^'%]|%'|%%)'", String.Char), (r"(//|\\\\|>=|<=|:=|/=|~|/~|[\\?!#%&@|+/\-=>*$<^\[\]])", Operator), (r"([{}():;,.])", Punctuation), (r'([a-z]\w*)|([A-Z][A-Z0-9_]*[a-z]\w*)', Name), (r'([A-Z][A-Z0-9_]*)', Name.Class), (r'\n+', Whitespace), ], 'numbers': [ (r'0[xX][a-fA-F0-9]+', Number.Hex), (r'0[bB][01]+', Number.Bin), (r'0[cC][0-7]+', Number.Oct), (r'([0-9]+\.[0-9]*)|([0-9]*\.[0-9]+)', Number.Float), (r'[0-9]+', Number.Integer), ], } pygments-2.11.2/pygments/lexers/rebol.py0000644000175000017500000004425014165547207020170 0ustar carstencarsten""" pygments.lexers.rebol ~~~~~~~~~~~~~~~~~~~~~ Lexers for the REBOL and related languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Generic, Whitespace __all__ = ['RebolLexer', 'RedLexer'] class RebolLexer(RegexLexer): """ A `REBOL `_ lexer. .. 
versionadded:: 1.1 """ name = 'REBOL' aliases = ['rebol'] filenames = ['*.r', '*.r3', '*.reb'] mimetypes = ['text/x-rebol'] flags = re.IGNORECASE | re.MULTILINE escape_re = r'(?:\^\([0-9a-f]{1,4}\)*)' def word_callback(lexer, match): word = match.group() if re.match(".*:$", word): yield match.start(), Generic.Subheading, word elif re.match( r'(native|alias|all|any|as-string|as-binary|bind|bound\?|case|' r'catch|checksum|comment|debase|dehex|exclude|difference|disarm|' r'either|else|enbase|foreach|remove-each|form|free|get|get-env|if|' r'in|intersect|loop|minimum-of|maximum-of|mold|new-line|' r'new-line\?|not|now|prin|print|reduce|compose|construct|repeat|' r'reverse|save|script\?|set|shift|switch|throw|to-hex|trace|try|' r'type\?|union|unique|unless|unprotect|unset|until|use|value\?|' r'while|compress|decompress|secure|open|close|read|read-io|' r'write-io|write|update|query|wait|input\?|exp|log-10|log-2|' r'log-e|square-root|cosine|sine|tangent|arccosine|arcsine|' r'arctangent|protect|lowercase|uppercase|entab|detab|connected\?|' r'browse|launch|stats|get-modes|set-modes|to-local-file|' r'to-rebol-file|encloak|decloak|create-link|do-browser|bind\?|' r'hide|draw|show|size-text|textinfo|offset-to-caret|' r'caret-to-offset|local-request-file|rgb-to-hsv|hsv-to-rgb|' r'crypt-strength\?|dh-make-key|dh-generate-key|dh-compute-key|' r'dsa-make-key|dsa-generate-key|dsa-make-signature|' r'dsa-verify-signature|rsa-make-key|rsa-generate-key|' r'rsa-encrypt)$', word): yield match.start(), Name.Builtin, word elif re.match( r'(add|subtract|multiply|divide|remainder|power|and~|or~|xor~|' r'minimum|maximum|negate|complement|absolute|random|head|tail|' r'next|back|skip|at|pick|first|second|third|fourth|fifth|sixth|' r'seventh|eighth|ninth|tenth|last|path|find|select|make|to|copy\*|' r'insert|remove|change|poke|clear|trim|sort|min|max|abs|cp|' r'copy)$', word): yield match.start(), Name.Function, word elif re.match( r'(error|source|input|license|help|install|echo|Usage|with|func|' r'throw-on-error|function|does|has|context|probe|\?\?|as-pair|' r'mod|modulo|round|repend|about|set-net|append|join|rejoin|reform|' r'remold|charset|array|replace|move|extract|forskip|forall|alter|' r'first+|also|take|for|forever|dispatch|attempt|what-dir|' r'change-dir|clean-path|list-dir|dirize|rename|split-path|delete|' r'make-dir|delete-dir|in-dir|confirm|dump-obj|upgrade|what|' r'build-tag|process-source|build-markup|decode-cgi|read-cgi|' r'write-user|save-user|set-user-name|protect-system|parse-xml|' r'cvs-date|cvs-version|do-boot|get-net-info|desktop|layout|' r'scroll-para|get-face|alert|set-face|uninstall|unfocus|' r'request-dir|center-face|do-events|net-error|decode-url|' r'parse-header|parse-header-date|parse-email-addrs|import-email|' r'send|build-attach-body|resend|show-popup|hide-popup|open-events|' r'find-key-face|do-face|viewtop|confine|find-window|' r'insert-event-func|remove-event-func|inform|dump-pane|dump-face|' r'flag-face|deflag-face|clear-fields|read-net|vbug|path-thru|' r'read-thru|load-thru|do-thru|launch-thru|load-image|' r'request-download|do-face-alt|set-font|set-para|get-style|' r'set-style|make-face|stylize|choose|hilight-text|hilight-all|' r'unlight-text|focus|scroll-drag|clear-face|reset-face|scroll-face|' r'resize-face|load-stock|load-stock-block|notify|request|flash|' r'request-color|request-pass|request-text|request-list|' r'request-date|request-file|dbug|editor|link-relative-path|' r'emailer|parse-error)$', word): yield match.start(), Keyword.Namespace, word elif re.match( 
r'(halt|quit|do|load|q|recycle|call|run|ask|parse|view|unview|' r'return|exit|break)$', word): yield match.start(), Name.Exception, word elif re.match('REBOL$', word): yield match.start(), Generic.Heading, word elif re.match("to-.*", word): yield match.start(), Keyword, word elif re.match(r'(\+|-|\*|/|//|\*\*|and|or|xor|=\?|=|==|<>|<|>|<=|>=)$', word): yield match.start(), Operator, word elif re.match(r".*\?$", word): yield match.start(), Keyword, word elif re.match(r".*\!$", word): yield match.start(), Keyword.Type, word elif re.match("'.*", word): yield match.start(), Name.Variable.Instance, word # lit-word elif re.match("#.*", word): yield match.start(), Name.Label, word # issue elif re.match("%.*", word): yield match.start(), Name.Decorator, word # file else: yield match.start(), Name.Variable, word tokens = { 'root': [ (r'[^R]+', Comment), (r'REBOL\s+\[', Generic.Strong, 'script'), (r'R', Comment) ], 'script': [ (r'\s+', Text), (r'#"', String.Char, 'char'), (r'#\{[0-9a-f]*\}', Number.Hex), (r'2#\{', Number.Hex, 'bin2'), (r'64#\{[0-9a-z+/=\s]*\}', Number.Hex), (r'"', String, 'string'), (r'\{', String, 'string2'), (r';#+.*\n', Comment.Special), (r';\*+.*\n', Comment.Preproc), (r';.*\n', Comment), (r'%"', Name.Decorator, 'stringFile'), (r'%[^(^{")\s\[\]]+', Name.Decorator), (r'[+-]?([a-z]{1,3})?\$\d+(\.\d+)?', Number.Float), # money (r'[+-]?\d+\:\d+(\:\d+)?(\.\d+)?', String.Other), # time (r'\d+[\-/][0-9a-z]+[\-/]\d+(\/\d+\:\d+((\:\d+)?' r'([.\d+]?([+-]?\d+:\d+)?)?)?)?', String.Other), # date (r'\d+(\.\d+)+\.\d+', Keyword.Constant), # tuple (r'\d+X\d+', Keyword.Constant), # pair (r'[+-]?\d+(\'\d+)?([.,]\d*)?E[+-]?\d+', Number.Float), (r'[+-]?\d+(\'\d+)?[.,]\d*', Number.Float), (r'[+-]?\d+(\'\d+)?', Number), (r'[\[\]()]', Generic.Strong), (r'[a-z]+[^(^{"\s:)]*://[^(^{"\s)]*', Name.Decorator), # url (r'mailto:[^(^{"@\s)]+@[^(^{"@\s)]+', Name.Decorator), # url (r'[^(^{"@\s)]+@[^(^{"@\s)]+', Name.Decorator), # email (r'comment\s"', Comment, 'commentString1'), (r'comment\s\{', Comment, 'commentString2'), (r'comment\s\[', Comment, 'commentBlock'), (r'comment\s[^(\s{"\[]+', Comment), (r'/[^(^{")\s/[\]]*', Name.Attribute), (r'([^(^{")\s/[\]]+)(?=[:({"\s/\[\]])', word_callback), (r'<[\w:.-]*>', Name.Tag), (r'<[^(<>\s")]+', Name.Tag, 'tag'), (r'([^(^{")\s]+)', Text), ], 'string': [ (r'[^(^")]+', String), (escape_re, String.Escape), (r'[(|)]+', String), (r'\^.', String.Escape), (r'"', String, '#pop'), ], 'string2': [ (r'[^(^{})]+', String), (escape_re, String.Escape), (r'[(|)]+', String), (r'\^.', String.Escape), (r'\{', String, '#push'), (r'\}', String, '#pop'), ], 'stringFile': [ (r'[^(^")]+', Name.Decorator), (escape_re, Name.Decorator), (r'\^.', Name.Decorator), (r'"', Name.Decorator, '#pop'), ], 'char': [ (escape_re + '"', String.Char, '#pop'), (r'\^."', String.Char, '#pop'), (r'."', String.Char, '#pop'), ], 'tag': [ (escape_re, Name.Tag), (r'"', Name.Tag, 'tagString'), (r'[^(<>\r\n")]+', Name.Tag), (r'>', Name.Tag, '#pop'), ], 'tagString': [ (r'[^(^")]+', Name.Tag), (escape_re, Name.Tag), (r'[(|)]+', Name.Tag), (r'\^.', Name.Tag), (r'"', Name.Tag, '#pop'), ], 'tuple': [ (r'(\d+\.)+', Keyword.Constant), (r'\d+', Keyword.Constant, '#pop'), ], 'bin2': [ (r'\s+', Number.Hex), (r'([01]\s*){8}', Number.Hex), (r'\}', Number.Hex, '#pop'), ], 'commentString1': [ (r'[^(^")]+', Comment), (escape_re, Comment), (r'[(|)]+', Comment), (r'\^.', Comment), (r'"', Comment, '#pop'), ], 'commentString2': [ (r'[^(^{})]+', Comment), (escape_re, Comment), (r'[(|)]+', Comment), (r'\^.', Comment), (r'\{', Comment, 
'#push'), (r'\}', Comment, '#pop'), ], 'commentBlock': [ (r'\[', Comment, '#push'), (r'\]', Comment, '#pop'), (r'"', Comment, "commentString1"), (r'\{', Comment, "commentString2"), (r'[^(\[\]"{)]+', Comment), ], } def analyse_text(text): """ Check if code contains REBOL header and so it probably not R code """ if re.match(r'^\s*REBOL\s*\[', text, re.IGNORECASE): # The code starts with REBOL header return 1.0 elif re.search(r'\s*REBOL\s*\[', text, re.IGNORECASE): # The code contains REBOL header but also some text before it return 0.5 class RedLexer(RegexLexer): """ A `Red-language `_ lexer. .. versionadded:: 2.0 """ name = 'Red' aliases = ['red', 'red/system'] filenames = ['*.red', '*.reds'] mimetypes = ['text/x-red', 'text/x-red-system'] flags = re.IGNORECASE | re.MULTILINE escape_re = r'(?:\^\([0-9a-f]{1,4}\)*)' def word_callback(lexer, match): word = match.group() if re.match(".*:$", word): yield match.start(), Generic.Subheading, word elif re.match(r'(if|unless|either|any|all|while|until|loop|repeat|' r'foreach|forall|func|function|does|has|switch|' r'case|reduce|compose|get|set|print|prin|equal\?|' r'not-equal\?|strict-equal\?|lesser\?|greater\?|lesser-or-equal\?|' r'greater-or-equal\?|same\?|not|type\?|stats|' r'bind|union|replace|charset|routine)$', word): yield match.start(), Name.Builtin, word elif re.match(r'(make|random|reflect|to|form|mold|absolute|add|divide|multiply|negate|' r'power|remainder|round|subtract|even\?|odd\?|and~|complement|or~|xor~|' r'append|at|back|change|clear|copy|find|head|head\?|index\?|insert|' r'length\?|next|pick|poke|remove|reverse|select|sort|skip|swap|tail|tail\?|' r'take|trim|create|close|delete|modify|open|open\?|query|read|rename|' r'update|write)$', word): yield match.start(), Name.Function, word elif re.match(r'(yes|on|no|off|true|false|tab|cr|lf|newline|escape|slash|sp|space|null|' r'none|crlf|dot|null-byte)$', word): yield match.start(), Name.Builtin.Pseudo, word elif re.match(r'(#system-global|#include|#enum|#define|#either|#if|#import|#export|' r'#switch|#default|#get-definition)$', word): yield match.start(), Keyword.Namespace, word elif re.match(r'(system|halt|quit|quit-return|do|load|q|recycle|call|run|ask|parse|' r'raise-error|return|exit|break|alias|push|pop|probe|\?\?|spec-of|body-of|' r'quote|forever)$', word): yield match.start(), Name.Exception, word elif re.match(r'(action\?|block\?|char\?|datatype\?|file\?|function\?|get-path\?|zero\?|' r'get-word\?|integer\?|issue\?|lit-path\?|lit-word\?|logic\?|native\?|' r'op\?|paren\?|path\?|refinement\?|set-path\?|set-word\?|string\?|unset\?|' r'any-struct\?|none\?|word\?|any-series\?)$', word): yield match.start(), Keyword, word elif re.match(r'(JNICALL|stdcall|cdecl|infix)$', word): yield match.start(), Keyword.Namespace, word elif re.match("to-.*", word): yield match.start(), Keyword, word elif re.match(r'(\+|-\*\*|-|\*\*|//|/|\*|and|or|xor|=\?|===|==|=|<>|<=|>=|' r'<<<|>>>|<<|>>|<|>%)$', word): yield match.start(), Operator, word elif re.match(r".*\!$", word): yield match.start(), Keyword.Type, word elif re.match("'.*", word): yield match.start(), Name.Variable.Instance, word # lit-word elif re.match("#.*", word): yield match.start(), Name.Label, word # issue elif re.match("%.*", word): yield match.start(), Name.Decorator, word # file elif re.match(":.*", word): yield match.start(), Generic.Subheading, word # get-word else: yield match.start(), Name.Variable, word tokens = { 'root': [ (r'[^R]+', Comment), (r'Red/System\s+\[', Generic.Strong, 'script'), (r'Red\s+\[', Generic.Strong, 
'script'), (r'R', Comment) ], 'script': [ (r'\s+', Text), (r'#"', String.Char, 'char'), (r'#\{[0-9a-f\s]*\}', Number.Hex), (r'2#\{', Number.Hex, 'bin2'), (r'64#\{[0-9a-z+/=\s]*\}', Number.Hex), (r'([0-9a-f]+)(h)((\s)|(?=[\[\]{}"()]))', bygroups(Number.Hex, Name.Variable, Whitespace)), (r'"', String, 'string'), (r'\{', String, 'string2'), (r';#+.*\n', Comment.Special), (r';\*+.*\n', Comment.Preproc), (r';.*\n', Comment), (r'%"', Name.Decorator, 'stringFile'), (r'%[^(^{")\s\[\]]+', Name.Decorator), (r'[+-]?([a-z]{1,3})?\$\d+(\.\d+)?', Number.Float), # money (r'[+-]?\d+\:\d+(\:\d+)?(\.\d+)?', String.Other), # time (r'\d+[\-/][0-9a-z]+[\-/]\d+(/\d+:\d+((:\d+)?' r'([\.\d+]?([+-]?\d+:\d+)?)?)?)?', String.Other), # date (r'\d+(\.\d+)+\.\d+', Keyword.Constant), # tuple (r'\d+X\d+', Keyword.Constant), # pair (r'[+-]?\d+(\'\d+)?([.,]\d*)?E[+-]?\d+', Number.Float), (r'[+-]?\d+(\'\d+)?[.,]\d*', Number.Float), (r'[+-]?\d+(\'\d+)?', Number), (r'[\[\]()]', Generic.Strong), (r'[a-z]+[^(^{"\s:)]*://[^(^{"\s)]*', Name.Decorator), # url (r'mailto:[^(^{"@\s)]+@[^(^{"@\s)]+', Name.Decorator), # url (r'[^(^{"@\s)]+@[^(^{"@\s)]+', Name.Decorator), # email (r'comment\s"', Comment, 'commentString1'), (r'comment\s\{', Comment, 'commentString2'), (r'comment\s\[', Comment, 'commentBlock'), (r'comment\s[^(\s{"\[]+', Comment), (r'/[^(^{^")\s/[\]]*', Name.Attribute), (r'([^(^{^")\s/[\]]+)(?=[:({"\s/\[\]])', word_callback), (r'<[\w:.-]*>', Name.Tag), (r'<[^(<>\s")]+', Name.Tag, 'tag'), (r'([^(^{")\s]+)', Text), ], 'string': [ (r'[^(^")]+', String), (escape_re, String.Escape), (r'[(|)]+', String), (r'\^.', String.Escape), (r'"', String, '#pop'), ], 'string2': [ (r'[^(^{})]+', String), (escape_re, String.Escape), (r'[(|)]+', String), (r'\^.', String.Escape), (r'\{', String, '#push'), (r'\}', String, '#pop'), ], 'stringFile': [ (r'[^(^")]+', Name.Decorator), (escape_re, Name.Decorator), (r'\^.', Name.Decorator), (r'"', Name.Decorator, '#pop'), ], 'char': [ (escape_re + '"', String.Char, '#pop'), (r'\^."', String.Char, '#pop'), (r'."', String.Char, '#pop'), ], 'tag': [ (escape_re, Name.Tag), (r'"', Name.Tag, 'tagString'), (r'[^(<>\r\n")]+', Name.Tag), (r'>', Name.Tag, '#pop'), ], 'tagString': [ (r'[^(^")]+', Name.Tag), (escape_re, Name.Tag), (r'[(|)]+', Name.Tag), (r'\^.', Name.Tag), (r'"', Name.Tag, '#pop'), ], 'tuple': [ (r'(\d+\.)+', Keyword.Constant), (r'\d+', Keyword.Constant, '#pop'), ], 'bin2': [ (r'\s+', Number.Hex), (r'([01]\s*){8}', Number.Hex), (r'\}', Number.Hex, '#pop'), ], 'commentString1': [ (r'[^(^")]+', Comment), (escape_re, Comment), (r'[(|)]+', Comment), (r'\^.', Comment), (r'"', Comment, '#pop'), ], 'commentString2': [ (r'[^(^{})]+', Comment), (escape_re, Comment), (r'[(|)]+', Comment), (r'\^.', Comment), (r'\{', Comment, '#push'), (r'\}', Comment, '#pop'), ], 'commentBlock': [ (r'\[', Comment, '#push'), (r'\]', Comment, '#pop'), (r'"', Comment, "commentString1"), (r'\{', Comment, "commentString2"), (r'[^(\[\]"{)]+', Comment), ], } pygments-2.11.2/pygments/lexers/haxe.py0000644000175000017500000007433314165547207020017 0ustar carstencarsten""" pygments.lexers.haxe ~~~~~~~~~~~~~~~~~~~~ Lexers for Haxe and related stuff. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import re from pygments.lexer import ExtendedRegexLexer, RegexLexer, include, bygroups, \ default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Whitespace __all__ = ['HaxeLexer', 'HxmlLexer'] class HaxeLexer(ExtendedRegexLexer): """ For Haxe source code (http://haxe.org/). .. versionadded:: 1.3 """ name = 'Haxe' aliases = ['haxe', 'hxsl', 'hx'] filenames = ['*.hx', '*.hxsl'] mimetypes = ['text/haxe', 'text/x-haxe', 'text/x-hx'] # keywords extracted from lexer.mll in the haxe compiler source keyword = (r'(?:function|class|static|var|if|else|while|do|for|' r'break|return|continue|extends|implements|import|' r'switch|case|default|public|private|try|untyped|' r'catch|new|this|throw|extern|enum|in|interface|' r'cast|override|dynamic|typedef|package|' r'inline|using|null|true|false|abstract)\b') # idtype in lexer.mll typeid = r'_*[A-Z]\w*' # combined ident and dollar and idtype ident = r'(?:_*[a-z]\w*|_+[0-9]\w*|' + typeid + r'|_+|\$\w+)' binop = (r'(?:%=|&=|\|=|\^=|\+=|\-=|\*=|/=|<<=|>\s*>\s*=|>\s*>\s*>\s*=|==|' r'!=|<=|>\s*=|&&|\|\||<<|>>>|>\s*>|\.\.\.|<|>|%|&|\||\^|\+|\*|' r'/|\-|=>|=)') # ident except keywords ident_no_keyword = r'(?!' + keyword + ')' + ident flags = re.DOTALL | re.MULTILINE preproc_stack = [] def preproc_callback(self, match, ctx): proc = match.group(2) if proc == 'if': # store the current stack self.preproc_stack.append(ctx.stack[:]) elif proc in ['else', 'elseif']: # restore the stack back to right before #if if self.preproc_stack: ctx.stack = self.preproc_stack[-1][:] elif proc == 'end': # remove the saved stack of previous #if if self.preproc_stack: self.preproc_stack.pop() # #if and #elseif should follow by an expr if proc in ['if', 'elseif']: ctx.stack.append('preproc-expr') # #error can be optionally follow by the error msg if proc in ['error']: ctx.stack.append('preproc-error') yield match.start(), Comment.Preproc, '#' + proc ctx.pos = match.end() tokens = { 'root': [ include('spaces'), include('meta'), (r'(?:package)\b', Keyword.Namespace, ('semicolon', 'package')), (r'(?:import)\b', Keyword.Namespace, ('semicolon', 'import')), (r'(?:using)\b', Keyword.Namespace, ('semicolon', 'using')), (r'(?:extern|private)\b', Keyword.Declaration), (r'(?:abstract)\b', Keyword.Declaration, 'abstract'), (r'(?:class|interface)\b', Keyword.Declaration, 'class'), (r'(?:enum)\b', Keyword.Declaration, 'enum'), (r'(?:typedef)\b', Keyword.Declaration, 'typedef'), # top-level expression # although it is not supported in haxe, but it is common to write # expression in web pages the positive lookahead here is to prevent # an infinite loop at the EOF (r'(?=.)', Text, 'expr-statement'), ], # space/tab/comment/preproc 'spaces': [ (r'\s+', Whitespace), (r'//[^\n\r]*', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'(#)(if|elseif|else|end|error)\b', preproc_callback), ], 'string-single-interpol': [ (r'\$\{', String.Interpol, ('string-interpol-close', 'expr')), (r'\$\$', String.Escape), (r'\$(?=' + ident + ')', String.Interpol, 'ident'), include('string-single'), ], 'string-single': [ (r"'", String.Single, '#pop'), (r'\\.', String.Escape), (r'.', String.Single), ], 'string-double': [ (r'"', String.Double, '#pop'), (r'\\.', String.Escape), (r'.', String.Double), ], 'string-interpol-close': [ (r'\$'+ident, String.Interpol), (r'\}', String.Interpol, '#pop'), ], 'package': [ include('spaces'), (ident, Name.Namespace), (r'\.', Punctuation, 'import-ident'), default('#pop'), ], 'import': [ include('spaces'), (ident, Name.Namespace), 
(r'\*', Keyword), # wildcard import (r'\.', Punctuation, 'import-ident'), (r'in', Keyword.Namespace, 'ident'), default('#pop'), ], 'import-ident': [ include('spaces'), (r'\*', Keyword, '#pop'), # wildcard import (ident, Name.Namespace, '#pop'), ], 'using': [ include('spaces'), (ident, Name.Namespace), (r'\.', Punctuation, 'import-ident'), default('#pop'), ], 'preproc-error': [ (r'\s+', Whitespace), (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), default('#pop'), ], 'preproc-expr': [ (r'\s+', Whitespace), (r'\!', Comment.Preproc), (r'\(', Comment.Preproc, ('#pop', 'preproc-parenthesis')), (ident, Comment.Preproc, '#pop'), # Float (r'\.[0-9]+', Number.Float), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float), (r'[0-9]+\.[0-9]+', Number.Float), (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float), # Int (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), # String (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), ], 'preproc-parenthesis': [ (r'\s+', Whitespace), (r'\)', Comment.Preproc, '#pop'), default('preproc-expr-in-parenthesis'), ], 'preproc-expr-chain': [ (r'\s+', Whitespace), (binop, Comment.Preproc, ('#pop', 'preproc-expr-in-parenthesis')), default('#pop'), ], # same as 'preproc-expr' but able to chain 'preproc-expr-chain' 'preproc-expr-in-parenthesis': [ (r'\s+', Whitespace), (r'\!', Comment.Preproc), (r'\(', Comment.Preproc, ('#pop', 'preproc-expr-chain', 'preproc-parenthesis')), (ident, Comment.Preproc, ('#pop', 'preproc-expr-chain')), # Float (r'\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'preproc-expr-chain')), (r'[0-9]+\.(?!' 
+ ident + r'|\.\.)', Number.Float, ('#pop', 'preproc-expr-chain')), # Int (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'preproc-expr-chain')), (r'[0-9]+', Number.Integer, ('#pop', 'preproc-expr-chain')), # String (r"'", String.Single, ('#pop', 'preproc-expr-chain', 'string-single')), (r'"', String.Double, ('#pop', 'preproc-expr-chain', 'string-double')), ], 'abstract': [ include('spaces'), default(('#pop', 'abstract-body', 'abstract-relation', 'abstract-opaque', 'type-param-constraint', 'type-name')), ], 'abstract-body': [ include('spaces'), (r'\{', Punctuation, ('#pop', 'class-body')), ], 'abstract-opaque': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'type')), default('#pop'), ], 'abstract-relation': [ include('spaces'), (r'(?:to|from)', Keyword.Declaration, 'type'), (r',', Punctuation), default('#pop'), ], 'meta': [ include('spaces'), (r'@', Name.Decorator, ('meta-body', 'meta-ident', 'meta-colon')), ], # optional colon 'meta-colon': [ include('spaces'), (r':', Name.Decorator, '#pop'), default('#pop'), ], # same as 'ident' but set token as Name.Decorator instead of Name 'meta-ident': [ include('spaces'), (ident, Name.Decorator, '#pop'), ], 'meta-body': [ include('spaces'), (r'\(', Name.Decorator, ('#pop', 'meta-call')), default('#pop'), ], 'meta-call': [ include('spaces'), (r'\)', Name.Decorator, '#pop'), default(('#pop', 'meta-call-sep', 'expr')), ], 'meta-call-sep': [ include('spaces'), (r'\)', Name.Decorator, '#pop'), (r',', Punctuation, ('#pop', 'meta-call')), ], 'typedef': [ include('spaces'), default(('#pop', 'typedef-body', 'type-param-constraint', 'type-name')), ], 'typedef-body': [ include('spaces'), (r'=', Operator, ('#pop', 'optional-semicolon', 'type')), ], 'enum': [ include('spaces'), default(('#pop', 'enum-body', 'bracket-open', 'type-param-constraint', 'type-name')), ], 'enum-body': [ include('spaces'), include('meta'), (r'\}', Punctuation, '#pop'), (ident_no_keyword, Name, ('enum-member', 'type-param-constraint')), ], 'enum-member': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'semicolon', 'flag', 'function-param')), default(('#pop', 'semicolon', 'flag')), ], 'class': [ include('spaces'), default(('#pop', 'class-body', 'bracket-open', 'extends', 'type-param-constraint', 'type-name')), ], 'extends': [ include('spaces'), (r'(?:extends|implements)\b', Keyword.Declaration, 'type'), (r',', Punctuation), # the comma is made optional here, since haxe2 # requires the comma but haxe3 does not allow it default('#pop'), ], 'bracket-open': [ include('spaces'), (r'\{', Punctuation, '#pop'), ], 'bracket-close': [ include('spaces'), (r'\}', Punctuation, '#pop'), ], 'class-body': [ include('spaces'), include('meta'), (r'\}', Punctuation, '#pop'), (r'(?:static|public|private|override|dynamic|inline|macro)\b', Keyword.Declaration), default('class-member'), ], 'class-member': [ include('spaces'), (r'(var)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'var')), (r'(function)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'class-method')), ], # local function, anonymous or not 'function-local': [ include('spaces'), (ident_no_keyword, Name.Function, ('#pop', 'optional-expr', 'flag', 'function-param', 'parenthesis-open', 'type-param-constraint')), default(('#pop', 'optional-expr', 'flag', 'function-param', 'parenthesis-open', 'type-param-constraint')), ], 'optional-expr': [ include('spaces'), include('expr'), default('#pop'), ], 'class-method': [ include('spaces'), (ident, Name.Function, ('#pop', 'optional-expr', 'flag', 'function-param', 
'parenthesis-open', 'type-param-constraint')), ], # function arguments 'function-param': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r'\?', Punctuation), (ident_no_keyword, Name, ('#pop', 'function-param-sep', 'assign', 'flag')), ], 'function-param-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'function-param')), ], 'prop-get-set': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'prop-get-set-opt', 'comma', 'prop-get-set-opt')), default('#pop'), ], 'prop-get-set-opt': [ include('spaces'), (r'(?:default|null|never|dynamic|get|set)\b', Keyword, '#pop'), (ident_no_keyword, Text, '#pop'), # custom getter/setter ], 'expr-statement': [ include('spaces'), # makes semicolon optional here, just to avoid checking the last # one is bracket or not. default(('#pop', 'optional-semicolon', 'expr')), ], 'expr': [ include('spaces'), (r'@', Name.Decorator, ('#pop', 'optional-expr', 'meta-body', 'meta-ident', 'meta-colon')), (r'(?:\+\+|\-\-|~(?!/)|!|\-)', Operator), (r'\(', Punctuation, ('#pop', 'expr-chain', 'parenthesis')), (r'(?:static|public|private|override|dynamic|inline)\b', Keyword.Declaration), (r'(?:function)\b', Keyword.Declaration, ('#pop', 'expr-chain', 'function-local')), (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket')), (r'(?:true|false|null)\b', Keyword.Constant, ('#pop', 'expr-chain')), (r'(?:this)\b', Keyword, ('#pop', 'expr-chain')), (r'(?:cast)\b', Keyword, ('#pop', 'expr-chain', 'cast')), (r'(?:try)\b', Keyword, ('#pop', 'catch', 'expr')), (r'(?:var)\b', Keyword.Declaration, ('#pop', 'var')), (r'(?:new)\b', Keyword, ('#pop', 'expr-chain', 'new')), (r'(?:switch)\b', Keyword, ('#pop', 'switch')), (r'(?:if)\b', Keyword, ('#pop', 'if')), (r'(?:do)\b', Keyword, ('#pop', 'do')), (r'(?:while)\b', Keyword, ('#pop', 'while')), (r'(?:for)\b', Keyword, ('#pop', 'for')), (r'(?:untyped|throw)\b', Keyword), (r'(?:return)\b', Keyword, ('#pop', 'optional-expr')), (r'(?:macro)\b', Keyword, ('#pop', 'macro')), (r'(?:continue|break)\b', Keyword, '#pop'), (r'(?:\$\s*[a-z]\b|\$(?!'+ident+'))', Name, ('#pop', 'dollar')), (ident_no_keyword, Name, ('#pop', 'expr-chain')), # Float (r'\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.[0-9]+', Number.Float, ('#pop', 'expr-chain')), (r'[0-9]+\.(?!' 
+ ident + r'|\.\.)', Number.Float, ('#pop', 'expr-chain')), # Int (r'0x[0-9a-fA-F]+', Number.Hex, ('#pop', 'expr-chain')), (r'[0-9]+', Number.Integer, ('#pop', 'expr-chain')), # String (r"'", String.Single, ('#pop', 'expr-chain', 'string-single-interpol')), (r'"', String.Double, ('#pop', 'expr-chain', 'string-double')), # EReg (r'~/(\\\\|\\[^\\]|[^/\\\n])*/[gimsu]*', String.Regex, ('#pop', 'expr-chain')), # Array (r'\[', Punctuation, ('#pop', 'expr-chain', 'array-decl')), ], 'expr-chain': [ include('spaces'), (r'(?:\+\+|\-\-)', Operator), (binop, Operator, ('#pop', 'expr')), (r'(?:in)\b', Keyword, ('#pop', 'expr')), (r'\?', Operator, ('#pop', 'expr', 'ternary', 'expr')), (r'(\.)(' + ident_no_keyword + ')', bygroups(Punctuation, Name)), (r'\[', Punctuation, 'array-access'), (r'\(', Punctuation, 'call'), default('#pop'), ], # macro reification 'macro': [ include('spaces'), include('meta'), (r':', Punctuation, ('#pop', 'type')), (r'(?:extern|private)\b', Keyword.Declaration), (r'(?:abstract)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'abstract')), (r'(?:class|interface)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'macro-class')), (r'(?:enum)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'enum')), (r'(?:typedef)\b', Keyword.Declaration, ('#pop', 'optional-semicolon', 'typedef')), default(('#pop', 'expr')), ], 'macro-class': [ (r'\{', Punctuation, ('#pop', 'class-body')), include('class') ], # cast can be written as "cast expr" or "cast(expr, type)" 'cast': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'parenthesis-close', 'cast-type', 'expr')), default(('#pop', 'expr')), ], # optionally give a type as the 2nd argument of cast() 'cast-type': [ include('spaces'), (r',', Punctuation, ('#pop', 'type')), default('#pop'), ], 'catch': [ include('spaces'), (r'(?:catch)\b', Keyword, ('expr', 'function-param', 'parenthesis-open')), default('#pop'), ], # do-while loop 'do': [ include('spaces'), default(('#pop', 'do-while', 'expr')), ], # the while after do 'do-while': [ include('spaces'), (r'(?:while)\b', Keyword, ('#pop', 'parenthesis', 'parenthesis-open')), ], 'while': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), ], 'for': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'expr', 'parenthesis')), ], 'if': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'else', 'optional-semicolon', 'expr', 'parenthesis')), ], 'else': [ include('spaces'), (r'(?:else)\b', Keyword, ('#pop', 'expr')), default('#pop'), ], 'switch': [ include('spaces'), default(('#pop', 'switch-body', 'bracket-open', 'expr')), ], 'switch-body': [ include('spaces'), (r'(?:case|default)\b', Keyword, ('case-block', 'case')), (r'\}', Punctuation, '#pop'), ], 'case': [ include('spaces'), (r':', Punctuation, '#pop'), default(('#pop', 'case-sep', 'case-guard', 'expr')), ], 'case-sep': [ include('spaces'), (r':', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'case')), ], 'case-guard': [ include('spaces'), (r'(?:if)\b', Keyword, ('#pop', 'parenthesis', 'parenthesis-open')), default('#pop'), ], # optional multiple expr under a case 'case-block': [ include('spaces'), (r'(?!(?:case|default)\b|\})', Keyword, 'expr-statement'), default('#pop'), ], 'new': [ include('spaces'), default(('#pop', 'call', 'parenthesis-open', 'type')), ], 'array-decl': [ include('spaces'), (r'\]', Punctuation, '#pop'), default(('#pop', 'array-decl-sep', 'expr')), ], 'array-decl-sep': [ include('spaces'), (r'\]', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'array-decl')), ], 'array-access': [ 
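# (Entered from 'expr-chain' on '[': the index is lexed as a full 'expr',
# then 'array-access-close' consumes the closing ']', so nested forms like
# a[f(i) + 1] unwind naturally.)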
include('spaces'), default(('#pop', 'array-access-close', 'expr')), ], 'array-access-close': [ include('spaces'), (r'\]', Punctuation, '#pop'), ], 'comma': [ include('spaces'), (r',', Punctuation, '#pop'), ], 'colon': [ include('spaces'), (r':', Punctuation, '#pop'), ], 'semicolon': [ include('spaces'), (r';', Punctuation, '#pop'), ], 'optional-semicolon': [ include('spaces'), (r';', Punctuation, '#pop'), default('#pop'), ], # an identifier that CAN be a Haxe keyword 'ident': [ include('spaces'), (ident, Name, '#pop'), ], 'dollar': [ include('spaces'), (r'\{', Punctuation, ('#pop', 'expr-chain', 'bracket-close', 'expr')), default(('#pop', 'expr-chain')), ], 'type-name': [ include('spaces'), (typeid, Name, '#pop'), ], 'type-full-name': [ include('spaces'), (r'\.', Punctuation, 'ident'), default('#pop'), ], 'type': [ include('spaces'), (r'\?', Punctuation), (ident, Name, ('#pop', 'type-check', 'type-full-name')), (r'\{', Punctuation, ('#pop', 'type-check', 'type-struct')), (r'\(', Punctuation, ('#pop', 'type-check', 'type-parenthesis')), ], 'type-parenthesis': [ include('spaces'), default(('#pop', 'parenthesis-close', 'type')), ], 'type-check': [ include('spaces'), (r'->', Punctuation, ('#pop', 'type')), (r'<(?!=)', Punctuation, 'type-param'), default('#pop'), ], 'type-struct': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r'\?', Punctuation), (r'>', Punctuation, ('comma', 'type')), (ident_no_keyword, Name, ('#pop', 'type-struct-sep', 'type', 'colon')), include('class-body'), ], 'type-struct-sep': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-struct')), ], # type-param can be a normal type or a constant literal... 'type-param-type': [ # Float (r'\.[0-9]+', Number.Float, '#pop'), (r'[0-9]+[eE][+\-]?[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.[0-9]*[eE][+\-]?[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.[0-9]+', Number.Float, '#pop'), (r'[0-9]+\.(?!' + ident + r'|\.\.)', Number.Float, '#pop'), # Int (r'0x[0-9a-fA-F]+', Number.Hex, '#pop'), (r'[0-9]+', Number.Integer, '#pop'), # String (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), # EReg (r'~/(\\\\|\\[^\\]|[^/\\\n])*/[gim]*', String.Regex, '#pop'), # Array (r'\[', Operator, ('#pop', 'array-decl')), include('type'), ], # type-param part of a type # ie. the <K,V> part in Map<K,V> 'type-param': [ include('spaces'), default(('#pop', 'type-param-sep', 'type-param-type')), ], 'type-param-sep': [ include('spaces'), (r'>', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-param')), ], # optional type-param that may include a constraint # ie. <T:Constraint, T:(ConstraintA, ConstraintB)>
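# For reference, the constraint machinery below is exercised by Haxe type
# parameters such as (illustrative snippet, not taken from this repository):
#
#     class Container<T:(Comparable<T>, Serializable)> {
#         public function new() {}
#     }
#
# '<' enters type-param-constraint, ':' is handled by the -flag state, and a
# parenthesised constraint list is consumed by -flag-type-sep until ')'.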
'type-param-constraint': [ include('spaces'), (r'<(?!=)', Punctuation, ('#pop', 'type-param-constraint-sep', 'type-param-constraint-flag', 'type-name')), default('#pop'), ], 'type-param-constraint-sep': [ include('spaces'), (r'>', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'type-param-constraint-sep', 'type-param-constraint-flag', 'type-name')), ], # the optional constraint inside type-param 'type-param-constraint-flag': [ include('spaces'), (r':', Punctuation, ('#pop', 'type-param-constraint-flag-type')), default('#pop'), ], 'type-param-constraint-flag-type': [ include('spaces'), (r'\(', Punctuation, ('#pop', 'type-param-constraint-flag-type-sep', 'type')), default(('#pop', 'type')), ], 'type-param-constraint-flag-type-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, 'type'), ], # a parenthesis expr that contain exactly one expr 'parenthesis': [ include('spaces'), default(('#pop', 'parenthesis-close', 'flag', 'expr')), ], 'parenthesis-open': [ include('spaces'), (r'\(', Punctuation, '#pop'), ], 'parenthesis-close': [ include('spaces'), (r'\)', Punctuation, '#pop'), ], 'var': [ include('spaces'), (ident_no_keyword, Text, ('#pop', 'var-sep', 'assign', 'flag', 'prop-get-set')), ], # optional more var decl. 'var-sep': [ include('spaces'), (r',', Punctuation, ('#pop', 'var')), default('#pop'), ], # optional assignment 'assign': [ include('spaces'), (r'=', Operator, ('#pop', 'expr')), default('#pop'), ], # optional type flag 'flag': [ include('spaces'), (r':', Punctuation, ('#pop', 'type')), default('#pop'), ], # colon as part of a ternary operator (?:) 'ternary': [ include('spaces'), (r':', Operator, '#pop'), ], # function call 'call': [ include('spaces'), (r'\)', Punctuation, '#pop'), default(('#pop', 'call-sep', 'expr')), ], # after a call param 'call-sep': [ include('spaces'), (r'\)', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'call')), ], # bracket can be block or object 'bracket': [ include('spaces'), (r'(?!(?:\$\s*[a-z]\b|\$(?!'+ident+')))' + ident_no_keyword, Name, ('#pop', 'bracket-check')), (r"'", String.Single, ('#pop', 'bracket-check', 'string-single')), (r'"', String.Double, ('#pop', 'bracket-check', 'string-double')), default(('#pop', 'block')), ], 'bracket-check': [ include('spaces'), (r':', Punctuation, ('#pop', 'object-sep', 'expr')), # is object default(('#pop', 'block', 'optional-semicolon', 'expr-chain')), # is block ], # code block 'block': [ include('spaces'), (r'\}', Punctuation, '#pop'), default('expr-statement'), ], # object in key-value pairs 'object': [ include('spaces'), (r'\}', Punctuation, '#pop'), default(('#pop', 'object-sep', 'expr', 'colon', 'ident-or-string')) ], # a key of an object 'ident-or-string': [ include('spaces'), (ident_no_keyword, Name, '#pop'), (r"'", String.Single, ('#pop', 'string-single')), (r'"', String.Double, ('#pop', 'string-double')), ], # after a key-value pair in object 'object-sep': [ include('spaces'), (r'\}', Punctuation, '#pop'), (r',', Punctuation, ('#pop', 'object')), ], } def analyse_text(text): if re.match(r'\w+\s*:\s*\w', text): return 0.3 class HxmlLexer(RegexLexer): """ Lexer for `haXe build `_ files. .. 
versionadded:: 1.6 """ name = 'Hxml' aliases = ['haxeml', 'hxml'] filenames = ['*.hxml'] tokens = { 'root': [ # Separator (r'(--)(next)', bygroups(Punctuation, Generic.Heading)), # Compiler switches with one dash (r'(-)(prompt|debug|v)', bygroups(Punctuation, Keyword.Keyword)), # Compiler switches with two dashes (r'(--)(neko-source|flash-strict|flash-use-stage|no-opt|no-traces|' r'no-inline|times|no-output)', bygroups(Punctuation, Keyword)), # Targets and other options that take an argument (r'(-)(cpp|js|neko|x|as3|swf9?|swf-lib|php|xml|main|lib|D|resource|' r'cp|cmd)( +)(.+)', bygroups(Punctuation, Keyword, Whitespace, String)), # Options that take only numerical arguments (r'(-)(swf-version)( +)(\d+)', bygroups(Punctuation, Keyword, Whitespace, Number.Integer)), # An option that defines the size, the fps and the background color of a Flash movie (r'(-)(swf-header)( +)(\d+)(:)(\d+)(:)(\d+)(:)([A-Fa-f0-9]{6})', bygroups(Punctuation, Keyword, Whitespace, Number.Integer, Punctuation, Number.Integer, Punctuation, Number.Integer, Punctuation, Number.Hex)), # Options with two dashes that take arguments (r'(--)(js-namespace|php-front|php-lib|remap|gen-hx-classes)( +)' r'(.+)', bygroups(Punctuation, Keyword, Whitespace, String)), # Single-line comment; multiline ones are not allowed. (r'#.*', Comment.Single) ] } pygments-2.11.2/pygments/lexers/ampl.py0000644000175000017500000001006414165547207020012 0ustar carstencarsten""" pygments.lexers.ampl ~~~~~~~~~~~~~~~~~~~~ Lexers for the AMPL language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups, using, this, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['AmplLexer'] class AmplLexer(RegexLexer): """ For `AMPL `_ source code. ..
versionadded:: 2.2 """ name = 'Ampl' aliases = ['ampl'] filenames = ['*.run'] tokens = { 'root': [ (r'\n', Text), (r'\s+', Whitespace), (r'#.*?\n', Comment.Single), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (words(( 'call', 'cd', 'close', 'commands', 'data', 'delete', 'display', 'drop', 'end', 'environ', 'exit', 'expand', 'include', 'load', 'model', 'objective', 'option', 'problem', 'purge', 'quit', 'redeclare', 'reload', 'remove', 'reset', 'restore', 'shell', 'show', 'solexpand', 'solution', 'solve', 'update', 'unload', 'xref', 'coeff', 'coef', 'cover', 'obj', 'interval', 'default', 'from', 'to', 'to_come', 'net_in', 'net_out', 'dimen', 'dimension', 'check', 'complements', 'write', 'function', 'pipe', 'format', 'if', 'then', 'else', 'in', 'while', 'repeat', 'for'), suffix=r'\b'), Keyword.Reserved), (r'(integer|binary|symbolic|ordered|circular|reversed|INOUT|IN|OUT|LOCAL)', Keyword.Type), (r'\".*?\"', String.Double), (r'\'.*?\'', String.Single), (r'[()\[\]{},;:]+', Punctuation), (r'\b(\w+)(\.)(astatus|init0|init|lb0|lb1|lb2|lb|lrc|' r'lslack|rc|relax|slack|sstatus|status|ub0|ub1|ub2|' r'ub|urc|uslack|val)', bygroups(Name.Variable, Punctuation, Keyword.Reserved)), (r'(set|param|var|arc|minimize|maximize|subject to|s\.t\.|subj to|' r'node|table|suffix|read table|write table)(\s+)(\w+)', bygroups(Keyword.Declaration, Whitespace, Name.Variable)), (r'(param)(\s*)(:)(\s*)(\w+)(\s*)(:)(\s*)((\w|\s)+)', bygroups(Keyword.Declaration, Whitespace, Punctuation, Whitespace, Name.Variable, Whitespace, Punctuation, Whitespace, Name.Variable)), (r'(let|fix|unfix)(\s*)((?:\{.*\})?)(\s*)(\w+)', bygroups(Keyword.Declaration, Whitespace, using(this), Whitespace, Name.Variable)), (words(( 'abs', 'acos', 'acosh', 'alias', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'ctime', 'cos', 'exp', 'floor', 'log', 'log10', 'max', 'min', 'precision', 'round', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'time', 'trunc', 'Beta', 'Cauchy', 'Exponential', 'Gamma', 'Irand224', 'Normal', 'Normal01', 'Poisson', 'Uniform', 'Uniform01', 'num', 'num0', 'ichar', 'char', 'length', 'substr', 'sprintf', 'match', 'sub', 'gsub', 'print', 'printf', 'next', 'nextw', 'prev', 'prevw', 'first', 'last', 'ord', 'ord0', 'card', 'arity', 'indexarity'), prefix=r'\b', suffix=r'\b'), Name.Builtin), (r'(\+|\-|\*|/|\*\*|=|<=|>=|==|\||\^|<|>|\!|\.\.|:=|\&|\!=|<<|>>)', Operator), (words(( 'or', 'exists', 'forall', 'and', 'in', 'not', 'within', 'union', 'diff', 'difference', 'symdiff', 'inter', 'intersect', 'intersection', 'cross', 'setof', 'by', 'less', 'sum', 'prod', 'product', 'div', 'mod'), suffix=r'\b'), Keyword.Reserved), # Operator.Name but not enough emphasized with that (r'(\d+\.(?!\.)\d*|\.(?!.)\d+)([eE][+-]?\d+)?', Number.Float), (r'\d+([eE][+-]?\d+)?', Number.Integer), (r'[+-]?Infinity', Number.Integer), (r'(\w+|(\.(?!\.)))', Text) ] } pygments-2.11.2/pygments/lexers/rita.py0000644000175000017500000000224214165547207020017 0ustar carstencarsten""" pygments.lexers.rita ~~~~~~~~~~~~~~~~~~~~ Lexers for RITA language :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, this, \ inherit, words from pygments.token import Comment, Operator, Keyword, Name, Literal, Punctuation, Text, Whitespace __all__ = ['RitaLexer'] class RitaLexer(RegexLexer): """ Lexer for `RITA `_ .. 
versionadded:: 2.11 """ name = 'Rita' filenames = ['*.rita'] aliases = ['rita'] mimetypes = ['text/rita'] tokens = { 'root': [ (r'\n', Whitespace), (r'\s+', Whitespace), (r'#(.*?)\n', Comment.Single), (r'@(.*?)\n', Operator), # Yes, whole line as an operator (r'"(\w|\d|\s|(\\")|[\'_\-./,\?\!])+?"', Literal), (r'\'(\w|\d|\s|(\\\')|["_\-./,\?\!])+?\'', Literal), (r'([A-Z_]+)', Keyword), (r'([a-z0-9_]+)', Name), (r'((->)|[!?+*|=])', Operator), (r'[\(\),\{\}]', Punctuation) ] } pygments-2.11.2/pygments/lexers/pointless.py0000644000175000017500000000366014165547207021105 0ustar carstencarsten""" pygments.lexers.pointless ~~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for Pointless. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, words from pygments.token import Comment, Error, Keyword, Name, Number, Operator, \ Punctuation, String, Text __all__ = ['PointlessLexer'] class PointlessLexer(RegexLexer): """ For `Pointless `_ source code. .. versionadded:: 2.7 """ name = 'Pointless' aliases = ['pointless'] filenames = ['*.ptls'] ops = words([ "+", "-", "*", "/", "**", "%", "+=", "-=", "*=", "/=", "**=", "%=", "|>", "=", "==", "!=", "<", ">", "<=", ">=", "=>", "$", "++", ]) keywords = words([ "if", "then", "else", "where", "with", "cond", "case", "and", "or", "not", "in", "as", "for", "requires", "throw", "try", "catch", "when", "yield", "upval", ], suffix=r'\b') tokens = { 'root': [ (r'[ \n\r]+', Text), (r'--.*$', Comment.Single), (r'"""', String, 'multiString'), (r'"', String, 'string'), (r'[\[\](){}:;,.]', Punctuation), (ops, Operator), (keywords, Keyword), (r'\d+|\d*\.\d+', Number), (r'(true|false)\b', Name.Builtin), (r'[A-Z][a-zA-Z0-9]*\b', String.Symbol), (r'output\b', Name.Variable.Magic), (r'(export|import)\b', Keyword.Namespace), (r'[a-z][a-zA-Z0-9]*\b', Name.Variable) ], 'multiString': [ (r'\\.', String.Escape), (r'"""', String, '#pop'), (r'"', String), (r'[^\\"]+', String), ], 'string': [ (r'\\.', String.Escape), (r'"', String, '#pop'), (r'\n', Error), (r'[^\\"]+', String), ], } pygments-2.11.2/pygments/lexers/gdscript.py0000644000175000017500000002570414165547207020707 0ustar carstencarsten""" pygments.lexers.gdscript ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for GDScript. Modified by Daniel J. Ramirez based on the original python.py. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, default, words, \ combined from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ["GDScriptLexer"] line_re = re.compile(".*?\n") class GDScriptLexer(RegexLexer): """ For `GDScript source code `_. """ name = "GDScript" aliases = ["gdscript", "gd"] filenames = ["*.gd"] mimetypes = ["text/x-gdscript", "application/x-gdscript"] def innerstring_rules(ttype): return [ # the old style '%s' % (...) string formatting ( r"%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?" 
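# The pattern being assembled here recognises old-style '%' conversions
# such as '%s', '%(name)04d' or '%.2f' inside string literals and emits
# them as String.Interpol (the example conversions are illustrative).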
"[hlL]?[E-GXc-giorsux%]", String.Interpol, ), # backslashes, quotes and formatting signs must be parsed one at a time (r'[^\\\'"%\n]+', ttype), (r'[\'"\\]', ttype), # unhandled string formatting sign (r"%", ttype), # newlines are an error (use "nl" state) ] tokens = { "root": [ (r"\n", Whitespace), ( r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")', bygroups(Whitespace, String.Affix, String.Doc), ), ( r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')", bygroups(Whitespace, String.Affix, String.Doc), ), (r"[^\S\n]+", Whitespace), (r"#.*$", Comment.Single), (r"[]{}:(),;[]", Punctuation), (r"(\\)(\n)", bygroups(Text, Whitespace)), (r"\\", Text), (r"(in|and|or|not)\b", Operator.Word), ( r"!=|==|<<|>>|&&|\+=|-=|\*=|/=|%=|&=|\|=|\|\||[-~+/*%=<>&^.!|$]", Operator, ), include("keywords"), (r"(func)(\s+)", bygroups(Keyword, Whitespace), "funcname"), (r"(class)(\s+)", bygroups(Keyword, Whitespace), "classname"), include("builtins"), ( '([rR]|[uUbB][rR]|[rR][uUbB])(""")', bygroups(String.Affix, String.Double), "tdqs", ), ( "([rR]|[uUbB][rR]|[rR][uUbB])(''')", bygroups(String.Affix, String.Single), "tsqs", ), ( '([rR]|[uUbB][rR]|[rR][uUbB])(")', bygroups(String.Affix, String.Double), "dqs", ), ( "([rR]|[uUbB][rR]|[rR][uUbB])(')", bygroups(String.Affix, String.Single), "sqs", ), ( '([uUbB]?)(""")', bygroups(String.Affix, String.Double), combined("stringescape", "tdqs"), ), ( "([uUbB]?)(''')", bygroups(String.Affix, String.Single), combined("stringescape", "tsqs"), ), ( '([uUbB]?)(")', bygroups(String.Affix, String.Double), combined("stringescape", "dqs"), ), ( "([uUbB]?)(')", bygroups(String.Affix, String.Single), combined("stringescape", "sqs"), ), include("name"), include("numbers"), ], "keywords": [ ( words( ( "and", "in", "not", "or", "as", "breakpoint", "class", "class_name", "extends", "is", "func", "setget", "signal", "tool", "const", "enum", "export", "onready", "static", "var", "break", "continue", "if", "elif", "else", "for", "pass", "return", "match", "while", "remote", "master", "puppet", "remotesync", "mastersync", "puppetsync", ), suffix=r"\b", ), Keyword, ), ], "builtins": [ ( words( ( "Color8", "ColorN", "abs", "acos", "asin", "assert", "atan", "atan2", "bytes2var", "ceil", "char", "clamp", "convert", "cos", "cosh", "db2linear", "decimals", "dectime", "deg2rad", "dict2inst", "ease", "exp", "floor", "fmod", "fposmod", "funcref", "hash", "inst2dict", "instance_from_id", "is_inf", "is_nan", "lerp", "linear2db", "load", "log", "max", "min", "nearest_po2", "pow", "preload", "print", "print_stack", "printerr", "printraw", "prints", "printt", "rad2deg", "rand_range", "rand_seed", "randf", "randi", "randomize", "range", "round", "seed", "sign", "sin", "sinh", "sqrt", "stepify", "str", "str2var", "tan", "tan", "tanh", "type_exist", "typeof", "var2bytes", "var2str", "weakref", "yield", ), prefix=r"(?`_ source code. .. 
versionadded:: 2.1 """ name = 'SuperCollider' aliases = ['supercollider', 'sc'] filenames = ['*.sc', '*.scd'] mimetypes = ['application/supercollider', 'text/supercollider', ] flags = re.DOTALL | re.MULTILINE tokens = { 'commentsandwhitespace': [ (r'\s+', Text), (r')?', Other), (r'[^[<]+', Other), ], 'nosquarebrackets': [ (r'\[noprocess\]', Comment.Preproc, 'noprocess'), (r'\[', Other), (r'<\?(lasso(script)?|=)', Comment.Preproc, 'anglebrackets'), (r'<(!--.*?-->)?', Other), (r'[^[<]+', Other), ], 'noprocess': [ (r'\[/noprocess\]', Comment.Preproc, '#pop'), (r'\[', Other), (r'[^[]', Other), ], 'squarebrackets': [ (r'\]', Comment.Preproc, '#pop'), include('lasso'), ], 'anglebrackets': [ (r'\?>', Comment.Preproc, '#pop'), include('lasso'), ], 'lassofile': [ (r'\]|\?>', Comment.Preproc, '#pop'), include('lasso'), ], 'whitespacecomments': [ (r'\s+', Text), (r'//.*?\n', Comment.Single), (r'/\*\*!.*?\*/', String.Doc), (r'/\*.*?\*/', Comment.Multiline), ], 'lasso': [ # whitespace/comments include('whitespacecomments'), # literals (r'\d*\.\d+(e[+-]?\d+)?', Number.Float), (r'0x[\da-f]+', Number.Hex), (r'\d+', Number.Integer), (r'(infinity|NaN)\b', Number), (r"'", String.Single, 'singlestring'), (r'"', String.Double, 'doublestring'), (r'`[^`]*`', String.Backtick), # names (r'\$[a-z_][\w.]*', Name.Variable), (r'#([a-z_][\w.]*|\d+\b)', Name.Variable.Instance), (r"(\.\s*)('[a-z_][\w.]*')", bygroups(Name.Builtin.Pseudo, Name.Variable.Class)), (r"(self)(\s*->\s*)('[a-z_][\w.]*')", bygroups(Name.Builtin.Pseudo, Operator, Name.Variable.Class)), (r'(\.\.?\s*)([a-z_][\w.]*(=(?!=))?)', bygroups(Name.Builtin.Pseudo, Name.Other.Member)), (r'(->\\?\s*|&\s*)([a-z_][\w.]*(=(?!=))?)', bygroups(Operator, Name.Other.Member)), (r'(?)(self|inherited|currentcapture|givenblock)\b', Name.Builtin.Pseudo), (r'-(?!infinity)[a-z_][\w.]*', Name.Attribute), (r'::\s*[a-z_][\w.]*', Name.Label), (r'(error_(code|msg)_\w+|Error_AddError|Error_ColumnRestriction|' r'Error_DatabaseConnectionUnavailable|Error_DatabaseTimeout|' r'Error_DeleteError|Error_FieldRestriction|Error_FileNotFound|' r'Error_InvalidDatabase|Error_InvalidPassword|' r'Error_InvalidUsername|Error_ModuleNotFound|' r'Error_NoError|Error_NoPermission|Error_OutOfMemory|' r'Error_ReqColumnMissing|Error_ReqFieldMissing|' r'Error_RequiredColumnMissing|Error_RequiredFieldMissing|' r'Error_UpdateError)\b', Name.Exception), # definitions (r'(define)(\s+)([a-z_][\w.]*)(\s*=>\s*)(type|trait|thread)\b', bygroups(Keyword.Declaration, Text, Name.Class, Operator, Keyword)), (r'(define)(\s+)([a-z_][\w.]*)(\s*->\s*)([a-z_][\w.]*=?|[-+*/%])', bygroups(Keyword.Declaration, Text, Name.Class, Operator, Name.Function), 'signature'), (r'(define)(\s+)([a-z_][\w.]*)', bygroups(Keyword.Declaration, Text, Name.Function), 'signature'), (r'(public|protected|private|provide)(\s+)(([a-z_][\w.]*=?|[-+*/%])' r'(?=\s*\())', bygroups(Keyword, Text, Name.Function), 'signature'), (r'(public|protected|private|provide)(\s+)([a-z_][\w.]*)', bygroups(Keyword, Text, Name.Function)), # keywords (r'(true|false|none|minimal|full|all|void)\b', Keyword.Constant), (r'(local|var|variable|global|data(?=\s))\b', Keyword.Declaration), (r'(array|date|decimal|duration|integer|map|pair|string|tag|xml|' r'null|boolean|bytes|keyword|list|locale|queue|set|stack|' r'staticarray)\b', Keyword.Type), (r'([a-z_][\w.]*)(\s+)(in)\b', bygroups(Name, Text, Keyword)), (r'(let|into)(\s+)([a-z_][\w.]*)', bygroups(Keyword, Text, Name)), (r'require\b', Keyword, 'requiresection'), (r'(/?)(Namespace_Using)\b', bygroups(Punctuation, 
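# These delimiter and keyword states implement Lasso's embedding model:
# '[ ... ]' and '<?lasso ... ?>' switch from host markup into code, and
# '[noprocess]' shields a region from processing. A minimal usage sketch
# for this lexer (requiredelimiters is the real option wired up in
# __init__ below; the sample source is made up):
#
#     from pygments import highlight
#     from pygments.lexers import LassoLexer
#     from pygments.formatters import TerminalFormatter
#
#     src = "<p>[output: 'hello']</p>"
#     print(highlight(src, LassoLexer(requiredelimiters=True),
#                     TerminalFormatter()))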
Keyword.Namespace)), (r'(/?)(Cache|Database_Names|Database_SchemaNames|' r'Database_TableNames|Define_Tag|Define_Type|Email_Batch|' r'Encode_Set|HTML_Comment|Handle|Handle_Error|Header|If|Inline|' r'Iterate|LJAX_Target|Link|Link_CurrentAction|Link_CurrentGroup|' r'Link_CurrentRecord|Link_Detail|Link_FirstGroup|Link_FirstRecord|' r'Link_LastGroup|Link_LastRecord|Link_NextGroup|Link_NextRecord|' r'Link_PrevGroup|Link_PrevRecord|Log|Loop|Output_None|Portal|' r'Private|Protect|Records|Referer|Referrer|Repeating|ResultSet|' r'Rows|Search_Args|Search_Arguments|Select|Sort_Args|' r'Sort_Arguments|Thread_Atomic|Value_List|While|Abort|Case|Else|' r'Fail_If|Fail_IfNot|Fail|If_Empty|If_False|If_Null|If_True|' r'Loop_Abort|Loop_Continue|Loop_Count|Params|Params_Up|Return|' r'Return_Value|Run_Children|SOAP_DefineTag|SOAP_LastRequest|' r'SOAP_LastResponse|Tag_Name|ascending|average|by|define|' r'descending|do|equals|frozen|group|handle_failure|import|in|into|' r'join|let|match|max|min|on|order|parent|protected|provide|public|' r'require|returnhome|skip|split_thread|sum|take|thread|to|trait|' r'type|where|with|yield|yieldhome)\b', bygroups(Punctuation, Keyword)), # other (r',', Punctuation, 'commamember'), (r'(and|or|not)\b', Operator.Word), (r'([a-z_][\w.]*)(\s*::\s*[a-z_][\w.]*)?(\s*=(?!=))', bygroups(Name, Name.Label, Operator)), (r'(/?)([\w.]+)', bygroups(Punctuation, Name.Other)), (r'(=)(n?bw|n?ew|n?cn|lte?|gte?|n?eq|n?rx|ft)\b', bygroups(Operator, Operator.Word)), (r':=|[-+*/%=<>&|!?\\]+', Operator), (r'[{}():;,@^]', Punctuation), ], 'singlestring': [ (r"'", String.Single, '#pop'), (r"[^'\\]+", String.Single), include('escape'), (r"\\", String.Single), ], 'doublestring': [ (r'"', String.Double, '#pop'), (r'[^"\\]+', String.Double), include('escape'), (r'\\', String.Double), ], 'escape': [ (r'\\(U[\da-f]{8}|u[\da-f]{4}|x[\da-f]{1,2}|[0-7]{1,3}|:[^:\n\r]+:|' r'[abefnrtv?"\'\\]|$)', String.Escape), ], 'signature': [ (r'=>', Operator, '#pop'), (r'\)', Punctuation, '#pop'), (r'[(,]', Punctuation, 'parameter'), include('lasso'), ], 'parameter': [ (r'\)', Punctuation, '#pop'), (r'-?[a-z_][\w.]*', Name.Attribute, '#pop'), (r'\.\.\.', Name.Builtin.Pseudo), include('lasso'), ], 'requiresection': [ (r'(([a-z_][\w.]*=?|[-+*/%])(?=\s*\())', Name, 'requiresignature'), (r'(([a-z_][\w.]*=?|[-+*/%])(?=(\s*::\s*[\w.]+)?\s*,))', Name), (r'[a-z_][\w.]*=?|[-+*/%]', Name, '#pop'), (r'::\s*[a-z_][\w.]*', Name.Label), (r',', Punctuation), include('whitespacecomments'), ], 'requiresignature': [ (r'(\)(?=(\s*::\s*[\w.]+)?\s*,))', Punctuation, '#pop'), (r'\)', Punctuation, '#pop:2'), (r'-?[a-z_][\w.]*', Name.Attribute), (r'::\s*[a-z_][\w.]*', Name.Label), (r'\.\.\.', Name.Builtin.Pseudo), (r'[(,]', Punctuation), include('whitespacecomments'), ], 'commamember': [ (r'(([a-z_][\w.]*=?|[-+*/%])' r'(?=\s*(\(([^()]*\([^()]*\))*[^)]*\)\s*)?(::[\w.\s]+)?=>))', Name.Function, 'signature'), include('whitespacecomments'), default('#pop'), ], } def __init__(self, **options): self.builtinshighlighting = get_bool_opt( options, 'builtinshighlighting', True) self.requiredelimiters = get_bool_opt( options, 'requiredelimiters', False) self._builtins = set() self._members = set() if self.builtinshighlighting: from pygments.lexers._lasso_builtins import BUILTINS, MEMBERS for key, value in BUILTINS.items(): self._builtins.update(value) for key, value in MEMBERS.items(): self._members.update(value) RegexLexer.__init__(self, **options) def get_tokens_unprocessed(self, text): stack = ['root'] if self.requiredelimiters: stack.append('delimiters') 
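# The loop that follows remaps generic Name tokens to Name.Builtin whenever
# the lowercased value appears in the builtin/member sets loaded from
# _lasso_builtins in __init__ above; every other token passes through
# unchanged. Sketch of the effect (hypothetical token values):
#
#     'string_uppercase' -> yielded as Name.Builtin instead of Name.Other
#     'my_local_thing'   -> yielded unchanged as Name.Other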
for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text, stack): if (token is Name.Other and value.lower() in self._builtins or token is Name.Other.Member and value.lower().rstrip('=') in self._members): yield index, Name.Builtin, value continue yield index, token, value def analyse_text(text): rv = 0.0 if 'bin/lasso9' in text: rv += 0.8 if re.search(r'<\?lasso', text, re.I): rv += 0.4 if re.search(r'local\(', text, re.I): rv += 0.4 return rv class ObjectiveJLexer(RegexLexer): """ For Objective-J source code with preprocessor directives. .. versionadded:: 1.3 """ name = 'Objective-J' aliases = ['objective-j', 'objectivej', 'obj-j', 'objj'] filenames = ['*.j'] mimetypes = ['text/x-objective-j'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/[*].*?[*]/)*' flags = re.DOTALL | re.MULTILINE tokens = { 'root': [ include('whitespace'), # function definition (r'^(' + _ws + r'[+-]' + _ws + r')([(a-zA-Z_].*?[^(])(' + _ws + r'\{)', bygroups(using(this), using(this, state='function_signature'), using(this))), # class definition (r'(@interface|@implementation)(\s+)', bygroups(Keyword, Text), 'classname'), (r'(@class|@protocol)(\s*)', bygroups(Keyword, Text), 'forward_classname'), (r'(\s*)(@end)(\s*)', bygroups(Text, Keyword, Text)), include('statements'), ('[{()}]', Punctuation), (';', Punctuation), ], 'whitespace': [ (r'(@import)(\s+)("(?:\\\\|\\"|[^"])*")', bygroups(Comment.Preproc, Text, String.Double)), (r'(@import)(\s+)(<(?:\\\\|\\>|[^>])*>)', bygroups(Comment.Preproc, Text, String.Double)), (r'(#(?:include|import))(\s+)("(?:\\\\|\\"|[^"])*")', bygroups(Comment.Preproc, Text, String.Double)), (r'(#(?:include|import))(\s+)(<(?:\\\\|\\>|[^>])*>)', bygroups(Comment.Preproc, Text, String.Double)), (r'#if\s+0', Comment.Preproc, 'if0'), (r'#', Comment.Preproc, 'macro'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'//(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'', Comment, '#pop'), ('-', Comment), ], 'tag': [ (r'\s+', Whitespace), (r'[\w.:-]+\s*=', Name.Attribute, 'attr'), (r'/?\s*>', Name.Tag, '#pop'), ], 'attr': [ (r'\s+', Whitespace), ('".*?"', String, '#pop'), ("'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop'), ], } pygments-2.11.2/pygments/lexers/perl.py0000644000175000017500000011424714165547207020033 0ustar carstencarsten""" pygments.lexers.perl ~~~~~~~~~~~~~~~~~~~~ Lexers for Perl, Raku and related languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, bygroups, \ using, this, default, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation from pygments.util import shebang_matches __all__ = ['PerlLexer', 'Perl6Lexer'] class PerlLexer(RegexLexer): """ For `Perl `_ source code. """ name = 'Perl' aliases = ['perl', 'pl'] filenames = ['*.pl', '*.pm', '*.t', '*.perl'] mimetypes = ['text/x-perl', 'application/x-perl'] flags = re.DOTALL | re.MULTILINE # TODO: give this to a perl guy who knows how to parse perl... 
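# A quick sanity check for the substitution rules defined below: the root
# state consumes an "s{...}" style opener and 'balanced-regex' then closes
# the replacement part. A sketch using this lexer's public API (the sample
# one-liner is made up; token order is whatever the rules below produce):
#
#     from pygments.lexers.perl import PerlLexer
#     for tok, val in PerlLexer().get_tokens("s{foo}{bar}g;"):
#         print(tok, repr(val))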
tokens = { 'balanced-regex': [ (r'/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*', String.Regex, '#pop'), (r'!(\\\\|\\[^\\]|[^\\!])*![egimosx]*', String.Regex, '#pop'), (r'\\(\\\\|[^\\])*\\[egimosx]*', String.Regex, '#pop'), (r'\{(\\\\|\\[^\\]|[^\\}])*\}[egimosx]*', String.Regex, '#pop'), (r'<(\\\\|\\[^\\]|[^\\>])*>[egimosx]*', String.Regex, '#pop'), (r'\[(\\\\|\\[^\\]|[^\\\]])*\][egimosx]*', String.Regex, '#pop'), (r'\((\\\\|\\[^\\]|[^\\)])*\)[egimosx]*', String.Regex, '#pop'), (r'@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*', String.Regex, '#pop'), (r'%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*', String.Regex, '#pop'), (r'\$(\\\\|\\[^\\]|[^\\$])*\$[egimosx]*', String.Regex, '#pop'), ], 'root': [ (r'\A\#!.+?$', Comment.Hashbang), (r'\#.*?$', Comment.Single), (r'^=[a-zA-Z0-9]+\s+.*?\n=cut', Comment.Multiline), (words(( 'case', 'continue', 'do', 'else', 'elsif', 'for', 'foreach', 'if', 'last', 'my', 'next', 'our', 'redo', 'reset', 'then', 'unless', 'until', 'while', 'print', 'new', 'BEGIN', 'CHECK', 'INIT', 'END', 'return'), suffix=r'\b'), Keyword), (r'(format)(\s+)(\w+)(\s*)(=)(\s*\n)', bygroups(Keyword, Text, Name, Text, Punctuation, Text), 'format'), (r'(eq|lt|gt|le|ge|ne|not|and|or|cmp)\b', Operator.Word), # common delimiters (r's/(\\\\|\\[^\\]|[^\\/])*/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*', String.Regex), (r's!(\\\\|\\!|[^!])*!(\\\\|\\!|[^!])*![egimosx]*', String.Regex), (r's\\(\\\\|[^\\])*\\(\\\\|[^\\])*\\[egimosx]*', String.Regex), (r's@(\\\\|\\[^\\]|[^\\@])*@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*', String.Regex), (r's%(\\\\|\\[^\\]|[^\\%])*%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*', String.Regex), # balanced delimiters (r's\{(\\\\|\\[^\\]|[^\\}])*\}\s*', String.Regex, 'balanced-regex'), (r's<(\\\\|\\[^\\]|[^\\>])*>\s*', String.Regex, 'balanced-regex'), (r's\[(\\\\|\\[^\\]|[^\\\]])*\]\s*', String.Regex, 'balanced-regex'), (r's\((\\\\|\\[^\\]|[^\\)])*\)\s*', String.Regex, 'balanced-regex'), (r'm?/(\\\\|\\[^\\]|[^\\/\n])*/[gcimosx]*', String.Regex), (r'm(?=[/!\\{<\[(@%$])', String.Regex, 'balanced-regex'), (r'((?<==~)|(?<=\())\s*/(\\\\|\\[^\\]|[^\\/])*/[gcimosx]*', String.Regex), (r'\s+', Text), (words(( 'abs', 'accept', 'alarm', 'atan2', 'bind', 'binmode', 'bless', 'caller', 'chdir', 'chmod', 'chomp', 'chop', 'chown', 'chr', 'chroot', 'close', 'closedir', 'connect', 'continue', 'cos', 'crypt', 'dbmclose', 'dbmopen', 'defined', 'delete', 'die', 'dump', 'each', 'endgrent', 'endhostent', 'endnetent', 'endprotoent', 'endpwent', 'endservent', 'eof', 'eval', 'exec', 'exists', 'exit', 'exp', 'fcntl', 'fileno', 'flock', 'fork', 'format', 'formline', 'getc', 'getgrent', 'getgrgid', 'getgrnam', 'gethostbyaddr', 'gethostbyname', 'gethostent', 'getlogin', 'getnetbyaddr', 'getnetbyname', 'getnetent', 'getpeername', 'getpgrp', 'getppid', 'getpriority', 'getprotobyname', 'getprotobynumber', 'getprotoent', 'getpwent', 'getpwnam', 'getpwuid', 'getservbyname', 'getservbyport', 'getservent', 'getsockname', 'getsockopt', 'glob', 'gmtime', 'goto', 'grep', 'hex', 'import', 'index', 'int', 'ioctl', 'join', 'keys', 'kill', 'last', 'lc', 'lcfirst', 'length', 'link', 'listen', 'local', 'localtime', 'log', 'lstat', 'map', 'mkdir', 'msgctl', 'msgget', 'msgrcv', 'msgsnd', 'my', 'next', 'oct', 'open', 'opendir', 'ord', 'our', 'pack', 'pipe', 'pop', 'pos', 'printf', 'prototype', 'push', 'quotemeta', 'rand', 'read', 'readdir', 'readline', 'readlink', 'readpipe', 'recv', 'redo', 'ref', 'rename', 'reverse', 'rewinddir', 'rindex', 'rmdir', 'scalar', 'seek', 'seekdir', 'select', 'semctl', 'semget', 'semop', 'send', 'setgrent', 'sethostent', 'setnetent', 'setpgrp', 
'setpriority', 'setprotoent', 'setpwent', 'setservent', 'setsockopt', 'shift', 'shmctl', 'shmget', 'shmread', 'shmwrite', 'shutdown', 'sin', 'sleep', 'socket', 'socketpair', 'sort', 'splice', 'split', 'sprintf', 'sqrt', 'srand', 'stat', 'study', 'substr', 'symlink', 'syscall', 'sysopen', 'sysread', 'sysseek', 'system', 'syswrite', 'tell', 'telldir', 'tie', 'tied', 'time', 'times', 'tr', 'truncate', 'uc', 'ucfirst', 'umask', 'undef', 'unlink', 'unpack', 'unshift', 'untie', 'utime', 'values', 'vec', 'wait', 'waitpid', 'wantarray', 'warn', 'write'), suffix=r'\b'), Name.Builtin), (r'((__(DATA|DIE|WARN)__)|(STD(IN|OUT|ERR)))\b', Name.Builtin.Pseudo), (r'(<<)([\'"]?)([a-zA-Z_]\w*)(\2;?\n.*?\n)(\3)(\n)', bygroups(String, String, String.Delimiter, String, String.Delimiter, Text)), (r'__END__', Comment.Preproc, 'end-part'), (r'\$\^[ADEFHILMOPSTWX]', Name.Variable.Global), (r"\$[\\\"\[\]'&`+*.,;=%~?@$!<>(^|/-](?!\w)", Name.Variable.Global), (r'[$@%#]+', Name.Variable, 'varname'), (r'0_?[0-7]+(_[0-7]+)*', Number.Oct), (r'0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*', Number.Hex), (r'0b[01]+(_[01]+)*', Number.Bin), (r'(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*', Number.Float), (r'\d+(_\d+)*', Number.Integer), (r"'(\\\\|\\[^\\]|[^'\\])*'", String), (r'"(\\\\|\\[^\\]|[^"\\])*"', String), (r'`(\\\\|\\[^\\]|[^`\\])*`', String.Backtick), (r'<([^\s>]+)>', String.Regex), (r'(q|qq|qw|qr|qx)\{', String.Other, 'cb-string'), (r'(q|qq|qw|qr|qx)\(', String.Other, 'rb-string'), (r'(q|qq|qw|qr|qx)\[', String.Other, 'sb-string'), (r'(q|qq|qw|qr|qx)\<', String.Other, 'lt-string'), (r'(q|qq|qw|qr|qx)([\W_])(.|\n)*?\2', String.Other), (r'(package)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)', bygroups(Keyword, Text, Name.Namespace)), (r'(use|require|no)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)', bygroups(Keyword, Text, Name.Namespace)), (r'(sub)(\s+)', bygroups(Keyword, Text), 'funcname'), (words(( 'no', 'package', 'require', 'use'), suffix=r'\b'), Keyword), (r'(\[\]|\*\*|::|<<|>>|>=|<=>|<=|={3}|!=|=~|' r'!~|&&?|\|\||\.{1,3})', Operator), (r'[-+/*%=<>&^|!\\~]=?', Operator), (r'[()\[\]:;,<>/?{}]', Punctuation), # yes, there's no shortage # of punctuation in Perl! (r'(?=\w)', Name, 'name'), ], 'format': [ (r'\.\n', String.Interpol, '#pop'), (r'[^\n]*\n', String.Interpol), ], 'varname': [ (r'\s+', Text), (r'\{', Punctuation, '#pop'), # hash syntax? 
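# Aside on the heredoc rule earlier in this table,
# (<<)(['"]?)([a-zA-Z_]\w*)(\2;?\n.*?\n)(\3)(\n): the backreference \2
# re-matches the optional opening quote and \3 requires the body to end
# with the same terminator word that opened it, so e.g.
#
#     print <<"EOF";
#     hello
#     EOF
#
# is captured as one unit with "EOF" tokenized as String.Delimiter
# (illustrative input, not from this repository).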
(r'\)|,', Punctuation, '#pop'), # argument specifier (r'\w+::', Name.Namespace), (r'[\w:]+', Name.Variable, '#pop'), ], 'name': [ (r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*(::)?(?=\s*->)', Name.Namespace, '#pop'), (r'[a-zA-Z_]\w*(::[a-zA-Z_]\w*)*::', Name.Namespace, '#pop'), (r'[\w:]+', Name, '#pop'), (r'[A-Z_]+(?=\W)', Name.Constant, '#pop'), (r'(?=\W)', Text, '#pop'), ], 'funcname': [ (r'[a-zA-Z_]\w*[!?]?', Name.Function), (r'\s+', Text), # argument declaration (r'(\([$@%]*\))(\s*)', bygroups(Punctuation, Text)), (r';', Punctuation, '#pop'), (r'.*?\{', Punctuation, '#pop'), ], 'cb-string': [ (r'\\[{}\\]', String.Other), (r'\\', String.Other), (r'\{', String.Other, 'cb-string'), (r'\}', String.Other, '#pop'), (r'[^{}\\]+', String.Other) ], 'rb-string': [ (r'\\[()\\]', String.Other), (r'\\', String.Other), (r'\(', String.Other, 'rb-string'), (r'\)', String.Other, '#pop'), (r'[^()]+', String.Other) ], 'sb-string': [ (r'\\[\[\]\\]', String.Other), (r'\\', String.Other), (r'\[', String.Other, 'sb-string'), (r'\]', String.Other, '#pop'), (r'[^\[\]]+', String.Other) ], 'lt-string': [ (r'\\[<>\\]', String.Other), (r'\\', String.Other), (r'\<', String.Other, 'lt-string'), (r'\>', String.Other, '#pop'), (r'[^<>]+', String.Other) ], 'end-part': [ (r'.+', Comment.Preproc, '#pop') ] } def analyse_text(text): if shebang_matches(text, r'perl'): return True result = 0 if re.search(r'(?:my|our)\s+[$@%(]', text): result += 0.9 if ':=' in text: # := is not valid Perl, but it appears in unicon, so we should # become less confident if we think we found Perl with := result /= 2 return result class Perl6Lexer(ExtendedRegexLexer): """ For `Raku `_ (a.k.a. Perl 6) source code. .. versionadded:: 2.0 """ name = 'Perl6' aliases = ['perl6', 'pl6', 'raku'] filenames = ['*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6', '*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod', '*.rakutest', '*.rakudoc'] mimetypes = ['text/x-perl6', 'application/x-perl6'] flags = re.MULTILINE | re.DOTALL | re.UNICODE PERL6_IDENTIFIER_RANGE = r"['\w:-]" PERL6_KEYWORDS = ( #Phasers 'BEGIN','CATCH','CHECK','CLOSE','CONTROL','DOC','END','ENTER','FIRST', 'INIT','KEEP','LAST','LEAVE','NEXT','POST','PRE','QUIT','UNDO', #Keywords 'anon','augment','but','class','constant','default','does','else', 'elsif','enum','for','gather','given','grammar','has','if','import', 'is','let','loop','made','make','method','module','multi','my','need', 'orwith','our','proceed','proto','repeat','require','return', 'return-rw','returns','role','rule','state','sub','submethod','subset', 'succeed','supersede','token','try','unit','unless','until','use', 'when','while','with','without', #Traits 'export','native','repr','required','rw','symbol', ) PERL6_BUILTINS = ( 'ACCEPTS','abs','abs2rel','absolute','accept','accessed','acos', 'acosec','acosech','acosh','acotan','acotanh','acquire','act','action', 'actions','add','add_attribute','add_enum_value','add_fallback', 'add_method','add_parent','add_private_method','add_role','add_trustee', 'adverb','after','all','allocate','allof','allowed','alternative-names', 'annotations','antipair','antipairs','any','anyof','app_lifetime', 'append','arch','archname','args','arity','Array','asec','asech','asin', 'asinh','ASSIGN-KEY','ASSIGN-POS','assuming','ast','at','atan','atan2', 'atanh','AT-KEY','atomic-assign','atomic-dec-fetch','atomic-fetch', 'atomic-fetch-add','atomic-fetch-dec','atomic-fetch-inc', 'atomic-fetch-sub','atomic-inc-fetch','AT-POS','attributes','auth', 'await','backtrace','Bag','BagHash','bail-out','base','basename', 
'base-repeating','batch','BIND-KEY','BIND-POS','bind-stderr', 'bind-stdin','bind-stdout','bind-udp','bits','bless','block','Bool', 'bool-only','bounds','break','Bridge','broken','BUILD','build-date', 'bytes','cache','callframe','calling-package','CALL-ME','callsame', 'callwith','can','cancel','candidates','cando','can-ok','canonpath', 'caps','caption','Capture','cas','catdir','categorize','categorize-list', 'catfile','catpath','cause','ceiling','cglobal','changed','Channel', 'chars','chdir','child','child-name','child-typename','chmod','chomp', 'chop','chr','chrs','chunks','cis','classify','classify-list','cleanup', 'clone','close','closed','close-stdin','cmp-ok','code','codes','collate', 'column','comb','combinations','command','comment','compiler','Complex', 'compose','compose_type','composer','condition','config', 'configure_destroy','configure_type_checking','conj','connect', 'constraints','construct','contains','contents','copy','cos','cosec', 'cosech','cosh','cotan','cotanh','count','count-only','cpu-cores', 'cpu-usage','CREATE','create_type','cross','cue','curdir','curupdir','d', 'Date','DateTime','day','daycount','day-of-month','day-of-week', 'day-of-year','days-in-month','declaration','decode','decoder','deepmap', 'default','defined','DEFINITE','delayed','DELETE-KEY','DELETE-POS', 'denominator','desc','DESTROY','destroyers','devnull','diag', 'did-you-mean','die','dies-ok','dir','dirname','dir-sep','DISTROnames', 'do','does','does-ok','done','done-testing','duckmap','dynamic','e', 'eager','earlier','elems','emit','enclosing','encode','encoder', 'encoding','end','ends-with','enum_from_value','enum_value_list', 'enum_values','enums','eof','EVAL','eval-dies-ok','EVALFILE', 'eval-lives-ok','exception','excludes-max','excludes-min','EXISTS-KEY', 'EXISTS-POS','exit','exitcode','exp','expected','explicitly-manage', 'expmod','extension','f','fail','fails-like','fc','feature','file', 'filename','find_method','find_method_qualified','finish','first','flat', 'flatmap','flip','floor','flunk','flush','fmt','format','formatter', 'freeze','from','from-list','from-loop','from-posix','full', 'full-barrier','get','get_value','getc','gist','got','grab','grabpairs', 'grep','handle','handled','handles','hardware','has_accessor','Hash', 'head','headers','hh-mm-ss','hidden','hides','hour','how','hyper','id', 'illegal','im','in','indent','index','indices','indir','infinite', 'infix','infix:<+>','infix:<->','install_method_cache','Instant', 'instead','Int','int-bounds','interval','in-timezone','invalid-str', 'invert','invocant','IO','IO::Notification.watch-path','is_trusted', 'is_type','isa','is-absolute','isa-ok','is-approx','is-deeply', 'is-hidden','is-initial-thread','is-int','is-lazy','is-leap-year', 'isNaN','isnt','is-prime','is-relative','is-routine','is-setting', 'is-win','item','iterator','join','keep','kept','KERNELnames','key', 'keyof','keys','kill','kv','kxxv','l','lang','last','lastcall','later', 'lazy','lc','leading','level','like','line','lines','link','List', 'listen','live','lives-ok','local','lock','log','log10','lookup','lsb', 'made','MAIN','make','Map','match','max','maxpairs','merge','message', 'method','method_table','methods','migrate','min','minmax','minpairs', 'minute','misplaced','Mix','MixHash','mkdir','mode','modified','month', 'move','mro','msb','multi','multiness','my','name','named','named_names', 'narrow','nativecast','native-descriptor','nativesizeof','new','new_type', 'new-from-daycount','new-from-pairs','next','nextcallee','next-handle', 
'nextsame','nextwith','NFC','NFD','NFKC','NFKD','nl-in','nl-out', 'nodemap','nok','none','norm','not','note','now','nude','Num', 'numerator','Numeric','of','offset','offset-in-hours','offset-in-minutes', 'ok','old','on-close','one','on-switch','open','opened','operation', 'optional','ord','ords','orig','os-error','osname','out-buffer','pack', 'package','package-kind','package-name','packages','pair','pairs', 'pairup','parameter','params','parent','parent-name','parents','parse', 'parse-base','parsefile','parse-names','parts','pass','path','path-sep', 'payload','peer-host','peer-port','periods','perl','permutations','phaser', 'pick','pickpairs','pid','placeholder','plan','plus','polar','poll', 'polymod','pop','pos','positional','posix','postfix','postmatch', 'precomp-ext','precomp-target','pred','prefix','prematch','prepend', 'print','printf','print-nl','print-to','private','private_method_table', 'proc','produce','Promise','prompt','protect','pull-one','push', 'push-all','push-at-least','push-exactly','push-until-lazy','put', 'qualifier-type','quit','r','race','radix','rand','range','Rat','raw', 're','read','readchars','readonly','ready','Real','reallocate','reals', 'reason','rebless','receive','recv','redispatcher','redo','reduce', 'rel2abs','relative','release','rename','repeated','replacement', 'report','reserved','resolve','restore','result','resume','rethrow', 'reverse','right','rindex','rmdir','role','roles_to_compose','rolish', 'roll','rootdir','roots','rotate','rotor','round','roundrobin', 'routine-type','run','rwx','s','samecase','samemark','samewith','say', 'schedule-on','scheduler','scope','sec','sech','second','seek','self', 'send','Set','set_hidden','set_name','set_package','set_rw','set_value', 'SetHash','set-instruments','setup_finalization','shape','share','shell', 'shift','sibling','sigil','sign','signal','signals','signature','sin', 'sinh','sink','sink-all','skip','skip-at-least','skip-at-least-pull-one', 'skip-one','skip-rest','sleep','sleep-timer','sleep-until','Slip','slurp', 'slurp-rest','slurpy','snap','snapper','so','socket-host','socket-port', 'sort','source','source-package','spawn','SPEC','splice','split', 'splitdir','splitpath','sprintf','spurt','sqrt','squish','srand','stable', 'start','started','starts-with','status','stderr','stdout','Str', 'sub_signature','subbuf','subbuf-rw','subname','subparse','subst', 'subst-mutate','substr','substr-eq','substr-rw','subtest','succ','sum', 'Supply','symlink','t','tail','take','take-rw','tan','tanh','tap', 'target','target-name','tc','tclc','tell','then','throttle','throw', 'throws-like','timezone','tmpdir','to','today','todo','toggle','to-posix', 'total','trailing','trans','tree','trim','trim-leading','trim-trailing', 'truncate','truncated-to','trusts','try_acquire','trying','twigil','type', 'type_captures','typename','uc','udp','uncaught_handler','unimatch', 'uniname','uninames','uniparse','uniprop','uniprops','unique','unival', 'univals','unlike','unlink','unlock','unpack','unpolar','unshift', 'unwrap','updir','USAGE','use-ok','utc','val','value','values','VAR', 'variable','verbose-config','version','VMnames','volume','vow','w','wait', 'warn','watch','watch-path','week','weekday-of-month','week-number', 'week-year','WHAT','when','WHERE','WHEREFORE','WHICH','WHO', 'whole-second','WHY','wordcase','words','workaround','wrap','write', 'write-to','x','yada','year','yield','yyyy-mm-dd','z','zip','zip-latest', ) PERL6_BUILTIN_CLASSES = ( #Booleans 'False','True', #Classes 
'Any','Array','Associative','AST','atomicint','Attribute','Backtrace', 'Backtrace::Frame','Bag','Baggy','BagHash','Blob','Block','Bool','Buf', 'Callable','CallFrame','Cancellation','Capture','CArray','Channel','Code', 'compiler','Complex','ComplexStr','Cool','CurrentThreadScheduler', 'Cursor','Date','Dateish','DateTime','Distro','Duration','Encoding', 'Exception','Failure','FatRat','Grammar','Hash','HyperWhatever','Instant', 'Int','int16','int32','int64','int8','IntStr','IO','IO::ArgFiles', 'IO::CatHandle','IO::Handle','IO::Notification','IO::Path', 'IO::Path::Cygwin','IO::Path::QNX','IO::Path::Unix','IO::Path::Win32', 'IO::Pipe','IO::Socket','IO::Socket::Async','IO::Socket::INET','IO::Spec', 'IO::Spec::Cygwin','IO::Spec::QNX','IO::Spec::Unix','IO::Spec::Win32', 'IO::Special','Iterable','Iterator','Junction','Kernel','Label','List', 'Lock','Lock::Async','long','longlong','Macro','Map','Match', 'Metamodel::AttributeContainer','Metamodel::C3MRO','Metamodel::ClassHOW', 'Metamodel::EnumHOW','Metamodel::Finalization','Metamodel::MethodContainer', 'Metamodel::MROBasedMethodDispatch','Metamodel::MultipleInheritance', 'Metamodel::Naming','Metamodel::Primitives','Metamodel::PrivateMethodContainer', 'Metamodel::RoleContainer','Metamodel::Trusting','Method','Mix','MixHash', 'Mixy','Mu','NFC','NFD','NFKC','NFKD','Nil','Num','num32','num64', 'Numeric','NumStr','ObjAt','Order','Pair','Parameter','Perl','Pod::Block', 'Pod::Block::Code','Pod::Block::Comment','Pod::Block::Declarator', 'Pod::Block::Named','Pod::Block::Para','Pod::Block::Table','Pod::Heading', 'Pod::Item','Pointer','Positional','PositionalBindFailover','Proc', 'Proc::Async','Promise','Proxy','PseudoStash','QuantHash','Range','Rat', 'Rational','RatStr','Real','Regex','Routine','Scalar','Scheduler', 'Semaphore','Seq','Set','SetHash','Setty','Signature','size_t','Slip', 'Stash','Str','StrDistance','Stringy','Sub','Submethod','Supplier', 'Supplier::Preserving','Supply','Systemic','Tap','Telemetry', 'Telemetry::Instrument::Thread','Telemetry::Instrument::Usage', 'Telemetry::Period','Telemetry::Sampler','Thread','ThreadPoolScheduler', 'UInt','uint16','uint32','uint64','uint8','Uni','utf8','Variable', 'Version','VM','Whatever','WhateverCode','WrapHandle' ) PERL6_OPERATORS = ( 'X', 'Z', 'after', 'also', 'and', 'andthen', 'before', 'cmp', 'div', 'eq', 'eqv', 'extra', 'ff', 'fff', 'ge', 'gt', 'le', 'leg', 'lt', 'm', 'mm', 'mod', 'ne', 'or', 'orelse', 'rx', 's', 'tr', 'x', 'xor', 'xx', '++', '--', '**', '!', '+', '-', '~', '?', '|', '||', '+^', '~^', '?^', '^', '*', '/', '%', '%%', '+&', '+<', '+>', '~&', '~<', '~>', '?&', 'gcd', 'lcm', '+', '-', '+|', '+^', '~|', '~^', '?|', '?^', '~', '&', '^', 'but', 'does', '<=>', '..', '..^', '^..', '^..^', '!=', '==', '<', '<=', '>', '>=', '~~', '===', '!eqv', '&&', '||', '^^', '//', 'min', 'max', '??', '!!', 'ff', 'fff', 'so', 'not', '<==', '==>', '<<==', '==>>','unicmp', ) # Perl 6 has a *lot* of possible bracketing characters # this list was lifted from STD.pm6 (https://github.com/perl6/std) PERL6_BRACKETS = { '\u0028': '\u0029', '\u003c': '\u003e', '\u005b': '\u005d', '\u007b': '\u007d', '\u00ab': '\u00bb', '\u0f3a': '\u0f3b', '\u0f3c': '\u0f3d', '\u169b': '\u169c', '\u2018': '\u2019', '\u201a': '\u2019', '\u201b': '\u2019', '\u201c': '\u201d', '\u201e': '\u201d', '\u201f': '\u201d', '\u2039': '\u203a', '\u2045': '\u2046', '\u207d': '\u207e', '\u208d': '\u208e', '\u2208': '\u220b', '\u2209': '\u220c', '\u220a': '\u220d', '\u2215': '\u29f5', '\u223c': '\u223d', '\u2243': '\u22cd', '\u2252': '\u2253', '\u2254': 
'\u2255', '\u2264': '\u2265', '\u2266': '\u2267', '\u2268': '\u2269', '\u226a': '\u226b', '\u226e': '\u226f', '\u2270': '\u2271', '\u2272': '\u2273', '\u2274': '\u2275', '\u2276': '\u2277', '\u2278': '\u2279', '\u227a': '\u227b', '\u227c': '\u227d', '\u227e': '\u227f', '\u2280': '\u2281', '\u2282': '\u2283', '\u2284': '\u2285', '\u2286': '\u2287', '\u2288': '\u2289', '\u228a': '\u228b', '\u228f': '\u2290', '\u2291': '\u2292', '\u2298': '\u29b8', '\u22a2': '\u22a3', '\u22a6': '\u2ade', '\u22a8': '\u2ae4', '\u22a9': '\u2ae3', '\u22ab': '\u2ae5', '\u22b0': '\u22b1', '\u22b2': '\u22b3', '\u22b4': '\u22b5', '\u22b6': '\u22b7', '\u22c9': '\u22ca', '\u22cb': '\u22cc', '\u22d0': '\u22d1', '\u22d6': '\u22d7', '\u22d8': '\u22d9', '\u22da': '\u22db', '\u22dc': '\u22dd', '\u22de': '\u22df', '\u22e0': '\u22e1', '\u22e2': '\u22e3', '\u22e4': '\u22e5', '\u22e6': '\u22e7', '\u22e8': '\u22e9', '\u22ea': '\u22eb', '\u22ec': '\u22ed', '\u22f0': '\u22f1', '\u22f2': '\u22fa', '\u22f3': '\u22fb', '\u22f4': '\u22fc', '\u22f6': '\u22fd', '\u22f7': '\u22fe', '\u2308': '\u2309', '\u230a': '\u230b', '\u2329': '\u232a', '\u23b4': '\u23b5', '\u2768': '\u2769', '\u276a': '\u276b', '\u276c': '\u276d', '\u276e': '\u276f', '\u2770': '\u2771', '\u2772': '\u2773', '\u2774': '\u2775', '\u27c3': '\u27c4', '\u27c5': '\u27c6', '\u27d5': '\u27d6', '\u27dd': '\u27de', '\u27e2': '\u27e3', '\u27e4': '\u27e5', '\u27e6': '\u27e7', '\u27e8': '\u27e9', '\u27ea': '\u27eb', '\u2983': '\u2984', '\u2985': '\u2986', '\u2987': '\u2988', '\u2989': '\u298a', '\u298b': '\u298c', '\u298d': '\u298e', '\u298f': '\u2990', '\u2991': '\u2992', '\u2993': '\u2994', '\u2995': '\u2996', '\u2997': '\u2998', '\u29c0': '\u29c1', '\u29c4': '\u29c5', '\u29cf': '\u29d0', '\u29d1': '\u29d2', '\u29d4': '\u29d5', '\u29d8': '\u29d9', '\u29da': '\u29db', '\u29f8': '\u29f9', '\u29fc': '\u29fd', '\u2a2b': '\u2a2c', '\u2a2d': '\u2a2e', '\u2a34': '\u2a35', '\u2a3c': '\u2a3d', '\u2a64': '\u2a65', '\u2a79': '\u2a7a', '\u2a7d': '\u2a7e', '\u2a7f': '\u2a80', '\u2a81': '\u2a82', '\u2a83': '\u2a84', '\u2a8b': '\u2a8c', '\u2a91': '\u2a92', '\u2a93': '\u2a94', '\u2a95': '\u2a96', '\u2a97': '\u2a98', '\u2a99': '\u2a9a', '\u2a9b': '\u2a9c', '\u2aa1': '\u2aa2', '\u2aa6': '\u2aa7', '\u2aa8': '\u2aa9', '\u2aaa': '\u2aab', '\u2aac': '\u2aad', '\u2aaf': '\u2ab0', '\u2ab3': '\u2ab4', '\u2abb': '\u2abc', '\u2abd': '\u2abe', '\u2abf': '\u2ac0', '\u2ac1': '\u2ac2', '\u2ac3': '\u2ac4', '\u2ac5': '\u2ac6', '\u2acd': '\u2ace', '\u2acf': '\u2ad0', '\u2ad1': '\u2ad2', '\u2ad3': '\u2ad4', '\u2ad5': '\u2ad6', '\u2aec': '\u2aed', '\u2af7': '\u2af8', '\u2af9': '\u2afa', '\u2e02': '\u2e03', '\u2e04': '\u2e05', '\u2e09': '\u2e0a', '\u2e0c': '\u2e0d', '\u2e1c': '\u2e1d', '\u2e20': '\u2e21', '\u3008': '\u3009', '\u300a': '\u300b', '\u300c': '\u300d', '\u300e': '\u300f', '\u3010': '\u3011', '\u3014': '\u3015', '\u3016': '\u3017', '\u3018': '\u3019', '\u301a': '\u301b', '\u301d': '\u301e', '\ufd3e': '\ufd3f', '\ufe17': '\ufe18', '\ufe35': '\ufe36', '\ufe37': '\ufe38', '\ufe39': '\ufe3a', '\ufe3b': '\ufe3c', '\ufe3d': '\ufe3e', '\ufe3f': '\ufe40', '\ufe41': '\ufe42', '\ufe43': '\ufe44', '\ufe47': '\ufe48', '\ufe59': '\ufe5a', '\ufe5b': '\ufe5c', '\ufe5d': '\ufe5e', '\uff08': '\uff09', '\uff1c': '\uff1e', '\uff3b': '\uff3d', '\uff5b': '\uff5d', '\uff5f': '\uff60', '\uff62': '\uff63', } def _build_word_match(words, boundary_regex_fragment=None, prefix='', suffix=''): if boundary_regex_fragment is None: return r'\b(' + prefix + r'|'.join(re.escape(x) for x in words) + \ suffix + r')\b' else: return r'(? 
0: next_open_pos = text.find(opening_chars, search_pos + n_chars) next_close_pos = text.find(closing_chars, search_pos + n_chars) if next_close_pos == -1: next_close_pos = len(text) nesting_level = 0 elif next_open_pos != -1 and next_open_pos < next_close_pos: nesting_level += 1 search_pos = next_open_pos else: # next_close_pos < next_open_pos nesting_level -= 1 search_pos = next_close_pos end_pos = next_close_pos if end_pos < 0: # if we didn't find a closer, just highlight the # rest of the text in this class end_pos = len(text) if adverbs is not None and re.search(r':to\b', adverbs): heredoc_terminator = text[match.start('delimiter') + n_chars:end_pos] end_heredoc = re.search(r'^\s*' + re.escape(heredoc_terminator) + r'\s*$', text[end_pos:], re.MULTILINE) if end_heredoc: end_pos += end_heredoc.end() else: end_pos = len(text) yield match.start(), token_class, text[match.start():end_pos + n_chars] context.pos = end_pos + n_chars return callback def opening_brace_callback(lexer, match, context): stack = context.stack yield match.start(), Text, context.text[match.start():match.end()] context.pos = match.end() # if we encounter an opening brace and we're one level # below a token state, it means we need to increment # the nesting level for braces so we know later when # we should return to the token rules. if len(stack) > 2 and stack[-2] == 'token': context.perl6_token_nesting_level += 1 def closing_brace_callback(lexer, match, context): stack = context.stack yield match.start(), Text, context.text[match.start():match.end()] context.pos = match.end() # if we encounter a free closing brace and we're one level # below a token state, it means we need to check the nesting # level to see if we need to return to the token state. if len(stack) > 2 and stack[-2] == 'token': context.perl6_token_nesting_level -= 1 if context.perl6_token_nesting_level == 0: stack.pop() def embedded_perl6_callback(lexer, match, context): context.perl6_token_nesting_level = 1 yield match.start(), Text, context.text[match.start():match.end()] context.pos = match.end() context.stack.append('root') # If you're modifying these rules, be careful if you need to process '{' or '}' # characters. We have special logic for processing these characters (due to the fact # that you can nest Perl 6 code in regex blocks), so if you need to process one of # them, make sure you also process the corresponding one! tokens = { 'common': [ (r'#[`|=](?P(?P[' + ''.join(PERL6_BRACKETS) + r'])(?P=first_char)*)', brackets_callback(Comment.Multiline)), (r'#[^\n]*$', Comment.Single), (r'^(\s*)=begin\s+(\w+)\b.*?^\1=end\s+\2', Comment.Multiline), (r'^(\s*)=for.*?\n\s*?\n', Comment.Multiline), (r'^=.*?\n\s*?\n', Comment.Multiline), (r'(regex|token|rule)(\s*' + PERL6_IDENTIFIER_RANGE + '+:sym)', bygroups(Keyword, Name), 'token-sym-brackets'), (r'(regex|token|rule)(?!' + PERL6_IDENTIFIER_RANGE + r')(\s*' + PERL6_IDENTIFIER_RANGE + '+)?', bygroups(Keyword, Name), 'pre-token'), # deal with a special case in the Perl 6 grammar (role q { ... }) (r'(role)(\s+)(q)(\s*)', bygroups(Keyword, Text, Name, Text)), (_build_word_match(PERL6_KEYWORDS, PERL6_IDENTIFIER_RANGE), Keyword), (_build_word_match(PERL6_BUILTIN_CLASSES, PERL6_IDENTIFIER_RANGE, suffix='(?::[UD])?'), Name.Builtin), (_build_word_match(PERL6_BUILTINS, PERL6_IDENTIFIER_RANGE), Name.Builtin), # copied from PerlLexer (r'[$@%&][.^:?=!~]?' 
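# Note on the three callbacks above: opening_brace_callback increments
# perl6_token_nesting_level whenever a '{' appears while a 'token' state is
# one level down the stack, closing_brace_callback decrements it and pops
# back out when the count returns to zero, and embedded_perl6_callback
# re-enters 'root' so that code embedded in a regex block is highlighted as
# code. That is what keeps a declaration like
#
#     token word { \w+ <?{ $/.chars > 2 }> }
#
# (illustrative Raku, not from this repository) from ending at the first
# inner '}'.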
+ PERL6_IDENTIFIER_RANGE + '+(?:<<.*?>>|<.*?>|«.*?»)*', Name.Variable), (r'\$[!/](?:<<.*?>>|<.*?>|«.*?»)*', Name.Variable.Global), (r'::\?\w+', Name.Variable.Global), (r'[$@%&]\*' + PERL6_IDENTIFIER_RANGE + '+(?:<<.*?>>|<.*?>|«.*?»)*', Name.Variable.Global), (r'\$(?:<.*?>)+', Name.Variable), (r'(?:q|qq|Q)[a-zA-Z]?\s*(?P:[\w\s:]+)?\s*(?P(?P[^0-9a-zA-Z:\s])' r'(?P=first_char)*)', brackets_callback(String)), # copied from PerlLexer (r'0_?[0-7]+(_[0-7]+)*', Number.Oct), (r'0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*', Number.Hex), (r'0b[01]+(_[01]+)*', Number.Bin), (r'(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*', Number.Float), (r'\d+(_\d+)*', Number.Integer), (r'(?<=~~)\s*/(?:\\\\|\\/|.)*?/', String.Regex), (r'(?<=[=(,])\s*/(?:\\\\|\\/|.)*?/', String.Regex), (r'm\w+(?=\()', Name), (r'(?:m|ms|rx)\s*(?P:[\w\s:]+)?\s*(?P(?P[^\w:\s])' r'(?P=first_char)*)', brackets_callback(String.Regex)), (r'(?:s|ss|tr)\s*(?::[\w\s:]+)?\s*/(?:\\\\|\\/|.)*?/(?:\\\\|\\/|.)*?/', String.Regex), (r'<[^\s=].*?\S>', String), (_build_word_match(PERL6_OPERATORS), Operator), (r'\w' + PERL6_IDENTIFIER_RANGE + '*', Name), (r"'(\\\\|\\[^\\]|[^'\\])*'", String), (r'"(\\\\|\\[^\\]|[^"\\])*"', String), ], 'root': [ include('common'), (r'\{', opening_brace_callback), (r'\}', closing_brace_callback), (r'.+?', Text), ], 'pre-token': [ include('common'), (r'\{', Text, ('#pop', 'token')), (r'.+?', Text), ], 'token-sym-brackets': [ (r'(?P(?P[' + ''.join(PERL6_BRACKETS) + '])(?P=first_char)*)', brackets_callback(Name), ('#pop', 'pre-token')), default(('#pop', 'pre-token')), ], 'token': [ (r'\}', Text, '#pop'), (r'(?<=:)(?:my|our|state|constant|temp|let).*?;', using(this)), # make sure that quotes in character classes aren't treated as strings (r'<(?:[-!?+.]\s*)?\[.*?\]>', String.Regex), # make sure that '#' characters in quotes aren't treated as comments (r"(?my|our)\s+)?(?:module|class|role|enum|grammar)', line) if class_decl: if saw_perl_decl or class_decl.group('scope') is not None: return True rating = 0.05 continue break if ':=' in text: # Same logic as above for PerlLexer rating /= 2 return rating def __init__(self, **options): super().__init__(**options) self.encoding = options.get('encoding', 'utf-8') pygments-2.11.2/pygments/lexers/_scilab_builtins.py0000644000175000017500000014623114165547207022374 0ustar carstencarsten""" pygments.lexers._scilab_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Builtin list for the ScilabLexer. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" # Autogenerated commands_kw = ( 'abort', 'apropos', 'break', 'case', 'catch', 'continue', 'do', 'else', 'elseif', 'end', 'endfunction', 'for', 'function', 'help', 'if', 'pause', 'quit', 'select', 'then', 'try', 'while', ) functions_kw = ( '!!_invoke_', '%H5Object_e', '%H5Object_fieldnames', '%H5Object_p', '%XMLAttr_6', '%XMLAttr_e', '%XMLAttr_i_XMLElem', '%XMLAttr_length', '%XMLAttr_p', '%XMLAttr_size', '%XMLDoc_6', '%XMLDoc_e', '%XMLDoc_i_XMLList', '%XMLDoc_p', '%XMLElem_6', '%XMLElem_e', '%XMLElem_i_XMLDoc', '%XMLElem_i_XMLElem', '%XMLElem_i_XMLList', '%XMLElem_p', '%XMLList_6', '%XMLList_e', '%XMLList_i_XMLElem', '%XMLList_i_XMLList', '%XMLList_length', '%XMLList_p', '%XMLList_size', '%XMLNs_6', '%XMLNs_e', '%XMLNs_i_XMLElem', '%XMLNs_p', '%XMLSet_6', '%XMLSet_e', '%XMLSet_length', '%XMLSet_p', '%XMLSet_size', '%XMLValid_p', '%_EClass_6', '%_EClass_e', '%_EClass_p', '%_EObj_0', '%_EObj_1__EObj', '%_EObj_1_b', '%_EObj_1_c', '%_EObj_1_i', '%_EObj_1_s', '%_EObj_2__EObj', '%_EObj_2_b', '%_EObj_2_c', '%_EObj_2_i', '%_EObj_2_s', '%_EObj_3__EObj', '%_EObj_3_b', '%_EObj_3_c', '%_EObj_3_i', '%_EObj_3_s', '%_EObj_4__EObj', '%_EObj_4_b', '%_EObj_4_c', '%_EObj_4_i', '%_EObj_4_s', '%_EObj_5', '%_EObj_6', '%_EObj_a__EObj', '%_EObj_a_b', '%_EObj_a_c', '%_EObj_a_i', '%_EObj_a_s', '%_EObj_d__EObj', '%_EObj_d_b', '%_EObj_d_c', '%_EObj_d_i', '%_EObj_d_s', '%_EObj_disp', '%_EObj_e', '%_EObj_g__EObj', '%_EObj_g_b', '%_EObj_g_c', '%_EObj_g_i', '%_EObj_g_s', '%_EObj_h__EObj', '%_EObj_h_b', '%_EObj_h_c', '%_EObj_h_i', '%_EObj_h_s', '%_EObj_i__EObj', '%_EObj_j__EObj', '%_EObj_j_b', '%_EObj_j_c', '%_EObj_j_i', '%_EObj_j_s', '%_EObj_k__EObj', '%_EObj_k_b', '%_EObj_k_c', '%_EObj_k_i', '%_EObj_k_s', '%_EObj_l__EObj', '%_EObj_l_b', '%_EObj_l_c', '%_EObj_l_i', '%_EObj_l_s', '%_EObj_m__EObj', '%_EObj_m_b', '%_EObj_m_c', '%_EObj_m_i', '%_EObj_m_s', '%_EObj_n__EObj', '%_EObj_n_b', '%_EObj_n_c', '%_EObj_n_i', '%_EObj_n_s', '%_EObj_o__EObj', '%_EObj_o_b', '%_EObj_o_c', '%_EObj_o_i', '%_EObj_o_s', '%_EObj_p', '%_EObj_p__EObj', '%_EObj_p_b', '%_EObj_p_c', '%_EObj_p_i', '%_EObj_p_s', '%_EObj_q__EObj', '%_EObj_q_b', '%_EObj_q_c', '%_EObj_q_i', '%_EObj_q_s', '%_EObj_r__EObj', '%_EObj_r_b', '%_EObj_r_c', '%_EObj_r_i', '%_EObj_r_s', '%_EObj_s__EObj', '%_EObj_s_b', '%_EObj_s_c', '%_EObj_s_i', '%_EObj_s_s', '%_EObj_t', '%_EObj_x__EObj', '%_EObj_x_b', '%_EObj_x_c', '%_EObj_x_i', '%_EObj_x_s', '%_EObj_y__EObj', '%_EObj_y_b', '%_EObj_y_c', '%_EObj_y_i', '%_EObj_y_s', '%_EObj_z__EObj', '%_EObj_z_b', '%_EObj_z_c', '%_EObj_z_i', '%_EObj_z_s', '%_eigs', '%_load', '%b_1__EObj', '%b_2__EObj', '%b_3__EObj', '%b_4__EObj', '%b_a__EObj', '%b_d__EObj', '%b_g__EObj', '%b_h__EObj', '%b_i_XMLList', '%b_i__EObj', '%b_j__EObj', '%b_k__EObj', '%b_l__EObj', '%b_m__EObj', '%b_n__EObj', '%b_o__EObj', '%b_p__EObj', '%b_q__EObj', '%b_r__EObj', '%b_s__EObj', '%b_x__EObj', '%b_y__EObj', '%b_z__EObj', '%c_1__EObj', '%c_2__EObj', '%c_3__EObj', '%c_4__EObj', '%c_a__EObj', '%c_d__EObj', '%c_g__EObj', '%c_h__EObj', '%c_i_XMLAttr', '%c_i_XMLDoc', '%c_i_XMLElem', '%c_i_XMLList', '%c_i__EObj', '%c_j__EObj', '%c_k__EObj', '%c_l__EObj', '%c_m__EObj', '%c_n__EObj', '%c_o__EObj', '%c_p__EObj', '%c_q__EObj', '%c_r__EObj', '%c_s__EObj', '%c_x__EObj', '%c_y__EObj', '%c_z__EObj', '%ce_i_XMLList', '%fptr_i_XMLList', '%h_i_XMLList', '%hm_i_XMLList', '%i_1__EObj', '%i_2__EObj', '%i_3__EObj', '%i_4__EObj', '%i_a__EObj', '%i_abs', '%i_cumprod', '%i_cumsum', '%i_d__EObj', '%i_diag', '%i_g__EObj', '%i_h__EObj', '%i_i_XMLList', '%i_i__EObj', '%i_j__EObj', '%i_k__EObj', 
'%i_l__EObj', '%i_m__EObj', '%i_matrix', '%i_max', '%i_maxi', '%i_min', '%i_mini', '%i_mput', '%i_n__EObj', '%i_o__EObj', '%i_p', '%i_p__EObj', '%i_prod', '%i_q__EObj', '%i_r__EObj', '%i_s__EObj', '%i_sum', '%i_tril', '%i_triu', '%i_x__EObj', '%i_y__EObj', '%i_z__EObj', '%ip_i_XMLList', '%l_i_XMLList', '%l_i__EObj', '%lss_i_XMLList', '%mc_i_XMLList', '%msp_full', '%msp_i_XMLList', '%msp_spget', '%p_i_XMLList', '%ptr_i_XMLList', '%r_i_XMLList', '%s_1__EObj', '%s_2__EObj', '%s_3__EObj', '%s_4__EObj', '%s_a__EObj', '%s_d__EObj', '%s_g__EObj', '%s_h__EObj', '%s_i_XMLList', '%s_i__EObj', '%s_j__EObj', '%s_k__EObj', '%s_l__EObj', '%s_m__EObj', '%s_n__EObj', '%s_o__EObj', '%s_p__EObj', '%s_q__EObj', '%s_r__EObj', '%s_s__EObj', '%s_x__EObj', '%s_y__EObj', '%s_z__EObj', '%sp_i_XMLList', '%spb_i_XMLList', '%st_i_XMLList', 'Calendar', 'ClipBoard', 'Matplot', 'Matplot1', 'PlaySound', 'TCL_DeleteInterp', 'TCL_DoOneEvent', 'TCL_EvalFile', 'TCL_EvalStr', 'TCL_ExistArray', 'TCL_ExistInterp', 'TCL_ExistVar', 'TCL_GetVar', 'TCL_GetVersion', 'TCL_SetVar', 'TCL_UnsetVar', 'TCL_UpVar', '_', '_code2str', '_d', '_str2code', 'about', 'abs', 'acos', 'addModulePreferences', 'addcolor', 'addf', 'addhistory', 'addinter', 'addlocalizationdomain', 'amell', 'and', 'argn', 'arl2_ius', 'ascii', 'asin', 'atan', 'backslash', 'balanc', 'banner', 'base2dec', 'basename', 'bdiag', 'beep', 'besselh', 'besseli', 'besselj', 'besselk', 'bessely', 'beta', 'bezout', 'bfinit', 'blkfc1i', 'blkslvi', 'bool2s', 'browsehistory', 'browsevar', 'bsplin3val', 'buildDoc', 'buildouttb', 'bvode', 'c_link', 'call', 'callblk', 'captions', 'cd', 'cdfbet', 'cdfbin', 'cdfchi', 'cdfchn', 'cdff', 'cdffnc', 'cdfgam', 'cdfnbn', 'cdfnor', 'cdfpoi', 'cdft', 'ceil', 'champ', 'champ1', 'chdir', 'chol', 'clc', 'clean', 'clear', 'clearfun', 'clearglobal', 'closeEditor', 'closeEditvar', 'closeXcos', 'code2str', 'coeff', 'color', 'comp', 'completion', 'conj', 'contour2di', 'contr', 'conv2', 'convstr', 'copy', 'copyfile', 'corr', 'cos', 'coserror', 'createdir', 'cshep2d', 'csvDefault', 'csvIsnum', 'csvRead', 'csvStringToDouble', 'csvTextScan', 'csvWrite', 'ctree2', 'ctree3', 'ctree4', 'cumprod', 'cumsum', 'curblock', 'curblockc', 'daskr', 'dasrt', 'dassl', 'data2sig', 'datatipCreate', 'datatipManagerMode', 'datatipMove', 'datatipRemove', 'datatipSetDisplay', 'datatipSetInterp', 'datatipSetOrientation', 'datatipSetStyle', 'datatipToggle', 'dawson', 'dct', 'debug', 'dec2base', 'deff', 'definedfields', 'degree', 'delbpt', 'delete', 'deletefile', 'delip', 'delmenu', 'det', 'dgettext', 'dhinf', 'diag', 'diary', 'diffobjs', 'disp', 'dispbpt', 'displayhistory', 'disposefftwlibrary', 'dlgamma', 'dnaupd', 'dneupd', 'double', 'drawaxis', 'drawlater', 'drawnow', 'driver', 'dsaupd', 'dsearch', 'dseupd', 'dst', 'duplicate', 'editvar', 'emptystr', 'end_scicosim', 'ereduc', 'erf', 'erfc', 'erfcx', 'erfi', 'errcatch', 'errclear', 'error', 'eval_cshep2d', 'exec', 'execstr', 'exists', 'exit', 'exp', 'expm', 'exportUI', 'export_to_hdf5', 'eye', 'fadj2sp', 'fec', 'feval', 'fft', 'fftw', 'fftw_flags', 'fftw_forget_wisdom', 'fftwlibraryisloaded', 'figure', 'file', 'filebrowser', 'fileext', 'fileinfo', 'fileparts', 'filesep', 'find', 'findBD', 'findfiles', 'fire_closing_finished', 'floor', 'format', 'fort', 'fprintfMat', 'freq', 'frexp', 'fromc', 'fromjava', 'fscanfMat', 'fsolve', 'fstair', 'full', 'fullpath', 'funcprot', 'funptr', 'gamma', 'gammaln', 'geom3d', 'get', 'getURL', 'get_absolute_file_path', 'get_fftw_wisdom', 'getblocklabel', 'getcallbackobject', 'getdate', 'getdebuginfo', 
'getdefaultlanguage', 'getdrives', 'getdynlibext', 'getenv', 'getfield', 'gethistory', 'gethistoryfile', 'getinstalledlookandfeels', 'getio', 'getlanguage', 'getlongpathname', 'getlookandfeel', 'getmd5', 'getmemory', 'getmodules', 'getos', 'getpid', 'getrelativefilename', 'getscicosvars', 'getscilabmode', 'getshortpathname', 'gettext', 'getvariablesonstack', 'getversion', 'glist', 'global', 'glue', 'grand', 'graphicfunction', 'grayplot', 'grep', 'gsort', 'gstacksize', 'h5attr', 'h5close', 'h5cp', 'h5dataset', 'h5dump', 'h5exists', 'h5flush', 'h5get', 'h5group', 'h5isArray', 'h5isAttr', 'h5isCompound', 'h5isFile', 'h5isGroup', 'h5isList', 'h5isRef', 'h5isSet', 'h5isSpace', 'h5isType', 'h5isVlen', 'h5label', 'h5ln', 'h5ls', 'h5mount', 'h5mv', 'h5open', 'h5read', 'h5readattr', 'h5rm', 'h5umount', 'h5write', 'h5writeattr', 'havewindow', 'helpbrowser', 'hess', 'hinf', 'historymanager', 'historysize', 'host', 'htmlDump', 'htmlRead', 'htmlReadStr', 'htmlWrite', 'iconvert', 'ieee', 'ilib_verbose', 'imag', 'impl', 'import_from_hdf5', 'imult', 'inpnvi', 'int', 'int16', 'int2d', 'int32', 'int3d', 'int8', 'interp', 'interp2d', 'interp3d', 'intg', 'intppty', 'inttype', 'inv', 'invoke_lu', 'is_handle_valid', 'is_hdf5_file', 'isalphanum', 'isascii', 'isdef', 'isdigit', 'isdir', 'isequal', 'isequalbitwise', 'iserror', 'isfile', 'isglobal', 'isletter', 'isnum', 'isreal', 'iswaitingforinput', 'jallowClassReloading', 'jarray', 'jautoTranspose', 'jautoUnwrap', 'javaclasspath', 'javalibrarypath', 'jcast', 'jcompile', 'jconvMatrixMethod', 'jcreatejar', 'jdeff', 'jdisableTrace', 'jenableTrace', 'jexists', 'jgetclassname', 'jgetfield', 'jgetfields', 'jgetinfo', 'jgetmethods', 'jimport', 'jinvoke', 'jinvoke_db', 'jnewInstance', 'jremove', 'jsetfield', 'junwrap', 'junwraprem', 'jwrap', 'jwrapinfloat', 'kron', 'lasterror', 'ldiv', 'ldivf', 'legendre', 'length', 'lib', 'librarieslist', 'libraryinfo', 'light', 'linear_interpn', 'lines', 'link', 'linmeq', 'list', 'listvar_in_hdf5', 'load', 'loadGui', 'loadScicos', 'loadXcos', 'loadfftwlibrary', 'loadhistory', 'log', 'log1p', 'lsq', 'lsq_splin', 'lsqrsolve', 'lsslist', 'lstcat', 'lstsize', 'ltitr', 'lu', 'ludel', 'lufact', 'luget', 'lusolve', 'macr2lst', 'macr2tree', 'matfile_close', 'matfile_listvar', 'matfile_open', 'matfile_varreadnext', 'matfile_varwrite', 'matrix', 'max', 'maxfiles', 'mclearerr', 'mclose', 'meof', 'merror', 'messagebox', 'mfprintf', 'mfscanf', 'mget', 'mgeti', 'mgetl', 'mgetstr', 'min', 'mlist', 'mode', 'model2blk', 'mopen', 'move', 'movefile', 'mprintf', 'mput', 'mputl', 'mputstr', 'mscanf', 'mseek', 'msprintf', 'msscanf', 'mtell', 'mtlb_mode', 'mtlb_sparse', 'mucomp', 'mulf', 'name2rgb', 'nearfloat', 'newaxes', 'newest', 'newfun', 'nnz', 'norm', 'notify', 'number_properties', 'ode', 'odedc', 'ones', 'openged', 'opentk', 'optim', 'or', 'ordmmd', 'parallel_concurrency', 'parallel_run', 'param3d', 'param3d1', 'part', 'pathconvert', 'pathsep', 'phase_simulation', 'plot2d', 'plot2d1', 'plot2d2', 'plot2d3', 'plot2d4', 'plot3d', 'plot3d1', 'plotbrowser', 'pointer_xproperty', 'poly', 'ppol', 'pppdiv', 'predef', 'preferences', 'print', 'printf', 'printfigure', 'printsetupbox', 'prod', 'progressionbar', 'prompt', 'pwd', 'qld', 'qp_solve', 'qr', 'raise_window', 'rand', 'rankqr', 'rat', 'rcond', 'rdivf', 'read', 'read4b', 'read_csv', 'readb', 'readgateway', 'readmps', 'real', 'realtime', 'realtimeinit', 'regexp', 'relocate_handle', 'remez', 'removeModulePreferences', 'removedir', 'removelinehistory', 'res_with_prec', 'resethistory', 'residu', 'resume', 
'return', 'ricc', 'rlist', 'roots', 'rotate_axes', 'round', 'rpem', 'rtitr', 'rubberbox', 'save', 'saveGui', 'saveafterncommands', 'saveconsecutivecommands', 'savehistory', 'schur', 'sci_haltscicos', 'sci_tree2', 'sci_tree3', 'sci_tree4', 'sciargs', 'scicos_debug', 'scicos_debug_count', 'scicos_time', 'scicosim', 'scinotes', 'sctree', 'semidef', 'set', 'set_blockerror', 'set_fftw_wisdom', 'set_xproperty', 'setbpt', 'setdefaultlanguage', 'setenv', 'setfield', 'sethistoryfile', 'setlanguage', 'setlookandfeel', 'setmenu', 'sfact', 'sfinit', 'show_window', 'sident', 'sig2data', 'sign', 'simp', 'simp_mode', 'sin', 'size', 'slash', 'sleep', 'sorder', 'sparse', 'spchol', 'spcompack', 'spec', 'spget', 'splin', 'splin2d', 'splin3d', 'splitURL', 'spones', 'sprintf', 'sqrt', 'stacksize', 'str2code', 'strcat', 'strchr', 'strcmp', 'strcspn', 'strindex', 'string', 'stringbox', 'stripblanks', 'strncpy', 'strrchr', 'strrev', 'strsplit', 'strspn', 'strstr', 'strsubst', 'strtod', 'strtok', 'subf', 'sum', 'svd', 'swap_handles', 'symfcti', 'syredi', 'system_getproperty', 'system_setproperty', 'ta2lpd', 'tan', 'taucs_chdel', 'taucs_chfact', 'taucs_chget', 'taucs_chinfo', 'taucs_chsolve', 'tempname', 'testmatrix', 'timer', 'tlist', 'tohome', 'tokens', 'toolbar', 'toprint', 'tr_zer', 'tril', 'triu', 'type', 'typename', 'uiDisplayTree', 'uicontextmenu', 'uicontrol', 'uigetcolor', 'uigetdir', 'uigetfile', 'uigetfont', 'uimenu', 'uint16', 'uint32', 'uint8', 'uipopup', 'uiputfile', 'uiwait', 'ulink', 'umf_ludel', 'umf_lufact', 'umf_luget', 'umf_luinfo', 'umf_lusolve', 'umfpack', 'unglue', 'unix', 'unsetmenu', 'unzoom', 'updatebrowsevar', 'usecanvas', 'useeditor', 'user', 'var2vec', 'varn', 'vec2var', 'waitbar', 'warnBlockByUID', 'warning', 'what', 'where', 'whereis', 'who', 'winsid', 'with_module', 'writb', 'write', 'write4b', 'write_csv', 'x_choose', 'x_choose_modeless', 'x_dialog', 'x_mdialog', 'xarc', 'xarcs', 'xarrows', 'xchange', 'xchoicesi', 'xclick', 'xcos', 'xcosAddToolsMenu', 'xcosConfigureXmlFile', 'xcosDiagramToScilab', 'xcosPalCategoryAdd', 'xcosPalDelete', 'xcosPalDisable', 'xcosPalEnable', 'xcosPalGenerateIcon', 'xcosPalGet', 'xcosPalLoad', 'xcosPalMove', 'xcosSimulationStarted', 'xcosUpdateBlock', 'xdel', 'xend', 'xfarc', 'xfarcs', 'xfpoly', 'xfpolys', 'xfrect', 'xget', 'xgetmouse', 'xgraduate', 'xgrid', 'xinit', 'xlfont', 'xls_open', 'xls_read', 'xmlAddNs', 'xmlAppend', 'xmlAsNumber', 'xmlAsText', 'xmlDTD', 'xmlDelete', 'xmlDocument', 'xmlDump', 'xmlElement', 'xmlFormat', 'xmlGetNsByHref', 'xmlGetNsByPrefix', 'xmlGetOpenDocs', 'xmlIsValidObject', 'xmlName', 'xmlNs', 'xmlRead', 'xmlReadStr', 'xmlRelaxNG', 'xmlRemove', 'xmlSchema', 'xmlSetAttributes', 'xmlValidate', 'xmlWrite', 'xmlXPath', 'xname', 'xpause', 'xpoly', 'xpolys', 'xrect', 'xrects', 'xs2bmp', 'xs2emf', 'xs2eps', 'xs2gif', 'xs2jpg', 'xs2pdf', 'xs2png', 'xs2ppm', 'xs2ps', 'xs2svg', 'xsegs', 'xset', 'xstring', 'xstringb', 'xtitle', 'zeros', 'znaupd', 'zneupd', 'zoom_rect', ) macros_kw = ( '!_deff_wrapper', '%0_i_st', '%3d_i_h', '%Block_xcosUpdateBlock', '%TNELDER_p', '%TNELDER_string', '%TNMPLOT_p', '%TNMPLOT_string', '%TOPTIM_p', '%TOPTIM_string', '%TSIMPLEX_p', '%TSIMPLEX_string', '%_EVoid_p', '%_gsort', '%_listvarinfile', '%_rlist', '%_save', '%_sodload', '%_strsplit', '%_unwrap', '%ar_p', '%asn', '%b_a_b', '%b_a_s', '%b_c_s', '%b_c_spb', '%b_cumprod', '%b_cumsum', '%b_d_s', '%b_diag', '%b_e', '%b_f_s', '%b_f_spb', '%b_g_s', '%b_g_spb', '%b_grand', '%b_h_s', '%b_h_spb', '%b_i_b', '%b_i_ce', '%b_i_h', '%b_i_hm', '%b_i_s', '%b_i_sp', 
'%b_i_spb', '%b_i_st', '%b_iconvert', '%b_l_b', '%b_l_s', '%b_m_b', '%b_m_s', '%b_matrix', '%b_n_hm', '%b_o_hm', '%b_p_s', '%b_prod', '%b_r_b', '%b_r_s', '%b_s_b', '%b_s_s', '%b_string', '%b_sum', '%b_tril', '%b_triu', '%b_x_b', '%b_x_s', '%bicg', '%bicgstab', '%c_a_c', '%c_b_c', '%c_b_s', '%c_diag', '%c_dsearch', '%c_e', '%c_eye', '%c_f_s', '%c_grand', '%c_i_c', '%c_i_ce', '%c_i_h', '%c_i_hm', '%c_i_lss', '%c_i_r', '%c_i_s', '%c_i_st', '%c_matrix', '%c_n_l', '%c_n_st', '%c_o_l', '%c_o_st', '%c_ones', '%c_rand', '%c_tril', '%c_triu', '%cblock_c_cblock', '%cblock_c_s', '%cblock_e', '%cblock_f_cblock', '%cblock_p', '%cblock_size', '%ce_6', '%ce_c_ce', '%ce_e', '%ce_f_ce', '%ce_i_ce', '%ce_i_s', '%ce_i_st', '%ce_matrix', '%ce_p', '%ce_size', '%ce_string', '%ce_t', '%cgs', '%champdat_i_h', '%choose', '%diagram_xcos', '%dir_p', '%fptr_i_st', '%grand_perm', '%grayplot_i_h', '%h_i_st', '%hmS_k_hmS_generic', '%hm_1_hm', '%hm_1_s', '%hm_2_hm', '%hm_2_s', '%hm_3_hm', '%hm_3_s', '%hm_4_hm', '%hm_4_s', '%hm_5', '%hm_a_hm', '%hm_a_r', '%hm_a_s', '%hm_abs', '%hm_and', '%hm_bool2s', '%hm_c_hm', '%hm_ceil', '%hm_conj', '%hm_cos', '%hm_cumprod', '%hm_cumsum', '%hm_d_hm', '%hm_d_s', '%hm_degree', '%hm_dsearch', '%hm_e', '%hm_exp', '%hm_eye', '%hm_f_hm', '%hm_find', '%hm_floor', '%hm_g_hm', '%hm_grand', '%hm_gsort', '%hm_h_hm', '%hm_i_b', '%hm_i_ce', '%hm_i_h', '%hm_i_hm', '%hm_i_i', '%hm_i_p', '%hm_i_r', '%hm_i_s', '%hm_i_st', '%hm_iconvert', '%hm_imag', '%hm_int', '%hm_isnan', '%hm_isreal', '%hm_j_hm', '%hm_j_s', '%hm_k_hm', '%hm_k_s', '%hm_log', '%hm_m_p', '%hm_m_r', '%hm_m_s', '%hm_matrix', '%hm_max', '%hm_mean', '%hm_median', '%hm_min', '%hm_n_b', '%hm_n_c', '%hm_n_hm', '%hm_n_i', '%hm_n_p', '%hm_n_s', '%hm_o_b', '%hm_o_c', '%hm_o_hm', '%hm_o_i', '%hm_o_p', '%hm_o_s', '%hm_ones', '%hm_or', '%hm_p', '%hm_prod', '%hm_q_hm', '%hm_r_s', '%hm_rand', '%hm_real', '%hm_round', '%hm_s', '%hm_s_hm', '%hm_s_r', '%hm_s_s', '%hm_sign', '%hm_sin', '%hm_size', '%hm_sqrt', '%hm_stdev', '%hm_string', '%hm_sum', '%hm_x_hm', '%hm_x_p', '%hm_x_s', '%hm_zeros', '%i_1_s', '%i_2_s', '%i_3_s', '%i_4_s', '%i_Matplot', '%i_a_i', '%i_a_s', '%i_and', '%i_ascii', '%i_b_s', '%i_bezout', '%i_champ', '%i_champ1', '%i_contour', '%i_contour2d', '%i_d_i', '%i_d_s', '%i_dsearch', '%i_e', '%i_fft', '%i_g_i', '%i_gcd', '%i_grand', '%i_h_i', '%i_i_ce', '%i_i_h', '%i_i_hm', '%i_i_i', '%i_i_s', '%i_i_st', '%i_j_i', '%i_j_s', '%i_l_s', '%i_lcm', '%i_length', '%i_m_i', '%i_m_s', '%i_mfprintf', '%i_mprintf', '%i_msprintf', '%i_n_s', '%i_o_s', '%i_or', '%i_p_i', '%i_p_s', '%i_plot2d', '%i_plot2d1', '%i_plot2d2', '%i_q_s', '%i_r_i', '%i_r_s', '%i_round', '%i_s_i', '%i_s_s', '%i_sign', '%i_string', '%i_x_i', '%i_x_s', '%ip_a_s', '%ip_i_st', '%ip_m_s', '%ip_n_ip', '%ip_o_ip', '%ip_p', '%ip_part', '%ip_s_s', '%ip_string', '%k', '%l_i_h', '%l_i_s', '%l_i_st', '%l_isequal', '%l_n_c', '%l_n_l', '%l_n_m', '%l_n_p', '%l_n_s', '%l_n_st', '%l_o_c', '%l_o_l', '%l_o_m', '%l_o_p', '%l_o_s', '%l_o_st', '%lss_a_lss', '%lss_a_p', '%lss_a_r', '%lss_a_s', '%lss_c_lss', '%lss_c_p', '%lss_c_r', '%lss_c_s', '%lss_e', '%lss_eye', '%lss_f_lss', '%lss_f_p', '%lss_f_r', '%lss_f_s', '%lss_i_ce', '%lss_i_lss', '%lss_i_p', '%lss_i_r', '%lss_i_s', '%lss_i_st', '%lss_inv', '%lss_l_lss', '%lss_l_p', '%lss_l_r', '%lss_l_s', '%lss_m_lss', '%lss_m_p', '%lss_m_r', '%lss_m_s', '%lss_n_lss', '%lss_n_p', '%lss_n_r', '%lss_n_s', '%lss_norm', '%lss_o_lss', '%lss_o_p', '%lss_o_r', '%lss_o_s', '%lss_ones', '%lss_r_lss', '%lss_r_p', '%lss_r_r', '%lss_r_s', '%lss_rand', '%lss_s', 
'%lss_s_lss', '%lss_s_p', '%lss_s_r', '%lss_s_s', '%lss_size', '%lss_t', '%lss_v_lss', '%lss_v_p', '%lss_v_r', '%lss_v_s', '%lt_i_s', '%m_n_l', '%m_o_l', '%mc_i_h', '%mc_i_s', '%mc_i_st', '%mc_n_st', '%mc_o_st', '%mc_string', '%mps_p', '%mps_string', '%msp_a_s', '%msp_abs', '%msp_e', '%msp_find', '%msp_i_s', '%msp_i_st', '%msp_length', '%msp_m_s', '%msp_maxi', '%msp_n_msp', '%msp_nnz', '%msp_o_msp', '%msp_p', '%msp_sparse', '%msp_spones', '%msp_t', '%p_a_lss', '%p_a_r', '%p_c_lss', '%p_c_r', '%p_cumprod', '%p_cumsum', '%p_d_p', '%p_d_r', '%p_d_s', '%p_det', '%p_e', '%p_f_lss', '%p_f_r', '%p_grand', '%p_i_ce', '%p_i_h', '%p_i_hm', '%p_i_lss', '%p_i_p', '%p_i_r', '%p_i_s', '%p_i_st', '%p_inv', '%p_j_s', '%p_k_p', '%p_k_r', '%p_k_s', '%p_l_lss', '%p_l_p', '%p_l_r', '%p_l_s', '%p_m_hm', '%p_m_lss', '%p_m_r', '%p_matrix', '%p_n_l', '%p_n_lss', '%p_n_r', '%p_o_l', '%p_o_lss', '%p_o_r', '%p_o_sp', '%p_p_s', '%p_part', '%p_prod', '%p_q_p', '%p_q_r', '%p_q_s', '%p_r_lss', '%p_r_p', '%p_r_r', '%p_r_s', '%p_s_lss', '%p_s_r', '%p_simp', '%p_string', '%p_sum', '%p_v_lss', '%p_v_p', '%p_v_r', '%p_v_s', '%p_x_hm', '%p_x_r', '%p_y_p', '%p_y_r', '%p_y_s', '%p_z_p', '%p_z_r', '%p_z_s', '%pcg', '%plist_p', '%plist_string', '%r_0', '%r_a_hm', '%r_a_lss', '%r_a_p', '%r_a_r', '%r_a_s', '%r_c_lss', '%r_c_p', '%r_c_r', '%r_c_s', '%r_clean', '%r_cumprod', '%r_cumsum', '%r_d_p', '%r_d_r', '%r_d_s', '%r_det', '%r_diag', '%r_e', '%r_eye', '%r_f_lss', '%r_f_p', '%r_f_r', '%r_f_s', '%r_i_ce', '%r_i_hm', '%r_i_lss', '%r_i_p', '%r_i_r', '%r_i_s', '%r_i_st', '%r_inv', '%r_j_s', '%r_k_p', '%r_k_r', '%r_k_s', '%r_l_lss', '%r_l_p', '%r_l_r', '%r_l_s', '%r_m_hm', '%r_m_lss', '%r_m_p', '%r_m_r', '%r_m_s', '%r_matrix', '%r_n_lss', '%r_n_p', '%r_n_r', '%r_n_s', '%r_norm', '%r_o_lss', '%r_o_p', '%r_o_r', '%r_o_s', '%r_ones', '%r_p', '%r_p_s', '%r_prod', '%r_q_p', '%r_q_r', '%r_q_s', '%r_r_lss', '%r_r_p', '%r_r_r', '%r_r_s', '%r_rand', '%r_s', '%r_s_hm', '%r_s_lss', '%r_s_p', '%r_s_r', '%r_s_s', '%r_simp', '%r_size', '%r_string', '%r_sum', '%r_t', '%r_tril', '%r_triu', '%r_v_lss', '%r_v_p', '%r_v_r', '%r_v_s', '%r_varn', '%r_x_p', '%r_x_r', '%r_x_s', '%r_y_p', '%r_y_r', '%r_y_s', '%r_z_p', '%r_z_r', '%r_z_s', '%s_1_hm', '%s_1_i', '%s_2_hm', '%s_2_i', '%s_3_hm', '%s_3_i', '%s_4_hm', '%s_4_i', '%s_5', '%s_a_b', '%s_a_hm', '%s_a_i', '%s_a_ip', '%s_a_lss', '%s_a_msp', '%s_a_r', '%s_a_sp', '%s_and', '%s_b_i', '%s_b_s', '%s_bezout', '%s_c_b', '%s_c_cblock', '%s_c_lss', '%s_c_r', '%s_c_sp', '%s_d_b', '%s_d_i', '%s_d_p', '%s_d_r', '%s_d_sp', '%s_e', '%s_f_b', '%s_f_cblock', '%s_f_lss', '%s_f_r', '%s_f_sp', '%s_g_b', '%s_g_s', '%s_gcd', '%s_grand', '%s_h_b', '%s_h_s', '%s_i_b', '%s_i_c', '%s_i_ce', '%s_i_h', '%s_i_hm', '%s_i_i', '%s_i_lss', '%s_i_p', '%s_i_r', '%s_i_s', '%s_i_sp', '%s_i_spb', '%s_i_st', '%s_j_i', '%s_k_hm', '%s_k_p', '%s_k_r', '%s_k_sp', '%s_l_b', '%s_l_hm', '%s_l_i', '%s_l_lss', '%s_l_p', '%s_l_r', '%s_l_s', '%s_l_sp', '%s_lcm', '%s_m_b', '%s_m_hm', '%s_m_i', '%s_m_ip', '%s_m_lss', '%s_m_msp', '%s_m_r', '%s_matrix', '%s_n_hm', '%s_n_i', '%s_n_l', '%s_n_lss', '%s_n_r', '%s_n_st', '%s_o_hm', '%s_o_i', '%s_o_l', '%s_o_lss', '%s_o_r', '%s_o_st', '%s_or', '%s_p_b', '%s_p_i', '%s_pow', '%s_q_hm', '%s_q_i', '%s_q_p', '%s_q_r', '%s_q_sp', '%s_r_b', '%s_r_i', '%s_r_lss', '%s_r_p', '%s_r_r', '%s_r_s', '%s_r_sp', '%s_s_b', '%s_s_hm', '%s_s_i', '%s_s_ip', '%s_s_lss', '%s_s_r', '%s_s_sp', '%s_simp', '%s_v_lss', '%s_v_p', '%s_v_r', '%s_v_s', '%s_x_b', '%s_x_hm', '%s_x_i', '%s_x_r', '%s_y_p', '%s_y_r', '%s_y_sp', '%s_z_p', '%s_z_r', 
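# (macros following the '%<type>_<op>_<type>' pattern implement operator
# overloading between Scilab types -- e.g. '%s_a_sp' is the '+' overload
# for a full matrix and a sparse one; see the Scilab overloading docs)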
'%s_z_sp', '%sn', '%sp_a_s', '%sp_a_sp', '%sp_and', '%sp_c_s', '%sp_ceil', '%sp_conj', '%sp_cos', '%sp_cumprod', '%sp_cumsum', '%sp_d_s', '%sp_d_sp', '%sp_det', '%sp_diag', '%sp_e', '%sp_exp', '%sp_f_s', '%sp_floor', '%sp_grand', '%sp_gsort', '%sp_i_ce', '%sp_i_h', '%sp_i_s', '%sp_i_sp', '%sp_i_st', '%sp_int', '%sp_inv', '%sp_k_s', '%sp_k_sp', '%sp_l_s', '%sp_l_sp', '%sp_length', '%sp_max', '%sp_min', '%sp_norm', '%sp_or', '%sp_p_s', '%sp_prod', '%sp_q_s', '%sp_q_sp', '%sp_r_s', '%sp_r_sp', '%sp_round', '%sp_s_s', '%sp_s_sp', '%sp_sin', '%sp_sqrt', '%sp_string', '%sp_sum', '%sp_tril', '%sp_triu', '%sp_y_s', '%sp_y_sp', '%sp_z_s', '%sp_z_sp', '%spb_and', '%spb_c_b', '%spb_cumprod', '%spb_cumsum', '%spb_diag', '%spb_e', '%spb_f_b', '%spb_g_b', '%spb_g_spb', '%spb_h_b', '%spb_h_spb', '%spb_i_b', '%spb_i_ce', '%spb_i_h', '%spb_i_st', '%spb_or', '%spb_prod', '%spb_sum', '%spb_tril', '%spb_triu', '%st_6', '%st_c_st', '%st_e', '%st_f_st', '%st_i_b', '%st_i_c', '%st_i_fptr', '%st_i_h', '%st_i_i', '%st_i_ip', '%st_i_lss', '%st_i_msp', '%st_i_p', '%st_i_r', '%st_i_s', '%st_i_sp', '%st_i_spb', '%st_i_st', '%st_matrix', '%st_n_c', '%st_n_l', '%st_n_mc', '%st_n_p', '%st_n_s', '%st_o_c', '%st_o_l', '%st_o_mc', '%st_o_p', '%st_o_s', '%st_o_tl', '%st_p', '%st_size', '%st_string', '%st_t', '%ticks_i_h', '%xls_e', '%xls_p', '%xlssheet_e', '%xlssheet_p', '%xlssheet_size', '%xlssheet_string', 'DominationRank', 'G_make', 'IsAScalar', 'NDcost', 'OS_Version', 'PlotSparse', 'ReadHBSparse', 'TCL_CreateSlave', 'abcd', 'abinv', 'accept_func_default', 'accept_func_vfsa', 'acf', 'acosd', 'acosh', 'acoshm', 'acosm', 'acot', 'acotd', 'acoth', 'acsc', 'acscd', 'acsch', 'add_demo', 'add_help_chapter', 'add_module_help_chapter', 'add_param', 'add_profiling', 'adj2sp', 'aff2ab', 'ana_style', 'analpf', 'analyze', 'aplat', 'arhnk', 'arl2', 'arma2p', 'arma2ss', 'armac', 'armax', 'armax1', 'arobasestring2strings', 'arsimul', 'ascii2string', 'asciimat', 'asec', 'asecd', 'asech', 'asind', 'asinh', 'asinhm', 'asinm', 'assert_checkalmostequal', 'assert_checkequal', 'assert_checkerror', 'assert_checkfalse', 'assert_checkfilesequal', 'assert_checktrue', 'assert_comparecomplex', 'assert_computedigits', 'assert_cond2reltol', 'assert_cond2reqdigits', 'assert_generror', 'atand', 'atanh', 'atanhm', 'atanm', 'atomsAutoload', 'atomsAutoloadAdd', 'atomsAutoloadDel', 'atomsAutoloadList', 'atomsCategoryList', 'atomsCheckModule', 'atomsDepTreeShow', 'atomsGetConfig', 'atomsGetInstalled', 'atomsGetInstalledPath', 'atomsGetLoaded', 'atomsGetLoadedPath', 'atomsInstall', 'atomsIsInstalled', 'atomsIsLoaded', 'atomsList', 'atomsLoad', 'atomsQuit', 'atomsRemove', 'atomsRepositoryAdd', 'atomsRepositoryDel', 'atomsRepositoryList', 'atomsRestoreConfig', 'atomsSaveConfig', 'atomsSearch', 'atomsSetConfig', 'atomsShow', 'atomsSystemInit', 'atomsSystemUpdate', 'atomsTest', 'atomsUpdate', 'atomsVersion', 'augment', 'auread', 'auwrite', 'balreal', 'bench_run', 'bilin', 'bilt', 'bin2dec', 'binomial', 'bitand', 'bitcmp', 'bitget', 'bitor', 'bitset', 'bitxor', 'black', 'blanks', 'bloc2exp', 'bloc2ss', 'block_parameter_error', 'bode', 'bode_asymp', 'bstap', 'buttmag', 'bvodeS', 'bytecode', 'bytecodewalk', 'cainv', 'calendar', 'calerf', 'calfrq', 'canon', 'casc', 'cat', 'cat_code', 'cb_m2sci_gui', 'ccontrg', 'cell', 'cell2mat', 'cellstr', 'center', 'cepstrum', 'cfspec', 'char', 'chart', 'cheb1mag', 'cheb2mag', 'check_gateways', 'check_modules_xml', 'check_versions', 'chepol', 'chfact', 'chsolve', 'classmarkov', 'clean_help', 'clock', 'cls2dls', 'cmb_lin', 
'cmndred', 'cmoment', 'coding_ga_binary', 'coding_ga_identity', 'coff', 'coffg', 'colcomp', 'colcompr', 'colinout', 'colregul', 'companion', 'complex', 'compute_initial_temp', 'cond', 'cond2sp', 'condestsp', 'configure_msifort', 'configure_msvc', 'conjgrad', 'cont_frm', 'cont_mat', 'contrss', 'conv', 'convert_to_float', 'convertindex', 'convol', 'convol2d', 'copfac', 'correl', 'cosd', 'cosh', 'coshm', 'cosm', 'cotd', 'cotg', 'coth', 'cothm', 'cov', 'covar', 'createXConfiguration', 'createfun', 'createstruct', 'cross', 'crossover_ga_binary', 'crossover_ga_default', 'csc', 'cscd', 'csch', 'csgn', 'csim', 'cspect', 'ctr_gram', 'czt', 'dae', 'daeoptions', 'damp', 'datafit', 'date', 'datenum', 'datevec', 'dbphi', 'dcf', 'ddp', 'dec2bin', 'dec2hex', 'dec2oct', 'del_help_chapter', 'del_module_help_chapter', 'demo_begin', 'demo_choose', 'demo_compiler', 'demo_end', 'demo_file_choice', 'demo_folder_choice', 'demo_function_choice', 'demo_gui', 'demo_run', 'demo_viewCode', 'denom', 'derivat', 'derivative', 'des2ss', 'des2tf', 'detectmsifort64tools', 'detectmsvc64tools', 'determ', 'detr', 'detrend', 'devtools_run_builder', 'dhnorm', 'diff', 'diophant', 'dir', 'dirname', 'dispfiles', 'dllinfo', 'dscr', 'dsimul', 'dt_ility', 'dtsi', 'edit', 'edit_error', 'editor', 'eigenmarkov', 'eigs', 'ell1mag', 'enlarge_shape', 'entropy', 'eomday', 'epred', 'eqfir', 'eqiir', 'equil', 'equil1', 'erfinv', 'etime', 'eval', 'evans', 'evstr', 'example_run', 'expression2code', 'extract_help_examples', 'factor', 'factorial', 'factors', 'faurre', 'ffilt', 'fft2', 'fftshift', 'fieldnames', 'filt_sinc', 'filter', 'findABCD', 'findAC', 'findBDK', 'findR', 'find_freq', 'find_links', 'find_scicos_version', 'findm', 'findmsifortcompiler', 'findmsvccompiler', 'findx0BD', 'firstnonsingleton', 'fix', 'fixedpointgcd', 'flipdim', 'flts', 'fminsearch', 'formatBlackTip', 'formatBodeMagTip', 'formatBodePhaseTip', 'formatGainplotTip', 'formatHallModuleTip', 'formatHallPhaseTip', 'formatNicholsGainTip', 'formatNicholsPhaseTip', 'formatNyquistTip', 'formatPhaseplotTip', 'formatSgridDampingTip', 'formatSgridFreqTip', 'formatZgridDampingTip', 'formatZgridFreqTip', 'format_txt', 'fourplan', 'frep2tf', 'freson', 'frfit', 'frmag', 'fseek_origin', 'fsfirlin', 'fspec', 'fspecg', 'fstabst', 'ftest', 'ftuneq', 'fullfile', 'fullrf', 'fullrfk', 'fun2string', 'g_margin', 'gainplot', 'gamitg', 'gcare', 'gcd', 'gencompilationflags_unix', 'generateBlockImage', 'generateBlockImages', 'generic_i_ce', 'generic_i_h', 'generic_i_hm', 'generic_i_s', 'generic_i_st', 'genlib', 'genmarkov', 'geomean', 'getDiagramVersion', 'getModelicaPath', 'getPreferencesValue', 'get_file_path', 'get_function_path', 'get_param', 'get_profile', 'get_scicos_version', 'getd', 'getscilabkeywords', 'getshell', 'gettklib', 'gfare', 'gfrancis', 'givens', 'glever', 'gmres', 'group', 'gschur', 'gspec', 'gtild', 'h2norm', 'h_cl', 'h_inf', 'h_inf_st', 'h_norm', 'hallchart', 'halt', 'hank', 'hankelsv', 'harmean', 'haveacompiler', 'head_comments', 'help_from_sci', 'help_skeleton', 'hermit', 'hex2dec', 'hilb', 'hilbert', 'histc', 'horner', 'householder', 'hrmt', 'htrianr', 'hypermat', 'idct', 'idst', 'ifft', 'ifftshift', 'iir', 'iirgroup', 'iirlp', 'iirmod', 'ilib_build', 'ilib_build_jar', 'ilib_compile', 'ilib_for_link', 'ilib_gen_Make', 'ilib_gen_Make_unix', 'ilib_gen_cleaner', 'ilib_gen_gateway', 'ilib_gen_loader', 'ilib_include_flag', 'ilib_mex_build', 'im_inv', 'importScicosDiagram', 'importScicosPal', 'importXcosDiagram', 'imrep2ss', 'ind2sub', 'inistate', 'init_ga_default', 'init_param', 
'initial_scicos_tables', 'input', 'instruction2code', 'intc', 'intdec', 'integrate', 'interp1', 'interpln', 'intersect', 'intl', 'intsplin', 'inttrap', 'inv_coeff', 'invr', 'invrs', 'invsyslin', 'iqr', 'isLeapYear', 'is_absolute_path', 'is_param', 'iscell', 'iscellstr', 'iscolumn', 'isempty', 'isfield', 'isinf', 'ismatrix', 'isnan', 'isrow', 'isscalar', 'issparse', 'issquare', 'isstruct', 'isvector', 'jmat', 'justify', 'kalm', 'karmarkar', 'kernel', 'kpure', 'krac2', 'kroneck', 'lattn', 'lattp', 'launchtest', 'lcf', 'lcm', 'lcmdiag', 'leastsq', 'leqe', 'leqr', 'lev', 'levin', 'lex_sort', 'lft', 'lin', 'lin2mu', 'lincos', 'lindquist', 'linf', 'linfn', 'linsolve', 'linspace', 'list2vec', 'list_param', 'listfiles', 'listfunctions', 'listvarinfile', 'lmisolver', 'lmitool', 'loadXcosLibs', 'loadmatfile', 'loadwave', 'log10', 'log2', 'logm', 'logspace', 'lqe', 'lqg', 'lqg2stan', 'lqg_ltr', 'lqr', 'ls', 'lyap', 'm2sci_gui', 'm_circle', 'macglov', 'macrovar', 'mad', 'makecell', 'manedit', 'mapsound', 'markp2ss', 'matfile2sci', 'mdelete', 'mean', 'meanf', 'median', 'members', 'mese', 'meshgrid', 'mfft', 'mfile2sci', 'minreal', 'minss', 'mkdir', 'modulo', 'moment', 'mrfit', 'msd', 'mstr2sci', 'mtlb', 'mtlb_0', 'mtlb_a', 'mtlb_all', 'mtlb_any', 'mtlb_axes', 'mtlb_axis', 'mtlb_beta', 'mtlb_box', 'mtlb_choices', 'mtlb_close', 'mtlb_colordef', 'mtlb_cond', 'mtlb_cov', 'mtlb_cumprod', 'mtlb_cumsum', 'mtlb_dec2hex', 'mtlb_delete', 'mtlb_diag', 'mtlb_diff', 'mtlb_dir', 'mtlb_double', 'mtlb_e', 'mtlb_echo', 'mtlb_error', 'mtlb_eval', 'mtlb_exist', 'mtlb_eye', 'mtlb_false', 'mtlb_fft', 'mtlb_fftshift', 'mtlb_filter', 'mtlb_find', 'mtlb_findstr', 'mtlb_fliplr', 'mtlb_fopen', 'mtlb_format', 'mtlb_fprintf', 'mtlb_fread', 'mtlb_fscanf', 'mtlb_full', 'mtlb_fwrite', 'mtlb_get', 'mtlb_grid', 'mtlb_hold', 'mtlb_i', 'mtlb_ifft', 'mtlb_image', 'mtlb_imp', 'mtlb_int16', 'mtlb_int32', 'mtlb_int8', 'mtlb_is', 'mtlb_isa', 'mtlb_isfield', 'mtlb_isletter', 'mtlb_isspace', 'mtlb_l', 'mtlb_legendre', 'mtlb_linspace', 'mtlb_logic', 'mtlb_logical', 'mtlb_loglog', 'mtlb_lower', 'mtlb_max', 'mtlb_mean', 'mtlb_median', 'mtlb_mesh', 'mtlb_meshdom', 'mtlb_min', 'mtlb_more', 'mtlb_num2str', 'mtlb_ones', 'mtlb_pcolor', 'mtlb_plot', 'mtlb_prod', 'mtlb_qr', 'mtlb_qz', 'mtlb_rand', 'mtlb_randn', 'mtlb_rcond', 'mtlb_realmax', 'mtlb_realmin', 'mtlb_s', 'mtlb_semilogx', 'mtlb_semilogy', 'mtlb_setstr', 'mtlb_size', 'mtlb_sort', 'mtlb_sortrows', 'mtlb_sprintf', 'mtlb_sscanf', 'mtlb_std', 'mtlb_strcmp', 'mtlb_strcmpi', 'mtlb_strfind', 'mtlb_strrep', 'mtlb_subplot', 'mtlb_sum', 'mtlb_t', 'mtlb_toeplitz', 'mtlb_tril', 'mtlb_triu', 'mtlb_true', 'mtlb_type', 'mtlb_uint16', 'mtlb_uint32', 'mtlb_uint8', 'mtlb_upper', 'mtlb_var', 'mtlb_zeros', 'mu2lin', 'mutation_ga_binary', 'mutation_ga_default', 'mvcorrel', 'mvvacov', 'nancumsum', 'nand2mean', 'nanmax', 'nanmean', 'nanmeanf', 'nanmedian', 'nanmin', 'nanreglin', 'nanstdev', 'nansum', 'narsimul', 'ndgrid', 'ndims', 'nehari', 'neigh_func_csa', 'neigh_func_default', 'neigh_func_fsa', 'neigh_func_vfsa', 'neldermead_cget', 'neldermead_configure', 'neldermead_costf', 'neldermead_defaultoutput', 'neldermead_destroy', 'neldermead_function', 'neldermead_get', 'neldermead_log', 'neldermead_new', 'neldermead_restart', 'neldermead_search', 'neldermead_updatesimp', 'nextpow2', 'nfreq', 'nicholschart', 'nlev', 'nmplot_cget', 'nmplot_configure', 'nmplot_contour', 'nmplot_destroy', 'nmplot_function', 'nmplot_get', 'nmplot_historyplot', 'nmplot_log', 'nmplot_new', 'nmplot_outputcmd', 'nmplot_restart', 
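# (the neldermead_*, nmplot_*, optimbase_* and optimsimplex_* families
# around this point come from Scilab's optimization toolbox macros)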
'nmplot_search', 'nmplot_simplexhistory', 'noisegen', 'nonreg_test_run', 'now', 'nthroot', 'null', 'num2cell', 'numderivative', 'numdiff', 'numer', 'nyquist', 'nyquistfrequencybounds', 'obs_gram', 'obscont', 'observer', 'obsv_mat', 'obsvss', 'oct2dec', 'odeoptions', 'optim_ga', 'optim_moga', 'optim_nsga', 'optim_nsga2', 'optim_sa', 'optimbase_cget', 'optimbase_checkbounds', 'optimbase_checkcostfun', 'optimbase_checkx0', 'optimbase_configure', 'optimbase_destroy', 'optimbase_function', 'optimbase_get', 'optimbase_hasbounds', 'optimbase_hasconstraints', 'optimbase_hasnlcons', 'optimbase_histget', 'optimbase_histset', 'optimbase_incriter', 'optimbase_isfeasible', 'optimbase_isinbounds', 'optimbase_isinnonlincons', 'optimbase_log', 'optimbase_logshutdown', 'optimbase_logstartup', 'optimbase_new', 'optimbase_outputcmd', 'optimbase_outstruct', 'optimbase_proj2bnds', 'optimbase_set', 'optimbase_stoplog', 'optimbase_terminate', 'optimget', 'optimplotfunccount', 'optimplotfval', 'optimplotx', 'optimset', 'optimsimplex_center', 'optimsimplex_check', 'optimsimplex_compsomefv', 'optimsimplex_computefv', 'optimsimplex_deltafv', 'optimsimplex_deltafvmax', 'optimsimplex_destroy', 'optimsimplex_dirmat', 'optimsimplex_fvmean', 'optimsimplex_fvstdev', 'optimsimplex_fvvariance', 'optimsimplex_getall', 'optimsimplex_getallfv', 'optimsimplex_getallx', 'optimsimplex_getfv', 'optimsimplex_getn', 'optimsimplex_getnbve', 'optimsimplex_getve', 'optimsimplex_getx', 'optimsimplex_gradientfv', 'optimsimplex_log', 'optimsimplex_new', 'optimsimplex_reflect', 'optimsimplex_setall', 'optimsimplex_setallfv', 'optimsimplex_setallx', 'optimsimplex_setfv', 'optimsimplex_setn', 'optimsimplex_setnbve', 'optimsimplex_setve', 'optimsimplex_setx', 'optimsimplex_shrink', 'optimsimplex_size', 'optimsimplex_sort', 'optimsimplex_xbar', 'orth', 'output_ga_default', 'output_moga_default', 'output_nsga2_default', 'output_nsga_default', 'p_margin', 'pack', 'pareto_filter', 'parrot', 'pbig', 'pca', 'pcg', 'pdiv', 'pen2ea', 'pencan', 'pencost', 'penlaur', 'perctl', 'perl', 'perms', 'permute', 'pertrans', 'pfactors', 'pfss', 'phasemag', 'phaseplot', 'phc', 'pinv', 'playsnd', 'plotprofile', 'plzr', 'pmodulo', 'pol2des', 'pol2str', 'polar', 'polfact', 'prbs_a', 'prettyprint', 'primes', 'princomp', 'profile', 'proj', 'projsl', 'projspec', 'psmall', 'pspect', 'qmr', 'qpsolve', 'quart', 'quaskro', 'rafiter', 'randpencil', 'range', 'rank', 'readxls', 'recompilefunction', 'recons', 'reglin', 'regress', 'remezb', 'remove_param', 'remove_profiling', 'repfreq', 'replace_Ix_by_Fx', 'repmat', 'reset_profiling', 'resize_matrix', 'returntoscilab', 'rhs2code', 'ric_desc', 'riccati', 'rmdir', 'routh_t', 'rowcomp', 'rowcompr', 'rowinout', 'rowregul', 'rowshuff', 'rref', 'sample', 'samplef', 'samwr', 'savematfile', 'savewave', 'scanf', 'sci2exp', 'sciGUI_init', 'sci_sparse', 'scicos_getvalue', 'scicos_simulate', 'scicos_workspace_init', 'scisptdemo', 'scitest', 'sdiff', 'sec', 'secd', 'sech', 'selection_ga_elitist', 'selection_ga_random', 'sensi', 'setPreferencesValue', 'set_param', 'setdiff', 'sgrid', 'show_margins', 'show_pca', 'showprofile', 'signm', 'sinc', 'sincd', 'sind', 'sinh', 'sinhm', 'sinm', 'sm2des', 'sm2ss', 'smga', 'smooth', 'solve', 'sound', 'soundsec', 'sp2adj', 'spaninter', 'spanplus', 'spantwo', 'specfact', 'speye', 'sprand', 'spzeros', 'sqroot', 'sqrtm', 'squarewave', 'squeeze', 'srfaur', 'srkf', 'ss2des', 'ss2ss', 'ss2tf', 'sskf', 'ssprint', 'ssrand', 'st_deviation', 'st_i_generic', 'st_ility', 'stabil', 'statgain', 'stdev', 'stdevf', 
'steadycos', 'strange', 'strcmpi', 'struct', 'sub2ind', 'sva', 'svplot', 'sylm', 'sylv', 'sysconv', 'sysdiag', 'sysfact', 'syslin', 'syssize', 'system', 'systmat', 'tabul', 'tand', 'tanh', 'tanhm', 'tanm', 'tbx_build_blocks', 'tbx_build_cleaner', 'tbx_build_gateway', 'tbx_build_gateway_clean', 'tbx_build_gateway_loader', 'tbx_build_help', 'tbx_build_help_loader', 'tbx_build_loader', 'tbx_build_localization', 'tbx_build_macros', 'tbx_build_pal_loader', 'tbx_build_src', 'tbx_builder', 'tbx_builder_gateway', 'tbx_builder_gateway_lang', 'tbx_builder_help', 'tbx_builder_help_lang', 'tbx_builder_macros', 'tbx_builder_src', 'tbx_builder_src_lang', 'tbx_generate_pofile', 'temp_law_csa', 'temp_law_default', 'temp_law_fsa', 'temp_law_huang', 'temp_law_vfsa', 'test_clean', 'test_on_columns', 'test_run', 'test_run_level', 'testexamples', 'tf2des', 'tf2ss', 'thrownan', 'tic', 'time_id', 'toc', 'toeplitz', 'tokenpos', 'toolboxes', 'trace', 'trans', 'translatepaths', 'tree2code', 'trfmod', 'trianfml', 'trimmean', 'trisolve', 'trzeros', 'typeof', 'ui_observer', 'union', 'unique', 'unit_test_run', 'unix_g', 'unix_s', 'unix_w', 'unix_x', 'unobs', 'unpack', 'unwrap', 'variance', 'variancef', 'vec2list', 'vectorfind', 'ver', 'warnobsolete', 'wavread', 'wavwrite', 'wcenter', 'weekday', 'wfir', 'wfir_gui', 'whereami', 'who_user', 'whos', 'wiener', 'wigner', 'window', 'winlist', 'with_javasci', 'with_macros_source', 'with_modelica_compiler', 'with_tk', 'xcorr', 'xcosBlockEval', 'xcosBlockInterface', 'xcosCodeGeneration', 'xcosConfigureModelica', 'xcosPal', 'xcosPalAdd', 'xcosPalAddBlock', 'xcosPalExport', 'xcosPalGenerateAllIcons', 'xcosShowBlockWarning', 'xcosValidateBlockSet', 'xcosValidateCompareBlock', 'xcos_compile', 'xcos_debug_gui', 'xcos_run', 'xcos_simulate', 'xcov', 'xmltochm', 'xmltoformat', 'xmltohtml', 'xmltojar', 'xmltopdf', 'xmltops', 'xmltoweb', 'yulewalk', 'zeropen', 'zgrid', 'zpbutt', 'zpch1', 'zpch2', 'zpell', ) variables_kw = ( '$', '%F', '%T', '%e', '%eps', '%f', '%fftw', '%gui', '%i', '%inf', '%io', '%modalWarning', '%nan', '%pi', '%s', '%t', '%tk', '%toolboxes', '%toolboxes_dir', '%z', 'PWD', 'SCI', 'SCIHOME', 'TMPDIR', 'arnoldilib', 'assertlib', 'atomslib', 'cacsdlib', 'compatibility_functilib', 'corelib', 'data_structureslib', 'demo_toolslib', 'development_toolslib', 'differential_equationlib', 'dynamic_linklib', 'elementary_functionslib', 'enull', 'evoid', 'external_objectslib', 'fd', 'fileiolib', 'functionslib', 'genetic_algorithmslib', 'helptoolslib', 'home', 'integerlib', 'interpolationlib', 'iolib', 'jnull', 'jvoid', 'linear_algebralib', 'm2scilib', 'matiolib', 'modules_managerlib', 'neldermeadlib', 'optimbaselib', 'optimizationlib', 'optimsimplexlib', 'output_streamlib', 'overloadinglib', 'parameterslib', 'polynomialslib', 'preferenceslib', 'randliblib', 'scicos_autolib', 'scicos_utilslib', 'scinoteslib', 'signal_processinglib', 'simulated_annealinglib', 'soundlib', 'sparselib', 'special_functionslib', 'spreadsheetlib', 'statisticslib', 'stringlib', 'tclscilib', 'timelib', 'umfpacklib', 'xcoslib', ) if __name__ == '__main__': # pragma: no cover import subprocess from pygments.util import format_lines, duplicates_removed mapping = {'variables': 'builtin'} def extract_completion(var_type): s = subprocess.Popen(['scilab', '-nwni'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) output = s.communicate('''\ fd = mopen("/dev/stderr", "wt"); mputl(strcat(completion("", "%s"), "||"), fd); mclose(fd)\n''' % var_type) if '||' not in output[1]: raise 
Exception(output[0]) # Invalid DISPLAY causes this to be output: text = output[1].strip() if text.startswith('Error: unable to open display \n'): text = text[len('Error: unable to open display \n'):] return text.split('||') new_data = {} seen = set() # only keep first type for a given word for t in ('functions', 'commands', 'macros', 'variables'): new_data[t] = duplicates_removed(extract_completion(t), seen) seen.update(set(new_data[t])) with open(__file__) as f: content = f.read() header = content[:content.find('# Autogenerated')] footer = content[content.find("if __name__ == '__main__':"):] with open(__file__, 'w') as f: f.write(header) f.write('# Autogenerated\n\n') for k, v in sorted(new_data.items()): f.write(format_lines(k + '_kw', v) + '\n\n') f.write(footer) pygments-2.11.2/pygments/lexers/sgf.py0000644000175000017500000000400214165547207017633 0ustar carstencarsten""" pygments.lexers.sgf ~~~~~~~~~~~~~~~~~~~ Lexer for Smart Game Format (sgf) file format. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups from pygments.token import Name, Literal, String, Text, Punctuation, Whitespace __all__ = ["SmartGameFormatLexer"] class SmartGameFormatLexer(RegexLexer): """ Lexer for Smart Game Format (sgf) file format. The format is used to store game records of board games for two players (mainly Go game). For more information about the definition of the format, see: https://www.red-bean.com/sgf/ .. versionadded:: 2.4 """ name = 'SmartGameFormat' aliases = ['sgf'] filenames = ['*.sgf'] tokens = { 'root': [ (r'[():;]+', Punctuation), # tokens: (r'(A[BW]|AE|AN|AP|AR|AS|[BW]L|BM|[BW]R|[BW]S|[BW]T|CA|CH|CP|CR|' r'DD|DM|DO|DT|EL|EV|EX|FF|FG|G[BW]|GC|GM|GN|HA|HO|ID|IP|IT|IY|KM|' r'KO|LB|LN|LT|L|MA|MN|M|N|OB|OM|ON|OP|OT|OV|P[BW]|PC|PL|PM|RE|RG|' r'RO|RU|SO|SC|SE|SI|SL|SO|SQ|ST|SU|SZ|T[BW]|TC|TE|TM|TR|UC|US|VW|' r'V|[BW]|C)', Name.Builtin), # number: (r'(\[)([0-9.]+)(\])', bygroups(Punctuation, Literal.Number, Punctuation)), # date: (r'(\[)([0-9]{4}-[0-9]{2}-[0-9]{2})(\])', bygroups(Punctuation, Literal.Date, Punctuation)), # point: (r'(\[)([a-z]{2})(\])', bygroups(Punctuation, String, Punctuation)), # double points: (r'(\[)([a-z]{2})(:)([a-z]{2})(\])', bygroups(Punctuation, String, Punctuation, String, Punctuation)), (r'(\[)([\w\s#()+,\-.:?]+)(\])', bygroups(Punctuation, String, Punctuation)), (r'(\[)(\s.*)(\])', bygroups(Punctuation, Whitespace, Punctuation)), (r'\s+', Whitespace) ], } pygments-2.11.2/pygments/lexers/parsers.py0000644000175000017500000006243014165547207020544 0ustar carstencarsten""" pygments.lexers.parsers ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for parser generators. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
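A quick usage sketch (``highlight`` and ``TerminalFormatter`` are real
Pygments APIs; the one-line EBNF grammar is made up for illustration)::

    from pygments import highlight
    from pygments.formatters import TerminalFormatter
    from pygments.lexers.parsers import EbnfLexer

    code = 'digit = "0" | "1" | "2" ; (* trailing comment *)'
    print(highlight(code, EbnfLexer(), TerminalFormatter()))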
""" import re from pygments.lexer import RegexLexer, DelegatingLexer, \ include, bygroups, using from pygments.token import Punctuation, Other, Text, Comment, Operator, \ Keyword, Name, String, Number, Whitespace from pygments.lexers.jvm import JavaLexer from pygments.lexers.c_cpp import CLexer, CppLexer from pygments.lexers.objective import ObjectiveCLexer from pygments.lexers.d import DLexer from pygments.lexers.dotnet import CSharpLexer from pygments.lexers.ruby import RubyLexer from pygments.lexers.python import PythonLexer from pygments.lexers.perl import PerlLexer __all__ = ['RagelLexer', 'RagelEmbeddedLexer', 'RagelCLexer', 'RagelDLexer', 'RagelCppLexer', 'RagelObjectiveCLexer', 'RagelRubyLexer', 'RagelJavaLexer', 'AntlrLexer', 'AntlrPythonLexer', 'AntlrPerlLexer', 'AntlrRubyLexer', 'AntlrCppLexer', 'AntlrCSharpLexer', 'AntlrObjectiveCLexer', 'AntlrJavaLexer', 'AntlrActionScriptLexer', 'TreetopLexer', 'EbnfLexer'] class RagelLexer(RegexLexer): """ A pure `Ragel `_ lexer. Use this for fragments of Ragel. For ``.rl`` files, use RagelEmbeddedLexer instead (or one of the language-specific subclasses). .. versionadded:: 1.1 """ name = 'Ragel' aliases = ['ragel'] filenames = [] tokens = { 'whitespace': [ (r'\s+', Whitespace) ], 'comments': [ (r'\#.*$', Comment), ], 'keywords': [ (r'(access|action|alphtype)\b', Keyword), (r'(getkey|write|machine|include)\b', Keyword), (r'(any|ascii|extend|alpha|digit|alnum|lower|upper)\b', Keyword), (r'(xdigit|cntrl|graph|print|punct|space|zlen|empty)\b', Keyword) ], 'numbers': [ (r'0x[0-9A-Fa-f]+', Number.Hex), (r'[+-]?[0-9]+', Number.Integer), ], 'literals': [ (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'\[(\\\\|\\[^\\]|[^\\\]])*\]', String), # square bracket literals (r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/', String.Regex), # regular expressions ], 'identifiers': [ (r'[a-zA-Z_]\w*', Name.Variable), ], 'operators': [ (r',', Operator), # Join (r'\||&|--?', Operator), # Union, Intersection and Subtraction (r'\.|<:|:>>?', Operator), # Concatention (r':', Operator), # Label (r'->', Operator), # Epsilon Transition (r'(>|\$|%|<|@|<>)(/|eof\b)', Operator), # EOF Actions (r'(>|\$|%|<|@|<>)(!|err\b)', Operator), # Global Error Actions (r'(>|\$|%|<|@|<>)(\^|lerr\b)', Operator), # Local Error Actions (r'(>|\$|%|<|@|<>)(~|to\b)', Operator), # To-State Actions (r'(>|\$|%|<|@|<>)(\*|from\b)', Operator), # From-State Actions (r'>|@|\$|%', Operator), # Transition Actions and Priorities (r'\*|\?|\+|\{[0-9]*,[0-9]*\}', Operator), # Repetition (r'!|\^', Operator), # Negation (r'\(|\)', Operator), # Grouping ], 'root': [ include('literals'), include('whitespace'), include('comments'), include('keywords'), include('numbers'), include('identifiers'), include('operators'), (r'\{', Punctuation, 'host'), (r'=', Operator), (r';', Punctuation), ], 'host': [ (r'(' + r'|'.join(( # keep host code in largest possible chunks r'[^{}\'"/#]+', # exclude unsafe characters r'[^\\]\\[{}]', # allow escaped { or } # strings and comments may safely contain unsafe characters r'"(\\\\|\\[^\\]|[^"\\])*"', r"'(\\\\|\\[^\\]|[^'\\])*'", r'//.*$\n?', # single line comment r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment r'\#.*$\n?', # ruby comment # regular expression: There's no reason for it to start # with a * and this stops confusion with comments. 
r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/', # / is safe now that we've handled regex and javadoc comments r'/', )) + r')+', Other), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), ], } class RagelEmbeddedLexer(RegexLexer): """ A lexer for `Ragel`_ embedded in a host language file. This will only highlight Ragel statements. If you want host language highlighting then call the language-specific Ragel lexer. .. versionadded:: 1.1 """ name = 'Embedded Ragel' aliases = ['ragel-em'] filenames = ['*.rl'] tokens = { 'root': [ (r'(' + r'|'.join(( # keep host code in largest possible chunks r'[^%\'"/#]+', # exclude unsafe characters r'%(?=[^%]|$)', # a single % sign is okay, just not 2 of them # strings and comments may safely contain unsafe characters r'"(\\\\|\\[^\\]|[^"\\])*"', r"'(\\\\|\\[^\\]|[^'\\])*'", r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment r'//.*$\n?', # single line comment r'\#.*$\n?', # ruby/ragel comment r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/', # regular expression # / is safe now that we've handled regex and javadoc comments r'/', )) + r')+', Other), # Single Line FSM. # Please don't put a quoted newline in a single line FSM. # That's just mean. It will break this. (r'(%%)(?![{%])(.*)($|;)(\n?)', bygroups(Punctuation, using(RagelLexer), Punctuation, Text)), # Multi Line FSM. (r'(%%%%|%%)\{', Punctuation, 'multi-line-fsm'), ], 'multi-line-fsm': [ (r'(' + r'|'.join(( # keep ragel code in largest possible chunks. r'(' + r'|'.join(( r'[^}\'"\[/#]', # exclude unsafe characters r'\}(?=[^%]|$)', # } is okay as long as it's not followed by % r'\}%(?=[^%]|$)', # ...well, one %'s okay, just not two... r'[^\\]\\[{}]', # ...and } is okay if it's escaped # allow / if it's preceded with one of these symbols # (ragel EOF actions) r'(>|\$|%|<|@|<>)/', # specifically allow regex followed immediately by * # so it doesn't get mistaken for a comment r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/\*', # allow / as long as it's not followed by another / or by a * r'/(?=[^/*]|$)', # We want to match as many of these as we can in one block. # Not sure if we need the + sign here, # does it help performance? )) + r')+', # strings and comments may safely contain unsafe characters r'"(\\\\|\\[^\\]|[^"\\])*"', r"'(\\\\|\\[^\\]|[^'\\])*'", r"\[(\\\\|\\[^\\]|[^\]\\])*\]", # square bracket literal r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment r'//.*$\n?', # single line comment r'\#.*$\n?', # ruby/ragel comment )) + r')+', using(RagelLexer)), (r'\}%%', Punctuation, '#pop'), ] } def analyse_text(text): return '@LANG: indep' in text class RagelRubyLexer(DelegatingLexer): """ A lexer for `Ragel`_ in a Ruby host file. .. versionadded:: 1.1 """ name = 'Ragel in Ruby Host' aliases = ['ragel-ruby', 'ragel-rb'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(RubyLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: ruby' in text class RagelCLexer(DelegatingLexer): """ A lexer for `Ragel`_ in a C host file. .. versionadded:: 1.1 """ name = 'Ragel in C Host' aliases = ['ragel-c'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(CLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: c' in text class RagelDLexer(DelegatingLexer): """ A lexer for `Ragel`_ in a D host file. .. 
versionadded:: 1.1 """ name = 'Ragel in D Host' aliases = ['ragel-d'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(DLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: d' in text class RagelCppLexer(DelegatingLexer): """ A lexer for `Ragel`_ in a CPP host file. .. versionadded:: 1.1 """ name = 'Ragel in CPP Host' aliases = ['ragel-cpp'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(CppLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: c++' in text class RagelObjectiveCLexer(DelegatingLexer): """ A lexer for `Ragel`_ in an Objective C host file. .. versionadded:: 1.1 """ name = 'Ragel in Objective C Host' aliases = ['ragel-objc'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(ObjectiveCLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: objc' in text class RagelJavaLexer(DelegatingLexer): """ A lexer for `Ragel`_ in a Java host file. .. versionadded:: 1.1 """ name = 'Ragel in Java Host' aliases = ['ragel-java'] filenames = ['*.rl'] def __init__(self, **options): super().__init__(JavaLexer, RagelEmbeddedLexer, **options) def analyse_text(text): return '@LANG: java' in text class AntlrLexer(RegexLexer): """ Generic `ANTLR`_ Lexer. Should not be called directly, instead use DelegatingLexer for your target language. .. versionadded:: 1.1 .. _ANTLR: http://www.antlr.org/ """ name = 'ANTLR' aliases = ['antlr'] filenames = [] _id = r'[A-Za-z]\w*' _TOKEN_REF = r'[A-Z]\w*' _RULE_REF = r'[a-z]\w*' _STRING_LITERAL = r'\'(?:\\\\|\\\'|[^\']*)\'' _INT = r'[0-9]+' tokens = { 'whitespace': [ (r'\s+', Whitespace), ], 'comments': [ (r'//.*$', Comment), (r'/\*(.|\n)*?\*/', Comment), ], 'root': [ include('whitespace'), include('comments'), (r'(lexer|parser|tree)?(\s*)(grammar\b)(\s*)(' + _id + ')(;)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Class, Punctuation)), # optionsSpec (r'options\b', Keyword, 'options'), # tokensSpec (r'tokens\b', Keyword, 'tokens'), # attrScope (r'(scope)(\s*)(' + _id + r')(\s*)(\{)', bygroups(Keyword, Whitespace, Name.Variable, Whitespace, Punctuation), 'action'), # exception (r'(catch|finally)\b', Keyword, 'exception'), # action (r'(@' + _id + r')(\s*)(::)?(\s*)(' + _id + r')(\s*)(\{)', bygroups(Name.Label, Whitespace, Punctuation, Whitespace, Name.Label, Whitespace, Punctuation), 'action'), # rule (r'((?:protected|private|public|fragment)\b)?(\s*)(' + _id + ')(!)?', bygroups(Keyword, Whitespace, Name.Label, Punctuation), ('rule-alts', 'rule-prelims')), ], 'exception': [ (r'\n', Whitespace, '#pop'), (r'\s', Whitespace), include('comments'), (r'\[', Punctuation, 'nested-arg-action'), (r'\{', Punctuation, 'action'), ], 'rule-prelims': [ include('whitespace'), include('comments'), (r'returns\b', Keyword), (r'\[', Punctuation, 'nested-arg-action'), (r'\{', Punctuation, 'action'), # throwsSpec (r'(throws)(\s+)(' + _id + ')', bygroups(Keyword, Whitespace, Name.Label)), (r'(,)(\s*)(' + _id + ')', bygroups(Punctuation, Whitespace, Name.Label)), # Additional throws # optionsSpec (r'options\b', Keyword, 'options'), # ruleScopeSpec - scope followed by target language code or name of action # TODO finish implementing other possibilities for scope # L173 ANTLRv3.g from ANTLR book (r'(scope)(\s+)(\{)', bygroups(Keyword, Whitespace, Punctuation), 'action'), (r'(scope)(\s+)(' + _id + r')(\s*)(;)', bygroups(Keyword, Whitespace, Name.Label, Whitespace, Punctuation)), # ruleAction (r'(@' + _id + r')(\s*)(\{)', bygroups(Name.Label, Whitespace, 
Punctuation), 'action'), # finished prelims, go to rule alts! (r':', Punctuation, '#pop') ], 'rule-alts': [ include('whitespace'), include('comments'), # These might need to go in a separate 'block' state triggered by ( (r'options\b', Keyword, 'options'), (r':', Punctuation), # literals (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'<<([^>]|>[^>])>>', String), # identifiers # Tokens start with capital letter. (r'\$?[A-Z_]\w*', Name.Constant), # Rules start with small letter. (r'\$?[a-z_]\w*', Name.Variable), # operators (r'(\+|\||->|=>|=|\(|\)|\.\.|\.|\?|\*|\^|!|\#|~)', Operator), (r',', Punctuation), (r'\[', Punctuation, 'nested-arg-action'), (r'\{', Punctuation, 'action'), (r';', Punctuation, '#pop') ], 'tokens': [ include('whitespace'), include('comments'), (r'\{', Punctuation), (r'(' + _TOKEN_REF + r')(\s*)(=)?(\s*)(' + _STRING_LITERAL + r')?(\s*)(;)', bygroups(Name.Label, Whitespace, Punctuation, Whitespace, String, Whitespace, Punctuation)), (r'\}', Punctuation, '#pop'), ], 'options': [ include('whitespace'), include('comments'), (r'\{', Punctuation), (r'(' + _id + r')(\s*)(=)(\s*)(' + '|'.join((_id, _STRING_LITERAL, _INT, r'\*')) + r')(\s*)(;)', bygroups(Name.Variable, Whitespace, Punctuation, Whitespace, Text, Whitespace, Punctuation)), (r'\}', Punctuation, '#pop'), ], 'action': [ (r'(' + r'|'.join(( # keep host code in largest possible chunks r'[^${}\'"/\\]+', # exclude unsafe characters # strings and comments may safely contain unsafe characters r'"(\\\\|\\[^\\]|[^"\\])*"', r"'(\\\\|\\[^\\]|[^'\\])*'", r'//.*$\n?', # single line comment r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment # regular expression: There's no reason for it to start # with a * and this stops confusion with comments. r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/', # backslashes are okay, as long as we are not backslashing a % r'\\(?!%)', # Now that we've handled regex and javadoc comments # it's safe to let / through. r'/', )) + r')+', Other), (r'(\\)(%)', bygroups(Punctuation, Other)), (r'(\$[a-zA-Z]+)(\.?)(text|value)?', bygroups(Name.Variable, Punctuation, Name.Property)), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), ], 'nested-arg-action': [ (r'(' + r'|'.join(( # keep host code in largest possible chunks. r'[^$\[\]\'"/]+', # exclude unsafe characters # strings and comments may safely contain unsafe characters r'"(\\\\|\\[^\\]|[^"\\])*"', r"'(\\\\|\\[^\\]|[^'\\])*'", r'//.*$\n?', # single line comment r'/\*(.|\n)*?\*/', # multi-line javadoc-style comment # regular expression: There's no reason for it to start # with a * and this stops confusion with comments. r'/(?!\*)(\\\\|\\[^\\]|[^/\\])*/', # Now that we've handled regex and javadoc comments # it's safe to let / through. r'/', )) + r')+', Other), (r'\[', Punctuation, '#push'), (r'\]', Punctuation, '#pop'), (r'(\$[a-zA-Z]+)(\.?)(text|value)?', bygroups(Name.Variable, Punctuation, Name.Property)), (r'(\\\\|\\\]|\\\[|[^\[\]])+', Other), ] } def analyse_text(text): return re.search(r'^\s*grammar\s+[a-zA-Z0-9]+\s*;', text, re.M) # http://www.antlr.org/wiki/display/ANTLR3/Code+Generation+Targets class AntlrCppLexer(DelegatingLexer): """ `ANTLR`_ with CPP Target .. 
versionadded:: 1.1 """ name = 'ANTLR With CPP Target' aliases = ['antlr-cpp'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(CppLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*C\s*;', text, re.M) class AntlrObjectiveCLexer(DelegatingLexer): """ `ANTLR`_ with Objective-C Target .. versionadded:: 1.1 """ name = 'ANTLR With ObjectiveC Target' aliases = ['antlr-objc'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(ObjectiveCLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*ObjC\s*;', text) class AntlrCSharpLexer(DelegatingLexer): """ `ANTLR`_ with C# Target .. versionadded:: 1.1 """ name = 'ANTLR With C# Target' aliases = ['antlr-csharp', 'antlr-c#'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(CSharpLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*CSharp2\s*;', text, re.M) class AntlrPythonLexer(DelegatingLexer): """ `ANTLR`_ with Python Target .. versionadded:: 1.1 """ name = 'ANTLR With Python Target' aliases = ['antlr-python'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(PythonLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*Python\s*;', text, re.M) class AntlrJavaLexer(DelegatingLexer): """ `ANTLR`_ with Java Target .. versionadded:: 1. """ name = 'ANTLR With Java Target' aliases = ['antlr-java'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(JavaLexer, AntlrLexer, **options) def analyse_text(text): # Antlr language is Java by default return AntlrLexer.analyse_text(text) and 0.9 class AntlrRubyLexer(DelegatingLexer): """ `ANTLR`_ with Ruby Target .. versionadded:: 1.1 """ name = 'ANTLR With Ruby Target' aliases = ['antlr-ruby', 'antlr-rb'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(RubyLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*Ruby\s*;', text, re.M) class AntlrPerlLexer(DelegatingLexer): """ `ANTLR`_ with Perl Target .. versionadded:: 1.1 """ name = 'ANTLR With Perl Target' aliases = ['antlr-perl'] filenames = ['*.G', '*.g'] def __init__(self, **options): super().__init__(PerlLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*Perl5\s*;', text, re.M) class AntlrActionScriptLexer(DelegatingLexer): """ `ANTLR`_ with ActionScript Target .. versionadded:: 1.1 """ name = 'ANTLR With ActionScript Target' aliases = ['antlr-actionscript', 'antlr-as'] filenames = ['*.G', '*.g'] def __init__(self, **options): from pygments.lexers.actionscript import ActionScriptLexer super().__init__(ActionScriptLexer, AntlrLexer, **options) def analyse_text(text): return AntlrLexer.analyse_text(text) and \ re.search(r'^\s*language\s*=\s*ActionScript\s*;', text, re.M) class TreetopBaseLexer(RegexLexer): """ A base lexer for `Treetop `_ grammars. Not for direct use; use TreetopLexer instead. .. 
versionadded:: 1.6 """ tokens = { 'root': [ include('space'), (r'require[ \t]+[^\n\r]+[\n\r]', Other), (r'module\b', Keyword.Namespace, 'module'), (r'grammar\b', Keyword, 'grammar'), ], 'module': [ include('space'), include('end'), (r'module\b', Keyword, '#push'), (r'grammar\b', Keyword, 'grammar'), (r'[A-Z]\w*(?:::[A-Z]\w*)*', Name.Namespace), ], 'grammar': [ include('space'), include('end'), (r'rule\b', Keyword, 'rule'), (r'include\b', Keyword, 'include'), (r'[A-Z]\w*', Name), ], 'include': [ include('space'), (r'[A-Z]\w*(?:::[A-Z]\w*)*', Name.Class, '#pop'), ], 'rule': [ include('space'), include('end'), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'([A-Za-z_]\w*)(:)', bygroups(Name.Label, Punctuation)), (r'[A-Za-z_]\w*', Name), (r'[()]', Punctuation), (r'[?+*/&!~]', Operator), (r'\[(?:\\.|\[:\^?[a-z]+:\]|[^\\\]])+\]', String.Regex), (r'([0-9]*)(\.\.)([0-9]*)', bygroups(Number.Integer, Operator, Number.Integer)), (r'(<)([^>]+)(>)', bygroups(Punctuation, Name.Class, Punctuation)), (r'\{', Punctuation, 'inline_module'), (r'\.', String.Regex), ], 'inline_module': [ (r'\{', Other, 'ruby'), (r'\}', Punctuation, '#pop'), (r'[^{}]+', Other), ], 'ruby': [ (r'\{', Other, '#push'), (r'\}', Other, '#pop'), (r'[^{}]+', Other), ], 'space': [ (r'[ \t\n\r]+', Whitespace), (r'#[^\n]*', Comment.Single), ], 'end': [ (r'end\b', Keyword, '#pop'), ], } class TreetopLexer(DelegatingLexer): """ A lexer for `Treetop `_ grammars. .. versionadded:: 1.6 """ name = 'Treetop' aliases = ['treetop'] filenames = ['*.treetop', '*.tt'] def __init__(self, **options): super().__init__(RubyLexer, TreetopBaseLexer, **options) class EbnfLexer(RegexLexer): """ Lexer for `ISO/IEC 14977 EBNF `_ grammars. .. versionadded:: 2.0 """ name = 'EBNF' aliases = ['ebnf'] filenames = ['*.ebnf'] mimetypes = ['text/x-ebnf'] tokens = { 'root': [ include('whitespace'), include('comment_start'), include('identifier'), (r'=', Operator, 'production'), ], 'production': [ include('whitespace'), include('comment_start'), include('identifier'), (r'"[^"]*"', String.Double), (r"'[^']*'", String.Single), (r'(\?[^?]*\?)', Name.Entity), (r'[\[\]{}(),|]', Punctuation), (r'-', Operator), (r';', Punctuation, '#pop'), (r'\.', Punctuation, '#pop'), ], 'whitespace': [ (r'\s+', Text), ], 'comment_start': [ (r'\(\*', Comment.Multiline, 'comment'), ], 'comment': [ (r'[^*)]', Comment.Multiline), include('comment_start'), (r'\*\)', Comment.Multiline, '#pop'), (r'[*)]', Comment.Multiline), ], 'identifier': [ (r'([a-zA-Z][\w \-]*)', Keyword), ], } pygments-2.11.2/pygments/lexers/yang.py0000644000175000017500000001065314165547207020023 0ustar carstencarsten""" pygments.lexers.yang ~~~~~~~~~~~~~~~~~~~~ Lexer for the YANG 1.1 modeling language. See :rfc:`7950`. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import (RegexLexer, bygroups, words) from pygments.token import (Text, Token, Name, String, Comment, Number) __all__ = ['YangLexer'] class YangLexer(RegexLexer): """ Lexer for `YANG `_, based on RFC7950 .. 
versionadded:: 2.7 """ name = 'YANG' aliases = ['yang'] filenames = ['*.yang'] mimetypes = ['application/yang'] #Keywords from RFC7950 ; oriented at BNF style TOP_STMTS_KEYWORDS = ("module", "submodule") MODULE_HEADER_STMT_KEYWORDS = ("belongs-to", "namespace", "prefix", "yang-version") META_STMT_KEYWORDS = ("contact", "description", "organization", "reference", "revision") LINKAGE_STMTS_KEYWORDS = ("import", "include", "revision-date") BODY_STMT_KEYWORDS = ("action", "argument", "augment", "deviation", "extension", "feature", "grouping", "identity", "if-feature", "input", "notification", "output", "rpc", "typedef") DATA_DEF_STMT_KEYWORDS = ("anydata", "anyxml", "case", "choice", "config", "container", "deviate", "leaf", "leaf-list", "list", "must", "presence", "refine", "uses", "when") TYPE_STMT_KEYWORDS = ("base", "bit", "default", "enum", "error-app-tag", "error-message", "fraction-digits", "length", "max-elements", "min-elements", "modifier", "ordered-by", "path", "pattern", "position", "range", "require-instance", "status", "type", "units", "value", "yin-element") LIST_STMT_KEYWORDS = ("key", "mandatory", "unique") #RFC7950 other keywords CONSTANTS_KEYWORDS = ("add", "current", "delete", "deprecated", "false", "invert-match", "max", "min", "not-supported", "obsolete", "replace", "true", "unbounded", "user") #RFC7950 Built-In Types TYPES = ("binary", "bits", "boolean", "decimal64", "empty", "enumeration", "identityref", "instance-identifier", "int16", "int32", "int64", "int8", "leafref", "string", "uint16", "uint32", "uint64", "uint8", "union") suffix_re_pattern = r'(?=[^\w\-:])' tokens = { 'comments': [ (r'[^*/]', Comment), (r'/\*', Comment, '#push'), (r'\*/', Comment, '#pop'), (r'[*/]', Comment), ], "root": [ (r'\s+', Text.Whitespace), (r'[{};]+', Token.Punctuation), (r'(?`_. .. 
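
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): the YangLexer splits
# input on the keyword classes defined above. A quick way to inspect
# the token stream for a made-up module:

from pygments.lexers.yang import YangLexer

YANG_SOURCE = """\
module example {
  namespace "urn:example";
  prefix ex;
  leaf name { type string; }
}
"""

for token_type, value in YangLexer().get_tokens(YANG_SOURCE):
    if value.strip():
        print(token_type, repr(value))
# ----------------------------------------------------------------------
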
versionadded:: 2.0 """ name = 'Nix' aliases = ['nixos', 'nix'] filenames = ['*.nix'] mimetypes = ['text/x-nix'] flags = re.MULTILINE | re.UNICODE keywords = ['rec', 'with', 'let', 'in', 'inherit', 'assert', 'if', 'else', 'then', '...'] builtins = ['import', 'abort', 'baseNameOf', 'dirOf', 'isNull', 'builtins', 'map', 'removeAttrs', 'throw', 'toString', 'derivation'] operators = ['++', '+', '?', '.', '!', '//', '==', '!=', '&&', '||', '->', '='] punctuations = ["(", ")", "[", "]", ";", "{", "}", ":", ",", "@"] tokens = { 'root': [ # comments starting with # (r'#.*$', Comment.Single), # multiline comments (r'/\*', Comment.Multiline, 'comment'), # whitespace (r'\s+', Text), # keywords ('(%s)' % '|'.join(re.escape(entry) + '\\b' for entry in keywords), Keyword), # highlight the builtins ('(%s)' % '|'.join(re.escape(entry) + '\\b' for entry in builtins), Name.Builtin), (r'\b(true|false|null)\b', Name.Constant), # operators ('(%s)' % '|'.join(re.escape(entry) for entry in operators), Operator), # word operators (r'\b(or|and)\b', Operator.Word), # punctuations ('(%s)' % '|'.join(re.escape(entry) for entry in punctuations), Punctuation), # integers (r'[0-9]+', Number.Integer), # strings (r'"', String.Double, 'doublequote'), (r"''", String.Single, 'singlequote'), # paths (r'[\w.+-]*(\/[\w.+-]+)+', Literal), (r'\<[\w.+-]+(\/[\w.+-]+)*\>', Literal), # urls (r'[a-zA-Z][a-zA-Z0-9\+\-\.]*\:[\w%/?:@&=+$,\\.!~*\'-]+', Literal), # names of variables (r'[\w-]+\s*=', String.Symbol), (r'[a-zA-Z_][\w\'-]*', Text), ], 'comment': [ (r'[^/*]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'singlequote': [ (r"'''", String.Escape), (r"''\$\{", String.Escape), (r"''\n", String.Escape), (r"''\r", String.Escape), (r"''\t", String.Escape), (r"''", String.Single, '#pop'), (r'\$\{', String.Interpol, 'antiquote'), (r"[^']", String.Single), ], 'doublequote': [ (r'\\', String.Escape), (r'\\"', String.Escape), (r'\\$\{', String.Escape), (r'"', String.Double, '#pop'), (r'\$\{', String.Interpol, 'antiquote'), (r'[^"]', String.Double), ], 'antiquote': [ (r"\}", String.Interpol, '#pop'), # TODO: we should probably escape also here ''${ \${ (r"\$\{", String.Interpol, '#push'), include('root'), ], } def analyse_text(text): rv = 0.0 # TODO: let/in if re.search(r'import.+?<[^>]+>', text): rv += 0.4 if re.search(r'mkDerivation\s+(\(|\{|rec)', text): rv += 0.4 if re.search(r'=\s+mkIf\s+', text): rv += 0.4 if re.search(r'\{[a-zA-Z,\s]+\}:', text): rv += 0.1 return rv pygments-2.11.2/pygments/lexers/pascal.py0000644000175000017500000007754014165547207020340 0ustar carstencarsten""" pygments.lexers.pascal ~~~~~~~~~~~~~~~~~~~~~~ Lexers for Pascal family languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import Lexer, RegexLexer, include, bygroups, words, \ using, this, default from pygments.util import get_bool_opt, get_list_opt from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Error from pygments.scanner import Scanner # compatibility import from pygments.lexers.modula2 import Modula2Lexer __all__ = ['DelphiLexer', 'AdaLexer'] class DelphiLexer(Lexer): """ For `Delphi `_ (Borland Object Pascal), Turbo Pascal and Free Pascal source code. Additional options accepted: `turbopascal` Highlight Turbo Pascal specific keywords (default: ``True``). 
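
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): NixLexer.analyse_text
# scores typical Nix idioms such as ``mkDerivation`` and ``{ args }:``
# patterns, so guess_lexer can pick it up without a filename. The
# expression below is a made-up example.

from pygments.lexers import guess_lexer

NIX_SOURCE = """\
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  name = "hello-2.10";
  src = fetchurl { url = "mirror://gnu/hello/${name}.tar.gz"; };
}
"""

print(guess_lexer(NIX_SOURCE).name)  # expected to favour 'Nix' (score 0.5 here)
# ----------------------------------------------------------------------
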
`delphi` Highlight Borland Delphi specific keywords (default: ``True``). `freepascal` Highlight Free Pascal specific keywords (default: ``True``). `units` A list of units that should be considered builtin, supported are ``System``, ``SysUtils``, ``Classes`` and ``Math``. Default is to consider all of them builtin. """ name = 'Delphi' aliases = ['delphi', 'pas', 'pascal', 'objectpascal'] filenames = ['*.pas', '*.dpr'] mimetypes = ['text/x-pascal'] TURBO_PASCAL_KEYWORDS = ( 'absolute', 'and', 'array', 'asm', 'begin', 'break', 'case', 'const', 'constructor', 'continue', 'destructor', 'div', 'do', 'downto', 'else', 'end', 'file', 'for', 'function', 'goto', 'if', 'implementation', 'in', 'inherited', 'inline', 'interface', 'label', 'mod', 'nil', 'not', 'object', 'of', 'on', 'operator', 'or', 'packed', 'procedure', 'program', 'record', 'reintroduce', 'repeat', 'self', 'set', 'shl', 'shr', 'string', 'then', 'to', 'type', 'unit', 'until', 'uses', 'var', 'while', 'with', 'xor' ) DELPHI_KEYWORDS = ( 'as', 'class', 'except', 'exports', 'finalization', 'finally', 'initialization', 'is', 'library', 'on', 'property', 'raise', 'threadvar', 'try' ) FREE_PASCAL_KEYWORDS = ( 'dispose', 'exit', 'false', 'new', 'true' ) BLOCK_KEYWORDS = { 'begin', 'class', 'const', 'constructor', 'destructor', 'end', 'finalization', 'function', 'implementation', 'initialization', 'label', 'library', 'operator', 'procedure', 'program', 'property', 'record', 'threadvar', 'type', 'unit', 'uses', 'var' } FUNCTION_MODIFIERS = { 'alias', 'cdecl', 'export', 'inline', 'interrupt', 'nostackframe', 'pascal', 'register', 'safecall', 'softfloat', 'stdcall', 'varargs', 'name', 'dynamic', 'near', 'virtual', 'external', 'override', 'assembler' } # XXX: those aren't global. but currently we know no way for defining # them just for the type context. 
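
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): the ``turbopascal``,
# ``delphi``, ``freepascal`` and ``units`` options documented above are
# plain constructor keyword arguments. For instance, to treat only the
# ``System`` unit's routines as builtins:

from pygments.lexers.pascal import DelphiLexer

lexer = DelphiLexer(units=['System'], freepascal=False)
print('writeln' in lexer.builtins)   # True: WriteLn lives in System
print('format' in lexer.builtins)    # False: Format lives in SysUtils
# ----------------------------------------------------------------------
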
DIRECTIVES = { 'absolute', 'abstract', 'assembler', 'cppdecl', 'default', 'far', 'far16', 'forward', 'index', 'oldfpccall', 'private', 'protected', 'published', 'public' } BUILTIN_TYPES = { 'ansichar', 'ansistring', 'bool', 'boolean', 'byte', 'bytebool', 'cardinal', 'char', 'comp', 'currency', 'double', 'dword', 'extended', 'int64', 'integer', 'iunknown', 'longbool', 'longint', 'longword', 'pansichar', 'pansistring', 'pbool', 'pboolean', 'pbyte', 'pbytearray', 'pcardinal', 'pchar', 'pcomp', 'pcurrency', 'pdate', 'pdatetime', 'pdouble', 'pdword', 'pextended', 'phandle', 'pint64', 'pinteger', 'plongint', 'plongword', 'pointer', 'ppointer', 'pshortint', 'pshortstring', 'psingle', 'psmallint', 'pstring', 'pvariant', 'pwidechar', 'pwidestring', 'pword', 'pwordarray', 'pwordbool', 'real', 'real48', 'shortint', 'shortstring', 'single', 'smallint', 'string', 'tclass', 'tdate', 'tdatetime', 'textfile', 'thandle', 'tobject', 'ttime', 'variant', 'widechar', 'widestring', 'word', 'wordbool' } BUILTIN_UNITS = { 'System': ( 'abs', 'acquireexceptionobject', 'addr', 'ansitoutf8', 'append', 'arctan', 'assert', 'assigned', 'assignfile', 'beginthread', 'blockread', 'blockwrite', 'break', 'chdir', 'chr', 'close', 'closefile', 'comptocurrency', 'comptodouble', 'concat', 'continue', 'copy', 'cos', 'dec', 'delete', 'dispose', 'doubletocomp', 'endthread', 'enummodules', 'enumresourcemodules', 'eof', 'eoln', 'erase', 'exceptaddr', 'exceptobject', 'exclude', 'exit', 'exp', 'filepos', 'filesize', 'fillchar', 'finalize', 'findclasshinstance', 'findhinstance', 'findresourcehinstance', 'flush', 'frac', 'freemem', 'get8087cw', 'getdir', 'getlasterror', 'getmem', 'getmemorymanager', 'getmodulefilename', 'getvariantmanager', 'halt', 'hi', 'high', 'inc', 'include', 'initialize', 'insert', 'int', 'ioresult', 'ismemorymanagerset', 'isvariantmanagerset', 'length', 'ln', 'lo', 'low', 'mkdir', 'move', 'new', 'odd', 'olestrtostring', 'olestrtostrvar', 'ord', 'paramcount', 'paramstr', 'pi', 'pos', 'pred', 'ptr', 'pucs4chars', 'random', 'randomize', 'read', 'readln', 'reallocmem', 'releaseexceptionobject', 'rename', 'reset', 'rewrite', 'rmdir', 'round', 'runerror', 'seek', 'seekeof', 'seekeoln', 'set8087cw', 'setlength', 'setlinebreakstyle', 'setmemorymanager', 'setstring', 'settextbuf', 'setvariantmanager', 'sin', 'sizeof', 'slice', 'sqr', 'sqrt', 'str', 'stringofchar', 'stringtoolestr', 'stringtowidechar', 'succ', 'swap', 'trunc', 'truncate', 'typeinfo', 'ucs4stringtowidestring', 'unicodetoutf8', 'uniquestring', 'upcase', 'utf8decode', 'utf8encode', 'utf8toansi', 'utf8tounicode', 'val', 'vararrayredim', 'varclear', 'widecharlentostring', 'widecharlentostrvar', 'widechartostring', 'widechartostrvar', 'widestringtoucs4string', 'write', 'writeln' ), 'SysUtils': ( 'abort', 'addexitproc', 'addterminateproc', 'adjustlinebreaks', 'allocmem', 'ansicomparefilename', 'ansicomparestr', 'ansicomparetext', 'ansidequotedstr', 'ansiextractquotedstr', 'ansilastchar', 'ansilowercase', 'ansilowercasefilename', 'ansipos', 'ansiquotedstr', 'ansisamestr', 'ansisametext', 'ansistrcomp', 'ansistricomp', 'ansistrlastchar', 'ansistrlcomp', 'ansistrlicomp', 'ansistrlower', 'ansistrpos', 'ansistrrscan', 'ansistrscan', 'ansistrupper', 'ansiuppercase', 'ansiuppercasefilename', 'appendstr', 'assignstr', 'beep', 'booltostr', 'bytetocharindex', 'bytetocharlen', 'bytetype', 'callterminateprocs', 'changefileext', 'charlength', 'chartobyteindex', 'chartobytelen', 'comparemem', 'comparestr', 'comparetext', 'createdir', 'createguid', 'currentyear', 'currtostr', 
'currtostrf', 'date', 'datetimetofiledate', 'datetimetostr', 'datetimetostring', 'datetimetosystemtime', 'datetimetotimestamp', 'datetostr', 'dayofweek', 'decodedate', 'decodedatefully', 'decodetime', 'deletefile', 'directoryexists', 'diskfree', 'disksize', 'disposestr', 'encodedate', 'encodetime', 'exceptionerrormessage', 'excludetrailingbackslash', 'excludetrailingpathdelimiter', 'expandfilename', 'expandfilenamecase', 'expanduncfilename', 'extractfiledir', 'extractfiledrive', 'extractfileext', 'extractfilename', 'extractfilepath', 'extractrelativepath', 'extractshortpathname', 'fileage', 'fileclose', 'filecreate', 'filedatetodatetime', 'fileexists', 'filegetattr', 'filegetdate', 'fileisreadonly', 'fileopen', 'fileread', 'filesearch', 'fileseek', 'filesetattr', 'filesetdate', 'filesetreadonly', 'filewrite', 'finalizepackage', 'findclose', 'findcmdlineswitch', 'findfirst', 'findnext', 'floattocurr', 'floattodatetime', 'floattodecimal', 'floattostr', 'floattostrf', 'floattotext', 'floattotextfmt', 'fmtloadstr', 'fmtstr', 'forcedirectories', 'format', 'formatbuf', 'formatcurr', 'formatdatetime', 'formatfloat', 'freeandnil', 'getcurrentdir', 'getenvironmentvariable', 'getfileversion', 'getformatsettings', 'getlocaleformatsettings', 'getmodulename', 'getpackagedescription', 'getpackageinfo', 'gettime', 'guidtostring', 'incamonth', 'includetrailingbackslash', 'includetrailingpathdelimiter', 'incmonth', 'initializepackage', 'interlockeddecrement', 'interlockedexchange', 'interlockedexchangeadd', 'interlockedincrement', 'inttohex', 'inttostr', 'isdelimiter', 'isequalguid', 'isleapyear', 'ispathdelimiter', 'isvalidident', 'languages', 'lastdelimiter', 'loadpackage', 'loadstr', 'lowercase', 'msecstotimestamp', 'newstr', 'nextcharindex', 'now', 'outofmemoryerror', 'quotedstr', 'raiselastoserror', 'raiselastwin32error', 'removedir', 'renamefile', 'replacedate', 'replacetime', 'safeloadlibrary', 'samefilename', 'sametext', 'setcurrentdir', 'showexception', 'sleep', 'stralloc', 'strbufsize', 'strbytetype', 'strcat', 'strcharlength', 'strcomp', 'strcopy', 'strdispose', 'strecopy', 'strend', 'strfmt', 'stricomp', 'stringreplace', 'stringtoguid', 'strlcat', 'strlcomp', 'strlcopy', 'strlen', 'strlfmt', 'strlicomp', 'strlower', 'strmove', 'strnew', 'strnextchar', 'strpas', 'strpcopy', 'strplcopy', 'strpos', 'strrscan', 'strscan', 'strtobool', 'strtobooldef', 'strtocurr', 'strtocurrdef', 'strtodate', 'strtodatedef', 'strtodatetime', 'strtodatetimedef', 'strtofloat', 'strtofloatdef', 'strtoint', 'strtoint64', 'strtoint64def', 'strtointdef', 'strtotime', 'strtotimedef', 'strupper', 'supports', 'syserrormessage', 'systemtimetodatetime', 'texttofloat', 'time', 'timestamptodatetime', 'timestamptomsecs', 'timetostr', 'trim', 'trimleft', 'trimright', 'tryencodedate', 'tryencodetime', 'tryfloattocurr', 'tryfloattodatetime', 'trystrtobool', 'trystrtocurr', 'trystrtodate', 'trystrtodatetime', 'trystrtofloat', 'trystrtoint', 'trystrtoint64', 'trystrtotime', 'unloadpackage', 'uppercase', 'widecomparestr', 'widecomparetext', 'widefmtstr', 'wideformat', 'wideformatbuf', 'widelowercase', 'widesamestr', 'widesametext', 'wideuppercase', 'win32check', 'wraptext' ), 'Classes': ( 'activateclassgroup', 'allocatehwnd', 'bintohex', 'checksynchronize', 'collectionsequal', 'countgenerations', 'deallocatehwnd', 'equalrect', 'extractstrings', 'findclass', 'findglobalcomponent', 'getclass', 'groupdescendantswith', 'hextobin', 'identtoint', 'initinheritedcomponent', 'inttoident', 'invalidpoint', 'isuniqueglobalcomponentname', 
'linestart', 'objectbinarytotext', 'objectresourcetotext', 'objecttexttobinary', 'objecttexttoresource', 'pointsequal', 'readcomponentres', 'readcomponentresex', 'readcomponentresfile', 'rect', 'registerclass', 'registerclassalias', 'registerclasses', 'registercomponents', 'registerintegerconsts', 'registernoicon', 'registernonactivex', 'smallpoint', 'startclassgroup', 'teststreamformat', 'unregisterclass', 'unregisterclasses', 'unregisterintegerconsts', 'unregistermoduleclasses', 'writecomponentresfile' ), 'Math': ( 'arccos', 'arccosh', 'arccot', 'arccoth', 'arccsc', 'arccsch', 'arcsec', 'arcsech', 'arcsin', 'arcsinh', 'arctan2', 'arctanh', 'ceil', 'comparevalue', 'cosecant', 'cosh', 'cot', 'cotan', 'coth', 'csc', 'csch', 'cycletodeg', 'cycletograd', 'cycletorad', 'degtocycle', 'degtograd', 'degtorad', 'divmod', 'doubledecliningbalance', 'ensurerange', 'floor', 'frexp', 'futurevalue', 'getexceptionmask', 'getprecisionmode', 'getroundmode', 'gradtocycle', 'gradtodeg', 'gradtorad', 'hypot', 'inrange', 'interestpayment', 'interestrate', 'internalrateofreturn', 'intpower', 'isinfinite', 'isnan', 'iszero', 'ldexp', 'lnxp1', 'log10', 'log2', 'logn', 'max', 'maxintvalue', 'maxvalue', 'mean', 'meanandstddev', 'min', 'minintvalue', 'minvalue', 'momentskewkurtosis', 'netpresentvalue', 'norm', 'numberofperiods', 'payment', 'periodpayment', 'poly', 'popnstddev', 'popnvariance', 'power', 'presentvalue', 'radtocycle', 'radtodeg', 'radtograd', 'randg', 'randomrange', 'roundto', 'samevalue', 'sec', 'secant', 'sech', 'setexceptionmask', 'setprecisionmode', 'setroundmode', 'sign', 'simpleroundto', 'sincos', 'sinh', 'slndepreciation', 'stddev', 'sum', 'sumint', 'sumofsquares', 'sumsandsquares', 'syddepreciation', 'tan', 'tanh', 'totalvariance', 'variance' ) } ASM_REGISTERS = { 'ah', 'al', 'ax', 'bh', 'bl', 'bp', 'bx', 'ch', 'cl', 'cr0', 'cr1', 'cr2', 'cr3', 'cr4', 'cs', 'cx', 'dh', 'di', 'dl', 'dr0', 'dr1', 'dr2', 'dr3', 'dr4', 'dr5', 'dr6', 'dr7', 'ds', 'dx', 'eax', 'ebp', 'ebx', 'ecx', 'edi', 'edx', 'es', 'esi', 'esp', 'fs', 'gs', 'mm0', 'mm1', 'mm2', 'mm3', 'mm4', 'mm5', 'mm6', 'mm7', 'si', 'sp', 'ss', 'st0', 'st1', 'st2', 'st3', 'st4', 'st5', 'st6', 'st7', 'xmm0', 'xmm1', 'xmm2', 'xmm3', 'xmm4', 'xmm5', 'xmm6', 'xmm7' } ASM_INSTRUCTIONS = { 'aaa', 'aad', 'aam', 'aas', 'adc', 'add', 'and', 'arpl', 'bound', 'bsf', 'bsr', 'bswap', 'bt', 'btc', 'btr', 'bts', 'call', 'cbw', 'cdq', 'clc', 'cld', 'cli', 'clts', 'cmc', 'cmova', 'cmovae', 'cmovb', 'cmovbe', 'cmovc', 'cmovcxz', 'cmove', 'cmovg', 'cmovge', 'cmovl', 'cmovle', 'cmovna', 'cmovnae', 'cmovnb', 'cmovnbe', 'cmovnc', 'cmovne', 'cmovng', 'cmovnge', 'cmovnl', 'cmovnle', 'cmovno', 'cmovnp', 'cmovns', 'cmovnz', 'cmovo', 'cmovp', 'cmovpe', 'cmovpo', 'cmovs', 'cmovz', 'cmp', 'cmpsb', 'cmpsd', 'cmpsw', 'cmpxchg', 'cmpxchg486', 'cmpxchg8b', 'cpuid', 'cwd', 'cwde', 'daa', 'das', 'dec', 'div', 'emms', 'enter', 'hlt', 'ibts', 'icebp', 'idiv', 'imul', 'in', 'inc', 'insb', 'insd', 'insw', 'int', 'int01', 'int03', 'int1', 'int3', 'into', 'invd', 'invlpg', 'iret', 'iretd', 'iretw', 'ja', 'jae', 'jb', 'jbe', 'jc', 'jcxz', 'jcxz', 'je', 'jecxz', 'jg', 'jge', 'jl', 'jle', 'jmp', 'jna', 'jnae', 'jnb', 'jnbe', 'jnc', 'jne', 'jng', 'jnge', 'jnl', 'jnle', 'jno', 'jnp', 'jns', 'jnz', 'jo', 'jp', 'jpe', 'jpo', 'js', 'jz', 'lahf', 'lar', 'lcall', 'lds', 'lea', 'leave', 'les', 'lfs', 'lgdt', 'lgs', 'lidt', 'ljmp', 'lldt', 'lmsw', 'loadall', 'loadall286', 'lock', 'lodsb', 'lodsd', 'lodsw', 'loop', 'loope', 'loopne', 'loopnz', 'loopz', 'lsl', 'lss', 'ltr', 'mov', 'movd', 'movq', 
'movsb', 'movsd', 'movsw', 'movsx', 'movzx', 'mul', 'neg', 'nop', 'not', 'or', 'out', 'outsb', 'outsd', 'outsw', 'pop', 'popa', 'popad', 'popaw', 'popf', 'popfd', 'popfw', 'push', 'pusha', 'pushad', 'pushaw', 'pushf', 'pushfd', 'pushfw', 'rcl', 'rcr', 'rdmsr', 'rdpmc', 'rdshr', 'rdtsc', 'rep', 'repe', 'repne', 'repnz', 'repz', 'ret', 'retf', 'retn', 'rol', 'ror', 'rsdc', 'rsldt', 'rsm', 'sahf', 'sal', 'salc', 'sar', 'sbb', 'scasb', 'scasd', 'scasw', 'seta', 'setae', 'setb', 'setbe', 'setc', 'setcxz', 'sete', 'setg', 'setge', 'setl', 'setle', 'setna', 'setnae', 'setnb', 'setnbe', 'setnc', 'setne', 'setng', 'setnge', 'setnl', 'setnle', 'setno', 'setnp', 'setns', 'setnz', 'seto', 'setp', 'setpe', 'setpo', 'sets', 'setz', 'sgdt', 'shl', 'shld', 'shr', 'shrd', 'sidt', 'sldt', 'smi', 'smint', 'smintold', 'smsw', 'stc', 'std', 'sti', 'stosb', 'stosd', 'stosw', 'str', 'sub', 'svdc', 'svldt', 'svts', 'syscall', 'sysenter', 'sysexit', 'sysret', 'test', 'ud1', 'ud2', 'umov', 'verr', 'verw', 'wait', 'wbinvd', 'wrmsr', 'wrshr', 'xadd', 'xbts', 'xchg', 'xlat', 'xlatb', 'xor' } def __init__(self, **options): Lexer.__init__(self, **options) self.keywords = set() if get_bool_opt(options, 'turbopascal', True): self.keywords.update(self.TURBO_PASCAL_KEYWORDS) if get_bool_opt(options, 'delphi', True): self.keywords.update(self.DELPHI_KEYWORDS) if get_bool_opt(options, 'freepascal', True): self.keywords.update(self.FREE_PASCAL_KEYWORDS) self.builtins = set() for unit in get_list_opt(options, 'units', list(self.BUILTIN_UNITS)): self.builtins.update(self.BUILTIN_UNITS[unit]) def get_tokens_unprocessed(self, text): scanner = Scanner(text, re.DOTALL | re.MULTILINE | re.IGNORECASE) stack = ['initial'] in_function_block = False in_property_block = False was_dot = False next_token_is_function = False next_token_is_property = False collect_labels = False block_labels = set() brace_balance = [0, 0] while not scanner.eos: token = Error if stack[-1] == 'initial': if scanner.scan(r'\s+'): token = Text elif scanner.scan(r'\{.*?\}|\(\*.*?\*\)'): if scanner.match.startswith('$'): token = Comment.Preproc else: token = Comment.Multiline elif scanner.scan(r'//.*?$'): token = Comment.Single elif scanner.scan(r'[-+*\/=<>:;,.@\^]'): token = Operator # stop label highlighting on next ";" if collect_labels and scanner.match == ';': collect_labels = False elif scanner.scan(r'[\(\)\[\]]+'): token = Punctuation # abort function naming ``foo = Function(...)`` next_token_is_function = False # if we are in a function block we count the open # braces because otherwise it's impossible to # determine the end of the modifier context if in_function_block or in_property_block: if scanner.match == '(': brace_balance[0] += 1 elif scanner.match == ')': brace_balance[0] -= 1 elif scanner.match == '[': brace_balance[1] += 1 elif scanner.match == ']': brace_balance[1] -= 1 elif scanner.scan(r'[A-Za-z_][A-Za-z_0-9]*'): lowercase_name = scanner.match.lower() if lowercase_name == 'result': token = Name.Builtin.Pseudo elif lowercase_name in self.keywords: token = Keyword # if we are in a special block and a # block ending keyword occurs (and the parenthesis # is balanced) we end the current block context if (in_function_block or in_property_block) and \ lowercase_name in self.BLOCK_KEYWORDS and \ brace_balance[0] <= 0 and \ brace_balance[1] <= 0: in_function_block = False in_property_block = False brace_balance = [0, 0] block_labels = set() if lowercase_name in ('label', 'goto'): collect_labels = True elif lowercase_name == 'asm': stack.append('asm')
elif lowercase_name == 'property': in_property_block = True next_token_is_property = True elif lowercase_name in ('procedure', 'operator', 'function', 'constructor', 'destructor'): in_function_block = True next_token_is_function = True # we are in a function block and the current name # is in the set of registered modifiers. highlight # it as pseudo keyword elif in_function_block and \ lowercase_name in self.FUNCTION_MODIFIERS: token = Keyword.Pseudo # if we are in a property highlight some more # modifiers elif in_property_block and \ lowercase_name in ('read', 'write'): token = Keyword.Pseudo next_token_is_function = True # if the last iteration set next_token_is_function # to true we now want this name highlighted as # function. so do that and reset the state elif next_token_is_function: # Look if the next token is a dot. If yes it's # not a function, but a class name and the # part after the dot a function name if scanner.test(r'\s*\.\s*'): token = Name.Class # it's not a dot, our job is done else: token = Name.Function next_token_is_function = False # same for properties elif next_token_is_property: token = Name.Property next_token_is_property = False # Highlight this token as label and add it # to the list of known labels elif collect_labels: token = Name.Label block_labels.add(scanner.match.lower()) # name is in list of known labels elif lowercase_name in block_labels: token = Name.Label elif lowercase_name in self.BUILTIN_TYPES: token = Keyword.Type elif lowercase_name in self.DIRECTIVES: token = Keyword.Pseudo # builtins are just builtins if the token # before isn't a dot elif not was_dot and lowercase_name in self.builtins: token = Name.Builtin else: token = Name elif scanner.scan(r"'"): token = String stack.append('string') elif scanner.scan(r'\#(\d+|\$[0-9A-Fa-f]+)'): token = String.Char elif scanner.scan(r'\$[0-9A-Fa-f]+'): token = Number.Hex elif scanner.scan(r'\d+(?![eE]|\.[^.])'): token = Number.Integer elif scanner.scan(r'\d+(\.\d+([eE][+-]?\d+)?|[eE][+-]?\d+)'): token = Number.Float else: # if the stack depth is deeper than once, pop if len(stack) > 1: stack.pop() scanner.get_char() elif stack[-1] == 'string': if scanner.scan(r"''"): token = String.Escape elif scanner.scan(r"'"): token = String stack.pop() elif scanner.scan(r"[^']*"): token = String else: scanner.get_char() stack.pop() elif stack[-1] == 'asm': if scanner.scan(r'\s+'): token = Text elif scanner.scan(r'end'): token = Keyword stack.pop() elif scanner.scan(r'\{.*?\}|\(\*.*?\*\)'): if scanner.match.startswith('$'): token = Comment.Preproc else: token = Comment.Multiline elif scanner.scan(r'//.*?$'): token = Comment.Single elif scanner.scan(r"'"): token = String stack.append('string') elif scanner.scan(r'@@[A-Za-z_][A-Za-z_0-9]*'): token = Name.Label elif scanner.scan(r'[A-Za-z_][A-Za-z_0-9]*'): lowercase_name = scanner.match.lower() if lowercase_name in self.ASM_INSTRUCTIONS: token = Keyword elif lowercase_name in self.ASM_REGISTERS: token = Name.Builtin else: token = Name elif scanner.scan(r'[-+*\/=<>:;,.@\^]+'): token = Operator elif scanner.scan(r'[\(\)\[\]]+'): token = Punctuation elif scanner.scan(r'\$[0-9A-Fa-f]+'): token = Number.Hex elif scanner.scan(r'\d+(?![eE]|\.[^.])'): token = Number.Integer elif scanner.scan(r'\d+(\.\d+([eE][+-]?\d+)?|[eE][+-]?\d+)'): token = Number.Float else: scanner.get_char() stack.pop() # save the dot!!!11 if scanner.match.strip(): was_dot = scanner.match == '.' yield scanner.start_pos, token, scanner.match or '' class AdaLexer(RegexLexer): """ For Ada source code. .. 
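
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): exercising the
# scanner-based DelphiLexer above on a tiny (invented) program, showing
# the keyword/builtin split it computes.

from pygments.lexers.pascal import DelphiLexer
from pygments.token import Keyword, Name

PASCAL_SOURCE = """\
program Hello;
begin
  WriteLn('Hello, world');
end.
"""

for token_type, value in DelphiLexer().get_tokens(PASCAL_SOURCE):
    if token_type in (Keyword, Name.Builtin):
        print(token_type, value)  # e.g. Keyword 'program', Name.Builtin 'WriteLn'
# ----------------------------------------------------------------------
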
versionadded:: 1.3 """ name = 'Ada' aliases = ['ada', 'ada95', 'ada2005'] filenames = ['*.adb', '*.ads', '*.ada'] mimetypes = ['text/x-ada'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'[^\S\n]+', Text), (r'--.*?\n', Comment.Single), (r'[^\S\n]+', Text), (r'function|procedure|entry', Keyword.Declaration, 'subprogram'), (r'(subtype|type)(\s+)(\w+)', bygroups(Keyword.Declaration, Text, Keyword.Type), 'type_def'), (r'task|protected', Keyword.Declaration), (r'(subtype)(\s+)', bygroups(Keyword.Declaration, Text)), (r'(end)(\s+)', bygroups(Keyword.Reserved, Text), 'end'), (r'(pragma)(\s+)(\w+)', bygroups(Keyword.Reserved, Text, Comment.Preproc)), (r'(true|false|null)\b', Keyword.Constant), (words(( 'Address', 'Byte', 'Boolean', 'Character', 'Controlled', 'Count', 'Cursor', 'Duration', 'File_Mode', 'File_Type', 'Float', 'Generator', 'Integer', 'Long_Float', 'Long_Integer', 'Long_Long_Float', 'Long_Long_Integer', 'Natural', 'Positive', 'Reference_Type', 'Short_Float', 'Short_Integer', 'Short_Short_Float', 'Short_Short_Integer', 'String', 'Wide_Character', 'Wide_String'), suffix=r'\b'), Keyword.Type), (r'(and(\s+then)?|in|mod|not|or(\s+else)|rem)\b', Operator.Word), (r'generic|private', Keyword.Declaration), (r'package', Keyword.Declaration, 'package'), (r'array\b', Keyword.Reserved, 'array_def'), (r'(with|use)(\s+)', bygroups(Keyword.Namespace, Text), 'import'), (r'(\w+)(\s*)(:)(\s*)(constant)', bygroups(Name.Constant, Text, Punctuation, Text, Keyword.Reserved)), (r'<<\w+>>', Name.Label), (r'(\w+)(\s*)(:)(\s*)(declare|begin|loop|for|while)', bygroups(Name.Label, Text, Punctuation, Text, Keyword.Reserved)), (words(( 'abort', 'abs', 'abstract', 'accept', 'access', 'aliased', 'all', 'array', 'at', 'begin', 'body', 'case', 'constant', 'declare', 'delay', 'delta', 'digits', 'do', 'else', 'elsif', 'end', 'entry', 'exception', 'exit', 'interface', 'for', 'goto', 'if', 'is', 'limited', 'loop', 'new', 'null', 'of', 'or', 'others', 'out', 'overriding', 'pragma', 'protected', 'raise', 'range', 'record', 'renames', 'requeue', 'return', 'reverse', 'select', 'separate', 'some', 'subtype', 'synchronized', 'task', 'tagged', 'terminate', 'then', 'type', 'until', 'when', 'while', 'xor'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), (r'"[^"]*"', String), include('attribute'), include('numbers'), (r"'[^']'", String.Character), (r'(\w+)(\s*|[(,])', bygroups(Name, using(this))), (r"(<>|=>|:=|[()|:;,.'])", Punctuation), (r'[*<>+=/&-]', Operator), (r'\n+', Text), ], 'numbers': [ (r'[0-9_]+#[0-9a-f_\.]+#', Number.Hex), (r'[0-9_]+\.[0-9_]*', Number.Float), (r'[0-9_]+', Number.Integer), ], 'attribute': [ (r"(')(\w+)", bygroups(Punctuation, Name.Attribute)), ], 'subprogram': [ (r'\(', Punctuation, ('#pop', 'formal_part')), (r';', Punctuation, '#pop'), (r'is\b', Keyword.Reserved, '#pop'), (r'"[^"]+"|\w+', Name.Function), include('root'), ], 'end': [ ('(if|case|record|loop|select)', Keyword.Reserved), (r'"[^"]+"|[\w.]+', Name.Function), (r'\s+', Text), (';', Punctuation, '#pop'), ], 'type_def': [ (r';', Punctuation, '#pop'), (r'\(', Punctuation, 'formal_part'), (r'with|and|use', Keyword.Reserved), (r'array\b', Keyword.Reserved, ('#pop', 'array_def')), (r'record\b', Keyword.Reserved, ('record_def')), (r'(null record)(;)', bygroups(Keyword.Reserved, Punctuation), '#pop'), include('root'), ], 'array_def': [ (r';', Punctuation, '#pop'), (r'(\w+)(\s+)(range)', bygroups(Keyword.Type, Text, Keyword.Reserved)), include('root'), ], 'record_def': [ (r'end record', Keyword.Reserved, '#pop'), include('root'), ], 
'import': [ (r'[\w.]+', Name.Namespace, '#pop'), default('#pop'), ], 'formal_part': [ (r'\)', Punctuation, '#pop'), (r'\w+', Name.Variable), (r',|:[^=]', Punctuation), (r'(in|not|null|out|access)\b', Keyword.Reserved), include('root'), ], 'package': [ ('body', Keyword.Declaration), (r'is\s+new|renames', Keyword.Reserved), ('is', Keyword.Reserved, '#pop'), (';', Punctuation, '#pop'), (r'\(', Punctuation, 'package_instantiation'), (r'([\w.]+)', Name.Class), include('root'), ], 'package_instantiation': [ (r'("[^"]+"|\w+)(\s+)(=>)', bygroups(Name.Variable, Text, Punctuation)), (r'[\w.\'"]', Text), (r'\)', Punctuation, '#pop'), include('root'), ], } pygments-2.11.2/pygments/lexers/elm.py0000644000175000017500000000611714165547207017642 0ustar carstencarsten""" pygments.lexers.elm ~~~~~~~~~~~~~~~~~~~ Lexer for the Elm programming language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, words, include, bygroups from pygments.token import Comment, Keyword, Name, Number, Punctuation, String, \ Text, Whitespace __all__ = ['ElmLexer'] class ElmLexer(RegexLexer): """ For `Elm `_ source code. .. versionadded:: 2.1 """ name = 'Elm' aliases = ['elm'] filenames = ['*.elm'] mimetypes = ['text/x-elm'] validName = r'[a-z_][a-zA-Z0-9_\']*' specialName = r'^main ' builtinOps = ( '~', '||', '|>', '|', '`', '^', '\\', '\'', '>>', '>=', '>', '==', '=', '<~', '<|', '<=', '<<', '<-', '<', '::', ':', '/=', '//', '/', '..', '.', '->', '-', '++', '+', '*', '&&', '%', ) reservedWords = words(( 'alias', 'as', 'case', 'else', 'if', 'import', 'in', 'let', 'module', 'of', 'port', 'then', 'type', 'where', ), suffix=r'\b') tokens = { 'root': [ # Comments (r'\{-', Comment.Multiline, 'comment'), (r'--.*', Comment.Single), # Whitespace (r'\s+', Whitespace), # Strings (r'"', String, 'doublequote'), # Modules (r'^(\s*)(module)(\s*)', bygroups(Whitespace, Keyword.Namespace, Whitespace), 'imports'), # Imports (r'^(\s*)(import)(\s*)', bygroups(Whitespace, Keyword.Namespace, Whitespace), 'imports'), # Shaders (r'\[glsl\|.*', Name.Entity, 'shader'), # Keywords (reservedWords, Keyword.Reserved), # Types (r'[A-Z][a-zA-Z0-9_]*', Keyword.Type), # Main (specialName, Keyword.Reserved), # Prefix Operators (words((builtinOps), prefix=r'\(', suffix=r'\)'), Name.Function), # Infix Operators (words(builtinOps), Name.Function), # Numbers include('numbers'), # Variable Names (validName, Name.Variable), # Parens (r'[,()\[\]{}]', Punctuation), ], 'comment': [ (r'-(?!\})', Comment.Multiline), (r'\{-', Comment.Multiline, 'comment'), (r'[^-}]', Comment.Multiline), (r'-\}', Comment.Multiline, '#pop'), ], 'doublequote': [ (r'\\u[0-9a-fA-F]{4}', String.Escape), (r'\\[nrfvb\\"]', String.Escape), (r'[^"]', String), (r'"', String, '#pop'), ], 'imports': [ (r'\w+(\.\w+)*', Name.Class, '#pop'), ], 'numbers': [ (r'_?\d+\.(?=\d+)', Number.Float), (r'_?\d+', Number.Integer), ], 'shader': [ (r'\|(?!\])', Name.Entity), (r'\|\]', Name.Entity, '#pop'), (r'(.*)(\n)', bygroups(Name.Entity, Whitespace)), ], } pygments-2.11.2/pygments/lexers/rust.py0000644000175000017500000001776114165547207020071 0ustar carstencarsten""" pygments.lexers.rust ~~~~~~~~~~~~~~~~~~~~ Lexers for the Rust language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
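
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): the ElmLexer above
# treats capitalised names as types and a leading ``main`` as special.
# The snippet is invented for the example.

from pygments.lexers.elm import ElmLexer
from pygments.token import Keyword

ELM_SOURCE = """\
module Main exposing (main)

main : Html msg
main = text "hello"
"""

types = [v for t, v in ElmLexer().get_tokens(ELM_SOURCE) if t is Keyword.Type]
print(types)  # ['Html']; 'Main' is lexed as Name.Class by the 'imports' state
# ----------------------------------------------------------------------
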
""" from pygments.lexer import RegexLexer, include, bygroups, words, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['RustLexer'] class RustLexer(RegexLexer): """ Lexer for the Rust programming language (version 1.47). .. versionadded:: 1.6 """ name = 'Rust' filenames = ['*.rs', '*.rs.in'] aliases = ['rust', 'rs'] mimetypes = ['text/rust', 'text/x-rust'] keyword_types = (words(( 'u8', 'u16', 'u32', 'u64', 'u128', 'i8', 'i16', 'i32', 'i64', 'i128', 'usize', 'isize', 'f32', 'f64', 'char', 'str', 'bool', ), suffix=r'\b'), Keyword.Type) builtin_funcs_types = (words(( 'Copy', 'Send', 'Sized', 'Sync', 'Unpin', 'Drop', 'Fn', 'FnMut', 'FnOnce', 'drop', 'Box', 'ToOwned', 'Clone', 'PartialEq', 'PartialOrd', 'Eq', 'Ord', 'AsRef', 'AsMut', 'Into', 'From', 'Default', 'Iterator', 'Extend', 'IntoIterator', 'DoubleEndedIterator', 'ExactSizeIterator', 'Option', 'Some', 'None', 'Result', 'Ok', 'Err', 'String', 'ToString', 'Vec', ), suffix=r'\b'), Name.Builtin) builtin_macros = (words(( 'asm', 'assert', 'assert_eq', 'assert_ne', 'cfg', 'column', 'compile_error', 'concat', 'concat_idents', 'dbg', 'debug_assert', 'debug_assert_eq', 'debug_assert_ne', 'env', 'eprint', 'eprintln', 'file', 'format', 'format_args', 'format_args_nl', 'global_asm', 'include', 'include_bytes', 'include_str', 'is_aarch64_feature_detected', 'is_arm_feature_detected', 'is_mips64_feature_detected', 'is_mips_feature_detected', 'is_powerpc64_feature_detected', 'is_powerpc_feature_detected', 'is_x86_feature_detected', 'line', 'llvm_asm', 'log_syntax', 'macro_rules', 'matches', 'module_path', 'option_env', 'panic', 'print', 'println', 'stringify', 'thread_local', 'todo', 'trace_macros', 'unimplemented', 'unreachable', 'vec', 'write', 'writeln', ), suffix=r'!'), Name.Function.Magic) tokens = { 'root': [ # rust allows a file to start with a shebang, but if the first line # starts with #![ then it's not a shebang but a crate attribute. (r'#![^[\r\n].*$', Comment.Preproc), default('base'), ], 'base': [ # Whitespace and Comments (r'\n', Whitespace), (r'\s+', Whitespace), (r'//!.*?\n', String.Doc), (r'///(\n|[^/].*?\n)', String.Doc), (r'//(.*?)\n', Comment.Single), (r'/\*\*(\n|[^/*])', String.Doc, 'doccomment'), (r'/\*!', String.Doc, 'doccomment'), (r'/\*', Comment.Multiline, 'comment'), # Macro parameters (r"""\$([a-zA-Z_]\w*|\(,?|\),?|,?)""", Comment.Preproc), # Keywords (words(('as', 'async', 'await', 'box', 'const', 'crate', 'dyn', 'else', 'extern', 'for', 'if', 'impl', 'in', 'loop', 'match', 'move', 'mut', 'pub', 'ref', 'return', 'static', 'super', 'trait', 'unsafe', 'use', 'where', 'while'), suffix=r'\b'), Keyword), (words(('abstract', 'become', 'do', 'final', 'macro', 'override', 'priv', 'typeof', 'try', 'unsized', 'virtual', 'yield'), suffix=r'\b'), Keyword.Reserved), (r'(true|false)\b', Keyword.Constant), (r'self\b', Name.Builtin.Pseudo), (r'mod\b', Keyword, 'modname'), (r'let\b', Keyword.Declaration), (r'fn\b', Keyword, 'funcname'), (r'(struct|enum|type|union)\b', Keyword, 'typename'), (r'(default)(\s+)(type|fn)\b', bygroups(Keyword, Text, Keyword)), keyword_types, (r'[sS]elf\b', Name.Builtin.Pseudo), # Prelude (taken from Rust's src/libstd/prelude.rs) builtin_funcs_types, builtin_macros, # Path seperators, so types don't catch them. (r'::\b', Text), # Types in positions. 
(r'(?::|->)', Text, 'typename'), # Labels (r'(break|continue)(\b\s*)(\'[A-Za-z_]\w*)?', bygroups(Keyword, Text.Whitespace, Name.Label)), # Character literals (r"""'(\\['"\\nrt]|\\x[0-7][0-9a-fA-F]|\\0""" r"""|\\u\{[0-9a-fA-F]{1,6}\}|.)'""", String.Char), (r"""b'(\\['"\\nrt]|\\x[0-9a-fA-F]{2}|\\0""" r"""|\\u\{[0-9a-fA-F]{1,6}\}|.)'""", String.Char), # Binary literals (r'0b[01_]+', Number.Bin, 'number_lit'), # Octal literals (r'0o[0-7_]+', Number.Oct, 'number_lit'), # Hexadecimal literals (r'0[xX][0-9a-fA-F_]+', Number.Hex, 'number_lit'), # Decimal literals (r'[0-9][0-9_]*(\.[0-9_]+[eE][+\-]?[0-9_]+|' r'\.[0-9_]*(?!\.)|[eE][+\-]?[0-9_]+)', Number.Float, 'number_lit'), (r'[0-9][0-9_]*', Number.Integer, 'number_lit'), # String literals (r'b"', String, 'bytestring'), (r'"', String, 'string'), (r'(?s)b?r(#*)".*?"\1', String), # Lifetime names (r"'", Operator, 'lifetime'), # Operators and Punctuation (r'\.\.=?', Operator), (r'[{}()\[\],.;]', Punctuation), (r'[+\-*/%&|<>^!~@=:?]', Operator), # Identifiers (r'[a-zA-Z_]\w*', Name), # Raw identifiers (r'r#[a-zA-Z_]\w*', Name), # Attributes (r'#!?\[', Comment.Preproc, 'attribute['), # Misc # Lone hashes: not used in Rust syntax, but allowed in macro # arguments, most famously for quote::quote!() (r'#', Text), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'doccomment': [ (r'[^*/]+', String.Doc), (r'/\*', String.Doc, '#push'), (r'\*/', String.Doc, '#pop'), (r'[*/]', String.Doc), ], 'modname': [ (r'\s+', Text), (r'[a-zA-Z_]\w*', Name.Namespace, '#pop'), default('#pop'), ], 'funcname': [ (r'\s+', Text), (r'[a-zA-Z_]\w*', Name.Function, '#pop'), default('#pop'), ], 'typename': [ (r'\s+', Text), (r'&', Keyword.Pseudo), (r"'", Operator, 'lifetime'), builtin_funcs_types, keyword_types, (r'[a-zA-Z_]\w*', Name.Class, '#pop'), default('#pop'), ], 'lifetime': [ (r"(static|_)", Name.Builtin), (r"[a-zA-Z_]+\w*", Name.Attribute), default('#pop'), ], 'number_lit': [ (r'[ui](8|16|32|64|size)', Keyword, '#pop'), (r'f(32|64)', Keyword, '#pop'), default('#pop'), ], 'string': [ (r'"', String, '#pop'), (r"""\\['"\\nrt]|\\x[0-7][0-9a-fA-F]|\\0""" r"""|\\u\{[0-9a-fA-F]{1,6}\}""", String.Escape), (r'[^\\"]+', String), (r'\\', String), ], 'bytestring': [ (r"""\\x[89a-fA-F][0-9a-fA-F]""", String.Escape), include('string'), ], 'attribute_common': [ (r'"', String, 'string'), (r'\[', Comment.Preproc, 'attribute['), ], 'attribute[': [ include('attribute_common'), (r'\]', Comment.Preproc, '#pop'), (r'[^"\]\[]+', Comment.Preproc), ], } pygments-2.11.2/pygments/lexers/lilypond.py0000644000175000017500000002041614165547207020715 0ustar carstencarsten""" pygments.lexers.lilypond ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for LilyPond. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import default, inherit, words from pygments.lexers.lisp import SchemeLexer from pygments.lexers._lilypond_builtins import ( keywords, pitch_language_names, clefs, scales, repeat_types, units, chord_modifiers, pitches, music_functions, dynamics, articulations, music_commands, markup_commands, grobs, translators, contexts, context_properties, grob_properties, scheme_functions, paper_variables, header_variables ) from pygments.token import Token __all__ = ["LilyPondLexer"] NAME_END_RE = r"(?=\d|[^\w\-]|[\-_][\W\d])" def builtin_words(names, backslash, suffix=NAME_END_RE): prefix = r"[\-_^]?" 
if backslash == "mandatory": prefix += r"\\" elif backslash == "optional": prefix += r"\\?" else: assert backslash == "disallowed" return words(names, prefix, suffix) class LilyPondLexer(SchemeLexer): """ Lexer for input to `LilyPond `_, a text-based music typesetter. .. important:: This lexer is meant to be used in conjunction with the ``lilypond`` style. .. versionadded:: 2.11 """ name = 'LilyPond' aliases = ['lilypond'] filenames = ['*.ly'] mimetypes = [] flags = re.DOTALL | re.MULTILINE # Because parsing LilyPond input is very tricky (and in fact # impossible without executing LilyPond when there is Scheme # code in the file), this lexer does not try to recognize # lexical modes. Instead, it catches the most frequent pieces # of syntax, and, above all, knows about many kinds of builtins. # In order to parse embedded Scheme, this lexer subclasses the SchemeLexer. # It redefines the 'root' state entirely, and adds a rule for #{ #} # to the 'value' state. The latter is used to parse a Scheme expression # after #. def get_tokens_unprocessed(self, text): """Highlight Scheme variables as LilyPond builtins when applicable.""" for index, token, value in super().get_tokens_unprocessed(text): if token is Token.Name.Function or token is Token.Name.Variable: if value in scheme_functions: token = Token.Name.Builtin.SchemeFunction elif token is Token.Name.Builtin: token = Token.Name.Builtin.SchemeBuiltin yield index, token, value tokens = { "root": [ # Whitespace. (r"\s+", Token.Whitespace), # Multi-line comment. These are non-nestable. (r"%\{.*?%\}", Token.Comment.Multiline), # Simple comment. (r"%.*?$", Token.Comment.Single), # End of embedded LilyPond in Scheme. (r"#\}", Token.Punctuation, "#pop"), # Embedded Scheme, starting with # ("delayed"), # or $ (immediate). #@ and and $@ are the lesser known # "list splicing operators". (r"[#$]@?", Token.Punctuation, "value"), # Any kind of punctuation: # - sequential music: { }, # - parallel music: << >>, # - voice separator: << \\ >>, # - chord: < >, # - bar check: |, # - dot in nested properties: \revert NoteHead.color, # - equals sign in assignemnts and lists for various commands: # \override Stem.color = red, # - comma as alternative syntax for lists: \time 3,3,2 4/4, # - colon in tremolos: c:32, # - double hyphen in lyrics: li -- ly -- pond, (r"\\\\|--|[{}<>=.,:|]", Token.Punctuation), # Pitch, with optional octavation marks, octave check, # and forced or cautionary accidental. (words(pitches, suffix=r"=?[',]*!?\??" + NAME_END_RE), Token.Pitch), # String, optionally with direction specifier. (r'[\-_^]?"', Token.String, "string"), # Numbers. (r"-?\d+\.\d+", Token.Number.Float), # 5. and .5 are not allowed (r"-?\d+/\d+", Token.Number.Fraction), # Integer, or duration with optional augmentation dots. We have no # way to distinguish these, so we highlight them all as numbers. (r"-?(\d+|\\longa|\\breve)\.*", Token.Number), # Separates duration and duration multiplier highlighted as fraction. (r"\*", Token.Number), # Ties, slurs, manual beams. (r"[~()[\]]", Token.Name.Builtin.Articulation), # Predefined articulation shortcuts. A direction specifier is # required here. (r"[\-_^][>^_!.\-+]", Token.Name.Builtin.Articulation), # Fingering numbers, string numbers. (r"[\-_^]?\\?\d+", Token.Name.Builtin.Articulation), # Builtins. 
(builtin_words(keywords, "mandatory"), Token.Keyword), (builtin_words(pitch_language_names, "disallowed"), Token.Name.PitchLanguage), (builtin_words(clefs, "disallowed"), Token.Name.Builtin.Clef), (builtin_words(scales, "mandatory"), Token.Name.Builtin.Scale), (builtin_words(repeat_types, "disallowed"), Token.Name.Builtin.RepeatType), (builtin_words(units, "mandatory"), Token.Number), (builtin_words(chord_modifiers, "disallowed"), Token.ChordModifier), (builtin_words(music_functions, "mandatory"), Token.Name.Builtin.MusicFunction), (builtin_words(dynamics, "mandatory"), Token.Name.Builtin.Dynamic), # Those like slurs that don't take a backslash are covered above. (builtin_words(articulations, "mandatory"), Token.Name.Builtin.Articulation), (builtin_words(music_commands, "mandatory"), Token.Name.Builtin.MusicCommand), (builtin_words(markup_commands, "mandatory"), Token.Name.Builtin.MarkupCommand), (builtin_words(grobs, "disallowed"), Token.Name.Builtin.Grob), (builtin_words(translators, "disallowed"), Token.Name.Builtin.Translator), # Optional backslash because of \layout { \context { \Score ... } }. (builtin_words(contexts, "optional"), Token.Name.Builtin.Context), (builtin_words(context_properties, "disallowed"), Token.Name.Builtin.ContextProperty), (builtin_words(grob_properties, "disallowed"), Token.Name.Builtin.GrobProperty, "maybe-subproperties"), # Optional backslashes here because output definitions are wrappers # around modules. Concretely, you can do, e.g., # \paper { oddHeaderMarkup = \evenHeaderMarkup } (builtin_words(paper_variables, "optional"), Token.Name.Builtin.PaperVariable), (builtin_words(header_variables, "optional"), Token.Name.Builtin.HeaderVariable), # Other backslashed-escaped names (like dereferencing a # music variable), possibly with a direction specifier. (r"[\-_^]?\\.+?" + NAME_END_RE, Token.Name.BackslashReference), # Definition of a variable. Support assignments to alist keys # (myAlist.my-key.my-nested-key = \markup \spam \eggs). (r"([^\W\d]|-)+(?=([^\W\d]|[\-.])*\s*=)", Token.Name.Lvalue), # Virtually everything can appear in markup mode, so we highlight # as text. (r".", Token.Text), ], "string": [ (r'"', Token.String, "#pop"), (r'\\.', Token.String.Escape), (r'[^\\"]+', Token.String), ], "value": [ # Scan a LilyPond value, then pop back since we had a # complete expression. (r"#\{", Token.Punctuation, ("#pop", "root")), inherit, ], # Grob subproperties are undeclared and it would be tedious # to maintain them by hand. Instead, this state allows recognizing # everything that looks like a-known-property.foo.bar-baz as # one single property name. "maybe-subproperties": [ (r"\.", Token.Punctuation), (r"\s+", Token.Whitespace), (r"([^\W\d])+" + NAME_END_RE, Token.Name.Builtin.GrobProperty), default("#pop"), ] } pygments-2.11.2/pygments/lexers/unicon.py0000644000175000017500000004412014165547207020354 0ustar carstencarsten""" pygments.lexers.unicon ~~~~~~~~~~~~~~~~~~~~~~ Lexers for the Icon and Unicon languages, including ucode VM. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, words, using, this from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['IconLexer', 'UcodeLexer', 'UniconLexer'] class UniconLexer(RegexLexer): """ For Unicon source code. .. 
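
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): as the LilyPondLexer
# docstring above notes, its output is meant to be rendered with the
# matching ``lilypond`` style. The input is an invented two-bar snippet.

from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers.lilypond import LilyPondLexer

LY_SOURCE = "\\version \"2.22.0\"\n{ c'4 d'4 e'2 }\n"

print(highlight(LY_SOURCE, LilyPondLexer(), HtmlFormatter(style="lilypond")))
# ----------------------------------------------------------------------
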
versionadded:: 2.4 """ name = 'Unicon' aliases = ['unicon'] filenames = ['*.icn'] mimetypes = ['text/unicon'] flags = re.MULTILINE tokens = { 'root': [ (r'[^\S\n]+', Text), (r'#.*?\n', Comment.Single), (r'[^\S\n]+', Text), (r'class|method|procedure', Keyword.Declaration, 'subprogram'), (r'(record)(\s+)(\w+)', bygroups(Keyword.Declaration, Text, Keyword.Type), 'type_def'), (r'(#line|\$C|\$Cend|\$define|\$else|\$endif|\$error|\$ifdef|' r'\$ifndef|\$include|\$line|\$undef)\b', Keyword.PreProc), (r'(&null|&fail)\b', Keyword.Constant), (r'&allocated|&ascii|&clock|&collections|&column|&col|&control|' r'&cset|¤t|&dateline|&date|&digits|&dump|' r'&errno|&errornumber|&errortext|&errorvalue|&error|&errout|' r'&eventcode|&eventvalue|&eventsource|&e|' r'&features|&file|&host|&input|&interval|&lcase|&letters|' r'&level|&line|&ldrag|&lpress|&lrelease|' r'&main|&mdrag|&meta|&mpress|&mrelease|&now|&output|' r'&phi|&pick|&pi|&pos|&progname|' r'&random|&rdrag|®ions|&resize|&row|&rpress|&rrelease|' r'&shift|&source|&storage|&subject|' r'&time|&trace|&ucase|&version|' r'&window|&x|&y', Keyword.Reserved), (r'(by|of|not|to)\b', Keyword.Reserved), (r'(global|local|static|abstract)\b', Keyword.Reserved), (r'package|link|import', Keyword.Declaration), (words(( 'break', 'case', 'create', 'critical', 'default', 'end', 'all', 'do', 'else', 'every', 'fail', 'if', 'import', 'initial', 'initially', 'invocable', 'next', 'repeat', 'return', 'suspend', 'then', 'thread', 'until', 'while'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), (words(( 'Abort', 'abs', 'acos', 'Active', 'Alert', 'any', 'Any', 'Arb', 'Arbno', 'args', 'array', 'asin', 'atan', 'atanh', 'Attrib', 'Bal', 'bal', 'Bg', 'Break', 'Breakx', 'callout', 'center', 'char', 'chdir', 'chmod', 'chown', 'chroot', 'classname', 'Clip', 'Clone', 'close', 'cofail', 'collect', 'Color', 'ColorValue', 'condvar', 'constructor', 'copy', 'CopyArea', 'cos', 'Couple', 'crypt', 'cset', 'ctime', 'dbcolumns', 'dbdriver', 'dbkeys', 'dblimits', 'dbproduct', 'dbtables', 'delay', 'delete', 'detab', 'display', 'DrawArc', 'DrawCircle', 'DrawCube', 'DrawCurve', 'DrawCylinder', 'DrawDisk', 'DrawImage', 'DrawLine', 'DrawPoint', 'DrawPolygon', 'DrawRectangle', 'DrawSegment', 'DrawSphere', 'DrawString', 'DrawTorus', 'dtor', 'entab', 'EraseArea', 'errorclear', 'Event', 'eventmask', 'EvGet', 'EvSend', 'exec', 'exit', 'exp', 'Eye', 'Fail', 'fcntl', 'fdup', 'Fence', 'fetch', 'Fg', 'fieldnames', 'filepair', 'FillArc', 'FillCircle', 'FillPolygon', 'FillRectangle', 'find', 'flock', 'flush', 'Font', 'fork', 'FreeColor', 'FreeSpace', 'function', 'get', 'getch', 'getche', 'getegid', 'getenv', 'geteuid', 'getgid', 'getgr', 'gethost', 'getpgrp', 'getpid', 'getppid', 'getpw', 'getrusage', 'getserv', 'GetSpace', 'gettimeofday', 'getuid', 'globalnames', 'GotoRC', 'GotoXY', 'gtime', 'hardlink', 'iand', 'icom', 'IdentityMatrix', 'image', 'InPort', 'insert', 'Int86', 'integer', 'ioctl', 'ior', 'ishift', 'istate', 'ixor', 'kbhit', 'key', 'keyword', 'kill', 'left', 'Len', 'list', 'load', 'loadfunc', 'localnames', 'lock', 'log', 'Lower', 'lstat', 'many', 'map', 'match', 'MatrixMode', 'max', 'member', 'membernames', 'methodnames', 'methods', 'min', 'mkdir', 'move', 'MultMatrix', 'mutex', 'name', 'NewColor', 'Normals', 'NotAny', 'numeric', 'open', 'opencl', 'oprec', 'ord', 'OutPort', 'PaletteChars', 'PaletteColor', 'PaletteKey', 'paramnames', 'parent', 'Pattern', 'Peek', 'Pending', 'pipe', 'Pixel', 'PlayAudio', 'Poke', 'pop', 'PopMatrix', 'Pos', 'pos', 'proc', 'pull', 'push', 'PushMatrix', 'PushRotate', 
'PushScale', 'PushTranslate', 'put', 'QueryPointer', 'Raise', 'read', 'ReadImage', 'readlink', 'reads', 'ready', 'real', 'receive', 'Refresh', 'Rem', 'remove', 'rename', 'repl', 'reverse', 'right', 'rmdir', 'Rotate', 'Rpos', 'Rtab', 'rtod', 'runerr', 'save', 'Scale', 'seek', 'select', 'send', 'seq', 'serial', 'set', 'setenv', 'setgid', 'setgrent', 'sethostent', 'setpgrp', 'setpwent', 'setservent', 'setuid', 'signal', 'sin', 'sort', 'sortf', 'Span', 'spawn', 'sql', 'sqrt', 'stat', 'staticnames', 'stop', 'StopAudio', 'string', 'structure', 'Succeed', 'Swi', 'symlink', 'sys_errstr', 'system', 'syswrite', 'Tab', 'tab', 'table', 'tan', 'Texcoord', 'Texture', 'TextWidth', 'Translate', 'trap', 'trim', 'truncate', 'trylock', 'type', 'umask', 'Uncouple', 'unlock', 'upto', 'utime', 'variable', 'VAttrib', 'wait', 'WAttrib', 'WDefault', 'WFlush', 'where', 'WinAssociate', 'WinButton', 'WinColorDialog', 'WindowContents', 'WinEditRegion', 'WinFontDialog', 'WinMenuBar', 'WinOpenDialog', 'WinPlayMedia', 'WinSaveDialog', 'WinScrollBar', 'WinSelectDialog', 'write', 'WriteImage', 'writes', 'WSection', 'WSync'), prefix=r'\b', suffix=r'\b'), Name.Function), include('numbers'), (r'<@|<<@|>@|>>@|\.>|->|===|~===|\*\*|\+\+|--|\.|~==|~=|<=|>=|==|' r'=|<<=|<<|>>=|>>|:=:|:=|->|<->|\+:=|\|', Operator), (r'"(?:[^\\"]|\\.)*"', String), (r"'(?:[^\\']|\\.)*'", String.Character), (r'[*<>+=/&!?@~\\-]', Operator), (r'\^', Operator), (r'(\w+)(\s*|[(,])', bygroups(Name, using(this))), (r"[\[\]]", Punctuation), (r"<>|=>|[()|:;,.'`{}%&?]", Punctuation), (r'\n+', Text), ], 'numbers': [ (r'\b([+-]?([2-9]|[12][0-9]|3[0-6])[rR][0-9a-zA-Z]+)\b', Number.Hex), (r'[+-]?[0-9]*\.([0-9]*)([Ee][+-]?[0-9]*)?', Number.Float), (r'\b([+-]?[0-9]+[KMGTPkmgtp]?)\b', Number.Integer), ], 'subprogram': [ (r'\(', Punctuation, ('#pop', 'formal_part')), (r';', Punctuation, '#pop'), (r'"[^"]+"|\w+', Name.Function), include('root'), ], 'type_def': [ (r'\(', Punctuation, 'formal_part'), ], 'formal_part': [ (r'\)', Punctuation, '#pop'), (r'\w+', Name.Variable), (r',', Punctuation), (r'(:string|:integer|:real)\b', Keyword.Reserved), include('root'), ], } class IconLexer(RegexLexer): """ Lexer for Icon. .. 
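
# ----------------------------------------------------------------------
# Usage sketch (not part of the Pygments source): tokenizing a tiny
# (invented) Unicon procedure with the UniconLexer above; ``procedure``
# triggers the 'subprogram' state, so ``main`` is tagged Name.Function.

from pygments.lexers.unicon import UniconLexer

UNICON_SOURCE = """\
procedure main()
    write("hello")
end
"""

for token_type, value in UniconLexer().get_tokens(UNICON_SOURCE):
    if value.strip():
        print(token_type, repr(value))
# ----------------------------------------------------------------------
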
versionadded:: 1.6 """ name = 'Icon' aliases = ['icon'] filenames = ['*.icon', '*.ICON'] mimetypes = [] flags = re.MULTILINE tokens = { 'root': [ (r'[^\S\n]+', Text), (r'#.*?\n', Comment.Single), (r'[^\S\n]+', Text), (r'class|method|procedure', Keyword.Declaration, 'subprogram'), (r'(record)(\s+)(\w+)', bygroups(Keyword.Declaration, Text, Keyword.Type), 'type_def'), (r'(#line|\$C|\$Cend|\$define|\$else|\$endif|\$error|\$ifdef|' r'\$ifndef|\$include|\$line|\$undef)\b', Keyword.PreProc), (r'(&null|&fail)\b', Keyword.Constant), (r'&allocated|&ascii|&clock|&collections|&column|&col|&control|' r'&cset|¤t|&dateline|&date|&digits|&dump|' r'&errno|&errornumber|&errortext|&errorvalue|&error|&errout|' r'&eventcode|&eventvalue|&eventsource|&e|' r'&features|&file|&host|&input|&interval|&lcase|&letters|' r'&level|&line|&ldrag|&lpress|&lrelease|' r'&main|&mdrag|&meta|&mpress|&mrelease|&now|&output|' r'&phi|&pick|&pi|&pos|&progname|' r'&random|&rdrag|®ions|&resize|&row|&rpress|&rrelease|' r'&shift|&source|&storage|&subject|' r'&time|&trace|&ucase|&version|' r'&window|&x|&y', Keyword.Reserved), (r'(by|of|not|to)\b', Keyword.Reserved), (r'(global|local|static)\b', Keyword.Reserved), (r'link', Keyword.Declaration), (words(( 'break', 'case', 'create', 'default', 'end', 'all', 'do', 'else', 'every', 'fail', 'if', 'initial', 'invocable', 'next', 'repeat', 'return', 'suspend', 'then', 'until', 'while'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), (words(( 'abs', 'acos', 'Active', 'Alert', 'any', 'args', 'array', 'asin', 'atan', 'atanh', 'Attrib', 'bal', 'Bg', 'callout', 'center', 'char', 'chdir', 'chmod', 'chown', 'chroot', 'Clip', 'Clone', 'close', 'cofail', 'collect', 'Color', 'ColorValue', 'condvar', 'copy', 'CopyArea', 'cos', 'Couple', 'crypt', 'cset', 'ctime', 'delay', 'delete', 'detab', 'display', 'DrawArc', 'DrawCircle', 'DrawCube', 'DrawCurve', 'DrawCylinder', 'DrawDisk', 'DrawImage', 'DrawLine', 'DrawPoint', 'DrawPolygon', 'DrawRectangle', 'DrawSegment', 'DrawSphere', 'DrawString', 'DrawTorus', 'dtor', 'entab', 'EraseArea', 'errorclear', 'Event', 'eventmask', 'EvGet', 'EvSend', 'exec', 'exit', 'exp', 'Eye', 'fcntl', 'fdup', 'fetch', 'Fg', 'fieldnames', 'FillArc', 'FillCircle', 'FillPolygon', 'FillRectangle', 'find', 'flock', 'flush', 'Font', 'FreeColor', 'FreeSpace', 'function', 'get', 'getch', 'getche', 'getenv', 'GetSpace', 'gettimeofday', 'getuid', 'globalnames', 'GotoRC', 'GotoXY', 'gtime', 'hardlink', 'iand', 'icom', 'IdentityMatrix', 'image', 'InPort', 'insert', 'Int86', 'integer', 'ioctl', 'ior', 'ishift', 'istate', 'ixor', 'kbhit', 'key', 'keyword', 'kill', 'left', 'Len', 'list', 'load', 'loadfunc', 'localnames', 'lock', 'log', 'Lower', 'lstat', 'many', 'map', 'match', 'MatrixMode', 'max', 'member', 'membernames', 'methodnames', 'methods', 'min', 'mkdir', 'move', 'MultMatrix', 'mutex', 'name', 'NewColor', 'Normals', 'numeric', 'open', 'opencl', 'oprec', 'ord', 'OutPort', 'PaletteChars', 'PaletteColor', 'PaletteKey', 'paramnames', 'parent', 'Pattern', 'Peek', 'Pending', 'pipe', 'Pixel', 'Poke', 'pop', 'PopMatrix', 'Pos', 'pos', 'proc', 'pull', 'push', 'PushMatrix', 'PushRotate', 'PushScale', 'PushTranslate', 'put', 'QueryPointer', 'Raise', 'read', 'ReadImage', 'readlink', 'reads', 'ready', 'real', 'receive', 'Refresh', 'Rem', 'remove', 'rename', 'repl', 'reverse', 'right', 'rmdir', 'Rotate', 'Rpos', 'rtod', 'runerr', 'save', 'Scale', 'seek', 'select', 'send', 'seq', 'serial', 'set', 'setenv', 'setuid', 'signal', 'sin', 'sort', 'sortf', 'spawn', 'sql', 'sqrt', 'stat', 'staticnames', 'stop', 
'string', 'structure', 'Swi', 'symlink', 'sys_errstr', 'system', 'syswrite', 'tab', 'table', 'tan', 'Texcoord', 'Texture', 'TextWidth', 'Translate', 'trap', 'trim', 'truncate', 'trylock', 'type', 'umask', 'Uncouple', 'unlock', 'upto', 'utime', 'variable', 'wait', 'WAttrib', 'WDefault', 'WFlush', 'where', 'WinAssociate', 'WinButton', 'WinColorDialog', 'WindowContents', 'WinEditRegion', 'WinFontDialog', 'WinMenuBar', 'WinOpenDialog', 'WinPlayMedia', 'WinSaveDialog', 'WinScrollBar', 'WinSelectDialog', 'write', 'WriteImage', 'writes', 'WSection', 'WSync'), prefix=r'\b', suffix=r'\b'), Name.Function), include('numbers'), (r'===|~===|\*\*|\+\+|--|\.|==|~==|<=|>=|=|~=|<<=|<<|>>=|>>|' r':=:|:=|<->|<-|\+:=|\|\||\|', Operator), (r'"(?:[^\\"]|\\.)*"', String), (r"'(?:[^\\']|\\.)*'", String.Character), (r'[*<>+=/&!?@~\\-]', Operator), (r'(\w+)(\s*|[(,])', bygroups(Name, using(this))), (r"[\[\]]", Punctuation), (r"<>|=>|[()|:;,.'`{}%\^&?]", Punctuation), (r'\n+', Text), ], 'numbers': [ (r'\b([+-]?([2-9]|[12][0-9]|3[0-6])[rR][0-9a-zA-Z]+)\b', Number.Hex), (r'[+-]?[0-9]*\.([0-9]*)([Ee][+-]?[0-9]*)?', Number.Float), (r'\b([+-]?[0-9]+[KMGTPkmgtp]?)\b', Number.Integer), ], 'subprogram': [ (r'\(', Punctuation, ('#pop', 'formal_part')), (r';', Punctuation, '#pop'), (r'"[^"]+"|\w+', Name.Function), include('root'), ], 'type_def': [ (r'\(', Punctuation, 'formal_part'), ], 'formal_part': [ (r'\)', Punctuation, '#pop'), (r'\w+', Name.Variable), (r',', Punctuation), (r'(:string|:integer|:real)\b', Keyword.Reserved), include('root'), ], } class UcodeLexer(RegexLexer): """ Lexer for Icon ucode files. .. versionadded:: 2.4 """ name = 'ucode' aliases = ['ucode'] filenames = ['*.u', '*.u1', '*.u2'] mimetypes = [] flags = re.MULTILINE tokens = { 'root': [ (r'(#.*\n)', Comment), (words(( 'con', 'declend', 'end', 'global', 'impl', 'invocable', 'lab', 'link', 'local', 'record', 'uid', 'unions', 'version'), prefix=r'\b', suffix=r'\b'), Name.Function), (words(( 'colm', 'filen', 'line', 'synt'), prefix=r'\b', suffix=r'\b'), Comment), (words(( 'asgn', 'bang', 'bscan', 'cat', 'ccase', 'chfail', 'coact', 'cofail', 'compl', 'coret', 'create', 'cset', 'diff', 'div', 'dup', 'efail', 'einit', 'end', 'eqv', 'eret', 'error', 'escan', 'esusp', 'field', 'goto', 'init', 'int', 'inter', 'invoke', 'keywd', 'lconcat', 'lexeq', 'lexge', 'lexgt', 'lexle', 'lexlt', 'lexne', 'limit', 'llist', 'lsusp', 'mark', 'mark0', 'minus', 'mod', 'mult', 'neg', 'neqv', 'nonnull', 'noop', 'null', 'number', 'numeq', 'numge', 'numgt', 'numle', 'numlt', 'numne', 'pfail', 'plus', 'pnull', 'pop', 'power', 'pret', 'proc', 'psusp', 'push1', 'pushn1', 'random', 'rasgn', 'rcv', 'rcvbk', 'real', 'refresh', 'rswap', 'sdup', 'sect', 'size', 'snd', 'sndbk', 'str', 'subsc', 'swap', 'tabmat', 'tally', 'toby', 'trace', 'unmark', 'value', 'var'), prefix=r'\b', suffix=r'\b'), Keyword.Declaration), (words(( 'any', 'case', 'endcase', 'endevery', 'endif', 'endifelse', 'endrepeat', 'endsuspend', 'enduntil', 'endwhile', 'every', 'if', 'ifelse', 'repeat', 'suspend', 'until', 'while'), prefix=r'\b', suffix=r'\b'), Name.Constant), (r'\d+(\s*|\.$|$)', Number.Integer), (r'[+-]?\d*\.\d+(E[-+]?\d+)?', Number.Float), (r'[+-]?\d+\.\d*(E[-+]?\d+)?', Number.Float), (r"(<>|=>|[()|:;,.'`]|[{}]|[%^]|[&?])", Punctuation), (r'\s+\b', Text), (r'[\w-]+', Text), ], } def analyse_text(text): """endsuspend and endrepeat are unique to this language, and \\self, /self doesn't seem to get used anywhere else either.""" result = 0 if 'endsuspend' in text: result += 0.1 if 'endrepeat' in text: result += 0.1 
        if ':=' in text:
            result += 0.01
        if 'procedure' in text and 'end' in text:
            result += 0.01
        # This seems quite unique to unicon -- doesn't appear in any other
        # example source we have (A quick search reveals that \SELF appears in
        # Perl/Raku code)
        if r'\self' in text and r'/self' in text:
            result += 0.5
        return result
pygments-2.11.2/pygments/lexers/html.py0000644000175000017500000004653514165547207020031 0ustar carstencarsten"""
    pygments.lexers.html
    ~~~~~~~~~~~~~~~~~~~~

    Lexers for HTML, XML and related markup.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, bygroups, \
    default, using
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Punctuation
from pygments.util import looks_like_xml, html_doctype_matches

from pygments.lexers.javascript import JavascriptLexer
from pygments.lexers.jvm import ScalaLexer
from pygments.lexers.css import CssLexer, _indentation, _starts_block
from pygments.lexers.ruby import RubyLexer

__all__ = ['HtmlLexer', 'DtdLexer', 'XmlLexer', 'XsltLexer', 'HamlLexer',
           'ScamlLexer', 'PugLexer']


class HtmlLexer(RegexLexer):
    """
    For HTML 4 and XHTML 1 markup. Nested JavaScript and CSS is highlighted
    by the appropriate lexer.
    """

    name = 'HTML'
    aliases = ['html']
    filenames = ['*.html', '*.htm', '*.xhtml', '*.xslt']
    mimetypes = ['text/html', 'application/xhtml+xml']

    flags = re.IGNORECASE | re.DOTALL
    tokens = {
        'root': [
            ('[^<&]+', Text),
            (r'&\S*?;', Name.Entity),
            (r'\<\!\[CDATA\[.*?\]\]\>', Comment.Preproc),
            (r'<!--.*?-->', Comment.Multiline),
            (r'<\?.*?\?>', Comment.Preproc),
            (r'<![^>]*>', Comment.Preproc),
            (r'(<)(\s*)(script)(\s*)',
             bygroups(Punctuation, Text, Name.Tag, Text),
             ('script-content', 'tag')),
            (r'(<)(\s*)(style)(\s*)',
             bygroups(Punctuation, Text, Name.Tag, Text),
             ('style-content', 'tag')),
            # note: this allows tag names not used in HTML like <x:with-dash>,
            # this is to support yet-unknown template engines and the like
            (r'(<)(\s*)([\w:.-]+)',
             bygroups(Punctuation, Text, Name.Tag), 'tag'),
            (r'(<)(\s*)(/)(\s*)([\w:.-]+)(\s*)(>)',
             bygroups(Punctuation, Text, Punctuation, Text, Name.Tag, Text,
                      Punctuation)),
        ],
        'tag': [
            (r'\s+', Text),
            (r'([\w:-]+\s*)(=)(\s*)', bygroups(Name.Attribute, Operator, Text),
             'attr'),
            (r'[\w:-]+', Name.Attribute),
            (r'(/?)(\s*)(>)', bygroups(Punctuation, Text, Punctuation), '#pop'),
        ],
        'script-content': [
            (r'(<)(\s*)(/)(\s*)(script)(\s*)(>)',
             bygroups(Punctuation, Text, Punctuation, Text, Name.Tag, Text,
                      Punctuation), '#pop'),
            (r'.+?(?=<\s*/\s*script\s*>)', using(JavascriptLexer)),
            # fallback cases for when there is no closing script tag
            # first look for newline and then go back into root state
            # if that fails just read the rest of the file
            # this is similar to the error handling logic in lexer.py
            (r'.+?\n', using(JavascriptLexer), '#pop'),
            (r'.+', using(JavascriptLexer), '#pop'),
        ],
        'style-content': [
            (r'(<)(\s*)(/)(\s*)(style)(\s*)(>)',
             bygroups(Punctuation, Text, Punctuation, Text, Name.Tag, Text,
                      Punctuation), '#pop'),
            (r'.+?(?=<\s*/\s*style\s*>)', using(CssLexer)),
            # fallback cases for when there is no closing style tag
            # first look for newline and then go back into root state
            # if that fails just read the rest of the file
            # this is similar to the error handling logic in lexer.py
            (r'.+?\n', using(CssLexer), '#pop'),
            (r'.+', using(CssLexer), '#pop'),
        ],
        'attr': [
            ('".*?"', String, '#pop'),
            ("'.*?'", String, '#pop'),
            (r'[^\s>]+', String, '#pop'),
        ],
    }

    def analyse_text(text):
        if html_doctype_matches(text):
            return 0.5


class DtdLexer(RegexLexer):
    """
    A lexer for DTDs (Document Type Definitions).

    .. versionadded:: 1.5
    """

    flags = re.MULTILINE | re.DOTALL

    name = 'DTD'
    aliases = ['dtd']
    filenames = ['*.dtd']
    mimetypes = ['application/xml-dtd']

    tokens = {
        'root': [
            include('common'),

            (r'(<!ELEMENT)(\s+)(\S+)',
             bygroups(Keyword, Text, Name.Tag), 'element'),
            (r'(<!ATTLIST)(\s+)(\S+)',
             bygroups(Keyword, Text, Name.Tag), 'attlist'),
            (r'(<!ENTITY)(\s+)(\S+)',
             bygroups(Keyword, Text, Name.Entity), 'entity'),
            (r'(<!NOTATION)(\s+)(\S+)',
             bygroups(Keyword, Text, Name.Tag), 'notation'),
            (r'(<!\[)([^\[\s]+)(\s*)(\[)',  # conditional sections
             bygroups(Keyword, Name.Entity, Text, Keyword)),

            (r'(<!DOCTYPE)(\s+)([^>\s]+)',
             bygroups(Keyword, Text, Name.Tag)),
            (r'PUBLIC|SYSTEM', Keyword.Constant),
            (r'[\[\]>]', Keyword),
        ],
        'common': [
            (r'\s+', Text),
            (r'(%|&)[^;]*;', Name.Entity),
            ('<!--', Comment, 'comment'),
            (r'[(|)*,?+]', Operator),
            (r'"[^"]*"', String.Double),
            (r'\'[^\']*\'', String.Single),
        ],
        'comment': [
            ('[^-]+', Comment),
            ('-->', Comment, '#pop'),
            ('-', Comment),
        ],
        'element': [
            include('common'),
            (r'EMPTY|ANY|#PCDATA', Keyword.Constant),
            (r'[^>\s|()?+*,]+', Name.Tag),
            (r'>', Keyword, '#pop'),
        ],
        'attlist': [
            include('common'),
            (r'CDATA|IDREFS|IDREF|ID|NMTOKENS|NMTOKEN|ENTITIES|ENTITY|NOTATION',
             Keyword.Constant),
            (r'#REQUIRED|#IMPLIED|#FIXED', Keyword.Constant),
            (r'xml:space|xml:lang', Keyword.Reserved),
            (r'[^>\s|()?+*,]+', Name.Attribute),
            (r'>', Keyword, '#pop'),
        ],
        'entity': [
            include('common'),
            (r'SYSTEM|PUBLIC|NDATA', Keyword.Constant),
            (r'[^>\s|()?+*,]+', Name.Entity),
            (r'>', Keyword, '#pop'),
        ],
        'notation': [
            include('common'),
            (r'SYSTEM|PUBLIC', Keyword.Constant),
            (r'[^>\s|()?+*,]+', Name.Attribute),
            (r'>', Keyword, '#pop'),
        ],
    }

    def analyse_text(text):
        if not looks_like_xml(text) and \
                ('<!ELEMENT' in text or '<!ATTLIST' in text or '<!ENTITY' in text):
            return 0.8


class XmlLexer(RegexLexer):
    """
    Generic lexer for XML (eXtensible Markup Language).
    """

    flags = re.MULTILINE | re.DOTALL | re.UNICODE

    name = 'XML'
    aliases = ['xml']
    filenames = ['*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd',
                 '*.wsdl', '*.wsf']
    mimetypes = ['text/xml', 'application/xml', 'image/svg+xml',
                 'application/rss+xml', 'application/atom+xml']

    tokens = {
        'root': [
            ('[^<&]+', Text),
            (r'&\S*?;', Name.Entity),
            (r'\<\!\[CDATA\[.*?\]\]\>', Comment.Preproc),
            (r'<!--.*?-->', Comment.Multiline),
            (r'<\?.*?\?>', Comment.Preproc),
            (r'<![^>]*>', Comment.Preproc),
            (r'<\s*[\w:.-]+', Name.Tag, 'tag'),
            (r'<\s*/\s*[\w:.-]+\s*>', Name.Tag),
        ],
        'tag': [
            (r'\s+', Text),
            (r'[\w.:-]+\s*=', Name.Attribute, 'attr'),
            (r'/?\s*>', Name.Tag, '#pop'),
        ],
        'attr': [
            (r'\s+', Text),
            ('".*?"', String, '#pop'),
            ("'.*?'", String, '#pop'),
            (r'[^\s>]+', String, '#pop'),
        ],
    }

    def analyse_text(text):
        if looks_like_xml(text):
            return 0.45  # less than HTML


class XsltLexer(XmlLexer):
    """
    A lexer for XSLT.

    .. versionadded:: 0.10
    """

    name = 'XSLT'
    aliases = ['xslt']
    filenames = ['*.xsl', '*.xslt', '*.xpl']  # xpl is XProc
    mimetypes = ['application/xsl+xml', 'application/xslt+xml']

    EXTRA_KEYWORDS = {
        'apply-imports', 'apply-templates', 'attribute',
        'attribute-set', 'call-template', 'choose', 'comment',
        'copy', 'copy-of', 'decimal-format', 'element', 'fallback',
        'for-each', 'if', 'import', 'include', 'key', 'message',
        'namespace-alias', 'number', 'otherwise', 'output', 'param',
        'preserve-space', 'processing-instruction', 'sort',
        'strip-space', 'stylesheet', 'template', 'text', 'transform',
        'value-of', 'variable', 'when', 'with-param'
    }

    def get_tokens_unprocessed(self, text):
        for index, token, value in XmlLexer.get_tokens_unprocessed(self, text):
            m = re.match('</?xsl:([^>]*)/?>?', value)

            if token is Name.Tag and m and m.group(1) in self.EXTRA_KEYWORDS:
                yield index, Keyword, value
            else:
                yield index, token, value

    def analyse_text(text):
        if looks_like_xml(text) and '<xsl' in text:
            return 0.8


class HamlLexer(ExtendedRegexLexer):
    """
    For Haml markup.

    .. versionadded:: 1.3
    """

    name = 'Haml'
    aliases = ['haml']
    filenames = ['*.haml']
    mimetypes = ['text/x-haml']

    flags = re.IGNORECASE
    # Haml can include " |\n" anywhere,
    # which is ignored and used to wrap long lines.
    # To accommodate this, use this custom faux dot instead.
    _dot = r'(?: \|\n(?=.* \|)|.)'

    # In certain places, a comma at the end of the line
    # allows line wrapping as well.
    _comma_dot = r'(?:,\s*\n|' + _dot + ')'
    tokens = {
        'root': [
            (r'[ \t]*\n', Text),
            (r'[ \t]*', _indentation),
        ],
        'css': [
            (r'\.[\w:-]+', Name.Class, 'tag'),
            (r'\#[\w:-]+', Name.Function, 'tag'),
        ],
        'eval-or-plain': [
            (r'[&!]?==', Punctuation, 'plain'),
            (r'([&!]?[=~])(' + _comma_dot + r'*\n)',
             bygroups(Punctuation, using(RubyLexer)),
             'root'),
            default('plain'),
        ],
        'content': [
            include('css'),
            (r'%[\w:-]+', Name.Tag, 'tag'),
            (r'!!!' + _dot + r'*\n', Name.Namespace, '#pop'),
            (r'(/)(\[' + _dot + r'*?\])(' + _dot + r'*\n)',
             bygroups(Comment, Comment.Special, Comment),
             '#pop'),
            (r'/' + _dot + r'*\n', _starts_block(Comment, 'html-comment-block'),
             '#pop'),
            (r'-#' + _dot + r'*\n', _starts_block(Comment.Preproc,
                                                  'haml-comment-block'), '#pop'),
            (r'(-)(' + _comma_dot + r'*\n)',
             bygroups(Punctuation, using(RubyLexer)),
             '#pop'),
            (r':' + _dot + r'*\n', _starts_block(Name.Decorator, 'filter-block'),
             '#pop'),
            include('eval-or-plain'),
        ],
        'tag': [
            include('css'),
            (r'\{(,\n|' + _comma_dot + r')*?\}', using(RubyLexer)),
            (r'\[' + _dot + r'*?\]', using(RubyLexer)),
            (r'\(', Text, 'html-attributes'),
            (r'/[ \t]*\n', Punctuation, '#pop:2'),
            (r'[<>]{1,2}(?=[ \t=])', Punctuation),
            include('eval-or-plain'),
        ],
        'plain': [
            (r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Text),
            (r'(#\{)(' + _dot + r'*?)(\})',
             bygroups(String.Interpol, using(RubyLexer), String.Interpol)),
            (r'\n', Text, 'root'),
        ],
        'html-attributes': [
            (r'\s+', Text),
            (r'[\w:-]+[ \t]*=', Name.Attribute, 'html-attribute-value'),
            (r'[\w:-]+', Name.Attribute),
            (r'\)', Text, '#pop'),
        ],
        'html-attribute-value': [
            (r'[ \t]+', Text),
            (r'\w+', Name.Variable, '#pop'),
            (r'@\w+', Name.Variable.Instance, '#pop'),
            (r'\$\w+', Name.Variable.Global, '#pop'),
            (r"'(\\\\|\\[^\\]|[^'\\\n])*'", String, '#pop'),
            (r'"(\\\\|\\[^\\]|[^"\\\n])*"', String, '#pop'),
        ],
        'html-comment-block': [
            (_dot + '+', Comment),
            (r'\n', Text, 'root'),
        ],
        'haml-comment-block': [
            (_dot + '+', Comment.Preproc),
            (r'\n', Text, 'root'),
        ],
        'filter-block': [
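            # The rules below (continued from the original source) lex the
            # body of a Haml ":filter" block: ordinary text is emitted as
            # Name.Decorator, while "#{...}" interpolations are re-lexed as
            # Ruby through using(RubyLexer).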
(r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Name.Decorator), (r'(#\{)(' + _dot + r'*?)(\})', bygroups(String.Interpol, using(RubyLexer), String.Interpol)), (r'\n', Text, 'root'), ], } class ScamlLexer(ExtendedRegexLexer): """ For `Scaml markup `_. Scaml is Haml for Scala. .. versionadded:: 1.4 """ name = 'Scaml' aliases = ['scaml'] filenames = ['*.scaml'] mimetypes = ['text/x-scaml'] flags = re.IGNORECASE # Scaml does not yet support the " |\n" notation to # wrap long lines. Once it does, use the custom faux # dot instead. # _dot = r'(?: \|\n(?=.* \|)|.)' _dot = r'.' tokens = { 'root': [ (r'[ \t]*\n', Text), (r'[ \t]*', _indentation), ], 'css': [ (r'\.[\w:-]+', Name.Class, 'tag'), (r'\#[\w:-]+', Name.Function, 'tag'), ], 'eval-or-plain': [ (r'[&!]?==', Punctuation, 'plain'), (r'([&!]?[=~])(' + _dot + r'*\n)', bygroups(Punctuation, using(ScalaLexer)), 'root'), default('plain'), ], 'content': [ include('css'), (r'%[\w:-]+', Name.Tag, 'tag'), (r'!!!' + _dot + r'*\n', Name.Namespace, '#pop'), (r'(/)(\[' + _dot + r'*?\])(' + _dot + r'*\n)', bygroups(Comment, Comment.Special, Comment), '#pop'), (r'/' + _dot + r'*\n', _starts_block(Comment, 'html-comment-block'), '#pop'), (r'-#' + _dot + r'*\n', _starts_block(Comment.Preproc, 'scaml-comment-block'), '#pop'), (r'(-@\s*)(import)?(' + _dot + r'*\n)', bygroups(Punctuation, Keyword, using(ScalaLexer)), '#pop'), (r'(-)(' + _dot + r'*\n)', bygroups(Punctuation, using(ScalaLexer)), '#pop'), (r':' + _dot + r'*\n', _starts_block(Name.Decorator, 'filter-block'), '#pop'), include('eval-or-plain'), ], 'tag': [ include('css'), (r'\{(,\n|' + _dot + r')*?\}', using(ScalaLexer)), (r'\[' + _dot + r'*?\]', using(ScalaLexer)), (r'\(', Text, 'html-attributes'), (r'/[ \t]*\n', Punctuation, '#pop:2'), (r'[<>]{1,2}(?=[ \t=])', Punctuation), include('eval-or-plain'), ], 'plain': [ (r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Text), (r'(#\{)(' + _dot + r'*?)(\})', bygroups(String.Interpol, using(ScalaLexer), String.Interpol)), (r'\n', Text, 'root'), ], 'html-attributes': [ (r'\s+', Text), (r'[\w:-]+[ \t]*=', Name.Attribute, 'html-attribute-value'), (r'[\w:-]+', Name.Attribute), (r'\)', Text, '#pop'), ], 'html-attribute-value': [ (r'[ \t]+', Text), (r'\w+', Name.Variable, '#pop'), (r'@\w+', Name.Variable.Instance, '#pop'), (r'\$\w+', Name.Variable.Global, '#pop'), (r"'(\\\\|\\[^\\]|[^'\\\n])*'", String, '#pop'), (r'"(\\\\|\\[^\\]|[^"\\\n])*"', String, '#pop'), ], 'html-comment-block': [ (_dot + '+', Comment), (r'\n', Text, 'root'), ], 'scaml-comment-block': [ (_dot + '+', Comment.Preproc), (r'\n', Text, 'root'), ], 'filter-block': [ (r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Name.Decorator), (r'(#\{)(' + _dot + r'*?)(\})', bygroups(String.Interpol, using(ScalaLexer), String.Interpol)), (r'\n', Text, 'root'), ], } class PugLexer(ExtendedRegexLexer): """ For Pug markup. Pug is a variant of Scaml, see: http://scalate.fusesource.org/documentation/scaml-reference.html .. versionadded:: 1.4 """ name = 'Pug' aliases = ['pug', 'jade'] filenames = ['*.pug', '*.jade'] mimetypes = ['text/x-pug', 'text/x-jade'] flags = re.IGNORECASE _dot = r'.' tokens = { 'root': [ (r'[ \t]*\n', Text), (r'[ \t]*', _indentation), ], 'css': [ (r'\.[\w:-]+', Name.Class, 'tag'), (r'\#[\w:-]+', Name.Function, 'tag'), ], 'eval-or-plain': [ (r'[&!]?==', Punctuation, 'plain'), (r'([&!]?[=~])(' + _dot + r'*\n)', bygroups(Punctuation, using(ScalaLexer)), 'root'), default('plain'), ], 'content': [ include('css'), (r'!!!' 
+ _dot + r'*\n', Name.Namespace, '#pop'), (r'(/)(\[' + _dot + r'*?\])(' + _dot + r'*\n)', bygroups(Comment, Comment.Special, Comment), '#pop'), (r'/' + _dot + r'*\n', _starts_block(Comment, 'html-comment-block'), '#pop'), (r'-#' + _dot + r'*\n', _starts_block(Comment.Preproc, 'scaml-comment-block'), '#pop'), (r'(-@\s*)(import)?(' + _dot + r'*\n)', bygroups(Punctuation, Keyword, using(ScalaLexer)), '#pop'), (r'(-)(' + _dot + r'*\n)', bygroups(Punctuation, using(ScalaLexer)), '#pop'), (r':' + _dot + r'*\n', _starts_block(Name.Decorator, 'filter-block'), '#pop'), (r'[\w:-]+', Name.Tag, 'tag'), (r'\|', Text, 'eval-or-plain'), ], 'tag': [ include('css'), (r'\{(,\n|' + _dot + r')*?\}', using(ScalaLexer)), (r'\[' + _dot + r'*?\]', using(ScalaLexer)), (r'\(', Text, 'html-attributes'), (r'/[ \t]*\n', Punctuation, '#pop:2'), (r'[<>]{1,2}(?=[ \t=])', Punctuation), include('eval-or-plain'), ], 'plain': [ (r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Text), (r'(#\{)(' + _dot + r'*?)(\})', bygroups(String.Interpol, using(ScalaLexer), String.Interpol)), (r'\n', Text, 'root'), ], 'html-attributes': [ (r'\s+', Text), (r'[\w:-]+[ \t]*=', Name.Attribute, 'html-attribute-value'), (r'[\w:-]+', Name.Attribute), (r'\)', Text, '#pop'), ], 'html-attribute-value': [ (r'[ \t]+', Text), (r'\w+', Name.Variable, '#pop'), (r'@\w+', Name.Variable.Instance, '#pop'), (r'\$\w+', Name.Variable.Global, '#pop'), (r"'(\\\\|\\[^\\]|[^'\\\n])*'", String, '#pop'), (r'"(\\\\|\\[^\\]|[^"\\\n])*"', String, '#pop'), ], 'html-comment-block': [ (_dot + '+', Comment), (r'\n', Text, 'root'), ], 'scaml-comment-block': [ (_dot + '+', Comment.Preproc), (r'\n', Text, 'root'), ], 'filter-block': [ (r'([^#\n]|#[^{\n]|(\\\\)*\\#\{)+', Name.Decorator), (r'(#\{)(' + _dot + r'*?)(\})', bygroups(String.Interpol, using(ScalaLexer), String.Interpol)), (r'\n', Text, 'root'), ], } JadeLexer = PugLexer # compat pygments-2.11.2/pygments/lexers/jslt.py0000644000175000017500000000715614165547207020045 0ustar carstencarsten""" pygments.lexers.jslt ~~~~~~~~~~~~~~~~~~~~ Lexers for the JSLT language :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, combined, words from pygments.token import Comment, Keyword, Name, Number, Operator, \ Punctuation, String, Whitespace __all__ = ['JSLTLexer'] _WORD_END = r'(?=[^0-9A-Z_a-z-])' class JSLTLexer(RegexLexer): """ For `JSLT `_ source. .. 
versionadded:: 2.10
    """

    name = 'JSLT'
    filenames = ['*.jslt']
    aliases = ['jslt']
    mimetypes = ['text/x-jslt']

    tokens = {
        'root': [
            (r'[\t\n\f\r ]+', Whitespace),
            (r'//.*(\n|\Z)', Comment.Single),
            # Floats must be tried before plain integers, otherwise the
            # integer rule would consume the digits in front of the decimal
            # point. A float needs a fraction part, an exponent part, or both.
            (r'-?(0|[1-9][0-9]*)(\.[0-9]+)([Ee][+-]?[0-9]+)?', Number.Float),
            (r'-?(0|[1-9][0-9]*)([Ee][+-]?[0-9]+)', Number.Float),
            (r'-?(0|[1-9][0-9]*)', Number.Integer),
            (r'"([^"\\]|\\.)*"', String.Double),
            (r'[(),:\[\]{}]', Punctuation),
            (r'(!=|[<=>]=?)', Operator),
            (r'[*+/|-]', Operator),
            (r'\.', Operator),
            (words(('import',), suffix=_WORD_END),
             Keyword.Namespace, combined('import-path', 'whitespace')),
            (words(('as',), suffix=_WORD_END),
             Keyword.Namespace, combined('import-alias', 'whitespace')),
            (words(('let',), suffix=_WORD_END),
             Keyword.Declaration, combined('constant', 'whitespace')),
            (words(('def',), suffix=_WORD_END),
             Keyword.Declaration, combined('function', 'whitespace')),
            (words(('false', 'null', 'true'), suffix=_WORD_END),
             Keyword.Constant),
            (words(('else', 'for', 'if'), suffix=_WORD_END), Keyword),
            (words(('and', 'or'), suffix=_WORD_END), Operator.Word),
            (words((
                'all', 'any', 'array', 'boolean', 'capture', 'ceiling',
                'contains', 'ends-with', 'error', 'flatten', 'floor',
                'format-time', 'from-json', 'get-key', 'hash-int', 'index-of',
                'is-array', 'is-boolean', 'is-decimal', 'is-integer',
                'is-number', 'is-object', 'is-string', 'join', 'lowercase',
                'max', 'min', 'mod', 'not', 'now', 'number', 'parse-time',
                'parse-url', 'random', 'replace', 'round', 'sha256-hex',
                'size', 'split', 'starts-with', 'string', 'sum', 'test',
                'to-json', 'trim', 'uppercase', 'zip', 'zip-with-index',
                'fallback'), suffix=_WORD_END),
             Name.Builtin),
            (r'[A-Z_a-z][0-9A-Z_a-z-]*:[A-Z_a-z][0-9A-Z_a-z-]*',
             Name.Function),
            (r'[A-Z_a-z][0-9A-Z_a-z-]*', Name),
            (r'\$[A-Z_a-z][0-9A-Z_a-z-]*', Name.Variable),
        ],
        'constant': [
            (r'[A-Z_a-z][0-9A-Z_a-z-]*', Name.Variable, 'root'),
        ],
        'function': [
            (r'[A-Z_a-z][0-9A-Z_a-z-]*', Name.Function,
             combined('function-parameter-list', 'whitespace')),
        ],
        'function-parameter-list': [
            (r'\(', Punctuation,
             combined('function-parameters', 'whitespace')),
        ],
        'function-parameters': [
            (r',', Punctuation),
            (r'\)', Punctuation, 'root'),
            (r'[A-Z_a-z][0-9A-Z_a-z-]*', Name.Variable),
        ],
        'import-path': [
            (r'"([^"]|\\.)*"', String.Symbol, 'root'),
        ],
        'import-alias': [
            (r'[A-Z_a-z][0-9A-Z_a-z-]*', Name.Namespace, 'root'),
        ],
        'string': [
            (r'"', String.Double, '#pop'),
            (r'\\.', String.Escape),
        ],
        'whitespace': [
            (r'[\t\n\f\r ]+', Whitespace),
            (r'//.*(\n|\Z)', Comment.Single),
        ]
    }
pygments-2.11.2/pygments/lexers/business.py0000644000175000017500000006663314165547207020721 0ustar carstencarsten"""
    pygments.lexers.business
    ~~~~~~~~~~~~~~~~~~~~~~~~

    Lexers for "business-oriented" languages.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pygments.lexer import RegexLexer, include, words, bygroups
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Number, Punctuation, Error, Whitespace
from pygments.lexers._openedge_builtins import OPENEDGEKEYWORDS

__all__ = ['CobolLexer', 'CobolFreeformatLexer', 'ABAPLexer', 'OpenEdgeLexer',
           'GoodDataCLLexer', 'MaqlLexer']


class CobolLexer(RegexLexer):
    """
    Lexer for OpenCOBOL code.
    .. versionadded:: 1.6
    """
    name = 'COBOL'
    aliases = ['cobol']
    filenames = ['*.cob', '*.COB', '*.cpy', '*.CPY']
    mimetypes = ['text/x-cobol']
    flags = re.IGNORECASE | re.MULTILINE

    # Data Types: by PICTURE and USAGE
    # Operators: **, *, +, -, /, <, >, <=, >=, =, <>
    # Logical (?): NOT, AND, OR
    # Reserved words:
    #   http://opencobol.add1tocobol.com/#reserved-words
    # Intrinsics:
    #   http://opencobol.add1tocobol.com/#does-opencobol-implement-any-intrinsic-functions

    tokens = {
        'root': [
            include('comment'),
            include('strings'),
            include('core'),
            include('nums'),
            (r'[a-z0-9]([\w\-]*[a-z0-9]+)?', Name.Variable),
            # (r'[\s]+', Text),
            (r'[ \t]+', Whitespace),
        ],
        'comment': [
            (r'(^.{6}[*/].*\n|^.{6}|\*>.*\n)', Comment),
        ],
        'core': [
            # Figurative constants
            (r'(^|(?<=[^\w\-]))(ALL\s+)?'
             r'((ZEROES)|(HIGH-VALUE|LOW-VALUE|QUOTE|SPACE|ZERO)(S)?)'
             r'\s*($|(?=[^\w\-]))', Name.Constant),

            # Reserved words STATEMENTS and other bolds
            (words((
                'ACCEPT', 'ADD', 'ALLOCATE', 'CALL', 'CANCEL', 'CLOSE',
                'COMPUTE', 'CONFIGURATION', 'CONTINUE', 'DATA', 'DELETE',
                'DISPLAY', 'DIVIDE', 'DIVISION', 'ELSE', 'END', 'END-ACCEPT',
                'END-ADD', 'END-CALL', 'END-COMPUTE', 'END-DELETE',
                'END-DISPLAY', 'END-DIVIDE', 'END-EVALUATE', 'END-IF',
                'END-MULTIPLY', 'END-OF-PAGE', 'END-PERFORM', 'END-READ',
                'END-RETURN', 'END-REWRITE', 'END-SEARCH', 'END-START',
                'END-STRING', 'END-SUBTRACT', 'END-UNSTRING', 'END-WRITE',
                'ENVIRONMENT', 'EVALUATE', 'EXIT', 'FD', 'FILE',
                'FILE-CONTROL', 'FOREVER', 'FREE', 'GENERATE', 'GO', 'GOBACK',
                'IDENTIFICATION', 'IF', 'INITIALIZE', 'INITIATE',
                'INPUT-OUTPUT', 'INSPECT', 'INVOKE', 'I-O-CONTROL', 'LINKAGE',
                'LOCAL-STORAGE', 'MERGE', 'MOVE', 'MULTIPLY', 'OPEN',
                'PERFORM', 'PROCEDURE', 'PROGRAM-ID', 'RAISE', 'READ',
                'RELEASE', 'RESUME', 'RETURN', 'REWRITE', 'SCREEN', 'SD',
                'SEARCH', 'SECTION', 'SET', 'SORT', 'START', 'STOP', 'STRING',
                'SUBTRACT', 'SUPPRESS', 'TERMINATE', 'THEN', 'UNLOCK',
                'UNSTRING', 'USE', 'VALIDATE', 'WORKING-STORAGE', 'WRITE'),
                prefix=r'(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))'),
             Keyword.Reserved),

            # Reserved words
            # (Note: the original had "'ALTERNATE' 'ANY'" with a missing
            # comma, which Python concatenates into the bogus keyword
            # "ALTERNATEANY"; the comma is restored below.)
            (words((
                'ACCESS', 'ADDRESS', 'ADVANCING', 'AFTER', 'ALL', 'ALPHABET',
                'ALPHABETIC', 'ALPHABETIC-LOWER', 'ALPHABETIC-UPPER',
                'ALPHANUMERIC', 'ALPHANUMERIC-EDITED', 'ALSO', 'ALTER',
                'ALTERNATE', 'ANY', 'ARE', 'AREA', 'AREAS', 'ARGUMENT-NUMBER',
                'ARGUMENT-VALUE', 'AS', 'ASCENDING', 'ASSIGN', 'AT', 'AUTO',
                'AUTO-SKIP', 'AUTOMATIC', 'AUTOTERMINATE', 'BACKGROUND-COLOR',
                'BASED', 'BEEP', 'BEFORE', 'BELL', 'BLANK', 'BLINK', 'BLOCK',
                'BOTTOM', 'BY', 'BYTE-LENGTH', 'CHAINING', 'CHARACTER',
                'CHARACTERS', 'CLASS', 'CODE', 'CODE-SET', 'COL', 'COLLATING',
                'COLS', 'COLUMN', 'COLUMNS', 'COMMA', 'COMMAND-LINE',
                'COMMIT', 'COMMON', 'CONSTANT', 'CONTAINS', 'CONTENT',
                'CONTROL', 'CONTROLS', 'CONVERTING', 'COPY', 'CORR',
                'CORRESPONDING', 'COUNT', 'CRT', 'CURRENCY', 'CURSOR',
                'CYCLE', 'DATE', 'DAY', 'DAY-OF-WEEK', 'DE', 'DEBUGGING',
                'DECIMAL-POINT', 'DECLARATIVES', 'DEFAULT', 'DELIMITED',
                'DELIMITER', 'DEPENDING', 'DESCENDING', 'DETAIL', 'DISK',
                'DOWN', 'DUPLICATES', 'DYNAMIC', 'EBCDIC', 'ENTRY',
                'ENVIRONMENT-NAME', 'ENVIRONMENT-VALUE', 'EOL', 'EOP', 'EOS',
                'ERASE', 'ERROR', 'ESCAPE', 'EXCEPTION', 'EXCLUSIVE',
                'EXTEND', 'EXTERNAL', 'FILE-ID', 'FILLER', 'FINAL', 'FIRST',
                'FIXED', 'FLOAT-LONG', 'FLOAT-SHORT', 'FOOTING', 'FOR',
                'FOREGROUND-COLOR', 'FORMAT', 'FROM', 'FULL', 'FUNCTION',
                'FUNCTION-ID', 'GIVING', 'GLOBAL', 'GROUP', 'HEADING',
                'HIGHLIGHT', 'I-O', 'ID', 'IGNORE', 'IGNORING', 'IN', 'INDEX',
                'INDEXED', 'INDICATE', 'INITIAL', 'INITIALIZED', 'INPUT',
                'INTO', 'INTRINSIC', 'INVALID', 'IS', 'JUST',
'JUSTIFIED', 'KEY', 'LABEL', 'LAST', 'LEADING', 'LEFT', 'LENGTH', 'LIMIT', 'LIMITS', 'LINAGE', 'LINAGE-COUNTER', 'LINE', 'LINES', 'LOCALE', 'LOCK', 'LOWLIGHT', 'MANUAL', 'MEMORY', 'MINUS', 'MODE', 'MULTIPLE', 'NATIONAL', 'NATIONAL-EDITED', 'NATIVE', 'NEGATIVE', 'NEXT', 'NO', 'NULL', 'NULLS', 'NUMBER', 'NUMBERS', 'NUMERIC', 'NUMERIC-EDITED', 'OBJECT-COMPUTER', 'OCCURS', 'OF', 'OFF', 'OMITTED', 'ON', 'ONLY', 'OPTIONAL', 'ORDER', 'ORGANIZATION', 'OTHER', 'OUTPUT', 'OVERFLOW', 'OVERLINE', 'PACKED-DECIMAL', 'PADDING', 'PAGE', 'PARAGRAPH', 'PLUS', 'POINTER', 'POSITION', 'POSITIVE', 'PRESENT', 'PREVIOUS', 'PRINTER', 'PRINTING', 'PROCEDURE-POINTER', 'PROCEDURES', 'PROCEED', 'PROGRAM', 'PROGRAM-POINTER', 'PROMPT', 'QUOTE', 'QUOTES', 'RANDOM', 'RD', 'RECORD', 'RECORDING', 'RECORDS', 'RECURSIVE', 'REDEFINES', 'REEL', 'REFERENCE', 'RELATIVE', 'REMAINDER', 'REMOVAL', 'RENAMES', 'REPLACING', 'REPORT', 'REPORTING', 'REPORTS', 'REPOSITORY', 'REQUIRED', 'RESERVE', 'RETURNING', 'REVERSE-VIDEO', 'REWIND', 'RIGHT', 'ROLLBACK', 'ROUNDED', 'RUN', 'SAME', 'SCROLL', 'SECURE', 'SEGMENT-LIMIT', 'SELECT', 'SENTENCE', 'SEPARATE', 'SEQUENCE', 'SEQUENTIAL', 'SHARING', 'SIGN', 'SIGNED', 'SIGNED-INT', 'SIGNED-LONG', 'SIGNED-SHORT', 'SIZE', 'SORT-MERGE', 'SOURCE', 'SOURCE-COMPUTER', 'SPECIAL-NAMES', 'STANDARD', 'STANDARD-1', 'STANDARD-2', 'STATUS', 'SUM', 'SYMBOLIC', 'SYNC', 'SYNCHRONIZED', 'TALLYING', 'TAPE', 'TEST', 'THROUGH', 'THRU', 'TIME', 'TIMES', 'TO', 'TOP', 'TRAILING', 'TRANSFORM', 'TYPE', 'UNDERLINE', 'UNIT', 'UNSIGNED', 'UNSIGNED-INT', 'UNSIGNED-LONG', 'UNSIGNED-SHORT', 'UNTIL', 'UP', 'UPDATE', 'UPON', 'USAGE', 'USING', 'VALUE', 'VALUES', 'VARYING', 'WAIT', 'WHEN', 'WITH', 'WORDS', 'YYYYDDD', 'YYYYMMDD'), prefix=r'(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))'), Keyword.Pseudo), # inactive reserved words (words(( 'ACTIVE-CLASS', 'ALIGNED', 'ANYCASE', 'ARITHMETIC', 'ATTRIBUTE', 'B-AND', 'B-NOT', 'B-OR', 'B-XOR', 'BIT', 'BOOLEAN', 'CD', 'CENTER', 'CF', 'CH', 'CHAIN', 'CLASS-ID', 'CLASSIFICATION', 'COMMUNICATION', 'CONDITION', 'DATA-POINTER', 'DESTINATION', 'DISABLE', 'EC', 'EGI', 'EMI', 'ENABLE', 'END-RECEIVE', 'ENTRY-CONVENTION', 'EO', 'ESI', 'EXCEPTION-OBJECT', 'EXPANDS', 'FACTORY', 'FLOAT-BINARY-16', 'FLOAT-BINARY-34', 'FLOAT-BINARY-7', 'FLOAT-DECIMAL-16', 'FLOAT-DECIMAL-34', 'FLOAT-EXTENDED', 'FORMAT', 'FUNCTION-POINTER', 'GET', 'GROUP-USAGE', 'IMPLEMENTS', 'INFINITY', 'INHERITS', 'INTERFACE', 'INTERFACE-ID', 'INVOKE', 'LC_ALL', 'LC_COLLATE', 'LC_CTYPE', 'LC_MESSAGES', 'LC_MONETARY', 'LC_NUMERIC', 'LC_TIME', 'LINE-COUNTER', 'MESSAGE', 'METHOD', 'METHOD-ID', 'NESTED', 'NONE', 'NORMAL', 'OBJECT', 'OBJECT-REFERENCE', 'OPTIONS', 'OVERRIDE', 'PAGE-COUNTER', 'PF', 'PH', 'PROPERTY', 'PROTOTYPE', 'PURGE', 'QUEUE', 'RAISE', 'RAISING', 'RECEIVE', 'RELATION', 'REPLACE', 'REPRESENTS-NOT-A-NUMBER', 'RESET', 'RESUME', 'RETRY', 'RF', 'RH', 'SECONDS', 'SEGMENT', 'SELF', 'SEND', 'SOURCES', 'STATEMENT', 'STEP', 'STRONG', 'SUB-QUEUE-1', 'SUB-QUEUE-2', 'SUB-QUEUE-3', 'SUPER', 'SYMBOL', 'SYSTEM-DEFAULT', 'TABLE', 'TERMINAL', 'TEXT', 'TYPEDEF', 'UCS-4', 'UNIVERSAL', 'USER-DEFAULT', 'UTF-16', 'UTF-8', 'VAL-STATUS', 'VALID', 'VALIDATE', 'VALIDATE-STATUS'), prefix=r'(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))'), Error), # Data Types (r'(^|(?<=[^\w\-]))' r'(PIC\s+.+?(?=(\s|\.\s))|PICTURE\s+.+?(?=(\s|\.\s))|' r'(COMPUTATIONAL)(-[1-5X])?|(COMP)(-[1-5X])?|' r'BINARY-C-LONG|' r'BINARY-CHAR|BINARY-DOUBLE|BINARY-LONG|BINARY-SHORT|' r'BINARY)\s*($|(?=[^\w\-]))', Keyword.Type), # Operators (r'(\*\*|\*|\+|-|/|<=|>=|<|>|==|/=|=)', 
Operator), # (r'(::)', Keyword.Declaration), (r'([(),;:&%.])', Punctuation), # Intrinsics (r'(^|(?<=[^\w\-]))(ABS|ACOS|ANNUITY|ASIN|ATAN|BYTE-LENGTH|' r'CHAR|COMBINED-DATETIME|CONCATENATE|COS|CURRENT-DATE|' r'DATE-OF-INTEGER|DATE-TO-YYYYMMDD|DAY-OF-INTEGER|DAY-TO-YYYYDDD|' r'EXCEPTION-(?:FILE|LOCATION|STATEMENT|STATUS)|EXP10|EXP|E|' r'FACTORIAL|FRACTION-PART|INTEGER-OF-(?:DATE|DAY|PART)|INTEGER|' r'LENGTH|LOCALE-(?:DATE|TIME(?:-FROM-SECONDS)?)|LOG(?:10)?|' r'LOWER-CASE|MAX|MEAN|MEDIAN|MIDRANGE|MIN|MOD|NUMVAL(?:-C)?|' r'ORD(?:-MAX|-MIN)?|PI|PRESENT-VALUE|RANDOM|RANGE|REM|REVERSE|' r'SECONDS-FROM-FORMATTED-TIME|SECONDS-PAST-MIDNIGHT|SIGN|SIN|SQRT|' r'STANDARD-DEVIATION|STORED-CHAR-LENGTH|SUBSTITUTE(?:-CASE)?|' r'SUM|TAN|TEST-DATE-YYYYMMDD|TEST-DAY-YYYYDDD|TRIM|' r'UPPER-CASE|VARIANCE|WHEN-COMPILED|YEAR-TO-YYYY)\s*' r'($|(?=[^\w\-]))', Name.Function), # Booleans (r'(^|(?<=[^\w\-]))(true|false)\s*($|(?=[^\w\-]))', Name.Builtin), # Comparing Operators (r'(^|(?<=[^\w\-]))(equal|equals|ne|lt|le|gt|ge|' r'greater|less|than|not|and|or)\s*($|(?=[^\w\-]))', Operator.Word), ], # \"[^\"\n]*\"|\'[^\'\n]*\' 'strings': [ # apparently strings can be delimited by EOL if they are continued # in the next line (r'"[^"\n]*("|\n)', String.Double), (r"'[^'\n]*('|\n)", String.Single), ], 'nums': [ (r'\d+(\s*|\.$|$)', Number.Integer), (r'[+-]?\d*\.\d+(E[-+]?\d+)?', Number.Float), (r'[+-]?\d+\.\d*(E[-+]?\d+)?', Number.Float), ], } class CobolFreeformatLexer(CobolLexer): """ Lexer for Free format OpenCOBOL code. .. versionadded:: 1.6 """ name = 'COBOLFree' aliases = ['cobolfree'] filenames = ['*.cbl', '*.CBL'] mimetypes = [] flags = re.IGNORECASE | re.MULTILINE tokens = { 'comment': [ (r'(\*>.*\n|^\w*\*.*$)', Comment), ], } class ABAPLexer(RegexLexer): """ Lexer for ABAP, SAP's integrated language. .. versionadded:: 1.1 """ name = 'ABAP' aliases = ['abap'] filenames = ['*.abap', '*.ABAP'] mimetypes = ['text/x-abap'] flags = re.IGNORECASE | re.MULTILINE tokens = { 'common': [ (r'\s+', Whitespace), (r'^\*.*$', Comment.Single), (r'\".*?\n', Comment.Single), (r'##\w+', Comment.Special), ], 'variable-names': [ (r'<\S+>', Name.Variable), (r'\w[\w~]*(?:(\[\])|->\*)?', Name.Variable), ], 'root': [ include('common'), # function calls (r'CALL\s+(?:BADI|CUSTOMER-FUNCTION|FUNCTION)', Keyword), (r'(CALL\s+(?:DIALOG|SCREEN|SUBSCREEN|SELECTION-SCREEN|' r'TRANSACTION|TRANSFORMATION))\b', Keyword), (r'(FORM|PERFORM)(\s+)(\w+)', bygroups(Keyword, Whitespace, Name.Function)), (r'(PERFORM)(\s+)(\()(\w+)(\))', bygroups(Keyword, Whitespace, Punctuation, Name.Variable, Punctuation)), (r'(MODULE)(\s+)(\S+)(\s+)(INPUT|OUTPUT)', bygroups(Keyword, Whitespace, Name.Function, Whitespace, Keyword)), # method implementation (r'(METHOD)(\s+)([\w~]+)', bygroups(Keyword, Whitespace, Name.Function)), # method calls (r'(\s+)([\w\-]+)([=\-]>)([\w\-~]+)', bygroups(Whitespace, Name.Variable, Operator, Name.Function)), # call methodnames returning style (r'(?<=(=|-)>)([\w\-~]+)(?=\()', Name.Function), # text elements (r'(TEXT)(-)(\d{3})', bygroups(Keyword, Punctuation, Number.Integer)), (r'(TEXT)(-)(\w{3})', bygroups(Keyword, Punctuation, Name.Variable)), # keywords with dashes in them. # these need to be first, because for instance the -ID part # of MESSAGE-ID wouldn't get highlighted if MESSAGE was # first in the list of keywords. 
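            # For instance, in the hypothetical ABAP statement
            #     REPORT zdemo MESSAGE-ID zx.
            # the rule below must consume MESSAGE-ID as a single token; if
            # the plain keyword MESSAGE were tried first, the trailing "-ID"
            # would be left over unhighlighted.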
            (r'(ADD-CORRESPONDING|AUTHORITY-CHECK|'
             r'CLASS-DATA|CLASS-EVENTS|CLASS-METHODS|CLASS-POOL|'
             r'DELETE-ADJACENT|DIVIDE-CORRESPONDING|'
             r'EDITOR-CALL|ENHANCEMENT-POINT|ENHANCEMENT-SECTION|EXIT-COMMAND|'
             r'FIELD-GROUPS|FIELD-SYMBOLS|FUNCTION-POOL|'
             r'INTERFACE-POOL|INVERTED-DATE|'
             r'LOAD-OF-PROGRAM|LOG-POINT|'
             r'MESSAGE-ID|MOVE-CORRESPONDING|MULTIPLY-CORRESPONDING|'
             r'NEW-LINE|NEW-PAGE|NEW-SECTION|NO-EXTENSION|'
             r'OUTPUT-LENGTH|PRINT-CONTROL|'
             r'SELECT-OPTIONS|START-OF-SELECTION|SUBTRACT-CORRESPONDING|'
             r'SYNTAX-CHECK|SYSTEM-EXCEPTIONS|'
             r'TYPE-POOL|TYPE-POOLS|NO-DISPLAY'
             r')\b', Keyword),

            # keyword combinations
            # The negative lookbehind keeps these combinations from matching
            # inside hyphenated names or right after the "->" object
            # operator (the lexer is case-insensitive, so e.g. "->create("
            # must not light up as the CREATE OBJECT statement).
            (r'(?<![-\>])(CREATE\s+(PUBLIC|PRIVATE|DATA|OBJECT)|'
             r'(PUBLIC|PRIVATE|PROTECTED)\s+SECTION|'
             r'(TYPE|LIKE)\s+((LINE\s+OF|REF\s+TO|'
             r'(SORTED|STANDARD|HASHED)\s+TABLE\s+OF))?|'
             r'FROM\s+(DATABASE|MEMORY)|CALL\s+METHOD|'
             r'(GROUP|ORDER) BY|HAVING|SEPARATED BY|'
             r'GET\s+(BADI|BIT|CURSOR|DATASET|LOCALE|PARAMETER|'
             r'PF-STATUS|(PROPERTY|REFERENCE)\s+OF|'
             r'RUN\s+TIME|TIME\s+(STAMP)?)?|'
             r'SET\s+(BIT|BLANK\s+LINES|COUNTRY|CURSOR|DATASET|EXTENDED\s+CHECK|'
             r'HANDLER|HOLD\s+DATA|LANGUAGE|LEFT\s+SCROLL-BOUNDARY|'
             r'LOCALE|MARGIN|PARAMETER|PF-STATUS|PROPERTY\s+OF|'
             r'RUN\s+TIME\s+(ANALYZER|CLOCK\s+RESOLUTION)|SCREEN|'
             r'TITLEBAR|UPDATE\s+TASK\s+LOCAL|USER-COMMAND)|'
             r'CONVERT\s+((INVERTED-)?DATE|TIME|TIME\s+STAMP|TEXT)|'
             r'(CLOSE|OPEN)\s+(DATASET|CURSOR)|'
             r'(TO|FROM)\s+(DATA BUFFER|INTERNAL TABLE|MEMORY ID|'
             r'DATABASE|SHARED\s+(MEMORY|BUFFER))|'
             r'DESCRIBE\s+(DISTANCE\s+BETWEEN|FIELD|LIST|TABLE)|'
             r'FREE\s(MEMORY|OBJECT)?|'
             r'PROCESS\s+(BEFORE\s+OUTPUT|AFTER\s+INPUT|'
             r'ON\s+(VALUE-REQUEST|HELP-REQUEST))|'
             r'AT\s+(LINE-SELECTION|USER-COMMAND|END\s+OF|NEW)|'
             r'AT\s+SELECTION-SCREEN(\s+(ON(\s+(BLOCK|(HELP|VALUE)-REQUEST\s+FOR|'
             r'END\s+OF|RADIOBUTTON\s+GROUP))?|OUTPUT))?|'
             r'SELECTION-SCREEN:?\s+((BEGIN|END)\s+OF\s+((TABBED\s+)?BLOCK|LINE|'
             r'SCREEN)|COMMENT|FUNCTION\s+KEY|'
             r'INCLUDE\s+BLOCKS|POSITION|PUSHBUTTON|'
             r'SKIP|ULINE)|'
             r'LEAVE\s+(LIST-PROCESSING|PROGRAM|SCREEN|'
             r'TO LIST-PROCESSING|TO TRANSACTION)'
             r'(ENDING|STARTING)\s+AT|'
             r'FORMAT\s+(COLOR|INTENSIFIED|INVERSE|HOTSPOT|INPUT|FRAMES|RESET)|'
             r'AS\s+(CHECKBOX|SUBSCREEN|WINDOW)|'
             r'WITH\s+(((NON-)?UNIQUE)?\s+KEY|FRAME)|'
             r'(BEGIN|END)\s+OF|'
             r'DELETE(\s+ADJACENT\s+DUPLICATES\sFROM)?|'
             r'COMPARING(\s+ALL\s+FIELDS)?|'
             r'(INSERT|APPEND)(\s+INITIAL\s+LINE\s+(IN)?TO|\s+LINES\s+OF)?|'
             r'IN\s+((BYTE|CHARACTER)\s+MODE|PROGRAM)|'
             r'END-OF-(DEFINITION|PAGE|SELECTION)|'
             r'WITH\s+FRAME(\s+TITLE)|'
             r'(REPLACE|FIND)\s+((FIRST|ALL)\s+OCCURRENCES?\s+OF\s+)?(SUBSTRING|REGEX)?|'
             r'MATCH\s+(LENGTH|COUNT|LINE|OFFSET)|'
             r'(RESPECTING|IGNORING)\s+CASE|'
             r'IN\s+UPDATE\s+TASK|'
             r'(SOURCE|RESULT)\s+(XML)?|'
             r'REFERENCE\s+INTO|'

             # simple combinations
             r'AND\s+(MARK|RETURN)|CLIENT\s+SPECIFIED|CORRESPONDING\s+FIELDS\s+OF|'
             r'IF\s+FOUND|FOR\s+EVENT|INHERITING\s+FROM|LEAVE\s+TO\s+SCREEN|'
             r'LOOP\s+AT\s+(SCREEN)?|LOWER\s+CASE|MATCHCODE\s+OBJECT|MODIF\s+ID|'
             r'MODIFY\s+SCREEN|NESTING\s+LEVEL|NO\s+INTERVALS|OF\s+STRUCTURE|'
             r'RADIOBUTTON\s+GROUP|RANGE\s+OF|REF\s+TO|SUPPRESS DIALOG|'
             r'TABLE\s+OF|UPPER\s+CASE|TRANSPORTING\s+NO\s+FIELDS|'
             r'VALUE\s+CHECK|VISIBLE\s+LENGTH|HEADER\s+LINE|COMMON\s+PART)\b',
             Keyword),

            # single word keywords.
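            # Ordering note: the combination rule above is tried before the
            # single-word keywords that follow, so a phrase such as
            #     DELETE ADJACENT DUPLICATES FROM itab.
            # (a hypothetical example) is highlighted as one keyword span
            # rather than as DELETE plus leftover words.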
(r'(^|(?<=(\s|\.)))(ABBREVIATED|ABSTRACT|ADD|ALIASES|ALIGN|ALPHA|' r'ASSERT|AS|ASSIGN(ING)?|AT(\s+FIRST)?|' r'BACK|BLOCK|BREAK-POINT|' r'CASE|CATCH|CHANGING|CHECK|CLASS|CLEAR|COLLECT|COLOR|COMMIT|' r'CREATE|COMMUNICATION|COMPONENTS?|COMPUTE|CONCATENATE|CONDENSE|' r'CONSTANTS|CONTEXTS|CONTINUE|CONTROLS|COUNTRY|CURRENCY|' r'DATA|DATE|DECIMALS|DEFAULT|DEFINE|DEFINITION|DEFERRED|DEMAND|' r'DETAIL|DIRECTORY|DIVIDE|DO|DUMMY|' r'ELSE(IF)?|ENDAT|ENDCASE|ENDCATCH|ENDCLASS|ENDDO|ENDFORM|ENDFUNCTION|' r'ENDIF|ENDINTERFACE|ENDLOOP|ENDMETHOD|ENDMODULE|ENDSELECT|ENDTRY|ENDWHILE|' r'ENHANCEMENT|EVENTS|EXACT|EXCEPTIONS?|EXIT|EXPONENT|EXPORT|EXPORTING|EXTRACT|' r'FETCH|FIELDS?|FOR|FORM|FORMAT|FREE|FROM|FUNCTION|' r'HIDE|' r'ID|IF|IMPORT|IMPLEMENTATION|IMPORTING|IN|INCLUDE|INCLUDING|' r'INDEX|INFOTYPES|INITIALIZATION|INTERFACE|INTERFACES|INTO|' r'LANGUAGE|LEAVE|LENGTH|LINES|LOAD|LOCAL|' r'JOIN|' r'KEY|' r'NEXT|' r'MAXIMUM|MESSAGE|METHOD[S]?|MINIMUM|MODULE|MODIFIER|MODIFY|MOVE|MULTIPLY|' r'NODES|NUMBER|' r'OBLIGATORY|OBJECT|OF|OFF|ON|OTHERS|OVERLAY|' r'PACK|PAD|PARAMETERS|PERCENTAGE|POSITION|PROGRAM|PROVIDE|PUBLIC|PUT|PF\d\d|' r'RAISE|RAISING|RANGES?|READ|RECEIVE|REDEFINITION|REFRESH|REJECT|REPORT|RESERVE|' r'RESUME|RETRY|RETURN|RETURNING|RIGHT|ROLLBACK|REPLACE|' r'SCROLL|SEARCH|SELECT|SHIFT|SIGN|SINGLE|SIZE|SKIP|SORT|SPLIT|STATICS|STOP|' r'STYLE|SUBMATCHES|SUBMIT|SUBTRACT|SUM(?!\()|SUMMARY|SUMMING|SUPPLY|' r'TABLE|TABLES|TIMESTAMP|TIMES?|TIMEZONE|TITLE|\??TO|' r'TOP-OF-PAGE|TRANSFER|TRANSLATE|TRY|TYPES|' r'ULINE|UNDER|UNPACK|UPDATE|USING|' r'VALUE|VALUES|VIA|VARYING|VARY|' r'WAIT|WHEN|WHERE|WIDTH|WHILE|WITH|WINDOW|WRITE|XSD|ZERO)\b', Keyword), # builtins (r'(abs|acos|asin|atan|' r'boolc|boolx|bit_set|' r'char_off|charlen|ceil|cmax|cmin|condense|contains|' r'contains_any_of|contains_any_not_of|concat_lines_of|cos|cosh|' r'count|count_any_of|count_any_not_of|' r'dbmaxlen|distance|' r'escape|exp|' r'find|find_end|find_any_of|find_any_not_of|floor|frac|from_mixed|' r'insert|' r'lines|log|log10|' r'match|matches|' r'nmax|nmin|numofchar|' r'repeat|replace|rescale|reverse|round|' r'segment|shift_left|shift_right|sign|sin|sinh|sqrt|strlen|' r'substring|substring_after|substring_from|substring_before|substring_to|' r'tan|tanh|to_upper|to_lower|to_mixed|translate|trunc|' r'xstrlen)(\()\b', bygroups(Name.Builtin, Punctuation)), (r'&[0-9]', Name), (r'[0-9]+', Number.Integer), # operators which look like variable names before # parsing variable names. (r'(?<=(\s|.))(AND|OR|EQ|NE|GT|LT|GE|LE|CO|CN|CA|NA|CS|NOT|NS|CP|NP|' r'BYTE-CO|BYTE-CN|BYTE-CA|BYTE-NA|BYTE-CS|BYTE-NS|' r'IS\s+(NOT\s+)?(INITIAL|ASSIGNED|REQUESTED|BOUND))\b', Operator.Word), include('variable-names'), # standard operators after variable names, # because < and > are part of field symbols. (r'[?*<>=\-+&]', Operator), (r"'(''|[^'])*'", String.Single), (r"`([^`])*`", String.Single), (r"([|}])([^{}|]*?)([|{])", bygroups(Punctuation, String.Single, Punctuation)), (r'[/;:()\[\],.]', Punctuation), (r'(!)(\w+)', bygroups(Operator, Name)), ], } class OpenEdgeLexer(RegexLexer): """ Lexer for `OpenEdge ABL (formerly Progress) `_ source code. .. 
versionadded:: 1.5 """ name = 'OpenEdge ABL' aliases = ['openedge', 'abl', 'progress'] filenames = ['*.p', '*.cls'] mimetypes = ['text/x-openedge', 'application/x-openedge'] types = (r'(?i)(^|(?<=[^\w\-]))(CHARACTER|CHAR|CHARA|CHARAC|CHARACT|CHARACTE|' r'COM-HANDLE|DATE|DATETIME|DATETIME-TZ|' r'DECIMAL|DEC|DECI|DECIM|DECIMA|HANDLE|' r'INT64|INTEGER|INT|INTE|INTEG|INTEGE|' r'LOGICAL|LONGCHAR|MEMPTR|RAW|RECID|ROWID)\s*($|(?=[^\w\-]))') keywords = words(OPENEDGEKEYWORDS, prefix=r'(?i)(^|(?<=[^\w\-]))', suffix=r'\s*($|(?=[^\w\-]))') tokens = { 'root': [ (r'/\*', Comment.Multiline, 'comment'), (r'\{', Comment.Preproc, 'preprocessor'), (r'\s*&.*', Comment.Preproc), (r'0[xX][0-9a-fA-F]+[LlUu]*', Number.Hex), (r'(?i)(DEFINE|DEF|DEFI|DEFIN)\b', Keyword.Declaration), (types, Keyword.Type), (keywords, Name.Builtin), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'[0-9]+', Number.Integer), (r'\s+', Whitespace), (r'[+*/=-]', Operator), (r'[.:()]', Punctuation), (r'.', Name.Variable), # Lazy catch-all ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'preprocessor': [ (r'[^{}]', Comment.Preproc), (r'\{', Comment.Preproc, '#push'), (r'\}', Comment.Preproc, '#pop'), ], } def analyse_text(text): """Try to identify OpenEdge ABL based on a few common constructs.""" result = 0 if 'END.' in text: result += 0.05 if 'END PROCEDURE.' in text: result += 0.05 if 'ELSE DO:' in text: result += 0.05 return result class GoodDataCLLexer(RegexLexer): """ Lexer for `GoodData-CL `_ script files. .. versionadded:: 1.4 """ name = 'GoodData-CL' aliases = ['gooddata-cl'] filenames = ['*.gdc'] mimetypes = ['text/x-gooddata-cl'] flags = re.IGNORECASE tokens = { 'root': [ # Comments (r'#.*', Comment.Single), # Function call (r'[a-z]\w*', Name.Function), # Argument list (r'\(', Punctuation, 'args-list'), # Punctuation (r';', Punctuation), # Space is not significant (r'\s+', Text) ], 'args-list': [ (r'\)', Punctuation, '#pop'), (r',', Punctuation), (r'[a-z]\w*', Name.Variable), (r'=', Operator), (r'"', String, 'string-literal'), (r'[0-9]+(?:\.[0-9]+)?(?:e[+-]?[0-9]{1,3})?', Number), # Space is not significant (r'\s', Whitespace) ], 'string-literal': [ (r'\\[tnrfbae"\\]', String.Escape), (r'"', String, '#pop'), (r'[^\\"]+', String) ] } class MaqlLexer(RegexLexer): """ Lexer for `GoodData MAQL `_ scripts. .. 
versionadded:: 1.4 """ name = 'MAQL' aliases = ['maql'] filenames = ['*.maql'] mimetypes = ['text/x-gooddata-maql', 'application/x-gooddata-maql'] flags = re.IGNORECASE tokens = { 'root': [ # IDENTITY (r'IDENTIFIER\b', Name.Builtin), # IDENTIFIER (r'\{[^}]+\}', Name.Variable), # NUMBER (r'[0-9]+(?:\.[0-9]+)?(?:e[+-]?[0-9]{1,3})?', Number), # STRING (r'"', String, 'string-literal'), # RELATION (r'\<\>|\!\=', Operator), (r'\=|\>\=|\>|\<\=|\<', Operator), # := (r'\:\=', Operator), # OBJECT (r'\[[^]]+\]', Name.Variable.Class), # keywords (words(( 'DIMENSION', 'DIMENSIONS', 'BOTTOM', 'METRIC', 'COUNT', 'OTHER', 'FACT', 'WITH', 'TOP', 'OR', 'ATTRIBUTE', 'CREATE', 'PARENT', 'FALSE', 'ROW', 'ROWS', 'FROM', 'ALL', 'AS', 'PF', 'COLUMN', 'COLUMNS', 'DEFINE', 'REPORT', 'LIMIT', 'TABLE', 'LIKE', 'AND', 'BY', 'BETWEEN', 'EXCEPT', 'SELECT', 'MATCH', 'WHERE', 'TRUE', 'FOR', 'IN', 'WITHOUT', 'FILTER', 'ALIAS', 'WHEN', 'NOT', 'ON', 'KEYS', 'KEY', 'FULLSET', 'PRIMARY', 'LABELS', 'LABEL', 'VISUAL', 'TITLE', 'DESCRIPTION', 'FOLDER', 'ALTER', 'DROP', 'ADD', 'DATASET', 'DATATYPE', 'INT', 'BIGINT', 'DOUBLE', 'DATE', 'VARCHAR', 'DECIMAL', 'SYNCHRONIZE', 'TYPE', 'DEFAULT', 'ORDER', 'ASC', 'DESC', 'HYPERLINK', 'INCLUDE', 'TEMPLATE', 'MODIFY'), suffix=r'\b'), Keyword), # FUNCNAME (r'[a-z]\w*\b', Name.Function), # Comments (r'#.*', Comment.Single), # Punctuation (r'[,;()]', Punctuation), # Space is not significant (r'\s+', Whitespace) ], 'string-literal': [ (r'\\[tnrfbae"\\]', String.Escape), (r'"', String, '#pop'), (r'[^\\"]+', String) ], } pygments-2.11.2/pygments/lexers/igor.py0000644000175000017500000007357114165547207020035 0ustar carstencarsten""" pygments.lexers.igor ~~~~~~~~~~~~~~~~~~~~ Lexers for Igor Pro. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, words from pygments.token import Text, Comment, Keyword, Name, String __all__ = ['IgorLexer'] class IgorLexer(RegexLexer): """ Pygments Lexer for Igor Pro procedure files (.ipf). See http://www.wavemetrics.com/ and http://www.igorexchange.com/. .. 
versionadded:: 2.0 """ name = 'Igor' aliases = ['igor', 'igorpro'] filenames = ['*.ipf'] mimetypes = ['text/ipf'] flags = re.IGNORECASE | re.MULTILINE flowControl = ( 'if', 'else', 'elseif', 'endif', 'for', 'endfor', 'strswitch', 'switch', 'case', 'default', 'endswitch', 'do', 'while', 'try', 'catch', 'endtry', 'break', 'continue', 'return', 'AbortOnRTE', 'AbortOnValue' ) types = ( 'variable', 'string', 'constant', 'strconstant', 'NVAR', 'SVAR', 'WAVE', 'STRUCT', 'dfref', 'funcref', 'char', 'uchar', 'int16', 'uint16', 'int32', 'uint32', 'int64', 'uint64', 'float', 'double' ) keywords = ( 'override', 'ThreadSafe', 'MultiThread', 'static', 'Proc', 'Picture', 'Prompt', 'DoPrompt', 'macro', 'window', 'function', 'end', 'Structure', 'EndStructure', 'EndMacro', 'Menu', 'SubMenu' ) operations = ( 'Abort', 'AddFIFOData', 'AddFIFOVectData', 'AddMovieAudio', 'AddMovieFrame', 'AddWavesToBoxPlot', 'AddWavesToViolinPlot', 'AdoptFiles', 'APMath', 'Append', 'AppendBoxPlot', 'AppendImage', 'AppendLayoutObject', 'AppendMatrixContour', 'AppendText', 'AppendToGizmo', 'AppendToGraph', 'AppendToLayout', 'AppendToTable', 'AppendViolinPlot', 'AppendXYZContour', 'AutoPositionWindow', 'AxonTelegraphFindServers', 'BackgroundInfo', 'Beep', 'BoundingBall', 'BoxSmooth', 'BrowseURL', 'BuildMenu', 'Button', 'cd', 'Chart', 'CheckBox', 'CheckDisplayed', 'ChooseColor', 'Close', 'CloseHelp', 'CloseMovie', 'CloseProc', 'ColorScale', 'ColorTab2Wave', 'Concatenate', 'ControlBar', 'ControlInfo', 'ControlUpdate', 'ConvertGlobalStringTextEncoding', 'ConvexHull', 'Convolve', 'CopyDimLabels', 'CopyFile', 'CopyFolder', 'CopyScales', 'Correlate', 'CreateAliasShortcut', 'CreateBrowser', 'Cross', 'CtrlBackground', 'CtrlFIFO', 'CtrlNamedBackground', 'Cursor', 'CurveFit', 'CustomControl', 'CWT', 'DAQmx_AI_SetupReader', 'DAQmx_AO_SetOutputs', 'DAQmx_CTR_CountEdges', 'DAQmx_CTR_OutputPulse', 'DAQmx_CTR_Period', 'DAQmx_CTR_PulseWidth', 'DAQmx_DIO_Config', 'DAQmx_DIO_WriteNewData', 'DAQmx_Scan', 'DAQmx_WaveformGen', 'Debugger', 'DebuggerOptions', 'DefaultFont', 'DefaultGuiControls', 'DefaultGuiFont', 'DefaultTextEncoding', 'DefineGuide', 'DelayUpdate', 'DeleteAnnotations', 'DeleteFile', 'DeleteFolder', 'DeletePoints', 'Differentiate', 'dir', 'Display', 'DisplayHelpTopic', 'DisplayProcedure', 'DoAlert', 'DoIgorMenu', 'DoUpdate', 'DoWindow', 'DoXOPIdle', 'DPSS', 'DrawAction', 'DrawArc', 'DrawBezier', 'DrawLine', 'DrawOval', 'DrawPICT', 'DrawPoly', 'DrawRect', 'DrawRRect', 'DrawText', 'DrawUserShape', 'DSPDetrend', 'DSPPeriodogram', 'Duplicate', 'DuplicateDataFolder', 'DWT', 'EdgeStats', 'Edit', 'ErrorBars', 'EstimatePeakSizes', 'Execute', 'ExecuteScriptText', 'ExperimentInfo', 'ExperimentModified', 'ExportGizmo', 'Extract', 'FastGaussTransform', 'FastOp', 'FBinRead', 'FBinWrite', 'FFT', 'FGetPos', 'FIFOStatus', 'FIFO2Wave', 'FilterFIR', 'FilterIIR', 'FindAPeak', 'FindContour', 'FindDuplicates', 'FindLevel', 'FindLevels', 'FindPeak', 'FindPointsInPoly', 'FindRoots', 'FindSequence', 'FindValue', 'FMaxFlat', 'FPClustering', 'fprintf', 'FReadLine', 'FSetPos', 'FStatus', 'FTPCreateDirectory', 'FTPDelete', 'FTPDownload', 'FTPUpload', 'FuncFit', 'FuncFitMD', 'GBLoadWave', 'GetAxis', 'GetCamera', 'GetFileFolderInfo', 'GetGizmo', 'GetLastUserMenuInfo', 'GetMarquee', 'GetMouse', 'GetSelection', 'GetWindow', 'GISCreateVectorLayer', 'GISGetRasterInfo', 'GISGetRegisteredFileInfo', 'GISGetVectorLayerInfo', 'GISLoadRasterData', 'GISLoadVectorData', 'GISRasterizeVectorData', 'GISRegisterFile', 'GISTransformCoords', 'GISUnRegisterFile', 'GISWriteFieldData', 
'GISWriteGeometryData', 'GISWriteRaster', 'GPIBReadBinaryWave2', 'GPIBReadBinary2', 'GPIBReadWave2', 'GPIBRead2', 'GPIBWriteBinaryWave2', 'GPIBWriteBinary2', 'GPIBWriteWave2', 'GPIBWrite2', 'GPIB2', 'GraphNormal', 'GraphWaveDraw', 'GraphWaveEdit', 'Grep', 'GroupBox', 'Hanning', 'HDFInfo', 'HDFReadImage', 'HDFReadSDS', 'HDFReadVset', 'HDF5CloseFile', 'HDF5CloseGroup', 'HDF5ConvertColors', 'HDF5CreateFile', 'HDF5CreateGroup', 'HDF5CreateLink', 'HDF5Dump', 'HDF5DumpErrors', 'HDF5DumpState', 'HDF5FlushFile', 'HDF5ListAttributes', 'HDF5ListGroup', 'HDF5LoadData', 'HDF5LoadGroup', 'HDF5LoadImage', 'HDF5OpenFile', 'HDF5OpenGroup', 'HDF5SaveData', 'HDF5SaveGroup', 'HDF5SaveImage', 'HDF5TestOperation', 'HDF5UnlinkObject', 'HideIgorMenus', 'HideInfo', 'HideProcedures', 'HideTools', 'HilbertTransform', 'Histogram', 'ICA', 'IFFT', 'ImageAnalyzeParticles', 'ImageBlend', 'ImageBoundaryToMask', 'ImageComposite', 'ImageEdgeDetection', 'ImageFileInfo', 'ImageFilter', 'ImageFocus', 'ImageFromXYZ', 'ImageGenerateROIMask', 'ImageGLCM', 'ImageHistModification', 'ImageHistogram', 'ImageInterpolate', 'ImageLineProfile', 'ImageLoad', 'ImageMorphology', 'ImageRegistration', 'ImageRemoveBackground', 'ImageRestore', 'ImageRotate', 'ImageSave', 'ImageSeedFill', 'ImageSkeleton3d', 'ImageSnake', 'ImageStats', 'ImageThreshold', 'ImageTransform', 'ImageUnwrapPhase', 'ImageWindow', 'IndexSort', 'InsertPoints', 'Integrate', 'IntegrateODE', 'Integrate2D', 'Interpolate2', 'Interpolate3D', 'Interp3DPath', 'ITCCloseAll2', 'ITCCloseDevice2', 'ITCConfigAllChannels2', 'ITCConfigChannelReset2', 'ITCConfigChannelUpload2', 'ITCConfigChannel2', 'ITCFIFOAvailableAll2', 'ITCFIFOAvailable2', 'ITCGetAllChannelsConfig2', 'ITCGetChannelConfig2', 'ITCGetCurrentDevice2', 'ITCGetDeviceInfo2', 'ITCGetDevices2', 'ITCGetErrorString2', 'ITCGetSerialNumber2', 'ITCGetState2', 'ITCGetVersions2', 'ITCInitialize2', 'ITCOpenDevice2', 'ITCReadADC2', 'ITCReadDigital2', 'ITCReadTimer2', 'ITCSelectDevice2', 'ITCSetDAC2', 'ITCSetGlobals2', 'ITCSetModes2', 'ITCSetState2', 'ITCStartAcq2', 'ITCStopAcq2', 'ITCUpdateFIFOPositionAll2', 'ITCUpdateFIFOPosition2', 'ITCWriteDigital2', 'JCAMPLoadWave', 'JointHistogram', 'KillBackground', 'KillControl', 'KillDataFolder', 'KillFIFO', 'KillFreeAxis', 'KillPath', 'KillPICTs', 'KillStrings', 'KillVariables', 'KillWaves', 'KillWindow', 'KMeans', 'Label', 'Layout', 'LayoutPageAction', 'LayoutSlideShow', 'Legend', 'LinearFeedbackShiftRegister', 'ListBox', 'LoadData', 'LoadPackagePreferences', 'LoadPICT', 'LoadWave', 'Loess', 'LombPeriodogram', 'Make', 'MakeIndex', 'MarkPerfTestTime', 'MatrixConvolve', 'MatrixCorr', 'MatrixEigenV', 'MatrixFilter', 'MatrixGaussJ', 'MatrixGLM', 'MatrixInverse', 'MatrixLinearSolve', 'MatrixLinearSolveTD', 'MatrixLLS', 'MatrixLUBkSub', 'MatrixLUD', 'MatrixLUDTD', 'MatrixMultiply', 'MatrixOP', 'MatrixSchur', 'MatrixSolve', 'MatrixSVBkSub', 'MatrixSVD', 'MatrixTranspose', 'MCC_FindServers', 'MeasureStyledText', 'MFR_CheckForNewBricklets', 'MFR_CloseResultFile', 'MFR_CreateOverviewTable', 'MFR_GetBrickletCount', 'MFR_GetBrickletData', 'MFR_GetBrickletDeployData', 'MFR_GetBrickletMetaData', 'MFR_GetBrickletRawData', 'MFR_GetReportTemplate', 'MFR_GetResultFileMetaData', 'MFR_GetResultFileName', 'MFR_GetVernissageVersion', 'MFR_GetVersion', 'MFR_GetXOPErrorMessage', 'MFR_OpenResultFile', 'MLLoadWave', 'Modify', 'ModifyBoxPlot', 'ModifyBrowser', 'ModifyCamera', 'ModifyContour', 'ModifyControl', 'ModifyControlList', 'ModifyFreeAxis', 'ModifyGizmo', 'ModifyGraph', 'ModifyImage', 'ModifyLayout', 
'ModifyPanel', 'ModifyTable', 'ModifyViolinPlot', 'ModifyWaterfall', 'MoveDataFolder', 'MoveFile', 'MoveFolder', 'MoveString', 'MoveSubwindow', 'MoveVariable', 'MoveWave', 'MoveWindow', 'MultiTaperPSD', 'MultiThreadingControl', 'NC_CloseFile', 'NC_DumpErrors', 'NC_Inquire', 'NC_ListAttributes', 'NC_ListObjects', 'NC_LoadData', 'NC_OpenFile', 'NeuralNetworkRun', 'NeuralNetworkTrain', 'NewCamera', 'NewDataFolder', 'NewFIFO', 'NewFIFOChan', 'NewFreeAxis', 'NewGizmo', 'NewImage', 'NewLayout', 'NewMovie', 'NewNotebook', 'NewPanel', 'NewPath', 'NewWaterfall', 'NILoadWave', 'NI4882', 'Note', 'Notebook', 'NotebookAction', 'Open', 'OpenHelp', 'OpenNotebook', 'Optimize', 'ParseOperationTemplate', 'PathInfo', 'PauseForUser', 'PauseUpdate', 'PCA', 'PlayMovie', 'PlayMovieAction', 'PlaySound', 'PopupContextualMenu', 'PopupMenu', 'Preferences', 'PrimeFactors', 'Print', 'printf', 'PrintGraphs', 'PrintLayout', 'PrintNotebook', 'PrintSettings', 'PrintTable', 'Project', 'PulseStats', 'PutScrapText', 'pwd', 'Quit', 'RatioFromNumber', 'Redimension', 'Remez', 'Remove', 'RemoveContour', 'RemoveFromGizmo', 'RemoveFromGraph', 'RemoveFromLayout', 'RemoveFromTable', 'RemoveImage', 'RemoveLayoutObjects', 'RemovePath', 'Rename', 'RenameDataFolder', 'RenamePath', 'RenamePICT', 'RenameWindow', 'ReorderImages', 'ReorderTraces', 'ReplaceText', 'ReplaceWave', 'Resample', 'ResumeUpdate', 'Reverse', 'Rotate', 'Save', 'SaveData', 'SaveExperiment', 'SaveGizmoCopy', 'SaveGraphCopy', 'SaveNotebook', 'SavePackagePreferences', 'SavePICT', 'SaveTableCopy', 'SetActiveSubwindow', 'SetAxis', 'SetBackground', 'SetDashPattern', 'SetDataFolder', 'SetDimLabel', 'SetDrawEnv', 'SetDrawLayer', 'SetFileFolderInfo', 'SetFormula', 'SetIdlePeriod', 'SetIgorHook', 'SetIgorMenuMode', 'SetIgorOption', 'SetMarquee', 'SetProcessSleep', 'SetRandomSeed', 'SetScale', 'SetVariable', 'SetWaveLock', 'SetWaveTextEncoding', 'SetWindow', 'ShowIgorMenus', 'ShowInfo', 'ShowTools', 'Silent', 'Sleep', 'Slider', 'Smooth', 'SmoothCustom', 'Sort', 'SortColumns', 'SoundInRecord', 'SoundInSet', 'SoundInStartChart', 'SoundInStatus', 'SoundInStopChart', 'SoundLoadWave', 'SoundSaveWave', 'SphericalInterpolate', 'SphericalTriangulate', 'SplitString', 'SplitWave', 'sprintf', 'SQLHighLevelOp', 'sscanf', 'Stack', 'StackWindows', 'StatsAngularDistanceTest', 'StatsANOVA1Test', 'StatsANOVA2NRTest', 'StatsANOVA2RMTest', 'StatsANOVA2Test', 'StatsChiTest', 'StatsCircularCorrelationTest', 'StatsCircularMeans', 'StatsCircularMoments', 'StatsCircularTwoSampleTest', 'StatsCochranTest', 'StatsContingencyTable', 'StatsDIPTest', 'StatsDunnettTest', 'StatsFriedmanTest', 'StatsFTest', 'StatsHodgesAjneTest', 'StatsJBTest', 'StatsKDE', 'StatsKendallTauTest', 'StatsKSTest', 'StatsKWTest', 'StatsLinearCorrelationTest', 'StatsLinearRegression', 'StatsMultiCorrelationTest', 'StatsNPMCTest', 'StatsNPNominalSRTest', 'StatsQuantiles', 'StatsRankCorrelationTest', 'StatsResample', 'StatsSample', 'StatsScheffeTest', 'StatsShapiroWilkTest', 'StatsSignTest', 'StatsSRTest', 'StatsTTest', 'StatsTukeyTest', 'StatsVariancesTest', 'StatsWatsonUSquaredTest', 'StatsWatsonWilliamsTest', 'StatsWheelerWatsonTest', 'StatsWilcoxonRankTest', 'StatsWRCorrelationTest', 'STFT', 'String', 'StructFill', 'StructGet', 'StructPut', 'SumDimension', 'SumSeries', 'TabControl', 'Tag', 'TDMLoadData', 'TDMSaveData', 'TextBox', 'ThreadGroupPutDF', 'ThreadStart', 'TickWavesFromAxis', 'Tile', 'TileWindows', 'TitleBox', 'ToCommandLine', 'ToolsGrid', 'Triangulate3d', 'Unwrap', 'URLRequest', 'ValDisplay', 'Variable', 'VDTClosePort2', 
'VDTGetPortList2', 'VDTGetStatus2', 'VDTOpenPort2', 'VDTOperationsPort2', 'VDTReadBinaryWave2', 'VDTReadBinary2', 'VDTReadHexWave2', 'VDTReadHex2', 'VDTReadWave2', 'VDTRead2', 'VDTTerminalPort2', 'VDTWriteBinaryWave2', 'VDTWriteBinary2', 'VDTWriteHexWave2', 'VDTWriteHex2', 'VDTWriteWave2', 'VDTWrite2', 'VDT2', 'VISAControl', 'VISARead', 'VISAReadBinary', 'VISAReadBinaryWave', 'VISAReadWave', 'VISAWrite', 'VISAWriteBinary', 'VISAWriteBinaryWave', 'VISAWriteWave', 'WaveMeanStdv', 'WaveStats', 'WaveTransform', 'wfprintf', 'WignerTransform', 'WindowFunction', 'XLLoadWave' ) functions = ( 'abs', 'acos', 'acosh', 'AddListItem', 'AiryA', 'AiryAD', 'AiryB', 'AiryBD', 'alog', 'AnnotationInfo', 'AnnotationList', 'area', 'areaXY', 'asin', 'asinh', 'atan', 'atanh', 'atan2', 'AxisInfo', 'AxisList', 'AxisValFromPixel', 'AxonTelegraphAGetDataNum', 'AxonTelegraphAGetDataString', 'AxonTelegraphAGetDataStruct', 'AxonTelegraphGetDataNum', 'AxonTelegraphGetDataString', 'AxonTelegraphGetDataStruct', 'AxonTelegraphGetTimeoutMs', 'AxonTelegraphSetTimeoutMs', 'Base64Decode', 'Base64Encode', 'Besseli', 'Besselj', 'Besselk', 'Bessely', 'beta', 'betai', 'BinarySearch', 'BinarySearchInterp', 'binomial', 'binomialln', 'binomialNoise', 'cabs', 'CaptureHistory', 'CaptureHistoryStart', 'ceil', 'cequal', 'char2num', 'chebyshev', 'chebyshevU', 'CheckName', 'ChildWindowList', 'CleanupName', 'cmplx', 'cmpstr', 'conj', 'ContourInfo', 'ContourNameList', 'ContourNameToWaveRef', 'ContourZ', 'ControlNameList', 'ConvertTextEncoding', 'cos', 'cosh', 'cosIntegral', 'cot', 'coth', 'CountObjects', 'CountObjectsDFR', 'cpowi', 'CreationDate', 'csc', 'csch', 'CsrInfo', 'CsrWave', 'CsrWaveRef', 'CsrXWave', 'CsrXWaveRef', 'CTabList', 'DataFolderDir', 'DataFolderExists', 'DataFolderRefsEqual', 'DataFolderRefStatus', 'date', 'datetime', 'DateToJulian', 'date2secs', 'Dawson', 'defined', 'deltax', 'digamma', 'dilogarithm', 'DimDelta', 'DimOffset', 'DimSize', 'ei', 'enoise', 'equalWaves', 'erf', 'erfc', 'erfcw', 'exists', 'exp', 'expInt', 'expIntegralE1', 'expNoise', 'factorial', 'Faddeeva', 'fakedata', 'faverage', 'faverageXY', 'fDAQmx_AI_GetReader', 'fDAQmx_AO_UpdateOutputs', 'fDAQmx_ConnectTerminals', 'fDAQmx_CTR_Finished', 'fDAQmx_CTR_IsFinished', 'fDAQmx_CTR_IsPulseFinished', 'fDAQmx_CTR_ReadCounter', 'fDAQmx_CTR_ReadWithOptions', 'fDAQmx_CTR_SetPulseFrequency', 'fDAQmx_CTR_Start', 'fDAQmx_DeviceNames', 'fDAQmx_DIO_Finished', 'fDAQmx_DIO_PortWidth', 'fDAQmx_DIO_Read', 'fDAQmx_DIO_Write', 'fDAQmx_DisconnectTerminals', 'fDAQmx_ErrorString', 'fDAQmx_ExternalCalDate', 'fDAQmx_NumAnalogInputs', 'fDAQmx_NumAnalogOutputs', 'fDAQmx_NumCounters', 'fDAQmx_NumDIOPorts', 'fDAQmx_ReadChan', 'fDAQmx_ReadNamedChan', 'fDAQmx_ResetDevice', 'fDAQmx_ScanGetAvailable', 'fDAQmx_ScanGetNextIndex', 'fDAQmx_ScanStart', 'fDAQmx_ScanStop', 'fDAQmx_ScanWait', 'fDAQmx_ScanWaitWithTimeout', 'fDAQmx_SelfCalDate', 'fDAQmx_SelfCalibration', 'fDAQmx_WaveformStart', 'fDAQmx_WaveformStop', 'fDAQmx_WF_IsFinished', 'fDAQmx_WF_WaitUntilFinished', 'fDAQmx_WriteChan', 'FetchURL', 'FindDimLabel', 'FindListItem', 'floor', 'FontList', 'FontSizeHeight', 'FontSizeStringWidth', 'FresnelCos', 'FresnelSin', 'FuncRefInfo', 'FunctionInfo', 'FunctionList', 'FunctionPath', 'gamma', 'gammaEuler', 'gammaInc', 'gammaNoise', 'gammln', 'gammp', 'gammq', 'Gauss', 'Gauss1D', 'Gauss2D', 'gcd', 'GetBrowserLine', 'GetBrowserSelection', 'GetDataFolder', 'GetDataFolderDFR', 'GetDefaultFont', 'GetDefaultFontSize', 'GetDefaultFontStyle', 'GetDimLabel', 'GetEnvironmentVariable', 'GetErrMessage', 
'GetFormula', 'GetIndependentModuleName', 'GetIndexedObjName', 'GetIndexedObjNameDFR', 'GetKeyState', 'GetRTErrMessage', 'GetRTError', 'GetRTLocation', 'GetRTLocInfo', 'GetRTStackInfo', 'GetScrapText', 'GetUserData', 'GetWavesDataFolder', 'GetWavesDataFolderDFR', 'GISGetAllFileFormats', 'GISSRefsAreEqual', 'GizmoInfo', 'GizmoScale', 'gnoise', 'GrepList', 'GrepString', 'GuideInfo', 'GuideNameList', 'Hash', 'hcsr', 'HDF5AttributeInfo', 'HDF5DatasetInfo', 'HDF5LibraryInfo', 'HDF5TypeInfo', 'hermite', 'hermiteGauss', 'HyperGNoise', 'HyperGPFQ', 'HyperG0F1', 'HyperG1F1', 'HyperG2F1', 'IgorInfo', 'IgorVersion', 'imag', 'ImageInfo', 'ImageNameList', 'ImageNameToWaveRef', 'IndependentModuleList', 'IndexedDir', 'IndexedFile', 'IndexToScale', 'Inf', 'Integrate1D', 'interp', 'Interp2D', 'Interp3D', 'inverseERF', 'inverseERFC', 'ItemsInList', 'JacobiCn', 'JacobiSn', 'JulianToDate', 'Laguerre', 'LaguerreA', 'LaguerreGauss', 'LambertW', 'LayoutInfo', 'leftx', 'LegendreA', 'limit', 'ListMatch', 'ListToTextWave', 'ListToWaveRefWave', 'ln', 'log', 'logNormalNoise', 'lorentzianNoise', 'LowerStr', 'MacroList', 'magsqr', 'MandelbrotPoint', 'MarcumQ', 'MatrixCondition', 'MatrixDet', 'MatrixDot', 'MatrixRank', 'MatrixTrace', 'max', 'MCC_AutoBridgeBal', 'MCC_AutoFastComp', 'MCC_AutoPipetteOffset', 'MCC_AutoSlowComp', 'MCC_AutoWholeCellComp', 'MCC_GetBridgeBalEnable', 'MCC_GetBridgeBalResist', 'MCC_GetFastCompCap', 'MCC_GetFastCompTau', 'MCC_GetHolding', 'MCC_GetHoldingEnable', 'MCC_GetMode', 'MCC_GetNeutralizationCap', 'MCC_GetNeutralizationEnable', 'MCC_GetOscKillerEnable', 'MCC_GetPipetteOffset', 'MCC_GetPrimarySignalGain', 'MCC_GetPrimarySignalHPF', 'MCC_GetPrimarySignalLPF', 'MCC_GetRsCompBandwidth', 'MCC_GetRsCompCorrection', 'MCC_GetRsCompEnable', 'MCC_GetRsCompPrediction', 'MCC_GetSecondarySignalGain', 'MCC_GetSecondarySignalLPF', 'MCC_GetSlowCompCap', 'MCC_GetSlowCompTau', 'MCC_GetSlowCompTauX20Enable', 'MCC_GetSlowCurrentInjEnable', 'MCC_GetSlowCurrentInjLevel', 'MCC_GetSlowCurrentInjSetlTime', 'MCC_GetWholeCellCompCap', 'MCC_GetWholeCellCompEnable', 'MCC_GetWholeCellCompResist', 'MCC_SelectMultiClamp700B', 'MCC_SetBridgeBalEnable', 'MCC_SetBridgeBalResist', 'MCC_SetFastCompCap', 'MCC_SetFastCompTau', 'MCC_SetHolding', 'MCC_SetHoldingEnable', 'MCC_SetMode', 'MCC_SetNeutralizationCap', 'MCC_SetNeutralizationEnable', 'MCC_SetOscKillerEnable', 'MCC_SetPipetteOffset', 'MCC_SetPrimarySignalGain', 'MCC_SetPrimarySignalHPF', 'MCC_SetPrimarySignalLPF', 'MCC_SetRsCompBandwidth', 'MCC_SetRsCompCorrection', 'MCC_SetRsCompEnable', 'MCC_SetRsCompPrediction', 'MCC_SetSecondarySignalGain', 'MCC_SetSecondarySignalLPF', 'MCC_SetSlowCompCap', 'MCC_SetSlowCompTau', 'MCC_SetSlowCompTauX20Enable', 'MCC_SetSlowCurrentInjEnable', 'MCC_SetSlowCurrentInjLevel', 'MCC_SetSlowCurrentInjSetlTime', 'MCC_SetTimeoutMs', 'MCC_SetWholeCellCompCap', 'MCC_SetWholeCellCompEnable', 'MCC_SetWholeCellCompResist', 'mean', 'median', 'min', 'mod', 'ModDate', 'MPFXEMGPeak', 'MPFXExpConvExpPeak', 'MPFXGaussPeak', 'MPFXLorenzianPeak', 'MPFXVoigtPeak', 'NameOfWave', 'NaN', 'NewFreeDataFolder', 'NewFreeWave', 'norm', 'NormalizeUnicode', 'note', 'NumberByKey', 'numpnts', 'numtype', 'NumVarOrDefault', 'num2char', 'num2istr', 'num2str', 'NVAR_Exists', 'OperationList', 'PadString', 'PanelResolution', 'ParamIsDefault', 'ParseFilePath', 'PathList', 'pcsr', 'Pi', 'PICTInfo', 'PICTList', 'PixelFromAxisVal', 'pnt2x', 'poissonNoise', 'poly', 'PolygonArea', 'poly2D', 'PossiblyQuoteName', 'ProcedureText', 'p2rect', 'qcsr', 'real', 'RemoveByKey', 'RemoveEnding', 
'RemoveFromList', 'RemoveListItem', 'ReplaceNumberByKey', 'ReplaceString', 'ReplaceStringByKey', 'rightx', 'round', 'r2polar', 'sawtooth', 'scaleToIndex', 'ScreenResolution', 'sec', 'sech', 'Secs2Date', 'Secs2Time', 'SelectNumber', 'SelectString', 'SetEnvironmentVariable', 'sign', 'sin', 'sinc', 'sinh', 'sinIntegral', 'SortList', 'SpecialCharacterInfo', 'SpecialCharacterList', 'SpecialDirPath', 'SphericalBessJ', 'SphericalBessJD', 'SphericalBessY', 'SphericalBessYD', 'SphericalHarmonics', 'SQLAllocHandle', 'SQLAllocStmt', 'SQLBinaryWavesToTextWave', 'SQLBindCol', 'SQLBindParameter', 'SQLBrowseConnect', 'SQLBulkOperations', 'SQLCancel', 'SQLCloseCursor', 'SQLColAttributeNum', 'SQLColAttributeStr', 'SQLColumnPrivileges', 'SQLColumns', 'SQLConnect', 'SQLDataSources', 'SQLDescribeCol', 'SQLDescribeParam', 'SQLDisconnect', 'SQLDriverConnect', 'SQLDrivers', 'SQLEndTran', 'SQLError', 'SQLExecDirect', 'SQLExecute', 'SQLFetch', 'SQLFetchScroll', 'SQLForeignKeys', 'SQLFreeConnect', 'SQLFreeEnv', 'SQLFreeHandle', 'SQLFreeStmt', 'SQLGetConnectAttrNum', 'SQLGetConnectAttrStr', 'SQLGetCursorName', 'SQLGetDataNum', 'SQLGetDataStr', 'SQLGetDescFieldNum', 'SQLGetDescFieldStr', 'SQLGetDescRec', 'SQLGetDiagFieldNum', 'SQLGetDiagFieldStr', 'SQLGetDiagRec', 'SQLGetEnvAttrNum', 'SQLGetEnvAttrStr', 'SQLGetFunctions', 'SQLGetInfoNum', 'SQLGetInfoStr', 'SQLGetStmtAttrNum', 'SQLGetStmtAttrStr', 'SQLGetTypeInfo', 'SQLMoreResults', 'SQLNativeSql', 'SQLNumParams', 'SQLNumResultCols', 'SQLNumResultRowsIfKnown', 'SQLNumRowsFetched', 'SQLParamData', 'SQLPrepare', 'SQLPrimaryKeys', 'SQLProcedureColumns', 'SQLProcedures', 'SQLPutData', 'SQLReinitialize', 'SQLRowCount', 'SQLSetConnectAttrNum', 'SQLSetConnectAttrStr', 'SQLSetCursorName', 'SQLSetDescFieldNum', 'SQLSetDescFieldStr', 'SQLSetDescRec', 'SQLSetEnvAttrNum', 'SQLSetEnvAttrStr', 'SQLSetPos', 'SQLSetStmtAttrNum', 'SQLSetStmtAttrStr', 'SQLSpecialColumns', 'SQLStatistics', 'SQLTablePrivileges', 'SQLTables', 'SQLTextWaveToBinaryWaves', 'SQLTextWaveTo2DBinaryWave', 'SQLUpdateBoundValues', 'SQLXOPCheckState', 'SQL2DBinaryWaveToTextWave', 'sqrt', 'StartMSTimer', 'StatsBetaCDF', 'StatsBetaPDF', 'StatsBinomialCDF', 'StatsBinomialPDF', 'StatsCauchyCDF', 'StatsCauchyPDF', 'StatsChiCDF', 'StatsChiPDF', 'StatsCMSSDCDF', 'StatsCorrelation', 'StatsDExpCDF', 'StatsDExpPDF', 'StatsErlangCDF', 'StatsErlangPDF', 'StatsErrorPDF', 'StatsEValueCDF', 'StatsEValuePDF', 'StatsExpCDF', 'StatsExpPDF', 'StatsFCDF', 'StatsFPDF', 'StatsFriedmanCDF', 'StatsGammaCDF', 'StatsGammaPDF', 'StatsGeometricCDF', 'StatsGeometricPDF', 'StatsGEVCDF', 'StatsGEVPDF', 'StatsHyperGCDF', 'StatsHyperGPDF', 'StatsInvBetaCDF', 'StatsInvBinomialCDF', 'StatsInvCauchyCDF', 'StatsInvChiCDF', 'StatsInvCMSSDCDF', 'StatsInvDExpCDF', 'StatsInvEValueCDF', 'StatsInvExpCDF', 'StatsInvFCDF', 'StatsInvFriedmanCDF', 'StatsInvGammaCDF', 'StatsInvGeometricCDF', 'StatsInvKuiperCDF', 'StatsInvLogisticCDF', 'StatsInvLogNormalCDF', 'StatsInvMaxwellCDF', 'StatsInvMooreCDF', 'StatsInvNBinomialCDF', 'StatsInvNCChiCDF', 'StatsInvNCFCDF', 'StatsInvNormalCDF', 'StatsInvParetoCDF', 'StatsInvPoissonCDF', 'StatsInvPowerCDF', 'StatsInvQCDF', 'StatsInvQpCDF', 'StatsInvRayleighCDF', 'StatsInvRectangularCDF', 'StatsInvSpearmanCDF', 'StatsInvStudentCDF', 'StatsInvTopDownCDF', 'StatsInvTriangularCDF', 'StatsInvUsquaredCDF', 'StatsInvVonMisesCDF', 'StatsInvWeibullCDF', 'StatsKuiperCDF', 'StatsLogisticCDF', 'StatsLogisticPDF', 'StatsLogNormalCDF', 'StatsLogNormalPDF', 'StatsMaxwellCDF', 'StatsMaxwellPDF', 'StatsMedian', 'StatsMooreCDF', 
'StatsNBinomialCDF', 'StatsNBinomialPDF', 'StatsNCChiCDF', 'StatsNCChiPDF', 'StatsNCFCDF', 'StatsNCFPDF', 'StatsNCTCDF', 'StatsNCTPDF', 'StatsNormalCDF', 'StatsNormalPDF', 'StatsParetoCDF', 'StatsParetoPDF', 'StatsPermute', 'StatsPoissonCDF', 'StatsPoissonPDF', 'StatsPowerCDF', 'StatsPowerNoise', 'StatsPowerPDF', 'StatsQCDF', 'StatsQpCDF', 'StatsRayleighCDF', 'StatsRayleighPDF', 'StatsRectangularCDF', 'StatsRectangularPDF', 'StatsRunsCDF', 'StatsSpearmanRhoCDF', 'StatsStudentCDF', 'StatsStudentPDF', 'StatsTopDownCDF', 'StatsTriangularCDF', 'StatsTriangularPDF', 'StatsTrimmedMean', 'StatsUSquaredCDF', 'StatsVonMisesCDF', 'StatsVonMisesNoise', 'StatsVonMisesPDF', 'StatsWaldCDF', 'StatsWaldPDF', 'StatsWeibullCDF', 'StatsWeibullPDF', 'StopMSTimer', 'StringByKey', 'stringCRC', 'StringFromList', 'StringList', 'stringmatch', 'strlen', 'strsearch', 'StrVarOrDefault', 'str2num', 'StudentA', 'StudentT', 'sum', 'SVAR_Exists', 'TableInfo', 'TagVal', 'TagWaveRef', 'tan', 'tango_close_device', 'tango_command_inout', 'tango_compute_image_proj', 'tango_get_dev_attr_list', 'tango_get_dev_black_box', 'tango_get_dev_cmd_list', 'tango_get_dev_status', 'tango_get_dev_timeout', 'tango_get_error_stack', 'tango_open_device', 'tango_ping_device', 'tango_read_attribute', 'tango_read_attributes', 'tango_reload_dev_interface', 'tango_resume_attr_monitor', 'tango_set_attr_monitor_period', 'tango_set_dev_timeout', 'tango_start_attr_monitor', 'tango_stop_attr_monitor', 'tango_suspend_attr_monitor', 'tango_write_attribute', 'tango_write_attributes', 'tanh', 'TDMAddChannel', 'TDMAddGroup', 'TDMAppendDataValues', 'TDMAppendDataValuesTime', 'TDMChannelPropertyExists', 'TDMCloseChannel', 'TDMCloseFile', 'TDMCloseGroup', 'TDMCreateChannelProperty', 'TDMCreateFile', 'TDMCreateFileProperty', 'TDMCreateGroupProperty', 'TDMFilePropertyExists', 'TDMGetChannelPropertyNames', 'TDMGetChannelPropertyNum', 'TDMGetChannelPropertyStr', 'TDMGetChannelPropertyTime', 'TDMGetChannelPropertyType', 'TDMGetChannels', 'TDMGetChannelStringPropertyLen', 'TDMGetDataType', 'TDMGetDataValues', 'TDMGetDataValuesTime', 'TDMGetFilePropertyNames', 'TDMGetFilePropertyNum', 'TDMGetFilePropertyStr', 'TDMGetFilePropertyTime', 'TDMGetFilePropertyType', 'TDMGetFileStringPropertyLen', 'TDMGetGroupPropertyNames', 'TDMGetGroupPropertyNum', 'TDMGetGroupPropertyStr', 'TDMGetGroupPropertyTime', 'TDMGetGroupPropertyType', 'TDMGetGroups', 'TDMGetGroupStringPropertyLen', 'TDMGetLibraryErrorDescription', 'TDMGetNumChannelProperties', 'TDMGetNumChannels', 'TDMGetNumDataValues', 'TDMGetNumFileProperties', 'TDMGetNumGroupProperties', 'TDMGetNumGroups', 'TDMGroupPropertyExists', 'TDMOpenFile', 'TDMOpenFileEx', 'TDMRemoveChannel', 'TDMRemoveGroup', 'TDMReplaceDataValues', 'TDMReplaceDataValuesTime', 'TDMSaveFile', 'TDMSetChannelPropertyNum', 'TDMSetChannelPropertyStr', 'TDMSetChannelPropertyTime', 'TDMSetDataValues', 'TDMSetDataValuesTime', 'TDMSetFilePropertyNum', 'TDMSetFilePropertyStr', 'TDMSetFilePropertyTime', 'TDMSetGroupPropertyNum', 'TDMSetGroupPropertyStr', 'TDMSetGroupPropertyTime', 'TextEncodingCode', 'TextEncodingName', 'TextFile', 'ThreadGroupCreate', 'ThreadGroupGetDF', 'ThreadGroupGetDFR', 'ThreadGroupRelease', 'ThreadGroupWait', 'ThreadProcessorCount', 'ThreadReturnValue', 'ticks', 'time', 'TraceFromPixel', 'TraceInfo', 'TraceNameList', 'TraceNameToWaveRef', 'TrimString', 'trunc', 'UniqueName', 'UnPadString', 'UnsetEnvironmentVariable', 'UpperStr', 'URLDecode', 'URLEncode', 'VariableList', 'Variance', 'vcsr', 'viAssertIntrSignal', 'viAssertTrigger', 
        'viAssertUtilSignal', 'viClear', 'viClose', 'viDisableEvent',
        'viDiscardEvents', 'viEnableEvent', 'viFindNext', 'viFindRsrc',
        'viGetAttribute', 'viGetAttributeString', 'viGpibCommand',
        'viGpibControlATN', 'viGpibControlREN', 'viGpibPassControl',
        'viGpibSendIFC', 'viIn8', 'viIn16', 'viIn32', 'viLock',
        'viMapAddress', 'viMapTrigger', 'viMemAlloc', 'viMemFree',
        'viMoveIn8', 'viMoveIn16', 'viMoveIn32', 'viMoveOut8', 'viMoveOut16',
        'viMoveOut32', 'viOpen', 'viOpenDefaultRM', 'viOut8', 'viOut16',
        'viOut32', 'viPeek8', 'viPeek16', 'viPeek32', 'viPoke8', 'viPoke16',
        'viPoke32', 'viRead', 'viReadSTB', 'viSetAttribute',
        'viSetAttributeString', 'viStatusDesc', 'viTerminate', 'viUnlock',
        'viUnmapAddress', 'viUnmapTrigger', 'viUsbControlIn',
        'viUsbControlOut', 'viVxiCommandQuery', 'viWaitOnEvent', 'viWrite',
        'VoigtFunc', 'VoigtPeak', 'WaveCRC', 'WaveDims', 'WaveExists',
        'WaveHash', 'WaveInfo', 'WaveList', 'WaveMax', 'WaveMin', 'WaveName',
        'WaveRefIndexed', 'WaveRefIndexedDFR', 'WaveRefsEqual',
        'WaveRefWaveToList', 'WaveTextEncoding', 'WaveType', 'WaveUnits',
        'WhichListItem', 'WinList', 'WinName', 'WinRecreation', 'WinType',
        'wnoise', 'xcsr', 'XWaveName', 'XWaveRefFromTrace', 'x2pnt', 'zcsr',
        'ZernikeR', 'zeromq_client_connect', 'zeromq_client_recv',
        'zeromq_client_send', 'zeromq_handler_start', 'zeromq_handler_stop',
        'zeromq_server_bind', 'zeromq_server_recv', 'zeromq_server_send',
        'zeromq_set', 'zeromq_stop', 'zeromq_test_callfunction',
        'zeromq_test_serializeWave', 'zeta'
    )

    tokens = {
        'root': [
            (r'//.*$', Comment.Single),
            (r'"([^"\\]|\\.)*"', String),
            # Flow Control.
            (words(flowControl, prefix=r'\b', suffix=r'\b'), Keyword),
            # Types.
            (words(types, prefix=r'\b', suffix=r'\b'), Keyword.Type),
            # Keywords.
            (words(keywords, prefix=r'\b', suffix=r'\b'), Keyword.Reserved),
            # Built-in operations.
            (words(operations, prefix=r'\b', suffix=r'\b'), Name.Class),
            # Built-in functions.
            (words(functions, prefix=r'\b', suffix=r'\b'), Name.Function),
            # Compiler directives.
            (r'^#(include|pragma|define|undef|ifdef|ifndef|if|elif|else|endif)',
             Name.Decorator),
            (r'[^a-z"/]+$', Text),
            (r'.', Text),
        ],
    }
pygments-2.11.2/pygments/lexers/devicetree.py0000644000175000017500000000765514165547207021204 0ustar carstencarsten"""
    pygments.lexers.devicetree
    ~~~~~~~~~~~~~~~~~~~~~~~~~~

    Lexers for Devicetree language.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, bygroups, include, default, words
from pygments.token import Comment, Keyword, Name, Number, Operator, \
    Punctuation, String, Text, Whitespace

__all__ = ['DevicetreeLexer']


class DevicetreeLexer(RegexLexer):
    """
    Lexer for `Devicetree <https://www.devicetree.org/>`_ files.
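
    A minimal usage sketch (``highlight`` and ``HtmlFormatter`` are the
    standard Pygments API; the one-line source below is made up for
    illustration)::

        from pygments import highlight
        from pygments.formatters import HtmlFormatter
        from pygments.lexers import DevicetreeLexer

        src = '/ { compatible = "acme,board"; };'
        print(highlight(src, DevicetreeLexer(), HtmlFormatter()))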
    .. versionadded:: 2.7
    """

    name = 'Devicetree'
    aliases = ['devicetree', 'dts']
    filenames = ['*.dts', '*.dtsi']
    mimetypes = ['text/x-c']

    #: optional Whitespace or /*...*/ style comment
    _ws = r'\s*(?:/[*][^*/]*?[*]/\s*)*'

    tokens = {
        'macro': [
            # Include preprocessor directives (C style):
            (r'(#include)(' + _ws + r')([^\n]+)',
             bygroups(Comment.Preproc, Comment.Multiline, Comment.PreprocFile)),
            # Define preprocessor directives (C style):
            (r'(#define)(' + _ws + r')([^\n]+)',
             bygroups(Comment.Preproc, Comment.Multiline, Comment.Preproc)),
            # devicetree style with file:
            (r'(/[^*/{]+/)(' + _ws + r')("[^\n{]+")',
             bygroups(Comment.Preproc, Comment.Multiline, Comment.PreprocFile)),
            # devicetree style with property:
            (r'(/[^*/{]+/)(' + _ws + r')([^\n;{]*)([;]?)',
             bygroups(Comment.Preproc, Comment.Multiline, Comment.Preproc,
                      Punctuation)),
        ],
        'whitespace': [
            (r'\n', Whitespace),
            (r'\s+', Whitespace),
            (r'\\\n', Text),  # line continuation
            (r'//(\n|[\w\W]*?[^\\]\n)', Comment.Single),
            (r'/(\\\n)?[*][\w\W]*?[*](\\\n)?/', Comment.Multiline),
            # Open until EOF, so no ending delimiter
            (r'/(\\\n)?[*][\w\W]*', Comment.Multiline),
        ],
        'statements': [
            (r'(L?)(")', bygroups(String.Affix, String), 'string'),
            (r'0x[0-9a-fA-F]+', Number.Hex),
            (r'\d+', Number.Integer),
            (r'([^\s{}/*]*)(\s*)(:)', bygroups(Name.Label, Text, Punctuation),
             '#pop'),
            (words(('compatible', 'model', 'phandle', 'status',
                    '#address-cells', '#size-cells', 'reg', 'virtual-reg',
                    'ranges', 'dma-ranges', 'device_type', 'name'),
                   suffix=r'\b'), Keyword.Reserved),
            (r'([~!%^&*+=|?:<>/#-])', Operator),
            (r'[()\[\]{},.]', Punctuation),
            (r'[a-zA-Z_][\w-]*(?=(?:\s*,\s*[a-zA-Z_][\w-]*|(?:' + _ws +
             r'))*\s*[=;])', Name),
            (r'[a-zA-Z_]\w*', Name.Attribute),
        ],
        'root': [
            include('whitespace'),
            include('macro'),

            # Nodes
            (r'([^/*@\s&]+|/)(@?)((?:0x)?[0-9a-fA-F,]*)(' + _ws + r')(\{)',
             bygroups(Name.Function, Operator, Number.Integer,
                      Comment.Multiline, Punctuation), 'node'),

            default('statement'),
        ],
        'statement': [
            include('whitespace'),
            include('statements'),
            (';', Punctuation, '#pop'),
        ],
        'node': [
            include('whitespace'),
            include('macro'),

            (r'([^/*@\s&]+|/)(@?)((?:0x)?[0-9a-fA-F,]*)(' + _ws + r')(\{)',
             bygroups(Name.Function, Operator, Number.Integer,
                      Comment.Multiline, Punctuation), '#push'),

            include('statements'),
            (r'\};', Punctuation, '#pop'),
            (';', Punctuation),
        ],
        'string': [
            (r'"', String, '#pop'),
            (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|'
             r'u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})', String.Escape),
            (r'[^\\"\n]+', String),  # all other characters
            (r'\\\n', String),  # line continuation
            (r'\\', String),  # stray backslash
        ],
    }
pygments-2.11.2/pygments/lexers/promql.py0000644000175000017500000001120314165547207020367 0ustar carstencarsten"""
    pygments.lexers.promql
    ~~~~~~~~~~~~~~~~~~~~~~

    Lexer for Prometheus Query Language.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, bygroups, default, words
from pygments.token import (
    Comment,
    Keyword,
    Name,
    Number,
    Operator,
    Punctuation,
    String,
    Whitespace,
)

__all__ = ["PromQLLexer"]


class PromQLLexer(RegexLexer):
    """
    For `PromQL <https://prometheus.io/docs/prometheus/latest/querying/basics/>`_ queries.

    For details about the grammar see:
    https://github.com/prometheus/prometheus/tree/master/promql/parser
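
    A minimal usage sketch (standard Pygments API; the query string is an
    illustrative example, not taken from the test suite)::

        from pygments import highlight
        from pygments.formatters import TerminalFormatter
        from pygments.lexers import PromQLLexer

        q = 'sum by (status) (rate(http_requests_total{job="api"}[5m]))'
        print(highlight(q, PromQLLexer(), TerminalFormatter()))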
    .. versionadded:: 2.7
    """

    name = "PromQL"
    aliases = ["promql"]
    filenames = ["*.promql"]

    base_keywords = (
        words(
            (
                "bool",
                "by",
                "group_left",
                "group_right",
                "ignoring",
                "offset",
                "on",
                "without",
            ),
            suffix=r"\b",
        ),
        Keyword,
    )

    aggregator_keywords = (
        words(
            (
                "sum",
                "min",
                "max",
                "avg",
                "group",
                "stddev",
                "stdvar",
                "count",
                "count_values",
                "bottomk",
                "topk",
                "quantile",
            ),
            suffix=r"\b",
        ),
        Keyword,
    )

    function_keywords = (
        words(
            (
                "abs",
                "absent",
                "absent_over_time",
                "avg_over_time",
                "ceil",
                "changes",
                "clamp_max",
                "clamp_min",
                "count_over_time",
                "day_of_month",
                "day_of_week",
                "days_in_month",
                "delta",
                "deriv",
                "exp",
                "floor",
                "histogram_quantile",
                "holt_winters",
                "hour",
                "idelta",
                "increase",
                "irate",
                "label_join",
                "label_replace",
                "ln",
                "log10",
                "log2",
                "max_over_time",
                "min_over_time",
                "minute",
                "month",
                "predict_linear",
                "quantile_over_time",
                "rate",
                "resets",
                "round",
                "scalar",
                "sort",
                "sort_desc",
                "sqrt",
                "stddev_over_time",
                "stdvar_over_time",
                "sum_over_time",
                "time",
                "timestamp",
                "vector",
                "year",
            ),
            suffix=r"\b",
        ),
        Keyword.Reserved,
    )

    tokens = {
        "root": [
            (r"\n", Whitespace),
            (r"\s+", Whitespace),
            (r",", Punctuation),
            # Keywords
            base_keywords,
            aggregator_keywords,
            function_keywords,
            # Offsets
            (r"[1-9][0-9]*[smhdwy]", String),
            # Numbers
            (r"-?[0-9]+\.[0-9]+", Number.Float),
            (r"-?[0-9]+", Number.Integer),
            # Comments
            (r"#.*?$", Comment.Single),
            # Operators
            (r"(\+|\-|\*|\/|\%|\^)", Operator),
            (r"==|!=|>=|<=|<|>", Operator),
            (r"and|or|unless", Operator.Word),
            # Metrics
            (r"[_a-zA-Z][a-zA-Z0-9_]+", Name.Variable),
            # Params
            (r'(["\'])(.*?)(["\'])', bygroups(Punctuation, String, Punctuation)),
            # Other states
            (r"\(", Operator, "function"),
            (r"\)", Operator),
            (r"\{", Punctuation, "labels"),
            (r"\[", Punctuation, "range"),
        ],
        "labels": [
            (r"\}", Punctuation, "#pop"),
            (r"\n", Whitespace),
            (r"\s+", Whitespace),
            (r",", Punctuation),
            (r'([_a-zA-Z][a-zA-Z0-9_]*?)(\s*?)(=~|!=|=|!~)(\s*?)("|\')(.*?)("|\')',
             bygroups(Name.Label, Whitespace, Operator, Whitespace,
                      Punctuation, String, Punctuation)),
        ],
        "range": [
            (r"\]", Punctuation, "#pop"),
            (r"[1-9][0-9]*[smhdwy]", String),
        ],
        "function": [
            (r"\)", Operator, "#pop"),
            (r"\(", Operator, "#push"),
            default("#pop"),
        ],
    }
pygments-2.11.2/pygments/lexers/_asy_builtins.py0000644000175000017500000006522714165547207021730 0ustar carstencarsten"""
    pygments.lexers._asy_builtins
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This file contains the asy-function names and asy-variable names of
    Asymptote.

    Do not edit the ASYFUNCNAME and ASYVARNAME sets by hand.
    TODO: perl/python script in Asymptote SVN similar to asy-list.pl but only
    for function and variable names.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
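
    These sets are consumed by the Asymptote lexer in
    ``pygments.lexers.graphics``; a sketch of the kind of rule they feed
    (the exact token rules live in graphics.py)::

        from pygments.lexer import words
        from pygments.token import Name
        from pygments.lexers._asy_builtins import ASYFUNCNAME

        # Match any known Asymptote function name as a whole word.
        rule = (words(ASYFUNCNAME, prefix=r'\b', suffix=r'\b'), Name.Function)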
""" ASYFUNCNAME = { 'AND', 'Arc', 'ArcArrow', 'ArcArrows', 'Arrow', 'Arrows', 'Automatic', 'AvantGarde', 'BBox', 'BWRainbow', 'BWRainbow2', 'Bar', 'Bars', 'BeginArcArrow', 'BeginArrow', 'BeginBar', 'BeginDotMargin', 'BeginMargin', 'BeginPenMargin', 'Blank', 'Bookman', 'Bottom', 'BottomTop', 'Bounds', 'Break', 'Broken', 'BrokenLog', 'Ceil', 'Circle', 'CircleBarIntervalMarker', 'Cos', 'Courier', 'CrossIntervalMarker', 'DefaultFormat', 'DefaultLogFormat', 'Degrees', 'Dir', 'DotMargin', 'DotMargins', 'Dotted', 'Draw', 'Drawline', 'Embed', 'EndArcArrow', 'EndArrow', 'EndBar', 'EndDotMargin', 'EndMargin', 'EndPenMargin', 'Fill', 'FillDraw', 'Floor', 'Format', 'Full', 'Gaussian', 'Gaussrand', 'Gaussrandpair', 'Gradient', 'Grayscale', 'Helvetica', 'Hermite', 'HookHead', 'InOutTicks', 'InTicks', 'J', 'Label', 'Landscape', 'Left', 'LeftRight', 'LeftTicks', 'Legend', 'Linear', 'Link', 'Log', 'LogFormat', 'Margin', 'Margins', 'Mark', 'MidArcArrow', 'MidArrow', 'NOT', 'NewCenturySchoolBook', 'NoBox', 'NoMargin', 'NoModifier', 'NoTicks', 'NoTicks3', 'NoZero', 'NoZeroFormat', 'None', 'OR', 'OmitFormat', 'OmitTick', 'OutTicks', 'Ox', 'Oy', 'Palatino', 'PaletteTicks', 'Pen', 'PenMargin', 'PenMargins', 'Pentype', 'Portrait', 'RadialShade', 'Rainbow', 'Range', 'Relative', 'Right', 'RightTicks', 'Rotate', 'Round', 'SQR', 'Scale', 'ScaleX', 'ScaleY', 'ScaleZ', 'Seascape', 'Shift', 'Sin', 'Slant', 'Spline', 'StickIntervalMarker', 'Straight', 'Symbol', 'Tan', 'TeXify', 'Ticks', 'Ticks3', 'TildeIntervalMarker', 'TimesRoman', 'Top', 'TrueMargin', 'UnFill', 'UpsideDown', 'Wheel', 'X', 'XEquals', 'XOR', 'XY', 'XYEquals', 'XYZero', 'XYgrid', 'XZEquals', 'XZZero', 'XZero', 'XZgrid', 'Y', 'YEquals', 'YXgrid', 'YZ', 'YZEquals', 'YZZero', 'YZero', 'YZgrid', 'Z', 'ZX', 'ZXgrid', 'ZYgrid', 'ZapfChancery', 'ZapfDingbats', '_cputime', '_draw', '_eval', '_image', '_labelpath', '_projection', '_strokepath', '_texpath', 'aCos', 'aSin', 'aTan', 'abort', 'abs', 'accel', 'acos', 'acosh', 'acot', 'acsc', 'add', 'addArrow', 'addMargins', 'addSaveFunction', 'addnode', 'addnodes', 'addpenarc', 'addpenline', 'addseg', 'adjust', 'alias', 'align', 'all', 'altitude', 'angabscissa', 'angle', 'angpoint', 'animate', 'annotate', 'anticomplementary', 'antipedal', 'apply', 'approximate', 'arc', 'arcarrowsize', 'arccircle', 'arcdir', 'arcfromcenter', 'arcfromfocus', 'arclength', 'arcnodesnumber', 'arcpoint', 'arcsubtended', 'arcsubtendedcenter', 'arctime', 'arctopath', 'array', 'arrow', 'arrow2', 'arrowbase', 'arrowbasepoints', 'arrowsize', 'asec', 'asin', 'asinh', 'ask', 'assert', 'asy', 'asycode', 'asydir', 'asyfigure', 'asyfilecode', 'asyinclude', 'asywrite', 'atan', 'atan2', 'atanh', 'atbreakpoint', 'atexit', 'atime', 'attach', 'attract', 'atupdate', 'autoformat', 'autoscale', 'autoscale3', 'axes', 'axes3', 'axialshade', 'axis', 'axiscoverage', 'azimuth', 'babel', 'background', 'bangles', 'bar', 'barmarksize', 'barsize', 'basealign', 'baseline', 'bbox', 'beep', 'begin', 'beginclip', 'begingroup', 'beginpoint', 'between', 'bevel', 'bezier', 'bezierP', 'bezierPP', 'bezierPPP', 'bezulate', 'bibliography', 'bibliographystyle', 'binarytree', 'binarytreeNode', 'binomial', 'binput', 'bins', 'bisector', 'bisectorpoint', 'blend', 'boutput', 'box', 'bqe', 'breakpoint', 'breakpoints', 'brick', 'buildRestoreDefaults', 'buildRestoreThunk', 'buildcycle', 'bulletcolor', 'canonical', 'canonicalcartesiansystem', 'cartesiansystem', 'case1', 'case2', 'case3', 'cbrt', 'cd', 'ceil', 'center', 'centerToFocus', 'centroid', 'cevian', 'change2', 'changecoordsys', 
'checkSegment', 'checkconditionlength', 'checker', 'checklengths', 'checkposition', 'checktriangle', 'choose', 'circle', 'circlebarframe', 'circlemarkradius', 'circlenodesnumber', 'circumcenter', 'circumcircle', 'clamped', 'clear', 'clip', 'clipdraw', 'close', 'cmyk', 'code', 'colatitude', 'collect', 'collinear', 'color', 'colorless', 'colors', 'colorspace', 'comma', 'compassmark', 'complement', 'complementary', 'concat', 'concurrent', 'cone', 'conic', 'conicnodesnumber', 'conictype', 'conj', 'connect', 'containmentTree', 'contains', 'contour', 'contour3', 'controlSpecifier', 'convert', 'coordinates', 'coordsys', 'copy', 'cos', 'cosh', 'cot', 'countIntersections', 'cputime', 'crop', 'cropcode', 'cross', 'crossframe', 'crosshatch', 'crossmarksize', 'csc', 'cubicroots', 'curabscissa', 'curlSpecifier', 'curpoint', 'currentarrow', 'currentexitfunction', 'currentmomarrow', 'currentpolarconicroutine', 'curve', 'cut', 'cutafter', 'cutbefore', 'cyclic', 'cylinder', 'debugger', 'deconstruct', 'defaultdir', 'defaultformat', 'defaultpen', 'defined', 'degenerate', 'degrees', 'delete', 'deletepreamble', 'determinant', 'diagonal', 'diamond', 'diffdiv', 'dir', 'dirSpecifier', 'dirtime', 'display', 'distance', 'divisors', 'do_overpaint', 'dot', 'dotframe', 'dotsize', 'downcase', 'draw', 'drawAll', 'drawDoubleLine', 'drawFermion', 'drawGhost', 'drawGluon', 'drawMomArrow', 'drawPhoton', 'drawScalar', 'drawVertex', 'drawVertexBox', 'drawVertexBoxO', 'drawVertexBoxX', 'drawVertexO', 'drawVertexOX', 'drawVertexTriangle', 'drawVertexTriangleO', 'drawVertexX', 'drawarrow', 'drawarrow2', 'drawline', 'drawtick', 'duplicate', 'elle', 'ellipse', 'ellipsenodesnumber', 'embed', 'embed3', 'empty', 'enclose', 'end', 'endScript', 'endclip', 'endgroup', 'endl', 'endpoint', 'endpoints', 'eof', 'eol', 'equation', 'equations', 'erase', 'erasestep', 'erf', 'erfc', 'error', 'errorbar', 'errorbars', 'eval', 'excenter', 'excircle', 'exit', 'exitXasyMode', 'exitfunction', 'exp', 'expfactors', 'expi', 'expm1', 'exradius', 'extend', 'extension', 'extouch', 'fabs', 'factorial', 'fermat', 'fft', 'fhorner', 'figure', 'file', 'filecode', 'fill', 'filldraw', 'filloutside', 'fillrule', 'filltype', 'find', 'finite', 'finiteDifferenceJacobian', 'firstcut', 'firstframe', 'fit', 'fit2', 'fixedscaling', 'floor', 'flush', 'fmdefaults', 'fmod', 'focusToCenter', 'font', 'fontcommand', 'fontsize', 'foot', 'format', 'frac', 'frequency', 'fromCenter', 'fromFocus', 'fspline', 'functionshade', 'gamma', 'generate_random_backtrace', 'generateticks', 'gergonne', 'getc', 'getint', 'getpair', 'getreal', 'getstring', 'gettriple', 'gluon', 'gouraudshade', 'graph', 'graphic', 'gray', 'grestore', 'grid', 'grid3', 'gsave', 'halfbox', 'hatch', 'hdiffdiv', 'hermite', 'hex', 'histogram', 'history', 'hline', 'hprojection', 'hsv', 'hyperbola', 'hyperbolanodesnumber', 'hyperlink', 'hypot', 'identity', 'image', 'incenter', 'incentral', 'incircle', 'increasing', 'incrementposition', 'indexedTransform', 'indexedfigure', 'initXasyMode', 'initdefaults', 'input', 'inradius', 'insert', 'inside', 'integrate', 'interactive', 'interior', 'interp', 'interpolate', 'intersect', 'intersection', 'intersectionpoint', 'intersectionpoints', 'intersections', 'intouch', 'inverse', 'inversion', 'invisible', 'is3D', 'isDuplicate', 'isogonal', 'isogonalconjugate', 'isotomic', 'isotomicconjugate', 'isparabola', 'italic', 'item', 'key', 'kurtosis', 'kurtosisexcess', 'label', 'labelaxis', 'labelmargin', 'labelpath', 'labels', 'labeltick', 'labelx', 'labelx3', 'labely', 'labely3', 'labelz', 
'labelz3', 'lastcut', 'latex', 'latitude', 'latticeshade', 'layer', 'layout', 'ldexp', 'leastsquares', 'legend', 'legenditem', 'length', 'lift', 'light', 'limits', 'line', 'linear', 'linecap', 'lineinversion', 'linejoin', 'linemargin', 'lineskip', 'linetype', 'linewidth', 'link', 'list', 'lm_enorm', 'lm_evaluate_default', 'lm_lmdif', 'lm_lmpar', 'lm_minimize', 'lm_print_default', 'lm_print_quiet', 'lm_qrfac', 'lm_qrsolv', 'locale', 'locate', 'locatefile', 'location', 'log', 'log10', 'log1p', 'logaxiscoverage', 'longitude', 'lookup', 'magnetize', 'makeNode', 'makedraw', 'makepen', 'map', 'margin', 'markangle', 'markangleradius', 'markanglespace', 'markarc', 'marker', 'markinterval', 'marknodes', 'markrightangle', 'markuniform', 'mass', 'masscenter', 'massformat', 'math', 'max', 'max3', 'maxbezier', 'maxbound', 'maxcoords', 'maxlength', 'maxratio', 'maxtimes', 'mean', 'medial', 'median', 'midpoint', 'min', 'min3', 'minbezier', 'minbound', 'minipage', 'minratio', 'mintimes', 'miterlimit', 'momArrowPath', 'momarrowsize', 'monotonic', 'multifigure', 'nativeformat', 'natural', 'needshipout', 'newl', 'newpage', 'newslide', 'newton', 'newtree', 'nextframe', 'nextnormal', 'nextpage', 'nib', 'nodabscissa', 'none', 'norm', 'normalvideo', 'notaknot', 'nowarn', 'numberpage', 'nurb', 'object', 'offset', 'onpath', 'opacity', 'opposite', 'orientation', 'orig_circlenodesnumber', 'orig_circlenodesnumber1', 'orig_draw', 'orig_ellipsenodesnumber', 'orig_ellipsenodesnumber1', 'orig_hyperbolanodesnumber', 'orig_parabolanodesnumber', 'origin', 'orthic', 'orthocentercenter', 'outformat', 'outline', 'outprefix', 'output', 'overloadedMessage', 'overwrite', 'pack', 'pad', 'pairs', 'palette', 'parabola', 'parabolanodesnumber', 'parallel', 'partialsum', 'path', 'path3', 'pattern', 'pause', 'pdf', 'pedal', 'periodic', 'perp', 'perpendicular', 'perpendicularmark', 'phantom', 'phi1', 'phi2', 'phi3', 'photon', 'piecewisestraight', 'point', 'polar', 'polarconicroutine', 'polargraph', 'polygon', 'postcontrol', 'postscript', 'pow10', 'ppoint', 'prc', 'prc0', 'precision', 'precontrol', 'prepend', 'print_random_addresses', 'project', 'projection', 'purge', 'pwhermite', 'quadrant', 'quadraticroots', 'quantize', 'quarticroots', 'quotient', 'radialshade', 'radians', 'radicalcenter', 'radicalline', 'radius', 'rand', 'randompath', 'rd', 'readline', 'realmult', 'realquarticroots', 'rectangle', 'rectangular', 'rectify', 'reflect', 'relabscissa', 'relative', 'relativedistance', 'reldir', 'relpoint', 'reltime', 'remainder', 'remark', 'removeDuplicates', 'rename', 'replace', 'report', 'resetdefaultpen', 'restore', 'restoredefaults', 'reverse', 'reversevideo', 'rf', 'rfind', 'rgb', 'rgba', 'rgbint', 'rms', 'rotate', 'rotateO', 'rotation', 'round', 'roundbox', 'roundedpath', 'roundrectangle', 'samecoordsys', 'sameside', 'sample', 'save', 'savedefaults', 'saveline', 'scale', 'scale3', 'scaleO', 'scaleT', 'scaleless', 'scientific', 'search', 'searchtree', 'sec', 'secondaryX', 'secondaryY', 'seconds', 'section', 'sector', 'seek', 'seekeof', 'segment', 'sequence', 'setpens', 'sgn', 'sgnd', 'sharpangle', 'sharpdegrees', 'shift', 'shiftless', 'shipout', 'shipout3', 'show', 'side', 'simeq', 'simpson', 'sin', 'single', 'sinh', 'size', 'size3', 'skewness', 'skip', 'slant', 'sleep', 'slope', 'slopefield', 'solve', 'solveBVP', 'sort', 'sourceline', 'sphere', 'split', 'sqrt', 'square', 'srand', 'standardizecoordsys', 'startScript', 'startTrembling', 'stdev', 'step', 'stickframe', 'stickmarksize', 'stickmarkspace', 'stop', 'straight', 'straightness', 
'string', 'stripdirectory', 'stripextension', 'stripfile', 'strokepath', 'subdivide', 'subitem', 'subpath', 'substr', 'sum', 'surface', 'symmedial', 'symmedian', 'system', 'tab', 'tableau', 'tan', 'tangent', 'tangential', 'tangents', 'tanh', 'tell', 'tensionSpecifier', 'tensorshade', 'tex', 'texcolor', 'texify', 'texpath', 'texpreamble', 'texreset', 'texshipout', 'texsize', 'textpath', 'thick', 'thin', 'tick', 'tickMax', 'tickMax3', 'tickMin', 'tickMin3', 'ticklabelshift', 'ticklocate', 'tildeframe', 'tildemarksize', 'tile', 'tiling', 'time', 'times', 'title', 'titlepage', 'topbox', 'transform', 'transformation', 'transpose', 'tremble', 'trembleFuzz', 'tremble_circlenodesnumber', 'tremble_circlenodesnumber1', 'tremble_draw', 'tremble_ellipsenodesnumber', 'tremble_ellipsenodesnumber1', 'tremble_hyperbolanodesnumber', 'tremble_marknodes', 'tremble_markuniform', 'tremble_parabolanodesnumber', 'triangle', 'triangleAbc', 'triangleabc', 'triangulate', 'tricoef', 'tridiagonal', 'trilinear', 'trim', 'trueMagnetize', 'truepoint', 'tube', 'uncycle', 'unfill', 'uniform', 'unit', 'unitrand', 'unitsize', 'unityroot', 'unstraighten', 'upcase', 'updatefunction', 'uperiodic', 'upscale', 'uptodate', 'usepackage', 'usersetting', 'usetypescript', 'usleep', 'value', 'variance', 'variancebiased', 'vbox', 'vector', 'vectorfield', 'verbatim', 'view', 'vline', 'vperiodic', 'vprojection', 'warn', 'warning', 'windingnumber', 'write', 'xaxis', 'xaxis3', 'xaxis3At', 'xaxisAt', 'xequals', 'xinput', 'xlimits', 'xoutput', 'xpart', 'xscale', 'xscaleO', 'xtick', 'xtick3', 'xtrans', 'yaxis', 'yaxis3', 'yaxis3At', 'yaxisAt', 'yequals', 'ylimits', 'ypart', 'yscale', 'yscaleO', 'ytick', 'ytick3', 'ytrans', 'zaxis3', 'zaxis3At', 'zero', 'zero3', 'zlimits', 'zpart', 'ztick', 'ztick3', 'ztrans' } ASYVARNAME = { 'AliceBlue', 'Align', 'Allow', 'AntiqueWhite', 'Apricot', 'Aqua', 'Aquamarine', 'Aspect', 'Azure', 'BeginPoint', 'Beige', 'Bisque', 'Bittersweet', 'Black', 'BlanchedAlmond', 'Blue', 'BlueGreen', 'BlueViolet', 'Both', 'Break', 'BrickRed', 'Brown', 'BurlyWood', 'BurntOrange', 'CCW', 'CW', 'CadetBlue', 'CarnationPink', 'Center', 'Centered', 'Cerulean', 'Chartreuse', 'Chocolate', 'Coeff', 'Coral', 'CornflowerBlue', 'Cornsilk', 'Crimson', 'Crop', 'Cyan', 'Dandelion', 'DarkBlue', 'DarkCyan', 'DarkGoldenrod', 'DarkGray', 'DarkGreen', 'DarkKhaki', 'DarkMagenta', 'DarkOliveGreen', 'DarkOrange', 'DarkOrchid', 'DarkRed', 'DarkSalmon', 'DarkSeaGreen', 'DarkSlateBlue', 'DarkSlateGray', 'DarkTurquoise', 'DarkViolet', 'DeepPink', 'DeepSkyBlue', 'DefaultHead', 'DimGray', 'DodgerBlue', 'Dotted', 'Draw', 'E', 'ENE', 'EPS', 'ESE', 'E_Euler', 'E_PC', 'E_RK2', 'E_RK3BS', 'Emerald', 'EndPoint', 'Euler', 'Fill', 'FillDraw', 'FireBrick', 'FloralWhite', 'ForestGreen', 'Fuchsia', 'Gainsboro', 'GhostWhite', 'Gold', 'Goldenrod', 'Gray', 'Green', 'GreenYellow', 'Honeydew', 'HookHead', 'Horizontal', 'HotPink', 'I', 'IgnoreAspect', 'IndianRed', 'Indigo', 'Ivory', 'JOIN_IN', 'JOIN_OUT', 'JungleGreen', 'Khaki', 'LM_DWARF', 'LM_MACHEP', 'LM_SQRT_DWARF', 'LM_SQRT_GIANT', 'LM_USERTOL', 'Label', 'Lavender', 'LavenderBlush', 'LawnGreen', 'LeftJustified', 'LeftSide', 'LemonChiffon', 'LightBlue', 'LightCoral', 'LightCyan', 'LightGoldenrodYellow', 'LightGreen', 'LightGrey', 'LightPink', 'LightSalmon', 'LightSeaGreen', 'LightSkyBlue', 'LightSlateGray', 'LightSteelBlue', 'LightYellow', 'Lime', 'LimeGreen', 'Linear', 'Linen', 'Log', 'Logarithmic', 'Magenta', 'Mahogany', 'Mark', 'MarkFill', 'Maroon', 'Max', 'MediumAquamarine', 'MediumBlue', 'MediumOrchid', 
'MediumPurple', 'MediumSeaGreen', 'MediumSlateBlue', 'MediumSpringGreen', 'MediumTurquoise', 'MediumVioletRed', 'Melon', 'MidPoint', 'MidnightBlue', 'Min', 'MintCream', 'MistyRose', 'Moccasin', 'Move', 'MoveQuiet', 'Mulberry', 'N', 'NE', 'NNE', 'NNW', 'NW', 'NavajoWhite', 'Navy', 'NavyBlue', 'NoAlign', 'NoCrop', 'NoFill', 'NoSide', 'OldLace', 'Olive', 'OliveDrab', 'OliveGreen', 'Orange', 'OrangeRed', 'Orchid', 'Ox', 'Oy', 'PC', 'PaleGoldenrod', 'PaleGreen', 'PaleTurquoise', 'PaleVioletRed', 'PapayaWhip', 'Peach', 'PeachPuff', 'Periwinkle', 'Peru', 'PineGreen', 'Pink', 'Plum', 'PowderBlue', 'ProcessBlue', 'Purple', 'RK2', 'RK3', 'RK3BS', 'RK4', 'RK5', 'RK5DP', 'RK5F', 'RawSienna', 'Red', 'RedOrange', 'RedViolet', 'Rhodamine', 'RightJustified', 'RightSide', 'RosyBrown', 'RoyalBlue', 'RoyalPurple', 'RubineRed', 'S', 'SE', 'SSE', 'SSW', 'SW', 'SaddleBrown', 'Salmon', 'SandyBrown', 'SeaGreen', 'Seashell', 'Sepia', 'Sienna', 'Silver', 'SimpleHead', 'SkyBlue', 'SlateBlue', 'SlateGray', 'Snow', 'SpringGreen', 'SteelBlue', 'Suppress', 'SuppressQuiet', 'Tan', 'TeXHead', 'Teal', 'TealBlue', 'Thistle', 'Ticksize', 'Tomato', 'Turquoise', 'UnFill', 'VERSION', 'Value', 'Vertical', 'Violet', 'VioletRed', 'W', 'WNW', 'WSW', 'Wheat', 'White', 'WhiteSmoke', 'WildStrawberry', 'XYAlign', 'YAlign', 'Yellow', 'YellowGreen', 'YellowOrange', 'addpenarc', 'addpenline', 'align', 'allowstepping', 'angularsystem', 'animationdelay', 'appendsuffix', 'arcarrowangle', 'arcarrowfactor', 'arrow2sizelimit', 'arrowangle', 'arrowbarb', 'arrowdir', 'arrowfactor', 'arrowhookfactor', 'arrowlength', 'arrowsizelimit', 'arrowtexfactor', 'authorpen', 'axis', 'axiscoverage', 'axislabelfactor', 'background', 'backgroundcolor', 'backgroundpen', 'barfactor', 'barmarksizefactor', 'basealign', 'baselinetemplate', 'beveljoin', 'bigvertexpen', 'bigvertexsize', 'black', 'blue', 'bm', 'bottom', 'bp', 'brown', 'bullet', 'byfoci', 'byvertices', 'camerafactor', 'chartreuse', 'circlemarkradiusfactor', 'circlenodesnumberfactor', 'circleprecision', 'circlescale', 'cm', 'codefile', 'codepen', 'codeskip', 'colorPen', 'coloredNodes', 'coloredSegments', 'conditionlength', 'conicnodesfactor', 'count', 'cputimeformat', 'crossmarksizefactor', 'currentcoordsys', 'currentlight', 'currentpatterns', 'currentpen', 'currentpicture', 'currentposition', 'currentprojection', 'curvilinearsystem', 'cuttings', 'cyan', 'darkblue', 'darkbrown', 'darkcyan', 'darkgray', 'darkgreen', 'darkgrey', 'darkmagenta', 'darkolive', 'darkred', 'dashdotted', 'dashed', 'datepen', 'dateskip', 'debuggerlines', 'debugging', 'deepblue', 'deepcyan', 'deepgray', 'deepgreen', 'deepgrey', 'deepmagenta', 'deepred', 'default', 'defaultControl', 'defaultS', 'defaultbackpen', 'defaultcoordsys', 'defaultfilename', 'defaultformat', 'defaultmassformat', 'defaultpen', 'diagnostics', 'differentlengths', 'dot', 'dotfactor', 'dotframe', 'dotted', 'doublelinepen', 'doublelinespacing', 'down', 'duplicateFuzz', 'ellipsenodesnumberfactor', 'eps', 'epsgeo', 'epsilon', 'evenodd', 'extendcap', 'fermionpen', 'figureborder', 'figuremattpen', 'firstnode', 'firststep', 'foregroundcolor', 'fuchsia', 'fuzz', 'gapfactor', 'ghostpen', 'gluonamplitude', 'gluonpen', 'gluonratio', 'gray', 'green', 'grey', 'hatchepsilon', 'havepagenumber', 'heavyblue', 'heavycyan', 'heavygray', 'heavygreen', 'heavygrey', 'heavymagenta', 'heavyred', 'hline', 'hwratio', 'hyperbolanodesnumberfactor', 'identity4', 'ignore', 'inXasyMode', 'inch', 'inches', 'includegraphicscommand', 'inf', 'infinity', 'institutionpen', 'intMax', 'intMin', 
'invert', 'invisible', 'itempen', 'itemskip', 'itemstep', 'labelmargin', 'landscape', 'lastnode', 'left', 'legendhskip', 'legendlinelength', 'legendmargin', 'legendmarkersize', 'legendmaxrelativewidth', 'legendvskip', 'lightblue', 'lightcyan', 'lightgray', 'lightgreen', 'lightgrey', 'lightmagenta', 'lightolive', 'lightred', 'lightyellow', 'linemargin', 'lm_infmsg', 'lm_shortmsg', 'longdashdotted', 'longdashed', 'magenta', 'magneticPoints', 'magneticRadius', 'mantissaBits', 'markangleradius', 'markangleradiusfactor', 'markanglespace', 'markanglespacefactor', 'mediumblue', 'mediumcyan', 'mediumgray', 'mediumgreen', 'mediumgrey', 'mediummagenta', 'mediumred', 'mediumyellow', 'middle', 'minDistDefault', 'minblockheight', 'minblockwidth', 'mincirclediameter', 'minipagemargin', 'minipagewidth', 'minvertexangle', 'miterjoin', 'mm', 'momarrowfactor', 'momarrowlength', 'momarrowmargin', 'momarrowoffset', 'momarrowpen', 'monoPen', 'morepoints', 'nCircle', 'newbulletcolor', 'ngraph', 'nil', 'nmesh', 'nobasealign', 'nodeMarginDefault', 'nodesystem', 'nomarker', 'nopoint', 'noprimary', 'nullpath', 'nullpen', 'numarray', 'ocgindex', 'oldbulletcolor', 'olive', 'orange', 'origin', 'overpaint', 'page', 'pageheight', 'pagemargin', 'pagenumberalign', 'pagenumberpen', 'pagenumberposition', 'pagewidth', 'paleblue', 'palecyan', 'palegray', 'palegreen', 'palegrey', 'palemagenta', 'palered', 'paleyellow', 'parabolanodesnumberfactor', 'perpfactor', 'phi', 'photonamplitude', 'photonpen', 'photonratio', 'pi', 'pink', 'plain', 'plus', 'preamblenodes', 'pt', 'purple', 'r3', 'r4a', 'r4b', 'randMax', 'realDigits', 'realEpsilon', 'realMax', 'realMin', 'red', 'relativesystem', 'reverse', 'right', 'roundcap', 'roundjoin', 'royalblue', 'salmon', 'saveFunctions', 'scalarpen', 'sequencereal', 'settings', 'shipped', 'signedtrailingzero', 'solid', 'springgreen', 'sqrtEpsilon', 'squarecap', 'squarepen', 'startposition', 'stdin', 'stdout', 'stepfactor', 'stepfraction', 'steppagenumberpen', 'stepping', 'stickframe', 'stickmarksizefactor', 'stickmarkspacefactor', 'textpen', 'ticksize', 'tildeframe', 'tildemarksizefactor', 'tinv', 'titlealign', 'titlepagepen', 'titlepageposition', 'titlepen', 'titleskip', 'top', 'trailingzero', 'treeLevelStep', 'treeMinNodeWidth', 'treeNodeStep', 'trembleAngle', 'trembleFrequency', 'trembleRandom', 'tremblingMode', 'undefined', 'unitcircle', 'unitsquare', 'up', 'urlpen', 'urlskip', 'version', 'vertexpen', 'vertexsize', 'viewportmargin', 'viewportsize', 'vline', 'white', 'wye', 'xformStack', 'yellow', 'ylabelwidth', 'zerotickfuzz', 'zerowinding' } pygments-2.11.2/pygments/lexers/spice.py0000644000175000017500000000407714165547207020173 0ustar carstencarsten""" pygments.lexers.spice ~~~~~~~~~~~~~~~~~~~~~ Lexers for the Spice programming language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['SpiceLexer'] class SpiceLexer(RegexLexer): """ For `Spice `_ source. .. 
versionadded:: 2.11
    """

    name = 'Spice'
    filenames = ['*.spice']
    aliases = ['spice', 'spicelang']
    mimetypes = ['text/x-spice']
    flags = re.MULTILINE | re.UNICODE

    tokens = {
        'root': [
            (r'\n', Whitespace),
            (r'\s+', Whitespace),
            (r'\\\n', Text),  # line continuations
            (r'//(.*?)\n', Comment.Single),
            (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline),
            (r'(import|as)\b', Keyword.Namespace),
            (r'(f|p|type|struct|const)\b', Keyword.Declaration),
            (words(('if', 'else', 'for', 'foreach', 'while', 'break',
                    'continue', 'return', 'new', 'ext'), suffix=r'\b'),
             Keyword),
            (r'(true|false)\b', Keyword.Constant),
            (words(('printf', 'sizeof'), suffix=r'\b(\()'),
             bygroups(Name.Builtin, Punctuation)),
            (words(('double', 'int', 'short', 'long', 'byte', 'char',
                    'string', 'bool', 'dyn'), suffix=r'\b'), Keyword.Type),
            # double_lit
            (r'\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)', Number.Double),
            (r'\.\d+([eE][+\-]?\d+)?', Number.Double),
            # int_lit
            (r'(0|[1-9][0-9]*)', Number.Integer),
            # StringLiteral
            # -- interpreted_string_lit
            (r'"(\\\\|\\[^\\]|[^"\\])*"', String),
            # Tokens
            (r'(<<=|>>=|<<|>>|<=|>=|\+=|-=|\*=|/=|&&|\|\||&|\||\+\+|--|\%|==|!=|[.]{3}|[+\-*/&])',
             Operator),
            (r'[|<>=!()\[\]{}.,;:\?]', Punctuation),
            # identifier
            (r'[^\W\d]\w*', Name.Other),
        ]
    }
pygments-2.11.2/pygments/lexers/ezhil.py0000644000175000017500000000642214165547207020177 0ustar carstencarsten"""
    pygments.lexers.ezhil
    ~~~~~~~~~~~~~~~~~~~~~

    Pygments lexers for Ezhil language.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pygments.lexer import RegexLexer, include, words, bygroups
from pygments.token import Keyword, Text, Comment, Name
from pygments.token import String, Number, Punctuation, Operator, Whitespace

__all__ = ['EzhilLexer']


class EzhilLexer(RegexLexer):
    """
    Lexer for `Ezhil, a Tamil script-based programming language
    <http://ezhillang.org>`_

    .. versionadded:: 2.1
    """

    name = 'Ezhil'
    aliases = ['ezhil']
    filenames = ['*.n']
    mimetypes = ['text/x-ezhil']
    flags = re.MULTILINE | re.UNICODE

    # Refer to tamil.utf8.tamil_letters from open-tamil for a stricter
    # version of this. This much simpler version is close enough, and
    # includes combining marks.
    _TALETTERS = '[a-zA-Z_]|[\u0b80-\u0bff]'

    tokens = {
        'root': [
            include('keywords'),
            (r'#.*$', Comment.Single),
            (r'[@+/*,^\-%]|[!<>=]=?|&&?|\|\|?', Operator),
            ('இல்', Operator.Word),
            (words(('assert', 'max', 'min',
                    'நீளம்', 'சரம்_இடமாற்று', 'சரம்_கண்டுபிடி',
                    'பட்டியல்', 'பின்இணை', 'வரிசைப்படுத்து',
                    'எடு', 'தலைகீழ்', 'நீட்டிக்க', 'நுழைக்க', 'வை',
                    'கோப்பை_திற', 'கோப்பை_எழுது', 'கோப்பை_மூடு',
                    'pi', 'sin', 'cos', 'tan', 'sqrt', 'hypot', 'pow',
                    'exp', 'log', 'log10', 'exit',
                    ), suffix=r'\b'), Name.Builtin),
            (r'(True|False)\b', Keyword.Constant),
            (r'[^\S\n]+', Whitespace),
            include('identifier'),
            include('literal'),
            (r'[(){}\[\]:;.]', Punctuation),
        ],
        'keywords': [
            ('பதிப்பி|தேர்ந்தெடு|தேர்வு|ஏதேனில்|ஆனால்|இல்லைஆனால்|இல்லை|ஆக|ஒவ்வொன்றாக|இல்|வரை|செய்|முடியேனில்|பின்கொடு|முடி|நிரல்பாகம்|தொடர்|நிறுத்து|நிரல்பாகம்', Keyword),
        ],
        'identifier': [
            ('(?:'+_TALETTERS+')(?:[0-9]|'+_TALETTERS+')*', Name),
        ],
        'literal': [
            (r'".*?"', String),
            (r'(?u)\d+((\.\d*)?[eE][+-]?\d+|\.\d*)', Number.Float),
            (r'(?u)\d+', Number.Integer),
        ]
    }

    def analyse_text(text):
        """This language uses Tamil-script. We'll assume that if there's a
        decent amount of Tamil-characters, it's this language.
        This assumption is obviously horribly off if someone uses string
        literals in Tamil in another language."""
        if len(re.findall(r'[\u0b80-\u0bff]', text)) > 10:
            return 0.25

    def __init__(self, **options):
        super().__init__(**options)
        self.encoding = options.get('encoding', 'utf-8')
pygments-2.11.2/pygments/lexers/rnc.py0000644000175000017500000000365614165547207017652 0ustar carstencarsten"""
    pygments.lexers.rnc
    ~~~~~~~~~~~~~~~~~~~

    Lexer for Relax-NG Compact syntax

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Punctuation

__all__ = ['RNCCompactLexer']


class RNCCompactLexer(RegexLexer):
    """
    For `RelaxNG-compact <http://relaxng.org>`_ syntax.

    .. versionadded:: 2.2
    """

    name = 'Relax-NG Compact'
    aliases = ['rng-compact', 'rnc']
    filenames = ['*.rnc']

    tokens = {
        'root': [
            (r'namespace\b', Keyword.Namespace),
            (r'(?:default|datatypes)\b', Keyword.Declaration),
            (r'##.*$', Comment.Preproc),
            (r'#.*$', Comment.Single),
            (r'"[^"]*"', String.Double),
            # TODO single quoted strings and escape sequences outside of
            # double-quoted strings
            (r'(?:element|attribute|mixed)\b', Keyword.Declaration, 'variable'),
            (r'(text\b|xsd:[^ ]+)', Keyword.Type, 'maybe_xsdattributes'),
            (r'[,?&*=|~]|>>', Operator),
            (r'[(){}]', Punctuation),
            (r'.', Text),
        ],

        # a variable has been declared using `element` or `attribute`
        'variable': [
            (r'[^{]+', Name.Variable),
            (r'\{', Punctuation, '#pop'),
        ],

        # after an xsd: declaration there may be attributes
        'maybe_xsdattributes': [
            (r'\{', Punctuation, 'xsdattributes'),
            (r'\}', Punctuation, '#pop'),
            (r'.', Text),
        ],

        # attributes take the form { key1 = value1 key2 = value2 ... }
        'xsdattributes': [
            (r'[^ =}]', Name.Attribute),
            (r'=', Operator),
            (r'"[^"]*"', String.Double),
            (r'\}', Punctuation, '#pop'),
            (r'.', Text),
        ],
    }
pygments-2.11.2/pygments/lexers/bdd.py0000644000175000017500000000314114165547207017610 0ustar carstencarsten"""
    pygments.lexers.bdd
    ~~~~~~~~~~~~~~~~~~~

    Lexer for BDD (Behavior-driven development).

    More information:
    https://en.wikipedia.org/wiki/Behavior-driven_development

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, include
from pygments.token import Comment, Keyword, Name, String, Number, Text, \
    Punctuation, Whitespace

__all__ = ['BddLexer']


class BddLexer(RegexLexer):
    """
    Lexer for BDD (Behavior-driven development), which highlights not only
    keywords, but also comments, punctuation, strings, numbers, and variables.

    .. versionadded:: 2.11
    """

    name = 'Bdd'
    aliases = ['bdd']
    filenames = ['*.feature']
    mimetypes = ['text/x-bdd']

    step_keywords = (r'Given|When|Then|Add|And|Feature|Scenario Outline|'
                     r'Scenario|Background|Examples|But')

    tokens = {
        'comments': [
            (r'^\s*#.*$', Comment),
        ],
        'miscellaneous': [
            (r'(<|>|\[|\]|=|\||:|\(|\)|\{|\}|,|\.|;|-|_|\$)', Punctuation),
            (r'((?<=\<)[^\\>]+(?=\>))', Name.Variable),
            (r'"([^\"]*)"', String),
            (r'^@\S+', Name.Label),
        ],
        'numbers': [
            (r'(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number),
        ],
        'root': [
            (r'\n|\s+', Whitespace),
            (step_keywords, Keyword),
            include('comments'),
            include('miscellaneous'),
            include('numbers'),
            (r'\S+', Text),
        ]
    }

    def analyse_text(self, text):
        return
pygments-2.11.2/pygments/lexers/ncl.py0000644000175000017500000017473214165547207017642 0ustar carstencarsten"""
    pygments.lexers.ncl
    ~~~~~~~~~~~~~~~~~~~

    Lexers for NCAR Command Language.
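
    A quick usage sketch (everything here other than ``NCLLexer`` is the
    standard Pygments API; the one-line NCL snippet is illustrative)::

        from pygments import highlight
        from pygments.formatters import HtmlFormatter
        from pygments.lexers import NCLLexer

        print(highlight('x = sin(0.5)', NCLLexer(), HtmlFormatter()))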
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['NCLLexer'] class NCLLexer(RegexLexer): """ Lexer for NCL code. .. versionadded:: 2.2 """ name = 'NCL' aliases = ['ncl'] filenames = ['*.ncl'] mimetypes = ['text/ncl'] flags = re.MULTILINE tokens = { 'root': [ (r';.*\n', Comment), include('strings'), include('core'), (r'[a-zA-Z_]\w*', Name), include('nums'), (r'[\s]+', Text), ], 'core': [ # Statements (words(( 'begin', 'break', 'continue', 'create', 'defaultapp', 'do', 'else', 'end', 'external', 'exit', 'True', 'False', 'file', 'function', 'getvalues', 'graphic', 'group', 'if', 'list', 'load', 'local', 'new', '_Missing', 'Missing', 'noparent', 'procedure', 'quit', 'QUIT', 'Quit', 'record', 'return', 'setvalues', 'stop', 'then', 'while'), prefix=r'\b', suffix=r'\s*\b'), Keyword), # Data Types (words(( 'ubyte', 'uint', 'uint64', 'ulong', 'string', 'byte', 'character', 'double', 'float', 'integer', 'int64', 'logical', 'long', 'short', 'ushort', 'enumeric', 'numeric', 'snumeric'), prefix=r'\b', suffix=r'\s*\b'), Keyword.Type), # Operators (r'[\%^*+\-/<>]', Operator), # punctuation: (r'[\[\]():@$!&|.,\\{}]', Punctuation), (r'[=:]', Punctuation), # Intrinsics (words(( 'abs', 'acos', 'addfile', 'addfiles', 'all', 'angmom_atm', 'any', 'area_conserve_remap', 'area_hi2lores', 'area_poly_sphere', 'asciiread', 'asciiwrite', 'asin', 'atan', 'atan2', 'attsetvalues', 'avg', 'betainc', 'bin_avg', 'bin_sum', 'bw_bandpass_filter', 'cancor', 'cbinread', 'cbinwrite', 'cd_calendar', 'cd_inv_calendar', 'cdfbin_p', 'cdfbin_pr', 'cdfbin_s', 'cdfbin_xn', 'cdfchi_p', 'cdfchi_x', 'cdfgam_p', 'cdfgam_x', 'cdfnor_p', 'cdfnor_x', 'cdft_p', 'cdft_t', 'ceil', 'center_finite_diff', 'center_finite_diff_n', 'cfftb', 'cfftf', 'cfftf_frq_reorder', 'charactertodouble', 'charactertofloat', 'charactertointeger', 'charactertolong', 'charactertoshort', 'charactertostring', 'chartodouble', 'chartofloat', 'chartoint', 'chartointeger', 'chartolong', 'chartoshort', 'chartostring', 'chiinv', 'clear', 'color_index_to_rgba', 'conform', 'conform_dims', 'cos', 'cosh', 'count_unique_values', 'covcorm', 'covcorm_xy', 'craybinnumrec', 'craybinrecread', 'create_graphic', 'csa1', 'csa1d', 'csa1s', 'csa1x', 'csa1xd', 'csa1xs', 'csa2', 'csa2d', 'csa2l', 'csa2ld', 'csa2ls', 'csa2lx', 'csa2lxd', 'csa2lxs', 'csa2s', 'csa2x', 'csa2xd', 'csa2xs', 'csa3', 'csa3d', 'csa3l', 'csa3ld', 'csa3ls', 'csa3lx', 'csa3lxd', 'csa3lxs', 'csa3s', 'csa3x', 'csa3xd', 'csa3xs', 'csc2s', 'csgetp', 'css2c', 'cssetp', 'cssgrid', 'csstri', 'csvoro', 'cumsum', 'cz2ccm', 'datatondc', 'day_of_week', 'day_of_year', 'days_in_month', 'default_fillvalue', 'delete', 'depth_to_pres', 'destroy', 'determinant', 'dewtemp_trh', 'dgeevx_lapack', 'dim_acumrun_n', 'dim_avg', 'dim_avg_n', 'dim_avg_wgt', 'dim_avg_wgt_n', 'dim_cumsum', 'dim_cumsum_n', 'dim_gamfit_n', 'dim_gbits', 'dim_max', 'dim_max_n', 'dim_median', 'dim_median_n', 'dim_min', 'dim_min_n', 'dim_num', 'dim_num_n', 'dim_numrun_n', 'dim_pqsort', 'dim_pqsort_n', 'dim_product', 'dim_product_n', 'dim_rmsd', 'dim_rmsd_n', 'dim_rmvmean', 'dim_rmvmean_n', 'dim_rmvmed', 'dim_rmvmed_n', 'dim_spi_n', 'dim_standardize', 'dim_standardize_n', 'dim_stat4', 'dim_stat4_n', 'dim_stddev', 'dim_stddev_n', 'dim_sum', 'dim_sum_n', 'dim_sum_wgt', 'dim_sum_wgt_n', 'dim_variance', 'dim_variance_n', 'dimsizes', 
'doubletobyte', 'doubletochar', 'doubletocharacter', 'doubletofloat', 'doubletoint', 'doubletointeger', 'doubletolong', 'doubletoshort', 'dpres_hybrid_ccm', 'dpres_plevel', 'draw', 'draw_color_palette', 'dsgetp', 'dsgrid2', 'dsgrid2d', 'dsgrid2s', 'dsgrid3', 'dsgrid3d', 'dsgrid3s', 'dspnt2', 'dspnt2d', 'dspnt2s', 'dspnt3', 'dspnt3d', 'dspnt3s', 'dssetp', 'dtrend', 'dtrend_msg', 'dtrend_msg_n', 'dtrend_n', 'dtrend_quadratic', 'dtrend_quadratic_msg_n', 'dv2uvf', 'dv2uvg', 'dz_height', 'echo_off', 'echo_on', 'eof2data', 'eof_varimax', 'eofcor', 'eofcor_pcmsg', 'eofcor_ts', 'eofcov', 'eofcov_pcmsg', 'eofcov_ts', 'eofunc', 'eofunc_ts', 'eofunc_varimax', 'equiv_sample_size', 'erf', 'erfc', 'esacr', 'esacv', 'esccr', 'esccv', 'escorc', 'escorc_n', 'escovc', 'exit', 'exp', 'exp_tapersh', 'exp_tapersh_wgts', 'exp_tapershC', 'ezfftb', 'ezfftb_n', 'ezfftf', 'ezfftf_n', 'f2fosh', 'f2foshv', 'f2fsh', 'f2fshv', 'f2gsh', 'f2gshv', 'fabs', 'fbindirread', 'fbindirwrite', 'fbinnumrec', 'fbinread', 'fbinrecread', 'fbinrecwrite', 'fbinwrite', 'fft2db', 'fft2df', 'fftshift', 'fileattdef', 'filechunkdimdef', 'filedimdef', 'fileexists', 'filegrpdef', 'filevarattdef', 'filevarchunkdef', 'filevarcompressleveldef', 'filevardef', 'filevardimsizes', 'filwgts_lancos', 'filwgts_lanczos', 'filwgts_normal', 'floattobyte', 'floattochar', 'floattocharacter', 'floattoint', 'floattointeger', 'floattolong', 'floattoshort', 'floor', 'fluxEddy', 'fo2fsh', 'fo2fshv', 'fourier_info', 'frame', 'fspan', 'ftcurv', 'ftcurvd', 'ftcurvi', 'ftcurvp', 'ftcurvpi', 'ftcurvps', 'ftcurvs', 'ftest', 'ftgetp', 'ftkurv', 'ftkurvd', 'ftkurvp', 'ftkurvpd', 'ftsetp', 'ftsurf', 'g2fsh', 'g2fshv', 'g2gsh', 'g2gshv', 'gamma', 'gammainc', 'gaus', 'gaus_lobat', 'gaus_lobat_wgt', 'gc_aangle', 'gc_clkwise', 'gc_dangle', 'gc_inout', 'gc_latlon', 'gc_onarc', 'gc_pnt2gc', 'gc_qarea', 'gc_tarea', 'generate_2d_array', 'get_color_index', 'get_color_rgba', 'get_cpu_time', 'get_isolines', 'get_ncl_version', 'get_script_name', 'get_script_prefix_name', 'get_sphere_radius', 'get_unique_values', 'getbitsone', 'getenv', 'getfiledimsizes', 'getfilegrpnames', 'getfilepath', 'getfilevaratts', 'getfilevarchunkdimsizes', 'getfilevardims', 'getfilevardimsizes', 'getfilevarnames', 'getfilevartypes', 'getvaratts', 'getvardims', 'gradsf', 'gradsg', 'greg2jul', 'grid2triple', 'hlsrgb', 'hsvrgb', 'hydro', 'hyi2hyo', 'idsfft', 'igradsf', 'igradsg', 'ilapsf', 'ilapsg', 'ilapvf', 'ilapvg', 'ind', 'ind_resolve', 'int2p', 'int2p_n', 'integertobyte', 'integertochar', 'integertocharacter', 'integertoshort', 'inttobyte', 'inttochar', 'inttoshort', 'inverse_matrix', 'isatt', 'isbigendian', 'isbyte', 'ischar', 'iscoord', 'isdefined', 'isdim', 'isdimnamed', 'isdouble', 'isenumeric', 'isfile', 'isfilepresent', 'isfilevar', 'isfilevaratt', 'isfilevarcoord', 'isfilevardim', 'isfloat', 'isfunc', 'isgraphic', 'isint', 'isint64', 'isinteger', 'isleapyear', 'islogical', 'islong', 'ismissing', 'isnan_ieee', 'isnumeric', 'ispan', 'isproc', 'isshort', 'issnumeric', 'isstring', 'isubyte', 'isuint', 'isuint64', 'isulong', 'isunlimited', 'isunsigned', 'isushort', 'isvar', 'jul2greg', 'kmeans_as136', 'kolsm2_n', 'kron_product', 'lapsf', 'lapsg', 'lapvf', 'lapvg', 'latlon2utm', 'lclvl', 'lderuvf', 'lderuvg', 'linint1', 'linint1_n', 'linint2', 'linint2_points', 'linmsg', 'linmsg_n', 'linrood_latwgt', 'linrood_wgt', 'list_files', 'list_filevars', 'list_hlus', 'list_procfuncs', 'list_vars', 'ListAppend', 'ListCount', 'ListGetType', 'ListIndex', 'ListIndexFromName', 'ListPop', 'ListPush', 'ListSetType', 
'loadscript', 'local_max', 'local_min', 'log', 'log10', 'longtobyte', 'longtochar', 'longtocharacter', 'longtoint', 'longtointeger', 'longtoshort', 'lspoly', 'lspoly_n', 'mask', 'max', 'maxind', 'min', 'minind', 'mixed_layer_depth', 'mixhum_ptd', 'mixhum_ptrh', 'mjo_cross_coh2pha', 'mjo_cross_segment', 'moc_globe_atl', 'monthday', 'natgrid', 'natgridd', 'natgrids', 'ncargpath', 'ncargversion', 'ndctodata', 'ndtooned', 'new', 'NewList', 'ngezlogo', 'nggcog', 'nggetp', 'nglogo', 'ngsetp', 'NhlAddAnnotation', 'NhlAddData', 'NhlAddOverlay', 'NhlAddPrimitive', 'NhlAppGetDefaultParentId', 'NhlChangeWorkstation', 'NhlClassName', 'NhlClearWorkstation', 'NhlDataPolygon', 'NhlDataPolyline', 'NhlDataPolymarker', 'NhlDataToNDC', 'NhlDestroy', 'NhlDraw', 'NhlFrame', 'NhlFreeColor', 'NhlGetBB', 'NhlGetClassResources', 'NhlGetErrorObjectId', 'NhlGetNamedColorIndex', 'NhlGetParentId', 'NhlGetParentWorkstation', 'NhlGetWorkspaceObjectId', 'NhlIsAllocatedColor', 'NhlIsApp', 'NhlIsDataComm', 'NhlIsDataItem', 'NhlIsDataSpec', 'NhlIsTransform', 'NhlIsView', 'NhlIsWorkstation', 'NhlName', 'NhlNDCPolygon', 'NhlNDCPolyline', 'NhlNDCPolymarker', 'NhlNDCToData', 'NhlNewColor', 'NhlNewDashPattern', 'NhlNewMarker', 'NhlPalGetDefined', 'NhlRemoveAnnotation', 'NhlRemoveData', 'NhlRemoveOverlay', 'NhlRemovePrimitive', 'NhlSetColor', 'NhlSetDashPattern', 'NhlSetMarker', 'NhlUpdateData', 'NhlUpdateWorkstation', 'nice_mnmxintvl', 'nngetaspectd', 'nngetaspects', 'nngetp', 'nngetsloped', 'nngetslopes', 'nngetwts', 'nngetwtsd', 'nnpnt', 'nnpntd', 'nnpntend', 'nnpntendd', 'nnpntinit', 'nnpntinitd', 'nnpntinits', 'nnpnts', 'nnsetp', 'num', 'obj_anal_ic', 'omega_ccm', 'onedtond', 'overlay', 'paleo_outline', 'pdfxy_bin', 'poisson_grid_fill', 'pop_remap', 'potmp_insitu_ocn', 'prcwater_dp', 'pres2hybrid', 'pres_hybrid_ccm', 'pres_sigma', 'print', 'print_table', 'printFileVarSummary', 'printVarSummary', 'product', 'pslec', 'pslhor', 'pslhyp', 'qsort', 'rand', 'random_chi', 'random_gamma', 'random_normal', 'random_setallseed', 'random_uniform', 'rcm2points', 'rcm2rgrid', 'rdsstoi', 'read_colormap_file', 'reg_multlin', 'regcoef', 'regCoef_n', 'regline', 'relhum', 'replace_ieeenan', 'reshape', 'reshape_ind', 'rgba_to_color_index', 'rgbhls', 'rgbhsv', 'rgbyiq', 'rgrid2rcm', 'rhomb_trunc', 'rip_cape_2d', 'rip_cape_3d', 'round', 'rtest', 'runave', 'runave_n', 'set_default_fillvalue', 'set_sphere_radius', 'setfileoption', 'sfvp2uvf', 'sfvp2uvg', 'shaec', 'shagc', 'shgetnp', 'shgetp', 'shgrid', 'shorttobyte', 'shorttochar', 'shorttocharacter', 'show_ascii', 'shsec', 'shsetp', 'shsgc', 'shsgc_R42', 'sigma2hybrid', 'simpeq', 'simpne', 'sin', 'sindex_yrmo', 'sinh', 'sizeof', 'sleep', 'smth9', 'snindex_yrmo', 'solve_linsys', 'span_color_indexes', 'span_color_rgba', 'sparse_matrix_mult', 'spcorr', 'spcorr_n', 'specx_anal', 'specxy_anal', 'spei', 'sprintf', 'sprinti', 'sqrt', 'sqsort', 'srand', 'stat2', 'stat4', 'stat_medrng', 'stat_trim', 'status_exit', 'stdatmus_p2tdz', 'stdatmus_z2tdp', 'stddev', 'str_capital', 'str_concat', 'str_fields_count', 'str_get_cols', 'str_get_dq', 'str_get_field', 'str_get_nl', 'str_get_sq', 'str_get_tab', 'str_index_of_substr', 'str_insert', 'str_is_blank', 'str_join', 'str_left_strip', 'str_lower', 'str_match', 'str_match_ic', 'str_match_ic_regex', 'str_match_ind', 'str_match_ind_ic', 'str_match_ind_ic_regex', 'str_match_ind_regex', 'str_match_regex', 'str_right_strip', 'str_split', 'str_split_by_length', 'str_split_csv', 'str_squeeze', 'str_strip', 'str_sub_str', 'str_switch', 'str_upper', 'stringtochar', 
'stringtocharacter', 'stringtodouble', 'stringtofloat', 'stringtoint', 'stringtointeger', 'stringtolong', 'stringtoshort', 'strlen', 'student_t', 'sum', 'svd_lapack', 'svdcov', 'svdcov_sv', 'svdstd', 'svdstd_sv', 'system', 'systemfunc', 'tan', 'tanh', 'taper', 'taper_n', 'tdclrs', 'tdctri', 'tdcudp', 'tdcurv', 'tddtri', 'tdez2d', 'tdez3d', 'tdgetp', 'tdgrds', 'tdgrid', 'tdgtrs', 'tdinit', 'tditri', 'tdlbla', 'tdlblp', 'tdlbls', 'tdline', 'tdlndp', 'tdlnpa', 'tdlpdp', 'tdmtri', 'tdotri', 'tdpara', 'tdplch', 'tdprpa', 'tdprpi', 'tdprpt', 'tdsetp', 'tdsort', 'tdstri', 'tdstrs', 'tdttri', 'thornthwaite', 'tobyte', 'tochar', 'todouble', 'tofloat', 'toint', 'toint64', 'tointeger', 'tolong', 'toshort', 'tosigned', 'tostring', 'tostring_with_format', 'totype', 'toubyte', 'touint', 'touint64', 'toulong', 'tounsigned', 'toushort', 'trend_manken', 'tri_trunc', 'triple2grid', 'triple2grid2d', 'trop_wmo', 'ttest', 'typeof', 'undef', 'unique_string', 'update', 'ushorttoint', 'ut_calendar', 'ut_inv_calendar', 'utm2latlon', 'uv2dv_cfd', 'uv2dvf', 'uv2dvg', 'uv2sfvpf', 'uv2sfvpg', 'uv2vr_cfd', 'uv2vrdvf', 'uv2vrdvg', 'uv2vrf', 'uv2vrg', 'v5d_close', 'v5d_create', 'v5d_setLowLev', 'v5d_setUnits', 'v5d_write', 'v5d_write_var', 'variance', 'vhaec', 'vhagc', 'vhsec', 'vhsgc', 'vibeta', 'vinth2p', 'vinth2p_ecmwf', 'vinth2p_ecmwf_nodes', 'vinth2p_nodes', 'vintp2p_ecmwf', 'vr2uvf', 'vr2uvg', 'vrdv2uvf', 'vrdv2uvg', 'wavelet', 'wavelet_default', 'weibull', 'wgt_area_smooth', 'wgt_areaave', 'wgt_areaave2', 'wgt_arearmse', 'wgt_arearmse2', 'wgt_areasum2', 'wgt_runave', 'wgt_runave_n', 'wgt_vert_avg_beta', 'wgt_volave', 'wgt_volave_ccm', 'wgt_volrmse', 'wgt_volrmse_ccm', 'where', 'wk_smooth121', 'wmbarb', 'wmbarbmap', 'wmdrft', 'wmgetp', 'wmlabs', 'wmsetp', 'wmstnm', 'wmvect', 'wmvectmap', 'wmvlbl', 'wrf_avo', 'wrf_cape_2d', 'wrf_cape_3d', 'wrf_dbz', 'wrf_eth', 'wrf_helicity', 'wrf_ij_to_ll', 'wrf_interp_1d', 'wrf_interp_2d_xy', 'wrf_interp_3d_z', 'wrf_latlon_to_ij', 'wrf_ll_to_ij', 'wrf_omega', 'wrf_pvo', 'wrf_rh', 'wrf_slp', 'wrf_smooth_2d', 'wrf_td', 'wrf_tk', 'wrf_updraft_helicity', 'wrf_uvmet', 'wrf_virtual_temp', 'wrf_wetbulb', 'wrf_wps_close_int', 'wrf_wps_open_int', 'wrf_wps_rddata_int', 'wrf_wps_rdhead_int', 'wrf_wps_read_int', 'wrf_wps_write_int', 'write_matrix', 'write_table', 'yiqrgb', 'z2geouv', 'zonal_mpsi', 'addfiles_GetVar', 'advect_variable', 'area_conserve_remap_Wrap', 'area_hi2lores_Wrap', 'array_append_record', 'assignFillValue', 'byte2flt', 'byte2flt_hdf', 'calcDayAnomTLL', 'calcMonAnomLLLT', 'calcMonAnomLLT', 'calcMonAnomTLL', 'calcMonAnomTLLL', 'calculate_monthly_values', 'cd_convert', 'changeCase', 'changeCaseChar', 'clmDayTLL', 'clmDayTLLL', 'clmMon2clmDay', 'clmMonLLLT', 'clmMonLLT', 'clmMonTLL', 'clmMonTLLL', 'closest_val', 'copy_VarAtts', 'copy_VarCoords', 'copy_VarCoords_1', 'copy_VarCoords_2', 'copy_VarMeta', 'copyatt', 'crossp3', 'cshstringtolist', 'cssgrid_Wrap', 'dble2flt', 'decimalPlaces', 'delete_VarAtts', 'dim_avg_n_Wrap', 'dim_avg_wgt_n_Wrap', 'dim_avg_wgt_Wrap', 'dim_avg_Wrap', 'dim_cumsum_n_Wrap', 'dim_cumsum_Wrap', 'dim_max_n_Wrap', 'dim_min_n_Wrap', 'dim_rmsd_n_Wrap', 'dim_rmsd_Wrap', 'dim_rmvmean_n_Wrap', 'dim_rmvmean_Wrap', 'dim_rmvmed_n_Wrap', 'dim_rmvmed_Wrap', 'dim_standardize_n_Wrap', 'dim_standardize_Wrap', 'dim_stddev_n_Wrap', 'dim_stddev_Wrap', 'dim_sum_n_Wrap', 'dim_sum_wgt_n_Wrap', 'dim_sum_wgt_Wrap', 'dim_sum_Wrap', 'dim_variance_n_Wrap', 'dim_variance_Wrap', 'dpres_plevel_Wrap', 'dtrend_leftdim', 'dv2uvF_Wrap', 'dv2uvG_Wrap', 'eof_north', 'eofcor_Wrap', 
'eofcov_Wrap', 'eofunc_north', 'eofunc_ts_Wrap', 'eofunc_varimax_reorder', 'eofunc_varimax_Wrap', 'eofunc_Wrap', 'epsZero', 'f2fosh_Wrap', 'f2foshv_Wrap', 'f2fsh_Wrap', 'f2fshv_Wrap', 'f2gsh_Wrap', 'f2gshv_Wrap', 'fbindirSwap', 'fbinseqSwap1', 'fbinseqSwap2', 'flt2dble', 'flt2string', 'fo2fsh_Wrap', 'fo2fshv_Wrap', 'g2fsh_Wrap', 'g2fshv_Wrap', 'g2gsh_Wrap', 'g2gshv_Wrap', 'generate_resample_indices', 'generate_sample_indices', 'generate_unique_indices', 'genNormalDist', 'get1Dindex', 'get1Dindex_Collapse', 'get1Dindex_Exclude', 'get_file_suffix', 'GetFillColor', 'GetFillColorIndex', 'getFillValue', 'getind_latlon2d', 'getVarDimNames', 'getVarFillValue', 'grib_stime2itime', 'hyi2hyo_Wrap', 'ilapsF_Wrap', 'ilapsG_Wrap', 'ind_nearest_coord', 'indStrSubset', 'int2dble', 'int2flt', 'int2p_n_Wrap', 'int2p_Wrap', 'isMonotonic', 'isStrSubset', 'latGau', 'latGauWgt', 'latGlobeF', 'latGlobeFo', 'latRegWgt', 'linint1_n_Wrap', 'linint1_Wrap', 'linint2_points_Wrap', 'linint2_Wrap', 'local_max_1d', 'local_min_1d', 'lonFlip', 'lonGlobeF', 'lonGlobeFo', 'lonPivot', 'merge_levels_sfc', 'mod', 'month_to_annual', 'month_to_annual_weighted', 'month_to_season', 'month_to_season12', 'month_to_seasonN', 'monthly_total_to_daily_mean', 'nameDim', 'natgrid_Wrap', 'NewCosWeight', 'niceLatLon2D', 'NormCosWgtGlobe', 'numAsciiCol', 'numAsciiRow', 'numeric2int', 'obj_anal_ic_deprecated', 'obj_anal_ic_Wrap', 'omega_ccm_driver', 'omega_to_w', 'oneDtostring', 'pack_values', 'pattern_cor', 'pdfx', 'pdfxy', 'pdfxy_conform', 'pot_temp', 'pot_vort_hybrid', 'pot_vort_isobaric', 'pres2hybrid_Wrap', 'print_clock', 'printMinMax', 'quadroots', 'rcm2points_Wrap', 'rcm2rgrid_Wrap', 'readAsciiHead', 'readAsciiTable', 'reg_multlin_stats', 'region_ind', 'regline_stats', 'relhum_ttd', 'replaceSingleChar', 'RGBtoCmap', 'rgrid2rcm_Wrap', 'rho_mwjf', 'rm_single_dims', 'rmAnnCycle1D', 'rmInsufData', 'rmMonAnnCycLLLT', 'rmMonAnnCycLLT', 'rmMonAnnCycTLL', 'runave_n_Wrap', 'runave_Wrap', 'short2flt', 'short2flt_hdf', 'shsgc_R42_Wrap', 'sign_f90', 'sign_matlab', 'smth9_Wrap', 'smthClmDayTLL', 'smthClmDayTLLL', 'SqrtCosWeight', 'stat_dispersion', 'static_stability', 'stdMonLLLT', 'stdMonLLT', 'stdMonTLL', 'stdMonTLLL', 'symMinMaxPlt', 'table_attach_columns', 'table_attach_rows', 'time_to_newtime', 'transpose', 'triple2grid_Wrap', 'ut_convert', 'uv2dvF_Wrap', 'uv2dvG_Wrap', 'uv2vrF_Wrap', 'uv2vrG_Wrap', 'vr2uvF_Wrap', 'vr2uvG_Wrap', 'w_to_omega', 'wallClockElapseTime', 'wave_number_spc', 'wgt_areaave_Wrap', 'wgt_runave_leftdim', 'wgt_runave_n_Wrap', 'wgt_runave_Wrap', 'wgt_vertical_n', 'wind_component', 'wind_direction', 'yyyyddd_to_yyyymmdd', 'yyyymm_time', 'yyyymm_to_yyyyfrac', 'yyyymmdd_time', 'yyyymmdd_to_yyyyddd', 'yyyymmdd_to_yyyyfrac', 'yyyymmddhh_time', 'yyyymmddhh_to_yyyyfrac', 'zonal_mpsi_Wrap', 'zonalAve', 'calendar_decode2', 'cd_string', 'kf_filter', 'run_cor', 'time_axis_labels', 'ut_string', 'wrf_contour', 'wrf_map', 'wrf_map_overlay', 'wrf_map_overlays', 'wrf_map_resources', 'wrf_map_zoom', 'wrf_overlay', 'wrf_overlays', 'wrf_user_getvar', 'wrf_user_ij_to_ll', 'wrf_user_intrp2d', 'wrf_user_intrp3d', 'wrf_user_latlon_to_ij', 'wrf_user_list_times', 'wrf_user_ll_to_ij', 'wrf_user_unstagger', 'wrf_user_vert_interp', 'wrf_vector', 'gsn_add_annotation', 'gsn_add_polygon', 'gsn_add_polyline', 'gsn_add_polymarker', 'gsn_add_shapefile_polygons', 'gsn_add_shapefile_polylines', 'gsn_add_shapefile_polymarkers', 'gsn_add_text', 'gsn_attach_plots', 'gsn_blank_plot', 'gsn_contour', 'gsn_contour_map', 'gsn_contour_shade', 'gsn_coordinates', 
'gsn_create_labelbar', 'gsn_create_legend', 'gsn_create_text', 'gsn_csm_attach_zonal_means', 'gsn_csm_blank_plot', 'gsn_csm_contour', 'gsn_csm_contour_map', 'gsn_csm_contour_map_ce', 'gsn_csm_contour_map_overlay', 'gsn_csm_contour_map_polar', 'gsn_csm_hov', 'gsn_csm_lat_time', 'gsn_csm_map', 'gsn_csm_map_ce', 'gsn_csm_map_polar', 'gsn_csm_pres_hgt', 'gsn_csm_pres_hgt_streamline', 'gsn_csm_pres_hgt_vector', 'gsn_csm_streamline', 'gsn_csm_streamline_contour_map', 'gsn_csm_streamline_contour_map_ce', 'gsn_csm_streamline_contour_map_polar', 'gsn_csm_streamline_map', 'gsn_csm_streamline_map_ce', 'gsn_csm_streamline_map_polar', 'gsn_csm_streamline_scalar', 'gsn_csm_streamline_scalar_map', 'gsn_csm_streamline_scalar_map_ce', 'gsn_csm_streamline_scalar_map_polar', 'gsn_csm_time_lat', 'gsn_csm_vector', 'gsn_csm_vector_map', 'gsn_csm_vector_map_ce', 'gsn_csm_vector_map_polar', 'gsn_csm_vector_scalar', 'gsn_csm_vector_scalar_map', 'gsn_csm_vector_scalar_map_ce', 'gsn_csm_vector_scalar_map_polar', 'gsn_csm_x2y', 'gsn_csm_x2y2', 'gsn_csm_xy', 'gsn_csm_xy2', 'gsn_csm_xy3', 'gsn_csm_y', 'gsn_define_colormap', 'gsn_draw_colormap', 'gsn_draw_named_colors', 'gsn_histogram', 'gsn_labelbar_ndc', 'gsn_legend_ndc', 'gsn_map', 'gsn_merge_colormaps', 'gsn_open_wks', 'gsn_panel', 'gsn_polygon', 'gsn_polygon_ndc', 'gsn_polyline', 'gsn_polyline_ndc', 'gsn_polymarker', 'gsn_polymarker_ndc', 'gsn_retrieve_colormap', 'gsn_reverse_colormap', 'gsn_streamline', 'gsn_streamline_map', 'gsn_streamline_scalar', 'gsn_streamline_scalar_map', 'gsn_table', 'gsn_text', 'gsn_text_ndc', 'gsn_vector', 'gsn_vector_map', 'gsn_vector_scalar', 'gsn_vector_scalar_map', 'gsn_xy', 'gsn_y', 'hsv2rgb', 'maximize_output', 'namedcolor2rgb', 'namedcolor2rgba', 'reset_device_coordinates', 'span_named_colors'), prefix=r'\b'), Name.Builtin), # Resources (words(( 'amDataXF', 'amDataYF', 'amJust', 'amOn', 'amOrthogonalPosF', 'amParallelPosF', 'amResizeNotify', 'amSide', 'amTrackData', 'amViewId', 'amZone', 'appDefaultParent', 'appFileSuffix', 'appResources', 'appSysDir', 'appUsrDir', 'caCopyArrays', 'caXArray', 'caXCast', 'caXMaxV', 'caXMinV', 'caXMissingV', 'caYArray', 'caYCast', 'caYMaxV', 'caYMinV', 'caYMissingV', 'cnCellFillEdgeColor', 'cnCellFillMissingValEdgeColor', 'cnConpackParams', 'cnConstFEnableFill', 'cnConstFLabelAngleF', 'cnConstFLabelBackgroundColor', 'cnConstFLabelConstantSpacingF', 'cnConstFLabelFont', 'cnConstFLabelFontAspectF', 'cnConstFLabelFontColor', 'cnConstFLabelFontHeightF', 'cnConstFLabelFontQuality', 'cnConstFLabelFontThicknessF', 'cnConstFLabelFormat', 'cnConstFLabelFuncCode', 'cnConstFLabelJust', 'cnConstFLabelOn', 'cnConstFLabelOrthogonalPosF', 'cnConstFLabelParallelPosF', 'cnConstFLabelPerimColor', 'cnConstFLabelPerimOn', 'cnConstFLabelPerimSpaceF', 'cnConstFLabelPerimThicknessF', 'cnConstFLabelSide', 'cnConstFLabelString', 'cnConstFLabelTextDirection', 'cnConstFLabelZone', 'cnConstFUseInfoLabelRes', 'cnExplicitLabelBarLabelsOn', 'cnExplicitLegendLabelsOn', 'cnExplicitLineLabelsOn', 'cnFillBackgroundColor', 'cnFillColor', 'cnFillColors', 'cnFillDotSizeF', 'cnFillDrawOrder', 'cnFillMode', 'cnFillOn', 'cnFillOpacityF', 'cnFillPalette', 'cnFillPattern', 'cnFillPatterns', 'cnFillScaleF', 'cnFillScales', 'cnFixFillBleed', 'cnGridBoundFillColor', 'cnGridBoundFillPattern', 'cnGridBoundFillScaleF', 'cnGridBoundPerimColor', 'cnGridBoundPerimDashPattern', 'cnGridBoundPerimOn', 'cnGridBoundPerimThicknessF', 'cnHighLabelAngleF', 'cnHighLabelBackgroundColor', 'cnHighLabelConstantSpacingF', 'cnHighLabelCount', 'cnHighLabelFont', 
'cnHighLabelFontAspectF', 'cnHighLabelFontColor', 'cnHighLabelFontHeightF', 'cnHighLabelFontQuality', 'cnHighLabelFontThicknessF', 'cnHighLabelFormat', 'cnHighLabelFuncCode', 'cnHighLabelPerimColor', 'cnHighLabelPerimOn', 'cnHighLabelPerimSpaceF', 'cnHighLabelPerimThicknessF', 'cnHighLabelString', 'cnHighLabelsOn', 'cnHighLowLabelOverlapMode', 'cnHighUseLineLabelRes', 'cnInfoLabelAngleF', 'cnInfoLabelBackgroundColor', 'cnInfoLabelConstantSpacingF', 'cnInfoLabelFont', 'cnInfoLabelFontAspectF', 'cnInfoLabelFontColor', 'cnInfoLabelFontHeightF', 'cnInfoLabelFontQuality', 'cnInfoLabelFontThicknessF', 'cnInfoLabelFormat', 'cnInfoLabelFuncCode', 'cnInfoLabelJust', 'cnInfoLabelOn', 'cnInfoLabelOrthogonalPosF', 'cnInfoLabelParallelPosF', 'cnInfoLabelPerimColor', 'cnInfoLabelPerimOn', 'cnInfoLabelPerimSpaceF', 'cnInfoLabelPerimThicknessF', 'cnInfoLabelSide', 'cnInfoLabelString', 'cnInfoLabelTextDirection', 'cnInfoLabelZone', 'cnLabelBarEndLabelsOn', 'cnLabelBarEndStyle', 'cnLabelDrawOrder', 'cnLabelMasking', 'cnLabelScaleFactorF', 'cnLabelScaleValueF', 'cnLabelScalingMode', 'cnLegendLevelFlags', 'cnLevelCount', 'cnLevelFlag', 'cnLevelFlags', 'cnLevelSelectionMode', 'cnLevelSpacingF', 'cnLevels', 'cnLineColor', 'cnLineColors', 'cnLineDashPattern', 'cnLineDashPatterns', 'cnLineDashSegLenF', 'cnLineDrawOrder', 'cnLineLabelAngleF', 'cnLineLabelBackgroundColor', 'cnLineLabelConstantSpacingF', 'cnLineLabelCount', 'cnLineLabelDensityF', 'cnLineLabelFont', 'cnLineLabelFontAspectF', 'cnLineLabelFontColor', 'cnLineLabelFontColors', 'cnLineLabelFontHeightF', 'cnLineLabelFontQuality', 'cnLineLabelFontThicknessF', 'cnLineLabelFormat', 'cnLineLabelFuncCode', 'cnLineLabelInterval', 'cnLineLabelPerimColor', 'cnLineLabelPerimOn', 'cnLineLabelPerimSpaceF', 'cnLineLabelPerimThicknessF', 'cnLineLabelPlacementMode', 'cnLineLabelStrings', 'cnLineLabelsOn', 'cnLinePalette', 'cnLineThicknessF', 'cnLineThicknesses', 'cnLinesOn', 'cnLowLabelAngleF', 'cnLowLabelBackgroundColor', 'cnLowLabelConstantSpacingF', 'cnLowLabelCount', 'cnLowLabelFont', 'cnLowLabelFontAspectF', 'cnLowLabelFontColor', 'cnLowLabelFontHeightF', 'cnLowLabelFontQuality', 'cnLowLabelFontThicknessF', 'cnLowLabelFormat', 'cnLowLabelFuncCode', 'cnLowLabelPerimColor', 'cnLowLabelPerimOn', 'cnLowLabelPerimSpaceF', 'cnLowLabelPerimThicknessF', 'cnLowLabelString', 'cnLowLabelsOn', 'cnLowUseHighLabelRes', 'cnMaxDataValueFormat', 'cnMaxLevelCount', 'cnMaxLevelValF', 'cnMaxPointDistanceF', 'cnMinLevelValF', 'cnMissingValFillColor', 'cnMissingValFillPattern', 'cnMissingValFillScaleF', 'cnMissingValPerimColor', 'cnMissingValPerimDashPattern', 'cnMissingValPerimGridBoundOn', 'cnMissingValPerimOn', 'cnMissingValPerimThicknessF', 'cnMonoFillColor', 'cnMonoFillPattern', 'cnMonoFillScale', 'cnMonoLevelFlag', 'cnMonoLineColor', 'cnMonoLineDashPattern', 'cnMonoLineLabelFontColor', 'cnMonoLineThickness', 'cnNoDataLabelOn', 'cnNoDataLabelString', 'cnOutOfRangeFillColor', 'cnOutOfRangeFillPattern', 'cnOutOfRangeFillScaleF', 'cnOutOfRangePerimColor', 'cnOutOfRangePerimDashPattern', 'cnOutOfRangePerimOn', 'cnOutOfRangePerimThicknessF', 'cnRasterCellSizeF', 'cnRasterMinCellSizeF', 'cnRasterModeOn', 'cnRasterSampleFactorF', 'cnRasterSmoothingOn', 'cnScalarFieldData', 'cnSmoothingDistanceF', 'cnSmoothingOn', 'cnSmoothingTensionF', 'cnSpanFillPalette', 'cnSpanLinePalette', 'ctCopyTables', 'ctXElementSize', 'ctXMaxV', 'ctXMinV', 'ctXMissingV', 'ctXTable', 'ctXTableLengths', 'ctXTableType', 'ctYElementSize', 'ctYMaxV', 'ctYMinV', 'ctYMissingV', 'ctYTable', 'ctYTableLengths', 
'ctYTableType', 'dcDelayCompute', 'errBuffer', 'errFileName', 'errFilePtr', 'errLevel', 'errPrint', 'errUnitNumber', 'gsClipOn', 'gsColors', 'gsEdgeColor', 'gsEdgeDashPattern', 'gsEdgeDashSegLenF', 'gsEdgeThicknessF', 'gsEdgesOn', 'gsFillBackgroundColor', 'gsFillColor', 'gsFillDotSizeF', 'gsFillIndex', 'gsFillLineThicknessF', 'gsFillOpacityF', 'gsFillScaleF', 'gsFont', 'gsFontAspectF', 'gsFontColor', 'gsFontHeightF', 'gsFontOpacityF', 'gsFontQuality', 'gsFontThicknessF', 'gsLineColor', 'gsLineDashPattern', 'gsLineDashSegLenF', 'gsLineLabelConstantSpacingF', 'gsLineLabelFont', 'gsLineLabelFontAspectF', 'gsLineLabelFontColor', 'gsLineLabelFontHeightF', 'gsLineLabelFontQuality', 'gsLineLabelFontThicknessF', 'gsLineLabelFuncCode', 'gsLineLabelString', 'gsLineOpacityF', 'gsLineThicknessF', 'gsMarkerColor', 'gsMarkerIndex', 'gsMarkerOpacityF', 'gsMarkerSizeF', 'gsMarkerThicknessF', 'gsSegments', 'gsTextAngleF', 'gsTextConstantSpacingF', 'gsTextDirection', 'gsTextFuncCode', 'gsTextJustification', 'gsnAboveYRefLineBarColors', 'gsnAboveYRefLineBarFillScales', 'gsnAboveYRefLineBarPatterns', 'gsnAboveYRefLineColor', 'gsnAddCyclic', 'gsnAttachBorderOn', 'gsnAttachPlotsXAxis', 'gsnBelowYRefLineBarColors', 'gsnBelowYRefLineBarFillScales', 'gsnBelowYRefLineBarPatterns', 'gsnBelowYRefLineColor', 'gsnBoxMargin', 'gsnCenterString', 'gsnCenterStringFontColor', 'gsnCenterStringFontHeightF', 'gsnCenterStringFuncCode', 'gsnCenterStringOrthogonalPosF', 'gsnCenterStringParallelPosF', 'gsnContourLineThicknessesScale', 'gsnContourNegLineDashPattern', 'gsnContourPosLineDashPattern', 'gsnContourZeroLineThicknessF', 'gsnDebugWriteFileName', 'gsnDraw', 'gsnFrame', 'gsnHistogramBarWidthPercent', 'gsnHistogramBinIntervals', 'gsnHistogramBinMissing', 'gsnHistogramBinWidth', 'gsnHistogramClassIntervals', 'gsnHistogramCompare', 'gsnHistogramComputePercentages', 'gsnHistogramComputePercentagesNoMissing', 'gsnHistogramDiscreteBinValues', 'gsnHistogramDiscreteClassValues', 'gsnHistogramHorizontal', 'gsnHistogramMinMaxBinsOn', 'gsnHistogramNumberOfBins', 'gsnHistogramPercentSign', 'gsnHistogramSelectNiceIntervals', 'gsnLeftString', 'gsnLeftStringFontColor', 'gsnLeftStringFontHeightF', 'gsnLeftStringFuncCode', 'gsnLeftStringOrthogonalPosF', 'gsnLeftStringParallelPosF', 'gsnMajorLatSpacing', 'gsnMajorLonSpacing', 'gsnMaskLambertConformal', 'gsnMaskLambertConformalOutlineOn', 'gsnMaximize', 'gsnMinorLatSpacing', 'gsnMinorLonSpacing', 'gsnPanelBottom', 'gsnPanelCenter', 'gsnPanelDebug', 'gsnPanelFigureStrings', 'gsnPanelFigureStringsBackgroundFillColor', 'gsnPanelFigureStringsFontHeightF', 'gsnPanelFigureStringsJust', 'gsnPanelFigureStringsPerimOn', 'gsnPanelLabelBar', 'gsnPanelLeft', 'gsnPanelMainFont', 'gsnPanelMainFontColor', 'gsnPanelMainFontHeightF', 'gsnPanelMainString', 'gsnPanelRight', 'gsnPanelRowSpec', 'gsnPanelScalePlotIndex', 'gsnPanelTop', 'gsnPanelXF', 'gsnPanelXWhiteSpacePercent', 'gsnPanelYF', 'gsnPanelYWhiteSpacePercent', 'gsnPaperHeight', 'gsnPaperMargin', 'gsnPaperOrientation', 'gsnPaperWidth', 'gsnPolar', 'gsnPolarLabelDistance', 'gsnPolarLabelFont', 'gsnPolarLabelFontHeightF', 'gsnPolarLabelSpacing', 'gsnPolarTime', 'gsnPolarUT', 'gsnRightString', 'gsnRightStringFontColor', 'gsnRightStringFontHeightF', 'gsnRightStringFuncCode', 'gsnRightStringOrthogonalPosF', 'gsnRightStringParallelPosF', 'gsnScalarContour', 'gsnScale', 'gsnShape', 'gsnSpreadColorEnd', 'gsnSpreadColorStart', 'gsnSpreadColors', 'gsnStringFont', 'gsnStringFontColor', 'gsnStringFontHeightF', 'gsnStringFuncCode', 'gsnTickMarksOn', 
'gsnXAxisIrregular2Linear', 'gsnXAxisIrregular2Log', 'gsnXRefLine', 'gsnXRefLineColor', 'gsnXRefLineDashPattern', 'gsnXRefLineThicknessF', 'gsnXYAboveFillColors', 'gsnXYBarChart', 'gsnXYBarChartBarWidth', 'gsnXYBarChartColors', 'gsnXYBarChartColors2', 'gsnXYBarChartFillDotSizeF', 'gsnXYBarChartFillLineThicknessF', 'gsnXYBarChartFillOpacityF', 'gsnXYBarChartFillScaleF', 'gsnXYBarChartOutlineOnly', 'gsnXYBarChartOutlineThicknessF', 'gsnXYBarChartPatterns', 'gsnXYBarChartPatterns2', 'gsnXYBelowFillColors', 'gsnXYFillColors', 'gsnXYFillOpacities', 'gsnXYLeftFillColors', 'gsnXYRightFillColors', 'gsnYAxisIrregular2Linear', 'gsnYAxisIrregular2Log', 'gsnYRefLine', 'gsnYRefLineColor', 'gsnYRefLineColors', 'gsnYRefLineDashPattern', 'gsnYRefLineDashPatterns', 'gsnYRefLineThicknessF', 'gsnYRefLineThicknesses', 'gsnZonalMean', 'gsnZonalMeanXMaxF', 'gsnZonalMeanXMinF', 'gsnZonalMeanYRefLine', 'lbAutoManage', 'lbBottomMarginF', 'lbBoxCount', 'lbBoxEndCapStyle', 'lbBoxFractions', 'lbBoxLineColor', 'lbBoxLineDashPattern', 'lbBoxLineDashSegLenF', 'lbBoxLineThicknessF', 'lbBoxLinesOn', 'lbBoxMajorExtentF', 'lbBoxMinorExtentF', 'lbBoxSeparatorLinesOn', 'lbBoxSizing', 'lbFillBackground', 'lbFillColor', 'lbFillColors', 'lbFillDotSizeF', 'lbFillLineThicknessF', 'lbFillPattern', 'lbFillPatterns', 'lbFillScaleF', 'lbFillScales', 'lbJustification', 'lbLabelAlignment', 'lbLabelAngleF', 'lbLabelAutoStride', 'lbLabelBarOn', 'lbLabelConstantSpacingF', 'lbLabelDirection', 'lbLabelFont', 'lbLabelFontAspectF', 'lbLabelFontColor', 'lbLabelFontHeightF', 'lbLabelFontQuality', 'lbLabelFontThicknessF', 'lbLabelFuncCode', 'lbLabelJust', 'lbLabelOffsetF', 'lbLabelPosition', 'lbLabelStride', 'lbLabelStrings', 'lbLabelsOn', 'lbLeftMarginF', 'lbMaxLabelLenF', 'lbMinLabelSpacingF', 'lbMonoFillColor', 'lbMonoFillPattern', 'lbMonoFillScale', 'lbOrientation', 'lbPerimColor', 'lbPerimDashPattern', 'lbPerimDashSegLenF', 'lbPerimFill', 'lbPerimFillColor', 'lbPerimOn', 'lbPerimThicknessF', 'lbRasterFillOn', 'lbRightMarginF', 'lbTitleAngleF', 'lbTitleConstantSpacingF', 'lbTitleDirection', 'lbTitleExtentF', 'lbTitleFont', 'lbTitleFontAspectF', 'lbTitleFontColor', 'lbTitleFontHeightF', 'lbTitleFontQuality', 'lbTitleFontThicknessF', 'lbTitleFuncCode', 'lbTitleJust', 'lbTitleOffsetF', 'lbTitleOn', 'lbTitlePosition', 'lbTitleString', 'lbTopMarginF', 'lgAutoManage', 'lgBottomMarginF', 'lgBoxBackground', 'lgBoxLineColor', 'lgBoxLineDashPattern', 'lgBoxLineDashSegLenF', 'lgBoxLineThicknessF', 'lgBoxLinesOn', 'lgBoxMajorExtentF', 'lgBoxMinorExtentF', 'lgDashIndex', 'lgDashIndexes', 'lgItemCount', 'lgItemOrder', 'lgItemPlacement', 'lgItemPositions', 'lgItemType', 'lgItemTypes', 'lgJustification', 'lgLabelAlignment', 'lgLabelAngleF', 'lgLabelAutoStride', 'lgLabelConstantSpacingF', 'lgLabelDirection', 'lgLabelFont', 'lgLabelFontAspectF', 'lgLabelFontColor', 'lgLabelFontHeightF', 'lgLabelFontQuality', 'lgLabelFontThicknessF', 'lgLabelFuncCode', 'lgLabelJust', 'lgLabelOffsetF', 'lgLabelPosition', 'lgLabelStride', 'lgLabelStrings', 'lgLabelsOn', 'lgLeftMarginF', 'lgLegendOn', 'lgLineColor', 'lgLineColors', 'lgLineDashSegLenF', 'lgLineDashSegLens', 'lgLineLabelConstantSpacingF', 'lgLineLabelFont', 'lgLineLabelFontAspectF', 'lgLineLabelFontColor', 'lgLineLabelFontColors', 'lgLineLabelFontHeightF', 'lgLineLabelFontHeights', 'lgLineLabelFontQuality', 'lgLineLabelFontThicknessF', 'lgLineLabelFuncCode', 'lgLineLabelStrings', 'lgLineLabelsOn', 'lgLineThicknessF', 'lgLineThicknesses', 'lgMarkerColor', 'lgMarkerColors', 'lgMarkerIndex', 'lgMarkerIndexes', 
'lgMarkerSizeF', 'lgMarkerSizes', 'lgMarkerThicknessF', 'lgMarkerThicknesses', 'lgMonoDashIndex', 'lgMonoItemType', 'lgMonoLineColor', 'lgMonoLineDashSegLen', 'lgMonoLineLabelFontColor', 'lgMonoLineLabelFontHeight', 'lgMonoLineThickness', 'lgMonoMarkerColor', 'lgMonoMarkerIndex', 'lgMonoMarkerSize', 'lgMonoMarkerThickness', 'lgOrientation', 'lgPerimColor', 'lgPerimDashPattern', 'lgPerimDashSegLenF', 'lgPerimFill', 'lgPerimFillColor', 'lgPerimOn', 'lgPerimThicknessF', 'lgRightMarginF', 'lgTitleAngleF', 'lgTitleConstantSpacingF', 'lgTitleDirection', 'lgTitleExtentF', 'lgTitleFont', 'lgTitleFontAspectF', 'lgTitleFontColor', 'lgTitleFontHeightF', 'lgTitleFontQuality', 'lgTitleFontThicknessF', 'lgTitleFuncCode', 'lgTitleJust', 'lgTitleOffsetF', 'lgTitleOn', 'lgTitlePosition', 'lgTitleString', 'lgTopMarginF', 'mpAreaGroupCount', 'mpAreaMaskingOn', 'mpAreaNames', 'mpAreaTypes', 'mpBottomAngleF', 'mpBottomMapPosF', 'mpBottomNDCF', 'mpBottomNPCF', 'mpBottomPointLatF', 'mpBottomPointLonF', 'mpBottomWindowF', 'mpCenterLatF', 'mpCenterLonF', 'mpCenterRotF', 'mpCountyLineColor', 'mpCountyLineDashPattern', 'mpCountyLineDashSegLenF', 'mpCountyLineThicknessF', 'mpDataBaseVersion', 'mpDataResolution', 'mpDataSetName', 'mpDefaultFillColor', 'mpDefaultFillPattern', 'mpDefaultFillScaleF', 'mpDynamicAreaGroups', 'mpEllipticalBoundary', 'mpFillAreaSpecifiers', 'mpFillBoundarySets', 'mpFillColor', 'mpFillColors', 'mpFillColors-default', 'mpFillDotSizeF', 'mpFillDrawOrder', 'mpFillOn', 'mpFillPatternBackground', 'mpFillPattern', 'mpFillPatterns', 'mpFillPatterns-default', 'mpFillScaleF', 'mpFillScales', 'mpFillScales-default', 'mpFixedAreaGroups', 'mpGeophysicalLineColor', 'mpGeophysicalLineDashPattern', 'mpGeophysicalLineDashSegLenF', 'mpGeophysicalLineThicknessF', 'mpGreatCircleLinesOn', 'mpGridAndLimbDrawOrder', 'mpGridAndLimbOn', 'mpGridLatSpacingF', 'mpGridLineColor', 'mpGridLineDashPattern', 'mpGridLineDashSegLenF', 'mpGridLineThicknessF', 'mpGridLonSpacingF', 'mpGridMaskMode', 'mpGridMaxLatF', 'mpGridPolarLonSpacingF', 'mpGridSpacingF', 'mpInlandWaterFillColor', 'mpInlandWaterFillPattern', 'mpInlandWaterFillScaleF', 'mpLabelDrawOrder', 'mpLabelFontColor', 'mpLabelFontHeightF', 'mpLabelsOn', 'mpLambertMeridianF', 'mpLambertParallel1F', 'mpLambertParallel2F', 'mpLandFillColor', 'mpLandFillPattern', 'mpLandFillScaleF', 'mpLeftAngleF', 'mpLeftCornerLatF', 'mpLeftCornerLonF', 'mpLeftMapPosF', 'mpLeftNDCF', 'mpLeftNPCF', 'mpLeftPointLatF', 'mpLeftPointLonF', 'mpLeftWindowF', 'mpLimbLineColor', 'mpLimbLineDashPattern', 'mpLimbLineDashSegLenF', 'mpLimbLineThicknessF', 'mpLimitMode', 'mpMaskAreaSpecifiers', 'mpMaskOutlineSpecifiers', 'mpMaxLatF', 'mpMaxLonF', 'mpMinLatF', 'mpMinLonF', 'mpMonoFillColor', 'mpMonoFillPattern', 'mpMonoFillScale', 'mpNationalLineColor', 'mpNationalLineDashPattern', 'mpNationalLineThicknessF', 'mpOceanFillColor', 'mpOceanFillPattern', 'mpOceanFillScaleF', 'mpOutlineBoundarySets', 'mpOutlineDrawOrder', 'mpOutlineMaskingOn', 'mpOutlineOn', 'mpOutlineSpecifiers', 'mpPerimDrawOrder', 'mpPerimLineColor', 'mpPerimLineDashPattern', 'mpPerimLineDashSegLenF', 'mpPerimLineThicknessF', 'mpPerimOn', 'mpPolyMode', 'mpProjection', 'mpProvincialLineColor', 'mpProvincialLineDashPattern', 'mpProvincialLineDashSegLenF', 'mpProvincialLineThicknessF', 'mpRelativeCenterLat', 'mpRelativeCenterLon', 'mpRightAngleF', 'mpRightCornerLatF', 'mpRightCornerLonF', 'mpRightMapPosF', 'mpRightNDCF', 'mpRightNPCF', 'mpRightPointLatF', 'mpRightPointLonF', 'mpRightWindowF', 'mpSatelliteAngle1F', 'mpSatelliteAngle2F', 
'mpSatelliteDistF', 'mpShapeMode', 'mpSpecifiedFillColors', 'mpSpecifiedFillDirectIndexing', 'mpSpecifiedFillPatterns', 'mpSpecifiedFillPriority', 'mpSpecifiedFillScales', 'mpTopAngleF', 'mpTopMapPosF', 'mpTopNDCF', 'mpTopNPCF', 'mpTopPointLatF', 'mpTopPointLonF', 'mpTopWindowF', 'mpUSStateLineColor', 'mpUSStateLineDashPattern', 'mpUSStateLineDashSegLenF', 'mpUSStateLineThicknessF', 'pmAnnoManagers', 'pmAnnoViews', 'pmLabelBarDisplayMode', 'pmLabelBarHeightF', 'pmLabelBarKeepAspect', 'pmLabelBarOrthogonalPosF', 'pmLabelBarParallelPosF', 'pmLabelBarSide', 'pmLabelBarWidthF', 'pmLabelBarZone', 'pmLegendDisplayMode', 'pmLegendHeightF', 'pmLegendKeepAspect', 'pmLegendOrthogonalPosF', 'pmLegendParallelPosF', 'pmLegendSide', 'pmLegendWidthF', 'pmLegendZone', 'pmOverlaySequenceIds', 'pmTickMarkDisplayMode', 'pmTickMarkZone', 'pmTitleDisplayMode', 'pmTitleZone', 'prGraphicStyle', 'prPolyType', 'prXArray', 'prYArray', 'sfCopyData', 'sfDataArray', 'sfDataMaxV', 'sfDataMinV', 'sfElementNodes', 'sfExchangeDimensions', 'sfFirstNodeIndex', 'sfMissingValueV', 'sfXArray', 'sfXCActualEndF', 'sfXCActualStartF', 'sfXCEndIndex', 'sfXCEndSubsetV', 'sfXCEndV', 'sfXCStartIndex', 'sfXCStartSubsetV', 'sfXCStartV', 'sfXCStride', 'sfXCellBounds', 'sfYArray', 'sfYCActualEndF', 'sfYCActualStartF', 'sfYCEndIndex', 'sfYCEndSubsetV', 'sfYCEndV', 'sfYCStartIndex', 'sfYCStartSubsetV', 'sfYCStartV', 'sfYCStride', 'sfYCellBounds', 'stArrowLengthF', 'stArrowStride', 'stCrossoverCheckCount', 'stExplicitLabelBarLabelsOn', 'stLabelBarEndLabelsOn', 'stLabelFormat', 'stLengthCheckCount', 'stLevelColors', 'stLevelCount', 'stLevelPalette', 'stLevelSelectionMode', 'stLevelSpacingF', 'stLevels', 'stLineColor', 'stLineOpacityF', 'stLineStartStride', 'stLineThicknessF', 'stMapDirection', 'stMaxLevelCount', 'stMaxLevelValF', 'stMinArrowSpacingF', 'stMinDistanceF', 'stMinLevelValF', 'stMinLineSpacingF', 'stMinStepFactorF', 'stMonoLineColor', 'stNoDataLabelOn', 'stNoDataLabelString', 'stScalarFieldData', 'stScalarMissingValColor', 'stSpanLevelPalette', 'stStepSizeF', 'stStreamlineDrawOrder', 'stUseScalarArray', 'stVectorFieldData', 'stZeroFLabelAngleF', 'stZeroFLabelBackgroundColor', 'stZeroFLabelConstantSpacingF', 'stZeroFLabelFont', 'stZeroFLabelFontAspectF', 'stZeroFLabelFontColor', 'stZeroFLabelFontHeightF', 'stZeroFLabelFontQuality', 'stZeroFLabelFontThicknessF', 'stZeroFLabelFuncCode', 'stZeroFLabelJust', 'stZeroFLabelOn', 'stZeroFLabelOrthogonalPosF', 'stZeroFLabelParallelPosF', 'stZeroFLabelPerimColor', 'stZeroFLabelPerimOn', 'stZeroFLabelPerimSpaceF', 'stZeroFLabelPerimThicknessF', 'stZeroFLabelSide', 'stZeroFLabelString', 'stZeroFLabelTextDirection', 'stZeroFLabelZone', 'tfDoNDCOverlay', 'tfPlotManagerOn', 'tfPolyDrawList', 'tfPolyDrawOrder', 'tiDeltaF', 'tiMainAngleF', 'tiMainConstantSpacingF', 'tiMainDirection', 'tiMainFont', 'tiMainFontAspectF', 'tiMainFontColor', 'tiMainFontHeightF', 'tiMainFontQuality', 'tiMainFontThicknessF', 'tiMainFuncCode', 'tiMainJust', 'tiMainOffsetXF', 'tiMainOffsetYF', 'tiMainOn', 'tiMainPosition', 'tiMainSide', 'tiMainString', 'tiUseMainAttributes', 'tiXAxisAngleF', 'tiXAxisConstantSpacingF', 'tiXAxisDirection', 'tiXAxisFont', 'tiXAxisFontAspectF', 'tiXAxisFontColor', 'tiXAxisFontHeightF', 'tiXAxisFontQuality', 'tiXAxisFontThicknessF', 'tiXAxisFuncCode', 'tiXAxisJust', 'tiXAxisOffsetXF', 'tiXAxisOffsetYF', 'tiXAxisOn', 'tiXAxisPosition', 'tiXAxisSide', 'tiXAxisString', 'tiYAxisAngleF', 'tiYAxisConstantSpacingF', 'tiYAxisDirection', 'tiYAxisFont', 'tiYAxisFontAspectF', 'tiYAxisFontColor', 
'tiYAxisFontHeightF', 'tiYAxisFontQuality', 'tiYAxisFontThicknessF', 'tiYAxisFuncCode', 'tiYAxisJust', 'tiYAxisOffsetXF', 'tiYAxisOffsetYF', 'tiYAxisOn', 'tiYAxisPosition', 'tiYAxisSide', 'tiYAxisString', 'tmBorderLineColor', 'tmBorderThicknessF', 'tmEqualizeXYSizes', 'tmLabelAutoStride', 'tmSciNoteCutoff', 'tmXBAutoPrecision', 'tmXBBorderOn', 'tmXBDataLeftF', 'tmXBDataRightF', 'tmXBFormat', 'tmXBIrrTensionF', 'tmXBIrregularPoints', 'tmXBLabelAngleF', 'tmXBLabelConstantSpacingF', 'tmXBLabelDeltaF', 'tmXBLabelDirection', 'tmXBLabelFont', 'tmXBLabelFontAspectF', 'tmXBLabelFontColor', 'tmXBLabelFontHeightF', 'tmXBLabelFontQuality', 'tmXBLabelFontThicknessF', 'tmXBLabelFuncCode', 'tmXBLabelJust', 'tmXBLabelStride', 'tmXBLabels', 'tmXBLabelsOn', 'tmXBMajorLengthF', 'tmXBMajorLineColor', 'tmXBMajorOutwardLengthF', 'tmXBMajorThicknessF', 'tmXBMaxLabelLenF', 'tmXBMaxTicks', 'tmXBMinLabelSpacingF', 'tmXBMinorLengthF', 'tmXBMinorLineColor', 'tmXBMinorOn', 'tmXBMinorOutwardLengthF', 'tmXBMinorPerMajor', 'tmXBMinorThicknessF', 'tmXBMinorValues', 'tmXBMode', 'tmXBOn', 'tmXBPrecision', 'tmXBStyle', 'tmXBTickEndF', 'tmXBTickSpacingF', 'tmXBTickStartF', 'tmXBValues', 'tmXMajorGrid', 'tmXMajorGridLineColor', 'tmXMajorGridLineDashPattern', 'tmXMajorGridThicknessF', 'tmXMinorGrid', 'tmXMinorGridLineColor', 'tmXMinorGridLineDashPattern', 'tmXMinorGridThicknessF', 'tmXTAutoPrecision', 'tmXTBorderOn', 'tmXTDataLeftF', 'tmXTDataRightF', 'tmXTFormat', 'tmXTIrrTensionF', 'tmXTIrregularPoints', 'tmXTLabelAngleF', 'tmXTLabelConstantSpacingF', 'tmXTLabelDeltaF', 'tmXTLabelDirection', 'tmXTLabelFont', 'tmXTLabelFontAspectF', 'tmXTLabelFontColor', 'tmXTLabelFontHeightF', 'tmXTLabelFontQuality', 'tmXTLabelFontThicknessF', 'tmXTLabelFuncCode', 'tmXTLabelJust', 'tmXTLabelStride', 'tmXTLabels', 'tmXTLabelsOn', 'tmXTMajorLengthF', 'tmXTMajorLineColor', 'tmXTMajorOutwardLengthF', 'tmXTMajorThicknessF', 'tmXTMaxLabelLenF', 'tmXTMaxTicks', 'tmXTMinLabelSpacingF', 'tmXTMinorLengthF', 'tmXTMinorLineColor', 'tmXTMinorOn', 'tmXTMinorOutwardLengthF', 'tmXTMinorPerMajor', 'tmXTMinorThicknessF', 'tmXTMinorValues', 'tmXTMode', 'tmXTOn', 'tmXTPrecision', 'tmXTStyle', 'tmXTTickEndF', 'tmXTTickSpacingF', 'tmXTTickStartF', 'tmXTValues', 'tmXUseBottom', 'tmYLAutoPrecision', 'tmYLBorderOn', 'tmYLDataBottomF', 'tmYLDataTopF', 'tmYLFormat', 'tmYLIrrTensionF', 'tmYLIrregularPoints', 'tmYLLabelAngleF', 'tmYLLabelConstantSpacingF', 'tmYLLabelDeltaF', 'tmYLLabelDirection', 'tmYLLabelFont', 'tmYLLabelFontAspectF', 'tmYLLabelFontColor', 'tmYLLabelFontHeightF', 'tmYLLabelFontQuality', 'tmYLLabelFontThicknessF', 'tmYLLabelFuncCode', 'tmYLLabelJust', 'tmYLLabelStride', 'tmYLLabels', 'tmYLLabelsOn', 'tmYLMajorLengthF', 'tmYLMajorLineColor', 'tmYLMajorOutwardLengthF', 'tmYLMajorThicknessF', 'tmYLMaxLabelLenF', 'tmYLMaxTicks', 'tmYLMinLabelSpacingF', 'tmYLMinorLengthF', 'tmYLMinorLineColor', 'tmYLMinorOn', 'tmYLMinorOutwardLengthF', 'tmYLMinorPerMajor', 'tmYLMinorThicknessF', 'tmYLMinorValues', 'tmYLMode', 'tmYLOn', 'tmYLPrecision', 'tmYLStyle', 'tmYLTickEndF', 'tmYLTickSpacingF', 'tmYLTickStartF', 'tmYLValues', 'tmYMajorGrid', 'tmYMajorGridLineColor', 'tmYMajorGridLineDashPattern', 'tmYMajorGridThicknessF', 'tmYMinorGrid', 'tmYMinorGridLineColor', 'tmYMinorGridLineDashPattern', 'tmYMinorGridThicknessF', 'tmYRAutoPrecision', 'tmYRBorderOn', 'tmYRDataBottomF', 'tmYRDataTopF', 'tmYRFormat', 'tmYRIrrTensionF', 'tmYRIrregularPoints', 'tmYRLabelAngleF', 'tmYRLabelConstantSpacingF', 'tmYRLabelDeltaF', 'tmYRLabelDirection', 'tmYRLabelFont', 
'tmYRLabelFontAspectF', 'tmYRLabelFontColor', 'tmYRLabelFontHeightF', 'tmYRLabelFontQuality', 'tmYRLabelFontThicknessF', 'tmYRLabelFuncCode', 'tmYRLabelJust', 'tmYRLabelStride', 'tmYRLabels', 'tmYRLabelsOn', 'tmYRMajorLengthF', 'tmYRMajorLineColor', 'tmYRMajorOutwardLengthF', 'tmYRMajorThicknessF', 'tmYRMaxLabelLenF', 'tmYRMaxTicks', 'tmYRMinLabelSpacingF', 'tmYRMinorLengthF', 'tmYRMinorLineColor', 'tmYRMinorOn', 'tmYRMinorOutwardLengthF', 'tmYRMinorPerMajor', 'tmYRMinorThicknessF', 'tmYRMinorValues', 'tmYRMode', 'tmYROn', 'tmYRPrecision', 'tmYRStyle', 'tmYRTickEndF', 'tmYRTickSpacingF', 'tmYRTickStartF', 'tmYRValues', 'tmYUseLeft', 'trGridType', 'trLineInterpolationOn', 'trXAxisType', 'trXCoordPoints', 'trXInterPoints', 'trXLog', 'trXMaxF', 'trXMinF', 'trXReverse', 'trXSamples', 'trXTensionF', 'trYAxisType', 'trYCoordPoints', 'trYInterPoints', 'trYLog', 'trYMaxF', 'trYMinF', 'trYReverse', 'trYSamples', 'trYTensionF', 'txAngleF', 'txBackgroundFillColor', 'txConstantSpacingF', 'txDirection', 'txFont', 'HLU-Fonts', 'txFontAspectF', 'txFontColor', 'txFontHeightF', 'txFontOpacityF', 'txFontQuality', 'txFontThicknessF', 'txFuncCode', 'txJust', 'txPerimColor', 'txPerimDashLengthF', 'txPerimDashPattern', 'txPerimOn', 'txPerimSpaceF', 'txPerimThicknessF', 'txPosXF', 'txPosYF', 'txString', 'vcExplicitLabelBarLabelsOn', 'vcFillArrowEdgeColor', 'vcFillArrowEdgeThicknessF', 'vcFillArrowFillColor', 'vcFillArrowHeadInteriorXF', 'vcFillArrowHeadMinFracXF', 'vcFillArrowHeadMinFracYF', 'vcFillArrowHeadXF', 'vcFillArrowHeadYF', 'vcFillArrowMinFracWidthF', 'vcFillArrowWidthF', 'vcFillArrowsOn', 'vcFillOverEdge', 'vcGlyphOpacityF', 'vcGlyphStyle', 'vcLabelBarEndLabelsOn', 'vcLabelFontColor', 'vcLabelFontHeightF', 'vcLabelsOn', 'vcLabelsUseVectorColor', 'vcLevelColors', 'vcLevelCount', 'vcLevelPalette', 'vcLevelSelectionMode', 'vcLevelSpacingF', 'vcLevels', 'vcLineArrowColor', 'vcLineArrowHeadMaxSizeF', 'vcLineArrowHeadMinSizeF', 'vcLineArrowThicknessF', 'vcMagnitudeFormat', 'vcMagnitudeScaleFactorF', 'vcMagnitudeScaleValueF', 'vcMagnitudeScalingMode', 'vcMapDirection', 'vcMaxLevelCount', 'vcMaxLevelValF', 'vcMaxMagnitudeF', 'vcMinAnnoAngleF', 'vcMinAnnoArrowAngleF', 'vcMinAnnoArrowEdgeColor', 'vcMinAnnoArrowFillColor', 'vcMinAnnoArrowLineColor', 'vcMinAnnoArrowMinOffsetF', 'vcMinAnnoArrowSpaceF', 'vcMinAnnoArrowUseVecColor', 'vcMinAnnoBackgroundColor', 'vcMinAnnoConstantSpacingF', 'vcMinAnnoExplicitMagnitudeF', 'vcMinAnnoFont', 'vcMinAnnoFontAspectF', 'vcMinAnnoFontColor', 'vcMinAnnoFontHeightF', 'vcMinAnnoFontQuality', 'vcMinAnnoFontThicknessF', 'vcMinAnnoFuncCode', 'vcMinAnnoJust', 'vcMinAnnoOn', 'vcMinAnnoOrientation', 'vcMinAnnoOrthogonalPosF', 'vcMinAnnoParallelPosF', 'vcMinAnnoPerimColor', 'vcMinAnnoPerimOn', 'vcMinAnnoPerimSpaceF', 'vcMinAnnoPerimThicknessF', 'vcMinAnnoSide', 'vcMinAnnoString1', 'vcMinAnnoString1On', 'vcMinAnnoString2', 'vcMinAnnoString2On', 'vcMinAnnoTextDirection', 'vcMinAnnoZone', 'vcMinDistanceF', 'vcMinFracLengthF', 'vcMinLevelValF', 'vcMinMagnitudeF', 'vcMonoFillArrowEdgeColor', 'vcMonoFillArrowFillColor', 'vcMonoLineArrowColor', 'vcMonoWindBarbColor', 'vcNoDataLabelOn', 'vcNoDataLabelString', 'vcPositionMode', 'vcRefAnnoAngleF', 'vcRefAnnoArrowAngleF', 'vcRefAnnoArrowEdgeColor', 'vcRefAnnoArrowFillColor', 'vcRefAnnoArrowLineColor', 'vcRefAnnoArrowMinOffsetF', 'vcRefAnnoArrowSpaceF', 'vcRefAnnoArrowUseVecColor', 'vcRefAnnoBackgroundColor', 'vcRefAnnoConstantSpacingF', 'vcRefAnnoExplicitMagnitudeF', 'vcRefAnnoFont', 'vcRefAnnoFontAspectF', 'vcRefAnnoFontColor', 
'vcRefAnnoFontHeightF', 'vcRefAnnoFontQuality', 'vcRefAnnoFontThicknessF', 'vcRefAnnoFuncCode', 'vcRefAnnoJust', 'vcRefAnnoOn', 'vcRefAnnoOrientation', 'vcRefAnnoOrthogonalPosF', 'vcRefAnnoParallelPosF', 'vcRefAnnoPerimColor', 'vcRefAnnoPerimOn', 'vcRefAnnoPerimSpaceF', 'vcRefAnnoPerimThicknessF', 'vcRefAnnoSide', 'vcRefAnnoString1', 'vcRefAnnoString1On', 'vcRefAnnoString2', 'vcRefAnnoString2On', 'vcRefAnnoTextDirection', 'vcRefAnnoZone', 'vcRefLengthF', 'vcRefMagnitudeF', 'vcScalarFieldData', 'vcScalarMissingValColor', 'vcScalarValueFormat', 'vcScalarValueScaleFactorF', 'vcScalarValueScaleValueF', 'vcScalarValueScalingMode', 'vcSpanLevelPalette', 'vcUseRefAnnoRes', 'vcUseScalarArray', 'vcVectorDrawOrder', 'vcVectorFieldData', 'vcWindBarbCalmCircleSizeF', 'vcWindBarbColor', 'vcWindBarbLineThicknessF', 'vcWindBarbScaleFactorF', 'vcWindBarbTickAngleF', 'vcWindBarbTickLengthF', 'vcWindBarbTickSpacingF', 'vcZeroFLabelAngleF', 'vcZeroFLabelBackgroundColor', 'vcZeroFLabelConstantSpacingF', 'vcZeroFLabelFont', 'vcZeroFLabelFontAspectF', 'vcZeroFLabelFontColor', 'vcZeroFLabelFontHeightF', 'vcZeroFLabelFontQuality', 'vcZeroFLabelFontThicknessF', 'vcZeroFLabelFuncCode', 'vcZeroFLabelJust', 'vcZeroFLabelOn', 'vcZeroFLabelOrthogonalPosF', 'vcZeroFLabelParallelPosF', 'vcZeroFLabelPerimColor', 'vcZeroFLabelPerimOn', 'vcZeroFLabelPerimSpaceF', 'vcZeroFLabelPerimThicknessF', 'vcZeroFLabelSide', 'vcZeroFLabelString', 'vcZeroFLabelTextDirection', 'vcZeroFLabelZone', 'vfCopyData', 'vfDataArray', 'vfExchangeDimensions', 'vfExchangeUVData', 'vfMagMaxV', 'vfMagMinV', 'vfMissingUValueV', 'vfMissingVValueV', 'vfPolarData', 'vfSingleMissingValue', 'vfUDataArray', 'vfUMaxV', 'vfUMinV', 'vfVDataArray', 'vfVMaxV', 'vfVMinV', 'vfXArray', 'vfXCActualEndF', 'vfXCActualStartF', 'vfXCEndIndex', 'vfXCEndSubsetV', 'vfXCEndV', 'vfXCStartIndex', 'vfXCStartSubsetV', 'vfXCStartV', 'vfXCStride', 'vfYArray', 'vfYCActualEndF', 'vfYCActualStartF', 'vfYCEndIndex', 'vfYCEndSubsetV', 'vfYCEndV', 'vfYCStartIndex', 'vfYCStartSubsetV', 'vfYCStartV', 'vfYCStride', 'vpAnnoManagerId', 'vpClipOn', 'vpHeightF', 'vpKeepAspect', 'vpOn', 'vpUseSegments', 'vpWidthF', 'vpXF', 'vpYF', 'wkAntiAlias', 'wkBackgroundColor', 'wkBackgroundOpacityF', 'wkColorMapLen', 'wkColorMap', 'wkColorModel', 'wkDashTableLength', 'wkDefGraphicStyleId', 'wkDeviceLowerX', 'wkDeviceLowerY', 'wkDeviceUpperX', 'wkDeviceUpperY', 'wkFileName', 'wkFillTableLength', 'wkForegroundColor', 'wkFormat', 'wkFullBackground', 'wkGksWorkId', 'wkHeight', 'wkMarkerTableLength', 'wkMetaName', 'wkOrientation', 'wkPDFFileName', 'wkPDFFormat', 'wkPDFResolution', 'wkPSFileName', 'wkPSFormat', 'wkPSResolution', 'wkPaperHeightF', 'wkPaperSize', 'wkPaperWidthF', 'wkPause', 'wkTopLevelViews', 'wkViews', 'wkVisualType', 'wkWidth', 'wkWindowId', 'wkXColorMode', 'wsCurrentSize', 'wsMaximumSize', 'wsThresholdSize', 'xyComputeXMax', 'xyComputeXMin', 'xyComputeYMax', 'xyComputeYMin', 'xyCoordData', 'xyCoordDataSpec', 'xyCurveDrawOrder', 'xyDashPattern', 'xyDashPatterns', 'xyExplicitLabels', 'xyExplicitLegendLabels', 'xyLabelMode', 'xyLineColor', 'xyLineColors', 'xyLineDashSegLenF', 'xyLineLabelConstantSpacingF', 'xyLineLabelFont', 'xyLineLabelFontAspectF', 'xyLineLabelFontColor', 'xyLineLabelFontColors', 'xyLineLabelFontHeightF', 'xyLineLabelFontQuality', 'xyLineLabelFontThicknessF', 'xyLineLabelFuncCode', 'xyLineThicknessF', 'xyLineThicknesses', 'xyMarkLineMode', 'xyMarkLineModes', 'xyMarker', 'xyMarkerColor', 'xyMarkerColors', 'xyMarkerSizeF', 'xyMarkerSizes', 'xyMarkerThicknessF', 
'xyMarkerThicknesses', 'xyMarkers', 'xyMonoDashPattern', 'xyMonoLineColor', 'xyMonoLineLabelFontColor', 'xyMonoLineThickness', 'xyMonoMarkLineMode', 'xyMonoMarker', 'xyMonoMarkerColor', 'xyMonoMarkerSize', 'xyMonoMarkerThickness', 'xyXIrrTensionF', 'xyXIrregularPoints', 'xyXStyle', 'xyYIrrTensionF', 'xyYIrregularPoints', 'xyYStyle'), prefix=r'\b'), Name.Builtin), # Booleans (r'\.(True|False)\.', Name.Builtin), # Comparing Operators (r'\.(eq|ne|lt|le|gt|ge|not|and|or|xor)\.', Operator.Word), ], 'strings': [ (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double), ], 'nums': [ (r'\d+(?![.e])(_[a-z]\w+)?', Number.Integer), (r'[+-]?\d*\.\d+(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float), (r'[+-]?\d+\.\d*(e[-+]?\d+)?(_[a-z]\w+)?', Number.Float), ], } pygments-2.11.2/pygments/lexers/templates.py0000644000175000017500000021372414165547207021067 0ustar carstencarsten""" pygments.lexers.templates ~~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for various template engines' markup. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexers.html import HtmlLexer, XmlLexer from pygments.lexers.javascript import JavascriptLexer, LassoLexer from pygments.lexers.css import CssLexer from pygments.lexers.php import PhpLexer from pygments.lexers.python import PythonLexer from pygments.lexers.perl import PerlLexer from pygments.lexers.jvm import JavaLexer, TeaLangLexer from pygments.lexers.data import YamlLexer from pygments.lexer import Lexer, DelegatingLexer, RegexLexer, bygroups, \ include, using, this, default, combined from pygments.token import Error, Punctuation, Whitespace, \ Text, Comment, Operator, Keyword, Name, String, Number, Other, Token from pygments.util import html_doctype_matches, looks_like_xml __all__ = ['HtmlPhpLexer', 'XmlPhpLexer', 'CssPhpLexer', 'JavascriptPhpLexer', 'ErbLexer', 'RhtmlLexer', 'XmlErbLexer', 'CssErbLexer', 'JavascriptErbLexer', 'SmartyLexer', 'HtmlSmartyLexer', 'XmlSmartyLexer', 'CssSmartyLexer', 'JavascriptSmartyLexer', 'DjangoLexer', 'HtmlDjangoLexer', 'CssDjangoLexer', 'XmlDjangoLexer', 'JavascriptDjangoLexer', 'GenshiLexer', 'HtmlGenshiLexer', 'GenshiTextLexer', 'CssGenshiLexer', 'JavascriptGenshiLexer', 'MyghtyLexer', 'MyghtyHtmlLexer', 'MyghtyXmlLexer', 'MyghtyCssLexer', 'MyghtyJavascriptLexer', 'MasonLexer', 'MakoLexer', 'MakoHtmlLexer', 'MakoXmlLexer', 'MakoJavascriptLexer', 'MakoCssLexer', 'JspLexer', 'CheetahLexer', 'CheetahHtmlLexer', 'CheetahXmlLexer', 'CheetahJavascriptLexer', 'EvoqueLexer', 'EvoqueHtmlLexer', 'EvoqueXmlLexer', 'ColdfusionLexer', 'ColdfusionHtmlLexer', 'ColdfusionCFCLexer', 'VelocityLexer', 'VelocityHtmlLexer', 'VelocityXmlLexer', 'SspLexer', 'TeaTemplateLexer', 'LassoHtmlLexer', 'LassoXmlLexer', 'LassoCssLexer', 'LassoJavascriptLexer', 'HandlebarsLexer', 'HandlebarsHtmlLexer', 'YamlJinjaLexer', 'LiquidLexer', 'TwigLexer', 'TwigHtmlLexer', 'Angular2Lexer', 'Angular2HtmlLexer'] class ErbLexer(Lexer): """ Generic `ERB `_ (Ruby Templating) lexer. Just highlights ruby code between the preprocessor directives, other data is left untouched by the lexer. All options are also forwarded to the `RubyLexer`. 
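    A minimal usage sketch (the template string below is a made-up example;
    ``get_tokens`` is the generic lexer API, nothing ERB-specific)::

        from pygments.lexers.templates import ErbLexer
        for token, value in ErbLexer().get_tokens('<b><%= @name %></b>'):
            print(token, repr(value))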
""" name = 'ERB' aliases = ['erb'] mimetypes = ['application/x-ruby-templating'] _block_re = re.compile(r'(<%%|%%>|<%=|<%#|<%-|<%|-%>|%>|^%[^%].*?$)', re.M) def __init__(self, **options): from pygments.lexers.ruby import RubyLexer self.ruby_lexer = RubyLexer(**options) Lexer.__init__(self, **options) def get_tokens_unprocessed(self, text): """ Since ERB doesn't allow "<%" and other tags inside of ruby blocks we have to use a split approach here that fails for that too. """ tokens = self._block_re.split(text) tokens.reverse() state = idx = 0 try: while True: # text if state == 0: val = tokens.pop() yield idx, Other, val idx += len(val) state = 1 # block starts elif state == 1: tag = tokens.pop() # literals if tag in ('<%%', '%%>'): yield idx, Other, tag idx += 3 state = 0 # comment elif tag == '<%#': yield idx, Comment.Preproc, tag val = tokens.pop() yield idx + 3, Comment, val idx += 3 + len(val) state = 2 # blocks or output elif tag in ('<%', '<%=', '<%-'): yield idx, Comment.Preproc, tag idx += len(tag) data = tokens.pop() r_idx = 0 for r_idx, r_token, r_value in \ self.ruby_lexer.get_tokens_unprocessed(data): yield r_idx + idx, r_token, r_value idx += len(data) state = 2 elif tag in ('%>', '-%>'): yield idx, Error, tag idx += len(tag) state = 0 # % raw ruby statements else: yield idx, Comment.Preproc, tag[0] r_idx = 0 for r_idx, r_token, r_value in \ self.ruby_lexer.get_tokens_unprocessed(tag[1:]): yield idx + 1 + r_idx, r_token, r_value idx += len(tag) state = 0 # block ends elif state == 2: tag = tokens.pop() if tag not in ('%>', '-%>'): yield idx, Other, tag else: yield idx, Comment.Preproc, tag idx += len(tag) state = 0 except IndexError: return def analyse_text(text): if '<%' in text and '%>' in text: return 0.4 class SmartyLexer(RegexLexer): """ Generic `Smarty `_ template lexer. Just highlights smarty code between the preprocessor directives, other data is left untouched by the lexer. """ name = 'Smarty' aliases = ['smarty'] filenames = ['*.tpl'] mimetypes = ['application/x-smarty'] flags = re.MULTILINE | re.DOTALL tokens = { 'root': [ (r'[^{]+', Other), (r'(\{)(\*.*?\*)(\})', bygroups(Comment.Preproc, Comment, Comment.Preproc)), (r'(\{php\})(.*?)(\{/php\})', bygroups(Comment.Preproc, using(PhpLexer, startinline=True), Comment.Preproc)), (r'(\{)(/?[a-zA-Z_]\w*)(\s*)', bygroups(Comment.Preproc, Name.Function, Text), 'smarty'), (r'\{', Comment.Preproc, 'smarty') ], 'smarty': [ (r'\s+', Text), (r'\{', Comment.Preproc, '#push'), (r'\}', Comment.Preproc, '#pop'), (r'#[a-zA-Z_]\w*#', Name.Variable), (r'\$[a-zA-Z_]\w*(\.\w+)*', Name.Variable), (r'[~!%^&*()+=|\[\]:;,.<>/?@-]', Operator), (r'(true|false|null)\b', Keyword.Constant), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'[a-zA-Z_]\w*', Name.Attribute) ] } def analyse_text(text): rv = 0.0 if re.search(r'\{if\s+.*?\}.*?\{/if\}', text): rv += 0.15 if re.search(r'\{include\s+file=.*?\}', text): rv += 0.15 if re.search(r'\{foreach\s+.*?\}.*?\{/foreach\}', text): rv += 0.15 if re.search(r'\{\$.*?\}', text): rv += 0.01 return rv class VelocityLexer(RegexLexer): """ Generic `Velocity `_ template lexer. Just highlights velocity directives and variable references, other data is left untouched by the lexer. 
""" name = 'Velocity' aliases = ['velocity'] filenames = ['*.vm', '*.fhtml'] flags = re.MULTILINE | re.DOTALL identifier = r'[a-zA-Z_]\w*' tokens = { 'root': [ (r'[^{#$]+', Other), (r'(#)(\*.*?\*)(#)', bygroups(Comment.Preproc, Comment, Comment.Preproc)), (r'(##)(.*?$)', bygroups(Comment.Preproc, Comment)), (r'(#\{?)(' + identifier + r')(\}?)(\s?\()', bygroups(Comment.Preproc, Name.Function, Comment.Preproc, Punctuation), 'directiveparams'), (r'(#\{?)(' + identifier + r')(\}|\b)', bygroups(Comment.Preproc, Name.Function, Comment.Preproc)), (r'\$!?\{?', Punctuation, 'variable') ], 'variable': [ (identifier, Name.Variable), (r'\(', Punctuation, 'funcparams'), (r'(\.)(' + identifier + r')', bygroups(Punctuation, Name.Variable), '#push'), (r'\}', Punctuation, '#pop'), default('#pop') ], 'directiveparams': [ (r'(&&|\|\||==?|!=?|[-<>+*%&|^/])|\b(eq|ne|gt|lt|ge|le|not|in)\b', Operator), (r'\[', Operator, 'rangeoperator'), (r'\b' + identifier + r'\b', Name.Function), include('funcparams') ], 'rangeoperator': [ (r'\.\.', Operator), include('funcparams'), (r'\]', Operator, '#pop') ], 'funcparams': [ (r'\$!?\{?', Punctuation, 'variable'), (r'\s+', Text), (r'[,:]', Punctuation), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r"0[xX][0-9a-fA-F]+[Ll]?", Number), (r"\b[0-9]+\b", Number), (r'(true|false|null)\b', Keyword.Constant), (r'\(', Punctuation, '#push'), (r'\)', Punctuation, '#pop'), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), (r'\[', Punctuation, '#push'), (r'\]', Punctuation, '#pop'), ] } def analyse_text(text): rv = 0.0 if re.search(r'#\{?macro\}?\(.*?\).*?#\{?end\}?', text, re.DOTALL): rv += 0.25 if re.search(r'#\{?if\}?\(.+?\).*?#\{?end\}?', text, re.DOTALL): rv += 0.15 if re.search(r'#\{?foreach\}?\(.+?\).*?#\{?end\}?', text, re.DOTALL): rv += 0.15 if re.search(r'\$!?\{?[a-zA-Z_]\w*(\([^)]*\))?' r'(\.\w+(\([^)]*\))?)*\}?', text): rv += 0.01 return rv class VelocityHtmlLexer(DelegatingLexer): """ Subclass of the `VelocityLexer` that highlights unlexed data with the `HtmlLexer`. """ name = 'HTML+Velocity' aliases = ['html+velocity'] alias_filenames = ['*.html', '*.fhtml'] mimetypes = ['text/html+velocity'] def __init__(self, **options): super().__init__(HtmlLexer, VelocityLexer, **options) class VelocityXmlLexer(DelegatingLexer): """ Subclass of the `VelocityLexer` that highlights unlexed data with the `XmlLexer`. """ name = 'XML+Velocity' aliases = ['xml+velocity'] alias_filenames = ['*.xml', '*.vm'] mimetypes = ['application/xml+velocity'] def __init__(self, **options): super().__init__(XmlLexer, VelocityLexer, **options) def analyse_text(text): rv = VelocityLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class DjangoLexer(RegexLexer): """ Generic `django `_ and `jinja `_ template lexer. It just highlights django/jinja code between the preprocessor directives, other data is left untouched by the lexer. 
""" name = 'Django/Jinja' aliases = ['django', 'jinja'] mimetypes = ['application/x-django-templating', 'application/x-jinja'] flags = re.M | re.S tokens = { 'root': [ (r'[^{]+', Other), (r'\{\{', Comment.Preproc, 'var'), # jinja/django comments (r'\{#.*?#\}', Comment), # django comments (r'(\{%)(-?\s*)(comment)(\s*-?)(%\})(.*?)' r'(\{%)(-?\s*)(endcomment)(\s*-?)(%\})', bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc, Comment, Comment.Preproc, Text, Keyword, Text, Comment.Preproc)), # raw jinja blocks (r'(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)' r'(\{%)(-?\s*)(endraw)(\s*-?)(%\})', bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc, Text, Comment.Preproc, Text, Keyword, Text, Comment.Preproc)), # filter blocks (r'(\{%)(-?\s*)(filter)(\s+)([a-zA-Z_]\w*)', bygroups(Comment.Preproc, Text, Keyword, Text, Name.Function), 'block'), (r'(\{%)(-?\s*)([a-zA-Z_]\w*)', bygroups(Comment.Preproc, Text, Keyword), 'block'), (r'\{', Other) ], 'varnames': [ (r'(\|)(\s*)([a-zA-Z_]\w*)', bygroups(Operator, Text, Name.Function)), (r'(is)(\s+)(not)?(\s+)?([a-zA-Z_]\w*)', bygroups(Keyword, Text, Keyword, Text, Name.Function)), (r'(_|true|false|none|True|False|None)\b', Keyword.Pseudo), (r'(in|as|reversed|recursive|not|and|or|is|if|else|import|' r'with(?:(?:out)?\s*context)?|scoped|ignore\s+missing)\b', Keyword), (r'(loop|block|super|forloop)\b', Name.Builtin), (r'[a-zA-Z_][\w-]*', Name.Variable), (r'\.\w+', Name.Variable), (r':?"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r":?'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'([{}()\[\]+\-*/%,:~]|[><=]=?|!=)', Operator), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), ], 'var': [ (r'\s+', Text), (r'(-?)(\}\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames') ], 'block': [ (r'\s+', Text), (r'(-?)(%\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames'), (r'.', Punctuation) ] } def analyse_text(text): rv = 0.0 if re.search(r'\{%\s*(block|extends)', text) is not None: rv += 0.4 if re.search(r'\{%\s*if\s*.*?%\}', text) is not None: rv += 0.1 if re.search(r'\{\{.*?\}\}', text) is not None: rv += 0.1 return rv class MyghtyLexer(RegexLexer): """ Generic `myghty templates`_ lexer. Code that isn't Myghty markup is yielded as `Token.Other`. .. versionadded:: 0.6 .. _myghty templates: http://www.myghty.org/ """ name = 'Myghty' aliases = ['myghty'] filenames = ['*.myt', 'autodelegate'] mimetypes = ['application/x-myghty'] tokens = { 'root': [ (r'\s+', Text), (r'(?s)(<%(?:def|method))(\s*)(.*?)(>)(.*?)()', bygroups(Name.Tag, Text, Name.Function, Name.Tag, using(this), Name.Tag)), (r'(?s)(<%\w+)(.*?)(>)(.*?)()', bygroups(Name.Tag, Name.Function, Name.Tag, using(PythonLexer), Name.Tag)), (r'(<&[^|])(.*?)(,.*?)?(&>)', bygroups(Name.Tag, Name.Function, using(PythonLexer), Name.Tag)), (r'(?s)(<&\|)(.*?)(,.*?)?(&>)', bygroups(Name.Tag, Name.Function, using(PythonLexer), Name.Tag)), (r'', Name.Tag), (r'(?s)(<%!?)(.*?)(%>)', bygroups(Name.Tag, using(PythonLexer), Name.Tag)), (r'(?<=^)#[^\n]*(\n|\Z)', Comment), (r'(?<=^)(%)([^\n]*)(\n|\Z)', bygroups(Name.Tag, using(PythonLexer), Other)), (r"""(?sx) (.+?) 
# anything, followed by: (?: (?<=\n)(?=[%#]) | # an eval or comment line (?=</?[%&]) | # a substitution or block or call start or end - don't consume (\\\n) | # an escaped newline \Z # end of string )""", bygroups(Other, Operator)), ] }
class MyghtyHtmlLexer(DelegatingLexer): """ Subclass of the `MyghtyLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 0.6 """ name = 'HTML+Myghty' aliases = ['html+myghty'] mimetypes = ['text/html+myghty'] def __init__(self, **options): super().__init__(HtmlLexer, MyghtyLexer, **options)
class MyghtyXmlLexer(DelegatingLexer): """ Subclass of the `MyghtyLexer` that highlights unlexed data with the `XmlLexer`. .. versionadded:: 0.6 """ name = 'XML+Myghty' aliases = ['xml+myghty'] mimetypes = ['application/xml+myghty'] def __init__(self, **options): super().__init__(XmlLexer, MyghtyLexer, **options)
class MyghtyJavascriptLexer(DelegatingLexer): """ Subclass of the `MyghtyLexer` that highlights unlexed data with the `JavascriptLexer`. .. versionadded:: 0.6 """ name = 'JavaScript+Myghty' aliases = ['javascript+myghty', 'js+myghty'] mimetypes = ['application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+myghty'] def __init__(self, **options): super().__init__(JavascriptLexer, MyghtyLexer, **options)
class MyghtyCssLexer(DelegatingLexer): """ Subclass of the `MyghtyLexer` that highlights unlexed data with the `CssLexer`. .. versionadded:: 0.6 """ name = 'CSS+Myghty' aliases = ['css+myghty'] mimetypes = ['text/css+myghty'] def __init__(self, **options): super().__init__(CssLexer, MyghtyLexer, **options)
class MasonLexer(RegexLexer): """ Generic `mason templates`_ lexer. Stolen from Myghty lexer. Code that isn't Mason markup is HTML. .. _mason templates: http://www.masonhq.com/ .. versionadded:: 1.4 """
name = 'Mason' aliases = ['mason'] filenames = ['*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'] mimetypes = ['application/x-mason']
tokens = { 'root': [ (r'\s+', Text),
(r'(?s)(<%doc>)(.*?)(</%doc>)', bygroups(Name.Tag, Comment.Multiline, Name.Tag)),
(r'(?s)(<%(?:def|method))(\s*)(.*?)(>)(.*?)(</%\2\s*>)', bygroups(Name.Tag, Text, Name.Function, Name.Tag, using(this), Name.Tag)),
(r'(?s)(<%(\w+)(.*?)(>))(.*?)(</%\2\s*>)', bygroups(Name.Tag, None, None, None, using(PerlLexer), Name.Tag)),
(r'(?s)(<&[^|])(.*?)(,.*?)?(&>)', bygroups(Name.Tag, Name.Function, using(PerlLexer), Name.Tag)),
(r'(?s)(<&\|)(.*?)(,.*?)?(&>)', bygroups(Name.Tag, Name.Function, using(PerlLexer), Name.Tag)),
(r'</&>', Name.Tag), (r'(?s)(<%!?)(.*?)(%>)', bygroups(Name.Tag, using(PerlLexer), Name.Tag)),
(r'(?<=^)#[^\n]*(\n|\Z)', Comment), (r'(?<=^)(%)([^\n]*)(\n|\Z)', bygroups(Name.Tag, using(PerlLexer), Other)),
(r"""(?sx) (.+?) # anything, followed by: (?: (?<=\n)(?=[%#]) | # an eval or comment line (?=</?[%&]) | # a substitution or block or call start or end - don't consume (\\\n) | # an escaped newline \Z # end of string )""", bygroups(Other, Operator)), ] }
def analyse_text(text): result = 0.0 if re.search(r'</%(class|doc|init)>', text) is not None: result = 1.0 elif re.search(r'<&.+&>', text, re.DOTALL) is not None: result = 0.11 return result
class MakoLexer(RegexLexer): """ Generic `mako templates`_ lexer. Code that isn't Mako markup is yielded as `Token.Other`. .. versionadded:: 0.7 .. _mako templates: http://www.makotemplates.org/ """
name = 'Mako' aliases = ['mako'] filenames = ['*.mao'] mimetypes = ['application/x-mako']
tokens = { 'root': [ (r'(\s*)(%)(\s*end(?:\w+))(\n|\Z)', bygroups(Text.Whitespace, Comment.Preproc, Keyword, Other)),
(r'(\s*)(%)([^\n]*)(\n|\Z)', bygroups(Text.Whitespace, Comment.Preproc, using(PythonLexer), Other)),
(r'(\s*)(##[^\n]*)(\n|\Z)', bygroups(Text.Whitespace, Comment.Single, Text.Whitespace)),
(r'(?s)<%doc>.*?</%doc>', Comment.Multiline),
(r'(<%)([\w.:]+)', bygroups(Comment.Preproc, Name.Builtin), 'tag'),
(r'(</%)([\w.:]+)(>)', bygroups(Comment.Preproc, Name.Builtin, Comment.Preproc)),
(r'<%(?=([\w.:]+))', Comment.Preproc, 'ondeftags'),
(r'(?s)(<%(?:!?))(.*?)(%>)', bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'(\$\{)(.*?)(\})', bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)),
(r'''(?sx) (.+?) # anything, followed by: (?: (?<=\n)(?=%|\#\#) | # an eval or comment line (?=\#\*) | # multiline comment (?=</?%) | # a python block call start or end (?=\$\{) | # a substitution (?<=\n)(?=\s*%) | # - don't consume (\\\n) | # an escaped newline \Z # end of string ) ''', bygroups(Other, Operator)),
(r'\s+', Text), ],
'ondeftags': [ (r'<%', Comment.Preproc), (r'(?=[\w.:]+)', Name.Builtin), (r'[\w.]+', Name.Builtin), (r'[:,]', Operator), (r'>', Comment.Preproc, '#pop'), ],
'tag': [ (r'((?:\w+)\s*=)(\s*)', bygroups(Name.Attribute, Text)), (r'/?\s*>', Comment.Preproc, '#pop'), (r'\s+', Text), ],
'attr': [ ('".*?"', String, '#pop'), ("'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop'), ], }
class MakoHtmlLexer(DelegatingLexer): """ Subclass of the `MakoLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 0.7 """ name = 'HTML+Mako' aliases = ['html+mako'] mimetypes = ['text/html+mako'] def __init__(self, **options): super().__init__(HtmlLexer, MakoLexer, **options)
class MakoXmlLexer(DelegatingLexer): """ Subclass of the `MakoLexer` that highlights unlexed data with the `XmlLexer`. .. versionadded:: 0.7 """ name = 'XML+Mako' aliases = ['xml+mako'] mimetypes = ['application/xml+mako'] def __init__(self, **options): super().__init__(XmlLexer, MakoLexer, **options)
class MakoJavascriptLexer(DelegatingLexer): """ Subclass of the `MakoLexer` that highlights unlexed data with the `JavascriptLexer`. .. versionadded:: 0.7 """ name = 'JavaScript+Mako' aliases = ['javascript+mako', 'js+mako'] mimetypes = ['application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako'] def __init__(self, **options): super().__init__(JavascriptLexer, MakoLexer, **options)
class MakoCssLexer(DelegatingLexer): """ Subclass of the `MakoLexer` that highlights unlexed data with the `CssLexer`. ..
versionadded:: 0.7 """ name = 'CSS+Mako' aliases = ['css+mako'] mimetypes = ['text/css+mako'] def __init__(self, **options): super().__init__(CssLexer, MakoLexer, **options) # Genshi and Cheetah lexers courtesy of Matt Good. class CheetahPythonLexer(Lexer): """ Lexer for handling Cheetah's special $ tokens in Python syntax. """ def get_tokens_unprocessed(self, text): pylexer = PythonLexer(**self.options) for pos, type_, value in pylexer.get_tokens_unprocessed(text): if type_ == Token.Error and value == '$': type_ = Comment.Preproc yield pos, type_, value class CheetahLexer(RegexLexer): """ Generic `cheetah templates`_ lexer. Code that isn't Cheetah markup is yielded as `Token.Other`. This also works for `spitfire templates`_ which use the same syntax. .. _cheetah templates: http://www.cheetahtemplate.org/ .. _spitfire templates: http://code.google.com/p/spitfire/ """ name = 'Cheetah' aliases = ['cheetah', 'spitfire'] filenames = ['*.tmpl', '*.spt'] mimetypes = ['application/x-cheetah', 'application/x-spitfire'] tokens = { 'root': [ (r'(##[^\n]*)$', (bygroups(Comment))), (r'#[*](.|\n)*?[*]#', Comment), (r'#end[^#\n]*(?:#|$)', Comment.Preproc), (r'#slurp$', Comment.Preproc), (r'(#[a-zA-Z]+)([^#\n]*)(#|$)', (bygroups(Comment.Preproc, using(CheetahPythonLexer), Comment.Preproc))), # TODO support other Python syntax like $foo['bar'] (r'(\$)([a-zA-Z_][\w.]*\w)', bygroups(Comment.Preproc, using(CheetahPythonLexer))), (r'(?s)(\$\{!?)(.*?)(\})', bygroups(Comment.Preproc, using(CheetahPythonLexer), Comment.Preproc)), (r'''(?sx) (.+?) # anything, followed by: (?: (?=\#[#a-zA-Z]*) | # an eval comment (?=\$[a-zA-Z_{]) | # a substitution \Z # end of string ) ''', Other), (r'\s+', Text), ], } class CheetahHtmlLexer(DelegatingLexer): """ Subclass of the `CheetahLexer` that highlights unlexed data with the `HtmlLexer`. """ name = 'HTML+Cheetah' aliases = ['html+cheetah', 'html+spitfire', 'htmlcheetah'] mimetypes = ['text/html+cheetah', 'text/html+spitfire'] def __init__(self, **options): super().__init__(HtmlLexer, CheetahLexer, **options) class CheetahXmlLexer(DelegatingLexer): """ Subclass of the `CheetahLexer` that highlights unlexed data with the `XmlLexer`. """ name = 'XML+Cheetah' aliases = ['xml+cheetah', 'xml+spitfire'] mimetypes = ['application/xml+cheetah', 'application/xml+spitfire'] def __init__(self, **options): super().__init__(XmlLexer, CheetahLexer, **options) class CheetahJavascriptLexer(DelegatingLexer): """ Subclass of the `CheetahLexer` that highlights unlexed data with the `JavascriptLexer`. """ name = 'JavaScript+Cheetah' aliases = ['javascript+cheetah', 'js+cheetah', 'javascript+spitfire', 'js+spitfire'] mimetypes = ['application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire'] def __init__(self, **options): super().__init__(JavascriptLexer, CheetahLexer, **options) class GenshiTextLexer(RegexLexer): """ A lexer that highlights `genshi `_ text templates. 
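    A minimal usage sketch (the directives below are a made-up example)::

        from pygments.lexers.templates import GenshiTextLexer
        text = '#if foo\n  ${bar}\n#end\n'
        for token, value in GenshiTextLexer().get_tokens(text):
            print(token, repr(value))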
""" name = 'Genshi Text' aliases = ['genshitext'] mimetypes = ['application/x-genshi-text', 'text/x-genshi'] tokens = { 'root': [ (r'[^#$\s]+', Other), (r'^(\s*)(##.*)$', bygroups(Text, Comment)), (r'^(\s*)(#)', bygroups(Text, Comment.Preproc), 'directive'), include('variable'), (r'[#$\s]', Other), ], 'directive': [ (r'\n', Text, '#pop'), (r'(?:def|for|if)\s+.*', using(PythonLexer), '#pop'), (r'(choose|when|with)([^\S\n]+)(.*)', bygroups(Keyword, Text, using(PythonLexer)), '#pop'), (r'(choose|otherwise)\b', Keyword, '#pop'), (r'(end\w*)([^\S\n]*)(.*)', bygroups(Keyword, Text, Comment), '#pop'), ], 'variable': [ (r'(?)', bygroups(Comment.Preproc, using(PythonLexer), Comment.Preproc)), # yield style and script blocks as Other (r'<\s*(script|style)\s*.*?>.*?<\s*/\1\s*>', Other), (r'<\s*py:[a-zA-Z0-9]+', Name.Tag, 'pytag'), (r'<\s*[a-zA-Z0-9:.]+', Name.Tag, 'tag'), include('variable'), (r'[<$]', Other), ], 'pytag': [ (r'\s+', Text), (r'[\w:-]+\s*=', Name.Attribute, 'pyattr'), (r'/?\s*>', Name.Tag, '#pop'), ], 'pyattr': [ ('(")(.*?)(")', bygroups(String, using(PythonLexer), String), '#pop'), ("(')(.*?)(')", bygroups(String, using(PythonLexer), String), '#pop'), (r'[^\s>]+', String, '#pop'), ], 'tag': [ (r'\s+', Text), (r'py:[\w-]+\s*=', Name.Attribute, 'pyattr'), (r'[\w:-]+\s*=', Name.Attribute, 'attr'), (r'/?\s*>', Name.Tag, '#pop'), ], 'attr': [ ('"', String, 'attr-dstring'), ("'", String, 'attr-sstring'), (r'[^\s>]*', String, '#pop') ], 'attr-dstring': [ ('"', String, '#pop'), include('strings'), ("'", String) ], 'attr-sstring': [ ("'", String, '#pop'), include('strings'), ("'", String) ], 'strings': [ ('[^"\'$]+', String), include('variable') ], 'variable': [ (r'(?`_ and `kid `_ kid HTML templates. """ name = 'HTML+Genshi' aliases = ['html+genshi', 'html+kid'] alias_filenames = ['*.html', '*.htm', '*.xhtml'] mimetypes = ['text/html+genshi'] def __init__(self, **options): super().__init__(HtmlLexer, GenshiMarkupLexer, **options) def analyse_text(text): rv = 0.0 if re.search(r'\$\{.*?\}', text) is not None: rv += 0.2 if re.search(r'py:(.*?)=["\']', text) is not None: rv += 0.2 return rv + HtmlLexer.analyse_text(text) - 0.01 class GenshiLexer(DelegatingLexer): """ A lexer that highlights `genshi `_ and `kid `_ kid XML templates. """ name = 'Genshi' aliases = ['genshi', 'kid', 'xml+genshi', 'xml+kid'] filenames = ['*.kid'] alias_filenames = ['*.xml'] mimetypes = ['application/x-genshi', 'application/x-kid'] def __init__(self, **options): super().__init__(XmlLexer, GenshiMarkupLexer, **options) def analyse_text(text): rv = 0.0 if re.search(r'\$\{.*?\}', text) is not None: rv += 0.2 if re.search(r'py:(.*?)=["\']', text) is not None: rv += 0.2 return rv + XmlLexer.analyse_text(text) - 0.01 class JavascriptGenshiLexer(DelegatingLexer): """ A lexer that highlights javascript code in genshi text templates. """ name = 'JavaScript+Genshi Text' aliases = ['js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi'] def __init__(self, **options): super().__init__(JavascriptLexer, GenshiTextLexer, **options) def analyse_text(text): return GenshiLexer.analyse_text(text) - 0.05 class CssGenshiLexer(DelegatingLexer): """ A lexer that highlights CSS definitions in genshi text templates. 
""" name = 'CSS+Genshi Text' aliases = ['css+genshitext', 'css+genshi'] alias_filenames = ['*.css'] mimetypes = ['text/css+genshi'] def __init__(self, **options): super().__init__(CssLexer, GenshiTextLexer, **options) def analyse_text(text): return GenshiLexer.analyse_text(text) - 0.05 class RhtmlLexer(DelegatingLexer): """ Subclass of the ERB lexer that highlights the unlexed data with the html lexer. Nested Javascript and CSS is highlighted too. """ name = 'RHTML' aliases = ['rhtml', 'html+erb', 'html+ruby'] filenames = ['*.rhtml'] alias_filenames = ['*.html', '*.htm', '*.xhtml'] mimetypes = ['text/html+ruby'] def __init__(self, **options): super().__init__(HtmlLexer, ErbLexer, **options) def analyse_text(text): rv = ErbLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): # one more than the XmlErbLexer returns rv += 0.5 return rv class XmlErbLexer(DelegatingLexer): """ Subclass of `ErbLexer` which highlights data outside preprocessor directives with the `XmlLexer`. """ name = 'XML+Ruby' aliases = ['xml+ruby', 'xml+erb'] alias_filenames = ['*.xml'] mimetypes = ['application/xml+ruby'] def __init__(self, **options): super().__init__(XmlLexer, ErbLexer, **options) def analyse_text(text): rv = ErbLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class CssErbLexer(DelegatingLexer): """ Subclass of `ErbLexer` which highlights unlexed data with the `CssLexer`. """ name = 'CSS+Ruby' aliases = ['css+ruby', 'css+erb'] alias_filenames = ['*.css'] mimetypes = ['text/css+ruby'] def __init__(self, **options): super().__init__(CssLexer, ErbLexer, **options) def analyse_text(text): return ErbLexer.analyse_text(text) - 0.05 class JavascriptErbLexer(DelegatingLexer): """ Subclass of `ErbLexer` which highlights unlexed data with the `JavascriptLexer`. """ name = 'JavaScript+Ruby' aliases = ['javascript+ruby', 'js+ruby', 'javascript+erb', 'js+erb'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby'] def __init__(self, **options): super().__init__(JavascriptLexer, ErbLexer, **options) def analyse_text(text): return ErbLexer.analyse_text(text) - 0.05 class HtmlPhpLexer(DelegatingLexer): """ Subclass of `PhpLexer` that highlights unhandled data with the `HtmlLexer`. Nested Javascript and CSS is highlighted too. """ name = 'HTML+PHP' aliases = ['html+php'] filenames = ['*.phtml'] alias_filenames = ['*.php', '*.html', '*.htm', '*.xhtml', '*.php[345]'] mimetypes = ['application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5'] def __init__(self, **options): super().__init__(HtmlLexer, PhpLexer, **options) def analyse_text(text): rv = PhpLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): rv += 0.5 return rv class XmlPhpLexer(DelegatingLexer): """ Subclass of `PhpLexer` that highlights unhandled data with the `XmlLexer`. """ name = 'XML+PHP' aliases = ['xml+php'] alias_filenames = ['*.xml', '*.php', '*.php[345]'] mimetypes = ['application/xml+php'] def __init__(self, **options): super().__init__(XmlLexer, PhpLexer, **options) def analyse_text(text): rv = PhpLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class CssPhpLexer(DelegatingLexer): """ Subclass of `PhpLexer` which highlights unmatched data with the `CssLexer`. 
""" name = 'CSS+PHP' aliases = ['css+php'] alias_filenames = ['*.css'] mimetypes = ['text/css+php'] def __init__(self, **options): super().__init__(CssLexer, PhpLexer, **options) def analyse_text(text): return PhpLexer.analyse_text(text) - 0.05 class JavascriptPhpLexer(DelegatingLexer): """ Subclass of `PhpLexer` which highlights unmatched data with the `JavascriptLexer`. """ name = 'JavaScript+PHP' aliases = ['javascript+php', 'js+php'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php'] def __init__(self, **options): super().__init__(JavascriptLexer, PhpLexer, **options) def analyse_text(text): return PhpLexer.analyse_text(text) class HtmlSmartyLexer(DelegatingLexer): """ Subclass of the `SmartyLexer` that highlights unlexed data with the `HtmlLexer`. Nested Javascript and CSS is highlighted too. """ name = 'HTML+Smarty' aliases = ['html+smarty'] alias_filenames = ['*.html', '*.htm', '*.xhtml', '*.tpl'] mimetypes = ['text/html+smarty'] def __init__(self, **options): super().__init__(HtmlLexer, SmartyLexer, **options) def analyse_text(text): rv = SmartyLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): rv += 0.5 return rv class XmlSmartyLexer(DelegatingLexer): """ Subclass of the `SmartyLexer` that highlights unlexed data with the `XmlLexer`. """ name = 'XML+Smarty' aliases = ['xml+smarty'] alias_filenames = ['*.xml', '*.tpl'] mimetypes = ['application/xml+smarty'] def __init__(self, **options): super().__init__(XmlLexer, SmartyLexer, **options) def analyse_text(text): rv = SmartyLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class CssSmartyLexer(DelegatingLexer): """ Subclass of the `SmartyLexer` that highlights unlexed data with the `CssLexer`. """ name = 'CSS+Smarty' aliases = ['css+smarty'] alias_filenames = ['*.css', '*.tpl'] mimetypes = ['text/css+smarty'] def __init__(self, **options): super().__init__(CssLexer, SmartyLexer, **options) def analyse_text(text): return SmartyLexer.analyse_text(text) - 0.05 class JavascriptSmartyLexer(DelegatingLexer): """ Subclass of the `SmartyLexer` that highlights unlexed data with the `JavascriptLexer`. """ name = 'JavaScript+Smarty' aliases = ['javascript+smarty', 'js+smarty'] alias_filenames = ['*.js', '*.tpl'] mimetypes = ['application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty'] def __init__(self, **options): super().__init__(JavascriptLexer, SmartyLexer, **options) def analyse_text(text): return SmartyLexer.analyse_text(text) - 0.05 class HtmlDjangoLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `HtmlLexer`. Nested Javascript and CSS is highlighted too. """ name = 'HTML+Django/Jinja' aliases = ['html+django', 'html+jinja', 'htmldjango'] alias_filenames = ['*.html', '*.htm', '*.xhtml'] mimetypes = ['text/html+django', 'text/html+jinja'] def __init__(self, **options): super().__init__(HtmlLexer, DjangoLexer, **options) def analyse_text(text): rv = DjangoLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): rv += 0.5 return rv class XmlDjangoLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `XmlLexer`. 
""" name = 'XML+Django/Jinja' aliases = ['xml+django', 'xml+jinja'] alias_filenames = ['*.xml'] mimetypes = ['application/xml+django', 'application/xml+jinja'] def __init__(self, **options): super().__init__(XmlLexer, DjangoLexer, **options) def analyse_text(text): rv = DjangoLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class CssDjangoLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `CssLexer`. """ name = 'CSS+Django/Jinja' aliases = ['css+django', 'css+jinja'] alias_filenames = ['*.css'] mimetypes = ['text/css+django', 'text/css+jinja'] def __init__(self, **options): super().__init__(CssLexer, DjangoLexer, **options) def analyse_text(text): return DjangoLexer.analyse_text(text) - 0.05 class JavascriptDjangoLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `JavascriptLexer`. """ name = 'JavaScript+Django/Jinja' aliases = ['javascript+django', 'js+django', 'javascript+jinja', 'js+jinja'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja'] def __init__(self, **options): super().__init__(JavascriptLexer, DjangoLexer, **options) def analyse_text(text): return DjangoLexer.analyse_text(text) - 0.05 class JspRootLexer(RegexLexer): """ Base for the `JspLexer`. Yields `Token.Other` for area outside of JSP tags. .. versionadded:: 0.7 """ tokens = { 'root': [ (r'<%\S?', Keyword, 'sec'), # FIXME: I want to make these keywords but still parse attributes. (r'', Keyword), (r'[^<]+', Other), (r'<', Other), ], 'sec': [ (r'%>', Keyword, '#pop'), # note: '\w\W' != '.' without DOTALL. (r'[\w\W]+?(?=%>|\Z)', using(JavaLexer)), ], } class JspLexer(DelegatingLexer): """ Lexer for Java Server Pages. .. versionadded:: 0.7 """ name = 'Java Server Page' aliases = ['jsp'] filenames = ['*.jsp'] mimetypes = ['application/x-jsp'] def __init__(self, **options): super().__init__(XmlLexer, JspRootLexer, **options) def analyse_text(text): rv = JavaLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 if '<%' in text and '%>' in text: rv += 0.1 return rv class EvoqueLexer(RegexLexer): """ For files using the Evoque templating system. .. versionadded:: 1.1 """ name = 'Evoque' aliases = ['evoque'] filenames = ['*.evoque'] mimetypes = ['application/x-evoque'] flags = re.DOTALL tokens = { 'root': [ (r'[^#$]+', Other), (r'#\[', Comment.Multiline, 'comment'), (r'\$\$', Other), # svn keywords (r'\$\w+:[^$\n]*\$', Comment.Multiline), # directives: begin, end (r'(\$)(begin|end)(\{(%)?)(.*?)((?(4)%)\})', bygroups(Punctuation, Name.Builtin, Punctuation, None, String, Punctuation)), # directives: evoque, overlay # see doc for handling first name arg: /directives/evoque/ # + minor inconsistency: the "name" in e.g. $overlay{name=site_base} # should be using(PythonLexer), not passed out as String (r'(\$)(evoque|overlay)(\{(%)?)(\s*[#\w\-"\'.]+)?' 
r'(.*?)((?(4)%)\})', bygroups(Punctuation, Name.Builtin, Punctuation, None, String, using(PythonLexer), Punctuation)), # directives: if, for, prefer, test (r'(\$)(\w+)(\{(%)?)(.*?)((?(4)%)\})', bygroups(Punctuation, Name.Builtin, Punctuation, None, using(PythonLexer), Punctuation)), # directive clauses (no {} expression) (r'(\$)(else|rof|fi)', bygroups(Punctuation, Name.Builtin)), # expressions (r'(\$\{(%)?)(.*?)((!)(.*?))?((?(2)%)\})', bygroups(Punctuation, None, using(PythonLexer), Name.Builtin, None, None, Punctuation)), (r'#', Other), ], 'comment': [ (r'[^\]#]', Comment.Multiline), (r'#\[', Comment.Multiline, '#push'), (r'\]#', Comment.Multiline, '#pop'), (r'[\]#]', Comment.Multiline) ], } def analyse_text(text): """Evoque templates use $evoque, which is unique.""" if '$evoque' in text: return 1 class EvoqueHtmlLexer(DelegatingLexer): """ Subclass of the `EvoqueLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 1.1 """ name = 'HTML+Evoque' aliases = ['html+evoque'] filenames = ['*.html'] mimetypes = ['text/html+evoque'] def __init__(self, **options): super().__init__(HtmlLexer, EvoqueLexer, **options) def analyse_text(text): return EvoqueLexer.analyse_text(text) class EvoqueXmlLexer(DelegatingLexer): """ Subclass of the `EvoqueLexer` that highlights unlexed data with the `XmlLexer`. .. versionadded:: 1.1 """ name = 'XML+Evoque' aliases = ['xml+evoque'] filenames = ['*.xml'] mimetypes = ['application/xml+evoque'] def __init__(self, **options): super().__init__(XmlLexer, EvoqueLexer, **options) def analyse_text(text): return EvoqueLexer.analyse_text(text) class ColdfusionLexer(RegexLexer): """ Coldfusion statements """ name = 'cfstatement' aliases = ['cfs'] filenames = [] mimetypes = [] flags = re.IGNORECASE tokens = { 'root': [ (r'//.*?\n', Comment.Single), (r'/\*(?:.|\n)*?\*/', Comment.Multiline), (r'\+\+|--', Operator), (r'[-+*/^&=!]', Operator), (r'<=|>=|<|>|==', Operator), (r'mod\b', Operator), (r'(eq|lt|gt|lte|gte|not|is|and|or)\b', Operator), (r'\|\||&&', Operator), (r'\?', Operator), (r'"', String.Double, 'string'), # There is a special rule for allowing html in single quoted # strings, evidently. 
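            # (Clarifying note, not in the original source: double-quoted
            # strings enter the 'string' state above, where #expr# spans are
            # emitted as String.Interp, while the single-quoted rule below
            # swallows the literal whole, so '<b>#name#</b>' stays a single
            # String.Single token.)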
(r"'.*?'", String.Single), (r'\d+', Number), (r'(if|else|len|var|xml|default|break|switch|component|property|function|do|' r'try|catch|in|continue|for|return|while|required|any|array|binary|boolean|' r'component|date|guid|numeric|query|string|struct|uuid|case)\b', Keyword), (r'(true|false|null)\b', Keyword.Constant), (r'(application|session|client|cookie|super|this|variables|arguments)\b', Name.Constant), (r'([a-z_$][\w.]*)(\s*)(\()', bygroups(Name.Function, Text, Punctuation)), (r'[a-z_$][\w.]*', Name.Variable), (r'[()\[\]{};:,.\\]', Punctuation), (r'\s+', Text), ], 'string': [ (r'""', String.Double), (r'#.+?#', String.Interp), (r'[^"#]+', String.Double), (r'#', String.Double), (r'"', String.Double, '#pop'), ], } class ColdfusionMarkupLexer(RegexLexer): """ Coldfusion markup only """ name = 'Coldfusion' aliases = ['cf'] filenames = [] mimetypes = [] tokens = { 'root': [ (r'[^<]+', Other), include('tags'), (r'<[^<>]*', Other), ], 'tags': [ (r'', Comment), (r'', Name.Builtin, 'cfoutput'), (r'(?s)()(.+?)()', bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)), # negative lookbehind is for strings with embedded > (r'(?s)()', bygroups(Name.Builtin, using(ColdfusionLexer), Name.Builtin)), ], 'cfoutput': [ (r'[^#<]+', Other), (r'(#)(.*?)(#)', bygroups(Punctuation, using(ColdfusionLexer), Punctuation)), # (r'', Name.Builtin, '#push'), (r'', Name.Builtin, '#pop'), include('tags'), (r'(?s)<[^<>]*', Other), (r'#', Other), ], 'cfcomment': [ (r'', Comment.Multiline, '#pop'), (r'([^<-]|<(?!!---)|-(?!-->))+', Comment.Multiline), ], } class ColdfusionHtmlLexer(DelegatingLexer): """ Coldfusion markup in html """ name = 'Coldfusion HTML' aliases = ['cfm'] filenames = ['*.cfm', '*.cfml'] mimetypes = ['application/x-coldfusion'] def __init__(self, **options): super().__init__(HtmlLexer, ColdfusionMarkupLexer, **options) class ColdfusionCFCLexer(DelegatingLexer): """ Coldfusion markup/script components .. versionadded:: 2.0 """ name = 'Coldfusion CFC' aliases = ['cfc'] filenames = ['*.cfc'] mimetypes = [] def __init__(self, **options): super().__init__(ColdfusionHtmlLexer, ColdfusionLexer, **options) class SspLexer(DelegatingLexer): """ Lexer for Scalate Server Pages. .. versionadded:: 1.4 """ name = 'Scalate Server Page' aliases = ['ssp'] filenames = ['*.ssp'] mimetypes = ['application/x-ssp'] def __init__(self, **options): super().__init__(XmlLexer, JspRootLexer, **options) def analyse_text(text): rv = 0.0 if re.search(r'val \w+\s*:', text): rv += 0.6 if looks_like_xml(text): rv += 0.2 if '<%' in text and '%>' in text: rv += 0.1 return rv class TeaTemplateRootLexer(RegexLexer): """ Base for the `TeaTemplateLexer`. Yields `Token.Other` for area outside of code blocks. .. versionadded:: 1.5 """ tokens = { 'root': [ (r'<%\S?', Keyword, 'sec'), (r'[^<]+', Other), (r'<', Other), ], 'sec': [ (r'%>', Keyword, '#pop'), # note: '\w\W' != '.' without DOTALL. (r'[\w\W]+?(?=%>|\Z)', using(TeaLangLexer)), ], } class TeaTemplateLexer(DelegatingLexer): """ Lexer for `Tea Templates `_. .. versionadded:: 1.5 """ name = 'Tea' aliases = ['tea'] filenames = ['*.tea'] mimetypes = ['text/x-tea'] def __init__(self, **options): super().__init__(XmlLexer, TeaTemplateRootLexer, **options) def analyse_text(text): rv = TeaLangLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 if '<%' in text and '%>' in text: rv += 0.1 return rv class LassoHtmlLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `HtmlLexer`. Nested JavaScript and CSS is also highlighted. .. 
versionadded:: 1.6 """ name = 'HTML+Lasso' aliases = ['html+lasso'] alias_filenames = ['*.html', '*.htm', '*.xhtml', '*.lasso', '*.lasso[89]', '*.incl', '*.inc', '*.las'] mimetypes = ['text/html+lasso', 'application/x-httpd-lasso', 'application/x-httpd-lasso[89]'] def __init__(self, **options): super().__init__(HtmlLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.01 if html_doctype_matches(text): # same as HTML lexer rv += 0.5 return rv class LassoXmlLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `XmlLexer`. .. versionadded:: 1.6 """ name = 'XML+Lasso' aliases = ['xml+lasso'] alias_filenames = ['*.xml', '*.lasso', '*.lasso[89]', '*.incl', '*.inc', '*.las'] mimetypes = ['application/xml+lasso'] def __init__(self, **options): super().__init__(XmlLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.01 if looks_like_xml(text): rv += 0.4 return rv class LassoCssLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `CssLexer`. .. versionadded:: 1.6 """ name = 'CSS+Lasso' aliases = ['css+lasso'] alias_filenames = ['*.css'] mimetypes = ['text/css+lasso'] def __init__(self, **options): options['requiredelimiters'] = True super().__init__(CssLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.05 if re.search(r'\w+:[^;]+;', text): rv += 0.1 if 'padding:' in text: rv += 0.1 return rv class LassoJavascriptLexer(DelegatingLexer): """ Subclass of the `LassoLexer` which highlights unhandled data with the `JavascriptLexer`. .. versionadded:: 1.6 """ name = 'JavaScript+Lasso' aliases = ['javascript+lasso', 'js+lasso'] alias_filenames = ['*.js'] mimetypes = ['application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso'] def __init__(self, **options): options['requiredelimiters'] = True super().__init__(JavascriptLexer, LassoLexer, **options) def analyse_text(text): rv = LassoLexer.analyse_text(text) - 0.05 return rv class HandlebarsLexer(RegexLexer): """ Generic `handlebars ` template lexer. Highlights only the Handlebars template tags (stuff between `{{` and `}}`). Everything else is left for a delegating lexer. .. versionadded:: 2.0 """ name = "Handlebars" aliases = ['handlebars'] tokens = { 'root': [ (r'[^{]+', Other), # Comment start {{! 
}} or {{!-- (r'\{\{!.*\}\}', Comment), # HTML Escaping open {{{expression (r'(\{\{\{)(\s*)', bygroups(Comment.Special, Text), 'tag'), # {{blockOpen {{#blockOpen {{/blockClose with optional tilde ~ (r'(\{\{)([#~/]+)([^\s}]*)', bygroups(Comment.Preproc, Number.Attribute, Number.Attribute), 'tag'), (r'(\{\{)(\s*)', bygroups(Comment.Preproc, Text), 'tag'), ], 'tag': [ (r'\s+', Text), # HTML Escaping close }}} (r'\}\}\}', Comment.Special, '#pop'), # blockClose}}, includes optional tilde ~ (r'(~?)(\}\})', bygroups(Number, Comment.Preproc), '#pop'), # {{opt=something}} (r'([^\s}]+)(=)', bygroups(Name.Attribute, Operator)), # Partials {{> ...}} (r'(>)(\s*)(@partial-block)', bygroups(Keyword, Text, Keyword)), (r'(#?>)(\s*)([\w-]+)', bygroups(Keyword, Text, Name.Variable)), (r'(>)(\s*)(\()', bygroups(Keyword, Text, Punctuation), 'dynamic-partial'), include('generic'), ], 'dynamic-partial': [ (r'\s+', Text), (r'\)', Punctuation, '#pop'), (r'(lookup)(\s+)(\.|this)(\s+)', bygroups(Keyword, Text, Name.Variable, Text)), (r'(lookup)(\s+)(\S+)', bygroups(Keyword, Text, using(this, state='variable'))), (r'[\w-]+', Name.Function), include('generic'), ], 'variable': [ (r'[()/@a-zA-Z][\w-]*', Name.Variable), (r'\.[\w-]+', Name.Variable), (r'(this\/|\.\/|(\.\.\/)+)[\w-]+', Name.Variable), ], 'generic': [ include('variable'), # borrowed from DjangoLexer (r':?"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r":?'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), ] } class HandlebarsHtmlLexer(DelegatingLexer): """ Subclass of the `HandlebarsLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML+Handlebars" aliases = ["html+handlebars"] filenames = ['*.handlebars', '*.hbs'] mimetypes = ['text/html+handlebars', 'text/x-handlebars-template'] def __init__(self, **options): super().__init__(HtmlLexer, HandlebarsLexer, **options) class YamlJinjaLexer(DelegatingLexer): """ Subclass of the `DjangoLexer` that highlights unlexed data with the `YamlLexer`. Commonly used in Saltstack salt states. .. versionadded:: 2.0 """ name = 'YAML+Jinja' aliases = ['yaml+jinja', 'salt', 'sls'] filenames = ['*.sls'] mimetypes = ['text/x-yaml+jinja', 'text/x-sls'] def __init__(self, **options): super().__init__(YamlLexer, DjangoLexer, **options) class LiquidLexer(RegexLexer): """ Lexer for `Liquid templates `_. .. 
versionadded:: 2.0 """ name = 'liquid' aliases = ['liquid'] filenames = ['*.liquid'] tokens = { 'root': [ (r'[^{]+', Text), # tags and block tags (r'(\{%)(\s*)', bygroups(Punctuation, Whitespace), 'tag-or-block'), # output tags (r'(\{\{)(\s*)([^\s}]+)', bygroups(Punctuation, Whitespace, using(this, state = 'generic')), 'output'), (r'\{', Text) ], 'tag-or-block': [ # builtin logic blocks (r'(if|unless|elsif|case)(?=\s+)', Keyword.Reserved, 'condition'), (r'(when)(\s+)', bygroups(Keyword.Reserved, Whitespace), combined('end-of-block', 'whitespace', 'generic')), (r'(else)(\s*)(%\})', bygroups(Keyword.Reserved, Whitespace, Punctuation), '#pop'), # other builtin blocks (r'(capture)(\s+)([^\s%]+)(\s*)(%\})', bygroups(Name.Tag, Whitespace, using(this, state = 'variable'), Whitespace, Punctuation), '#pop'), (r'(comment)(\s*)(%\})', bygroups(Name.Tag, Whitespace, Punctuation), 'comment'), (r'(raw)(\s*)(%\})', bygroups(Name.Tag, Whitespace, Punctuation), 'raw'), # end of block (r'(end(case|unless|if))(\s*)(%\})', bygroups(Keyword.Reserved, None, Whitespace, Punctuation), '#pop'), (r'(end([^\s%]+))(\s*)(%\})', bygroups(Name.Tag, None, Whitespace, Punctuation), '#pop'), # builtin tags (assign and include are handled together with usual tags) (r'(cycle)(\s+)(?:([^\s:]*)(:))?(\s*)', bygroups(Name.Tag, Whitespace, using(this, state='generic'), Punctuation, Whitespace), 'variable-tag-markup'), # other tags or blocks (r'([^\s%]+)(\s*)', bygroups(Name.Tag, Whitespace), 'tag-markup') ], 'output': [ include('whitespace'), (r'\}\}', Punctuation, '#pop'), # end of output (r'\|', Punctuation, 'filters') ], 'filters': [ include('whitespace'), (r'\}\}', Punctuation, ('#pop', '#pop')), # end of filters and output (r'([^\s|:]+)(:?)(\s*)', bygroups(Name.Function, Punctuation, Whitespace), 'filter-markup') ], 'filter-markup': [ (r'\|', Punctuation, '#pop'), include('end-of-tag'), include('default-param-markup') ], 'condition': [ include('end-of-block'), include('whitespace'), (r'([^\s=!><]+)(\s*)([=!><]=?)(\s*)(\S+)(\s*)(%\})', bygroups(using(this, state = 'generic'), Whitespace, Operator, Whitespace, using(this, state = 'generic'), Whitespace, Punctuation)), (r'\b!', Operator), (r'\bnot\b', Operator.Word), (r'([\w.\'"]+)(\s+)(contains)(\s+)([\w.\'"]+)', bygroups(using(this, state = 'generic'), Whitespace, Operator.Word, Whitespace, using(this, state = 'generic'))), include('generic'), include('whitespace') ], 'generic-value': [ include('generic'), include('end-at-whitespace') ], 'operator': [ (r'(\s*)((=|!|>|<)=?)(\s*)', bygroups(Whitespace, Operator, None, Whitespace), '#pop'), (r'(\s*)(\bcontains\b)(\s*)', bygroups(Whitespace, Operator.Word, Whitespace), '#pop'), ], 'end-of-tag': [ (r'\}\}', Punctuation, '#pop') ], 'end-of-block': [ (r'%\}', Punctuation, ('#pop', '#pop')) ], 'end-at-whitespace': [ (r'\s+', Whitespace, '#pop') ], # states for unknown markup 'param-markup': [ include('whitespace'), # params with colons or equals (r'([^\s=:]+)(\s*)(=|:)', bygroups(Name.Attribute, Whitespace, Operator)), # explicit variables (r'(\{\{)(\s*)([^\s}])(\s*)(\}\})', bygroups(Punctuation, Whitespace, using(this, state = 'variable'), Whitespace, Punctuation)), include('string'), include('number'), include('keyword'), (r',', Punctuation) ], 'default-param-markup': [ include('param-markup'), (r'.', Text) # fallback for switches / variables / un-quoted strings / ... 
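            # (Illustrative note, not in the original source: markup such as
            # {% include 'snippet', var: value %} ends up in this state, where
            # the param-markup rules above pick out attribute names, ':' or
            # '=' separators, and the values that follow them.)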
        ],
        'variable-param-markup': [
            include('param-markup'),
            include('variable'),
            (r'.', Text)  # fallback
        ],

        'tag-markup': [
            (r'%\}', Punctuation, ('#pop', '#pop')),  # end of tag
            include('default-param-markup')
        ],
        'variable-tag-markup': [
            (r'%\}', Punctuation, ('#pop', '#pop')),  # end of tag
            include('variable-param-markup')
        ],

        # states for different value types
        'keyword': [
            (r'\b(false|true)\b', Keyword.Constant)
        ],
        'variable': [
            (r'[a-zA-Z_]\w*', Name.Variable),
            (r'(?<=\w)\.(?=\w)', Punctuation)
        ],
        'string': [
            (r"'[^']*'", String.Single),
            (r'"[^"]*"', String.Double)
        ],
        'number': [
            (r'\d+\.\d+', Number.Float),
            (r'\d+', Number.Integer)
        ],
        'generic': [  # decides for variable, string, keyword or number
            include('keyword'),
            include('string'),
            include('number'),
            include('variable')
        ],
        'whitespace': [
            (r'[ \t]+', Whitespace)
        ],

        # states for builtin blocks
        'comment': [
            (r'(\{%)(\s*)(endcomment)(\s*)(%\})',
             bygroups(Punctuation, Whitespace, Name.Tag, Whitespace,
                      Punctuation), ('#pop', '#pop')),
            (r'.', Comment)
        ],
        'raw': [
            (r'[^{]+', Text),
            (r'(\{%)(\s*)(endraw)(\s*)(%\})',
             bygroups(Punctuation, Whitespace, Name.Tag, Whitespace,
                      Punctuation), '#pop'),
            (r'\{', Text)
        ],
    }


class TwigLexer(RegexLexer):
    """
    Twig template lexer.

    It just highlights Twig code between the preprocessor directives,
    other data is left untouched by the lexer.

    .. versionadded:: 2.0
    """

    name = 'Twig'
    aliases = ['twig']
    mimetypes = ['application/x-twig']

    flags = re.M | re.S

    # Note that a backslash is included in the following two patterns
    # PHP uses a backslash as a namespace separator
    _ident_char = r'[\\\w-]|[^\x00-\x7f]'
    _ident_begin = r'(?:[\\_a-z]|[^\x00-\x7f])'
    _ident_end = r'(?:' + _ident_char + ')*'
    _ident_inner = _ident_begin + _ident_end

    tokens = {
        'root': [
            (r'[^{]+', Other),
            (r'\{\{', Comment.Preproc, 'var'),
            # twig comments
            (r'\{\#.*?\#\}', Comment),
            # raw twig blocks
            (r'(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)'
             r'(\{%)(-?\s*)(endraw)(\s*-?)(%\})',
             bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc,
                      Other, Comment.Preproc, Text, Keyword, Text,
                      Comment.Preproc)),
            (r'(\{%)(-?\s*)(verbatim)(\s*-?)(%\})(.*?)'
             r'(\{%)(-?\s*)(endverbatim)(\s*-?)(%\})',
             bygroups(Comment.Preproc, Text, Keyword, Text, Comment.Preproc,
                      Other, Comment.Preproc, Text, Keyword, Text,
                      Comment.Preproc)),
            # filter blocks
            (r'(\{%%)(-?\s*)(filter)(\s+)(%s)' % _ident_inner,
             bygroups(Comment.Preproc, Text, Keyword, Text, Name.Function),
             'tag'),
            (r'(\{%)(-?\s*)([a-zA-Z_]\w*)',
             bygroups(Comment.Preproc, Text, Keyword), 'tag'),
            (r'\{', Other),
        ],
        'varnames': [
            (r'(\|)(\s*)(%s)' % _ident_inner,
             bygroups(Operator, Text, Name.Function)),
            (r'(is)(\s+)(not)?(\s*)(%s)' % _ident_inner,
             bygroups(Keyword, Text, Keyword, Text, Name.Function)),
            (r'(?i)(true|false|none|null)\b', Keyword.Pseudo),
            (r'(in|not|and|b-and|or|b-or|b-xor|is'
             r'|if|elseif|else|import'
             r'|constant|defined|divisibleby|empty|even|iterable|odd|sameas'
             r'|matches|starts\s+with|ends\s+with)\b', Keyword),
            (r'(loop|block|parent)\b', Name.Builtin),
            (_ident_inner, Name.Variable),
            (r'\.'
+ _ident_inner, Name.Variable), (r'\.[0-9]+', Number), (r':?"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r":?'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r'([{}()\[\]+\-*/,:~%]|\.\.|\?|:|\*\*|\/\/|!=|[><=]=?)', Operator), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), ], 'var': [ (r'\s+', Text), (r'(-?)(\}\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames') ], 'tag': [ (r'\s+', Text), (r'(-?)(%\})', bygroups(Text, Comment.Preproc), '#pop'), include('varnames'), (r'.', Punctuation), ], } class TwigHtmlLexer(DelegatingLexer): """ Subclass of the `TwigLexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML+Twig" aliases = ["html+twig"] filenames = ['*.twig'] mimetypes = ['text/html+twig'] def __init__(self, **options): super().__init__(HtmlLexer, TwigLexer, **options) class Angular2Lexer(RegexLexer): """ Generic `angular2 `_ template lexer. Highlights only the Angular template tags (stuff between `{{` and `}}` and special attributes: '(event)=', '[property]=', '[(twoWayBinding)]='). Everything else is left for a delegating lexer. .. versionadded:: 2.1 """ name = "Angular2" aliases = ['ng2'] tokens = { 'root': [ (r'[^{([*#]+', Other), # {{meal.name}} (r'(\{\{)(\s*)', bygroups(Comment.Preproc, Text), 'ngExpression'), # (click)="deleteOrder()"; [value]="test"; [(twoWayTest)]="foo.bar" (r'([([]+)([\w:.-]+)([\])]+)(\s*)(=)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation, Text, Operator, Text), 'attr'), (r'([([]+)([\w:.-]+)([\])]+)(\s*)', bygroups(Punctuation, Name.Attribute, Punctuation, Text)), # *ngIf="..."; #f="ngForm" (r'([*#])([\w:.-]+)(\s*)(=)(\s*)', bygroups(Punctuation, Name.Attribute, Text, Operator, Text), 'attr'), (r'([*#])([\w:.-]+)(\s*)', bygroups(Punctuation, Name.Attribute, Text)), ], 'ngExpression': [ (r'\s+(\|\s+)?', Text), (r'\}\}', Comment.Preproc, '#pop'), # Literals (r':?(true|false)', String.Boolean), (r':?"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r":?'(\\\\|\\[^\\]|[^'\\])*'", String.Single), (r"[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|" r"0[xX][0-9a-fA-F]+[Ll]?", Number), # Variabletext (r'[a-zA-Z][\w-]*(\(.*\))?', Name.Variable), (r'\.[\w-]+(\(.*\))?', Name.Variable), # inline If (r'(\?)(\s*)([^}\s]+)(\s*)(:)(\s*)([^}\s]+)(\s*)', bygroups(Operator, Text, String, Text, Operator, Text, String, Text)), ], 'attr': [ ('".*?"', String, '#pop'), ("'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop'), ], } class Angular2HtmlLexer(DelegatingLexer): """ Subclass of the `Angular2Lexer` that highlights unlexed data with the `HtmlLexer`. .. versionadded:: 2.0 """ name = "HTML + Angular2" aliases = ["html+ng2"] filenames = ['*.ng2'] def __init__(self, **options): super().__init__(HtmlLexer, Angular2Lexer, **options) pygments-2.11.2/pygments/lexers/maxima.py0000644000175000017500000000522514165547207020340 0ustar carstencarsten""" pygments.lexers.maxima ~~~~~~~~~~~~~~~~~~~~~~ Lexer for the computer algebra system Maxima. Derived from pygments/lexers/algebra.py. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['MaximaLexer'] class MaximaLexer(RegexLexer): """ A `Maxima `_ lexer. Derived from pygments.lexers.MuPADLexer. .. 
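    A short usage sketch (illustrative only, not part of the original
    docstring)::

        from pygments.lexers.maxima import MaximaLexer

        for tok, val in MaximaLexer().get_tokens('f(x) := x^2 + %pi$'):
            print(tok, repr(val))

    ..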
versionadded:: 2.11 """ name = 'Maxima' aliases = ['maxima', 'macsyma'] filenames = ['*.mac', '*.max'] keywords = ('if', 'then', 'else', 'elseif', 'do', 'while', 'repeat', 'until', 'for', 'from', 'to', 'downto', 'step', 'thru') constants = ('%pi', '%e', '%phi', '%gamma', '%i', 'und', 'ind', 'infinity', 'inf', 'minf', 'true', 'false', 'unknown', 'done') operators = (r'.', r':', r'=', r'#', r'+', r'-', r'*', r'/', r'^', r'@', r'>', r'<', r'|', r'!', r"'") operator_words = ('and', 'or', 'not') tokens = { 'root': [ (r'/\*', Comment.Multiline, 'comment'), (r'"(?:[^"\\]|\\.)*"', String), (r'\(|\)|\[|\]|\{|\}', Punctuation), (r'[,;$]', Punctuation), (words (constants), Name.Constant), (words (keywords), Keyword), (words (operators), Operator), (words (operator_words), Operator.Word), (r'''(?x) ((?:[a-zA-Z_#][\w#]*|`[^`]*`) (?:::[a-zA-Z_#][\w#]*|`[^`]*`)*)(\s*)([(])''', bygroups(Name.Function, Text.Whitespace, Punctuation)), (r'''(?x) (?:[a-zA-Z_#%][\w#%]*|`[^`]*`) (?:::[a-zA-Z_#%][\w#%]*|`[^`]*`)*''', Name.Variable), (r'[-+]?(\d*\.\d+([bdefls][-+]?\d+)?|\d+(\.\d*)?[bdefls][-+]?\d+)', Number.Float), (r'[-+]?\d+', Number.Integer), (r'\s+', Text.Whitespace), (r'.', Text) ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ] } def analyse_text (text): strength = 0.0 # Input expression terminator. if re.search (r'\$\s*$', text, re.MULTILINE): strength += 0.05 # Function definition operator. if ':=' in text: strength += 0.02 return strength pygments-2.11.2/pygments/lexers/apdlexer.py0000644000175000017500000006404414165547207020674 0ustar carstencarsten""" pygments.lexers.apdlexer ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for ANSYS Parametric Design Language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, words from pygments.token import Comment, Keyword, Name, Text, Number, Operator, \ String, Generic, Punctuation, Whitespace __all__ = ['apdlexer'] class apdlexer(RegexLexer): """ For APDL source code. .. 
versionadded:: 2.9 """ name = 'ANSYS parametric design language' aliases = ['ansys', 'apdl'] filenames = ['*.ans'] flags = re.IGNORECASE # list of elements elafunb = ("SURF152", "SURF153", "SURF154", "SURF156", "SHELL157", "SURF159", "LINK160", "BEAM161", "PLANE162", "SHELL163", "SOLID164", "COMBI165", "MASS166", "LINK167", "SOLID168", "TARGE169", "TARGE170", "CONTA171", "CONTA172", "CONTA173", "CONTA174", "CONTA175", "CONTA176", "CONTA177", "CONTA178", "PRETS179", "LINK180", "SHELL181", "PLANE182", "PLANE183", "MPC184", "SOLID185", "SOLID186", "SOLID187", "BEAM188", "BEAM189", "SOLSH190", "INTER192", "INTER193", "INTER194", "INTER195", "MESH200", "FOLLW201", "INTER202", "INTER203", "INTER204", "INTER205", "SHELL208", "SHELL209", "CPT212", "CPT213", "COMBI214", "CPT215", "CPT216", "CPT217", "FLUID220", "FLUID221", "PLANE223", "SOLID226", "SOLID227", "PLANE230", "SOLID231", "SOLID232", "PLANE233", "SOLID236", "SOLID237", "PLANE238", "SOLID239", "SOLID240", "HSFLD241", "HSFLD242", "SURF251", "SURF252", "REINF263", "REINF264", "REINF265", "SOLID272", "SOLID273", "SOLID278", "SOLID279", "SHELL281", "SOLID285", "PIPE288", "PIPE289", "ELBOW290", "USER300", "BEAM3", "BEAM4", "BEAM23", "BEAM24", "BEAM44", "BEAM54", "COMBIN7", "FLUID79", "FLUID80", "FLUID81", "FLUID141", "FLUID142", "INFIN9", "INFIN47", "PLANE13", "PLANE25", "PLANE42", "PLANE53", "PLANE67", "PLANE82", "PLANE83", "PLANE145", "PLANE146", "CONTAC12", "CONTAC52", "LINK1", "LINK8", "LINK10", "LINK32", "PIPE16", "PIPE17", "PIPE18", "PIPE20", "PIPE59", "PIPE60", "SHELL41", "SHELL43", "SHELL57", "SHELL63", "SHELL91", "SHELL93", "SHELL99", "SHELL150", "SOLID5", "SOLID45", "SOLID46", "SOLID65", "SOLID69", "SOLID92", "SOLID95", "SOLID117", "SOLID127", "SOLID128", "SOLID147", "SOLID148", "SOLID191", "VISCO88", "VISCO89", "VISCO106", "VISCO107", "VISCO108", "TRANS109") elafunc = ("PGRAPH", "/VT", "VTIN", "VTRFIL", "VTTEMP", "PGRSET", "VTCLR", "VTMETH", "VTRSLT", "VTVMOD", "PGSELE", "VTDISC", "VTMP", "VTSEC", "PGWRITE", "VTEVAL", "VTOP", "VTSFE", "POUTRES", "VTFREQ", "VTPOST", "VTSL", "FLDATA1-40", "HFPCSWP", "MSDATA", "MSVARY", "QFACT", "FLOCHECK", "HFPOWER", "MSMASS", "PERI", "SPADP", "FLREAD", "HFPORT", "MSMETH", "PLFSS", "SPARM", "FLOTRAN", "HFSCAT", "MSMIR", "PLSCH", "SPFSS", "HFADP", "ICE", "MSNOMF", "PLSYZ", "SPICE", "HFARRAY", "ICEDELE", "MSPROP", "PLTD", "SPSCAN", "HFDEEM", "ICELIST", "MSQUAD", "PLTLINE", "SPSWP", "HFEIGOPT", "ICVFRC", "MSRELAX", "PLVFRC", "HFEREFINE", "LPRT", "MSSOLU", "/PICE", "HFMODPRT", "MSADV", "MSSPEC", "PLWAVE", "HFPA", "MSCAP", "MSTERM", "PRSYZ") elafund = ("*VOPER", "VOVLAP", "*VPLOT", "VPLOT", "VPTN", "*VPUT", "VPUT", "*VREAD", "VROTAT", "VSBA", "VSBV", "VSBW", "/VSCALE", "*VSCFUN", "VSEL", "VSLA", "*VSTAT", "VSUM", "VSWEEP", "VSYMM", "VTRAN", "VTYPE", "/VUP", "*VWRITE", "/WAIT", "WAVES", "WERASE", "WFRONT", "/WINDOW", "WMID", "WMORE", "WPAVE", "WPCSYS", "WPLANE", "WPOFFS", "WPROTA", "WPSTYL", "WRFULL", "WRITE", "WRITEMAP", "*WRK", "WSORT", "WSPRINGS", "WSTART", "WTBCREATE", "XFDATA", "XFENRICH", "XFLIST", "/XFRM", "/XRANGE", "XVAR", "/YRANGE", "/ZOOM", "/WB", "XMLO", "/XML", "CNTR", "EBLOCK", "CMBLOCK", "NBLOCK", "/TRACK", "CWZPLOT", "~EUI", "NELE", "EALL", "NALL", "FLITEM", "LSLN", "PSOLVE", "ASLN", "/VERIFY", "/SSS", "~CFIN", "*EVAL", "*MOONEY", "/RUNSTAT", "ALPFILL", "ARCOLLAPSE", "ARDETACH", "ARFILL", "ARMERGE", "ARSPLIT", "FIPLOT", "GAPFINISH", "GAPLIST", "GAPMERGE", "GAPOPT", "GAPPLOT", "LNCOLLAPSE", "LNDETACH", "LNFILL", "LNMERGE", "LNSPLIT", "PCONV", "PLCONV", "PEMOPTS", "PEXCLUDE", "PINCLUDE", 
"PMETH", "/PMETH", "PMOPTS", "PPLOT", "PPRANGE", "PRCONV", "PRECISION", "RALL", "RFILSZ", "RITER", "RMEMRY", "RSPEED", "RSTAT", "RTIMST", "/RUNST", "RWFRNT", "SARPLOT", "SHSD", "SLPPLOT", "SLSPLOT", "VCVFILL", "/OPT", "OPEQN", "OPFACT", "OPFRST", "OPGRAD", "OPKEEP", "OPLOOP", "OPPRNT", "OPRAND", "OPSUBP", "OPSWEEP", "OPTYPE", "OPUSER", "OPVAR", "OPADD", "OPCLR", "OPDEL", "OPMAKE", "OPSEL", "OPANL", "OPDATA", "OPRESU", "OPSAVE", "OPEXE", "OPLFA", "OPLGR", "OPLIST", "OPLSW", "OPRFA", "OPRGR", "OPRSW", "PILECALC", "PILEDISPSET", "PILEGEN", "PILELOAD", "PILEMASS", "PILERUN", "PILESEL", "PILESTIF", "PLVAROPT", "PRVAROPT", "TOCOMP", "TODEF", "TOFREQ", "TOTYPE", "TOVAR", "TOEXE", "TOLOOP", "TOGRAPH", "TOLIST", "TOPLOT", "TOPRINT", "TOSTAT", "TZAMESH", "TZDELE", "TZEGEN", "XVAROPT", "PGSAVE", "SOLCONTROL", "TOTAL", "VTGEOM", "VTREAL", "VTSTAT") elafune = ("/ANUM", "AOFFST", "AOVLAP", "APLOT", "APPEND", "APTN", "ARCLEN", "ARCTRM", "AREAS", "AREFINE", "AREMESH", "AREVERSE", "AROTAT", "ARSCALE", "ARSYM", "ASBA", "ASBL", "ASBV", "ASBW", "ASCRES", "ASEL", "ASIFILE", "*ASK", "ASKIN", "ASLL", "ASLV", "ASOL", "/ASSIGN", "ASUB", "ASUM", "ATAN", "ATRAN", "ATYPE", "/AUTO", "AUTOTS", "/AUX2", "/AUX3", "/AUX12", "/AUX15", "AVPRIN", "AVRES", "AWAVE", "/AXLAB", "*AXPY", "/BATCH", "BCSOPTION", "BETAD", "BF", "BFA", "BFADELE", "BFALIST", "BFCUM", "BFDELE", "BFE", "BFECUM", "BFEDELE", "BFELIST", "BFESCAL", "BFINT", "BFK", "BFKDELE", "BFKLIST", "BFL", "BFLDELE", "BFLIST", "BFLLIST", "BFSCALE", "BFTRAN", "BFUNIF", "BFV", "BFVDELE", "BFVLIST", "BIOOPT", "BIOT", "BLC4", "BLC5", "BLOCK", "BOOL", "BOPTN", "BSAX", "BSMD", "BSM1", "BSM2", "BSPLIN", "BSS1", "BSS2", "BSTE", "BSTQ", "BTOL", "BUCOPT", "C", "CALC", "CAMPBELL", "CBDOF", "CBMD", "CBMX", "CBTE", "CBTMP", "CDOPT", "CDREAD", "CDWRITE", "CE", "CECHECK", "CECMOD", "CECYC", "CEDELE", "CEINTF", "CELIST", "CENTER", "CEQN", "CERIG", "CESGEN", "CFACT", "*CFCLOS", "*CFOPEN", "*CFWRITE", "/CFORMAT", "CGLOC", "CGOMGA", "CGROW", "CHECK", "CHKMSH", "CINT", "CIRCLE", "CISOL", "/CLABEL", "/CLEAR", "CLOCAL", "CLOG", "/CLOG", "CLRMSHLN", "CM", "CMACEL", "/CMAP", "CMATRIX", "CMDELE", "CMDOMEGA", "CMEDIT", "CMGRP", "CMLIST", "CMMOD", "CMOMEGA", "CMPLOT", "CMROTATE", "CMSEL", "CMSFILE", "CMSOPT", "CMWRITE", "CNCHECK", "CNKMOD", "CNTR", "CNVTOL", "/COLOR", "/COM", "*COMP", "COMBINE", "COMPRESS", "CON4", "CONE", "/CONFIG", "CONJUG", "/CONTOUR", "/COPY", "CORIOLIS", "COUPLE", "COVAL", "CP", "CPCYC", "CPDELE", "CPINTF", "/CPLANE", "CPLGEN", "CPLIST", "CPMERGE", "CPNGEN", "CPSGEN", "CQC", "*CREATE", "CRPLIM", "CS", "CSCIR", "CSDELE", "CSKP", "CSLIST", "CSWPLA", "CSYS", "/CTYPE", "CURR2D", "CUTCONTROL", "/CVAL", "CVAR", "/CWD", "CYCCALC", "/CYCEXPAND", "CYCFILES", "CYCFREQ", "*CYCLE", "CYCLIC", "CYCOPT", "CYCPHASE", "CYCSPEC", "CYL4", "CYL5", "CYLIND", "CZDEL", "CZMESH", "D", "DA", "DADELE", "DALIST", "DAMORPH", "DATA", "DATADEF", "DCGOMG", "DCUM", "DCVSWP", "DDASPEC", "DDELE", "DDOPTION", "DEACT", "DEFINE", "*DEL", "DELETE", "/DELETE", "DELTIM", "DEMORPH", "DERIV", "DESIZE", "DESOL", "DETAB", "/DEVDISP", "/DEVICE", "/DFLAB", "DFLX", "DFSWAVE", "DIG", "DIGIT", "*DIM", "/DIRECTORY", "DISPLAY", "/DIST", "DJ", "DJDELE", "DJLIST", "DK", "DKDELE", "DKLIST", "DL", "DLDELE", "DLIST", "DLLIST", "*DMAT", "DMOVE", "DMPEXT", "DMPOPTION", "DMPRAT", "DMPSTR", "DNSOL", "*DO", "DOF", "DOFSEL", "DOMEGA", "*DOT", "*DOWHILE", "DSCALE", "/DSCALE", "DSET", "DSPOPTION", "DSUM", "DSURF", "DSYM", "DSYS", "DTRAN", "DUMP", "/DV3D", "DVAL", "DVMORPH", "DYNOPT", "E", "EALIVE", "EDADAPT", "EDALE", "EDASMP", "EDBOUND", 
"EDBX", "EDBVIS", "EDCADAPT", "EDCGEN", "EDCLIST", "EDCMORE", "EDCNSTR", "EDCONTACT", "EDCPU", "EDCRB", "EDCSC", "EDCTS", "EDCURVE", "EDDAMP", "EDDBL", "EDDC", "EDDRELAX", "EDDUMP", "EDELE", "EDENERGY", "EDFPLOT", "EDGCALE", "/EDGE", "EDHGLS", "EDHIST", "EDHTIME", "EDINT", "EDIPART", "EDIS", "EDLCS", "EDLOAD", "EDMP", "EDNB", "EDNDTSD", "EDNROT", "EDOPT", "EDOUT", "EDPART", "EDPC", "EDPL", "EDPVEL", "EDRC", "EDRD", "EDREAD", "EDRI", "EDRST", "EDRUN", "EDSHELL", "EDSOLV", "EDSP", "EDSTART", "EDTERM", "EDTP", "EDVEL", "EDWELD", "EDWRITE", "EEXTRUDE", "/EFACET", "EGEN", "*EIGEN", "EINFIN", "EINTF", "EKILL", "ELBOW", "ELEM", "ELIST", "*ELSE", "*ELSEIF", "EMAGERR", "EMATWRITE", "EMF", "EMFT", "EMID", "EMIS", "EMODIF", "EMORE", "EMSYM", "EMTGEN", "EMUNIT", "EN", "*END", "*ENDDO", "*ENDIF", "ENDRELEASE", "ENERSOL", "ENGEN", "ENORM", "ENSYM", "EORIENT", "EPLOT", "EQSLV", "ERASE", "/ERASE", "EREAD", "EREFINE", "EREINF", "ERESX", "ERNORM", "ERRANG", "ESCHECK", "ESEL", "/ESHAPE", "ESIZE", "ESLA", "ESLL", "ESLN", "ESLV", "ESOL", "ESORT", "ESSOLV", "ESTIF", "ESURF", "ESYM", "ESYS", "ET", "ETABLE", "ETCHG", "ETCONTROL", "ETDELE", "ETLIST", "ETYPE", "EUSORT", "EWRITE", "*EXIT", "/EXIT", "EXP", "EXPAND", "/EXPAND", "EXPASS", "*EXPORT", "EXPROFILE", "EXPSOL", "EXTOPT", "EXTREM", "EXUNIT", "F", "/FACET", "FATIGUE", "FC", "FCCHECK", "FCDELE", "FCLIST", "FCUM", "FCTYP", "FDELE", "/FDELE", "FE", "FEBODY", "FECONS", "FEFOR", "FELIST", "FESURF", "*FFT", "FILE", "FILEAUX2", "FILEAUX3", "FILEDISP", "FILL", "FILLDATA", "/FILNAME", "FINISH", "FITEM", "FJ", "FJDELE", "FJLIST", "FK", "FKDELE", "FKLIST", "FL", "FLIST", "FLLIST", "FLST", "FLUXV", "FLUREAD", "FMAGBC", "FMAGSUM", "/FOCUS", "FOR2D", "FORCE", "FORM", "/FORMAT", "FP", "FPLIST", "*FREE", "FREQ", "FRQSCL", "FS", "FSCALE", "FSDELE", "FSLIST", "FSNODE", "FSPLOT", "FSSECT", "FSSPARM", "FSUM", "FTCALC", "FTRAN", "FTSIZE", "FTWRITE", "FTYPE", "FVMESH", "GAP", "GAPF", "GAUGE", "GCDEF", "GCGEN", "/GCMD", "/GCOLUMN", "GENOPT", "GEOM", "GEOMETRY", "*GET", "/GFILE", "/GFORMAT", "/GLINE", "/GMARKER", "GMATRIX", "GMFACE", "*GO", "/GO", "/GOLIST", "/GOPR", "GP", "GPDELE", "GPLIST", "GPLOT", "/GRAPHICS", "/GRESUME", "/GRID", "/GROPT", "GRP", "/GRTYP", "/GSAVE", "GSBDATA", "GSGDATA", "GSLIST", "GSSOL", "/GST", "GSUM", "/GTHK", "/GTYPE", "HARFRQ", "/HBC", "HBMAT", "/HEADER", "HELP", "HELPDISP", "HEMIOPT", "HFANG", "HFSYM", "HMAGSOLV", "HPGL", "HPTCREATE", "HPTDELETE", "HRCPLX", "HREXP", "HROPT", "HROCEAN", "HROUT", "IC", "ICDELE", "ICLIST", "/ICLWID", "/ICSCALE", "*IF", "IGESIN", "IGESOUT", "/IMAGE", "IMAGIN", "IMESH", "IMMED", "IMPD", "INISTATE", "*INIT", "/INPUT", "/INQUIRE", "INRES", "INRTIA", "INT1", "INTSRF", "IOPTN", "IRLF", "IRLIST", "*ITENGINE", "JPEG", "JSOL", "K", "KATT", "KBC", "KBETW", "KCALC", "KCENTER", "KCLEAR", "KDELE", "KDIST", "KEEP", "KESIZE", "KEYOPT", "KEYPTS", "KEYW", "KFILL", "KGEN", "KL", "KLIST", "KMESH", "KMODIF", "KMOVE", "KNODE", "KPLOT", "KPSCALE", "KREFINE", "KSCALE", "KSCON", "KSEL", "KSLL", "KSLN", "KSUM", "KSYMM", "KTRAN", "KUSE", "KWPAVE", "KWPLAN", "L", "L2ANG", "L2TAN", "LANG", "LARC", "/LARC", "LAREA", "LARGE", "LATT", "LAYER", "LAYERP26", "LAYLIST", "LAYPLOT", "LCABS", "LCASE", "LCCALC", "LCCAT", "LCDEF", "LCFACT", "LCFILE", "LCLEAR", "LCOMB", "LCOPER", "LCSEL", "LCSL", "LCSUM", "LCWRITE", "LCZERO", "LDELE", "LDIV", "LDRAG", "LDREAD", "LESIZE", "LEXTND", "LFILLT", "LFSURF", "LGEN", "LGLUE", "LGWRITE", "/LIGHT", "LINA", "LINE", "/LINE", "LINES", "LINL", "LINP", "LINV", "LIST", "*LIST", "LLIST", "LMATRIX", "LMESH", "LNSRCH", "LOCAL", 
"LOVLAP", "LPLOT", "LPTN", "LREFINE", "LREVERSE", "LROTAT", "LSBA", "*LSBAC", "LSBL", "LSBV", "LSBW", "LSCLEAR", "LSDELE", "*LSDUMP", "LSEL", "*LSENGINE", "*LSFACTOR", "LSLA", "LSLK", "LSOPER", "/LSPEC", "LSREAD", "*LSRESTORE", "LSSCALE", "LSSOLVE", "LSTR", "LSUM", "LSWRITE", "/LSYMBOL", "LSYMM", "LTAN", "LTRAN", "LUMPM", "LVSCALE", "LWPLAN", "M", "MADAPT", "MAGOPT", "MAGSOLV", "/MAIL", "MAP", "/MAP", "MAP2DTO3D", "MAPSOLVE", "MAPVAR", "MASTER", "MAT", "MATER", "MCHECK", "MDAMP", "MDELE", "MDPLOT", "MEMM", "/MENU", "MESHING", "MFANALYSIS", "MFBUCKET", "MFCALC", "MFCI", "MFCLEAR", "MFCMMAND", "MFCONV", "MFDTIME", "MFELEM", "MFEM", "MFEXTER", "MFFNAME", "MFFR", "MFIMPORT", "MFINTER", "MFITER", "MFLCOMM", "MFLIST", "MFMAP", "MFORDER", "MFOUTPUT", "*MFOURI", "MFPSIMUL", "MFRC", "MFRELAX", "MFRSTART", "MFSORDER", "MFSURFACE", "MFTIME", "MFTOL", "*MFUN", "MFVOLUME", "MFWRITE", "MGEN", "MIDTOL", "/MKDIR", "MLIST", "MMASS", "MMF", "MODCONT", "MODE", "MODIFY", "MODMSH", "MODSELOPTION", "MODOPT", "MONITOR", "*MOPER", "MOPT", "MORPH", "MOVE", "MP", "MPAMOD", "MPCHG", "MPCOPY", "MPDATA", "MPDELE", "MPDRES", "/MPLIB", "MPLIST", "MPPLOT", "MPREAD", "MPRINT", "MPTEMP", "MPTGEN", "MPTRES", "MPWRITE", "/MREP", "MSAVE", "*MSG", "MSHAPE", "MSHCOPY", "MSHKEY", "MSHMID", "MSHPATTERN", "MSOLVE", "/MSTART", "MSTOLE", "*MULT", "*MWRITE", "MXPAND", "N", "NANG", "NAXIS", "NCNV", "NDELE", "NDIST", "NDSURF", "NEQIT", "/NERR", "NFORCE", "NGEN", "NKPT", "NLADAPTIVE", "NLDIAG", "NLDPOST", "NLGEOM", "NLHIST", "NLIST", "NLMESH", "NLOG", "NLOPT", "NMODIF", "NOCOLOR", "NODES", "/NOERASE", "/NOLIST", "NOOFFSET", "NOORDER", "/NOPR", "NORA", "NORL", "/NORMAL", "NPLOT", "NPRINT", "NREAD", "NREFINE", "NRLSUM", "*NRM", "NROPT", "NROTAT", "NRRANG", "NSCALE", "NSEL", "NSLA", "NSLE", "NSLK", "NSLL", "NSLV", "NSMOOTH", "NSOL", "NSORT", "NSTORE", "NSUBST", "NSVR", "NSYM", "/NUMBER", "NUMCMP", "NUMEXP", "NUMMRG", "NUMOFF", "NUMSTR", "NUMVAR", "NUSORT", "NWPAVE", "NWPLAN", "NWRITE", "OCDATA", "OCDELETE", "OCLIST", "OCREAD", "OCTABLE", "OCTYPE", "OCZONE", "OMEGA", "OPERATE", "OPNCONTROL", "OUTAERO", "OUTOPT", "OUTPR", "/OUTPUT", "OUTRES", "OVCHECK", "PADELE", "/PAGE", "PAGET", "PAPUT", "PARESU", "PARTSEL", "PARRES", "PARSAV", "PASAVE", "PATH", "PAUSE", "/PBC", "/PBF", "PCALC", "PCGOPT", "PCIRC", "/PCIRCLE", "/PCOPY", "PCROSS", "PDANL", "PDCDF", "PDCFLD", "PDCLR", "PDCMAT", "PDCORR", "PDDMCS", "PDDOEL", "PDEF", "PDEXE", "PDHIST", "PDINQR", "PDLHS", "PDMETH", "PDOT", "PDPINV", "PDPLOT", "PDPROB", "PDRESU", "PDROPT", "/PDS", "PDSAVE", "PDSCAT", "PDSENS", "PDSHIS", "PDUSER", "PDVAR", "PDWRITE", "PERBC2D", "PERTURB", "PFACT", "PHYSICS", "PIVCHECK", "PLCAMP", "PLCFREQ", "PLCHIST", "PLCINT", "PLCPLX", "PLCRACK", "PLDISP", "PLESOL", "PLETAB", "PLFAR", "PLF2D", "PLGEOM", "PLLS", "PLMAP", "PLMC", "PLNEAR", "PLNSOL", "/PLOPTS", "PLORB", "PLOT", "PLOTTING", "PLPAGM", "PLPATH", "PLSECT", "PLST", "PLTIME", "PLTRAC", "PLVAR", "PLVECT", "PLZZ", "/PMACRO", "PMAP", "PMGTRAN", "PMLOPT", "PMLSIZE", "/PMORE", "PNGR", "/PNUM", "POINT", "POLY", "/POLYGON", "/POST1", "/POST26", "POWERH", "PPATH", "PRANGE", "PRAS", "PRCAMP", "PRCINT", "PRCPLX", "PRED", "PRENERGY", "/PREP7", "PRERR", "PRESOL", "PRETAB", "PRFAR", "PRI2", "PRIM", "PRINT", "*PRINT", "PRISM", "PRITER", "PRJSOL", "PRNEAR", "PRNLD", "PRNSOL", "PROD", "PRORB", "PRPATH", "PRRFOR", "PRRSOL", "PRSCONTROL", "PRSECT", "PRTIME", "PRVAR", "PRVECT", "PSCONTROL", "PSCR", "PSDCOM", "PSDFRQ", "PSDGRAPH", "PSDRES", "PSDSPL", "PSDUNIT", "PSDVAL", "PSDWAV", "/PSEARCH", "PSEL", "/PSF", "PSMAT", "PSMESH", "/PSPEC", 
"/PSTATUS", "PSTRES", "/PSYMB", "PTR", "PTXY", "PVECT", "/PWEDGE", "QDVAL", "QRDOPT", "QSOPT", "QUAD", "/QUIT", "QUOT", "R", "RACE", "RADOPT", "RAPPND", "RATE", "/RATIO", "RBE3", "RCON", "RCYC", "RDEC", "RDELE", "READ", "REAL", "REALVAR", "RECTNG", "REMESH", "/RENAME", "REORDER", "*REPEAT", "/REPLOT", "RESCOMBINE", "RESCONTROL", "RESET", "/RESET", "RESP", "RESUME", "RESVEC", "RESWRITE", "*RETURN", "REXPORT", "REZONE", "RFORCE", "/RGB", "RIGID", "RIGRESP", "RIMPORT", "RLIST", "RMALIST", "RMANL", "RMASTER", "RMCAP", "RMCLIST", "/RMDIR", "RMFLVEC", "RMLVSCALE", "RMMLIST", "RMMRANGE", "RMMSELECT", "RMNDISP", "RMNEVEC", "RMODIF", "RMORE", "RMPORDER", "RMRESUME", "RMRGENERATE", "RMROPTIONS", "RMRPLOT", "RMRSTATUS", "RMSAVE", "RMSMPLE", "RMUSE", "RMXPORT", "ROCK", "ROSE", "RPOLY", "RPR4", "RPRISM", "RPSD", "RSFIT", "RSOPT", "RSPLIT", "RSPLOT", "RSPRNT", "RSSIMS", "RSTMAC", "RSTOFF", "RSURF", "RSYMM", "RSYS", "RTHICK", "SABS", "SADD", "SALLOW", "SAVE", "SBCLIST", "SBCTRAN", "SDELETE", "SE", "SECCONTROL", "SECDATA", "SECFUNCTION", "SECJOINT", "/SECLIB", "SECLOCK", "SECMODIF", "SECNUM", "SECOFFSET", "SECPLOT", "SECREAD", "SECSTOP", "SECTYPE", "SECWRITE", "SED", "SEDLIST", "SEEXP", "/SEG", "SEGEN", "SELIST", "SELM", "SELTOL", "SENERGY", "SEOPT", "SESYMM", "*SET", "SET", "SETFGAP", "SETRAN", "SEXP", "SF", "SFA", "SFACT", "SFADELE", "SFALIST", "SFBEAM", "SFCALC", "SFCUM", "SFDELE", "SFE", "SFEDELE", "SFELIST", "SFFUN", "SFGRAD", "SFL", "SFLDELE", "SFLEX", "SFLIST", "SFLLIST", "SFSCALE", "SFTRAN", "/SHADE", "SHELL", "/SHOW", "/SHOWDISP", "SHPP", "/SHRINK", "SLIST", "SLOAD", "SMALL", "*SMAT", "SMAX", "/SMBC", "SMBODY", "SMCONS", "SMFOR", "SMIN", "SMOOTH", "SMRTSIZE", "SMSURF", "SMULT", "SNOPTION", "SOLU", "/SOLU", "SOLUOPT", "SOLVE", "SORT", "SOURCE", "SPACE", "SPCNOD", "SPCTEMP", "SPDAMP", "SPEC", "SPFREQ", "SPGRAPH", "SPH4", "SPH5", "SPHERE", "SPLINE", "SPLOT", "SPMWRITE", "SPOINT", "SPOPT", "SPREAD", "SPTOPT", "SPOWER", "SPUNIT", "SPVAL", "SQRT", "*SREAD", "SRSS", "SSBT", "/SSCALE", "SSLN", "SSMT", "SSPA", "SSPB", "SSPD", "SSPE", "SSPM", "SSUM", "SSTATE", "STABILIZE", "STAOPT", "STAT", "*STATUS", "/STATUS", "STEF", "/STITLE", "STORE", "SUBOPT", "SUBSET", "SUCALC", "SUCR", "SUDEL", "SUEVAL", "SUGET", "SUMAP", "SUMTYPE", "SUPL", "SUPR", "SURESU", "SUSAVE", "SUSEL", "SUVECT", "SV", "SVPLOT", "SVTYP", "SWADD", "SWDEL", "SWGEN", "SWLIST", "SYNCHRO", "/SYP", "/SYS", "TALLOW", "TARGET", "*TAXIS", "TB", "TBCOPY", "TBDATA", "TBDELE", "TBEO", "TBIN", "TBFIELD", "TBFT", "TBLE", "TBLIST", "TBMODIF", "TBPLOT", "TBPT", "TBTEMP", "TCHG", "/TEE", "TERM", "THEXPAND", "THOPT", "TIFF", "TIME", "TIMERANGE", "TIMINT", "TIMP", "TINTP", "/TITLE", "/TLABEL", "TOFFST", "*TOPER", "TORQ2D", "TORQC2D", "TORQSUM", "TORUS", "TRANS", "TRANSFER", "*TREAD", "TREF", "/TRIAD", "/TRLCY", "TRNOPT", "TRPDEL", "TRPLIS", "TRPOIN", "TRTIME", "TSHAP", "/TSPEC", "TSRES", "TUNIF", "TVAR", "/TXTRE", "/TYPE", "TYPE", "/UCMD", "/UDOC", "/UI", "UIMP", "/UIS", "*ULIB", "UNDELETE", "UNDO", "/UNITS", "UNPAUSE", "UPCOORD", "UPGEOM", "*USE", "/USER", "USRCAL", "USRDOF", "USRELEM", "V", "V2DOPT", "VA", "*VABS", "VADD", "VARDEL", "VARNAM", "VATT", "VCLEAR", "*VCOL", "/VCONE", "VCROSS", "*VCUM", "VDDAM", "VDELE", "VDGL", "VDOT", "VDRAG", "*VEC", "*VEDIT", "VEORIENT", "VEXT", "*VFACT", "*VFILL", "VFOPT", "VFQUERY", "VFSM", "*VFUN", "VGEN", "*VGET", "VGET", "VGLUE", "/VIEW", "VIMP", "VINP", "VINV", "*VITRP", "*VLEN", "VLIST", "VLSCALE", "*VMASK", "VMESH", "VOFFST", "VOLUMES") # list of in-built () functions elafunf = ("NX()", "NY()", "NZ()", "KX()", "KY()", 
"KZ()", "LX()", "LY()", "LZ()", "LSX()", "LSY()", "LSZ()", "NODE()", "KP()", "DISTND()", "DISTKP()", "DISTEN()", "ANGLEN()", "ANGLEK()", "NNEAR()", "KNEAR()", "ENEARN()", "AREAND()", "AREAKP()", "ARNODE()", "NORMNX()", "NORMNY()", "NORMNZ()", "NORMKX()", "NORMKY()", "NORMKZ()", "ENEXTN()", "NELEM()", "NODEDOF()", "ELADJ()", "NDFACE()", "NMFACE()", "ARFACE()", "UX()", "UY()", "UZ()", "ROTX()", "ROTY()", "ROTZ()", "TEMP()", "PRES()", "VX()", "VY()", "VZ()", "ENKE()", "ENDS()", "VOLT()", "MAG()", "AX()", "AY()", "AZ()", "VIRTINQR()", "KWGET()", "VALCHR()", "VALHEX()", "CHRHEX()", "STRFILL()", "STRCOMP()", "STRPOS()", "STRLENG()", "UPCASE()", "LWCASE()", "JOIN()", "SPLIT()", "ABS()", "SIGN()", "CXABS()", "EXP()", "LOG()", "LOG10()", "SQRT()", "NINT()", "MOD()", "RAND()", "GDIS()", "SIN()", "COS()", "TAN()", "SINH()", "COSH()", "TANH()", "ASIN()", "ACOS()", "ATAN()", "ATAN2()") elafung = ("NSEL()", "ESEL()", "KSEL()", "LSEL()", "ASEL()", "VSEL()", "NDNEXT()", "ELNEXT()", "KPNEXT()", "LSNEXT()", "ARNEXT()", "VLNEXT()", "CENTRX()", "CENTRY()", "CENTRZ()") elafunh = ("~CAT5IN", "~CATIAIN", "~PARAIN", "~PROEIN", "~SATIN", "~UGIN", "A", "AADD", "AATT", "ABEXTRACT", "*ABBR", "ABBRES", "ABBSAV", "ABS", "ACCAT", "ACCOPTION", "ACEL", "ACLEAR", "ADAMS", "ADAPT", "ADD", "ADDAM", "ADELE", "ADGL", "ADRAG", "AESIZE", "AFILLT", "AFLIST", "AFSURF", "*AFUN", "AGEN", "AGLUE", "AINA", "AINP", "AINV", "AL", "ALIST", "ALLSEL", "ALPHAD", "AMAP", "AMESH", "/AN3D", "ANCNTR", "ANCUT", "ANCYC", "ANDATA", "ANDSCL", "ANDYNA", "/ANFILE", "ANFLOW", "/ANGLE", "ANHARM", "ANIM", "ANISOS", "ANMODE", "ANMRES", "/ANNOT", "ANORM", "ANPRES", "ANSOL", "ANSTOAQWA", "ANSTOASAS", "ANTIME", "ANTYPE") tokens = { 'root': [ (r'!.*\n', Comment), include('strings'), include('core'), include('nums'), (words((elafunb+elafunc+elafund+elafune+elafunh), suffix=r'\b'), Keyword), (words((elafunf+elafung), suffix=r'\b'), Name.Builtin), (r'AR[0-9]+', Name.Variable.Instance), (r'[a-z][a-z0-9_]*', Name.Variable), (r'[\s]+', Whitespace), ], 'core': [ # Operators (r'(\*\*|\*|\+|-|\/|<|>|<=|>=|==|\/=|=)', Operator), (r'/EOF', Generic.Emph), (r'[(),:&;]', Punctuation), ], 'strings': [ (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double), (r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), (r'[$%]', String.Symbol), ], 'nums': [ (r'\d+(?![.ef])', Number.Integer), (r'[+-]?\d*\.?\d+([ef][-+]?\d+)?', Number.Float), (r'[+-]?\d+\.?\d*([ef][-+]?\d+)?', Number.Float), ] } pygments-2.11.2/pygments/lexers/webmisc.py0000644000175000017500000011610614165547207020516 0ustar carstencarsten""" pygments.lexers.webmisc ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for misc. web stuff. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, ExtendedRegexLexer, include, bygroups, \ default, using from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Literal, Whitespace from pygments.lexers.css import _indentation, _starts_block from pygments.lexers.html import HtmlLexer from pygments.lexers.javascript import JavascriptLexer from pygments.lexers.ruby import RubyLexer __all__ = ['DuelLexer', 'SlimLexer', 'XQueryLexer', 'QmlLexer', 'CirruLexer'] class DuelLexer(RegexLexer): """ Lexer for Duel Views Engine (formerly JBST) markup with JavaScript code blocks. See http://duelengine.org/. See http://jsonml.org/jbst/. .. 
versionadded:: 1.4 """ name = 'Duel' aliases = ['duel', 'jbst', 'jsonml+bst'] filenames = ['*.duel', '*.jbst'] mimetypes = ['text/x-duel', 'text/x-jbst'] flags = re.DOTALL tokens = { 'root': [ (r'(<%[@=#!:]?)(.*?)(%>)', bygroups(Name.Tag, using(JavascriptLexer), Name.Tag)), (r'(<%\$)(.*?)(:)(.*?)(%>)', bygroups(Name.Tag, Name.Function, Punctuation, String, Name.Tag)), (r'(<%--)(.*?)(--%>)', bygroups(Name.Tag, Comment.Multiline, Name.Tag)), (r'()(.*?)()', bygroups(using(HtmlLexer), using(JavascriptLexer), using(HtmlLexer))), (r'(.+?)(?=<)', using(HtmlLexer)), (r'.+', using(HtmlLexer)), ], } class XQueryLexer(ExtendedRegexLexer): """ An XQuery lexer, parsing a stream and outputting the tokens needed to highlight xquery code. .. versionadded:: 1.4 """ name = 'XQuery' aliases = ['xquery', 'xqy', 'xq', 'xql', 'xqm'] filenames = ['*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'] mimetypes = ['text/xquery', 'application/xquery'] xquery_parse_state = [] # FIX UNICODE LATER # ncnamestartchar = ( # r"[A-Z]|_|[a-z]|[\u00C0-\u00D6]|[\u00D8-\u00F6]|[\u00F8-\u02FF]|" # r"[\u0370-\u037D]|[\u037F-\u1FFF]|[\u200C-\u200D]|[\u2070-\u218F]|" # r"[\u2C00-\u2FEF]|[\u3001-\uD7FF]|[\uF900-\uFDCF]|[\uFDF0-\uFFFD]|" # r"[\u10000-\uEFFFF]" # ) ncnamestartchar = r"(?:[A-Z]|_|[a-z])" # FIX UNICODE LATER # ncnamechar = ncnamestartchar + (r"|-|\.|[0-9]|\u00B7|[\u0300-\u036F]|" # r"[\u203F-\u2040]") ncnamechar = r"(?:" + ncnamestartchar + r"|-|\.|[0-9])" ncname = "(?:%s+%s*)" % (ncnamestartchar, ncnamechar) pitarget_namestartchar = r"(?:[A-KN-WYZ]|_|:|[a-kn-wyz])" pitarget_namechar = r"(?:" + pitarget_namestartchar + r"|-|\.|[0-9])" pitarget = "%s+%s*" % (pitarget_namestartchar, pitarget_namechar) prefixedname = "%s:%s" % (ncname, ncname) unprefixedname = ncname qname = "(?:%s|%s)" % (prefixedname, unprefixedname) entityref = r'(?:&(?:lt|gt|amp|quot|apos|nbsp);)' charref = r'(?:&#[0-9]+;|&#x[0-9a-fA-F]+;)' stringdouble = r'(?:"(?:' + entityref + r'|' + charref + r'|""|[^&"])*")' stringsingle = r"(?:'(?:" + entityref + r"|" + charref + r"|''|[^&'])*')" # FIX UNICODE LATER # elementcontentchar = (r'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|' # r'[\u003d-\u007a]|\u007c|[\u007e-\u007F]') elementcontentchar = r'[A-Za-z]|\s|\d|[!"#$%()*+,\-./:;=?@\[\\\]^_\'`|~]' # quotattrcontentchar = (r'\t|\r|\n|[\u0020-\u0021]|[\u0023-\u0025]|' # r'[\u0027-\u003b]|[\u003d-\u007a]|\u007c|[\u007e-\u007F]') quotattrcontentchar = r'[A-Za-z]|\s|\d|[!#$%()*+,\-./:;=?@\[\\\]^_\'`|~]' # aposattrcontentchar = (r'\t|\r|\n|[\u0020-\u0025]|[\u0028-\u003b]|' # r'[\u003d-\u007a]|\u007c|[\u007e-\u007F]') aposattrcontentchar = r'[A-Za-z]|\s|\d|[!"#$%()*+,\-./:;=?@\[\\\]^_`|~]' # CHAR elements - fix the above elementcontentchar, quotattrcontentchar, # aposattrcontentchar # x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] flags = re.DOTALL | re.MULTILINE | re.UNICODE def punctuation_root_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) # transition to root always - don't pop off stack ctx.stack = ['root'] ctx.pos = match.end() def operator_root_callback(lexer, match, ctx): yield match.start(), Operator, match.group(1) # transition to root always - don't pop off stack ctx.stack = ['root'] ctx.pos = match.end() def popstate_tag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) if lexer.xquery_parse_state: ctx.stack.append(lexer.xquery_parse_state.pop()) ctx.pos = match.end() def popstate_xmlcomment_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) 
ctx.stack.append(lexer.xquery_parse_state.pop()) ctx.pos = match.end() def popstate_kindtest_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) next_state = lexer.xquery_parse_state.pop() if next_state == 'occurrenceindicator': if re.match("[?*+]+", match.group(2)): yield match.start(), Punctuation, match.group(2) ctx.stack.append('operator') ctx.pos = match.end() else: ctx.stack.append('operator') ctx.pos = match.end(1) else: ctx.stack.append(next_state) ctx.pos = match.end(1) def popstate_callback(lexer, match, ctx): yield match.start(), Punctuation, match.group(1) # if we have run out of our state stack, pop whatever is on the pygments # state stack if len(lexer.xquery_parse_state) == 0: ctx.stack.pop() if not ctx.stack: # make sure we have at least the root state on invalid inputs ctx.stack = ['root'] elif len(ctx.stack) > 1: ctx.stack.append(lexer.xquery_parse_state.pop()) else: # i don't know if i'll need this, but in case, default back to root ctx.stack = ['root'] ctx.pos = match.end() def pushstate_element_content_starttag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) lexer.xquery_parse_state.append('element_content') ctx.stack.append('start_tag') ctx.pos = match.end() def pushstate_cdata_section_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('cdata_section') lexer.xquery_parse_state.append(ctx.state.pop) ctx.pos = match.end() def pushstate_starttag_callback(lexer, match, ctx): yield match.start(), Name.Tag, match.group(1) lexer.xquery_parse_state.append(ctx.state.pop) ctx.stack.append('start_tag') ctx.pos = match.end() def pushstate_operator_order_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_map_callback(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_root_validate(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Punctuation, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_root_validate_withmode(lexer, match, ctx): yield match.start(), Keyword, match.group(1) yield match.start(), Text, match.group(2) yield match.start(), Keyword, match.group(3) ctx.stack = ['root'] lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_operator_processing_instruction_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('processing_instruction') lexer.xquery_parse_state.append('operator') ctx.pos = match.end() def pushstate_element_content_processing_instruction_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('processing_instruction') lexer.xquery_parse_state.append('element_content') ctx.pos = match.end() def pushstate_element_content_cdata_section_callback(lexer, match, ctx): yield match.start(), String.Doc, match.group(1) ctx.stack.append('cdata_section') lexer.xquery_parse_state.append('element_content') ctx.pos = match.end() def pushstate_operator_cdata_section_callback(lexer, match, ctx): 
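        # (Clarifying comment, not in the original source: like the other
        # pushstate_* callbacks, this one tokenizes the opening delimiter,
        # pushes a pygments state for the construct being entered, and records
        # on lexer.xquery_parse_state which state to resume once the matching
        # popstate_* callback fires.)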
        yield match.start(), String.Doc, match.group(1)
        ctx.stack.append('cdata_section')
        lexer.xquery_parse_state.append('operator')
        ctx.pos = match.end()

    def pushstate_element_content_xmlcomment_callback(lexer, match, ctx):
        yield match.start(), String.Doc, match.group(1)
        ctx.stack.append('xml_comment')
        lexer.xquery_parse_state.append('element_content')
        ctx.pos = match.end()

    def pushstate_operator_xmlcomment_callback(lexer, match, ctx):
        yield match.start(), String.Doc, match.group(1)
        ctx.stack.append('xml_comment')
        lexer.xquery_parse_state.append('operator')
        ctx.pos = match.end()

    def pushstate_kindtest_callback(lexer, match, ctx):
        yield match.start(), Keyword, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('kindtest')
        ctx.stack.append('kindtest')
        ctx.pos = match.end()

    def pushstate_operator_kindtestforpi_callback(lexer, match, ctx):
        yield match.start(), Keyword, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('operator')
        ctx.stack.append('kindtestforpi')
        ctx.pos = match.end()

    def pushstate_operator_kindtest_callback(lexer, match, ctx):
        yield match.start(), Keyword, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('operator')
        ctx.stack.append('kindtest')
        ctx.pos = match.end()

    def pushstate_occurrenceindicator_kindtest_callback(lexer, match, ctx):
        yield match.start(), Name.Tag, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('occurrenceindicator')
        ctx.stack.append('kindtest')
        ctx.pos = match.end()

    def pushstate_operator_starttag_callback(lexer, match, ctx):
        yield match.start(), Name.Tag, match.group(1)
        lexer.xquery_parse_state.append('operator')
        ctx.stack.append('start_tag')
        ctx.pos = match.end()

    def pushstate_operator_root_callback(lexer, match, ctx):
        yield match.start(), Punctuation, match.group(1)
        lexer.xquery_parse_state.append('operator')
        ctx.stack = ['root']
        ctx.pos = match.end()

    def pushstate_operator_root_construct_callback(lexer, match, ctx):
        yield match.start(), Keyword, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('operator')
        ctx.stack = ['root']
        ctx.pos = match.end()

    def pushstate_root_callback(lexer, match, ctx):
        yield match.start(), Punctuation, match.group(1)
        cur_state = ctx.stack.pop()
        lexer.xquery_parse_state.append(cur_state)
        ctx.stack = ['root']
        ctx.pos = match.end()

    def pushstate_operator_attribute_callback(lexer, match, ctx):
        yield match.start(), Name.Attribute, match.group(1)
        ctx.stack.append('operator')
        ctx.pos = match.end()

    def pushstate_operator_callback(lexer, match, ctx):
        yield match.start(), Keyword, match.group(1)
        yield match.start(), Text, match.group(2)
        yield match.start(), Punctuation, match.group(3)
        lexer.xquery_parse_state.append('operator')
        ctx.pos = match.end()

    tokens = {
        'comment': [
            # xquery comments
            (r'[^:()]+', Comment),
            (r'\(:', Comment, '#push'),
            (r':\)', Comment, '#pop'),
            (r'[:()]', Comment),
        ],
        'whitespace': [
            (r'\s+', Whitespace),
        ],
        'operator': [
            include('whitespace'),
            (r'(\})', popstate_callback),
            (r'\(:', Comment, 'comment'),
            (r'(\{)', pushstate_root_callback),
            (r'then|else|external|at|div|except', Keyword, 'root'),
            (r'order by', Keyword, 'root'),
            (r'group by', Keyword, 'root'),
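            # The '(\})' / '(\{)' rules above use callbacks rather than plain
            # state transitions: XML constructors and enclosed expressions can
            # nest arbitrarily (e.g. <a>{ if ($x) then <b/> else () }</a>), so
            # the lexer mirrors Pygments' context stack with its own
            # xquery_parse_state list and restores the enclosing state when a
            # closing '}' is seen.  A minimal usage sketch (illustrative only):
            #
            #   >>> from pygments.lexers.webmisc import XQueryLexer
            #   >>> for tok, val in XQueryLexer().get_tokens('let $n := 1 return $n'):
            #   ...     print(tok, repr(val))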
            (r'is|mod|order\s+by|stable\s+order\s+by', Keyword, 'root'),
            (r'and|or', Operator.Word, 'root'),
            (r'(eq|ge|gt|le|lt|ne|idiv|intersect|in)(?=\b)', Operator.Word, 'root'),
            (r'return|satisfies|to|union|where|count|preserve\s+strip', Keyword, 'root'),
            (r'(>=|>>|>|<=|<<|<|-|\*|!=|\+|\|\||\||:=|=|!)', operator_root_callback),
            (r'(::|:|;|\[|//|/|,)', punctuation_root_callback),
            (r'(castable|cast)(\s+)(as)\b', bygroups(Keyword, Text, Keyword), 'singletype'),
            (r'(instance)(\s+)(of)\b', bygroups(Keyword, Text, Keyword), 'itemtype'),
            (r'(treat)(\s+)(as)\b', bygroups(Keyword, Text, Keyword), 'itemtype'),
            (r'(case)(\s+)(' + stringdouble + ')',
             bygroups(Keyword, Text, String.Double), 'itemtype'),
            (r'(case)(\s+)(' + stringsingle + ')',
             bygroups(Keyword, Text, String.Single), 'itemtype'),
            (r'(case|as)\b', Keyword, 'itemtype'),
            (r'(\))(\s*)(as)', bygroups(Punctuation, Text, Keyword), 'itemtype'),
            (r'\$', Name.Variable, 'varname'),
            (r'(for|let|previous|next)(\s+)(\$)',
             bygroups(Keyword, Text, Name.Variable), 'varname'),
            (r'(for)(\s+)(tumbling|sliding)(\s+)(window)(\s+)(\$)',
             bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable),
             'varname'),
            # (r'\)|\?|\]', Punctuation, '#push'),
            (r'\)|\?|\]', Punctuation),
            (r'(empty)(\s+)(greatest|least)', bygroups(Keyword, Text, Keyword)),
            (r'ascending|descending|default', Keyword, '#push'),
            (r'(allowing)(\s+)(empty)', bygroups(Keyword, Text, Keyword)),
            (r'external', Keyword),
            (r'(start|when|end)', Keyword, 'root'),
            (r'(only)(\s+)(end)', bygroups(Keyword, Text, Keyword), 'root'),
            (r'collation', Keyword, 'uritooperator'),

            # eXist specific XQUF
            (r'(into|following|preceding|with)', Keyword, 'root'),

            # support for current context on rhs of Simple Map Operator
            (r'\.', Operator),

            # finally catch all string literals and stay in operator state
            (stringdouble, String.Double),
            (stringsingle, String.Single),

            (r'(catch)(\s*)', bygroups(Keyword, Text), 'root'),
        ],
        'uritooperator': [
            (stringdouble, String.Double, '#pop'),
            (stringsingle, String.Single, '#pop'),
        ],
        'namespacedecl': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (r'(at)(\s+)(' + stringdouble + ')', bygroups(Keyword, Text, String.Double)),
            (r"(at)(\s+)(" + stringsingle + ')', bygroups(Keyword, Text, String.Single)),
            (stringdouble, String.Double),
            (stringsingle, String.Single),
            (r',', Punctuation),
            (r'=', Operator),
            (r';', Punctuation, 'root'),
            (ncname, Name.Namespace),
        ],
        'namespacekeyword': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (stringdouble, String.Double, 'namespacedecl'),
            (stringsingle, String.Single, 'namespacedecl'),
            (r'inherit|no-inherit', Keyword, 'root'),
            (r'namespace', Keyword, 'namespacedecl'),
            (r'(default)(\s+)(element)', bygroups(Keyword, Text, Keyword)),
            (r'preserve|no-preserve', Keyword),
            (r',', Punctuation),
        ],
        'annotationname': [
            (r'\(:', Comment, 'comment'),
            (qname, Name.Decorator),
            (r'(\()(' + stringdouble + ')', bygroups(Punctuation, String.Double)),
            (r'(\()(' + stringsingle + ')', bygroups(Punctuation, String.Single)),
            (r'(\,)(\s+)(' + stringdouble + ')',
             bygroups(Punctuation, Text, String.Double)),
            (r'(\,)(\s+)(' + stringsingle + ')',
             bygroups(Punctuation, Text, String.Single)),
            (r'\)', Punctuation),
            (r'(\s+)(\%)', bygroups(Text, Name.Decorator), 'annotationname'),
            (r'(\s+)(variable)(\s+)(\$)',
             bygroups(Text, Keyword.Declaration, Text, Name.Variable), 'varname'),
            (r'(\s+)(function)(\s+)',
             bygroups(Text, Keyword.Declaration, Text), 'root'),
        ],
        'varname': [
            (r'\(:', Comment, 'comment'),
            (r'(' + qname + r')(\()?', bygroups(Name, Punctuation), 'operator'),
        ],
        'singletype': [
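            # Entered after "cast as" / "castable as" in the operator rules
            # above: matches the single target type name (e.g. the
            # "xs:integer" in 'expr cast as xs:integer') and then continues
            # lexing in the operator state.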
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (ncname + r'(:\*)', Name.Variable, 'operator'),
            (qname, Name.Variable, 'operator'),
        ],
        'itemtype': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (r'\$', Name.Variable, 'varname'),
            (r'(void)(\s*)(\()(\s*)(\))',
             bygroups(Keyword, Text, Punctuation, Text, Punctuation), 'operator'),
            (r'(element|attribute|schema-element|schema-attribute|comment|text|'
             r'node|binary|document-node|empty-sequence)(\s*)(\()',
             pushstate_occurrenceindicator_kindtest_callback),
            # Marklogic specific type?
            (r'(processing-instruction)(\s*)(\()',
             bygroups(Keyword, Text, Punctuation),
             ('occurrenceindicator', 'kindtestforpi')),
            (r'(item)(\s*)(\()(\s*)(\))(?=[*+?])',
             bygroups(Keyword, Text, Punctuation, Text, Punctuation),
             'occurrenceindicator'),
            (r'(\(\#)(\s*)', bygroups(Punctuation, Text), 'pragma'),
            (r';', Punctuation, '#pop'),
            (r'then|else', Keyword, '#pop'),
            (r'(at)(\s+)(' + stringdouble + ')',
             bygroups(Keyword, Text, String.Double), 'namespacedecl'),
            (r'(at)(\s+)(' + stringsingle + ')',
             bygroups(Keyword, Text, String.Single), 'namespacedecl'),
            (r'except|intersect|in|is|return|satisfies|to|union|where|count',
             Keyword, 'root'),
            (r'and|div|eq|ge|gt|le|lt|ne|idiv|mod|or', Operator.Word, 'root'),
            (r':=|=|,|>=|>>|>|\[|\(|<=|<<|<|-|!=|\|\||\|', Operator, 'root'),
            (r'external|at', Keyword, 'root'),
            (r'(stable)(\s+)(order)(\s+)(by)',
             bygroups(Keyword, Text, Keyword, Text, Keyword), 'root'),
            (r'(castable|cast)(\s+)(as)',
             bygroups(Keyword, Text, Keyword), 'singletype'),
            (r'(treat)(\s+)(as)', bygroups(Keyword, Text, Keyword)),
            (r'(instance)(\s+)(of)', bygroups(Keyword, Text, Keyword)),
            (r'(case)(\s+)(' + stringdouble + ')',
             bygroups(Keyword, Text, String.Double), 'itemtype'),
            (r'(case)(\s+)(' + stringsingle + ')',
             bygroups(Keyword, Text, String.Single), 'itemtype'),
            (r'case|as', Keyword, 'itemtype'),
            (r'(\))(\s*)(as)', bygroups(Operator, Text, Keyword), 'itemtype'),
            (ncname + r':\*', Keyword.Type, 'operator'),
            (r'(function|map|array)(\()', bygroups(Keyword.Type, Punctuation)),
            (qname, Keyword.Type, 'occurrenceindicator'),
        ],
        'kindtest': [
            (r'\(:', Comment, 'comment'),
            (r'\{', Punctuation, 'root'),
            (r'(\))([*+?]?)', popstate_kindtest_callback),
            (r'\*', Name, 'closekindtest'),
            (qname, Name, 'closekindtest'),
            (r'(element|schema-element)(\s*)(\()', pushstate_kindtest_callback),
        ],
        'kindtestforpi': [
            (r'\(:', Comment, 'comment'),
            (r'\)', Punctuation, '#pop'),
            (ncname, Name.Variable),
            (stringdouble, String.Double),
            (stringsingle, String.Single),
        ],
        'closekindtest': [
            (r'\(:', Comment, 'comment'),
            (r'(\))', popstate_callback),
            (r',', Punctuation),
            (r'(\{)', pushstate_operator_root_callback),
            (r'\?', Punctuation),
        ],
        'xml_comment': [
            (r'(-->)', popstate_xmlcomment_callback),
            (r'[^-]{1,2}', Literal),
            (r'\t|\r|\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|[\U00010000-\U0010FFFF]',
             Literal),
        ],
        'processing_instruction': [
            (r'\s+', Text, 'processing_instruction_content'),
            (r'\?>', String.Doc, '#pop'),
            (pitarget, Name),
        ],
        'processing_instruction_content': [
            (r'\?>', String.Doc, '#pop'),
            (r'\t|\r|\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|[\U00010000-\U0010FFFF]',
             Literal),
        ],
        'cdata_section': [
            (r']]>', String.Doc, '#pop'),
            (r'\t|\r|\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|[\U00010000-\U0010FFFF]',
             Literal),
        ],
        'start_tag': [
            include('whitespace'),
            (r'(/>)', popstate_tag_callback),
            (r'>', Name.Tag, 'element_content'),
            (r'"', Punctuation, 'quot_attribute_content'),
            (r"'", Punctuation, 'apos_attribute_content'),
            (r'=', Operator),
            (qname, Name.Tag),
        ],
        'quot_attribute_content': [
            (r'"', Punctuation, 'start_tag'),
            (r'(\{)', pushstate_root_callback),
            (r'""', Name.Attribute),
            (quotattrcontentchar, Name.Attribute),
            (entityref, Name.Attribute),
            (charref, Name.Attribute),
            (r'\{\{|\}\}', Name.Attribute),
        ],
        'apos_attribute_content': [
            (r"'", Punctuation, 'start_tag'),
            (r'\{', Punctuation, 'root'),
            (r"''", Name.Attribute),
            (aposattrcontentchar, Name.Attribute),
            (entityref, Name.Attribute),
            (charref, Name.Attribute),
            (r'\{\{|\}\}', Name.Attribute),
        ],
        'element_content': [
            (r'</', Name.Tag, 'end_tag'),
            (r'(\{)', pushstate_root_callback),
            (r'(<!--)', pushstate_element_content_xmlcomment_callback),
            (r'(<\?)', pushstate_element_content_processing_instruction_callback),
            (r'(<!\[CDATA\[)', pushstate_element_content_cdata_section_callback),
            (r'(<)', pushstate_element_content_starttag_callback),
            (elementcontentchar, Literal),
            (entityref, Literal),
            (charref, Literal),
            (r'\{\{|\}\}', Literal),
        ],
        'end_tag': [
            include('whitespace'),
            (r'(>)', popstate_tag_callback),
            (qname, Name.Tag),
        ],
        'xmlspace_decl': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (r'preserve|strip', Keyword, '#pop'),
        ],
        'declareordering': [
            (r'\(:', Comment, 'comment'),
            include('whitespace'),
            (r'ordered|unordered', Keyword, '#pop'),
        ],
        'xqueryversion': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (stringdouble, String.Double),
            (stringsingle, String.Single),
            (r'encoding', Keyword),
            (r';', Punctuation, '#pop'),
        ],
        'pragma': [
            (qname, Name.Variable, 'pragmacontents'),
        ],
        'pragmacontents': [
            (r'#\)', Punctuation, 'operator'),
            (r'\t|\r|\n|[\u0020-\uD7FF]|[\uE000-\uFFFD]|[\U00010000-\U0010FFFF]',
             Literal),
            (r'(\s+)', Text),
        ],
        'occurrenceindicator': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),
            (r'\*|\?|\+', Operator, 'operator'),
            (r':=', Operator, 'root'),
            default('operator'),
        ],
        'option': [
            include('whitespace'),
            (qname, Name.Variable, '#pop'),
        ],
        'qname_braren': [
            include('whitespace'),
            (r'(\{)', pushstate_operator_root_callback),
            (r'(\()', Punctuation, 'root'),
        ],
        'element_qname': [
            (qname, Name.Variable, 'root'),
        ],
        'attribute_qname': [
            (qname, Name.Variable, 'root'),
        ],
        'root': [
            include('whitespace'),
            (r'\(:', Comment, 'comment'),

            # handle operator state
            # order on numbers matters - handle most complex first
            (r'\d+(\.\d*)?[eE][+-]?\d+', Number.Float, 'operator'),
            (r'(\.\d+)[eE][+-]?\d+', Number.Float, 'operator'),
            (r'(\.\d+|\d+\.\d*)', Number.Float, 'operator'),
            (r'(\d+)', Number.Integer, 'operator'),
            (r'(\.\.|\.|\))', Punctuation, 'operator'),
            (r'(declare)(\s+)(construction)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration), 'operator'),
            (r'(declare)(\s+)(default)(\s+)(order)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text,
                      Keyword.Declaration), 'operator'),
            (r'(declare)(\s+)(context)(\s+)(item)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text,
                      Keyword.Declaration), 'operator'),
            (ncname + r':\*', Name, 'operator'),
            (r'\*:' + ncname, Name.Tag, 'operator'),
            (r'\*', Name.Tag, 'operator'),
            (stringdouble, String.Double, 'operator'),
            (stringsingle, String.Single, 'operator'),
            (r'(\}|\])', popstate_callback),

            # NAMESPACE DECL
            (r'(declare)(\s+)(default)(\s+)(collation)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text,
                      Keyword.Declaration)),
            (r'(module|declare)(\s+)(namespace)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration),
             'namespacedecl'),
            (r'(declare)(\s+)(base-uri)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration),
             'namespacedecl'),

            # NAMESPACE KEYWORD
            (r'(declare)(\s+)(default)(\s+)(element|function)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text,
                      Keyword.Declaration), 'namespacekeyword'),
            (r'(import)(\s+)(schema|module)',
             bygroups(Keyword.Pseudo, Text, Keyword.Pseudo), 'namespacekeyword'),
            (r'(declare)(\s+)(copy-namespaces)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration),
             'namespacekeyword'),

            # VARNAMEs
            (r'(for|let|some|every)(\s+)(\$)',
             bygroups(Keyword, Text, Name.Variable), 'varname'),
            (r'(for)(\s+)(tumbling|sliding)(\s+)(window)(\s+)(\$)',
             bygroups(Keyword, Text, Keyword, Text, Keyword, Text, Name.Variable),
             'varname'),
            (r'\$', Name.Variable, 'varname'),
            (r'(declare)(\s+)(variable)(\s+)(\$)',
             bygroups(Keyword.Declaration, Text, Keyword.Declaration, Text,
                      Name.Variable), 'varname'),

            # ANNOTATED GLOBAL VARIABLES AND FUNCTIONS
            (r'(declare)(\s+)(\%)',
             bygroups(Keyword.Declaration, Text, Name.Decorator), 'annotationname'),

            # ITEMTYPE
            (r'(\))(\s+)(as)', bygroups(Operator, Text, Keyword), 'itemtype'),
            (r'(element|attribute|schema-element|schema-attribute|comment|'
             r'text|node|document-node|empty-sequence)(\s+)(\()',
             pushstate_operator_kindtest_callback),
            (r'(processing-instruction)(\s+)(\()',
             pushstate_operator_kindtestforpi_callback),
            (r'(<!--)', pushstate_operator_xmlcomment_callback),

# [...] a span of the archive is missing here (angle-bracketed text was
# stripped from the dump): the remaining XQueryLexer 'root' rules and class
# body, the rest of webmisc.py, and the beginning of
# pygments/lexers/prolog.py, down to the middle of the PrologLexer 'root'
# state, where the text resumes:

            (r':-|-->', Punctuation),
            (r'"(?:\\x[0-9a-fA-F]+\\|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|'
             r'\\[0-7]+\\|\\["\\abcefnrstv]|[^\\"])*"', String.Double),
            (r"'(?:''|[^'])*'", String.Atom),  # quoted atom
            # Needs to not be followed by an atom.
            # (r'=(?=\s|[a-zA-Z\[])', Operator),
            (r'is\b', Operator),
            (r'(<|>|=<|>=|==|=:=|=|/|//|\*|\+|-)(?=\s|[a-zA-Z0-9\[])',
             Operator),
            (r'(mod|div|not)\b', Operator),
            (r'_', Keyword),  # The don't-care variable
            (r'([a-z]+)(:)', bygroups(Name.Namespace, Punctuation)),
            (r'([a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]'
             r'[\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*)'
             r'(\s*)(:-|-->)',
             bygroups(Name.Function, Text, Operator)),  # function defn
            (r'([a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]'
             r'[\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*)'
             r'(\s*)(\()',
             bygroups(Name.Function, Text, Punctuation)),
            (r'[a-z\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]'
             r'[\w$\u00c0-\u1fff\u3040-\ud7ff\ue000-\uffef]*',
             String.Atom),  # atom, characters
            # This one includes !
            (r'[#&*+\-./:<=>?@\\^~\u00a1-\u00bf\u2010-\u303f]+',
             String.Atom),  # atom, graphics
            (r'[A-Z_]\w*', Name.Variable),
            (r'\s+|[\u2000-\u200f\ufff0-\ufffe\uffef]', Text),
        ],
        'nested-comment': [
            (r'\*/', Comment.Multiline, '#pop'),
            (r'/\*', Comment.Multiline, '#push'),
            (r'[^*/]+', Comment.Multiline),
            (r'[*/]', Comment.Multiline),
        ],
    }

    def analyse_text(text):
        return ':-' in text


class LogtalkLexer(RegexLexer):
    """
    For `Logtalk <http://logtalk.org/>`_ source code.

    .. versionadded:: 0.10
    """

    name = 'Logtalk'
    aliases = ['logtalk']
    filenames = ['*.lgt', '*.logtalk']
    mimetypes = ['text/x-logtalk']

    tokens = {
        'root': [
            # Directives
            (r'^\s*:-\s', Punctuation, 'directive'),
            # Comments
            (r'%.*?\n', Comment),
            (r'/\*(.|\n)*?\*/', Comment),
            # Whitespace
            (r'\n', Text),
            (r'\s+', Text),
            # Numbers
            (r"0'[\\]?.", Number),
            (r'0b[01]+', Number.Bin),
            (r'0o[0-7]+', Number.Oct),
            (r'0x[0-9a-fA-F]+', Number.Hex),
            (r'\d+\.?\d*((e|E)(\+|-)?\d+)?', Number),
            # Variables
            (r'([A-Z_][a-zA-Z0-9_]*)', Name.Variable),
            # Event handlers
            (r'(after|before)(?=[(])', Keyword),
            # Message forwarding handler
            (r'forward(?=[(])', Keyword),
            # Execution-context methods
            (r'(context|parameter|this|se(lf|nder))(?=[(])', Keyword),
            # Reflection
            (r'(current_predicate|predicate_property)(?=[(])', Keyword),
            # DCGs and term expansion
            (r'(expand_(goal|term)|(goal|term)_expansion|phrase)(?=[(])', Keyword),
            # Entity
            (r'(abolish|c(reate|urrent))_(object|protocol|category)(?=[(])', Keyword),
            (r'(object|protocol|category)_property(?=[(])', Keyword),
            # Entity relations
            (r'co(mplements_object|nforms_to_protocol)(?=[(])', Keyword),
            (r'extends_(object|protocol|category)(?=[(])', Keyword),
            (r'imp(lements_protocol|orts_category)(?=[(])', Keyword),
            (r'(instantiat|specializ)es_class(?=[(])', Keyword),
            # Events
            (r'(current_event|(abolish|define)_events)(?=[(])', Keyword),
            # Flags
            (r'(create|current|set)_logtalk_flag(?=[(])', Keyword),
            # Compiling, loading, and library paths
            (r'logtalk_(compile|l(ibrary_path|oad|oad_context)|'
             r'make(_target_action)?)(?=[(])', Keyword),
            (r'\blogtalk_make\b', Keyword),
            # Database
            (r'(clause|retract(all)?)(?=[(])', Keyword),
            (r'a(bolish|ssert(a|z))(?=[(])', Keyword),
            # Control constructs
            (r'(ca(ll|tch)|throw)(?=[(])', Keyword),
            (r'(fa(il|lse)|true|(instantiation|system)_error)\b', Keyword),
            (r'(type|domain|existence|permission|representation|evaluation|'
             r'resource|syntax)_error(?=[(])', Keyword),
            # All solutions
            (r'((bag|set)of|f(ind|or)all)(?=[(])', Keyword),
            # Multi-threading predicates
            (r'threaded(_(ca(ll|ncel)|once|ignore|exit|peek|wait|notify))?(?=[(])',
             Keyword),
            # Engine predicates
            (r'threaded_engine(_(create|destroy|self|next|next_reified|'
             r'yield|post|fetch))?(?=[(])', Keyword),
            # Term unification
            (r'(subsumes_term|unify_with_occurs_check)(?=[(])', Keyword),
            # Term creation and decomposition
            (r'(functor|arg|copy_term|numbervars|term_variables)(?=[(])', Keyword),
            # Evaluable functors
            (r'(div|rem|m(ax|in|od)|abs|sign)(?=[(])', Keyword),
            (r'float(_(integer|fractional)_part)?(?=[(])', Keyword),
            (r'(floor|t(an|runcate)|round|ceiling)(?=[(])', Keyword),
            # Other arithmetic functors
            (r'(cos|a(cos|sin|tan|tan2)|exp|log|s(in|qrt)|xor)(?=[(])', Keyword),
            # Term testing
            (r'(var|atom(ic)?|integer|float|c(allable|ompound)|n(onvar|umber)|'
             r'ground|acyclic_term)(?=[(])', Keyword),
            # Term comparison
            (r'compare(?=[(])', Keyword),
            # Stream selection and control
            (r'(curren|se)t_(in|out)put(?=[(])', Keyword),
            (r'(open|close)(?=[(])', Keyword),
            (r'flush_output(?=[(])', Keyword),
            (r'(at_end_of_stream|flush_output)\b', Keyword),
            (r'(stream_property|at_end_of_stream|set_stream_position)(?=[(])',
             Keyword),
            # Character and byte input/output
            (r'(nl|(get|peek|put)_(byte|c(har|ode)))(?=[(])', Keyword),
            (r'\bnl\b', Keyword),
            # Term input/output
            (r'read(_term)?(?=[(])', Keyword),
            (r'write(q|_(canonical|term))?(?=[(])', Keyword),
            (r'(current_)?op(?=[(])', Keyword),
            (r'(current_)?char_conversion(?=[(])', Keyword),
            # Atomic term processing
            (r'atom_(length|c(hars|o(ncat|des)))(?=[(])', Keyword),
            (r'(char_code|sub_atom)(?=[(])', Keyword),
            (r'number_c(har|ode)s(?=[(])', Keyword),
            # Implementation defined hooks functions
            (r'(se|curren)t_prolog_flag(?=[(])', Keyword),
            (r'\bhalt\b', Keyword),
            (r'halt(?=[(])', Keyword),
            # Message sending operators
            (r'(::|:|\^\^)', Operator),
            # External call
            (r'[{}]', Keyword),
            # Logic and control
            (r'(ignore|once)(?=[(])', Keyword),
            (r'\brepeat\b', Keyword),
            # Sorting
            (r'(key)?sort(?=[(])', Keyword),
            # Bitwise functors
            (r'(>>|<<|/\\|\\\\|\\)', Operator),
            # Predicate aliases
            (r'\bas\b', Operator),
            # Arithmetic evaluation
            (r'\bis\b', Keyword),
            # Arithmetic comparison
            (r'(=:=|=\\=|<|=<|>=|>)', Operator),
            # Term creation and decomposition
            (r'=\.\.', Operator),
            # Term unification
            (r'(=|\\=)', Operator),
            # Term comparison
            (r'(==|\\==|@=<|@<|@>=|@>)', Operator),
            # Evaluable functors
            (r'(//|[-+*/])', Operator),
            (r'\b(e|pi|div|mod|rem)\b', Operator),
            # Other arithmetic functors
            (r'\b\*\*\b', Operator),
            # DCG rules
            (r'-->', Operator),
            # Control constructs
            (r'([!;]|->)', Operator),
            # Logic and control
            (r'\\+', Operator),
            # Mode operators
            (r'[?@]', Operator),
            # Existential quantifier
            (r'\^', Operator),
            # Strings
            (r'"(\\\\|\\[^\\]|[^"\\])*"', String),
            # Punctuation
            (r'[()\[\],.|]', Text),
            # Atoms
            (r"[a-z][a-zA-Z0-9_]*", Text),
            (r"'", String, 'quoted_atom'),
        ],
        'quoted_atom': [
            (r"''", String),
            (r"'", String, '#pop'),
            (r'\\([\\abfnrtv"\']|(x[a-fA-F0-9]+|[0-7]+)\\)', String.Escape),
            (r"[^\\'\n]+", String),
            (r'\\', String),
        ],
        'directive': [
            # Conditional compilation directives
            (r'(el)?if(?=[(])', Keyword, 'root'),
            (r'(e(lse|ndif))(?=[.])', Keyword, 'root'),
            # Entity directives
            (r'(category|object|protocol)(?=[(])', Keyword, 'entityrelations'),
            (r'(end_(category|object|protocol))(?=[.])', Keyword, 'root'),
            # Predicate scope directives
            (r'(public|protected|private)(?=[(])', Keyword, 'root'),
            # Other directives
            (r'e(n(coding|sure_loaded)|xport)(?=[(])', Keyword, 'root'),
            (r'in(clude|itialization|fo)(?=[(])', Keyword, 'root'),
            (r'(built_in|dynamic|synchronized|threaded)(?=[.])', Keyword, 'root'),
            (r'(alias|d(ynamic|iscontiguous)|m(eta_(non_terminal|predicate)|ode|'
             r'ultifile)|s(et_(logtalk|prolog)_flag|ynchronized))(?=[(])',
             Keyword, 'root'),
            (r'op(?=[(])', Keyword, 'root'),
            (r'(c(alls|oinductive)|module|reexport|use(s|_module))(?=[(])',
             Keyword, 'root'),
            (r'[a-z][a-zA-Z0-9_]*(?=[(])', Text, 'root'),
            (r'[a-z][a-zA-Z0-9_]*(?=[.])', Text, 'root'),
        ],
        'entityrelations': [
            (r'(complements|extends|i(nstantiates|mp(lements|orts))|specializes)'
             r'(?=[(])', Keyword),
            # Numbers
            (r"0'[\\]?.", Number),
            (r'0b[01]+', Number.Bin),
            (r'0o[0-7]+', Number.Oct),
            (r'0x[0-9a-fA-F]+', Number.Hex),
            (r'\d+\.?\d*((e|E)(\+|-)?\d+)?', Number),
            # Variables
            (r'([A-Z_][a-zA-Z0-9_]*)', Name.Variable),
            # Atoms
            (r"[a-z][a-zA-Z0-9_]*", Text),
            (r"'", String, 'quoted_atom'),
            # Strings
            (r'"(\\\\|\\[^\\]|[^"\\])*"', String),
            # End of entity-opening directive
            (r'([)]\.)', Text, 'root'),
            # Scope operator
            (r'(::)', Operator),
            # Punctuation
            (r'[()\[\],.|]', Text),
            # Comments
            (r'%.*?\n', Comment),
            (r'/\*(.|\n)*?\*/', Comment),
            # Whitespace
            (r'\n', Text),
            (r'\s+', Text),
        ]
    }

    def analyse_text(text):
        if ':- object(' in text:
            return 1.0
        elif ':- protocol(' in text:
            return 1.0
        elif ':- category(' in text:
            return 1.0
        elif re.search(r'^:-\s[a-z]', text, re.M):
            return 0.9
        else:
            return 0.0
pygments-2.11.2/pygments/lexers/snobol.py0000644000175000017500000000525414165547207020362 0ustar carstencarsten"""
    pygments.lexers.snobol
    ~~~~~~~~~~~~~~~~~~~~~~

    Lexers for the SNOBOL language.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, bygroups
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Number, Punctuation

__all__ = ['SnobolLexer']


class SnobolLexer(RegexLexer):
    """
    Lexer for the SNOBOL4 programming language.

    Recognizes the common ASCII equivalents of the original SNOBOL4 operators.
    Does not require spaces around binary operators.

    .. versionadded:: 1.5
    """

    name = "Snobol"
    aliases = ["snobol"]
    filenames = ['*.snobol']
    mimetypes = ['text/x-snobol']

    tokens = {
        # root state, start of line
        # comments, continuation lines, and directives start in column 1
        # as do labels
        'root': [
            (r'\*.*\n', Comment),
            (r'[+.] ', Punctuation, 'statement'),
            (r'-.*\n', Comment),
            (r'END\s*\n', Name.Label, 'heredoc'),
            (r'[A-Za-z$][\w$]*', Name.Label, 'statement'),
            (r'\s+', Text, 'statement'),
        ],
        # statement state, line after continuation or label
        'statement': [
            (r'\s*\n', Text, '#pop'),
            (r'\s+', Text),
            (r'(?<=[^\w.])(LT|LE|EQ|NE|GE|GT|INTEGER|IDENT|DIFFER|LGT|SIZE|'
             r'REPLACE|TRIM|DUPL|REMDR|DATE|TIME|EVAL|APPLY|OPSYN|LOAD|UNLOAD|'
             r'LEN|SPAN|BREAK|ANY|NOTANY|TAB|RTAB|REM|POS|RPOS|FAIL|FENCE|'
             r'ABORT|ARB|ARBNO|BAL|SUCCEED|INPUT|OUTPUT|TERMINAL)(?=[^\w.])',
             Name.Builtin),
            (r'[A-Za-z][\w.]*', Name),
            # ASCII equivalents of original operators
            # | for the EBCDIC equivalent, ! likewise
            # \ for EBCDIC negation
            (r'\*\*|[?$.!%*/#+\-@|&\\=]', Operator),
            (r'"[^"]*"', String),
            (r"'[^']*'", String),
            # Accept SPITBOL syntax for real numbers
            # as well as Macro SNOBOL4
            (r'[0-9]+(?=[^.EeDd])', Number.Integer),
            (r'[0-9]+(\.[0-9]*)?([EDed][-+]?[0-9]+)?', Number.Float),
            # Goto
            (r':', Punctuation, 'goto'),
            (r'[()<>,;]', Punctuation),
        ],
        # Goto block
        'goto': [
            (r'\s*\n', Text, "#pop:2"),
            (r'\s+', Text),
            (r'F|S', Keyword),
            (r'(\()([A-Za-z][\w.]*)(\))',
             bygroups(Punctuation, Name.Label, Punctuation)),
        ],
        # everything after the END statement is basically one
        # big heredoc.
        'heredoc': [
            (r'.*\n', String.Heredoc),
        ],
    }
pygments-2.11.2/pygments/lexers/_cocoa_builtins.py0000644000175000017500000031533714165547207022230 0ustar carstencarsten"""
    pygments.lexers._cocoa_builtins
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This file defines a set of types used across the Cocoa frameworks from
    Apple.  It contains lists of @interfaces, @protocols and some other
    types (structs, unions).

    The file can also be used as a standalone generator for those lists.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
""" COCOA_INTERFACES = {'AAAttribution', 'ABNewPersonViewController', 'ABPeoplePickerNavigationController', 'ABPersonViewController', 'ABUnknownPersonViewController', 'ACAccount', 'ACAccountCredential', 'ACAccountStore', 'ACAccountType', 'ADBannerView', 'ADClient', 'ADInterstitialAd', 'ADInterstitialAdPresentationViewController', 'AEAssessmentConfiguration', 'AEAssessmentSession', 'ALAsset', 'ALAssetRepresentation', 'ALAssetsFilter', 'ALAssetsGroup', 'ALAssetsLibrary', 'APActivationPayload', 'ARAnchor', 'ARAppClipCodeAnchor', 'ARBody2D', 'ARBodyAnchor', 'ARBodyTrackingConfiguration', 'ARCamera', 'ARCoachingOverlayView', 'ARCollaborationData', 'ARConfiguration', 'ARDepthData', 'ARDirectionalLightEstimate', 'AREnvironmentProbeAnchor', 'ARFaceAnchor', 'ARFaceGeometry', 'ARFaceTrackingConfiguration', 'ARFrame', 'ARGeoAnchor', 'ARGeoTrackingConfiguration', 'ARGeoTrackingStatus', 'ARGeometryElement', 'ARGeometrySource', 'ARHitTestResult', 'ARImageAnchor', 'ARImageTrackingConfiguration', 'ARLightEstimate', 'ARMatteGenerator', 'ARMeshAnchor', 'ARMeshGeometry', 'ARObjectAnchor', 'ARObjectScanningConfiguration', 'AROrientationTrackingConfiguration', 'ARParticipantAnchor', 'ARPlaneAnchor', 'ARPlaneGeometry', 'ARPointCloud', 'ARPositionalTrackingConfiguration', 'ARQuickLookPreviewItem', 'ARRaycastQuery', 'ARRaycastResult', 'ARReferenceImage', 'ARReferenceObject', 'ARSCNFaceGeometry', 'ARSCNPlaneGeometry', 'ARSCNView', 'ARSKView', 'ARSession', 'ARSkeleton', 'ARSkeleton2D', 'ARSkeleton3D', 'ARSkeletonDefinition', 'ARTrackedRaycast', 'ARVideoFormat', 'ARView', 'ARWorldMap', 'ARWorldTrackingConfiguration', 'ASAccountAuthenticationModificationController', 'ASAccountAuthenticationModificationExtensionContext', 'ASAccountAuthenticationModificationReplacePasswordWithSignInWithAppleRequest', 'ASAccountAuthenticationModificationRequest', 'ASAccountAuthenticationModificationUpgradePasswordToStrongPasswordRequest', 'ASAccountAuthenticationModificationViewController', 'ASAuthorization', 'ASAuthorizationAppleIDButton', 'ASAuthorizationAppleIDCredential', 'ASAuthorizationAppleIDProvider', 'ASAuthorizationAppleIDRequest', 'ASAuthorizationController', 'ASAuthorizationOpenIDRequest', 'ASAuthorizationPasswordProvider', 'ASAuthorizationPasswordRequest', 'ASAuthorizationProviderExtensionAuthorizationRequest', 'ASAuthorizationRequest', 'ASAuthorizationSingleSignOnCredential', 'ASAuthorizationSingleSignOnProvider', 'ASAuthorizationSingleSignOnRequest', 'ASCredentialIdentityStore', 'ASCredentialIdentityStoreState', 'ASCredentialProviderExtensionContext', 'ASCredentialProviderViewController', 'ASCredentialServiceIdentifier', 'ASIdentifierManager', 'ASPasswordCredential', 'ASPasswordCredentialIdentity', 'ASWebAuthenticationSession', 'ASWebAuthenticationSessionRequest', 'ASWebAuthenticationSessionWebBrowserSessionManager', 'ATTrackingManager', 'AUAudioUnit', 'AUAudioUnitBus', 'AUAudioUnitBusArray', 'AUAudioUnitPreset', 'AUAudioUnitV2Bridge', 'AUAudioUnitViewConfiguration', 'AUParameter', 'AUParameterGroup', 'AUParameterNode', 'AUParameterTree', 'AUViewController', 'AVAggregateAssetDownloadTask', 'AVAsset', 'AVAssetCache', 'AVAssetDownloadStorageManagementPolicy', 'AVAssetDownloadStorageManager', 'AVAssetDownloadTask', 'AVAssetDownloadURLSession', 'AVAssetExportSession', 'AVAssetImageGenerator', 'AVAssetReader', 'AVAssetReaderAudioMixOutput', 'AVAssetReaderOutput', 'AVAssetReaderOutputMetadataAdaptor', 'AVAssetReaderSampleReferenceOutput', 'AVAssetReaderTrackOutput', 'AVAssetReaderVideoCompositionOutput', 
'AVAssetResourceLoader', 'AVAssetResourceLoadingContentInformationRequest', 'AVAssetResourceLoadingDataRequest', 'AVAssetResourceLoadingRequest', 'AVAssetResourceLoadingRequestor', 'AVAssetResourceRenewalRequest', 'AVAssetSegmentReport', 'AVAssetSegmentReportSampleInformation', 'AVAssetSegmentTrackReport', 'AVAssetTrack', 'AVAssetTrackGroup', 'AVAssetTrackSegment', 'AVAssetWriter', 'AVAssetWriterInput', 'AVAssetWriterInputGroup', 'AVAssetWriterInputMetadataAdaptor', 'AVAssetWriterInputPassDescription', 'AVAssetWriterInputPixelBufferAdaptor', 'AVAsynchronousCIImageFilteringRequest', 'AVAsynchronousVideoCompositionRequest', 'AVAudioMix', 'AVAudioMixInputParameters', 'AVAudioSession', 'AVCameraCalibrationData', 'AVCaptureAudioChannel', 'AVCaptureAudioDataOutput', 'AVCaptureAudioFileOutput', 'AVCaptureAudioPreviewOutput', 'AVCaptureAutoExposureBracketedStillImageSettings', 'AVCaptureBracketedStillImageSettings', 'AVCaptureConnection', 'AVCaptureDataOutputSynchronizer', 'AVCaptureDepthDataOutput', 'AVCaptureDevice', 'AVCaptureDeviceDiscoverySession', 'AVCaptureDeviceFormat', 'AVCaptureDeviceInput', 'AVCaptureDeviceInputSource', 'AVCaptureFileOutput', 'AVCaptureInput', 'AVCaptureInputPort', 'AVCaptureManualExposureBracketedStillImageSettings', 'AVCaptureMetadataInput', 'AVCaptureMetadataOutput', 'AVCaptureMovieFileOutput', 'AVCaptureMultiCamSession', 'AVCaptureOutput', 'AVCapturePhoto', 'AVCapturePhotoBracketSettings', 'AVCapturePhotoOutput', 'AVCapturePhotoSettings', 'AVCaptureResolvedPhotoSettings', 'AVCaptureScreenInput', 'AVCaptureSession', 'AVCaptureStillImageOutput', 'AVCaptureSynchronizedData', 'AVCaptureSynchronizedDataCollection', 'AVCaptureSynchronizedDepthData', 'AVCaptureSynchronizedMetadataObjectData', 'AVCaptureSynchronizedSampleBufferData', 'AVCaptureSystemPressureState', 'AVCaptureVideoDataOutput', 'AVCaptureVideoPreviewLayer', 'AVComposition', 'AVCompositionTrack', 'AVCompositionTrackFormatDescriptionReplacement', 'AVCompositionTrackSegment', 'AVContentKeyRequest', 'AVContentKeyResponse', 'AVContentKeySession', 'AVDateRangeMetadataGroup', 'AVDepthData', 'AVDisplayCriteria', 'AVFragmentedAsset', 'AVFragmentedAssetMinder', 'AVFragmentedAssetTrack', 'AVFragmentedMovie', 'AVFragmentedMovieMinder', 'AVFragmentedMovieTrack', 'AVFrameRateRange', 'AVMediaDataStorage', 'AVMediaSelection', 'AVMediaSelectionGroup', 'AVMediaSelectionOption', 'AVMetadataBodyObject', 'AVMetadataCatBodyObject', 'AVMetadataDogBodyObject', 'AVMetadataFaceObject', 'AVMetadataGroup', 'AVMetadataHumanBodyObject', 'AVMetadataItem', 'AVMetadataItemFilter', 'AVMetadataItemValueRequest', 'AVMetadataMachineReadableCodeObject', 'AVMetadataObject', 'AVMetadataSalientObject', 'AVMovie', 'AVMovieTrack', 'AVMutableAssetDownloadStorageManagementPolicy', 'AVMutableAudioMix', 'AVMutableAudioMixInputParameters', 'AVMutableComposition', 'AVMutableCompositionTrack', 'AVMutableDateRangeMetadataGroup', 'AVMutableMediaSelection', 'AVMutableMetadataItem', 'AVMutableMovie', 'AVMutableMovieTrack', 'AVMutableTimedMetadataGroup', 'AVMutableVideoComposition', 'AVMutableVideoCompositionInstruction', 'AVMutableVideoCompositionLayerInstruction', 'AVOutputSettingsAssistant', 'AVPersistableContentKeyRequest', 'AVPictureInPictureController', 'AVPlayer', 'AVPlayerItem', 'AVPlayerItemAccessLog', 'AVPlayerItemAccessLogEvent', 'AVPlayerItemErrorLog', 'AVPlayerItemErrorLogEvent', 'AVPlayerItemLegibleOutput', 'AVPlayerItemMediaDataCollector', 'AVPlayerItemMetadataCollector', 'AVPlayerItemMetadataOutput', 'AVPlayerItemOutput', 'AVPlayerItemTrack', 
'AVPlayerItemVideoOutput', 'AVPlayerLayer', 'AVPlayerLooper', 'AVPlayerMediaSelectionCriteria', 'AVPlayerViewController', 'AVPortraitEffectsMatte', 'AVQueuePlayer', 'AVRouteDetector', 'AVRoutePickerView', 'AVSampleBufferAudioRenderer', 'AVSampleBufferDisplayLayer', 'AVSampleBufferRenderSynchronizer', 'AVSemanticSegmentationMatte', 'AVSynchronizedLayer', 'AVTextStyleRule', 'AVTimedMetadataGroup', 'AVURLAsset', 'AVVideoComposition', 'AVVideoCompositionCoreAnimationTool', 'AVVideoCompositionInstruction', 'AVVideoCompositionLayerInstruction', 'AVVideoCompositionRenderContext', 'AVVideoCompositionRenderHint', 'AXCustomContent', 'BCChatAction', 'BCChatButton', 'BGAppRefreshTask', 'BGAppRefreshTaskRequest', 'BGProcessingTask', 'BGProcessingTaskRequest', 'BGTask', 'BGTaskRequest', 'BGTaskScheduler', 'CAAnimation', 'CAAnimationGroup', 'CABTMIDICentralViewController', 'CABTMIDILocalPeripheralViewController', 'CABasicAnimation', 'CADisplayLink', 'CAEAGLLayer', 'CAEmitterCell', 'CAEmitterLayer', 'CAGradientLayer', 'CAInterAppAudioSwitcherView', 'CAInterAppAudioTransportView', 'CAKeyframeAnimation', 'CALayer', 'CAMediaTimingFunction', 'CAMetalLayer', 'CAPropertyAnimation', 'CAReplicatorLayer', 'CAScrollLayer', 'CAShapeLayer', 'CASpringAnimation', 'CATextLayer', 'CATiledLayer', 'CATransaction', 'CATransformLayer', 'CATransition', 'CAValueFunction', 'CBATTRequest', 'CBAttribute', 'CBCentral', 'CBCentralManager', 'CBCharacteristic', 'CBDescriptor', 'CBL2CAPChannel', 'CBManager', 'CBMutableCharacteristic', 'CBMutableDescriptor', 'CBMutableService', 'CBPeer', 'CBPeripheral', 'CBPeripheralManager', 'CBService', 'CBUUID', 'CHHapticDynamicParameter', 'CHHapticEngine', 'CHHapticEvent', 'CHHapticEventParameter', 'CHHapticParameterCurve', 'CHHapticParameterCurveControlPoint', 'CHHapticPattern', 'CIAztecCodeDescriptor', 'CIBarcodeDescriptor', 'CIBlendKernel', 'CIColor', 'CIColorKernel', 'CIContext', 'CIDataMatrixCodeDescriptor', 'CIDetector', 'CIFaceFeature', 'CIFeature', 'CIFilter', 'CIFilterGenerator', 'CIFilterShape', 'CIImage', 'CIImageAccumulator', 'CIImageProcessorKernel', 'CIKernel', 'CIPDF417CodeDescriptor', 'CIPlugIn', 'CIQRCodeDescriptor', 'CIQRCodeFeature', 'CIRectangleFeature', 'CIRenderDestination', 'CIRenderInfo', 'CIRenderTask', 'CISampler', 'CITextFeature', 'CIVector', 'CIWarpKernel', 'CKAcceptSharesOperation', 'CKAsset', 'CKContainer', 'CKDatabase', 'CKDatabaseNotification', 'CKDatabaseOperation', 'CKDatabaseSubscription', 'CKDiscoverAllUserIdentitiesOperation', 'CKDiscoverUserIdentitiesOperation', 'CKFetchDatabaseChangesOperation', 'CKFetchNotificationChangesOperation', 'CKFetchRecordChangesOperation', 'CKFetchRecordZoneChangesConfiguration', 'CKFetchRecordZoneChangesOperation', 'CKFetchRecordZoneChangesOptions', 'CKFetchRecordZonesOperation', 'CKFetchRecordsOperation', 'CKFetchShareMetadataOperation', 'CKFetchShareParticipantsOperation', 'CKFetchSubscriptionsOperation', 'CKFetchWebAuthTokenOperation', 'CKLocationSortDescriptor', 'CKMarkNotificationsReadOperation', 'CKModifyBadgeOperation', 'CKModifyRecordZonesOperation', 'CKModifyRecordsOperation', 'CKModifySubscriptionsOperation', 'CKNotification', 'CKNotificationID', 'CKNotificationInfo', 'CKOperation', 'CKOperationConfiguration', 'CKOperationGroup', 'CKQuery', 'CKQueryCursor', 'CKQueryNotification', 'CKQueryOperation', 'CKQuerySubscription', 'CKRecord', 'CKRecordID', 'CKRecordZone', 'CKRecordZoneID', 'CKRecordZoneNotification', 'CKRecordZoneSubscription', 'CKReference', 'CKServerChangeToken', 'CKShare', 'CKShareMetadata', 
'CKShareParticipant', 'CKSubscription', 'CKUserIdentity', 'CKUserIdentityLookupInfo', 'CLBeacon', 'CLBeaconIdentityConstraint', 'CLBeaconRegion', 'CLCircularRegion', 'CLFloor', 'CLGeocoder', 'CLHeading', 'CLKComplication', 'CLKComplicationDescriptor', 'CLKComplicationServer', 'CLKComplicationTemplate', 'CLKComplicationTemplateCircularSmallRingImage', 'CLKComplicationTemplateCircularSmallRingText', 'CLKComplicationTemplateCircularSmallSimpleImage', 'CLKComplicationTemplateCircularSmallSimpleText', 'CLKComplicationTemplateCircularSmallStackImage', 'CLKComplicationTemplateCircularSmallStackText', 'CLKComplicationTemplateExtraLargeColumnsText', 'CLKComplicationTemplateExtraLargeRingImage', 'CLKComplicationTemplateExtraLargeRingText', 'CLKComplicationTemplateExtraLargeSimpleImage', 'CLKComplicationTemplateExtraLargeSimpleText', 'CLKComplicationTemplateExtraLargeStackImage', 'CLKComplicationTemplateExtraLargeStackText', 'CLKComplicationTemplateGraphicBezelCircularText', 'CLKComplicationTemplateGraphicCircular', 'CLKComplicationTemplateGraphicCircularClosedGaugeImage', 'CLKComplicationTemplateGraphicCircularClosedGaugeText', 'CLKComplicationTemplateGraphicCircularImage', 'CLKComplicationTemplateGraphicCircularOpenGaugeImage', 'CLKComplicationTemplateGraphicCircularOpenGaugeRangeText', 'CLKComplicationTemplateGraphicCircularOpenGaugeSimpleText', 'CLKComplicationTemplateGraphicCircularStackImage', 'CLKComplicationTemplateGraphicCircularStackText', 'CLKComplicationTemplateGraphicCornerCircularImage', 'CLKComplicationTemplateGraphicCornerGaugeImage', 'CLKComplicationTemplateGraphicCornerGaugeText', 'CLKComplicationTemplateGraphicCornerStackText', 'CLKComplicationTemplateGraphicCornerTextImage', 'CLKComplicationTemplateGraphicExtraLargeCircular', 'CLKComplicationTemplateGraphicExtraLargeCircularClosedGaugeImage', 'CLKComplicationTemplateGraphicExtraLargeCircularClosedGaugeText', 'CLKComplicationTemplateGraphicExtraLargeCircularImage', 'CLKComplicationTemplateGraphicExtraLargeCircularOpenGaugeImage', 'CLKComplicationTemplateGraphicExtraLargeCircularOpenGaugeRangeText', 'CLKComplicationTemplateGraphicExtraLargeCircularOpenGaugeSimpleText', 'CLKComplicationTemplateGraphicExtraLargeCircularStackImage', 'CLKComplicationTemplateGraphicExtraLargeCircularStackText', 'CLKComplicationTemplateGraphicRectangularFullImage', 'CLKComplicationTemplateGraphicRectangularLargeImage', 'CLKComplicationTemplateGraphicRectangularStandardBody', 'CLKComplicationTemplateGraphicRectangularTextGauge', 'CLKComplicationTemplateModularLargeColumns', 'CLKComplicationTemplateModularLargeStandardBody', 'CLKComplicationTemplateModularLargeTable', 'CLKComplicationTemplateModularLargeTallBody', 'CLKComplicationTemplateModularSmallColumnsText', 'CLKComplicationTemplateModularSmallRingImage', 'CLKComplicationTemplateModularSmallRingText', 'CLKComplicationTemplateModularSmallSimpleImage', 'CLKComplicationTemplateModularSmallSimpleText', 'CLKComplicationTemplateModularSmallStackImage', 'CLKComplicationTemplateModularSmallStackText', 'CLKComplicationTemplateUtilitarianLargeFlat', 'CLKComplicationTemplateUtilitarianSmallFlat', 'CLKComplicationTemplateUtilitarianSmallRingImage', 'CLKComplicationTemplateUtilitarianSmallRingText', 'CLKComplicationTemplateUtilitarianSmallSquare', 'CLKComplicationTimelineEntry', 'CLKDateTextProvider', 'CLKFullColorImageProvider', 'CLKGaugeProvider', 'CLKImageProvider', 'CLKRelativeDateTextProvider', 'CLKSimpleGaugeProvider', 'CLKSimpleTextProvider', 'CLKTextProvider', 'CLKTimeIntervalGaugeProvider', 
'CLKTimeIntervalTextProvider', 'CLKTimeTextProvider', 'CLKWatchFaceLibrary', 'CLLocation', 'CLLocationManager', 'CLPlacemark', 'CLRegion', 'CLSActivity', 'CLSActivityItem', 'CLSBinaryItem', 'CLSContext', 'CLSDataStore', 'CLSObject', 'CLSProgressReportingCapability', 'CLSQuantityItem', 'CLSScoreItem', 'CLVisit', 'CMAccelerometerData', 'CMAltimeter', 'CMAltitudeData', 'CMAttitude', 'CMDeviceMotion', 'CMDyskineticSymptomResult', 'CMFallDetectionEvent', 'CMFallDetectionManager', 'CMGyroData', 'CMHeadphoneMotionManager', 'CMLogItem', 'CMMagnetometerData', 'CMMotionActivity', 'CMMotionActivityManager', 'CMMotionManager', 'CMMovementDisorderManager', 'CMPedometer', 'CMPedometerData', 'CMPedometerEvent', 'CMRecordedAccelerometerData', 'CMRecordedRotationRateData', 'CMRotationRateData', 'CMSensorDataList', 'CMSensorRecorder', 'CMStepCounter', 'CMTremorResult', 'CNChangeHistoryAddContactEvent', 'CNChangeHistoryAddGroupEvent', 'CNChangeHistoryAddMemberToGroupEvent', 'CNChangeHistoryAddSubgroupToGroupEvent', 'CNChangeHistoryDeleteContactEvent', 'CNChangeHistoryDeleteGroupEvent', 'CNChangeHistoryDropEverythingEvent', 'CNChangeHistoryEvent', 'CNChangeHistoryFetchRequest', 'CNChangeHistoryRemoveMemberFromGroupEvent', 'CNChangeHistoryRemoveSubgroupFromGroupEvent', 'CNChangeHistoryUpdateContactEvent', 'CNChangeHistoryUpdateGroupEvent', 'CNContact', 'CNContactFetchRequest', 'CNContactFormatter', 'CNContactPickerViewController', 'CNContactProperty', 'CNContactRelation', 'CNContactStore', 'CNContactVCardSerialization', 'CNContactViewController', 'CNContactsUserDefaults', 'CNContainer', 'CNFetchRequest', 'CNFetchResult', 'CNGroup', 'CNInstantMessageAddress', 'CNLabeledValue', 'CNMutableContact', 'CNMutableGroup', 'CNMutablePostalAddress', 'CNPhoneNumber', 'CNPostalAddress', 'CNPostalAddressFormatter', 'CNSaveRequest', 'CNSocialProfile', 'CPActionSheetTemplate', 'CPAlertAction', 'CPAlertTemplate', 'CPBarButton', 'CPButton', 'CPContact', 'CPContactCallButton', 'CPContactDirectionsButton', 'CPContactMessageButton', 'CPContactTemplate', 'CPDashboardButton', 'CPDashboardController', 'CPGridButton', 'CPGridTemplate', 'CPImageSet', 'CPInformationItem', 'CPInformationRatingItem', 'CPInformationTemplate', 'CPInterfaceController', 'CPListImageRowItem', 'CPListItem', 'CPListSection', 'CPListTemplate', 'CPManeuver', 'CPMapButton', 'CPMapTemplate', 'CPMessageComposeBarButton', 'CPMessageListItem', 'CPMessageListItemLeadingConfiguration', 'CPMessageListItemTrailingConfiguration', 'CPNavigationAlert', 'CPNavigationSession', 'CPNowPlayingAddToLibraryButton', 'CPNowPlayingButton', 'CPNowPlayingImageButton', 'CPNowPlayingMoreButton', 'CPNowPlayingPlaybackRateButton', 'CPNowPlayingRepeatButton', 'CPNowPlayingShuffleButton', 'CPNowPlayingTemplate', 'CPPointOfInterest', 'CPPointOfInterestTemplate', 'CPRouteChoice', 'CPSearchTemplate', 'CPSessionConfiguration', 'CPTabBarTemplate', 'CPTemplate', 'CPTemplateApplicationDashboardScene', 'CPTemplateApplicationScene', 'CPTextButton', 'CPTravelEstimates', 'CPTrip', 'CPTripPreviewTextConfiguration', 'CPVoiceControlState', 'CPVoiceControlTemplate', 'CPWindow', 'CSCustomAttributeKey', 'CSIndexExtensionRequestHandler', 'CSLocalizedString', 'CSPerson', 'CSSearchQuery', 'CSSearchableIndex', 'CSSearchableItem', 'CSSearchableItemAttributeSet', 'CTCall', 'CTCallCenter', 'CTCarrier', 'CTCellularData', 'CTCellularPlanProvisioning', 'CTCellularPlanProvisioningRequest', 'CTSubscriber', 'CTSubscriberInfo', 'CTTelephonyNetworkInfo', 'CXAction', 'CXAnswerCallAction', 'CXCall', 'CXCallAction', 
'CXCallController', 'CXCallDirectoryExtensionContext', 'CXCallDirectoryManager', 'CXCallDirectoryProvider', 'CXCallObserver', 'CXCallUpdate', 'CXEndCallAction', 'CXHandle', 'CXPlayDTMFCallAction', 'CXProvider', 'CXProviderConfiguration', 'CXSetGroupCallAction', 'CXSetHeldCallAction', 'CXSetMutedCallAction', 'CXStartCallAction', 'CXTransaction', 'DCAppAttestService', 'DCDevice', 'EAAccessory', 'EAAccessoryManager', 'EAGLContext', 'EAGLSharegroup', 'EASession', 'EAWiFiUnconfiguredAccessory', 'EAWiFiUnconfiguredAccessoryBrowser', 'EKAlarm', 'EKCalendar', 'EKCalendarChooser', 'EKCalendarItem', 'EKEvent', 'EKEventEditViewController', 'EKEventStore', 'EKEventViewController', 'EKObject', 'EKParticipant', 'EKRecurrenceDayOfWeek', 'EKRecurrenceEnd', 'EKRecurrenceRule', 'EKReminder', 'EKSource', 'EKStructuredLocation', 'ENExposureConfiguration', 'ENExposureDaySummary', 'ENExposureDetectionSummary', 'ENExposureInfo', 'ENExposureSummaryItem', 'ENExposureWindow', 'ENManager', 'ENScanInstance', 'ENTemporaryExposureKey', 'EntityRotationGestureRecognizer', 'EntityScaleGestureRecognizer', 'EntityTranslationGestureRecognizer', 'FPUIActionExtensionContext', 'FPUIActionExtensionViewController', 'GCColor', 'GCController', 'GCControllerAxisInput', 'GCControllerButtonInput', 'GCControllerDirectionPad', 'GCControllerElement', 'GCControllerTouchpad', 'GCDeviceBattery', 'GCDeviceCursor', 'GCDeviceHaptics', 'GCDeviceLight', 'GCDirectionalGamepad', 'GCDualShockGamepad', 'GCEventViewController', 'GCExtendedGamepad', 'GCExtendedGamepadSnapshot', 'GCGamepad', 'GCGamepadSnapshot', 'GCKeyboard', 'GCKeyboardInput', 'GCMicroGamepad', 'GCMicroGamepadSnapshot', 'GCMotion', 'GCMouse', 'GCMouseInput', 'GCPhysicalInputProfile', 'GCXboxGamepad', 'GKARC4RandomSource', 'GKAccessPoint', 'GKAchievement', 'GKAchievementChallenge', 'GKAchievementDescription', 'GKAchievementViewController', 'GKAgent', 'GKAgent2D', 'GKAgent3D', 'GKBasePlayer', 'GKBehavior', 'GKBillowNoiseSource', 'GKChallenge', 'GKChallengeEventHandler', 'GKCheckerboardNoiseSource', 'GKCircleObstacle', 'GKCloudPlayer', 'GKCoherentNoiseSource', 'GKComponent', 'GKComponentSystem', 'GKCompositeBehavior', 'GKConstantNoiseSource', 'GKCylindersNoiseSource', 'GKDecisionNode', 'GKDecisionTree', 'GKEntity', 'GKFriendRequestComposeViewController', 'GKGameCenterViewController', 'GKGameSession', 'GKGameSessionSharingViewController', 'GKGaussianDistribution', 'GKGoal', 'GKGraph', 'GKGraphNode', 'GKGraphNode2D', 'GKGraphNode3D', 'GKGridGraph', 'GKGridGraphNode', 'GKInvite', 'GKLeaderboard', 'GKLeaderboardEntry', 'GKLeaderboardScore', 'GKLeaderboardSet', 'GKLeaderboardViewController', 'GKLinearCongruentialRandomSource', 'GKLocalPlayer', 'GKMatch', 'GKMatchRequest', 'GKMatchmaker', 'GKMatchmakerViewController', 'GKMersenneTwisterRandomSource', 'GKMeshGraph', 'GKMinmaxStrategist', 'GKMonteCarloStrategist', 'GKNSPredicateRule', 'GKNoise', 'GKNoiseMap', 'GKNoiseSource', 'GKNotificationBanner', 'GKObstacle', 'GKObstacleGraph', 'GKOctree', 'GKOctreeNode', 'GKPath', 'GKPeerPickerController', 'GKPerlinNoiseSource', 'GKPlayer', 'GKPolygonObstacle', 'GKQuadtree', 'GKQuadtreeNode', 'GKRTree', 'GKRandomDistribution', 'GKRandomSource', 'GKRidgedNoiseSource', 'GKRule', 'GKRuleSystem', 'GKSCNNodeComponent', 'GKSKNodeComponent', 'GKSavedGame', 'GKScene', 'GKScore', 'GKScoreChallenge', 'GKSession', 'GKShuffledDistribution', 'GKSphereObstacle', 'GKSpheresNoiseSource', 'GKState', 'GKStateMachine', 'GKTurnBasedEventHandler', 'GKTurnBasedExchangeReply', 'GKTurnBasedMatch', 
'GKTurnBasedMatchmakerViewController', 'GKTurnBasedParticipant', 'GKVoiceChat', 'GKVoiceChatService', 'GKVoronoiNoiseSource', 'GLKBaseEffect', 'GLKEffectProperty', 'GLKEffectPropertyFog', 'GLKEffectPropertyLight', 'GLKEffectPropertyMaterial', 'GLKEffectPropertyTexture', 'GLKEffectPropertyTransform', 'GLKMesh', 'GLKMeshBuffer', 'GLKMeshBufferAllocator', 'GLKReflectionMapEffect', 'GLKSkyboxEffect', 'GLKSubmesh', 'GLKTextureInfo', 'GLKTextureLoader', 'GLKView', 'GLKViewController', 'HKActivityMoveModeObject', 'HKActivityRingView', 'HKActivitySummary', 'HKActivitySummaryQuery', 'HKActivitySummaryType', 'HKAnchoredObjectQuery', 'HKAudiogramSample', 'HKAudiogramSampleType', 'HKAudiogramSensitivityPoint', 'HKBiologicalSexObject', 'HKBloodTypeObject', 'HKCDADocument', 'HKCDADocumentSample', 'HKCategorySample', 'HKCategoryType', 'HKCharacteristicType', 'HKClinicalRecord', 'HKClinicalType', 'HKCorrelation', 'HKCorrelationQuery', 'HKCorrelationType', 'HKCumulativeQuantitySample', 'HKCumulativeQuantitySeriesSample', 'HKDeletedObject', 'HKDevice', 'HKDiscreteQuantitySample', 'HKDocumentQuery', 'HKDocumentSample', 'HKDocumentType', 'HKElectrocardiogram', 'HKElectrocardiogramQuery', 'HKElectrocardiogramType', 'HKElectrocardiogramVoltageMeasurement', 'HKFHIRResource', 'HKFHIRVersion', 'HKFitzpatrickSkinTypeObject', 'HKHealthStore', 'HKHeartbeatSeriesBuilder', 'HKHeartbeatSeriesQuery', 'HKHeartbeatSeriesSample', 'HKLiveWorkoutBuilder', 'HKLiveWorkoutDataSource', 'HKObject', 'HKObjectType', 'HKObserverQuery', 'HKQuantity', 'HKQuantitySample', 'HKQuantitySeriesSampleBuilder', 'HKQuantitySeriesSampleQuery', 'HKQuantityType', 'HKQuery', 'HKQueryAnchor', 'HKSample', 'HKSampleQuery', 'HKSampleType', 'HKSeriesBuilder', 'HKSeriesSample', 'HKSeriesType', 'HKSource', 'HKSourceQuery', 'HKSourceRevision', 'HKStatistics', 'HKStatisticsCollection', 'HKStatisticsCollectionQuery', 'HKStatisticsQuery', 'HKUnit', 'HKWheelchairUseObject', 'HKWorkout', 'HKWorkoutBuilder', 'HKWorkoutConfiguration', 'HKWorkoutEvent', 'HKWorkoutRoute', 'HKWorkoutRouteBuilder', 'HKWorkoutRouteQuery', 'HKWorkoutSession', 'HKWorkoutType', 'HMAccessControl', 'HMAccessory', 'HMAccessoryBrowser', 'HMAccessoryCategory', 'HMAccessoryOwnershipToken', 'HMAccessoryProfile', 'HMAccessorySetupPayload', 'HMAction', 'HMActionSet', 'HMAddAccessoryRequest', 'HMCalendarEvent', 'HMCameraAudioControl', 'HMCameraControl', 'HMCameraProfile', 'HMCameraSettingsControl', 'HMCameraSnapshot', 'HMCameraSnapshotControl', 'HMCameraSource', 'HMCameraStream', 'HMCameraStreamControl', 'HMCameraView', 'HMCharacteristic', 'HMCharacteristicEvent', 'HMCharacteristicMetadata', 'HMCharacteristicThresholdRangeEvent', 'HMCharacteristicWriteAction', 'HMDurationEvent', 'HMEvent', 'HMEventTrigger', 'HMHome', 'HMHomeAccessControl', 'HMHomeManager', 'HMLocationEvent', 'HMMutableCalendarEvent', 'HMMutableCharacteristicEvent', 'HMMutableCharacteristicThresholdRangeEvent', 'HMMutableDurationEvent', 'HMMutableLocationEvent', 'HMMutablePresenceEvent', 'HMMutableSignificantTimeEvent', 'HMNetworkConfigurationProfile', 'HMNumberRange', 'HMPresenceEvent', 'HMRoom', 'HMService', 'HMServiceGroup', 'HMSignificantTimeEvent', 'HMTimeEvent', 'HMTimerTrigger', 'HMTrigger', 'HMUser', 'HMZone', 'ICCameraDevice', 'ICCameraFile', 'ICCameraFolder', 'ICCameraItem', 'ICDevice', 'ICDeviceBrowser', 'ICScannerBandData', 'ICScannerDevice', 'ICScannerFeature', 'ICScannerFeatureBoolean', 'ICScannerFeatureEnumeration', 'ICScannerFeatureRange', 'ICScannerFeatureTemplate', 'ICScannerFunctionalUnit', 
'ICScannerFunctionalUnitDocumentFeeder', 'ICScannerFunctionalUnitFlatbed', 'ICScannerFunctionalUnitNegativeTransparency', 'ICScannerFunctionalUnitPositiveTransparency', 'ILCallClassificationRequest', 'ILCallCommunication', 'ILClassificationRequest', 'ILClassificationResponse', 'ILClassificationUIExtensionContext', 'ILClassificationUIExtensionViewController', 'ILCommunication', 'ILMessageClassificationRequest', 'ILMessageCommunication', 'ILMessageFilterExtension', 'ILMessageFilterExtensionContext', 'ILMessageFilterQueryRequest', 'ILMessageFilterQueryResponse', 'ILNetworkResponse', 'INAccountTypeResolutionResult', 'INActivateCarSignalIntent', 'INActivateCarSignalIntentResponse', 'INAddMediaIntent', 'INAddMediaIntentResponse', 'INAddMediaMediaDestinationResolutionResult', 'INAddMediaMediaItemResolutionResult', 'INAddTasksIntent', 'INAddTasksIntentResponse', 'INAddTasksTargetTaskListResolutionResult', 'INAddTasksTemporalEventTriggerResolutionResult', 'INAirline', 'INAirport', 'INAirportGate', 'INAppendToNoteIntent', 'INAppendToNoteIntentResponse', 'INBalanceAmount', 'INBalanceTypeResolutionResult', 'INBillDetails', 'INBillPayee', 'INBillPayeeResolutionResult', 'INBillTypeResolutionResult', 'INBoatReservation', 'INBoatTrip', 'INBookRestaurantReservationIntent', 'INBookRestaurantReservationIntentResponse', 'INBooleanResolutionResult', 'INBusReservation', 'INBusTrip', 'INCallCapabilityResolutionResult', 'INCallDestinationTypeResolutionResult', 'INCallRecord', 'INCallRecordFilter', 'INCallRecordResolutionResult', 'INCallRecordTypeOptionsResolutionResult', 'INCallRecordTypeResolutionResult', 'INCancelRideIntent', 'INCancelRideIntentResponse', 'INCancelWorkoutIntent', 'INCancelWorkoutIntentResponse', 'INCar', 'INCarAirCirculationModeResolutionResult', 'INCarAudioSourceResolutionResult', 'INCarDefrosterResolutionResult', 'INCarHeadUnit', 'INCarSeatResolutionResult', 'INCarSignalOptionsResolutionResult', 'INCreateNoteIntent', 'INCreateNoteIntentResponse', 'INCreateTaskListIntent', 'INCreateTaskListIntentResponse', 'INCurrencyAmount', 'INCurrencyAmountResolutionResult', 'INDailyRoutineRelevanceProvider', 'INDateComponentsRange', 'INDateComponentsRangeResolutionResult', 'INDateComponentsResolutionResult', 'INDateRelevanceProvider', 'INDateSearchTypeResolutionResult', 'INDefaultCardTemplate', 'INDeleteTasksIntent', 'INDeleteTasksIntentResponse', 'INDeleteTasksTaskListResolutionResult', 'INDeleteTasksTaskResolutionResult', 'INDoubleResolutionResult', 'INEndWorkoutIntent', 'INEndWorkoutIntentResponse', 'INEnergyResolutionResult', 'INEnumResolutionResult', 'INExtension', 'INFile', 'INFileResolutionResult', 'INFlight', 'INFlightReservation', 'INGetAvailableRestaurantReservationBookingDefaultsIntent', 'INGetAvailableRestaurantReservationBookingDefaultsIntentResponse', 'INGetAvailableRestaurantReservationBookingsIntent', 'INGetAvailableRestaurantReservationBookingsIntentResponse', 'INGetCarLockStatusIntent', 'INGetCarLockStatusIntentResponse', 'INGetCarPowerLevelStatusIntent', 'INGetCarPowerLevelStatusIntentResponse', 'INGetReservationDetailsIntent', 'INGetReservationDetailsIntentResponse', 'INGetRestaurantGuestIntent', 'INGetRestaurantGuestIntentResponse', 'INGetRideStatusIntent', 'INGetRideStatusIntentResponse', 'INGetUserCurrentRestaurantReservationBookingsIntent', 'INGetUserCurrentRestaurantReservationBookingsIntentResponse', 'INGetVisualCodeIntent', 'INGetVisualCodeIntentResponse', 'INImage', 'INImageNoteContent', 'INIntegerResolutionResult', 'INIntent', 'INIntentResolutionResult', 'INIntentResponse', 
'INInteraction', 'INLengthResolutionResult', 'INListCarsIntent', 'INListCarsIntentResponse', 'INListRideOptionsIntent', 'INListRideOptionsIntentResponse', 'INLocationRelevanceProvider', 'INLocationSearchTypeResolutionResult', 'INLodgingReservation', 'INMassResolutionResult', 'INMediaAffinityTypeResolutionResult', 'INMediaDestination', 'INMediaDestinationResolutionResult', 'INMediaItem', 'INMediaItemResolutionResult', 'INMediaSearch', 'INMediaUserContext', 'INMessage', 'INMessageAttributeOptionsResolutionResult', 'INMessageAttributeResolutionResult', 'INNote', 'INNoteContent', 'INNoteContentResolutionResult', 'INNoteContentTypeResolutionResult', 'INNoteResolutionResult', 'INNotebookItemTypeResolutionResult', 'INObject', 'INObjectCollection', 'INObjectResolutionResult', 'INObjectSection', 'INOutgoingMessageTypeResolutionResult', 'INParameter', 'INPauseWorkoutIntent', 'INPauseWorkoutIntentResponse', 'INPayBillIntent', 'INPayBillIntentResponse', 'INPaymentAccount', 'INPaymentAccountResolutionResult', 'INPaymentAmount', 'INPaymentAmountResolutionResult', 'INPaymentMethod', 'INPaymentMethodResolutionResult', 'INPaymentRecord', 'INPaymentStatusResolutionResult', 'INPerson', 'INPersonHandle', 'INPersonResolutionResult', 'INPlacemarkResolutionResult', 'INPlayMediaIntent', 'INPlayMediaIntentResponse', 'INPlayMediaMediaItemResolutionResult', 'INPlayMediaPlaybackSpeedResolutionResult', 'INPlaybackQueueLocationResolutionResult', 'INPlaybackRepeatModeResolutionResult', 'INPreferences', 'INPriceRange', 'INRadioTypeResolutionResult', 'INRecurrenceRule', 'INRelativeReferenceResolutionResult', 'INRelativeSettingResolutionResult', 'INRelevanceProvider', 'INRelevantShortcut', 'INRelevantShortcutStore', 'INRentalCar', 'INRentalCarReservation', 'INRequestPaymentCurrencyAmountResolutionResult', 'INRequestPaymentIntent', 'INRequestPaymentIntentResponse', 'INRequestPaymentPayerResolutionResult', 'INRequestRideIntent', 'INRequestRideIntentResponse', 'INReservation', 'INReservationAction', 'INRestaurant', 'INRestaurantGuest', 'INRestaurantGuestDisplayPreferences', 'INRestaurantGuestResolutionResult', 'INRestaurantOffer', 'INRestaurantReservation', 'INRestaurantReservationBooking', 'INRestaurantReservationUserBooking', 'INRestaurantResolutionResult', 'INResumeWorkoutIntent', 'INResumeWorkoutIntentResponse', 'INRideCompletionStatus', 'INRideDriver', 'INRideFareLineItem', 'INRideOption', 'INRidePartySizeOption', 'INRideStatus', 'INRideVehicle', 'INSaveProfileInCarIntent', 'INSaveProfileInCarIntentResponse', 'INSearchCallHistoryIntent', 'INSearchCallHistoryIntentResponse', 'INSearchForAccountsIntent', 'INSearchForAccountsIntentResponse', 'INSearchForBillsIntent', 'INSearchForBillsIntentResponse', 'INSearchForMediaIntent', 'INSearchForMediaIntentResponse', 'INSearchForMediaMediaItemResolutionResult', 'INSearchForMessagesIntent', 'INSearchForMessagesIntentResponse', 'INSearchForNotebookItemsIntent', 'INSearchForNotebookItemsIntentResponse', 'INSearchForPhotosIntent', 'INSearchForPhotosIntentResponse', 'INSeat', 'INSendMessageAttachment', 'INSendMessageIntent', 'INSendMessageIntentResponse', 'INSendMessageRecipientResolutionResult', 'INSendPaymentCurrencyAmountResolutionResult', 'INSendPaymentIntent', 'INSendPaymentIntentResponse', 'INSendPaymentPayeeResolutionResult', 'INSendRideFeedbackIntent', 'INSendRideFeedbackIntentResponse', 'INSetAudioSourceInCarIntent', 'INSetAudioSourceInCarIntentResponse', 'INSetCarLockStatusIntent', 'INSetCarLockStatusIntentResponse', 'INSetClimateSettingsInCarIntent', 
'INSetClimateSettingsInCarIntentResponse', 'INSetDefrosterSettingsInCarIntent', 'INSetDefrosterSettingsInCarIntentResponse', 'INSetMessageAttributeIntent', 'INSetMessageAttributeIntentResponse', 'INSetProfileInCarIntent', 'INSetProfileInCarIntentResponse', 'INSetRadioStationIntent', 'INSetRadioStationIntentResponse', 'INSetSeatSettingsInCarIntent', 'INSetSeatSettingsInCarIntentResponse', 'INSetTaskAttributeIntent', 'INSetTaskAttributeIntentResponse', 'INSetTaskAttributeTemporalEventTriggerResolutionResult', 'INShortcut', 'INSnoozeTasksIntent', 'INSnoozeTasksIntentResponse', 'INSnoozeTasksTaskResolutionResult', 'INSpatialEventTrigger', 'INSpatialEventTriggerResolutionResult', 'INSpeakableString', 'INSpeakableStringResolutionResult', 'INSpeedResolutionResult', 'INStartAudioCallIntent', 'INStartAudioCallIntentResponse', 'INStartCallCallCapabilityResolutionResult', 'INStartCallCallRecordToCallBackResolutionResult', 'INStartCallContactResolutionResult', 'INStartCallIntent', 'INStartCallIntentResponse', 'INStartPhotoPlaybackIntent', 'INStartPhotoPlaybackIntentResponse', 'INStartVideoCallIntent', 'INStartVideoCallIntentResponse', 'INStartWorkoutIntent', 'INStartWorkoutIntentResponse', 'INStringResolutionResult', 'INTask', 'INTaskList', 'INTaskListResolutionResult', 'INTaskPriorityResolutionResult', 'INTaskResolutionResult', 'INTaskStatusResolutionResult', 'INTemperatureResolutionResult', 'INTemporalEventTrigger', 'INTemporalEventTriggerResolutionResult', 'INTemporalEventTriggerTypeOptionsResolutionResult', 'INTermsAndConditions', 'INTextNoteContent', 'INTicketedEvent', 'INTicketedEventReservation', 'INTimeIntervalResolutionResult', 'INTrainReservation', 'INTrainTrip', 'INTransferMoneyIntent', 'INTransferMoneyIntentResponse', 'INUIAddVoiceShortcutButton', 'INUIAddVoiceShortcutViewController', 'INUIEditVoiceShortcutViewController', 'INURLResolutionResult', 'INUpcomingMediaManager', 'INUpdateMediaAffinityIntent', 'INUpdateMediaAffinityIntentResponse', 'INUpdateMediaAffinityMediaItemResolutionResult', 'INUserContext', 'INVisualCodeTypeResolutionResult', 'INVocabulary', 'INVoiceShortcut', 'INVoiceShortcutCenter', 'INVolumeResolutionResult', 'INWorkoutGoalUnitTypeResolutionResult', 'INWorkoutLocationTypeResolutionResult', 'IOSurface', 'JSContext', 'JSManagedValue', 'JSValue', 'JSVirtualMachine', 'LAContext', 'LPLinkMetadata', 'LPLinkView', 'LPMetadataProvider', 'MCAdvertiserAssistant', 'MCBrowserViewController', 'MCNearbyServiceAdvertiser', 'MCNearbyServiceBrowser', 'MCPeerID', 'MCSession', 'MDLAnimatedMatrix4x4', 'MDLAnimatedQuaternion', 'MDLAnimatedQuaternionArray', 'MDLAnimatedScalar', 'MDLAnimatedScalarArray', 'MDLAnimatedValue', 'MDLAnimatedVector2', 'MDLAnimatedVector3', 'MDLAnimatedVector3Array', 'MDLAnimatedVector4', 'MDLAnimationBindComponent', 'MDLAreaLight', 'MDLAsset', 'MDLBundleAssetResolver', 'MDLCamera', 'MDLCheckerboardTexture', 'MDLColorSwatchTexture', 'MDLLight', 'MDLLightProbe', 'MDLMaterial', 'MDLMaterialProperty', 'MDLMaterialPropertyConnection', 'MDLMaterialPropertyGraph', 'MDLMaterialPropertyNode', 'MDLMatrix4x4Array', 'MDLMesh', 'MDLMeshBufferData', 'MDLMeshBufferDataAllocator', 'MDLMeshBufferMap', 'MDLMeshBufferZoneDefault', 'MDLNoiseTexture', 'MDLNormalMapTexture', 'MDLObject', 'MDLObjectContainer', 'MDLPackedJointAnimation', 'MDLPathAssetResolver', 'MDLPhotometricLight', 'MDLPhysicallyPlausibleLight', 'MDLPhysicallyPlausibleScatteringFunction', 'MDLRelativeAssetResolver', 'MDLScatteringFunction', 'MDLSkeleton', 'MDLSkyCubeTexture', 'MDLStereoscopicCamera', 'MDLSubmesh', 
'MDLSubmeshTopology', 'MDLTexture', 'MDLTextureFilter', 'MDLTextureSampler', 'MDLTransform', 'MDLTransformMatrixOp', 'MDLTransformOrientOp', 'MDLTransformRotateOp', 'MDLTransformRotateXOp', 'MDLTransformRotateYOp', 'MDLTransformRotateZOp', 'MDLTransformScaleOp', 'MDLTransformStack', 'MDLTransformTranslateOp', 'MDLURLTexture', 'MDLVertexAttribute', 'MDLVertexAttributeData', 'MDLVertexBufferLayout', 'MDLVertexDescriptor', 'MDLVoxelArray', 'MFMailComposeViewController', 'MFMessageComposeViewController', 'MIDICIDeviceInfo', 'MIDICIDiscoveredNode', 'MIDICIDiscoveryManager', 'MIDICIProfile', 'MIDICIProfileState', 'MIDICIResponder', 'MIDICISession', 'MIDINetworkConnection', 'MIDINetworkHost', 'MIDINetworkSession', 'MKAnnotationView', 'MKCircle', 'MKCircleRenderer', 'MKCircleView', 'MKClusterAnnotation', 'MKCompassButton', 'MKDirections', 'MKDirectionsRequest', 'MKDirectionsResponse', 'MKDistanceFormatter', 'MKETAResponse', 'MKGeoJSONDecoder', 'MKGeoJSONFeature', 'MKGeodesicPolyline', 'MKGradientPolylineRenderer', 'MKLocalPointsOfInterestRequest', 'MKLocalSearch', 'MKLocalSearchCompleter', 'MKLocalSearchCompletion', 'MKLocalSearchRequest', 'MKLocalSearchResponse', 'MKMapCamera', 'MKMapCameraBoundary', 'MKMapCameraZoomRange', 'MKMapItem', 'MKMapSnapshot', 'MKMapSnapshotOptions', 'MKMapSnapshotter', 'MKMapView', 'MKMarkerAnnotationView', 'MKMultiPoint', 'MKMultiPolygon', 'MKMultiPolygonRenderer', 'MKMultiPolyline', 'MKMultiPolylineRenderer', 'MKOverlayPathRenderer', 'MKOverlayPathView', 'MKOverlayRenderer', 'MKOverlayView', 'MKPinAnnotationView', 'MKPitchControl', 'MKPlacemark', 'MKPointAnnotation', 'MKPointOfInterestFilter', 'MKPolygon', 'MKPolygonRenderer', 'MKPolygonView', 'MKPolyline', 'MKPolylineRenderer', 'MKPolylineView', 'MKReverseGeocoder', 'MKRoute', 'MKRouteStep', 'MKScaleView', 'MKShape', 'MKTileOverlay', 'MKTileOverlayRenderer', 'MKUserLocation', 'MKUserLocationView', 'MKUserTrackingBarButtonItem', 'MKUserTrackingButton', 'MKZoomControl', 'MLArrayBatchProvider', 'MLCActivationDescriptor', 'MLCActivationLayer', 'MLCArithmeticLayer', 'MLCBatchNormalizationLayer', 'MLCConcatenationLayer', 'MLCConvolutionDescriptor', 'MLCConvolutionLayer', 'MLCDevice', 'MLCDropoutLayer', 'MLCEmbeddingDescriptor', 'MLCEmbeddingLayer', 'MLCFullyConnectedLayer', 'MLCGramMatrixLayer', 'MLCGraph', 'MLCGroupNormalizationLayer', 'MLCInferenceGraph', 'MLCInstanceNormalizationLayer', 'MLCLSTMDescriptor', 'MLCLSTMLayer', 'MLCLayer', 'MLCLayerNormalizationLayer', 'MLCLossDescriptor', 'MLCLossLayer', 'MLCMatMulDescriptor', 'MLCMatMulLayer', 'MLCMultiheadAttentionDescriptor', 'MLCMultiheadAttentionLayer', 'MLCPaddingLayer', 'MLCPoolingDescriptor', 'MLCPoolingLayer', 'MLCReductionLayer', 'MLCReshapeLayer', 'MLCSliceLayer', 'MLCSoftmaxLayer', 'MLCSplitLayer', 'MLCTensor', 'MLCTensorData', 'MLCTensorDescriptor', 'MLCTensorOptimizerDeviceData', 'MLCTensorParameter', 'MLCTrainingGraph', 'MLCTransposeLayer', 'MLCUpsampleLayer', 'MLCYOLOLossDescriptor', 'MLCYOLOLossLayer', 'MLDictionaryConstraint', 'MLDictionaryFeatureProvider', 'MLFeatureDescription', 'MLFeatureValue', 'MLImageConstraint', 'MLImageSize', 'MLImageSizeConstraint', 'MLKey', 'MLMetricKey', 'MLModel', 'MLModelCollection', 'MLModelCollectionEntry', 'MLModelConfiguration', 'MLModelDescription', 'MLMultiArray', 'MLMultiArrayConstraint', 'MLMultiArrayShapeConstraint', 'MLNumericConstraint', 'MLParameterDescription', 'MLParameterKey', 'MLPredictionOptions', 'MLSequence', 'MLSequenceConstraint', 'MLTask', 'MLUpdateContext', 'MLUpdateProgressHandlers', 'MLUpdateTask', 
'MPChangeLanguageOptionCommandEvent', 'MPChangePlaybackPositionCommand', 'MPChangePlaybackPositionCommandEvent', 'MPChangePlaybackRateCommand', 'MPChangePlaybackRateCommandEvent', 'MPChangeRepeatModeCommand', 'MPChangeRepeatModeCommandEvent', 'MPChangeShuffleModeCommand', 'MPChangeShuffleModeCommandEvent', 'MPContentItem', 'MPFeedbackCommand', 'MPFeedbackCommandEvent', 'MPMediaEntity', 'MPMediaItem', 'MPMediaItemArtwork', 'MPMediaItemCollection', 'MPMediaLibrary', 'MPMediaPickerController', 'MPMediaPlaylist', 'MPMediaPlaylistCreationMetadata', 'MPMediaPredicate', 'MPMediaPropertyPredicate', 'MPMediaQuery', 'MPMediaQuerySection', 'MPMovieAccessLog', 'MPMovieAccessLogEvent', 'MPMovieErrorLog', 'MPMovieErrorLogEvent', 'MPMoviePlayerController', 'MPMoviePlayerViewController', 'MPMusicPlayerApplicationController', 'MPMusicPlayerController', 'MPMusicPlayerControllerMutableQueue', 'MPMusicPlayerControllerQueue', 'MPMusicPlayerMediaItemQueueDescriptor', 'MPMusicPlayerPlayParameters', 'MPMusicPlayerPlayParametersQueueDescriptor', 'MPMusicPlayerQueueDescriptor', 'MPMusicPlayerStoreQueueDescriptor', 'MPNowPlayingInfoCenter', 'MPNowPlayingInfoLanguageOption', 'MPNowPlayingInfoLanguageOptionGroup', 'MPNowPlayingSession', 'MPPlayableContentManager', 'MPPlayableContentManagerContext', 'MPRatingCommand', 'MPRatingCommandEvent', 'MPRemoteCommand', 'MPRemoteCommandCenter', 'MPRemoteCommandEvent', 'MPSGraph', 'MPSGraphConvolution2DOpDescriptor', 'MPSGraphDepthwiseConvolution2DOpDescriptor', 'MPSGraphDevice', 'MPSGraphExecutionDescriptor', 'MPSGraphOperation', 'MPSGraphPooling2DOpDescriptor', 'MPSGraphShapedType', 'MPSGraphTensor', 'MPSGraphTensorData', 'MPSGraphVariableOp', 'MPSeekCommandEvent', 'MPSkipIntervalCommand', 'MPSkipIntervalCommandEvent', 'MPTimedMetadata', 'MPVolumeView', 'MSConversation', 'MSMessage', 'MSMessageLayout', 'MSMessageLiveLayout', 'MSMessageTemplateLayout', 'MSMessagesAppViewController', 'MSServiceAccount', 'MSSession', 'MSSetupSession', 'MSSticker', 'MSStickerBrowserView', 'MSStickerBrowserViewController', 'MSStickerView', 'MTKMesh', 'MTKMeshBuffer', 'MTKMeshBufferAllocator', 'MTKSubmesh', 'MTKTextureLoader', 'MTKView', 'MTLAccelerationStructureBoundingBoxGeometryDescriptor', 'MTLAccelerationStructureDescriptor', 'MTLAccelerationStructureGeometryDescriptor', 'MTLAccelerationStructureTriangleGeometryDescriptor', 'MTLArgument', 'MTLArgumentDescriptor', 'MTLArrayType', 'MTLAttribute', 'MTLAttributeDescriptor', 'MTLAttributeDescriptorArray', 'MTLBinaryArchiveDescriptor', 'MTLBlitPassDescriptor', 'MTLBlitPassSampleBufferAttachmentDescriptor', 'MTLBlitPassSampleBufferAttachmentDescriptorArray', 'MTLBufferLayoutDescriptor', 'MTLBufferLayoutDescriptorArray', 'MTLCaptureDescriptor', 'MTLCaptureManager', 'MTLCommandBufferDescriptor', 'MTLCompileOptions', 'MTLComputePassDescriptor', 'MTLComputePassSampleBufferAttachmentDescriptor', 'MTLComputePassSampleBufferAttachmentDescriptorArray', 'MTLComputePipelineDescriptor', 'MTLComputePipelineReflection', 'MTLCounterSampleBufferDescriptor', 'MTLDepthStencilDescriptor', 'MTLFunctionConstant', 'MTLFunctionConstantValues', 'MTLFunctionDescriptor', 'MTLHeapDescriptor', 'MTLIndirectCommandBufferDescriptor', 'MTLInstanceAccelerationStructureDescriptor', 'MTLIntersectionFunctionDescriptor', 'MTLIntersectionFunctionTableDescriptor', 'MTLLinkedFunctions', 'MTLPipelineBufferDescriptor', 'MTLPipelineBufferDescriptorArray', 'MTLPointerType', 'MTLPrimitiveAccelerationStructureDescriptor', 'MTLRasterizationRateLayerArray', 'MTLRasterizationRateLayerDescriptor', 
'MTLRasterizationRateMapDescriptor', 'MTLRasterizationRateSampleArray', 'MTLRenderPassAttachmentDescriptor', 'MTLRenderPassColorAttachmentDescriptor', 'MTLRenderPassColorAttachmentDescriptorArray', 'MTLRenderPassDepthAttachmentDescriptor', 'MTLRenderPassDescriptor', 'MTLRenderPassSampleBufferAttachmentDescriptor', 'MTLRenderPassSampleBufferAttachmentDescriptorArray', 'MTLRenderPassStencilAttachmentDescriptor', 'MTLRenderPipelineColorAttachmentDescriptor', 'MTLRenderPipelineColorAttachmentDescriptorArray', 'MTLRenderPipelineDescriptor', 'MTLRenderPipelineReflection', 'MTLResourceStatePassDescriptor', 'MTLResourceStatePassSampleBufferAttachmentDescriptor', 'MTLResourceStatePassSampleBufferAttachmentDescriptorArray', 'MTLSamplerDescriptor', 'MTLSharedEventHandle', 'MTLSharedEventListener', 'MTLSharedTextureHandle', 'MTLStageInputOutputDescriptor', 'MTLStencilDescriptor', 'MTLStructMember', 'MTLStructType', 'MTLTextureDescriptor', 'MTLTextureReferenceType', 'MTLTileRenderPipelineColorAttachmentDescriptor', 'MTLTileRenderPipelineColorAttachmentDescriptorArray', 'MTLTileRenderPipelineDescriptor', 'MTLType', 'MTLVertexAttribute', 'MTLVertexAttributeDescriptor', 'MTLVertexAttributeDescriptorArray', 'MTLVertexBufferLayoutDescriptor', 'MTLVertexBufferLayoutDescriptorArray', 'MTLVertexDescriptor', 'MTLVisibleFunctionTableDescriptor', 'MXAnimationMetric', 'MXAppExitMetric', 'MXAppLaunchMetric', 'MXAppResponsivenessMetric', 'MXAppRunTimeMetric', 'MXAverage', 'MXBackgroundExitData', 'MXCPUExceptionDiagnostic', 'MXCPUMetric', 'MXCallStackTree', 'MXCellularConditionMetric', 'MXCrashDiagnostic', 'MXDiagnostic', 'MXDiagnosticPayload', 'MXDiskIOMetric', 'MXDiskWriteExceptionDiagnostic', 'MXDisplayMetric', 'MXForegroundExitData', 'MXGPUMetric', 'MXHangDiagnostic', 'MXHistogram', 'MXHistogramBucket', 'MXLocationActivityMetric', 'MXMemoryMetric', 'MXMetaData', 'MXMetric', 'MXMetricManager', 'MXMetricPayload', 'MXNetworkTransferMetric', 'MXSignpostIntervalData', 'MXSignpostMetric', 'MXUnitAveragePixelLuminance', 'MXUnitSignalBars', 'MyClass', 'NCWidgetController', 'NEAppProxyFlow', 'NEAppProxyProvider', 'NEAppProxyProviderManager', 'NEAppProxyTCPFlow', 'NEAppProxyUDPFlow', 'NEAppPushManager', 'NEAppPushProvider', 'NEAppRule', 'NEDNSOverHTTPSSettings', 'NEDNSOverTLSSettings', 'NEDNSProxyManager', 'NEDNSProxyProvider', 'NEDNSProxyProviderProtocol', 'NEDNSSettings', 'NEDNSSettingsManager', 'NEEvaluateConnectionRule', 'NEFilterBrowserFlow', 'NEFilterControlProvider', 'NEFilterControlVerdict', 'NEFilterDataProvider', 'NEFilterDataVerdict', 'NEFilterFlow', 'NEFilterManager', 'NEFilterNewFlowVerdict', 'NEFilterPacketContext', 'NEFilterPacketProvider', 'NEFilterProvider', 'NEFilterProviderConfiguration', 'NEFilterRemediationVerdict', 'NEFilterReport', 'NEFilterRule', 'NEFilterSettings', 'NEFilterSocketFlow', 'NEFilterVerdict', 'NEFlowMetaData', 'NEHotspotConfiguration', 'NEHotspotConfigurationManager', 'NEHotspotEAPSettings', 'NEHotspotHS20Settings', 'NEHotspotHelper', 'NEHotspotHelperCommand', 'NEHotspotHelperResponse', 'NEHotspotNetwork', 'NEIPv4Route', 'NEIPv4Settings', 'NEIPv6Route', 'NEIPv6Settings', 'NENetworkRule', 'NEOnDemandRule', 'NEOnDemandRuleConnect', 'NEOnDemandRuleDisconnect', 'NEOnDemandRuleEvaluateConnection', 'NEOnDemandRuleIgnore', 'NEPacket', 'NEPacketTunnelFlow', 'NEPacketTunnelNetworkSettings', 'NEPacketTunnelProvider', 'NEProvider', 'NEProxyServer', 'NEProxySettings', 'NETransparentProxyManager', 'NETransparentProxyNetworkSettings', 'NETransparentProxyProvider', 'NETunnelNetworkSettings', 
'NETunnelProvider', 'NETunnelProviderManager', 'NETunnelProviderProtocol', 'NETunnelProviderSession', 'NEVPNConnection', 'NEVPNIKEv2SecurityAssociationParameters', 'NEVPNManager', 'NEVPNProtocol', 'NEVPNProtocolIKEv2', 'NEVPNProtocolIPSec', 'NFCISO15693CustomCommandConfiguration', 'NFCISO15693ReadMultipleBlocksConfiguration', 'NFCISO15693ReaderSession', 'NFCISO7816APDU', 'NFCNDEFMessage', 'NFCNDEFPayload', 'NFCNDEFReaderSession', 'NFCReaderSession', 'NFCTagCommandConfiguration', 'NFCTagReaderSession', 'NFCVASCommandConfiguration', 'NFCVASReaderSession', 'NFCVASResponse', 'NIConfiguration', 'NIDiscoveryToken', 'NINearbyObject', 'NINearbyPeerConfiguration', 'NISession', 'NKAssetDownload', 'NKIssue', 'NKLibrary', 'NLEmbedding', 'NLGazetteer', 'NLLanguageRecognizer', 'NLModel', 'NLModelConfiguration', 'NLTagger', 'NLTokenizer', 'NSArray', 'NSAssertionHandler', 'NSAsynchronousFetchRequest', 'NSAsynchronousFetchResult', 'NSAtomicStore', 'NSAtomicStoreCacheNode', 'NSAttributeDescription', 'NSAttributedString', 'NSAutoreleasePool', 'NSBatchDeleteRequest', 'NSBatchDeleteResult', 'NSBatchInsertRequest', 'NSBatchInsertResult', 'NSBatchUpdateRequest', 'NSBatchUpdateResult', 'NSBlockOperation', 'NSBundle', 'NSBundleResourceRequest', 'NSByteCountFormatter', 'NSCache', 'NSCachedURLResponse', 'NSCalendar', 'NSCharacterSet', 'NSCoder', 'NSCollectionLayoutAnchor', 'NSCollectionLayoutBoundarySupplementaryItem', 'NSCollectionLayoutDecorationItem', 'NSCollectionLayoutDimension', 'NSCollectionLayoutEdgeSpacing', 'NSCollectionLayoutGroup', 'NSCollectionLayoutGroupCustomItem', 'NSCollectionLayoutItem', 'NSCollectionLayoutSection', 'NSCollectionLayoutSize', 'NSCollectionLayoutSpacing', 'NSCollectionLayoutSupplementaryItem', 'NSComparisonPredicate', 'NSCompoundPredicate', 'NSCondition', 'NSConditionLock', 'NSConstantString', 'NSConstraintConflict', 'NSCoreDataCoreSpotlightDelegate', 'NSCountedSet', 'NSData', 'NSDataAsset', 'NSDataDetector', 'NSDate', 'NSDateComponents', 'NSDateComponentsFormatter', 'NSDateFormatter', 'NSDateInterval', 'NSDateIntervalFormatter', 'NSDecimalNumber', 'NSDecimalNumberHandler', 'NSDerivedAttributeDescription', 'NSDictionary', 'NSDiffableDataSourceSectionSnapshot', 'NSDiffableDataSourceSectionTransaction', 'NSDiffableDataSourceSnapshot', 'NSDiffableDataSourceTransaction', 'NSDimension', 'NSDirectoryEnumerator', 'NSEnergyFormatter', 'NSEntityDescription', 'NSEntityMapping', 'NSEntityMigrationPolicy', 'NSEnumerator', 'NSError', 'NSEvent', 'NSException', 'NSExpression', 'NSExpressionDescription', 'NSExtensionContext', 'NSExtensionItem', 'NSFetchIndexDescription', 'NSFetchIndexElementDescription', 'NSFetchRequest', 'NSFetchRequestExpression', 'NSFetchedPropertyDescription', 'NSFetchedResultsController', 'NSFileAccessIntent', 'NSFileCoordinator', 'NSFileHandle', 'NSFileManager', 'NSFileProviderDomain', 'NSFileProviderExtension', 'NSFileProviderManager', 'NSFileProviderService', 'NSFileSecurity', 'NSFileVersion', 'NSFileWrapper', 'NSFormatter', 'NSHTTPCookie', 'NSHTTPCookieStorage', 'NSHTTPURLResponse', 'NSHashTable', 'NSISO8601DateFormatter', 'NSIncrementalStore', 'NSIncrementalStoreNode', 'NSIndexPath', 'NSIndexSet', 'NSInputStream', 'NSInvocation', 'NSInvocationOperation', 'NSItemProvider', 'NSJSONSerialization', 'NSKeyedArchiver', 'NSKeyedUnarchiver', 'NSLayoutAnchor', 'NSLayoutConstraint', 'NSLayoutDimension', 'NSLayoutManager', 'NSLayoutXAxisAnchor', 'NSLayoutYAxisAnchor', 'NSLengthFormatter', 'NSLinguisticTagger', 'NSListFormatter', 'NSLocale', 'NSLock', 'NSMachPort', 'NSManagedObject', 
'NSManagedObjectContext', 'NSManagedObjectID', 'NSManagedObjectModel', 'NSMapTable', 'NSMappingModel', 'NSMassFormatter', 'NSMeasurement', 'NSMeasurementFormatter', 'NSMenuToolbarItem', 'NSMergeConflict', 'NSMergePolicy', 'NSMessagePort', 'NSMetadataItem', 'NSMetadataQuery', 'NSMetadataQueryAttributeValueTuple', 'NSMetadataQueryResultGroup', 'NSMethodSignature', 'NSMigrationManager', 'NSMutableArray', 'NSMutableAttributedString', 'NSMutableCharacterSet', 'NSMutableData', 'NSMutableDictionary', 'NSMutableIndexSet', 'NSMutableOrderedSet', 'NSMutableParagraphStyle', 'NSMutableSet', 'NSMutableString', 'NSMutableURLRequest', 'NSNetService', 'NSNetServiceBrowser', 'NSNotification', 'NSNotificationCenter', 'NSNotificationQueue', 'NSNull', 'NSNumber', 'NSNumberFormatter', 'NSObject', 'NSOperation', 'NSOperationQueue', 'NSOrderedCollectionChange', 'NSOrderedCollectionDifference', 'NSOrderedSet', 'NSOrthography', 'NSOutputStream', 'NSParagraphStyle', 'NSPersistentCloudKitContainer', 'NSPersistentCloudKitContainerEvent', 'NSPersistentCloudKitContainerEventRequest', 'NSPersistentCloudKitContainerEventResult', 'NSPersistentCloudKitContainerOptions', 'NSPersistentContainer', 'NSPersistentHistoryChange', 'NSPersistentHistoryChangeRequest', 'NSPersistentHistoryResult', 'NSPersistentHistoryToken', 'NSPersistentHistoryTransaction', 'NSPersistentStore', 'NSPersistentStoreAsynchronousResult', 'NSPersistentStoreCoordinator', 'NSPersistentStoreDescription', 'NSPersistentStoreRequest', 'NSPersistentStoreResult', 'NSPersonNameComponents', 'NSPersonNameComponentsFormatter', 'NSPipe', 'NSPointerArray', 'NSPointerFunctions', 'NSPort', 'NSPredicate', 'NSProcessInfo', 'NSProgress', 'NSPropertyDescription', 'NSPropertyListSerialization', 'NSPropertyMapping', 'NSProxy', 'NSPurgeableData', 'NSQueryGenerationToken', 'NSRecursiveLock', 'NSRegularExpression', 'NSRelationshipDescription', 'NSRelativeDateTimeFormatter', 'NSRunLoop', 'NSSaveChangesRequest', 'NSScanner', 'NSSecureUnarchiveFromDataTransformer', 'NSSet', 'NSShadow', 'NSSharingServicePickerToolbarItem', 'NSSharingServicePickerTouchBarItem', 'NSSimpleCString', 'NSSocketPort', 'NSSortDescriptor', 'NSStream', 'NSString', 'NSStringDrawingContext', 'NSTextAttachment', 'NSTextCheckingResult', 'NSTextContainer', 'NSTextStorage', 'NSTextTab', 'NSThread', 'NSTimeZone', 'NSTimer', 'NSToolbarItem', 'NSURL', 'NSURLAuthenticationChallenge', 'NSURLCache', 'NSURLComponents', 'NSURLConnection', 'NSURLCredential', 'NSURLCredentialStorage', 'NSURLProtectionSpace', 'NSURLProtocol', 'NSURLQueryItem', 'NSURLRequest', 'NSURLResponse', 'NSURLSession', 'NSURLSessionConfiguration', 'NSURLSessionDataTask', 'NSURLSessionDownloadTask', 'NSURLSessionStreamTask', 'NSURLSessionTask', 'NSURLSessionTaskMetrics', 'NSURLSessionTaskTransactionMetrics', 'NSURLSessionUploadTask', 'NSURLSessionWebSocketMessage', 'NSURLSessionWebSocketTask', 'NSUUID', 'NSUbiquitousKeyValueStore', 'NSUndoManager', 'NSUnit', 'NSUnitAcceleration', 'NSUnitAngle', 'NSUnitArea', 'NSUnitConcentrationMass', 'NSUnitConverter', 'NSUnitConverterLinear', 'NSUnitDispersion', 'NSUnitDuration', 'NSUnitElectricCharge', 'NSUnitElectricCurrent', 'NSUnitElectricPotentialDifference', 'NSUnitElectricResistance', 'NSUnitEnergy', 'NSUnitFrequency', 'NSUnitFuelEfficiency', 'NSUnitIlluminance', 'NSUnitInformationStorage', 'NSUnitLength', 'NSUnitMass', 'NSUnitPower', 'NSUnitPressure', 'NSUnitSpeed', 'NSUnitTemperature', 'NSUnitVolume', 'NSUserActivity', 'NSUserDefaults', 'NSValue', 'NSValueTransformer', 'NSXMLParser', 'NSXPCCoder', 
'NSXPCConnection', 'NSXPCInterface', 'NSXPCListener', 'NSXPCListenerEndpoint', 'NWBonjourServiceEndpoint', 'NWEndpoint', 'NWHostEndpoint', 'NWPath', 'NWTCPConnection', 'NWTLSParameters', 'NWUDPSession', 'OSLogEntry', 'OSLogEntryActivity', 'OSLogEntryBoundary', 'OSLogEntryLog', 'OSLogEntrySignpost', 'OSLogEnumerator', 'OSLogMessageComponent', 'OSLogPosition', 'OSLogStore', 'PDFAction', 'PDFActionGoTo', 'PDFActionNamed', 'PDFActionRemoteGoTo', 'PDFActionResetForm', 'PDFActionURL', 'PDFAnnotation', 'PDFAppearanceCharacteristics', 'PDFBorder', 'PDFDestination', 'PDFDocument', 'PDFOutline', 'PDFPage', 'PDFSelection', 'PDFThumbnailView', 'PDFView', 'PHAdjustmentData', 'PHAsset', 'PHAssetChangeRequest', 'PHAssetCollection', 'PHAssetCollectionChangeRequest', 'PHAssetCreationRequest', 'PHAssetResource', 'PHAssetResourceCreationOptions', 'PHAssetResourceManager', 'PHAssetResourceRequestOptions', 'PHCachingImageManager', 'PHChange', 'PHChangeRequest', 'PHCloudIdentifier', 'PHCollection', 'PHCollectionList', 'PHCollectionListChangeRequest', 'PHContentEditingInput', 'PHContentEditingInputRequestOptions', 'PHContentEditingOutput', 'PHEditingExtensionContext', 'PHFetchOptions', 'PHFetchResult', 'PHFetchResultChangeDetails', 'PHImageManager', 'PHImageRequestOptions', 'PHLivePhoto', 'PHLivePhotoEditingContext', 'PHLivePhotoRequestOptions', 'PHLivePhotoView', 'PHObject', 'PHObjectChangeDetails', 'PHObjectPlaceholder', 'PHPhotoLibrary', 'PHPickerConfiguration', 'PHPickerFilter', 'PHPickerResult', 'PHPickerViewController', 'PHProject', 'PHProjectChangeRequest', 'PHVideoRequestOptions', 'PKAddCarKeyPassConfiguration', 'PKAddPassButton', 'PKAddPassesViewController', 'PKAddPaymentPassRequest', 'PKAddPaymentPassRequestConfiguration', 'PKAddPaymentPassViewController', 'PKAddSecureElementPassConfiguration', 'PKAddSecureElementPassViewController', 'PKAddShareablePassConfiguration', 'PKBarcodeEventConfigurationRequest', 'PKBarcodeEventMetadataRequest', 'PKBarcodeEventMetadataResponse', 'PKBarcodeEventSignatureRequest', 'PKBarcodeEventSignatureResponse', 'PKCanvasView', 'PKContact', 'PKDisbursementAuthorizationController', 'PKDisbursementRequest', 'PKDisbursementVoucher', 'PKDrawing', 'PKEraserTool', 'PKFloatRange', 'PKInk', 'PKInkingTool', 'PKIssuerProvisioningExtensionHandler', 'PKIssuerProvisioningExtensionPassEntry', 'PKIssuerProvisioningExtensionPaymentPassEntry', 'PKIssuerProvisioningExtensionStatus', 'PKLabeledValue', 'PKLassoTool', 'PKObject', 'PKPass', 'PKPassLibrary', 'PKPayment', 'PKPaymentAuthorizationController', 'PKPaymentAuthorizationResult', 'PKPaymentAuthorizationViewController', 'PKPaymentButton', 'PKPaymentInformationEventExtension', 'PKPaymentMerchantSession', 'PKPaymentMethod', 'PKPaymentPass', 'PKPaymentRequest', 'PKPaymentRequestMerchantSessionUpdate', 'PKPaymentRequestPaymentMethodUpdate', 'PKPaymentRequestShippingContactUpdate', 'PKPaymentRequestShippingMethodUpdate', 'PKPaymentRequestUpdate', 'PKPaymentSummaryItem', 'PKPaymentToken', 'PKPushCredentials', 'PKPushPayload', 'PKPushRegistry', 'PKSecureElementPass', 'PKShareablePassMetadata', 'PKShippingMethod', 'PKStroke', 'PKStrokePath', 'PKStrokePoint', 'PKSuicaPassProperties', 'PKTool', 'PKToolPicker', 'PKTransitPassProperties', 'QLFileThumbnailRequest', 'QLPreviewController', 'QLThumbnailGenerationRequest', 'QLThumbnailGenerator', 'QLThumbnailProvider', 'QLThumbnailReply', 'QLThumbnailRepresentation', 'RPBroadcastActivityController', 'RPBroadcastActivityViewController', 'RPBroadcastConfiguration', 'RPBroadcastController', 
'RPBroadcastHandler', 'RPBroadcastMP4ClipHandler', 'RPBroadcastSampleHandler', 'RPPreviewViewController', 'RPScreenRecorder', 'RPSystemBroadcastPickerView', 'SCNAccelerationConstraint', 'SCNAction', 'SCNAnimation', 'SCNAnimationEvent', 'SCNAnimationPlayer', 'SCNAudioPlayer', 'SCNAudioSource', 'SCNAvoidOccluderConstraint', 'SCNBillboardConstraint', 'SCNBox', 'SCNCamera', 'SCNCameraController', 'SCNCapsule', 'SCNCone', 'SCNConstraint', 'SCNCylinder', 'SCNDistanceConstraint', 'SCNFloor', 'SCNGeometry', 'SCNGeometryElement', 'SCNGeometrySource', 'SCNGeometryTessellator', 'SCNHitTestResult', 'SCNIKConstraint', 'SCNLevelOfDetail', 'SCNLight', 'SCNLookAtConstraint', 'SCNMaterial', 'SCNMaterialProperty', 'SCNMorpher', 'SCNNode', 'SCNParticlePropertyController', 'SCNParticleSystem', 'SCNPhysicsBallSocketJoint', 'SCNPhysicsBehavior', 'SCNPhysicsBody', 'SCNPhysicsConeTwistJoint', 'SCNPhysicsContact', 'SCNPhysicsField', 'SCNPhysicsHingeJoint', 'SCNPhysicsShape', 'SCNPhysicsSliderJoint', 'SCNPhysicsVehicle', 'SCNPhysicsVehicleWheel', 'SCNPhysicsWorld', 'SCNPlane', 'SCNProgram', 'SCNPyramid', 'SCNReferenceNode', 'SCNRenderer', 'SCNReplicatorConstraint', 'SCNScene', 'SCNSceneSource', 'SCNShape', 'SCNSkinner', 'SCNSliderConstraint', 'SCNSphere', 'SCNTechnique', 'SCNText', 'SCNTimingFunction', 'SCNTorus', 'SCNTransaction', 'SCNTransformConstraint', 'SCNTube', 'SCNView', 'SFAcousticFeature', 'SFAuthenticationSession', 'SFContentBlockerManager', 'SFContentBlockerState', 'SFSafariViewController', 'SFSafariViewControllerConfiguration', 'SFSpeechAudioBufferRecognitionRequest', 'SFSpeechRecognitionRequest', 'SFSpeechRecognitionResult', 'SFSpeechRecognitionTask', 'SFSpeechRecognizer', 'SFSpeechURLRecognitionRequest', 'SFTranscription', 'SFTranscriptionSegment', 'SFVoiceAnalytics', 'SK3DNode', 'SKAction', 'SKAdNetwork', 'SKArcadeService', 'SKAttribute', 'SKAttributeValue', 'SKAudioNode', 'SKCameraNode', 'SKCloudServiceController', 'SKCloudServiceSetupViewController', 'SKConstraint', 'SKCropNode', 'SKDownload', 'SKEffectNode', 'SKEmitterNode', 'SKFieldNode', 'SKKeyframeSequence', 'SKLabelNode', 'SKLightNode', 'SKMutablePayment', 'SKMutableTexture', 'SKNode', 'SKOverlay', 'SKOverlayAppClipConfiguration', 'SKOverlayAppConfiguration', 'SKOverlayConfiguration', 'SKOverlayTransitionContext', 'SKPayment', 'SKPaymentDiscount', 'SKPaymentQueue', 'SKPaymentTransaction', 'SKPhysicsBody', 'SKPhysicsContact', 'SKPhysicsJoint', 'SKPhysicsJointFixed', 'SKPhysicsJointLimit', 'SKPhysicsJointPin', 'SKPhysicsJointSliding', 'SKPhysicsJointSpring', 'SKPhysicsWorld', 'SKProduct', 'SKProductDiscount', 'SKProductStorePromotionController', 'SKProductSubscriptionPeriod', 'SKProductsRequest', 'SKProductsResponse', 'SKRange', 'SKReachConstraints', 'SKReceiptRefreshRequest', 'SKReferenceNode', 'SKRegion', 'SKRenderer', 'SKRequest', 'SKScene', 'SKShader', 'SKShapeNode', 'SKSpriteNode', 'SKStoreProductViewController', 'SKStoreReviewController', 'SKStorefront', 'SKTexture', 'SKTextureAtlas', 'SKTileDefinition', 'SKTileGroup', 'SKTileGroupRule', 'SKTileMapNode', 'SKTileSet', 'SKTransformNode', 'SKTransition', 'SKUniform', 'SKVideoNode', 'SKView', 'SKWarpGeometry', 'SKWarpGeometryGrid', 'SLComposeServiceViewController', 'SLComposeSheetConfigurationItem', 'SLComposeViewController', 'SLRequest', 'SNAudioFileAnalyzer', 'SNAudioStreamAnalyzer', 'SNClassification', 'SNClassificationResult', 'SNClassifySoundRequest', 'SRAmbientLightSample', 'SRApplicationUsage', 'SRDeletionRecord', 'SRDevice', 'SRDeviceUsageReport', 'SRFetchRequest', 'SRFetchResult', 
'SRKeyboardMetrics', 'SRKeyboardProbabilityMetric', 'SRMessagesUsageReport', 'SRNotificationUsage', 'SRPhoneUsageReport', 'SRSensorReader', 'SRVisit', 'SRWebUsage', 'SRWristDetection', 'SSReadingList', 'STScreenTimeConfiguration', 'STScreenTimeConfigurationObserver', 'STWebHistory', 'STWebpageController', 'TKBERTLVRecord', 'TKCompactTLVRecord', 'TKSimpleTLVRecord', 'TKSmartCard', 'TKSmartCardATR', 'TKSmartCardATRInterfaceGroup', 'TKSmartCardPINFormat', 'TKSmartCardSlot', 'TKSmartCardSlotManager', 'TKSmartCardToken', 'TKSmartCardTokenDriver', 'TKSmartCardTokenSession', 'TKSmartCardUserInteraction', 'TKSmartCardUserInteractionForPINOperation', 'TKSmartCardUserInteractionForSecurePINChange', 'TKSmartCardUserInteractionForSecurePINVerification', 'TKTLVRecord', 'TKToken', 'TKTokenAuthOperation', 'TKTokenConfiguration', 'TKTokenDriver', 'TKTokenDriverConfiguration', 'TKTokenKeyAlgorithm', 'TKTokenKeyExchangeParameters', 'TKTokenKeychainCertificate', 'TKTokenKeychainContents', 'TKTokenKeychainItem', 'TKTokenKeychainKey', 'TKTokenPasswordAuthOperation', 'TKTokenSession', 'TKTokenSmartCardPINAuthOperation', 'TKTokenWatcher', 'TWRequest', 'TWTweetComposeViewController', 'UIAcceleration', 'UIAccelerometer', 'UIAccessibilityCustomAction', 'UIAccessibilityCustomRotor', 'UIAccessibilityCustomRotorItemResult', 'UIAccessibilityCustomRotorSearchPredicate', 'UIAccessibilityElement', 'UIAccessibilityLocationDescriptor', 'UIAction', 'UIActionSheet', 'UIActivity', 'UIActivityIndicatorView', 'UIActivityItemProvider', 'UIActivityItemsConfiguration', 'UIActivityViewController', 'UIAlertAction', 'UIAlertController', 'UIAlertView', 'UIApplication', 'UIApplicationShortcutIcon', 'UIApplicationShortcutItem', 'UIAttachmentBehavior', 'UIBackgroundConfiguration', 'UIBarAppearance', 'UIBarButtonItem', 'UIBarButtonItemAppearance', 'UIBarButtonItemGroup', 'UIBarButtonItemStateAppearance', 'UIBarItem', 'UIBezierPath', 'UIBlurEffect', 'UIButton', 'UICellAccessory', 'UICellAccessoryCheckmark', 'UICellAccessoryCustomView', 'UICellAccessoryDelete', 'UICellAccessoryDisclosureIndicator', 'UICellAccessoryInsert', 'UICellAccessoryLabel', 'UICellAccessoryMultiselect', 'UICellAccessoryOutlineDisclosure', 'UICellAccessoryReorder', 'UICellConfigurationState', 'UICloudSharingController', 'UICollectionLayoutListConfiguration', 'UICollectionReusableView', 'UICollectionView', 'UICollectionViewCell', 'UICollectionViewCellRegistration', 'UICollectionViewCompositionalLayout', 'UICollectionViewCompositionalLayoutConfiguration', 'UICollectionViewController', 'UICollectionViewDiffableDataSource', 'UICollectionViewDiffableDataSourceReorderingHandlers', 'UICollectionViewDiffableDataSourceSectionSnapshotHandlers', 'UICollectionViewDropPlaceholder', 'UICollectionViewDropProposal', 'UICollectionViewFlowLayout', 'UICollectionViewFlowLayoutInvalidationContext', 'UICollectionViewFocusUpdateContext', 'UICollectionViewLayout', 'UICollectionViewLayoutAttributes', 'UICollectionViewLayoutInvalidationContext', 'UICollectionViewListCell', 'UICollectionViewPlaceholder', 'UICollectionViewSupplementaryRegistration', 'UICollectionViewTransitionLayout', 'UICollectionViewUpdateItem', 'UICollisionBehavior', 'UIColor', 'UIColorPickerViewController', 'UIColorWell', 'UICommand', 'UICommandAlternate', 'UIContextMenuConfiguration', 'UIContextMenuInteraction', 'UIContextualAction', 'UIControl', 'UICubicTimingParameters', 'UIDatePicker', 'UIDeferredMenuElement', 'UIDevice', 'UIDictationPhrase', 'UIDocument', 'UIDocumentBrowserAction', 'UIDocumentBrowserTransitionController', 
'UIDocumentBrowserViewController', 'UIDocumentInteractionController', 'UIDocumentMenuViewController', 'UIDocumentPickerExtensionViewController', 'UIDocumentPickerViewController', 'UIDragInteraction', 'UIDragItem', 'UIDragPreview', 'UIDragPreviewParameters', 'UIDragPreviewTarget', 'UIDropInteraction', 'UIDropProposal', 'UIDynamicAnimator', 'UIDynamicBehavior', 'UIDynamicItemBehavior', 'UIDynamicItemGroup', 'UIEvent', 'UIFeedbackGenerator', 'UIFieldBehavior', 'UIFocusAnimationCoordinator', 'UIFocusDebugger', 'UIFocusGuide', 'UIFocusMovementHint', 'UIFocusSystem', 'UIFocusUpdateContext', 'UIFont', 'UIFontDescriptor', 'UIFontMetrics', 'UIFontPickerViewController', 'UIFontPickerViewControllerConfiguration', 'UIGestureRecognizer', 'UIGraphicsImageRenderer', 'UIGraphicsImageRendererContext', 'UIGraphicsImageRendererFormat', 'UIGraphicsPDFRenderer', 'UIGraphicsPDFRendererContext', 'UIGraphicsPDFRendererFormat', 'UIGraphicsRenderer', 'UIGraphicsRendererContext', 'UIGraphicsRendererFormat', 'UIGravityBehavior', 'UIHoverGestureRecognizer', 'UIImage', 'UIImageAsset', 'UIImageConfiguration', 'UIImagePickerController', 'UIImageSymbolConfiguration', 'UIImageView', 'UIImpactFeedbackGenerator', 'UIIndirectScribbleInteraction', 'UIInputView', 'UIInputViewController', 'UIInterpolatingMotionEffect', 'UIKey', 'UIKeyCommand', 'UILabel', 'UILargeContentViewerInteraction', 'UILayoutGuide', 'UILexicon', 'UILexiconEntry', 'UIListContentConfiguration', 'UIListContentImageProperties', 'UIListContentTextProperties', 'UIListContentView', 'UILocalNotification', 'UILocalizedIndexedCollation', 'UILongPressGestureRecognizer', 'UIManagedDocument', 'UIMarkupTextPrintFormatter', 'UIMenu', 'UIMenuController', 'UIMenuElement', 'UIMenuItem', 'UIMenuSystem', 'UIMotionEffect', 'UIMotionEffectGroup', 'UIMutableApplicationShortcutItem', 'UIMutableUserNotificationAction', 'UIMutableUserNotificationCategory', 'UINavigationBar', 'UINavigationBarAppearance', 'UINavigationController', 'UINavigationItem', 'UINib', 'UINotificationFeedbackGenerator', 'UIOpenURLContext', 'UIPageControl', 'UIPageViewController', 'UIPanGestureRecognizer', 'UIPasteConfiguration', 'UIPasteboard', 'UIPencilInteraction', 'UIPercentDrivenInteractiveTransition', 'UIPickerView', 'UIPinchGestureRecognizer', 'UIPointerEffect', 'UIPointerHighlightEffect', 'UIPointerHoverEffect', 'UIPointerInteraction', 'UIPointerLiftEffect', 'UIPointerLockState', 'UIPointerRegion', 'UIPointerRegionRequest', 'UIPointerShape', 'UIPointerStyle', 'UIPopoverBackgroundView', 'UIPopoverController', 'UIPopoverPresentationController', 'UIPresentationController', 'UIPress', 'UIPressesEvent', 'UIPreviewAction', 'UIPreviewActionGroup', 'UIPreviewInteraction', 'UIPreviewParameters', 'UIPreviewTarget', 'UIPrintFormatter', 'UIPrintInfo', 'UIPrintInteractionController', 'UIPrintPageRenderer', 'UIPrintPaper', 'UIPrinter', 'UIPrinterPickerController', 'UIProgressView', 'UIPushBehavior', 'UIReferenceLibraryViewController', 'UIRefreshControl', 'UIRegion', 'UIResponder', 'UIRotationGestureRecognizer', 'UIScene', 'UISceneActivationConditions', 'UISceneActivationRequestOptions', 'UISceneConfiguration', 'UISceneConnectionOptions', 'UISceneDestructionRequestOptions', 'UISceneOpenExternalURLOptions', 'UISceneOpenURLOptions', 'UISceneSession', 'UISceneSizeRestrictions', 'UIScreen', 'UIScreenEdgePanGestureRecognizer', 'UIScreenMode', 'UIScreenshotService', 'UIScribbleInteraction', 'UIScrollView', 'UISearchBar', 'UISearchContainerViewController', 'UISearchController', 'UISearchDisplayController', 
'UISearchSuggestionItem', 'UISearchTextField', 'UISearchToken', 'UISegmentedControl', 'UISelectionFeedbackGenerator', 'UISimpleTextPrintFormatter', 'UISlider', 'UISnapBehavior', 'UISplitViewController', 'UISpringLoadedInteraction', 'UISpringTimingParameters', 'UIStackView', 'UIStatusBarManager', 'UIStepper', 'UIStoryboard', 'UIStoryboardPopoverSegue', 'UIStoryboardSegue', 'UIStoryboardUnwindSegueSource', 'UISwipeActionsConfiguration', 'UISwipeGestureRecognizer', 'UISwitch', 'UITabBar', 'UITabBarAppearance', 'UITabBarController', 'UITabBarItem', 'UITabBarItemAppearance', 'UITabBarItemStateAppearance', 'UITableView', 'UITableViewCell', 'UITableViewController', 'UITableViewDiffableDataSource', 'UITableViewDropPlaceholder', 'UITableViewDropProposal', 'UITableViewFocusUpdateContext', 'UITableViewHeaderFooterView', 'UITableViewPlaceholder', 'UITableViewRowAction', 'UITapGestureRecognizer', 'UITargetedDragPreview', 'UITargetedPreview', 'UITextChecker', 'UITextDragPreviewRenderer', 'UITextDropProposal', 'UITextField', 'UITextFormattingCoordinator', 'UITextInputAssistantItem', 'UITextInputMode', 'UITextInputPasswordRules', 'UITextInputStringTokenizer', 'UITextInteraction', 'UITextPlaceholder', 'UITextPosition', 'UITextRange', 'UITextSelectionRect', 'UITextView', 'UITitlebar', 'UIToolbar', 'UIToolbarAppearance', 'UITouch', 'UITraitCollection', 'UIUserNotificationAction', 'UIUserNotificationCategory', 'UIUserNotificationSettings', 'UIVibrancyEffect', 'UIVideoEditorController', 'UIView', 'UIViewConfigurationState', 'UIViewController', 'UIViewPrintFormatter', 'UIViewPropertyAnimator', 'UIVisualEffect', 'UIVisualEffectView', 'UIWebView', 'UIWindow', 'UIWindowScene', 'UIWindowSceneDestructionRequestOptions', 'UNCalendarNotificationTrigger', 'UNLocationNotificationTrigger', 'UNMutableNotificationContent', 'UNNotification', 'UNNotificationAction', 'UNNotificationAttachment', 'UNNotificationCategory', 'UNNotificationContent', 'UNNotificationRequest', 'UNNotificationResponse', 'UNNotificationServiceExtension', 'UNNotificationSettings', 'UNNotificationSound', 'UNNotificationTrigger', 'UNPushNotificationTrigger', 'UNTextInputNotificationAction', 'UNTextInputNotificationResponse', 'UNTimeIntervalNotificationTrigger', 'UNUserNotificationCenter', 'UTType', 'VNBarcodeObservation', 'VNCircle', 'VNClassificationObservation', 'VNClassifyImageRequest', 'VNContour', 'VNContoursObservation', 'VNCoreMLFeatureValueObservation', 'VNCoreMLModel', 'VNCoreMLRequest', 'VNDetectBarcodesRequest', 'VNDetectContoursRequest', 'VNDetectFaceCaptureQualityRequest', 'VNDetectFaceLandmarksRequest', 'VNDetectFaceRectanglesRequest', 'VNDetectHorizonRequest', 'VNDetectHumanBodyPoseRequest', 'VNDetectHumanHandPoseRequest', 'VNDetectHumanRectanglesRequest', 'VNDetectRectanglesRequest', 'VNDetectTextRectanglesRequest', 'VNDetectTrajectoriesRequest', 'VNDetectedObjectObservation', 'VNDetectedPoint', 'VNDocumentCameraScan', 'VNDocumentCameraViewController', 'VNFaceLandmarkRegion', 'VNFaceLandmarkRegion2D', 'VNFaceLandmarks', 'VNFaceLandmarks2D', 'VNFaceObservation', 'VNFeaturePrintObservation', 'VNGenerateAttentionBasedSaliencyImageRequest', 'VNGenerateImageFeaturePrintRequest', 'VNGenerateObjectnessBasedSaliencyImageRequest', 'VNGenerateOpticalFlowRequest', 'VNGeometryUtils', 'VNHomographicImageRegistrationRequest', 'VNHorizonObservation', 'VNHumanBodyPoseObservation', 'VNHumanHandPoseObservation', 'VNImageAlignmentObservation', 'VNImageBasedRequest', 'VNImageHomographicAlignmentObservation', 'VNImageRegistrationRequest', 
'VNImageRequestHandler', 'VNImageTranslationAlignmentObservation', 'VNObservation', 'VNPixelBufferObservation', 'VNPoint', 'VNRecognizeAnimalsRequest', 'VNRecognizeTextRequest', 'VNRecognizedObjectObservation', 'VNRecognizedPoint', 'VNRecognizedPointsObservation', 'VNRecognizedText', 'VNRecognizedTextObservation', 'VNRectangleObservation', 'VNRequest', 'VNSaliencyImageObservation', 'VNSequenceRequestHandler', 'VNStatefulRequest', 'VNTargetedImageRequest', 'VNTextObservation', 'VNTrackObjectRequest', 'VNTrackRectangleRequest', 'VNTrackingRequest', 'VNTrajectoryObservation', 'VNTranslationalImageRegistrationRequest', 'VNVector', 'VNVideoProcessor', 'VNVideoProcessorCadence', 'VNVideoProcessorFrameRateCadence', 'VNVideoProcessorRequestProcessingOptions', 'VNVideoProcessorTimeIntervalCadence', 'VSAccountApplicationProvider', 'VSAccountManager', 'VSAccountManagerResult', 'VSAccountMetadata', 'VSAccountMetadataRequest', 'VSAccountProviderResponse', 'VSSubscription', 'VSSubscriptionRegistrationCenter', 'WCSession', 'WCSessionFile', 'WCSessionFileTransfer', 'WCSessionUserInfoTransfer', 'WKBackForwardList', 'WKBackForwardListItem', 'WKContentRuleList', 'WKContentRuleListStore', 'WKContentWorld', 'WKContextMenuElementInfo', 'WKFindConfiguration', 'WKFindResult', 'WKFrameInfo', 'WKHTTPCookieStore', 'WKNavigation', 'WKNavigationAction', 'WKNavigationResponse', 'WKOpenPanelParameters', 'WKPDFConfiguration', 'WKPreferences', 'WKPreviewElementInfo', 'WKProcessPool', 'WKScriptMessage', 'WKSecurityOrigin', 'WKSnapshotConfiguration', 'WKUserContentController', 'WKUserScript', 'WKWebView', 'WKWebViewConfiguration', 'WKWebpagePreferences', 'WKWebsiteDataRecord', 'WKWebsiteDataStore', 'WKWindowFeatures', '__EntityAccessibilityWrapper'}

COCOA_PROTOCOLS = {'ABNewPersonViewControllerDelegate', 'ABPeoplePickerNavigationControllerDelegate', 'ABPersonViewControllerDelegate', 'ABUnknownPersonViewControllerDelegate', 'ADActionViewControllerChildInterface', 'ADActionViewControllerInterface', 'ADBannerViewDelegate', 'ADInterstitialAdDelegate', 'AEAssessmentSessionDelegate', 'ARAnchorCopying', 'ARCoachingOverlayViewDelegate', 'ARSCNViewDelegate', 'ARSKViewDelegate', 'ARSessionDelegate', 'ARSessionObserver', 'ARSessionProviding', 'ARTrackable', 'ASAccountAuthenticationModificationControllerDelegate', 'ASAccountAuthenticationModificationControllerPresentationContextProviding', 'ASAuthorizationControllerDelegate', 'ASAuthorizationControllerPresentationContextProviding', 'ASAuthorizationCredential', 'ASAuthorizationProvider', 'ASAuthorizationProviderExtensionAuthorizationRequestHandler', 'ASWebAuthenticationPresentationContextProviding', 'ASWebAuthenticationSessionRequestDelegate', 'ASWebAuthenticationSessionWebBrowserSessionHandling', 'AUAudioUnitFactory', 'AVAssetDownloadDelegate', 'AVAssetResourceLoaderDelegate', 'AVAssetWriterDelegate', 'AVAsynchronousKeyValueLoading', 'AVCaptureAudioDataOutputSampleBufferDelegate', 'AVCaptureDataOutputSynchronizerDelegate', 'AVCaptureDepthDataOutputDelegate', 'AVCaptureFileOutputDelegate', 'AVCaptureFileOutputRecordingDelegate', 'AVCaptureMetadataOutputObjectsDelegate', 'AVCapturePhotoCaptureDelegate', 'AVCapturePhotoFileDataRepresentationCustomizer', 'AVCaptureVideoDataOutputSampleBufferDelegate', 'AVContentKeyRecipient', 'AVContentKeySessionDelegate', 'AVFragmentMinding', 'AVPictureInPictureControllerDelegate', 'AVPlayerItemLegibleOutputPushDelegate', 'AVPlayerItemMetadataCollectorPushDelegate', 'AVPlayerItemMetadataOutputPushDelegate', 'AVPlayerItemOutputPullDelegate', 
'AVPlayerItemOutputPushDelegate', 'AVPlayerViewControllerDelegate', 'AVQueuedSampleBufferRendering', 'AVRoutePickerViewDelegate', 'AVVideoCompositing', 'AVVideoCompositionInstruction', 'AVVideoCompositionValidationHandling', 'AXCustomContentProvider', 'CAAction', 'CAAnimationDelegate', 'CALayerDelegate', 'CAMediaTiming', 'CAMetalDrawable', 'CBCentralManagerDelegate', 'CBPeripheralDelegate', 'CBPeripheralManagerDelegate', 'CHHapticAdvancedPatternPlayer', 'CHHapticDeviceCapability', 'CHHapticParameterAttributes', 'CHHapticPatternPlayer', 'CIAccordionFoldTransition', 'CIAffineClamp', 'CIAffineTile', 'CIAreaAverage', 'CIAreaHistogram', 'CIAreaMaximum', 'CIAreaMaximumAlpha', 'CIAreaMinMax', 'CIAreaMinMaxRed', 'CIAreaMinimum', 'CIAreaMinimumAlpha', 'CIAreaReductionFilter', 'CIAttributedTextImageGenerator', 'CIAztecCodeGenerator', 'CIBarcodeGenerator', 'CIBarsSwipeTransition', 'CIBicubicScaleTransform', 'CIBlendWithMask', 'CIBloom', 'CIBokehBlur', 'CIBoxBlur', 'CIBumpDistortion', 'CIBumpDistortionLinear', 'CICMYKHalftone', 'CICheckerboardGenerator', 'CICircleSplashDistortion', 'CICircularScreen', 'CICircularWrap', 'CICode128BarcodeGenerator', 'CIColorAbsoluteDifference', 'CIColorClamp', 'CIColorControls', 'CIColorCrossPolynomial', 'CIColorCube', 'CIColorCubeWithColorSpace', 'CIColorCubesMixedWithMask', 'CIColorCurves', 'CIColorInvert', 'CIColorMap', 'CIColorMatrix', 'CIColorMonochrome', 'CIColorPolynomial', 'CIColorPosterize', 'CIColorThreshold', 'CIColorThresholdOtsu', 'CIColumnAverage', 'CIComicEffect', 'CICompositeOperation', 'CIConvolution', 'CICopyMachineTransition', 'CICoreMLModel', 'CICrystallize', 'CIDepthOfField', 'CIDepthToDisparity', 'CIDiscBlur', 'CIDisintegrateWithMaskTransition', 'CIDisparityToDepth', 'CIDisplacementDistortion', 'CIDissolveTransition', 'CIDither', 'CIDocumentEnhancer', 'CIDotScreen', 'CIDroste', 'CIEdgePreserveUpsample', 'CIEdgeWork', 'CIEdges', 'CIEightfoldReflectedTile', 'CIExposureAdjust', 'CIFalseColor', 'CIFilter', 'CIFilterConstructor', 'CIFlashTransition', 'CIFourCoordinateGeometryFilter', 'CIFourfoldReflectedTile', 'CIFourfoldRotatedTile', 'CIFourfoldTranslatedTile', 'CIGaborGradients', 'CIGammaAdjust', 'CIGaussianBlur', 'CIGaussianGradient', 'CIGlassDistortion', 'CIGlassLozenge', 'CIGlideReflectedTile', 'CIGloom', 'CIHatchedScreen', 'CIHeightFieldFromMask', 'CIHexagonalPixellate', 'CIHighlightShadowAdjust', 'CIHistogramDisplay', 'CIHoleDistortion', 'CIHueAdjust', 'CIHueSaturationValueGradient', 'CIImageProcessorInput', 'CIImageProcessorOutput', 'CIKMeans', 'CIKaleidoscope', 'CIKeystoneCorrectionCombined', 'CIKeystoneCorrectionHorizontal', 'CIKeystoneCorrectionVertical', 'CILabDeltaE', 'CILanczosScaleTransform', 'CILenticularHaloGenerator', 'CILightTunnel', 'CILineOverlay', 'CILineScreen', 'CILinearGradient', 'CILinearToSRGBToneCurve', 'CIMaskToAlpha', 'CIMaskedVariableBlur', 'CIMaximumComponent', 'CIMedian', 'CIMeshGenerator', 'CIMinimumComponent', 'CIMix', 'CIModTransition', 'CIMorphologyGradient', 'CIMorphologyMaximum', 'CIMorphologyMinimum', 'CIMorphologyRectangleMaximum', 'CIMorphologyRectangleMinimum', 'CIMotionBlur', 'CINinePartStretched', 'CINinePartTiled', 'CINoiseReduction', 'CIOpTile', 'CIPDF417BarcodeGenerator', 'CIPageCurlTransition', 'CIPageCurlWithShadowTransition', 'CIPaletteCentroid', 'CIPalettize', 'CIParallelogramTile', 'CIPerspectiveCorrection', 'CIPerspectiveRotate', 'CIPerspectiveTile', 'CIPerspectiveTransform', 'CIPerspectiveTransformWithExtent', 'CIPhotoEffect', 'CIPinchDistortion', 'CIPixellate', 'CIPlugInRegistration', 
'CIPointillize', 'CIQRCodeGenerator', 'CIRadialGradient', 'CIRandomGenerator', 'CIRippleTransition', 'CIRoundedRectangleGenerator', 'CIRowAverage', 'CISRGBToneCurveToLinear', 'CISaliencyMap', 'CISepiaTone', 'CIShadedMaterial', 'CISharpenLuminance', 'CISixfoldReflectedTile', 'CISixfoldRotatedTile', 'CISmoothLinearGradient', 'CISpotColor', 'CISpotLight', 'CIStarShineGenerator', 'CIStraighten', 'CIStretchCrop', 'CIStripesGenerator', 'CISunbeamsGenerator', 'CISwipeTransition', 'CITemperatureAndTint', 'CITextImageGenerator', 'CIThermal', 'CIToneCurve', 'CITorusLensDistortion', 'CITransitionFilter', 'CITriangleKaleidoscope', 'CITriangleTile', 'CITwelvefoldReflectedTile', 'CITwirlDistortion', 'CIUnsharpMask', 'CIVibrance', 'CIVignette', 'CIVignetteEffect', 'CIVortexDistortion', 'CIWhitePointAdjust', 'CIXRay', 'CIZoomBlur', 'CKRecordKeyValueSetting', 'CKRecordValue', 'CLKComplicationDataSource', 'CLLocationManagerDelegate', 'CLSContextProvider', 'CLSDataStoreDelegate', 'CMFallDetectionDelegate', 'CMHeadphoneMotionManagerDelegate', 'CNChangeHistoryEventVisitor', 'CNContactPickerDelegate', 'CNContactViewControllerDelegate', 'CNKeyDescriptor', 'CPApplicationDelegate', 'CPBarButtonProviding', 'CPInterfaceControllerDelegate', 'CPListTemplateDelegate', 'CPListTemplateItem', 'CPMapTemplateDelegate', 'CPNowPlayingTemplateObserver', 'CPPointOfInterestTemplateDelegate', 'CPSearchTemplateDelegate', 'CPSelectableListItem', 'CPSessionConfigurationDelegate', 'CPTabBarTemplateDelegate', 'CPTemplateApplicationDashboardSceneDelegate', 'CPTemplateApplicationSceneDelegate', 'CSSearchableIndexDelegate', 'CTSubscriberDelegate', 'CTTelephonyNetworkInfoDelegate', 'CXCallDirectoryExtensionContextDelegate', 'CXCallObserverDelegate', 'CXProviderDelegate', 'EAAccessoryDelegate', 'EAGLDrawable', 'EAWiFiUnconfiguredAccessoryBrowserDelegate', 'EKCalendarChooserDelegate', 'EKEventEditViewDelegate', 'EKEventViewDelegate', 'GCDevice', 'GKAchievementViewControllerDelegate', 'GKAgentDelegate', 'GKChallengeEventHandlerDelegate', 'GKChallengeListener', 'GKFriendRequestComposeViewControllerDelegate', 'GKGameCenterControllerDelegate', 'GKGameModel', 'GKGameModelPlayer', 'GKGameModelUpdate', 'GKGameSessionEventListener', 'GKGameSessionSharingViewControllerDelegate', 'GKInviteEventListener', 'GKLeaderboardViewControllerDelegate', 'GKLocalPlayerListener', 'GKMatchDelegate', 'GKMatchmakerViewControllerDelegate', 'GKPeerPickerControllerDelegate', 'GKRandom', 'GKSavedGameListener', 'GKSceneRootNodeType', 'GKSessionDelegate', 'GKStrategist', 'GKTurnBasedEventListener', 'GKTurnBasedMatchmakerViewControllerDelegate', 'GKVoiceChatClient', 'GLKNamedEffect', 'GLKViewControllerDelegate', 'GLKViewDelegate', 'HKLiveWorkoutBuilderDelegate', 'HKWorkoutSessionDelegate', 'HMAccessoryBrowserDelegate', 'HMAccessoryDelegate', 'HMCameraSnapshotControlDelegate', 'HMCameraStreamControlDelegate', 'HMHomeDelegate', 'HMHomeManagerDelegate', 'HMNetworkConfigurationProfileDelegate', 'ICCameraDeviceDelegate', 'ICCameraDeviceDownloadDelegate', 'ICDeviceBrowserDelegate', 'ICDeviceDelegate', 'ICScannerDeviceDelegate', 'ILMessageFilterQueryHandling', 'INActivateCarSignalIntentHandling', 'INAddMediaIntentHandling', 'INAddTasksIntentHandling', 'INAppendToNoteIntentHandling', 'INBookRestaurantReservationIntentHandling', 'INCallsDomainHandling', 'INCancelRideIntentHandling', 'INCancelWorkoutIntentHandling', 'INCarCommandsDomainHandling', 'INCarPlayDomainHandling', 'INCreateNoteIntentHandling', 'INCreateTaskListIntentHandling', 'INDeleteTasksIntentHandling', 
'INEndWorkoutIntentHandling', 'INGetAvailableRestaurantReservationBookingDefaultsIntentHandling', 'INGetAvailableRestaurantReservationBookingsIntentHandling', 'INGetCarLockStatusIntentHandling', 'INGetCarPowerLevelStatusIntentHandling', 'INGetCarPowerLevelStatusIntentResponseObserver', 'INGetRestaurantGuestIntentHandling', 'INGetRideStatusIntentHandling', 'INGetRideStatusIntentResponseObserver', 'INGetUserCurrentRestaurantReservationBookingsIntentHandling', 'INGetVisualCodeIntentHandling', 'INIntentHandlerProviding', 'INListCarsIntentHandling', 'INListRideOptionsIntentHandling', 'INMessagesDomainHandling', 'INNotebookDomainHandling', 'INPauseWorkoutIntentHandling', 'INPayBillIntentHandling', 'INPaymentsDomainHandling', 'INPhotosDomainHandling', 'INPlayMediaIntentHandling', 'INRadioDomainHandling', 'INRequestPaymentIntentHandling', 'INRequestRideIntentHandling', 'INResumeWorkoutIntentHandling', 'INRidesharingDomainHandling', 'INSaveProfileInCarIntentHandling', 'INSearchCallHistoryIntentHandling', 'INSearchForAccountsIntentHandling', 'INSearchForBillsIntentHandling', 'INSearchForMediaIntentHandling', 'INSearchForMessagesIntentHandling', 'INSearchForNotebookItemsIntentHandling', 'INSearchForPhotosIntentHandling', 'INSendMessageIntentHandling', 'INSendPaymentIntentHandling', 'INSendRideFeedbackIntentHandling', 'INSetAudioSourceInCarIntentHandling', 'INSetCarLockStatusIntentHandling', 'INSetClimateSettingsInCarIntentHandling', 'INSetDefrosterSettingsInCarIntentHandling', 'INSetMessageAttributeIntentHandling', 'INSetProfileInCarIntentHandling', 'INSetRadioStationIntentHandling', 'INSetSeatSettingsInCarIntentHandling', 'INSetTaskAttributeIntentHandling', 'INSnoozeTasksIntentHandling', 'INSpeakable', 'INStartAudioCallIntentHandling', 'INStartCallIntentHandling', 'INStartPhotoPlaybackIntentHandling', 'INStartVideoCallIntentHandling', 'INStartWorkoutIntentHandling', 'INTransferMoneyIntentHandling', 'INUIAddVoiceShortcutButtonDelegate', 'INUIAddVoiceShortcutViewControllerDelegate', 'INUIEditVoiceShortcutViewControllerDelegate', 'INUIHostedViewControlling', 'INUIHostedViewSiriProviding', 'INUpdateMediaAffinityIntentHandling', 'INVisualCodeDomainHandling', 'INWorkoutsDomainHandling', 'JSExport', 'MCAdvertiserAssistantDelegate', 'MCBrowserViewControllerDelegate', 'MCNearbyServiceAdvertiserDelegate', 'MCNearbyServiceBrowserDelegate', 'MCSessionDelegate', 'MDLAssetResolver', 'MDLComponent', 'MDLJointAnimation', 'MDLLightProbeIrradianceDataSource', 'MDLMeshBuffer', 'MDLMeshBufferAllocator', 'MDLMeshBufferZone', 'MDLNamed', 'MDLObjectContainerComponent', 'MDLTransformComponent', 'MDLTransformOp', 'MFMailComposeViewControllerDelegate', 'MFMessageComposeViewControllerDelegate', 'MIDICIProfileResponderDelegate', 'MKAnnotation', 'MKGeoJSONObject', 'MKLocalSearchCompleterDelegate', 'MKMapViewDelegate', 'MKOverlay', 'MKReverseGeocoderDelegate', 'MLBatchProvider', 'MLCustomLayer', 'MLCustomModel', 'MLFeatureProvider', 'MLWritable', 'MPMediaPickerControllerDelegate', 'MPMediaPlayback', 'MPNowPlayingSessionDelegate', 'MPPlayableContentDataSource', 'MPPlayableContentDelegate', 'MPSystemMusicPlayerController', 'MSAuthenticationPresentationContext', 'MSMessagesAppTranscriptPresentation', 'MSStickerBrowserViewDataSource', 'MTKViewDelegate', 'MTLAccelerationStructure', 'MTLAccelerationStructureCommandEncoder', 'MTLArgumentEncoder', 'MTLBinaryArchive', 'MTLBlitCommandEncoder', 'MTLBuffer', 'MTLCaptureScope', 'MTLCommandBuffer', 'MTLCommandBufferEncoderInfo', 'MTLCommandEncoder', 'MTLCommandQueue', 
'MTLComputeCommandEncoder', 'MTLComputePipelineState', 'MTLCounter', 'MTLCounterSampleBuffer', 'MTLCounterSet', 'MTLDepthStencilState', 'MTLDevice', 'MTLDrawable', 'MTLDynamicLibrary', 'MTLEvent', 'MTLFence', 'MTLFunction', 'MTLFunctionHandle', 'MTLFunctionLog', 'MTLFunctionLogDebugLocation', 'MTLHeap', 'MTLIndirectCommandBuffer', 'MTLIndirectComputeCommand', 'MTLIndirectComputeCommandEncoder', 'MTLIndirectRenderCommand', 'MTLIndirectRenderCommandEncoder', 'MTLIntersectionFunctionTable', 'MTLLibrary', 'MTLLogContainer', 'MTLParallelRenderCommandEncoder', 'MTLRasterizationRateMap', 'MTLRenderCommandEncoder', 'MTLRenderPipelineState', 'MTLResource', 'MTLResourceStateCommandEncoder', 'MTLSamplerState', 'MTLSharedEvent', 'MTLTexture', 'MTLVisibleFunctionTable', 'MXMetricManagerSubscriber', 'MyClassJavaScriptMethods', 'NCWidgetProviding', 'NEAppPushDelegate', 'NFCFeliCaTag', 'NFCISO15693Tag', 'NFCISO7816Tag', 'NFCMiFareTag', 'NFCNDEFReaderSessionDelegate', 'NFCNDEFTag', 'NFCReaderSession', 'NFCReaderSessionDelegate', 'NFCTag', 'NFCTagReaderSessionDelegate', 'NFCVASReaderSessionDelegate', 'NISessionDelegate', 'NSCacheDelegate', 'NSCoding', 'NSCollectionLayoutContainer', 'NSCollectionLayoutEnvironment', 'NSCollectionLayoutVisibleItem', 'NSCopying', 'NSDecimalNumberBehaviors', 'NSDiscardableContent', 'NSExtensionRequestHandling', 'NSFastEnumeration', 'NSFetchRequestResult', 'NSFetchedResultsControllerDelegate', 'NSFetchedResultsSectionInfo', 'NSFileManagerDelegate', 'NSFilePresenter', 'NSFileProviderChangeObserver', 'NSFileProviderEnumerationObserver', 'NSFileProviderEnumerator', 'NSFileProviderItem', 'NSFileProviderServiceSource', 'NSItemProviderReading', 'NSItemProviderWriting', 'NSKeyedArchiverDelegate', 'NSKeyedUnarchiverDelegate', 'NSLayoutManagerDelegate', 'NSLocking', 'NSMachPortDelegate', 'NSMetadataQueryDelegate', 'NSMutableCopying', 'NSNetServiceBrowserDelegate', 'NSNetServiceDelegate', 'NSPortDelegate', 'NSProgressReporting', 'NSSecureCoding', 'NSStreamDelegate', 'NSTextAttachmentContainer', 'NSTextLayoutOrientationProvider', 'NSTextStorageDelegate', 'NSURLAuthenticationChallengeSender', 'NSURLConnectionDataDelegate', 'NSURLConnectionDelegate', 'NSURLConnectionDownloadDelegate', 'NSURLProtocolClient', 'NSURLSessionDataDelegate', 'NSURLSessionDelegate', 'NSURLSessionDownloadDelegate', 'NSURLSessionStreamDelegate', 'NSURLSessionTaskDelegate', 'NSURLSessionWebSocketDelegate', 'NSUserActivityDelegate', 'NSXMLParserDelegate', 'NSXPCListenerDelegate', 'NSXPCProxyCreating', 'NWTCPConnectionAuthenticationDelegate', 'OSLogEntryFromProcess', 'OSLogEntryWithPayload', 'PDFDocumentDelegate', 'PDFViewDelegate', 'PHContentEditingController', 'PHLivePhotoFrame', 'PHLivePhotoViewDelegate', 'PHPhotoLibraryAvailabilityObserver', 'PHPhotoLibraryChangeObserver', 'PHPickerViewControllerDelegate', 'PKAddPassesViewControllerDelegate', 'PKAddPaymentPassViewControllerDelegate', 'PKAddSecureElementPassViewControllerDelegate', 'PKCanvasViewDelegate', 'PKDisbursementAuthorizationControllerDelegate', 'PKIssuerProvisioningExtensionAuthorizationProviding', 'PKPaymentAuthorizationControllerDelegate', 'PKPaymentAuthorizationViewControllerDelegate', 'PKPaymentInformationRequestHandling', 'PKPushRegistryDelegate', 'PKToolPickerObserver', 'PreviewDisplaying', 'QLPreviewControllerDataSource', 'QLPreviewControllerDelegate', 'QLPreviewItem', 'QLPreviewingController', 'RPBroadcastActivityControllerDelegate', 'RPBroadcastActivityViewControllerDelegate', 'RPBroadcastControllerDelegate', 'RPPreviewViewControllerDelegate', 
'RPScreenRecorderDelegate', 'SCNActionable', 'SCNAnimatable', 'SCNAnimation', 'SCNAvoidOccluderConstraintDelegate', 'SCNBoundingVolume', 'SCNBufferStream', 'SCNCameraControlConfiguration', 'SCNCameraControllerDelegate', 'SCNNodeRendererDelegate', 'SCNPhysicsContactDelegate', 'SCNProgramDelegate', 'SCNSceneExportDelegate', 'SCNSceneRenderer', 'SCNSceneRendererDelegate', 'SCNShadable', 'SCNTechniqueSupport', 'SFSafariViewControllerDelegate', 'SFSpeechRecognitionTaskDelegate', 'SFSpeechRecognizerDelegate', 'SKCloudServiceSetupViewControllerDelegate', 'SKOverlayDelegate', 'SKPaymentQueueDelegate', 'SKPaymentTransactionObserver', 'SKPhysicsContactDelegate', 'SKProductsRequestDelegate', 'SKRequestDelegate', 'SKSceneDelegate', 'SKStoreProductViewControllerDelegate', 'SKViewDelegate', 'SKWarpable', 'SNRequest', 'SNResult', 'SNResultsObserving', 'SRSensorReaderDelegate', 'TKSmartCardTokenDriverDelegate', 'TKSmartCardUserInteractionDelegate', 'TKTokenDelegate', 'TKTokenDriverDelegate', 'TKTokenSessionDelegate', 'UIAccelerometerDelegate', 'UIAccessibilityContainerDataTable', 'UIAccessibilityContainerDataTableCell', 'UIAccessibilityContentSizeCategoryImageAdjusting', 'UIAccessibilityIdentification', 'UIAccessibilityReadingContent', 'UIActionSheetDelegate', 'UIActivityItemSource', 'UIActivityItemsConfigurationReading', 'UIAdaptivePresentationControllerDelegate', 'UIAlertViewDelegate', 'UIAppearance', 'UIAppearanceContainer', 'UIApplicationDelegate', 'UIBarPositioning', 'UIBarPositioningDelegate', 'UICloudSharingControllerDelegate', 'UICollectionViewDataSource', 'UICollectionViewDataSourcePrefetching', 'UICollectionViewDelegate', 'UICollectionViewDelegateFlowLayout', 'UICollectionViewDragDelegate', 'UICollectionViewDropCoordinator', 'UICollectionViewDropDelegate', 'UICollectionViewDropItem', 'UICollectionViewDropPlaceholderContext', 'UICollisionBehaviorDelegate', 'UIColorPickerViewControllerDelegate', 'UIConfigurationState', 'UIContentConfiguration', 'UIContentContainer', 'UIContentSizeCategoryAdjusting', 'UIContentView', 'UIContextMenuInteractionAnimating', 'UIContextMenuInteractionCommitAnimating', 'UIContextMenuInteractionDelegate', 'UICoordinateSpace', 'UIDataSourceModelAssociation', 'UIDataSourceTranslating', 'UIDocumentBrowserViewControllerDelegate', 'UIDocumentInteractionControllerDelegate', 'UIDocumentMenuDelegate', 'UIDocumentPickerDelegate', 'UIDragAnimating', 'UIDragDropSession', 'UIDragInteractionDelegate', 'UIDragSession', 'UIDropInteractionDelegate', 'UIDropSession', 'UIDynamicAnimatorDelegate', 'UIDynamicItem', 'UIFocusAnimationContext', 'UIFocusDebuggerOutput', 'UIFocusEnvironment', 'UIFocusItem', 'UIFocusItemContainer', 'UIFocusItemScrollableContainer', 'UIFontPickerViewControllerDelegate', 'UIGestureRecognizerDelegate', 'UIGuidedAccessRestrictionDelegate', 'UIImageConfiguration', 'UIImagePickerControllerDelegate', 'UIIndirectScribbleInteractionDelegate', 'UIInputViewAudioFeedback', 'UIInteraction', 'UIItemProviderPresentationSizeProviding', 'UIKeyInput', 'UILargeContentViewerInteractionDelegate', 'UILargeContentViewerItem', 'UILayoutSupport', 'UIMenuBuilder', 'UINavigationBarDelegate', 'UINavigationControllerDelegate', 'UIObjectRestoration', 'UIPageViewControllerDataSource', 'UIPageViewControllerDelegate', 'UIPasteConfigurationSupporting', 'UIPencilInteractionDelegate', 'UIPickerViewAccessibilityDelegate', 'UIPickerViewDataSource', 'UIPickerViewDelegate', 'UIPointerInteractionAnimating', 'UIPointerInteractionDelegate', 'UIPopoverBackgroundViewMethods', 'UIPopoverControllerDelegate', 
'UIPopoverPresentationControllerDelegate', 'UIPreviewActionItem', 'UIPreviewInteractionDelegate', 'UIPrintInteractionControllerDelegate', 'UIPrinterPickerControllerDelegate', 'UIResponderStandardEditActions', 'UISceneDelegate', 'UIScreenshotServiceDelegate', 'UIScribbleInteractionDelegate', 'UIScrollViewAccessibilityDelegate', 'UIScrollViewDelegate', 'UISearchBarDelegate', 'UISearchControllerDelegate', 'UISearchDisplayDelegate', 'UISearchResultsUpdating', 'UISearchSuggestion', 'UISearchTextFieldDelegate', 'UISearchTextFieldPasteItem', 'UISplitViewControllerDelegate', 'UISpringLoadedInteractionBehavior', 'UISpringLoadedInteractionContext', 'UISpringLoadedInteractionEffect', 'UISpringLoadedInteractionSupporting', 'UIStateRestoring', 'UITabBarControllerDelegate', 'UITabBarDelegate', 'UITableViewDataSource', 'UITableViewDataSourcePrefetching', 'UITableViewDelegate', 'UITableViewDragDelegate', 'UITableViewDropCoordinator', 'UITableViewDropDelegate', 'UITableViewDropItem', 'UITableViewDropPlaceholderContext', 'UITextDocumentProxy', 'UITextDragDelegate', 'UITextDragRequest', 'UITextDraggable', 'UITextDropDelegate', 'UITextDropRequest', 'UITextDroppable', 'UITextFieldDelegate', 'UITextFormattingCoordinatorDelegate', 'UITextInput', 'UITextInputDelegate', 'UITextInputTokenizer', 'UITextInputTraits', 'UITextInteractionDelegate', 'UITextPasteConfigurationSupporting', 'UITextPasteDelegate', 'UITextPasteItem', 'UITextSelecting', 'UITextViewDelegate', 'UITimingCurveProvider', 'UIToolbarDelegate', 'UITraitEnvironment', 'UIUserActivityRestoring', 'UIVideoEditorControllerDelegate', 'UIViewAnimating', 'UIViewControllerAnimatedTransitioning', 'UIViewControllerContextTransitioning', 'UIViewControllerInteractiveTransitioning', 'UIViewControllerPreviewing', 'UIViewControllerPreviewingDelegate', 'UIViewControllerRestoration', 'UIViewControllerTransitionCoordinator', 'UIViewControllerTransitionCoordinatorContext', 'UIViewControllerTransitioningDelegate', 'UIViewImplicitlyAnimating', 'UIWebViewDelegate', 'UIWindowSceneDelegate', 'UNNotificationContentExtension', 'UNUserNotificationCenterDelegate', 'VNDocumentCameraViewControllerDelegate', 'VNFaceObservationAccepting', 'VNRequestProgressProviding', 'VNRequestRevisionProviding', 'VSAccountManagerDelegate', 'WCSessionDelegate', 'WKHTTPCookieStoreObserver', 'WKNavigationDelegate', 'WKPreviewActionItem', 'WKScriptMessageHandler', 'WKScriptMessageHandlerWithReply', 'WKUIDelegate', 'WKURLSchemeHandler', 'WKURLSchemeTask'} COCOA_PRIMITIVES = {'ACErrorCode', 'ALCcontext_struct', 'ALCdevice_struct', 'ALMXGlyphEntry', 'ALMXHeader', 'API_UNAVAILABLE', 'AUChannelInfo', 'AUDependentParameter', 'AUDistanceAttenuationData', 'AUHostIdentifier', 'AUHostVersionIdentifier', 'AUInputSamplesInOutputCallbackStruct', 'AUMIDIEvent', 'AUMIDIOutputCallbackStruct', 'AUNodeInteraction', 'AUNodeRenderCallback', 'AUNumVersion', 'AUParameterAutomationEvent', 'AUParameterEvent', 'AUParameterMIDIMapping', 'AUPreset', 'AUPresetEvent', 'AURecordedParameterEvent', 'AURenderCallbackStruct', 'AURenderEventHeader', 'AUSamplerBankPresetData', 'AUSamplerInstrumentData', 'AnchorPoint', 'AnchorPointTable', 'AnkrTable', 'AudioBalanceFade', 'AudioBuffer', 'AudioBufferList', 'AudioBytePacketTranslation', 'AudioChannelDescription', 'AudioChannelLayout', 'AudioClassDescription', 'AudioCodecMagicCookieInfo', 'AudioCodecPrimeInfo', 'AudioComponentDescription', 'AudioComponentPlugInInterface', 'AudioConverterPrimeInfo', 'AudioFileMarker', 'AudioFileMarkerList', 'AudioFilePacketTableInfo', 'AudioFileRegion', 
'AudioFileRegionList', 'AudioFileTypeAndFormatID', 'AudioFile_SMPTE_Time', 'AudioFormatInfo', 'AudioFormatListItem', 'AudioFramePacketTranslation', 'AudioIndependentPacketTranslation', 'AudioOutputUnitMIDICallbacks', 'AudioOutputUnitStartAtTimeParams', 'AudioPacketDependencyInfoTranslation', 'AudioPacketRangeByteCountTranslation', 'AudioPacketRollDistanceTranslation', 'AudioPanningInfo', 'AudioQueueBuffer', 'AudioQueueChannelAssignment', 'AudioQueueLevelMeterState', 'AudioQueueParameterEvent', 'AudioStreamBasicDescription', 'AudioStreamPacketDescription', 'AudioTimeStamp', 'AudioUnitCocoaViewInfo', 'AudioUnitConnection', 'AudioUnitExternalBuffer', 'AudioUnitFrequencyResponseBin', 'AudioUnitMIDIControlMapping', 'AudioUnitMeterClipping', 'AudioUnitNodeConnection', 'AudioUnitOtherPluginDesc', 'AudioUnitParameter', 'AudioUnitParameterEvent', 'AudioUnitParameterHistoryInfo', 'AudioUnitParameterInfo', 'AudioUnitParameterNameInfo', 'AudioUnitParameterStringFromValue', 'AudioUnitParameterValueFromString', 'AudioUnitParameterValueName', 'AudioUnitParameterValueTranslation', 'AudioUnitPresetMAS_SettingData', 'AudioUnitPresetMAS_Settings', 'AudioUnitProperty', 'AudioUnitRenderContext', 'AudioValueRange', 'AudioValueTranslation', 'AuthorizationOpaqueRef', 'BslnFormat0Part', 'BslnFormat1Part', 'BslnFormat2Part', 'BslnFormat3Part', 'BslnTable', 'CABarBeatTime', 'CAFAudioDescription', 'CAFChunkHeader', 'CAFDataChunk', 'CAFFileHeader', 'CAFInfoStrings', 'CAFInstrumentChunk', 'CAFMarker', 'CAFMarkerChunk', 'CAFOverviewChunk', 'CAFOverviewSample', 'CAFPacketTableHeader', 'CAFPeakChunk', 'CAFPositionPeak', 'CAFRegion', 'CAFRegionChunk', 'CAFStringID', 'CAFStrings', 'CAFUMIDChunk', 'CAF_SMPTE_Time', 'CAF_UUID_ChunkHeader', 'CA_BOXABLE', 'CFHostClientContext', 'CFNetServiceClientContext', 'CF_BRIDGED_MUTABLE_TYPE', 'CF_BRIDGED_TYPE', 'CF_RELATED_TYPE', 'CGAffineTransform', 'CGDataConsumerCallbacks', 'CGDataProviderDirectCallbacks', 'CGDataProviderSequentialCallbacks', 'CGFunctionCallbacks', 'CGPDFArray', 'CGPDFContentStream', 'CGPDFDictionary', 'CGPDFObject', 'CGPDFOperatorTable', 'CGPDFScanner', 'CGPDFStream', 'CGPDFString', 'CGPathElement', 'CGPatternCallbacks', 'CGVector', 'CG_BOXABLE', 'CLLocationCoordinate2D', 'CM_BRIDGED_TYPE', 'CTParagraphStyleSetting', 'CVPlanarComponentInfo', 'CVPlanarPixelBufferInfo', 'CVPlanarPixelBufferInfo_YCbCrBiPlanar', 'CVPlanarPixelBufferInfo_YCbCrPlanar', 'CVSMPTETime', 'CV_BRIDGED_TYPE', 'ComponentInstanceRecord', 'ExtendedAudioFormatInfo', 'ExtendedControlEvent', 'ExtendedNoteOnEvent', 'ExtendedTempoEvent', 'FontVariation', 'GCQuaternion', 'GKBox', 'GKQuad', 'GKTriangle', 'GLKEffectPropertyPrv', 'HostCallbackInfo', 'IIO_BRIDGED_TYPE', 'IUnknownVTbl', 'JustDirectionTable', 'JustPCAction', 'JustPCActionSubrecord', 'JustPCConditionalAddAction', 'JustPCDecompositionAction', 'JustPCDuctilityAction', 'JustPCGlyphRepeatAddAction', 'JustPostcompTable', 'JustTable', 'JustWidthDeltaEntry', 'JustWidthDeltaGroup', 'KernIndexArrayHeader', 'KernKerningPair', 'KernOffsetTable', 'KernOrderedListEntry', 'KernOrderedListHeader', 'KernSimpleArrayHeader', 'KernStateEntry', 'KernStateHeader', 'KernSubtableHeader', 'KernTableHeader', 'KernVersion0Header', 'KernVersion0SubtableHeader', 'KerxAnchorPointAction', 'KerxControlPointAction', 'KerxControlPointEntry', 'KerxControlPointHeader', 'KerxCoordinateAction', 'KerxIndexArrayHeader', 'KerxKerningPair', 'KerxOrderedListEntry', 'KerxOrderedListHeader', 'KerxSimpleArrayHeader', 'KerxStateEntry', 'KerxStateHeader', 'KerxSubtableHeader', 
'KerxTableHeader', 'LcarCaretClassEntry', 'LcarCaretTable', 'LtagStringRange', 'LtagTable', 'MDL_CLASS_EXPORT', 'MIDICIDeviceIdentification', 'MIDIChannelMessage', 'MIDIControlTransform', 'MIDIDriverInterface', 'MIDIEventList', 'MIDIEventPacket', 'MIDIIOErrorNotification', 'MIDIMessage_128', 'MIDIMessage_64', 'MIDIMessage_96', 'MIDIMetaEvent', 'MIDINoteMessage', 'MIDINotification', 'MIDIObjectAddRemoveNotification', 'MIDIObjectPropertyChangeNotification', 'MIDIPacket', 'MIDIPacketList', 'MIDIRawData', 'MIDISysexSendRequest', 'MIDIThruConnectionEndpoint', 'MIDIThruConnectionParams', 'MIDITransform', 'MIDIValueMap', 'MPSDeviceOptions', 'MixerDistanceParams', 'MortChain', 'MortContextualSubtable', 'MortFeatureEntry', 'MortInsertionSubtable', 'MortLigatureSubtable', 'MortRearrangementSubtable', 'MortSubtable', 'MortSwashSubtable', 'MortTable', 'MorxChain', 'MorxContextualSubtable', 'MorxInsertionSubtable', 'MorxLigatureSubtable', 'MorxRearrangementSubtable', 'MorxSubtable', 'MorxTable', 'MusicDeviceNoteParams', 'MusicDeviceStdNoteParams', 'MusicEventUserData', 'MusicTrackLoopInfo', 'NoteParamsControlValue', 'OpaqueAudioComponent', 'OpaqueAudioComponentInstance', 'OpaqueAudioConverter', 'OpaqueAudioQueue', 'OpaqueAudioQueueProcessingTap', 'OpaqueAudioQueueTimeline', 'OpaqueExtAudioFile', 'OpaqueJSClass', 'OpaqueJSContext', 'OpaqueJSContextGroup', 'OpaqueJSPropertyNameAccumulator', 'OpaqueJSPropertyNameArray', 'OpaqueJSString', 'OpaqueJSValue', 'OpaqueMusicEventIterator', 'OpaqueMusicPlayer', 'OpaqueMusicSequence', 'OpaqueMusicTrack', 'OpbdSideValues', 'OpbdTable', 'ParameterEvent', 'PropLookupSegment', 'PropLookupSingle', 'PropTable', 'ROTAGlyphEntry', 'ROTAHeader', 'SCNMatrix4', 'SCNVector3', 'SCNVector4', 'SFNTLookupArrayHeader', 'SFNTLookupBinarySearchHeader', 'SFNTLookupSegment', 'SFNTLookupSegmentHeader', 'SFNTLookupSingle', 'SFNTLookupSingleHeader', 'SFNTLookupTable', 'SFNTLookupTrimmedArrayHeader', 'SFNTLookupVectorHeader', 'SMPTETime', 'STClassTable', 'STEntryOne', 'STEntryTwo', 'STEntryZero', 'STHeader', 'STXEntryOne', 'STXEntryTwo', 'STXEntryZero', 'STXHeader', 'ScheduledAudioFileRegion', 'ScheduledAudioSlice', 'SecKeychainAttribute', 'SecKeychainAttributeInfo', 'SecKeychainAttributeList', 'TrakTable', 'TrakTableData', 'TrakTableEntry', 'UIAccessibility', 'VTDecompressionOutputCallbackRecord', 'VTInt32Point', 'VTInt32Size', '_CFHTTPAuthentication', '_GLKMatrix2', '_GLKMatrix3', '_GLKMatrix4', '_GLKQuaternion', '_GLKVector2', '_GLKVector3', '_GLKVector4', '_GLKVertexAttributeParameters', '_MTLAxisAlignedBoundingBox', '_MTLPackedFloat3', '_MTLPackedFloat4x3', '_NSRange', '_NSZone', '__CFHTTPMessage', '__CFHost', '__CFNetDiagnostic', '__CFNetService', '__CFNetServiceBrowser', '__CFNetServiceMonitor', '__CFXMLNode', '__CFXMLParser', '__GLsync', '__SecAccess', '__SecCertificate', '__SecIdentity', '__SecKey', '__SecRandom', '__attribute__', 'gss_OID_desc_struct', 'gss_OID_set_desc_struct', 'gss_auth_identity', 'gss_buffer_desc_struct', 'gss_buffer_set_desc_struct', 'gss_channel_bindings_struct', 'gss_cred_id_t_desc_struct', 'gss_ctx_id_t_desc_struct', 'gss_iov_buffer_desc_struct', 'gss_krb5_cfx_keydata', 'gss_krb5_lucid_context_v1', 'gss_krb5_lucid_context_version', 'gss_krb5_lucid_key', 'gss_krb5_rfc1964_keydata', 'gss_name_t_desc_struct', 'opaqueCMBufferQueueTriggerToken', 'sfntCMapEncoding', 'sfntCMapExtendedSubHeader', 'sfntCMapHeader', 'sfntCMapSubHeader', 'sfntDescriptorHeader', 'sfntDirectory', 'sfntDirectoryEntry', 'sfntFeatureHeader', 'sfntFeatureName', 'sfntFontDescriptor', 
'sfntFontFeatureSetting', 'sfntFontRunFeature', 'sfntInstance', 'sfntNameHeader', 'sfntNameRecord', 'sfntVariationAxis', 'sfntVariationHeader'}

if __name__ == '__main__':  # pragma: no cover
    import os
    import re

    FRAMEWORKS_PATH = '/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/'
    frameworks = os.listdir(FRAMEWORKS_PATH)

    all_interfaces = set()
    all_protocols = set()
    all_primitives = set()
    for framework in frameworks:
        frameworkHeadersDir = FRAMEWORKS_PATH + framework + '/Headers/'
        if not os.path.exists(frameworkHeadersDir):
            continue

        headerFilenames = os.listdir(frameworkHeadersDir)
        for headerFilename in headerFilenames:
            if not headerFilename.endswith('.h'):
                continue

            headerFilePath = frameworkHeadersDir + headerFilename
            try:
                with open(headerFilePath, encoding='utf-8') as f:
                    content = f.read()
            except UnicodeDecodeError:
                print("Decoding error for file: {0}".format(headerFilePath))
                continue

            # Collect @interface, @protocol, and typedef names from each header.
            res = re.findall(r'(?<=@interface )\w+', content)
            for r in res:
                all_interfaces.add(r)

            res = re.findall(r'(?<=@protocol )\w+', content)
            for r in res:
                all_protocols.add(r)

            res = re.findall(r'(?<=typedef enum )\w+', content)
            for r in res:
                all_primitives.add(r)

            res = re.findall(r'(?<=typedef struct )\w+', content)
            for r in res:
                all_primitives.add(r)

            res = re.findall(r'(?<=typedef const struct )\w+', content)
            for r in res:
                all_primitives.add(r)

    print("ALL interfaces: \n")
    print(sorted(list(all_interfaces)))

    print("\nALL protocols: \n")
    print(sorted(list(all_protocols)))

    print("\nALL primitives: \n")
    print(sorted(list(all_primitives)))
pygments-2.11.2/pygments/lexers/_tsql_builtins.py0000644000175000017500000003614414165547207022123 0ustar carstencarsten"""
    pygments.lexers._tsql_builtins
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    These are manually translated lists from https://msdn.microsoft.com.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

# See https://msdn.microsoft.com/en-us/library/ms174986.aspx.
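
# Illustrative sketch (hypothetical, not part of the upstream file): a minimal
# example of how tuples like the ones below are typically consumed, assuming
# the real Transact-SQL lexer wires them up through pygments.lexer.words().
# The function name and the exact token choices here are assumptions made only
# to show the role of these lists.
def _example_tsql_token_rules():
    """Sketch: build token rules from the tuples defined below (call after import)."""
    from pygments.lexer import words
    from pygments.token import Keyword, Name, Operator
    return [
        # whole-word operators such as AND, BETWEEN, LIKE
        (words(OPERATOR_WORDS, suffix=r'\b'), Operator.Word),
        # reserved words (the real lexer matches them case-insensitively)
        (words(KEYWORDS, suffix=r'\b'), Keyword),
        # built-in functions such as getdate or isnull
        (words(FUNCTIONS, suffix=r'\b'), Name.Function),
    ]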
OPERATORS = ( '!<', '!=', '!>', '<', '<=', '<>', '=', '>', '>=', '+', '+=', '-', '-=', '*', '*=', '/', '/=', '%', '%=', '&', '&=', '|', '|=', '^', '^=', '~', '::', ) OPERATOR_WORDS = ( 'all', 'and', 'any', 'between', 'except', 'exists', 'in', 'intersect', 'like', 'not', 'or', 'some', 'union', ) _KEYWORDS_SERVER = ( 'add', 'all', 'alter', 'and', 'any', 'as', 'asc', 'authorization', 'backup', 'begin', 'between', 'break', 'browse', 'bulk', 'by', 'cascade', 'case', 'catch', 'check', 'checkpoint', 'close', 'clustered', 'coalesce', 'collate', 'column', 'commit', 'compute', 'constraint', 'contains', 'containstable', 'continue', 'convert', 'create', 'cross', 'current', 'current_date', 'current_time', 'current_timestamp', 'current_user', 'cursor', 'database', 'dbcc', 'deallocate', 'declare', 'default', 'delete', 'deny', 'desc', 'disk', 'distinct', 'distributed', 'double', 'drop', 'dump', 'else', 'end', 'errlvl', 'escape', 'except', 'exec', 'execute', 'exists', 'exit', 'external', 'fetch', 'file', 'fillfactor', 'for', 'foreign', 'freetext', 'freetexttable', 'from', 'full', 'function', 'goto', 'grant', 'group', 'having', 'holdlock', 'identity', 'identity_insert', 'identitycol', 'if', 'in', 'index', 'inner', 'insert', 'intersect', 'into', 'is', 'join', 'key', 'kill', 'left', 'like', 'lineno', 'load', 'merge', 'national', 'nocheck', 'nonclustered', 'not', 'null', 'nullif', 'of', 'off', 'offsets', 'on', 'open', 'opendatasource', 'openquery', 'openrowset', 'openxml', 'option', 'or', 'order', 'outer', 'over', 'percent', 'pivot', 'plan', 'precision', 'primary', 'print', 'proc', 'procedure', 'public', 'raiserror', 'read', 'readtext', 'reconfigure', 'references', 'replication', 'restore', 'restrict', 'return', 'revert', 'revoke', 'right', 'rollback', 'rowcount', 'rowguidcol', 'rule', 'save', 'schema', 'securityaudit', 'select', 'semantickeyphrasetable', 'semanticsimilaritydetailstable', 'semanticsimilaritytable', 'session_user', 'set', 'setuser', 'shutdown', 'some', 'statistics', 'system_user', 'table', 'tablesample', 'textsize', 'then', 'throw', 'to', 'top', 'tran', 'transaction', 'trigger', 'truncate', 'try', 'try_convert', 'tsequal', 'union', 'unique', 'unpivot', 'update', 'updatetext', 'use', 'user', 'values', 'varying', 'view', 'waitfor', 'when', 'where', 'while', 'with', 'within', 'writetext', ) _KEYWORDS_FUTURE = ( 'absolute', 'action', 'admin', 'after', 'aggregate', 'alias', 'allocate', 'are', 'array', 'asensitive', 'assertion', 'asymmetric', 'at', 'atomic', 'before', 'binary', 'bit', 'blob', 'boolean', 'both', 'breadth', 'call', 'called', 'cardinality', 'cascaded', 'cast', 'catalog', 'char', 'character', 'class', 'clob', 'collation', 'collect', 'completion', 'condition', 'connect', 'connection', 'constraints', 'constructor', 'corr', 'corresponding', 'covar_pop', 'covar_samp', 'cube', 'cume_dist', 'current_catalog', 'current_default_transform_group', 'current_path', 'current_role', 'current_schema', 'current_transform_group_for_type', 'cycle', 'data', 'date', 'day', 'dec', 'decimal', 'deferrable', 'deferred', 'depth', 'deref', 'describe', 'descriptor', 'destroy', 'destructor', 'deterministic', 'diagnostics', 'dictionary', 'disconnect', 'domain', 'dynamic', 'each', 'element', 'end-exec', 'equals', 'every', 'exception', 'false', 'filter', 'first', 'float', 'found', 'free', 'fulltexttable', 'fusion', 'general', 'get', 'global', 'go', 'grouping', 'hold', 'host', 'hour', 'ignore', 'immediate', 'indicator', 'initialize', 'initially', 'inout', 'input', 'int', 'integer', 'intersection', 'interval', 
'isolation', 'iterate', 'language', 'large', 'last', 'lateral', 'leading', 'less', 'level', 'like_regex', 'limit', 'ln', 'local', 'localtime', 'localtimestamp', 'locator', 'map', 'match', 'member', 'method', 'minute', 'mod', 'modifies', 'modify', 'module', 'month', 'multiset', 'names', 'natural', 'nchar', 'nclob', 'new', 'next', 'no', 'none', 'normalize', 'numeric', 'object', 'occurrences_regex', 'old', 'only', 'operation', 'ordinality', 'out', 'output', 'overlay', 'pad', 'parameter', 'parameters', 'partial', 'partition', 'path', 'percent_rank', 'percentile_cont', 'percentile_disc', 'position_regex', 'postfix', 'prefix', 'preorder', 'prepare', 'preserve', 'prior', 'privileges', 'range', 'reads', 'real', 'recursive', 'ref', 'referencing', 'regr_avgx', 'regr_avgy', 'regr_count', 'regr_intercept', 'regr_r2', 'regr_slope', 'regr_sxx', 'regr_sxy', 'regr_syy', 'relative', 'release', 'result', 'returns', 'role', 'rollup', 'routine', 'row', 'rows', 'savepoint', 'scope', 'scroll', 'search', 'second', 'section', 'sensitive', 'sequence', 'session', 'sets', 'similar', 'size', 'smallint', 'space', 'specific', 'specifictype', 'sql', 'sqlexception', 'sqlstate', 'sqlwarning', 'start', 'state', 'statement', 'static', 'stddev_pop', 'stddev_samp', 'structure', 'submultiset', 'substring_regex', 'symmetric', 'system', 'temporary', 'terminate', 'than', 'time', 'timestamp', 'timezone_hour', 'timezone_minute', 'trailing', 'translate_regex', 'translation', 'treat', 'true', 'uescape', 'under', 'unknown', 'unnest', 'usage', 'using', 'value', 'var_pop', 'var_samp', 'varchar', 'variable', 'whenever', 'width_bucket', 'window', 'within', 'without', 'work', 'write', 'xmlagg', 'xmlattributes', 'xmlbinary', 'xmlcast', 'xmlcomment', 'xmlconcat', 'xmldocument', 'xmlelement', 'xmlexists', 'xmlforest', 'xmliterate', 'xmlnamespaces', 'xmlparse', 'xmlpi', 'xmlquery', 'xmlserialize', 'xmltable', 'xmltext', 'xmlvalidate', 'year', 'zone', ) _KEYWORDS_ODBC = ( 'absolute', 'action', 'ada', 'add', 'all', 'allocate', 'alter', 'and', 'any', 'are', 'as', 'asc', 'assertion', 'at', 'authorization', 'avg', 'begin', 'between', 'bit', 'bit_length', 'both', 'by', 'cascade', 'cascaded', 'case', 'cast', 'catalog', 'char', 'char_length', 'character', 'character_length', 'check', 'close', 'coalesce', 'collate', 'collation', 'column', 'commit', 'connect', 'connection', 'constraint', 'constraints', 'continue', 'convert', 'corresponding', 'count', 'create', 'cross', 'current', 'current_date', 'current_time', 'current_timestamp', 'current_user', 'cursor', 'date', 'day', 'deallocate', 'dec', 'decimal', 'declare', 'default', 'deferrable', 'deferred', 'delete', 'desc', 'describe', 'descriptor', 'diagnostics', 'disconnect', 'distinct', 'domain', 'double', 'drop', 'else', 'end', 'end-exec', 'escape', 'except', 'exception', 'exec', 'execute', 'exists', 'external', 'extract', 'false', 'fetch', 'first', 'float', 'for', 'foreign', 'fortran', 'found', 'from', 'full', 'get', 'global', 'go', 'goto', 'grant', 'group', 'having', 'hour', 'identity', 'immediate', 'in', 'include', 'index', 'indicator', 'initially', 'inner', 'input', 'insensitive', 'insert', 'int', 'integer', 'intersect', 'interval', 'into', 'is', 'isolation', 'join', 'key', 'language', 'last', 'leading', 'left', 'level', 'like', 'local', 'lower', 'match', 'max', 'min', 'minute', 'module', 'month', 'names', 'national', 'natural', 'nchar', 'next', 'no', 'none', 'not', 'null', 'nullif', 'numeric', 'octet_length', 'of', 'on', 'only', 'open', 'option', 'or', 'order', 'outer', 'output', 'overlaps', 'pad', 
'partial', 'pascal', 'position', 'precision', 'prepare', 'preserve', 'primary', 'prior', 'privileges', 'procedure', 'public', 'read', 'real', 'references', 'relative', 'restrict', 'revoke', 'right', 'rollback', 'rows', 'schema', 'scroll', 'second', 'section', 'select', 'session', 'session_user', 'set', 'size', 'smallint', 'some', 'space', 'sql', 'sqlca', 'sqlcode', 'sqlerror', 'sqlstate', 'sqlwarning', 'substring', 'sum', 'system_user', 'table', 'temporary', 'then', 'time', 'timestamp', 'timezone_hour', 'timezone_minute', 'to', 'trailing', 'transaction', 'translate', 'translation', 'trim', 'true', 'union', 'unique', 'unknown', 'update', 'upper', 'usage', 'user', 'using', 'value', 'values', 'varchar', 'varying', 'view', 'when', 'whenever', 'where', 'with', 'work', 'write', 'year', 'zone', ) # See https://msdn.microsoft.com/en-us/library/ms189822.aspx. KEYWORDS = sorted(set(_KEYWORDS_FUTURE + _KEYWORDS_ODBC + _KEYWORDS_SERVER)) # See https://msdn.microsoft.com/en-us/library/ms187752.aspx. TYPES = ( 'bigint', 'binary', 'bit', 'char', 'cursor', 'date', 'datetime', 'datetime2', 'datetimeoffset', 'decimal', 'float', 'hierarchyid', 'image', 'int', 'money', 'nchar', 'ntext', 'numeric', 'nvarchar', 'real', 'smalldatetime', 'smallint', 'smallmoney', 'sql_variant', 'table', 'text', 'time', 'timestamp', 'tinyint', 'uniqueidentifier', 'varbinary', 'varchar', 'xml', ) # See https://msdn.microsoft.com/en-us/library/ms174318.aspx. FUNCTIONS = ( '$partition', 'abs', 'acos', 'app_name', 'applock_mode', 'applock_test', 'ascii', 'asin', 'assemblyproperty', 'atan', 'atn2', 'avg', 'binary_checksum', 'cast', 'ceiling', 'certencoded', 'certprivatekey', 'char', 'charindex', 'checksum', 'checksum_agg', 'choose', 'col_length', 'col_name', 'columnproperty', 'compress', 'concat', 'connectionproperty', 'context_info', 'convert', 'cos', 'cot', 'count', 'count_big', 'current_request_id', 'current_timestamp', 'current_transaction_id', 'current_user', 'cursor_status', 'database_principal_id', 'databasepropertyex', 'dateadd', 'datediff', 'datediff_big', 'datefromparts', 'datename', 'datepart', 'datetime2fromparts', 'datetimefromparts', 'datetimeoffsetfromparts', 'day', 'db_id', 'db_name', 'decompress', 'degrees', 'dense_rank', 'difference', 'eomonth', 'error_line', 'error_message', 'error_number', 'error_procedure', 'error_severity', 'error_state', 'exp', 'file_id', 'file_idex', 'file_name', 'filegroup_id', 'filegroup_name', 'filegroupproperty', 'fileproperty', 'floor', 'format', 'formatmessage', 'fulltextcatalogproperty', 'fulltextserviceproperty', 'get_filestream_transaction_context', 'getansinull', 'getdate', 'getutcdate', 'grouping', 'grouping_id', 'has_perms_by_name', 'host_id', 'host_name', 'iif', 'index_col', 'indexkey_property', 'indexproperty', 'is_member', 'is_rolemember', 'is_srvrolemember', 'isdate', 'isjson', 'isnull', 'isnumeric', 'json_modify', 'json_query', 'json_value', 'left', 'len', 'log', 'log10', 'lower', 'ltrim', 'max', 'min', 'min_active_rowversion', 'month', 'nchar', 'newid', 'newsequentialid', 'ntile', 'object_definition', 'object_id', 'object_name', 'object_schema_name', 'objectproperty', 'objectpropertyex', 'opendatasource', 'openjson', 'openquery', 'openrowset', 'openxml', 'original_db_name', 'original_login', 'parse', 'parsename', 'patindex', 'permissions', 'pi', 'power', 'pwdcompare', 'pwdencrypt', 'quotename', 'radians', 'rand', 'rank', 'replace', 'replicate', 'reverse', 'right', 'round', 'row_number', 'rowcount_big', 'rtrim', 'schema_id', 'schema_name', 'scope_identity', 'serverproperty', 
'session_context', 'session_user', 'sign', 'sin', 'smalldatetimefromparts', 'soundex', 'sp_helplanguage', 'space', 'sqrt', 'square', 'stats_date', 'stdev', 'stdevp', 'str', 'string_escape', 'string_split', 'stuff', 'substring', 'sum', 'suser_id', 'suser_name', 'suser_sid', 'suser_sname', 'switchoffset', 'sysdatetime', 'sysdatetimeoffset', 'system_user', 'sysutcdatetime', 'tan', 'textptr', 'textvalid', 'timefromparts', 'todatetimeoffset', 'try_cast', 'try_convert', 'try_parse', 'type_id', 'type_name', 'typeproperty', 'unicode', 'upper', 'user_id', 'user_name', 'var', 'varp', 'xact_state', 'year', ) pygments-2.11.2/pygments/lexers/python.py0000644000175000017500000014677014165547207020420 0ustar carstencarsten""" pygments.lexers.python ~~~~~~~~~~~~~~~~~~~~~~ Lexers for Python and related languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import keyword from pygments.lexer import Lexer, RegexLexer, include, bygroups, using, \ default, words, combined, do_insertions, this from pygments.util import get_bool_opt, shebang_matches from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Other, Error from pygments import unistring as uni __all__ = ['PythonLexer', 'PythonConsoleLexer', 'PythonTracebackLexer', 'Python2Lexer', 'Python2TracebackLexer', 'CythonLexer', 'DgLexer', 'NumPyLexer'] line_re = re.compile('.*?\n') class PythonLexer(RegexLexer): """ For `Python `_ source code (version 3.x). .. versionadded:: 0.10 .. versionchanged:: 2.5 This is now the default ``PythonLexer``. It is still available as the alias ``Python3Lexer``. """ name = 'Python' aliases = ['python', 'py', 'sage', 'python3', 'py3'] filenames = [ '*.py', '*.pyw', # Jython '*.jy', # Sage '*.sage', # SCons '*.sc', 'SConstruct', 'SConscript', # Skylark/Starlark (used by Bazel, Buck, and Pants) '*.bzl', 'BUCK', 'BUILD', 'BUILD.bazel', 'WORKSPACE', # Twisted Application infrastructure '*.tac', ] mimetypes = ['text/x-python', 'application/x-python', 'text/x-python3', 'application/x-python3'] flags = re.MULTILINE | re.UNICODE uni_name = "[%s][%s]*" % (uni.xid_start, uni.xid_continue) def innerstring_rules(ttype): return [ # the old style '%s' % (...) string formatting (still valid in Py3) (r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?' '[hlL]?[E-GXc-giorsaux%]', String.Interpol), # the new style '{}'.format(...) string formatting (r'\{' r'((\w+)((\.\w+)|(\[[^\]]+\]))*)?' # field name r'(\![sra])?' # conversion r'(\:(.?[<>=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?' r'\}', String.Interpol), # backslashes, quotes and formatting signs must be parsed one at a time (r'[^\\\'"%{\n]+', ttype), (r'[\'"\\]', ttype), # unhandled string formatting sign (r'%|(\{{1,2})', ttype) # newlines are an error (use "nl" state) ] def fstring_rules(ttype): return [ # Assuming that a '}' is the closing brace after format specifier. # Sadly, this means that we won't detect syntax error. But it's # more important to parse correct syntax correctly, than to # highlight invalid syntax. 
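# For example (an illustrative note, not in the original source): in the
# f-string f"{value:>{width}}", the '}' after 'width' closes the nested
# expression, and the final '}' is the closing brace after the format
# specifier that the rule below consumes -- exactly the assumption
# described above.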
(r'\}', String.Interpol), (r'\{', String.Interpol, 'expr-inside-fstring'), # backslashes, quotes and formatting signs must be parsed one at a time (r'[^\\\'"{}\n]+', ttype), (r'[\'"\\]', ttype), # newlines are an error (use "nl" state) ] tokens = { 'root': [ (r'\n', Text), (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")', bygroups(Text, String.Affix, String.Doc)), (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')", bygroups(Text, String.Affix, String.Doc)), (r'\A#!.+$', Comment.Hashbang), (r'#.*$', Comment.Single), (r'\\\n', Text), (r'\\', Text), include('keywords'), include('soft-keywords'), (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'), (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'), (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), 'fromimport'), (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), 'import'), include('expr'), ], 'expr': [ # raw f-strings ('(?i)(rf|fr)(""")', bygroups(String.Affix, String.Double), combined('rfstringescape', 'tdqf')), ("(?i)(rf|fr)(''')", bygroups(String.Affix, String.Single), combined('rfstringescape', 'tsqf')), ('(?i)(rf|fr)(")', bygroups(String.Affix, String.Double), combined('rfstringescape', 'dqf')), ("(?i)(rf|fr)(')", bygroups(String.Affix, String.Single), combined('rfstringescape', 'sqf')), # non-raw f-strings ('([fF])(""")', bygroups(String.Affix, String.Double), combined('fstringescape', 'tdqf')), ("([fF])(''')", bygroups(String.Affix, String.Single), combined('fstringescape', 'tsqf')), ('([fF])(")', bygroups(String.Affix, String.Double), combined('fstringescape', 'dqf')), ("([fF])(')", bygroups(String.Affix, String.Single), combined('fstringescape', 'sqf')), # raw strings ('(?i)(rb|br|r)(""")', bygroups(String.Affix, String.Double), 'tdqs'), ("(?i)(rb|br|r)(''')", bygroups(String.Affix, String.Single), 'tsqs'), ('(?i)(rb|br|r)(")', bygroups(String.Affix, String.Double), 'dqs'), ("(?i)(rb|br|r)(')", bygroups(String.Affix, String.Single), 'sqs'), # non-raw strings ('([uUbB]?)(""")', bygroups(String.Affix, String.Double), combined('stringescape', 'tdqs')), ("([uUbB]?)(''')", bygroups(String.Affix, String.Single), combined('stringescape', 'tsqs')), ('([uUbB]?)(")', bygroups(String.Affix, String.Double), combined('stringescape', 'dqs')), ("([uUbB]?)(')", bygroups(String.Affix, String.Single), combined('stringescape', 'sqs')), (r'[^\S\n]+', Text), include('numbers'), (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|.]', Operator), (r'[]{}:(),;[]', Punctuation), (r'(in|is|and|or|not)\b', Operator.Word), include('expr-keywords'), include('builtins'), include('magicfuncs'), include('magicvars'), include('name'), ], 'expr-inside-fstring': [ (r'[{([]', Punctuation, 'expr-inside-fstring-inner'), # without format specifier (r'(=\s*)?' # debug (https://bugs.python.org/issue36817) r'(\![sraf])?' # conversion r'\}', String.Interpol, '#pop'), # with format specifier # we'll catch the remaining '}' in the outer scope (r'(=\s*)?' # debug (https://bugs.python.org/issue36817) r'(\![sraf])?' 
# conversion r':', String.Interpol, '#pop'), (r'\s+', Text), # allow new lines include('expr'), ], 'expr-inside-fstring-inner': [ (r'[{([]', Punctuation, 'expr-inside-fstring-inner'), (r'[])}]', Punctuation, '#pop'), (r'\s+', Text), # allow new lines include('expr'), ], 'expr-keywords': [ # Based on https://docs.python.org/3/reference/expressions.html (words(( 'async for', 'await', 'else', 'for', 'if', 'lambda', 'yield', 'yield from'), suffix=r'\b'), Keyword), (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant), ], 'keywords': [ (words(( 'assert', 'async', 'await', 'break', 'continue', 'del', 'elif', 'else', 'except', 'finally', 'for', 'global', 'if', 'lambda', 'pass', 'raise', 'nonlocal', 'return', 'try', 'while', 'yield', 'yield from', 'as', 'with'), suffix=r'\b'), Keyword), (words(('True', 'False', 'None'), suffix=r'\b'), Keyword.Constant), ], 'soft-keywords': [ # `match`, `case` and `_` soft keywords (r'(^[ \t]*)' # at beginning of line + possible indentation r'(match|case)\b' # a possible keyword r'(?![ \t]*(?:' # not followed by... r'[:,;=^&|@~)\]}]|(?:' + # characters and keywords that mean this isn't r'|'.join(keyword.kwlist) + r')\b))', # pattern matching bygroups(Text, Keyword), 'soft-keywords-inner'), ], 'soft-keywords-inner': [ # optional `_` keyword (r'(\s+)([^\n_]*)(_\b)', bygroups(Text, using(this), Keyword)), default('#pop') ], 'builtins': [ (words(( '__import__', 'abs', 'all', 'any', 'bin', 'bool', 'bytearray', 'breakpoint', 'bytes', 'chr', 'classmethod', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass', 'iter', 'len', 'list', 'locals', 'map', 'max', 'memoryview', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'vars', 'zip'), prefix=r'(?`_ source code. .. versionchanged:: 2.5 This class has been renamed from ``PythonLexer``. ``PythonLexer`` now refers to the Python 3 variant. File name patterns like ``*.py`` have been moved to Python 3 as well. """ name = 'Python 2.x' aliases = ['python2', 'py2'] filenames = [] # now taken over by PythonLexer (3.x) mimetypes = ['text/x-python2', 'application/x-python2'] def innerstring_rules(ttype): return [ # the old style '%s' % (...) string formatting (r'%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?' 
'[hlL]?[E-GXc-giorsux%]', String.Interpol), # backslashes, quotes and formatting signs must be parsed one at a time (r'[^\\\'"%\n]+', ttype), (r'[\'"\\]', ttype), # unhandled string formatting sign (r'%', ttype), # newlines are an error (use "nl" state) ] tokens = { 'root': [ (r'\n', Text), (r'^(\s*)([rRuUbB]{,2})("""(?:.|\n)*?""")', bygroups(Text, String.Affix, String.Doc)), (r"^(\s*)([rRuUbB]{,2})('''(?:.|\n)*?''')", bygroups(Text, String.Affix, String.Doc)), (r'[^\S\n]+', Text), (r'\A#!.+$', Comment.Hashbang), (r'#.*$', Comment.Single), (r'[]{}:(),;[]', Punctuation), (r'\\\n', Text), (r'\\', Text), (r'(in|is|and|or|not)\b', Operator.Word), (r'!=|==|<<|>>|[-~+/*%=<>&^|.]', Operator), include('keywords'), (r'(def)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'funcname'), (r'(class)((?:\s|\\\s)+)', bygroups(Keyword, Text), 'classname'), (r'(from)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), 'fromimport'), (r'(import)((?:\s|\\\s)+)', bygroups(Keyword.Namespace, Text), 'import'), include('builtins'), include('magicfuncs'), include('magicvars'), include('backtick'), ('([rR]|[uUbB][rR]|[rR][uUbB])(""")', bygroups(String.Affix, String.Double), 'tdqs'), ("([rR]|[uUbB][rR]|[rR][uUbB])(''')", bygroups(String.Affix, String.Single), 'tsqs'), ('([rR]|[uUbB][rR]|[rR][uUbB])(")', bygroups(String.Affix, String.Double), 'dqs'), ("([rR]|[uUbB][rR]|[rR][uUbB])(')", bygroups(String.Affix, String.Single), 'sqs'), ('([uUbB]?)(""")', bygroups(String.Affix, String.Double), combined('stringescape', 'tdqs')), ("([uUbB]?)(''')", bygroups(String.Affix, String.Single), combined('stringescape', 'tsqs')), ('([uUbB]?)(")', bygroups(String.Affix, String.Double), combined('stringescape', 'dqs')), ("([uUbB]?)(')", bygroups(String.Affix, String.Single), combined('stringescape', 'sqs')), include('name'), include('numbers'), ], 'keywords': [ (words(( 'assert', 'break', 'continue', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'global', 'if', 'lambda', 'pass', 'print', 'raise', 'return', 'try', 'while', 'yield', 'yield from', 'as', 'with'), suffix=r'\b'), Keyword), ], 'builtins': [ (words(( '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float', 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len', 'list', 'locals', 'long', 'map', 'max', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'property', 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'vars', 'xrange', 'zip'), prefix=r'(?>> a = 'foo' >>> print a foo >>> 1 / 0 Traceback (most recent call last): File "", line 1, in ZeroDivisionError: integer division or modulo by zero Additional options: `python3` Use Python 3 lexer for code. Default is ``True``. .. versionadded:: 1.0 .. versionchanged:: 2.5 Now defaults to ``True``. 
""" name = 'Python console session' aliases = ['pycon'] mimetypes = ['text/x-python-doctest'] def __init__(self, **options): self.python3 = get_bool_opt(options, 'python3', True) Lexer.__init__(self, **options) def get_tokens_unprocessed(self, text): if self.python3: pylexer = PythonLexer(**self.options) tblexer = PythonTracebackLexer(**self.options) else: pylexer = Python2Lexer(**self.options) tblexer = Python2TracebackLexer(**self.options) curcode = '' insertions = [] curtb = '' tbindex = 0 tb = 0 for match in line_re.finditer(text): line = match.group() if line.startswith('>>> ') or line.startswith('... '): tb = 0 insertions.append((len(curcode), [(0, Generic.Prompt, line[:4])])) curcode += line[4:] elif line.rstrip() == '...' and not tb: # only a new >>> prompt can end an exception block # otherwise an ellipsis in place of the traceback frames # will be mishandled insertions.append((len(curcode), [(0, Generic.Prompt, '...')])) curcode += line[3:] else: if curcode: yield from do_insertions( insertions, pylexer.get_tokens_unprocessed(curcode)) curcode = '' insertions = [] if (line.startswith('Traceback (most recent call last):') or re.match(' File "[^"]+", line \\d+\\n$', line)): tb = 1 curtb = line tbindex = match.start() elif line == 'KeyboardInterrupt\n': yield match.start(), Name.Class, line elif tb: curtb += line if not (line.startswith(' ') or line.strip() == '...'): tb = 0 for i, t, v in tblexer.get_tokens_unprocessed(curtb): yield tbindex+i, t, v curtb = '' else: yield match.start(), Generic.Output, line if curcode: yield from do_insertions(insertions, pylexer.get_tokens_unprocessed(curcode)) if curtb: for i, t, v in tblexer.get_tokens_unprocessed(curtb): yield tbindex+i, t, v class PythonTracebackLexer(RegexLexer): """ For Python 3.x tracebacks, with support for chained exceptions. .. versionadded:: 1.0 .. versionchanged:: 2.5 This is now the default ``PythonTracebackLexer``. It is still available as the alias ``Python3TracebackLexer``. """ name = 'Python Traceback' aliases = ['pytb', 'py3tb'] filenames = ['*.pytb', '*.py3tb'] mimetypes = ['text/x-python-traceback', 'text/x-python3-traceback'] tokens = { 'root': [ (r'\n', Text), (r'^Traceback \(most recent call last\):\n', Generic.Traceback, 'intb'), (r'^During handling of the above exception, another ' r'exception occurred:\n\n', Generic.Traceback), (r'^The above exception was the direct cause of the ' r'following exception:\n\n', Generic.Traceback), (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'), (r'^.*\n', Other), ], 'intb': [ (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)', bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)), (r'^( File )("[^"]+")(, line )(\d+)(\n)', bygroups(Text, Name.Builtin, Text, Number, Text)), (r'^( )(.+)(\n)', bygroups(Text, using(PythonLexer), Text), 'markers'), (r'^([ \t]*)(\.\.\.)(\n)', bygroups(Text, Comment, Text)), # for doctests... (r'^([^:]+)(: )(.+)(\n)', bygroups(Generic.Error, Text, Name, Text), '#pop'), (r'^([a-zA-Z_][\w.]*)(:?\n)', bygroups(Generic.Error, Text), '#pop') ], 'markers': [ # Either `PEP 657 ` # error locations in Python 3.11+, or single-caret markers # for syntax errors before that. (r'^( {4,})([~^]+)(\n)', bygroups(Text, Punctuation.Marker, Text), '#pop'), default('#pop'), ], } Python3TracebackLexer = PythonTracebackLexer class Python2TracebackLexer(RegexLexer): """ For Python tracebacks. .. versionadded:: 0.7 .. versionchanged:: 2.5 This class has been renamed from ``PythonTracebackLexer``. 
``PythonTracebackLexer`` now refers to the Python 3 variant. """ name = 'Python 2.x Traceback' aliases = ['py2tb'] filenames = ['*.py2tb'] mimetypes = ['text/x-python2-traceback'] tokens = { 'root': [ # Cover both (most recent call last) and (innermost last) # The optional ^C allows us to catch keyboard interrupt signals. (r'^(\^C)?(Traceback.*\n)', bygroups(Text, Generic.Traceback), 'intb'), # SyntaxError starts with this. (r'^(?= File "[^"]+", line \d+)', Generic.Traceback, 'intb'), (r'^.*\n', Other), ], 'intb': [ (r'^( File )("[^"]+")(, line )(\d+)(, in )(.+)(\n)', bygroups(Text, Name.Builtin, Text, Number, Text, Name, Text)), (r'^( File )("[^"]+")(, line )(\d+)(\n)', bygroups(Text, Name.Builtin, Text, Number, Text)), (r'^( )(.+)(\n)', bygroups(Text, using(Python2Lexer), Text), 'marker'), (r'^([ \t]*)(\.\.\.)(\n)', bygroups(Text, Comment, Text)), # for doctests... (r'^([^:]+)(: )(.+)(\n)', bygroups(Generic.Error, Text, Name, Text), '#pop'), (r'^([a-zA-Z_]\w*)(:?\n)', bygroups(Generic.Error, Text), '#pop') ], 'marker': [ # For syntax errors. (r'( {4,})(\^)', bygroups(Text, Punctuation.Marker), '#pop'), default('#pop'), ], } class CythonLexer(RegexLexer): """ For Pyrex and `Cython `_ source code. .. versionadded:: 1.1 """ name = 'Cython' aliases = ['cython', 'pyx', 'pyrex'] filenames = ['*.pyx', '*.pxd', '*.pxi'] mimetypes = ['text/x-cython', 'application/x-cython'] tokens = { 'root': [ (r'\n', Text), (r'^(\s*)("""(?:.|\n)*?""")', bygroups(Text, String.Doc)), (r"^(\s*)('''(?:.|\n)*?''')", bygroups(Text, String.Doc)), (r'[^\S\n]+', Text), (r'#.*$', Comment), (r'[]{}:(),;[]', Punctuation), (r'\\\n', Text), (r'\\', Text), (r'(in|is|and|or|not)\b', Operator.Word), (r'(<)([a-zA-Z0-9.?]+)(>)', bygroups(Punctuation, Keyword.Type, Punctuation)), (r'!=|==|<<|>>|[-~+/*%=<>&^|.?]', Operator), (r'(from)(\d+)(<=)(\s+)(<)(\d+)(:)', bygroups(Keyword, Number.Integer, Operator, Name, Operator, Name, Punctuation)), include('keywords'), (r'(def|property)(\s+)', bygroups(Keyword, Text), 'funcname'), (r'(cp?def)(\s+)', bygroups(Keyword, Text), 'cdef'), # (should actually start a block with only cdefs) (r'(cdef)(:)', bygroups(Keyword, Punctuation)), (r'(class|struct)(\s+)', bygroups(Keyword, Text), 'classname'), (r'(from)(\s+)', bygroups(Keyword, Text), 'fromimport'), (r'(c?import)(\s+)', bygroups(Keyword, Text), 'import'), include('builtins'), include('backtick'), ('(?:[rR]|[uU][rR]|[rR][uU])"""', String, 'tdqs'), ("(?:[rR]|[uU][rR]|[rR][uU])'''", String, 'tsqs'), ('(?:[rR]|[uU][rR]|[rR][uU])"', String, 'dqs'), ("(?:[rR]|[uU][rR]|[rR][uU])'", String, 'sqs'), ('[uU]?"""', String, combined('stringescape', 'tdqs')), ("[uU]?'''", String, combined('stringescape', 'tsqs')), ('[uU]?"', String, combined('stringescape', 'dqs')), ("[uU]?'", String, combined('stringescape', 'sqs')), include('name'), include('numbers'), ], 'keywords': [ (words(( 'assert', 'async', 'await', 'break', 'by', 'continue', 'ctypedef', 'del', 'elif', 'else', 'except', 'except?', 'exec', 'finally', 'for', 'fused', 'gil', 'global', 'if', 'include', 'lambda', 'nogil', 'pass', 'print', 'raise', 'return', 'try', 'while', 'yield', 'as', 'with'), suffix=r'\b'), Keyword), (r'(DEF|IF|ELIF|ELSE)\b', Comment.Preproc), ], 'builtins': [ (words(( '__import__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bint', 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float', 'frozenset', 'getattr', 
'globals', 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len', 'list', 'locals', 'long', 'map', 'max', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'property', 'Py_ssize_t', 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'unsigned', 'vars', 'xrange', 'zip'), prefix=r'(?`_, a functional and object-oriented programming language running on the CPython 3 VM. .. versionadded:: 1.6 """ name = 'dg' aliases = ['dg'] filenames = ['*.dg'] mimetypes = ['text/x-dg'] tokens = { 'root': [ (r'\s+', Text), (r'#.*?$', Comment.Single), (r'(?i)0b[01]+', Number.Bin), (r'(?i)0o[0-7]+', Number.Oct), (r'(?i)0x[0-9a-f]+', Number.Hex), (r'(?i)[+-]?[0-9]+\.[0-9]+(e[+-]?[0-9]+)?j?', Number.Float), (r'(?i)[+-]?[0-9]+e[+-]?\d+j?', Number.Float), (r'(?i)[+-]?[0-9]+j?', Number.Integer), (r"(?i)(br|r?b?)'''", String, combined('stringescape', 'tsqs', 'string')), (r'(?i)(br|r?b?)"""', String, combined('stringescape', 'tdqs', 'string')), (r"(?i)(br|r?b?)'", String, combined('stringescape', 'sqs', 'string')), (r'(?i)(br|r?b?)"', String, combined('stringescape', 'dqs', 'string')), (r"`\w+'*`", Operator), (r'\b(and|in|is|or|where)\b', Operator.Word), (r'[!$%&*+\-./:<-@\\^|~;,]+', Operator), (words(( 'bool', 'bytearray', 'bytes', 'classmethod', 'complex', 'dict', 'dict\'', 'float', 'frozenset', 'int', 'list', 'list\'', 'memoryview', 'object', 'property', 'range', 'set', 'set\'', 'slice', 'staticmethod', 'str', 'super', 'tuple', 'tuple\'', 'type'), prefix=r'(?|/=|=:=|=/=|=<|>=|==?|<-|!|\?)' word_operators = ( 'and', 'andalso', 'band', 'bnot', 'bor', 'bsl', 'bsr', 'bxor', 'div', 'not', 'or', 'orelse', 'rem', 'xor' ) atom_re = r"(?:[a-z]\w*|'[^\n']*[^\\]')" variable_re = r'(?:[A-Z_]\w*)' esc_char_re = r'[bdefnrstv\'"\\]' esc_octal_re = r'[0-7][0-7]?[0-7]?' esc_hex_re = r'(?:x[0-9a-fA-F]{2}|x\{[0-9a-fA-F]+\})' esc_ctrl_re = r'\^[a-zA-Z]' escape_re = r'(?:\\(?:'+esc_char_re+r'|'+esc_octal_re+r'|'+esc_hex_re+r'|'+esc_ctrl_re+r'))' macro_re = r'(?:'+variable_re+r'|'+atom_re+r')' base_re = r'(?:[2-9]|[12][0-9]|3[0-6])' tokens = { 'root': [ (r'\s+', Whitespace), (r'(%.*)(\n)', bygroups(Comment, Whitespace)), (words(keywords, suffix=r'\b'), Keyword), (words(builtins, suffix=r'\b'), Name.Builtin), (words(word_operators, suffix=r'\b'), Operator.Word), (r'^-', Punctuation, 'directive'), (operators, Operator), (r'"', String, 'string'), (r'<<', Name.Label), (r'>>', Name.Label), ('(' + atom_re + ')(:)', bygroups(Name.Namespace, Punctuation)), ('(?:^|(?<=:))(' + atom_re + r')(\s*)(\()', bygroups(Name.Function, Whitespace, Punctuation)), (r'[+-]?' 
+ base_re + r'#[0-9a-zA-Z]+', Number.Integer), (r'[+-]?\d+', Number.Integer), (r'[+-]?\d+.\d+', Number.Float), (r'[]\[:_@\".{}()|;,]', Punctuation), (variable_re, Name.Variable), (atom_re, Name), (r'\?'+macro_re, Name.Constant), (r'\$(?:'+escape_re+r'|\\[ %]|[^\\])', String.Char), (r'#'+atom_re+r'(:?\.'+atom_re+r')?', Name.Label), # Erlang script shebang (r'\A#!.+\n', Comment.Hashbang), # EEP 43: Maps # http://www.erlang.org/eeps/eep-0043.html (r'#\{', Punctuation, 'map_key'), ], 'string': [ (escape_re, String.Escape), (r'"', String, '#pop'), (r'~[0-9.*]*[~#+BPWXb-ginpswx]', String.Interpol), (r'[^"\\~]+', String), (r'~', String), ], 'directive': [ (r'(define)(\s*)(\()('+macro_re+r')', bygroups(Name.Entity, Whitespace, Punctuation, Name.Constant), '#pop'), (r'(record)(\s*)(\()('+macro_re+r')', bygroups(Name.Entity, Whitespace, Punctuation, Name.Label), '#pop'), (atom_re, Name.Entity, '#pop'), ], 'map_key': [ include('root'), (r'=>', Punctuation, 'map_val'), (r':=', Punctuation, 'map_val'), (r'\}', Punctuation, '#pop'), ], 'map_val': [ include('root'), (r',', Punctuation, '#pop'), (r'(?=\})', Punctuation, '#pop'), ], } class ErlangShellLexer(Lexer): """ Shell sessions in erl (for Erlang code). .. versionadded:: 1.1 """ name = 'Erlang erl session' aliases = ['erl'] filenames = ['*.erl-sh'] mimetypes = ['text/x-erl-shellsession'] _prompt_re = re.compile(r'(?:\([\w@_.]+\))?\d+>(?=\s|\Z)') def get_tokens_unprocessed(self, text): erlexer = ErlangLexer(**self.options) curcode = '' insertions = [] for match in line_re.finditer(text): line = match.group() m = self._prompt_re.match(line) if m is not None: end = m.end() insertions.append((len(curcode), [(0, Generic.Prompt, line[:end])])) curcode += line[end:] else: if curcode: yield from do_insertions(insertions, erlexer.get_tokens_unprocessed(curcode)) curcode = '' insertions = [] if line.startswith('*'): yield match.start(), Generic.Traceback, line else: yield match.start(), Generic.Output, line if curcode: yield from do_insertions(insertions, erlexer.get_tokens_unprocessed(curcode)) def gen_elixir_string_rules(name, symbol, token): states = {} states['string_' + name] = [ (r'[^#%s\\]+' % (symbol,), token), include('escapes'), (r'\\.', token), (r'(%s)' % (symbol,), bygroups(token), "#pop"), include('interpol') ] return states def gen_elixir_sigstr_rules(term, term_class, token, interpol=True): if interpol: return [ (r'[^#%s\\]+' % (term_class,), token), include('escapes'), (r'\\.', token), (r'%s[a-zA-Z]*' % (term,), token, '#pop'), include('interpol') ] else: return [ (r'[^%s\\]+' % (term_class,), token), (r'\\.', token), (r'%s[a-zA-Z]*' % (term,), token, '#pop'), ] class ElixirLexer(RegexLexer): """ For the `Elixir language `_. .. 
versionadded:: 1.5 """ name = 'Elixir' aliases = ['elixir', 'ex', 'exs'] filenames = ['*.ex', '*.eex', '*.exs', '*.leex'] mimetypes = ['text/x-elixir'] KEYWORD = ('fn', 'do', 'end', 'after', 'else', 'rescue', 'catch') KEYWORD_OPERATOR = ('not', 'and', 'or', 'when', 'in') BUILTIN = ( 'case', 'cond', 'for', 'if', 'unless', 'try', 'receive', 'raise', 'quote', 'unquote', 'unquote_splicing', 'throw', 'super', ) BUILTIN_DECLARATION = ( 'def', 'defp', 'defmodule', 'defprotocol', 'defmacro', 'defmacrop', 'defdelegate', 'defexception', 'defstruct', 'defimpl', 'defcallback', ) BUILTIN_NAMESPACE = ('import', 'require', 'use', 'alias') CONSTANT = ('nil', 'true', 'false') PSEUDO_VAR = ('_', '__MODULE__', '__DIR__', '__ENV__', '__CALLER__') OPERATORS3 = ( '<<<', '>>>', '|||', '&&&', '^^^', '~~~', '===', '!==', '~>>', '<~>', '|~>', '<|>', ) OPERATORS2 = ( '==', '!=', '<=', '>=', '&&', '||', '<>', '++', '--', '|>', '=~', '->', '<-', '|', '.', '=', '~>', '<~', ) OPERATORS1 = ('<', '>', '+', '-', '*', '/', '!', '^', '&') PUNCTUATION = ( '\\\\', '<<', '>>', '=>', '(', ')', ':', ';', ',', '[', ']', ) def get_tokens_unprocessed(self, text): for index, token, value in RegexLexer.get_tokens_unprocessed(self, text): if token is Name: if value in self.KEYWORD: yield index, Keyword, value elif value in self.KEYWORD_OPERATOR: yield index, Operator.Word, value elif value in self.BUILTIN: yield index, Keyword, value elif value in self.BUILTIN_DECLARATION: yield index, Keyword.Declaration, value elif value in self.BUILTIN_NAMESPACE: yield index, Keyword.Namespace, value elif value in self.CONSTANT: yield index, Name.Constant, value elif value in self.PSEUDO_VAR: yield index, Name.Builtin.Pseudo, value else: yield index, token, value else: yield index, token, value def gen_elixir_sigil_rules(): # all valid sigil terminators (excluding heredocs) terminators = [ (r'\{', r'\}', '}', 'cb'), (r'\[', r'\]', r'\]', 'sb'), (r'\(', r'\)', ')', 'pa'), ('<', '>', '>', 'ab'), ('/', '/', '/', 'slas'), (r'\|', r'\|', '|', 'pipe'), ('"', '"', '"', 'quot'), ("'", "'", "'", 'apos'), ] # heredocs have slightly different rules triquotes = [(r'"""', 'triquot'), (r"'''", 'triapos')] token = String.Other states = {'sigils': []} for term, name in triquotes: states['sigils'] += [ (r'(~[a-z])(%s)' % (term,), bygroups(token, String.Heredoc), (name + '-end', name + '-intp')), (r'(~[A-Z])(%s)' % (term,), bygroups(token, String.Heredoc), (name + '-end', name + '-no-intp')), ] states[name + '-end'] = [ (r'[a-zA-Z]+', token, '#pop'), default('#pop'), ] states[name + '-intp'] = [ (r'^(\s*)(' + term + ')', bygroups(Whitespace, String.Heredoc), '#pop'), include('heredoc_interpol'), ] states[name + '-no-intp'] = [ (r'^(\s*)(' + term +')', bygroups(Whitespace, String.Heredoc), '#pop'), include('heredoc_no_interpol'), ] for lterm, rterm, rterm_class, name in terminators: states['sigils'] += [ (r'~[a-z]' + lterm, token, name + '-intp'), (r'~[A-Z]' + lterm, token, name + '-no-intp'), ] states[name + '-intp'] = \ gen_elixir_sigstr_rules(rterm, rterm_class, token) states[name + '-no-intp'] = \ gen_elixir_sigstr_rules(rterm, rterm_class, token, interpol=False) return states op3_re = "|".join(re.escape(s) for s in OPERATORS3) op2_re = "|".join(re.escape(s) for s in OPERATORS2) op1_re = "|".join(re.escape(s) for s in OPERATORS1) ops_re = r'(?:%s|%s|%s)' % (op3_re, op2_re, op1_re) punctuation_re = "|".join(re.escape(s) for s in PUNCTUATION) alnum = r'\w' name_re = r'(?:\.\.\.|[a-z_]%s*[!?]?)' % alnum modname_re = r'[A-Z]%(alnum)s*(?:\.[A-Z]%(alnum)s*)*' % 
{'alnum': alnum} complex_name_re = r'(?:%s|%s|%s)' % (name_re, modname_re, ops_re) special_atom_re = r'(?:\.\.\.|<<>>|%\{\}|%|\{\})' long_hex_char_re = r'(\\x\{)([\da-fA-F]+)(\})' hex_char_re = r'(\\x[\da-fA-F]{1,2})' escape_char_re = r'(\\[abdefnrstv])' tokens = { 'root': [ (r'\s+', Whitespace), (r'#.*$', Comment.Single), # Various kinds of characters (r'(\?)' + long_hex_char_re, bygroups(String.Char, String.Escape, Number.Hex, String.Escape)), (r'(\?)' + hex_char_re, bygroups(String.Char, String.Escape)), (r'(\?)' + escape_char_re, bygroups(String.Char, String.Escape)), (r'\?\\?.', String.Char), # '::' has to go before atoms (r':::', String.Symbol), (r'::', Operator), # atoms (r':' + special_atom_re, String.Symbol), (r':' + complex_name_re, String.Symbol), (r':"', String.Symbol, 'string_double_atom'), (r":'", String.Symbol, 'string_single_atom'), # [keywords: ...] (r'(%s|%s)(:)(?=\s|\n)' % (special_atom_re, complex_name_re), bygroups(String.Symbol, Punctuation)), # @attributes (r'@' + name_re, Name.Attribute), # identifiers (name_re, Name), (r'(%%?)(%s)' % (modname_re,), bygroups(Punctuation, Name.Class)), # operators and punctuation (op3_re, Operator), (op2_re, Operator), (punctuation_re, Punctuation), (r'&\d', Name.Entity), # anon func arguments (op1_re, Operator), # numbers (r'0b[01]+', Number.Bin), (r'0o[0-7]+', Number.Oct), (r'0x[\da-fA-F]+', Number.Hex), (r'\d(_?\d)*\.\d(_?\d)*([eE][-+]?\d(_?\d)*)?', Number.Float), (r'\d(_?\d)*', Number.Integer), # strings and heredocs (r'(""")(\s*)', bygroups(String.Heredoc, Whitespace), 'heredoc_double'), (r"(''')(\s*)$", bygroups(String.Heredoc, Whitespace), 'heredoc_single'), (r'"', String.Double, 'string_double'), (r"'", String.Single, 'string_single'), include('sigils'), (r'%\{', Punctuation, 'map_key'), (r'\{', Punctuation, 'tuple'), ], 'heredoc_double': [ (r'^(\s*)(""")', bygroups(Whitespace, String.Heredoc), '#pop'), include('heredoc_interpol'), ], 'heredoc_single': [ (r"^\s*'''", String.Heredoc, '#pop'), include('heredoc_interpol'), ], 'heredoc_interpol': [ (r'[^#\\\n]+', String.Heredoc), include('escapes'), (r'\\.', String.Heredoc), (r'\n+', String.Heredoc), include('interpol'), ], 'heredoc_no_interpol': [ (r'[^\\\n]+', String.Heredoc), (r'\\.', String.Heredoc), (r'\n+', Whitespace), ], 'escapes': [ (long_hex_char_re, bygroups(String.Escape, Number.Hex, String.Escape)), (hex_char_re, String.Escape), (escape_char_re, String.Escape), ], 'interpol': [ (r'#\{', String.Interpol, 'interpol_string'), ], 'interpol_string': [ (r'\}', String.Interpol, "#pop"), include('root') ], 'map_key': [ include('root'), (r':', Punctuation, 'map_val'), (r'=>', Punctuation, 'map_val'), (r'\}', Punctuation, '#pop'), ], 'map_val': [ include('root'), (r',', Punctuation, '#pop'), (r'(?=\})', Punctuation, '#pop'), ], 'tuple': [ include('root'), (r'\}', Punctuation, '#pop'), ], } tokens.update(gen_elixir_string_rules('double', '"', String.Double)) tokens.update(gen_elixir_string_rules('single', "'", String.Single)) tokens.update(gen_elixir_string_rules('double_atom', '"', String.Symbol)) tokens.update(gen_elixir_string_rules('single_atom', "'", String.Symbol)) tokens.update(gen_elixir_sigil_rules()) class ElixirConsoleLexer(Lexer): """ For Elixir interactive console (iex) output like: .. sourcecode:: iex iex> [head | tail] = [1,2,3] [1,2,3] iex> head 1 iex> tail [2,3] iex> [head | tail] [1,2,3] iex> length [head | tail] 3 .. 
versionadded:: 1.5 """ name = 'Elixir iex session' aliases = ['iex'] mimetypes = ['text/x-elixir-shellsession'] _prompt_re = re.compile(r'(iex|\.{3})((?:\([\w@_.]+\))?\d+|\(\d+\))?> ') def get_tokens_unprocessed(self, text): exlexer = ElixirLexer(**self.options) curcode = '' in_error = False insertions = [] for match in line_re.finditer(text): line = match.group() if line.startswith('** '): in_error = True insertions.append((len(curcode), [(0, Generic.Error, line[:-1])])) curcode += line[-1:] else: m = self._prompt_re.match(line) if m is not None: in_error = False end = m.end() insertions.append((len(curcode), [(0, Generic.Prompt, line[:end])])) curcode += line[end:] else: if curcode: yield from do_insertions( insertions, exlexer.get_tokens_unprocessed(curcode)) curcode = '' insertions = [] token = Generic.Error if in_error else Generic.Output yield match.start(), token, line if curcode: yield from do_insertions( insertions, exlexer.get_tokens_unprocessed(curcode)) pygments-2.11.2/pygments/lexers/_openedge_builtins.py0000644000175000017500000014036614165547207022730 0ustar carstencarsten""" pygments.lexers._openedge_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Builtin list for the OpenEdgeLexer. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ OPENEDGEKEYWORDS = ( 'ABS', 'ABSO', 'ABSOL', 'ABSOLU', 'ABSOLUT', 'ABSOLUTE', 'ABSTRACT', 'ACCELERATOR', 'ACCUM', 'ACCUMU', 'ACCUMUL', 'ACCUMULA', 'ACCUMULAT', 'ACCUMULATE', 'ACTIVE-FORM', 'ACTIVE-WINDOW', 'ADD', 'ADD-BUFFER', 'ADD-CALC-COLUMN', 'ADD-COLUMNS-FROM', 'ADD-EVENTS-PROCEDURE', 'ADD-FIELDS-FROM', 'ADD-FIRST', 'ADD-INDEX-FIELD', 'ADD-LAST', 'ADD-LIKE-COLUMN', 'ADD-LIKE-FIELD', 'ADD-LIKE-INDEX', 'ADD-NEW-FIELD', 'ADD-NEW-INDEX', 'ADD-SCHEMA-LOCATION', 'ADD-SUPER-PROCEDURE', 'ADM-DATA', 'ADVISE', 'ALERT-BOX', 'ALIAS', 'ALL', 'ALLOW-COLUMN-SEARCHING', 'ALLOW-REPLICATION', 'ALTER', 'ALWAYS-ON-TOP', 'AMBIG', 'AMBIGU', 'AMBIGUO', 'AMBIGUOU', 'AMBIGUOUS', 'ANALYZ', 'ANALYZE', 'AND', 'ANSI-ONLY', 'ANY', 'ANYWHERE', 'APPEND', 'APPL-ALERT', 'APPL-ALERT-', 'APPL-ALERT-B', 'APPL-ALERT-BO', 'APPL-ALERT-BOX', 'APPL-ALERT-BOXE', 'APPL-ALERT-BOXES', 'APPL-CONTEXT-ID', 'APPLICATION', 'APPLY', 'APPSERVER-INFO', 'APPSERVER-PASSWORD', 'APPSERVER-USERID', 'ARRAY-MESSAGE', 'AS', 'ASC', 'ASCE', 'ASCEN', 'ASCEND', 'ASCENDI', 'ASCENDIN', 'ASCENDING', 'ASK-OVERWRITE', 'ASSEMBLY', 'ASSIGN', 'ASYNC-REQUEST-COUNT', 'ASYNC-REQUEST-HANDLE', 'ASYNCHRONOUS', 'AT', 'ATTACHED-PAIRLIST', 'ATTR', 'ATTR-SPACE', 'ATTRI', 'ATTRIB', 'ATTRIBU', 'ATTRIBUT', 'AUDIT-CONTROL', 'AUDIT-ENABLED', 'AUDIT-EVENT-CONTEXT', 'AUDIT-POLICY', 'AUTHENTICATION-FAILED', 'AUTHORIZATION', 'AUTO-COMP', 'AUTO-COMPL', 'AUTO-COMPLE', 'AUTO-COMPLET', 'AUTO-COMPLETI', 'AUTO-COMPLETIO', 'AUTO-COMPLETION', 'AUTO-END-KEY', 'AUTO-ENDKEY', 'AUTO-GO', 'AUTO-IND', 'AUTO-INDE', 'AUTO-INDEN', 'AUTO-INDENT', 'AUTO-RESIZE', 'AUTO-RET', 'AUTO-RETU', 'AUTO-RETUR', 'AUTO-RETURN', 'AUTO-SYNCHRONIZE', 'AUTO-Z', 'AUTO-ZA', 'AUTO-ZAP', 'AUTOMATIC', 'AVAIL', 'AVAILA', 'AVAILAB', 'AVAILABL', 'AVAILABLE', 'AVAILABLE-FORMATS', 'AVE', 'AVER', 'AVERA', 'AVERAG', 'AVERAGE', 'AVG', 'BACK', 'BACKG', 'BACKGR', 'BACKGRO', 'BACKGROU', 'BACKGROUN', 'BACKGROUND', 'BACKWARD', 'BACKWARDS', 'BASE64-DECODE', 'BASE64-ENCODE', 'BASE-ADE', 'BASE-KEY', 'BATCH', 'BATCH-', 'BATCH-M', 'BATCH-MO', 'BATCH-MOD', 'BATCH-MODE', 'BATCH-SIZE', 'BEFORE-H', 'BEFORE-HI', 'BEFORE-HID', 'BEFORE-HIDE', 'BEGIN-EVENT-GROUP', 'BEGINS', 'BELL', 'BETWEEN', 'BGC', 'BGCO', 'BGCOL', 'BGCOLO', 'BGCOLOR', 
'BIG-ENDIAN', 'BINARY', 'BIND', 'BIND-WHERE', 'BLANK', 'BLOCK-ITERATION-DISPLAY', 'BLOCK-LEVEL', 'BORDER-B', 'BORDER-BO', 'BORDER-BOT', 'BORDER-BOTT', 'BORDER-BOTTO', 'BORDER-BOTTOM-CHARS', 'BORDER-BOTTOM-P', 'BORDER-BOTTOM-PI', 'BORDER-BOTTOM-PIX', 'BORDER-BOTTOM-PIXE', 'BORDER-BOTTOM-PIXEL', 'BORDER-BOTTOM-PIXELS', 'BORDER-L', 'BORDER-LE', 'BORDER-LEF', 'BORDER-LEFT', 'BORDER-LEFT-', 'BORDER-LEFT-C', 'BORDER-LEFT-CH', 'BORDER-LEFT-CHA', 'BORDER-LEFT-CHAR', 'BORDER-LEFT-CHARS', 'BORDER-LEFT-P', 'BORDER-LEFT-PI', 'BORDER-LEFT-PIX', 'BORDER-LEFT-PIXE', 'BORDER-LEFT-PIXEL', 'BORDER-LEFT-PIXELS', 'BORDER-R', 'BORDER-RI', 'BORDER-RIG', 'BORDER-RIGH', 'BORDER-RIGHT', 'BORDER-RIGHT-', 'BORDER-RIGHT-C', 'BORDER-RIGHT-CH', 'BORDER-RIGHT-CHA', 'BORDER-RIGHT-CHAR', 'BORDER-RIGHT-CHARS', 'BORDER-RIGHT-P', 'BORDER-RIGHT-PI', 'BORDER-RIGHT-PIX', 'BORDER-RIGHT-PIXE', 'BORDER-RIGHT-PIXEL', 'BORDER-RIGHT-PIXELS', 'BORDER-T', 'BORDER-TO', 'BORDER-TOP', 'BORDER-TOP-', 'BORDER-TOP-C', 'BORDER-TOP-CH', 'BORDER-TOP-CHA', 'BORDER-TOP-CHAR', 'BORDER-TOP-CHARS', 'BORDER-TOP-P', 'BORDER-TOP-PI', 'BORDER-TOP-PIX', 'BORDER-TOP-PIXE', 'BORDER-TOP-PIXEL', 'BORDER-TOP-PIXELS', 'BOX', 'BOX-SELECT', 'BOX-SELECTA', 'BOX-SELECTAB', 'BOX-SELECTABL', 'BOX-SELECTABLE', 'BREAK', 'BROWSE', 'BUFFER', 'BUFFER-CHARS', 'BUFFER-COMPARE', 'BUFFER-COPY', 'BUFFER-CREATE', 'BUFFER-DELETE', 'BUFFER-FIELD', 'BUFFER-HANDLE', 'BUFFER-LINES', 'BUFFER-NAME', 'BUFFER-PARTITION-ID', 'BUFFER-RELEASE', 'BUFFER-VALUE', 'BUTTON', 'BUTTONS', 'BY', 'BY-POINTER', 'BY-VARIANT-POINTER', 'CACHE', 'CACHE-SIZE', 'CALL', 'CALL-NAME', 'CALL-TYPE', 'CAN-CREATE', 'CAN-DELETE', 'CAN-DO', 'CAN-DO-DOMAIN-SUPPORT', 'CAN-FIND', 'CAN-QUERY', 'CAN-READ', 'CAN-SET', 'CAN-WRITE', 'CANCEL-BREAK', 'CANCEL-BUTTON', 'CAPS', 'CAREFUL-PAINT', 'CASE', 'CASE-SEN', 'CASE-SENS', 'CASE-SENSI', 'CASE-SENSIT', 'CASE-SENSITI', 'CASE-SENSITIV', 'CASE-SENSITIVE', 'CAST', 'CATCH', 'CDECL', 'CENTER', 'CENTERE', 'CENTERED', 'CHAINED', 'CHARACTER', 'CHARACTER_LENGTH', 'CHARSET', 'CHECK', 'CHECKED', 'CHOOSE', 'CHR', 'CLASS', 'CLASS-TYPE', 'CLEAR', 'CLEAR-APPL-CONTEXT', 'CLEAR-LOG', 'CLEAR-SELECT', 'CLEAR-SELECTI', 'CLEAR-SELECTIO', 'CLEAR-SELECTION', 'CLEAR-SORT-ARROW', 'CLEAR-SORT-ARROWS', 'CLIENT-CONNECTION-ID', 'CLIENT-PRINCIPAL', 'CLIENT-TTY', 'CLIENT-TYPE', 'CLIENT-WORKSTATION', 'CLIPBOARD', 'CLOSE', 'CLOSE-LOG', 'CODE', 'CODEBASE-LOCATOR', 'CODEPAGE', 'CODEPAGE-CONVERT', 'COL', 'COL-OF', 'COLLATE', 'COLON', 'COLON-ALIGN', 'COLON-ALIGNE', 'COLON-ALIGNED', 'COLOR', 'COLOR-TABLE', 'COLU', 'COLUM', 'COLUMN', 'COLUMN-BGCOLOR', 'COLUMN-DCOLOR', 'COLUMN-FGCOLOR', 'COLUMN-FONT', 'COLUMN-LAB', 'COLUMN-LABE', 'COLUMN-LABEL', 'COLUMN-MOVABLE', 'COLUMN-OF', 'COLUMN-PFCOLOR', 'COLUMN-READ-ONLY', 'COLUMN-RESIZABLE', 'COLUMN-SCROLLING', 'COLUMNS', 'COM-HANDLE', 'COM-SELF', 'COMBO-BOX', 'COMMAND', 'COMPARES', 'COMPILE', 'COMPILER', 'COMPLETE', 'CONFIG-NAME', 'CONNECT', 'CONNECTED', 'CONSTRUCTOR', 'CONTAINS', 'CONTENTS', 'CONTEXT', 'CONTEXT-HELP', 'CONTEXT-HELP-FILE', 'CONTEXT-HELP-ID', 'CONTEXT-POPUP', 'CONTROL', 'CONTROL-BOX', 'CONTROL-FRAME', 'CONVERT', 'CONVERT-3D-COLORS', 'CONVERT-TO-OFFS', 'CONVERT-TO-OFFSE', 'CONVERT-TO-OFFSET', 'COPY-DATASET', 'COPY-LOB', 'COPY-SAX-ATTRIBUTES', 'COPY-TEMP-TABLE', 'COUNT', 'COUNT-OF', 'CPCASE', 'CPCOLL', 'CPINTERNAL', 'CPLOG', 'CPPRINT', 'CPRCODEIN', 'CPRCODEOUT', 'CPSTREAM', 'CPTERM', 'CRC-VALUE', 'CREATE', 'CREATE-LIKE', 'CREATE-LIKE-SEQUENTIAL', 'CREATE-NODE-NAMESPACE', 'CREATE-RESULT-LIST-ENTRY', 'CREATE-TEST-FILE', 'CURRENT', 'CURRENT-CHANGED', 
'CURRENT-COLUMN', 'CURRENT-ENV', 'CURRENT-ENVI', 'CURRENT-ENVIR', 'CURRENT-ENVIRO', 'CURRENT-ENVIRON', 'CURRENT-ENVIRONM', 'CURRENT-ENVIRONME', 'CURRENT-ENVIRONMEN', 'CURRENT-ENVIRONMENT', 'CURRENT-ITERATION', 'CURRENT-LANG', 'CURRENT-LANGU', 'CURRENT-LANGUA', 'CURRENT-LANGUAG', 'CURRENT-LANGUAGE', 'CURRENT-QUERY', 'CURRENT-REQUEST-INFO', 'CURRENT-RESPONSE-INFO', 'CURRENT-RESULT-ROW', 'CURRENT-ROW-MODIFIED', 'CURRENT-VALUE', 'CURRENT-WINDOW', 'CURRENT_DATE', 'CURS', 'CURSO', 'CURSOR', 'CURSOR-CHAR', 'CURSOR-LINE', 'CURSOR-OFFSET', 'DATA-BIND', 'DATA-ENTRY-RET', 'DATA-ENTRY-RETU', 'DATA-ENTRY-RETUR', 'DATA-ENTRY-RETURN', 'DATA-REL', 'DATA-RELA', 'DATA-RELAT', 'DATA-RELATI', 'DATA-RELATIO', 'DATA-RELATION', 'DATA-SOURCE', 'DATA-SOURCE-COMPLETE-MAP', 'DATA-SOURCE-MODIFIED', 'DATA-SOURCE-ROWID', 'DATA-T', 'DATA-TY', 'DATA-TYP', 'DATA-TYPE', 'DATABASE', 'DATASERVERS', 'DATASET', 'DATASET-HANDLE', 'DATE', 'DATE-F', 'DATE-FO', 'DATE-FOR', 'DATE-FORM', 'DATE-FORMA', 'DATE-FORMAT', 'DAY', 'DB-CONTEXT', 'DB-REFERENCES', 'DBCODEPAGE', 'DBCOLLATION', 'DBNAME', 'DBPARAM', 'DBREST', 'DBRESTR', 'DBRESTRI', 'DBRESTRIC', 'DBRESTRICT', 'DBRESTRICTI', 'DBRESTRICTIO', 'DBRESTRICTION', 'DBRESTRICTIONS', 'DBTASKID', 'DBTYPE', 'DBVERS', 'DBVERSI', 'DBVERSIO', 'DBVERSION', 'DCOLOR', 'DDE', 'DDE-ERROR', 'DDE-I', 'DDE-ID', 'DDE-ITEM', 'DDE-NAME', 'DDE-TOPIC', 'DEBLANK', 'DEBU', 'DEBUG', 'DEBUG-ALERT', 'DEBUG-LIST', 'DEBUGGER', 'DECIMAL', 'DECIMALS', 'DECLARE', 'DECLARE-NAMESPACE', 'DECRYPT', 'DEFAULT', 'DEFAULT-B', 'DEFAULT-BU', 'DEFAULT-BUFFER-HANDLE', 'DEFAULT-BUT', 'DEFAULT-BUTT', 'DEFAULT-BUTTO', 'DEFAULT-BUTTON', 'DEFAULT-COMMIT', 'DEFAULT-EX', 'DEFAULT-EXT', 'DEFAULT-EXTE', 'DEFAULT-EXTEN', 'DEFAULT-EXTENS', 'DEFAULT-EXTENSI', 'DEFAULT-EXTENSIO', 'DEFAULT-EXTENSION', 'DEFAULT-NOXL', 'DEFAULT-NOXLA', 'DEFAULT-NOXLAT', 'DEFAULT-NOXLATE', 'DEFAULT-VALUE', 'DEFAULT-WINDOW', 'DEFINE', 'DEFINE-USER-EVENT-MANAGER', 'DEFINED', 'DEL', 'DELE', 'DELEGATE', 'DELET', 'DELETE PROCEDURE', 'DELETE', 'DELETE-CHAR', 'DELETE-CHARA', 'DELETE-CHARAC', 'DELETE-CHARACT', 'DELETE-CHARACTE', 'DELETE-CHARACTER', 'DELETE-CURRENT-ROW', 'DELETE-LINE', 'DELETE-RESULT-LIST-ENTRY', 'DELETE-SELECTED-ROW', 'DELETE-SELECTED-ROWS', 'DELIMITER', 'DESC', 'DESCE', 'DESCEN', 'DESCEND', 'DESCENDI', 'DESCENDIN', 'DESCENDING', 'DESELECT-FOCUSED-ROW', 'DESELECT-ROWS', 'DESELECT-SELECTED-ROW', 'DESELECTION', 'DESTRUCTOR', 'DIALOG-BOX', 'DICT', 'DICTI', 'DICTIO', 'DICTION', 'DICTIONA', 'DICTIONAR', 'DICTIONARY', 'DIR', 'DISABLE', 'DISABLE-AUTO-ZAP', 'DISABLE-DUMP-TRIGGERS', 'DISABLE-LOAD-TRIGGERS', 'DISABLED', 'DISCON', 'DISCONN', 'DISCONNE', 'DISCONNEC', 'DISCONNECT', 'DISP', 'DISPL', 'DISPLA', 'DISPLAY', 'DISPLAY-MESSAGE', 'DISPLAY-T', 'DISPLAY-TY', 'DISPLAY-TYP', 'DISPLAY-TYPE', 'DISTINCT', 'DO', 'DOMAIN-DESCRIPTION', 'DOMAIN-NAME', 'DOMAIN-TYPE', 'DOS', 'DOUBLE', 'DOWN', 'DRAG-ENABLED', 'DROP', 'DROP-DOWN', 'DROP-DOWN-LIST', 'DROP-FILE-NOTIFY', 'DROP-TARGET', 'DS-CLOSE-CURSOR', 'DSLOG-MANAGER', 'DUMP', 'DYNAMIC', 'DYNAMIC-ENUM', 'DYNAMIC-FUNCTION', 'DYNAMIC-INVOKE', 'EACH', 'ECHO', 'EDGE', 'EDGE-', 'EDGE-C', 'EDGE-CH', 'EDGE-CHA', 'EDGE-CHAR', 'EDGE-CHARS', 'EDGE-P', 'EDGE-PI', 'EDGE-PIX', 'EDGE-PIXE', 'EDGE-PIXEL', 'EDGE-PIXELS', 'EDIT-CAN-PASTE', 'EDIT-CAN-UNDO', 'EDIT-CLEAR', 'EDIT-COPY', 'EDIT-CUT', 'EDIT-PASTE', 'EDIT-UNDO', 'EDITING', 'EDITOR', 'ELSE', 'EMPTY', 'EMPTY-TEMP-TABLE', 'ENABLE', 'ENABLED-FIELDS', 'ENCODE', 'ENCRYPT', 'ENCRYPT-AUDIT-MAC-KEY', 'ENCRYPTION-SALT', 'END', 'END-DOCUMENT', 'END-ELEMENT', 'END-EVENT-GROUP', 'END-FILE-DROP', 
'END-KEY', 'END-MOVE', 'END-RESIZE', 'END-ROW-RESIZE', 'END-USER-PROMPT', 'ENDKEY', 'ENTERED', 'ENTITY-EXPANSION-LIMIT', 'ENTRY', 'ENUM', 'EQ', 'ERROR', 'ERROR-COL', 'ERROR-COLU', 'ERROR-COLUM', 'ERROR-COLUMN', 'ERROR-ROW', 'ERROR-STACK-TRACE', 'ERROR-STAT', 'ERROR-STATU', 'ERROR-STATUS', 'ESCAPE', 'ETIME', 'EVENT', 'EVENT-GROUP-ID', 'EVENT-PROCEDURE', 'EVENT-PROCEDURE-CONTEXT', 'EVENT-T', 'EVENT-TY', 'EVENT-TYP', 'EVENT-TYPE', 'EVENTS', 'EXCEPT', 'EXCLUSIVE', 'EXCLUSIVE-', 'EXCLUSIVE-ID', 'EXCLUSIVE-L', 'EXCLUSIVE-LO', 'EXCLUSIVE-LOC', 'EXCLUSIVE-LOCK', 'EXCLUSIVE-WEB-USER', 'EXECUTE', 'EXISTS', 'EXP', 'EXPAND', 'EXPANDABLE', 'EXPLICIT', 'EXPORT', 'EXPORT-PRINCIPAL', 'EXTENDED', 'EXTENT', 'EXTERNAL', 'FALSE', 'FETCH', 'FETCH-SELECTED-ROW', 'FGC', 'FGCO', 'FGCOL', 'FGCOLO', 'FGCOLOR', 'FIELD', 'FIELDS', 'FILE', 'FILE-CREATE-DATE', 'FILE-CREATE-TIME', 'FILE-INFO', 'FILE-INFOR', 'FILE-INFORM', 'FILE-INFORMA', 'FILE-INFORMAT', 'FILE-INFORMATI', 'FILE-INFORMATIO', 'FILE-INFORMATION', 'FILE-MOD-DATE', 'FILE-MOD-TIME', 'FILE-NAME', 'FILE-OFF', 'FILE-OFFS', 'FILE-OFFSE', 'FILE-OFFSET', 'FILE-SIZE', 'FILE-TYPE', 'FILENAME', 'FILL', 'FILL-IN', 'FILLED', 'FILTERS', 'FINAL', 'FINALLY', 'FIND', 'FIND-BY-ROWID', 'FIND-CASE-SENSITIVE', 'FIND-CURRENT', 'FIND-FIRST', 'FIND-GLOBAL', 'FIND-LAST', 'FIND-NEXT-OCCURRENCE', 'FIND-PREV-OCCURRENCE', 'FIND-SELECT', 'FIND-UNIQUE', 'FIND-WRAP-AROUND', 'FINDER', 'FIRST', 'FIRST-ASYNCH-REQUEST', 'FIRST-CHILD', 'FIRST-COLUMN', 'FIRST-FORM', 'FIRST-OBJECT', 'FIRST-OF', 'FIRST-PROC', 'FIRST-PROCE', 'FIRST-PROCED', 'FIRST-PROCEDU', 'FIRST-PROCEDUR', 'FIRST-PROCEDURE', 'FIRST-SERVER', 'FIRST-TAB-I', 'FIRST-TAB-IT', 'FIRST-TAB-ITE', 'FIRST-TAB-ITEM', 'FIT-LAST-COLUMN', 'FIXED-ONLY', 'FLAT-BUTTON', 'FLOAT', 'FOCUS', 'FOCUSED-ROW', 'FOCUSED-ROW-SELECTED', 'FONT', 'FONT-TABLE', 'FOR', 'FORCE-FILE', 'FORE', 'FOREG', 'FOREGR', 'FOREGRO', 'FOREGROU', 'FOREGROUN', 'FOREGROUND', 'FORM INPUT', 'FORM', 'FORM-LONG-INPUT', 'FORMA', 'FORMAT', 'FORMATTE', 'FORMATTED', 'FORWARD', 'FORWARDS', 'FRAGMEN', 'FRAGMENT', 'FRAM', 'FRAME', 'FRAME-COL', 'FRAME-DB', 'FRAME-DOWN', 'FRAME-FIELD', 'FRAME-FILE', 'FRAME-INDE', 'FRAME-INDEX', 'FRAME-LINE', 'FRAME-NAME', 'FRAME-ROW', 'FRAME-SPA', 'FRAME-SPAC', 'FRAME-SPACI', 'FRAME-SPACIN', 'FRAME-SPACING', 'FRAME-VAL', 'FRAME-VALU', 'FRAME-VALUE', 'FRAME-X', 'FRAME-Y', 'FREQUENCY', 'FROM', 'FROM-C', 'FROM-CH', 'FROM-CHA', 'FROM-CHAR', 'FROM-CHARS', 'FROM-CUR', 'FROM-CURR', 'FROM-CURRE', 'FROM-CURREN', 'FROM-CURRENT', 'FROM-P', 'FROM-PI', 'FROM-PIX', 'FROM-PIXE', 'FROM-PIXEL', 'FROM-PIXELS', 'FULL-HEIGHT', 'FULL-HEIGHT-', 'FULL-HEIGHT-C', 'FULL-HEIGHT-CH', 'FULL-HEIGHT-CHA', 'FULL-HEIGHT-CHAR', 'FULL-HEIGHT-CHARS', 'FULL-HEIGHT-P', 'FULL-HEIGHT-PI', 'FULL-HEIGHT-PIX', 'FULL-HEIGHT-PIXE', 'FULL-HEIGHT-PIXEL', 'FULL-HEIGHT-PIXELS', 'FULL-PATHN', 'FULL-PATHNA', 'FULL-PATHNAM', 'FULL-PATHNAME', 'FULL-WIDTH', 'FULL-WIDTH-', 'FULL-WIDTH-C', 'FULL-WIDTH-CH', 'FULL-WIDTH-CHA', 'FULL-WIDTH-CHAR', 'FULL-WIDTH-CHARS', 'FULL-WIDTH-P', 'FULL-WIDTH-PI', 'FULL-WIDTH-PIX', 'FULL-WIDTH-PIXE', 'FULL-WIDTH-PIXEL', 'FULL-WIDTH-PIXELS', 'FUNCTION', 'FUNCTION-CALL-TYPE', 'GATEWAY', 'GATEWAYS', 'GE', 'GENERATE-MD5', 'GENERATE-PBE-KEY', 'GENERATE-PBE-SALT', 'GENERATE-RANDOM-KEY', 'GENERATE-UUID', 'GET', 'GET-ATTR-CALL-TYPE', 'GET-ATTRIBUTE-NODE', 'GET-BINARY-DATA', 'GET-BLUE', 'GET-BLUE-', 'GET-BLUE-V', 'GET-BLUE-VA', 'GET-BLUE-VAL', 'GET-BLUE-VALU', 'GET-BLUE-VALUE', 'GET-BROWSE-COLUMN', 'GET-BUFFER-HANDLE', 'GET-BYTE', 'GET-CALLBACK-PROC-CONTEXT', 'GET-CALLBACK-PROC-NAME', 
'GET-CGI-LIST', 'GET-CGI-LONG-VALUE', 'GET-CGI-VALUE', 'GET-CLASS', 'GET-CODEPAGES', 'GET-COLLATIONS', 'GET-CONFIG-VALUE', 'GET-CURRENT', 'GET-DOUBLE', 'GET-DROPPED-FILE', 'GET-DYNAMIC', 'GET-ERROR-COLUMN', 'GET-ERROR-ROW', 'GET-FILE', 'GET-FILE-NAME', 'GET-FILE-OFFSE', 'GET-FILE-OFFSET', 'GET-FIRST', 'GET-FLOAT', 'GET-GREEN', 'GET-GREEN-', 'GET-GREEN-V', 'GET-GREEN-VA', 'GET-GREEN-VAL', 'GET-GREEN-VALU', 'GET-GREEN-VALUE', 'GET-INDEX-BY-NAMESPACE-NAME', 'GET-INDEX-BY-QNAME', 'GET-INT64', 'GET-ITERATION', 'GET-KEY-VAL', 'GET-KEY-VALU', 'GET-KEY-VALUE', 'GET-LAST', 'GET-LOCALNAME-BY-INDEX', 'GET-LONG', 'GET-MESSAGE', 'GET-NEXT', 'GET-NUMBER', 'GET-POINTER-VALUE', 'GET-PREV', 'GET-PRINTERS', 'GET-PROPERTY', 'GET-QNAME-BY-INDEX', 'GET-RED', 'GET-RED-', 'GET-RED-V', 'GET-RED-VA', 'GET-RED-VAL', 'GET-RED-VALU', 'GET-RED-VALUE', 'GET-REPOSITIONED-ROW', 'GET-RGB-VALUE', 'GET-SELECTED', 'GET-SELECTED-', 'GET-SELECTED-W', 'GET-SELECTED-WI', 'GET-SELECTED-WID', 'GET-SELECTED-WIDG', 'GET-SELECTED-WIDGE', 'GET-SELECTED-WIDGET', 'GET-SHORT', 'GET-SIGNATURE', 'GET-SIZE', 'GET-STRING', 'GET-TAB-ITEM', 'GET-TEXT-HEIGHT', 'GET-TEXT-HEIGHT-', 'GET-TEXT-HEIGHT-C', 'GET-TEXT-HEIGHT-CH', 'GET-TEXT-HEIGHT-CHA', 'GET-TEXT-HEIGHT-CHAR', 'GET-TEXT-HEIGHT-CHARS', 'GET-TEXT-HEIGHT-P', 'GET-TEXT-HEIGHT-PI', 'GET-TEXT-HEIGHT-PIX', 'GET-TEXT-HEIGHT-PIXE', 'GET-TEXT-HEIGHT-PIXEL', 'GET-TEXT-HEIGHT-PIXELS', 'GET-TEXT-WIDTH', 'GET-TEXT-WIDTH-', 'GET-TEXT-WIDTH-C', 'GET-TEXT-WIDTH-CH', 'GET-TEXT-WIDTH-CHA', 'GET-TEXT-WIDTH-CHAR', 'GET-TEXT-WIDTH-CHARS', 'GET-TEXT-WIDTH-P', 'GET-TEXT-WIDTH-PI', 'GET-TEXT-WIDTH-PIX', 'GET-TEXT-WIDTH-PIXE', 'GET-TEXT-WIDTH-PIXEL', 'GET-TEXT-WIDTH-PIXELS', 'GET-TYPE-BY-INDEX', 'GET-TYPE-BY-NAMESPACE-NAME', 'GET-TYPE-BY-QNAME', 'GET-UNSIGNED-LONG', 'GET-UNSIGNED-SHORT', 'GET-URI-BY-INDEX', 'GET-VALUE-BY-INDEX', 'GET-VALUE-BY-NAMESPACE-NAME', 'GET-VALUE-BY-QNAME', 'GET-WAIT-STATE', 'GETBYTE', 'GLOBAL', 'GO-ON', 'GO-PEND', 'GO-PENDI', 'GO-PENDIN', 'GO-PENDING', 'GRANT', 'GRAPHIC-E', 'GRAPHIC-ED', 'GRAPHIC-EDG', 'GRAPHIC-EDGE', 'GRID-FACTOR-H', 'GRID-FACTOR-HO', 'GRID-FACTOR-HOR', 'GRID-FACTOR-HORI', 'GRID-FACTOR-HORIZ', 'GRID-FACTOR-HORIZO', 'GRID-FACTOR-HORIZON', 'GRID-FACTOR-HORIZONT', 'GRID-FACTOR-HORIZONTA', 'GRID-FACTOR-HORIZONTAL', 'GRID-FACTOR-V', 'GRID-FACTOR-VE', 'GRID-FACTOR-VER', 'GRID-FACTOR-VERT', 'GRID-FACTOR-VERTI', 'GRID-FACTOR-VERTIC', 'GRID-FACTOR-VERTICA', 'GRID-FACTOR-VERTICAL', 'GRID-SNAP', 'GRID-UNIT-HEIGHT', 'GRID-UNIT-HEIGHT-', 'GRID-UNIT-HEIGHT-C', 'GRID-UNIT-HEIGHT-CH', 'GRID-UNIT-HEIGHT-CHA', 'GRID-UNIT-HEIGHT-CHARS', 'GRID-UNIT-HEIGHT-P', 'GRID-UNIT-HEIGHT-PI', 'GRID-UNIT-HEIGHT-PIX', 'GRID-UNIT-HEIGHT-PIXE', 'GRID-UNIT-HEIGHT-PIXEL', 'GRID-UNIT-HEIGHT-PIXELS', 'GRID-UNIT-WIDTH', 'GRID-UNIT-WIDTH-', 'GRID-UNIT-WIDTH-C', 'GRID-UNIT-WIDTH-CH', 'GRID-UNIT-WIDTH-CHA', 'GRID-UNIT-WIDTH-CHAR', 'GRID-UNIT-WIDTH-CHARS', 'GRID-UNIT-WIDTH-P', 'GRID-UNIT-WIDTH-PI', 'GRID-UNIT-WIDTH-PIX', 'GRID-UNIT-WIDTH-PIXE', 'GRID-UNIT-WIDTH-PIXEL', 'GRID-UNIT-WIDTH-PIXELS', 'GRID-VISIBLE', 'GROUP', 'GT', 'GUID', 'HANDLE', 'HANDLER', 'HAS-RECORDS', 'HAVING', 'HEADER', 'HEIGHT', 'HEIGHT-', 'HEIGHT-C', 'HEIGHT-CH', 'HEIGHT-CHA', 'HEIGHT-CHAR', 'HEIGHT-CHARS', 'HEIGHT-P', 'HEIGHT-PI', 'HEIGHT-PIX', 'HEIGHT-PIXE', 'HEIGHT-PIXEL', 'HEIGHT-PIXELS', 'HELP', 'HEX-DECODE', 'HEX-ENCODE', 'HIDDEN', 'HIDE', 'HORI', 'HORIZ', 'HORIZO', 'HORIZON', 'HORIZONT', 'HORIZONTA', 'HORIZONTAL', 'HOST-BYTE-ORDER', 'HTML-CHARSET', 'HTML-END-OF-LINE', 'HTML-END-OF-PAGE', 'HTML-FRAME-BEGIN', 'HTML-FRAME-END', 
'HTML-HEADER-BEGIN', 'HTML-HEADER-END', 'HTML-TITLE-BEGIN', 'HTML-TITLE-END', 'HWND', 'ICON', 'IF', 'IMAGE', 'IMAGE-DOWN', 'IMAGE-INSENSITIVE', 'IMAGE-SIZE', 'IMAGE-SIZE-C', 'IMAGE-SIZE-CH', 'IMAGE-SIZE-CHA', 'IMAGE-SIZE-CHAR', 'IMAGE-SIZE-CHARS', 'IMAGE-SIZE-P', 'IMAGE-SIZE-PI', 'IMAGE-SIZE-PIX', 'IMAGE-SIZE-PIXE', 'IMAGE-SIZE-PIXEL', 'IMAGE-SIZE-PIXELS', 'IMAGE-UP', 'IMMEDIATE-DISPLAY', 'IMPLEMENTS', 'IMPORT', 'IMPORT-PRINCIPAL', 'IN', 'IN-HANDLE', 'INCREMENT-EXCLUSIVE-ID', 'INDEX', 'INDEX-HINT', 'INDEX-INFORMATION', 'INDEXED-REPOSITION', 'INDICATOR', 'INFO', 'INFOR', 'INFORM', 'INFORMA', 'INFORMAT', 'INFORMATI', 'INFORMATIO', 'INFORMATION', 'INHERIT-BGC', 'INHERIT-BGCO', 'INHERIT-BGCOL', 'INHERIT-BGCOLO', 'INHERIT-BGCOLOR', 'INHERIT-FGC', 'INHERIT-FGCO', 'INHERIT-FGCOL', 'INHERIT-FGCOLO', 'INHERIT-FGCOLOR', 'INHERITS', 'INIT', 'INITI', 'INITIA', 'INITIAL', 'INITIAL-DIR', 'INITIAL-FILTER', 'INITIALIZE-DOCUMENT-TYPE', 'INITIATE', 'INNER-CHARS', 'INNER-LINES', 'INPUT', 'INPUT-O', 'INPUT-OU', 'INPUT-OUT', 'INPUT-OUTP', 'INPUT-OUTPU', 'INPUT-OUTPUT', 'INPUT-VALUE', 'INSERT', 'INSERT-ATTRIBUTE', 'INSERT-B', 'INSERT-BA', 'INSERT-BAC', 'INSERT-BACK', 'INSERT-BACKT', 'INSERT-BACKTA', 'INSERT-BACKTAB', 'INSERT-FILE', 'INSERT-ROW', 'INSERT-STRING', 'INSERT-T', 'INSERT-TA', 'INSERT-TAB', 'INT64', 'INT', 'INTEGER', 'INTERFACE', 'INTERNAL-ENTRIES', 'INTO', 'INVOKE', 'IS', 'IS-ATTR', 'IS-ATTR-', 'IS-ATTR-S', 'IS-ATTR-SP', 'IS-ATTR-SPA', 'IS-ATTR-SPAC', 'IS-ATTR-SPACE', 'IS-CLASS', 'IS-JSON', 'IS-LEAD-BYTE', 'IS-OPEN', 'IS-PARAMETER-SET', 'IS-PARTITIONED', 'IS-ROW-SELECTED', 'IS-SELECTED', 'IS-XML', 'ITEM', 'ITEMS-PER-ROW', 'JOIN', 'JOIN-BY-SQLDB', 'KBLABEL', 'KEEP-CONNECTION-OPEN', 'KEEP-FRAME-Z', 'KEEP-FRAME-Z-', 'KEEP-FRAME-Z-O', 'KEEP-FRAME-Z-OR', 'KEEP-FRAME-Z-ORD', 'KEEP-FRAME-Z-ORDE', 'KEEP-FRAME-Z-ORDER', 'KEEP-MESSAGES', 'KEEP-SECURITY-CACHE', 'KEEP-TAB-ORDER', 'KEY', 'KEY-CODE', 'KEY-FUNC', 'KEY-FUNCT', 'KEY-FUNCTI', 'KEY-FUNCTIO', 'KEY-FUNCTION', 'KEY-LABEL', 'KEYCODE', 'KEYFUNC', 'KEYFUNCT', 'KEYFUNCTI', 'KEYFUNCTIO', 'KEYFUNCTION', 'KEYLABEL', 'KEYS', 'KEYWORD', 'KEYWORD-ALL', 'LABEL', 'LABEL-BGC', 'LABEL-BGCO', 'LABEL-BGCOL', 'LABEL-BGCOLO', 'LABEL-BGCOLOR', 'LABEL-DC', 'LABEL-DCO', 'LABEL-DCOL', 'LABEL-DCOLO', 'LABEL-DCOLOR', 'LABEL-FGC', 'LABEL-FGCO', 'LABEL-FGCOL', 'LABEL-FGCOLO', 'LABEL-FGCOLOR', 'LABEL-FONT', 'LABEL-PFC', 'LABEL-PFCO', 'LABEL-PFCOL', 'LABEL-PFCOLO', 'LABEL-PFCOLOR', 'LABELS', 'LABELS-HAVE-COLONS', 'LANDSCAPE', 'LANGUAGE', 'LANGUAGES', 'LARGE', 'LARGE-TO-SMALL', 'LAST', 'LAST-ASYNCH-REQUEST', 'LAST-BATCH', 'LAST-CHILD', 'LAST-EVEN', 'LAST-EVENT', 'LAST-FORM', 'LAST-KEY', 'LAST-OBJECT', 'LAST-OF', 'LAST-PROCE', 'LAST-PROCED', 'LAST-PROCEDU', 'LAST-PROCEDUR', 'LAST-PROCEDURE', 'LAST-SERVER', 'LAST-TAB-I', 'LAST-TAB-IT', 'LAST-TAB-ITE', 'LAST-TAB-ITEM', 'LASTKEY', 'LC', 'LDBNAME', 'LE', 'LEAVE', 'LEFT-ALIGN', 'LEFT-ALIGNE', 'LEFT-ALIGNED', 'LEFT-TRIM', 'LENGTH', 'LIBRARY', 'LIKE', 'LIKE-SEQUENTIAL', 'LINE', 'LINE-COUNT', 'LINE-COUNTE', 'LINE-COUNTER', 'LIST-EVENTS', 'LIST-ITEM-PAIRS', 'LIST-ITEMS', 'LIST-PROPERTY-NAMES', 'LIST-QUERY-ATTRS', 'LIST-SET-ATTRS', 'LIST-WIDGETS', 'LISTI', 'LISTIN', 'LISTING', 'LITERAL-QUESTION', 'LITTLE-ENDIAN', 'LOAD', 'LOAD-DOMAINS', 'LOAD-ICON', 'LOAD-IMAGE', 'LOAD-IMAGE-DOWN', 'LOAD-IMAGE-INSENSITIVE', 'LOAD-IMAGE-UP', 'LOAD-MOUSE-P', 'LOAD-MOUSE-PO', 'LOAD-MOUSE-POI', 'LOAD-MOUSE-POIN', 'LOAD-MOUSE-POINT', 'LOAD-MOUSE-POINTE', 'LOAD-MOUSE-POINTER', 'LOAD-PICTURE', 'LOAD-SMALL-ICON', 'LOCAL-NAME', 'LOCAL-VERSION-INFO', 
'LOCATOR-COLUMN-NUMBER', 'LOCATOR-LINE-NUMBER', 'LOCATOR-PUBLIC-ID', 'LOCATOR-SYSTEM-ID', 'LOCATOR-TYPE', 'LOCK-REGISTRATION', 'LOCKED', 'LOG', 'LOG-AUDIT-EVENT', 'LOG-MANAGER', 'LOGICAL', 'LOGIN-EXPIRATION-TIMESTAMP', 'LOGIN-HOST', 'LOGIN-STATE', 'LOGOUT', 'LONGCHAR', 'LOOKAHEAD', 'LOOKUP', 'LT', 'MACHINE-CLASS', 'MANDATORY', 'MANUAL-HIGHLIGHT', 'MAP', 'MARGIN-EXTRA', 'MARGIN-HEIGHT', 'MARGIN-HEIGHT-', 'MARGIN-HEIGHT-C', 'MARGIN-HEIGHT-CH', 'MARGIN-HEIGHT-CHA', 'MARGIN-HEIGHT-CHAR', 'MARGIN-HEIGHT-CHARS', 'MARGIN-HEIGHT-P', 'MARGIN-HEIGHT-PI', 'MARGIN-HEIGHT-PIX', 'MARGIN-HEIGHT-PIXE', 'MARGIN-HEIGHT-PIXEL', 'MARGIN-HEIGHT-PIXELS', 'MARGIN-WIDTH', 'MARGIN-WIDTH-', 'MARGIN-WIDTH-C', 'MARGIN-WIDTH-CH', 'MARGIN-WIDTH-CHA', 'MARGIN-WIDTH-CHAR', 'MARGIN-WIDTH-CHARS', 'MARGIN-WIDTH-P', 'MARGIN-WIDTH-PI', 'MARGIN-WIDTH-PIX', 'MARGIN-WIDTH-PIXE', 'MARGIN-WIDTH-PIXEL', 'MARGIN-WIDTH-PIXELS', 'MARK-NEW', 'MARK-ROW-STATE', 'MATCHES', 'MAX', 'MAX-BUTTON', 'MAX-CHARS', 'MAX-DATA-GUESS', 'MAX-HEIGHT', 'MAX-HEIGHT-C', 'MAX-HEIGHT-CH', 'MAX-HEIGHT-CHA', 'MAX-HEIGHT-CHAR', 'MAX-HEIGHT-CHARS', 'MAX-HEIGHT-P', 'MAX-HEIGHT-PI', 'MAX-HEIGHT-PIX', 'MAX-HEIGHT-PIXE', 'MAX-HEIGHT-PIXEL', 'MAX-HEIGHT-PIXELS', 'MAX-ROWS', 'MAX-SIZE', 'MAX-VAL', 'MAX-VALU', 'MAX-VALUE', 'MAX-WIDTH', 'MAX-WIDTH-', 'MAX-WIDTH-C', 'MAX-WIDTH-CH', 'MAX-WIDTH-CHA', 'MAX-WIDTH-CHAR', 'MAX-WIDTH-CHARS', 'MAX-WIDTH-P', 'MAX-WIDTH-PI', 'MAX-WIDTH-PIX', 'MAX-WIDTH-PIXE', 'MAX-WIDTH-PIXEL', 'MAX-WIDTH-PIXELS', 'MAXI', 'MAXIM', 'MAXIMIZE', 'MAXIMU', 'MAXIMUM', 'MAXIMUM-LEVEL', 'MD5-DIGEST', 'MEMBER', 'MEMPTR-TO-NODE-VALUE', 'MENU', 'MENU-BAR', 'MENU-ITEM', 'MENU-K', 'MENU-KE', 'MENU-KEY', 'MENU-M', 'MENU-MO', 'MENU-MOU', 'MENU-MOUS', 'MENU-MOUSE', 'MENUBAR', 'MERGE-BY-FIELD', 'MESSAGE', 'MESSAGE-AREA', 'MESSAGE-AREA-FONT', 'MESSAGE-LINES', 'METHOD', 'MIN', 'MIN-BUTTON', 'MIN-COLUMN-WIDTH-C', 'MIN-COLUMN-WIDTH-CH', 'MIN-COLUMN-WIDTH-CHA', 'MIN-COLUMN-WIDTH-CHAR', 'MIN-COLUMN-WIDTH-CHARS', 'MIN-COLUMN-WIDTH-P', 'MIN-COLUMN-WIDTH-PI', 'MIN-COLUMN-WIDTH-PIX', 'MIN-COLUMN-WIDTH-PIXE', 'MIN-COLUMN-WIDTH-PIXEL', 'MIN-COLUMN-WIDTH-PIXELS', 'MIN-HEIGHT', 'MIN-HEIGHT-', 'MIN-HEIGHT-C', 'MIN-HEIGHT-CH', 'MIN-HEIGHT-CHA', 'MIN-HEIGHT-CHAR', 'MIN-HEIGHT-CHARS', 'MIN-HEIGHT-P', 'MIN-HEIGHT-PI', 'MIN-HEIGHT-PIX', 'MIN-HEIGHT-PIXE', 'MIN-HEIGHT-PIXEL', 'MIN-HEIGHT-PIXELS', 'MIN-SIZE', 'MIN-VAL', 'MIN-VALU', 'MIN-VALUE', 'MIN-WIDTH', 'MIN-WIDTH-', 'MIN-WIDTH-C', 'MIN-WIDTH-CH', 'MIN-WIDTH-CHA', 'MIN-WIDTH-CHAR', 'MIN-WIDTH-CHARS', 'MIN-WIDTH-P', 'MIN-WIDTH-PI', 'MIN-WIDTH-PIX', 'MIN-WIDTH-PIXE', 'MIN-WIDTH-PIXEL', 'MIN-WIDTH-PIXELS', 'MINI', 'MINIM', 'MINIMU', 'MINIMUM', 'MOD', 'MODIFIED', 'MODU', 'MODUL', 'MODULO', 'MONTH', 'MOUSE', 'MOUSE-P', 'MOUSE-PO', 'MOUSE-POI', 'MOUSE-POIN', 'MOUSE-POINT', 'MOUSE-POINTE', 'MOUSE-POINTER', 'MOVABLE', 'MOVE-AFTER', 'MOVE-AFTER-', 'MOVE-AFTER-T', 'MOVE-AFTER-TA', 'MOVE-AFTER-TAB', 'MOVE-AFTER-TAB-', 'MOVE-AFTER-TAB-I', 'MOVE-AFTER-TAB-IT', 'MOVE-AFTER-TAB-ITE', 'MOVE-AFTER-TAB-ITEM', 'MOVE-BEFOR', 'MOVE-BEFORE', 'MOVE-BEFORE-', 'MOVE-BEFORE-T', 'MOVE-BEFORE-TA', 'MOVE-BEFORE-TAB', 'MOVE-BEFORE-TAB-', 'MOVE-BEFORE-TAB-I', 'MOVE-BEFORE-TAB-IT', 'MOVE-BEFORE-TAB-ITE', 'MOVE-BEFORE-TAB-ITEM', 'MOVE-COL', 'MOVE-COLU', 'MOVE-COLUM', 'MOVE-COLUMN', 'MOVE-TO-B', 'MOVE-TO-BO', 'MOVE-TO-BOT', 'MOVE-TO-BOTT', 'MOVE-TO-BOTTO', 'MOVE-TO-BOTTOM', 'MOVE-TO-EOF', 'MOVE-TO-T', 'MOVE-TO-TO', 'MOVE-TO-TOP', 'MPE', 'MTIME', 'MULTI-COMPILE', 'MULTIPLE', 'MULTIPLE-KEY', 'MULTITASKING-INTERVAL', 'MUST-EXIST', 'NAME', 'NAMESPACE-PREFIX', 
'NAMESPACE-URI', 'NATIVE', 'NE', 'NEEDS-APPSERVER-PROMPT', 'NEEDS-PROMPT', 'NEW', 'NEW-INSTANCE', 'NEW-ROW', 'NEXT', 'NEXT-COLUMN', 'NEXT-PROMPT', 'NEXT-ROWID', 'NEXT-SIBLING', 'NEXT-TAB-I', 'NEXT-TAB-IT', 'NEXT-TAB-ITE', 'NEXT-TAB-ITEM', 'NEXT-VALUE', 'NO', 'NO-APPLY', 'NO-ARRAY-MESSAGE', 'NO-ASSIGN', 'NO-ATTR', 'NO-ATTR-', 'NO-ATTR-L', 'NO-ATTR-LI', 'NO-ATTR-LIS', 'NO-ATTR-LIST', 'NO-ATTR-S', 'NO-ATTR-SP', 'NO-ATTR-SPA', 'NO-ATTR-SPAC', 'NO-ATTR-SPACE', 'NO-AUTO-VALIDATE', 'NO-BIND-WHERE', 'NO-BOX', 'NO-CONSOLE', 'NO-CONVERT', 'NO-CONVERT-3D-COLORS', 'NO-CURRENT-VALUE', 'NO-DEBUG', 'NO-DRAG', 'NO-ECHO', 'NO-EMPTY-SPACE', 'NO-ERROR', 'NO-F', 'NO-FI', 'NO-FIL', 'NO-FILL', 'NO-FOCUS', 'NO-HELP', 'NO-HIDE', 'NO-INDEX-HINT', 'NO-INHERIT-BGC', 'NO-INHERIT-BGCO', 'NO-INHERIT-BGCOLOR', 'NO-INHERIT-FGC', 'NO-INHERIT-FGCO', 'NO-INHERIT-FGCOL', 'NO-INHERIT-FGCOLO', 'NO-INHERIT-FGCOLOR', 'NO-JOIN-BY-SQLDB', 'NO-LABE', 'NO-LABELS', 'NO-LOBS', 'NO-LOCK', 'NO-LOOKAHEAD', 'NO-MAP', 'NO-MES', 'NO-MESS', 'NO-MESSA', 'NO-MESSAG', 'NO-MESSAGE', 'NO-PAUSE', 'NO-PREFE', 'NO-PREFET', 'NO-PREFETC', 'NO-PREFETCH', 'NO-ROW-MARKERS', 'NO-SCROLLBAR-VERTICAL', 'NO-SEPARATE-CONNECTION', 'NO-SEPARATORS', 'NO-TAB-STOP', 'NO-UND', 'NO-UNDE', 'NO-UNDER', 'NO-UNDERL', 'NO-UNDERLI', 'NO-UNDERLIN', 'NO-UNDERLINE', 'NO-UNDO', 'NO-VAL', 'NO-VALI', 'NO-VALID', 'NO-VALIDA', 'NO-VALIDAT', 'NO-VALIDATE', 'NO-WAIT', 'NO-WORD-WRAP', 'NODE-VALUE-TO-MEMPTR', 'NONAMESPACE-SCHEMA-LOCATION', 'NONE', 'NORMALIZE', 'NOT', 'NOT-ACTIVE', 'NOW', 'NULL', 'NUM-ALI', 'NUM-ALIA', 'NUM-ALIAS', 'NUM-ALIASE', 'NUM-ALIASES', 'NUM-BUFFERS', 'NUM-BUT', 'NUM-BUTT', 'NUM-BUTTO', 'NUM-BUTTON', 'NUM-BUTTONS', 'NUM-COL', 'NUM-COLU', 'NUM-COLUM', 'NUM-COLUMN', 'NUM-COLUMNS', 'NUM-COPIES', 'NUM-DBS', 'NUM-DROPPED-FILES', 'NUM-ENTRIES', 'NUM-FIELDS', 'NUM-FORMATS', 'NUM-ITEMS', 'NUM-ITERATIONS', 'NUM-LINES', 'NUM-LOCKED-COL', 'NUM-LOCKED-COLU', 'NUM-LOCKED-COLUM', 'NUM-LOCKED-COLUMN', 'NUM-LOCKED-COLUMNS', 'NUM-MESSAGES', 'NUM-PARAMETERS', 'NUM-REFERENCES', 'NUM-REPLACED', 'NUM-RESULTS', 'NUM-SELECTED', 'NUM-SELECTED-', 'NUM-SELECTED-ROWS', 'NUM-SELECTED-W', 'NUM-SELECTED-WI', 'NUM-SELECTED-WID', 'NUM-SELECTED-WIDG', 'NUM-SELECTED-WIDGE', 'NUM-SELECTED-WIDGET', 'NUM-SELECTED-WIDGETS', 'NUM-TABS', 'NUM-TO-RETAIN', 'NUM-VISIBLE-COLUMNS', 'NUMERIC', 'NUMERIC-F', 'NUMERIC-FO', 'NUMERIC-FOR', 'NUMERIC-FORM', 'NUMERIC-FORMA', 'NUMERIC-FORMAT', 'OCTET-LENGTH', 'OF', 'OFF', 'OK', 'OK-CANCEL', 'OLD', 'ON', 'ON-FRAME', 'ON-FRAME-', 'ON-FRAME-B', 'ON-FRAME-BO', 'ON-FRAME-BOR', 'ON-FRAME-BORD', 'ON-FRAME-BORDE', 'ON-FRAME-BORDER', 'OPEN', 'OPSYS', 'OPTION', 'OR', 'ORDERED-JOIN', 'ORDINAL', 'OS-APPEND', 'OS-COMMAND', 'OS-COPY', 'OS-CREATE-DIR', 'OS-DELETE', 'OS-DIR', 'OS-DRIVE', 'OS-DRIVES', 'OS-ERROR', 'OS-GETENV', 'OS-RENAME', 'OTHERWISE', 'OUTPUT', 'OVERLAY', 'OVERRIDE', 'OWNER', 'PAGE', 'PAGE-BOT', 'PAGE-BOTT', 'PAGE-BOTTO', 'PAGE-BOTTOM', 'PAGE-NUM', 'PAGE-NUMB', 'PAGE-NUMBE', 'PAGE-NUMBER', 'PAGE-SIZE', 'PAGE-TOP', 'PAGE-WID', 'PAGE-WIDT', 'PAGE-WIDTH', 'PAGED', 'PARAM', 'PARAME', 'PARAMET', 'PARAMETE', 'PARAMETER', 'PARENT', 'PARSE-STATUS', 'PARTIAL-KEY', 'PASCAL', 'PASSWORD-FIELD', 'PATHNAME', 'PAUSE', 'PBE-HASH-ALG', 'PBE-HASH-ALGO', 'PBE-HASH-ALGOR', 'PBE-HASH-ALGORI', 'PBE-HASH-ALGORIT', 'PBE-HASH-ALGORITH', 'PBE-HASH-ALGORITHM', 'PBE-KEY-ROUNDS', 'PDBNAME', 'PERSIST', 'PERSISTE', 'PERSISTEN', 'PERSISTENT', 'PERSISTENT-CACHE-DISABLED', 'PFC', 'PFCO', 'PFCOL', 'PFCOLO', 'PFCOLOR', 'PIXELS', 'PIXELS-PER-COL', 'PIXELS-PER-COLU', 'PIXELS-PER-COLUM', 
'PIXELS-PER-COLUMN', 'PIXELS-PER-ROW', 'POPUP-M', 'POPUP-ME', 'POPUP-MEN', 'POPUP-MENU', 'POPUP-O', 'POPUP-ON', 'POPUP-ONL', 'POPUP-ONLY', 'PORTRAIT', 'POSITION', 'PRECISION', 'PREFER-DATASET', 'PREPARE-STRING', 'PREPARED', 'PREPROC', 'PREPROCE', 'PREPROCES', 'PREPROCESS', 'PRESEL', 'PRESELE', 'PRESELEC', 'PRESELECT', 'PREV', 'PREV-COLUMN', 'PREV-SIBLING', 'PREV-TAB-I', 'PREV-TAB-IT', 'PREV-TAB-ITE', 'PREV-TAB-ITEM', 'PRIMARY', 'PRINTER', 'PRINTER-CONTROL-HANDLE', 'PRINTER-HDC', 'PRINTER-NAME', 'PRINTER-PORT', 'PRINTER-SETUP', 'PRIVATE', 'PRIVATE-D', 'PRIVATE-DA', 'PRIVATE-DAT', 'PRIVATE-DATA', 'PRIVILEGES', 'PROC-HA', 'PROC-HAN', 'PROC-HAND', 'PROC-HANDL', 'PROC-HANDLE', 'PROC-ST', 'PROC-STA', 'PROC-STAT', 'PROC-STATU', 'PROC-STATUS', 'PROC-TEXT', 'PROC-TEXT-BUFFER', 'PROCE', 'PROCED', 'PROCEDU', 'PROCEDUR', 'PROCEDURE', 'PROCEDURE-CALL-TYPE', 'PROCEDURE-TYPE', 'PROCESS', 'PROFILER', 'PROGRAM-NAME', 'PROGRESS', 'PROGRESS-S', 'PROGRESS-SO', 'PROGRESS-SOU', 'PROGRESS-SOUR', 'PROGRESS-SOURC', 'PROGRESS-SOURCE', 'PROMPT', 'PROMPT-F', 'PROMPT-FO', 'PROMPT-FOR', 'PROMSGS', 'PROPATH', 'PROPERTY', 'PROTECTED', 'PROVERS', 'PROVERSI', 'PROVERSIO', 'PROVERSION', 'PROXY', 'PROXY-PASSWORD', 'PROXY-USERID', 'PUBLIC', 'PUBLIC-ID', 'PUBLISH', 'PUBLISHED-EVENTS', 'PUT', 'PUT-BYTE', 'PUT-DOUBLE', 'PUT-FLOAT', 'PUT-INT64', 'PUT-KEY-VAL', 'PUT-KEY-VALU', 'PUT-KEY-VALUE', 'PUT-LONG', 'PUT-SHORT', 'PUT-STRING', 'PUT-UNSIGNED-LONG', 'PUTBYTE', 'QUERY', 'QUERY-CLOSE', 'QUERY-OFF-END', 'QUERY-OPEN', 'QUERY-PREPARE', 'QUERY-TUNING', 'QUESTION', 'QUIT', 'QUOTER', 'R-INDEX', 'RADIO-BUTTONS', 'RADIO-SET', 'RANDOM', 'RAW', 'RAW-TRANSFER', 'RCODE-INFO', 'RCODE-INFOR', 'RCODE-INFORM', 'RCODE-INFORMA', 'RCODE-INFORMAT', 'RCODE-INFORMATI', 'RCODE-INFORMATIO', 'RCODE-INFORMATION', 'READ-AVAILABLE', 'READ-EXACT-NUM', 'READ-FILE', 'READ-JSON', 'READ-ONLY', 'READ-XML', 'READ-XMLSCHEMA', 'READKEY', 'REAL', 'RECID', 'RECORD-LENGTH', 'RECT', 'RECTA', 'RECTAN', 'RECTANG', 'RECTANGL', 'RECTANGLE', 'RECURSIVE', 'REFERENCE-ONLY', 'REFRESH', 'REFRESH-AUDIT-POLICY', 'REFRESHABLE', 'REGISTER-DOMAIN', 'RELEASE', 'REMOTE', 'REMOVE-EVENTS-PROCEDURE', 'REMOVE-SUPER-PROCEDURE', 'REPEAT', 'REPLACE', 'REPLACE-SELECTION-TEXT', 'REPOSITION', 'REPOSITION-BACKWARD', 'REPOSITION-FORWARD', 'REPOSITION-MODE', 'REPOSITION-TO-ROW', 'REPOSITION-TO-ROWID', 'REQUEST', 'REQUEST-INFO', 'RESET', 'RESIZA', 'RESIZAB', 'RESIZABL', 'RESIZABLE', 'RESIZE', 'RESPONSE-INFO', 'RESTART-ROW', 'RESTART-ROWID', 'RETAIN', 'RETAIN-SHAPE', 'RETRY', 'RETRY-CANCEL', 'RETURN', 'RETURN-ALIGN', 'RETURN-ALIGNE', 'RETURN-INS', 'RETURN-INSE', 'RETURN-INSER', 'RETURN-INSERT', 'RETURN-INSERTE', 'RETURN-INSERTED', 'RETURN-TO-START-DI', 'RETURN-TO-START-DIR', 'RETURN-VAL', 'RETURN-VALU', 'RETURN-VALUE', 'RETURN-VALUE-DATA-TYPE', 'RETURNS', 'REVERSE-FROM', 'REVERT', 'REVOKE', 'RGB-VALUE', 'RIGHT-ALIGNED', 'RIGHT-TRIM', 'ROLES', 'ROUND', 'ROUTINE-LEVEL', 'ROW', 'ROW-HEIGHT-CHARS', 'ROW-HEIGHT-PIXELS', 'ROW-MARKERS', 'ROW-OF', 'ROW-RESIZABLE', 'ROWID', 'RULE', 'RUN', 'RUN-PROCEDURE', 'SAVE CACHE', 'SAVE', 'SAVE-AS', 'SAVE-FILE', 'SAX-COMPLE', 'SAX-COMPLET', 'SAX-COMPLETE', 'SAX-PARSE', 'SAX-PARSE-FIRST', 'SAX-PARSE-NEXT', 'SAX-PARSER-ERROR', 'SAX-RUNNING', 'SAX-UNINITIALIZED', 'SAX-WRITE-BEGIN', 'SAX-WRITE-COMPLETE', 'SAX-WRITE-CONTENT', 'SAX-WRITE-ELEMENT', 'SAX-WRITE-ERROR', 'SAX-WRITE-IDLE', 'SAX-WRITE-TAG', 'SAX-WRITER', 'SCHEMA', 'SCHEMA-LOCATION', 'SCHEMA-MARSHAL', 'SCHEMA-PATH', 'SCREEN', 'SCREEN-IO', 'SCREEN-LINES', 'SCREEN-VAL', 'SCREEN-VALU', 'SCREEN-VALUE', 'SCROLL', 
'SCROLL-BARS', 'SCROLL-DELTA', 'SCROLL-OFFSET', 'SCROLL-TO-CURRENT-ROW', 'SCROLL-TO-I', 'SCROLL-TO-IT', 'SCROLL-TO-ITE', 'SCROLL-TO-ITEM', 'SCROLL-TO-SELECTED-ROW', 'SCROLLABLE', 'SCROLLBAR-H', 'SCROLLBAR-HO', 'SCROLLBAR-HOR', 'SCROLLBAR-HORI', 'SCROLLBAR-HORIZ', 'SCROLLBAR-HORIZO', 'SCROLLBAR-HORIZON', 'SCROLLBAR-HORIZONT', 'SCROLLBAR-HORIZONTA', 'SCROLLBAR-HORIZONTAL', 'SCROLLBAR-V', 'SCROLLBAR-VE', 'SCROLLBAR-VER', 'SCROLLBAR-VERT', 'SCROLLBAR-VERTI', 'SCROLLBAR-VERTIC', 'SCROLLBAR-VERTICA', 'SCROLLBAR-VERTICAL', 'SCROLLED-ROW-POS', 'SCROLLED-ROW-POSI', 'SCROLLED-ROW-POSIT', 'SCROLLED-ROW-POSITI', 'SCROLLED-ROW-POSITIO', 'SCROLLED-ROW-POSITION', 'SCROLLING', 'SDBNAME', 'SEAL', 'SEAL-TIMESTAMP', 'SEARCH', 'SEARCH-SELF', 'SEARCH-TARGET', 'SECTION', 'SECURITY-POLICY', 'SEEK', 'SELECT', 'SELECT-ALL', 'SELECT-FOCUSED-ROW', 'SELECT-NEXT-ROW', 'SELECT-PREV-ROW', 'SELECT-ROW', 'SELECTABLE', 'SELECTED', 'SELECTION', 'SELECTION-END', 'SELECTION-LIST', 'SELECTION-START', 'SELECTION-TEXT', 'SELF', 'SEND', 'SEND-SQL-STATEMENT', 'SENSITIVE', 'SEPARATE-CONNECTION', 'SEPARATOR-FGCOLOR', 'SEPARATORS', 'SERIALIZABLE', 'SERIALIZE-HIDDEN', 'SERIALIZE-NAME', 'SERVER', 'SERVER-CONNECTION-BOUND', 'SERVER-CONNECTION-BOUND-REQUEST', 'SERVER-CONNECTION-CONTEXT', 'SERVER-CONNECTION-ID', 'SERVER-OPERATING-MODE', 'SESSION', 'SESSION-ID', 'SET', 'SET-APPL-CONTEXT', 'SET-ATTR-CALL-TYPE', 'SET-ATTRIBUTE-NODE', 'SET-BLUE', 'SET-BLUE-', 'SET-BLUE-V', 'SET-BLUE-VA', 'SET-BLUE-VAL', 'SET-BLUE-VALU', 'SET-BLUE-VALUE', 'SET-BREAK', 'SET-BUFFERS', 'SET-CALLBACK', 'SET-CLIENT', 'SET-COMMIT', 'SET-CONTENTS', 'SET-CURRENT-VALUE', 'SET-DB-CLIENT', 'SET-DYNAMIC', 'SET-EVENT-MANAGER-OPTION', 'SET-GREEN', 'SET-GREEN-', 'SET-GREEN-V', 'SET-GREEN-VA', 'SET-GREEN-VAL', 'SET-GREEN-VALU', 'SET-GREEN-VALUE', 'SET-INPUT-SOURCE', 'SET-OPTION', 'SET-OUTPUT-DESTINATION', 'SET-PARAMETER', 'SET-POINTER-VALUE', 'SET-PROPERTY', 'SET-RED', 'SET-RED-', 'SET-RED-V', 'SET-RED-VA', 'SET-RED-VAL', 'SET-RED-VALU', 'SET-RED-VALUE', 'SET-REPOSITIONED-ROW', 'SET-RGB-VALUE', 'SET-ROLLBACK', 'SET-SELECTION', 'SET-SIZE', 'SET-SORT-ARROW', 'SET-WAIT-STATE', 'SETUSER', 'SETUSERI', 'SETUSERID', 'SHA1-DIGEST', 'SHARE', 'SHARE-', 'SHARE-L', 'SHARE-LO', 'SHARE-LOC', 'SHARE-LOCK', 'SHARED', 'SHOW-IN-TASKBAR', 'SHOW-STAT', 'SHOW-STATS', 'SIDE-LAB', 'SIDE-LABE', 'SIDE-LABEL', 'SIDE-LABEL-H', 'SIDE-LABEL-HA', 'SIDE-LABEL-HAN', 'SIDE-LABEL-HAND', 'SIDE-LABEL-HANDL', 'SIDE-LABEL-HANDLE', 'SIDE-LABELS', 'SIGNATURE', 'SILENT', 'SIMPLE', 'SINGLE', 'SINGLE-RUN', 'SINGLETON', 'SIZE', 'SIZE-C', 'SIZE-CH', 'SIZE-CHA', 'SIZE-CHAR', 'SIZE-CHARS', 'SIZE-P', 'SIZE-PI', 'SIZE-PIX', 'SIZE-PIXE', 'SIZE-PIXEL', 'SIZE-PIXELS', 'SKIP', 'SKIP-DELETED-RECORD', 'SLIDER', 'SMALL-ICON', 'SMALL-TITLE', 'SMALLINT', 'SOME', 'SORT', 'SORT-ASCENDING', 'SORT-NUMBER', 'SOURCE', 'SOURCE-PROCEDURE', 'SPACE', 'SQL', 'SQRT', 'SSL-SERVER-NAME', 'STANDALONE', 'START', 'START-DOCUMENT', 'START-ELEMENT', 'START-MOVE', 'START-RESIZE', 'START-ROW-RESIZE', 'STATE-DETAIL', 'STATIC', 'STATUS', 'STATUS-AREA', 'STATUS-AREA-FONT', 'STDCALL', 'STOP', 'STOP-AFTER', 'STOP-PARSING', 'STOPPE', 'STOPPED', 'STORED-PROC', 'STORED-PROCE', 'STORED-PROCED', 'STORED-PROCEDU', 'STORED-PROCEDUR', 'STORED-PROCEDURE', 'STREAM', 'STREAM-HANDLE', 'STREAM-IO', 'STRETCH-TO-FIT', 'STRICT', 'STRICT-ENTITY-RESOLUTION', 'STRING', 'STRING-VALUE', 'STRING-XREF', 'SUB-AVE', 'SUB-AVER', 'SUB-AVERA', 'SUB-AVERAG', 'SUB-AVERAGE', 'SUB-COUNT', 'SUB-MAXIMUM', 'SUB-MENU', 'SUB-MIN', 'SUB-MINIMUM', 'SUB-TOTAL', 'SUBSCRIBE', 'SUBST', 'SUBSTI', 
'SUBSTIT', 'SUBSTITU', 'SUBSTITUT', 'SUBSTITUTE', 'SUBSTR', 'SUBSTRI', 'SUBSTRIN', 'SUBSTRING', 'SUBTYPE', 'SUM', 'SUM-MAX', 'SUM-MAXI', 'SUM-MAXIM', 'SUM-MAXIMU', 'SUPER', 'SUPER-PROCEDURES', 'SUPPRESS-NAMESPACE-PROCESSING', 'SUPPRESS-W', 'SUPPRESS-WA', 'SUPPRESS-WAR', 'SUPPRESS-WARN', 'SUPPRESS-WARNI', 'SUPPRESS-WARNIN', 'SUPPRESS-WARNING', 'SUPPRESS-WARNINGS', 'SYMMETRIC-ENCRYPTION-ALGORITHM', 'SYMMETRIC-ENCRYPTION-IV', 'SYMMETRIC-ENCRYPTION-KEY', 'SYMMETRIC-SUPPORT', 'SYSTEM-ALERT', 'SYSTEM-ALERT-', 'SYSTEM-ALERT-B', 'SYSTEM-ALERT-BO', 'SYSTEM-ALERT-BOX', 'SYSTEM-ALERT-BOXE', 'SYSTEM-ALERT-BOXES', 'SYSTEM-DIALOG', 'SYSTEM-HELP', 'SYSTEM-ID', 'TAB-POSITION', 'TAB-STOP', 'TABLE', 'TABLE-HANDLE', 'TABLE-NUMBER', 'TABLE-SCAN', 'TARGET', 'TARGET-PROCEDURE', 'TEMP-DIR', 'TEMP-DIRE', 'TEMP-DIREC', 'TEMP-DIRECT', 'TEMP-DIRECTO', 'TEMP-DIRECTOR', 'TEMP-DIRECTORY', 'TEMP-TABLE', 'TEMP-TABLE-PREPARE', 'TERM', 'TERMI', 'TERMIN', 'TERMINA', 'TERMINAL', 'TERMINATE', 'TEXT', 'TEXT-CURSOR', 'TEXT-SEG-GROW', 'TEXT-SELECTED', 'THEN', 'THIS-OBJECT', 'THIS-PROCEDURE', 'THREAD-SAFE', 'THREE-D', 'THROUGH', 'THROW', 'THRU', 'TIC-MARKS', 'TIME', 'TIME-SOURCE', 'TITLE', 'TITLE-BGC', 'TITLE-BGCO', 'TITLE-BGCOL', 'TITLE-BGCOLO', 'TITLE-BGCOLOR', 'TITLE-DC', 'TITLE-DCO', 'TITLE-DCOL', 'TITLE-DCOLO', 'TITLE-DCOLOR', 'TITLE-FGC', 'TITLE-FGCO', 'TITLE-FGCOL', 'TITLE-FGCOLO', 'TITLE-FGCOLOR', 'TITLE-FO', 'TITLE-FON', 'TITLE-FONT', 'TO', 'TO-ROWID', 'TODAY', 'TOGGLE-BOX', 'TOOLTIP', 'TOOLTIPS', 'TOP-NAV-QUERY', 'TOP-ONLY', 'TOPIC', 'TOTAL', 'TRAILING', 'TRANS', 'TRANS-INIT-PROCEDURE', 'TRANSACTION', 'TRANSACTION-MODE', 'TRANSPARENT', 'TRIGGER', 'TRIGGERS', 'TRIM', 'TRUE', 'TRUNC', 'TRUNCA', 'TRUNCAT', 'TRUNCATE', 'TYPE', 'TYPE-OF', 'UNBOX', 'UNBUFF', 'UNBUFFE', 'UNBUFFER', 'UNBUFFERE', 'UNBUFFERED', 'UNDERL', 'UNDERLI', 'UNDERLIN', 'UNDERLINE', 'UNDO', 'UNFORM', 'UNFORMA', 'UNFORMAT', 'UNFORMATT', 'UNFORMATTE', 'UNFORMATTED', 'UNION', 'UNIQUE', 'UNIQUE-ID', 'UNIQUE-MATCH', 'UNIX', 'UNLESS-HIDDEN', 'UNLOAD', 'UNSIGNED-LONG', 'UNSUBSCRIBE', 'UP', 'UPDATE', 'UPDATE-ATTRIBUTE', 'URL', 'URL-DECODE', 'URL-ENCODE', 'URL-PASSWORD', 'URL-USERID', 'USE', 'USE-DICT-EXPS', 'USE-FILENAME', 'USE-INDEX', 'USE-REVVIDEO', 'USE-TEXT', 'USE-UNDERLINE', 'USE-WIDGET-POOL', 'USER', 'USER-ID', 'USERID', 'USING', 'V6DISPLAY', 'V6FRAME', 'VALID-EVENT', 'VALID-HANDLE', 'VALID-OBJECT', 'VALIDATE', 'VALIDATE-EXPRESSION', 'VALIDATE-MESSAGE', 'VALIDATE-SEAL', 'VALIDATION-ENABLED', 'VALUE', 'VALUE-CHANGED', 'VALUES', 'VAR', 'VARI', 'VARIA', 'VARIAB', 'VARIABL', 'VARIABLE', 'VERBOSE', 'VERSION', 'VERT', 'VERTI', 'VERTIC', 'VERTICA', 'VERTICAL', 'VIEW', 'VIEW-AS', 'VIEW-FIRST-COLUMN-ON-REOPEN', 'VIRTUAL-HEIGHT', 'VIRTUAL-HEIGHT-', 'VIRTUAL-HEIGHT-C', 'VIRTUAL-HEIGHT-CH', 'VIRTUAL-HEIGHT-CHA', 'VIRTUAL-HEIGHT-CHAR', 'VIRTUAL-HEIGHT-CHARS', 'VIRTUAL-HEIGHT-P', 'VIRTUAL-HEIGHT-PI', 'VIRTUAL-HEIGHT-PIX', 'VIRTUAL-HEIGHT-PIXE', 'VIRTUAL-HEIGHT-PIXEL', 'VIRTUAL-HEIGHT-PIXELS', 'VIRTUAL-WIDTH', 'VIRTUAL-WIDTH-', 'VIRTUAL-WIDTH-C', 'VIRTUAL-WIDTH-CH', 'VIRTUAL-WIDTH-CHA', 'VIRTUAL-WIDTH-CHAR', 'VIRTUAL-WIDTH-CHARS', 'VIRTUAL-WIDTH-P', 'VIRTUAL-WIDTH-PI', 'VIRTUAL-WIDTH-PIX', 'VIRTUAL-WIDTH-PIXE', 'VIRTUAL-WIDTH-PIXEL', 'VIRTUAL-WIDTH-PIXELS', 'VISIBLE', 'VOID', 'WAIT', 'WAIT-FOR', 'WARNING', 'WEB-CONTEXT', 'WEEKDAY', 'WHEN', 'WHERE', 'WHILE', 'WIDGET', 'WIDGET-E', 'WIDGET-EN', 'WIDGET-ENT', 'WIDGET-ENTE', 'WIDGET-ENTER', 'WIDGET-ID', 'WIDGET-L', 'WIDGET-LE', 'WIDGET-LEA', 'WIDGET-LEAV', 'WIDGET-LEAVE', 'WIDGET-POOL', 'WIDTH', 'WIDTH-', 'WIDTH-C', 'WIDTH-CH', 
    'WIDTH-CHA', 'WIDTH-CHAR', 'WIDTH-CHARS', 'WIDTH-P', 'WIDTH-PI',
    'WIDTH-PIX', 'WIDTH-PIXE', 'WIDTH-PIXEL', 'WIDTH-PIXELS', 'WINDOW',
    'WINDOW-MAXIM', 'WINDOW-MAXIMI', 'WINDOW-MAXIMIZ', 'WINDOW-MAXIMIZE',
    'WINDOW-MAXIMIZED', 'WINDOW-MINIM', 'WINDOW-MINIMI', 'WINDOW-MINIMIZ',
    'WINDOW-MINIMIZE', 'WINDOW-MINIMIZED', 'WINDOW-NAME', 'WINDOW-NORMAL',
    'WINDOW-STA', 'WINDOW-STAT', 'WINDOW-STATE', 'WINDOW-SYSTEM', 'WITH',
    'WORD-INDEX', 'WORD-WRAP', 'WORK-AREA-HEIGHT-PIXELS',
    'WORK-AREA-WIDTH-PIXELS', 'WORK-AREA-X', 'WORK-AREA-Y', 'WORK-TAB',
    'WORK-TABL', 'WORK-TABLE', 'WORKFILE', 'WRITE', 'WRITE-CDATA',
    'WRITE-CHARACTERS', 'WRITE-COMMENT', 'WRITE-DATA-ELEMENT',
    'WRITE-EMPTY-ELEMENT', 'WRITE-ENTITY-REF', 'WRITE-EXTERNAL-DTD',
    'WRITE-FRAGMENT', 'WRITE-JSON', 'WRITE-MESSAGE',
    'WRITE-PROCESSING-INSTRUCTION', 'WRITE-STATUS', 'WRITE-XML',
    'WRITE-XMLSCHEMA', 'X', 'X-OF', 'XCODE', 'XML-DATA-TYPE',
    'XML-ENTITY-EXPANSION-LIMIT', 'XML-NODE-TYPE', 'XML-SCHEMA-PATH',
    'XML-STRICT-ENTITY-RESOLUTION', 'XML-SUPPRESS-NAMESPACE-PROCESSING',
    'XREF', 'XREF-XML', 'Y', 'Y-OF', 'YEAR', 'YEAR-OFFSET', 'YES', 'YES-NO',
    'YES-NO-CANCEL'
)

pygments-2.11.2/pygments/lexers/pony.py

"""
    pygments.lexers.pony
    ~~~~~~~~~~~~~~~~~~~~

    Lexers for Pony and related languages.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, bygroups, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Number, Punctuation

__all__ = ['PonyLexer']


class PonyLexer(RegexLexer):
    """
    For Pony source code.

    .. versionadded:: 2.4
    """

    name = 'Pony'
    aliases = ['pony']
    filenames = ['*.pony']

    _caps = r'(iso|trn|ref|val|box|tag)'

    tokens = {
        'root': [
            (r'\n', Text),
            (r'[^\S\n]+', Text),
            (r'//.*\n', Comment.Single),
            (r'/\*', Comment.Multiline, 'nested_comment'),
            (r'"""(?:.|\n)*?"""', String.Doc),
            (r'"', String, 'string'),
            (r'\'.*\'', String.Char),
            (r'=>|[]{}:().~;,|&!^?[]', Punctuation),
            (words((
                'addressof', 'and', 'as', 'consume', 'digestof', 'is', 'isnt',
                'not', 'or'),
                suffix=r'\b'),
             Operator.Word),
            (r'!=|==|<<|>>|[-+/*%=<>]', Operator),
            (words((
                'box', 'break', 'compile_error', 'compile_intrinsic',
                'continue', 'do', 'else', 'elseif', 'embed', 'end', 'error',
                'for', 'if', 'ifdef', 'in', 'iso', 'lambda', 'let', 'match',
                'object', 'recover', 'ref', 'repeat', 'return', 'tag', 'then',
                'this', 'trn', 'try', 'until', 'use', 'var', 'val', 'where',
                'while', 'with', '#any', '#read', '#send', '#share'),
                suffix=r'\b'),
             Keyword),
            (r'(actor|class|struct|primitive|interface|trait|type)((?:\s)+)',
             bygroups(Keyword, Text), 'typename'),
            (r'(new|fun|be)((?:\s)+)', bygroups(Keyword, Text), 'methodname'),
            (words((
                'I8', 'U8', 'I16', 'U16', 'I32', 'U32', 'I64', 'U64', 'I128',
                'U128', 'ILong', 'ULong', 'ISize', 'USize', 'F32', 'F64',
                'Bool', 'Pointer', 'None', 'Any', 'Array', 'String',
                'Iterator'),
                suffix=r'\b'),
             Name.Builtin.Type),
            (r'_?[A-Z]\w*', Name.Type),
            (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+', Number.Float),
            (r'0x[0-9a-fA-F]+', Number.Hex),
            (r'\d+', Number.Integer),
            (r'(true|false)\b', Name.Builtin),
            (r'_\d*', Name),
            (r'_?[a-z][\w\']*', Name)
        ],
        'typename': [
            (_caps + r'?((?:\s)*)(_?[A-Z]\w*)',
             bygroups(Keyword, Text, Name.Class), '#pop')
        ],
        'methodname': [
            (_caps + r'?((?:\s)*)(_?[a-z]\w*)',
             bygroups(Keyword, Text, Name.Function), '#pop')
        ],
        'nested_comment': [
            (r'[^*/]+', Comment.Multiline),
            (r'/\*', Comment.Multiline, '#push'),
            (r'\*/', Comment.Multiline, '#pop'),
            (r'[*/]', Comment.Multiline)
        ],
        'string': [
            (r'"', String, '#pop'),
            (r'\\"', String),
            (r'[^\\"]+', String)
        ]
    }
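
# A short illustrative Pony snippet (added; not from this file): `actor` and
# `new` push the 'typename' and 'methodname' states above, so `Main` is
# tokenized as Name.Class and `create` as Name.Function.
#
#     actor Main
#       new create(env: Env) =>
#         env.out.print("hello")
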
pygments-2.11.2/pygments/lexers/bare.py

"""
    pygments.lexers.bare
    ~~~~~~~~~~~~~~~~~~~~

    Lexer for the BARE schema.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pygments.lexer import RegexLexer, words, bygroups
from pygments.token import Text, Comment, Keyword, Name, Literal, Whitespace

__all__ = ['BareLexer']


class BareLexer(RegexLexer):
    """
    For `BARE schema `_ schema source.

    .. versionadded:: 2.7
    """

    name = 'BARE'
    filenames = ['*.bare']
    aliases = ['bare']
    flags = re.MULTILINE | re.UNICODE

    keywords = [
        'type', 'enum', 'u8', 'u16', 'u32', 'u64', 'uint', 'i8', 'i16',
        'i32', 'i64', 'int', 'f32', 'f64', 'bool', 'void', 'data', 'string',
        'optional', 'map',
    ]

    tokens = {
        'root': [
            (r'(type)(\s+)([A-Z][a-zA-Z0-9]+)(\s+)(\{)',
             bygroups(Keyword, Whitespace, Name.Class, Whitespace, Text),
             'struct'),
            (r'(type)(\s+)([A-Z][a-zA-Z0-9]+)(\s+)(\()',
             bygroups(Keyword, Whitespace, Name.Class, Whitespace, Text),
             'union'),
            (r'(type)(\s+)([A-Z][a-zA-Z0-9]+)(\s+)',
             bygroups(Keyword, Whitespace, Name, Whitespace), 'typedef'),
            (r'(enum)(\s+)([A-Z][a-zA-Z0-9]+)(\s+\{)',
             bygroups(Keyword, Whitespace, Name.Class, Whitespace), 'enum'),
            (r'#.*?$', Comment),
            (r'\s+', Whitespace),
        ],
        'struct': [
            (r'\{', Text, '#push'),
            (r'\}', Text, '#pop'),
            (r'([a-zA-Z0-9]+)(:)(\s*)',
             bygroups(Name.Attribute, Text, Whitespace), 'typedef'),
            (r'\s+', Whitespace),
        ],
        'union': [
            (r'\)', Text, '#pop'),
            (r'(\s*)(\|)(\s*)', bygroups(Whitespace, Text, Whitespace)),
            (r'[A-Z][a-zA-Z0-9]+', Name.Class),
            (words(keywords), Keyword),
            (r'\s+', Whitespace),
        ],
        'typedef': [
            (r'\[\]', Text),
            (r'#.*?$', Comment, '#pop'),
            (r'(\[)(\d+)(\])', bygroups(Text, Literal, Text)),
            (r'<|>', Text),
            (r'\(', Text, 'union'),
            (r'(\[)([a-z][a-z-A-Z0-9]+)(\])', bygroups(Text, Keyword, Text)),
            (r'(\[)([A-Z][a-z-A-Z0-9]+)(\])',
             bygroups(Text, Name.Class, Text)),
            (r'([A-Z][a-z-A-Z0-9]+)', Name.Class),
            (words(keywords), Keyword),
            (r'\n', Text, '#pop'),
            (r'\{', Text, 'struct'),
            (r'\s+', Whitespace),
            (r'\d+', Literal),
        ],
        'enum': [
            (r'\{', Text, '#push'),
            (r'\}', Text, '#pop'),
            (r'([A-Z][A-Z0-9_]*)(\s*=\s*)(\d+)',
             bygroups(Name.Attribute, Text, Literal)),
            (r'([A-Z][A-Z0-9_]*)', bygroups(Name.Attribute)),
            (r'#.*?$', Comment),
            (r'\s+', Whitespace),
        ],
    }
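
# A small illustrative schema (added; not from this file) in the BARE syntax
# this lexer targets: field names hit the Name.Attribute rule of 'struct',
# and `string`/`u64` come from the `keywords` list above.
#
#     type Customer {
#         name: string
#         orders: []u64
#     }
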
pygments-2.11.2/pygments/lexers/data.py

"""
    pygments.lexers.data
    ~~~~~~~~~~~~~~~~~~~~

    Lexers for data file format.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import Lexer, ExtendedRegexLexer, LexerContext, \
    include, bygroups
from pygments.token import Text, Comment, Keyword, Name, String, Number, \
    Punctuation, Literal, Error, Whitespace

__all__ = ['YamlLexer', 'JsonLexer', 'JsonBareObjectLexer', 'JsonLdLexer']


class YamlLexerContext(LexerContext):
    """Indentation context for the YAML lexer."""

    def __init__(self, *args, **kwds):
        super().__init__(*args, **kwds)
        self.indent_stack = []
        self.indent = -1
        self.next_indent = 0
        self.block_scalar_indent = None


class YamlLexer(ExtendedRegexLexer):
    """
    Lexer for `YAML `_, a human-friendly data serialization
    language.

    .. versionadded:: 0.11
    """

    name = 'YAML'
    aliases = ['yaml']
    filenames = ['*.yaml', '*.yml']
    mimetypes = ['text/x-yaml']

    def something(token_class):
        """Do not produce empty tokens."""
        def callback(lexer, match, context):
            text = match.group()
            if not text:
                return
            yield match.start(), token_class, text
            context.pos = match.end()
        return callback

    def reset_indent(token_class):
        """Reset the indentation levels."""
        def callback(lexer, match, context):
            text = match.group()
            context.indent_stack = []
            context.indent = -1
            context.next_indent = 0
            context.block_scalar_indent = None
            yield match.start(), token_class, text
            context.pos = match.end()
        return callback

    def save_indent(token_class, start=False):
        """Save a possible indentation level."""
        def callback(lexer, match, context):
            text = match.group()
            extra = ''
            if start:
                context.next_indent = len(text)
                if context.next_indent < context.indent:
                    while context.next_indent < context.indent:
                        context.indent = context.indent_stack.pop()
                if context.next_indent > context.indent:
                    extra = text[context.indent:]
                    text = text[:context.indent]
            else:
                context.next_indent += len(text)
            if text:
                yield match.start(), token_class, text
            if extra:
                yield match.start()+len(text), token_class.Error, extra
            context.pos = match.end()
        return callback

    def set_indent(token_class, implicit=False):
        """Set the previously saved indentation level."""
        def callback(lexer, match, context):
            text = match.group()
            if context.indent < context.next_indent:
                context.indent_stack.append(context.indent)
                context.indent = context.next_indent
            if not implicit:
                context.next_indent += len(text)
            yield match.start(), token_class, text
            context.pos = match.end()
        return callback

    def set_block_scalar_indent(token_class):
        """Set an explicit indentation level for a block scalar."""
        def callback(lexer, match, context):
            text = match.group()
            context.block_scalar_indent = None
            if not text:
                return
            increment = match.group(1)
            if increment:
                current_indent = max(context.indent, 0)
                increment = int(increment)
                context.block_scalar_indent = current_indent + increment
            if text:
                yield match.start(), token_class, text
                context.pos = match.end()
        return callback

    def parse_block_scalar_empty_line(indent_token_class,
                                      content_token_class):
        """Process an empty line in a block scalar."""
        def callback(lexer, match, context):
            text = match.group()
            if (context.block_scalar_indent is None or
                    len(text) <= context.block_scalar_indent):
                if text:
                    yield match.start(), indent_token_class, text
            else:
                indentation = text[:context.block_scalar_indent]
                content = text[context.block_scalar_indent:]
                yield match.start(), indent_token_class, indentation
                yield (match.start()+context.block_scalar_indent,
                       content_token_class, content)
            context.pos = match.end()
        return callback

    def parse_block_scalar_indent(token_class):
        """Process indentation spaces in a block scalar."""
        def callback(lexer, match, context):
            text = match.group()
            if context.block_scalar_indent is None:
                if len(text) <= max(context.indent, 0):
                    context.stack.pop()
                    context.stack.pop()
                    return
                context.block_scalar_indent = len(text)
            else:
                if len(text) < context.block_scalar_indent:
                    context.stack.pop()
                    context.stack.pop()
                    return
            if text:
                yield match.start(), token_class, text
                context.pos = match.end()
        return callback

    def parse_plain_scalar_indent(token_class):
        """Process indentation spaces in a plain scalar."""
        def callback(lexer, match, context):
            text = match.group()
            if len(text) <= context.indent:
                context.stack.pop()
                context.stack.pop()
                return
            if text:
                yield match.start(), token_class, text
                context.pos = match.end()
        return callback
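
    # A note on the helpers above (added commentary, not original code): each
    # returns a callback with the (lexer, match, context) signature that
    # ExtendedRegexLexer accepts in place of a token type, so the rules below
    # can read and update the indentation bookkeeping that lives on
    # YamlLexerContext while they tokenize.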
    tokens = {
        # the root rules
        'root': [
            # ignored whitespaces
            (r'[ ]+(?=#|$)', Whitespace),
            # line breaks
            (r'\n+', Whitespace),
            # a comment
            (r'#[^\n]*', Comment.Single),
            # the '%YAML' directive
            (r'^%YAML(?=[ ]|$)', reset_indent(Name.Tag), 'yaml-directive'),
            # the %TAG directive
            (r'^%TAG(?=[ ]|$)', reset_indent(Name.Tag), 'tag-directive'),
            # document start and document end indicators
            (r'^(?:---|\.\.\.)(?=[ ]|$)', reset_indent(Name.Namespace),
             'block-line'),
            # indentation spaces
            (r'[ ]*(?!\s|$)', save_indent(Whitespace, start=True),
             ('block-line', 'indentation')),
        ],

        # trailing whitespaces after directives or a block scalar indicator
        'ignored-line': [
            # ignored whitespaces
            (r'[ ]+(?=#|$)', Whitespace),
            # a comment
            (r'#[^\n]*', Comment.Single),
            # line break
            (r'\n', Whitespace, '#pop:2'),
        ],

        # the %YAML directive
        'yaml-directive': [
            # the version number
            (r'([ ]+)([0-9]+\.[0-9]+)',
             bygroups(Whitespace, Number), 'ignored-line'),
        ],

        # the %TAG directive
        'tag-directive': [
            # a tag handle and the corresponding prefix
            (r'([ ]+)(!|![\w-]*!)'
             r'([ ]+)(!|!?[\w;/?:@&=+$,.!~*\'()\[\]%-]+)',
             bygroups(Whitespace, Keyword.Type, Whitespace, Keyword.Type),
             'ignored-line'),
        ],

        # block scalar indicators and indentation spaces
        'indentation': [
            # trailing whitespaces are ignored
            (r'[ ]*$', something(Whitespace), '#pop:2'),
            # whitespaces preceding block collection indicators
            (r'[ ]+(?=[?:-](?:[ ]|$))', save_indent(Whitespace)),
            # block collection indicators
            (r'[?:-](?=[ ]|$)', set_indent(Punctuation.Indicator)),
            # the beginning of a block line
            (r'[ ]*', save_indent(Whitespace), '#pop'),
        ],

        # an indented line in the block context
        'block-line': [
            # the line end
            (r'[ ]*(?=#|$)', something(Whitespace), '#pop'),
            # whitespaces separating tokens
            (r'[ ]+', Whitespace),
            # key with colon
            (r'''([^#,:?\[\]{}"'\n]+)(:)(?=[ ]|$)''',
             bygroups(Name.Tag, set_indent(Punctuation, implicit=True))),
            # tags, anchors and aliases
            include('descriptors'),
            # block collections and scalars
            include('block-nodes'),
            # flow collections and quoted scalars
            include('flow-nodes'),
            # a plain scalar
            (r'(?=[^\s?:,\[\]{}#&*!|>\'"%@`-]|[?:-]\S)',
             something(Name.Variable),
             'plain-scalar-in-block-context'),
        ],
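
        # An illustrative note (added, not original): for a line such as
        #
        #     key: !tag &anchor value
        #
        # the 'block-line' rules above emit Name.Tag for "key" and hand the
        # "!tag" and "&anchor" descriptors to the 'descriptors' state below
        # (Keyword.Type and Name.Label); "value" is consumed by the
        # plain-scalar state.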
        # tags, anchors, aliases
        'descriptors': [
            # a full-form tag
            (r'!<[\w#;/?:@&=+$,.!~*\'()\[\]%-]+>', Keyword.Type),
            # a tag in the form '!', '!suffix' or '!handle!suffix'
            (r'!(?:[\w-]+!)?'
             r'[\w#;/?:@&=+$,.!~*\'()\[\]%-]*', Keyword.Type),
            # an anchor
            (r'&[\w-]+', Name.Label),
            # an alias
            (r'\*[\w-]+', Name.Variable),
        ],

        # block collections and scalars
        'block-nodes': [
            # implicit key
            (r':(?=[ ]|$)', set_indent(Punctuation.Indicator,
                                       implicit=True)),
            # literal and folded scalars
            (r'[|>]', Punctuation.Indicator,
             ('block-scalar-content', 'block-scalar-header')),
        ],

        # flow collections and quoted scalars
        'flow-nodes': [
            # a flow sequence
            (r'\[', Punctuation.Indicator, 'flow-sequence'),
            # a flow mapping
            (r'\{', Punctuation.Indicator, 'flow-mapping'),
            # a single-quoted scalar
            (r'\'', String, 'single-quoted-scalar'),
            # a double-quoted scalar
            (r'\"', String, 'double-quoted-scalar'),
        ],

        # the content of a flow collection
        'flow-collection': [
            # whitespaces
            (r'[ ]+', Whitespace),
            # line breaks
            (r'\n+', Whitespace),
            # a comment
            (r'#[^\n]*', Comment.Single),
            # simple indicators
            (r'[?:,]', Punctuation.Indicator),
            # tags, anchors and aliases
            include('descriptors'),
            # nested collections and quoted scalars
            include('flow-nodes'),
            # a plain scalar
            (r'(?=[^\s?:,\[\]{}#&*!|>\'"%@`])',
             something(Name.Variable),
             'plain-scalar-in-flow-context'),
        ],

        # a flow sequence indicated by '[' and ']'
        'flow-sequence': [
            # include flow collection rules
            include('flow-collection'),
            # the closing indicator
            (r'\]', Punctuation.Indicator, '#pop'),
        ],

        # a flow mapping indicated by '{' and '}'
        'flow-mapping': [
            # key with colon
            (r'''([^,:?\[\]{}"'\n]+)(:)(?=[ ]|$)''',
             bygroups(Name.Tag, Punctuation)),
            # include flow collection rules
            include('flow-collection'),
            # the closing indicator
            (r'\}', Punctuation.Indicator, '#pop'),
        ],

        # block scalar lines
        'block-scalar-content': [
            # line break
            (r'\n', Whitespace),
            # empty line
            (r'^[ ]+$',
             parse_block_scalar_empty_line(Whitespace, Name.Constant)),
            # indentation spaces (we may leave the state here)
            (r'^[ ]*', parse_block_scalar_indent(Whitespace)),
            # line content
            (r'[\S\t ]+', Name.Constant),
        ],

        # the content of a literal or folded scalar
        'block-scalar-header': [
            # indentation indicator followed by chomping flag
            (r'([1-9])?[+-]?(?=[ ]|$)',
             set_block_scalar_indent(Punctuation.Indicator),
             'ignored-line'),
            # chomping flag followed by indentation indicator
            (r'[+-]?([1-9])?(?=[ ]|$)',
             set_block_scalar_indent(Punctuation.Indicator),
             'ignored-line'),
        ],

        # ignored and regular whitespaces in quoted scalars
        'quoted-scalar-whitespaces': [
            # leading and trailing whitespaces are ignored
            (r'^[ ]+', Whitespace),
            (r'[ ]+$', Whitespace),
            # line breaks are ignored
            (r'\n+', Whitespace),
            # other whitespaces are a part of the value
            (r'[ ]+', Name.Variable),
        ],

        # single-quoted scalars
        'single-quoted-scalar': [
            # include whitespace and line break rules
            include('quoted-scalar-whitespaces'),
            # escaping of the quote character
            (r'\'\'', String.Escape),
            # regular non-whitespace characters
            (r'[^\s\']+', String),
            # the closing quote
            (r'\'', String, '#pop'),
        ],

        # double-quoted scalars
        'double-quoted-scalar': [
            # include whitespace and line break rules
            include('quoted-scalar-whitespaces'),
            # escaping of special characters
            (r'\\[0abt\tn\nvfre "\\N_LP]', String),
            # escape codes
            (r'\\(?:x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})',
             String.Escape),
            # regular non-whitespace characters
            (r'[^\s"\\]+', String),
            # the closing quote
            (r'"', String, '#pop'),
        ],
        # the beginning of a new line while scanning a plain scalar
        'plain-scalar-in-block-context-new-line': [
            # empty lines
            (r'^[ ]+$', Whitespace),
            # line breaks
            (r'\n+', Whitespace),
            # document start and document end indicators
            (r'^(?=---|\.\.\.)', something(Name.Namespace), '#pop:3'),
            # indentation spaces (we may leave the block line state here)
            (r'^[ ]*', parse_plain_scalar_indent(Whitespace), '#pop'),
        ],

        # a plain scalar in the block context
        'plain-scalar-in-block-context': [
            # the scalar ends with the ':' indicator
            (r'[ ]*(?=:[ ]|:$)', something(Whitespace), '#pop'),
            # the scalar ends with whitespaces followed by a comment
            (r'[ ]+(?=#)', Whitespace, '#pop'),
            # trailing whitespaces are ignored
            (r'[ ]+$', Whitespace),
            # line breaks are ignored
            (r'\n+', Whitespace, 'plain-scalar-in-block-context-new-line'),
            # other whitespaces are a part of the value
            (r'[ ]+', Literal.Scalar.Plain),
            # regular non-whitespace characters
            (r'(?::(?!\s)|[^\s:])+', Literal.Scalar.Plain),
        ],

        # a plain scalar in the flow context
        'plain-scalar-in-flow-context': [
            # the scalar ends with an indicator character
            (r'[ ]*(?=[,:?\[\]{}])', something(Whitespace), '#pop'),
            # the scalar ends with a comment
            (r'[ ]+(?=#)', Whitespace, '#pop'),
            # leading and trailing whitespaces are ignored
            (r'^[ ]+', Whitespace),
            (r'[ ]+$', Whitespace),
            # line breaks are ignored
            (r'\n+', Whitespace),
            # other whitespaces are a part of the value
            (r'[ ]+', Name.Variable),
            # regular non-whitespace characters
            (r'[^\s,:?\[\]{}]+', Name.Variable),
        ],
    }

    def get_tokens_unprocessed(self, text=None, context=None):
        if context is None:
            context = YamlLexerContext(text, 0)
        return super().get_tokens_unprocessed(text, context)


class JsonLexer(Lexer):
    """
    For JSON data structures.

    .. versionadded:: 1.5
    """

    name = 'JSON'
    aliases = ['json', 'json-object']
    filenames = ['*.json', 'Pipfile.lock']
    mimetypes = ['application/json', 'application/json-object']

    # No validation of integers, floats, or constants is done.
    # As long as the characters are members of the following
    # sets, the token will be considered valid.  For example,
    #
    #     "--1--" is parsed as an integer
    #     "1...eee" is parsed as a float
    #     "trustful" is parsed as a constant
    #
    integers = set('-0123456789')
    floats = set('.eE+')
    constants = set('truefalsenull')  # true|false|null
    hexadecimals = set('0123456789abcdefABCDEF')
    punctuations = set('{}[],')
    whitespaces = {'\u0020', '\u000a', '\u000d', '\u0009'}

    def get_tokens_unprocessed(self, text):
        """Parse JSON data."""

        in_string = False
        in_escape = False
        in_unicode_escape = 0
        in_whitespace = False
        in_constant = False
        in_number = False
        in_float = False
        in_punctuation = False

        start = 0

        # The queue is used to store data that may need to be tokenized
        # differently based on what follows.  In particular, JSON object
        # keys are tokenized differently than string values, but cannot
        # be distinguished until punctuation is encountered outside the
        # string.
        #
        # A ":" character after the string indicates that the string is
        # an object key; any other character indicates the string is a
        # regular string value.
        #
        # The queue holds tuples that contain the following data:
        #
        #     (start_index, token_type, text)
        #
        # By default the token type of text in double quotes is
        # String.Double.  The token type will be replaced if a colon
        # is encountered after the string closes.
        #
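        # An added illustration (not part of the original comment): for the
        # input '{"a": 1}', the quoted string is first queued as
        # String.Double; when the following ':' is seen, the queued entry is
        # re-emitted as Name.Tag, marking it as an object key.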
        queue = []

        for stop, character in enumerate(text):
            if in_string:
                if in_unicode_escape:
                    if character in self.hexadecimals:
                        in_unicode_escape -= 1
                        if not in_unicode_escape:
                            in_escape = False
                    else:
                        in_unicode_escape = 0
                        in_escape = False

                elif in_escape:
                    if character == 'u':
                        in_unicode_escape = 4
                    else:
                        in_escape = False

                elif character == '\\':
                    in_escape = True

                elif character == '"':
                    queue.append((start, String.Double,
                                  text[start:stop + 1]))
                    in_string = False
                    in_escape = False
                    in_unicode_escape = 0

                continue

            elif in_whitespace:
                if character in self.whitespaces:
                    continue

                if queue:
                    queue.append((start, Whitespace, text[start:stop]))
                else:
                    yield start, Whitespace, text[start:stop]
                in_whitespace = False
                # Fall through so the new character can be evaluated.

            elif in_constant:
                if character in self.constants:
                    continue

                yield start, Keyword.Constant, text[start:stop]
                in_constant = False
                # Fall through so the new character can be evaluated.

            elif in_number:
                if character in self.integers:
                    continue
                elif character in self.floats:
                    in_float = True
                    continue

                if in_float:
                    yield start, Number.Float, text[start:stop]
                else:
                    yield start, Number.Integer, text[start:stop]
                in_number = False
                in_float = False
                # Fall through so the new character can be evaluated.

            elif in_punctuation:
                if character in self.punctuations:
                    continue

                yield start, Punctuation, text[start:stop]
                in_punctuation = False
                # Fall through so the new character can be evaluated.

            start = stop

            if character == '"':
                in_string = True

            elif character in self.whitespaces:
                in_whitespace = True

            elif character in {'f', 'n', 't'}:
                # The first letters of true|false|null
                # Exhaust the queue.  Accept the existing token types.
                yield from queue
                queue.clear()

                in_constant = True

            elif character in self.integers:
                # Exhaust the queue.  Accept the existing token types.
                yield from queue
                queue.clear()

                in_number = True

            elif character == ':':
                # Yield from the queue.  Replace string token types.
                for _start, _token, _text in queue:
                    # There can be only two types of tokens before a ':':
                    # Whitespace, or a quoted string.  If it's a quoted
                    # string we emit Name.Tag, otherwise, we yield the
                    # whitespace tokens.  In all other cases this is invalid
                    # JSON.  This allows for things like '"foo" "bar": "baz"'
                    # but we're not a validating JSON lexer so it's acceptable
                    if _token is Whitespace:
                        yield _start, _token, _text
                    elif _token is String.Double:
                        yield _start, Name.Tag, _text
                    else:
                        yield _start, Error, _text
                queue.clear()

                in_punctuation = True

            elif character in self.punctuations:
                # Exhaust the queue.  Accept the existing token types.
                yield from queue
                queue.clear()

                in_punctuation = True

            else:
                # Exhaust the queue.  Accept the existing token types.
                yield from queue
                queue.clear()

                yield start, Error, character

        # Yield any remaining text.
        yield from queue
        if in_string:
            yield start, Error, text[start:]
        elif in_float:
            yield start, Number.Float, text[start:]
        elif in_number:
            yield start, Number.Integer, text[start:]
        elif in_constant:
            yield start, Keyword.Constant, text[start:]
        elif in_whitespace:
            yield start, Whitespace, text[start:]
        elif in_punctuation:
            yield start, Punctuation, text[start:]
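
# A quick usage sketch (added; not from this module).  `get_tokens` is the
# public Lexer API wrapping the generator above; the sample input is
# arbitrary:
#
#     >>> from pygments.token import Name
#     >>> toks = list(JsonLexer().get_tokens('{"a": [1, true]}'))
#     >>> any(tok is Name.Tag for tok, _ in toks)
#     True
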
versionadded:: 2.0 """ name = 'JSON-LD' aliases = ['jsonld', 'json-ld'] filenames = ['*.jsonld'] mimetypes = ['application/ld+json'] json_ld_keywords = { '"@%s"' % keyword for keyword in ( 'base', 'container', 'context', 'direction', 'graph', 'id', 'import', 'included', 'index', 'json', 'language', 'list', 'nest', 'none', 'prefix', 'propagate', 'protected', 'reverse', 'set', 'type', 'value', 'version', 'vocab', ) } def get_tokens_unprocessed(self, text): for start, token, value in super().get_tokens_unprocessed(text): if token is Name.Tag and value in self.json_ld_keywords: yield start, Name.Decorator, value else: yield start, token, value pygments-2.11.2/pygments/lexers/pawn.py0000644000175000017500000001772214165547207020036 0ustar carstencarsten""" pygments.lexers.pawn ~~~~~~~~~~~~~~~~~~~~ Lexers for the Pawn languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation from pygments.util import get_bool_opt __all__ = ['SourcePawnLexer', 'PawnLexer'] class SourcePawnLexer(RegexLexer): """ For SourcePawn source code with preprocessor directives. .. versionadded:: 1.6 """ name = 'SourcePawn' aliases = ['sp'] filenames = ['*.sp'] mimetypes = ['text/x-sourcepawn'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/\*.*?\*/)+' #: only one /* */ style comment _ws1 = r'\s*(?:/[*].*?[*]/\s*)*' tokens = { 'root': [ # preprocessor directives: without whitespace (r'^#if\s+0', Comment.Preproc, 'if0'), ('^#', Comment.Preproc, 'macro'), # or with whitespace ('^' + _ws1 + r'#if\s+0', Comment.Preproc, 'if0'), ('^' + _ws1 + '#', Comment.Preproc, 'macro'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?\*(.|\n)*?\*(\\\n)?/', Comment.Multiline), (r'[{}]', Punctuation), (r'L?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex), (r'0[0-7]+[LlUu]*', Number.Oct), (r'\d+[LlUu]*', Number.Integer), (r'[~!%^&*+=|?:<>/-]', Operator), (r'[()\[\],.;]', Punctuation), (r'(case|const|continue|native|' r'default|else|enum|for|if|new|operator|' r'public|return|sizeof|static|decl|struct|switch)\b', Keyword), (r'(bool|Float)\b', Keyword.Type), (r'(true|false)\b', Keyword.Constant), (r'[a-zA-Z_]\w*', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation (r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/\*(.|\n)*?\*/', Comment.Multiline), (r'//.*?\n', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Comment.Preproc, '#pop'), ], 'if0': [ (r'^\s*#if.*?(?/-]', Operator), (r'[()\[\],.;]', Punctuation), (r'(switch|case|default|const|new|static|char|continue|break|' r'if|else|for|while|do|operator|enum|' r'public|return|sizeof|tagof|state|goto)\b', Keyword), (r'(bool|Float)\b', Keyword.Type), (r'(true|false)\b', Keyword.Constant), (r'[a-zA-Z_]\w*', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'\\\n', String), # line continuation 
(r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/\*(.|\n)*?\*/', Comment.Multiline), (r'//.*?\n', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Comment.Preproc, '#pop'), ], 'if0': [ (r'^\s*#if.*?(?>|<=|>=|&&|\|\||\*\*|[-+~!*/%<>&^|?:]', Operator), ], 'data': [ (r'\s+', Whitespace), (r'0x[a-fA-F0-9]+', Number.Hex), (r'0[0-7]+', Number.Oct), (r'\d+\.\d+', Number.Float), (r'\d+', Number.Integer), (r'\$([\w.:-]+)', Name.Variable), (r'([\w.,@:-]+)', Text), ], 'params': [ (r';', Keyword, '#pop'), (r'\n', Text, '#pop'), (r'(else|elseif|then)\b', Keyword), include('basic'), include('data'), ], 'params-in-brace': [ (r'\}', Keyword, ('#pop', '#pop')), include('params') ], 'params-in-paren': [ (r'\)', Keyword, ('#pop', '#pop')), include('params') ], 'params-in-bracket': [ (r'\]', Keyword, ('#pop', '#pop')), include('params') ], 'string': [ (r'\[', String.Double, 'string-square'), (r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\])', String.Double), (r'"', String.Double, '#pop') ], 'string-square': [ (r'\[', String.Double, 'string-square'), (r'(?s)(\\\\|\\[0-7]+|\\.|\\\n|[^\]\\])', String.Double), (r'\]', String.Double, '#pop') ], 'brace': [ (r'\}', Keyword, '#pop'), include('command-in-brace'), include('basic'), include('data'), ], 'paren': [ (r'\)', Keyword, '#pop'), include('command-in-paren'), include('basic'), include('data'), ], 'bracket': [ (r'\]', Keyword, '#pop'), include('command-in-bracket'), include('basic'), include('data'), ], 'comment': [ (r'.*[^\\]\n', Comment, '#pop'), (r'.*\\\n', Comment), ], } def analyse_text(text): return shebang_matches(text, r'(tcl)') pygments-2.11.2/pygments/lexers/modula2.py0000644000175000017500000014751214165547207020435 0ustar carstencarsten""" pygments.lexers.modula2 ~~~~~~~~~~~~~~~~~~~~~~~ Multi-Dialect Lexer for Modula-2. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include from pygments.util import get_bool_opt, get_list_opt from pygments.token import Text, Comment, Operator, Keyword, Name, \ String, Number, Punctuation, Error __all__ = ['Modula2Lexer'] # Multi-Dialect Modula-2 Lexer class Modula2Lexer(RegexLexer): """ For `Modula-2 `_ source code. The Modula-2 lexer supports several dialects. By default, it operates in fallback mode, recognising the *combined* literals, punctuation symbols and operators of all supported dialects, and the *combined* reserved words and builtins of PIM Modula-2, ISO Modula-2 and Modula-2 R10, while not differentiating between library defined identifiers. To select a specific dialect, a dialect option may be passed or a dialect tag may be embedded into a source file. Dialect Options: `m2pim` Select PIM Modula-2 dialect. `m2iso` Select ISO Modula-2 dialect. `m2r10` Select Modula-2 R10 dialect. `objm2` Select Objective Modula-2 dialect. The PIM and ISO dialect options may be qualified with a language extension. Language Extensions: `+aglet` Select Aglet Modula-2 extensions, available with m2iso. `+gm2` Select GNU Modula-2 extensions, available with m2pim. `+p1` Select p1 Modula-2 extensions, available with m2iso. `+xds` Select XDS Modula-2 extensions, available with m2iso. Passing a Dialect Option via Unix Commandline Interface Dialect options may be passed to the lexer using the `dialect` key. Only one such option should be passed. If multiple dialect options are passed, the first valid option is used, any subsequent options are ignored. 
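When Pygments is used as a library, the same option can be passed to the lexer constructor instead, for example ``Modula2Lexer(dialect='m2iso+p1')``.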
Examples: `$ pygmentize -O full,dialect=m2iso -f html -o /path/to/output /path/to/input` Use ISO dialect to render input to HTML output `$ pygmentize -O full,dialect=m2iso+p1 -f rtf -o /path/to/output /path/to/input` Use ISO dialect with p1 extensions to render input to RTF output Embedding a Dialect Option within a source file A dialect option may be embedded in a source file in the form of a dialect tag, a specially formatted comment that specifies a dialect option. Dialect Tag EBNF:: dialectTag : OpeningCommentDelim Prefix dialectOption ClosingCommentDelim ; dialectOption : 'm2pim' | 'm2iso' | 'm2r10' | 'objm2' | 'm2iso+aglet' | 'm2pim+gm2' | 'm2iso+p1' | 'm2iso+xds' ; Prefix : '!' ; OpeningCommentDelim : '(*' ; ClosingCommentDelim : '*)' ; No whitespace is permitted between the tokens of a dialect tag. In the event that a source file contains multiple dialect tags, the first tag that contains a valid dialect option will be used and any subsequent dialect tags will be ignored. Ideally, a dialect tag should be placed at the beginning of a source file. An embedded dialect tag overrides a dialect option set via command line. Examples: ``(*!m2r10*) DEFINITION MODULE Foobar; ...`` Use Modula-2 R10 dialect to render this source file. ``(*!m2pim+gm2*) DEFINITION MODULE Bazbam; ...`` Use PIM dialect with GNU extensions to render this source file. Algol Publication Mode: In Algol publication mode, source text is rendered for publication of algorithms in scientific papers and academic texts, following the format of the Revised Algol-60 Language Report. It is activated by passing one of two corresponding styles as an option: `algol` render reserved words lowercase underline boldface and builtins lowercase boldface italic `algol_nu` render reserved words lowercase boldface (no underlining) and builtins lowercase boldface italic The lexer automatically performs the required lowercase conversion when this mode is activated. Example: ``$ pygmentize -O full,style=algol -f latex -o /path/to/output /path/to/input`` Render input file in Algol publication mode to LaTeX output. Rendering Mode of First Class ADT Identifiers: The rendering of standard library first class ADT identifiers is controlled by option flag "treat_stdlib_adts_as_builtins". When this option is turned on, standard library ADT identifiers are rendered as builtins. When it is turned off, they are rendered as ordinary library identifiers. `treat_stdlib_adts_as_builtins` (default: On) The option is useful for dialects that support ADTs as first class objects and provide ADTs in the standard library that would otherwise be built-in. At present, only Modula-2 R10 supports library ADTs as first class objects and therefore, no ADT identifiers are defined for any other dialects. Example: ``$ pygmentize -O full,dialect=m2r10,treat_stdlib_adts_as_builtins=Off ...`` Render standard library ADTs as ordinary library types. .. versionadded:: 1.3 .. versionchanged:: 2.1 Added multi-dialect support.
""" name = 'Modula-2' aliases = ['modula2', 'm2'] filenames = ['*.def', '*.mod'] mimetypes = ['text/x-modula2'] flags = re.MULTILINE | re.DOTALL tokens = { 'whitespace': [ (r'\n+', Text), # blank lines (r'\s+', Text), # whitespace ], 'dialecttags': [ # PIM Dialect Tag (r'\(\*!m2pim\*\)', Comment.Special), # ISO Dialect Tag (r'\(\*!m2iso\*\)', Comment.Special), # M2R10 Dialect Tag (r'\(\*!m2r10\*\)', Comment.Special), # ObjM2 Dialect Tag (r'\(\*!objm2\*\)', Comment.Special), # Aglet Extensions Dialect Tag (r'\(\*!m2iso\+aglet\*\)', Comment.Special), # GNU Extensions Dialect Tag (r'\(\*!m2pim\+gm2\*\)', Comment.Special), # p1 Extensions Dialect Tag (r'\(\*!m2iso\+p1\*\)', Comment.Special), # XDS Extensions Dialect Tag (r'\(\*!m2iso\+xds\*\)', Comment.Special), ], 'identifiers': [ (r'([a-zA-Z_$][\w$]*)', Name), ], 'prefixed_number_literals': [ # # Base-2, whole number (r'0b[01]+(\'[01]+)*', Number.Bin), # # Base-16, whole number (r'0[ux][0-9A-F]+(\'[0-9A-F]+)*', Number.Hex), ], 'plain_number_literals': [ # # Base-10, real number with exponent (r'[0-9]+(\'[0-9]+)*' # integral part r'\.[0-9]+(\'[0-9]+)*' # fractional part r'[eE][+-]?[0-9]+(\'[0-9]+)*', # exponent Number.Float), # # Base-10, real number without exponent (r'[0-9]+(\'[0-9]+)*' # integral part r'\.[0-9]+(\'[0-9]+)*', # fractional part Number.Float), # # Base-10, whole number (r'[0-9]+(\'[0-9]+)*', Number.Integer), ], 'suffixed_number_literals': [ # # Base-8, whole number (r'[0-7]+B', Number.Oct), # # Base-8, character code (r'[0-7]+C', Number.Oct), # # Base-16, number (r'[0-9A-F]+H', Number.Hex), ], 'string_literals': [ (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), ], 'digraph_operators': [ # Dot Product Operator (r'\*\.', Operator), # Array Concatenation Operator (r'\+>', Operator), # M2R10 + ObjM2 # Inequality Operator (r'<>', Operator), # ISO + PIM # Less-Or-Equal, Subset (r'<=', Operator), # Greater-Or-Equal, Superset (r'>=', Operator), # Identity Operator (r'==', Operator), # M2R10 + ObjM2 # Type Conversion Operator (r'::', Operator), # M2R10 + ObjM2 # Assignment Symbol (r':=', Operator), # Postfix Increment Mutator (r'\+\+', Operator), # M2R10 + ObjM2 # Postfix Decrement Mutator (r'--', Operator), # M2R10 + ObjM2 ], 'unigraph_operators': [ # Arithmetic Operators (r'[+-]', Operator), (r'[*/]', Operator), # ISO 80000-2 compliant Set Difference Operator (r'\\', Operator), # M2R10 + ObjM2 # Relational Operators (r'[=#<>]', Operator), # Dereferencing Operator (r'\^', Operator), # Dereferencing Operator Synonym (r'@', Operator), # ISO # Logical AND Operator Synonym (r'&', Operator), # PIM + ISO # Logical NOT Operator Synonym (r'~', Operator), # PIM + ISO # Smalltalk Message Prefix (r'`', Operator), # ObjM2 ], 'digraph_punctuation': [ # Range Constructor (r'\.\.', Punctuation), # Opening Chevron Bracket (r'<<', Punctuation), # M2R10 + ISO # Closing Chevron Bracket (r'>>', Punctuation), # M2R10 + ISO # Blueprint Punctuation (r'->', Punctuation), # M2R10 + ISO # Distinguish |# and # in M2 R10 (r'\|#', Punctuation), # Distinguish ## and # in M2 R10 (r'##', Punctuation), # Distinguish |* and * in M2 R10 (r'\|\*', Punctuation), ], 'unigraph_punctuation': [ # Common Punctuation (r'[()\[\]{},.:;|]', Punctuation), # Case Label Separator Synonym (r'!', Punctuation), # ISO # Blueprint Punctuation (r'\?', Punctuation), # M2R10 + ObjM2 ], 'comments': [ # Single Line Comment (r'^//.*?\n', Comment.Single), # M2R10 + ObjM2 # Block Comment (r'\(\*([^$].*?)\*\)', Comment.Multiline), # Template Block 
Comment (r'/\*(.*?)\*/', Comment.Multiline), # M2R10 + ObjM2 ], 'pragmas': [ # ISO Style Pragmas (r'<\*.*?\*>', Comment.Preproc), # ISO, M2R10 + ObjM2 # Pascal Style Pragmas (r'\(\*\$.*?\*\)', Comment.Preproc), # PIM ], 'root': [ include('whitespace'), include('dialecttags'), include('pragmas'), include('comments'), include('identifiers'), include('suffixed_number_literals'), # PIM + ISO include('prefixed_number_literals'), # M2R10 + ObjM2 include('plain_number_literals'), include('string_literals'), include('digraph_punctuation'), include('digraph_operators'), include('unigraph_punctuation'), include('unigraph_operators'), ] } # C o m m o n D a t a s e t s # Common Reserved Words Dataset common_reserved_words = ( # 37 common reserved words 'AND', 'ARRAY', 'BEGIN', 'BY', 'CASE', 'CONST', 'DEFINITION', 'DIV', 'DO', 'ELSE', 'ELSIF', 'END', 'EXIT', 'FOR', 'FROM', 'IF', 'IMPLEMENTATION', 'IMPORT', 'IN', 'LOOP', 'MOD', 'MODULE', 'NOT', 'OF', 'OR', 'POINTER', 'PROCEDURE', 'RECORD', 'REPEAT', 'RETURN', 'SET', 'THEN', 'TO', 'TYPE', 'UNTIL', 'VAR', 'WHILE', ) # Common Builtins Dataset common_builtins = ( # 16 common builtins 'ABS', 'BOOLEAN', 'CARDINAL', 'CHAR', 'CHR', 'FALSE', 'INTEGER', 'LONGINT', 'LONGREAL', 'MAX', 'MIN', 'NIL', 'ODD', 'ORD', 'REAL', 'TRUE', ) # Common Pseudo-Module Builtins Dataset common_pseudo_builtins = ( # 4 common pseudo builtins 'ADDRESS', 'BYTE', 'WORD', 'ADR' ) # P I M M o d u l a - 2 D a t a s e t s # Lexemes to Mark as Error Tokens for PIM Modula-2 pim_lexemes_to_reject = ( '!', '`', '@', '$', '%', '?', '\\', '==', '++', '--', '::', '*.', '+>', '->', '<<', '>>', '|#', '##', ) # PIM Modula-2 Additional Reserved Words Dataset pim_additional_reserved_words = ( # 3 additional reserved words 'EXPORT', 'QUALIFIED', 'WITH', ) # PIM Modula-2 Additional Builtins Dataset pim_additional_builtins = ( # 16 additional builtins 'BITSET', 'CAP', 'DEC', 'DISPOSE', 'EXCL', 'FLOAT', 'HALT', 'HIGH', 'INC', 'INCL', 'NEW', 'NIL', 'PROC', 'SIZE', 'TRUNC', 'VAL', ) # PIM Modula-2 Additional Pseudo-Module Builtins Dataset pim_additional_pseudo_builtins = ( # 5 additional pseudo builtins 'SYSTEM', 'PROCESS', 'TSIZE', 'NEWPROCESS', 'TRANSFER', ) # I S O M o d u l a - 2 D a t a s e t s # Lexemes to Mark as Error Tokens for ISO Modula-2 iso_lexemes_to_reject = ( '`', '$', '%', '?', '\\', '==', '++', '--', '::', '*.', '+>', '->', '<<', '>>', '|#', '##', ) # ISO Modula-2 Additional Reserved Words Dataset iso_additional_reserved_words = ( # 9 additional reserved words (ISO 10514-1) 'EXCEPT', 'EXPORT', 'FINALLY', 'FORWARD', 'PACKEDSET', 'QUALIFIED', 'REM', 'RETRY', 'WITH', # 10 additional reserved words (ISO 10514-2 & ISO 10514-3) 'ABSTRACT', 'AS', 'CLASS', 'GUARD', 'INHERIT', 'OVERRIDE', 'READONLY', 'REVEAL', 'TRACED', 'UNSAFEGUARDED', ) # ISO Modula-2 Additional Builtins Dataset iso_additional_builtins = ( # 26 additional builtins (ISO 10514-1) 'BITSET', 'CAP', 'CMPLX', 'COMPLEX', 'DEC', 'DISPOSE', 'EXCL', 'FLOAT', 'HALT', 'HIGH', 'IM', 'INC', 'INCL', 'INT', 'INTERRUPTIBLE', 'LENGTH', 'LFLOAT', 'LONGCOMPLEX', 'NEW', 'PROC', 'PROTECTION', 'RE', 'SIZE', 'TRUNC', 'UNINTERRUPTIBLE', 'VAL', # 5 additional builtins (ISO 10514-2 & ISO 10514-3) 'CREATE', 'DESTROY', 'EMPTY', 'ISMEMBER', 'SELF', ) # ISO Modula-2 Additional Pseudo-Module Builtins Dataset iso_additional_pseudo_builtins = ( # 14 additional builtins (SYSTEM) 'SYSTEM', 'BITSPERLOC', 'LOCSPERBYTE', 'LOCSPERWORD', 'LOC', 'ADDADR', 'SUBADR', 'DIFADR', 'MAKEADR', 'ADR', 'ROTATE', 'SHIFT', 'CAST', 'TSIZE', # 13 additional builtins (COROUTINES)
'COROUTINES', 'ATTACH', 'COROUTINE', 'CURRENT', 'DETACH', 'HANDLER', 'INTERRUPTSOURCE', 'IOTRANSFER', 'IsATTACHED', 'LISTEN', 'NEWCOROUTINE', 'PROT', 'TRANSFER', # 9 additional builtins (EXCEPTIONS) 'EXCEPTIONS', 'AllocateSource', 'CurrentNumber', 'ExceptionNumber', 'ExceptionSource', 'GetMessage', 'IsCurrentSource', 'IsExceptionalExecution', 'RAISE', # 3 additional builtins (TERMINATION) 'TERMINATION', 'IsTerminating', 'HasHalted', # 4 additional builtins (M2EXCEPTION) 'M2EXCEPTION', 'M2Exceptions', 'M2Exception', 'IsM2Exception', 'indexException', 'rangeException', 'caseSelectException', 'invalidLocation', 'functionException', 'wholeValueException', 'wholeDivException', 'realValueException', 'realDivException', 'complexValueException', 'complexDivException', 'protException', 'sysException', 'coException', 'exException', ) # M o d u l a - 2 R 1 0 D a t a s e t s # Lexemes to Mark as Error Tokens for Modula-2 R10 m2r10_lexemes_to_reject = ( '!', '`', '@', '$', '%', '&', '<>', ) # Modula-2 R10 reserved words in addition to the common set m2r10_additional_reserved_words = ( # 12 additional reserved words 'ALIAS', 'ARGLIST', 'BLUEPRINT', 'COPY', 'GENLIB', 'INDETERMINATE', 'NEW', 'NONE', 'OPAQUE', 'REFERENTIAL', 'RELEASE', 'RETAIN', # 2 additional reserved words with symbolic assembly option 'ASM', 'REG', ) # Modula-2 R10 builtins in addition to the common set m2r10_additional_builtins = ( # 26 additional builtins 'CARDINAL', 'COUNT', 'EMPTY', 'EXISTS', 'INSERT', 'LENGTH', 'LONGCARD', 'OCTET', 'PTR', 'PRED', 'READ', 'READNEW', 'REMOVE', 'RETRIEVE', 'SORT', 'STORE', 'SUBSET', 'SUCC', 'TLIMIT', 'TMAX', 'TMIN', 'TRUE', 'TSIZE', 'UNICHAR', 'WRITE', 'WRITEF', ) # Modula-2 R10 Additional Pseudo-Module Builtins Dataset m2r10_additional_pseudo_builtins = ( # 13 additional builtins (TPROPERTIES) 'TPROPERTIES', 'PROPERTY', 'LITERAL', 'TPROPERTY', 'TLITERAL', 'TBUILTIN', 'TDYN', 'TREFC', 'TNIL', 'TBASE', 'TPRECISION', 'TMAXEXP', 'TMINEXP', # 4 additional builtins (CONVERSION) 'CONVERSION', 'TSXFSIZE', 'SXF', 'VAL', # 35 additional builtins (UNSAFE) 'UNSAFE', 'CAST', 'INTRINSIC', 'AVAIL', 'ADD', 'SUB', 'ADDC', 'SUBC', 'FETCHADD', 'FETCHSUB', 'SHL', 'SHR', 'ASHR', 'ROTL', 'ROTR', 'ROTLC', 'ROTRC', 'BWNOT', 'BWAND', 'BWOR', 'BWXOR', 'BWNAND', 'BWNOR', 'SETBIT', 'TESTBIT', 'LSBIT', 'MSBIT', 'CSBITS', 'BAIL', 'HALT', 'TODO', 'FFI', 'ADDR', 'VARGLIST', 'VARGC', # 11 additional builtins (ATOMIC) 'ATOMIC', 'INTRINSIC', 'AVAIL', 'SWAP', 'CAS', 'INC', 'DEC', 'BWAND', 'BWNAND', 'BWOR', 'BWXOR', # 7 additional builtins (COMPILER) 'COMPILER', 'DEBUG', 'MODNAME', 'PROCNAME', 'LINENUM', 'DEFAULT', 'HASH', # 5 additional builtins (ASSEMBLER) 'ASSEMBLER', 'REGISTER', 'SETREG', 'GETREG', 'CODE', ) # O b j e c t i v e M o d u l a - 2 D a t a s e t s # Lexemes to Mark as Error Tokens for Objective Modula-2 objm2_lexemes_to_reject = ( '!', '$', '%', '&', '<>', ) # Objective Modula-2 Extensions # reserved words in addition to Modula-2 R10 objm2_additional_reserved_words = ( # 16 additional reserved words 'BYCOPY', 'BYREF', 'CLASS', 'CONTINUE', 'CRITICAL', 'INOUT', 'METHOD', 'ON', 'OPTIONAL', 'OUT', 'PRIVATE', 'PROTECTED', 'PROTOCOL', 'PUBLIC', 'SUPER', 'TRY', ) # Objective Modula-2 Extensions # builtins in addition to Modula-2 R10 objm2_additional_builtins = ( # 3 additional builtins 'OBJECT', 'NO', 'YES', ) # Objective Modula-2 Extensions # pseudo-module builtins in addition to Modula-2 R10 objm2_additional_pseudo_builtins = ( # None ) # A g l e t M o d u l a - 2 D a t a s e t s # Aglet Extensions # reserved words in 
addition to ISO Modula-2 aglet_additional_reserved_words = ( # None ) # Aglet Extensions # builtins in addition to ISO Modula-2 aglet_additional_builtins = ( # 9 additional builtins 'BITSET8', 'BITSET16', 'BITSET32', 'CARDINAL8', 'CARDINAL16', 'CARDINAL32', 'INTEGER8', 'INTEGER16', 'INTEGER32', ) # Aglet Modula-2 Extensions # pseudo-module builtins in addition to ISO Modula-2 aglet_additional_pseudo_builtins = ( # None ) # G N U M o d u l a - 2 D a t a s e t s # GNU Extensions # reserved words in addition to PIM Modula-2 gm2_additional_reserved_words = ( # 10 additional reserved words 'ASM', '__ATTRIBUTE__', '__BUILTIN__', '__COLUMN__', '__DATE__', '__FILE__', '__FUNCTION__', '__LINE__', '__MODULE__', 'VOLATILE', ) # GNU Extensions # builtins in addition to PIM Modula-2 gm2_additional_builtins = ( # 21 additional builtins 'BITSET8', 'BITSET16', 'BITSET32', 'CARDINAL8', 'CARDINAL16', 'CARDINAL32', 'CARDINAL64', 'COMPLEX32', 'COMPLEX64', 'COMPLEX96', 'COMPLEX128', 'INTEGER8', 'INTEGER16', 'INTEGER32', 'INTEGER64', 'REAL8', 'REAL16', 'REAL32', 'REAL96', 'REAL128', 'THROW', ) # GNU Extensions # pseudo-module builtins in addition to PIM Modula-2 gm2_additional_pseudo_builtins = ( # None ) # p 1 M o d u l a - 2 D a t a s e t s # p1 Extensions # reserved words in addition to ISO Modula-2 p1_additional_reserved_words = ( # None ) # p1 Extensions # builtins in addition to ISO Modula-2 p1_additional_builtins = ( # None ) # p1 Modula-2 Extensions # pseudo-module builtins in addition to ISO Modula-2 p1_additional_pseudo_builtins = ( # 1 additional builtin 'BCD', ) # X D S M o d u l a - 2 D a t a s e t s # XDS Extensions # reserved words in addition to ISO Modula-2 xds_additional_reserved_words = ( # 1 additional reserved word 'SEQ', ) # XDS Extensions # builtins in addition to ISO Modula-2 xds_additional_builtins = ( # 9 additional builtins 'ASH', 'ASSERT', 'DIFFADR_TYPE', 'ENTIER', 'INDEX', 'LEN', 'LONGCARD', 'SHORTCARD', 'SHORTINT', ) # XDS Modula-2 Extensions # pseudo-module builtins in addition to ISO Modula-2 xds_additional_pseudo_builtins = ( # 22 additional builtins (SYSTEM) 'PROCESS', 'NEWPROCESS', 'BOOL8', 'BOOL16', 'BOOL32', 'CARD8', 'CARD16', 'CARD32', 'INT8', 'INT16', 'INT32', 'REF', 'MOVE', 'FILL', 'GET', 'PUT', 'CC', 'int', 'unsigned', 'size_t', 'void', # 3 additional builtins (COMPILER) 'COMPILER', 'OPTION', 'EQUATION', ) # P I M S t a n d a r d L i b r a r y D a t a s e t s # PIM Modula-2 Standard Library Modules Dataset pim_stdlib_module_identifiers = ( 'Terminal', 'FileSystem', 'InOut', 'RealInOut', 'MathLib0', 'Storage', ) # PIM Modula-2 Standard Library Types Dataset pim_stdlib_type_identifiers = ( 'Flag', 'FlagSet', 'Response', 'Command', 'Lock', 'Permission', 'MediumType', 'File', 'FileProc', 'DirectoryProc', 'FileCommand', 'DirectoryCommand', ) # PIM Modula-2 Standard Library Procedures Dataset pim_stdlib_proc_identifiers = ( 'Read', 'BusyRead', 'ReadAgain', 'Write', 'WriteString', 'WriteLn', 'Create', 'Lookup', 'Close', 'Delete', 'Rename', 'SetRead', 'SetWrite', 'SetModify', 'SetOpen', 'Doio', 'SetPos', 'GetPos', 'Length', 'Reset', 'Again', 'ReadWord', 'WriteWord', 'ReadChar', 'WriteChar', 'CreateMedium', 'DeleteMedium', 'AssignName', 'DeassignName', 'ReadMedium', 'LookupMedium', 'OpenInput', 'OpenOutput', 'CloseInput', 'CloseOutput', 'ReadString', 'ReadInt', 'ReadCard', 'ReadWrd', 'WriteInt', 'WriteCard', 'WriteOct', 'WriteHex', 'WriteWrd', 'ReadReal', 'WriteReal', 'WriteFixPt', 'WriteRealOct', 'sqrt', 'exp', 'ln', 'sin', 'cos', 'arctan', 'entier', 'ALLOCATE', 'DEALLOCATE', ) #
PIM Modula-2 Standard Library Variables Dataset pim_stdlib_var_identifiers = ( 'Done', 'termCH', 'in', 'out' ) # PIM Modula-2 Standard Library Constants Dataset pim_stdlib_const_identifiers = ( 'EOL', ) # I S O S t a n d a r d L i b r a r y D a t a s e t s # ISO Modula-2 Standard Library Modules Dataset iso_stdlib_module_identifiers = ( # TO DO ) # ISO Modula-2 Standard Library Types Dataset iso_stdlib_type_identifiers = ( # TO DO ) # ISO Modula-2 Standard Library Procedures Dataset iso_stdlib_proc_identifiers = ( # TO DO ) # ISO Modula-2 Standard Library Variables Dataset iso_stdlib_var_identifiers = ( # TO DO ) # ISO Modula-2 Standard Library Constants Dataset iso_stdlib_const_identifiers = ( # TO DO ) # M 2 R 1 0 S t a n d a r d L i b r a r y D a t a s e t s # Modula-2 R10 Standard Library ADTs Dataset m2r10_stdlib_adt_identifiers = ( 'BCD', 'LONGBCD', 'BITSET', 'SHORTBITSET', 'LONGBITSET', 'LONGLONGBITSET', 'COMPLEX', 'LONGCOMPLEX', 'SHORTCARD', 'LONGLONGCARD', 'SHORTINT', 'LONGLONGINT', 'POSINT', 'SHORTPOSINT', 'LONGPOSINT', 'LONGLONGPOSINT', 'BITSET8', 'BITSET16', 'BITSET32', 'BITSET64', 'BITSET128', 'BS8', 'BS16', 'BS32', 'BS64', 'BS128', 'CARDINAL8', 'CARDINAL16', 'CARDINAL32', 'CARDINAL64', 'CARDINAL128', 'CARD8', 'CARD16', 'CARD32', 'CARD64', 'CARD128', 'INTEGER8', 'INTEGER16', 'INTEGER32', 'INTEGER64', 'INTEGER128', 'INT8', 'INT16', 'INT32', 'INT64', 'INT128', 'STRING', 'UNISTRING', ) # Modula-2 R10 Standard Library Blueprints Dataset m2r10_stdlib_blueprint_identifiers = ( 'ProtoRoot', 'ProtoComputational', 'ProtoNumeric', 'ProtoScalar', 'ProtoNonScalar', 'ProtoCardinal', 'ProtoInteger', 'ProtoReal', 'ProtoComplex', 'ProtoVector', 'ProtoTuple', 'ProtoCompArray', 'ProtoCollection', 'ProtoStaticArray', 'ProtoStaticSet', 'ProtoStaticString', 'ProtoArray', 'ProtoString', 'ProtoSet', 'ProtoMultiSet', 'ProtoDictionary', 'ProtoMultiDict', 'ProtoExtension', 'ProtoIO', 'ProtoCardMath', 'ProtoIntMath', 'ProtoRealMath', ) # Modula-2 R10 Standard Library Modules Dataset m2r10_stdlib_module_identifiers = ( 'ASCII', 'BooleanIO', 'CharIO', 'UnicharIO', 'OctetIO', 'CardinalIO', 'LongCardIO', 'IntegerIO', 'LongIntIO', 'RealIO', 'LongRealIO', 'BCDIO', 'LongBCDIO', 'CardMath', 'LongCardMath', 'IntMath', 'LongIntMath', 'RealMath', 'LongRealMath', 'BCDMath', 'LongBCDMath', 'FileIO', 'FileSystem', 'Storage', 'IOSupport', ) # Modula-2 R10 Standard Library Types Dataset m2r10_stdlib_type_identifiers = ( 'File', 'Status', # TO BE COMPLETED ) # Modula-2 R10 Standard Library Procedures Dataset m2r10_stdlib_proc_identifiers = ( 'ALLOCATE', 'DEALLOCATE', 'SIZE', # TO BE COMPLETED ) # Modula-2 R10 Standard Library Variables Dataset m2r10_stdlib_var_identifiers = ( 'stdIn', 'stdOut', 'stdErr', ) # Modula-2 R10 Standard Library Constants Dataset m2r10_stdlib_const_identifiers = ( 'pi', 'tau', ) # D i a l e c t s # Dialect modes dialects = ( 'unknown', 'm2pim', 'm2iso', 'm2r10', 'objm2', 'm2iso+aglet', 'm2pim+gm2', 'm2iso+p1', 'm2iso+xds', ) # D a t a b a s e s # Lexemes to Mark as Errors Database lexemes_to_reject_db = { # Lexemes to reject for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Lexemes to reject for PIM Modula-2 'm2pim': ( pim_lexemes_to_reject, ), # Lexemes to reject for ISO Modula-2 'm2iso': ( iso_lexemes_to_reject, ), # Lexemes to reject for Modula-2 R10 'm2r10': ( m2r10_lexemes_to_reject, ), # Lexemes to reject for Objective Modula-2 'objm2': ( objm2_lexemes_to_reject, ), # Lexemes to reject for Aglet Modula-2 'm2iso+aglet': ( iso_lexemes_to_reject, ), # Lexemes to reject for GNU 
Modula-2 'm2pim+gm2': ( pim_lexemes_to_reject, ), # Lexemes to reject for p1 Modula-2 'm2iso+p1': ( iso_lexemes_to_reject, ), # Lexemes to reject for XDS Modula-2 'm2iso+xds': ( iso_lexemes_to_reject, ), } # Reserved Words Database reserved_words_db = { # Reserved words for unknown dialect 'unknown': ( common_reserved_words, pim_additional_reserved_words, iso_additional_reserved_words, m2r10_additional_reserved_words, ), # Reserved words for PIM Modula-2 'm2pim': ( common_reserved_words, pim_additional_reserved_words, ), # Reserved words for ISO Modula-2 'm2iso': ( common_reserved_words, iso_additional_reserved_words, ), # Reserved words for Modula-2 R10 'm2r10': ( common_reserved_words, m2r10_additional_reserved_words, ), # Reserved words for Objective Modula-2 'objm2': ( common_reserved_words, m2r10_additional_reserved_words, objm2_additional_reserved_words, ), # Reserved words for Aglet Modula-2 Extensions 'm2iso+aglet': ( common_reserved_words, iso_additional_reserved_words, aglet_additional_reserved_words, ), # Reserved words for GNU Modula-2 Extensions 'm2pim+gm2': ( common_reserved_words, pim_additional_reserved_words, gm2_additional_reserved_words, ), # Reserved words for p1 Modula-2 Extensions 'm2iso+p1': ( common_reserved_words, iso_additional_reserved_words, p1_additional_reserved_words, ), # Reserved words for XDS Modula-2 Extensions 'm2iso+xds': ( common_reserved_words, iso_additional_reserved_words, xds_additional_reserved_words, ), } # Builtins Database builtins_db = { # Builtins for unknown dialect 'unknown': ( common_builtins, pim_additional_builtins, iso_additional_builtins, m2r10_additional_builtins, ), # Builtins for PIM Modula-2 'm2pim': ( common_builtins, pim_additional_builtins, ), # Builtins for ISO Modula-2 'm2iso': ( common_builtins, iso_additional_builtins, ), # Builtins for Modula-2 R10 'm2r10': ( common_builtins, m2r10_additional_builtins, ), # Builtins for Objective Modula-2 'objm2': ( common_builtins, m2r10_additional_builtins, objm2_additional_builtins, ), # Builtins for Aglet Modula-2 Extensions 'm2iso+aglet': ( common_builtins, iso_additional_builtins, aglet_additional_builtins, ), # Builtins for GNU Modula-2 Extensions 'm2pim+gm2': ( common_builtins, pim_additional_builtins, gm2_additional_builtins, ), # Builtins for p1 Modula-2 Extensions 'm2iso+p1': ( common_builtins, iso_additional_builtins, p1_additional_builtins, ), # Builtins for XDS Modula-2 Extensions 'm2iso+xds': ( common_builtins, iso_additional_builtins, xds_additional_builtins, ), } # Pseudo-Module Builtins Database pseudo_builtins_db = { # Builtins for unknown dialect 'unknown': ( common_pseudo_builtins, pim_additional_pseudo_builtins, iso_additional_pseudo_builtins, m2r10_additional_pseudo_builtins, ), # Builtins for PIM Modula-2 'm2pim': ( common_pseudo_builtins, pim_additional_pseudo_builtins, ), # Builtins for ISO Modula-2 'm2iso': ( common_pseudo_builtins, iso_additional_pseudo_builtins, ), # Builtins for Modula-2 R10 'm2r10': ( common_pseudo_builtins, m2r10_additional_pseudo_builtins, ), # Builtins for Objective Modula-2 'objm2': ( common_pseudo_builtins, m2r10_additional_pseudo_builtins, objm2_additional_pseudo_builtins, ), # Builtins for Aglet Modula-2 Extensions 'm2iso+aglet': ( common_pseudo_builtins, iso_additional_pseudo_builtins, aglet_additional_pseudo_builtins, ), # Builtins for GNU Modula-2 Extensions 'm2pim+gm2': ( common_pseudo_builtins, pim_additional_pseudo_builtins, gm2_additional_pseudo_builtins, ), # Builtins for p1 Modula-2 Extensions 'm2iso+p1': (
common_pseudo_builtins, iso_additional_pseudo_builtins, p1_additional_pseudo_builtins, ), # Builtins for XDS Modula-2 Extensions 'm2iso+xds': ( common_pseudo_builtins, iso_additional_pseudo_builtins, xds_additional_pseudo_builtins, ), } # Standard Library ADTs Database stdlib_adts_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library ADTs for PIM Modula-2 'm2pim': ( # No first class library types ), # Standard Library ADTs for ISO Modula-2 'm2iso': ( # No first class library types ), # Standard Library ADTs for Modula-2 R10 'm2r10': ( m2r10_stdlib_adt_identifiers, ), # Standard Library ADTs for Objective Modula-2 'objm2': ( m2r10_stdlib_adt_identifiers, ), # Standard Library ADTs for Aglet Modula-2 'm2iso+aglet': ( # No first class library types ), # Standard Library ADTs for GNU Modula-2 'm2pim+gm2': ( # No first class library types ), # Standard Library ADTs for p1 Modula-2 'm2iso+p1': ( # No first class library types ), # Standard Library ADTs for XDS Modula-2 'm2iso+xds': ( # No first class library types ), } # Standard Library Modules Database stdlib_modules_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library Modules for PIM Modula-2 'm2pim': ( pim_stdlib_module_identifiers, ), # Standard Library Modules for ISO Modula-2 'm2iso': ( iso_stdlib_module_identifiers, ), # Standard Library Modules for Modula-2 R10 'm2r10': ( m2r10_stdlib_blueprint_identifiers, m2r10_stdlib_module_identifiers, m2r10_stdlib_adt_identifiers, ), # Standard Library Modules for Objective Modula-2 'objm2': ( m2r10_stdlib_blueprint_identifiers, m2r10_stdlib_module_identifiers, ), # Standard Library Modules for Aglet Modula-2 'm2iso+aglet': ( iso_stdlib_module_identifiers, ), # Standard Library Modules for GNU Modula-2 'm2pim+gm2': ( pim_stdlib_module_identifiers, ), # Standard Library Modules for p1 Modula-2 'm2iso+p1': ( iso_stdlib_module_identifiers, ), # Standard Library Modules for XDS Modula-2 'm2iso+xds': ( iso_stdlib_module_identifiers, ), } # Standard Library Types Database stdlib_types_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library Types for PIM Modula-2 'm2pim': ( pim_stdlib_type_identifiers, ), # Standard Library Types for ISO Modula-2 'm2iso': ( iso_stdlib_type_identifiers, ), # Standard Library Types for Modula-2 R10 'm2r10': ( m2r10_stdlib_type_identifiers, ), # Standard Library Types for Objective Modula-2 'objm2': ( m2r10_stdlib_type_identifiers, ), # Standard Library Types for Aglet Modula-2 'm2iso+aglet': ( iso_stdlib_type_identifiers, ), # Standard Library Types for GNU Modula-2 'm2pim+gm2': ( pim_stdlib_type_identifiers, ), # Standard Library Types for p1 Modula-2 'm2iso+p1': ( iso_stdlib_type_identifiers, ), # Standard Library Types for XDS Modula-2 'm2iso+xds': ( iso_stdlib_type_identifiers, ), } # Standard Library Procedures Database stdlib_procedures_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library Procedures for PIM Modula-2 'm2pim': ( pim_stdlib_proc_identifiers, ), # Standard Library Procedures for ISO Modula-2 'm2iso': ( iso_stdlib_proc_identifiers, ), # Standard Library Procedures for Modula-2 R10 'm2r10': ( m2r10_stdlib_proc_identifiers, ), # Standard Library Procedures for Objective Modula-2 'objm2': ( m2r10_stdlib_proc_identifiers, ), # Standard Library Procedures for Aglet Modula-2 'm2iso+aglet': ( iso_stdlib_proc_identifiers, ), # Standard Library Procedures for GNU Modula-2 'm2pim+gm2': ( 
pim_stdlib_proc_identifiers, ), # Standard Library Procedures for p1 Modula-2 'm2iso+p1': ( iso_stdlib_proc_identifiers, ), # Standard Library Procedures for XDS Modula-2 'm2iso+xds': ( iso_stdlib_proc_identifiers, ), } # Standard Library Variables Database stdlib_variables_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library Variables for PIM Modula-2 'm2pim': ( pim_stdlib_var_identifiers, ), # Standard Library Variables for ISO Modula-2 'm2iso': ( iso_stdlib_var_identifiers, ), # Standard Library Variables for Modula-2 R10 'm2r10': ( m2r10_stdlib_var_identifiers, ), # Standard Library Variables for Objective Modula-2 'objm2': ( m2r10_stdlib_var_identifiers, ), # Standard Library Variables for Aglet Modula-2 'm2iso+aglet': ( iso_stdlib_var_identifiers, ), # Standard Library Variables for GNU Modula-2 'm2pim+gm2': ( pim_stdlib_var_identifiers, ), # Standard Library Variables for p1 Modula-2 'm2iso+p1': ( iso_stdlib_var_identifiers, ), # Standard Library Variables for XDS Modula-2 'm2iso+xds': ( iso_stdlib_var_identifiers, ), } # Standard Library Constants Database stdlib_constants_db = { # Empty entry for unknown dialect 'unknown': ( # LEAVE THIS EMPTY ), # Standard Library Constants for PIM Modula-2 'm2pim': ( pim_stdlib_const_identifiers, ), # Standard Library Constants for ISO Modula-2 'm2iso': ( iso_stdlib_const_identifiers, ), # Standard Library Constants for Modula-2 R10 'm2r10': ( m2r10_stdlib_const_identifiers, ), # Standard Library Constants for Objective Modula-2 'objm2': ( m2r10_stdlib_const_identifiers, ), # Standard Library Constants for Aglet Modula-2 'm2iso+aglet': ( iso_stdlib_const_identifiers, ), # Standard Library Constants for GNU Modula-2 'm2pim+gm2': ( pim_stdlib_const_identifiers, ), # Standard Library Constants for p1 Modula-2 'm2iso+p1': ( iso_stdlib_const_identifiers, ), # Standard Library Constants for XDS Modula-2 'm2iso+xds': ( iso_stdlib_const_identifiers, ), } # M e t h o d s # initialise a lexer instance def __init__(self, **options): # # check dialect options # dialects = get_list_opt(options, 'dialect', []) # for dialect_option in dialects: if dialect_option in self.dialects[1:]: # valid dialect option found self.set_dialect(dialect_option) break # # Fallback Mode (DEFAULT) else: # no valid dialect option self.set_dialect('unknown') # self.dialect_set_by_tag = False # # check style options # styles = get_list_opt(options, 'style', []) # # use lowercase mode for Algol style if 'algol' in styles or 'algol_nu' in styles: self.algol_publication_mode = True else: self.algol_publication_mode = False # # Check option flags # self.treat_stdlib_adts_as_builtins = get_bool_opt( options, 'treat_stdlib_adts_as_builtins', True) # # call superclass initialiser RegexLexer.__init__(self, **options) # Set lexer to a specified dialect def set_dialect(self, dialect_id): # # if __debug__: # print 'entered set_dialect with arg: ', dialect_id # # check dialect name against known dialects if dialect_id not in self.dialects: dialect = 'unknown' # default else: dialect = dialect_id # # compose lexemes to reject set lexemes_to_reject_set = set() # add each list of reject lexemes for this dialect for list in self.lexemes_to_reject_db[dialect]: lexemes_to_reject_set.update(set(list)) # # compose reserved words set reswords_set = set() # add each list of reserved words for this dialect for list in self.reserved_words_db[dialect]: reswords_set.update(set(list)) # # compose builtins set builtins_set = set() # add each list of builtins for this
dialect excluding reserved words for list in self.builtins_db[dialect]: builtins_set.update(set(list).difference(reswords_set)) # # compose pseudo-builtins set pseudo_builtins_set = set() # add each list of pseudo-builtins for this dialect excluding reserved words for list in self.pseudo_builtins_db[dialect]: pseudo_builtins_set.update(set(list).difference(reswords_set)) # # compose ADTs set adts_set = set() # add each list of ADTs for this dialect excluding reserved words for list in self.stdlib_adts_db[dialect]: adts_set.update(set(list).difference(reswords_set)) # # compose modules set modules_set = set() # add each list of modules for this dialect excluding builtins for list in self.stdlib_modules_db[dialect]: modules_set.update(set(list).difference(builtins_set)) # # compose types set types_set = set() # add each list of types for this dialect excluding builtins for list in self.stdlib_types_db[dialect]: types_set.update(set(list).difference(builtins_set)) # # compose procedures set procedures_set = set() # add each list of procedures for this dialect excluding builtins for list in self.stdlib_procedures_db[dialect]: procedures_set.update(set(list).difference(builtins_set)) # # compose variables set variables_set = set() # add each list of variables for this dialect excluding builtins for list in self.stdlib_variables_db[dialect]: variables_set.update(set(list).difference(builtins_set)) # # compose constants set constants_set = set() # add each list of constants for this dialect excluding builtins for list in self.stdlib_constants_db[dialect]: constants_set.update(set(list).difference(builtins_set)) # # update lexer state self.dialect = dialect self.lexemes_to_reject = lexemes_to_reject_set self.reserved_words = reswords_set self.builtins = builtins_set self.pseudo_builtins = pseudo_builtins_set self.adts = adts_set self.modules = modules_set self.types = types_set self.procedures = procedures_set self.variables = variables_set self.constants = constants_set # # if __debug__: # print 'exiting set_dialect' # print ' self.dialect: ', self.dialect # print ' self.lexemes_to_reject: ', self.lexemes_to_reject # print ' self.reserved_words: ', self.reserved_words # print ' self.builtins: ', self.builtins # print ' self.pseudo_builtins: ', self.pseudo_builtins # print ' self.adts: ', self.adts # print ' self.modules: ', self.modules # print ' self.types: ', self.types # print ' self.procedures: ', self.procedures # print ' self.variables: ', self.variables # print ' self.constants: ', self.constants # Extracts a dialect name from a dialect tag comment string and checks # the extracted name against known dialects. If a match is found, the # matching name is returned, otherwise dialect id 'unknown' is returned def get_dialect_from_dialect_tag(self, dialect_tag): # # if __debug__: # print 'entered get_dialect_from_dialect_tag with arg: ', dialect_tag # # constants left_tag_delim = '(*!'
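# together, these two delimiters enclose the dialect indicator, e.g. # the tag '(*!m2r10*)' carries the indicator 'm2r10'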
right_tag_delim = '*)' left_tag_delim_len = len(left_tag_delim) right_tag_delim_len = len(right_tag_delim) indicator_start = left_tag_delim_len indicator_end = -(right_tag_delim_len) # # check comment string for dialect indicator if len(dialect_tag) > (left_tag_delim_len + right_tag_delim_len) \ and dialect_tag.startswith(left_tag_delim) \ and dialect_tag.endswith(right_tag_delim): # # if __debug__: # print 'dialect tag found' # # extract dialect indicator indicator = dialect_tag[indicator_start:indicator_end] # # if __debug__: # print 'extracted: ', indicator # # check against known dialects for index in range(1, len(self.dialects)): # # if __debug__: # print 'dialects[', index, ']: ', self.dialects[index] # if indicator == self.dialects[index]: # # if __debug__: # print 'matching dialect found' # # indicator matches known dialect return indicator else: # indicator does not match any dialect return 'unknown' # default else: # invalid indicator string return 'unknown' # default # intercept the token stream, modify token attributes and return them def get_tokens_unprocessed(self, text): for index, token, value in RegexLexer.get_tokens_unprocessed(self, text): # # check for dialect tag if dialect has not been set by tag if not self.dialect_set_by_tag and token == Comment.Special: indicated_dialect = self.get_dialect_from_dialect_tag(value) if indicated_dialect != 'unknown': # token is a dialect indicator # reset reserved words and builtins self.set_dialect(indicated_dialect) self.dialect_set_by_tag = True # # check for reserved words, predefined and stdlib identifiers if token is Name: if value in self.reserved_words: token = Keyword.Reserved if self.algol_publication_mode: value = value.lower() # elif value in self.builtins: token = Name.Builtin if self.algol_publication_mode: value = value.lower() # elif value in self.pseudo_builtins: token = Name.Builtin.Pseudo if self.algol_publication_mode: value = value.lower() # elif value in self.adts: if not self.treat_stdlib_adts_as_builtins: token = Name.Namespace else: token = Name.Builtin.Pseudo if self.algol_publication_mode: value = value.lower() # elif value in self.modules: token = Name.Namespace # elif value in self.types: token = Name.Class # elif value in self.procedures: token = Name.Function # elif value in self.variables: token = Name.Variable # elif value in self.constants: token = Name.Constant # elif token in Number: # # mark prefix number literals as error for PIM and ISO dialects if self.dialect not in ('unknown', 'm2r10', 'objm2'): if "'" in value or value[0:2] in ('0b', '0x', '0u'): token = Error # elif self.dialect in ('m2r10', 'objm2'): # mark base-8 number literals as errors for M2 R10 and ObjM2 if token is Number.Oct: token = Error # mark suffix base-16 literals as errors for M2 R10 and ObjM2 elif token is Number.Hex and 'H' in value: token = Error # mark real numbers with E as errors for M2 R10 and ObjM2 elif token is Number.Float and 'E' in value: token = Error # elif token in Comment: # # mark single line comment as error for PIM and ISO dialects if token is Comment.Single: if self.dialect not in ('unknown', 'm2r10', 'objm2'): token = Error # if token is Comment.Preproc: # mark ISO pragma as error for PIM dialects if value.startswith('<*') and \ self.dialect.startswith('m2pim'): token = Error # mark PIM pragma as comment for other dialects elif value.startswith('(*$') and \ self.dialect != 'unknown' and \ not self.dialect.startswith('m2pim'): token = Comment.Multiline # else: # token is neither Name nor Comment # # mark 
lexemes matching the dialect's error token set as errors if value in self.lexemes_to_reject: token = Error # # substitute lexemes when in Algol mode if self.algol_publication_mode: if value == '#': value = '≠' elif value == '<=': value = '≤' elif value == '>=': value = '≥' elif value == '==': value = '≡' elif value == '*.': value = '•' # return result yield index, token, value def analyse_text(text): """It's Pascal-like, but does not use FUNCTION -- uses PROCEDURE instead.""" # Check if this looks like Pascal, if not, bail out early if not ('(*' in text and '*)' in text and ':=' in text): return result = 0 # Procedure is in Modula2 if re.search(r'\bPROCEDURE\b', text): result += 0.6 # FUNCTION is only valid in Pascal, but not in Modula2 if re.search(r'\bFUNCTION\b', text): result = 0.0 return result pygments-2.11.2/pygments/lexers/_usd_builtins.py0000644000175000017500000000317214165547207021726 0ustar carstencarsten""" pygments.lexers._usd_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A collection of known USD-related keywords, attributes, and types. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ COMMON_ATTRIBUTES = [ "extent", "xformOpOrder", ] KEYWORDS = [ "class", "clips", "custom", "customData", "def", "dictionary", "inherits", "over", "payload", "references", "rel", "subLayers", "timeSamples", "uniform", "variantSet", "variantSets", "variants", ] OPERATORS = [ "add", "append", "delete", "prepend", "reorder", ] SPECIAL_NAMES = [ "active", "apiSchemas", "defaultPrim", "elementSize", "endTimeCode", "hidden", "instanceable", "interpolation", "kind", "startTimeCode", "upAxis", ] TYPES = [ "asset", "bool", "color3d", "color3f", "color3h", "color4d", "color4f", "color4h", "double", "double2", "double3", "double4", "float", "float2", "float3", "float4", "frame4d", "half", "half2", "half3", "half4", "int", "int2", "int3", "int4", "keyword", "matrix2d", "matrix3d", "matrix4d", "normal3d", "normal3f", "normal3h", "point3d", "point3f", "point3h", "quatd", "quatf", "quath", "string", "syn", "token", "uchar", "uchar2", "uchar3", "uchar4", "uint", "uint2", "uint3", "uint4", "usdaType", "vector3d", "vector3f", "vector3h", ] pygments-2.11.2/pygments/lexers/graphviz.py0000644000175000017500000000352714165547207020721 0ustar carstencarsten""" pygments.lexers.graphviz ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for the DOT language (graphviz). :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups from pygments.token import Comment, Keyword, Operator, Name, String, Number, \ Punctuation, Whitespace __all__ = ['GraphvizLexer'] class GraphvizLexer(RegexLexer): """ For graphviz DOT graph description language. .. 
versionadded:: 2.8 """ name = 'Graphviz' aliases = ['graphviz', 'dot'] filenames = ['*.gv', '*.dot'] mimetypes = ['text/x-graphviz', 'text/vnd.graphviz'] tokens = { 'root': [ (r'\s+', Whitespace), (r'(#|//).*?$', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'(?i)(node|edge|graph|digraph|subgraph|strict)\b', Keyword), (r'--|->', Operator), (r'[{}[\]:;,]', Punctuation), (r'(\b\D\w*)(\s*)(=)(\s*)', bygroups(Name.Attribute, Whitespace, Punctuation, Whitespace), 'attr_id'), (r'\b(n|ne|e|se|s|sw|w|nw|c|_)\b', Name.Builtin), (r'\b\D\w*', Name.Tag), # node (r'[-]?((\.[0-9]+)|([0-9]+(\.[0-9]*)?))', Number), (r'"(\\"|[^"])*?"', Name.Tag), # quoted node (r'<', Punctuation, 'xml'), ], 'attr_id': [ (r'\b\D\w*', String, '#pop'), (r'[-]?((\.[0-9]+)|([0-9]+(\.[0-9]*)?))', Number, '#pop'), (r'"(\\"|[^"])*?"', String.Double, '#pop'), (r'<', Punctuation, ('#pop', 'xml')), ], 'xml': [ (r'<', Punctuation, '#push'), (r'>', Punctuation, '#pop'), (r'\s+', Whitespace), (r'[^<>\s]', Name.Tag), ] } pygments-2.11.2/pygments/lexers/mime.py0000644000175000017500000001656214165547207020015 0ustar carstencarsten""" pygments.lexers.mime ~~~~~~~~~~~~~~~~~~~~ Lexer for Multipurpose Internet Mail Extensions (MIME) data. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include from pygments.lexers import get_lexer_for_mimetype from pygments.token import Text, Name, String, Operator, Comment, Other from pygments.util import get_int_opt, ClassNotFound __all__ = ["MIMELexer"] class MIMELexer(RegexLexer): """ Lexer for Multipurpose Internet Mail Extensions (MIME) data. This lexer is designed to process nested multipart data. It assumes that the given data contains both header and body (and is split at an empty line). If no valid header is found, then the entire data will be treated as body. Additional options accepted: `MIME-max-level` Max recursion level for nested MIME structure. Any negative number is treated as unlimited. (default: -1) `Content-Type` Treat the data as a specific content type. Useful when the header is missing; otherwise this lexer parses the content type from the header. (default: `text/plain`) `Multipart-Boundary` Set the default multipart boundary delimiter. This option is only used when `Content-Type` is `multipart` and the header is missing; otherwise this lexer parses the boundary from the header. (default: None) `Content-Transfer-Encoding` Treat the data as a specific encoding. Otherwise this lexer parses the encoding from the header. (default: None) ..
versionadded:: 2.5 """ name = "MIME" aliases = ["mime"] mimetypes = ["multipart/mixed", "multipart/related", "multipart/alternative"] def __init__(self, **options): super().__init__(**options) self.boundary = options.get("Multipart-Boundary") self.content_transfer_encoding = options.get("Content_Transfer_Encoding") self.content_type = options.get("Content_Type", "text/plain") self.max_nested_level = get_int_opt(options, "MIME-max-level", -1) def get_header_tokens(self, match): field = match.group(1) if field.lower() in self.attention_headers: yield match.start(1), Name.Tag, field + ":" yield match.start(2), Text.Whitespace, match.group(2) pos = match.end(2) body = match.group(3) for i, t, v in self.get_tokens_unprocessed(body, ("root", field.lower())): yield pos + i, t, v else: yield match.start(), Comment, match.group() def get_body_tokens(self, match): pos_body_start = match.start() entire_body = match.group() # skip first newline if entire_body[0] == '\n': yield pos_body_start, Text.Whitespace, '\n' pos_body_start = pos_body_start + 1 entire_body = entire_body[1:] # if it is not a multipart if not self.content_type.startswith("multipart") or not self.boundary: for i, t, v in self.get_bodypart_tokens(entire_body): yield pos_body_start + i, t, v return # find boundary bdry_pattern = r"^--%s(--)?\n" % re.escape(self.boundary) bdry_matcher = re.compile(bdry_pattern, re.MULTILINE) # some data has prefix text before first boundary m = bdry_matcher.search(entire_body) if m: pos_part_start = pos_body_start + m.end() pos_iter_start = lpos_end = m.end() yield pos_body_start, Text, entire_body[:m.start()] yield pos_body_start + lpos_end, String.Delimiter, m.group() else: pos_part_start = pos_body_start pos_iter_start = 0 # process tokens of each body part for m in bdry_matcher.finditer(entire_body, pos_iter_start): # bodypart lpos_start = pos_part_start - pos_body_start lpos_end = m.start() part = entire_body[lpos_start:lpos_end] for i, t, v in self.get_bodypart_tokens(part): yield pos_part_start + i, t, v # boundary yield pos_body_start + lpos_end, String.Delimiter, m.group() pos_part_start = pos_body_start + m.end() # some data has suffix text after last boundary lpos_start = pos_part_start - pos_body_start if lpos_start != len(entire_body): yield pos_part_start, Text, entire_body[lpos_start:] def get_bodypart_tokens(self, text): # return if: # * no content # * no content type specified # * content encoding is not readable # * max recursion level exceeded if not text.strip() or not self.content_type: return [(0, Other, text)] cte = self.content_transfer_encoding if cte and cte not in {"8bit", "7bit", "quoted-printable"}: return [(0, Other, text)] if self.max_nested_level == 0: return [(0, Other, text)] # get lexer try: lexer = get_lexer_for_mimetype(self.content_type) except ClassNotFound: return [(0, Other, text)] if isinstance(lexer, type(self)): lexer.max_nested_level = self.max_nested_level - 1 return lexer.get_tokens_unprocessed(text) def store_content_type(self, match): self.content_type = match.group(1) prefix_len = match.start(1) - match.start(0) yield match.start(0), Text.Whitespace, match.group(0)[:prefix_len] yield match.start(1), Name.Label, match.group(2) yield match.end(2), String.Delimiter, '/' yield match.start(3), Name.Label, match.group(3) def get_content_type_subtokens(self, match): yield match.start(1), Text, match.group(1) yield match.start(2), Text.Whitespace, match.group(2) yield match.start(3), Name.Attribute, match.group(3) yield match.start(4), Operator, match.group(4) yield
match.start(5), String, match.group(5) if match.group(3).lower() == "boundary": boundary = match.group(5).strip() if boundary[0] == '"' and boundary[-1] == '"': boundary = boundary[1:-1] self.boundary = boundary def store_content_transfer_encoding(self, match): self.content_transfer_encoding = match.group(0).lower() yield match.start(0), Name.Constant, match.group(0) attention_headers = {"content-type", "content-transfer-encoding"} tokens = { "root": [ (r"^([\w-]+):( *)([\s\S]*?\n)(?![ \t])", get_header_tokens), (r"^$[\s\S]+", get_body_tokens), ], "header": [ # folding (r"\n[ \t]", Text.Whitespace), (r"\n(?![ \t])", Text.Whitespace, "#pop"), ], "content-type": [ include("header"), ( r"^\s*((multipart|application|audio|font|image|model|text|video" r"|message)/([\w-]+))", store_content_type, ), (r'(;)((?:[ \t]|\n[ \t])*)([\w:-]+)(=)([\s\S]*?)(?=;|\n(?![ \t]))', get_content_type_subtokens), (r';[ \t]*\n(?![ \t])', Text, '#pop'), ], "content-transfer-encoding": [ include("header"), (r"([\w-]+)", store_content_transfer_encoding), ], } pygments-2.11.2/pygments/lexers/trafficscript.py0000644000175000017500000000275014165547207021727 0ustar carstencarsten""" pygments.lexers.trafficscript ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for RiverBed's TrafficScript (RTS) language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer from pygments.token import String, Number, Name, Keyword, Operator, Text, Comment __all__ = ['RtsLexer'] class RtsLexer(RegexLexer): """ For `Riverbed Stingray Traffic Manager `_ .. versionadded:: 2.1 """ name = 'TrafficScript' aliases = ['trafficscript', 'rts'] filenames = ['*.rts'] tokens = { 'root' : [ (r"'(\\\\|\\[^\\]|[^'\\])*'", String), (r'"', String, 'escapable-string'), (r'(0x[0-9a-fA-F]+|\d+)', Number), (r'\d+\.\d+', Number.Float), (r'\$[a-zA-Z](\w|_)*', Name.Variable), (r'(if|else|for(each)?|in|while|do|break|sub|return|import)', Keyword), (r'[a-zA-Z][\w.]*', Name.Function), (r'[-+*/%=,;(){}<>^.!~|&\[\]\?\:]', Operator), (r'(>=|<=|==|!=|' r'&&|\|\||' r'\+=|.=|-=|\*=|/=|%=|<<=|>>=|&=|\|=|\^=|' r'>>|<<|' r'\+\+|--|=>)', Operator), (r'[ \t\r]+', Text), (r'#[^\n]*', Comment), ], 'escapable-string' : [ (r'\\[tsn]', String.Escape), (r'[^"]', String), (r'"', String, '#pop'), ], } pygments-2.11.2/pygments/lexers/iolang.py0000644000175000017500000000353714165547207020341 0ustar carstencarsten""" pygments.lexers.iolang ~~~~~~~~~~~~~~~~~~~~~~ Lexers for the Io language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number __all__ = ['IoLexer'] class IoLexer(RegexLexer): """ For `Io `_ (a small, prototype-based programming language) source. .. 
versionadded:: 0.10 """ name = 'Io' filenames = ['*.io'] aliases = ['io'] mimetypes = ['text/x-iosrc'] tokens = { 'root': [ (r'\n', Text), (r'\s+', Text), # Comments (r'//(.*?)\n', Comment.Single), (r'#(.*?)\n', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'/\+', Comment.Multiline, 'nestedcomment'), # DoubleQuotedString (r'"(\\\\|\\[^\\]|[^"\\])*"', String), # Operators (r'::=|:=|=|\(|\)|;|,|\*|-|\+|>|<|@|!|/|\||\^|\.|%|&|\[|\]|\{|\}', Operator), # keywords (r'(clone|do|doFile|doString|method|for|if|else|elseif|then)\b', Keyword), # constants (r'(nil|false|true)\b', Name.Constant), # names (r'(Object|list|List|Map|args|Sequence|Coroutine|File)\b', Name.Builtin), (r'[a-zA-Z_]\w*', Name), # numbers (r'(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?', Number.Float), (r'\d+', Number.Integer) ], 'nestedcomment': [ (r'[^+/]+', Comment.Multiline), (r'/\+', Comment.Multiline, '#push'), (r'\+/', Comment.Multiline, '#pop'), (r'[+/]', Comment.Multiline), ] } pygments-2.11.2/pygments/lexers/_lua_builtins.py0000644000175000017500000002012114165547207021705 0ustar carstencarsten""" pygments.lexers._lua_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This file contains the names and modules of lua functions It is able to re-generate itself, but for adding new functions you probably have to add some callbacks (see function module_callbacks). Do not edit the MODULES dict by hand. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ MODULES = {'basic': ('_G', '_VERSION', 'assert', 'collectgarbage', 'dofile', 'error', 'getmetatable', 'ipairs', 'load', 'loadfile', 'next', 'pairs', 'pcall', 'print', 'rawequal', 'rawget', 'rawlen', 'rawset', 'select', 'setmetatable', 'tonumber', 'tostring', 'type', 'xpcall'), 'bit32': ('bit32.arshift', 'bit32.band', 'bit32.bnot', 'bit32.bor', 'bit32.btest', 'bit32.bxor', 'bit32.extract', 'bit32.lrotate', 'bit32.lshift', 'bit32.replace', 'bit32.rrotate', 'bit32.rshift'), 'coroutine': ('coroutine.create', 'coroutine.isyieldable', 'coroutine.resume', 'coroutine.running', 'coroutine.status', 'coroutine.wrap', 'coroutine.yield'), 'debug': ('debug.debug', 'debug.gethook', 'debug.getinfo', 'debug.getlocal', 'debug.getmetatable', 'debug.getregistry', 'debug.getupvalue', 'debug.getuservalue', 'debug.sethook', 'debug.setlocal', 'debug.setmetatable', 'debug.setupvalue', 'debug.setuservalue', 'debug.traceback', 'debug.upvalueid', 'debug.upvaluejoin'), 'io': ('io.close', 'io.flush', 'io.input', 'io.lines', 'io.open', 'io.output', 'io.popen', 'io.read', 'io.stderr', 'io.stdin', 'io.stdout', 'io.tmpfile', 'io.type', 'io.write'), 'math': ('math.abs', 'math.acos', 'math.asin', 'math.atan', 'math.atan2', 'math.ceil', 'math.cos', 'math.cosh', 'math.deg', 'math.exp', 'math.floor', 'math.fmod', 'math.frexp', 'math.huge', 'math.ldexp', 'math.log', 'math.max', 'math.maxinteger', 'math.min', 'math.mininteger', 'math.modf', 'math.pi', 'math.pow', 'math.rad', 'math.random', 'math.randomseed', 'math.sin', 'math.sinh', 'math.sqrt', 'math.tan', 'math.tanh', 'math.tointeger', 'math.type', 'math.ult'), 'modules': ('package.config', 'package.cpath', 'package.loaded', 'package.loadlib', 'package.path', 'package.preload', 'package.searchers', 'package.searchpath', 'require'), 'os': ('os.clock', 'os.date', 'os.difftime', 'os.execute', 'os.exit', 'os.getenv', 'os.remove', 'os.rename', 'os.setlocale', 'os.time', 'os.tmpname'), 'string': ('string.byte', 'string.char', 'string.dump', 'string.find', 'string.format', 'string.gmatch', 'string.gsub', 
'string.len', 'string.lower', 'string.match', 'string.pack',
           'string.packsize', 'string.rep', 'string.reverse', 'string.sub',
           'string.unpack', 'string.upper'),
 'table': ('table.concat', 'table.insert', 'table.move', 'table.pack',
           'table.remove', 'table.sort', 'table.unpack'),
 'utf8': ('utf8.char', 'utf8.charpattern', 'utf8.codepoint', 'utf8.codes',
          'utf8.len', 'utf8.offset')}

if __name__ == '__main__':  # pragma: no cover
    import re
    import sys

    # urllib ends up wanting to import a module called 'math' -- if
    # pygments/lexers is in the path, this ends badly.
    for i in range(len(sys.path)-1, -1, -1):
        if sys.path[i].endswith('/lexers'):
            del sys.path[i]

    try:
        from urllib import urlopen
    except ImportError:
        from urllib.request import urlopen
    import pprint

    # you can't generally find out what module a function belongs to if you
    # have only its name. Because of this, here are some callback functions
    # that recognize if a given function belongs to a specific module
    def module_callbacks():
        def is_in_coroutine_module(name):
            return name.startswith('coroutine.')

        def is_in_modules_module(name):
            if name in ['require', 'module'] or name.startswith('package'):
                return True
            else:
                return False

        def is_in_string_module(name):
            return name.startswith('string.')

        def is_in_table_module(name):
            return name.startswith('table.')

        def is_in_math_module(name):
            return name.startswith('math')

        def is_in_io_module(name):
            return name.startswith('io.')

        def is_in_os_module(name):
            return name.startswith('os.')

        def is_in_debug_module(name):
            return name.startswith('debug.')

        return {'coroutine': is_in_coroutine_module,
                'modules': is_in_modules_module,
                'string': is_in_string_module,
                'table': is_in_table_module,
                'math': is_in_math_module,
                'io': is_in_io_module,
                'os': is_in_os_module,
                'debug': is_in_debug_module}

    def get_newest_version():
        # version links on the manual index look like <A HREF="5.4/">Lua 5.4</A>
        f = urlopen('http://www.lua.org/manual/')
        r = re.compile(r'^<A HREF="(\d\.\d)/">(Lua )?\1</A>')
        for line in f:
            m = r.match(line)
            if m is not None:
                return m.groups()[0]

    def get_lua_functions(version):
        # index entries look like <A HREF="manual.html#pdf-print">print</A>;
        # the lookahead skips the C API (lua_*/LUA_*) names
        f = urlopen('http://www.lua.org/manual/%s/' % version)
        r = re.compile(r'^<A HREF="manual.html#pdf-(?!lua|LUA)([^:]+)">\1')
        functions = []
        for line in f:
            m = r.match(line)
            if m is not None:
                functions.append(m.groups()[0])
        return functions

    def get_function_module(name):
        for mod, cb in module_callbacks().items():
            if cb(name):
                return mod
        if '.' in name:
            return name.split('.')[0]
        else:
            return 'basic'

    def regenerate(filename, modules):
        with open(filename) as fp:
            content = fp.read()

        header = content[:content.find('MODULES = {')]
        footer = content[content.find("if __name__ == '__main__':"):]

        with open(filename, 'w') as fp:
            fp.write(header)
            fp.write('MODULES = %s\n\n' % pprint.pformat(modules))
            fp.write(footer)

    def run():
        version = get_newest_version()
        functions = set()
        for v in ('5.2', version):
            print('> Downloading function index for Lua %s' % v)
            f = get_lua_functions(v)
            print('> %d functions found, %d new:' %
                  (len(f), len(set(f) - functions)))
            functions |= set(f)

        functions = sorted(functions)

        modules = {}
        for full_function_name in functions:
            print('>> %s' % full_function_name)
            m = get_function_module(full_function_name)
            modules.setdefault(m, []).append(full_function_name)
        modules = {k: tuple(v) for k, v in modules.items()}

        regenerate(__file__, modules)

    run()
pygments-2.11.2/pygments/lexers/shell.py0000644000175000017500000010607414165547207020177 0ustar carstencarsten"""
    pygments.lexers.shell
    ~~~~~~~~~~~~~~~~~~~~~

    Lexers for various shells.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
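
    For instance, a small script can be highlighted with the BashLexer
    defined below through the standard high-level API (an illustrative
    snippet; any formatter works here)::

        from pygments import highlight
        from pygments.formatters import TerminalFormatter
        from pygments.lexers.shell import BashLexer

        script = 'for f in *.txt; do echo "$f"; done\n'
        print(highlight(script, BashLexer(), TerminalFormatter()))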
""" import re from pygments.lexer import Lexer, RegexLexer, do_insertions, bygroups, \ include, default, this, using, words from pygments.token import Punctuation, \ Text, Comment, Operator, Keyword, Name, String, Number, Generic from pygments.util import shebang_matches __all__ = ['BashLexer', 'BashSessionLexer', 'TcshLexer', 'BatchLexer', 'SlurmBashLexer', 'MSDOSSessionLexer', 'PowerShellLexer', 'PowerShellSessionLexer', 'TcshSessionLexer', 'FishShellLexer', 'ExeclineLexer'] line_re = re.compile('.*?\n') class BashLexer(RegexLexer): """ Lexer for (ba|k|z|)sh shell scripts. .. versionadded:: 0.6 """ name = 'Bash' aliases = ['bash', 'sh', 'ksh', 'zsh', 'shell'] filenames = ['*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', '.kshrc', 'kshrc', 'PKGBUILD'] mimetypes = ['application/x-sh', 'application/x-shellscript', 'text/x-shellscript'] tokens = { 'root': [ include('basic'), (r'`', String.Backtick, 'backticks'), include('data'), include('interp'), ], 'interp': [ (r'\$\(\(', Keyword, 'math'), (r'\$\(', Keyword, 'paren'), (r'\$\{#?', String.Interpol, 'curly'), (r'\$[a-zA-Z_]\w*', Name.Variable), # user variable (r'\$(?:\d+|[#$?!_*@-])', Name.Variable), # builtin (r'\$', Text), ], 'basic': [ (r'\b(if|fi|else|while|in|do|done|for|then|return|function|case|' r'select|continue|until|esac|elif)(\s*)\b', bygroups(Keyword, Text)), (r'\b(alias|bg|bind|break|builtin|caller|cd|command|compgen|' r'complete|declare|dirs|disown|echo|enable|eval|exec|exit|' r'export|false|fc|fg|getopts|hash|help|history|jobs|kill|let|' r'local|logout|popd|printf|pushd|pwd|read|readonly|set|shift|' r'shopt|source|suspend|test|time|times|trap|true|type|typeset|' r'ulimit|umask|unalias|unset|wait)(?=[\s)`])', Name.Builtin), (r'\A#!.+\n', Comment.Hashbang), (r'#.*\n', Comment.Single), (r'\\[\w\W]', String.Escape), (r'(\b\w+)(\s*)(\+?=)', bygroups(Name.Variable, Text, Operator)), (r'[\[\]{}()=]', Operator), (r'<<<', Operator), # here-string (r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String), (r'&&|\|\|', Operator), ], 'data': [ (r'(?s)\$?"(\\.|[^"\\$])*"', String.Double), (r'"', String.Double, 'string'), (r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), (r"(?s)'.*?'", String.Single), (r';', Punctuation), (r'&', Punctuation), (r'\|', Punctuation), (r'\s+', Text), (r'\d+\b', Number), (r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text), (r'<', Text), ], 'string': [ (r'"', String.Double, '#pop'), (r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double), include('interp'), ], 'curly': [ (r'\}', String.Interpol, '#pop'), (r':-', Keyword), (r'\w+', Name.Variable), (r'[^}:"\'`$\\]+', Punctuation), (r':', Punctuation), include('root'), ], 'paren': [ (r'\)', Keyword, '#pop'), include('root'), ], 'math': [ (r'\)\)', Keyword, '#pop'), (r'[-+*/%^|&]|\*\*|\|\|', Operator), (r'\d+#\d+', Number), (r'\d+#(?! )', Number), (r'\d+', Number), include('root'), ], 'backticks': [ (r'`', String.Backtick, '#pop'), include('root'), ], } def analyse_text(text): if shebang_matches(text, r'(ba|z|)sh'): return 1 if text.startswith('$ '): return 0.2 class SlurmBashLexer(BashLexer): """ Lexer for (ba|k|z|)sh Slurm scripts. .. 
versionadded:: 2.4 """ name = 'Slurm' aliases = ['slurm', 'sbatch'] filenames = ['*.sl'] mimetypes = [] EXTRA_KEYWORDS = {'srun'} def get_tokens_unprocessed(self, text): for index, token, value in BashLexer.get_tokens_unprocessed(self, text): if token is Text and value in self.EXTRA_KEYWORDS: yield index, Name.Builtin, value elif token is Comment.Single and 'SBATCH' in value: yield index, Keyword.Pseudo, value else: yield index, token, value class ShellSessionBaseLexer(Lexer): """ Base lexer for shell sessions. .. versionadded:: 2.1 """ _venv = re.compile(r'^(\([^)]*\))(\s*)') def get_tokens_unprocessed(self, text): innerlexer = self._innerLexerCls(**self.options) pos = 0 curcode = '' insertions = [] backslash_continuation = False for match in line_re.finditer(text): line = match.group() venv_match = self._venv.match(line) if venv_match: venv = venv_match.group(1) venv_whitespace = venv_match.group(2) insertions.append((len(curcode), [(0, Generic.Prompt.VirtualEnv, venv)])) if venv_whitespace: insertions.append((len(curcode), [(0, Text, venv_whitespace)])) line = line[venv_match.end():] m = self._ps1rgx.match(line) if m: # To support output lexers (say diff output), the output # needs to be broken by prompts whenever the output lexer # changes. if not insertions: pos = match.start() insertions.append((len(curcode), [(0, Generic.Prompt, m.group(1))])) curcode += m.group(2) backslash_continuation = curcode.endswith('\\\n') elif backslash_continuation: if line.startswith(self._ps2): insertions.append((len(curcode), [(0, Generic.Prompt, line[:len(self._ps2)])])) curcode += line[len(self._ps2):] else: curcode += line backslash_continuation = curcode.endswith('\\\n') else: if insertions: toks = innerlexer.get_tokens_unprocessed(curcode) for i, t, v in do_insertions(insertions, toks): yield pos+i, t, v yield match.start(), Generic.Output, line insertions = [] curcode = '' if insertions: for i, t, v in do_insertions(insertions, innerlexer.get_tokens_unprocessed(curcode)): yield pos+i, t, v class BashSessionLexer(ShellSessionBaseLexer): """ Lexer for Bash shell sessions, i.e. command lines, including a prompt, interspersed with output. .. versionadded:: 1.1 """ name = 'Bash Session' aliases = ['console', 'shell-session'] filenames = ['*.sh-session', '*.shell-session'] mimetypes = ['application/x-shell-session', 'application/x-sh-session'] _innerLexerCls = BashLexer _ps1rgx = re.compile( r'^((?:(?:\[.*?\])|(?:\(\S+\))?(?:| |sh\S*?|\w+\S+[@:]\S+(?:\s+\S+)' \ r'?|\[\S+[@:][^\n]+\].+))\s*[$#%]\s*)(.*\n?)') _ps2 = '> ' class BatchLexer(RegexLexer): """ Lexer for the DOS/Windows Batch file format. .. 
versionadded:: 0.7 """ name = 'Batchfile' aliases = ['batch', 'bat', 'dosbatch', 'winbatch'] filenames = ['*.bat', '*.cmd'] mimetypes = ['application/x-dos-batch'] flags = re.MULTILINE | re.IGNORECASE _nl = r'\n\x1a' _punct = r'&<>|' _ws = r'\t\v\f\r ,;=\xa0' _nlws = r'\s\x1a\xa0,;=' _space = r'(?:(?:(?:\^[%s])?[%s])+)' % (_nl, _ws) _keyword_terminator = (r'(?=(?:\^[%s]?)?[%s+./:[\\\]]|[%s%s(])' % (_nl, _ws, _nl, _punct)) _token_terminator = r'(?=\^?[%s]|[%s%s])' % (_ws, _punct, _nl) _start_label = r'((?:(?<=^[^:])|^[^:]?)[%s]*)(:)' % _ws _label = r'(?:(?:[^%s%s+:^]|\^[%s]?[\w\W])*)' % (_nlws, _punct, _nl) _label_compound = r'(?:(?:[^%s%s+:^)]|\^[%s]?[^)])*)' % (_nlws, _punct, _nl) _number = r'(?:-?(?:0[0-7]+|0x[\da-f]+|\d+)%s)' % _token_terminator _opword = r'(?:equ|geq|gtr|leq|lss|neq)' _string = r'(?:"[^%s"]*(?:"|(?=[%s])))' % (_nl, _nl) _variable = (r'(?:(?:%%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|' r'[^%%:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%%%s^]|' r'\^[^%%%s])[^=%s]*=(?:[^%%%s^]|\^[^%%%s])*)?)?%%))|' r'(?:\^?![^!:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:' r'[^!%s^]|\^[^!%s])[^=%s]*=(?:[^!%s^]|\^[^!%s])*)?)?\^?!))' % (_nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl)) _core_token = r'(?:(?:(?:\^[%s]?)?[^"%s%s])+)' % (_nl, _nlws, _punct) _core_token_compound = r'(?:(?:(?:\^[%s]?)?[^"%s%s)])+)' % (_nl, _nlws, _punct) _token = r'(?:[%s]+|%s)' % (_punct, _core_token) _token_compound = r'(?:[%s]+|%s)' % (_punct, _core_token_compound) _stoken = (r'(?:[%s]+|(?:%s|%s|%s)+)' % (_punct, _string, _variable, _core_token)) def _make_begin_state(compound, _core_token=_core_token, _core_token_compound=_core_token_compound, _keyword_terminator=_keyword_terminator, _nl=_nl, _punct=_punct, _string=_string, _space=_space, _start_label=_start_label, _stoken=_stoken, _token_terminator=_token_terminator, _variable=_variable, _ws=_ws): rest = '(?:%s|%s|[^"%%%s%s%s])*' % (_string, _variable, _nl, _punct, ')' if compound else '') rest_of_line = r'(?:(?:[^%s^]|\^[%s]?[\w\W])*)' % (_nl, _nl) rest_of_line_compound = r'(?:(?:[^%s^)]|\^[%s]?[^)])*)' % (_nl, _nl) set_space = r'((?:(?:\^[%s]?)?[^\S\n])*)' % _nl suffix = '' if compound: _keyword_terminator = r'(?:(?=\))|%s)' % _keyword_terminator _token_terminator = r'(?:(?=\))|%s)' % _token_terminator suffix = '/compound' return [ ((r'\)', Punctuation, '#pop') if compound else (r'\)((?=\()|%s)%s' % (_token_terminator, rest_of_line), Comment.Single)), (r'(?=%s)' % _start_label, Text, 'follow%s' % suffix), (_space, using(this, state='text')), include('redirect%s' % suffix), (r'[%s]+' % _nl, Text), (r'\(', Punctuation, 'root/compound'), (r'@+', Punctuation), (r'((?:for|if|rem)(?:(?=(?:\^[%s]?)?/)|(?:(?!\^)|' r'(?<=m))(?:(?=\()|%s)))(%s?%s?(?:\^[%s]?)?/(?:\^[%s]?)?\?)' % (_nl, _token_terminator, _space, _core_token_compound if compound else _core_token, _nl, _nl), bygroups(Keyword, using(this, state='text')), 'follow%s' % suffix), (r'(goto%s)(%s(?:\^[%s]?)?/(?:\^[%s]?)?\?%s)' % (_keyword_terminator, rest, _nl, _nl, rest), bygroups(Keyword, using(this, state='text')), 'follow%s' % suffix), (words(('assoc', 'break', 'cd', 'chdir', 'cls', 'color', 'copy', 'date', 'del', 'dir', 'dpath', 'echo', 'endlocal', 'erase', 'exit', 'ftype', 'keys', 'md', 'mkdir', 'mklink', 'move', 'path', 'pause', 'popd', 'prompt', 'pushd', 'rd', 'ren', 'rename', 'rmdir', 'setlocal', 'shift', 'start', 'time', 'title', 'type', 'ver', 'verify', 'vol'), suffix=_keyword_terminator), Keyword, 'follow%s' % suffix), (r'(call)(%s?)(:)' % _space, bygroups(Keyword, using(this, state='text'), 
Punctuation), 'call%s' % suffix), (r'call%s' % _keyword_terminator, Keyword), (r'(for%s(?!\^))(%s)(/f%s)' % (_token_terminator, _space, _token_terminator), bygroups(Keyword, using(this, state='text'), Keyword), ('for/f', 'for')), (r'(for%s(?!\^))(%s)(/l%s)' % (_token_terminator, _space, _token_terminator), bygroups(Keyword, using(this, state='text'), Keyword), ('for/l', 'for')), (r'for%s(?!\^)' % _token_terminator, Keyword, ('for2', 'for')), (r'(goto%s)(%s?)(:?)' % (_keyword_terminator, _space), bygroups(Keyword, using(this, state='text'), Punctuation), 'label%s' % suffix), (r'(if(?:(?=\()|%s)(?!\^))(%s?)((?:/i%s)?)(%s?)((?:not%s)?)(%s?)' % (_token_terminator, _space, _token_terminator, _space, _token_terminator, _space), bygroups(Keyword, using(this, state='text'), Keyword, using(this, state='text'), Keyword, using(this, state='text')), ('(?', 'if')), (r'rem(((?=\()|%s)%s?%s?.*|%s%s)' % (_token_terminator, _space, _stoken, _keyword_terminator, rest_of_line_compound if compound else rest_of_line), Comment.Single, 'follow%s' % suffix), (r'(set%s)%s(/a)' % (_keyword_terminator, set_space), bygroups(Keyword, using(this, state='text'), Keyword), 'arithmetic%s' % suffix), (r'(set%s)%s((?:/p)?)%s((?:(?:(?:\^[%s]?)?[^"%s%s^=%s]|' r'\^[%s]?[^"=])+)?)((?:(?:\^[%s]?)?=)?)' % (_keyword_terminator, set_space, set_space, _nl, _nl, _punct, ')' if compound else '', _nl, _nl), bygroups(Keyword, using(this, state='text'), Keyword, using(this, state='text'), using(this, state='variable'), Punctuation), 'follow%s' % suffix), default('follow%s' % suffix) ] def _make_follow_state(compound, _label=_label, _label_compound=_label_compound, _nl=_nl, _space=_space, _start_label=_start_label, _token=_token, _token_compound=_token_compound, _ws=_ws): suffix = '/compound' if compound else '' state = [] if compound: state.append((r'(?=\))', Text, '#pop')) state += [ (r'%s([%s]*)(%s)(.*)' % (_start_label, _ws, _label_compound if compound else _label), bygroups(Text, Punctuation, Text, Name.Label, Comment.Single)), include('redirect%s' % suffix), (r'(?=[%s])' % _nl, Text, '#pop'), (r'\|\|?|&&?', Punctuation, '#pop'), include('text') ] return state def _make_arithmetic_state(compound, _nl=_nl, _punct=_punct, _string=_string, _variable=_variable, _ws=_ws, _nlws=_nlws): op = r'=+\-*/!~' state = [] if compound: state.append((r'(?=\))', Text, '#pop')) state += [ (r'0[0-7]+', Number.Oct), (r'0x[\da-f]+', Number.Hex), (r'\d+', Number.Integer), (r'[(),]+', Punctuation), (r'([%s]|%%|\^\^)+' % op, Operator), (r'(%s|%s|(\^[%s]?)?[^()%s%%\^"%s%s]|\^[%s]?%s)+' % (_string, _variable, _nl, op, _nlws, _punct, _nlws, r'[^)]' if compound else r'[\w\W]'), using(this, state='variable')), (r'(?=[\x00|&])', Text, '#pop'), include('follow') ] return state def _make_call_state(compound, _label=_label, _label_compound=_label_compound): state = [] if compound: state.append((r'(?=\))', Text, '#pop')) state.append((r'(:?)(%s)' % (_label_compound if compound else _label), bygroups(Punctuation, Name.Label), '#pop')) return state def _make_label_state(compound, _label=_label, _label_compound=_label_compound, _nl=_nl, _punct=_punct, _string=_string, _variable=_variable): state = [] if compound: state.append((r'(?=\))', Text, '#pop')) state.append((r'(%s?)((?:%s|%s|\^[%s]?%s|[^"%%^%s%s%s])*)' % (_label_compound if compound else _label, _string, _variable, _nl, r'[^)]' if compound else r'[\w\W]', _nl, _punct, r')' if compound else ''), bygroups(Name.Label, Comment.Single), '#pop')) return state def _make_redirect_state(compound, 
_core_token_compound=_core_token_compound,
                             _nl=_nl, _punct=_punct, _stoken=_stoken,
                             _string=_string, _space=_space,
                             _variable=_variable, _nlws=_nlws):
        stoken_compound = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
                           (_punct, _string, _variable, _core_token_compound))
        return [
            # optional handle digit, descriptor-duplication operator, handle
            (r'((?:(?<=[%s])\d)?)(>>?&|<&)([%s]*)(\d)' % (_nlws, _nlws),
             bygroups(Number.Integer, Punctuation, Text, Number.Integer)),
            # optional (unescaped) handle digit, operator, then the target
            (r'((?:(?<=[%s])(?<!\^[%s])\d)?)(>>?|<)(%s?%s)' %
             (_nlws, _nl, _space, stoken_compound if compound else _stoken),
             bygroups(Number.Integer, Punctuation, using(this, state='text')))
        ]

    tokens = {
        'root': _make_begin_state(False),
        'follow': _make_follow_state(False),
        'arithmetic': _make_arithmetic_state(False),
        'call': _make_call_state(False),
        'label': _make_label_state(False),
        'redirect': _make_redirect_state(False),
        'root/compound': _make_begin_state(True),
        'follow/compound': _make_follow_state(True),
        'arithmetic/compound': _make_arithmetic_state(True),
        'call/compound': _make_call_state(True),
        'label/compound': _make_label_state(True),
        'redirect/compound': _make_redirect_state(True),
        'variable-or-escape': [
            (_variable, Name.Variable),
            (r'%%%%|\^[%s]?(\^!|[\w\W])' % _nl, String.Escape)
        ],
        'string': [
            (r'"', String.Double, '#pop'),
            (_variable, Name.Variable),
            (r'\^!|%%', String.Escape),
            (r'[^"%%^%s]+|[%%^]' % _nl, String.Double),
            default('#pop')
        ],
        'sqstring': [
            include('variable-or-escape'),
            (r'[^%]+|%', String.Single)
        ],
        'bqstring': [
            include('variable-or-escape'),
            (r'[^%]+|%', String.Backtick)
        ],
        'text': [
            (r'"', String.Double, 'string'),
            include('variable-or-escape'),
            (r'[^"%%^%s%s\d)]+|.' % (_nlws, _punct), Text)
        ],
        'variable': [
            (r'"', String.Double, 'string'),
            include('variable-or-escape'),
            (r'[^"%%^%s]+|.' % _nl, Name.Variable)
        ],
        'for': [
            (r'(%s)(in)(%s)(\()' % (_space, _space),
             bygroups(using(this, state='text'), Keyword,
                      using(this, state='text'), Punctuation), '#pop'),
            include('follow')
        ],
        'for2': [
            (r'\)', Punctuation),
            (r'(%s)(do%s)' % (_space, _token_terminator),
             bygroups(using(this, state='text'), Keyword), '#pop'),
            (r'[%s]+' % _nl, Text),
            include('follow')
        ],
        'for/f': [
            (r'(")((?:%s|[^"])*?")([%s]*)(\))' % (_variable, _nlws),
             bygroups(String.Double, using(this, state='string'), Text,
                      Punctuation)),
            (r'"', String.Double, ('#pop', 'for2', 'string')),
            (r"('(?:%%%%|%s|[\w\W])*?')([%s]*)(\))" % (_variable, _nlws),
             bygroups(using(this, state='sqstring'), Text, Punctuation)),
            (r'(`(?:%%%%|%s|[\w\W])*?`)([%s]*)(\))' % (_variable, _nlws),
             bygroups(using(this, state='bqstring'), Text, Punctuation)),
            include('for2')
        ],
        'for/l': [
            (r'-?\d+', Number.Integer),
            include('for2')
        ],
        'if': [
            (r'((?:cmdextversion|errorlevel)%s)(%s)(\d+)' %
             (_token_terminator, _space),
             bygroups(Keyword, using(this, state='text'),
                      Number.Integer), '#pop'),
            (r'(defined%s)(%s)(%s)' % (_token_terminator, _space, _stoken),
             bygroups(Keyword, using(this, state='text'),
                      using(this, state='variable')), '#pop'),
            (r'(exist%s)(%s%s)' % (_token_terminator, _space, _stoken),
             bygroups(Keyword, using(this, state='text')), '#pop'),
            (r'(%s%s)(%s)(%s%s)' % (_number, _space, _opword, _space, _number),
             bygroups(using(this, state='arithmetic'), Operator.Word,
                      using(this, state='arithmetic')), '#pop'),
            (_stoken, using(this, state='text'), ('#pop', 'if2')),
        ],
        'if2': [
            (r'(%s?)(==)(%s?%s)' % (_space, _space, _stoken),
             bygroups(using(this, state='text'), Operator,
                      using(this, state='text')), '#pop'),
            (r'(%s)(%s)(%s%s)' % (_space, _opword, _space, _stoken),
             bygroups(using(this, state='text'), Operator.Word,
                      using(this, state='text')), '#pop')
        ],
        '(?': [
            (_space, using(this,
state='text')), (r'\(', Punctuation, ('#pop', 'else?', 'root/compound')), default('#pop') ], 'else?': [ (_space, using(this, state='text')), (r'else%s' % _token_terminator, Keyword, '#pop'), default('#pop') ] } class MSDOSSessionLexer(ShellSessionBaseLexer): """ Lexer for MS DOS shell sessions, i.e. command lines, including a prompt, interspersed with output. .. versionadded:: 2.1 """ name = 'MSDOS Session' aliases = ['doscon'] filenames = [] mimetypes = [] _innerLexerCls = BatchLexer _ps1rgx = re.compile(r'^([^>]*>)(.*\n?)') _ps2 = 'More? ' class TcshLexer(RegexLexer): """ Lexer for tcsh scripts. .. versionadded:: 0.10 """ name = 'Tcsh' aliases = ['tcsh', 'csh'] filenames = ['*.tcsh', '*.csh'] mimetypes = ['application/x-csh'] tokens = { 'root': [ include('basic'), (r'\$\(', Keyword, 'paren'), (r'\$\{#?', Keyword, 'curly'), (r'`', String.Backtick, 'backticks'), include('data'), ], 'basic': [ (r'\b(if|endif|else|while|then|foreach|case|default|' r'continue|goto|breaksw|end|switch|endsw)\s*\b', Keyword), (r'\b(alias|alloc|bg|bindkey|break|builtins|bye|caller|cd|chdir|' r'complete|dirs|echo|echotc|eval|exec|exit|fg|filetest|getxvers|' r'glob|getspath|hashstat|history|hup|inlib|jobs|kill|' r'limit|log|login|logout|ls-F|migrate|newgrp|nice|nohup|notify|' r'onintr|popd|printenv|pushd|rehash|repeat|rootnode|popd|pushd|' r'set|shift|sched|setenv|setpath|settc|setty|setxvers|shift|' r'source|stop|suspend|source|suspend|telltc|time|' r'umask|unalias|uncomplete|unhash|universe|unlimit|unset|unsetenv|' r'ver|wait|warp|watchlog|where|which)\s*\b', Name.Builtin), (r'#.*', Comment), (r'\\[\w\W]', String.Escape), (r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)), (r'[\[\]{}()=]+', Operator), (r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String), (r';', Punctuation), ], 'data': [ (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double), (r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), (r'\s+', Text), (r'[^=\s\[\]{}()$"\'`\\;#]+', Text), (r'\d+(?= |\Z)', Number), (r'\$#?(\w+|.)', Name.Variable), ], 'curly': [ (r'\}', Keyword, '#pop'), (r':-', Keyword), (r'\w+', Name.Variable), (r'[^}:"\'`$]+', Punctuation), (r':', Punctuation), include('root'), ], 'paren': [ (r'\)', Keyword, '#pop'), include('root'), ], 'backticks': [ (r'`', String.Backtick, '#pop'), include('root'), ], } class TcshSessionLexer(ShellSessionBaseLexer): """ Lexer for Tcsh sessions, i.e. command lines, including a prompt, interspersed with output. .. versionadded:: 2.1 """ name = 'Tcsh Session' aliases = ['tcshcon'] filenames = [] mimetypes = [] _innerLexerCls = TcshLexer _ps1rgx = re.compile(r'^([^>]+>)(.*\n?)') _ps2 = '? ' class PowerShellLexer(RegexLexer): """ For Windows PowerShell code. .. versionadded:: 1.5 """ name = 'PowerShell' aliases = ['powershell', 'pwsh', 'posh', 'ps1', 'psm1'] filenames = ['*.ps1', '*.psm1'] mimetypes = ['text/x-powershell'] flags = re.DOTALL | re.IGNORECASE | re.MULTILINE keywords = ( 'while validateset validaterange validatepattern validatelength ' 'validatecount until trap switch return ref process param parameter in ' 'if global: function foreach for finally filter end elseif else ' 'dynamicparam do default continue cmdletbinding break begin alias \\? 
' '% #script #private #local #global mandatory parametersetname position ' 'valuefrompipeline valuefrompipelinebypropertyname ' 'valuefromremainingarguments helpmessage try catch throw').split() operators = ( 'and as band bnot bor bxor casesensitive ccontains ceq cge cgt cle ' 'clike clt cmatch cne cnotcontains cnotlike cnotmatch contains ' 'creplace eq exact f file ge gt icontains ieq ige igt ile ilike ilt ' 'imatch ine inotcontains inotlike inotmatch ireplace is isnot le like ' 'lt match ne not notcontains notlike notmatch or regex replace ' 'wildcard').split() verbs = ( 'write where watch wait use update unregister unpublish unprotect ' 'unlock uninstall undo unblock trace test tee take sync switch ' 'suspend submit stop step start split sort skip show set send select ' 'search scroll save revoke resume restore restart resolve resize ' 'reset request repair rename remove register redo receive read push ' 'publish protect pop ping out optimize open new move mount merge ' 'measure lock limit join invoke install initialize import hide group ' 'grant get format foreach find export expand exit enter enable edit ' 'dismount disconnect disable deny debug cxnew copy convertto ' 'convertfrom convert connect confirm compress complete compare close ' 'clear checkpoint block backup assert approve aggregate add').split() aliases_ = ( 'ac asnp cat cd cfs chdir clc clear clhy cli clp cls clv cnsn ' 'compare copy cp cpi cpp curl cvpa dbp del diff dir dnsn ebp echo epal ' 'epcsv epsn erase etsn exsn fc fhx fl foreach ft fw gal gbp gc gci gcm ' 'gcs gdr ghy gi gjb gl gm gmo gp gps gpv group gsn gsnp gsv gu gv gwmi ' 'h history icm iex ihy ii ipal ipcsv ipmo ipsn irm ise iwmi iwr kill lp ' 'ls man md measure mi mount move mp mv nal ndr ni nmo npssc nsn nv ogv ' 'oh popd ps pushd pwd r rbp rcjb rcsn rd rdr ren ri rjb rm rmdir rmo ' 'rni rnp rp rsn rsnp rujb rv rvpa rwmi sajb sal saps sasv sbp sc select ' 'set shcm si sl sleep sls sort sp spjb spps spsv start sujb sv swmi tee ' 'trcm type wget where wjb write').split() commenthelp = ( 'component description example externalhelp forwardhelpcategory ' 'forwardhelptargetname functionality inputs link ' 'notes outputs parameter remotehelprunspace role synopsis').split() tokens = { 'root': [ # we need to count pairs of parentheses for correct highlight # of '$(...)' blocks in strings (r'\(', Punctuation, 'child'), (r'\s+', Text), (r'^(\s*#[#\s]*)(\.(?:%s))([^\n]*$)' % '|'.join(commenthelp), bygroups(Comment, String.Doc, Comment)), (r'#[^\n]*?$', Comment), (r'(<|<)#', Comment.Multiline, 'multline'), (r'@"\n', String.Heredoc, 'heredoc-double'), (r"@'\n.*?\n'@", String.Heredoc), # escaped syntax (r'`[\'"$@-]', Punctuation), (r'"', String.Double, 'string'), (r"'([^']|'')*'", String.Single), (r'(\$|@@|@)((global|script|private|env):)?\w+', Name.Variable), (r'(%s)\b' % '|'.join(keywords), Keyword), (r'-(%s)\b' % '|'.join(operators), Operator), (r'(%s)-[a-z_]\w*\b' % '|'.join(verbs), Name.Builtin), (r'(%s)\s' % '|'.join(aliases_), Name.Builtin), (r'\[[a-z_\[][\w. 
`,\[\]]*\]', Name.Constant), # .net [type]s (r'-[a-z_]\w*', Name), (r'\w+', Name), (r'[.,;:@{}\[\]$()=+*/\\&%!~?^`|<>-]', Punctuation), ], 'child': [ (r'\)', Punctuation, '#pop'), include('root'), ], 'multline': [ (r'[^#&.]+', Comment.Multiline), (r'#(>|>)', Comment.Multiline, '#pop'), (r'\.(%s)' % '|'.join(commenthelp), String.Doc), (r'[#&.]', Comment.Multiline), ], 'string': [ (r"`[0abfnrtv'\"$`]", String.Escape), (r'[^$`"]+', String.Double), (r'\$\(', Punctuation, 'child'), (r'""', String.Double), (r'[`$]', String.Double), (r'"', String.Double, '#pop'), ], 'heredoc-double': [ (r'\n"@', String.Heredoc, '#pop'), (r'\$\(', Punctuation, 'child'), (r'[^@\n]+"]', String.Heredoc), (r".", String.Heredoc), ] } class PowerShellSessionLexer(ShellSessionBaseLexer): """ Lexer for PowerShell sessions, i.e. command lines, including a prompt, interspersed with output. .. versionadded:: 2.1 """ name = 'PowerShell Session' aliases = ['pwsh-session', 'ps1con'] filenames = [] mimetypes = [] _innerLexerCls = PowerShellLexer _ps1rgx = re.compile(r'^((?:\[[^]]+\]: )?PS[^>]*> ?)(.*\n?)') _ps2 = '>> ' class FishShellLexer(RegexLexer): """ Lexer for Fish shell scripts. .. versionadded:: 2.1 """ name = 'Fish' aliases = ['fish', 'fishshell'] filenames = ['*.fish', '*.load'] mimetypes = ['application/x-fish'] tokens = { 'root': [ include('basic'), include('data'), include('interp'), ], 'interp': [ (r'\$\(\(', Keyword, 'math'), (r'\(', Keyword, 'paren'), (r'\$#?(\w+|.)', Name.Variable), ], 'basic': [ (r'\b(begin|end|if|else|while|break|for|in|return|function|block|' r'case|continue|switch|not|and|or|set|echo|exit|pwd|true|false|' r'cd|count|test)(\s*)\b', bygroups(Keyword, Text)), (r'\b(alias|bg|bind|breakpoint|builtin|command|commandline|' r'complete|contains|dirh|dirs|emit|eval|exec|fg|fish|fish_config|' r'fish_indent|fish_pager|fish_prompt|fish_right_prompt|' r'fish_update_completions|fishd|funced|funcsave|functions|help|' r'history|isatty|jobs|math|mimedb|nextd|open|popd|prevd|psub|' r'pushd|random|read|set_color|source|status|trap|type|ulimit|' r'umask|vared|fc|getopts|hash|kill|printf|time|wait)\s*\b(?!\.)', Name.Builtin), (r'#.*\n', Comment), (r'\\[\w\W]', String.Escape), (r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)), (r'[\[\]()=]', Operator), (r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String), ], 'data': [ (r'(?s)\$?"(\\\\|\\[0-7]+|\\.|[^"\\$])*"', String.Double), (r'"', String.Double, 'string'), (r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), (r"(?s)'.*?'", String.Single), (r';', Punctuation), (r'&|\||\^|<|>', Operator), (r'\s+', Text), (r'\d+(?= |\Z)', Number), (r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text), ], 'string': [ (r'"', String.Double, '#pop'), (r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double), include('interp'), ], 'paren': [ (r'\)', Keyword, '#pop'), include('root'), ], 'math': [ (r'\)\)', Keyword, '#pop'), (r'[-+*/%^|&]|\*\*|\|\|', Operator), (r'\d+#\d+', Number), (r'\d+#(?! )', Number), (r'\d+', Number), include('root'), ], } class ExeclineLexer(RegexLexer): """ Lexer for Laurent Bercot's execline language (https://skarnet.org/software/execline). .. 
versionadded:: 2.7 """ name = 'execline' aliases = ['execline'] filenames = ['*.exec'] tokens = { 'root': [ include('basic'), include('data'), include('interp') ], 'interp': [ (r'\$\{', String.Interpol, 'curly'), (r'\$[\w@#]+', Name.Variable), # user variable (r'\$', Text), ], 'basic': [ (r'\b(background|backtick|cd|define|dollarat|elgetopt|' r'elgetpositionals|elglob|emptyenv|envfile|exec|execlineb|' r'exit|export|fdblock|fdclose|fdmove|fdreserve|fdswap|' r'forbacktickx|foreground|forstdin|forx|getcwd|getpid|heredoc|' r'homeof|if|ifelse|ifte|ifthenelse|importas|loopwhilex|' r'multidefine|multisubstitute|pipeline|piperw|posix-cd|' r'redirfd|runblock|shift|trap|tryexec|umask|unexport|wait|' r'withstdinas)\b', Name.Builtin), (r'\A#!.+\n', Comment.Hashbang), (r'#.*\n', Comment.Single), (r'[{}]', Operator) ], 'data': [ (r'(?s)"(\\.|[^"\\$])*"', String.Double), (r'"', String.Double, 'string'), (r'\s+', Text), (r'[^\s{}$"\\]+', Text) ], 'string': [ (r'"', String.Double, '#pop'), (r'(?s)(\\\\|\\.|[^"\\$])+', String.Double), include('interp'), ], 'curly': [ (r'\}', String.Interpol, '#pop'), (r'[\w#@]+', Name.Variable), include('root') ] } def analyse_text(text): if shebang_matches(text, r'execlineb'): return 1 pygments-2.11.2/pygments/lexers/gcodelexer.py0000644000175000017500000000147214165547207021205 0ustar carstencarsten""" pygments.lexers.gcodelexer ~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for the G Code Language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups from pygments.token import Comment, Name, Text, Keyword, Number __all__ = ['GcodeLexer'] class GcodeLexer(RegexLexer): """ For gcode source code. .. versionadded:: 2.9 """ name = 'g-code' aliases = ['gcode'] filenames = ['*.gcode'] tokens = { 'root': [ (r';.*\n', Comment), (r'^[gmGM]\d{1,4}\s', Name.Builtin), # M or G commands (r'([^gGmM])([+-]?\d*[.]?\d+)', bygroups(Keyword, Number)), (r'\s', Text.Whitespace), (r'.*\n', Text), ] } pygments-2.11.2/pygments/lexers/_julia_builtins.py0000644000175000017500000002715314165547207022244 0ustar carstencarsten""" pygments.lexers._julia_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Julia builtins. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
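
    The name lists below are consumed by the Julia lexer, typically folded
    into token rules with ``words()``; a minimal sketch (illustrative, not
    part of the lexer itself)::

        from pygments.lexer import words
        from pygments.lexers._julia_builtins import KEYWORD_LIST
        from pygments.token import Keyword

        # words() compiles a name list into one optimized alternation regex
        keyword_rule = (words(KEYWORD_LIST, suffix=r'\b'), Keyword)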
""" # operators # see https://github.com/JuliaLang/julia/blob/master/src/julia-parser.scm # Julia v1.6.0-rc1 OPERATORS_LIST = [ # other '->', # prec-assignment ':=', '$=', # prec-conditional, prec-lazy-or, prec-lazy-and '?', '||', '&&', # prec-colon ':', # prec-plus '$', # prec-decl '::', ] DOTTED_OPERATORS_LIST = [ # prec-assignment r'=', r'+=', r'-=', r'*=', r'/=', r'//=', r'\=', r'^=', r'÷=', r'%=', r'<<=', r'>>=', r'>>>=', r'|=', r'&=', r'⊻=', r'≔', r'⩴', r"≕'", r'~', # prec-pair '=>', # prec-arrow r'→', r'↔', r'↚', r'↛', r'↞', r'↠', r'↢', r'↣', r'↦', r'↤', r'↮', r'⇎', r'⇍', r'⇏', r'⇐', r'⇒', r'⇔', r'⇴', r'⇶', r'⇷', r'⇸', r'⇹', r'⇺', r'⇻', r'⇼', r'⇽', r'⇾', r'⇿', r'⟵', r'⟶', r'⟷', r'⟹', r'⟺', r'⟻', r'⟼', r'⟽', r'⟾', r'⟿', r'⤀', r'⤁', r'⤂', r'⤃', r'⤄', r'⤅', r'⤆', r'⤇', r'⤌', r'⤍', r'⤎', r'⤏', r'⤐', r'⤑', r'⤔', r'⤕', r'⤖', r'⤗', r'⤘', r'⤝', r'⤞', r'⤟', r'⤠', r'⥄', r'⥅', r'⥆', r'⥇', r'⥈', r'⥊', r'⥋', r'⥎', r'⥐', r'⥒', r'⥓', r'⥖', r'⥗', r'⥚', r'⥛', r'⥞', r'⥟', r'⥢', r'⥤', r'⥦', r'⥧', r'⥨', r'⥩', r'⥪', r'⥫', r'⥬', r'⥭', r'⥰', r'⧴', r'⬱', r'⬰', r'⬲', r'⬳', r'⬴', r'⬵', r'⬶', r'⬷', r'⬸', r'⬹', r'⬺', r'⬻', r'⬼', r'⬽', r'⬾', r'⬿', r'⭀', r'⭁', r'⭂', r'⭃', r'⭄', r'⭇', r'⭈', r'⭉', r'⭊', r'⭋', r'⭌', r'←', r'→', r'⇜', r'⇝', r'↜', r'↝', r'↩', r'↪', r'↫', r'↬', r'↼', r'↽', r'⇀', r'⇁', r'⇄', r'⇆', r'⇇', r'⇉', r'⇋', r'⇌', r'⇚', r'⇛', r'⇠', r'⇢', r'↷', r'↶', r'↺', r'↻', r'-->', r'<--', r'<-->', # prec-comparison r'>', r'<', r'>=', r'≥', r'<=', r'≤', r'==', r'===', r'≡', r'!=', r'≠', r'!==', r'≢', r'∈', r'∉', r'∋', r'∌', r'⊆', r'⊈', r'⊂', r'⊄', r'⊊', r'∝', r'∊', r'∍', r'∥', r'∦', r'∷', r'∺', r'∻', r'∽', r'∾', r'≁', r'≃', r'≂', r'≄', r'≅', r'≆', r'≇', r'≈', r'≉', r'≊', r'≋', r'≌', r'≍', r'≎', r'≐', r'≑', r'≒', r'≓', r'≖', r'≗', r'≘', r'≙', r'≚', r'≛', r'≜', r'≝', r'≞', r'≟', r'≣', r'≦', r'≧', r'≨', r'≩', r'≪', r'≫', r'≬', r'≭', r'≮', r'≯', r'≰', r'≱', r'≲', r'≳', r'≴', r'≵', r'≶', r'≷', r'≸', r'≹', r'≺', r'≻', r'≼', r'≽', r'≾', r'≿', r'⊀', r'⊁', r'⊃', r'⊅', r'⊇', r'⊉', r'⊋', r'⊏', r'⊐', r'⊑', r'⊒', r'⊜', r'⊩', r'⊬', r'⊮', r'⊰', r'⊱', r'⊲', r'⊳', r'⊴', r'⊵', r'⊶', r'⊷', r'⋍', r'⋐', r'⋑', r'⋕', r'⋖', r'⋗', r'⋘', r'⋙', r'⋚', r'⋛', r'⋜', r'⋝', r'⋞', r'⋟', r'⋠', r'⋡', r'⋢', r'⋣', r'⋤', r'⋥', r'⋦', r'⋧', r'⋨', r'⋩', r'⋪', r'⋫', r'⋬', r'⋭', r'⋲', r'⋳', r'⋴', r'⋵', r'⋶', r'⋷', r'⋸', r'⋹', r'⋺', r'⋻', r'⋼', r'⋽', r'⋾', r'⋿', r'⟈', r'⟉', r'⟒', r'⦷', r'⧀', r'⧁', r'⧡', r'⧣', r'⧤', r'⧥', r'⩦', r'⩧', r'⩪', r'⩫', r'⩬', r'⩭', r'⩮', r'⩯', r'⩰', r'⩱', r'⩲', r'⩳', r'⩵', r'⩶', r'⩷', r'⩸', r'⩹', r'⩺', r'⩻', r'⩼', r'⩽', r'⩾', r'⩿', r'⪀', r'⪁', r'⪂', r'⪃', r'⪄', r'⪅', r'⪆', r'⪇', r'⪈', r'⪉', r'⪊', r'⪋', r'⪌', r'⪍', r'⪎', r'⪏', r'⪐', r'⪑', r'⪒', r'⪓', r'⪔', r'⪕', r'⪖', r'⪗', r'⪘', r'⪙', r'⪚', r'⪛', r'⪜', r'⪝', r'⪞', r'⪟', r'⪠', r'⪡', r'⪢', r'⪣', r'⪤', r'⪥', r'⪦', r'⪧', r'⪨', r'⪩', r'⪪', r'⪫', r'⪬', r'⪭', r'⪮', r'⪯', r'⪰', r'⪱', r'⪲', r'⪳', r'⪴', r'⪵', r'⪶', r'⪷', r'⪸', r'⪹', r'⪺', r'⪻', r'⪼', r'⪽', r'⪾', r'⪿', r'⫀', r'⫁', r'⫂', r'⫃', r'⫄', r'⫅', r'⫆', r'⫇', r'⫈', r'⫉', r'⫊', r'⫋', r'⫌', r'⫍', r'⫎', r'⫏', r'⫐', r'⫑', r'⫒', r'⫓', r'⫔', r'⫕', r'⫖', r'⫗', r'⫘', r'⫙', r'⫷', r'⫸', r'⫹', r'⫺', r'⊢', r'⊣', r'⟂', r'<:', r'>:', # prec-pipe '<|', '|>', # prec-colon r'…', r'⁝', r'⋮', r'⋱', r'⋰', r'⋯', # prec-plus r'+', r'-', r'¦', r'|', r'⊕', r'⊖', r'⊞', r'⊟', r'++', r'∪', r'∨', r'⊔', r'±', r'∓', r'∔', r'∸', r'≏', r'⊎', r'⊻', r'⊽', r'⋎', r'⋓', r'⧺', r'⧻', r'⨈', r'⨢', r'⨣', r'⨤', r'⨥', r'⨦', r'⨧', r'⨨', r'⨩', r'⨪', r'⨫', r'⨬', r'⨭', r'⨮', r'⨹', r'⨺', r'⩁', r'⩂', r'⩅', r'⩊', r'⩌', r'⩏', r'⩐', r'⩒', r'⩔', r'⩖', r'⩗', r'⩛', r'⩝', r'⩡', r'⩢', 
r'⩣', # prec-times r'*', r'/', r'⌿', r'÷', r'%', r'&', r'⋅', r'∘', r'×', '\\', r'∩', r'∧', r'⊗', r'⊘', r'⊙', r'⊚', r'⊛', r'⊠', r'⊡', r'⊓', r'∗', r'∙', r'∤', r'⅋', r'≀', r'⊼', r'⋄', r'⋆', r'⋇', r'⋉', r'⋊', r'⋋', r'⋌', r'⋏', r'⋒', r'⟑', r'⦸', r'⦼', r'⦾', r'⦿', r'⧶', r'⧷', r'⨇', r'⨰', r'⨱', r'⨲', r'⨳', r'⨴', r'⨵', r'⨶', r'⨷', r'⨸', r'⨻', r'⨼', r'⨽', r'⩀', r'⩃', r'⩄', r'⩋', r'⩍', r'⩎', r'⩑', r'⩓', r'⩕', r'⩘', r'⩚', r'⩜', r'⩞', r'⩟', r'⩠', r'⫛', r'⊍', r'▷', r'⨝', r'⟕', r'⟖', r'⟗', r'⨟', # prec-rational, prec-bitshift '//', '>>', '<<', '>>>', # prec-power r'^', r'↑', r'↓', r'⇵', r'⟰', r'⟱', r'⤈', r'⤉', r'⤊', r'⤋', r'⤒', r'⤓', r'⥉', r'⥌', r'⥍', r'⥏', r'⥑', r'⥔', r'⥕', r'⥘', r'⥙', r'⥜', r'⥝', r'⥠', r'⥡', r'⥣', r'⥥', r'⥮', r'⥯', r'↑', r'↓', # unary-ops, excluding unary-and-binary-ops '!', r'¬', r'√', r'∛', r'∜' ] # Generated with the following in Julia v1.6.0-rc1 ''' #!/usr/bin/env julia import REPL.REPLCompletions res = String["in", "isa", "where"] for kw in collect(x.keyword for x in REPLCompletions.complete_keyword("")) if !(contains(kw, " ") || kw == "struct") push!(res, kw) end end sort!(unique!(setdiff!(res, ["true", "false"]))) foreach(x -> println("\'", x, "\',"), res) ''' KEYWORD_LIST = ( 'baremodule', 'begin', 'break', 'catch', 'ccall', 'const', 'continue', 'do', 'else', 'elseif', 'end', 'export', 'finally', 'for', 'function', 'global', 'if', 'import', 'in', 'isa', 'let', 'local', 'macro', 'module', 'quote', 'return', 'try', 'using', 'where', 'while', ) # Generated with the following in Julia v1.6.0-rc1 ''' #!/usr/bin/env julia import REPL.REPLCompletions res = String[] for compl in filter!(x -> isa(x, REPLCompletions.ModuleCompletion) && (x.parent === Base || x.parent === Core), REPLCompletions.completions("", 0)[1]) try v = eval(Symbol(compl.mod)) if (v isa Type || v isa TypeVar) && (compl.mod != "=>") push!(res, compl.mod) end catch e end end sort!(unique!(res)) foreach(x -> println("\'", x, "\',"), res) ''' BUILTIN_LIST = ( 'AbstractArray', 'AbstractChannel', 'AbstractChar', 'AbstractDict', 'AbstractDisplay', 'AbstractFloat', 'AbstractIrrational', 'AbstractMatch', 'AbstractMatrix', 'AbstractPattern', 'AbstractRange', 'AbstractSet', 'AbstractString', 'AbstractUnitRange', 'AbstractVecOrMat', 'AbstractVector', 'Any', 'ArgumentError', 'Array', 'AssertionError', 'BigFloat', 'BigInt', 'BitArray', 'BitMatrix', 'BitSet', 'BitVector', 'Bool', 'BoundsError', 'CapturedException', 'CartesianIndex', 'CartesianIndices', 'Cchar', 'Cdouble', 'Cfloat', 'Channel', 'Char', 'Cint', 'Cintmax_t', 'Clong', 'Clonglong', 'Cmd', 'Colon', 'Complex', 'ComplexF16', 'ComplexF32', 'ComplexF64', 'ComposedFunction', 'CompositeException', 'Condition', 'Cptrdiff_t', 'Cshort', 'Csize_t', 'Cssize_t', 'Cstring', 'Cuchar', 'Cuint', 'Cuintmax_t', 'Culong', 'Culonglong', 'Cushort', 'Cvoid', 'Cwchar_t', 'Cwstring', 'DataType', 'DenseArray', 'DenseMatrix', 'DenseVecOrMat', 'DenseVector', 'Dict', 'DimensionMismatch', 'Dims', 'DivideError', 'DomainError', 'EOFError', 'Enum', 'ErrorException', 'Exception', 'ExponentialBackOff', 'Expr', 'Float16', 'Float32', 'Float64', 'Function', 'GlobalRef', 'HTML', 'IO', 'IOBuffer', 'IOContext', 'IOStream', 'IdDict', 'IndexCartesian', 'IndexLinear', 'IndexStyle', 'InexactError', 'InitError', 'Int', 'Int128', 'Int16', 'Int32', 'Int64', 'Int8', 'Integer', 'InterruptException', 'InvalidStateException', 'Irrational', 'KeyError', 'LinRange', 'LineNumberNode', 'LinearIndices', 'LoadError', 'MIME', 'Matrix', 'Method', 'MethodError', 'Missing', 'MissingException', 'Module', 'NTuple', 'NamedTuple', 
'Nothing', 'Number', 'OrdinalRange', 'OutOfMemoryError', 'OverflowError', 'Pair', 'PartialQuickSort', 'PermutedDimsArray', 'Pipe', 'ProcessFailedException', 'Ptr', 'QuoteNode', 'Rational', 'RawFD', 'ReadOnlyMemoryError', 'Real', 'ReentrantLock', 'Ref', 'Regex', 'RegexMatch', 'RoundingMode', 'SegmentationFault', 'Set', 'Signed', 'Some', 'StackOverflowError', 'StepRange', 'StepRangeLen', 'StridedArray', 'StridedMatrix', 'StridedVecOrMat', 'StridedVector', 'String', 'StringIndexError', 'SubArray', 'SubString', 'SubstitutionString', 'Symbol', 'SystemError', 'Task', 'TaskFailedException', 'Text', 'TextDisplay', 'Timer', 'Tuple', 'Type', 'TypeError', 'TypeVar', 'UInt', 'UInt128', 'UInt16', 'UInt32', 'UInt64', 'UInt8', 'UndefInitializer', 'UndefKeywordError', 'UndefRefError', 'UndefVarError', 'Union', 'UnionAll', 'UnitRange', 'Unsigned', 'Val', 'Vararg', 'VecElement', 'VecOrMat', 'Vector', 'VersionNumber', 'WeakKeyDict', 'WeakRef', ) # Generated with the following in Julia v1.6.0-rc1 ''' #!/usr/bin/env julia import REPL.REPLCompletions res = String["true", "false"] for compl in filter!(x -> isa(x, REPLCompletions.ModuleCompletion) && (x.parent === Base || x.parent === Core), REPLCompletions.completions("", 0)[1]) try v = eval(Symbol(compl.mod)) if !(v isa Function || v isa Type || v isa TypeVar || v isa Module || v isa Colon) push!(res, compl.mod) end catch e end end sort!(unique!(res)) foreach(x -> println("\'", x, "\',"), res) ''' LITERAL_LIST = ( 'ARGS', 'C_NULL', 'DEPOT_PATH', 'ENDIAN_BOM', 'ENV', 'Inf', 'Inf16', 'Inf32', 'Inf64', 'InsertionSort', 'LOAD_PATH', 'MergeSort', 'NaN', 'NaN16', 'NaN32', 'NaN64', 'PROGRAM_FILE', 'QuickSort', 'RoundDown', 'RoundFromZero', 'RoundNearest', 'RoundNearestTiesAway', 'RoundNearestTiesUp', 'RoundToZero', 'RoundUp', 'VERSION', 'devnull', 'false', 'im', 'missing', 'nothing', 'pi', 'stderr', 'stdin', 'stdout', 'true', 'undef', 'π', 'ℯ', ) pygments-2.11.2/pygments/lexers/d.py0000644000175000017500000002316314165547207017310 0ustar carstencarsten""" pygments.lexers.d ~~~~~~~~~~~~~~~~~ Lexers for D languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, words, bygroups from pygments.token import Text, Comment, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['DLexer', 'CrocLexer', 'MiniDLexer'] class DLexer(RegexLexer): """ For D source. .. 
versionadded:: 1.2 """ name = 'D' filenames = ['*.d', '*.di'] aliases = ['d'] mimetypes = ['text/x-dsrc'] tokens = { 'root': [ (r'\n', Whitespace), (r'\s+', Whitespace), # (r'\\\n', Text), # line continuations # Comments (r'(//.*?)(\n)', bygroups(Comment.Single, Whitespace)), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'/\+', Comment.Multiline, 'nested_comment'), # Keywords (words(( 'abstract', 'alias', 'align', 'asm', 'assert', 'auto', 'body', 'break', 'case', 'cast', 'catch', 'class', 'const', 'continue', 'debug', 'default', 'delegate', 'delete', 'deprecated', 'do', 'else', 'enum', 'export', 'extern', 'finally', 'final', 'foreach_reverse', 'foreach', 'for', 'function', 'goto', 'if', 'immutable', 'import', 'interface', 'invariant', 'inout', 'in', 'is', 'lazy', 'mixin', 'module', 'new', 'nothrow', 'out', 'override', 'package', 'pragma', 'private', 'protected', 'public', 'pure', 'ref', 'return', 'scope', 'shared', 'static', 'struct', 'super', 'switch', 'synchronized', 'template', 'this', 'throw', 'try', 'typeid', 'typeof', 'union', 'unittest', 'version', 'volatile', 'while', 'with', '__gshared', '__traits', '__vector', '__parameters'), suffix=r'\b'), Keyword), (words(( # Removed in 2.072 'typedef', ), suffix=r'\b'), Keyword.Removed), (words(( 'bool', 'byte', 'cdouble', 'cent', 'cfloat', 'char', 'creal', 'dchar', 'double', 'float', 'idouble', 'ifloat', 'int', 'ireal', 'long', 'real', 'short', 'ubyte', 'ucent', 'uint', 'ulong', 'ushort', 'void', 'wchar'), suffix=r'\b'), Keyword.Type), (r'(false|true|null)\b', Keyword.Constant), (words(( '__FILE__', '__FILE_FULL_PATH__', '__MODULE__', '__LINE__', '__FUNCTION__', '__PRETTY_FUNCTION__', '__DATE__', '__EOF__', '__TIME__', '__TIMESTAMP__', '__VENDOR__', '__VERSION__'), suffix=r'\b'), Keyword.Pseudo), (r'macro\b', Keyword.Reserved), (r'(string|wstring|dstring|size_t|ptrdiff_t)\b', Name.Builtin), # FloatLiteral # -- HexFloat (r'0[xX]([0-9a-fA-F_]*\.[0-9a-fA-F_]+|[0-9a-fA-F_]+)' r'[pP][+\-]?[0-9_]+[fFL]?[i]?', Number.Float), # -- DecimalFloat (r'[0-9_]+(\.[0-9_]+[eE][+\-]?[0-9_]+|' r'\.[0-9_]*|[eE][+\-]?[0-9_]+)[fFL]?[i]?', Number.Float), (r'\.(0|[1-9][0-9_]*)([eE][+\-]?[0-9_]+)?[fFL]?[i]?', Number.Float), # IntegerLiteral # -- Binary (r'0[Bb][01_]+', Number.Bin), # -- Octal (r'0[0-7_]+', Number.Oct), # -- Hexadecimal (r'0[xX][0-9a-fA-F_]+', Number.Hex), # -- Decimal (r'(0|[1-9][0-9_]*)([LUu]|Lu|LU|uL|UL)?', Number.Integer), # CharacterLiteral (r"""'(\\['"?\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}""" r"""|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|\\&\w+;|.)'""", String.Char), # StringLiteral # -- WysiwygString (r'r"[^"]*"[cwd]?', String), # -- AlternateWysiwygString (r'`[^`]*`[cwd]?', String), # -- DoubleQuotedString (r'"(\\\\|\\[^\\]|[^"\\])*"[cwd]?', String), # -- EscapeSequence (r"\\(['\"?\\abfnrtv]|x[0-9a-fA-F]{2}|[0-7]{1,3}" r"|u[0-9a-fA-F]{4}|U[0-9a-fA-F]{8}|&\w+;)", String), # -- HexString (r'x"[0-9a-fA-F_\s]*"[cwd]?', String), # -- DelimitedString (r'q"\[', String, 'delimited_bracket'), (r'q"\(', String, 'delimited_parenthesis'), (r'q"<', String, 'delimited_angle'), (r'q"\{', String, 'delimited_curly'), (r'q"([a-zA-Z_]\w*)\n.*?\n\1"', String), (r'q"(.).*?\1"', String), # -- TokenString (r'q\{', String, 'token_string'), # Attributes (r'@([a-zA-Z_]\w*)?', Name.Decorator), # Tokens (r'(~=|\^=|%=|\*=|==|!>=|!<=|!<>=|!<>|!<|!>|!=|>>>=|>>>|>>=|>>|>=' r'|<>=|<>|<<=|<<|<=|\+\+|\+=|--|-=|\|\||\|=|&&|&=|\.\.\.|\.\.|/=)' r'|[/.&|\-+<>!()\[\]{}?,;:$=*%^~]', Punctuation), # Identifier (r'[a-zA-Z_]\w*', Name), # Line (r'(#line)(\s)(.*)(\n)', 
bygroups(Comment.Special, Whitespace, Comment.Special, Whitespace)), ], 'nested_comment': [ (r'[^+/]+', Comment.Multiline), (r'/\+', Comment.Multiline, '#push'), (r'\+/', Comment.Multiline, '#pop'), (r'[+/]', Comment.Multiline), ], 'token_string': [ (r'\{', Punctuation, 'token_string_nest'), (r'\}', String, '#pop'), include('root'), ], 'token_string_nest': [ (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), include('root'), ], 'delimited_bracket': [ (r'[^\[\]]+', String), (r'\[', String, 'delimited_inside_bracket'), (r'\]"', String, '#pop'), ], 'delimited_inside_bracket': [ (r'[^\[\]]+', String), (r'\[', String, '#push'), (r'\]', String, '#pop'), ], 'delimited_parenthesis': [ (r'[^()]+', String), (r'\(', String, 'delimited_inside_parenthesis'), (r'\)"', String, '#pop'), ], 'delimited_inside_parenthesis': [ (r'[^()]+', String), (r'\(', String, '#push'), (r'\)', String, '#pop'), ], 'delimited_angle': [ (r'[^<>]+', String), (r'<', String, 'delimited_inside_angle'), (r'>"', String, '#pop'), ], 'delimited_inside_angle': [ (r'[^<>]+', String), (r'<', String, '#push'), (r'>', String, '#pop'), ], 'delimited_curly': [ (r'[^{}]+', String), (r'\{', String, 'delimited_inside_curly'), (r'\}"', String, '#pop'), ], 'delimited_inside_curly': [ (r'[^{}]+', String), (r'\{', String, '#push'), (r'\}', String, '#pop'), ], } class CrocLexer(RegexLexer): """ For `Croc `_ source. """ name = 'Croc' filenames = ['*.croc'] aliases = ['croc'] mimetypes = ['text/x-crocsrc'] tokens = { 'root': [ (r'\n', Whitespace), (r'\s+', Whitespace), # Comments (r'(//.*?)(\n)', bygroups(Comment.Single, Whitespace)), (r'/\*', Comment.Multiline, 'nestedcomment'), # Keywords (words(( 'as', 'assert', 'break', 'case', 'catch', 'class', 'continue', 'default', 'do', 'else', 'finally', 'for', 'foreach', 'function', 'global', 'namespace', 'if', 'import', 'in', 'is', 'local', 'module', 'return', 'scope', 'super', 'switch', 'this', 'throw', 'try', 'vararg', 'while', 'with', 'yield'), suffix=r'\b'), Keyword), (r'(false|true|null)\b', Keyword.Constant), # FloatLiteral (r'([0-9][0-9_]*)(?=[.eE])(\.[0-9][0-9_]*)?([eE][+\-]?[0-9_]+)?', Number.Float), # IntegerLiteral # -- Binary (r'0[bB][01][01_]*', Number.Bin), # -- Hexadecimal (r'0[xX][0-9a-fA-F][0-9a-fA-F_]*', Number.Hex), # -- Decimal (r'([0-9][0-9_]*)(?![.eE])', Number.Integer), # CharacterLiteral (r"""'(\\['"\\nrt]|\\x[0-9a-fA-F]{2}|\\[0-9]{1,3}""" r"""|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|.)'""", String.Char), # StringLiteral # -- WysiwygString (r'@"(""|[^"])*"', String), (r'@`(``|[^`])*`', String), (r"@'(''|[^'])*'", String), # -- DoubleQuotedString (r'"(\\\\|\\[^\\]|[^"\\])*"', String), # Tokens (r'(~=|\^=|%=|\*=|==|!=|>>>=|>>>|>>=|>>|>=|<=>|\?=|-\>' r'|<<=|<<|<=|\+\+|\+=|--|-=|\|\||\|=|&&|&=|\.\.|/=)' r'|[-/.&$@|\+<>!()\[\]{}?,;:=*%^~#\\]', Punctuation), # Identifier (r'[a-zA-Z_]\w*', Name), ], 'nestedcomment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], } class MiniDLexer(CrocLexer): """ For MiniD source. MiniD is now known as Croc. """ name = 'MiniD' filenames = [] # don't lex .md as MiniD, reserve for Markdown aliases = ['minid'] mimetypes = ['text/x-minidsrc'] pygments-2.11.2/pygments/lexers/basic.py0000644000175000017500000006634214165547207020154 0ustar carstencarsten""" pygments.lexers.basic ~~~~~~~~~~~~~~~~~~~~~ Lexers for BASIC like languages (other than VB.net). :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
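
    Each lexer in this module registers aliases and filename patterns, so it
    can be obtained through the normal lookup machinery; an illustrative
    sketch using only the public API::

        from pygments.lexers import get_lexer_by_name

        lexer = get_lexer_by_name('qbasic')
        for ttype, value in lexer.get_tokens('10 PRINT "HI"\n'):
            print(ttype, repr(value))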
""" import re from pygments.lexer import RegexLexer, bygroups, default, words, include from pygments.token import Comment, Error, Keyword, Name, Number, \ Punctuation, Operator, String, Text, Whitespace from pygments.lexers import _vbscript_builtins __all__ = ['BlitzBasicLexer', 'BlitzMaxLexer', 'MonkeyLexer', 'CbmBasicV2Lexer', 'QBasicLexer', 'VBScriptLexer', 'BBCBasicLexer'] class BlitzMaxLexer(RegexLexer): """ For `BlitzMax `_ source code. .. versionadded:: 1.4 """ name = 'BlitzMax' aliases = ['blitzmax', 'bmax'] filenames = ['*.bmx'] mimetypes = ['text/x-bmx'] bmax_vopwords = r'\b(Shl|Shr|Sar|Mod)\b' bmax_sktypes = r'@{1,2}|[!#$%]' bmax_lktypes = r'\b(Int|Byte|Short|Float|Double|Long)\b' bmax_name = r'[a-z_]\w*' bmax_var = (r'(%s)(?:(?:([ \t]*)(%s)|([ \t]*:[ \t]*\b(?:Shl|Shr|Sar|Mod)\b)' r'|([ \t]*)(:)([ \t]*)(?:%s|(%s)))(?:([ \t]*)(Ptr))?)') % \ (bmax_name, bmax_sktypes, bmax_lktypes, bmax_name) bmax_func = bmax_var + r'?((?:[ \t]|\.\.\n)*)([(])' flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ # Text (r'\s+', Whitespace), (r'(\.\.)(\n)', bygroups(Text, Whitespace)), # Line continuation # Comments (r"'.*?\n", Comment.Single), (r'([ \t]*)\bRem\n(\n|.)*?\s*\bEnd([ \t]*)Rem', Comment.Multiline), # Data types ('"', String.Double, 'string'), # Numbers (r'[0-9]+\.[0-9]*(?!\.)', Number.Float), (r'\.[0-9]*(?!\.)', Number.Float), (r'[0-9]+', Number.Integer), (r'\$[0-9a-f]+', Number.Hex), (r'\%[10]+', Number.Bin), # Other (r'(?:(?:(:)?([ \t]*)(:?%s|([+\-*/&|~]))|Or|And|Not|[=<>^]))' % (bmax_vopwords), Operator), (r'[(),.:\[\]]', Punctuation), (r'(?:#[\w \t]*)', Name.Label), (r'(?:\?[\w \t]*)', Comment.Preproc), # Identifiers (r'\b(New)\b([ \t]?)([(]?)(%s)' % (bmax_name), bygroups(Keyword.Reserved, Whitespace, Punctuation, Name.Class)), (r'\b(Import|Framework|Module)([ \t]+)(%s\.%s)' % (bmax_name, bmax_name), bygroups(Keyword.Reserved, Whitespace, Keyword.Namespace)), (bmax_func, bygroups(Name.Function, Whitespace, Keyword.Type, Operator, Whitespace, Punctuation, Whitespace, Keyword.Type, Name.Class, Whitespace, Keyword.Type, Whitespace, Punctuation)), (bmax_var, bygroups(Name.Variable, Whitespace, Keyword.Type, Operator, Whitespace, Punctuation, Whitespace, Keyword.Type, Name.Class, Whitespace, Keyword.Type)), (r'\b(Type|Extends)([ \t]+)(%s)' % (bmax_name), bygroups(Keyword.Reserved, Whitespace, Name.Class)), # Keywords (r'\b(Ptr)\b', Keyword.Type), (r'\b(Pi|True|False|Null|Self|Super)\b', Keyword.Constant), (r'\b(Local|Global|Const|Field)\b', Keyword.Declaration), (words(( 'TNullMethodException', 'TNullFunctionException', 'TNullObjectException', 'TArrayBoundsException', 'TRuntimeException'), prefix=r'\b', suffix=r'\b'), Name.Exception), (words(( 'Strict', 'SuperStrict', 'Module', 'ModuleInfo', 'End', 'Return', 'Continue', 'Exit', 'Public', 'Private', 'Var', 'VarPtr', 'Chr', 'Len', 'Asc', 'SizeOf', 'Sgn', 'Abs', 'Min', 'Max', 'New', 'Release', 'Delete', 'Incbin', 'IncbinPtr', 'IncbinLen', 'Framework', 'Include', 'Import', 'Extern', 'EndExtern', 'Function', 'EndFunction', 'Type', 'EndType', 'Extends', 'Method', 'EndMethod', 'Abstract', 'Final', 'If', 'Then', 'Else', 'ElseIf', 'EndIf', 'For', 'To', 'Next', 'Step', 'EachIn', 'While', 'Wend', 'EndWhile', 'Repeat', 'Until', 'Forever', 'Select', 'Case', 'Default', 'EndSelect', 'Try', 'Catch', 'EndTry', 'Throw', 'Assert', 'Goto', 'DefData', 'ReadData', 'RestoreData'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), # Final resolve (for variable names and such) (r'(%s)' % (bmax_name), Name.Variable), ], 'string': [ (r'""', String.Double), 
(r'"C?', String.Double, '#pop'), (r'[^"]+', String.Double), ], } class BlitzBasicLexer(RegexLexer): """ For `BlitzBasic `_ source code. .. versionadded:: 2.0 """ name = 'BlitzBasic' aliases = ['blitzbasic', 'b3d', 'bplus'] filenames = ['*.bb', '*.decls'] mimetypes = ['text/x-bb'] bb_sktypes = r'@{1,2}|[#$%]' bb_name = r'[a-z]\w*' bb_var = (r'(%s)(?:([ \t]*)(%s)|([ \t]*)([.])([ \t]*)(?:(%s)))?') % \ (bb_name, bb_sktypes, bb_name) flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ # Text (r'\s+', Whitespace), # Comments (r";.*?\n", Comment.Single), # Data types ('"', String.Double, 'string'), # Numbers (r'[0-9]+\.[0-9]*(?!\.)', Number.Float), (r'\.[0-9]+(?!\.)', Number.Float), (r'[0-9]+', Number.Integer), (r'\$[0-9a-f]+', Number.Hex), (r'\%[10]+', Number.Bin), # Other (words(('Shl', 'Shr', 'Sar', 'Mod', 'Or', 'And', 'Not', 'Abs', 'Sgn', 'Handle', 'Int', 'Float', 'Str', 'First', 'Last', 'Before', 'After'), prefix=r'\b', suffix=r'\b'), Operator), (r'([+\-*/~=<>^])', Operator), (r'[(),:\[\]\\]', Punctuation), (r'\.([ \t]*)(%s)' % bb_name, Name.Label), # Identifiers (r'\b(New)\b([ \t]+)(%s)' % (bb_name), bygroups(Keyword.Reserved, Whitespace, Name.Class)), (r'\b(Gosub|Goto)\b([ \t]+)(%s)' % (bb_name), bygroups(Keyword.Reserved, Whitespace, Name.Label)), (r'\b(Object)\b([ \t]*)([.])([ \t]*)(%s)\b' % (bb_name), bygroups(Operator, Whitespace, Punctuation, Whitespace, Name.Class)), (r'\b%s\b([ \t]*)(\()' % bb_var, bygroups(Name.Function, Whitespace, Keyword.Type, Whitespace, Punctuation, Whitespace, Name.Class, Whitespace, Punctuation)), (r'\b(Function)\b([ \t]+)%s' % bb_var, bygroups(Keyword.Reserved, Whitespace, Name.Function, Whitespace, Keyword.Type, Whitespace, Punctuation, Whitespace, Name.Class)), (r'\b(Type)([ \t]+)(%s)' % (bb_name), bygroups(Keyword.Reserved, Whitespace, Name.Class)), # Keywords (r'\b(Pi|True|False|Null)\b', Keyword.Constant), (r'\b(Local|Global|Const|Field|Dim)\b', Keyword.Declaration), (words(( 'End', 'Return', 'Exit', 'Chr', 'Len', 'Asc', 'New', 'Delete', 'Insert', 'Include', 'Function', 'Type', 'If', 'Then', 'Else', 'ElseIf', 'EndIf', 'For', 'To', 'Next', 'Step', 'Each', 'While', 'Wend', 'Repeat', 'Until', 'Forever', 'Select', 'Case', 'Default', 'Goto', 'Gosub', 'Data', 'Read', 'Restore'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), # Final resolve (for variable names and such) # (r'(%s)' % (bb_name), Name.Variable), (bb_var, bygroups(Name.Variable, Whitespace, Keyword.Type, Whitespace, Punctuation, Whitespace, Name.Class)), ], 'string': [ (r'""', String.Double), (r'"C?', String.Double, '#pop'), (r'[^"\n]+', String.Double), ], } class MonkeyLexer(RegexLexer): """ For `Monkey `_ source code. .. versionadded:: 1.6 """ name = 'Monkey' aliases = ['monkey'] filenames = ['*.monkey'] mimetypes = ['text/x-monkey'] name_variable = r'[a-z_]\w*' name_function = r'[A-Z]\w*' name_constant = r'[A-Z_][A-Z0-9_]*' name_class = r'[A-Z]\w*' name_module = r'[a-z0-9_]*' keyword_type = r'(?:Int|Float|String|Bool|Object|Array|Void)' # ? 
== Bool // % == Int // # == Float // $ == String keyword_type_special = r'[?%#$]' flags = re.MULTILINE tokens = { 'root': [ # Text (r'\s+', Whitespace), # Comments (r"'.*", Comment), (r'(?i)^#rem\b', Comment.Multiline, 'comment'), # preprocessor directives (r'(?i)^(?:#If|#ElseIf|#Else|#EndIf|#End|#Print|#Error)\b', Comment.Preproc), # preprocessor variable (any line starting with '#' that is not a directive) (r'^#', Comment.Preproc, 'variables'), # String ('"', String.Double, 'string'), # Numbers (r'[0-9]+\.[0-9]*(?!\.)', Number.Float), (r'\.[0-9]+(?!\.)', Number.Float), (r'[0-9]+', Number.Integer), (r'\$[0-9a-fA-Z]+', Number.Hex), (r'\%[10]+', Number.Bin), # Native data types (r'\b%s\b' % keyword_type, Keyword.Type), # Exception handling (r'(?i)\b(?:Try|Catch|Throw)\b', Keyword.Reserved), (r'Throwable', Name.Exception), # Builtins (r'(?i)\b(?:Null|True|False)\b', Name.Builtin), (r'(?i)\b(?:Self|Super)\b', Name.Builtin.Pseudo), (r'\b(?:HOST|LANG|TARGET|CONFIG)\b', Name.Constant), # Keywords (r'(?i)^(Import)(\s+)(.*)(\n)', bygroups(Keyword.Namespace, Whitespace, Name.Namespace, Whitespace)), (r'(?i)^Strict\b.*\n', Keyword.Reserved), (r'(?i)(Const|Local|Global|Field)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'variables'), (r'(?i)(New|Class|Interface|Extends|Implements)(\s+)', bygroups(Keyword.Reserved, Whitespace), 'classname'), (r'(?i)(Function|Method)(\s+)', bygroups(Keyword.Reserved, Whitespace), 'funcname'), (r'(?i)(?:End|Return|Public|Private|Extern|Property|' r'Final|Abstract)\b', Keyword.Reserved), # Flow Control stuff (r'(?i)(?:If|Then|Else|ElseIf|EndIf|' r'Select|Case|Default|' r'While|Wend|' r'Repeat|Until|Forever|' r'For|To|Until|Step|EachIn|Next|' r'Exit|Continue)(?=\s)', Keyword.Reserved), # not used yet (r'(?i)\b(?:Module|Inline)\b', Keyword.Reserved), # Array (r'[\[\]]', Punctuation), # Other (r'<=|>=|<>|\*=|/=|\+=|-=|&=|~=|\|=|[-&*/^+=<>|~]', Operator), (r'(?i)(?:Not|Mod|Shl|Shr|And|Or)', Operator.Word), (r'[(){}!#,.:]', Punctuation), # catch the rest (r'%s\b' % name_constant, Name.Constant), (r'%s\b' % name_function, Name.Function), (r'%s\b' % name_variable, Name.Variable), ], 'funcname': [ (r'(?i)%s\b' % name_function, Name.Function), (r':', Punctuation, 'classname'), (r'\s+', Whitespace), (r'\(', Punctuation, 'variables'), (r'\)', Punctuation, '#pop') ], 'classname': [ (r'%s\.' % name_module, Name.Namespace), (r'%s\b' % keyword_type, Keyword.Type), (r'%s\b' % name_class, Name.Class), # array (of given size) (r'(\[)(\s*)(\d*)(\s*)(\])', bygroups(Punctuation, Whitespace, Number.Integer, Whitespace, Punctuation)), # generics (r'\s+(?!<)', Whitespace, '#pop'), (r'<', Punctuation, '#push'), (r'>', Punctuation, '#pop'), (r'\n', Whitespace, '#pop'), default('#pop') ], 'variables': [ (r'%s\b' % name_constant, Name.Constant), (r'%s\b' % name_variable, Name.Variable), (r'%s' % keyword_type_special, Keyword.Type), (r'\s+', Whitespace), (r':', Punctuation, 'classname'), (r',', Punctuation, '#push'), default('#pop') ], 'string': [ (r'[^"~]+', String.Double), (r'~q|~n|~r|~t|~z|~~', String.Escape), (r'"', String.Double, '#pop'), ], 'comment': [ (r'(?i)^#rem.*?', Comment.Multiline, "#push"), (r'(?i)^#end.*?', Comment.Multiline, "#pop"), (r'\n', Comment.Multiline), (r'.+', Comment.Multiline), ], } class CbmBasicV2Lexer(RegexLexer): """ For CBM BASIC V2 sources. .. 
versionadded:: 1.6 """ name = 'CBM BASIC V2' aliases = ['cbmbas'] filenames = ['*.bas'] flags = re.IGNORECASE tokens = { 'root': [ (r'rem.*\n', Comment.Single), (r'\s+', Whitespace), (r'new|run|end|for|to|next|step|go(to|sub)?|on|return|stop|cont' r'|if|then|input#?|read|wait|load|save|verify|poke|sys|print#?' r'|list|clr|cmd|open|close|get#?', Keyword.Reserved), (r'data|restore|dim|let|def|fn', Keyword.Declaration), (r'tab|spc|sgn|int|abs|usr|fre|pos|sqr|rnd|log|exp|cos|sin|tan|atn' r'|peek|len|val|asc|(str|chr|left|right|mid)\$', Name.Builtin), (r'[-+*/^<>=]', Operator), (r'not|and|or', Operator.Word), (r'"[^"\n]*.', String), (r'\d+|[-+]?\d*\.\d*(e[-+]?\d+)?', Number.Float), (r'[(),:;]', Punctuation), (r'\w+[$%]?', Name), ] } def analyse_text(text): # if it starts with a line number, it shouldn't be a "modern" Basic # like VB.net if re.match(r'^\d+', text): return 0.2 class QBasicLexer(RegexLexer): """ For `QBasic `_ source code. .. versionadded:: 2.0 """ name = 'QBasic' aliases = ['qbasic', 'basic'] filenames = ['*.BAS', '*.bas'] mimetypes = ['text/basic'] declarations = ('DATA', 'LET') functions = ( 'ABS', 'ASC', 'ATN', 'CDBL', 'CHR$', 'CINT', 'CLNG', 'COMMAND$', 'COS', 'CSNG', 'CSRLIN', 'CVD', 'CVDMBF', 'CVI', 'CVL', 'CVS', 'CVSMBF', 'DATE$', 'ENVIRON$', 'EOF', 'ERDEV', 'ERDEV$', 'ERL', 'ERR', 'EXP', 'FILEATTR', 'FIX', 'FRE', 'FREEFILE', 'HEX$', 'INKEY$', 'INP', 'INPUT$', 'INSTR', 'INT', 'IOCTL$', 'LBOUND', 'LCASE$', 'LEFT$', 'LEN', 'LOC', 'LOF', 'LOG', 'LPOS', 'LTRIM$', 'MID$', 'MKD$', 'MKDMBF$', 'MKI$', 'MKL$', 'MKS$', 'MKSMBF$', 'OCT$', 'PEEK', 'PEN', 'PLAY', 'PMAP', 'POINT', 'POS', 'RIGHT$', 'RND', 'RTRIM$', 'SADD', 'SCREEN', 'SEEK', 'SETMEM', 'SGN', 'SIN', 'SPACE$', 'SPC', 'SQR', 'STICK', 'STR$', 'STRIG', 'STRING$', 'TAB', 'TAN', 'TIME$', 'TIMER', 'UBOUND', 'UCASE$', 'VAL', 'VARPTR', 'VARPTR$', 'VARSEG' ) metacommands = ('$DYNAMIC', '$INCLUDE', '$STATIC') operators = ('AND', 'EQV', 'IMP', 'NOT', 'OR', 'XOR') statements = ( 'BEEP', 'BLOAD', 'BSAVE', 'CALL', 'CALL ABSOLUTE', 'CALL INTERRUPT', 'CALLS', 'CHAIN', 'CHDIR', 'CIRCLE', 'CLEAR', 'CLOSE', 'CLS', 'COLOR', 'COM', 'COMMON', 'CONST', 'DATA', 'DATE$', 'DECLARE', 'DEF FN', 'DEF SEG', 'DEFDBL', 'DEFINT', 'DEFLNG', 'DEFSNG', 'DEFSTR', 'DEF', 'DIM', 'DO', 'LOOP', 'DRAW', 'END', 'ENVIRON', 'ERASE', 'ERROR', 'EXIT', 'FIELD', 'FILES', 'FOR', 'NEXT', 'FUNCTION', 'GET', 'GOSUB', 'GOTO', 'IF', 'THEN', 'INPUT', 'INPUT #', 'IOCTL', 'KEY', 'KEY', 'KILL', 'LET', 'LINE', 'LINE INPUT', 'LINE INPUT #', 'LOCATE', 'LOCK', 'UNLOCK', 'LPRINT', 'LSET', 'MID$', 'MKDIR', 'NAME', 'ON COM', 'ON ERROR', 'ON KEY', 'ON PEN', 'ON PLAY', 'ON STRIG', 'ON TIMER', 'ON UEVENT', 'ON', 'OPEN', 'OPEN COM', 'OPTION BASE', 'OUT', 'PAINT', 'PALETTE', 'PCOPY', 'PEN', 'PLAY', 'POKE', 'PRESET', 'PRINT', 'PRINT #', 'PRINT USING', 'PSET', 'PUT', 'PUT', 'RANDOMIZE', 'READ', 'REDIM', 'REM', 'RESET', 'RESTORE', 'RESUME', 'RETURN', 'RMDIR', 'RSET', 'RUN', 'SCREEN', 'SEEK', 'SELECT CASE', 'SHARED', 'SHELL', 'SLEEP', 'SOUND', 'STATIC', 'STOP', 'STRIG', 'SUB', 'SWAP', 'SYSTEM', 'TIME$', 'TIMER', 'TROFF', 'TRON', 'TYPE', 'UEVENT', 'UNLOCK', 'VIEW', 'WAIT', 'WHILE', 'WEND', 'WIDTH', 'WINDOW', 'WRITE' ) keywords = ( 'ACCESS', 'ALIAS', 'ANY', 'APPEND', 'AS', 'BASE', 'BINARY', 'BYVAL', 'CASE', 'CDECL', 'DOUBLE', 'ELSE', 'ELSEIF', 'ENDIF', 'INTEGER', 'IS', 'LIST', 'LOCAL', 'LONG', 'LOOP', 'MOD', 'NEXT', 'OFF', 'ON', 'OUTPUT', 'RANDOM', 'SIGNAL', 'SINGLE', 'STEP', 'STRING', 'THEN', 'TO', 'UNTIL', 'USING', 'WEND' ) tokens = { 'root': [ (r'\n+', Text), (r'\s+', Text.Whitespace), 
            (r'^(\s*)(\d*)(\s*)(REM .*)$',
             bygroups(Text.Whitespace, Name.Label, Text.Whitespace,
                      Comment.Single)),
            (r'^(\s*)(\d+)(\s*)',
             bygroups(Text.Whitespace, Name.Label, Text.Whitespace)),
            (r'(?=[\s]*)(\w+)(?=[\s]*=)', Name.Variable.Global),
            (r'(?=[^"]*)\'.*$', Comment.Single),
            (r'"[^\n"]*"', String.Double),
            (r'(END)(\s+)(FUNCTION|IF|SELECT|SUB)',
             bygroups(Keyword.Reserved, Text.Whitespace, Keyword.Reserved)),
            (r'(DECLARE)(\s+)([A-Z]+)(\s+)(\S+)',
             bygroups(Keyword.Declaration, Text.Whitespace, Name.Variable,
                      Text.Whitespace, Name)),
            (r'(DIM)(\s+)(SHARED)(\s+)([^\s(]+)',
             bygroups(Keyword.Declaration, Text.Whitespace, Name.Variable,
                      Text.Whitespace, Name.Variable.Global)),
            (r'(DIM)(\s+)([^\s(]+)',
             bygroups(Keyword.Declaration, Text.Whitespace,
                      Name.Variable.Global)),
            (r'^(\s*)([a-zA-Z_]+)(\s*)(\=)',
             bygroups(Text.Whitespace, Name.Variable.Global, Text.Whitespace,
                      Operator)),
            (r'(GOTO|GOSUB)(\s+)(\w+\:?)',
             bygroups(Keyword.Reserved, Text.Whitespace, Name.Label)),
            (r'(SUB)(\s+)(\w+\:?)',
             bygroups(Keyword.Reserved, Text.Whitespace, Name.Label)),
            include('declarations'),
            include('functions'),
            include('metacommands'),
            include('operators'),
            include('statements'),
            include('keywords'),
            (r'[a-zA-Z_]\w*[$@#&!]', Name.Variable.Global),
            (r'[a-zA-Z_]\w*\:', Name.Label),
            (r'\-?\d*\.\d+[@|#]?', Number.Float),
            (r'\-?\d+[@|#]', Number.Float),
            (r'\-?\d+#?', Number.Integer.Long),
            (r'\-?\d+#?', Number.Integer),
            (r'!=|==|:=|\.=|<<|>>|[-~+/\\*%=<>&^|?:!.]', Operator),
            (r'[\[\]{}(),;]', Punctuation),
            (r'[\w]+', Name.Variable.Global),
        ],
        # can't use regular \b because of X$()
        # XXX: use words() here
        'declarations': [
            (r'\b(%s)(?=\(|\b)' % '|'.join(map(re.escape, declarations)),
             Keyword.Declaration),
        ],
        'functions': [
            (r'\b(%s)(?=\(|\b)' % '|'.join(map(re.escape, functions)),
             Keyword.Reserved),
        ],
        'metacommands': [
            (r'\b(%s)(?=\(|\b)' % '|'.join(map(re.escape, metacommands)),
             Keyword.Constant),
        ],
        'operators': [
            (r'\b(%s)(?=\(|\b)' % '|'.join(map(re.escape, operators)),
             Operator.Word),
        ],
        'statements': [
            (r'\b(%s)\b' % '|'.join(map(re.escape, statements)),
             Keyword.Reserved),
        ],
        'keywords': [
            (r'\b(%s)\b' % '|'.join(keywords), Keyword),
        ],
    }

    def analyse_text(text):
        if '$DYNAMIC' in text or '$STATIC' in text:
            return 0.9


class VBScriptLexer(RegexLexer):
    """
    VBScript is a scripting language that is modeled on Visual Basic.

    ..
versionadded:: 2.4 """ name = 'VBScript' aliases = ['vbscript'] filenames = ['*.vbs', '*.VBS'] flags = re.IGNORECASE tokens = { 'root': [ (r"'[^\n]*", Comment.Single), (r'\s+', Whitespace), ('"', String.Double, 'string'), ('&h[0-9a-f]+', Number.Hex), # Float variant 1, for example: 1., 1.e2, 1.2e3 (r'[0-9]+\.[0-9]*(e[+-]?[0-9]+)?', Number.Float), (r'\.[0-9]+(e[+-]?[0-9]+)?', Number.Float), # Float variant 2, for example: .1, .1e2 (r'[0-9]+e[+-]?[0-9]+', Number.Float), # Float variant 3, for example: 123e45 (r'[0-9]+', Number.Integer), ('#.+#', String), # date or time value (r'(dim)(\s+)([a-z_][a-z0-9_]*)', bygroups(Keyword.Declaration, Whitespace, Name.Variable), 'dim_more'), (r'(function|sub)(\s+)([a-z_][a-z0-9_]*)', bygroups(Keyword.Declaration, Whitespace, Name.Function)), (r'(class)(\s+)([a-z_][a-z0-9_]*)', bygroups(Keyword.Declaration, Whitespace, Name.Class)), (r'(const)(\s+)([a-z_][a-z0-9_]*)', bygroups(Keyword.Declaration, Whitespace, Name.Constant)), (r'(end)(\s+)(class|function|if|property|sub|with)', bygroups(Keyword, Whitespace, Keyword)), (r'(on)(\s+)(error)(\s+)(goto)(\s+)(0)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword, Whitespace, Number.Integer)), (r'(on)(\s+)(error)(\s+)(resume)(\s+)(next)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword, Whitespace, Keyword)), (r'(option)(\s+)(explicit)', bygroups(Keyword, Whitespace, Keyword)), (r'(property)(\s+)(get|let|set)(\s+)([a-z_][a-z0-9_]*)', bygroups(Keyword.Declaration, Whitespace, Keyword.Declaration, Whitespace, Name.Property)), (r'rem\s.*[^\n]*', Comment.Single), (words(_vbscript_builtins.KEYWORDS, suffix=r'\b'), Keyword), (words(_vbscript_builtins.OPERATORS), Operator), (words(_vbscript_builtins.OPERATOR_WORDS, suffix=r'\b'), Operator.Word), (words(_vbscript_builtins.BUILTIN_CONSTANTS, suffix=r'\b'), Name.Constant), (words(_vbscript_builtins.BUILTIN_FUNCTIONS, suffix=r'\b'), Name.Builtin), (words(_vbscript_builtins.BUILTIN_VARIABLES, suffix=r'\b'), Name.Builtin), (r'[a-z_][a-z0-9_]*', Name), (r'\b_\n', Operator), (words(r'(),.:'), Punctuation), (r'.+(\n)?', Error) ], 'dim_more': [ (r'(\s*)(,)(\s*)([a-z_][a-z0-9]*)', bygroups(Whitespace, Punctuation, Whitespace, Name.Variable)), default('#pop'), ], 'string': [ (r'[^"\n]+', String.Double), (r'\"\"', String.Double), (r'"', String.Double, '#pop'), (r'\n', Error, '#pop'), # Unterminated string ], } class BBCBasicLexer(RegexLexer): """ BBC Basic was supplied on the BBC Micro, and later Acorn RISC OS. It is also used by BBC Basic For Windows. .. 
versionadded:: 2.4 """ base_keywords = ['OTHERWISE', 'AND', 'DIV', 'EOR', 'MOD', 'OR', 'ERROR', 'LINE', 'OFF', 'STEP', 'SPC', 'TAB', 'ELSE', 'THEN', 'OPENIN', 'PTR', 'PAGE', 'TIME', 'LOMEM', 'HIMEM', 'ABS', 'ACS', 'ADVAL', 'ASC', 'ASN', 'ATN', 'BGET', 'COS', 'COUNT', 'DEG', 'ERL', 'ERR', 'EVAL', 'EXP', 'EXT', 'FALSE', 'FN', 'GET', 'INKEY', 'INSTR', 'INT', 'LEN', 'LN', 'LOG', 'NOT', 'OPENUP', 'OPENOUT', 'PI', 'POINT', 'POS', 'RAD', 'RND', 'SGN', 'SIN', 'SQR', 'TAN', 'TO', 'TRUE', 'USR', 'VAL', 'VPOS', 'CHR$', 'GET$', 'INKEY$', 'LEFT$', 'MID$', 'RIGHT$', 'STR$', 'STRING$', 'EOF', 'PTR', 'PAGE', 'TIME', 'LOMEM', 'HIMEM', 'SOUND', 'BPUT', 'CALL', 'CHAIN', 'CLEAR', 'CLOSE', 'CLG', 'CLS', 'DATA', 'DEF', 'DIM', 'DRAW', 'END', 'ENDPROC', 'ENVELOPE', 'FOR', 'GOSUB', 'GOTO', 'GCOL', 'IF', 'INPUT', 'LET', 'LOCAL', 'MODE', 'MOVE', 'NEXT', 'ON', 'VDU', 'PLOT', 'PRINT', 'PROC', 'READ', 'REM', 'REPEAT', 'REPORT', 'RESTORE', 'RETURN', 'RUN', 'STOP', 'COLOUR', 'TRACE', 'UNTIL', 'WIDTH', 'OSCLI'] basic5_keywords = ['WHEN', 'OF', 'ENDCASE', 'ENDIF', 'ENDWHILE', 'CASE', 'CIRCLE', 'FILL', 'ORIGIN', 'POINT', 'RECTANGLE', 'SWAP', 'WHILE', 'WAIT', 'MOUSE', 'QUIT', 'SYS', 'INSTALL', 'LIBRARY', 'TINT', 'ELLIPSE', 'BEATS', 'TEMPO', 'VOICES', 'VOICE', 'STEREO', 'OVERLAY', 'APPEND', 'AUTO', 'CRUNCH', 'DELETE', 'EDIT', 'HELP', 'LIST', 'LOAD', 'LVAR', 'NEW', 'OLD', 'RENUMBER', 'SAVE', 'TEXTLOAD', 'TEXTSAVE', 'TWIN', 'TWINO', 'INSTALL', 'SUM', 'BEAT'] name = 'BBC Basic' aliases = ['bbcbasic'] filenames = ['*.bbc'] tokens = { 'root': [ (r"[0-9]+", Name.Label), (r"(\*)([^\n]*)", bygroups(Keyword.Pseudo, Comment.Special)), default('code'), ], 'code': [ (r"(REM)([^\n]*)", bygroups(Keyword.Declaration, Comment.Single)), (r'\n', Whitespace, 'root'), (r'\s+', Whitespace), (r':', Comment.Preproc), # Some special cases to make functions come out nicer (r'(DEF)(\s*)(FN|PROC)([A-Za-z_@][\w@]*)', bygroups(Keyword.Declaration, Whitespace, Keyword.Declaration, Name.Function)), (r'(FN|PROC)([A-Za-z_@][\w@]*)', bygroups(Keyword, Name.Function)), (r'(GOTO|GOSUB|THEN|RESTORE)(\s*)(\d+)', bygroups(Keyword, Whitespace, Name.Label)), (r'(TRUE|FALSE)', Keyword.Constant), (r'(PAGE|LOMEM|HIMEM|TIME|WIDTH|ERL|ERR|REPORT\$|POS|VPOS|VOICES)', Keyword.Pseudo), (words(base_keywords), Keyword), (words(basic5_keywords), Keyword), ('"', String.Double, 'string'), ('%[01]{1,32}', Number.Bin), ('&[0-9a-f]{1,8}', Number.Hex), (r'[+-]?[0-9]+\.[0-9]*(E[+-]?[0-9]+)?', Number.Float), (r'[+-]?\.[0-9]+(E[+-]?[0-9]+)?', Number.Float), (r'[+-]?[0-9]+E[+-]?[0-9]+', Number.Float), (r'[+-]?\d+', Number.Integer), (r'([A-Za-z_@][\w@]*[%$]?)', Name.Variable), (r'([+\-]=|[$!|?+\-*/%^=><();]|>=|<=|<>|<<|>>|>>>|,)', Operator), ], 'string': [ (r'[^"\n]+', String.Double), (r'"', String.Double, '#pop'), (r'\n', Error, 'root'), # Unterminated string ], } def analyse_text(text): if text.startswith('10REM >') or text.startswith('REM >'): return 0.9 pygments-2.11.2/pygments/lexers/verification.py0000644000175000017500000000750414165547207021550 0ustar carstencarsten""" pygments.lexers.verification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for Intermediate Verification Languages (IVLs). :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, words from pygments.token import Comment, Operator, Keyword, Name, Number, \ Punctuation, Text, Generic __all__ = ['BoogieLexer', 'SilverLexer'] class BoogieLexer(RegexLexer): """ For `Boogie `_ source code. .. 
versionadded:: 2.1 """ name = 'Boogie' aliases = ['boogie'] filenames = ['*.bpl'] tokens = { 'root': [ # Whitespace and Comments (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'//[/!](.*?)\n', Comment.Doc), (r'//(.*?)\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (words(( 'axiom', 'break', 'call', 'ensures', 'else', 'exists', 'function', 'forall', 'if', 'invariant', 'modifies', 'procedure', 'requires', 'then', 'var', 'while'), suffix=r'\b'), Keyword), (words(('const',), suffix=r'\b'), Keyword.Reserved), (words(('bool', 'int', 'ref'), suffix=r'\b'), Keyword.Type), include('numbers'), (r"(>=|<=|:=|!=|==>|&&|\|\||[+/\-=>*<\[\]])", Operator), (r'\{.*?\}', Generic.Emph), #triggers (r"([{}():;,.])", Punctuation), # Identifier (r'[a-zA-Z_]\w*', Name), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'numbers': [ (r'[0-9]+', Number.Integer), ], } class SilverLexer(RegexLexer): """ For `Silver `_ source code. .. versionadded:: 2.2 """ name = 'Silver' aliases = ['silver'] filenames = ['*.sil', '*.vpr'] tokens = { 'root': [ # Whitespace and Comments (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation (r'//[/!](.*?)\n', Comment.Doc), (r'//(.*?)\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (words(( 'result', 'true', 'false', 'null', 'method', 'function', 'predicate', 'program', 'domain', 'axiom', 'var', 'returns', 'field', 'define', 'fold', 'unfold', 'inhale', 'exhale', 'new', 'assert', 'assume', 'goto', 'while', 'if', 'elseif', 'else', 'fresh', 'constraining', 'Seq', 'Set', 'Multiset', 'union', 'intersection', 'setminus', 'subset', 'unfolding', 'in', 'old', 'forall', 'exists', 'acc', 'wildcard', 'write', 'none', 'epsilon', 'perm', 'unique', 'apply', 'package', 'folding', 'label', 'forperm'), suffix=r'\b'), Keyword), (words(('requires', 'ensures', 'invariant'), suffix=r'\b'), Name.Decorator), (words(('Int', 'Perm', 'Bool', 'Ref', 'Rational'), suffix=r'\b'), Keyword.Type), include('numbers'), (r'[!%&*+=|?:<>/\-\[\]]', Operator), (r'\{.*?\}', Generic.Emph), #triggers (r'([{}():;,.])', Punctuation), # Identifier (r'[\w$]\w*', Name), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'numbers': [ (r'[0-9]+', Number.Integer), ], } pygments-2.11.2/pygments/lexers/felix.py0000644000175000017500000002264714165547207020202 0ustar carstencarsten""" pygments.lexers.felix ~~~~~~~~~~~~~~~~~~~~~ Lexer for the Felix language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, bygroups, default, words, \ combined from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['FelixLexer'] class FelixLexer(RegexLexer): """ For `Felix `_ source code. .. 
versionadded:: 1.2 """ name = 'Felix' aliases = ['felix', 'flx'] filenames = ['*.flx', '*.flxh'] mimetypes = ['text/x-felix'] preproc = ( 'elif', 'else', 'endif', 'if', 'ifdef', 'ifndef', ) keywords = ( '_', '_deref', 'all', 'as', 'assert', 'attempt', 'call', 'callback', 'case', 'caseno', 'cclass', 'code', 'compound', 'ctypes', 'do', 'done', 'downto', 'elif', 'else', 'endattempt', 'endcase', 'endif', 'endmatch', 'enum', 'except', 'exceptions', 'expect', 'finally', 'for', 'forall', 'forget', 'fork', 'functor', 'goto', 'ident', 'if', 'incomplete', 'inherit', 'instance', 'interface', 'jump', 'lambda', 'loop', 'match', 'module', 'namespace', 'new', 'noexpand', 'nonterm', 'obj', 'of', 'open', 'parse', 'raise', 'regexp', 'reglex', 'regmatch', 'rename', 'return', 'the', 'then', 'to', 'type', 'typecase', 'typedef', 'typematch', 'typeof', 'upto', 'when', 'whilst', 'with', 'yield', ) keyword_directives = ( '_gc_pointer', '_gc_type', 'body', 'comment', 'const', 'export', 'header', 'inline', 'lval', 'macro', 'noinline', 'noreturn', 'package', 'private', 'pod', 'property', 'public', 'publish', 'requires', 'todo', 'virtual', 'use', ) keyword_declarations = ( 'def', 'let', 'ref', 'val', 'var', ) keyword_types = ( 'unit', 'void', 'any', 'bool', 'byte', 'offset', 'address', 'caddress', 'cvaddress', 'vaddress', 'tiny', 'short', 'int', 'long', 'vlong', 'utiny', 'ushort', 'vshort', 'uint', 'ulong', 'uvlong', 'int8', 'int16', 'int32', 'int64', 'uint8', 'uint16', 'uint32', 'uint64', 'float', 'double', 'ldouble', 'complex', 'dcomplex', 'lcomplex', 'imaginary', 'dimaginary', 'limaginary', 'char', 'wchar', 'uchar', 'charp', 'charcp', 'ucharp', 'ucharcp', 'string', 'wstring', 'ustring', 'cont', 'array', 'varray', 'list', 'lvalue', 'opt', 'slice', ) keyword_constants = ( 'false', 'true', ) operator_words = ( 'and', 'not', 'in', 'is', 'isin', 'or', 'xor', ) name_builtins = ( '_svc', 'while', ) name_pseudo = ( 'root', 'self', 'this', ) decimal_suffixes = '([tTsSiIlLvV]|ll|LL|([iIuU])(8|16|32|64))?' 
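
    # Illustration of what `decimal_suffixes` accepts (derived purely from
    # the regex above, not from the Felix grammar documentation):
    #
    #   123     -> no suffix (the trailing '?' makes the suffix optional)
    #   123t    -> one of [tTsSiIlLvV]
    #   123LL   -> ll or LL
    #   123u32  -> [iIuU] followed by a width of 8, 16, 32 or 64
    #
    # The integer-literal rules in `tokens` below interpolate this fragment,
    # so suffixed and unsuffixed literals share a single pattern each.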
tokens = { 'root': [ include('whitespace'), # Keywords (words(('axiom', 'ctor', 'fun', 'gen', 'proc', 'reduce', 'union'), suffix=r'\b'), Keyword, 'funcname'), (words(('class', 'cclass', 'cstruct', 'obj', 'struct'), suffix=r'\b'), Keyword, 'classname'), (r'(instance|module|typeclass)\b', Keyword, 'modulename'), (words(keywords, suffix=r'\b'), Keyword), (words(keyword_directives, suffix=r'\b'), Name.Decorator), (words(keyword_declarations, suffix=r'\b'), Keyword.Declaration), (words(keyword_types, suffix=r'\b'), Keyword.Type), (words(keyword_constants, suffix=r'\b'), Keyword.Constant), # Operators include('operators'), # Float Literal # -- Hex Float (r'0[xX]([0-9a-fA-F_]*\.[0-9a-fA-F_]+|[0-9a-fA-F_]+)' r'[pP][+\-]?[0-9_]+[lLfFdD]?', Number.Float), # -- DecimalFloat (r'[0-9_]+(\.[0-9_]+[eE][+\-]?[0-9_]+|' r'\.[0-9_]*|[eE][+\-]?[0-9_]+)[lLfFdD]?', Number.Float), (r'\.(0|[1-9][0-9_]*)([eE][+\-]?[0-9_]+)?[lLfFdD]?', Number.Float), # IntegerLiteral # -- Binary (r'0[Bb][01_]+%s' % decimal_suffixes, Number.Bin), # -- Octal (r'0[0-7_]+%s' % decimal_suffixes, Number.Oct), # -- Hexadecimal (r'0[xX][0-9a-fA-F_]+%s' % decimal_suffixes, Number.Hex), # -- Decimal (r'(0|[1-9][0-9_]*)%s' % decimal_suffixes, Number.Integer), # Strings ('([rR][cC]?|[cC][rR])"""', String, 'tdqs'), ("([rR][cC]?|[cC][rR])'''", String, 'tsqs'), ('([rR][cC]?|[cC][rR])"', String, 'dqs'), ("([rR][cC]?|[cC][rR])'", String, 'sqs'), ('[cCfFqQwWuU]?"""', String, combined('stringescape', 'tdqs')), ("[cCfFqQwWuU]?'''", String, combined('stringescape', 'tsqs')), ('[cCfFqQwWuU]?"', String, combined('stringescape', 'dqs')), ("[cCfFqQwWuU]?'", String, combined('stringescape', 'sqs')), # Punctuation (r'[\[\]{}:(),;?]', Punctuation), # Labels (r'[a-zA-Z_]\w*:>', Name.Label), # Identifiers (r'(%s)\b' % '|'.join(name_builtins), Name.Builtin), (r'(%s)\b' % '|'.join(name_pseudo), Name.Builtin.Pseudo), (r'[a-zA-Z_]\w*', Name), ], 'whitespace': [ (r'\s+', Whitespace), include('comment'), # Preprocessor (r'(#)(\s*)(if)(\s+)(0)', bygroups(Comment.Preproc, Whitespace, Comment.Preproc, Whitespace, Comment.Preproc), 'if0'), (r'#', Comment.Preproc, 'macro'), ], 'operators': [ (r'(%s)\b' % '|'.join(operator_words), Operator.Word), (r'!=|==|<<|>>|\|\||&&|[-~+/*%=<>&^|.$]', Operator), ], 'comment': [ (r'//(.*?)$', Comment.Single), (r'/[*]', Comment.Multiline, 'comment2'), ], 'comment2': [ (r'[^/*]', Comment.Multiline), (r'/[*]', Comment.Multiline, '#push'), (r'[*]/', Comment.Multiline, '#pop'), (r'[/*]', Comment.Multiline), ], 'if0': [ (r'^(\s*)(#if.*?(?]*?>)', bygroups(Comment.Preproc, Whitespace, String), '#pop'), (r'(import|include)(\s+)("[^"]*?")', bygroups(Comment.Preproc, Whitespace, String), '#pop'), (r"(import|include)(\s+)('[^']*?')", bygroups(Comment.Preproc, Whitespace, String), '#pop'), (r'[^/\n]+', Comment.Preproc), # (r'/[*](.|\n)*?[*]/', Comment), # (r'//.*?\n', Comment, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Whitespace, '#pop'), ], 'funcname': [ include('whitespace'), (r'[a-zA-Z_]\w*', Name.Function, '#pop'), # anonymous functions (r'(?=\()', Text, '#pop'), ], 'classname': [ include('whitespace'), (r'[a-zA-Z_]\w*', Name.Class, '#pop'), # anonymous classes (r'(?=\{)', Text, '#pop'), ], 'modulename': [ include('whitespace'), (r'\[', Punctuation, ('modulename2', 'tvarlist')), default('modulename2'), ], 'modulename2': [ include('whitespace'), (r'([a-zA-Z_]\w*)', Name.Namespace, '#pop:2'), ], 'tvarlist': [ include('whitespace'), include('operators'), (r'\[', Punctuation, '#push'), (r'\]', Punctuation, '#pop'), 
(r',', Punctuation), (r'(with|where)\b', Keyword), (r'[a-zA-Z_]\w*', Name), ], 'stringescape': [ (r'\\([\\abfnrtv"\']|\n|N\{.*?\}|u[a-fA-F0-9]{4}|' r'U[a-fA-F0-9]{8}|x[a-fA-F0-9]{2}|[0-7]{1,3})', String.Escape) ], 'strings': [ (r'%(\([a-zA-Z0-9]+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?' '[hlL]?[E-GXc-giorsux%]', String.Interpol), (r'[^\\\'"%\n]+', String), # quotes, percents and backslashes must be parsed one at a time (r'[\'"\\]', String), # unhandled string formatting sign (r'%', String) # newlines are an error (use "nl" state) ], 'nl': [ (r'\n', String) ], 'dqs': [ (r'"', String, '#pop'), # included here again for raw strings (r'\\\\|\\"|\\\n', String.Escape), include('strings') ], 'sqs': [ (r"'", String, '#pop'), # included here again for raw strings (r"\\\\|\\'|\\\n", String.Escape), include('strings') ], 'tdqs': [ (r'"""', String, '#pop'), include('strings'), include('nl') ], 'tsqs': [ (r"'''", String, '#pop'), include('strings'), include('nl') ], } pygments-2.11.2/pygments/lexers/stata.py0000644000175000017500000001441614165547207020202 0ustar carstencarsten""" pygments.lexers.stata ~~~~~~~~~~~~~~~~~~~~~ Lexer for Stata :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, default, include, words from pygments.token import Comment, Keyword, Name, Number, \ String, Text, Operator from pygments.lexers._stata_builtins import builtins_base, builtins_functions __all__ = ['StataLexer'] class StataLexer(RegexLexer): """ For `Stata `_ do files. .. versionadded:: 2.2 """ # Syntax based on # - http://fmwww.bc.edu/RePEc/bocode/s/synlightlist.ado # - https://github.com/isagalaev/highlight.js/blob/master/src/languages/stata.js # - https://github.com/jpitblado/vim-stata/blob/master/syntax/stata.vim name = 'Stata' aliases = ['stata', 'do'] filenames = ['*.do', '*.ado'] mimetypes = ['text/x-stata', 'text/stata', 'application/x-stata'] flags = re.MULTILINE | re.DOTALL tokens = { 'root': [ include('comments'), include('strings'), include('macros'), include('numbers'), include('keywords'), include('operators'), include('format'), (r'.', Text), ], # Comments are a complicated beast in Stata because they can be # nested and there are a few corner cases with that. See: # - github.com/kylebarron/language-stata/issues/90 # - statalist.org/forums/forum/general-stata-discussion/general/1448244 'comments': [ (r'(^//|(?<=\s)//)(?!/)', Comment.Single, 'comments-double-slash'), (r'^\s*\*', Comment.Single, 'comments-star'), (r'/\*', Comment.Multiline, 'comments-block'), (r'(^///|(?<=\s)///)', Comment.Special, 'comments-triple-slash') ], 'comments-block': [ (r'/\*', Comment.Multiline, '#push'), # this ends and restarts a comment block. 
but need to catch this so # that it doesn\'t start _another_ level of comment blocks (r'\*/\*', Comment.Multiline), (r'(\*/\s+\*(?!/)[^\n]*)|(\*/)', Comment.Multiline, '#pop'), # Match anything else as a character inside the comment (r'.', Comment.Multiline), ], 'comments-star': [ (r'///.*?\n', Comment.Single, ('#pop', 'comments-triple-slash')), (r'(^//|(?<=\s)//)(?!/)', Comment.Single, ('#pop', 'comments-double-slash')), (r'/\*', Comment.Multiline, 'comments-block'), (r'.(?=\n)', Comment.Single, '#pop'), (r'.', Comment.Single), ], 'comments-triple-slash': [ (r'\n', Comment.Special, '#pop'), # A // breaks out of a comment for the rest of the line (r'//.*?(?=\n)', Comment.Single, '#pop'), (r'.', Comment.Special), ], 'comments-double-slash': [ (r'\n', Text, '#pop'), (r'.', Comment.Single), ], # `"compound string"' and regular "string"; note the former are # nested. 'strings': [ (r'`"', String, 'string-compound'), (r'(?=|<|>|&|!=', Operator), (r'\*|\+|\^|/|!|~|==|~=', Operator) ], # Stata numbers 'numbers': [ # decimal number (r'\b[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+|\.)([eE][+-]?[0-9]+)?[i]?\b', Number), ], # Stata formats 'format': [ (r'%-?\d{1,2}(\.\d{1,2})?[gfe]c?', Name.Other), (r'%(21x|16H|16L|8H|8L)', Name.Other), (r'%-?(tc|tC|td|tw|tm|tq|th|ty|tg)\S{0,32}', Name.Other), (r'%[-~]?\d{1,4}s', Name.Other), ] } pygments-2.11.2/pygments/lexers/graph.py0000644000175000017500000000742614165547207020172 0ustar carstencarsten""" pygments.lexers.graph ~~~~~~~~~~~~~~~~~~~~~ Lexers for graph query languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, this, words from pygments.token import Keyword, Punctuation, Comment, Operator, Name,\ String, Number, Whitespace __all__ = ['CypherLexer'] class CypherLexer(RegexLexer): """ For `Cypher Query Language `_ For the Cypher version in Neo4j 3.3 .. 
versionadded:: 2.0
    """
    name = 'Cypher'
    aliases = ['cypher']
    filenames = ['*.cyp', '*.cypher']

    flags = re.MULTILINE | re.IGNORECASE

    tokens = {
        'root': [
            include('comment'),
            include('clauses'),
            include('keywords'),
            include('relations'),
            include('strings'),
            include('whitespace'),
            include('barewords'),
        ],
        'comment': [
            (r'^.*//.*$', Comment.Single),
        ],
        'keywords': [
            (r'(create|order|match|limit|set|skip|start|return|with|where|'
             r'delete|foreach|not|by|true|false)\b', Keyword),
        ],
        'clauses': [
            # based on https://neo4j.com/docs/cypher-refcard/3.3/
            (r'(create)(\s+)(index|unique)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(drop)(\s+)(constraint|index)(\s+)(on)\b',
             bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword)),
            (r'(ends)(\s+)(with)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(is)(\s+)(node)(\s+)(key)\b',
             bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword)),
            (r'(is)(\s+)(null|unique)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(load)(\s+)(csv)(\s+)(from)\b',
             bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword)),
            (r'(on)(\s+)(match|create)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(optional)(\s+)(match)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(order)(\s+)(by)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(starts)(\s+)(with)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(union)(\s+)(all)\b',
             bygroups(Keyword, Whitespace, Keyword)),
            (r'(using)(\s+)(periodic)(\s+)(commit)\b',
             bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword)),
            (words((
                'all', 'any', 'as', 'asc', 'ascending', 'assert', 'call',
                'case', 'create', 'delete', 'desc', 'descending', 'distinct',
                'end', 'fieldterminator', 'foreach', 'in', 'limit', 'match',
                'merge', 'none', 'not', 'null', 'remove', 'return', 'set',
                'skip', 'single', 'start', 'then', 'union', 'unwind', 'yield',
                'where', 'when', 'with'), suffix=r'\b'), Keyword),
        ],
        'relations': [
            (r'(-\[)(.*?)(\]->)', bygroups(Operator, using(this), Operator)),
            (r'(<-\[)(.*?)(\]-)', bygroups(Operator, using(this), Operator)),
            (r'(-\[)(.*?)(\]-)', bygroups(Operator, using(this), Operator)),
            (r'-->|<--|\[|\]', Operator),
            (r'<|>|<>|=|<=|=>|\(|\)|\||:|,|;', Punctuation),
            (r'[.*{}]', Punctuation),
        ],
        'strings': [
            (r'"(?:\\[tbnrf\'"\\]|[^\\"])*"', String),
            (r'`(?:``|[^`])+`', Name.Variable),
        ],
        'whitespace': [
            (r'\s+', Whitespace),
        ],
        'barewords': [
            (r'[a-z]\w*', Name),
            (r'\d+', Number),
        ],
    }
pygments-2.11.2/pygments/lexers/csound.py0000644000175000017500000004110114165547207020350 0ustar carstencarsten"""
pygments.lexers.csound
~~~~~~~~~~~~~~~~~~~~~~

Lexers for Csound languages.

:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
""" import re from pygments.lexer import RegexLexer, bygroups, default, include, using, words from pygments.token import Comment, Error, Keyword, Name, Number, Operator, Punctuation, \ String, Text, Whitespace from pygments.lexers._csound_builtins import OPCODES, DEPRECATED_OPCODES, REMOVED_OPCODES from pygments.lexers.html import HtmlLexer from pygments.lexers.python import PythonLexer from pygments.lexers.scripting import LuaLexer __all__ = ['CsoundScoreLexer', 'CsoundOrchestraLexer', 'CsoundDocumentLexer'] newline = (r'((?:(?:;|//).*)*)(\n)', bygroups(Comment.Single, Text)) class CsoundLexer(RegexLexer): tokens = { 'whitespace': [ (r'[ \t]+', Whitespace), (r'/[*](?:.|\n)*?[*]/', Comment.Multiline), (r'(?:;|//).*$', Comment.Single), (r'(\\)(\n)', bygroups(Text, Whitespace)) ], 'preprocessor directives': [ (r'#(?:e(?:nd(?:if)?|lse)\b|##)|@@?[ \t]*\d+', Comment.Preproc), (r'#includestr', Comment.Preproc, 'includestr directive'), (r'#include', Comment.Preproc, 'include directive'), (r'#[ \t]*define', Comment.Preproc, 'define directive'), (r'#(?:ifn?def|undef)\b', Comment.Preproc, 'macro directive') ], 'include directive': [ include('whitespace'), (r'([^ \t]).*?\1', String, '#pop') ], 'includestr directive': [ include('whitespace'), (r'"', String, ('#pop', 'quoted string')) ], 'define directive': [ (r'\n', Whitespace), include('whitespace'), (r'([A-Z_a-z]\w*)(\()', bygroups(Comment.Preproc, Punctuation), ('#pop', 'macro parameter name list')), (r'[A-Z_a-z]\w*', Comment.Preproc, ('#pop', 'before macro body')) ], 'macro parameter name list': [ include('whitespace'), (r'[A-Z_a-z]\w*', Comment.Preproc), (r"['#]", Punctuation), (r'\)', Punctuation, ('#pop', 'before macro body')) ], 'before macro body': [ (r'\n', Whitespace), include('whitespace'), (r'#', Punctuation, ('#pop', 'macro body')) ], 'macro body': [ (r'(?:\\(?!#)|[^#\\]|\n)+', Comment.Preproc), (r'\\#', Comment.Preproc), (r'(?`_ scores. .. versionadded:: 2.1 """ name = 'Csound Score' aliases = ['csound-score', 'csound-sco'] filenames = ['*.sco'] tokens = { 'root': [ (r'\n', Whitespace), include('whitespace and macro uses'), include('preprocessor directives'), (r'[aBbCdefiqstvxy]', Keyword), # There is also a w statement that is generated internally and should not be # used; see https://github.com/csound/csound/issues/750. (r'z', Keyword.Constant), # z is a constant equal to 800,000,000,000. 800 billion seconds is about # 25,367.8 years. See also # https://csound.com/docs/manual/ScoreTop.html and # https://github.com/csound/csound/search?q=stof+path%3AEngine+filename%3Asread.c. (r'([nNpP][pP])(\d+)', bygroups(Keyword, Number.Integer)), (r'[mn]', Keyword, 'mark statement'), include('numbers'), (r'[!+\-*/^%&|<>#~.]', Operator), (r'[()\[\]]', Punctuation), (r'"', String, 'quoted string'), (r'\{', Comment.Preproc, 'loop after left brace'), ], 'mark statement': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', Name.Label), (r'\n', Whitespace, '#pop') ], 'loop after left brace': [ include('whitespace and macro uses'), (r'\d+', Number.Integer, ('#pop', 'loop after repeat count')), ], 'loop after repeat count': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', Comment.Preproc, ('#pop', 'loop')) ], 'loop': [ (r'\}', Comment.Preproc, '#pop'), include('root') ], # Braced strings are not allowed in Csound scores, but this is needed because the # superclass includes it. 'braced string': [ (r'\}\}', String, '#pop'), (r'[^}]|\}(?!\})', String) ] } class CsoundOrchestraLexer(CsoundLexer): """ For `Csound `_ orchestras. .. 
versionadded:: 2.1 """ name = 'Csound Orchestra' aliases = ['csound', 'csound-orc'] filenames = ['*.orc', '*.udo'] user_defined_opcodes = set() def opcode_name_callback(lexer, match): opcode = match.group(0) lexer.user_defined_opcodes.add(opcode) yield match.start(), Name.Function, opcode def name_callback(lexer, match): type_annotation_token = Keyword.Type name = match.group(1) if name in OPCODES or name in DEPRECATED_OPCODES or name in REMOVED_OPCODES: yield match.start(), Name.Builtin, name elif name in lexer.user_defined_opcodes: yield match.start(), Name.Function, name else: type_annotation_token = Name name_match = re.search(r'^(g?[afikSw])(\w+)', name) if name_match: yield name_match.start(1), Keyword.Type, name_match.group(1) yield name_match.start(2), Name, name_match.group(2) else: yield match.start(), Name, name if match.group(2): yield match.start(2), Punctuation, match.group(2) yield match.start(3), type_annotation_token, match.group(3) tokens = { 'root': [ (r'\n', Whitespace), (r'^([ \t]*)(\w+)(:)([ \t]+|$)', bygroups(Whitespace, Name.Label, Punctuation, Whitespace)), include('whitespace and macro uses'), include('preprocessor directives'), (r'\binstr\b', Keyword.Declaration, 'instrument numbers and identifiers'), (r'\bopcode\b', Keyword.Declaration, 'after opcode keyword'), (r'\b(?:end(?:in|op))\b', Keyword.Declaration), include('partial statements') ], 'partial statements': [ (r'\b(?:0dbfs|A4|k(?:r|smps)|nchnls(?:_i)?|sr)\b', Name.Variable.Global), include('numbers'), (r'\+=|-=|\*=|/=|<<|>>|<=|>=|==|!=|&&|\|\||[~¬]|[=!+\-*/^%&|<>#?:]', Operator), (r'[(),\[\]]', Punctuation), (r'"', String, 'quoted string'), (r'\{\{', String, 'braced string'), (words(( 'do', 'else', 'elseif', 'endif', 'enduntil', 'fi', 'if', 'ithen', 'kthen', 'od', 'then', 'until', 'while', ), prefix=r'\b', suffix=r'\b'), Keyword), (words(('return', 'rireturn'), prefix=r'\b', suffix=r'\b'), Keyword.Pseudo), (r'\b[ik]?goto\b', Keyword, 'goto label'), (r'\b(r(?:einit|igoto)|tigoto)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), 'goto label'), (r'\b(c(?:g|in?|k|nk?)goto)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument')), (r'\b(timout)(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument', 'goto argument')), (r'\b(loop_[gl][et])(\(|\b)', bygroups(Keyword.Pseudo, Punctuation), ('goto label', 'goto argument', 'goto argument', 'goto argument')), (r'\bprintk?s\b', Name.Builtin, 'prints opcode'), (r'\b(?:readscore|scoreline(?:_i)?)\b', Name.Builtin, 'Csound score opcode'), (r'\bpyl?run[it]?\b', Name.Builtin, 'Python opcode'), (r'\blua_(?:exec|opdef)\b', Name.Builtin, 'Lua opcode'), (r'\bp\d+\b', Name.Variable.Instance), (r'\b([A-Z_a-z]\w*)(?:(:)([A-Za-z]))?\b', name_callback) ], 'instrument numbers and identifiers': [ include('whitespace and macro uses'), (r'\d+|[A-Z_a-z]\w*', Name.Function), (r'[+,]', Punctuation), (r'\n', Whitespace, '#pop') ], 'after opcode keyword': [ include('whitespace and macro uses'), (r'[A-Z_a-z]\w*', opcode_name_callback, ('#pop', 'opcode type signatures')), (r'\n', Whitespace, '#pop') ], 'opcode type signatures': [ include('whitespace and macro uses'), # https://github.com/csound/csound/search?q=XIDENT+path%3AEngine+filename%3Acsound_orc.lex (r'0|[afijkKoOpPStV\[\]]+', Keyword.Type), (r',', Punctuation), (r'\n', Whitespace, '#pop') ], 'quoted string': [ (r'"', String, '#pop'), (r'[^\\"$%)]+', String), include('macro uses'), include('escape sequences'), include('format specifiers'), (r'[\\$%)]', String) ], 'braced string': [ (r'\}\}', 
String, '#pop'), (r'(?:[^\\%)}]|\}(?!\}))+', String), include('escape sequences'), include('format specifiers'), (r'[\\%)]', String) ], 'escape sequences': [ # https://github.com/csound/csound/search?q=unquote_string+path%3AEngine+filename%3Acsound_orc_compile.c (r'\\(?:[\\abnrt"]|[0-7]{1,3})', String.Escape) ], # Format specifiers are highlighted in all strings, even though only # fprintks https://csound.com/docs/manual/fprintks.html # fprints https://csound.com/docs/manual/fprints.html # printf/printf_i https://csound.com/docs/manual/printf.html # printks https://csound.com/docs/manual/printks.html # prints https://csound.com/docs/manual/prints.html # sprintf https://csound.com/docs/manual/sprintf.html # sprintfk https://csound.com/docs/manual/sprintfk.html # work with strings that contain format specifiers. In addition, these opcodes’ # handling of format specifiers is inconsistent: # - fprintks and fprints accept %a and %A specifiers, and accept %s specifiers # starting in Csound 6.15.0. # - printks and prints accept %a and %A specifiers, but don’t accept %s # specifiers. # - printf, printf_i, sprintf, and sprintfk don’t accept %a and %A specifiers, # but accept %s specifiers. # See https://github.com/csound/csound/issues/747 for more information. 'format specifiers': [ (r'%[#0\- +]*\d*(?:\.\d+)?[AE-GXac-giosux]', String.Interpol), (r'%%', String.Escape) ], 'goto argument': [ include('whitespace and macro uses'), (r',', Punctuation, '#pop'), include('partial statements') ], 'goto label': [ include('whitespace and macro uses'), (r'\w+', Name.Label, '#pop'), default('#pop') ], 'prints opcode': [ include('whitespace and macro uses'), (r'"', String, 'prints quoted string'), default('#pop') ], 'prints quoted string': [ (r'\\\\[aAbBnNrRtT]', String.Escape), (r'%[!nNrRtT]|[~^]{1,2}', String.Escape), include('quoted string') ], 'Csound score opcode': [ include('whitespace and macro uses'), (r'"', String, 'quoted string'), (r'\{\{', String, 'Csound score'), (r'\n', Whitespace, '#pop') ], 'Csound score': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(CsoundScoreLexer)) ], 'Python opcode': [ include('whitespace and macro uses'), (r'"', String, 'quoted string'), (r'\{\{', String, 'Python'), (r'\n', Whitespace, '#pop') ], 'Python': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(PythonLexer)) ], 'Lua opcode': [ include('whitespace and macro uses'), (r'"', String, 'quoted string'), (r'\{\{', String, 'Lua'), (r'\n', Whitespace, '#pop') ], 'Lua': [ (r'\}\}', String, '#pop'), (r'([^}]+)|\}(?!\})', using(LuaLexer)) ] } class CsoundDocumentLexer(RegexLexer): """ For `Csound `_ documents. .. versionadded:: 2.1 """ name = 'Csound Document' aliases = ['csound-document', 'csound-csd'] filenames = ['*.csd'] # These tokens are based on those in XmlLexer in pygments/lexers/html.py. Making # CsoundDocumentLexer a subclass of XmlLexer rather than RegexLexer may seem like a # better idea, since Csound Document files look like XML files. However, Csound # Documents can contain Csound comments (preceded by //, for example) before and # after the root element, unescaped bitwise AND & and less than < operators, etc. In # other words, while Csound Document files look like XML files, they may not actually # be XML files. 
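
    # For example, input like the following (a hypothetical minimal document,
    # not taken from any Csound distribution) must lex cleanly even though a
    # strict XML parser would reject the leading Csound comment:
    #
    #   // orchestra and score follow
    #   <CsoundSynthesizer>
    #     <CsInstruments> ... </CsInstruments>
    #     <CsScore> ... </CsScore>
    #   </CsoundSynthesizer>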
tokens = { 'root': [ (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'(?:;|//).*$', Comment.Single), (r'[^/;<]+|/(?!/)', Text), (r'<\s*CsInstruments', Name.Tag, ('orchestra', 'tag')), (r'<\s*CsScore', Name.Tag, ('score', 'tag')), (r'<\s*[Hh][Tt][Mm][Ll]', Name.Tag, ('HTML', 'tag')), (r'<\s*[\w:.-]+', Name.Tag, 'tag'), (r'<\s*/\s*[\w:.-]+\s*>', Name.Tag) ], 'orchestra': [ (r'<\s*/\s*CsInstruments\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*CsInstruments\s*>)', using(CsoundOrchestraLexer)) ], 'score': [ (r'<\s*/\s*CsScore\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*CsScore\s*>)', using(CsoundScoreLexer)) ], 'HTML': [ (r'<\s*/\s*[Hh][Tt][Mm][Ll]\s*>', Name.Tag, '#pop'), (r'(.|\n)+?(?=<\s*/\s*[Hh][Tt][Mm][Ll]\s*>)', using(HtmlLexer)) ], 'tag': [ (r'\s+', Whitespace), (r'[\w.:-]+\s*=', Name.Attribute, 'attr'), (r'/?\s*>', Name.Tag, '#pop') ], 'attr': [ (r'\s+', Whitespace), (r'".*?"', String, '#pop'), (r"'.*?'", String, '#pop'), (r'[^\s>]+', String, '#pop') ] } pygments-2.11.2/pygments/lexers/idl.py0000644000175000017500000003557014165547207017642 0ustar carstencarsten""" pygments.lexers.idl ~~~~~~~~~~~~~~~~~~~ Lexers for IDL. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, words from pygments.token import Text, Comment, Operator, Keyword, Name, Number, String __all__ = ['IDLLexer'] class IDLLexer(RegexLexer): """ Pygments Lexer for IDL (Interactive Data Language). .. versionadded:: 1.6 """ name = 'IDL' aliases = ['idl'] filenames = ['*.pro'] mimetypes = ['text/idl'] flags = re.IGNORECASE | re.MULTILINE _RESERVED = ( 'and', 'begin', 'break', 'case', 'common', 'compile_opt', 'continue', 'do', 'else', 'end', 'endcase', 'endelse', 'endfor', 'endforeach', 'endif', 'endrep', 'endswitch', 'endwhile', 'eq', 'for', 'foreach', 'forward_function', 'function', 'ge', 'goto', 'gt', 'if', 'inherits', 'le', 'lt', 'mod', 'ne', 'not', 'of', 'on_ioerror', 'or', 'pro', 'repeat', 'switch', 'then', 'until', 'while', 'xor') """Reserved words from: http://www.exelisvis.com/docs/reswords.html""" _BUILTIN_LIB = ( 'abs', 'acos', 'adapt_hist_equal', 'alog', 'alog10', 'amoeba', 'annotate', 'app_user_dir', 'app_user_dir_query', 'arg_present', 'array_equal', 'array_indices', 'arrow', 'ascii_template', 'asin', 'assoc', 'atan', 'axis', 'a_correlate', 'bandpass_filter', 'bandreject_filter', 'barplot', 'bar_plot', 'beseli', 'beselj', 'beselk', 'besely', 'beta', 'bilinear', 'binary_template', 'bindgen', 'binomial', 'bin_date', 'bit_ffs', 'bit_population', 'blas_axpy', 'blk_con', 'box_cursor', 'breakpoint', 'broyden', 'butterworth', 'bytarr', 'byte', 'byteorder', 'bytscl', 'caldat', 'calendar', 'call_external', 'call_function', 'call_method', 'call_procedure', 'canny', 'catch', 'cd', r'cdf_\w*', 'ceil', 'chebyshev', 'check_math', 'chisqr_cvf', 'chisqr_pdf', 'choldc', 'cholsol', 'cindgen', 'cir_3pnt', 'close', 'cluster', 'cluster_tree', 'clust_wts', 'cmyk_convert', 'colorbar', 'colorize_sample', 'colormap_applicable', 'colormap_gradient', 'colormap_rotation', 'colortable', 'color_convert', 'color_exchange', 'color_quan', 'color_range_map', 'comfit', 'command_line_args', 'complex', 'complexarr', 'complexround', 'compute_mesh_normals', 'cond', 'congrid', 'conj', 'constrained_min', 'contour', 'convert_coord', 'convol', 'convol_fft', 'coord2to3', 'copy_lun', 'correlate', 'cos', 'cosh', 'cpu', 'cramer', 'create_cursor', 'create_struct', 'create_view', 'crossp', 'crvlength', 'cti_test', 'ct_luminance', 'cursor', 'curvefit', 
'cvttobm', 'cv_coord', 'cw_animate', 'cw_animate_getp', 'cw_animate_load', 'cw_animate_run', 'cw_arcball', 'cw_bgroup', 'cw_clr_index', 'cw_colorsel', 'cw_defroi', 'cw_field', 'cw_filesel', 'cw_form', 'cw_fslider', 'cw_light_editor', 'cw_light_editor_get', 'cw_light_editor_set', 'cw_orient', 'cw_palette_editor', 'cw_palette_editor_get', 'cw_palette_editor_set', 'cw_pdmenu', 'cw_rgbslider', 'cw_tmpl', 'cw_zoom', 'c_correlate', 'dblarr', 'db_exists', 'dcindgen', 'dcomplex', 'dcomplexarr', 'define_key', 'define_msgblk', 'define_msgblk_from_file', 'defroi', 'defsysv', 'delvar', 'dendrogram', 'dendro_plot', 'deriv', 'derivsig', 'determ', 'device', 'dfpmin', 'diag_matrix', 'dialog_dbconnect', 'dialog_message', 'dialog_pickfile', 'dialog_printersetup', 'dialog_printjob', 'dialog_read_image', 'dialog_write_image', 'digital_filter', 'dilate', 'dindgen', 'dissolve', 'dist', 'distance_measure', 'dlm_load', 'dlm_register', 'doc_library', 'double', 'draw_roi', 'edge_dog', 'efont', 'eigenql', 'eigenvec', 'ellipse', 'elmhes', 'emboss', 'empty', 'enable_sysrtn', 'eof', r'eos_\w*', 'erase', 'erf', 'erfc', 'erfcx', 'erode', 'errorplot', 'errplot', 'estimator_filter', 'execute', 'exit', 'exp', 'expand', 'expand_path', 'expint', 'extrac', 'extract_slice', 'factorial', 'fft', 'filepath', 'file_basename', 'file_chmod', 'file_copy', 'file_delete', 'file_dirname', 'file_expand_path', 'file_info', 'file_lines', 'file_link', 'file_mkdir', 'file_move', 'file_poll_input', 'file_readlink', 'file_same', 'file_search', 'file_test', 'file_which', 'findgen', 'finite', 'fix', 'flick', 'float', 'floor', 'flow3', 'fltarr', 'flush', 'format_axis_values', 'free_lun', 'fstat', 'fulstr', 'funct', 'fv_test', 'fx_root', 'fz_roots', 'f_cvf', 'f_pdf', 'gamma', 'gamma_ct', 'gauss2dfit', 'gaussfit', 'gaussian_function', 'gaussint', 'gauss_cvf', 'gauss_pdf', 'gauss_smooth', 'getenv', 'getwindows', 'get_drive_list', 'get_dxf_objects', 'get_kbrd', 'get_login_info', 'get_lun', 'get_screen_size', 'greg2jul', r'grib_\w*', 'grid3', 'griddata', 'grid_input', 'grid_tps', 'gs_iter', r'h5[adfgirst]_\w*', 'h5_browser', 'h5_close', 'h5_create', 'h5_get_libversion', 'h5_open', 'h5_parse', 'hanning', 'hash', r'hdf_\w*', 'heap_free', 'heap_gc', 'heap_nosave', 'heap_refcount', 'heap_save', 'help', 'hilbert', 'histogram', 'hist_2d', 'hist_equal', 'hls', 'hough', 'hqr', 'hsv', 'h_eq_ct', 'h_eq_int', 'i18n_multibytetoutf8', 'i18n_multibytetowidechar', 'i18n_utf8tomultibyte', 'i18n_widechartomultibyte', 'ibeta', 'icontour', 'iconvertcoord', 'idelete', 'identity', 'idlexbr_assistant', 'idlitsys_createtool', 'idl_base64', 'idl_validname', 'iellipse', 'igamma', 'igetcurrent', 'igetdata', 'igetid', 'igetproperty', 'iimage', 'image', 'image_cont', 'image_statistics', 'imaginary', 'imap', 'indgen', 'intarr', 'interpol', 'interpolate', 'interval_volume', 'int_2d', 'int_3d', 'int_tabulated', 'invert', 'ioctl', 'iopen', 'iplot', 'ipolygon', 'ipolyline', 'iputdata', 'iregister', 'ireset', 'iresolve', 'irotate', 'ir_filter', 'isa', 'isave', 'iscale', 'isetcurrent', 'isetproperty', 'ishft', 'isocontour', 'isosurface', 'isurface', 'itext', 'itranslate', 'ivector', 'ivolume', 'izoom', 'i_beta', 'journal', 'json_parse', 'json_serialize', 'jul2greg', 'julday', 'keyword_set', 'krig2d', 'kurtosis', 'kw_test', 'l64indgen', 'label_date', 'label_region', 'ladfit', 'laguerre', 'laplacian', 'la_choldc', 'la_cholmprove', 'la_cholsol', 'la_determ', 'la_eigenproblem', 'la_eigenql', 'la_eigenvec', 'la_elmhes', 'la_gm_linear_model', 'la_hqr', 'la_invert', 'la_least_squares', 
'la_least_square_equality', 'la_linear_equation', 'la_ludc', 'la_lumprove', 'la_lusol', 'la_svd', 'la_tridc', 'la_trimprove', 'la_triql', 'la_trired', 'la_trisol', 'least_squares_filter', 'leefilt', 'legend', 'legendre', 'linbcg', 'lindgen', 'linfit', 'linkimage', 'list', 'll_arc_distance', 'lmfit', 'lmgr', 'lngamma', 'lnp_test', 'loadct', 'locale_get', 'logical_and', 'logical_or', 'logical_true', 'lon64arr', 'lonarr', 'long', 'long64', 'lsode', 'ludc', 'lumprove', 'lusol', 'lu_complex', 'machar', 'make_array', 'make_dll', 'make_rt', 'map', 'mapcontinents', 'mapgrid', 'map_2points', 'map_continents', 'map_grid', 'map_image', 'map_patch', 'map_proj_forward', 'map_proj_image', 'map_proj_info', 'map_proj_init', 'map_proj_inverse', 'map_set', 'matrix_multiply', 'matrix_power', 'max', 'md_test', 'mean', 'meanabsdev', 'mean_filter', 'median', 'memory', 'mesh_clip', 'mesh_decimate', 'mesh_issolid', 'mesh_merge', 'mesh_numtriangles', 'mesh_obj', 'mesh_smooth', 'mesh_surfacearea', 'mesh_validate', 'mesh_volume', 'message', 'min', 'min_curve_surf', 'mk_html_help', 'modifyct', 'moment', 'morph_close', 'morph_distance', 'morph_gradient', 'morph_hitormiss', 'morph_open', 'morph_thin', 'morph_tophat', 'multi', 'm_correlate', r'ncdf_\w*', 'newton', 'noise_hurl', 'noise_pick', 'noise_scatter', 'noise_slur', 'norm', 'n_elements', 'n_params', 'n_tags', 'objarr', 'obj_class', 'obj_destroy', 'obj_hasmethod', 'obj_isa', 'obj_new', 'obj_valid', 'online_help', 'on_error', 'open', 'oplot', 'oploterr', 'parse_url', 'particle_trace', 'path_cache', 'path_sep', 'pcomp', 'plot', 'plot3d', 'ploterr', 'plots', 'plot_3dbox', 'plot_field', 'pnt_line', 'point_lun', 'polarplot', 'polar_contour', 'polar_surface', 'poly', 'polyfill', 'polyfillv', 'polygon', 'polyline', 'polyshade', 'polywarp', 'poly_2d', 'poly_area', 'poly_fit', 'popd', 'powell', 'pref_commit', 'pref_get', 'pref_set', 'prewitt', 'primes', 'print', 'printd', 'product', 'profile', 'profiler', 'profiles', 'project_vol', 'psafm', 'pseudo', 'ps_show_fonts', 'ptrarr', 'ptr_free', 'ptr_new', 'ptr_valid', 'pushd', 'p_correlate', 'qgrid3', 'qhull', 'qromb', 'qromo', 'qsimp', 'query_ascii', 'query_bmp', 'query_csv', 'query_dicom', 'query_gif', 'query_image', 'query_jpeg', 'query_jpeg2000', 'query_mrsid', 'query_pict', 'query_png', 'query_ppm', 'query_srf', 'query_tiff', 'query_wav', 'radon', 'randomn', 'randomu', 'ranks', 'rdpix', 'read', 'reads', 'readu', 'read_ascii', 'read_binary', 'read_bmp', 'read_csv', 'read_dicom', 'read_gif', 'read_image', 'read_interfile', 'read_jpeg', 'read_jpeg2000', 'read_mrsid', 'read_pict', 'read_png', 'read_ppm', 'read_spr', 'read_srf', 'read_sylk', 'read_tiff', 'read_wav', 'read_wave', 'read_x11_bitmap', 'read_xwd', 'real_part', 'rebin', 'recall_commands', 'recon3', 'reduce_colors', 'reform', 'region_grow', 'register_cursor', 'regress', 'replicate', 'replicate_inplace', 'resolve_all', 'resolve_routine', 'restore', 'retall', 'return', 'reverse', 'rk4', 'roberts', 'rot', 'rotate', 'round', 'routine_filepath', 'routine_info', 'rs_test', 'r_correlate', 'r_test', 'save', 'savgol', 'scale3', 'scale3d', 'scope_level', 'scope_traceback', 'scope_varfetch', 'scope_varname', 'search2d', 'search3d', 'sem_create', 'sem_delete', 'sem_lock', 'sem_release', 'setenv', 'set_plot', 'set_shading', 'sfit', 'shade_surf', 'shade_surf_irr', 'shade_volume', 'shift', 'shift_diff', 'shmdebug', 'shmmap', 'shmunmap', 'shmvar', 'show3', 'showfont', 'simplex', 'sin', 'sindgen', 'sinh', 'size', 'skewness', 'skip_lun', 'slicer3', 'slide_image', 'smooth', 'sobel', 
'socket', 'sort', 'spawn', 'spher_harm', 'sph_4pnt', 'sph_scat', 'spline', 'spline_p', 'spl_init', 'spl_interp', 'sprsab', 'sprsax', 'sprsin', 'sprstp', 'sqrt', 'standardize', 'stddev', 'stop', 'strarr', 'strcmp', 'strcompress', 'streamline', 'stregex', 'stretch', 'string', 'strjoin', 'strlen', 'strlowcase', 'strmatch', 'strmessage', 'strmid', 'strpos', 'strput', 'strsplit', 'strtrim', 'struct_assign', 'struct_hide', 'strupcase', 'surface', 'surfr', 'svdc', 'svdfit', 'svsol', 'swap_endian', 'swap_endian_inplace', 'symbol', 'systime', 's_test', 't3d', 'tag_names', 'tan', 'tanh', 'tek_color', 'temporary', 'tetra_clip', 'tetra_surface', 'tetra_volume', 'text', 'thin', 'threed', 'timegen', 'time_test2', 'tm_test', 'total', 'trace', 'transpose', 'triangulate', 'trigrid', 'triql', 'trired', 'trisol', 'tri_surf', 'truncate_lun', 'ts_coef', 'ts_diff', 'ts_fcast', 'ts_smooth', 'tv', 'tvcrs', 'tvlct', 'tvrd', 'tvscl', 'typename', 't_cvt', 't_pdf', 'uindgen', 'uint', 'uintarr', 'ul64indgen', 'ulindgen', 'ulon64arr', 'ulonarr', 'ulong', 'ulong64', 'uniq', 'unsharp_mask', 'usersym', 'value_locate', 'variance', 'vector', 'vector_field', 'vel', 'velovect', 'vert_t3d', 'voigt', 'voronoi', 'voxel_proj', 'wait', 'warp_tri', 'watershed', 'wdelete', 'wf_draw', 'where', 'widget_base', 'widget_button', 'widget_combobox', 'widget_control', 'widget_displaycontextmen', 'widget_draw', 'widget_droplist', 'widget_event', 'widget_info', 'widget_label', 'widget_list', 'widget_propertysheet', 'widget_slider', 'widget_tab', 'widget_table', 'widget_text', 'widget_tree', 'widget_tree_move', 'widget_window', 'wiener_filter', 'window', 'writeu', 'write_bmp', 'write_csv', 'write_gif', 'write_image', 'write_jpeg', 'write_jpeg2000', 'write_nrif', 'write_pict', 'write_png', 'write_ppm', 'write_spr', 'write_srf', 'write_sylk', 'write_tiff', 'write_wav', 'write_wave', 'wset', 'wshow', 'wtn', 'wv_applet', 'wv_cwt', 'wv_cw_wavelet', 'wv_denoise', 'wv_dwt', 'wv_fn_coiflet', 'wv_fn_daubechies', 'wv_fn_gaussian', 'wv_fn_haar', 'wv_fn_morlet', 'wv_fn_paul', 'wv_fn_symlet', 'wv_import_data', 'wv_import_wavelet', 'wv_plot3d_wps', 'wv_plot_multires', 'wv_pwt', 'wv_tool_denoise', 'xbm_edit', 'xdisplayfile', 'xdxf', 'xfont', 'xinteranimate', 'xloadct', 'xmanager', 'xmng_tmpl', 'xmtool', 'xobjview', 'xobjview_rotate', 'xobjview_write_image', 'xpalette', 'xpcolor', 'xplot3d', 'xregistered', 'xroi', 'xsq_test', 'xsurface', 'xvaredit', 'xvolume', 'xvolume_rotate', 'xvolume_write_image', 'xyouts', 'zoom', 'zoom_24') """Functions from: http://www.exelisvis.com/docs/routines-1.html""" tokens = { 'root': [ (r'^\s*;.*?\n', Comment.Single), (words(_RESERVED, prefix=r'\b', suffix=r'\b'), Keyword), (words(_BUILTIN_LIB, prefix=r'\b', suffix=r'\b'), Name.Builtin), (r'\+=|-=|\^=|\*=|/=|#=|##=|<=|>=|=', Operator), (r'\+\+|--|->|\+|-|##|#|\*|/|<|>|&&|\^|~|\|\|\?|:', Operator), (r'\b(mod=|lt=|le=|eq=|ne=|ge=|gt=|not=|and=|or=|xor=)', Operator), (r'\b(mod|lt|le|eq|ne|ge|gt|not|and|or|xor)\b', Operator), (r'"[^\"]*"', String.Double), (r"'[^\']*'", String.Single), (r'\b[+\-]?([0-9]*\.[0-9]+|[0-9]+\.[0-9]*)(D|E)?([+\-]?[0-9]+)?\b', Number.Float), (r'\b\'[+\-]?[0-9A-F]+\'X(U?(S?|L{1,2})|B)\b', Number.Hex), (r'\b\'[+\-]?[0-7]+\'O(U?(S?|L{1,2})|B)\b', Number.Oct), (r'\b[+\-]?[0-9]+U?L{1,2}\b', Number.Integer.Long), (r'\b[+\-]?[0-9]+U?S?\b', Number.Integer), (r'\b[+\-]?[0-9]+B\b', Number), (r'.', Text), ] } def analyse_text(text): """endelse seems to be unique to IDL, endswitch is rare at least.""" result = 0 if 'endelse' in text: result += 0.2 if 'endswitch' in 
text: result += 0.01 return resultpygments-2.11.2/pygments/lexers/roboconf.py0000644000175000017500000000377614165547207020704 0ustar carstencarsten""" pygments.lexers.roboconf ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for Roboconf DSL. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, words, re from pygments.token import Text, Operator, Keyword, Name, Comment __all__ = ['RoboconfGraphLexer', 'RoboconfInstancesLexer'] class RoboconfGraphLexer(RegexLexer): """ Lexer for `Roboconf `_ graph files. .. versionadded:: 2.1 """ name = 'Roboconf Graph' aliases = ['roboconf-graph'] filenames = ['*.graph'] flags = re.IGNORECASE | re.MULTILINE tokens = { 'root': [ # Skip white spaces (r'\s+', Text), # There is one operator (r'=', Operator), # Keywords (words(('facet', 'import'), suffix=r'\s*\b', prefix=r'\b'), Keyword), (words(( 'installer', 'extends', 'exports', 'imports', 'facets', 'children'), suffix=r'\s*:?', prefix=r'\b'), Name), # Comments (r'#.*\n', Comment), # Default (r'[^#]', Text), (r'.*\n', Text) ] } class RoboconfInstancesLexer(RegexLexer): """ Lexer for `Roboconf `_ instances files. .. versionadded:: 2.1 """ name = 'Roboconf Instances' aliases = ['roboconf-instances'] filenames = ['*.instances'] flags = re.IGNORECASE | re.MULTILINE tokens = { 'root': [ # Skip white spaces (r'\s+', Text), # Keywords (words(('instance of', 'import'), suffix=r'\s*\b', prefix=r'\b'), Keyword), (words(('name', 'count'), suffix=r's*:?', prefix=r'\b'), Name), (r'\s*[\w.-]+\s*:', Name), # Comments (r'#.*\n', Comment), # Default (r'[^#]', Text), (r'.*\n', Text) ] } pygments-2.11.2/pygments/lexers/scdoc.py0000644000175000017500000000430414165547207020154 0ustar carstencarsten""" pygments.lexers.scdoc ~~~~~~~~~~~~~~~~~~~~~ Lexer for scdoc, a simple man page generator. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, \ using, this from pygments.token import Text, Comment, Keyword, String, \ Generic __all__ = ['ScdocLexer'] class ScdocLexer(RegexLexer): """ `scdoc` is a simple man page generator for POSIX systems written in C99. https://git.sr.ht/~sircmpwn/scdoc .. versionadded:: 2.5 """ name = 'scdoc' aliases = ['scdoc', 'scd'] filenames = ['*.scd', '*.scdoc'] flags = re.MULTILINE tokens = { 'root': [ # comment (r'^(;.+\n)', bygroups(Comment)), # heading with pound prefix (r'^(#)([^#].+\n)', bygroups(Generic.Heading, Text)), (r'^(#{2})(.+\n)', bygroups(Generic.Subheading, Text)), # bulleted lists (r'^(\s*)([*-])(\s)(.+\n)', bygroups(Text, Keyword, Text, using(this, state='inline'))), # numbered lists (r'^(\s*)(\.+\.)( .+\n)', bygroups(Text, Keyword, using(this, state='inline'))), # quote (r'^(\s*>\s)(.+\n)', bygroups(Keyword, Generic.Emph)), # text block (r'^(```\n)([\w\W]*?)(^```$)', bygroups(String, Text, String)), include('inline'), ], 'inline': [ # escape (r'\\.', Text), # underlines (r'(\s)(_[^_]+_)(\W|\n)', bygroups(Text, Generic.Emph, Text)), # bold (r'(\s)(\*[^*]+\*)(\W|\n)', bygroups(Text, Generic.Strong, Text)), # inline code (r'`[^`]+`', String.Backtick), # general text, must come last! 
(r'[^\\\s]+', Text), (r'.', Text), ], } def analyse_text(text): """This is very similar to markdown, save for the escape characters needed for * and _.""" result = 0 if '\\*' in text: result += 0.01 if '\\_' in text: result += 0.01 return result pygments-2.11.2/pygments/lexers/savi.py0000644000175000017500000001036414165547207020026 0ustar carstencarsten""" pygments.lexers.savi ~~~~~~~~~~~~~~~~~~~~ Lexer for Savi. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups, include from pygments.token import \ Whitespace, Keyword, Name, String, Number, \ Operator, Punctuation, Comment, Generic, Error __all__ = ['SaviLexer'] # The canonical version of this file can be found in the following repository, # where it is kept in sync with any language changes, as well as the other # pygments-like lexers that are maintained for use with other tools: # - https://github.com/savi-lang/savi/blob/main/tooling/pygments/lexers/savi.py # # If you're changing this file in the pygments repository, please ensure that # any changes you make are also propagated to the official Savi repository, # in order to avoid accidental clobbering of your changes later when an update # from the Savi repository flows forward into the pygments repository. # # If you're changing this file in the Savi repository, please ensure that # any changes you make are also reflected in the other pygments-like lexers # (rouge, vscode, etc) so that all of the lexers can be kept cleanly in sync. class SaviLexer(RegexLexer): """ For `Savi `_ source code. .. versionadded: 2.10 """ name = 'Savi' aliases = ['savi'] filenames = ['*.savi'] tokens = { "root": [ # Line Comment (r'//.*?$', Comment.Single), # Doc Comment (r'::.*?$', Comment.Single), # Capability Operator (r'(\')(\w+)(?=[^\'])', bygroups(Operator, Name)), # Double-Quote String (r'\w?"', String.Double, "string.double"), # Single-Char String (r"'", String.Char, "string.char"), # Class (or other type) (r'([_A-Z]\w*)', Name.Class), # Declare (r'^([ \t]*)(:\w+)', bygroups(Whitespace, Name.Tag), "decl"), # Error-Raising Calls/Names (r'((\w+|\+|\-|\*)\!)', Generic.Deleted), # Numeric Values (r'\b\d([\d_]*(\.[\d_]+)?)\b', Number), # Hex Numeric Values (r'\b0x([0-9a-fA-F_]+)\b', Number.Hex), # Binary Numeric Values (r'\b0b([01_]+)\b', Number.Bin), # Function Call (with braces) (r'\w+(?=\()', Name.Function), # Function Call (with receiver) (r'(\.)(\s*)(\w+)', bygroups(Punctuation, Whitespace, Name.Function)), # Function Call (with self receiver) (r'(@)(\w+)', bygroups(Punctuation, Name.Function)), # Parenthesis (r'\(', Punctuation, "root"), (r'\)', Punctuation, "#pop"), # Brace (r'\{', Punctuation, "root"), (r'\}', Punctuation, "#pop"), # Bracket (r'\[', Punctuation, "root"), (r'(\])(\!)', bygroups(Punctuation, Generic.Deleted), "#pop"), (r'\]', Punctuation, "#pop"), # Punctuation (r'[,;:\.@]', Punctuation), # Piping Operators (r'(\|\>)', Operator), # Branching Operators (r'(\&\&|\|\||\?\?|\&\?|\|\?|\.\?)', Operator), # Comparison Operators (r'(\<\=\>|\=\~|\=\=|\<\=|\>\=|\<|\>)', Operator), # Arithmetic Operators (r'(\+|\-|\/|\*|\%)', Operator), # Assignment Operators (r'(\=)', Operator), # Other Operators (r'(\!|\<\<|\<|\&|\|)', Operator), # Identifiers (r'\b\w+\b', Name), # Whitespace (r'[ \t\r]+\n*|\n+', Whitespace), ], # Declare (nested rules) "decl": [ (r'\b[a-z_]\w*\b(?!\!)', Keyword.Declaration), (r':', Punctuation, "#pop"), (r'\n', Whitespace, "#pop"), include("root"), ], # Double-Quote 
String (nested rules) "string.double": [ (r'\\u[0-9a-fA-F]{4}', String.Escape), (r'\\x[0-9a-fA-F]{2}', String.Escape), (r'\\[bfnrt\\\']', String.Escape), (r'\\"', String.Escape), (r'"', String.Double, "#pop"), (r'[^\\"]+', String.Double), (r'.', Error), ], # Single-Char String (nested rules) "string.char": [ (r'\\u[0-9a-fA-F]{4}', String.Escape), (r'\\x[0-9a-fA-F]{2}', String.Escape), (r'\\[bfnrt\\\']', String.Escape), (r"\\'", String.Escape), (r"'", String.Char, "#pop"), (r"[^\\']+", String.Char), (r'.', Error), ], } pygments-2.11.2/pygments/lexers/modeling.py0000644000175000017500000003211014165547207020653 0ustar carstencarsten""" pygments.lexers.modeling ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for modeling languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace from pygments.lexers.html import HtmlLexer from pygments.lexers import _stan_builtins __all__ = ['ModelicaLexer', 'BugsLexer', 'JagsLexer', 'StanLexer'] class ModelicaLexer(RegexLexer): """ For `Modelica `_ source code. .. versionadded:: 1.1 """ name = 'Modelica' aliases = ['modelica'] filenames = ['*.mo'] mimetypes = ['text/x-modelica'] flags = re.DOTALL | re.MULTILINE _name = r"(?:'(?:[^\\']|\\.)+'|[a-zA-Z_]\w*)" tokens = { 'whitespace': [ (r'[\s\ufeff]+', Text), (r'//[^\n]*\n?', Comment.Single), (r'/\*.*?\*/', Comment.Multiline) ], 'root': [ include('whitespace'), (r'"', String.Double, 'string'), (r'[()\[\]{},;]+', Punctuation), (r'\.?[*^/+-]|\.|<>|[<>:=]=?', Operator), (r'\d+(\.?\d*[eE][-+]?\d+|\.\d*)', Number.Float), (r'\d+', Number.Integer), (r'(abs|acos|actualStream|array|asin|assert|AssertionLevel|atan|' r'atan2|backSample|Boolean|cardinality|cat|ceil|change|Clock|' r'Connections|cos|cosh|cross|delay|diagonal|div|edge|exp|' r'ExternalObject|fill|floor|getInstanceName|hold|homotopy|' r'identity|inStream|integer|Integer|interval|inverse|isPresent|' r'linspace|log|log10|matrix|max|min|mod|ndims|noClock|noEvent|' r'ones|outerProduct|pre|previous|product|Real|reinit|rem|rooted|' r'sample|scalar|semiLinear|shiftSample|sign|sin|sinh|size|skew|' r'smooth|spatialDistribution|sqrt|StateSelect|String|subSample|' r'sum|superSample|symmetric|tan|tanh|terminal|terminate|time|' r'transpose|vector|zeros)\b', Name.Builtin), (r'(algorithm|annotation|break|connect|constant|constrainedby|der|' r'discrete|each|else|elseif|elsewhen|encapsulated|enumeration|' r'equation|exit|expandable|extends|external|firstTick|final|flow|for|if|' r'import|impure|in|initial|inner|input|interval|loop|nondiscrete|outer|' r'output|parameter|partial|protected|public|pure|redeclare|' r'replaceable|return|stream|then|when|while)\b', Keyword.Reserved), (r'(and|not|or)\b', Operator.Word), (r'(block|class|connector|end|function|model|operator|package|' r'record|type)\b', Keyword.Reserved, 'class'), (r'(false|true)\b', Keyword.Constant), (r'within\b', Keyword.Reserved, 'package-prefix'), (_name, Name) ], 'class': [ include('whitespace'), (r'(function|record)\b', Keyword.Reserved), (r'(if|for|when|while)\b', Keyword.Reserved, '#pop'), (_name, Name.Class, '#pop'), default('#pop') ], 'package-prefix': [ include('whitespace'), (_name, Name.Namespace, '#pop'), default('#pop') ], 'string': [ (r'"', String.Double, '#pop'), (r'\\[\'"?\\abfnrtv]', String.Escape), (r'(?i)<\s*html\s*>([^\\"]|\\.)+?(<\s*/\s*html\s*>|(?="))', 
using(HtmlLexer)), (r'<|\\?[^"\\<]+', String.Double) ] } class BugsLexer(RegexLexer): """ Pygments Lexer for `OpenBugs `_ and WinBugs models. .. versionadded:: 1.6 """ name = 'BUGS' aliases = ['bugs', 'winbugs', 'openbugs'] filenames = ['*.bug'] _FUNCTIONS = ( # Scalar functions 'abs', 'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctanh', 'cloglog', 'cos', 'cosh', 'cumulative', 'cut', 'density', 'deviance', 'equals', 'expr', 'gammap', 'ilogit', 'icloglog', 'integral', 'log', 'logfact', 'loggam', 'logit', 'max', 'min', 'phi', 'post.p.value', 'pow', 'prior.p.value', 'probit', 'replicate.post', 'replicate.prior', 'round', 'sin', 'sinh', 'solution', 'sqrt', 'step', 'tan', 'tanh', 'trunc', # Vector functions 'inprod', 'interp.lin', 'inverse', 'logdet', 'mean', 'eigen.vals', 'ode', 'prod', 'p.valueM', 'rank', 'ranked', 'replicate.postM', 'sd', 'sort', 'sum', # Special 'D', 'I', 'F', 'T', 'C') """ OpenBUGS built-in functions From http://www.openbugs.info/Manuals/ModelSpecification.html#ContentsAII This also includes - T, C, I : Truncation and censoring. ``T`` and ``C`` are in OpenBUGS. ``I`` in WinBUGS. - D : ODE - F : Functional http://www.openbugs.info/Examples/Functionals.html """ _DISTRIBUTIONS = ('dbern', 'dbin', 'dcat', 'dnegbin', 'dpois', 'dhyper', 'dbeta', 'dchisqr', 'ddexp', 'dexp', 'dflat', 'dgamma', 'dgev', 'df', 'dggamma', 'dgpar', 'dloglik', 'dlnorm', 'dlogis', 'dnorm', 'dpar', 'dt', 'dunif', 'dweib', 'dmulti', 'ddirch', 'dmnorm', 'dmt', 'dwish') """ OpenBUGS built-in distributions Functions from http://www.openbugs.info/Manuals/ModelSpecification.html#ContentsAI """ tokens = { 'whitespace': [ (r"\s+", Text), ], 'comments': [ # Comments (r'#.*$', Comment.Single), ], 'root': [ # Comments include('comments'), include('whitespace'), # Block start (r'(model)(\s+)(\{)', bygroups(Keyword.Namespace, Text, Punctuation)), # Reserved Words (r'(for|in)(?![\w.])', Keyword.Reserved), # Built-in Functions (r'(%s)(?=\s*\()' % r'|'.join(_FUNCTIONS + _DISTRIBUTIONS), Name.Builtin), # Regular variable names (r'[A-Za-z][\w.]*', Name), # Number Literals (r'[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?', Number), # Punctuation (r'\[|\]|\(|\)|:|,|;', Punctuation), # Assignment operators # SLexer makes these tokens Operators. (r'<-|~', Operator), # Infix and prefix operators (r'\+|-|\*|/', Operator), # Block (r'[{}]', Punctuation), ] } def analyse_text(text): if re.search(r"^\s*model\s*{", text, re.M): return 0.7 else: return 0.0 class JagsLexer(RegexLexer): """ Pygments Lexer for JAGS. .. 
versionadded:: 1.6
    """

    name = 'JAGS'
    aliases = ['jags']
    filenames = ['*.jag', '*.bug']

    # JAGS
    _FUNCTIONS = (
        'abs', 'arccos', 'arccosh', 'arcsin', 'arcsinh', 'arctan', 'arctanh',
        'cos', 'cosh', 'cloglog', 'equals', 'exp', 'icloglog', 'ifelse',
        'ilogit', 'log', 'logfact', 'loggam', 'logit', 'phi', 'pow', 'probit',
        'round', 'sin', 'sinh', 'sqrt', 'step', 'tan', 'tanh', 'trunc',
        'inprod', 'interp.lin', 'logdet', 'max', 'mean', 'min', 'prod',
        'sum', 'sd', 'inverse', 'rank', 'sort', 't', 'acos', 'acosh', 'asin',
        'asinh', 'atan',
        # Truncation/Censoring (unclear whether these should be included)
        'T', 'I')
    # Distributions with density, probability and quantile functions.
    # These are the base names; the d/p/q prefix is added by the template
    # below, so e.g. 'chisqr' yields dchisqr, pchisqr and qchisqr.
    _DISTRIBUTIONS = tuple('[dpq]%s' % x for x in
                           ('bern', 'beta', 'chisqr', 'dexp', 'exp', 'f',
                            'gamma', 'gen.gamma', 'logis', 'lnorm', 'negbin',
                            'nchisqr', 'norm', 'par', 'pois', 'weib'))
    # Other distributions without density and probability
    _OTHER_DISTRIBUTIONS = (
        'dt', 'dunif', 'dbetabin', 'dbern', 'dbin', 'dcat', 'dhyper',
        'ddirch', 'dmnorm', 'dwish', 'dmt', 'dmulti', 'dbinom', 'dchisq',
        'dnbinom', 'dweibull', 'ddirich')

    tokens = {
        'whitespace': [
            (r"\s+", Text),
        ],
        'names': [
            # Regular variable names
            (r'[a-zA-Z][\w.]*\b', Name),
        ],
        'comments': [
            # do not use stateful comments
            (r'(?s)/\*.*?\*/', Comment.Multiline),
            # Comments
            (r'#.*$', Comment.Single),
        ],
        'root': [
            # Comments
            include('comments'),
            include('whitespace'),
            # Block start
            (r'(model|data)(\s+)(\{)',
             bygroups(Keyword.Namespace, Text, Punctuation)),
            (r'var(?![\w.])', Keyword.Declaration),
            # Reserved Words
            (r'(for|in)(?![\w.])', Keyword.Reserved),
            # Builtins
            # Need to use lookahead because . is a valid char
            (r'(%s)(?=\s*\()' % r'|'.join(_FUNCTIONS + _DISTRIBUTIONS +
                                          _OTHER_DISTRIBUTIONS),
             Name.Builtin),
            # Names
            include('names'),
            # Number Literals
            (r'[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?', Number),
            (r'\[|\]|\(|\)|:|,|;', Punctuation),
            # Assignment operators
            (r'<-|~', Operator),
            # Infix and prefix operators
            # JAGS includes many more than OpenBUGS
            (r'\+|-|\*|\/|\|\||&&|[<>=]=?|\^|%.*?%', Operator),
            (r'[{}]', Punctuation),
        ]
    }

    def analyse_text(text):
        if re.search(r'^\s*model\s*\{', text, re.M):
            if re.search(r'^\s*data\s*\{', text, re.M):
                return 0.9
            elif re.search(r'^\s*var', text, re.M):
                return 0.9
            else:
                return 0.3
        else:
            return 0


class StanLexer(RegexLexer):
    """Pygments Lexer for Stan models.

    The Stan modeling language is specified in the *Stan Modeling Language
    User's Guide and Reference Manual, v2.17.0*, `pdf `__.

    .. versionadded:: 1.6
    """

    name = 'Stan'
    aliases = ['stan']
    filenames = ['*.stan']

    tokens = {
        'whitespace': [
            (r"\s+", Text),
        ],
        'comments': [
            (r'(?s)/\*.*?\*/', Comment.Multiline),
            # Comments
            (r'(//|#).*$', Comment.Single),
        ],
        'root': [
            # Stan is more restrictive on strings than this regex
            (r'"[^"]*"', String),
            # Comments
            include('comments'),
            # block start
            include('whitespace'),
            # Block start
            (r'(%s)(\s*)(\{)' %
             r'|'.join(('functions', 'data', r'transformed\s+?data',
                        'parameters', r'transformed\s+parameters',
                        'model', r'generated\s+quantities')),
             bygroups(Keyword.Namespace, Text, Punctuation)),
            # target keyword
            (r'target\s*\+=', Keyword),
            # Reserved Words
            (r'(%s)\b' % r'|'.join(_stan_builtins.KEYWORDS), Keyword),
            # Truncation
            (r'T(?=\s*\[)', Keyword),
            # Data types
            (r'(%s)\b' % r'|'.join(_stan_builtins.TYPES), Keyword.Type),
            # < should be punctuation, but elsewhere I can't tell if it is in
            # a range constraint
            (r'(<)(\s*)(upper|lower)(\s*)(=)',
             bygroups(Operator, Whitespace, Keyword, Whitespace, Punctuation)),
            (r'(,)(\s*)(upper)(\s*)(=)',
             bygroups(Punctuation, Whitespace, Keyword, Whitespace,
                      Punctuation)),
            # Punctuation
            (r"[;,\[\]()]", Punctuation),
            # Builtin
            (r'(%s)(?=\s*\()' % '|'.join(_stan_builtins.FUNCTIONS),
             Name.Builtin),
            (r'(~)(\s*)(%s)(?=\s*\()' % '|'.join(_stan_builtins.DISTRIBUTIONS),
             bygroups(Operator, Whitespace, Name.Builtin)),
            # Special names ending in __, like lp__
            (r'[A-Za-z]\w*__\b', Name.Builtin.Pseudo),
            (r'(%s)\b' % r'|'.join(_stan_builtins.RESERVED), Keyword.Reserved),
            # user-defined functions
            (r'[A-Za-z]\w*(?=\s*\()', Name.Function),
            # Regular variable names
            (r'[A-Za-z]\w*\b', Name),
            # Real Literals
            (r'[0-9]+(\.[0-9]*)?([eE][+-]?[0-9]+)?', Number.Float),
            (r'\.[0-9]+([eE][+-]?[0-9]+)?', Number.Float),
            # Integer Literals
            (r'[0-9]+', Number.Integer),
            # Assignment operators
            (r'<-|(?:\+|-|\.?/|\.?\*|=)?=|~', Operator),
            # Infix, prefix and postfix operators (and = )
            (r"\+|-|\.?\*|\.?/|\\|'|\^|!=?|<=?|>=?|\|\||&&|%|\?|:", Operator),
            # Block delimiters
            (r'[{}]', Punctuation),
            # Distribution |
            (r'\|', Punctuation)
        ]
    }

    def analyse_text(text):
        if re.search(r'^\s*parameters\s*\{', text, re.M):
            return 1.0
        else:
            return 0.0
pygments-2.11.2/pygments/lexers/_mysql_builtins.py0000644000175000017500000005765714165547207022311 0ustar carstencarsten"""
    pygments.lexers._mysql_builtins
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    Self-updating data files for the MySQL lexer.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""


MYSQL_CONSTANTS = (
    'false',
    'null',
    'true',
    'unknown',
)


# At this time, no easily-parsed, definitive list of data types
# has been found in the MySQL source code or documentation. (The
# `sql/sql_yacc.yy` file is definitive but is difficult to parse.)
# Therefore these types are currently maintained manually.
#
# Some words in this list -- like "long", "national", "precision",
# and "varying" -- appear to only occur in combination with other
# data type keywords. Therefore they are included as separate words
# even though they do not naturally occur in syntax separately.
#
# This list is also used to strip data types out of the list of
# MySQL keywords, which is automatically updated later in the file.
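#
# A doctest-style sanity check of that interplay (a sketch, assuming a
# pygments checkout is importable; this session is illustrative and is not
# part of the generated data):
#
#     >>> from pygments.lexers._mysql_builtins import (
#     ...     MYSQL_DATATYPES, MYSQL_KEYWORDS)
#     >>> sorted(set(MYSQL_DATATYPES) & set(MYSQL_KEYWORDS))
#     []
#
# The empty intersection holds because update_myself() below subtracts
# MYSQL_DATATYPES from the auto-generated keyword set before writing it out.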
# MYSQL_DATATYPES = ( # Numeric data types 'bigint', 'bit', 'bool', 'boolean', 'dec', 'decimal', 'double', 'fixed', 'float', 'float4', 'float8', 'int', 'int1', 'int2', 'int3', 'int4', 'int8', 'integer', 'mediumint', 'middleint', 'numeric', 'precision', 'real', 'serial', 'smallint', 'tinyint', # Date and time data types 'date', 'datetime', 'time', 'timestamp', 'year', # String data types 'binary', 'blob', 'char', 'enum', 'long', 'longblob', 'longtext', 'mediumblob', 'mediumtext', 'national', 'nchar', 'nvarchar', 'set', 'text', 'tinyblob', 'tinytext', 'varbinary', 'varchar', 'varcharacter', 'varying', # Spatial data types 'geometry', 'geometrycollection', 'linestring', 'multilinestring', 'multipoint', 'multipolygon', 'point', 'polygon', # JSON data types 'json', ) # Everything below this line is auto-generated from the MySQL source code. # Run this file in Python and it will update itself. # ----------------------------------------------------------------------------- MYSQL_FUNCTIONS = ( 'abs', 'acos', 'adddate', 'addtime', 'aes_decrypt', 'aes_encrypt', 'any_value', 'asin', 'atan', 'atan2', 'benchmark', 'bin', 'bin_to_uuid', 'bit_and', 'bit_count', 'bit_length', 'bit_or', 'bit_xor', 'can_access_column', 'can_access_database', 'can_access_event', 'can_access_resource_group', 'can_access_routine', 'can_access_table', 'can_access_trigger', 'can_access_view', 'cast', 'ceil', 'ceiling', 'char_length', 'character_length', 'coercibility', 'compress', 'concat', 'concat_ws', 'connection_id', 'conv', 'convert_cpu_id_mask', 'convert_interval_to_user_interval', 'convert_tz', 'cos', 'cot', 'count', 'crc32', 'curdate', 'current_role', 'curtime', 'date_add', 'date_format', 'date_sub', 'datediff', 'dayname', 'dayofmonth', 'dayofweek', 'dayofyear', 'degrees', 'elt', 'exp', 'export_set', 'extract', 'extractvalue', 'field', 'find_in_set', 'floor', 'format_bytes', 'format_pico_time', 'found_rows', 'from_base64', 'from_days', 'from_unixtime', 'get_dd_column_privileges', 'get_dd_create_options', 'get_dd_index_private_data', 'get_dd_index_sub_part_length', 'get_dd_property_key_value', 'get_dd_tablespace_private_data', 'get_lock', 'greatest', 'group_concat', 'gtid_subset', 'gtid_subtract', 'hex', 'icu_version', 'ifnull', 'inet6_aton', 'inet6_ntoa', 'inet_aton', 'inet_ntoa', 'instr', 'internal_auto_increment', 'internal_avg_row_length', 'internal_check_time', 'internal_checksum', 'internal_data_free', 'internal_data_length', 'internal_dd_char_length', 'internal_get_comment_or_error', 'internal_get_dd_column_extra', 'internal_get_enabled_role_json', 'internal_get_hostname', 'internal_get_mandatory_roles_json', 'internal_get_partition_nodegroup', 'internal_get_username', 'internal_get_view_warning_or_error', 'internal_index_column_cardinality', 'internal_index_length', 'internal_is_enabled_role', 'internal_is_mandatory_role', 'internal_keys_disabled', 'internal_max_data_length', 'internal_table_rows', 'internal_tablespace_autoextend_size', 'internal_tablespace_data_free', 'internal_tablespace_extent_size', 'internal_tablespace_extra', 'internal_tablespace_free_extents', 'internal_tablespace_id', 'internal_tablespace_initial_size', 'internal_tablespace_logfile_group_name', 'internal_tablespace_logfile_group_number', 'internal_tablespace_maximum_size', 'internal_tablespace_row_format', 'internal_tablespace_status', 'internal_tablespace_total_extents', 'internal_tablespace_type', 'internal_tablespace_version', 'internal_update_time', 'is_free_lock', 'is_ipv4', 'is_ipv4_compat', 'is_ipv4_mapped', 'is_ipv6', 
'is_used_lock', 'is_uuid', 'is_visible_dd_object', 'isnull', 'json_array', 'json_array_append', 'json_array_insert', 'json_arrayagg', 'json_contains', 'json_contains_path', 'json_depth', 'json_extract', 'json_insert', 'json_keys', 'json_length', 'json_merge', 'json_merge_patch', 'json_merge_preserve', 'json_object', 'json_objectagg', 'json_overlaps', 'json_pretty', 'json_quote', 'json_remove', 'json_replace', 'json_schema_valid', 'json_schema_validation_report', 'json_search', 'json_set', 'json_storage_free', 'json_storage_size', 'json_type', 'json_unquote', 'json_valid', 'last_day', 'last_insert_id', 'lcase', 'least', 'length', 'like_range_max', 'like_range_min', 'ln', 'load_file', 'locate', 'log', 'log10', 'log2', 'lower', 'lpad', 'ltrim', 'make_set', 'makedate', 'maketime', 'master_pos_wait', 'max', 'mbrcontains', 'mbrcoveredby', 'mbrcovers', 'mbrdisjoint', 'mbrequals', 'mbrintersects', 'mbroverlaps', 'mbrtouches', 'mbrwithin', 'md5', 'mid', 'min', 'monthname', 'name_const', 'now', 'nullif', 'oct', 'octet_length', 'ord', 'period_add', 'period_diff', 'pi', 'position', 'pow', 'power', 'ps_current_thread_id', 'ps_thread_id', 'quote', 'radians', 'rand', 'random_bytes', 'regexp_instr', 'regexp_like', 'regexp_replace', 'regexp_substr', 'release_all_locks', 'release_lock', 'remove_dd_property_key', 'reverse', 'roles_graphml', 'round', 'rpad', 'rtrim', 'sec_to_time', 'session_user', 'sha', 'sha1', 'sha2', 'sign', 'sin', 'sleep', 'soundex', 'space', 'sqrt', 'st_area', 'st_asbinary', 'st_asgeojson', 'st_astext', 'st_aswkb', 'st_aswkt', 'st_buffer', 'st_buffer_strategy', 'st_centroid', 'st_contains', 'st_convexhull', 'st_crosses', 'st_difference', 'st_dimension', 'st_disjoint', 'st_distance', 'st_distance_sphere', 'st_endpoint', 'st_envelope', 'st_equals', 'st_exteriorring', 'st_geohash', 'st_geomcollfromtext', 'st_geomcollfromtxt', 'st_geomcollfromwkb', 'st_geometrycollectionfromtext', 'st_geometrycollectionfromwkb', 'st_geometryfromtext', 'st_geometryfromwkb', 'st_geometryn', 'st_geometrytype', 'st_geomfromgeojson', 'st_geomfromtext', 'st_geomfromwkb', 'st_interiorringn', 'st_intersection', 'st_intersects', 'st_isclosed', 'st_isempty', 'st_issimple', 'st_isvalid', 'st_latfromgeohash', 'st_latitude', 'st_length', 'st_linefromtext', 'st_linefromwkb', 'st_linestringfromtext', 'st_linestringfromwkb', 'st_longfromgeohash', 'st_longitude', 'st_makeenvelope', 'st_mlinefromtext', 'st_mlinefromwkb', 'st_mpointfromtext', 'st_mpointfromwkb', 'st_mpolyfromtext', 'st_mpolyfromwkb', 'st_multilinestringfromtext', 'st_multilinestringfromwkb', 'st_multipointfromtext', 'st_multipointfromwkb', 'st_multipolygonfromtext', 'st_multipolygonfromwkb', 'st_numgeometries', 'st_numinteriorring', 'st_numinteriorrings', 'st_numpoints', 'st_overlaps', 'st_pointfromgeohash', 'st_pointfromtext', 'st_pointfromwkb', 'st_pointn', 'st_polyfromtext', 'st_polyfromwkb', 'st_polygonfromtext', 'st_polygonfromwkb', 'st_simplify', 'st_srid', 'st_startpoint', 'st_swapxy', 'st_symdifference', 'st_touches', 'st_transform', 'st_union', 'st_validate', 'st_within', 'st_x', 'st_y', 'statement_digest', 'statement_digest_text', 'std', 'stddev', 'stddev_pop', 'stddev_samp', 'str_to_date', 'strcmp', 'subdate', 'substr', 'substring', 'substring_index', 'subtime', 'sum', 'sysdate', 'system_user', 'tan', 'time_format', 'time_to_sec', 'timediff', 'to_base64', 'to_days', 'to_seconds', 'trim', 'ucase', 'uncompress', 'uncompressed_length', 'unhex', 'unix_timestamp', 'updatexml', 'upper', 'uuid', 'uuid_short', 'uuid_to_bin', 'validate_password_strength', 
'var_pop', 'var_samp', 'variance', 'version', 'wait_for_executed_gtid_set', 'wait_until_sql_thread_after_gtids', 'weekday', 'weekofyear', 'yearweek', ) MYSQL_OPTIMIZER_HINTS = ( 'bka', 'bnl', 'dupsweedout', 'firstmatch', 'group_index', 'hash_join', 'index', 'index_merge', 'intoexists', 'join_fixed_order', 'join_index', 'join_order', 'join_prefix', 'join_suffix', 'loosescan', 'materialization', 'max_execution_time', 'merge', 'mrr', 'no_bka', 'no_bnl', 'no_group_index', 'no_hash_join', 'no_icp', 'no_index', 'no_index_merge', 'no_join_index', 'no_merge', 'no_mrr', 'no_order_index', 'no_range_optimization', 'no_semijoin', 'no_skip_scan', 'order_index', 'qb_name', 'resource_group', 'semijoin', 'set_var', 'skip_scan', 'subquery', ) MYSQL_KEYWORDS = ( 'accessible', 'account', 'action', 'active', 'add', 'admin', 'after', 'against', 'aggregate', 'algorithm', 'all', 'alter', 'always', 'analyze', 'and', 'any', 'array', 'as', 'asc', 'ascii', 'asensitive', 'at', 'attribute', 'auto_increment', 'autoextend_size', 'avg', 'avg_row_length', 'backup', 'before', 'begin', 'between', 'binlog', 'block', 'both', 'btree', 'buckets', 'by', 'byte', 'cache', 'call', 'cascade', 'cascaded', 'case', 'catalog_name', 'chain', 'change', 'changed', 'channel', 'character', 'charset', 'check', 'checksum', 'cipher', 'class_origin', 'client', 'clone', 'close', 'coalesce', 'code', 'collate', 'collation', 'column', 'column_format', 'column_name', 'columns', 'comment', 'commit', 'committed', 'compact', 'completion', 'component', 'compressed', 'compression', 'concurrent', 'condition', 'connection', 'consistent', 'constraint', 'constraint_catalog', 'constraint_name', 'constraint_schema', 'contains', 'context', 'continue', 'convert', 'cpu', 'create', 'cross', 'cube', 'cume_dist', 'current', 'current_date', 'current_time', 'current_timestamp', 'current_user', 'cursor', 'cursor_name', 'data', 'database', 'databases', 'datafile', 'day', 'day_hour', 'day_microsecond', 'day_minute', 'day_second', 'deallocate', 'declare', 'default', 'default_auth', 'definer', 'definition', 'delay_key_write', 'delayed', 'delete', 'dense_rank', 'desc', 'describe', 'description', 'deterministic', 'diagnostics', 'directory', 'disable', 'discard', 'disk', 'distinct', 'distinctrow', 'div', 'do', 'drop', 'dual', 'dumpfile', 'duplicate', 'dynamic', 'each', 'else', 'elseif', 'empty', 'enable', 'enclosed', 'encryption', 'end', 'ends', 'enforced', 'engine', 'engine_attribute', 'engines', 'error', 'errors', 'escape', 'escaped', 'event', 'events', 'every', 'except', 'exchange', 'exclude', 'execute', 'exists', 'exit', 'expansion', 'expire', 'explain', 'export', 'extended', 'extent_size', 'failed_login_attempts', 'false', 'fast', 'faults', 'fetch', 'fields', 'file', 'file_block_size', 'filter', 'first', 'first_value', 'flush', 'following', 'follows', 'for', 'force', 'foreign', 'format', 'found', 'from', 'full', 'fulltext', 'function', 'general', 'generated', 'geomcollection', 'get', 'get_format', 'get_master_public_key', 'global', 'grant', 'grants', 'group', 'group_replication', 'grouping', 'groups', 'handler', 'hash', 'having', 'help', 'high_priority', 'histogram', 'history', 'host', 'hosts', 'hour', 'hour_microsecond', 'hour_minute', 'hour_second', 'identified', 'if', 'ignore', 'ignore_server_ids', 'import', 'in', 'inactive', 'index', 'indexes', 'infile', 'initial_size', 'inner', 'inout', 'insensitive', 'insert', 'insert_method', 'install', 'instance', 'interval', 'into', 'invisible', 'invoker', 'io', 'io_after_gtids', 'io_before_gtids', 'io_thread', 'ipc', 'is', 
'isolation', 'issuer', 'iterate', 'join', 'json_table', 'json_value', 'key', 'key_block_size', 'keys', 'kill', 'lag', 'language', 'last', 'last_value', 'lateral', 'lead', 'leading', 'leave', 'leaves', 'left', 'less', 'level', 'like', 'limit', 'linear', 'lines', 'list', 'load', 'local', 'localtime', 'localtimestamp', 'lock', 'locked', 'locks', 'logfile', 'logs', 'loop', 'low_priority', 'master', 'master_auto_position', 'master_bind', 'master_compression_algorithms', 'master_connect_retry', 'master_delay', 'master_heartbeat_period', 'master_host', 'master_log_file', 'master_log_pos', 'master_password', 'master_port', 'master_public_key_path', 'master_retry_count', 'master_server_id', 'master_ssl', 'master_ssl_ca', 'master_ssl_capath', 'master_ssl_cert', 'master_ssl_cipher', 'master_ssl_crl', 'master_ssl_crlpath', 'master_ssl_key', 'master_ssl_verify_server_cert', 'master_tls_ciphersuites', 'master_tls_version', 'master_user', 'master_zstd_compression_level', 'match', 'max_connections_per_hour', 'max_queries_per_hour', 'max_rows', 'max_size', 'max_updates_per_hour', 'max_user_connections', 'maxvalue', 'medium', 'member', 'memory', 'merge', 'message_text', 'microsecond', 'migrate', 'min_rows', 'minute', 'minute_microsecond', 'minute_second', 'mod', 'mode', 'modifies', 'modify', 'month', 'mutex', 'mysql_errno', 'name', 'names', 'natural', 'ndb', 'ndbcluster', 'nested', 'network_namespace', 'never', 'new', 'next', 'no', 'no_wait', 'no_write_to_binlog', 'nodegroup', 'none', 'not', 'nowait', 'nth_value', 'ntile', 'null', 'nulls', 'number', 'of', 'off', 'offset', 'oj', 'old', 'on', 'one', 'only', 'open', 'optimize', 'optimizer_costs', 'option', 'optional', 'optionally', 'options', 'or', 'order', 'ordinality', 'organization', 'others', 'out', 'outer', 'outfile', 'over', 'owner', 'pack_keys', 'page', 'parser', 'partial', 'partition', 'partitioning', 'partitions', 'password', 'password_lock_time', 'path', 'percent_rank', 'persist', 'persist_only', 'phase', 'plugin', 'plugin_dir', 'plugins', 'port', 'precedes', 'preceding', 'prepare', 'preserve', 'prev', 'primary', 'privilege_checks_user', 'privileges', 'procedure', 'process', 'processlist', 'profile', 'profiles', 'proxy', 'purge', 'quarter', 'query', 'quick', 'random', 'range', 'rank', 'read', 'read_only', 'read_write', 'reads', 'rebuild', 'recover', 'recursive', 'redo_buffer_size', 'redundant', 'reference', 'references', 'regexp', 'relay', 'relay_log_file', 'relay_log_pos', 'relay_thread', 'relaylog', 'release', 'reload', 'remove', 'rename', 'reorganize', 'repair', 'repeat', 'repeatable', 'replace', 'replicate_do_db', 'replicate_do_table', 'replicate_ignore_db', 'replicate_ignore_table', 'replicate_rewrite_db', 'replicate_wild_do_table', 'replicate_wild_ignore_table', 'replication', 'require', 'require_row_format', 'require_table_primary_key_check', 'reset', 'resignal', 'resource', 'respect', 'restart', 'restore', 'restrict', 'resume', 'retain', 'return', 'returned_sqlstate', 'returning', 'returns', 'reuse', 'reverse', 'revoke', 'right', 'rlike', 'role', 'rollback', 'rollup', 'rotate', 'routine', 'row', 'row_count', 'row_format', 'row_number', 'rows', 'rtree', 'savepoint', 'schedule', 'schema', 'schema_name', 'schemas', 'second', 'second_microsecond', 'secondary', 'secondary_engine', 'secondary_engine_attribute', 'secondary_load', 'secondary_unload', 'security', 'select', 'sensitive', 'separator', 'serializable', 'server', 'session', 'share', 'show', 'shutdown', 'signal', 'signed', 'simple', 'skip', 'slave', 'slow', 'snapshot', 'socket', 'some', 
'soname', 'sounds', 'source', 'spatial', 'specific', 'sql', 'sql_after_gtids', 'sql_after_mts_gaps', 'sql_before_gtids', 'sql_big_result', 'sql_buffer_result', 'sql_calc_found_rows', 'sql_no_cache', 'sql_small_result', 'sql_thread', 'sql_tsi_day', 'sql_tsi_hour', 'sql_tsi_minute', 'sql_tsi_month', 'sql_tsi_quarter', 'sql_tsi_second', 'sql_tsi_week', 'sql_tsi_year', 'sqlexception', 'sqlstate', 'sqlwarning', 'srid', 'ssl', 'stacked', 'start', 'starting', 'starts', 'stats_auto_recalc', 'stats_persistent', 'stats_sample_pages', 'status', 'stop', 'storage', 'stored', 'straight_join', 'stream', 'string', 'subclass_origin', 'subject', 'subpartition', 'subpartitions', 'super', 'suspend', 'swaps', 'switches', 'system', 'table', 'table_checksum', 'table_name', 'tables', 'tablespace', 'temporary', 'temptable', 'terminated', 'than', 'then', 'thread_priority', 'ties', 'timestampadd', 'timestampdiff', 'tls', 'to', 'trailing', 'transaction', 'trigger', 'triggers', 'true', 'truncate', 'type', 'types', 'unbounded', 'uncommitted', 'undefined', 'undo', 'undo_buffer_size', 'undofile', 'unicode', 'uninstall', 'union', 'unique', 'unknown', 'unlock', 'unsigned', 'until', 'update', 'upgrade', 'usage', 'use', 'use_frm', 'user', 'user_resources', 'using', 'utc_date', 'utc_time', 'utc_timestamp', 'validation', 'value', 'values', 'variables', 'vcpu', 'view', 'virtual', 'visible', 'wait', 'warnings', 'week', 'weight_string', 'when', 'where', 'while', 'window', 'with', 'without', 'work', 'wrapper', 'write', 'x509', 'xa', 'xid', 'xml', 'xor', 'year_month', 'zerofill', ) if __name__ == '__main__': # pragma: no cover import re from urllib.request import urlopen from pygments.util import format_lines # MySQL source code SOURCE_URL = 'https://github.com/mysql/mysql-server/raw/8.0' LEX_URL = SOURCE_URL + '/sql/lex.h' ITEM_CREATE_URL = SOURCE_URL + '/sql/item_create.cc' def update_myself(): # Pull content from lex.h. lex_file = urlopen(LEX_URL).read().decode('utf8', errors='ignore') keywords = parse_lex_keywords(lex_file) functions = parse_lex_functions(lex_file) optimizer_hints = parse_lex_optimizer_hints(lex_file) # Parse content in item_create.cc. item_create_file = urlopen(ITEM_CREATE_URL).read().decode('utf8', errors='ignore') functions.update(parse_item_create_functions(item_create_file)) # Remove data types from the set of keywords. 
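        # For instance (illustrative values only): if lex.h yielded the set
        # {'select', 'blob', 'int'}, the subtraction below would keep only
        # {'select'}, because 'blob' and 'int' appear in MYSQL_DATATYPES.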
keywords -= set(MYSQL_DATATYPES) update_content('MYSQL_FUNCTIONS', tuple(sorted(functions))) update_content('MYSQL_KEYWORDS', tuple(sorted(keywords))) update_content('MYSQL_OPTIMIZER_HINTS', tuple(sorted(optimizer_hints))) def parse_lex_keywords(f): """Parse keywords in lex.h.""" results = set() for m in re.finditer(r'{SYM(?:_HK)?\("(?P[a-z0-9_]+)",', f, flags=re.I): results.add(m.group('keyword').lower()) if not results: raise ValueError('No keywords found') return results def parse_lex_optimizer_hints(f): """Parse optimizer hints in lex.h.""" results = set() for m in re.finditer(r'{SYM_H\("(?P[a-z0-9_]+)",', f, flags=re.I): results.add(m.group('keyword').lower()) if not results: raise ValueError('No optimizer hints found') return results def parse_lex_functions(f): """Parse MySQL function names from lex.h.""" results = set() for m in re.finditer(r'{SYM_FN?\("(?P[a-z0-9_]+)",', f, flags=re.I): results.add(m.group('function').lower()) if not results: raise ValueError('No lex functions found') return results def parse_item_create_functions(f): """Parse MySQL function names from item_create.cc.""" results = set() for m in re.finditer(r'{"(?P[^"]+?)",\s*SQL_F[^(]+?\(', f, flags=re.I): results.add(m.group('function').lower()) if not results: raise ValueError('No item_create functions found') return results def update_content(field_name, content): """Overwrite this file with content parsed from MySQL's source code.""" with open(__file__) as f: data = f.read() # Line to start/end inserting re_match = re.compile(r'^%s\s*=\s*\($.*?^\s*\)$' % field_name, re.M | re.S) m = re_match.search(data) if not m: raise ValueError('Could not find an existing definition for %s' % field_name) new_block = format_lines(field_name, content) data = data[:m.start()] + new_block + data[m.end():] with open(__file__, 'w', newline='\n') as f: f.write(data) update_myself() pygments-2.11.2/pygments/lexers/markup.py0000644000175000017500000006416314165547207020371 0ustar carstencarsten""" pygments.lexers.markup ~~~~~~~~~~~~~~~~~~~~~~ Lexers for non-HTML markup languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexers.html import XmlLexer from pygments.lexers.javascript import JavascriptLexer from pygments.lexers.css import CssLexer from pygments.lexer import RegexLexer, DelegatingLexer, include, bygroups, \ using, this, do_insertions, default, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Generic, Other from pygments.util import get_bool_opt, ClassNotFound __all__ = ['BBCodeLexer', 'MoinWikiLexer', 'RstLexer', 'TexLexer', 'GroffLexer', 'MozPreprocHashLexer', 'MozPreprocPercentLexer', 'MozPreprocXulLexer', 'MozPreprocJavascriptLexer', 'MozPreprocCssLexer', 'MarkdownLexer', 'TiddlyWiki5Lexer'] class BBCodeLexer(RegexLexer): """ A lexer that highlights BBCode(-like) syntax. .. versionadded:: 0.6 """ name = 'BBCode' aliases = ['bbcode'] mimetypes = ['text/x-bbcode'] tokens = { 'root': [ (r'[^[]+', Text), # tag/end tag begin (r'\[/?\w+', Keyword, 'tag'), # stray bracket (r'\[', Text), ], 'tag': [ (r'\s+', Text), # attribute with value (r'(\w+)(=)("?[^\s"\]]+"?)', bygroups(Name.Attribute, Operator, String)), # tag argument (a la [color=green]) (r'(=)("?[^\s"\]]+"?)', bygroups(Operator, String)), # tag end (r'\]', Keyword, '#pop'), ], } class MoinWikiLexer(RegexLexer): """ For MoinMoin (and Trac) Wiki markup. .. 
versionadded:: 0.7 """ name = 'MoinMoin/Trac Wiki markup' aliases = ['trac-wiki', 'moin'] filenames = [] mimetypes = ['text/x-trac-wiki'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'^#.*$', Comment), (r'(!)(\S+)', bygroups(Keyword, Text)), # Ignore-next # Titles (r'^(=+)([^=]+)(=+)(\s*#.+)?$', bygroups(Generic.Heading, using(this), Generic.Heading, String)), # Literal code blocks, with optional shebang (r'(\{\{\{)(\n#!.+)?', bygroups(Name.Builtin, Name.Namespace), 'codeblock'), (r'(\'\'\'?|\|\||`|__|~~|\^|,,|::)', Comment), # Formatting # Lists (r'^( +)([.*-])( )', bygroups(Text, Name.Builtin, Text)), (r'^( +)([a-z]{1,5}\.)( )', bygroups(Text, Name.Builtin, Text)), # Other Formatting (r'\[\[\w+.*?\]\]', Keyword), # Macro (r'(\[[^\s\]]+)(\s+[^\]]+?)?(\])', bygroups(Keyword, String, Keyword)), # Link (r'^----+$', Keyword), # Horizontal rules (r'[^\n\'\[{!_~^,|]+', Text), (r'\n', Text), (r'.', Text), ], 'codeblock': [ (r'\}\}\}', Name.Builtin, '#pop'), # these blocks are allowed to be nested in Trac, but not MoinMoin (r'\{\{\{', Text, '#push'), (r'[^{}]+', Comment.Preproc), # slurp boring text (r'.', Comment.Preproc), # allow loose { or } ], } class RstLexer(RegexLexer): """ For `reStructuredText `_ markup. .. versionadded:: 0.7 Additional options accepted: `handlecodeblocks` Highlight the contents of ``.. sourcecode:: language``, ``.. code:: language`` and ``.. code-block:: language`` directives with a lexer for the given language (default: ``True``). .. versionadded:: 0.8 """ name = 'reStructuredText' aliases = ['restructuredtext', 'rst', 'rest'] filenames = ['*.rst', '*.rest'] mimetypes = ["text/x-rst", "text/prs.fallenstein.rst"] flags = re.MULTILINE def _handle_sourcecode(self, match): from pygments.lexers import get_lexer_by_name # section header yield match.start(1), Punctuation, match.group(1) yield match.start(2), Text, match.group(2) yield match.start(3), Operator.Word, match.group(3) yield match.start(4), Punctuation, match.group(4) yield match.start(5), Text, match.group(5) yield match.start(6), Keyword, match.group(6) yield match.start(7), Text, match.group(7) # lookup lexer if wanted and existing lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name(match.group(6).strip()) except ClassNotFound: pass indention = match.group(8) indention_size = len(indention) code = (indention + match.group(9) + match.group(10) + match.group(11)) # no lexer for this language. handle it like it was a code block if lexer is None: yield match.start(8), String, code return # highlight the lines with the lexer. ins = [] codelines = code.splitlines(True) code = '' for line in codelines: if len(line) > indention_size: ins.append((len(code), [(0, Text, line[:indention_size])])) code += line[indention_size:] else: code += line yield from do_insertions(ins, lexer.get_tokens_unprocessed(code)) # from docutils.parsers.rst.states closers = '\'")]}>\u2019\u201d\xbb!?' 
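    # These two strings feed into end_string_suffix below, which the
    # 'literal' state uses to decide where an inline literal like ``code``
    # ends: the closing backticks must be followed by end-of-line, whitespace,
    # or one of these closer/delimiter characters. So ``a``, terminates the
    # literal, while ``a``b does not (worked example; the behaviour follows
    # directly from the regex).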
unicode_delimiters = '\u2010\u2011\u2012\u2013\u2014\u00a0' end_string_suffix = (r'((?=$)|(?=[-/:.,; \n\x00%s%s]))' % (re.escape(unicode_delimiters), re.escape(closers))) tokens = { 'root': [ # Heading with overline (r'^(=+|-+|`+|:+|\.+|\'+|"+|~+|\^+|_+|\*+|\++|#+)([ \t]*\n)' r'(.+)(\n)(\1)(\n)', bygroups(Generic.Heading, Text, Generic.Heading, Text, Generic.Heading, Text)), # Plain heading (r'^(\S.*)(\n)(={3,}|-{3,}|`{3,}|:{3,}|\.{3,}|\'{3,}|"{3,}|' r'~{3,}|\^{3,}|_{3,}|\*{3,}|\+{3,}|#{3,})(\n)', bygroups(Generic.Heading, Text, Generic.Heading, Text)), # Bulleted lists (r'^(\s*)([-*+])( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), # Numbered lists (r'^(\s*)([0-9#ivxlcmIVXLCM]+\.)( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), (r'^(\s*)(\(?[0-9#ivxlcmIVXLCM]+\))( .+\n(?:\1 .+\n)*)', bygroups(Text, Number, using(this, state='inline'))), # Numbered, but keep words at BOL from becoming lists (r'^(\s*)([A-Z]+\.)( .+\n(?:\1 .+\n)+)', bygroups(Text, Number, using(this, state='inline'))), (r'^(\s*)(\(?[A-Za-z]+\))( .+\n(?:\1 .+\n)+)', bygroups(Text, Number, using(this, state='inline'))), # Line blocks (r'^(\s*)(\|)( .+\n(?:\| .+\n)*)', bygroups(Text, Operator, using(this, state='inline'))), # Sourcecode directives (r'^( *\.\.)(\s*)((?:source)?code(?:-block)?)(::)([ \t]*)([^\n]+)' r'(\n[ \t]*\n)([ \t]+)(.*)(\n)((?:(?:\8.*)?\n)+)', _handle_sourcecode), # A directive (r'^( *\.\.)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))', bygroups(Punctuation, Text, Operator.Word, Punctuation, Text, using(this, state='inline'))), # A reference target (r'^( *\.\.)(\s*)(_(?:[^:\\]|\\.)+:)(.*?)$', bygroups(Punctuation, Text, Name.Tag, using(this, state='inline'))), # A footnote/citation target (r'^( *\.\.)(\s*)(\[.+\])(.*?)$', bygroups(Punctuation, Text, Name.Tag, using(this, state='inline'))), # A substitution def (r'^( *\.\.)(\s*)(\|.+\|)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))', bygroups(Punctuation, Text, Name.Tag, Text, Operator.Word, Punctuation, Text, using(this, state='inline'))), # Comments (r'^ *\.\..*(\n( +.*\n|\n)+)?', Comment.Preproc), # Field list marker (r'^( *)(:(?:\\\\|\\:|[^:\n])+:(?=\s))([ \t]*)', bygroups(Text, Name.Class, Text)), # Definition list (r'^(\S.*(?)(`__?)', # reference with inline target bygroups(String, String.Interpol, String)), (r'`.+?`__?', String), # reference (r'(`.+?`)(:[a-zA-Z0-9:-]+?:)?', bygroups(Name.Variable, Name.Attribute)), # role (r'(:[a-zA-Z0-9:-]+?:)(`.+?`)', bygroups(Name.Attribute, Name.Variable)), # role (content first) (r'\*\*.+?\*\*', Generic.Strong), # Strong emphasis (r'\*.+?\*', Generic.Emph), # Emphasis (r'\[.*?\]_', String), # Footnote or citation (r'<.+?>', Name.Tag), # Hyperlink (r'[^\\\n\[*`:]+', Text), (r'.', Text), ], 'literal': [ (r'[^`]+', String), (r'``' + end_string_suffix, String, '#pop'), (r'`', String), ] } def __init__(self, **options): self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True) RegexLexer.__init__(self, **options) def analyse_text(text): if text[:2] == '..' and text[2:3] != '.': return 0.3 p1 = text.find("\n") p2 = text.find("\n", p1 + 1) if (p2 > -1 and # has two lines p1 * 2 + 1 == p2 and # they are the same length text[p1+1] in '-=' and # the next line both starts and ends with text[p1+1] == text[p2-1]): # ...a sufficiently high header return 0.5 class TexLexer(RegexLexer): """ Lexer for the TeX and LaTeX typesetting languages. 
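    A quick doctest-style sketch of driving this lexer directly (illustrative
    only; the import path assumes this module, and the token stream shown is
    truncated to its first element):

        >>> from pygments.lexers.markup import TexLexer
        >>> toks = list(TexLexer().get_tokens(r'$x^2$'))
        >>> toks[0]
        (Token.Literal.String, '$')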
""" name = 'TeX' aliases = ['tex', 'latex'] filenames = ['*.tex', '*.aux', '*.toc'] mimetypes = ['text/x-tex', 'text/x-latex'] tokens = { 'general': [ (r'%.*?\n', Comment), (r'[{}]', Name.Builtin), (r'[&_^]', Name.Builtin), ], 'root': [ (r'\\\[', String.Backtick, 'displaymath'), (r'\\\(', String, 'inlinemath'), (r'\$\$', String.Backtick, 'displaymath'), (r'\$', String, 'inlinemath'), (r'\\([a-zA-Z]+|.)', Keyword, 'command'), (r'\\$', Keyword), include('general'), (r'[^\\$%&_^{}]+', Text), ], 'math': [ (r'\\([a-zA-Z]+|.)', Name.Variable), include('general'), (r'[0-9]+', Number), (r'[-=!+*/()\[\]]', Operator), (r'[^=!+*/()\[\]\\$%&_^{}0-9-]+', Name.Builtin), ], 'inlinemath': [ (r'\\\)', String, '#pop'), (r'\$', String, '#pop'), include('math'), ], 'displaymath': [ (r'\\\]', String, '#pop'), (r'\$\$', String, '#pop'), (r'\$', Name.Builtin), include('math'), ], 'command': [ (r'\[.*?\]', Name.Attribute), (r'\*', Keyword), default('#pop'), ], } def analyse_text(text): for start in ("\\documentclass", "\\input", "\\documentstyle", "\\relax"): if text[:len(start)] == start: return True class GroffLexer(RegexLexer): """ Lexer for the (g)roff typesetting language, supporting groff extensions. Mainly useful for highlighting manpage sources. .. versionadded:: 0.6 """ name = 'Groff' aliases = ['groff', 'nroff', 'man'] filenames = ['*.[1-9]', '*.man', '*.1p', '*.3pm'] mimetypes = ['application/x-troff', 'text/troff'] tokens = { 'root': [ (r'(\.)(\w+)', bygroups(Text, Keyword), 'request'), (r'\.', Punctuation, 'request'), # Regular characters, slurp till we find a backslash or newline (r'[^\\\n]+', Text, 'textline'), default('textline'), ], 'textline': [ include('escapes'), (r'[^\\\n]+', Text), (r'\n', Text, '#pop'), ], 'escapes': [ # groff has many ways to write escapes. (r'\\"[^\n]*', Comment), (r'\\[fn]\w', String.Escape), (r'\\\(.{2}', String.Escape), (r'\\.\[.*\]', String.Escape), (r'\\.', String.Escape), (r'\\\n', Text, 'request'), ], 'request': [ (r'\n', Text, '#pop'), include('escapes'), (r'"[^\n"]+"', String.Double), (r'\d+', Number), (r'\S+', String), (r'\s+', Text), ], } def analyse_text(text): if text[:1] != '.': return False if text[:3] == '.\\"': return True if text[:4] == '.TH ': return True if text[1:3].isalnum() and text[3].isspace(): return 0.9 class MozPreprocHashLexer(RegexLexer): """ Lexer for Mozilla Preprocessor files (with '#' as the marker). Other data is left untouched. .. versionadded:: 2.0 """ name = 'mozhashpreproc' aliases = [name] filenames = [] mimetypes = [] tokens = { 'root': [ (r'^#', Comment.Preproc, ('expr', 'exprstart')), (r'.+', Other), ], 'exprstart': [ (r'(literal)(.*)', bygroups(Comment.Preproc, Text), '#pop:2'), (words(( 'define', 'undef', 'if', 'ifdef', 'ifndef', 'else', 'elif', 'elifdef', 'elifndef', 'endif', 'expand', 'filter', 'unfilter', 'include', 'includesubst', 'error')), Comment.Preproc, '#pop'), ], 'expr': [ (words(('!', '!=', '==', '&&', '||')), Operator), (r'(defined)(\()', bygroups(Keyword, Punctuation)), (r'\)', Punctuation), (r'[0-9]+', Number.Decimal), (r'__\w+?__', Name.Variable), (r'@\w+?@', Name.Class), (r'\w+', Name), (r'\n', Text, '#pop'), (r'\s+', Text), (r'\S', Punctuation), ], } class MozPreprocPercentLexer(MozPreprocHashLexer): """ Lexer for Mozilla Preprocessor files (with '%' as the marker). Other data is left untouched. .. 
versionadded:: 2.0 """ name = 'mozpercentpreproc' aliases = [name] filenames = [] mimetypes = [] tokens = { 'root': [ (r'^%', Comment.Preproc, ('expr', 'exprstart')), (r'.+', Other), ], } class MozPreprocXulLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `XmlLexer`. .. versionadded:: 2.0 """ name = "XUL+mozpreproc" aliases = ['xul+mozpreproc'] filenames = ['*.xul.in'] mimetypes = [] def __init__(self, **options): super().__init__(XmlLexer, MozPreprocHashLexer, **options) class MozPreprocJavascriptLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `JavascriptLexer`. .. versionadded:: 2.0 """ name = "Javascript+mozpreproc" aliases = ['javascript+mozpreproc'] filenames = ['*.js.in'] mimetypes = [] def __init__(self, **options): super().__init__(JavascriptLexer, MozPreprocHashLexer, **options) class MozPreprocCssLexer(DelegatingLexer): """ Subclass of the `MozPreprocHashLexer` that highlights unlexed data with the `CssLexer`. .. versionadded:: 2.0 """ name = "CSS+mozpreproc" aliases = ['css+mozpreproc'] filenames = ['*.css.in'] mimetypes = [] def __init__(self, **options): super().__init__(CssLexer, MozPreprocPercentLexer, **options) class MarkdownLexer(RegexLexer): """ For `Markdown `_ markup. .. versionadded:: 2.2 """ name = 'Markdown' aliases = ['markdown', 'md'] filenames = ['*.md', '*.markdown'] mimetypes = ["text/x-markdown"] flags = re.MULTILINE def _handle_codeblock(self, match): """ match args: 1:backticks, 2:lang_name, 3:newline, 4:code, 5:backticks """ from pygments.lexers import get_lexer_by_name # section header yield match.start(1), String.Backtick, match.group(1) yield match.start(2), String.Backtick, match.group(2) yield match.start(3), Text , match.group(3) # lookup lexer if wanted and existing lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name( match.group(2).strip() ) except ClassNotFound: pass code = match.group(4) # no lexer for this language. handle it like it was a code block if lexer is None: yield match.start(4), String, code else: yield from do_insertions([], lexer.get_tokens_unprocessed(code)) yield match.start(5), String.Backtick, match.group(5) tokens = { 'root': [ # heading with '#' prefix (atx-style) (r'(^#[^#].+)(\n)', bygroups(Generic.Heading, Text)), # subheading with '#' prefix (atx-style) (r'(^#{2,6}[^#].+)(\n)', bygroups(Generic.Subheading, Text)), # heading with '=' underlines (Setext-style) (r'^(.+)(\n)(=+)(\n)', bygroups(Generic.Heading, Text, Generic.Heading, Text)), # subheading with '-' underlines (Setext-style) (r'^(.+)(\n)(-+)(\n)', bygroups(Generic.Subheading, Text, Generic.Subheading, Text)), # task list (r'^(\s*)([*-] )(\[[ xX]\])( .+\n)', bygroups(Text, Keyword, Keyword, using(this, state='inline'))), # bulleted list (r'^(\s*)([*-])(\s)(.+\n)', bygroups(Text, Keyword, Text, using(this, state='inline'))), # numbered list (r'^(\s*)([0-9]+\.)( .+\n)', bygroups(Text, Keyword, using(this, state='inline'))), # quote (r'^(\s*>\s)(.+\n)', bygroups(Keyword, Generic.Emph)), # code block fenced by 3 backticks (r'^(\s*```\n[\w\W]*?^\s*```$\n)', String.Backtick), # code block with language (r'^(\s*```)(\w+)(\n)([\w\W]*?)(^\s*```$\n)', _handle_codeblock), include('inline'), ], 'inline': [ # escape (r'\\.', Text), # inline code (r'([^`]?)(`[^`\n]+`)', bygroups(Text, String.Backtick)), # warning: the following rules eat outer tags. # eg. 
**foo _bar_ baz** => foo and baz are not recognized as bold # bold fenced by '**' (r'([^\*]?)(\*\*[^* \n][^*\n]*\*\*)', bygroups(Text, Generic.Strong)), # bold fenced by '__' (r'([^_]?)(__[^_ \n][^_\n]*__)', bygroups(Text, Generic.Strong)), # italics fenced by '*' (r'([^\*]?)(\*[^* \n][^*\n]*\*)', bygroups(Text, Generic.Emph)), # italics fenced by '_' (r'([^_]?)(_[^_ \n][^_\n]*_)', bygroups(Text, Generic.Emph)), # strikethrough (r'([^~]?)(~~[^~ \n][^~\n]*~~)', bygroups(Text, Generic.Deleted)), # mentions and topics (twitter and github stuff) (r'[@#][\w/:]+', Name.Entity), # (image?) links eg: ![Image of Yaktocat](https://octodex.github.com/images/yaktocat.png) (r'(!?\[)([^]]+)(\])(\()([^)]+)(\))', bygroups(Text, Name.Tag, Text, Text, Name.Attribute, Text)), # reference-style links, e.g.: # [an example][id] # [id]: http://example.com/ (r'(\[)([^]]+)(\])(\[)([^]]*)(\])', bygroups(Text, Name.Tag, Text, Text, Name.Label, Text)), (r'^(\s*\[)([^]]*)(\]:\s*)(.+)', bygroups(Text, Name.Label, Text, Name.Attribute)), # general text, must come last! (r'[^\\\s]+', Text), (r'.', Text), ], } def __init__(self, **options): self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True) RegexLexer.__init__(self, **options) class TiddlyWiki5Lexer(RegexLexer): """ For `TiddlyWiki5 `_ markup. .. versionadded:: 2.7 """ name = 'tiddler' aliases = ['tid'] filenames = ['*.tid'] mimetypes = ["text/vnd.tiddlywiki"] flags = re.MULTILINE def _handle_codeblock(self, match): """ match args: 1:backticks, 2:lang_name, 3:newline, 4:code, 5:backticks """ from pygments.lexers import get_lexer_by_name # section header yield match.start(1), String, match.group(1) yield match.start(2), String, match.group(2) yield match.start(3), Text, match.group(3) # lookup lexer if wanted and existing lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name(match.group(2).strip()) except ClassNotFound: pass code = match.group(4) # no lexer for this language. handle it like it was a code block if lexer is None: yield match.start(4), String, code return yield from do_insertions([], lexer.get_tokens_unprocessed(code)) yield match.start(5), String, match.group(5) def _handle_cssblock(self, match): """ match args: 1:style tag 2:newline, 3:code, 4:closing style tag """ from pygments.lexers import get_lexer_by_name # section header yield match.start(1), String, match.group(1) yield match.start(2), String, match.group(2) lexer = None if self.handlecodeblocks: try: lexer = get_lexer_by_name('css') except ClassNotFound: pass code = match.group(3) # no lexer for this language. 
        # handle it like it was a code block
        if lexer is None:
            yield match.start(3), String, code
            return

        yield from do_insertions([], lexer.get_tokens_unprocessed(code))

        yield match.start(4), String, match.group(4)

    tokens = {
        'root': [
            # title in metadata section
            (r'^(title)(:\s)(.+\n)', bygroups(Keyword, Text, Generic.Heading)),
            # headings
            (r'^(!)([^!].+\n)', bygroups(Generic.Heading, Text)),
            (r'^(!{2,6})(.+\n)', bygroups(Generic.Subheading, Text)),
            # bulleted or numbered lists or single-line block quotes
            # (can be mixed)
            (r'^(\s*)([*#>]+)(\s*)(.+\n)',
             bygroups(Text, Keyword, Text, using(this, state='inline'))),
            # multi-line block quotes
            (r'^(<<<.*\n)([\w\W]*?)(^<<<.*$)', bygroups(String, Text, String)),
            # table header
            (r'^(\|.*?\|h)$', bygroups(Generic.Strong)),
            # table footer or caption
            (r'^(\|.*?\|[cf])$', bygroups(Generic.Emph)),
            # table class
            (r'^(\|.*?\|k)$', bygroups(Name.Tag)),
            # definitions
            (r'^(;.*)$', bygroups(Generic.Strong)),
            # text block
            (r'^(```\n)([\w\W]*?)(^```$)', bygroups(String, Text, String)),
            # code block with language
            (r'^(```)(\w+)(\n)([\w\W]*?)(^```$)', _handle_codeblock),
            # CSS style block
            (r'^(<style.*?>)(\n)([\w\W]*?)(^</style>$)', _handle_cssblock),
            include('keywords'),
            include('inline'),
        ],
        'keywords': [
            (words((
                '\\define', '\\end', 'caption', 'created', 'modified', 'tags',
                'title', 'type'), prefix=r'^', suffix=r'\b'),
             Keyword),
        ],
        'inline': [
            # escape
            (r'\\.', Text),
            # created or modified date
            (r'\d{17}', Number.Integer),
            # italics
            (r'(\s)(//[^/]+//)((?=\W|\n))',
             bygroups(Text, Generic.Emph, Text)),
            # superscript
            (r'(\s)(\^\^[^\^]+\^\^)', bygroups(Text, Generic.Emph)),
            # subscript
            (r'(\s)(,,[^,]+,,)', bygroups(Text, Generic.Emph)),
            # underscore
            (r'(\s)(__[^_]+__)', bygroups(Text, Generic.Strong)),
            # bold
            (r"(\s)(''[^']+'')((?=\W|\n))",
             bygroups(Text, Generic.Strong, Text)),
            # strikethrough
            (r'(\s)(~~[^~]+~~)((?=\W|\n))',
             bygroups(Text, Generic.Deleted, Text)),
            # TiddlyWiki variables
            (r'<<[^>]+>>', Name.Tag),
            (r'\$\$[^$]+\$\$', Name.Tag),
            (r'\$\([^)]+\)\$', Name.Tag),
            # TiddlyWiki style or class
            (r'^@@.*$', Name.Tag),
            # HTML tags
            (r'</?[^>]+>', Name.Tag),
            # inline code
            (r'`[^`]+`', String.Backtick),
            # HTML escaped symbols
            (r'&\S*?;', String.Regex),
            # Wiki links
            (r'(\[{2})([^]\|]+)(\]{2})', bygroups(Text, Name.Tag, Text)),
            # External links
            (r'(\[{2})([^]\|]+)(\|)([^]\|]+)(\]{2})',
             bygroups(Text, Name.Tag, Text, Name.Attribute, Text)),
            # Transclusion
            (r'(\{{2})([^}]+)(\}{2})', bygroups(Text, Name.Tag, Text)),
            # URLs
            (r'(\b.?.?tps?://[^\s"]+)', bygroups(Name.Attribute)),
            # general text, must come last!
            (r'[\w]+', Text),
            (r'.', Text)
        ],
    }

    def __init__(self, **options):
        self.handlecodeblocks = get_bool_opt(options, 'handlecodeblocks', True)
        RegexLexer.__init__(self, **options)
pygments-2.11.2/pygments/lexers/whiley.py0000644000175000017500000000762314165547207020367 0ustar carstencarsten"""
    pygments.lexers.whiley
    ~~~~~~~~~~~~~~~~~~~~~~

    Lexers for the Whiley language.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.lexer import RegexLexer, bygroups, words
from pygments.token import Comment, Keyword, Name, Number, Operator, \
    Punctuation, String, Text

__all__ = ['WhileyLexer']


class WhileyLexer(RegexLexer):
    """
    Lexer for the Whiley programming language.

    ..
versionadded:: 2.2 """ name = 'Whiley' filenames = ['*.whiley'] aliases = ['whiley'] mimetypes = ['text/x-whiley'] # See the language specification: # http://whiley.org/download/WhileyLanguageSpec.pdf tokens = { 'root': [ # Whitespace (r'\s+', Text), # Comments (r'//.*', Comment.Single), # don't parse empty comment as doc comment (r'/\*\*/', Comment.Multiline), (r'(?s)/\*\*.*?\*/', String.Doc), (r'(?s)/\*.*?\*/', Comment.Multiline), # Keywords (words(( 'if', 'else', 'while', 'for', 'do', 'return', 'switch', 'case', 'default', 'break', 'continue', 'requires', 'ensures', 'where', 'assert', 'assume', 'all', 'no', 'some', 'in', 'is', 'new', 'throw', 'try', 'catch', 'debug', 'skip', 'fail', 'finite', 'total'), suffix=r'\b'), Keyword.Reserved), (words(( 'function', 'method', 'public', 'private', 'protected', 'export', 'native'), suffix=r'\b'), Keyword.Declaration), # "constant" & "type" are not keywords unless used in declarations (r'(constant|type)(\s+)([a-zA-Z_]\w*)(\s+)(is)\b', bygroups(Keyword.Declaration, Text, Name, Text, Keyword.Reserved)), (r'(true|false|null)\b', Keyword.Constant), (r'(bool|byte|int|real|any|void)\b', Keyword.Type), # "from" is not a keyword unless used with import (r'(import)(\s+)(\*)([^\S\n]+)(from)\b', bygroups(Keyword.Namespace, Text, Punctuation, Text, Keyword.Namespace)), (r'(import)(\s+)([a-zA-Z_]\w*)([^\S\n]+)(from)\b', bygroups(Keyword.Namespace, Text, Name, Text, Keyword.Namespace)), (r'(package|import)\b', Keyword.Namespace), # standard library: https://github.com/Whiley/WhileyLibs/ (words(( # types defined in whiley.lang.Int 'i8', 'i16', 'i32', 'i64', 'u8', 'u16', 'u32', 'u64', 'uint', 'nat', # whiley.lang.Any 'toString'), suffix=r'\b'), Name.Builtin), # byte literal (r'[01]+b', Number.Bin), # decimal literal (r'[0-9]+\.[0-9]+', Number.Float), # match "1." but not ranges like "3..5" (r'[0-9]+\.(?!\.)', Number.Float), # integer literal (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), # character literal (r"""'[^\\]'""", String.Char), (r"""(')(\\['"\\btnfr])(')""", bygroups(String.Char, String.Escape, String.Char)), # string literal (r'"', String, 'string'), # operators and punctuation (r'[{}()\[\],.;]', Punctuation), (r'[+\-*/%&|<>^!~@=:?' # unicode operators r'\u2200\u2203\u2205\u2282\u2286\u2283\u2287' r'\u222A\u2229\u2264\u2265\u2208\u2227\u2228' r']', Operator), # identifier (r'[a-zA-Z_]\w*', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\[btnfr]', String.Escape), (r'\\u[0-9a-fA-F]{4}', String.Escape), (r'\\.', String), (r'[^\\"]+', String), ], } pygments-2.11.2/pygments/lexers/configs.py0000644000175000017500000011677114165547207020525 0ustar carstencarsten""" pygments.lexers.configs ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for configuration file formats. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
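    A minimal usage sketch (illustrative only; any lexer defined below can be
    substituted for IniLexer):

        >>> from pygments import highlight
        >>> from pygments.formatters import TerminalFormatter
        >>> from pygments.lexers.configs import IniLexer
        >>> out = highlight('[section]\nkey = value\n', IniLexer(),
        ...                 TerminalFormatter())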
""" import re from pygments.lexer import ExtendedRegexLexer, RegexLexer, default, words, \ bygroups, include, using from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace, Literal, Error, Generic from pygments.lexers.shell import BashLexer from pygments.lexers.data import JsonLexer __all__ = ['IniLexer', 'RegeditLexer', 'PropertiesLexer', 'KconfigLexer', 'Cfengine3Lexer', 'ApacheConfLexer', 'SquidConfLexer', 'NginxConfLexer', 'LighttpdConfLexer', 'DockerLexer', 'TerraformLexer', 'TermcapLexer', 'TerminfoLexer', 'PkgConfigLexer', 'PacmanConfLexer', 'AugeasLexer', 'TOMLLexer', 'NestedTextLexer', 'SingularityLexer'] class IniLexer(RegexLexer): """ Lexer for configuration files in INI style. """ name = 'INI' aliases = ['ini', 'cfg', 'dosini'] filenames = [ '*.ini', '*.cfg', '*.inf', '.editorconfig', # systemd unit files # https://www.freedesktop.org/software/systemd/man/systemd.unit.html '*.service', '*.socket', '*.device', '*.mount', '*.automount', '*.swap', '*.target', '*.path', '*.timer', '*.slice', '*.scope', ] mimetypes = ['text/x-ini', 'text/inf'] tokens = { 'root': [ (r'\s+', Whitespace), (r'[;#].*', Comment.Single), (r'\[.*?\]$', Keyword), (r'(.*?)([ \t]*)(=)([ \t]*)([^\t\n]*)', bygroups(Name.Attribute, Whitespace, Operator, Whitespace, String)), # standalone option, supported by some INI parsers (r'(.+?)$', Name.Attribute), ], } def analyse_text(text): npos = text.find('\n') if npos < 3: return False return text[0] == '[' and text[npos-1] == ']' class RegeditLexer(RegexLexer): """ Lexer for `Windows Registry `_ files produced by regedit. .. versionadded:: 1.6 """ name = 'reg' aliases = ['registry'] filenames = ['*.reg'] mimetypes = ['text/x-windows-registry'] tokens = { 'root': [ (r'Windows Registry Editor.*', Text), (r'\s+', Whitespace), (r'[;#].*', Comment.Single), (r'(\[)(-?)(HKEY_[A-Z_]+)(.*?\])$', bygroups(Keyword, Operator, Name.Builtin, Keyword)), # String keys, which obey somewhat normal escaping (r'("(?:\\"|\\\\|[^"])+")([ \t]*)(=)([ \t]*)', bygroups(Name.Attribute, Whitespace, Operator, Whitespace), 'value'), # Bare keys (includes @) (r'(.*?)([ \t]*)(=)([ \t]*)', bygroups(Name.Attribute, Whitespace, Operator, Whitespace), 'value'), ], 'value': [ (r'-', Operator, '#pop'), # delete value (r'(dword|hex(?:\([0-9a-fA-F]\))?)(:)([0-9a-fA-F,]+)', bygroups(Name.Variable, Punctuation, Number), '#pop'), # As far as I know, .reg files do not support line continuation. (r'.+', String, '#pop'), default('#pop'), ] } def analyse_text(text): return text.startswith('Windows Registry Editor') class PropertiesLexer(RegexLexer): """ Lexer for configuration files in Java's properties format. Note: trailing whitespace counts as part of the value as per spec .. versionadded:: 1.4 """ name = 'Properties' aliases = ['properties', 'jproperties'] filenames = ['*.properties'] mimetypes = ['text/x-java-properties'] tokens = { 'root': [ (r'^(\w+)([ \t])(\w+\s*)$', bygroups(Name.Attribute, Whitespace, String)), (r'^\w+(\\[ \t]\w*)*$', Name.Attribute), (r'(^ *)([#!].*)', bygroups(Whitespace, Comment)), # More controversial comments (r'(^ *)((?:;|//).*)', bygroups(Whitespace, Comment)), (r'(.*?)([ \t]*)([=:])([ \t]*)(.*(?:(?<=\\)\n.*)*)', bygroups(Name.Attribute, Whitespace, Operator, Whitespace, String)), (r'\s', Whitespace), ], } def _rx_indent(level): # Kconfig *always* interprets a tab as 8 spaces, so this is the default. 
# Edit this if you are in an environment where KconfigLexer gets expanded # input (tabs expanded to spaces) and the expansion tab width is != 8, # e.g. in connection with Trac (trac.ini, [mimeviewer], tab_width). # Value range here is 2 <= {tab_width} <= 8. tab_width = 8 # Regex matching a given indentation {level}, assuming that indentation is # a multiple of {tab_width}. In other cases there might be problems. if tab_width == 2: space_repeat = '+' else: space_repeat = '{1,%d}' % (tab_width - 1) if level == 1: level_repeat = '' else: level_repeat = '{%s}' % level return r'(?:\t| %s\t| {%s})%s.*\n' % (space_repeat, tab_width, level_repeat) class KconfigLexer(RegexLexer): """ For Linux-style Kconfig files. .. versionadded:: 1.6 """ name = 'Kconfig' aliases = ['kconfig', 'menuconfig', 'linux-config', 'kernel-config'] # Adjust this if new kconfig file names appear in your environment filenames = ['Kconfig*', '*Config.in*', 'external.in*', 'standard-modules.in'] mimetypes = ['text/x-kconfig'] # No re.MULTILINE, indentation-aware help text needs line-by-line handling flags = 0 def call_indent(level): # If indentation >= {level} is detected, enter state 'indent{level}' return (_rx_indent(level), String.Doc, 'indent%s' % level) def do_indent(level): # Print paragraphs of indentation level >= {level} as String.Doc, # ignoring blank lines. Then return to 'root' state. return [ (_rx_indent(level), String.Doc), (r'\s*\n', Text), default('#pop:2') ] tokens = { 'root': [ (r'\s+', Whitespace), (r'#.*?\n', Comment.Single), (words(( 'mainmenu', 'config', 'menuconfig', 'choice', 'endchoice', 'comment', 'menu', 'endmenu', 'visible if', 'if', 'endif', 'source', 'prompt', 'select', 'depends on', 'default', 'range', 'option'), suffix=r'\b'), Keyword), (r'(---help---|help)[\t ]*\n', Keyword, 'help'), (r'(bool|tristate|string|hex|int|defconfig_list|modules|env)\b', Name.Builtin), (r'[!=&|]', Operator), (r'[()]', Punctuation), (r'[0-9]+', Number.Integer), (r"'(''|[^'])*'", String.Single), (r'"(""|[^"])*"', String.Double), (r'\S+', Text), ], # Help text is indented, multi-line and ends when a lower indentation # level is detected. 'help': [ # Skip blank lines after help token, if any (r'\s*\n', Text), # Determine the first help line's indentation level heuristically(!). # Attention: this is not perfect, but works for 99% of "normal" # indentation schemes up to a max. indentation level of 7. call_indent(7), call_indent(6), call_indent(5), call_indent(4), call_indent(3), call_indent(2), call_indent(1), default('#pop'), # for incomplete help sections without text ], # Handle text for indentation levels 7 to 1 'indent7': do_indent(7), 'indent6': do_indent(6), 'indent5': do_indent(5), 'indent4': do_indent(4), 'indent3': do_indent(3), 'indent2': do_indent(2), 'indent1': do_indent(1), } class Cfengine3Lexer(RegexLexer): """ Lexer for `CFEngine3 `_ policy files. .. 
versionadded:: 1.5 """ name = 'CFEngine3' aliases = ['cfengine3', 'cf3'] filenames = ['*.cf'] mimetypes = [] tokens = { 'root': [ (r'#.*?\n', Comment), (r'(body)(\s+)(\S+)(\s+)(control)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Keyword)), (r'(body|bundle)(\s+)(\S+)(\s+)(\w+)(\()', bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Function, Punctuation), 'arglist'), (r'(body|bundle)(\s+)(\S+)(\s+)(\w+)', bygroups(Keyword, Whitespace, Keyword, Whitespace, Name.Function)), (r'(")([^"]+)(")(\s+)(string|slist|int|real)(\s*)(=>)(\s*)', bygroups(Punctuation, Name.Variable, Punctuation, Whitespace, Keyword.Type, Whitespace, Operator, Whitespace)), (r'(\S+)(\s*)(=>)(\s*)', bygroups(Keyword.Reserved, Whitespace, Operator, Text)), (r'"', String, 'string'), (r'(\w+)(\()', bygroups(Name.Function, Punctuation)), (r'([\w.!&|()]+)(::)', bygroups(Name.Class, Punctuation)), (r'(\w+)(:)', bygroups(Keyword.Declaration, Punctuation)), (r'@[{(][^)}]+[})]', Name.Variable), (r'[(){},;]', Punctuation), (r'=>', Operator), (r'->', Operator), (r'\d+\.\d+', Number.Float), (r'\d+', Number.Integer), (r'\w+', Name.Function), (r'\s+', Whitespace), ], 'string': [ (r'\$[{(]', String.Interpol, 'interpol'), (r'\\.', String.Escape), (r'"', String, '#pop'), (r'\n', String), (r'.', String), ], 'interpol': [ (r'\$[{(]', String.Interpol, '#push'), (r'[})]', String.Interpol, '#pop'), (r'[^${()}]+', String.Interpol), ], 'arglist': [ (r'\)', Punctuation, '#pop'), (r',', Punctuation), (r'\w+', Name.Variable), (r'\s+', Whitespace), ], } class ApacheConfLexer(RegexLexer): """ Lexer for configuration files following the Apache config file format. .. versionadded:: 0.6 """ name = 'ApacheConf' aliases = ['apacheconf', 'aconf', 'apache'] filenames = ['.htaccess', 'apache.conf', 'apache2.conf'] mimetypes = ['text/x-apacheconf'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'\s+', Whitespace), (r'#(.*\\\n)+.*$|(#.*?)$', Comment), (r'(<[^\s>/][^\s>]*)(?:(\s+)(.*))?(>)', bygroups(Name.Tag, Whitespace, String, Name.Tag)), (r'(]+)(>)', bygroups(Name.Tag, Name.Tag)), (r'[a-z]\w*', Name.Builtin, 'value'), (r'\.+', Text), ], 'value': [ (r'\\\n', Text), (r'\n+', Whitespace, '#pop'), (r'\\', Text), (r'[^\S\n]+', Whitespace), (r'\d+\.\d+\.\d+\.\d+(?:/\d+)?', Number), (r'\d+', Number), (r'/([*a-z0-9][*\w./-]+)', String.Other), (r'(on|off|none|any|all|double|email|dns|min|minimal|' r'os|productonly|full|emerg|alert|crit|error|warn|' r'notice|info|debug|registry|script|inetd|standalone|' r'user|group)\b', Keyword), (r'"([^"\\]*(?:\\(.|\n)[^"\\]*)*)"', String.Double), (r'[^\s"\\]+', Text) ], } class SquidConfLexer(RegexLexer): """ Lexer for `squid `_ configuration files. .. 
versionadded:: 0.9 """ name = 'SquidConf' aliases = ['squidconf', 'squid.conf', 'squid'] filenames = ['squid.conf'] mimetypes = ['text/x-squidconf'] flags = re.IGNORECASE keywords = ( "access_log", "acl", "always_direct", "announce_host", "announce_period", "announce_port", "announce_to", "anonymize_headers", "append_domain", "as_whois_server", "auth_param_basic", "authenticate_children", "authenticate_program", "authenticate_ttl", "broken_posts", "buffered_logs", "cache_access_log", "cache_announce", "cache_dir", "cache_dns_program", "cache_effective_group", "cache_effective_user", "cache_host", "cache_host_acl", "cache_host_domain", "cache_log", "cache_mem", "cache_mem_high", "cache_mem_low", "cache_mgr", "cachemgr_passwd", "cache_peer", "cache_peer_access", "cache_replacement_policy", "cache_stoplist", "cache_stoplist_pattern", "cache_store_log", "cache_swap", "cache_swap_high", "cache_swap_log", "cache_swap_low", "client_db", "client_lifetime", "client_netmask", "connect_timeout", "coredump_dir", "dead_peer_timeout", "debug_options", "delay_access", "delay_class", "delay_initial_bucket_level", "delay_parameters", "delay_pools", "deny_info", "dns_children", "dns_defnames", "dns_nameservers", "dns_testnames", "emulate_httpd_log", "err_html_text", "fake_user_agent", "firewall_ip", "forwarded_for", "forward_snmpd_port", "fqdncache_size", "ftpget_options", "ftpget_program", "ftp_list_width", "ftp_passive", "ftp_user", "half_closed_clients", "header_access", "header_replace", "hierarchy_stoplist", "high_response_time_warning", "high_page_fault_warning", "hosts_file", "htcp_port", "http_access", "http_anonymizer", "httpd_accel", "httpd_accel_host", "httpd_accel_port", "httpd_accel_uses_host_header", "httpd_accel_with_proxy", "http_port", "http_reply_access", "icp_access", "icp_hit_stale", "icp_port", "icp_query_timeout", "ident_lookup", "ident_lookup_access", "ident_timeout", "incoming_http_average", "incoming_icp_average", "inside_firewall", "ipcache_high", "ipcache_low", "ipcache_size", "local_domain", "local_ip", "logfile_rotate", "log_fqdn", "log_icp_queries", "log_mime_hdrs", "maximum_object_size", "maximum_single_addr_tries", "mcast_groups", "mcast_icp_query_timeout", "mcast_miss_addr", "mcast_miss_encode_key", "mcast_miss_port", "memory_pools", "memory_pools_limit", "memory_replacement_policy", "mime_table", "min_http_poll_cnt", "min_icp_poll_cnt", "minimum_direct_hops", "minimum_object_size", "minimum_retry_timeout", "miss_access", "negative_dns_ttl", "negative_ttl", "neighbor_timeout", "neighbor_type_domain", "netdb_high", "netdb_low", "netdb_ping_period", "netdb_ping_rate", "never_direct", "no_cache", "passthrough_proxy", "pconn_timeout", "pid_filename", "pinger_program", "positive_dns_ttl", "prefer_direct", "proxy_auth", "proxy_auth_realm", "query_icmp", "quick_abort", "quick_abort_max", "quick_abort_min", "quick_abort_pct", "range_offset_limit", "read_timeout", "redirect_children", "redirect_program", "redirect_rewrites_host_header", "reference_age", "refresh_pattern", "reload_into_ims", "request_body_max_size", "request_size", "request_timeout", "shutdown_lifetime", "single_parent_bypass", "siteselect_timeout", "snmp_access", "snmp_incoming_address", "snmp_port", "source_ping", "ssl_proxy", "store_avg_object_size", "store_objects_per_bucket", "strip_query_terms", "swap_level1_dirs", "swap_level2_dirs", "tcp_incoming_address", "tcp_outgoing_address", "tcp_recv_bufsize", "test_reachability", "udp_hit_obj", "udp_hit_obj_size", "udp_incoming_address", "udp_outgoing_address", 
"unique_hostname", "unlinkd_program", "uri_whitespace", "useragent_log", "visible_hostname", "wais_relay", "wais_relay_host", "wais_relay_port", ) opts = ( "proxy-only", "weight", "ttl", "no-query", "default", "round-robin", "multicast-responder", "on", "off", "all", "deny", "allow", "via", "parent", "no-digest", "heap", "lru", "realm", "children", "q1", "q2", "credentialsttl", "none", "disable", "offline_toggle", "diskd", ) actions = ( "shutdown", "info", "parameter", "server_list", "client_list", r'squid.conf', ) actions_stats = ( "objects", "vm_objects", "utilization", "ipcache", "fqdncache", "dns", "redirector", "io", "reply_headers", "filedescriptors", "netdb", ) actions_log = ("status", "enable", "disable", "clear") acls = ( "url_regex", "urlpath_regex", "referer_regex", "port", "proto", "req_mime_type", "rep_mime_type", "method", "browser", "user", "src", "dst", "time", "dstdomain", "ident", "snmp_community", ) ip_re = ( r'(?:(?:(?:[3-9]\d?|2(?:5[0-5]|[0-4]?\d)?|1\d{0,2}|0x0*[0-9a-f]{1,2}|' r'0+[1-3]?[0-7]{0,2})(?:\.(?:[3-9]\d?|2(?:5[0-5]|[0-4]?\d)?|1\d{0,2}|' r'0x0*[0-9a-f]{1,2}|0+[1-3]?[0-7]{0,2})){3})|(?!.*::.*::)(?:(?!:)|' r':(?=:))(?:[0-9a-f]{0,4}(?:(?<=::)|(?`_ configuration files. .. versionadded:: 0.11 """ name = 'Nginx configuration file' aliases = ['nginx'] filenames = ['nginx.conf'] mimetypes = ['text/x-nginx-conf'] tokens = { 'root': [ (r'(include)(\s+)([^\s;]+)', bygroups(Keyword, Whitespace, Name)), (r'[^\s;#]+', Keyword, 'stmt'), include('base'), ], 'block': [ (r'\}', Punctuation, '#pop:2'), (r'[^\s;#]+', Keyword.Namespace, 'stmt'), include('base'), ], 'stmt': [ (r'\{', Punctuation, 'block'), (r';', Punctuation, '#pop'), include('base'), ], 'base': [ (r'#.*\n', Comment.Single), (r'on|off', Name.Constant), (r'\$[^\s;#()]+', Name.Variable), (r'([a-z0-9.-]+)(:)([0-9]+)', bygroups(Name, Punctuation, Number.Integer)), (r'[a-z-]+/[a-z-+]+', String), # mimetype # (r'[a-zA-Z._-]+', Keyword), (r'[0-9]+[km]?\b', Number.Integer), (r'(~)(\s*)([^\s{]+)', bygroups(Punctuation, Whitespace, String.Regex)), (r'[:=~]', Punctuation), (r'[^\s;#{}$]+', String), # catch all (r'/[^\s;#]*', Name), # pathname (r'\s+', Whitespace), (r'[$;]', Text), # leftover characters ], } class LighttpdConfLexer(RegexLexer): """ Lexer for `Lighttpd `_ configuration files. .. versionadded:: 0.11 """ name = 'Lighttpd configuration file' aliases = ['lighttpd', 'lighty'] filenames = ['lighttpd.conf'] mimetypes = ['text/x-lighttpd-conf'] tokens = { 'root': [ (r'#.*\n', Comment.Single), (r'/\S*', Name), # pathname (r'[a-zA-Z._-]+', Keyword), (r'\d+\.\d+\.\d+\.\d+(?:/\d+)?', Number), (r'[0-9]+', Number), (r'=>|=~|\+=|==|=|\+', Operator), (r'\$[A-Z]+', Name.Builtin), (r'[(){}\[\],]', Punctuation), (r'"([^"\\]*(?:\\.[^"\\]*)*)"', String.Double), (r'\s+', Whitespace), ], } class DockerLexer(RegexLexer): """ Lexer for `Docker `_ configuration files. .. 
versionadded:: 2.0 """ name = 'Docker' aliases = ['docker', 'dockerfile'] filenames = ['Dockerfile', '*.docker'] mimetypes = ['text/x-dockerfile-config'] _keywords = (r'(?:MAINTAINER|EXPOSE|WORKDIR|USER|STOPSIGNAL)') _bash_keywords = (r'(?:RUN|CMD|ENTRYPOINT|ENV|ARG|LABEL|ADD|COPY)') _lb = r'(?:\s*\\?\s*)' # dockerfile line break regex flags = re.IGNORECASE | re.MULTILINE tokens = { 'root': [ (r'#.*', Comment), (r'(FROM)([ \t]*)(\S*)([ \t]*)(?:(AS)([ \t]*)(\S*))?', bygroups(Keyword, Whitespace, String, Whitespace, Keyword, Whitespace, String)), (r'(ONBUILD)(\s+)(%s)' % (_lb,), bygroups(Keyword, Whitespace, using(BashLexer))), (r'(HEALTHCHECK)(\s+)((%s--\w+=\w+%s)*)' % (_lb, _lb), bygroups(Keyword, Whitespace, using(BashLexer))), (r'(VOLUME|ENTRYPOINT|CMD|SHELL)(\s+)(%s)(\[.*?\])' % (_lb,), bygroups(Keyword, Whitespace, using(BashLexer), using(JsonLexer))), (r'(LABEL|ENV|ARG)(\s+)((%s\w+=\w+%s)*)' % (_lb, _lb), bygroups(Keyword, Whitespace, using(BashLexer))), (r'(%s|VOLUME)\b(\s+)(.*)' % (_keywords), bygroups(Keyword, Whitespace, String)), (r'(%s)(\s+)' % (_bash_keywords,), bygroups(Keyword, Whitespace)), (r'(.*\\\n)*.+', using(BashLexer)), ] } class TerraformLexer(ExtendedRegexLexer): """ Lexer for `terraformi .tf files `_. .. versionadded:: 2.1 """ name = 'Terraform' aliases = ['terraform', 'tf'] filenames = ['*.tf'] mimetypes = ['application/x-tf', 'application/x-terraform'] classes = ('backend', 'data', 'module', 'output', 'provider', 'provisioner', 'resource', 'variable') classes_re = "({})".format(('|').join(classes)) types = ('string', 'number', 'bool', 'list', 'tuple', 'map', 'set', 'object', 'null') numeric_functions = ('abs', 'ceil', 'floor', 'log', 'max', 'mix', 'parseint', 'pow', 'signum') string_functions = ('chomp', 'format', 'formatlist', 'indent', 'join', 'lower', 'regex', 'regexall', 'replace', 'split', 'strrev', 'substr', 'title', 'trim', 'trimprefix', 'trimsuffix', 'trimspace', 'upper' ) collection_functions = ('alltrue', 'anytrue', 'chunklist', 'coalesce', 'coalescelist', 'compact', 'concat', 'contains', 'distinct', 'element', 'flatten', 'index', 'keys', 'length', 'list', 'lookup', 'map', 'matchkeys', 'merge', 'range', 'reverse', 'setintersection', 'setproduct', 'setsubtract', 'setunion', 'slice', 'sort', 'sum', 'transpose', 'values', 'zipmap' ) encoding_functions = ('base64decode', 'base64encode', 'base64gzip', 'csvdecode', 'jsondecode', 'jsonencode', 'textdecodebase64', 'textencodebase64', 'urlencode', 'yamldecode', 'yamlencode') filesystem_functions = ('abspath', 'dirname', 'pathexpand', 'basename', 'file', 'fileexists', 'fileset', 'filebase64', 'templatefile') date_time_functions = ('formatdate', 'timeadd', 'timestamp') hash_crypto_functions = ('base64sha256', 'base64sha512', 'bcrypt', 'filebase64sha256', 'filebase64sha512', 'filemd5', 'filesha1', 'filesha256', 'filesha512', 'md5', 'rsadecrypt', 'sha1', 'sha256', 'sha512', 'uuid', 'uuidv5') ip_network_functions = ('cidrhost', 'cidrnetmask', 'cidrsubnet', 'cidrsubnets') type_conversion_functions = ('can', 'defaults', 'tobool', 'tolist', 'tomap', 'tonumber', 'toset', 'tostring', 'try') builtins = numeric_functions + string_functions + collection_functions + encoding_functions +\ filesystem_functions + date_time_functions + hash_crypto_functions + ip_network_functions +\ type_conversion_functions builtins_re = "({})".format(('|').join(builtins)) def heredoc_callback(self, match, ctx): # Parse a terraform heredoc # match: 1 = <<[-]?, 2 = name 3 = rest of line start = match.start(1) yield start, Operator, match.group(1) 
# <<[-~]? yield match.start(2), String.Delimiter, match.group(2) # heredoc name ctx.pos = match.start(3) ctx.end = match.end(3) yield ctx.pos, String.Heredoc, match.group(3) ctx.pos = match.end() hdname = match.group(2) tolerant = match.group(1)[-1] == "-" lines = [] line_re = re.compile('.*?\n') for match in line_re.finditer(ctx.text, ctx.pos): if tolerant: check = match.group().strip() else: check = match.group().rstrip() if check == hdname: for amatch in lines: yield amatch.start(), String.Heredoc, amatch.group() yield match.start(), String.Delimiter, match.group() ctx.pos = match.end() break else: lines.append(match) else: # end of heredoc not found -- error! for amatch in lines: yield amatch.start(), Error, amatch.group() ctx.end = len(ctx.text) tokens = { 'root': [ include('basic'), include('whitespace'), # Strings (r'(".*")', bygroups(String.Double)), # Constants (words(('true', 'false'), prefix=r'\b', suffix=r'\b'), Name.Constant), # Types (words(types, prefix=r'\b', suffix=r'\b'), Keyword.Type), include('identifier'), include('punctuation'), (r'[0-9]+', Number), ], 'basic': [ (r'\s*/\*', Comment.Multiline, 'comment'), (r'\s*#.*\n', Comment.Single), include('whitespace'), # e.g. terraform { # e.g. egress { (r'(\s*)([0-9a-zA-Z-_]+)(\s*)(=?)(\s*)(\{)', bygroups(Whitespace, Name.Builtin, Whitespace, Operator, Whitespace, Punctuation)), # Assignment with attributes, e.g. something = ... (r'(\s*)([0-9a-zA-Z-_]+)(\s*)(=)(\s*)', bygroups(Whitespace, Name.Attribute, Whitespace, Operator, Whitespace)), # Assignment with environment variables and similar, e.g. "something" = ... # or key value assignment, e.g. "SlotName" : ... (r'(\s*)("\S+")(\s*)([=:])(\s*)', bygroups(Whitespace, Literal.String.Double, Whitespace, Operator, Whitespace)), # Functions, e.g. jsonencode(element("value")) (builtins_re + r'(\()', bygroups(Name.Function, Punctuation)), # List of attributes, e.g. ignore_changes = [last_modified, filename] (r'(\[)([a-z_,\s]+)(\])', bygroups(Punctuation, Name.Builtin, Punctuation)), # e.g. resource "aws_security_group" "allow_tls" { # e.g. backend "consul" { (classes_re + r'(\s+)', bygroups(Keyword.Reserved, Whitespace), 'blockname'), # here-doc style delimited strings ( r'(<<-?)\s*([a-zA-Z_]\w*)(.*?\n)', heredoc_callback, ) ], 'blockname': [ # e.g. resource "aws_security_group" "allow_tls" { # e.g. backend "consul" { (r'(\s*)("[0-9a-zA-Z-_]+")?(\s*)("[0-9a-zA-Z-_]+")(\s+)(\{)', bygroups(Whitespace, Name.Class, Whitespace, Name.Variable, Whitespace, Punctuation)), ], 'identifier': [ (r'\b(var\.[0-9a-zA-Z-_\.\[\]]+)\b', bygroups(Name.Variable)), (r'\b([0-9a-zA-Z-_\[\]]+\.[0-9a-zA-Z-_\.\[\]]+)\b', bygroups(Name.Variable)), ], 'punctuation': [ (r'[\[\]()\{\},.?:!=]', Punctuation), ], 'comment': [ (r'[^*/]', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'whitespace': [ (r'\n', Whitespace), (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(Text, Whitespace)), ], } class TermcapLexer(RegexLexer): """ Lexer for termcap database source. This is very simple and minimal. .. 
versionadded:: 2.1 """ name = 'Termcap' aliases = ['termcap'] filenames = ['termcap', 'termcap.src'] mimetypes = [] # NOTE: # * multiline with trailing backslash # * separator is ':' # * to embed colon as data, we must use \072 # * space after separator is not allowed (mayve) tokens = { 'root': [ (r'^#.*', Comment), (r'^[^\s#:|]+', Name.Tag, 'names'), (r'\s+', Whitespace), ], 'names': [ (r'\n', Whitespace, '#pop'), (r':', Punctuation, 'defs'), (r'\|', Punctuation), (r'[^:|]+', Name.Attribute), ], 'defs': [ (r'(\\)(\n[ \t]*)', bygroups(Text, Whitespace)), (r'\n[ \t]*', Whitespace, '#pop:2'), (r'(#)([0-9]+)', bygroups(Operator, Number)), (r'=', Operator, 'data'), (r':', Punctuation), (r'[^\s:=#]+', Name.Class), ], 'data': [ (r'\\072', Literal), (r':', Punctuation, '#pop'), (r'[^:\\]+', Literal), # for performance (r'.', Literal), ], } class TerminfoLexer(RegexLexer): """ Lexer for terminfo database source. This is very simple and minimal. .. versionadded:: 2.1 """ name = 'Terminfo' aliases = ['terminfo'] filenames = ['terminfo', 'terminfo.src'] mimetypes = [] # NOTE: # * multiline with leading whitespace # * separator is ',' # * to embed comma as data, we can use \, # * space after separator is allowed tokens = { 'root': [ (r'^#.*$', Comment), (r'^[^\s#,|]+', Name.Tag, 'names'), (r'\s+', Whitespace), ], 'names': [ (r'\n', Whitespace, '#pop'), (r'(,)([ \t]*)', bygroups(Punctuation, Whitespace), 'defs'), (r'\|', Punctuation), (r'[^,|]+', Name.Attribute), ], 'defs': [ (r'\n[ \t]+', Whitespace), (r'\n', Whitespace, '#pop:2'), (r'(#)([0-9]+)', bygroups(Operator, Number)), (r'=', Operator, 'data'), (r'(,)([ \t]*)', bygroups(Punctuation, Whitespace)), (r'[^\s,=#]+', Name.Class), ], 'data': [ (r'\\[,\\]', Literal), (r'(,)([ \t]*)', bygroups(Punctuation, Whitespace), '#pop'), (r'[^\\,]+', Literal), # for performance (r'.', Literal), ], } class PkgConfigLexer(RegexLexer): """ Lexer for `pkg-config `_ (see also `manual page `_). .. versionadded:: 2.1 """ name = 'PkgConfig' aliases = ['pkgconfig'] filenames = ['*.pc'] mimetypes = [] tokens = { 'root': [ (r'#.*$', Comment.Single), # variable definitions (r'^(\w+)(=)', bygroups(Name.Attribute, Operator)), # keyword lines (r'^([\w.]+)(:)', bygroups(Name.Tag, Punctuation), 'spvalue'), # variable references include('interp'), # fallback (r'\s+', Whitespace), (r'[^${}#=:\n.]+', Text), (r'.', Text), ], 'interp': [ # you can escape literal "$" as "$$" (r'\$\$', Text), # variable references (r'\$\{', String.Interpol, 'curly'), ], 'curly': [ (r'\}', String.Interpol, '#pop'), (r'\w+', Name.Attribute), ], 'spvalue': [ include('interp'), (r'#.*$', Comment.Single, '#pop'), (r'\n', Whitespace, '#pop'), # fallback (r'\s+', Whitespace), (r'[^${}#\n\s]+', Text), (r'.', Text), ], } class PacmanConfLexer(RegexLexer): """ Lexer for `pacman.conf `_. Actually, IniLexer works almost fine for this format, but it yield error token. It is because pacman.conf has a form without assignment like: UseSyslog Color TotalDownload CheckSpace VerbosePkgLists These are flags to switch on. .. versionadded:: 2.1 """ name = 'PacmanConf' aliases = ['pacmanconf'] filenames = ['pacman.conf'] mimetypes = [] tokens = { 'root': [ # comment (r'#.*$', Comment.Single), # section header (r'^(\s*)(\[.*?\])(\s*)$', bygroups(Whitespace, Keyword, Whitespace)), # variable definitions # (Leading space is allowed...) 
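            # e.g. "IgnorePkg = linux" -- the name, surrounding whitespace
            # and "=" are captured here; the value after "=" falls through
            # to the fallback rules below (illustrative example, added)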
(r'(\w+)(\s*)(=)', bygroups(Name.Attribute, Whitespace, Operator)), # flags to on (r'^(\s*)(\w+)(\s*)$', bygroups(Whitespace, Name.Attribute, Whitespace)), # built-in special values (words(( '$repo', # repository '$arch', # architecture '%o', # outfile '%u', # url ), suffix=r'\b'), Name.Variable), # fallback (r'\s+', Whitespace), (r'.', Text), ], } class AugeasLexer(RegexLexer): """ Lexer for `Augeas `_. .. versionadded:: 2.4 """ name = 'Augeas' aliases = ['augeas'] filenames = ['*.aug'] tokens = { 'root': [ (r'(module)(\s*)([^\s=]+)', bygroups(Keyword.Namespace, Whitespace, Name.Namespace)), (r'(let)(\s*)([^\s=]+)', bygroups(Keyword.Declaration, Whitespace, Name.Variable)), (r'(del|store|value|counter|seq|key|label|autoload|incl|excl|transform|test|get|put)(\s+)', bygroups(Name.Builtin, Whitespace)), (r'(\()([^:]+)(\:)(unit|string|regexp|lens|tree|filter)(\))', bygroups(Punctuation, Name.Variable, Punctuation, Keyword.Type, Punctuation)), (r'\(\*', Comment.Multiline, 'comment'), (r'[*+\-.;=?|]', Operator), (r'[()\[\]{}]', Operator), (r'"', String.Double, 'string'), (r'\/', String.Regex, 'regex'), (r'([A-Z]\w*)(\.)(\w+)', bygroups(Name.Namespace, Punctuation, Name.Variable)), (r'.', Name.Variable), (r'\s+', Whitespace), ], 'string': [ (r'\\.', String.Escape), (r'[^"]', String.Double), (r'"', String.Double, '#pop'), ], 'regex': [ (r'\\.', String.Escape), (r'[^/]', String.Regex), (r'\/', String.Regex, '#pop'), ], 'comment': [ (r'[^*)]', Comment.Multiline), (r'\(\*', Comment.Multiline, '#push'), (r'\*\)', Comment.Multiline, '#pop'), (r'[)*]', Comment.Multiline) ], } class TOMLLexer(RegexLexer): """ Lexer for `TOML `_, a simple language for config files. .. versionadded:: 2.4 """ name = 'TOML' aliases = ['toml'] filenames = ['*.toml', 'Pipfile', 'poetry.lock'] tokens = { 'root': [ # Table (r'^(\s*)(\[.*?\])$', bygroups(Whitespace, Keyword)), # Basics, comments, strings (r'[ \t]+', Whitespace), (r'\n', Whitespace), (r'#.*?$', Comment.Single), # Basic string (r'"(\\\\|\\[^\\]|[^"\\])*"', String), # Literal string (r'\'\'\'(.*)\'\'\'', String), (r'\'[^\']*\'', String), (r'(true|false)$', Keyword.Constant), (r'[a-zA-Z_][\w\-]*', Name), # Datetime # TODO this needs to be expanded, as TOML is rather flexible: # https://github.com/toml-lang/toml#offset-date-time (r'\d{4}-\d{2}-\d{2}(?:T| )\d{2}:\d{2}:\d{2}(?:Z|[-+]\d{2}:\d{2})', Number.Integer), # Numbers (r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?', Number.Float), (r'\d+[eE][+-]?[0-9]+j?', Number.Float), # Handle +-inf, +-infinity, +-nan (r'[+-]?(?:(inf(?:inity)?)|nan)', Number.Float), (r'[+-]?\d+', Number.Integer), # Punctuation (r'[]{}:(),;[]', Punctuation), (r'\.', Punctuation), # Operators (r'=', Operator) ] } class NestedTextLexer(RegexLexer): """ Lexer for `NextedText `_, a human-friendly data format. .. 
versionadded:: 2.9 """ name = 'NestedText' aliases = ['nestedtext', 'nt'] filenames = ['*.nt'] _quoted_dict_item = r'^(\s*)({0})(.*?)({0}: ?)(.*?)(\s*)$' tokens = { 'root': [ (r'^(\s*)(#.*?)$', bygroups(Whitespace, Comment)), (r'^(\s*)(>)( ?)(.*?)(\s*)$', bygroups(Whitespace, Punctuation, Whitespace, String, Whitespace)), (r'^(\s*)(-)( ?)(.*?)(\s*)$', bygroups(Whitespace, Punctuation, Whitespace, String, Whitespace)), (_quoted_dict_item.format("'"), bygroups(Whitespace, Punctuation, Name, Punctuation, String, Whitespace)), (_quoted_dict_item.format('"'), bygroups(Whitespace, Punctuation, Name, Punctuation, String, Whitespace)), (r'^(\s*)(.*?)(:)( ?)(.*?)(\s*)$', bygroups(Whitespace, Name, Punctuation, Whitespace, String, Whitespace)), ], } class SingularityLexer(RegexLexer): """ Lexer for `Singularity definition files `_. .. versionadded:: 2.6 """ name = 'Singularity' aliases = ['singularity'] filenames = ['*.def', 'Singularity'] flags = re.IGNORECASE | re.MULTILINE | re.DOTALL _headers = r'^(\s*)(bootstrap|from|osversion|mirrorurl|include|registry|namespace|includecmd)(:)' _section = r'^(%(?:pre|post|setup|environment|help|labels|test|runscript|files|startscript))(\s*)' _appsect = r'^(%app(?:install|help|run|labels|env|test|files))(\s*)' tokens = { 'root': [ (_section, bygroups(Generic.Heading, Whitespace), 'script'), (_appsect, bygroups(Generic.Heading, Whitespace), 'script'), (_headers, bygroups(Whitespace, Keyword, Text)), (r'\s*#.*?\n', Comment), (r'\b(([0-9]+\.?[0-9]*)|(\.[0-9]+))\b', Number), (r'[ \t]+', Whitespace), (r'(?!^\s*%).', Text), ], 'script': [ (r'(.+?(?=^\s*%))|(.*)', using(BashLexer), '#pop'), ], } def analyse_text(text): """This is a quite simple script file, but there are a few keywords which seem unique to this language.""" result = 0 if re.search(r'\b(?:osversion|includecmd|mirrorurl)\b', text, re.IGNORECASE): result += 0.5 if re.search(SingularityLexer._section[1:], text): result += 0.49 return result pygments-2.11.2/pygments/lexers/varnish.py0000644000175000017500000001611314165547207020534 0ustar carstencarsten""" pygments.lexers.varnish ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for Varnish configuration :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, bygroups, using, this, \ inherit, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Literal __all__ = ['VCLLexer', 'VCLSnippetLexer'] class VCLLexer(RegexLexer): """ For Varnish Configuration Language (VCL). .. versionadded:: 2.2 """ name = 'VCL' aliases = ['vcl'] filenames = ['*.vcl'] mimetypes = ['text/x-vclsrc'] def analyse_text(text): # If the very first line is 'vcl 4.0;' it's pretty much guaranteed # that this is VCL if text.startswith('vcl 4.0;'): return 1.0 # Skip over comments and blank lines # This is accurate enough that returning 0.9 is reasonable. # Almost no VCL files start without some comments. 
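        # (added illustrative note) a file beginning
        #   "# managed by configuration management\nvcl 4.0;\n..."
        # still scores 0.9, because the marker appears on a later line
        # within the first 1000 characters.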
elif '\nvcl 4.0;' in text[:1000]: return 0.9 tokens = { 'probe': [ include('whitespace'), include('comments'), (r'(\.\w+)(\s*=\s*)([^;]*)(;)', bygroups(Name.Attribute, Operator, using(this), Punctuation)), (r'\}', Punctuation, '#pop'), ], 'acl': [ include('whitespace'), include('comments'), (r'[!/]+', Operator), (r';', Punctuation), (r'\d+', Number), (r'\}', Punctuation, '#pop'), ], 'backend': [ include('whitespace'), (r'(\.probe)(\s*=\s*)(\w+)(;)', bygroups(Name.Attribute, Operator, Name.Variable.Global, Punctuation)), (r'(\.probe)(\s*=\s*)(\{)', bygroups(Name.Attribute, Operator, Punctuation), 'probe'), (r'(\.\w+\b)(\s*=\s*)([^;\s]*)(\s*;)', bygroups(Name.Attribute, Operator, using(this), Punctuation)), (r'\{', Punctuation, '#push'), (r'\}', Punctuation, '#pop'), ], 'statements': [ (r'(\d\.)?\d+[sdwhmy]', Literal.Date), (r'(\d\.)?\d+ms', Literal.Date), (r'(vcl_pass|vcl_hash|vcl_hit|vcl_init|vcl_backend_fetch|vcl_pipe|' r'vcl_backend_response|vcl_synth|vcl_deliver|vcl_backend_error|' r'vcl_fini|vcl_recv|vcl_purge|vcl_miss)\b', Name.Function), (r'(pipe|retry|hash|synth|deliver|purge|abandon|lookup|pass|fail|ok|' r'miss|fetch|restart)\b', Name.Constant), (r'(beresp|obj|resp|req|req_top|bereq)\.http\.[a-zA-Z_-]+\b', Name.Variable), (words(( 'obj.status', 'req.hash_always_miss', 'beresp.backend', 'req.esi_level', 'req.can_gzip', 'beresp.ttl', 'obj.uncacheable', 'req.ttl', 'obj.hits', 'client.identity', 'req.hash_ignore_busy', 'obj.reason', 'req.xid', 'req_top.proto', 'beresp.age', 'obj.proto', 'obj.age', 'local.ip', 'beresp.uncacheable', 'req.method', 'beresp.backend.ip', 'now', 'obj.grace', 'req.restarts', 'beresp.keep', 'req.proto', 'resp.proto', 'bereq.xid', 'bereq.between_bytes_timeout', 'req.esi', 'bereq.first_byte_timeout', 'bereq.method', 'bereq.connect_timeout', 'beresp.do_gzip', 'resp.status', 'beresp.do_gunzip', 'beresp.storage_hint', 'resp.is_streaming', 'beresp.do_stream', 'req_top.method', 'bereq.backend', 'beresp.backend.name', 'beresp.status', 'req.url', 'obj.keep', 'obj.ttl', 'beresp.reason', 'bereq.retries', 'resp.reason', 'bereq.url', 'beresp.do_esi', 'beresp.proto', 'client.ip', 'bereq.proto', 'server.hostname', 'remote.ip', 'req.backend_hint', 'server.identity', 'req_top.url', 'beresp.grace', 'beresp.was_304', 'server.ip', 'bereq.uncacheable'), suffix=r'\b'), Name.Variable), (r'[!%&+*\-,/<.}{>=|~]+', Operator), (r'[();]', Punctuation), (r'[,]+', Punctuation), (words(('hash_data', 'regsub', 'regsuball', 'if', 'else', 'elsif', 'elif', 'synth', 'synthetic', 'ban', 'return', 'set', 'unset', 'import', 'include', 'new', 'rollback', 'call'), suffix=r'\b'), Keyword), (r'storage\.\w+\.\w+\b', Name.Variable), (words(('true', 'false')), Name.Builtin), (r'\d+\b', Number), (r'(backend)(\s+\w+)(\s*\{)', bygroups(Keyword, Name.Variable.Global, Punctuation), 'backend'), (r'(probe\s)(\s*\w+\s)(\{)', bygroups(Keyword, Name.Variable.Global, Punctuation), 'probe'), (r'(acl\s)(\s*\w+\s)(\{)', bygroups(Keyword, Name.Variable.Global, Punctuation), 'acl'), (r'(vcl )(4.0)(;)$', bygroups(Keyword.Reserved, Name.Constant, Punctuation)), (r'(sub\s+)([a-zA-Z]\w*)(\s*\{)', bygroups(Keyword, Name.Function, Punctuation)), (r'([a-zA-Z_]\w*)' r'(\.)' r'([a-zA-Z_]\w*)' r'(\s*\(.*\))', bygroups(Name.Function, Punctuation, Name.Function, using(this))), (r'[a-zA-Z_]\w*', Name), ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline), ], 'comments': [ (r'#.*$', Comment), (r'/\*', Comment.Multiline, 'comment'), 
(r'//.*$', Comment), ], 'string': [ (r'"', String, '#pop'), (r'[^"\n]+', String), # all other characters ], 'multistring': [ (r'[^"}]', String), (r'"\}', String, '#pop'), (r'["}]', String), ], 'whitespace': [ (r'L?"', String, 'string'), (r'\{"', String, 'multistring'), (r'\n', Text), (r'\s+', Text), (r'\\\n', Text), # line continuation ], 'root': [ include('whitespace'), include('comments'), include('statements'), (r'\s+', Text), ], } class VCLSnippetLexer(VCLLexer): """ For Varnish Configuration Language snippets. .. versionadded:: 2.2 """ name = 'VCLSnippets' aliases = ['vclsnippets', 'vclsnippet'] mimetypes = ['text/x-vclsnippet'] filenames = [] def analyse_text(text): # override method inherited from VCLLexer return 0 tokens = { 'snippetspre': [ (r'\.\.\.+', Comment), (r'(bereq|req|req_top|resp|beresp|obj|client|server|local|remote|' r'storage)($|\.\*)', Name.Variable), ], 'snippetspost': [ (r'(backend)\b', Keyword.Reserved), ], 'root': [ include('snippetspre'), inherit, include('snippetspost'), ], } pygments-2.11.2/pygments/lexers/nit.py0000644000175000017500000000523714165547207017661 0ustar carstencarsten""" pygments.lexers.nit ~~~~~~~~~~~~~~~~~~~ Lexer for the Nit language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation __all__ = ['NitLexer'] class NitLexer(RegexLexer): """ For `nit `_ source. .. versionadded:: 2.0 """ name = 'Nit' aliases = ['nit'] filenames = ['*.nit'] tokens = { 'root': [ (r'#.*?$', Comment.Single), (words(( 'package', 'module', 'import', 'class', 'abstract', 'interface', 'universal', 'enum', 'end', 'fun', 'type', 'init', 'redef', 'isa', 'do', 'readable', 'writable', 'var', 'intern', 'extern', 'public', 'protected', 'private', 'intrude', 'if', 'then', 'else', 'while', 'loop', 'for', 'in', 'and', 'or', 'not', 'implies', 'return', 'continue', 'break', 'abort', 'assert', 'new', 'is', 'once', 'super', 'self', 'true', 'false', 'nullable', 'null', 'as', 'isset', 'label', '__debug__'), suffix=r'(?=[\r\n\t( ])'), Keyword), (r'[A-Z]\w*', Name.Class), (r'"""(([^\'\\]|\\.)|\\r|\\n)*((\{\{?)?(""?\{\{?)*""""*)', String), # Simple long string (r'\'\'\'(((\\.|[^\'\\])|\\r|\\n)|\'((\\.|[^\'\\])|\\r|\\n)|' r'\'\'((\\.|[^\'\\])|\\r|\\n))*\'\'\'', String), # Simple long string alt (r'"""(([^\'\\]|\\.)|\\r|\\n)*((""?)?(\{\{?""?)*\{\{\{\{*)', String), # Start long string (r'\}\}\}(((\\.|[^\'\\])|\\r|\\n))*(""?)?(\{\{?""?)*\{\{\{\{*', String), # Mid long string (r'\}\}\}(((\\.|[^\'\\])|\\r|\\n))*(\{\{?)?(""?\{\{?)*""""*', String), # End long string (r'"(\\.|([^"}{\\]))*"', String), # Simple String (r'"(\\.|([^"}{\\]))*\{', String), # Start string (r'\}(\\.|([^"}{\\]))*\{', String), # Mid String (r'\}(\\.|([^"}{\\]))*"', String), # End String (r'(\'[^\'\\]\')|(\'\\.\')', String.Char), (r'[0-9]+', Number.Integer), (r'[0-9]*.[0-9]+', Number.Float), (r'0(x|X)[0-9A-Fa-f]+', Number.Hex), (r'[a-z]\w*', Name), (r'_\w+', Name.Variable.Instance), (r'==|!=|<==>|>=|>>|>|<=|<<|<|\+|-|=|/|\*|%|\+=|-=|!|@', Operator), (r'\(|\)|\[|\]|,|\.\.\.|\.\.|\.|::|:', Punctuation), (r'`\{[^`]*`\}', Text), # Extern blocks won't be Lexed by Nit (r'[\r\n\t ]+', Text), ], } pygments-2.11.2/pygments/lexers/hdl.py0000644000175000017500000005377014165547207017643 0ustar carstencarsten""" pygments.lexers.hdl ~~~~~~~~~~~~~~~~~~~ Lexers for hardware descriptor languages. 
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, include, using, this, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['VerilogLexer', 'SystemVerilogLexer', 'VhdlLexer'] class VerilogLexer(RegexLexer): """ For verilog source code with preprocessor directives. .. versionadded:: 1.4 """ name = 'verilog' aliases = ['verilog', 'v'] filenames = ['*.v'] mimetypes = ['text/x-verilog'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/[*].*?[*]/)+' tokens = { 'root': [ (r'^\s*`define', Comment.Preproc, 'macro'), (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(String.Escape, Whitespace)), # line continuation (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'[{}#@]', Punctuation), (r'L?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'([0-9]+)|(\'h)[0-9a-fA-F]+', Number.Hex), (r'([0-9]+)|(\'b)[01]+', Number.Bin), (r'([0-9]+)|(\'d)[0-9]+', Number.Integer), (r'([0-9]+)|(\'o)[0-7]+', Number.Oct), (r'\'[01xz]', Number), (r'\d+[Ll]?', Number.Integer), (r'[~!%^&*+=|?:<>/-]', Operator), (r'[()\[\],.;\']', Punctuation), (r'`[a-zA-Z_]\w*', Name.Constant), (r'^(\s*)(package)(\s+)', bygroups(Whitespace, Keyword.Namespace, Text)), (r'^(\s*)(import)(\s+)', bygroups(Whitespace, Keyword.Namespace, Text), 'import'), (words(( 'always', 'always_comb', 'always_ff', 'always_latch', 'and', 'assign', 'automatic', 'begin', 'break', 'buf', 'bufif0', 'bufif1', 'case', 'casex', 'casez', 'cmos', 'const', 'continue', 'deassign', 'default', 'defparam', 'disable', 'do', 'edge', 'else', 'end', 'endcase', 'endfunction', 'endgenerate', 'endmodule', 'endpackage', 'endprimitive', 'endspecify', 'endtable', 'endtask', 'enum', 'event', 'final', 'for', 'force', 'forever', 'fork', 'function', 'generate', 'genvar', 'highz0', 'highz1', 'if', 'initial', 'inout', 'input', 'integer', 'join', 'large', 'localparam', 'macromodule', 'medium', 'module', 'nand', 'negedge', 'nmos', 'nor', 'not', 'notif0', 'notif1', 'or', 'output', 'packed', 'parameter', 'pmos', 'posedge', 'primitive', 'pull0', 'pull1', 'pulldown', 'pullup', 'rcmos', 'ref', 'release', 'repeat', 'return', 'rnmos', 'rpmos', 'rtran', 'rtranif0', 'rtranif1', 'scalared', 'signed', 'small', 'specify', 'specparam', 'strength', 'string', 'strong0', 'strong1', 'struct', 'table', 'task', 'tran', 'tranif0', 'tranif1', 'type', 'typedef', 'unsigned', 'var', 'vectored', 'void', 'wait', 'weak0', 'weak1', 'while', 'xnor', 'xor'), suffix=r'\b'), Keyword), (words(( 'accelerate', 'autoexpand_vectornets', 'celldefine', 'default_nettype', 'else', 'elsif', 'endcelldefine', 'endif', 'endprotect', 'endprotected', 'expand_vectornets', 'ifdef', 'ifndef', 'include', 'noaccelerate', 'noexpand_vectornets', 'noremove_gatenames', 'noremove_netnames', 'nounconnected_drive', 'protect', 'protected', 'remove_gatenames', 'remove_netnames', 'resetall', 'timescale', 'unconnected_drive', 'undef'), prefix=r'`', suffix=r'\b'), Comment.Preproc), (words(( 'bits', 'bitstoreal', 'bitstoshortreal', 'countdrivers', 'display', 'fclose', 'fdisplay', 'finish', 'floor', 'fmonitor', 'fopen', 'fstrobe', 'fwrite', 'getpattern', 'history', 'incsave', 'input', 'itor', 'key', 'list', 'log', 'monitor', 'monitoroff', 'monitoron', 
'nokey', 'nolog', 'printtimescale', 'random', 'readmemb', 'readmemh', 'realtime', 'realtobits', 'reset', 'reset_count', 'reset_value', 'restart', 'rtoi', 'save', 'scale', 'scope', 'shortrealtobits', 'showscopes', 'showvariables', 'showvars', 'sreadmemb', 'sreadmemh', 'stime', 'stop', 'strobe', 'time', 'timeformat', 'write'), prefix=r'\$', suffix=r'\b'), Name.Builtin), (words(( 'byte', 'shortint', 'int', 'longint', 'integer', 'time', 'bit', 'logic', 'reg', 'supply0', 'supply1', 'tri', 'triand', 'trior', 'tri0', 'tri1', 'trireg', 'uwire', 'wire', 'wand', 'wor' 'shortreal', 'real', 'realtime'), suffix=r'\b'), Keyword.Type), (r'[a-zA-Z_]\w*:(?!:)', Name.Label), (r'\$?[a-zA-Z_]\w*', Name), (r'\\(\S+)', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'(\\)(\n)', bygroups(String.Escape, Whitespace)), # line continuation (r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'//.*?\n', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Whitespace, '#pop'), ], 'import': [ (r'[\w:]+\*?', Name.Namespace, '#pop') ] } def analyse_text(text): """Verilog code will use one of reg/wire/assign for sure, and that is not common elsewhere.""" result = 0 if 'reg' in text: result += 0.1 if 'wire' in text: result += 0.1 if 'assign' in text: result += 0.1 return result class SystemVerilogLexer(RegexLexer): """ Extends verilog lexer to recognise all SystemVerilog keywords from IEEE 1800-2009 standard. .. versionadded:: 1.5 """ name = 'systemverilog' aliases = ['systemverilog', 'sv'] filenames = ['*.sv', '*.svh'] mimetypes = ['text/x-systemverilog'] #: optional Comment or Whitespace _ws = r'(?:\s|//.*?\n|/[*].*?[*]/)+' tokens = { 'root': [ (r'^(\s*)(`define)', bygroups(Whitespace, Comment.Preproc), 'macro'), (r'^(\s*)(package)(\s+)', bygroups(Whitespace, Keyword.Namespace, Whitespace)), (r'^(\s*)(import)(\s+)', bygroups(Whitespace, Keyword.Namespace, Whitespace), 'import'), (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(String.Escape, Whitespace)), # line continuation (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'[{}#@]', Punctuation), (r'L?"', String, 'string'), (r"L?'(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\'\n])'", String.Char), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'([1-9][_0-9]*)?\s*\'[sS]?[bB]\s*[xXzZ?01][_xXzZ?01]*', Number.Bin), (r'([1-9][_0-9]*)?\s*\'[sS]?[oO]\s*[xXzZ?0-7][_xXzZ?0-7]*', Number.Oct), (r'([1-9][_0-9]*)?\s*\'[sS]?[dD]\s*[xXzZ?0-9][_xXzZ?0-9]*', Number.Integer), (r'([1-9][_0-9]*)?\s*\'[sS]?[hH]\s*[xXzZ?0-9a-fA-F][_xXzZ?0-9a-fA-F]*', Number.Hex), (r'\'[01xXzZ]', Number), (r'[0-9][_0-9]*', Number.Integer), (r'[~!%^&*+=|?:<>/-]', Operator), (words(('inside', 'dist'), suffix=r'\b'), Operator.Word), (r'[()\[\],.;\'$]', Punctuation), (r'`[a-zA-Z_]\w*', Name.Constant), (words(( 'accept_on', 'alias', 'always', 'always_comb', 'always_ff', 'always_latch', 'and', 'assert', 'assign', 'assume', 'automatic', 'before', 'begin', 'bind', 'bins', 'binsof', 'break', 'buf', 'bufif0', 'bufif1', 'case', 'casex', 'casez', 'cell', 'checker', 'clocking', 'cmos', 'config', 'constraint', 'context', 'continue', 'cover', 'covergroup', 'coverpoint', 'cross', 'deassign', 'default', 'defparam', 'design', 'disable', 'do', 'edge', 'else', 'end', 'endcase', 'endchecker', 
'endclocking', 'endconfig', 'endfunction', 'endgenerate', 'endgroup', 'endinterface', 'endmodule', 'endpackage', 'endprimitive', 'endprogram', 'endproperty', 'endsequence', 'endspecify', 'endtable', 'endtask', 'enum', 'eventually', 'expect', 'export', 'extern', 'final', 'first_match', 'for', 'force', 'foreach', 'forever', 'fork', 'forkjoin', 'function', 'generate', 'genvar', 'global', 'highz0', 'highz1', 'if', 'iff', 'ifnone', 'ignore_bins', 'illegal_bins', 'implies', 'implements', 'import', 'incdir', 'include', 'initial', 'inout', 'input', 'instance', 'interconnect', 'interface', 'intersect', 'join', 'join_any', 'join_none', 'large', 'let', 'liblist', 'library', 'local', 'localparam', 'macromodule', 'matches', 'medium', 'modport', 'module', 'nand', 'negedge', 'nettype', 'new', 'nexttime', 'nmos', 'nor', 'noshowcancelled', 'not', 'notif0', 'notif1', 'null', 'or', 'output', 'package', 'packed', 'parameter', 'pmos', 'posedge', 'primitive', 'priority', 'program', 'property', 'protected', 'pull0', 'pull1', 'pulldown', 'pullup', 'pulsestyle_ondetect', 'pulsestyle_onevent', 'pure', 'rand', 'randc', 'randcase', 'randsequence', 'rcmos', 'ref', 'reject_on', 'release', 'repeat', 'restrict', 'return', 'rnmos', 'rpmos', 'rtran', 'rtranif0', 'rtranif1', 's_always', 's_eventually', 's_nexttime', 's_until', 's_until_with', 'scalared', 'sequence', 'showcancelled', 'small', 'soft', 'solve', 'specify', 'specparam', 'static', 'strong', 'strong0', 'strong1', 'struct', 'super', 'sync_accept_on', 'sync_reject_on', 'table', 'tagged', 'task', 'this', 'throughout', 'timeprecision', 'timeunit', 'tran', 'tranif0', 'tranif1', 'typedef', 'union', 'unique', 'unique0', 'until', 'until_with', 'untyped', 'use', 'vectored', 'virtual', 'wait', 'wait_order', 'weak', 'weak0', 'weak1', 'while', 'wildcard', 'with', 'within', 'xnor', 'xor'), suffix=r'\b'), Keyword), (r'(class)(\s+)([a-zA-Z_]\w*)', bygroups(Keyword.Declaration, Whitespace, Name.Class)), (r'(extends)(\s+)([a-zA-Z_]\w*)', bygroups(Keyword.Declaration, Whitespace, Name.Class)), (r'(endclass\b)(?:(\s*)(:)(\s*)([a-zA-Z_]\w*))?', bygroups(Keyword.Declaration, Whitespace, Punctuation, Whitespace, Name.Class)), (words(( # Variable types 'bit', 'byte', 'chandle', 'const', 'event', 'int', 'integer', 'logic', 'longint', 'real', 'realtime', 'reg', 'shortint', 'shortreal', 'signed', 'string', 'time', 'type', 'unsigned', 'var', 'void', # Net types 'supply0', 'supply1', 'tri', 'triand', 'trior', 'trireg', 'tri0', 'tri1', 'uwire', 'wand', 'wire', 'wor'), suffix=r'\b'), Keyword.Type), (words(( '`__FILE__', '`__LINE__', '`begin_keywords', '`celldefine', '`default_nettype', '`define', '`else', '`elsif', '`end_keywords', '`endcelldefine', '`endif', '`ifdef', '`ifndef', '`include', '`line', '`nounconnected_drive', '`pragma', '`resetall', '`timescale', '`unconnected_drive', '`undef', '`undefineall'), suffix=r'\b'), Comment.Preproc), (words(( # Simulation control tasks (20.2) '$exit', '$finish', '$stop', # Simulation time functions (20.3) '$realtime', '$stime', '$time', # Timescale tasks (20.4) '$printtimescale', '$timeformat', # Conversion functions '$bitstoreal', '$bitstoshortreal', '$cast', '$itor', '$realtobits', '$rtoi', '$shortrealtobits', '$signed', '$unsigned', # Data query functions (20.6) '$bits', '$isunbounded', '$typename', # Array query functions (20.7) '$dimensions', '$high', '$increment', '$left', '$low', '$right', '$size', '$unpacked_dimensions', # Math functions (20.8) '$acos', '$acosh', '$asin', '$asinh', '$atan', '$atan2', '$atanh', '$ceil', '$clog2', '$cos', 
'$cosh', '$exp', '$floor', '$hypot', '$ln', '$log10', '$pow', '$sin', '$sinh', '$sqrt', '$tan', '$tanh', # Bit vector system functions (20.9) '$countbits', '$countones', '$isunknown', '$onehot', '$onehot0', # Severity tasks (20.10) '$info', '$error', '$fatal', '$warning', # Assertion control tasks (20.12) '$assertcontrol', '$assertfailoff', '$assertfailon', '$assertkill', '$assertnonvacuouson', '$assertoff', '$asserton', '$assertpassoff', '$assertpasson', '$assertvacuousoff', # Sampled value system functions (20.13) '$changed', '$changed_gclk', '$changing_gclk', '$falling_gclk', '$fell', '$fell_gclk', '$future_gclk', '$past', '$past_gclk', '$rising_gclk', '$rose', '$rose_gclk', '$sampled', '$stable', '$stable_gclk', '$steady_gclk', # Coverage control functions (20.14) '$coverage_control', '$coverage_get', '$coverage_get_max', '$coverage_merge', '$coverage_save', '$get_coverage', '$load_coverage_db', '$set_coverage_db_name', # Probabilistic distribution functions (20.15) '$dist_chi_square', '$dist_erlang', '$dist_exponential', '$dist_normal', '$dist_poisson', '$dist_t', '$dist_uniform', '$random', # Stochastic analysis tasks and functions (20.16) '$q_add', '$q_exam', '$q_full', '$q_initialize', '$q_remove', # PLA modeling tasks (20.17) '$async$and$array', '$async$and$plane', '$async$nand$array', '$async$nand$plane', '$async$nor$array', '$async$nor$plane', '$async$or$array', '$async$or$plane', '$sync$and$array', '$sync$and$plane', '$sync$nand$array', '$sync$nand$plane', '$sync$nor$array', '$sync$nor$plane', '$sync$or$array', '$sync$or$plane', # Miscellaneous tasks and functions (20.18) '$system', # Display tasks (21.2) '$display', '$displayb', '$displayh', '$displayo', '$monitor', '$monitorb', '$monitorh', '$monitoro', '$monitoroff', '$monitoron', '$strobe', '$strobeb', '$strobeh', '$strobeo', '$write', '$writeb', '$writeh', '$writeo', # File I/O tasks and functions (21.3) '$fclose', '$fdisplay', '$fdisplayb', '$fdisplayh', '$fdisplayo', '$feof', '$ferror', '$fflush', '$fgetc', '$fgets', '$fmonitor', '$fmonitorb', '$fmonitorh', '$fmonitoro', '$fopen', '$fread', '$fscanf', '$fseek', '$fstrobe', '$fstrobeb', '$fstrobeh', '$fstrobeo', '$ftell', '$fwrite', '$fwriteb', '$fwriteh', '$fwriteo', '$rewind', '$sformat', '$sformatf', '$sscanf', '$swrite', '$swriteb', '$swriteh', '$swriteo', '$ungetc', # Memory load tasks (21.4) '$readmemb', '$readmemh', # Memory dump tasks (21.5) '$writememb', '$writememh', # Command line input (21.6) '$test$plusargs', '$value$plusargs', # VCD tasks (21.7) '$dumpall', '$dumpfile', '$dumpflush', '$dumplimit', '$dumpoff', '$dumpon', '$dumpports', '$dumpportsall', '$dumpportsflush', '$dumpportslimit', '$dumpportsoff', '$dumpportson', '$dumpvars', ), suffix=r'\b'), Name.Builtin), (r'[a-zA-Z_]\w*:(?!:)', Name.Label), (r'\$?[a-zA-Z_]\w*', Name), (r'\\(\S+)', Name), ], 'string': [ (r'"', String, '#pop'), (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), (r'[^\\"\n]+', String), # all other characters (r'(\\)(\n)', bygroups(String.Escape, Whitespace)), # line continuation (r'\\', String), # stray backslash ], 'macro': [ (r'[^/\n]+', Comment.Preproc), (r'/[*](.|\n)*?[*]/', Comment.Multiline), (r'//.*?$', Comment.Single, '#pop'), (r'/', Comment.Preproc), (r'(?<=\\)\n', Comment.Preproc), (r'\n', Whitespace, '#pop'), ], 'import': [ (r'[\w:]+\*?', Name.Namespace, '#pop') ] } class VhdlLexer(RegexLexer): """ For VHDL source code. .. 
versionadded:: 1.5 """ name = 'vhdl' aliases = ['vhdl'] filenames = ['*.vhdl', '*.vhd'] mimetypes = ['text/x-vhdl'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(String.Escape, Whitespace)), # line continuation (r'--.*?$', Comment.Single), (r"'(U|X|0|1|Z|W|L|H|-)'", String.Char), (r'[~!%^&*+=|?:<>/-]', Operator), (r"'[a-z_]\w*", Name.Attribute), (r'[()\[\],.;\']', Punctuation), (r'"[^\n\\"]*"', String), (r'(library)(\s+)([a-z_]\w*)', bygroups(Keyword, Whitespace, Name.Namespace)), (r'(use)(\s+)(entity)', bygroups(Keyword, Whitespace, Keyword)), (r'(use)(\s+)([a-z_][\w.]*\.)(all)', bygroups(Keyword, Whitespace, Name.Namespace, Keyword)), (r'(use)(\s+)([a-z_][\w.]*)', bygroups(Keyword, Whitespace, Name.Namespace)), (r'(std|ieee)(\.[a-z_]\w*)', bygroups(Name.Namespace, Name.Namespace)), (words(('std', 'ieee', 'work'), suffix=r'\b'), Name.Namespace), (r'(entity|component)(\s+)([a-z_]\w*)', bygroups(Keyword, Whitespace, Name.Class)), (r'(architecture|configuration)(\s+)([a-z_]\w*)(\s+)' r'(of)(\s+)([a-z_]\w*)(\s+)(is)', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Keyword, Whitespace, Name.Class, Whitespace, Keyword)), (r'([a-z_]\w*)(:)(\s+)(process|for)', bygroups(Name.Class, Operator, Whitespace, Keyword)), (r'(end)(\s+)', bygroups(using(this), Whitespace), 'endblock'), include('types'), include('keywords'), include('numbers'), (r'[a-z_]\w*', Name), ], 'endblock': [ include('keywords'), (r'[a-z_]\w*', Name.Class), (r'\s+', Whitespace), (r';', Punctuation, '#pop'), ], 'types': [ (words(( 'boolean', 'bit', 'character', 'severity_level', 'integer', 'time', 'delay_length', 'natural', 'positive', 'string', 'bit_vector', 'file_open_kind', 'file_open_status', 'std_ulogic', 'std_ulogic_vector', 'std_logic', 'std_logic_vector', 'signed', 'unsigned'), suffix=r'\b'), Keyword.Type), ], 'keywords': [ (words(( 'abs', 'access', 'after', 'alias', 'all', 'and', 'architecture', 'array', 'assert', 'attribute', 'begin', 'block', 'body', 'buffer', 'bus', 'case', 'component', 'configuration', 'constant', 'disconnect', 'downto', 'else', 'elsif', 'end', 'entity', 'exit', 'file', 'for', 'function', 'generate', 'generic', 'group', 'guarded', 'if', 'impure', 'in', 'inertial', 'inout', 'is', 'label', 'library', 'linkage', 'literal', 'loop', 'map', 'mod', 'nand', 'new', 'next', 'nor', 'not', 'null', 'of', 'on', 'open', 'or', 'others', 'out', 'package', 'port', 'postponed', 'procedure', 'process', 'pure', 'range', 'record', 'register', 'reject', 'rem', 'return', 'rol', 'ror', 'select', 'severity', 'signal', 'shared', 'sla', 'sll', 'sra', 'srl', 'subtype', 'then', 'to', 'transport', 'type', 'units', 'until', 'use', 'variable', 'wait', 'when', 'while', 'with', 'xnor', 'xor'), suffix=r'\b'), Keyword), ], 'numbers': [ (r'\d{1,2}#[0-9a-f_]+#?', Number.Integer), (r'\d+', Number.Integer), (r'(\d+\.\d*|\.\d+|\d+)E[+-]?\d+', Number.Float), (r'X"[0-9a-f_]+"', Number.Hex), (r'O"[0-7_]+"', Number.Oct), (r'B"[01_]+"', Number.Bin), ], } pygments-2.11.2/pygments/lexers/elpi.py0000644000175000017500000001351714165547207020020 0ustar carstencarsten""" pygments.lexers.elpi ~~~~~~~~~~~~~~~~~~~~ Lexer for the `Elpi `_ programming language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" from pygments.lexer import RegexLexer, bygroups, include from pygments.token import \ Text, Comment, Operator, Keyword, Name, String, Number __all__ = ['ElpiLexer'] class ElpiLexer(RegexLexer): """ Lexer for the `Elpi `_ programming language. .. versionadded::2.11 """ name = 'Elpi' aliases = ['elpi'] filenames = ['*.elpi'] mimetypes = ['text/x-elpi'] lcase_re = r"[a-z]" ucase_re = r"[A-Z]" digit_re = r"[0-9]" schar2_re = r"([+*^?/<>`'@#~=&!])" schar_re = r"({}|-|\$|_)".format(schar2_re) idchar_re = r"({}|{}|{}|{})".format(lcase_re,ucase_re,digit_re,schar_re) idcharstarns_re = r"({}+|(?=\.[a-z])\.{}+)".format(idchar_re,idchar_re) symbchar_re = r"({}|{}|{}|{}|:)".format(lcase_re, ucase_re, digit_re, schar_re) constant_re = r"({}{}*|{}{}*|{}{}*|_{}+)".format(ucase_re, idchar_re, lcase_re, idcharstarns_re,schar2_re, symbchar_re,idchar_re) symbol_re=r"(,|<=>|->|:-|;|\?-|->|&|=>|\bas\b|\buvar\b|<|=<|=|==|>=|>|\bi<|\bi=<|\bi>=|\bi>|\bis\b|\br<|\br=<|\br>=|\br>|\bs<|\bs=<|\bs>=|\bs>|@|::|\[\]|`->|`:|`:=|\^|-|\+|\bi-|\bi\+|r-|r\+|/|\*|\bdiv\b|\bi\*|\bmod\b|\br\*|~|\bi~|\br~)" escape_re=r"\(({}|{})\)".format(constant_re,symbol_re) const_sym_re = r"({}|{}|{})".format(constant_re,symbol_re,escape_re) tokens = { 'root': [ include('elpi') ], 'elpi': [ include('_elpi-comment'), (r"(:before|:after|:if|:name)(\s*)(\")",bygroups(Keyword.Mode,Text.Whitespace,String.Double),'elpi-string'), (r"(:index)(\s*\()",bygroups(Keyword.Mode,Text.Whitespace),'elpi-indexing-expr'), (r"\b(external pred|pred)(\s+)({})".format(const_sym_re),bygroups(Keyword.Declaration,Text.Whitespace,Name.Function),'elpi-pred-item'), (r"\b(external type|type)(\s+)(({}(,\s*)?)+)".format(const_sym_re),bygroups(Keyword.Declaration,Text.Whitespace,Name.Function),'elpi-type'), (r"\b(kind)(\s+)(({}|,)+)".format(const_sym_re),bygroups(Keyword.Declaration,Text.Whitespace,Name.Function),'elpi-type'), (r"\b(typeabbrev)(\s+)({})".format(const_sym_re),bygroups(Keyword.Declaration,Text.Whitespace,Name.Function),'elpi-type'), (r"\b(accumulate)(\s+)(\")",bygroups(Keyword.Declaration,Text.Whitespace,String.Double),'elpi-string'), (r"\b(accumulate|namespace|local)(\s+)({})".format(constant_re),bygroups(Keyword.Declaration,Text.Whitespace,Text)), (r"\b(shorten)(\s+)({}\.)".format(constant_re),bygroups(Keyword.Declaration,Text.Whitespace,Text)), (r"\b(pi|sigma)(\s+)([a-zA-Z][A-Za-z0-9_ ]*)(\\)",bygroups(Keyword.Declaration,Text.Whitespace,Name.Variable,Text)), (r"\b(constraint)(\s+)(({}(\s+)?)+)".format(const_sym_re),bygroups(Keyword.Declaration,Text.Whitespace,Name.Function),'elpi-chr-rule-start'), (r"(?=[A-Z_]){}".format(constant_re),Name.Variable), (r"(?=[a-z_]){}\\".format(constant_re),Name.Variable), (r"_",Name.Variable), (r"({}|!|=>|;)".format(symbol_re),Keyword.Declaration), (constant_re,Text), (r"\[|\]|\||=>",Keyword.Declaration), (r'"', String.Double, 'elpi-string'), (r'`', String.Double, 'elpi-btick'), (r'\'', String.Double, 'elpi-tick'), (r'\{[^\{]', Text, 'elpi-spill'), (r"\(",Text,'elpi-in-parens'), (r'\d[\d_]*', Number.Integer), (r'-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)', Number.Float), (r"[\+\*\-/\^\.]", Operator), ], '_elpi-comment': [ (r'%[^\n]*\n',Comment), (r'/\*',Comment,'elpi-multiline-comment'), (r"\s+",Text.Whitespace), ], 'elpi-multiline-comment': [ (r'\*/',Comment,'#pop'), (r'.',Comment) ], 'elpi-indexing-expr':[ (r'[0-9 _]+',Number.Integer), (r'\)',Text,'#pop'), ], 'elpi-type': [ (r"(ctype\s+)(\")",bygroups(Keyword.Type,String.Double),'elpi-string'), (r'->',Keyword.Type), (constant_re,Keyword.Type), (r"\(|\)",Keyword.Type), 
(r"\.",Text,'#pop'), include('_elpi-comment'), ], 'elpi-chr-rule-start': [ (r"\{",Text,'elpi-chr-rule'), include('_elpi-comment'), ], 'elpi-chr-rule': [ (r"\brule\b",Keyword.Declaration), (r"\\",Keyword.Declaration), (r"\}",Text,'#pop:2'), include('elpi'), ], 'elpi-pred-item': [ (r"[io]:",Keyword.Mode,'elpi-ctype'), (r"\.",Text,'#pop'), include('_elpi-comment'), ], 'elpi-ctype': [ (r"(ctype\s+)(\")",bygroups(Keyword.Type,String.Double),'elpi-string'), (constant_re,Keyword.Type), (r"\(|\)",Keyword.Type), (r",",Text,'#pop'), (r"\.",Text,'#pop:2'), include('_elpi-comment'), ], 'elpi-btick': [ (r'[^` ]+', String.Double), (r'`', String.Double, '#pop'), ], 'elpi-tick': [ (r'[^\' ]+', String.Double), (r'\'', String.Double, '#pop'), ], 'elpi-string': [ (r'[^\"]+', String.Double), (r'"', String.Double, '#pop'), ], 'elpi-spill': [ (r'\{[^\{]', Text, '#push'), (r'\}[^\}]', Text, '#pop'), include('elpi'), ], 'elpi-in-parens': [ (r"\(", Operator, '#push'), (r"\)", Operator, '#pop'), include('elpi'), ], } pygments-2.11.2/pygments/lexers/robotframework.py0000644000175000017500000004377014165547207022136 0ustar carstencarsten""" pygments.lexers.robotframework ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for Robot Framework. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ # Copyright 2012 Nokia Siemens Networks Oyj # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import re from pygments.lexer import Lexer from pygments.token import Token __all__ = ['RobotFrameworkLexer'] HEADING = Token.Generic.Heading SETTING = Token.Keyword.Namespace IMPORT = Token.Name.Namespace TC_KW_NAME = Token.Generic.Subheading KEYWORD = Token.Name.Function ARGUMENT = Token.String VARIABLE = Token.Name.Variable COMMENT = Token.Comment SEPARATOR = Token.Punctuation SYNTAX = Token.Punctuation GHERKIN = Token.Generic.Emph ERROR = Token.Error def normalize(string, remove=''): string = string.lower() for char in remove + ' ': if char in string: string = string.replace(char, '') return string class RobotFrameworkLexer(Lexer): """ For `Robot Framework `_ test data. Supports both space and pipe separated plain text formats. .. 
versionadded:: 1.6 """ name = 'RobotFramework' aliases = ['robotframework'] filenames = ['*.robot'] mimetypes = ['text/x-robotframework'] def __init__(self, **options): options['tabsize'] = 2 options['encoding'] = 'UTF-8' Lexer.__init__(self, **options) def get_tokens_unprocessed(self, text): row_tokenizer = RowTokenizer() var_tokenizer = VariableTokenizer() index = 0 for row in text.splitlines(): for value, token in row_tokenizer.tokenize(row): for value, token in var_tokenizer.tokenize(value, token): if value: yield index, token, str(value) index += len(value) class VariableTokenizer: def tokenize(self, string, token): var = VariableSplitter(string, identifiers='$@%&') if var.start < 0 or token in (COMMENT, ERROR): yield string, token return for value, token in self._tokenize(var, string, token): if value: yield value, token def _tokenize(self, var, string, orig_token): before = string[:var.start] yield before, orig_token yield var.identifier + '{', SYNTAX yield from self.tokenize(var.base, VARIABLE) yield '}', SYNTAX if var.index is not None: yield '[', SYNTAX yield from self.tokenize(var.index, VARIABLE) yield ']', SYNTAX yield from self.tokenize(string[var.end:], orig_token) class RowTokenizer: def __init__(self): self._table = UnknownTable() self._splitter = RowSplitter() testcases = TestCaseTable() settings = SettingTable(testcases.set_default_template) variables = VariableTable() keywords = KeywordTable() self._tables = {'settings': settings, 'setting': settings, 'metadata': settings, 'variables': variables, 'variable': variables, 'testcases': testcases, 'testcase': testcases, 'tasks': testcases, 'task': testcases, 'keywords': keywords, 'keyword': keywords, 'userkeywords': keywords, 'userkeyword': keywords} def tokenize(self, row): commented = False heading = False for index, value in enumerate(self._splitter.split(row)): # First value, and every second after that, is a separator. 
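# divmod(index-1, 2) maps the raw enumeration index to the logical cell number plus a flag that is non-zero exactly for separator values (including the leading pseudo-separator at raw index 0).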
index, separator = divmod(index-1, 2) if value.startswith('#'): commented = True elif index == 0 and value.startswith('*'): self._table = self._start_table(value) heading = True yield from self._tokenize(value, index, commented, separator, heading) self._table.end_row() def _start_table(self, header): name = normalize(header, remove='*') return self._tables.get(name, UnknownTable()) def _tokenize(self, value, index, commented, separator, heading): if commented: yield value, COMMENT elif separator: yield value, SEPARATOR elif heading: yield value, HEADING else: yield from self._table.tokenize(value, index) class RowSplitter: _space_splitter = re.compile('( {2,})') _pipe_splitter = re.compile(r'((?:^| +)\|(?: +|$))') def split(self, row): splitter = (row.startswith('| ') and self._split_from_pipes or self._split_from_spaces) yield from splitter(row) yield '\n' def _split_from_spaces(self, row): yield '' # Start with (pseudo)separator similarly as with pipes yield from self._space_splitter.split(row) def _split_from_pipes(self, row): _, separator, rest = self._pipe_splitter.split(row, 1) yield separator while self._pipe_splitter.search(rest): cell, separator, rest = self._pipe_splitter.split(rest, 1) yield cell yield separator yield rest class Tokenizer: _tokens = None def __init__(self): self._index = 0 def tokenize(self, value): values_and_tokens = self._tokenize(value, self._index) self._index += 1 if isinstance(values_and_tokens, type(Token)): values_and_tokens = [(value, values_and_tokens)] return values_and_tokens def _tokenize(self, value, index): index = min(index, len(self._tokens) - 1) return self._tokens[index] def _is_assign(self, value): if value.endswith('='): value = value[:-1].strip() var = VariableSplitter(value, identifiers='$@&') return var.start == 0 and var.end == len(value) class Comment(Tokenizer): _tokens = (COMMENT,) class Setting(Tokenizer): _tokens = (SETTING, ARGUMENT) _keyword_settings = ('suitesetup', 'suiteprecondition', 'suiteteardown', 'suitepostcondition', 'testsetup', 'tasksetup', 'testprecondition', 'testteardown','taskteardown', 'testpostcondition', 'testtemplate', 'tasktemplate') _import_settings = ('library', 'resource', 'variables') _other_settings = ('documentation', 'metadata', 'forcetags', 'defaulttags', 'testtimeout','tasktimeout') _custom_tokenizer = None def __init__(self, template_setter=None): Tokenizer.__init__(self) self._template_setter = template_setter def _tokenize(self, value, index): if index == 1 and self._template_setter: self._template_setter(value) if index == 0: normalized = normalize(value) if normalized in self._keyword_settings: self._custom_tokenizer = KeywordCall(support_assign=False) elif normalized in self._import_settings: self._custom_tokenizer = ImportSetting() elif normalized not in self._other_settings: return ERROR elif self._custom_tokenizer: return self._custom_tokenizer.tokenize(value) return Tokenizer._tokenize(self, value, index) class ImportSetting(Tokenizer): _tokens = (IMPORT, ARGUMENT) class TestCaseSetting(Setting): _keyword_settings = ('setup', 'precondition', 'teardown', 'postcondition', 'template') _import_settings = () _other_settings = ('documentation', 'tags', 'timeout') def _tokenize(self, value, index): if index == 0: type = Setting._tokenize(self, value[1:-1], index) return [('[', SYNTAX), (value[1:-1], type), (']', SYNTAX)] return Setting._tokenize(self, value, index) class KeywordSetting(TestCaseSetting): _keyword_settings = ('teardown',) _other_settings = ('documentation', 'arguments', 'return', 
'timeout', 'tags') class Variable(Tokenizer): _tokens = (SYNTAX, ARGUMENT) def _tokenize(self, value, index): if index == 0 and not self._is_assign(value): return ERROR return Tokenizer._tokenize(self, value, index) class KeywordCall(Tokenizer): _tokens = (KEYWORD, ARGUMENT) def __init__(self, support_assign=True): Tokenizer.__init__(self) self._keyword_found = not support_assign self._assigns = 0 def _tokenize(self, value, index): if not self._keyword_found and self._is_assign(value): self._assigns += 1 return SYNTAX # VariableTokenizer tokenizes this later. if self._keyword_found: return Tokenizer._tokenize(self, value, index - self._assigns) self._keyword_found = True return GherkinTokenizer().tokenize(value, KEYWORD) class GherkinTokenizer: _gherkin_prefix = re.compile('^(Given|When|Then|And) ', re.IGNORECASE) def tokenize(self, value, token): match = self._gherkin_prefix.match(value) if not match: return [(value, token)] end = match.end() return [(value[:end], GHERKIN), (value[end:], token)] class TemplatedKeywordCall(Tokenizer): _tokens = (ARGUMENT,) class ForLoop(Tokenizer): def __init__(self): Tokenizer.__init__(self) self._in_arguments = False def _tokenize(self, value, index): token = self._in_arguments and ARGUMENT or SYNTAX if value.upper() in ('IN', 'IN RANGE'): self._in_arguments = True return token class _Table: _tokenizer_class = None def __init__(self, prev_tokenizer=None): self._tokenizer = self._tokenizer_class() self._prev_tokenizer = prev_tokenizer self._prev_values_on_row = [] def tokenize(self, value, index): if self._continues(value, index): self._tokenizer = self._prev_tokenizer yield value, SYNTAX else: yield from self._tokenize(value, index) self._prev_values_on_row.append(value) def _continues(self, value, index): return value == '...' 
and all(self._is_empty(t) for t in self._prev_values_on_row) def _is_empty(self, value): return value in ('', '\\') def _tokenize(self, value, index): return self._tokenizer.tokenize(value) def end_row(self): self.__init__(prev_tokenizer=self._tokenizer) class UnknownTable(_Table): _tokenizer_class = Comment def _continues(self, value, index): return False class VariableTable(_Table): _tokenizer_class = Variable class SettingTable(_Table): _tokenizer_class = Setting def __init__(self, template_setter, prev_tokenizer=None): _Table.__init__(self, prev_tokenizer) self._template_setter = template_setter def _tokenize(self, value, index): if index == 0 and normalize(value) == 'testtemplate': self._tokenizer = Setting(self._template_setter) return _Table._tokenize(self, value, index) def end_row(self): self.__init__(self._template_setter, prev_tokenizer=self._tokenizer) class TestCaseTable(_Table): _setting_class = TestCaseSetting _test_template = None _default_template = None @property def _tokenizer_class(self): if self._test_template or (self._default_template and self._test_template is not False): return TemplatedKeywordCall return KeywordCall def _continues(self, value, index): return index > 0 and _Table._continues(self, value, index) def _tokenize(self, value, index): if index == 0: if value: self._test_template = None return GherkinTokenizer().tokenize(value, TC_KW_NAME) if index == 1 and self._is_setting(value): if self._is_template(value): self._test_template = False self._tokenizer = self._setting_class(self.set_test_template) else: self._tokenizer = self._setting_class() if index == 1 and self._is_for_loop(value): self._tokenizer = ForLoop() if index == 1 and self._is_empty(value): return [(value, SYNTAX)] return _Table._tokenize(self, value, index) def _is_setting(self, value): return value.startswith('[') and value.endswith(']') def _is_template(self, value): return normalize(value) == '[template]' def _is_for_loop(self, value): return value.startswith(':') and normalize(value, remove=':') == 'for' def set_test_template(self, template): self._test_template = self._is_template_set(template) def set_default_template(self, template): self._default_template = self._is_template_set(template) def _is_template_set(self, template): return normalize(template) not in ('', '\\', 'none', '${empty}') class KeywordTable(TestCaseTable): _tokenizer_class = KeywordCall _setting_class = KeywordSetting def _is_template(self, value): return False # Following code copied directly from Robot Framework 2.7.5. 
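# VariableSplitter locates the first ${...}, @{...}, &{...} or %{...} reference in a cell and records its identifier, base name, optional [index] part and start/end offsets; VariableTokenizer above relies on those offsets to re-emit the surrounding text as SYNTAX/VARIABLE tokens.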
class VariableSplitter: def __init__(self, string, identifiers): self.identifier = None self.base = None self.index = None self.start = -1 self.end = -1 self._identifiers = identifiers self._may_have_internal_variables = False try: self._split(string) except ValueError: pass else: self._finalize() def get_replaced_base(self, variables): if self._may_have_internal_variables: return variables.replace_string(self.base) return self.base def _finalize(self): self.identifier = self._variable_chars[0] self.base = ''.join(self._variable_chars[2:-1]) self.end = self.start + len(self._variable_chars) if self._has_list_or_dict_variable_index(): self.index = ''.join(self._list_and_dict_variable_index_chars[1:-1]) self.end += len(self._list_and_dict_variable_index_chars) def _has_list_or_dict_variable_index(self): return self._list_and_dict_variable_index_chars\ and self._list_and_dict_variable_index_chars[-1] == ']' def _split(self, string): start_index, max_index = self._find_variable(string) self.start = start_index self._open_curly = 1 self._state = self._variable_state self._variable_chars = [string[start_index], '{'] self._list_and_dict_variable_index_chars = [] self._string = string start_index += 2 for index, char in enumerate(string[start_index:]): index += start_index # Giving start to enumerate only in Py 2.6+ try: self._state(char, index) except StopIteration: return if index == max_index and not self._scanning_list_variable_index(): return def _scanning_list_variable_index(self): return self._state in [self._waiting_list_variable_index_state, self._list_variable_index_state] def _find_variable(self, string): max_end_index = string.rfind('}') if max_end_index == -1: raise ValueError('No variable end found') if self._is_escaped(string, max_end_index): return self._find_variable(string[:max_end_index]) start_index = self._find_start_index(string, 1, max_end_index) if start_index == -1: raise ValueError('No variable start found') return start_index, max_end_index def _find_start_index(self, string, start, end): index = string.find('{', start, end) - 1 if index < 0: return -1 if self._start_index_is_ok(string, index): return index return self._find_start_index(string, index+2, end) def _start_index_is_ok(self, string, index): return string[index] in self._identifiers\ and not self._is_escaped(string, index) def _is_escaped(self, string, index): escaped = False while index > 0 and string[index-1] == '\\': index -= 1 escaped = not escaped return escaped def _variable_state(self, char, index): self._variable_chars.append(char) if char == '}' and not self._is_escaped(self._string, index): self._open_curly -= 1 if self._open_curly == 0: if not self._is_list_or_dict_variable(): raise StopIteration self._state = self._waiting_list_variable_index_state elif char in self._identifiers: self._state = self._internal_variable_start_state def _is_list_or_dict_variable(self): return self._variable_chars[0] in ('@','&') def _internal_variable_start_state(self, char, index): self._state = self._variable_state if char == '{': self._variable_chars.append(char) self._open_curly += 1 self._may_have_internal_variables = True else: self._variable_state(char, index) def _waiting_list_variable_index_state(self, char, index): if char != '[': raise StopIteration self._list_and_dict_variable_index_chars.append(char) self._state = self._list_variable_index_state def _list_variable_index_state(self, char, index): self._list_and_dict_variable_index_chars.append(char) if char == ']': raise StopIteration 
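# A minimal usage sketch: like any other Pygments lexer, RobotFrameworkLexer
# plugs into the standard highlight() pipeline; the test data string here is
# a made-up example, not something shipped with the package.
#
#   from pygments import highlight
#   from pygments.formatters import HtmlFormatter
#   from pygments.lexers import RobotFrameworkLexer
#
#   data = "*** Test Cases ***\nExample\n    Log    Hello\n"
#   print(highlight(data, RobotFrameworkLexer(), HtmlFormatter()))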
pygments-2.11.2/pygments/lexers/freefem.py0000644000175000017500000006466614165547207020513 0ustar carstencarsten""" pygments.lexers.freefem ~~~~~~~~~~~~~~~~~~~~~~~ Lexer for FreeFem++ language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, bygroups, inherit, words, \ default from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation from pygments.lexers.c_cpp import CLexer, CppLexer from pygments.lexers import _mql_builtins __all__ = ['FreeFemLexer'] class FreeFemLexer(CppLexer): """ For `FreeFem++ `_ source. This is an extension of the CppLexer, as the FreeFem Language is a superset of C++. .. versionadded:: 2.4 """ name = 'Freefem' aliases = ['freefem'] filenames = ['*.edp'] mimetypes = ['text/x-freefem'] # Language operators operators = {'+', '-', '*', '.*', '/', './', '%', '^', '^-1', ':', '\''} # types types = {'bool', 'border', 'complex', 'dmatrix', 'fespace', 'func', 'gslspline', 'ifstream', 'int', 'macro', 'matrix', 'mesh', 'mesh3', 'mpiComm', 'mpiGroup', 'mpiRequest', 'NewMacro', 'EndMacro', 'ofstream', 'Pmmap', 'problem', 'Psemaphore', 'real', 'solve', 'string', 'varf'} # finite element spaces fespaces = {'BDM1', 'BDM1Ortho', 'Edge03d', 'Edge13d', 'Edge23d', 'FEQF', 'HCT', 'P0', 'P03d', 'P0Edge', 'P1', 'P13d', 'P1b', 'P1b3d', 'P1bl', 'P1bl3d', 'P1dc', 'P1Edge', 'P1nc', 'P2', 'P23d', 'P2b', 'P2BR', 'P2dc', 'P2Edge', 'P2h', 'P2Morley', 'P2pnc', 'P3', 'P3dc', 'P3Edge', 'P4', 'P4dc', 'P4Edge', 'P5Edge', 'RT0', 'RT03d', 'RT0Ortho', 'RT1', 'RT1Ortho', 'RT2', 'RT2Ortho'} # preprocessor preprocessor = {'ENDIFMACRO', 'include', 'IFMACRO', 'load'} # Language keywords keywords = { 'adj', 'append', 'area', 'ARGV', 'be', 'binary', 'BoundaryEdge', 'bordermeasure', 'CG', 'Cholesky', 'cin', 'cout', 'Crout', 'default', 'diag', 'edgeOrientation', 'endl', 'false', 'ffind', 'FILE', 'find', 'fixed', 'flush', 'GMRES', 'good', 'hTriangle', 'im', 'imax', 'imin', 'InternalEdge', 'l1', 'l2', 'label', 'lenEdge', 'length', 'LINE', 'linfty', 'LU', 'm', 'max', 'measure', 'min', 'mpiAnySource', 'mpiBAND', 'mpiBXOR', 'mpiCommWorld', 'mpiLAND', 'mpiLOR', 'mpiLXOR', 'mpiMAX', 'mpiMIN', 'mpiPROD', 'mpirank', 'mpisize', 'mpiSUM', 'mpiUndefined', 'n', 'N', 'nbe', 'ndof', 'ndofK', 'noshowbase', 'noshowpos', 'notaregion', 'nt', 'nTonEdge', 'nuEdge', 'nuTriangle', 'nv', 'P', 'pi', 'precision', 'qf1pE', 'qf1pElump', 'qf1pT', 'qf1pTlump', 'qfV1', 'qfV1lump', 'qf2pE', 'qf2pT', 'qf2pT4P1', 'qfV2', 'qf3pE', 'qf4pE', 'qf5pE', 'qf5pT', 'qfV5', 'qf7pT', 'qf9pT', 'qfnbpE', 'quantile', 're', 'region', 'rfind', 'scientific', 'searchMethod', 'setw', 'showbase', 'showpos', 'sparsesolver', 'sum', 'tellp', 'true', 'UMFPACK', 'unused', 'whoinElement', 'verbosity', 'version', 'volume', 'x', 'y', 'z' } # Language shipped functions and class ( ) functions = { 'abs', 'acos', 'acosh', 'adaptmesh', 'adj', 'AffineCG', 'AffineGMRES', 'arg', 'asin', 'asinh', 'assert', 'atan', 'atan2', 'atanh', 'atof', 'atoi', 'BFGS', 'broadcast', 'buildlayers', 'buildmesh', 'ceil', 'chi', 'complexEigenValue', 'copysign', 'change', 'checkmovemesh', 'clock', 'cmaes', 'conj', 'convect', 'cos', 'cosh', 'cube', 'd', 'dd', 'dfft', 'diffnp', 'diffpos', 'dimKrylov', 'dist', 'dumptable', 'dx', 'dxx', 'dxy', 'dxz', 'dy', 'dyx', 'dyy', 'dyz', 'dz', 'dzx', 'dzy', 'dzz', 'EigenValue', 'emptymesh', 'erf', 'erfc', 'exec', 'exit', 'exp', 'fdim', 'floor', 'fmax', 'fmin', 'fmod', 'freeyams', 'getARGV', 'getline', 'gmshload', 
'gmshload3', 'gslcdfugaussianP', 'gslcdfugaussianQ', 'gslcdfugaussianPinv', 'gslcdfugaussianQinv', 'gslcdfgaussianP', 'gslcdfgaussianQ', 'gslcdfgaussianPinv', 'gslcdfgaussianQinv', 'gslcdfgammaP', 'gslcdfgammaQ', 'gslcdfgammaPinv', 'gslcdfgammaQinv', 'gslcdfcauchyP', 'gslcdfcauchyQ', 'gslcdfcauchyPinv', 'gslcdfcauchyQinv', 'gslcdflaplaceP', 'gslcdflaplaceQ', 'gslcdflaplacePinv', 'gslcdflaplaceQinv', 'gslcdfrayleighP', 'gslcdfrayleighQ', 'gslcdfrayleighPinv', 'gslcdfrayleighQinv', 'gslcdfchisqP', 'gslcdfchisqQ', 'gslcdfchisqPinv', 'gslcdfchisqQinv', 'gslcdfexponentialP', 'gslcdfexponentialQ', 'gslcdfexponentialPinv', 'gslcdfexponentialQinv', 'gslcdfexppowP', 'gslcdfexppowQ', 'gslcdftdistP', 'gslcdftdistQ', 'gslcdftdistPinv', 'gslcdftdistQinv', 'gslcdffdistP', 'gslcdffdistQ', 'gslcdffdistPinv', 'gslcdffdistQinv', 'gslcdfbetaP', 'gslcdfbetaQ', 'gslcdfbetaPinv', 'gslcdfbetaQinv', 'gslcdfflatP', 'gslcdfflatQ', 'gslcdfflatPinv', 'gslcdfflatQinv', 'gslcdflognormalP', 'gslcdflognormalQ', 'gslcdflognormalPinv', 'gslcdflognormalQinv', 'gslcdfgumbel1P', 'gslcdfgumbel1Q', 'gslcdfgumbel1Pinv', 'gslcdfgumbel1Qinv', 'gslcdfgumbel2P', 'gslcdfgumbel2Q', 'gslcdfgumbel2Pinv', 'gslcdfgumbel2Qinv', 'gslcdfweibullP', 'gslcdfweibullQ', 'gslcdfweibullPinv', 'gslcdfweibullQinv', 'gslcdfparetoP', 'gslcdfparetoQ', 'gslcdfparetoPinv', 'gslcdfparetoQinv', 'gslcdflogisticP', 'gslcdflogisticQ', 'gslcdflogisticPinv', 'gslcdflogisticQinv', 'gslcdfbinomialP', 'gslcdfbinomialQ', 'gslcdfpoissonP', 'gslcdfpoissonQ', 'gslcdfgeometricP', 'gslcdfgeometricQ', 'gslcdfnegativebinomialP', 'gslcdfnegativebinomialQ', 'gslcdfpascalP', 'gslcdfpascalQ', 'gslinterpakima', 'gslinterpakimaperiodic', 'gslinterpcsplineperiodic', 'gslinterpcspline', 'gslinterpsteffen', 'gslinterplinear', 'gslinterppolynomial', 'gslranbernoullipdf', 'gslranbeta', 'gslranbetapdf', 'gslranbinomialpdf', 'gslranexponential', 'gslranexponentialpdf', 'gslranexppow', 'gslranexppowpdf', 'gslrancauchy', 'gslrancauchypdf', 'gslranchisq', 'gslranchisqpdf', 'gslranerlang', 'gslranerlangpdf', 'gslranfdist', 'gslranfdistpdf', 'gslranflat', 'gslranflatpdf', 'gslrangamma', 'gslrangammaint', 'gslrangammapdf', 'gslrangammamt', 'gslrangammaknuth', 'gslrangaussian', 'gslrangaussianratiomethod', 'gslrangaussianziggurat', 'gslrangaussianpdf', 'gslranugaussian', 'gslranugaussianratiomethod', 'gslranugaussianpdf', 'gslrangaussiantail', 'gslrangaussiantailpdf', 'gslranugaussiantail', 'gslranugaussiantailpdf', 'gslranlandau', 'gslranlandaupdf', 'gslrangeometricpdf', 'gslrangumbel1', 'gslrangumbel1pdf', 'gslrangumbel2', 'gslrangumbel2pdf', 'gslranlogistic', 'gslranlogisticpdf', 'gslranlognormal', 'gslranlognormalpdf', 'gslranlogarithmicpdf', 'gslrannegativebinomialpdf', 'gslranpascalpdf', 'gslranpareto', 'gslranparetopdf', 'gslranpoissonpdf', 'gslranrayleigh', 'gslranrayleighpdf', 'gslranrayleightail', 'gslranrayleightailpdf', 'gslrantdist', 'gslrantdistpdf', 'gslranlaplace', 'gslranlaplacepdf', 'gslranlevy', 'gslranweibull', 'gslranweibullpdf', 'gslsfairyAi', 'gslsfairyBi', 'gslsfairyAiscaled', 'gslsfairyBiscaled', 'gslsfairyAideriv', 'gslsfairyBideriv', 'gslsfairyAiderivscaled', 'gslsfairyBiderivscaled', 'gslsfairyzeroAi', 'gslsfairyzeroBi', 'gslsfairyzeroAideriv', 'gslsfairyzeroBideriv', 'gslsfbesselJ0', 'gslsfbesselJ1', 'gslsfbesselJn', 'gslsfbesselY0', 'gslsfbesselY1', 'gslsfbesselYn', 'gslsfbesselI0', 'gslsfbesselI1', 'gslsfbesselIn', 'gslsfbesselI0scaled', 'gslsfbesselI1scaled', 'gslsfbesselInscaled', 'gslsfbesselK0', 'gslsfbesselK1', 'gslsfbesselKn', 'gslsfbesselK0scaled', 
'gslsfbesselK1scaled', 'gslsfbesselKnscaled', 'gslsfbesselj0', 'gslsfbesselj1', 'gslsfbesselj2', 'gslsfbesseljl', 'gslsfbessely0', 'gslsfbessely1', 'gslsfbessely2', 'gslsfbesselyl', 'gslsfbesseli0scaled', 'gslsfbesseli1scaled', 'gslsfbesseli2scaled', 'gslsfbesselilscaled', 'gslsfbesselk0scaled', 'gslsfbesselk1scaled', 'gslsfbesselk2scaled', 'gslsfbesselklscaled', 'gslsfbesselJnu', 'gslsfbesselYnu', 'gslsfbesselInuscaled', 'gslsfbesselInu', 'gslsfbesselKnuscaled', 'gslsfbesselKnu', 'gslsfbessellnKnu', 'gslsfbesselzeroJ0', 'gslsfbesselzeroJ1', 'gslsfbesselzeroJnu', 'gslsfclausen', 'gslsfhydrogenicR1', 'gslsfdawson', 'gslsfdebye1', 'gslsfdebye2', 'gslsfdebye3', 'gslsfdebye4', 'gslsfdebye5', 'gslsfdebye6', 'gslsfdilog', 'gslsfmultiply', 'gslsfellintKcomp', 'gslsfellintEcomp', 'gslsfellintPcomp', 'gslsfellintDcomp', 'gslsfellintF', 'gslsfellintE', 'gslsfellintRC', 'gslsferfc', 'gslsflogerfc', 'gslsferf', 'gslsferfZ', 'gslsferfQ', 'gslsfhazard', 'gslsfexp', 'gslsfexpmult', 'gslsfexpm1', 'gslsfexprel', 'gslsfexprel2', 'gslsfexpreln', 'gslsfexpintE1', 'gslsfexpintE2', 'gslsfexpintEn', 'gslsfexpintE1scaled', 'gslsfexpintE2scaled', 'gslsfexpintEnscaled', 'gslsfexpintEi', 'gslsfexpintEiscaled', 'gslsfShi', 'gslsfChi', 'gslsfexpint3', 'gslsfSi', 'gslsfCi', 'gslsfatanint', 'gslsffermidiracm1', 'gslsffermidirac0', 'gslsffermidirac1', 'gslsffermidirac2', 'gslsffermidiracint', 'gslsffermidiracmhalf', 'gslsffermidirachalf', 'gslsffermidirac3half', 'gslsffermidiracinc0', 'gslsflngamma', 'gslsfgamma', 'gslsfgammastar', 'gslsfgammainv', 'gslsftaylorcoeff', 'gslsffact', 'gslsfdoublefact', 'gslsflnfact', 'gslsflndoublefact', 'gslsflnchoose', 'gslsfchoose', 'gslsflnpoch', 'gslsfpoch', 'gslsfpochrel', 'gslsfgammaincQ', 'gslsfgammaincP', 'gslsfgammainc', 'gslsflnbeta', 'gslsfbeta', 'gslsfbetainc', 'gslsfgegenpoly1', 'gslsfgegenpoly2', 'gslsfgegenpoly3', 'gslsfgegenpolyn', 'gslsfhyperg0F1', 'gslsfhyperg1F1int', 'gslsfhyperg1F1', 'gslsfhypergUint', 'gslsfhypergU', 'gslsfhyperg2F0', 'gslsflaguerre1', 'gslsflaguerre2', 'gslsflaguerre3', 'gslsflaguerren', 'gslsflambertW0', 'gslsflambertWm1', 'gslsflegendrePl', 'gslsflegendreP1', 'gslsflegendreP2', 'gslsflegendreP3', 'gslsflegendreQ0', 'gslsflegendreQ1', 'gslsflegendreQl', 'gslsflegendrePlm', 'gslsflegendresphPlm', 'gslsflegendrearraysize', 'gslsfconicalPhalf', 'gslsfconicalPmhalf', 'gslsfconicalP0', 'gslsfconicalP1', 'gslsfconicalPsphreg', 'gslsfconicalPcylreg', 'gslsflegendreH3d0', 'gslsflegendreH3d1', 'gslsflegendreH3d', 'gslsflog', 'gslsflogabs', 'gslsflog1plusx', 'gslsflog1plusxmx', 'gslsfpowint', 'gslsfpsiint', 'gslsfpsi', 'gslsfpsi1piy', 'gslsfpsi1int', 'gslsfpsi1', 'gslsfpsin', 'gslsfsynchrotron1', 'gslsfsynchrotron2', 'gslsftransport2', 'gslsftransport3', 'gslsftransport4', 'gslsftransport5', 'gslsfsin', 'gslsfcos', 'gslsfhypot', 'gslsfsinc', 'gslsflnsinh', 'gslsflncosh', 'gslsfanglerestrictsymm', 'gslsfanglerestrictpos', 'gslsfzetaint', 'gslsfzeta', 'gslsfzetam1', 'gslsfzetam1int', 'gslsfhzeta', 'gslsfetaint', 'gslsfeta', 'imag', 'int1d', 'int2d', 'int3d', 'intalledges', 'intallfaces', 'interpolate', 'invdiff', 'invdiffnp', 'invdiffpos', 'Isend', 'isInf', 'isNaN', 'isoline', 'Irecv', 'j0', 'j1', 'jn', 'jump', 'lgamma', 'LinearCG', 'LinearGMRES', 'log', 'log10', 'lrint', 'lround', 'max', 'mean', 'medit', 'min', 'mmg3d', 'movemesh', 'movemesh23', 'mpiAlltoall', 'mpiAlltoallv', 'mpiAllgather', 'mpiAllgatherv', 'mpiAllReduce', 'mpiBarrier', 'mpiGather', 'mpiGatherv', 'mpiRank', 'mpiReduce', 'mpiScatter', 'mpiScatterv', 'mpiSize', 'mpiWait', 'mpiWaitAny', 
'mpiWtick', 'mpiWtime', 'mshmet', 'NaN', 'NLCG', 'on', 'plot', 'polar', 'Post', 'pow', 'processor', 'processorblock', 'projection', 'randinit', 'randint31', 'randint32', 'random', 'randreal1', 'randreal2', 'randreal3', 'randres53', 'Read', 'readmesh', 'readmesh3', 'Recv', 'rint', 'round', 'savemesh', 'savesol', 'savevtk', 'seekg', 'Sent', 'set', 'sign', 'signbit', 'sin', 'sinh', 'sort', 'splitComm', 'splitmesh', 'sqrt', 'square', 'srandom', 'srandomdev', 'Stringification', 'swap', 'system', 'tan', 'tanh', 'tellg', 'tetg', 'tetgconvexhull', 'tetgreconstruction', 'tetgtransfo', 'tgamma', 'triangulate', 'trunc', 'Wait', 'Write', 'y0', 'y1', 'yn' } # function parameters parameters = { 'A', 'A1', 'abserror', 'absolute', 'aniso', 'aspectratio', 'B', 'B1', 'bb', 'beginend', 'bin', 'boundary', 'bw', 'close', 'cmm', 'coef', 'composante', 'cutoff', 'datafilename', 'dataname', 'dim', 'distmax', 'displacement', 'doptions', 'dparams', 'eps', 'err', 'errg', 'facemerge', 'facetcl', 'factorize', 'file', 'fill', 'fixedborder', 'flabel', 'flags', 'floatmesh', 'floatsol', 'fregion', 'gradation', 'grey', 'hmax', 'hmin', 'holelist', 'hsv', 'init', 'inquire', 'inside', 'IsMetric', 'iso', 'ivalue', 'keepbackvertices', 'label', 'labeldown', 'labelmid', 'labelup', 'levelset', 'loptions', 'lparams', 'maxit', 'maxsubdiv', 'meditff', 'mem', 'memory', 'metric', 'mode', 'nbarrow', 'nbiso', 'nbiter', 'nbjacoby', 'nboffacetcl', 'nbofholes', 'nbofregions', 'nbregul', 'nbsmooth', 'nbvx', 'ncv', 'nev', 'nomeshgeneration', 'normalization', 'omega', 'op', 'optimize', 'option', 'options', 'order', 'orientation', 'periodic', 'power', 'precon', 'prev', 'ps', 'ptmerge', 'qfe', 'qforder', 'qft', 'qfV', 'ratio', 'rawvector', 'reffacelow', 'reffacemid', 'reffaceup', 'refnum', 'reftet', 'reftri', 'region', 'regionlist', 'renumv', 'rescaling', 'ridgeangle', 'save', 'sigma', 'sizeofvolume', 'smoothing', 'solver', 'sparams', 'split', 'splitin2', 'splitpbedge', 'stop', 'strategy', 'swap', 'switch', 'sym', 't', 'tgv', 'thetamax', 'tol', 'tolpivot', 'tolpivotsym', 'transfo', 'U2Vc', 'value', 'varrow', 'vector', 'veps', 'viso', 'wait', 'width', 'withsurfacemesh', 'WindowIndex', 'which', 'zbound' } # deprecated deprecated = {'fixeborder'} # do not highlight suppress_highlight = { 'alignof', 'asm', 'constexpr', 'decltype', 'div', 'double', 'grad', 'mutable', 'namespace', 'noexcept', 'restrict', 'static_assert', 'template', 'this', 'thread_local', 'typeid', 'typename', 'using' } def get_tokens_unprocessed(self, text): for index, token, value in CppLexer.get_tokens_unprocessed(self, text): if value in self.operators: yield index, Operator, value elif value in self.types: yield index, Keyword.Type, value elif value in self.fespaces: yield index, Name.Class, value elif value in self.preprocessor: yield index, Comment.Preproc, value elif value in self.keywords: yield index, Keyword.Reserved, value elif value in self.functions: yield index, Name.Function, value elif value in self.parameters: yield index, Keyword.Pseudo, value elif value in self.suppress_highlight: yield index, Name, value else: yield index, token, value pygments-2.11.2/pygments/lexers/_mapping.py0000644000175000017500000017452414165547207020663 0ustar carstencarsten""" pygments.lexers._mapping ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer mapping definitions. This file is generated by itself. Every time you change something on a builtin lexer definition, run this script from the lexers folder to update it. Do not alter the LEXERS dictionary by hand.
:copyright: Copyright 2006-2014, 2016 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ LEXERS = { 'ABAPLexer': ('pygments.lexers.business', 'ABAP', ('abap',), ('*.abap', '*.ABAP'), ('text/x-abap',)), 'AMDGPULexer': ('pygments.lexers.amdgpu', 'AMDGPU', ('amdgpu',), ('*.isa',), ()), 'APLLexer': ('pygments.lexers.apl', 'APL', ('apl',), ('*.apl', '*.aplf', '*.aplo', '*.apln', '*.aplc', '*.apli', '*.dyalog'), ()), 'AbnfLexer': ('pygments.lexers.grammar_notation', 'ABNF', ('abnf',), ('*.abnf',), ('text/x-abnf',)), 'ActionScript3Lexer': ('pygments.lexers.actionscript', 'ActionScript 3', ('actionscript3', 'as3'), ('*.as',), ('application/x-actionscript3', 'text/x-actionscript3', 'text/actionscript3')), 'ActionScriptLexer': ('pygments.lexers.actionscript', 'ActionScript', ('actionscript', 'as'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')), 'AdaLexer': ('pygments.lexers.pascal', 'Ada', ('ada', 'ada95', 'ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)), 'AdlLexer': ('pygments.lexers.archetype', 'ADL', ('adl',), ('*.adl', '*.adls', '*.adlf', '*.adlx'), ()), 'AgdaLexer': ('pygments.lexers.haskell', 'Agda', ('agda',), ('*.agda',), ('text/x-agda',)), 'AheuiLexer': ('pygments.lexers.esoteric', 'Aheui', ('aheui',), ('*.aheui',), ()), 'AlloyLexer': ('pygments.lexers.dsls', 'Alloy', ('alloy',), ('*.als',), ('text/x-alloy',)), 'AmbientTalkLexer': ('pygments.lexers.ambient', 'AmbientTalk', ('ambienttalk', 'ambienttalk/2', 'at'), ('*.at',), ('text/x-ambienttalk',)), 'AmplLexer': ('pygments.lexers.ampl', 'Ampl', ('ampl',), ('*.run',), ()), 'Angular2HtmlLexer': ('pygments.lexers.templates', 'HTML + Angular2', ('html+ng2',), ('*.ng2',), ()), 'Angular2Lexer': ('pygments.lexers.templates', 'Angular2', ('ng2',), (), ()), 'AntlrActionScriptLexer': ('pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-actionscript', 'antlr-as'), ('*.G', '*.g'), ()), 'AntlrCSharpLexer': ('pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()), 'AntlrCppLexer': ('pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()), 'AntlrJavaLexer': ('pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()), 'AntlrLexer': ('pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()), 'AntlrObjectiveCLexer': ('pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()), 'AntlrPerlLexer': ('pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()), 'AntlrPythonLexer': ('pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()), 'AntlrRubyLexer': ('pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()), 'ApacheConfLexer': ('pygments.lexers.configs', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)), 'AppleScriptLexer': ('pygments.lexers.scripting', 'AppleScript', ('applescript',), ('*.applescript',), ()), 'ArduinoLexer': ('pygments.lexers.c_like', 'Arduino', ('arduino',), ('*.ino',), ('text/x-arduino',)), 'ArrowLexer': ('pygments.lexers.arrow', 'Arrow', ('arrow',), ('*.arw',), ()), 'AscLexer': ('pygments.lexers.asc', 'ASCII armored', ('asc', 'pem'), ('*.asc', '*.pem', 'id_dsa', 'id_ecdsa', 'id_ecdsa_sk', 'id_ed25519', 'id_ed25519_sk', 'id_rsa'), ('application/pgp-keys', 'application/pgp-encrypted', 'application/pgp-signature')), 'AspectJLexer': 
('pygments.lexers.jvm', 'AspectJ', ('aspectj',), ('*.aj',), ('text/x-aspectj',)), 'AsymptoteLexer': ('pygments.lexers.graphics', 'Asymptote', ('asymptote', 'asy'), ('*.asy',), ('text/x-asymptote',)), 'AugeasLexer': ('pygments.lexers.configs', 'Augeas', ('augeas',), ('*.aug',), ()), 'AutoItLexer': ('pygments.lexers.automation', 'AutoIt', ('autoit',), ('*.au3',), ('text/x-autoit',)), 'AutohotkeyLexer': ('pygments.lexers.automation', 'autohotkey', ('autohotkey', 'ahk'), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)), 'AwkLexer': ('pygments.lexers.textedit', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk',)), 'BBCBasicLexer': ('pygments.lexers.basic', 'BBC Basic', ('bbcbasic',), ('*.bbc',), ()), 'BBCodeLexer': ('pygments.lexers.markup', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)), 'BCLexer': ('pygments.lexers.algebra', 'BC', ('bc',), ('*.bc',), ()), 'BSTLexer': ('pygments.lexers.bibtex', 'BST', ('bst', 'bst-pybtex'), ('*.bst',), ()), 'BareLexer': ('pygments.lexers.bare', 'BARE', ('bare',), ('*.bare',), ()), 'BaseMakefileLexer': ('pygments.lexers.make', 'Base Makefile', ('basemake',), (), ()), 'BashLexer': ('pygments.lexers.shell', 'Bash', ('bash', 'sh', 'ksh', 'zsh', 'shell'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass', '*.exheres-0', '*.exlib', '*.zsh', '.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc', '.kshrc', 'kshrc', 'PKGBUILD'), ('application/x-sh', 'application/x-shellscript', 'text/x-shellscript')), 'BashSessionLexer': ('pygments.lexers.shell', 'Bash Session', ('console', 'shell-session'), ('*.sh-session', '*.shell-session'), ('application/x-shell-session', 'application/x-sh-session')), 'BatchLexer': ('pygments.lexers.shell', 'Batchfile', ('batch', 'bat', 'dosbatch', 'winbatch'), ('*.bat', '*.cmd'), ('application/x-dos-batch',)), 'BddLexer': ('pygments.lexers.bdd', 'Bdd', ('bdd',), ('*.feature',), ('text/x-bdd',)), 'BefungeLexer': ('pygments.lexers.esoteric', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)), 'BibTeXLexer': ('pygments.lexers.bibtex', 'BibTeX', ('bibtex', 'bib'), ('*.bib',), ('text/x-bibtex',)), 'BlitzBasicLexer': ('pygments.lexers.basic', 'BlitzBasic', ('blitzbasic', 'b3d', 'bplus'), ('*.bb', '*.decls'), ('text/x-bb',)), 'BlitzMaxLexer': ('pygments.lexers.basic', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)), 'BnfLexer': ('pygments.lexers.grammar_notation', 'BNF', ('bnf',), ('*.bnf',), ('text/x-bnf',)), 'BoaLexer': ('pygments.lexers.boa', 'Boa', ('boa',), ('*.boa',), ()), 'BooLexer': ('pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)), 'BoogieLexer': ('pygments.lexers.verification', 'Boogie', ('boogie',), ('*.bpl',), ()), 'BrainfuckLexer': ('pygments.lexers.esoteric', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)), 'BugsLexer': ('pygments.lexers.modeling', 'BUGS', ('bugs', 'winbugs', 'openbugs'), ('*.bug',), ()), 'CAmkESLexer': ('pygments.lexers.esoteric', 'CAmkES', ('camkes', 'idl4'), ('*.camkes', '*.idl4'), ()), 'CLexer': ('pygments.lexers.c_cpp', 'C', ('c',), ('*.c', '*.h', '*.idc', '*.x[bp]m'), ('text/x-chdr', 'text/x-csrc', 'image/x-xbitmap', 'image/x-xpixmap')), 'CMakeLexer': ('pygments.lexers.make', 'CMake', ('cmake',), ('*.cmake', 'CMakeLists.txt'), ('text/x-cmake',)), 'CObjdumpLexer': ('pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)), 'CPSALexer': ('pygments.lexers.lisp', 'CPSA', ('cpsa',), ('*.cpsa',), ()), 'CSharpAspxLexer': ('pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), 
('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()), 'CSharpLexer': ('pygments.lexers.dotnet', 'C#', ('csharp', 'c#', 'cs'), ('*.cs',), ('text/x-csharp',)), 'Ca65Lexer': ('pygments.lexers.asm', 'ca65 assembler', ('ca65',), ('*.s',), ()), 'CadlLexer': ('pygments.lexers.archetype', 'cADL', ('cadl',), ('*.cadl',), ()), 'CapDLLexer': ('pygments.lexers.esoteric', 'CapDL', ('capdl',), ('*.cdl',), ()), 'CapnProtoLexer': ('pygments.lexers.capnproto', "Cap'n Proto", ('capnp',), ('*.capnp',), ()), 'CbmBasicV2Lexer': ('pygments.lexers.basic', 'CBM BASIC V2', ('cbmbas',), ('*.bas',), ()), 'CddlLexer': ('pygments.lexers.cddl', 'CDDL', ('cddl',), ('*.cddl',), ('text/x-cddl',)), 'CeylonLexer': ('pygments.lexers.jvm', 'Ceylon', ('ceylon',), ('*.ceylon',), ('text/x-ceylon',)), 'Cfengine3Lexer': ('pygments.lexers.configs', 'CFEngine3', ('cfengine3', 'cf3'), ('*.cf',), ()), 'ChaiscriptLexer': ('pygments.lexers.scripting', 'ChaiScript', ('chaiscript', 'chai'), ('*.chai',), ('text/x-chaiscript', 'application/x-chaiscript')), 'ChapelLexer': ('pygments.lexers.chapel', 'Chapel', ('chapel', 'chpl'), ('*.chpl',), ()), 'CharmciLexer': ('pygments.lexers.c_like', 'Charmci', ('charmci',), ('*.ci',), ()), 'CheetahHtmlLexer': ('pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire', 'htmlcheetah'), (), ('text/html+cheetah', 'text/html+spitfire')), 'CheetahJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Cheetah', ('javascript+cheetah', 'js+cheetah', 'javascript+spitfire', 'js+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')), 'CheetahLexer': ('pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')), 'CheetahXmlLexer': ('pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')), 'CirruLexer': ('pygments.lexers.webmisc', 'Cirru', ('cirru',), ('*.cirru',), ('text/x-cirru',)), 'ClayLexer': ('pygments.lexers.c_like', 'Clay', ('clay',), ('*.clay',), ('text/x-clay',)), 'CleanLexer': ('pygments.lexers.clean', 'Clean', ('clean',), ('*.icl', '*.dcl'), ()), 'ClojureLexer': ('pygments.lexers.jvm', 'Clojure', ('clojure', 'clj'), ('*.clj',), ('text/x-clojure', 'application/x-clojure')), 'ClojureScriptLexer': ('pygments.lexers.jvm', 'ClojureScript', ('clojurescript', 'cljs'), ('*.cljs',), ('text/x-clojurescript', 'application/x-clojurescript')), 'CobolFreeformatLexer': ('pygments.lexers.business', 'COBOLFree', ('cobolfree',), ('*.cbl', '*.CBL'), ()), 'CobolLexer': ('pygments.lexers.business', 'COBOL', ('cobol',), ('*.cob', '*.COB', '*.cpy', '*.CPY'), ('text/x-cobol',)), 'CoffeeScriptLexer': ('pygments.lexers.javascript', 'CoffeeScript', ('coffeescript', 'coffee-script', 'coffee'), ('*.coffee',), ('text/coffeescript',)), 'ColdfusionCFCLexer': ('pygments.lexers.templates', 'Coldfusion CFC', ('cfc',), ('*.cfc',), ()), 'ColdfusionHtmlLexer': ('pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml'), ('application/x-coldfusion',)), 'ColdfusionLexer': ('pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()), 'CommonLispLexer': ('pygments.lexers.lisp', 'Common Lisp', ('common-lisp', 'cl', 'lisp'), ('*.cl', '*.lisp'), ('text/x-common-lisp',)), 'ComponentPascalLexer': ('pygments.lexers.oberon', 'Component Pascal', ('componentpascal', 'cp'), ('*.cp', '*.cps'), 
('text/x-component-pascal',)), 'CoqLexer': ('pygments.lexers.theorem', 'Coq', ('coq',), ('*.v',), ('text/x-coq',)), 'CppLexer': ('pygments.lexers.c_cpp', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx', '*.C', '*.H', '*.cp', '*.CPP'), ('text/x-c++hdr', 'text/x-c++src')), 'CppObjdumpLexer': ('pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)), 'CrmshLexer': ('pygments.lexers.dsls', 'Crmsh', ('crmsh', 'pcmk'), ('*.crmsh', '*.pcmk'), ()), 'CrocLexer': ('pygments.lexers.d', 'Croc', ('croc',), ('*.croc',), ('text/x-crocsrc',)), 'CryptolLexer': ('pygments.lexers.haskell', 'Cryptol', ('cryptol', 'cry'), ('*.cry',), ('text/x-cryptol',)), 'CrystalLexer': ('pygments.lexers.crystal', 'Crystal', ('cr', 'crystal'), ('*.cr',), ('text/x-crystal',)), 'CsoundDocumentLexer': ('pygments.lexers.csound', 'Csound Document', ('csound-document', 'csound-csd'), ('*.csd',), ()), 'CsoundOrchestraLexer': ('pygments.lexers.csound', 'Csound Orchestra', ('csound', 'csound-orc'), ('*.orc', '*.udo'), ()), 'CsoundScoreLexer': ('pygments.lexers.csound', 'Csound Score', ('csound-score', 'csound-sco'), ('*.sco',), ()), 'CssDjangoLexer': ('pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), (), ('text/css+django', 'text/css+jinja')), 'CssErbLexer': ('pygments.lexers.templates', 'CSS+Ruby', ('css+ruby', 'css+erb'), (), ('text/css+ruby',)), 'CssGenshiLexer': ('pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)), 'CssLexer': ('pygments.lexers.css', 'CSS', ('css',), ('*.css',), ('text/css',)), 'CssPhpLexer': ('pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)), 'CssSmartyLexer': ('pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)), 'CudaLexer': ('pygments.lexers.c_like', 'CUDA', ('cuda', 'cu'), ('*.cu', '*.cuh'), ('text/x-cuda',)), 'CypherLexer': ('pygments.lexers.graph', 'Cypher', ('cypher',), ('*.cyp', '*.cypher'), ()), 'CythonLexer': ('pygments.lexers.python', 'Cython', ('cython', 'pyx', 'pyrex'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')), 'DLexer': ('pygments.lexers.d', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)), 'DObjdumpLexer': ('pygments.lexers.asm', 'd-objdump', ('d-objdump',), ('*.d-objdump',), ('text/x-d-objdump',)), 'DarcsPatchLexer': ('pygments.lexers.diff', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()), 'DartLexer': ('pygments.lexers.javascript', 'Dart', ('dart',), ('*.dart',), ('text/x-dart',)), 'Dasm16Lexer': ('pygments.lexers.asm', 'DASM16', ('dasm16',), ('*.dasm16', '*.dasm'), ('text/x-dasm16',)), 'DebianControlLexer': ('pygments.lexers.installers', 'Debian Control file', ('debcontrol', 'control'), ('control',), ()), 'DelphiLexer': ('pygments.lexers.pascal', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas', '*.dpr'), ('text/x-pascal',)), 'DevicetreeLexer': ('pygments.lexers.devicetree', 'Devicetree', ('devicetree', 'dts'), ('*.dts', '*.dtsi'), ('text/x-c',)), 'DgLexer': ('pygments.lexers.python', 'dg', ('dg',), ('*.dg',), ('text/x-dg',)), 'DiffLexer': ('pygments.lexers.diff', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')), 'DjangoLexer': ('pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')), 'DockerLexer': ('pygments.lexers.configs', 'Docker', 
('docker', 'dockerfile'), ('Dockerfile', '*.docker'), ('text/x-dockerfile-config',)), 'DtdLexer': ('pygments.lexers.html', 'DTD', ('dtd',), ('*.dtd',), ('application/xml-dtd',)), 'DuelLexer': ('pygments.lexers.webmisc', 'Duel', ('duel', 'jbst', 'jsonml+bst'), ('*.duel', '*.jbst'), ('text/x-duel', 'text/x-jbst')), 'DylanConsoleLexer': ('pygments.lexers.dylan', 'Dylan session', ('dylan-console', 'dylan-repl'), ('*.dylan-console',), ('text/x-dylan-console',)), 'DylanLexer': ('pygments.lexers.dylan', 'Dylan', ('dylan',), ('*.dylan', '*.dyl', '*.intr'), ('text/x-dylan',)), 'DylanLidLexer': ('pygments.lexers.dylan', 'DylanLID', ('dylan-lid', 'lid'), ('*.lid', '*.hdp'), ('text/x-dylan-lid',)), 'ECLLexer': ('pygments.lexers.ecl', 'ECL', ('ecl',), ('*.ecl',), ('application/x-ecl',)), 'ECLexer': ('pygments.lexers.c_like', 'eC', ('ec',), ('*.ec', '*.eh'), ('text/x-echdr', 'text/x-ecsrc')), 'EarlGreyLexer': ('pygments.lexers.javascript', 'Earl Grey', ('earl-grey', 'earlgrey', 'eg'), ('*.eg',), ('text/x-earl-grey',)), 'EasytrieveLexer': ('pygments.lexers.scripting', 'Easytrieve', ('easytrieve',), ('*.ezt', '*.mac'), ('text/x-easytrieve',)), 'EbnfLexer': ('pygments.lexers.parsers', 'EBNF', ('ebnf',), ('*.ebnf',), ('text/x-ebnf',)), 'EiffelLexer': ('pygments.lexers.eiffel', 'Eiffel', ('eiffel',), ('*.e',), ('text/x-eiffel',)), 'ElixirConsoleLexer': ('pygments.lexers.erlang', 'Elixir iex session', ('iex',), (), ('text/x-elixir-shellsession',)), 'ElixirLexer': ('pygments.lexers.erlang', 'Elixir', ('elixir', 'ex', 'exs'), ('*.ex', '*.eex', '*.exs', '*.leex'), ('text/x-elixir',)), 'ElmLexer': ('pygments.lexers.elm', 'Elm', ('elm',), ('*.elm',), ('text/x-elm',)), 'ElpiLexer': ('pygments.lexers.elpi', 'Elpi', ('elpi',), ('*.elpi',), ('text/x-elpi',)), 'EmacsLispLexer': ('pygments.lexers.lisp', 'EmacsLisp', ('emacs-lisp', 'elisp', 'emacs'), ('*.el',), ('text/x-elisp', 'application/x-elisp')), 'EmailLexer': ('pygments.lexers.email', 'E-mail', ('email', 'eml'), ('*.eml',), ('message/rfc822',)), 'ErbLexer': ('pygments.lexers.templates', 'ERB', ('erb',), (), ('application/x-ruby-templating',)), 'ErlangLexer': ('pygments.lexers.erlang', 'Erlang', ('erlang',), ('*.erl', '*.hrl', '*.es', '*.escript'), ('text/x-erlang',)), 'ErlangShellLexer': ('pygments.lexers.erlang', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)), 'EvoqueHtmlLexer': ('pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), ('*.html',), ('text/html+evoque',)), 'EvoqueLexer': ('pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)), 'EvoqueXmlLexer': ('pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), ('*.xml',), ('application/xml+evoque',)), 'ExeclineLexer': ('pygments.lexers.shell', 'execline', ('execline',), ('*.exec',), ()), 'EzhilLexer': ('pygments.lexers.ezhil', 'Ezhil', ('ezhil',), ('*.n',), ('text/x-ezhil',)), 'FSharpLexer': ('pygments.lexers.dotnet', 'F#', ('fsharp', 'f#'), ('*.fs', '*.fsi'), ('text/x-fsharp',)), 'FStarLexer': ('pygments.lexers.ml', 'FStar', ('fstar',), ('*.fst', '*.fsti'), ('text/x-fstar',)), 'FactorLexer': ('pygments.lexers.factor', 'Factor', ('factor',), ('*.factor',), ('text/x-factor',)), 'FancyLexer': ('pygments.lexers.ruby', 'Fancy', ('fancy', 'fy'), ('*.fy', '*.fancypack'), ('text/x-fancysrc',)), 'FantomLexer': ('pygments.lexers.fantom', 'Fantom', ('fan',), ('*.fan',), ('application/x-fantom',)), 'FelixLexer': ('pygments.lexers.felix', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)), 'FennelLexer': 
('pygments.lexers.lisp', 'Fennel', ('fennel', 'fnl'), ('*.fnl',), ()), 'FishShellLexer': ('pygments.lexers.shell', 'Fish', ('fish', 'fishshell'), ('*.fish', '*.load'), ('application/x-fish',)), 'FlatlineLexer': ('pygments.lexers.dsls', 'Flatline', ('flatline',), (), ('text/x-flatline',)), 'FloScriptLexer': ('pygments.lexers.floscript', 'FloScript', ('floscript', 'flo'), ('*.flo',), ()), 'ForthLexer': ('pygments.lexers.forth', 'Forth', ('forth',), ('*.frt', '*.fs'), ('application/x-forth',)), 'FortranFixedLexer': ('pygments.lexers.fortran', 'FortranFixed', ('fortranfixed',), ('*.f', '*.F'), ()), 'FortranLexer': ('pygments.lexers.fortran', 'Fortran', ('fortran', 'f90'), ('*.f03', '*.f90', '*.F03', '*.F90'), ('text/x-fortran',)), 'FoxProLexer': ('pygments.lexers.foxpro', 'FoxPro', ('foxpro', 'vfp', 'clipper', 'xbase'), ('*.PRG', '*.prg'), ()), 'FreeFemLexer': ('pygments.lexers.freefem', 'Freefem', ('freefem',), ('*.edp',), ('text/x-freefem',)), 'FutharkLexer': ('pygments.lexers.futhark', 'Futhark', ('futhark',), ('*.fut',), ('text/x-futhark',)), 'GAPLexer': ('pygments.lexers.algebra', 'GAP', ('gap',), ('*.g', '*.gd', '*.gi', '*.gap'), ()), 'GDScriptLexer': ('pygments.lexers.gdscript', 'GDScript', ('gdscript', 'gd'), ('*.gd',), ('text/x-gdscript', 'application/x-gdscript')), 'GLShaderLexer': ('pygments.lexers.graphics', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)), 'GSQLLexer': ('pygments.lexers.gsql', 'GSQL', ('gsql',), ('*.gsql',), ()), 'GasLexer': ('pygments.lexers.asm', 'GAS', ('gas', 'asm'), ('*.s', '*.S'), ('text/x-gas',)), 'GcodeLexer': ('pygments.lexers.gcodelexer', 'g-code', ('gcode',), ('*.gcode',), ()), 'GenshiLexer': ('pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')), 'GenshiTextLexer': ('pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')), 'GettextLexer': ('pygments.lexers.textfmts', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')), 'GherkinLexer': ('pygments.lexers.testing', 'Gherkin', ('gherkin', 'cucumber'), ('*.feature',), ('text/x-gherkin',)), 'GnuplotLexer': ('pygments.lexers.graphics', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)), 'GoLexer': ('pygments.lexers.go', 'Go', ('go', 'golang'), ('*.go',), ('text/x-gosrc',)), 'GoloLexer': ('pygments.lexers.jvm', 'Golo', ('golo',), ('*.golo',), ()), 'GoodDataCLLexer': ('pygments.lexers.business', 'GoodData-CL', ('gooddata-cl',), ('*.gdc',), ('text/x-gooddata-cl',)), 'GosuLexer': ('pygments.lexers.jvm', 'Gosu', ('gosu',), ('*.gs', '*.gsx', '*.gsp', '*.vark'), ('text/x-gosu',)), 'GosuTemplateLexer': ('pygments.lexers.jvm', 'Gosu Template', ('gst',), ('*.gst',), ('text/x-gosu-template',)), 'GraphvizLexer': ('pygments.lexers.graphviz', 'Graphviz', ('graphviz', 'dot'), ('*.gv', '*.dot'), ('text/x-graphviz', 'text/vnd.graphviz')), 'GroffLexer': ('pygments.lexers.markup', 'Groff', ('groff', 'nroff', 'man'), ('*.[1-9]', '*.man', '*.1p', '*.3pm'), ('application/x-troff', 'text/troff')), 'GroovyLexer': ('pygments.lexers.jvm', 'Groovy', ('groovy',), ('*.groovy', '*.gradle'), ('text/x-groovy',)), 'HLSLShaderLexer': ('pygments.lexers.graphics', 'HLSL', ('hlsl',), ('*.hlsl', '*.hlsli'), ('text/x-hlsl',)), 'HamlLexer': ('pygments.lexers.html', 'Haml', ('haml',), ('*.haml',), ('text/x-haml',)), 'HandlebarsHtmlLexer': ('pygments.lexers.templates', 'HTML+Handlebars', ('html+handlebars',), 
('*.handlebars', '*.hbs'), ('text/html+handlebars', 'text/x-handlebars-template')), 'HandlebarsLexer': ('pygments.lexers.templates', 'Handlebars', ('handlebars',), (), ()), 'HaskellLexer': ('pygments.lexers.haskell', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)), 'HaxeLexer': ('pygments.lexers.haxe', 'Haxe', ('haxe', 'hxsl', 'hx'), ('*.hx', '*.hxsl'), ('text/haxe', 'text/x-haxe', 'text/x-hx')), 'HexdumpLexer': ('pygments.lexers.hexdump', 'Hexdump', ('hexdump',), (), ()), 'HsailLexer': ('pygments.lexers.asm', 'HSAIL', ('hsail', 'hsa'), ('*.hsail',), ('text/x-hsail',)), 'HspecLexer': ('pygments.lexers.haskell', 'Hspec', ('hspec',), (), ()), 'HtmlDjangoLexer': ('pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja', 'htmldjango'), (), ('text/html+django', 'text/html+jinja')), 'HtmlGenshiLexer': ('pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)), 'HtmlLexer': ('pygments.lexers.html', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')), 'HtmlPhpLexer': ('pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')), 'HtmlSmartyLexer': ('pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), ('text/html+smarty',)), 'HttpLexer': ('pygments.lexers.textfmts', 'HTTP', ('http',), (), ()), 'HxmlLexer': ('pygments.lexers.haxe', 'Hxml', ('haxeml', 'hxml'), ('*.hxml',), ()), 'HyLexer': ('pygments.lexers.lisp', 'Hy', ('hylang',), ('*.hy',), ('text/x-hy', 'application/x-hy')), 'HybrisLexer': ('pygments.lexers.scripting', 'Hybris', ('hybris', 'hy'), ('*.hy', '*.hyb'), ('text/x-hybris', 'application/x-hybris')), 'IDLLexer': ('pygments.lexers.idl', 'IDL', ('idl',), ('*.pro',), ('text/idl',)), 'IconLexer': ('pygments.lexers.unicon', 'Icon', ('icon',), ('*.icon', '*.ICON'), ()), 'IdrisLexer': ('pygments.lexers.haskell', 'Idris', ('idris', 'idr'), ('*.idr',), ('text/x-idris',)), 'IgorLexer': ('pygments.lexers.igor', 'Igor', ('igor', 'igorpro'), ('*.ipf',), ('text/ipf',)), 'Inform6Lexer': ('pygments.lexers.int_fiction', 'Inform 6', ('inform6', 'i6'), ('*.inf',), ()), 'Inform6TemplateLexer': ('pygments.lexers.int_fiction', 'Inform 6 template', ('i6t',), ('*.i6t',), ()), 'Inform7Lexer': ('pygments.lexers.int_fiction', 'Inform 7', ('inform7', 'i7'), ('*.ni', '*.i7x'), ()), 'IniLexer': ('pygments.lexers.configs', 'INI', ('ini', 'cfg', 'dosini'), ('*.ini', '*.cfg', '*.inf', '.editorconfig', '*.service', '*.socket', '*.device', '*.mount', '*.automount', '*.swap', '*.target', '*.path', '*.timer', '*.slice', '*.scope'), ('text/x-ini', 'text/inf')), 'IoLexer': ('pygments.lexers.iolang', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)), 'IokeLexer': ('pygments.lexers.jvm', 'Ioke', ('ioke', 'ik'), ('*.ik',), ('text/x-iokesrc',)), 'IrcLogsLexer': ('pygments.lexers.textfmts', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)), 'IsabelleLexer': ('pygments.lexers.theorem', 'Isabelle', ('isabelle',), ('*.thy',), ('text/x-isabelle',)), 'JLexer': ('pygments.lexers.j', 'J', ('j',), ('*.ijs',), ('text/x-j',)), 'JSLTLexer': ('pygments.lexers.jslt', 'JSLT', ('jslt',), ('*.jslt',), ('text/x-jslt',)), 'JagsLexer': ('pygments.lexers.modeling', 'JAGS', ('jags',), ('*.jag', '*.bug'), ()), 'JasminLexer': ('pygments.lexers.jvm', 'Jasmin', ('jasmin', 'jasminxt'), ('*.j',), ()), 'JavaLexer': ('pygments.lexers.jvm', 'Java', ('java',), ('*.java',), 
('text/x-java',)), 'JavascriptDjangoLexer': ('pygments.lexers.templates', 'JavaScript+Django/Jinja', ('javascript+django', 'js+django', 'javascript+jinja', 'js+jinja'), (), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')), 'JavascriptErbLexer': ('pygments.lexers.templates', 'JavaScript+Ruby', ('javascript+ruby', 'js+ruby', 'javascript+erb', 'js+erb'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')), 'JavascriptGenshiLexer': ('pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')), 'JavascriptLexer': ('pygments.lexers.javascript', 'JavaScript', ('javascript', 'js'), ('*.js', '*.jsm', '*.mjs', '*.cjs'), ('application/javascript', 'application/x-javascript', 'text/x-javascript', 'text/javascript')), 'JavascriptPhpLexer': ('pygments.lexers.templates', 'JavaScript+PHP', ('javascript+php', 'js+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')), 'JavascriptSmartyLexer': ('pygments.lexers.templates', 'JavaScript+Smarty', ('javascript+smarty', 'js+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')), 'JclLexer': ('pygments.lexers.scripting', 'JCL', ('jcl',), ('*.jcl',), ('text/x-jcl',)), 'JsgfLexer': ('pygments.lexers.grammar_notation', 'JSGF', ('jsgf',), ('*.jsgf',), ('application/jsgf', 'application/x-jsgf', 'text/jsgf')), 'JsonBareObjectLexer': ('pygments.lexers.data', 'JSONBareObject', (), (), ()), 'JsonLdLexer': ('pygments.lexers.data', 'JSON-LD', ('jsonld', 'json-ld'), ('*.jsonld',), ('application/ld+json',)), 'JsonLexer': ('pygments.lexers.data', 'JSON', ('json', 'json-object'), ('*.json', 'Pipfile.lock'), ('application/json', 'application/json-object')), 'JspLexer': ('pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)), 'JuliaConsoleLexer': ('pygments.lexers.julia', 'Julia console', ('jlcon', 'julia-repl'), (), ()), 'JuliaLexer': ('pygments.lexers.julia', 'Julia', ('julia', 'jl'), ('*.jl',), ('text/x-julia', 'application/x-julia')), 'JuttleLexer': ('pygments.lexers.javascript', 'Juttle', ('juttle',), ('*.juttle',), ('application/juttle', 'application/x-juttle', 'text/x-juttle', 'text/juttle')), 'KalLexer': ('pygments.lexers.javascript', 'Kal', ('kal',), ('*.kal',), ('text/kal', 'application/kal')), 'KconfigLexer': ('pygments.lexers.configs', 'Kconfig', ('kconfig', 'menuconfig', 'linux-config', 'kernel-config'), ('Kconfig*', '*Config.in*', 'external.in*', 'standard-modules.in'), ('text/x-kconfig',)), 'KernelLogLexer': ('pygments.lexers.textfmts', 'Kernel log', ('kmsg', 'dmesg'), ('*.kmsg', '*.dmesg'), ()), 'KokaLexer': ('pygments.lexers.haskell', 'Koka', ('koka',), ('*.kk', '*.kki'), ('text/x-koka',)), 'KotlinLexer': ('pygments.lexers.jvm', 'Kotlin', ('kotlin',), ('*.kt', '*.kts'), ('text/x-kotlin',)), 'KuinLexer': ('pygments.lexers.kuin', 'Kuin', ('kuin',), ('*.kn',), ()), 'LSLLexer': ('pygments.lexers.scripting', 'LSL', ('lsl',), ('*.lsl',), ('text/x-lsl',)), 'LassoCssLexer': ('pygments.lexers.templates', 'CSS+Lasso', ('css+lasso',), (), ('text/css+lasso',)), 'LassoHtmlLexer': ('pygments.lexers.templates', 'HTML+Lasso', ('html+lasso',), (), ('text/html+lasso', 'application/x-httpd-lasso', 
'application/x-httpd-lasso[89]')), 'LassoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Lasso', ('javascript+lasso', 'js+lasso'), (), ('application/x-javascript+lasso', 'text/x-javascript+lasso', 'text/javascript+lasso')), 'LassoLexer': ('pygments.lexers.javascript', 'Lasso', ('lasso', 'lassoscript'), ('*.lasso', '*.lasso[89]'), ('text/x-lasso',)), 'LassoXmlLexer': ('pygments.lexers.templates', 'XML+Lasso', ('xml+lasso',), (), ('application/xml+lasso',)), 'LeanLexer': ('pygments.lexers.theorem', 'Lean', ('lean',), ('*.lean',), ('text/x-lean',)), 'LessCssLexer': ('pygments.lexers.css', 'LessCss', ('less',), ('*.less',), ('text/x-less-css',)), 'LighttpdConfLexer': ('pygments.lexers.configs', 'Lighttpd configuration file', ('lighttpd', 'lighty'), ('lighttpd.conf',), ('text/x-lighttpd-conf',)), 'LilyPondLexer': ('pygments.lexers.lilypond', 'LilyPond', ('lilypond',), ('*.ly',), ()), 'LimboLexer': ('pygments.lexers.inferno', 'Limbo', ('limbo',), ('*.b',), ('text/limbo',)), 'LiquidLexer': ('pygments.lexers.templates', 'liquid', ('liquid',), ('*.liquid',), ()), 'LiterateAgdaLexer': ('pygments.lexers.haskell', 'Literate Agda', ('literate-agda', 'lagda'), ('*.lagda',), ('text/x-literate-agda',)), 'LiterateCryptolLexer': ('pygments.lexers.haskell', 'Literate Cryptol', ('literate-cryptol', 'lcryptol', 'lcry'), ('*.lcry',), ('text/x-literate-cryptol',)), 'LiterateHaskellLexer': ('pygments.lexers.haskell', 'Literate Haskell', ('literate-haskell', 'lhaskell', 'lhs'), ('*.lhs',), ('text/x-literate-haskell',)), 'LiterateIdrisLexer': ('pygments.lexers.haskell', 'Literate Idris', ('literate-idris', 'lidris', 'lidr'), ('*.lidr',), ('text/x-literate-idris',)), 'LiveScriptLexer': ('pygments.lexers.javascript', 'LiveScript', ('livescript', 'live-script'), ('*.ls',), ('text/livescript',)), 'LlvmLexer': ('pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)), 'LlvmMirBodyLexer': ('pygments.lexers.asm', 'LLVM-MIR Body', ('llvm-mir-body',), (), ()), 'LlvmMirLexer': ('pygments.lexers.asm', 'LLVM-MIR', ('llvm-mir',), ('*.mir',), ()), 'LogosLexer': ('pygments.lexers.objective', 'Logos', ('logos',), ('*.x', '*.xi', '*.xm', '*.xmi'), ('text/x-logos',)), 'LogtalkLexer': ('pygments.lexers.prolog', 'Logtalk', ('logtalk',), ('*.lgt', '*.logtalk'), ('text/x-logtalk',)), 'LuaLexer': ('pygments.lexers.scripting', 'Lua', ('lua',), ('*.lua', '*.wlua'), ('text/x-lua', 'application/x-lua')), 'MIMELexer': ('pygments.lexers.mime', 'MIME', ('mime',), (), ('multipart/mixed', 'multipart/related', 'multipart/alternative')), 'MOOCodeLexer': ('pygments.lexers.scripting', 'MOOCode', ('moocode', 'moo'), ('*.moo',), ('text/x-moocode',)), 'MSDOSSessionLexer': ('pygments.lexers.shell', 'MSDOS Session', ('doscon',), (), ()), 'MakefileLexer': ('pygments.lexers.make', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', '*.mk', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)), 'MakoCssLexer': ('pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)), 'MakoHtmlLexer': ('pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)), 'MakoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Mako', ('javascript+mako', 'js+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')), 'MakoLexer': ('pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)), 'MakoXmlLexer': ('pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)), 'MaqlLexer': 
('pygments.lexers.business', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')), 'MarkdownLexer': ('pygments.lexers.markup', 'Markdown', ('markdown', 'md'), ('*.md', '*.markdown'), ('text/x-markdown',)), 'MaskLexer': ('pygments.lexers.javascript', 'Mask', ('mask',), ('*.mask',), ('text/x-mask',)), 'MasonLexer': ('pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)), 'MathematicaLexer': ('pygments.lexers.algebra', 'Mathematica', ('mathematica', 'mma', 'nb'), ('*.nb', '*.cdf', '*.nbp', '*.ma'), ('application/mathematica', 'application/vnd.wolfram.mathematica', 'application/vnd.wolfram.mathematica.package', 'application/vnd.wolfram.cdf')), 'MatlabLexer': ('pygments.lexers.matlab', 'Matlab', ('matlab',), ('*.m',), ('text/matlab',)), 'MatlabSessionLexer': ('pygments.lexers.matlab', 'Matlab session', ('matlabsession',), (), ()), 'MaximaLexer': ('pygments.lexers.maxima', 'Maxima', ('maxima', 'macsyma'), ('*.mac', '*.max'), ()), 'MesonLexer': ('pygments.lexers.meson', 'Meson', ('meson', 'meson.build'), ('meson.build', 'meson_options.txt'), ('text/x-meson',)), 'MiniDLexer': ('pygments.lexers.d', 'MiniD', ('minid',), (), ('text/x-minidsrc',)), 'MiniScriptLexer': ('pygments.lexers.scripting', 'MiniScript', ('miniscript', 'ms'), ('*.ms',), ('text/x-minicript', 'application/x-miniscript')), 'ModelicaLexer': ('pygments.lexers.modeling', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)), 'Modula2Lexer': ('pygments.lexers.modula2', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)), 'MoinWikiLexer': ('pygments.lexers.markup', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)), 'MonkeyLexer': ('pygments.lexers.basic', 'Monkey', ('monkey',), ('*.monkey',), ('text/x-monkey',)), 'MonteLexer': ('pygments.lexers.monte', 'Monte', ('monte',), ('*.mt',), ()), 'MoonScriptLexer': ('pygments.lexers.scripting', 'MoonScript', ('moonscript', 'moon'), ('*.moon',), ('text/x-moonscript', 'application/x-moonscript')), 'MoselLexer': ('pygments.lexers.mosel', 'Mosel', ('mosel',), ('*.mos',), ()), 'MozPreprocCssLexer': ('pygments.lexers.markup', 'CSS+mozpreproc', ('css+mozpreproc',), ('*.css.in',), ()), 'MozPreprocHashLexer': ('pygments.lexers.markup', 'mozhashpreproc', ('mozhashpreproc',), (), ()), 'MozPreprocJavascriptLexer': ('pygments.lexers.markup', 'Javascript+mozpreproc', ('javascript+mozpreproc',), ('*.js.in',), ()), 'MozPreprocPercentLexer': ('pygments.lexers.markup', 'mozpercentpreproc', ('mozpercentpreproc',), (), ()), 'MozPreprocXulLexer': ('pygments.lexers.markup', 'XUL+mozpreproc', ('xul+mozpreproc',), ('*.xul.in',), ()), 'MqlLexer': ('pygments.lexers.c_like', 'MQL', ('mql', 'mq4', 'mq5', 'mql4', 'mql5'), ('*.mq4', '*.mq5', '*.mqh'), ('text/x-mql',)), 'MscgenLexer': ('pygments.lexers.dsls', 'Mscgen', ('mscgen', 'msc'), ('*.msc',), ()), 'MuPADLexer': ('pygments.lexers.algebra', 'MuPAD', ('mupad',), ('*.mu',), ()), 'MxmlLexer': ('pygments.lexers.actionscript', 'MXML', ('mxml',), ('*.mxml',), ()), 'MySqlLexer': ('pygments.lexers.sql', 'MySQL', ('mysql',), (), ('text/x-mysql',)), 'MyghtyCssLexer': ('pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)), 'MyghtyHtmlLexer': ('pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)), 'MyghtyJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Myghty', ('javascript+myghty', 'js+myghty'), (), 
('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')), 'MyghtyLexer': ('pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)), 'MyghtyXmlLexer': ('pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)), 'NCLLexer': ('pygments.lexers.ncl', 'NCL', ('ncl',), ('*.ncl',), ('text/ncl',)), 'NSISLexer': ('pygments.lexers.installers', 'NSIS', ('nsis', 'nsi', 'nsh'), ('*.nsi', '*.nsh'), ('text/x-nsis',)), 'NasmLexer': ('pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM'), ('text/x-nasm',)), 'NasmObjdumpLexer': ('pygments.lexers.asm', 'objdump-nasm', ('objdump-nasm',), ('*.objdump-intel',), ('text/x-nasm-objdump',)), 'NemerleLexer': ('pygments.lexers.dotnet', 'Nemerle', ('nemerle',), ('*.n',), ('text/x-nemerle',)), 'NesCLexer': ('pygments.lexers.c_like', 'nesC', ('nesc',), ('*.nc',), ('text/x-nescsrc',)), 'NestedTextLexer': ('pygments.lexers.configs', 'NestedText', ('nestedtext', 'nt'), ('*.nt',), ()), 'NewLispLexer': ('pygments.lexers.lisp', 'NewLisp', ('newlisp',), ('*.lsp', '*.nl', '*.kif'), ('text/x-newlisp', 'application/x-newlisp')), 'NewspeakLexer': ('pygments.lexers.smalltalk', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)), 'NginxConfLexer': ('pygments.lexers.configs', 'Nginx configuration file', ('nginx',), ('nginx.conf',), ('text/x-nginx-conf',)), 'NimrodLexer': ('pygments.lexers.nimrod', 'Nimrod', ('nimrod', 'nim'), ('*.nim', '*.nimrod'), ('text/x-nim',)), 'NitLexer': ('pygments.lexers.nit', 'Nit', ('nit',), ('*.nit',), ()), 'NixLexer': ('pygments.lexers.nix', 'Nix', ('nixos', 'nix'), ('*.nix',), ('text/x-nix',)), 'NodeConsoleLexer': ('pygments.lexers.javascript', 'Node.js REPL console session', ('nodejsrepl',), (), ('text/x-nodejsrepl',)), 'NotmuchLexer': ('pygments.lexers.textfmts', 'Notmuch', ('notmuch',), (), ()), 'NuSMVLexer': ('pygments.lexers.smv', 'NuSMV', ('nusmv',), ('*.smv',), ()), 'NumPyLexer': ('pygments.lexers.python', 'NumPy', ('numpy',), (), ()), 'ObjdumpLexer': ('pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)), 'ObjectiveCLexer': ('pygments.lexers.objective', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m', '*.h'), ('text/x-objective-c',)), 'ObjectiveCppLexer': ('pygments.lexers.objective', 'Objective-C++', ('objective-c++', 'objectivec++', 'obj-c++', 'objc++'), ('*.mm', '*.hh'), ('text/x-objective-c++',)), 'ObjectiveJLexer': ('pygments.lexers.javascript', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)), 'OcamlLexer': ('pygments.lexers.ml', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)), 'OctaveLexer': ('pygments.lexers.matlab', 'Octave', ('octave',), ('*.m',), ('text/octave',)), 'OdinLexer': ('pygments.lexers.archetype', 'ODIN', ('odin',), ('*.odin',), ('text/odin',)), 'OmgIdlLexer': ('pygments.lexers.c_like', 'OMG Interface Definition Language', ('omg-idl',), ('*.idl', '*.pidl'), ()), 'OocLexer': ('pygments.lexers.ooc', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)), 'OpaLexer': ('pygments.lexers.ml', 'Opa', ('opa',), ('*.opa',), ('text/x-opa',)), 'OpenEdgeLexer': ('pygments.lexers.business', 'OpenEdge ABL', ('openedge', 'abl', 'progress'), ('*.p', '*.cls'), ('text/x-openedge', 'application/x-openedge')), 'OutputLexer': ('pygments.lexers.special', 'Text output', ('output',), (), ()), 'PacmanConfLexer': ('pygments.lexers.configs', 'PacmanConf', ('pacmanconf',), ('pacman.conf',), ()), 
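# Each entry in LEXERS maps a lexer class name to a 5-tuple:
#     (module path, display name, aliases, filename globs, mimetypes).
# pygments.lexers resolves lookups against this table before importing
# anything, so lexer modules load lazily.  A minimal sketch with the
# public API (kept in comments so the dict literal stays intact):
#     >>> from pygments.lexers import get_lexer_by_name
#     >>> get_lexer_by_name('pan').name   # matches 'PanLexer' below and
#     'Pan'                               # imports pygments.lexers.dsls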
'PanLexer': ('pygments.lexers.dsls', 'Pan', ('pan',), ('*.pan',), ()), 'ParaSailLexer': ('pygments.lexers.parasail', 'ParaSail', ('parasail',), ('*.psi', '*.psl'), ('text/x-parasail',)), 'PawnLexer': ('pygments.lexers.pawn', 'Pawn', ('pawn',), ('*.p', '*.pwn', '*.inc'), ('text/x-pawn',)), 'PegLexer': ('pygments.lexers.grammar_notation', 'PEG', ('peg',), ('*.peg',), ('text/x-peg',)), 'Perl6Lexer': ('pygments.lexers.perl', 'Perl6', ('perl6', 'pl6', 'raku'), ('*.pl', '*.pm', '*.nqp', '*.p6', '*.6pl', '*.p6l', '*.pl6', '*.6pm', '*.p6m', '*.pm6', '*.t', '*.raku', '*.rakumod', '*.rakutest', '*.rakudoc'), ('text/x-perl6', 'application/x-perl6')), 'PerlLexer': ('pygments.lexers.perl', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm', '*.t', '*.perl'), ('text/x-perl', 'application/x-perl')), 'PhpLexer': ('pygments.lexers.php', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]', '*.inc'), ('text/x-php',)), 'PigLexer': ('pygments.lexers.jvm', 'Pig', ('pig',), ('*.pig',), ('text/x-pig',)), 'PikeLexer': ('pygments.lexers.c_like', 'Pike', ('pike',), ('*.pike', '*.pmod'), ('text/x-pike',)), 'PkgConfigLexer': ('pygments.lexers.configs', 'PkgConfig', ('pkgconfig',), ('*.pc',), ()), 'PlPgsqlLexer': ('pygments.lexers.sql', 'PL/pgSQL', ('plpgsql',), (), ('text/x-plpgsql',)), 'PointlessLexer': ('pygments.lexers.pointless', 'Pointless', ('pointless',), ('*.ptls',), ()), 'PonyLexer': ('pygments.lexers.pony', 'Pony', ('pony',), ('*.pony',), ()), 'PostScriptLexer': ('pygments.lexers.graphics', 'PostScript', ('postscript', 'postscr'), ('*.ps', '*.eps'), ('application/postscript',)), 'PostgresConsoleLexer': ('pygments.lexers.sql', 'PostgreSQL console (psql)', ('psql', 'postgresql-console', 'postgres-console'), (), ('text/x-postgresql-psql',)), 'PostgresLexer': ('pygments.lexers.sql', 'PostgreSQL SQL dialect', ('postgresql', 'postgres'), (), ('text/x-postgresql',)), 'PovrayLexer': ('pygments.lexers.graphics', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)), 'PowerShellLexer': ('pygments.lexers.shell', 'PowerShell', ('powershell', 'pwsh', 'posh', 'ps1', 'psm1'), ('*.ps1', '*.psm1'), ('text/x-powershell',)), 'PowerShellSessionLexer': ('pygments.lexers.shell', 'PowerShell Session', ('pwsh-session', 'ps1con'), (), ()), 'PraatLexer': ('pygments.lexers.praat', 'Praat', ('praat',), ('*.praat', '*.proc', '*.psc'), ()), 'ProcfileLexer': ('pygments.lexers.procfile', 'Procfile', ('procfile',), ('Procfile',), ()), 'PrologLexer': ('pygments.lexers.prolog', 'Prolog', ('prolog',), ('*.ecl', '*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)), 'PromQLLexer': ('pygments.lexers.promql', 'PromQL', ('promql',), ('*.promql',), ()), 'PropertiesLexer': ('pygments.lexers.configs', 'Properties', ('properties', 'jproperties'), ('*.properties',), ('text/x-java-properties',)), 'ProtoBufLexer': ('pygments.lexers.dsls', 'Protocol Buffer', ('protobuf', 'proto'), ('*.proto',), ()), 'PsyshConsoleLexer': ('pygments.lexers.php', 'PsySH console session for PHP', ('psysh',), (), ()), 'PugLexer': ('pygments.lexers.html', 'Pug', ('pug', 'jade'), ('*.pug', '*.jade'), ('text/x-pug', 'text/x-jade')), 'PuppetLexer': ('pygments.lexers.dsls', 'Puppet', ('puppet',), ('*.pp',), ()), 'PyPyLogLexer': ('pygments.lexers.console', 'PyPy Log', ('pypylog', 'pypy'), ('*.pypylog',), ('application/x-pypylog',)), 'Python2Lexer': ('pygments.lexers.python', 'Python 2.x', ('python2', 'py2'), (), ('text/x-python2', 'application/x-python2')), 'Python2TracebackLexer': ('pygments.lexers.python', 'Python 2.x Traceback', ('py2tb',), ('*.py2tb',), 
('text/x-python2-traceback',)), 'PythonConsoleLexer': ('pygments.lexers.python', 'Python console session', ('pycon',), (), ('text/x-python-doctest',)), 'PythonLexer': ('pygments.lexers.python', 'Python', ('python', 'py', 'sage', 'python3', 'py3'), ('*.py', '*.pyw', '*.jy', '*.sage', '*.sc', 'SConstruct', 'SConscript', '*.bzl', 'BUCK', 'BUILD', 'BUILD.bazel', 'WORKSPACE', '*.tac'), ('text/x-python', 'application/x-python', 'text/x-python3', 'application/x-python3')), 'PythonTracebackLexer': ('pygments.lexers.python', 'Python Traceback', ('pytb', 'py3tb'), ('*.pytb', '*.py3tb'), ('text/x-python-traceback', 'text/x-python3-traceback')), 'QBasicLexer': ('pygments.lexers.basic', 'QBasic', ('qbasic', 'basic'), ('*.BAS', '*.bas'), ('text/basic',)), 'QVToLexer': ('pygments.lexers.qvt', 'QVTO', ('qvto', 'qvt'), ('*.qvto',), ()), 'QmlLexer': ('pygments.lexers.webmisc', 'QML', ('qml', 'qbs'), ('*.qml', '*.qbs'), ('application/x-qml', 'application/x-qt.qbs+qml')), 'RConsoleLexer': ('pygments.lexers.r', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()), 'RNCCompactLexer': ('pygments.lexers.rnc', 'Relax-NG Compact', ('rng-compact', 'rnc'), ('*.rnc',), ()), 'RPMSpecLexer': ('pygments.lexers.installers', 'RPMSpec', ('spec',), ('*.spec',), ('text/x-rpm-spec',)), 'RacketLexer': ('pygments.lexers.lisp', 'Racket', ('racket', 'rkt'), ('*.rkt', '*.rktd', '*.rktl'), ('text/x-racket', 'application/x-racket')), 'RagelCLexer': ('pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()), 'RagelCppLexer': ('pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()), 'RagelDLexer': ('pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()), 'RagelEmbeddedLexer': ('pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()), 'RagelJavaLexer': ('pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()), 'RagelLexer': ('pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()), 'RagelObjectiveCLexer': ('pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()), 'RagelRubyLexer': ('pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()), 'RawTokenLexer': ('pygments.lexers.special', 'Raw token data', (), (), ('application/x-pygments-tokens',)), 'RdLexer': ('pygments.lexers.r', 'Rd', ('rd',), ('*.Rd',), ('text/x-r-doc',)), 'ReasonLexer': ('pygments.lexers.ml', 'ReasonML', ('reasonml', 'reason'), ('*.re', '*.rei'), ('text/x-reasonml',)), 'RebolLexer': ('pygments.lexers.rebol', 'REBOL', ('rebol',), ('*.r', '*.r3', '*.reb'), ('text/x-rebol',)), 'RedLexer': ('pygments.lexers.rebol', 'Red', ('red', 'red/system'), ('*.red', '*.reds'), ('text/x-red', 'text/x-red-system')), 'RedcodeLexer': ('pygments.lexers.esoteric', 'Redcode', ('redcode',), ('*.cw',), ()), 'RegeditLexer': ('pygments.lexers.configs', 'reg', ('registry',), ('*.reg',), ('text/x-windows-registry',)), 'ResourceLexer': ('pygments.lexers.resource', 'ResourceBundle', ('resourcebundle', 'resource'), (), ()), 'RexxLexer': ('pygments.lexers.scripting', 'Rexx', ('rexx', 'arexx'), ('*.rexx', '*.rex', '*.rx', '*.arexx'), ('text/x-rexx',)), 'RhtmlLexer': ('pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)), 'RideLexer': ('pygments.lexers.ride', 'Ride', ('ride',), ('*.ride',), ('text/x-ride',)), 'RitaLexer': ('pygments.lexers.rita', 'Rita', ('rita',), ('*.rita',), ('text/rita',)), 'RoboconfGraphLexer': ('pygments.lexers.roboconf', 'Roboconf Graph', ('roboconf-graph',), 
('*.graph',), ()), 'RoboconfInstancesLexer': ('pygments.lexers.roboconf', 'Roboconf Instances', ('roboconf-instances',), ('*.instances',), ()), 'RobotFrameworkLexer': ('pygments.lexers.robotframework', 'RobotFramework', ('robotframework',), ('*.robot',), ('text/x-robotframework',)), 'RqlLexer': ('pygments.lexers.sql', 'RQL', ('rql',), ('*.rql',), ('text/x-rql',)), 'RslLexer': ('pygments.lexers.dsls', 'RSL', ('rsl',), ('*.rsl',), ('text/rsl',)), 'RstLexer': ('pygments.lexers.markup', 'reStructuredText', ('restructuredtext', 'rst', 'rest'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')), 'RtsLexer': ('pygments.lexers.trafficscript', 'TrafficScript', ('trafficscript', 'rts'), ('*.rts',), ()), 'RubyConsoleLexer': ('pygments.lexers.ruby', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)), 'RubyLexer': ('pygments.lexers.ruby', 'Ruby', ('ruby', 'rb', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby', 'Gemfile', 'Vagrantfile'), ('text/x-ruby', 'application/x-ruby')), 'RustLexer': ('pygments.lexers.rust', 'Rust', ('rust', 'rs'), ('*.rs', '*.rs.in'), ('text/rust', 'text/x-rust')), 'SASLexer': ('pygments.lexers.sas', 'SAS', ('sas',), ('*.SAS', '*.sas'), ('text/x-sas', 'text/sas', 'application/x-sas')), 'SLexer': ('pygments.lexers.r', 'S', ('splus', 's', 'r'), ('*.S', '*.R', '.Rhistory', '.Rprofile', '.Renviron'), ('text/S-plus', 'text/S', 'text/x-r-source', 'text/x-r', 'text/x-R', 'text/x-r-history', 'text/x-r-profile')), 'SMLLexer': ('pygments.lexers.ml', 'Standard ML', ('sml',), ('*.sml', '*.sig', '*.fun'), ('text/x-standardml', 'application/x-standardml')), 'SarlLexer': ('pygments.lexers.jvm', 'SARL', ('sarl',), ('*.sarl',), ('text/x-sarl',)), 'SassLexer': ('pygments.lexers.css', 'Sass', ('sass',), ('*.sass',), ('text/x-sass',)), 'SaviLexer': ('pygments.lexers.savi', 'Savi', ('savi',), ('*.savi',), ()), 'ScalaLexer': ('pygments.lexers.jvm', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)), 'ScamlLexer': ('pygments.lexers.html', 'Scaml', ('scaml',), ('*.scaml',), ('text/x-scaml',)), 'ScdocLexer': ('pygments.lexers.scdoc', 'scdoc', ('scdoc', 'scd'), ('*.scd', '*.scdoc'), ()), 'SchemeLexer': ('pygments.lexers.lisp', 'Scheme', ('scheme', 'scm'), ('*.scm', '*.ss'), ('text/x-scheme', 'application/x-scheme')), 'ScilabLexer': ('pygments.lexers.matlab', 'Scilab', ('scilab',), ('*.sci', '*.sce', '*.tst'), ('text/scilab',)), 'ScssLexer': ('pygments.lexers.css', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)), 'SedLexer': ('pygments.lexers.textedit', 'Sed', ('sed', 'gsed', 'ssed'), ('*.sed', '*.[gs]sed'), ('text/x-sed',)), 'ShExCLexer': ('pygments.lexers.rdf', 'ShExC', ('shexc', 'shex'), ('*.shex',), ('text/shex',)), 'ShenLexer': ('pygments.lexers.lisp', 'Shen', ('shen',), ('*.shen',), ('text/x-shen', 'application/x-shen')), 'SieveLexer': ('pygments.lexers.sieve', 'Sieve', ('sieve',), ('*.siv', '*.sieve'), ()), 'SilverLexer': ('pygments.lexers.verification', 'Silver', ('silver',), ('*.sil', '*.vpr'), ()), 'SingularityLexer': ('pygments.lexers.configs', 'Singularity', ('singularity',), ('*.def', 'Singularity'), ()), 'SlashLexer': ('pygments.lexers.slash', 'Slash', ('slash',), ('*.sla',), ()), 'SlimLexer': ('pygments.lexers.webmisc', 'Slim', ('slim',), ('*.slim',), ('text/x-slim',)), 'SlurmBashLexer': ('pygments.lexers.shell', 'Slurm', ('slurm', 'sbatch'), ('*.sl',), ()), 'SmaliLexer': ('pygments.lexers.dalvik', 'Smali', ('smali',), ('*.smali',), ('text/smali',)), 'SmalltalkLexer': ('pygments.lexers.smalltalk', 'Smalltalk', 
('smalltalk', 'squeak', 'st'), ('*.st',), ('text/x-smalltalk',)), 'SmartGameFormatLexer': ('pygments.lexers.sgf', 'SmartGameFormat', ('sgf',), ('*.sgf',), ()), 'SmartyLexer': ('pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)), 'SmithyLexer': ('pygments.lexers.smithy', 'Smithy', ('smithy',), ('*.smithy',), ()), 'SnobolLexer': ('pygments.lexers.snobol', 'Snobol', ('snobol',), ('*.snobol',), ('text/x-snobol',)), 'SnowballLexer': ('pygments.lexers.dsls', 'Snowball', ('snowball',), ('*.sbl',), ()), 'SolidityLexer': ('pygments.lexers.solidity', 'Solidity', ('solidity',), ('*.sol',), ()), 'SophiaLexer': ('pygments.lexers.sophia', 'Sophia', ('sophia',), ('*.aes',), ()), 'SourcePawnLexer': ('pygments.lexers.pawn', 'SourcePawn', ('sp',), ('*.sp',), ('text/x-sourcepawn',)), 'SourcesListLexer': ('pygments.lexers.installers', 'Debian Sourcelist', ('debsources', 'sourceslist', 'sources.list'), ('sources.list',), ()), 'SparqlLexer': ('pygments.lexers.rdf', 'SPARQL', ('sparql',), ('*.rq', '*.sparql'), ('application/sparql-query',)), 'SpiceLexer': ('pygments.lexers.spice', 'Spice', ('spice', 'spicelang'), ('*.spice',), ('text/x-spice',)), 'SqlLexer': ('pygments.lexers.sql', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)), 'SqliteConsoleLexer': ('pygments.lexers.sql', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)), 'SquidConfLexer': ('pygments.lexers.configs', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)), 'SrcinfoLexer': ('pygments.lexers.srcinfo', 'Srcinfo', ('srcinfo',), ('.SRCINFO',), ()), 'SspLexer': ('pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)), 'StanLexer': ('pygments.lexers.modeling', 'Stan', ('stan',), ('*.stan',), ()), 'StataLexer': ('pygments.lexers.stata', 'Stata', ('stata', 'do'), ('*.do', '*.ado'), ('text/x-stata', 'text/stata', 'application/x-stata')), 'SuperColliderLexer': ('pygments.lexers.supercollider', 'SuperCollider', ('supercollider', 'sc'), ('*.sc', '*.scd'), ('application/supercollider', 'text/supercollider')), 'SwiftLexer': ('pygments.lexers.objective', 'Swift', ('swift',), ('*.swift',), ('text/x-swift',)), 'SwigLexer': ('pygments.lexers.c_like', 'SWIG', ('swig',), ('*.swg', '*.i'), ('text/swig',)), 'SystemVerilogLexer': ('pygments.lexers.hdl', 'systemverilog', ('systemverilog', 'sv'), ('*.sv', '*.svh'), ('text/x-systemverilog',)), 'TAPLexer': ('pygments.lexers.testing', 'TAP', ('tap',), ('*.tap',), ()), 'TNTLexer': ('pygments.lexers.tnt', 'Typographic Number Theory', ('tnt',), ('*.tnt',), ()), 'TOMLLexer': ('pygments.lexers.configs', 'TOML', ('toml',), ('*.toml', 'Pipfile', 'poetry.lock'), ()), 'Tads3Lexer': ('pygments.lexers.int_fiction', 'TADS 3', ('tads3',), ('*.t',), ()), 'TasmLexer': ('pygments.lexers.asm', 'TASM', ('tasm',), ('*.asm', '*.ASM', '*.tasm'), ('text/x-tasm',)), 'TclLexer': ('pygments.lexers.tcl', 'Tcl', ('tcl',), ('*.tcl', '*.rvt'), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')), 'TcshLexer': ('pygments.lexers.shell', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)), 'TcshSessionLexer': ('pygments.lexers.shell', 'Tcsh Session', ('tcshcon',), (), ()), 'TeaTemplateLexer': ('pygments.lexers.templates', 'Tea', ('tea',), ('*.tea',), ('text/x-tea',)), 'TealLexer': ('pygments.lexers.teal', 'teal', ('teal',), ('*.teal',), ()), 'TeraTermLexer': ('pygments.lexers.teraterm', 'Tera Term macro', ('teratermmacro', 'teraterm', 'ttl'), ('*.ttl',), ('text/x-teratermmacro',)), 
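# Note the '*.ttl' glob: 'TeraTermLexer' above and 'TurtleLexer' below
# both register it.  Pygments breaks such filename ties with each
# candidate's analyse_text() score; TeraTermLexer.analyse_text (defined
# in pygments/lexers/teraterm.py later in this archive) returns a small
# 0.01 when Tera Term commands appear in the text.  Rough sketch:
#     >>> from pygments.lexers import guess_lexer_for_filename
#     >>> guess_lexer_for_filename('login.ttl', 'connect myhost\nsendln x')
# ranks every '*.ttl' lexer by its analyse_text() result and returns
# the winner.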
'TermcapLexer': ('pygments.lexers.configs', 'Termcap', ('termcap',), ('termcap', 'termcap.src'), ()), 'TerminfoLexer': ('pygments.lexers.configs', 'Terminfo', ('terminfo',), ('terminfo', 'terminfo.src'), ()), 'TerraformLexer': ('pygments.lexers.configs', 'Terraform', ('terraform', 'tf'), ('*.tf',), ('application/x-tf', 'application/x-terraform')), 'TexLexer': ('pygments.lexers.markup', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')), 'TextLexer': ('pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)), 'ThingsDBLexer': ('pygments.lexers.thingsdb', 'ThingsDB', ('ti', 'thingsdb'), ('*.ti',), ()), 'ThriftLexer': ('pygments.lexers.dsls', 'Thrift', ('thrift',), ('*.thrift',), ('application/x-thrift',)), 'TiddlyWiki5Lexer': ('pygments.lexers.markup', 'tiddler', ('tid',), ('*.tid',), ('text/vnd.tiddlywiki',)), 'TodotxtLexer': ('pygments.lexers.textfmts', 'Todotxt', ('todotxt',), ('todo.txt', '*.todotxt'), ('text/x-todo',)), 'TransactSqlLexer': ('pygments.lexers.sql', 'Transact-SQL', ('tsql', 't-sql'), ('*.sql',), ('text/x-tsql',)), 'TreetopLexer': ('pygments.lexers.parsers', 'Treetop', ('treetop',), ('*.treetop', '*.tt'), ()), 'TurtleLexer': ('pygments.lexers.rdf', 'Turtle', ('turtle',), ('*.ttl',), ('text/turtle', 'application/x-turtle')), 'TwigHtmlLexer': ('pygments.lexers.templates', 'HTML+Twig', ('html+twig',), ('*.twig',), ('text/html+twig',)), 'TwigLexer': ('pygments.lexers.templates', 'Twig', ('twig',), (), ('application/x-twig',)), 'TypeScriptLexer': ('pygments.lexers.javascript', 'TypeScript', ('typescript', 'ts'), ('*.ts',), ('application/x-typescript', 'text/x-typescript')), 'TypoScriptCssDataLexer': ('pygments.lexers.typoscript', 'TypoScriptCssData', ('typoscriptcssdata',), (), ()), 'TypoScriptHtmlDataLexer': ('pygments.lexers.typoscript', 'TypoScriptHtmlData', ('typoscripthtmldata',), (), ()), 'TypoScriptLexer': ('pygments.lexers.typoscript', 'TypoScript', ('typoscript',), ('*.typoscript',), ('text/x-typoscript',)), 'UcodeLexer': ('pygments.lexers.unicon', 'ucode', ('ucode',), ('*.u', '*.u1', '*.u2'), ()), 'UniconLexer': ('pygments.lexers.unicon', 'Unicon', ('unicon',), ('*.icn',), ('text/unicon',)), 'UrbiscriptLexer': ('pygments.lexers.urbi', 'UrbiScript', ('urbiscript',), ('*.u',), ('application/x-urbiscript',)), 'UsdLexer': ('pygments.lexers.usd', 'USD', ('usd', 'usda'), ('*.usd', '*.usda'), ()), 'VBScriptLexer': ('pygments.lexers.basic', 'VBScript', ('vbscript',), ('*.vbs', '*.VBS'), ()), 'VCLLexer': ('pygments.lexers.varnish', 'VCL', ('vcl',), ('*.vcl',), ('text/x-vclsrc',)), 'VCLSnippetLexer': ('pygments.lexers.varnish', 'VCLSnippets', ('vclsnippets', 'vclsnippet'), (), ('text/x-vclsnippet',)), 'VCTreeStatusLexer': ('pygments.lexers.console', 'VCTreeStatus', ('vctreestatus',), (), ()), 'VGLLexer': ('pygments.lexers.dsls', 'VGL', ('vgl',), ('*.rpf',), ()), 'ValaLexer': ('pygments.lexers.c_like', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)), 'VbNetAspxLexer': ('pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()), 'VbNetLexer': ('pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')), 'VelocityHtmlLexer': ('pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)), 'VelocityLexer': ('pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()), 'VelocityXmlLexer': ('pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), 
('application/xml+velocity',)), 'VerilogLexer': ('pygments.lexers.hdl', 'verilog', ('verilog', 'v'), ('*.v',), ('text/x-verilog',)), 'VhdlLexer': ('pygments.lexers.hdl', 'vhdl', ('vhdl',), ('*.vhdl', '*.vhd'), ('text/x-vhdl',)), 'VimLexer': ('pygments.lexers.textedit', 'VimL', ('vim',), ('*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'), ('text/x-vim',)), 'WDiffLexer': ('pygments.lexers.diff', 'WDiff', ('wdiff',), ('*.wdiff',), ()), 'WatLexer': ('pygments.lexers.webassembly', 'WebAssembly', ('wast', 'wat'), ('*.wat', '*.wast'), ()), 'WebIDLLexer': ('pygments.lexers.webidl', 'Web IDL', ('webidl',), ('*.webidl',), ()), 'WhileyLexer': ('pygments.lexers.whiley', 'Whiley', ('whiley',), ('*.whiley',), ('text/x-whiley',)), 'X10Lexer': ('pygments.lexers.x10', 'X10', ('x10', 'xten'), ('*.x10',), ('text/x-x10',)), 'XQueryLexer': ('pygments.lexers.webmisc', 'XQuery', ('xquery', 'xqy', 'xq', 'xql', 'xqm'), ('*.xqy', '*.xquery', '*.xq', '*.xql', '*.xqm'), ('text/xquery', 'application/xquery')), 'XmlDjangoLexer': ('pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), (), ('application/xml+django', 'application/xml+jinja')), 'XmlErbLexer': ('pygments.lexers.templates', 'XML+Ruby', ('xml+ruby', 'xml+erb'), (), ('application/xml+ruby',)), 'XmlLexer': ('pygments.lexers.html', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl', '*.wsf'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml')), 'XmlPhpLexer': ('pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)), 'XmlSmartyLexer': ('pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)), 'XorgLexer': ('pygments.lexers.xorg', 'Xorg', ('xorg.conf',), ('xorg.conf',), ()), 'XsltLexer': ('pygments.lexers.html', 'XSLT', ('xslt',), ('*.xsl', '*.xslt', '*.xpl'), ('application/xsl+xml', 'application/xslt+xml')), 'XtendLexer': ('pygments.lexers.jvm', 'Xtend', ('xtend',), ('*.xtend',), ('text/x-xtend',)), 'XtlangLexer': ('pygments.lexers.lisp', 'xtlang', ('extempore',), ('*.xtm',), ()), 'YamlJinjaLexer': ('pygments.lexers.templates', 'YAML+Jinja', ('yaml+jinja', 'salt', 'sls'), ('*.sls',), ('text/x-yaml+jinja', 'text/x-sls')), 'YamlLexer': ('pygments.lexers.data', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',)), 'YangLexer': ('pygments.lexers.yang', 'YANG', ('yang',), ('*.yang',), ('application/yang',)), 'ZeekLexer': ('pygments.lexers.dsls', 'Zeek', ('zeek', 'bro'), ('*.zeek', '*.bro'), ()), 'ZephirLexer': ('pygments.lexers.php', 'Zephir', ('zephir',), ('*.zep',), ()), 'ZigLexer': ('pygments.lexers.zig', 'Zig', ('zig',), ('*.zig',), ('text/zig',)), 'apdlexer': ('pygments.lexers.apdlexer', 'ANSYS parametric design language', ('ansys', 'apdl'), ('*.ans',), ()), } if __name__ == '__main__': # pragma: no cover import sys import os # lookup lexers found_lexers = [] sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) for root, dirs, files in os.walk('.'): for filename in files: if filename.endswith('.py') and not filename.startswith('_'): module_name = 'pygments.lexers%s.%s' % ( root[1:].replace('/', '.'), filename[:-3]) print(module_name) module = __import__(module_name, None, None, ['']) for lexer_name in module.__all__: lexer = getattr(module, lexer_name) found_lexers.append( '%r: %r' % (lexer_name, (module_name, lexer.name, tuple(lexer.aliases), tuple(lexer.filenames), tuple(lexer.mimetypes)))) # sort them to make the diff minimal found_lexers.sort() # 
extract useful source code from this file with open(__file__) as fp: content = fp.read() # Replace CRLF with LF so the file is rewritten with Unix newlines on Windows. # # Note that contributors should keep the newline style of the # master repository, for example by using some kind of automatic # EOL management, like `EolExtension`. content = content.replace("\r\n", "\n") header = content[:content.find('LEXERS = {')] footer = content[content.find("if __name__ == '__main__':"):] # write new file with open(__file__, 'w') as fp: fp.write(header) fp.write('LEXERS = {\n %s,\n}\n\n' % ',\n '.join(found_lexers)) fp.write(footer) print('=== %d lexers processed.' % len(found_lexers)) pygments-2.11.2/pygments/lexers/teraterm.py0000644000175000017500000002323514165547207020710 0ustar carstencarsten""" pygments.lexers.teraterm ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for Tera Term macro files. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups from pygments.token import Text, Comment, Operator, Name, String, \ Number, Keyword __all__ = ['TeraTermLexer'] class TeraTermLexer(RegexLexer): """ For `Tera Term `_ macro source code. .. versionadded:: 2.4 """ name = 'Tera Term macro' aliases = ['teratermmacro', 'teraterm', 'ttl'] filenames = ['*.ttl'] mimetypes = ['text/x-teratermmacro'] tokens = { 'root': [ include('comments'), include('labels'), include('commands'), include('builtin-variables'), include('user-variables'), include('operators'), include('numeric-literals'), include('string-literals'), include('all-whitespace'), (r'\S', Text), ], 'comments': [ (r';[^\r\n]*', Comment.Single), (r'/\*', Comment.Multiline, 'in-comment'), ], 'in-comment': [ (r'\*/', Comment.Multiline, '#pop'), (r'[^*/]+', Comment.Multiline), (r'[*/]', Comment.Multiline) ], 'labels': [ (r'(?i)^(\s*)(:[a-z0-9_]+)', bygroups(Text, Name.Label)), ], 'commands': [ ( r'(?i)\b(' r'basename|' r'beep|' r'bplusrecv|' r'bplussend|' r'break|' r'bringupbox|' # 'call' is handled separately. r'callmenu|' r'changedir|' r'checksum16|' r'checksum16file|' r'checksum32|' r'checksum32file|' r'checksum8|' r'checksum8file|' r'clearscreen|' r'clipb2var|' r'closesbox|' r'closett|' r'code2str|' r'connect|' r'continue|' r'crc16|' r'crc16file|' r'crc32|' r'crc32file|' r'cygconnect|' r'delpassword|' r'dirname|' r'dirnamebox|' r'disconnect|' r'dispstr|' r'do|' r'else|' r'elseif|' r'enablekeyb|' r'end|' r'endif|' r'enduntil|' r'endwhile|' r'exec|' r'execcmnd|' r'exit|' r'expandenv|' r'fileclose|' r'fileconcat|' r'filecopy|' r'filecreate|' r'filedelete|' r'filelock|' r'filemarkptr|' r'filenamebox|' r'fileopen|' r'fileread|' r'filereadln|' r'filerename|' r'filesearch|' r'fileseek|' r'fileseekback|' r'filestat|' r'filestrseek|' r'filestrseek2|' r'filetruncate|' r'fileunlock|' r'filewrite|' r'filewriteln|' r'findclose|' r'findfirst|' r'findnext|' r'flushrecv|' r'foldercreate|' r'folderdelete|' r'foldersearch|' r'for|' r'getdate|' r'getdir|' r'getenv|' r'getfileattr|' r'gethostname|' r'getipv4addr|' r'getipv6addr|' r'getmodemstatus|' r'getpassword|' r'getspecialfolder|' r'gettime|' r'gettitle|' r'getttdir|' r'getver|' # 'goto' is handled separately.
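# 'call' and 'goto' are deliberately missing from this alternation: the
# bygroups() rule at the end of this state matches them together with
# their label argument, so the jump target is tokenized as Name.Label
# instead of falling through to the identifier rules.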
r'if|' r'ifdefined|' r'include|' r'inputbox|' r'int2str|' r'intdim|' r'ispassword|' r'kmtfinish|' r'kmtget|' r'kmtrecv|' r'kmtsend|' r'listbox|' r'loadkeymap|' r'logautoclosemode|' r'logclose|' r'loginfo|' r'logopen|' r'logpause|' r'logrotate|' r'logstart|' r'logwrite|' r'loop|' r'makepath|' r'messagebox|' r'mpause|' r'next|' r'passwordbox|' r'pause|' r'quickvanrecv|' r'quickvansend|' r'random|' r'recvln|' r'regexoption|' r'restoresetup|' r'return|' r'rotateleft|' r'rotateright|' r'scprecv|' r'scpsend|' r'send|' r'sendbreak|' r'sendbroadcast|' r'sendfile|' r'sendkcode|' r'sendln|' r'sendlnbroadcast|' r'sendlnmulticast|' r'sendmulticast|' r'setbaud|' r'setdate|' r'setdebug|' r'setdir|' r'setdlgpos|' r'setdtr|' r'setecho|' r'setenv|' r'setexitcode|' r'setfileattr|' r'setflowctrl|' r'setmulticastname|' r'setpassword|' r'setrts|' r'setspeed|' r'setsync|' r'settime|' r'settitle|' r'show|' r'showtt|' r'sprintf|' r'sprintf2|' r'statusbox|' r'str2code|' r'str2int|' r'strcompare|' r'strconcat|' r'strcopy|' r'strdim|' r'strinsert|' r'strjoin|' r'strlen|' r'strmatch|' r'strremove|' r'strreplace|' r'strscan|' r'strspecial|' r'strsplit|' r'strtrim|' r'testlink|' r'then|' r'tolower|' r'toupper|' r'unlink|' r'until|' r'uptime|' r'var2clipb|' r'wait|' r'wait4all|' r'waitevent|' r'waitln|' r'waitn|' r'waitrecv|' r'waitregex|' r'while|' r'xmodemrecv|' r'xmodemsend|' r'yesnobox|' r'ymodemrecv|' r'ymodemsend|' r'zmodemrecv|' r'zmodemsend' r')\b', Keyword, ), (r'(?i)(call|goto)([ \t]+)([a-z0-9_]+)', bygroups(Keyword, Text, Name.Label)), ], 'builtin-variables': [ ( r'(?i)(' r'groupmatchstr1|' r'groupmatchstr2|' r'groupmatchstr3|' r'groupmatchstr4|' r'groupmatchstr5|' r'groupmatchstr6|' r'groupmatchstr7|' r'groupmatchstr8|' r'groupmatchstr9|' r'inputstr|' r'matchstr|' r'mtimeout|' r'param1|' r'param2|' r'param3|' r'param4|' r'param5|' r'param6|' r'param7|' r'param8|' r'param9|' r'paramcnt|' r'params|' r'result|' r'timeout' r')\b', Name.Builtin ), ], 'user-variables': [ (r'(?i)[a-z_][a-z0-9_]*', Name.Variable), ], 'numeric-literals': [ (r'(-?)([0-9]+)', bygroups(Operator, Number.Integer)), (r'(?i)\$[0-9a-f]+', Number.Hex), ], 'string-literals': [ (r'(?i)#(?:[0-9]+|\$[0-9a-f]+)', String.Char), (r"'", String.Single, 'in-single-string'), (r'"', String.Double, 'in-double-string'), ], 'in-general-string': [ (r'\\[\\nt]', String.Escape), # Only three escapes are supported. (r'.', String), ], 'in-single-string': [ (r"'", String.Single, '#pop'), include('in-general-string'), ], 'in-double-string': [ (r'"', String.Double, '#pop'), include('in-general-string'), ], 'operators': [ (r'and|not|or|xor', Operator.Word), (r'[!%&*+<=>^~\|\/-]+', Operator), (r'[()]', String.Symbol), ], 'all-whitespace': [ (r'\s+', Text), ], } # Turtle and Tera Term macro files share the same file extension # but each has a recognizable and distinct syntax. def analyse_text(text): if re.search(TeraTermLexer.tokens['commands'][0][0], text): return 0.01 pygments-2.11.2/pygments/lexers/slash.py0000644000175000017500000002044214165547207020174 0ustar carstencarsten""" pygments.lexers.slash ~~~~~~~~~~~~~~~~~~~~~ Lexer for the `Slash `_ programming language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" from pygments.lexer import ExtendedRegexLexer, bygroups, DelegatingLexer from pygments.token import Name, Number, String, Comment, Punctuation, \ Other, Keyword, Operator, Whitespace __all__ = ['SlashLexer'] class SlashLanguageLexer(ExtendedRegexLexer): _nkw = r'(?=[^a-zA-Z_0-9])' def move_state(new_state): return ("#pop", new_state) def right_angle_bracket(lexer, match, ctx): if len(ctx.stack) > 1 and ctx.stack[-2] == "string": ctx.stack.pop() yield match.start(), String.Interpol, '}' ctx.pos = match.end() pass tokens = { "root": [ (r"<%=", Comment.Preproc, move_state("slash")), (r"<%!!", Comment.Preproc, move_state("slash")), (r"<%#.*?%>", Comment.Multiline), (r"<%", Comment.Preproc, move_state("slash")), (r".|\n", Other), ], "string": [ (r"\\", String.Escape, move_state("string_e")), (r"\"", String, move_state("slash")), (r"#\{", String.Interpol, "slash"), (r'.|\n', String), ], "string_e": [ (r'n', String.Escape, move_state("string")), (r't', String.Escape, move_state("string")), (r'r', String.Escape, move_state("string")), (r'e', String.Escape, move_state("string")), (r'x[a-fA-F0-9]{2}', String.Escape, move_state("string")), (r'.', String.Escape, move_state("string")), ], "regexp": [ (r'}[a-z]*', String.Regex, move_state("slash")), (r'\\(.|\n)', String.Regex), (r'{', String.Regex, "regexp_r"), (r'.|\n', String.Regex), ], "regexp_r": [ (r'}[a-z]*', String.Regex, "#pop"), (r'\\(.|\n)', String.Regex), (r'{', String.Regex, "regexp_r"), ], "slash": [ (r"%>", Comment.Preproc, move_state("root")), (r"\"", String, move_state("string")), (r"'[a-zA-Z0-9_]+", String), (r'%r{', String.Regex, move_state("regexp")), (r'/\*.*?\*/', Comment.Multiline), (r"(#|//).*?\n", Comment.Single), (r'-?[0-9]+e[+-]?[0-9]+', Number.Float), (r'-?[0-9]+\.[0-9]+(e[+-]?[0-9]+)?', Number.Float), (r'-?[0-9]+', Number.Integer), (r'nil'+_nkw, Name.Builtin), (r'true'+_nkw, Name.Builtin), (r'false'+_nkw, Name.Builtin), (r'self'+_nkw, Name.Builtin), (r'(class)(\s+)([A-Z][a-zA-Z0-9_\']*)', bygroups(Keyword, Whitespace, Name.Class)), (r'class'+_nkw, Keyword), (r'extends'+_nkw, Keyword), (r'(def)(\s+)(self)(\s*)(\.)(\s*)([a-z_][a-zA-Z0-9_\']*=?|<<|>>|==|<=>|<=|<|>=|>|\+|-(self)?|~(self)?|\*|/|%|^|&&|&|\||\[\]=?)', bygroups(Keyword, Whitespace, Name.Builtin, Whitespace, Punctuation, Whitespace, Name.Function)), (r'(def)(\s+)([a-z_][a-zA-Z0-9_\']*=?|<<|>>|==|<=>|<=|<|>=|>|\+|-(self)?|~(self)?|\*|/|%|^|&&|&|\||\[\]=?)', bygroups(Keyword, Whitespace, Name.Function)), (r'def'+_nkw, Keyword), (r'if'+_nkw, Keyword), (r'elsif'+_nkw, Keyword), (r'else'+_nkw, Keyword), (r'unless'+_nkw, Keyword), (r'for'+_nkw, Keyword), (r'in'+_nkw, Keyword), (r'while'+_nkw, Keyword), (r'until'+_nkw, Keyword), (r'and'+_nkw, Keyword), (r'or'+_nkw, Keyword), (r'not'+_nkw, Keyword), (r'lambda'+_nkw, Keyword), (r'try'+_nkw, Keyword), (r'catch'+_nkw, Keyword), (r'return'+_nkw, Keyword), (r'next'+_nkw, Keyword), (r'last'+_nkw, Keyword), (r'throw'+_nkw, Keyword), (r'use'+_nkw, Keyword), (r'switch'+_nkw, Keyword), (r'\\', Keyword), (r'λ', Keyword), (r'__FILE__'+_nkw, Name.Builtin.Pseudo), (r'__LINE__'+_nkw, Name.Builtin.Pseudo), (r'[A-Z][a-zA-Z0-9_\']*'+_nkw, Name.Constant), (r'[a-z_][a-zA-Z0-9_\']*'+_nkw, Name), (r'@[a-z_][a-zA-Z0-9_\']*'+_nkw, Name.Variable.Instance), (r'@@[a-z_][a-zA-Z0-9_\']*'+_nkw, Name.Variable.Class), (r'\(', Punctuation), (r'\)', Punctuation), (r'\[', Punctuation), (r'\]', Punctuation), (r'\{', Punctuation), (r'\}', right_angle_bracket), (r';', Punctuation), (r',', Punctuation), (r'<<=', Operator), (r'>>=', Operator), (r'<<', 
Operator), (r'>>', Operator), (r'==', Operator), (r'!=', Operator), (r'=>', Operator), (r'=', Operator), (r'<=>', Operator), (r'<=', Operator), (r'>=', Operator), (r'<', Operator), (r'>', Operator), (r'\+\+', Operator), (r'\+=', Operator), (r'-=', Operator), (r'\*\*=', Operator), (r'\*=', Operator), (r'\*\*', Operator), (r'\*', Operator), (r'/=', Operator), (r'\+', Operator), (r'-', Operator), (r'/', Operator), (r'%=', Operator), (r'%', Operator), (r'^=', Operator), (r'&&=', Operator), (r'&=', Operator), (r'&&', Operator), (r'&', Operator), (r'\|\|=', Operator), (r'\|=', Operator), (r'\|\|', Operator), (r'\|', Operator), (r'!', Operator), (r'\.\.\.', Operator), (r'\.\.', Operator), (r'\.', Operator), (r'::', Operator), (r':', Operator), (r'(\s|\n)+', Whitespace), (r'[a-z_][a-zA-Z0-9_\']*', Name.Variable), ], } class SlashLexer(DelegatingLexer): """ Lexer for the Slash programming language. .. versionadded:: 2.4 """ name = 'Slash' aliases = ['slash'] filenames = ['*.sla'] def __init__(self, **options): from pygments.lexers.web import HtmlLexer super().__init__(HtmlLexer, SlashLanguageLexer, **options) pygments-2.11.2/pygments/lexers/_sourcemod_builtins.py0000644000175000017500000006465214165547207023145 0ustar carstencarsten""" pygments.lexers._sourcemod_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This file contains the names of SourceMod functions. It is able to re-generate itself. Do not edit the FUNCTIONS list by hand. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ FUNCTIONS = ( 'OnEntityCreated', 'OnEntityDestroyed', 'OnGetGameDescription', 'OnLevelInit', 'SDKHook', 'SDKHookEx', 'SDKUnhook', 'SDKHooks_TakeDamage', 'SDKHooks_DropWeapon', 'TopMenuHandler', 'CreateTopMenu', 'LoadTopMenuConfig', 'AddToTopMenu', 'GetTopMenuInfoString', 'GetTopMenuObjName', 'RemoveFromTopMenu', 'DisplayTopMenu', 'DisplayTopMenuCategory', 'FindTopMenuCategory', 'SetTopMenuTitleCaching', 'OnAdminMenuCreated', 'OnAdminMenuReady', 'GetAdminTopMenu', 'AddTargetsToMenu', 'AddTargetsToMenu2', 'RedisplayAdminMenu', 'TEHook', 'AddTempEntHook', 'RemoveTempEntHook', 'TE_Start', 'TE_IsValidProp', 'TE_WriteNum', 'TE_ReadNum', 'TE_WriteFloat', 'TE_ReadFloat', 'TE_WriteVector', 'TE_ReadVector', 'TE_WriteAngles', 'TE_WriteFloatArray', 'TE_Send', 'TE_WriteEncodedEnt', 'TE_SendToAll', 'TE_SendToClient', 'CreateKeyValues', 'KvSetString', 'KvSetNum', 'KvSetUInt64', 'KvSetFloat', 'KvSetColor', 'KvSetVector', 'KvGetString', 'KvGetNum', 'KvGetFloat', 'KvGetColor', 'KvGetUInt64', 'KvGetVector', 'KvJumpToKey', 'KvJumpToKeySymbol', 'KvGotoFirstSubKey', 'KvGotoNextKey', 'KvSavePosition', 'KvDeleteKey', 'KvDeleteThis', 'KvGoBack', 'KvRewind', 'KvGetSectionName', 'KvSetSectionName', 'KvGetDataType', 'KeyValuesToFile', 'FileToKeyValues', 'StringToKeyValues', 'KvSetEscapeSequences', 'KvNodesInStack', 'KvCopySubkeys', 'KvFindKeyById', 'KvGetNameSymbol', 'KvGetSectionSymbol', 'TE_SetupSparks', 'TE_SetupSmoke', 'TE_SetupDust', 'TE_SetupMuzzleFlash', 'TE_SetupMetalSparks', 'TE_SetupEnergySplash', 'TE_SetupArmorRicochet', 'TE_SetupGlowSprite', 'TE_SetupExplosion', 'TE_SetupBloodSprite', 'TE_SetupBeamRingPoint', 'TE_SetupBeamPoints', 'TE_SetupBeamLaser', 'TE_SetupBeamRing', 'TE_SetupBeamFollow', 'HookEvent', 'HookEventEx', 'UnhookEvent', 'CreateEvent', 'FireEvent', 'CancelCreatedEvent', 'GetEventBool', 'SetEventBool', 'GetEventInt', 'SetEventInt', 'GetEventFloat', 'SetEventFloat', 'GetEventString', 'SetEventString', 'GetEventName', 'SetEventBroadcast', 'GetUserMessageType', 
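# How this table is consumed: the SourcePawn lexer in
# pygments.lexers.pawn loads FUNCTIONS when SourceMod highlighting is
# enabled and retags matching plain names as builtins.  A minimal
# sketch -- the set name here is illustrative, not part of pygments:
#     from pygments.lexers._sourcemod_builtins import FUNCTIONS
#     _SM_BUILTINS = frozenset(FUNCTIONS)    # O(1) membership tests
#     token = Name.Builtin if value in _SM_BUILTINS else token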
'GetUserMessageId', 'GetUserMessageName', 'StartMessage', 'StartMessageEx', 'EndMessage', 'MsgHook', 'MsgPostHook', 'HookUserMessage', 'UnhookUserMessage', 'StartMessageAll', 'StartMessageOne', 'InactivateClient', 'ReconnectClient', 'GetMaxEntities', 'GetEntityCount', 'IsValidEntity', 'IsValidEdict', 'IsEntNetworkable', 'CreateEdict', 'RemoveEdict', 'GetEdictFlags', 'SetEdictFlags', 'GetEdictClassname', 'GetEntityNetClass', 'ChangeEdictState', 'GetEntData', 'SetEntData', 'GetEntDataFloat', 'SetEntDataFloat', 'GetEntDataEnt2', 'SetEntDataEnt2', 'GetEntDataVector', 'SetEntDataVector', 'GetEntDataString', 'SetEntDataString', 'FindSendPropOffs', 'FindSendPropInfo', 'FindDataMapOffs', 'FindDataMapInfo', 'GetEntSendPropOffs', 'GetEntProp', 'SetEntProp', 'GetEntPropFloat', 'SetEntPropFloat', 'GetEntPropEnt', 'SetEntPropEnt', 'GetEntPropVector', 'SetEntPropVector', 'GetEntPropString', 'SetEntPropString', 'GetEntPropArraySize', 'GetEntDataArray', 'SetEntDataArray', 'GetEntityAddress', 'GetEntityClassname', 'float', 'FloatMul', 'FloatDiv', 'FloatAdd', 'FloatSub', 'FloatFraction', 'RoundToZero', 'RoundToCeil', 'RoundToFloor', 'RoundToNearest', 'FloatCompare', 'SquareRoot', 'Pow', 'Exponential', 'Logarithm', 'Sine', 'Cosine', 'Tangent', 'FloatAbs', 'ArcTangent', 'ArcCosine', 'ArcSine', 'ArcTangent2', 'RoundFloat', 'operator%', 'DegToRad', 'RadToDeg', 'GetURandomInt', 'GetURandomFloat', 'SetURandomSeed', 'SetURandomSeedSimple', 'RemovePlayerItem', 'GivePlayerItem', 'GetPlayerWeaponSlot', 'IgniteEntity', 'ExtinguishEntity', 'TeleportEntity', 'ForcePlayerSuicide', 'SlapPlayer', 'FindEntityByClassname', 'GetClientEyeAngles', 'CreateEntityByName', 'DispatchSpawn', 'DispatchKeyValue', 'DispatchKeyValueFloat', 'DispatchKeyValueVector', 'GetClientAimTarget', 'GetTeamCount', 'GetTeamName', 'GetTeamScore', 'SetTeamScore', 'GetTeamClientCount', 'SetEntityModel', 'GetPlayerDecalFile', 'GetPlayerJingleFile', 'GetServerNetStats', 'EquipPlayerWeapon', 'ActivateEntity', 'SetClientInfo', 'GivePlayerAmmo', 'SetClientListeningFlags', 'GetClientListeningFlags', 'SetListenOverride', 'GetListenOverride', 'IsClientMuted', 'TR_GetPointContents', 'TR_GetPointContentsEnt', 'TR_TraceRay', 'TR_TraceHull', 'TR_TraceRayFilter', 'TR_TraceHullFilter', 'TR_TraceRayEx', 'TR_TraceHullEx', 'TR_TraceRayFilterEx', 'TR_TraceHullFilterEx', 'TR_GetFraction', 'TR_GetEndPosition', 'TR_GetEntityIndex', 'TR_DidHit', 'TR_GetHitGroup', 'TR_GetPlaneNormal', 'TR_PointOutsideWorld', 'SortIntegers', 'SortFloats', 'SortStrings', 'SortFunc1D', 'SortCustom1D', 'SortCustom2D', 'SortADTArray', 'SortFuncADTArray', 'SortADTArrayCustom', 'CompileRegex', 'MatchRegex', 'GetRegexSubString', 'SimpleRegexMatch', 'TF2_GetPlayerClass', 'TF2_SetPlayerClass', 'TF2_RemoveWeaponSlot', 'TF2_RemoveAllWeapons', 'TF2_IsPlayerInCondition', 'TF2_GetObjectType', 'TF2_GetObjectMode', 'NominateMap', 'RemoveNominationByMap', 'RemoveNominationByOwner', 'GetExcludeMapList', 'GetNominatedMapList', 'CanMapChooserStartVote', 'InitiateMapChooserVote', 'HasEndOfMapVoteFinished', 'EndOfMapVoteEnabled', 'OnNominationRemoved', 'OnMapVoteStarted', 'CreateTimer', 'KillTimer', 'TriggerTimer', 'GetTickedTime', 'GetMapTimeLeft', 'GetMapTimeLimit', 'ExtendMapTimeLimit', 'GetTickInterval', 'OnMapTimeLeftChanged', 'IsServerProcessing', 'CreateDataTimer', 'ByteCountToCells', 'CreateArray', 'ClearArray', 'CloneArray', 'ResizeArray', 'GetArraySize', 'PushArrayCell', 'PushArrayString', 'PushArrayArray', 'GetArrayCell', 'GetArrayString', 'GetArrayArray', 'SetArrayCell', 'SetArrayString', 
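# An alternative consumption pattern, used by other lexers in this
# archive (see pygments/lexers/dsls.py below), folds such a name table
# into a single regex rule at class-definition time:
#     from pygments.lexer import words
#     (words(FUNCTIONS, suffix=r'\b'), Name.Builtin)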
'SetArrayArray', 'ShiftArrayUp', 'RemoveFromArray', 'SwapArrayItems', 'FindStringInArray', 'FindValueInArray', 'ProcessTargetString', 'ReplyToTargetError', 'MultiTargetFilter', 'AddMultiTargetFilter', 'RemoveMultiTargetFilter', 'OnBanClient', 'OnBanIdentity', 'OnRemoveBan', 'BanClient', 'BanIdentity', 'RemoveBan', 'CreateTrie', 'SetTrieValue', 'SetTrieArray', 'SetTrieString', 'GetTrieValue', 'GetTrieArray', 'GetTrieString', 'RemoveFromTrie', 'ClearTrie', 'GetTrieSize', 'GetFunctionByName', 'CreateGlobalForward', 'CreateForward', 'GetForwardFunctionCount', 'AddToForward', 'RemoveFromForward', 'RemoveAllFromForward', 'Call_StartForward', 'Call_StartFunction', 'Call_PushCell', 'Call_PushCellRef', 'Call_PushFloat', 'Call_PushFloatRef', 'Call_PushArray', 'Call_PushArrayEx', 'Call_PushString', 'Call_PushStringEx', 'Call_Finish', 'Call_Cancel', 'NativeCall', 'CreateNative', 'ThrowNativeError', 'GetNativeStringLength', 'GetNativeString', 'SetNativeString', 'GetNativeCell', 'GetNativeCellRef', 'SetNativeCellRef', 'GetNativeArray', 'SetNativeArray', 'FormatNativeString', 'RequestFrameCallback', 'RequestFrame', 'OnRebuildAdminCache', 'DumpAdminCache', 'AddCommandOverride', 'GetCommandOverride', 'UnsetCommandOverride', 'CreateAdmGroup', 'FindAdmGroup', 'SetAdmGroupAddFlag', 'GetAdmGroupAddFlag', 'GetAdmGroupAddFlags', 'SetAdmGroupImmuneFrom', 'GetAdmGroupImmuneCount', 'GetAdmGroupImmuneFrom', 'AddAdmGroupCmdOverride', 'GetAdmGroupCmdOverride', 'RegisterAuthIdentType', 'CreateAdmin', 'GetAdminUsername', 'BindAdminIdentity', 'SetAdminFlag', 'GetAdminFlag', 'GetAdminFlags', 'AdminInheritGroup', 'GetAdminGroupCount', 'GetAdminGroup', 'SetAdminPassword', 'GetAdminPassword', 'FindAdminByIdentity', 'RemoveAdmin', 'FlagBitsToBitArray', 'FlagBitArrayToBits', 'FlagArrayToBits', 'FlagBitsToArray', 'FindFlagByName', 'FindFlagByChar', 'FindFlagChar', 'ReadFlagString', 'CanAdminTarget', 'CreateAuthMethod', 'SetAdmGroupImmunityLevel', 'GetAdmGroupImmunityLevel', 'SetAdminImmunityLevel', 'GetAdminImmunityLevel', 'FlagToBit', 'BitToFlag', 'ServerCommand', 'ServerCommandEx', 'InsertServerCommand', 'ServerExecute', 'ClientCommand', 'FakeClientCommand', 'FakeClientCommandEx', 'PrintToServer', 'PrintToConsole', 'ReplyToCommand', 'GetCmdReplySource', 'SetCmdReplySource', 'IsChatTrigger', 'ShowActivity2', 'ShowActivity', 'ShowActivityEx', 'FormatActivitySource', 'SrvCmd', 'RegServerCmd', 'ConCmd', 'RegConsoleCmd', 'RegAdminCmd', 'GetCmdArgs', 'GetCmdArg', 'GetCmdArgString', 'CreateConVar', 'FindConVar', 'ConVarChanged', 'HookConVarChange', 'UnhookConVarChange', 'GetConVarBool', 'SetConVarBool', 'GetConVarInt', 'SetConVarInt', 'GetConVarFloat', 'SetConVarFloat', 'GetConVarString', 'SetConVarString', 'ResetConVar', 'GetConVarDefault', 'GetConVarFlags', 'SetConVarFlags', 'GetConVarBounds', 'SetConVarBounds', 'GetConVarName', 'QueryClientConVar', 'GetCommandIterator', 'ReadCommandIterator', 'CheckCommandAccess', 'CheckAccess', 'IsValidConVarChar', 'GetCommandFlags', 'SetCommandFlags', 'FindFirstConCommand', 'FindNextConCommand', 'SendConVarValue', 'AddServerTag', 'RemoveServerTag', 'CommandListener', 'AddCommandListener', 'RemoveCommandListener', 'CommandExists', 'OnClientSayCommand', 'OnClientSayCommand_Post', 'TF2_IgnitePlayer', 'TF2_RespawnPlayer', 'TF2_RegeneratePlayer', 'TF2_AddCondition', 'TF2_RemoveCondition', 'TF2_SetPlayerPowerPlay', 'TF2_DisguisePlayer', 'TF2_RemovePlayerDisguise', 'TF2_StunPlayer', 'TF2_MakeBleed', 'TF2_GetClass', 'TF2_CalcIsAttackCritical', 'TF2_OnIsHolidayActive', 'TF2_IsHolidayActive', 
'TF2_IsPlayerInDuel', 'TF2_RemoveWearable', 'TF2_OnConditionAdded', 'TF2_OnConditionRemoved', 'TF2_OnWaitingForPlayersStart', 'TF2_OnWaitingForPlayersEnd', 'TF2_OnPlayerTeleport', 'SQL_Connect', 'SQL_DefConnect', 'SQL_ConnectCustom', 'SQLite_UseDatabase', 'SQL_CheckConfig', 'SQL_GetDriver', 'SQL_ReadDriver', 'SQL_GetDriverIdent', 'SQL_GetDriverProduct', 'SQL_SetCharset', 'SQL_GetAffectedRows', 'SQL_GetInsertId', 'SQL_GetError', 'SQL_EscapeString', 'SQL_QuoteString', 'SQL_FastQuery', 'SQL_Query', 'SQL_PrepareQuery', 'SQL_FetchMoreResults', 'SQL_HasResultSet', 'SQL_GetRowCount', 'SQL_GetFieldCount', 'SQL_FieldNumToName', 'SQL_FieldNameToNum', 'SQL_FetchRow', 'SQL_MoreRows', 'SQL_Rewind', 'SQL_FetchString', 'SQL_FetchFloat', 'SQL_FetchInt', 'SQL_IsFieldNull', 'SQL_FetchSize', 'SQL_BindParamInt', 'SQL_BindParamFloat', 'SQL_BindParamString', 'SQL_Execute', 'SQL_LockDatabase', 'SQL_UnlockDatabase', 'SQLTCallback', 'SQL_IsSameConnection', 'SQL_TConnect', 'SQL_TQuery', 'SQL_CreateTransaction', 'SQL_AddQuery', 'SQLTxnSuccess', 'SQLTxnFailure', 'SQL_ExecuteTransaction', 'CloseHandle', 'CloneHandle', 'MenuHandler', 'CreateMenu', 'DisplayMenu', 'DisplayMenuAtItem', 'AddMenuItem', 'InsertMenuItem', 'RemoveMenuItem', 'RemoveAllMenuItems', 'GetMenuItem', 'GetMenuSelectionPosition', 'GetMenuItemCount', 'SetMenuPagination', 'GetMenuPagination', 'GetMenuStyle', 'SetMenuTitle', 'GetMenuTitle', 'CreatePanelFromMenu', 'GetMenuExitButton', 'SetMenuExitButton', 'GetMenuExitBackButton', 'SetMenuExitBackButton', 'SetMenuNoVoteButton', 'CancelMenu', 'GetMenuOptionFlags', 'SetMenuOptionFlags', 'IsVoteInProgress', 'CancelVote', 'VoteMenu', 'VoteMenuToAll', 'VoteHandler', 'SetVoteResultCallback', 'CheckVoteDelay', 'IsClientInVotePool', 'RedrawClientVoteMenu', 'GetMenuStyleHandle', 'CreatePanel', 'CreateMenuEx', 'GetClientMenu', 'CancelClientMenu', 'GetMaxPageItems', 'GetPanelStyle', 'SetPanelTitle', 'DrawPanelItem', 'DrawPanelText', 'CanPanelDrawFlags', 'SetPanelKeys', 'SendPanelToClient', 'GetPanelTextRemaining', 'GetPanelCurrentKey', 'SetPanelCurrentKey', 'RedrawMenuItem', 'InternalShowMenu', 'GetMenuVoteInfo', 'IsNewVoteAllowed', 'PrefetchSound', 'EmitAmbientSound', 'FadeClientVolume', 'StopSound', 'EmitSound', 'EmitSentence', 'GetDistGainFromSoundLevel', 'AmbientSHook', 'NormalSHook', 'AddAmbientSoundHook', 'AddNormalSoundHook', 'RemoveAmbientSoundHook', 'RemoveNormalSoundHook', 'EmitSoundToClient', 'EmitSoundToAll', 'ATTN_TO_SNDLEVEL', 'GetGameSoundParams', 'EmitGameSound', 'EmitAmbientGameSound', 'EmitGameSoundToClient', 'EmitGameSoundToAll', 'PrecacheScriptSound', 'strlen', 'StrContains', 'strcmp', 'strncmp', 'StrEqual', 'strcopy', 'Format', 'FormatEx', 'VFormat', 'StringToInt', 'StringToIntEx', 'IntToString', 'StringToFloat', 'StringToFloatEx', 'FloatToString', 'BreakString', 'TrimString', 'SplitString', 'ReplaceString', 'ReplaceStringEx', 'GetCharBytes', 'IsCharAlpha', 'IsCharNumeric', 'IsCharSpace', 'IsCharMB', 'IsCharUpper', 'IsCharLower', 'StripQuotes', 'CharToUpper', 'CharToLower', 'FindCharInString', 'StrCat', 'ExplodeString', 'ImplodeStrings', 'GetVectorLength', 'GetVectorDistance', 'GetVectorDotProduct', 'GetVectorCrossProduct', 'NormalizeVector', 'GetAngleVectors', 'GetVectorAngles', 'GetVectorVectors', 'AddVectors', 'SubtractVectors', 'ScaleVector', 'NegateVector', 'MakeVectorFromPoints', 'BaseComm_IsClientGagged', 'BaseComm_IsClientMuted', 'BaseComm_SetClientGag', 'BaseComm_SetClientMute', 'FormatUserLogText', 'FindPluginByFile', 'FindTarget', 'AcceptEntityInput', 'SetVariantBool', 
'SetVariantString', 'SetVariantInt', 'SetVariantFloat', 'SetVariantVector3D', 'SetVariantPosVector3D', 'SetVariantColor', 'SetVariantEntity', 'GameRules_GetProp', 'GameRules_SetProp', 'GameRules_GetPropFloat', 'GameRules_SetPropFloat', 'GameRules_GetPropEnt', 'GameRules_SetPropEnt', 'GameRules_GetPropVector', 'GameRules_SetPropVector', 'GameRules_GetPropString', 'GameRules_SetPropString', 'GameRules_GetRoundState', 'OnClientConnect', 'OnClientConnected', 'OnClientPutInServer', 'OnClientDisconnect', 'OnClientDisconnect_Post', 'OnClientCommand', 'OnClientSettingsChanged', 'OnClientAuthorized', 'OnClientPreAdminCheck', 'OnClientPostAdminFilter', 'OnClientPostAdminCheck', 'GetMaxClients', 'GetMaxHumanPlayers', 'GetClientCount', 'GetClientName', 'GetClientIP', 'GetClientAuthString', 'GetClientAuthId', 'GetSteamAccountID', 'GetClientUserId', 'IsClientConnected', 'IsClientInGame', 'IsClientInKickQueue', 'IsClientAuthorized', 'IsFakeClient', 'IsClientSourceTV', 'IsClientReplay', 'IsClientObserver', 'IsPlayerAlive', 'GetClientInfo', 'GetClientTeam', 'SetUserAdmin', 'GetUserAdmin', 'AddUserFlags', 'RemoveUserFlags', 'SetUserFlagBits', 'GetUserFlagBits', 'CanUserTarget', 'RunAdminCacheChecks', 'NotifyPostAdminCheck', 'CreateFakeClient', 'SetFakeClientConVar', 'GetClientHealth', 'GetClientModel', 'GetClientWeapon', 'GetClientMaxs', 'GetClientMins', 'GetClientAbsAngles', 'GetClientAbsOrigin', 'GetClientArmor', 'GetClientDeaths', 'GetClientFrags', 'GetClientDataRate', 'IsClientTimingOut', 'GetClientTime', 'GetClientLatency', 'GetClientAvgLatency', 'GetClientAvgLoss', 'GetClientAvgChoke', 'GetClientAvgData', 'GetClientAvgPackets', 'GetClientOfUserId', 'KickClient', 'KickClientEx', 'ChangeClientTeam', 'GetClientSerial', 'GetClientFromSerial', 'FindStringTable', 'GetNumStringTables', 'GetStringTableNumStrings', 'GetStringTableMaxStrings', 'GetStringTableName', 'FindStringIndex', 'ReadStringTable', 'GetStringTableDataLength', 'GetStringTableData', 'SetStringTableData', 'AddToStringTable', 'LockStringTables', 'AddFileToDownloadsTable', 'GetEntityFlags', 'SetEntityFlags', 'GetEntityMoveType', 'SetEntityMoveType', 'GetEntityRenderMode', 'SetEntityRenderMode', 'GetEntityRenderFx', 'SetEntityRenderFx', 'SetEntityRenderColor', 'GetEntityGravity', 'SetEntityGravity', 'SetEntityHealth', 'GetClientButtons', 'EntityOutput', 'HookEntityOutput', 'UnhookEntityOutput', 'HookSingleEntityOutput', 'UnhookSingleEntityOutput', 'SMC_CreateParser', 'SMC_ParseFile', 'SMC_GetErrorString', 'SMC_ParseStart', 'SMC_SetParseStart', 'SMC_ParseEnd', 'SMC_SetParseEnd', 'SMC_NewSection', 'SMC_KeyValue', 'SMC_EndSection', 'SMC_SetReaders', 'SMC_RawLine', 'SMC_SetRawLine', 'BfWriteBool', 'BfWriteByte', 'BfWriteChar', 'BfWriteShort', 'BfWriteWord', 'BfWriteNum', 'BfWriteFloat', 'BfWriteString', 'BfWriteEntity', 'BfWriteAngle', 'BfWriteCoord', 'BfWriteVecCoord', 'BfWriteVecNormal', 'BfWriteAngles', 'BfReadBool', 'BfReadByte', 'BfReadChar', 'BfReadShort', 'BfReadWord', 'BfReadNum', 'BfReadFloat', 'BfReadString', 'BfReadEntity', 'BfReadAngle', 'BfReadCoord', 'BfReadVecCoord', 'BfReadVecNormal', 'BfReadAngles', 'BfGetNumBytesLeft', 'CreateProfiler', 'StartProfiling', 'StopProfiling', 'GetProfilerTime', 'OnPluginStart', 'AskPluginLoad2', 'OnPluginEnd', 'OnPluginPauseChange', 'OnGameFrame', 'OnMapStart', 'OnMapEnd', 'OnConfigsExecuted', 'OnAutoConfigsBuffered', 'OnAllPluginsLoaded', 'GetMyHandle', 'GetPluginIterator', 'MorePlugins', 'ReadPlugin', 'GetPluginStatus', 'GetPluginFilename', 'IsPluginDebugging', 'GetPluginInfo', 'FindPluginByNumber', 
'SetFailState', 'ThrowError', 'GetTime', 'FormatTime', 'LoadGameConfigFile', 'GameConfGetOffset', 'GameConfGetKeyValue', 'GameConfGetAddress', 'GetSysTickCount', 'AutoExecConfig', 'RegPluginLibrary', 'LibraryExists', 'GetExtensionFileStatus', 'OnLibraryAdded', 'OnLibraryRemoved', 'ReadMapList', 'SetMapListCompatBind', 'OnClientFloodCheck', 'OnClientFloodResult', 'CanTestFeatures', 'GetFeatureStatus', 'RequireFeature', 'LoadFromAddress', 'StoreToAddress', 'CreateStack', 'PushStackCell', 'PushStackString', 'PushStackArray', 'PopStackCell', 'PopStackString', 'PopStackArray', 'IsStackEmpty', 'PopStack', 'OnPlayerRunCmd', 'BuildPath', 'OpenDirectory', 'ReadDirEntry', 'OpenFile', 'DeleteFile', 'ReadFileLine', 'ReadFile', 'ReadFileString', 'WriteFile', 'WriteFileString', 'WriteFileLine', 'ReadFileCell', 'WriteFileCell', 'IsEndOfFile', 'FileSeek', 'FilePosition', 'FileExists', 'RenameFile', 'DirExists', 'FileSize', 'FlushFile', 'RemoveDir', 'CreateDirectory', 'GetFileTime', 'LogToOpenFile', 'LogToOpenFileEx', 'PbReadInt', 'PbReadFloat', 'PbReadBool', 'PbReadString', 'PbReadColor', 'PbReadAngle', 'PbReadVector', 'PbReadVector2D', 'PbGetRepeatedFieldCount', 'PbSetInt', 'PbSetFloat', 'PbSetBool', 'PbSetString', 'PbSetColor', 'PbSetAngle', 'PbSetVector', 'PbSetVector2D', 'PbAddInt', 'PbAddFloat', 'PbAddBool', 'PbAddString', 'PbAddColor', 'PbAddAngle', 'PbAddVector', 'PbAddVector2D', 'PbRemoveRepeatedFieldValue', 'PbReadMessage', 'PbReadRepeatedMessage', 'PbAddMessage', 'SetNextMap', 'GetNextMap', 'ForceChangeLevel', 'GetMapHistorySize', 'GetMapHistory', 'GeoipCode2', 'GeoipCode3', 'GeoipCountry', 'MarkNativeAsOptional', 'RegClientCookie', 'FindClientCookie', 'SetClientCookie', 'GetClientCookie', 'SetAuthIdCookie', 'AreClientCookiesCached', 'OnClientCookiesCached', 'CookieMenuHandler', 'SetCookiePrefabMenu', 'SetCookieMenuItem', 'ShowCookieMenu', 'GetCookieIterator', 'ReadCookieIterator', 'GetCookieAccess', 'GetClientCookieTime', 'LoadTranslations', 'SetGlobalTransTarget', 'GetClientLanguage', 'GetServerLanguage', 'GetLanguageCount', 'GetLanguageInfo', 'SetClientLanguage', 'GetLanguageByCode', 'GetLanguageByName', 'CS_OnBuyCommand', 'CS_OnCSWeaponDrop', 'CS_OnGetWeaponPrice', 'CS_OnTerminateRound', 'CS_RespawnPlayer', 'CS_SwitchTeam', 'CS_DropWeapon', 'CS_TerminateRound', 'CS_GetTranslatedWeaponAlias', 'CS_GetWeaponPrice', 'CS_GetClientClanTag', 'CS_SetClientClanTag', 'CS_GetTeamScore', 'CS_SetTeamScore', 'CS_GetMVPCount', 'CS_SetMVPCount', 'CS_GetClientContributionScore', 'CS_SetClientContributionScore', 'CS_GetClientAssists', 'CS_SetClientAssists', 'CS_AliasToWeaponID', 'CS_WeaponIDToAlias', 'CS_IsValidWeaponID', 'CS_UpdateClientModel', 'LogToGame', 'SetRandomSeed', 'GetRandomFloat', 'GetRandomInt', 'IsMapValid', 'IsDedicatedServer', 'GetEngineTime', 'GetGameTime', 'GetGameTickCount', 'GetGameDescription', 'GetGameFolderName', 'GetCurrentMap', 'PrecacheModel', 'PrecacheSentenceFile', 'PrecacheDecal', 'PrecacheGeneric', 'IsModelPrecached', 'IsDecalPrecached', 'IsGenericPrecached', 'PrecacheSound', 'IsSoundPrecached', 'CreateDialog', 'GetEngineVersion', 'PrintToChat', 'PrintToChatAll', 'PrintCenterText', 'PrintCenterTextAll', 'PrintHintText', 'PrintHintTextToAll', 'ShowVGUIPanel', 'CreateHudSynchronizer', 'SetHudTextParams', 'SetHudTextParamsEx', 'ShowSyncHudText', 'ClearSyncHud', 'ShowHudText', 'ShowMOTDPanel', 'DisplayAskConnectBox', 'EntIndexToEntRef', 'EntRefToEntIndex', 'MakeCompatEntRef', 'SetClientViewEntity', 'SetLightStyle', 'GetClientEyePosition', 'CreateDataPack', 'WritePackCell', 
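# The tuple closes just below, followed by an "if __name__ ==
# '__main__':" block that makes this module self-regenerating: it
# scrapes the function index from docs.sourcemod.net, then rewrites this
# very file, keeping everything before 'FUNCTIONS = (' as the header and
# everything from the __main__ guard on as the footer -- the same
# header/footer splice used to regenerate pygments/lexers/_mapping.py
# earlier in this archive.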
'WritePackFloat', 'WritePackString', 'ReadPackCell', 'ReadPackFloat', 'ReadPackString', 'ResetPack', 'GetPackPosition', 'SetPackPosition', 'IsPackReadable', 'LogMessage', 'LogToFile', 'LogToFileEx', 'LogAction', 'LogError', 'OnLogAction', 'GameLogHook', 'AddGameLogHook', 'RemoveGameLogHook', 'FindTeamByName', 'StartPrepSDKCall', 'PrepSDKCall_SetVirtual', 'PrepSDKCall_SetSignature', 'PrepSDKCall_SetAddress', 'PrepSDKCall_SetFromConf', 'PrepSDKCall_SetReturnInfo', 'PrepSDKCall_AddParameter', 'EndPrepSDKCall', 'SDKCall', 'GetPlayerResourceEntity', ) if __name__ == '__main__': # pragma: no cover import re import sys try: from urllib import FancyURLopener except ImportError: from urllib.request import FancyURLopener from pygments.util import format_lines # urllib ends up wanting to import a module called 'math' -- if # pygments/lexers is in the path, this ends badly. for i in range(len(sys.path)-1, -1, -1): if sys.path[i].endswith('/lexers'): del sys.path[i] class Opener(FancyURLopener): version = 'Mozilla/5.0 (Pygments Sourcemod Builtins Update)' opener = Opener() def get_version(): f = opener.open('http://docs.sourcemod.net/api/index.php') r = re.compile(r'SourceMod v\.([\d\.]+(?:-\w+)?)') for line in f: m = r.search(line) if m is not None: return m.groups()[0] raise ValueError('No version in api docs') def get_sm_functions(): f = opener.open('http://docs.sourcemod.net/api/SMfuncs.js') r = re.compile(r'SMfunctions\[\d+\] = Array \("(?:public )?([^,]+)",".+"\);') functions = [] for line in f: m = r.match(line) if m is not None: functions.append(m.groups()[0]) return functions def regenerate(filename, natives): with open(filename) as fp: content = fp.read() header = content[:content.find('FUNCTIONS = (')] footer = content[content.find("if __name__ == '__main__':")-1:] with open(filename, 'w') as fp: fp.write(header) fp.write(format_lines('FUNCTIONS', natives)) fp.write(footer) def run(): version = get_version() print('> Downloading function index for SourceMod %s' % version) functions = get_sm_functions() print('> %d functions found:' % len(functions)) functionlist = [] for full_function_name in functions: print('>> %s' % full_function_name) functionlist.append(full_function_name) regenerate(__file__, functionlist) run() pygments-2.11.2/pygments/lexers/dsls.py0000644000175000017500000010671114165547207020033 0ustar carstencarsten""" pygments.lexers.dsls ~~~~~~~~~~~~~~~~~~~~ Lexers for various domain-specific languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import ExtendedRegexLexer, RegexLexer, bygroups, words, \ include, default, this, using, combined from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['ProtoBufLexer', 'ZeekLexer', 'PuppetLexer', 'RslLexer', 'MscgenLexer', 'VGLLexer', 'AlloyLexer', 'PanLexer', 'CrmshLexer', 'ThriftLexer', 'FlatlineLexer', 'SnowballLexer'] class ProtoBufLexer(RegexLexer): """ Lexer for `Protocol Buffer `_ definition files. .. 
versionadded:: 1.4 """ name = 'Protocol Buffer' aliases = ['protobuf', 'proto'] filenames = ['*.proto'] tokens = { 'root': [ (r'[ \t]+', Whitespace), (r'[,;{}\[\]()<>]', Punctuation), (r'/(\\\n)?/(\n|(.|\n)*?[^\\]\n)', Comment.Single), (r'/(\\\n)?\*(.|\n)*?\*(\\\n)?/', Comment.Multiline), (words(( 'import', 'option', 'optional', 'required', 'repeated', 'reserved', 'default', 'packed', 'ctype', 'extensions', 'to', 'max', 'rpc', 'returns', 'oneof', 'syntax'), prefix=r'\b', suffix=r'\b'), Keyword), (words(( 'int32', 'int64', 'uint32', 'uint64', 'sint32', 'sint64', 'fixed32', 'fixed64', 'sfixed32', 'sfixed64', 'float', 'double', 'bool', 'string', 'bytes'), suffix=r'\b'), Keyword.Type), (r'(true|false)\b', Keyword.Constant), (r'(package)(\s+)', bygroups(Keyword.Namespace, Whitespace), 'package'), (r'(message|extend)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'message'), (r'(enum|group|service)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'type'), (r'\".*?\"', String), (r'\'.*?\'', String), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*', Number.Float), (r'(\d+\.\d*|\.\d+|\d+[fF])[fF]?', Number.Float), (r'(\-?(inf|nan))\b', Number.Float), (r'0x[0-9a-fA-F]+[LlUu]*', Number.Hex), (r'0[0-7]+[LlUu]*', Number.Oct), (r'\d+[LlUu]*', Number.Integer), (r'[+-=]', Operator), (r'([a-zA-Z_][\w.]*)([ \t]*)(=)', bygroups(Name.Attribute, Whitespace, Operator)), (r'[a-zA-Z_][\w.]*', Name), ], 'package': [ (r'[a-zA-Z_]\w*', Name.Namespace, '#pop'), default('#pop'), ], 'message': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop'), default('#pop'), ], 'type': [ (r'[a-zA-Z_]\w*', Name, '#pop'), default('#pop'), ], } class ThriftLexer(RegexLexer): """ For `Thrift `__ interface definitions. .. versionadded:: 2.1 """ name = 'Thrift' aliases = ['thrift'] filenames = ['*.thrift'] mimetypes = ['application/x-thrift'] tokens = { 'root': [ include('whitespace'), include('comments'), (r'"', String.Double, combined('stringescape', 'dqs')), (r'\'', String.Single, combined('stringescape', 'sqs')), (r'(namespace)(\s+)', bygroups(Keyword.Namespace, Whitespace), 'namespace'), (r'(enum|union|struct|service|exception)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'class'), (r'((?:(?:[^\W\d]|\$)[\w.\[\]$<>]*\s+)+?)' # return arguments r'((?:[^\W\d]|\$)[\w$]*)' # method name r'(\s*)(\()', # signature start bygroups(using(this), Name.Function, Whitespace, Operator)), include('keywords'), include('numbers'), (r'[&=]', Operator), (r'[:;,{}()<>\[\]]', Punctuation), (r'[a-zA-Z_](\.\w|\w)*', Name), ], 'whitespace': [ (r'\n', Whitespace), (r'\s+', Whitespace), ], 'comments': [ (r'#.*$', Comment), (r'//.*?\n', Comment), (r'/\*[\w\W]*?\*/', Comment.Multiline), ], 'stringescape': [ (r'\\([\\nrt"\'])', String.Escape), ], 'dqs': [ (r'"', String.Double, '#pop'), (r'[^\\"\n]+', String.Double), ], 'sqs': [ (r"'", String.Single, '#pop'), (r'[^\\\'\n]+', String.Single), ], 'namespace': [ (r'[a-z*](\.\w|\w)*', Name.Namespace, '#pop'), default('#pop'), ], 'class': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop'), default('#pop'), ], 'keywords': [ (r'(async|oneway|extends|throws|required|optional)\b', Keyword), (r'(true|false)\b', Keyword.Constant), (r'(const|typedef)\b', Keyword.Declaration), (words(( 'cpp_namespace', 'cpp_include', 'cpp_type', 'java_package', 'cocoa_prefix', 'csharp_namespace', 'delphi_namespace', 'php_namespace', 'py_module', 'perl_package', 'ruby_namespace', 'smalltalk_category', 'smalltalk_prefix', 'xsd_all', 'xsd_optional', 'xsd_nillable', 'xsd_namespace', 'xsd_attrs', 'include'), suffix=r'\b'), Keyword.Namespace), (words(( 'void', 'bool', 
'byte', 'i16', 'i32', 'i64', 'double', 'string', 'binary', 'map', 'list', 'set', 'slist', 'senum'), suffix=r'\b'), Keyword.Type), (words(( 'BEGIN', 'END', '__CLASS__', '__DIR__', '__FILE__', '__FUNCTION__', '__LINE__', '__METHOD__', '__NAMESPACE__', 'abstract', 'alias', 'and', 'args', 'as', 'assert', 'begin', 'break', 'case', 'catch', 'class', 'clone', 'continue', 'declare', 'def', 'default', 'del', 'delete', 'do', 'dynamic', 'elif', 'else', 'elseif', 'elsif', 'end', 'enddeclare', 'endfor', 'endforeach', 'endif', 'endswitch', 'endwhile', 'ensure', 'except', 'exec', 'finally', 'float', 'for', 'foreach', 'function', 'global', 'goto', 'if', 'implements', 'import', 'in', 'inline', 'instanceof', 'interface', 'is', 'lambda', 'module', 'native', 'new', 'next', 'nil', 'not', 'or', 'pass', 'public', 'print', 'private', 'protected', 'raise', 'redo', 'rescue', 'retry', 'register', 'return', 'self', 'sizeof', 'static', 'super', 'switch', 'synchronized', 'then', 'this', 'throw', 'transient', 'try', 'undef', 'unless', 'unsigned', 'until', 'use', 'var', 'virtual', 'volatile', 'when', 'while', 'with', 'xor', 'yield'), prefix=r'\b', suffix=r'\b'), Keyword.Reserved), ], 'numbers': [ (r'[+-]?(\d+\.\d+([eE][+-]?\d+)?|\.?\d+[eE][+-]?\d+)', Number.Float), (r'[+-]?0x[0-9A-Fa-f]+', Number.Hex), (r'[+-]?[0-9]+', Number.Integer), ], } class ZeekLexer(RegexLexer): """ For `Zeek `_ scripts. .. versionadded:: 2.5 """ name = 'Zeek' aliases = ['zeek', 'bro'] filenames = ['*.zeek', '*.bro'] _hex = r'[0-9a-fA-F]' _float = r'((\d*\.?\d+)|(\d+\.?\d*))([eE][-+]?\d+)?' _h = r'[A-Za-z0-9][-A-Za-z0-9]*' tokens = { 'root': [ include('whitespace'), include('comments'), include('directives'), include('attributes'), include('types'), include('keywords'), include('literals'), include('operators'), include('punctuation'), (r'((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(?=\s*\()', Name.Function), include('identifiers'), ], 'whitespace': [ (r'\n', Whitespace), (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(Text, Whitespace)), ], 'comments': [ (r'#.*$', Comment), ], 'directives': [ (r'@(load-plugin|load-sigs|load|unload)\b.*$', Comment.Preproc), (r'@(DEBUG|DIR|FILENAME|deprecated|if|ifdef|ifndef|else|endif)\b', Comment.Preproc), (r'(@prefixes)(\s*)((\+?=).*)$', bygroups(Comment.Preproc, Whitespace, Comment.Preproc)), ], 'attributes': [ (words(('redef', 'priority', 'log', 'optional', 'default', 'add_func', 'delete_func', 'expire_func', 'read_expire', 'write_expire', 'create_expire', 'synchronized', 'persistent', 'rotate_interval', 'rotate_size', 'encrypt', 'raw_output', 'mergeable', 'error_handler', 'type_column', 'deprecated'), prefix=r'&', suffix=r'\b'), Keyword.Pseudo), ], 'types': [ (words(('any', 'enum', 'record', 'set', 'table', 'vector', 'function', 'hook', 'event', 'addr', 'bool', 'count', 'double', 'file', 'int', 'interval', 'pattern', 'port', 'string', 'subnet', 'time'), suffix=r'\b'), Keyword.Type), (r'(opaque)(\s+)(of)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)\b', bygroups(Keyword.Type, Whitespace, Operator.Word, Whitespace, Keyword.Type)), (r'(type)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(\s*)(:)(\s*)\b(record|enum)\b', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Operator, Whitespace, Keyword.Type)), (r'(type)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)(\s*)(:)', bygroups(Keyword, Whitespace, Name, Whitespace, Operator)), (r'(redef)(\s+)(record|enum)(\s+)((?:[A-Za-z_]\w*)(?:::(?:[A-Za-z_]\w*))*)\b', bygroups(Keyword, Whitespace, Keyword.Type, Whitespace, Name.Class)), ], 'keywords': [ (words(('redef', 
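# NB: only a bare 'redef' falls through to this keyword rule; the more
# specific 'redef record ...' / 'redef enum ...' forms are consumed by
# the 'types' rules, which 'root' includes before 'keywords'.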
'export', 'if', 'else', 'for', 'while', 'return', 'break', 'next', 'continue', 'fallthrough', 'switch', 'default', 'case', 'add', 'delete', 'when', 'timeout', 'schedule'), suffix=r'\b'), Keyword), (r'(print)\b', Keyword), (r'(global|local|const|option)\b', Keyword.Declaration), (r'(module)(\s+)(([A-Za-z_]\w*)(?:::([A-Za-z_]\w*))*)\b', bygroups(Keyword.Namespace, Whitespace, Name.Namespace)), ], 'literals': [ (r'"', String, 'string'), # Not the greatest match for patterns, but generally helps # disambiguate between start of a pattern and just a division # operator. (r'/(?=.*/)', String.Regex, 'regex'), (r'(T|F)\b', Keyword.Constant), # Port (r'\d{1,5}/(udp|tcp|icmp|unknown)\b', Number), # IPv4 Address (r'(\d{1,3}.){3}(\d{1,3})\b', Number), # IPv6 Address (r'\[([0-9a-fA-F]{0,4}:){2,7}([0-9a-fA-F]{0,4})?((\d{1,3}.){3}(\d{1,3}))?\]', Number), # Numeric (r'0[xX]' + _hex + r'+\b', Number.Hex), (_float + r'\s*(day|hr|min|sec|msec|usec)s?\b', Number.Float), (_float + r'\b', Number.Float), (r'(\d+)\b', Number.Integer), # Hostnames (_h + r'(\.' + _h + r')+', String), ], 'operators': [ (r'[!%*/+<=>~|&^-]', Operator), (r'([-+=&|]{2}|[+=!><-]=)', Operator), (r'(in|as|is|of)\b', Operator.Word), (r'\??\$', Operator), ], 'punctuation': [ (r'[{}()\[\],;.]', Punctuation), # The "ternary if", which uses '?' and ':', could instead be # treated as an Operator, but colons are more frequently used to # separate field/identifier names from their types, so the (often) # less-prominent Punctuation is used even with '?' for consistency. (r'[?:]', Punctuation), ], 'identifiers': [ (r'([a-zA-Z_]\w*)(::)', bygroups(Name, Punctuation)), (r'[a-zA-Z_]\w*', Name) ], 'string': [ (r'\\.', String.Escape), (r'%-?[0-9]*(\.[0-9]+)?[DTd-gsx]', String.Escape), (r'"', String, '#pop'), (r'.', String), ], 'regex': [ (r'\\.', String.Escape), (r'/', String.Regex, '#pop'), (r'.', String.Regex), ], } BroLexer = ZeekLexer class PuppetLexer(RegexLexer): """ For `Puppet `__ configuration DSL. .. 
versionadded:: 1.6 """ name = 'Puppet' aliases = ['puppet'] filenames = ['*.pp'] tokens = { 'root': [ include('comments'), include('keywords'), include('names'), include('numbers'), include('operators'), include('strings'), (r'[]{}:(),;[]', Punctuation), (r'\s+', Whitespace), ], 'comments': [ (r'(\s*)(#.*)$', bygroups(Whitespace, Comment)), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), ], 'operators': [ (r'(=>|\?|<|>|=|\+|-|/|\*|~|!|\|)', Operator), (r'(in|and|or|not)\b', Operator.Word), ], 'names': [ (r'[a-zA-Z_]\w*', Name.Attribute), (r'(\$\S+)(\[)(\S+)(\])', bygroups(Name.Variable, Punctuation, String, Punctuation)), (r'\$\S+', Name.Variable), ], 'numbers': [ # Copypasta from the Python lexer (r'(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?', Number.Float), (r'\d+[eE][+-]?[0-9]+j?', Number.Float), (r'0[0-7]+j?', Number.Oct), (r'0[xX][a-fA-F0-9]+', Number.Hex), (r'\d+L', Number.Integer.Long), (r'\d+j?', Number.Integer) ], 'keywords': [ # Left out 'group' and 'require' # Since they're often used as attributes (words(( 'absent', 'alert', 'alias', 'audit', 'augeas', 'before', 'case', 'check', 'class', 'computer', 'configured', 'contained', 'create_resources', 'crit', 'cron', 'debug', 'default', 'define', 'defined', 'directory', 'else', 'elsif', 'emerg', 'err', 'exec', 'extlookup', 'fail', 'false', 'file', 'filebucket', 'fqdn_rand', 'generate', 'host', 'if', 'import', 'include', 'info', 'inherits', 'inline_template', 'installed', 'interface', 'k5login', 'latest', 'link', 'loglevel', 'macauthorization', 'mailalias', 'maillist', 'mcx', 'md5', 'mount', 'mounted', 'nagios_command', 'nagios_contact', 'nagios_contactgroup', 'nagios_host', 'nagios_hostdependency', 'nagios_hostescalation', 'nagios_hostextinfo', 'nagios_hostgroup', 'nagios_service', 'nagios_servicedependency', 'nagios_serviceescalation', 'nagios_serviceextinfo', 'nagios_servicegroup', 'nagios_timeperiod', 'node', 'noop', 'notice', 'notify', 'package', 'present', 'purged', 'realize', 'regsubst', 'resources', 'role', 'router', 'running', 'schedule', 'scheduled_task', 'search', 'selboolean', 'selmodule', 'service', 'sha1', 'shellquote', 'split', 'sprintf', 'ssh_authorized_key', 'sshkey', 'stage', 'stopped', 'subscribe', 'tag', 'tagged', 'template', 'tidy', 'true', 'undef', 'unmounted', 'user', 'versioncmp', 'vlan', 'warning', 'yumrepo', 'zfs', 'zone', 'zpool'), prefix='(?i)', suffix=r'\b'), Keyword), ], 'strings': [ (r'"([^"])*"', String), (r"'(\\'|[^'])*'", String), ], } class RslLexer(RegexLexer): """ `RSL `_ is the formal specification language used in RAISE (Rigorous Approach to Industrial Software Engineering) method. .. 
versionadded:: 2.0 """ name = 'RSL' aliases = ['rsl'] filenames = ['*.rsl'] mimetypes = ['text/rsl'] flags = re.MULTILINE | re.DOTALL tokens = { 'root': [ (words(( 'Bool', 'Char', 'Int', 'Nat', 'Real', 'Text', 'Unit', 'abs', 'all', 'always', 'any', 'as', 'axiom', 'card', 'case', 'channel', 'chaos', 'class', 'devt_relation', 'dom', 'elems', 'else', 'elif', 'end', 'exists', 'extend', 'false', 'for', 'hd', 'hide', 'if', 'in', 'is', 'inds', 'initialise', 'int', 'inter', 'isin', 'len', 'let', 'local', 'ltl_assertion', 'object', 'of', 'out', 'post', 'pre', 'read', 'real', 'rng', 'scheme', 'skip', 'stop', 'swap', 'then', 'theory', 'test_case', 'tl', 'transition_system', 'true', 'type', 'union', 'until', 'use', 'value', 'variable', 'while', 'with', 'write', '~isin', '-inflist', '-infset', '-list', '-set'), prefix=r'\b', suffix=r'\b'), Keyword), (r'(variable|value)\b', Keyword.Declaration), (r'--.*?\n', Comment), (r'<:.*?:>', Comment), (r'\{!.*?!\}', Comment), (r'/\*.*?\*/', Comment), (r'^([ \t]*)([\w]+)([ \t]*)(:[^:])', bygroups(Whitespace, Name.Function, Whitespace, Name.Function)), (r'(^[ \t]*)([\w]+)([ \t]*)(\([\w\s,]*\))([ \t]*)(is|as)', bygroups(Whitespace, Name.Function, Whitespace, Text, Whitespace, Keyword)), (r'\b[A-Z]\w*\b', Keyword.Type), (r'(true|false)\b', Keyword.Constant), (r'".*"', String), (r'\'.\'', String.Char), (r'(><|->|-m->|/\\|<=|<<=|<\.|\|\||\|\^\||-~->|-~m->|\\/|>=|>>|' r'\.>|\+\+|-\\|<->|=>|:-|~=|\*\*|<<|>>=|\+>|!!|\|=\||#)', Operator), (r'[0-9]+\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-f]+', Number.Hex), (r'[0-9]+', Number.Integer), (r'\s+', Whitespace), (r'.', Text), ], } def analyse_text(text): """ Check for the most common text in the beginning of a RSL file. """ if re.search(r'scheme\s*.*?=\s*class\s*type', text, re.I) is not None: return 1.0 class MscgenLexer(RegexLexer): """ For `Mscgen `_ files. .. versionadded:: 1.6 """ name = 'Mscgen' aliases = ['mscgen', 'msc'] filenames = ['*.msc'] _var = r'(\w+|"(?:\\"|[^"])*")' tokens = { 'root': [ (r'msc\b', Keyword.Type), # Options (r'(hscale|HSCALE|width|WIDTH|wordwraparcs|WORDWRAPARCS' r'|arcgradient|ARCGRADIENT)\b', Name.Property), # Operators (r'(abox|ABOX|rbox|RBOX|box|BOX|note|NOTE)\b', Operator.Word), (r'(\.|-|\|){3}', Keyword), (r'(?:-|=|\.|:){2}' r'|<<=>>|<->|<=>|<<>>|<:>' r'|->|=>>|>>|=>|:>|-x|-X' r'|<-|<<=|<<|<=|<:|x-|X-|=', Operator), # Names (r'\*', Name.Builtin), (_var, Name.Variable), # Other (r'\[', Punctuation, 'attrs'), (r'\{|\}|,|;', Punctuation), include('comments') ], 'attrs': [ (r'\]', Punctuation, '#pop'), (_var + r'(\s*)(=)(\s*)' + _var, bygroups(Name.Attribute, Whitespace, Operator, Whitespace, String)), (r',', Punctuation), include('comments') ], 'comments': [ (r'(?://|#).*?\n', Comment.Single), (r'/\*(?:.|\n)*?\*/', Comment.Multiline), (r'[ \t\r\n]+', Whitespace) ] } class VGLLexer(RegexLexer): """ For `SampleManager VGL `_ source code. .. versionadded:: 1.6 """ name = 'VGL' aliases = ['vgl'] filenames = ['*.rpf'] flags = re.MULTILINE | re.DOTALL | re.IGNORECASE tokens = { 'root': [ (r'\{[^}]*\}', Comment.Multiline), (r'declare', Keyword.Constant), (r'(if|then|else|endif|while|do|endwhile|and|or|prompt|object' r'|create|on|line|with|global|routine|value|endroutine|constant' r'|global|set|join|library|compile_option|file|exists|create|copy' r'|delete|enable|windows|name|notprotected)(?! 
*[=<>.,()])', Keyword), (r'(true|false|null|empty|error|locked)', Keyword.Constant), (r'[~^*#!%&\[\]()<>|+=:;,./?-]', Operator), (r'"[^"]*"', String), (r'(\.)([a-z_$][\w$]*)', bygroups(Operator, Name.Attribute)), (r'[0-9][0-9]*(\.[0-9]+(e[+\-]?[0-9]+)?)?', Number), (r'[a-z_$][\w$]*', Name), (r'[\r\n]+', Whitespace), (r'\s+', Whitespace) ] } class AlloyLexer(RegexLexer): """ For `Alloy `_ source code. .. versionadded:: 2.0 """ name = 'Alloy' aliases = ['alloy'] filenames = ['*.als'] mimetypes = ['text/x-alloy'] flags = re.MULTILINE | re.DOTALL iden_rex = r'[a-zA-Z_][\w\']*' text_tuple = (r'[^\S\n]+', Whitespace) tokens = { 'sig': [ (r'(extends)\b', Keyword, '#pop'), (iden_rex, Name), text_tuple, (r',', Punctuation), (r'\{', Operator, '#pop'), ], 'module': [ text_tuple, (iden_rex, Name, '#pop'), ], 'fun': [ text_tuple, (r'\{', Operator, '#pop'), (iden_rex, Name, '#pop'), ], 'root': [ (r'--.*?$', Comment.Single), (r'//.*?$', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), text_tuple, (r'(module|open)(\s+)', bygroups(Keyword.Namespace, Whitespace), 'module'), (r'(sig|enum)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'sig'), (r'(iden|univ|none)\b', Keyword.Constant), (r'(int|Int)\b', Keyword.Type), (r'(this|abstract|extends|set|seq|one|lone|let)\b', Keyword), (r'(all|some|no|sum|disj|when|else)\b', Keyword), (r'(run|check|for|but|exactly|expect|as)\b', Keyword), (r'(and|or|implies|iff|in)\b', Operator.Word), (r'(fun|pred|fact|assert)(\s+)', bygroups(Keyword, Whitespace), 'fun'), (r'!|#|&&|\+\+|<<|>>|>=|<=>|<=|\.|->', Operator), (r'[-+/*%=<>&!^|~{}\[\]().]', Operator), (iden_rex, Name), (r'[:,]', Punctuation), (r'[0-9]+', Number.Integer), (r'"(\\\\|\\[^\\]|[^"\\])*"', String), (r'\n', Whitespace), ] } class PanLexer(RegexLexer): """ Lexer for `pan `_ source files. Based on tcsh lexer. .. 
versionadded:: 2.0 """ name = 'Pan' aliases = ['pan'] filenames = ['*.pan'] tokens = { 'root': [ include('basic'), (r'\(', Keyword, 'paren'), (r'\{', Keyword, 'curly'), include('data'), ], 'basic': [ (words(( 'if', 'for', 'with', 'else', 'type', 'bind', 'while', 'valid', 'final', 'prefix', 'unique', 'object', 'foreach', 'include', 'template', 'function', 'variable', 'structure', 'extensible', 'declaration'), prefix=r'\b', suffix=r'\b'), Keyword), (words(( 'file_contents', 'format', 'index', 'length', 'match', 'matches', 'replace', 'splice', 'split', 'substr', 'to_lowercase', 'to_uppercase', 'debug', 'error', 'traceback', 'deprecated', 'base64_decode', 'base64_encode', 'digest', 'escape', 'unescape', 'append', 'create', 'first', 'nlist', 'key', 'list', 'merge', 'next', 'prepend', 'is_boolean', 'is_defined', 'is_double', 'is_list', 'is_long', 'is_nlist', 'is_null', 'is_number', 'is_property', 'is_resource', 'is_string', 'to_boolean', 'to_double', 'to_long', 'to_string', 'clone', 'delete', 'exists', 'path_exists', 'if_exists', 'return', 'value'), prefix=r'\b', suffix=r'\b'), Name.Builtin), (r'#.*', Comment), (r'\\[\w\W]', String.Escape), (r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Whitespace, Operator)), (r'[\[\]{}()=]+', Operator), (r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String), (r';', Punctuation), ], 'data': [ (r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double), (r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single), (r'\s+', Whitespace), (r'[^=\s\[\]{}()$"\'`\\;#]+', Text), (r'\d+(?= |\Z)', Number), ], 'curly': [ (r'\}', Keyword, '#pop'), (r':-', Keyword), (r'\w+', Name.Variable), (r'[^}:"\'`$]+', Punctuation), (r':', Punctuation), include('root'), ], 'paren': [ (r'\)', Keyword, '#pop'), include('root'), ], } class CrmshLexer(RegexLexer): """ Lexer for `crmsh `_ configuration files for Pacemaker clusters. .. versionadded:: 2.1 """ name = 'Crmsh' aliases = ['crmsh', 'pcmk'] filenames = ['*.crmsh', '*.pcmk'] mimetypes = [] elem = words(( 'node', 'primitive', 'group', 'clone', 'ms', 'location', 'colocation', 'order', 'fencing_topology', 'rsc_ticket', 'rsc_template', 'property', 'rsc_defaults', 'op_defaults', 'acl_target', 'acl_group', 'user', 'role', 'tag'), suffix=r'(?![\w#$-])') sub = words(( 'params', 'meta', 'operations', 'op', 'rule', 'attributes', 'utilization'), suffix=r'(?![\w#$-])') acl = words(('read', 'write', 'deny'), suffix=r'(?![\w#$-])') bin_rel = words(('and', 'or'), suffix=r'(?![\w#$-])') un_ops = words(('defined', 'not_defined'), suffix=r'(?![\w#$-])') date_exp = words(('in_range', 'date', 'spec', 'in'), suffix=r'(?![\w#$-])') acl_mod = (r'(?:tag|ref|reference|attribute|type|xpath)') bin_ops = (r'(?:lt|gt|lte|gte|eq|ne)') val_qual = (r'(?:string|version|number)') rsc_role_action = (r'(?:Master|Started|Slave|Stopped|' r'start|promote|demote|stop)') tokens = { 'root': [ (r'^(#.*)(\n)?', bygroups(Comment, Whitespace)), # attr=value (nvpair) (r'([\w#$-]+)(=)("(?:""|[^"])*"|\S+)', bygroups(Name.Attribute, Punctuation, String)), # need this construct, otherwise numeric node ids # are matched as scores # elem id: (r'(node)(\s+)([\w#$-]+)(:)', bygroups(Keyword, Whitespace, Name, Punctuation)), # scores (r'([+-]?([0-9]+|inf)):', Number), # keywords (elements and other) (elem, Keyword), (sub, Keyword), (acl, Keyword), # binary operators (r'(?:%s:)?(%s)(?![\w#$-])' % (val_qual, bin_ops), Operator.Word), # other operators (bin_rel, Operator.Word), (un_ops, Operator.Word), (date_exp, Operator.Word), # builtin attributes (e.g. 
#uname) (r'#[a-z]+(?![\w#$-])', Name.Builtin), # acl_mod:blah (r'(%s)(:)("(?:""|[^"])*"|\S+)' % acl_mod, bygroups(Keyword, Punctuation, Name)), # rsc_id[:(role|action)] # NB: this matches all other identifiers (r'([\w#$-]+)(?:(:)(%s))?(?![\w#$-])' % rsc_role_action, bygroups(Name, Punctuation, Operator.Word)), # punctuation (r'(\\(?=\n)|[\[\](){}/:@])', Punctuation), (r'\s+|\n', Whitespace), ], } class FlatlineLexer(RegexLexer): """ Lexer for `Flatline `_ expressions. .. versionadded:: 2.2 """ name = 'Flatline' aliases = ['flatline'] filenames = [] mimetypes = ['text/x-flatline'] special_forms = ('let',) builtins = ( "!=", "*", "+", "-", "<", "<=", "=", ">", ">=", "abs", "acos", "all", "all-but", "all-with-defaults", "all-with-numeric-default", "and", "asin", "atan", "avg", "avg-window", "bin-center", "bin-count", "call", "category-count", "ceil", "cond", "cond-window", "cons", "cos", "cosh", "count", "diff-window", "div", "ensure-value", "ensure-weighted-value", "epoch", "epoch-day", "epoch-fields", "epoch-hour", "epoch-millisecond", "epoch-minute", "epoch-month", "epoch-second", "epoch-weekday", "epoch-year", "exp", "f", "field", "field-prop", "fields", "filter", "first", "floor", "head", "if", "in", "integer", "language", "length", "levenshtein", "linear-regression", "list", "ln", "log", "log10", "map", "matches", "matches?", "max", "maximum", "md5", "mean", "median", "min", "minimum", "missing", "missing-count", "missing?", "missing_count", "mod", "mode", "normalize", "not", "nth", "occurrences", "or", "percentile", "percentile-label", "population", "population-fraction", "pow", "preferred", "preferred?", "quantile-label", "rand", "rand-int", "random-value", "re-quote", "real", "replace", "replace-first", "rest", "round", "row-number", "segment-label", "sha1", "sha256", "sin", "sinh", "sqrt", "square", "standard-deviation", "standard_deviation", "str", "subs", "sum", "sum-squares", "sum-window", "sum_squares", "summary", "summary-no", "summary-str", "tail", "tan", "tanh", "to-degrees", "to-radians", "variance", "vectorize", "weighted-random-value", "window", "winnow", "within-percentiles?", "z-score", ) valid_name = r'(?!#)[\w!$%*+<=>?/.#-]+' tokens = { 'root': [ # whitespaces - usually not relevant (r'[,]+', Text), (r'\s+', Whitespace), # numbers (r'-?\d+\.\d+', Number.Float), (r'-?\d+', Number.Integer), (r'0x-?[a-f\d]+', Number.Hex), # strings, symbols and characters (r'"(\\\\|\\[^\\]|[^"\\])*"', String), (r"\\(.|[a-z]+)", String.Char), # expression template placeholder (r'_', String.Symbol), # highlight the special forms (words(special_forms, suffix=' '), Keyword), # highlight the builtins (words(builtins, suffix=' '), Name.Builtin), # the remaining functions (r'(?<=\()' + valid_name, Name.Function), # find the remaining variables (valid_name, Name.Variable), # parentheses (r'(\(|\))', Punctuation), ], } class SnowballLexer(ExtendedRegexLexer): """ Lexer for `Snowball `_ source code. .. 
versionadded:: 2.2 """ name = 'Snowball' aliases = ['snowball'] filenames = ['*.sbl'] _ws = r'\n\r\t ' def __init__(self, **options): self._reset_stringescapes() ExtendedRegexLexer.__init__(self, **options) def _reset_stringescapes(self): self._start = "'" self._end = "'" def _string(do_string_first): def callback(lexer, match, ctx): s = match.start() text = match.group() string = re.compile(r'([^%s]*)(.)' % re.escape(lexer._start)).match escape = re.compile(r'([^%s]*)(.)' % re.escape(lexer._end)).match pos = 0 do_string = do_string_first while pos < len(text): if do_string: match = string(text, pos) yield s + match.start(1), String.Single, match.group(1) if match.group(2) == "'": yield s + match.start(2), String.Single, match.group(2) ctx.stack.pop() break yield s + match.start(2), String.Escape, match.group(2) pos = match.end() match = escape(text, pos) yield s + match.start(), String.Escape, match.group() if match.group(2) != lexer._end: ctx.stack[-1] = 'escape' break pos = match.end() do_string = True ctx.pos = s + match.end() return callback def _stringescapes(lexer, match, ctx): lexer._start = match.group(3) lexer._end = match.group(5) return bygroups(Keyword.Reserved, Whitespace, String.Escape, Whitespace, String.Escape)(lexer, match, ctx) tokens = { 'root': [ (words(('len', 'lenof'), suffix=r'\b'), Operator.Word), include('root1'), ], 'root1': [ (r'[%s]+' % _ws, Whitespace), (r'\d+', Number.Integer), (r"'", String.Single, 'string'), (r'[()]', Punctuation), (r'/\*[\w\W]*?\*/', Comment.Multiline), (r'//.*', Comment.Single), (r'[!*+\-/<=>]=|[-=]>|<[+-]|[$*+\-/<=>?\[\]]', Operator), (words(('as', 'get', 'hex', 'among', 'define', 'decimal', 'backwardmode'), suffix=r'\b'), Keyword.Reserved), (words(('strings', 'booleans', 'integers', 'routines', 'externals', 'groupings'), suffix=r'\b'), Keyword.Reserved, 'declaration'), (words(('do', 'or', 'and', 'for', 'hop', 'non', 'not', 'set', 'try', 'fail', 'goto', 'loop', 'next', 'test', 'true', 'false', 'unset', 'atmark', 'attach', 'delete', 'gopast', 'insert', 'repeat', 'sizeof', 'tomark', 'atleast', 'atlimit', 'reverse', 'setmark', 'tolimit', 'setlimit', 'backwards', 'substring'), suffix=r'\b'), Operator.Word), (words(('size', 'limit', 'cursor', 'maxint', 'minint'), suffix=r'\b'), Name.Builtin), (r'(stringdef\b)([%s]*)([^%s]+)' % (_ws, _ws), bygroups(Keyword.Reserved, Whitespace, String.Escape)), (r'(stringescapes\b)([%s]*)(.)([%s]*)(.)' % (_ws, _ws), _stringescapes), (r'[A-Za-z]\w*', Name), ], 'declaration': [ (r'\)', Punctuation, '#pop'), (words(('len', 'lenof'), suffix=r'\b'), Name, ('root1', 'declaration')), include('root1'), ], 'string': [ (r"[^']*'", _string(True)), ], 'escape': [ (r"[^']*'", _string(False)), ], } def get_tokens_unprocessed(self, text=None, context=None): self._reset_stringescapes() return ExtendedRegexLexer.get_tokens_unprocessed(self, text, context) pygments-2.11.2/pygments/lexers/parasail.py0000644000175000017500000000523114165547207020655 0ustar carstencarsten""" pygments.lexers.parasail ~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for ParaSail. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Literal __all__ = ['ParaSailLexer'] class ParaSailLexer(RegexLexer): """ For `ParaSail `_ source code. .. 
versionadded:: 2.1 """ name = 'ParaSail' aliases = ['parasail'] filenames = ['*.psi', '*.psl'] mimetypes = ['text/x-parasail'] flags = re.MULTILINE tokens = { 'root': [ (r'[^\S\n]+', Text), (r'//.*?\n', Comment.Single), (r'\b(and|or|xor)=', Operator.Word), (r'\b(and(\s+then)?|or(\s+else)?|xor|rem|mod|' r'(is|not)\s+null)\b', Operator.Word), # Keywords (r'\b(abs|abstract|all|block|class|concurrent|const|continue|' r'each|end|exit|extends|exports|forward|func|global|implements|' r'import|in|interface|is|lambda|locked|new|not|null|of|op|' r'optional|private|queued|ref|return|reverse|separate|some|' r'type|until|var|with|' # Control flow r'if|then|else|elsif|case|for|while|loop)\b', Keyword.Reserved), (r'(abstract\s+)?(interface|class|op|func|type)', Keyword.Declaration), # Literals (r'"[^"]*"', String), (r'\\[\'ntrf"0]', String.Escape), (r'#[a-zA-Z]\w*', Literal), # Enumeration include('numbers'), (r"'[^']'", String.Char), (r'[a-zA-Z]\w*', Name), # Operators and Punctuation (r'(<==|==>|<=>|\*\*=|<\|=|<<=|>>=|==|!=|=\?|<=|>=|' r'\*\*|<<|>>|=>|:=|\+=|-=|\*=|\|=|\||/=|\+|-|\*|/|' r'\.\.|<\.\.|\.\.<|<\.\.<)', Operator), (r'(<|>|\[|\]|\(|\)|\||:|;|,|.|\{|\}|->)', Punctuation), (r'\n+', Text), ], 'numbers': [ (r'\d[0-9_]*#[0-9a-fA-F][0-9a-fA-F_]*#', Number.Hex), # any base (r'0[xX][0-9a-fA-F][0-9a-fA-F_]*', Number.Hex), # C-like hex (r'0[bB][01][01_]*', Number.Bin), # C-like bin (r'\d[0-9_]*\.\d[0-9_]*[eE][+-]\d[0-9_]*', # float exp Number.Float), (r'\d[0-9_]*\.\d[0-9_]*', Number.Float), # float (r'\d[0-9_]*', Number.Integer), # integer ], } pygments-2.11.2/pygments/lexers/factor.py0000644000175000017500000004617214165547207020350 0ustar carstencarsten""" pygments.lexers.factor ~~~~~~~~~~~~~~~~~~~~~~ Lexers for the Factor language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, default, words from pygments.token import Text, Comment, Keyword, Name, String, Number, \ Whitespace, Punctuation __all__ = ['FactorLexer'] class FactorLexer(RegexLexer): """ Lexer for the `Factor `_ language. .. 
versionadded:: 1.4 """ name = 'Factor' aliases = ['factor'] filenames = ['*.factor'] mimetypes = ['text/x-factor'] flags = re.MULTILINE | re.UNICODE builtin_kernel = words(( '-rot', '2bi', '2bi@', '2bi*', '2curry', '2dip', '2drop', '2dup', '2keep', '2nip', '2over', '2tri', '2tri@', '2tri*', '3bi', '3curry', '3dip', '3drop', '3dup', '3keep', '3tri', '4dip', '4drop', '4dup', '4keep', '', '=', '>boolean', 'clone', '?', '?execute', '?if', 'and', 'assert', 'assert=', 'assert?', 'bi', 'bi-curry', 'bi-curry@', 'bi-curry*', 'bi@', 'bi*', 'boa', 'boolean', 'boolean?', 'both?', 'build', 'call', 'callstack', 'callstack>array', 'callstack?', 'clear', '(clone)', 'compose', 'compose?', 'curry', 'curry?', 'datastack', 'die', 'dip', 'do', 'drop', 'dup', 'dupd', 'either?', 'eq?', 'equal?', 'execute', 'hashcode', 'hashcode*', 'identity-hashcode', 'identity-tuple', 'identity-tuple?', 'if', 'if*', 'keep', 'loop', 'most', 'new', 'nip', 'not', 'null', 'object', 'or', 'over', 'pick', 'prepose', 'retainstack', 'rot', 'same?', 'swap', 'swapd', 'throw', 'tri', 'tri-curry', 'tri-curry@', 'tri-curry*', 'tri@', 'tri*', 'tuple', 'tuple?', 'unless', 'unless*', 'until', 'when', 'when*', 'while', 'with', 'wrapper', 'wrapper?', 'xor'), suffix=r'(\s+)') builtin_assocs = words(( '2cache', '', '>alist', '?at', '?of', 'assoc', 'assoc-all?', 'assoc-any?', 'assoc-clone-like', 'assoc-combine', 'assoc-diff', 'assoc-diff!', 'assoc-differ', 'assoc-each', 'assoc-empty?', 'assoc-filter', 'assoc-filter!', 'assoc-filter-as', 'assoc-find', 'assoc-hashcode', 'assoc-intersect', 'assoc-like', 'assoc-map', 'assoc-map-as', 'assoc-partition', 'assoc-refine', 'assoc-size', 'assoc-stack', 'assoc-subset?', 'assoc-union', 'assoc-union!', 'assoc=', 'assoc>map', 'assoc?', 'at', 'at+', 'at*', 'cache', 'change-at', 'clear-assoc', 'delete-at', 'delete-at*', 'enum', 'enum?', 'extract-keys', 'inc-at', 'key?', 'keys', 'map>assoc', 'maybe-set-at', 'new-assoc', 'of', 'push-at', 'rename-at', 'set-at', 'sift-keys', 'sift-values', 'substitute', 'unzip', 'value-at', 'value-at*', 'value?', 'values', 'zip'), suffix=r'(\s+)') builtin_combinators = words(( '2cleave', '2cleave>quot', '3cleave', '3cleave>quot', '4cleave', '4cleave>quot', 'alist>quot', 'call-effect', 'case', 'case-find', 'case>quot', 'cleave', 'cleave>quot', 'cond', 'cond>quot', 'deep-spread>quot', 'execute-effect', 'linear-case-quot', 'no-case', 'no-case?', 'no-cond', 'no-cond?', 'recursive-hashcode', 'shallow-spread>quot', 'spread', 'to-fixed-point', 'wrong-values', 'wrong-values?'), suffix=r'(\s+)') builtin_math = words(( '-', '/', '/f', '/i', '/mod', '2/', '2^', '<', '<=', '', '>', '>=', '>bignum', '>fixnum', '>float', '>integer', '(all-integers?)', '(each-integer)', '(find-integer)', '*', '+', '?1+', 'abs', 'align', 'all-integers?', 'bignum', 'bignum?', 'bit?', 'bitand', 'bitnot', 'bitor', 'bits>double', 'bits>float', 'bitxor', 'complex', 'complex?', 'denominator', 'double>bits', 'each-integer', 'even?', 'find-integer', 'find-last-integer', 'fixnum', 'fixnum?', 'float', 'float>bits', 'float?', 'fp-bitwise=', 'fp-infinity?', 'fp-nan-payload', 'fp-nan?', 'fp-qnan?', 'fp-sign', 'fp-snan?', 'fp-special?', 'if-zero', 'imaginary-part', 'integer', 'integer>fixnum', 'integer>fixnum-strict', 'integer?', 'log2', 'log2-expects-positive', 'log2-expects-positive?', 'mod', 'neg', 'neg?', 'next-float', 'next-power-of-2', 'number', 'number=', 'number?', 'numerator', 'odd?', 'out-of-fixnum-range', 'out-of-fixnum-range?', 'power-of-2?', 'prev-float', 'ratio', 'ratio?', 'rational', 'rational?', 'real', 'real-part', 
'real?', 'recip', 'rem', 'sgn', 'shift', 'sq', 'times', 'u<', 'u<=', 'u>', 'u>=', 'unless-zero', 'unordered?', 'when-zero', 'zero?'), suffix=r'(\s+)') builtin_sequences = words(( '1sequence', '2all?', '2each', '2map', '2map-as', '2map-reduce', '2reduce', '2selector', '2sequence', '3append', '3append-as', '3each', '3map', '3map-as', '3sequence', '4sequence', '', '', '', '?first', '?last', '?nth', '?second', '?set-nth', 'accumulate', 'accumulate!', 'accumulate-as', 'all?', 'any?', 'append', 'append!', 'append-as', 'assert-sequence', 'assert-sequence=', 'assert-sequence?', 'binary-reduce', 'bounds-check', 'bounds-check?', 'bounds-error', 'bounds-error?', 'but-last', 'but-last-slice', 'cartesian-each', 'cartesian-map', 'cartesian-product', 'change-nth', 'check-slice', 'check-slice-error', 'clone-like', 'collapse-slice', 'collector', 'collector-for', 'concat', 'concat-as', 'copy', 'count', 'cut', 'cut-slice', 'cut*', 'delete-all', 'delete-slice', 'drop-prefix', 'each', 'each-from', 'each-index', 'empty?', 'exchange', 'filter', 'filter!', 'filter-as', 'find', 'find-from', 'find-index', 'find-index-from', 'find-last', 'find-last-from', 'first', 'first2', 'first3', 'first4', 'flip', 'follow', 'fourth', 'glue', 'halves', 'harvest', 'head', 'head-slice', 'head-slice*', 'head*', 'head?', 'if-empty', 'immutable', 'immutable-sequence', 'immutable-sequence?', 'immutable?', 'index', 'index-from', 'indices', 'infimum', 'infimum-by', 'insert-nth', 'interleave', 'iota', 'iota-tuple', 'iota-tuple?', 'join', 'join-as', 'last', 'last-index', 'last-index-from', 'length', 'lengthen', 'like', 'longer', 'longer?', 'longest', 'map', 'map!', 'map-as', 'map-find', 'map-find-last', 'map-index', 'map-integers', 'map-reduce', 'map-sum', 'max-length', 'member-eq?', 'member?', 'midpoint@', 'min-length', 'mismatch', 'move', 'new-like', 'new-resizable', 'new-sequence', 'non-negative-integer-expected', 'non-negative-integer-expected?', 'nth', 'nths', 'pad-head', 'pad-tail', 'padding', 'partition', 'pop', 'pop*', 'prefix', 'prepend', 'prepend-as', 'produce', 'produce-as', 'product', 'push', 'push-all', 'push-either', 'push-if', 'reduce', 'reduce-index', 'remove', 'remove!', 'remove-eq', 'remove-eq!', 'remove-nth', 'remove-nth!', 'repetition', 'repetition?', 'replace-slice', 'replicate', 'replicate-as', 'rest', 'rest-slice', 'reverse', 'reverse!', 'reversed', 'reversed?', 'second', 'selector', 'selector-for', 'sequence', 'sequence-hashcode', 'sequence=', 'sequence?', 'set-first', 'set-fourth', 'set-last', 'set-length', 'set-nth', 'set-second', 'set-third', 'short', 'shorten', 'shorter', 'shorter?', 'shortest', 'sift', 'slice', 'slice-error', 'slice-error?', 'slice?', 'snip', 'snip-slice', 'start', 'start*', 'subseq', 'subseq?', 'suffix', 'suffix!', 'sum', 'sum-lengths', 'supremum', 'supremum-by', 'surround', 'tail', 'tail-slice', 'tail-slice*', 'tail*', 'tail?', 'third', 'trim', 'trim-head', 'trim-head-slice', 'trim-slice', 'trim-tail', 'trim-tail-slice', 'unclip', 'unclip-last', 'unclip-last-slice', 'unclip-slice', 'unless-empty', 'virtual-exemplar', 'virtual-sequence', 'virtual-sequence?', 'virtual@', 'when-empty'), suffix=r'(\s+)') builtin_namespaces = words(( '+@', 'change', 'change-global', 'counter', 'dec', 'get', 'get-global', 'global', 'inc', 'init-namespaces', 'initialize', 'is-global', 'make-assoc', 'namespace', 'namestack', 'off', 'on', 'set', 'set-global', 'set-namestack', 'toggle', 'with-global', 'with-scope', 'with-variable', 'with-variables'), suffix=r'(\s+)') builtin_arrays = words(( '1array', '2array', 
'3array', '4array', '', '>array', 'array', 'array?', 'pair', 'pair?', 'resize-array'), suffix=r'(\s+)') builtin_io = words(( '(each-stream-block-slice)', '(each-stream-block)', '(stream-contents-by-block)', '(stream-contents-by-element)', '(stream-contents-by-length-or-block)', '(stream-contents-by-length)', '+byte+', '+character+', 'bad-seek-type', 'bad-seek-type?', 'bl', 'contents', 'each-block', 'each-block-size', 'each-block-slice', 'each-line', 'each-morsel', 'each-stream-block', 'each-stream-block-slice', 'each-stream-line', 'error-stream', 'flush', 'input-stream', 'input-stream?', 'invalid-read-buffer', 'invalid-read-buffer?', 'lines', 'nl', 'output-stream', 'output-stream?', 'print', 'read', 'read-into', 'read-partial', 'read-partial-into', 'read-until', 'read1', 'readln', 'seek-absolute', 'seek-absolute?', 'seek-end', 'seek-end?', 'seek-input', 'seek-output', 'seek-relative', 'seek-relative?', 'stream-bl', 'stream-contents', 'stream-contents*', 'stream-copy', 'stream-copy*', 'stream-element-type', 'stream-flush', 'stream-length', 'stream-lines', 'stream-nl', 'stream-print', 'stream-read', 'stream-read-into', 'stream-read-partial', 'stream-read-partial-into', 'stream-read-partial-unsafe', 'stream-read-unsafe', 'stream-read-until', 'stream-read1', 'stream-readln', 'stream-seek', 'stream-seekable?', 'stream-tell', 'stream-write', 'stream-write1', 'tell-input', 'tell-output', 'with-error-stream', 'with-error-stream*', 'with-error>output', 'with-input-output+error-streams', 'with-input-output+error-streams*', 'with-input-stream', 'with-input-stream*', 'with-output-stream', 'with-output-stream*', 'with-output>error', 'with-output+error-stream', 'with-output+error-stream*', 'with-streams', 'with-streams*', 'write', 'write1'), suffix=r'(\s+)') builtin_strings = words(( '1string', '', '>string', 'resize-string', 'string', 'string?'), suffix=r'(\s+)') builtin_vectors = words(( '1vector', '', '>vector', '?push', 'vector', 'vector?'), suffix=r'(\s+)') builtin_continuations = words(( '', '', '', 'attempt-all', 'attempt-all-error', 'attempt-all-error?', 'callback-error-hook', 'callcc0', 'callcc1', 'cleanup', 'compute-restarts', 'condition', 'condition?', 'continuation', 'continuation?', 'continue', 'continue-restart', 'continue-with', 'current-continuation', 'error', 'error-continuation', 'error-in-thread', 'error-thread', 'ifcc', 'ignore-errors', 'in-callback?', 'original-error', 'recover', 'restart', 'restart?', 'restarts', 'rethrow', 'rethrow-restarts', 'return', 'return-continuation', 'thread-error-hook', 'throw-continue', 'throw-restarts', 'with-datastack', 'with-return'), suffix=r'(\s+)') tokens = { 'root': [ # factor allows a file to start with a shebang (r'#!.*$', Comment.Preproc), default('base'), ], 'base': [ (r'\s+', Whitespace), # defining words (r'((?:MACRO|MEMO|TYPED)?:[:]?)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function)), (r'(M:[:]?)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Name.Function)), (r'(C:)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function, Whitespace, Name.Class)), (r'(GENERIC:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function)), (r'(HOOK:|GENERIC#)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function, Whitespace, Name.Function)), (r'(\()(\s)', bygroups(Name.Function, Whitespace), 'stackeffect'), (r'(;)(\s)', bygroups(Keyword, Whitespace)), # imports and namespaces (r'(USING:)(\s+)', bygroups(Keyword.Namespace, Whitespace), 'vocabs'), (r'(USE:|UNUSE:|IN:|QUALIFIED:)(\s+)(\S+)', 
bygroups(Keyword.Namespace, Whitespace, Name.Namespace)), (r'(QUALIFIED-WITH:)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword.Namespace, Whitespace, Name.Namespace, Whitespace, Name.Namespace)), (r'(FROM:|EXCLUDE:)(\s+)(\S+)(\s+=>\s)', bygroups(Keyword.Namespace, Whitespace, Name.Namespace, Whitespace), 'words'), (r'(RENAME:)(\s+)(\S+)(\s+)(\S+)(\s+)(=>)(\s+)(\S+)', bygroups(Keyword.Namespace, Whitespace, Name.Function, Whitespace, Name.Namespace, Whitespace, Punctuation, Whitespace, Name.Function)), (r'(ALIAS:|TYPEDEF:)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword.Namespace, Whitespace, Name.Function, Whitespace, Name.Function)), (r'(DEFER:|FORGET:|POSTPONE:)(\s+)(\S+)', bygroups(Keyword.Namespace, Whitespace, Name.Function)), # tuples and classes (r'(TUPLE:|ERROR:)(\s+)(\S+)(\s+)(<)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Punctuation, Whitespace, Name.Class), 'slots'), (r'(TUPLE:|ERROR:|BUILTIN:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class), 'slots'), (r'(MIXIN:|UNION:|INTERSECTION:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class)), (r'(PREDICATE:)(\s+)(\S+)(\s+)(<)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Punctuation, Whitespace, Name.Class)), (r'(C:)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function, Whitespace, Name.Class)), (r'(INSTANCE:)(\s+)(\S+)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class, Whitespace, Name.Class)), (r'(SLOT:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function)), (r'(SINGLETON:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class)), (r'SINGLETONS:', Keyword, 'classes'), # other syntax (r'(CONSTANT:|SYMBOL:|MAIN:|HELP:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Function)), (r'(SYMBOLS:)(\s+)', bygroups(Keyword, Whitespace), 'words'), (r'(SYNTAX:)(\s+)', bygroups(Keyword, Whitespace)), (r'(ALIEN:)(\s+)', bygroups(Keyword, Whitespace)), (r'(STRUCT:)(\s+)(\S+)', bygroups(Keyword, Whitespace, Name.Class)), (r'(FUNCTION:)(\s+)' r'(\S+)(\s+)(\S+)(\s+)' r'(\()(\s+)([^)]+)(\))(\s)', bygroups(Keyword.Namespace, Whitespace, Text, Whitespace, Name.Function, Whitespace, Punctuation, Whitespace, Text, Punctuation, Whitespace)), (r'(FUNCTION-ALIAS:)(\s+)' r'(\S+)(\s+)(\S+)(\s+)' r'(\S+)(\s+)' r'(\()(\s+)([^)]+)(\))(\s)', bygroups(Keyword.Namespace, Whitespace, Text, Whitespace, Name.Function, Whitespace, Name.Function, Whitespace, Punctuation, Whitespace, Text, Punctuation, Whitespace)), # vocab.private (r'()(\s)', bygroups(Keyword.Namespace, Whitespace)), # strings (r'"""\s(?:.|\n)*?\s"""', String), (r'"(?:\\\\|\\"|[^"])*"', String), (r'(\S+")(\s+)((?:\\\\|\\"|[^"])*")', bygroups(String, Whitespace, String)), (r'(CHAR:)(\s+)(\\[\\abfnrstv]|[^\\]\S*)(\s)', bygroups(String.Char, Whitespace, String.Char, Whitespace)), # comments (r'!\s+.*$', Comment), (r'#!\s+.*$', Comment), (r'/\*\s+(?:.|\n)*?\s\*/', Comment), # boolean constants (r'[tf]\b', Name.Constant), # symbols and literals (r'[\\$]\s+\S+', Name.Constant), (r'M\\\s+\S+\s+\S+', Name.Constant), # numbers (r'[+-]?(?:[\d,]*\d)?\.(?:\d([\d,]*\d)?)?(?:[eE][+-]?\d+)?\s', Number), (r'[+-]?\d(?:[\d,]*\d)?(?:[eE][+-]?\d+)?\s', Number), (r'0x[a-fA-F\d](?:[a-fA-F\d,]*[a-fA-F\d])?(?:p\d([\d,]*\d)?)?\s', Number), (r'NAN:\s+[a-fA-F\d](?:[a-fA-F\d,]*[a-fA-F\d])?(?:p\d([\d,]*\d)?)?\s', Number), (r'0b[01]+\s', Number.Bin), (r'0o[0-7]+\s', Number.Oct), (r'(?:\d([\d,]*\d)?)?\+\d(?:[\d,]*\d)?/\d(?:[\d,]*\d)?\s', Number), (r'(?:\-\d([\d,]*\d)?)?\-\d(?:[\d,]*\d)?/\d(?:[\d,]*\d)?\s', Number), # keywords 
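# Factor words are whitespace-delimited and may contain almost any
# non-whitespace character, so these patterns end by consuming a
# whitespace character (\s) rather than relying on a \b word boundary,
# which would misfire on names containing non-word characters.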
(r'(?:deprecated|final|foldable|flushable|inline|recursive)\s', Keyword), # builtins (builtin_kernel, bygroups(Name.Builtin, Whitespace)), (builtin_assocs, bygroups(Name.Builtin, Whitespace)), (builtin_combinators, bygroups(Name.Builtin, Whitespace)), (builtin_math, bygroups(Name.Builtin, Whitespace)), (builtin_sequences, bygroups(Name.Builtin, Whitespace)), (builtin_namespaces, bygroups(Name.Builtin, Whitespace)), (builtin_arrays, bygroups(Name.Builtin, Whitespace)), (builtin_io, bygroups(Name.Builtin, Whitespace)), (builtin_strings, bygroups(Name.Builtin, Whitespace)), (builtin_vectors, bygroups(Name.Builtin, Whitespace)), (builtin_continuations, bygroups(Name.Builtin, Whitespace)), # everything else is text (r'\S+', Text), ], 'stackeffect': [ (r'\s+', Whitespace), (r'(\()(\s+)', bygroups(Name.Function, Whitespace), 'stackeffect'), (r'(\))(\s+)', bygroups(Name.Function, Whitespace), '#pop'), (r'(--)(\s+)', bygroups(Name.Function, Whitespace)), (r'\S+', Name.Variable), ], 'slots': [ (r'\s+', Whitespace), (r'(;)(\s+)', bygroups(Keyword, Whitespace), '#pop'), (r'(\{)(\s+)(\S+)(\s+)([^}]+)(\s+)(\})(\s+)', bygroups(Text, Whitespace, Name.Variable, Whitespace, Text, Whitespace, Text, Whitespace)), (r'\S+', Name.Variable), ], 'vocabs': [ (r'\s+', Whitespace), (r'(;)(\s+)', bygroups(Keyword, Whitespace), '#pop'), (r'\S+', Name.Namespace), ], 'classes': [ (r'\s+', Whitespace), (r'(;)(\s+)', bygroups(Keyword, Whitespace), '#pop'), (r'\S+', Name.Class), ], 'words': [ (r'\s+', Whitespace), (r'(;)(\s+)', bygroups(Keyword, Whitespace), '#pop'), (r'\S+', Name.Function), ], } pygments-2.11.2/pygments/lexers/console.py0000644000175000017500000001006414165547207020523 0ustar carstencarsten""" pygments.lexers.console ~~~~~~~~~~~~~~~~~~~~~~~ Lexers for misc console output. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, include, bygroups from pygments.token import Generic, Comment, String, Text, Keyword, Name, \ Punctuation, Number, Whitespace __all__ = ['VCTreeStatusLexer', 'PyPyLogLexer'] class VCTreeStatusLexer(RegexLexer): """ For colorizing output of version control status commands, like "hg status" or "svn status". .. versionadded:: 2.0 """ name = 'VCTreeStatus' aliases = ['vctreestatus'] filenames = [] mimetypes = [] tokens = { 'root': [ (r'^A \+ C\s+', Generic.Error), (r'^A\s+\+?\s+', String), (r'^M\s+', Generic.Inserted), (r'^C\s+', Generic.Error), (r'^D\s+', Generic.Deleted), (r'^[?!]\s+', Comment.Preproc), (r' >\s+.*\n', Comment.Preproc), (r'\S+', Text), (r'\s+', Whitespace), ] } class PyPyLogLexer(RegexLexer): """ Lexer for PyPy log files. .. 
versionadded:: 1.5 """ name = "PyPy Log" aliases = ["pypylog", "pypy"] filenames = ["*.pypylog"] mimetypes = ['application/x-pypylog'] tokens = { "root": [ (r"\[\w+\] \{jit-log-.*?$", Keyword, "jit-log"), (r"\[\w+\] \{jit-backend-counts$", Keyword, "jit-backend-counts"), include("extra-stuff"), ], "jit-log": [ (r"\[\w+\] jit-log-.*?}$", Keyword, "#pop"), (r"^\+\d+: ", Comment), (r"--end of the loop--", Comment), (r"[ifp]\d+", Name), (r"ptr\d+", Name), (r"(\()(\w+(?:\.\w+)?)(\))", bygroups(Punctuation, Name.Builtin, Punctuation)), (r"[\[\]=,()]", Punctuation), (r"(\d+\.\d+|inf|-inf)", Number.Float), (r"-?\d+", Number.Integer), (r"'.*'", String), (r"(None|descr|ConstClass|ConstPtr|TargetToken)", Name), (r"<.*?>+", Name.Builtin), (r"(label|debug_merge_point|jump|finish)", Name.Class), (r"(int_add_ovf|int_add|int_sub_ovf|int_sub|int_mul_ovf|int_mul|" r"int_floordiv|int_mod|int_lshift|int_rshift|int_and|int_or|" r"int_xor|int_eq|int_ne|int_ge|int_gt|int_le|int_lt|int_is_zero|" r"int_is_true|" r"uint_floordiv|uint_ge|uint_lt|" r"float_add|float_sub|float_mul|float_truediv|float_neg|" r"float_eq|float_ne|float_ge|float_gt|float_le|float_lt|float_abs|" r"ptr_eq|ptr_ne|instance_ptr_eq|instance_ptr_ne|" r"cast_int_to_float|cast_float_to_int|" r"force_token|quasiimmut_field|same_as|virtual_ref_finish|" r"virtual_ref|mark_opaque_ptr|" r"call_may_force|call_assembler|call_loopinvariant|" r"call_release_gil|call_pure|call|" r"new_with_vtable|new_array|newstr|newunicode|new|" r"arraylen_gc|" r"getarrayitem_gc_pure|getarrayitem_gc|setarrayitem_gc|" r"getarrayitem_raw|setarrayitem_raw|getfield_gc_pure|" r"getfield_gc|getinteriorfield_gc|setinteriorfield_gc|" r"getfield_raw|setfield_gc|setfield_raw|" r"strgetitem|strsetitem|strlen|copystrcontent|" r"unicodegetitem|unicodesetitem|unicodelen|" r"guard_true|guard_false|guard_value|guard_isnull|" r"guard_nonnull_class|guard_nonnull|guard_class|guard_no_overflow|" r"guard_not_forced|guard_no_exception|guard_not_invalidated)", Name.Builtin), include("extra-stuff"), ], "jit-backend-counts": [ (r"\[\w+\] jit-backend-counts}$", Keyword, "#pop"), (r":", Punctuation), (r"\d+", Number), include("extra-stuff"), ], "extra-stuff": [ (r"\s+", Whitespace), (r"#.*?$", Comment), ], } pygments-2.11.2/pygments/lexers/textedit.py0000644000175000017500000001670014165547207020716 0ustar carstencarsten""" pygments.lexers.textedit ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for languages related to text processing. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from bisect import bisect from pygments.lexer import RegexLexer, bygroups, default, include, this, using from pygments.lexers.python import PythonLexer from pygments.token import Comment, Error, Keyword, Name, Number, Operator, \ Punctuation, String, Text, Whitespace __all__ = ['AwkLexer', 'SedLexer', 'VimLexer'] class AwkLexer(RegexLexer): """ For Awk scripts. .. 
versionadded:: 1.5 """ name = 'Awk' aliases = ['awk', 'gawk', 'mawk', 'nawk'] filenames = ['*.awk'] mimetypes = ['application/x-awk'] tokens = { 'commentsandwhitespace': [ (r'\s+', Text), (r'#.*$', Comment.Single) ], 'slashstartsregex': [ include('commentsandwhitespace'), (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/' r'\B', String.Regex, '#pop'), (r'(?=/)', Text, ('#pop', 'badregex')), default('#pop') ], 'badregex': [ (r'\n', Text, '#pop') ], 'root': [ (r'^(?=\s|/)', Text, 'slashstartsregex'), include('commentsandwhitespace'), (r'\+\+|--|\|\||&&|in\b|\$|!?~|' r'(\*\*|[-<>+*%\^/!=|])=?', Operator, 'slashstartsregex'), (r'[{(\[;,]', Punctuation, 'slashstartsregex'), (r'[})\].]', Punctuation), (r'(break|continue|do|while|exit|for|if|else|' r'return)\b', Keyword, 'slashstartsregex'), (r'function\b', Keyword.Declaration, 'slashstartsregex'), (r'(atan2|cos|exp|int|log|rand|sin|sqrt|srand|gensub|gsub|index|' r'length|match|split|sprintf|sub|substr|tolower|toupper|close|' r'fflush|getline|next|nextfile|print|printf|strftime|systime|' r'delete|system)\b', Keyword.Reserved), (r'(ARGC|ARGIND|ARGV|BEGIN|CONVFMT|ENVIRON|END|ERRNO|FIELDWIDTHS|' r'FILENAME|FNR|FS|IGNORECASE|NF|NR|OFMT|OFS|ORFS|RLENGTH|RS|' r'RSTART|RT|SUBSEP)\b', Name.Builtin), (r'[$a-zA-Z_]\w*', Name.Other), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), ] } class SedLexer(RegexLexer): """ Lexer for Sed script files. """ name = 'Sed' aliases = ['sed', 'gsed', 'ssed'] filenames = ['*.sed', '*.[gs]sed'] mimetypes = ['text/x-sed'] flags = re.MULTILINE # Match the contents within delimeters such as // _inside_delims = r'((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)' tokens = { 'root': [ (r'\s+', Whitespace), (r'#.*$', Comment.Single), (r'[0-9]+', Number.Integer), (r'\$', Operator), (r'[{};,!]', Punctuation), (r'[dDFgGhHlnNpPqQxz=]', Keyword), (r'([berRtTvwW:])([^;\n]*)', bygroups(Keyword, String.Single)), (r'([aci])((?:.*?\\\n)*(?:.*?[^\\]$))', bygroups(Keyword, String.Double)), (r'([qQ])([0-9]*)', bygroups(Keyword, Number.Integer)), (r'(/)' + _inside_delims + r'(/)', bygroups(Punctuation, String.Regex, Punctuation)), (r'(\\(.))' + _inside_delims + r'(\2)', bygroups(Punctuation, None, String.Regex, Punctuation)), (r'(y)(.)' + _inside_delims + r'(\2)' + _inside_delims + r'(\2)', bygroups(Keyword, Punctuation, String.Single, Punctuation, String.Single, Punctuation)), (r'(s)(.)' + _inside_delims + r'(\2)' + _inside_delims + r'(\2)((?:[gpeIiMm]|[0-9])*)', bygroups(Keyword, Punctuation, String.Regex, Punctuation, String.Single, Punctuation, Keyword)) ] } class VimLexer(RegexLexer): """ Lexer for VimL script files. .. versionadded:: 0.8 """ name = 'VimL' aliases = ['vim'] filenames = ['*.vim', '.vimrc', '.exrc', '.gvimrc', '_vimrc', '_exrc', '_gvimrc', 'vimrc', 'gvimrc'] mimetypes = ['text/x-vim'] flags = re.MULTILINE _python = r'py(?:t(?:h(?:o(?:n)?)?)?)?' 
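# The nested optional groups accept every abbreviation Vim allows for
# the command name: 'py', 'pyt', 'pyth', 'pytho' and 'python'.  The
# first two 'root' rules below interpolate this pattern to catch both
# heredoc-style ':py << EOF ... EOF' blocks and single-line ':py ...'
# statements, handing the Python payload to PythonLexer.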
tokens = { 'root': [ (r'^([ \t:]*)(' + _python + r')([ \t]*)(<<)([ \t]*)(.*)((?:\n|.)*)(\6)', bygroups(using(this), Keyword, Text, Operator, Text, Text, using(PythonLexer), Text)), (r'^([ \t:]*)(' + _python + r')([ \t])(.*)', bygroups(using(this), Keyword, Text, using(PythonLexer))), (r'^\s*".*', Comment), (r'[ \t]+', Text), # TODO: regexes can have other delims (r'/[^/\\\n]*(?:\\[\s\S][^/\\\n]*)*/', String.Regex), (r'"[^"\\\n]*(?:\\[\s\S][^"\\\n]*)*"', String.Double), (r"'[^\n']*(?:''[^\n']*)*'", String.Single), # Who decided that doublequote was a good comment character?? (r'(?<=\s)"[^\-:.%#=*].*', Comment), (r'-?\d+', Number), (r'#[0-9a-f]{6}', Number.Hex), (r'^:', Punctuation), (r'[()<>+=!|,~-]', Punctuation), # Inexact list. Looks decent. (r'\b(let|if|else|endif|elseif|fun|function|endfunction)\b', Keyword), (r'\b(NONE|bold|italic|underline|dark|light)\b', Name.Builtin), (r'\b\w+\b', Name.Other), # These are postprocessed below (r'.', Text), ], } def __init__(self, **options): from pygments.lexers._vim_builtins import auto, command, option self._cmd = command self._opt = option self._aut = auto RegexLexer.__init__(self, **options) def is_in(self, w, mapping): r""" It's kind of difficult to decide if something might be a keyword in VimL because it allows you to abbreviate them. In fact, 'ab[breviate]' is a good example. :ab, :abbre, or :abbreviate are valid ways to call it so rather than making really awful regexps like:: \bab(?:b(?:r(?:e(?:v(?:i(?:a(?:t(?:e)?)?)?)?)?)?)?)?\b we match `\b\w+\b` and then call is_in() on those tokens. See `scripts/get_vimkw.py` for how the lists are extracted. """ p = bisect(mapping, (w,)) if p > 0: if mapping[p-1][0] == w[:len(mapping[p-1][0])] and \ mapping[p-1][1][:len(w)] == w: return True if p < len(mapping): return mapping[p][0] == w[:len(mapping[p][0])] and \ mapping[p][1][:len(w)] == w return False def get_tokens_unprocessed(self, text): # TODO: builtins are only subsequent tokens on lines # and 'keywords' only happen at the beginning except # for :au ones for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text): if token is Name.Other: if self.is_in(value, self._cmd): yield index, Keyword, value elif self.is_in(value, self._opt) or \ self.is_in(value, self._aut): yield index, Name.Builtin, value else: yield index, Text, value else: yield index, token, value pygments-2.11.2/pygments/lexers/boa.py0000644000175000017500000000757314165547207017635 0ustar carstencarsten""" pygments.lexers.boa ~~~~~~~~~~~~~~~~~~~ Lexers for the Boa language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, words from pygments.token import String, Comment, Keyword, Name, Number, Text, \ Operator, Punctuation, Whitespace __all__ = ['BoaLexer'] line_re = re.compile('.*?\n') class BoaLexer(RegexLexer): """ Lexer for the `Boa `_ language. .. 
versionadded:: 2.4 """ name = 'Boa' aliases = ['boa'] filenames = ['*.boa'] reserved = words( ('input', 'output', 'of', 'weight', 'before', 'after', 'stop', 'ifall', 'foreach', 'exists', 'function', 'break', 'switch', 'case', 'visitor', 'default', 'return', 'visit', 'while', 'if', 'else'), suffix=r'\b', prefix=r'\b') keywords = words( ('bottom', 'collection', 'maximum', 'mean', 'minimum', 'set', 'sum', 'top', 'string', 'int', 'bool', 'float', 'time', 'false', 'true', 'array', 'map', 'stack', 'enum', 'type'), suffix=r'\b', prefix=r'\b') classes = words( ('Project', 'ForgeKind', 'CodeRepository', 'Revision', 'RepositoryKind', 'ChangedFile', 'FileKind', 'ASTRoot', 'Namespace', 'Declaration', 'Type', 'Method', 'Variable', 'Statement', 'Expression', 'Modifier', 'StatementKind', 'ExpressionKind', 'ModifierKind', 'Visibility', 'TypeKind', 'Person', 'ChangeKind'), suffix=r'\b', prefix=r'\b') operators = ('->', ':=', ':', '=', '<<', '!', '++', '||', '&&', '+', '-', '*', ">", "<") string_sep = ('`', '\"') built_in_functions = words( ( # Array functions 'new', 'sort', # Date & Time functions 'yearof', 'dayofyear', 'hourof', 'minuteof', 'secondof', 'now', 'addday', 'addmonth', 'addweek', 'addyear', 'dayofmonth', 'dayofweek', 'dayofyear', 'formattime', 'trunctoday', 'trunctohour', 'trunctominute', 'trunctomonth', 'trunctosecond', 'trunctoyear', # Map functions 'clear', 'haskey', 'keys', 'lookup', 'remove', 'values', # Math functions 'abs', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'cos', 'cosh', 'exp', 'floor', 'highbit', 'isfinite', 'isinf', 'isnan', 'isnormal', 'log', 'log10', 'max', 'min', 'nrand', 'pow', 'rand', 'round', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc', # Other functions 'def', 'hash', 'len', # Set functions 'add', 'contains', 'remove', # String functions 'format', 'lowercase', 'match', 'matchposns', 'matchstrs', 'regex', 'split', 'splitall', 'splitn', 'strfind', 'strreplace', 'strrfind', 'substring', 'trim', 'uppercase', # Type Conversion functions 'bool', 'float', 'int', 'string', 'time', # Domain-Specific functions 'getast', 'getsnapshot', 'hasfiletype', 'isfixingrevision', 'iskind', 'isliteral', ), prefix=r'\b', suffix=r'\(') tokens = { 'root': [ (r'#.*?$', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (reserved, Keyword.Reserved), (built_in_functions, Name.Function), (keywords, Keyword.Type), (classes, Name.Classes), (words(operators), Operator), (r'[][(),;{}\\.]', Punctuation), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r"`(\\\\|\\[^\\]|[^`\\])*`", String.Backtick), (words(string_sep), String.Delimiter), (r'[a-zA-Z_]+', Name.Variable), (r'[0-9]+', Number.Integer), (r'\s+', Whitespace), # Whitespace ] } pygments-2.11.2/pygments/lexers/scripting.py0000644000175000017500000021064014165547207021065 0ustar carstencarsten""" pygments.lexers.scripting ~~~~~~~~~~~~~~~~~~~~~~~~~ Lexer for scripting and embedded languages. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, default, combined, \ words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Error, Whitespace, Other from pygments.util import get_bool_opt, get_list_opt __all__ = ['LuaLexer', 'MoonScriptLexer', 'ChaiscriptLexer', 'LSLLexer', 'AppleScriptLexer', 'RexxLexer', 'MOOCodeLexer', 'HybrisLexer', 'EasytrieveLexer', 'JclLexer', 'MiniScriptLexer'] class LuaLexer(RegexLexer): """ For `Lua `_ source code. 
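    A short usage sketch (illustrative only; the token stream is abridged
    and the exact token types shown are not normative):

    .. sourcecode:: pycon

        >>> from pygments.lexers import LuaLexer
        >>> for token, value in LuaLexer().get_tokens('local x = 1'):  # doctest: +SKIP
        ...     print(token, repr(value))
        Token.Keyword.Declaration 'local'
        Token.Text ' '
        Token.Name 'x'
        ...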
Additional options accepted: `func_name_highlighting` If given and ``True``, highlight builtin function names (default: ``True``). `disabled_modules` If given, must be a list of module names whose function names should not be highlighted. By default all modules are highlighted. To get a list of allowed modules have a look into the `_lua_builtins` module: .. sourcecode:: pycon >>> from pygments.lexers._lua_builtins import MODULES >>> MODULES.keys() ['string', 'coroutine', 'modules', 'io', 'basic', ...] """ name = 'Lua' aliases = ['lua'] filenames = ['*.lua', '*.wlua'] mimetypes = ['text/x-lua', 'application/x-lua'] _comment_multiline = r'(?:--\[(?P=*)\[[\w\W]*?\](?P=level)\])' _comment_single = r'(?:--.*$)' _space = r'(?:\s+)' _s = r'(?:%s|%s|%s)' % (_comment_multiline, _comment_single, _space) _name = r'(?:[^\W\d]\w*)' tokens = { 'root': [ # Lua allows a file to start with a shebang. (r'#!.*', Comment.Preproc), default('base'), ], 'ws': [ (_comment_multiline, Comment.Multiline), (_comment_single, Comment.Single), (_space, Text), ], 'base': [ include('ws'), (r'(?i)0x[\da-f]*(\.[\da-f]*)?(p[+-]?\d+)?', Number.Hex), (r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+e[+-]?\d+', Number.Float), (r'\d+', Number.Integer), # multiline strings (r'(?s)\[(=*)\[.*?\]\1\]', String), (r'::', Punctuation, 'label'), (r'\.{3}', Punctuation), (r'[=<>|~&+\-*/%#^]+|\.\.', Operator), (r'[\[\]{}().,:;]', Punctuation), (r'(and|or|not)\b', Operator.Word), ('(break|do|else|elseif|end|for|if|in|repeat|return|then|until|' r'while)\b', Keyword.Reserved), (r'goto\b', Keyword.Reserved, 'goto'), (r'(local)\b', Keyword.Declaration), (r'(true|false|nil)\b', Keyword.Constant), (r'(function)\b', Keyword.Reserved, 'funcname'), (r'[A-Za-z_]\w*(\.[A-Za-z_]\w*)?', Name), ("'", String.Single, combined('stringescape', 'sqs')), ('"', String.Double, combined('stringescape', 'dqs')) ], 'funcname': [ include('ws'), (r'[.:]', Punctuation), (r'%s(?=%s*[.:])' % (_name, _s), Name.Class), (_name, Name.Function, '#pop'), # inline function (r'\(', Punctuation, '#pop'), ], 'goto': [ include('ws'), (_name, Name.Label, '#pop'), ], 'label': [ include('ws'), (r'::', Punctuation, '#pop'), (_name, Name.Label), ], 'stringescape': [ (r'\\([abfnrtv\\"\']|[\r\n]{1,2}|z\s*|x[0-9a-fA-F]{2}|\d{1,3}|' r'u\{[0-9a-fA-F]+\})', String.Escape), ], 'sqs': [ (r"'", String.Single, '#pop'), (r"[^\\']+", String.Single), ], 'dqs': [ (r'"', String.Double, '#pop'), (r'[^\\"]+', String.Double), ] } def __init__(self, **options): self.func_name_highlighting = get_bool_opt( options, 'func_name_highlighting', True) self.disabled_modules = get_list_opt(options, 'disabled_modules', []) self._functions = set() if self.func_name_highlighting: from pygments.lexers._lua_builtins import MODULES for mod, func in MODULES.items(): if mod not in self.disabled_modules: self._functions.update(func) RegexLexer.__init__(self, **options) def get_tokens_unprocessed(self, text): for index, token, value in \ RegexLexer.get_tokens_unprocessed(self, text): if token is Name: if value in self._functions: yield index, Name.Builtin, value continue elif '.' in value: a, b = value.split('.') yield index, Name, a yield index + len(a), Punctuation, '.' yield index + len(a) + 1, Name, b continue yield index, token, value class MoonScriptLexer(LuaLexer): """ For `MoonScript `_ source code. .. 
versionadded:: 1.5 """ name = 'MoonScript' aliases = ['moonscript', 'moon'] filenames = ['*.moon'] mimetypes = ['text/x-moonscript', 'application/x-moonscript'] tokens = { 'root': [ (r'#!(.*?)$', Comment.Preproc), default('base'), ], 'base': [ ('--.*$', Comment.Single), (r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number.Float), (r'(?i)\d+e[+-]?\d+', Number.Float), (r'(?i)0x[0-9a-f]*', Number.Hex), (r'\d+', Number.Integer), (r'\n', Whitespace), (r'[^\S\n]+', Text), (r'(?s)\[(=*)\[.*?\]\1\]', String), (r'(->|=>)', Name.Function), (r':[a-zA-Z_]\w*', Name.Variable), (r'(==|!=|~=|<=|>=|\.\.\.|\.\.|[=+\-*/%^<>#!.\\:])', Operator), (r'[;,]', Punctuation), (r'[\[\]{}()]', Keyword.Type), (r'[a-zA-Z_]\w*:', Name.Variable), (words(( 'class', 'extends', 'if', 'then', 'super', 'do', 'with', 'import', 'export', 'while', 'elseif', 'return', 'for', 'in', 'from', 'when', 'using', 'else', 'and', 'or', 'not', 'switch', 'break'), suffix=r'\b'), Keyword), (r'(true|false|nil)\b', Keyword.Constant), (r'(and|or|not)\b', Operator.Word), (r'(self)\b', Name.Builtin.Pseudo), (r'@@?([a-zA-Z_]\w*)?', Name.Variable.Class), (r'[A-Z]\w*', Name.Class), # proper name (r'[A-Za-z_]\w*(\.[A-Za-z_]\w*)?', Name), ("'", String.Single, combined('stringescape', 'sqs')), ('"', String.Double, combined('stringescape', 'dqs')) ], 'stringescape': [ (r'''\\([abfnrtv\\"']|\d{1,3})''', String.Escape) ], 'sqs': [ ("'", String.Single, '#pop'), ("[^']+", String) ], 'dqs': [ ('"', String.Double, '#pop'), ('[^"]+', String) ] } def get_tokens_unprocessed(self, text): # set . as Operator instead of Punctuation for index, token, value in LuaLexer.get_tokens_unprocessed(self, text): if token == Punctuation and value == ".": token = Operator yield index, token, value class ChaiscriptLexer(RegexLexer): """ For `ChaiScript `_ source code. .. versionadded:: 2.0 """ name = 'ChaiScript' aliases = ['chaiscript', 'chai'] filenames = ['*.chai'] mimetypes = ['text/x-chaiscript', 'application/x-chaiscript'] flags = re.DOTALL | re.MULTILINE tokens = { 'commentsandwhitespace': [ (r'\s+', Text), (r'//.*?\n', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'^\#.*?\n', Comment.Single) ], 'slashstartsregex': [ include('commentsandwhitespace'), (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/' r'([gim]+\b|\B)', String.Regex, '#pop'), (r'(?=/)', Text, ('#pop', 'badregex')), default('#pop') ], 'badregex': [ (r'\n', Text, '#pop') ], 'root': [ include('commentsandwhitespace'), (r'\n', Text), (r'[^\S\n]+', Text), (r'\+\+|--|~|&&|\?|:|\|\||\\(?=\n)|\.\.' r'(<<|>>>?|==?|!=?|[-<>+*%&|^/])=?', Operator, 'slashstartsregex'), (r'[{(\[;,]', Punctuation, 'slashstartsregex'), (r'[})\].]', Punctuation), (r'[=+\-*/]', Operator), (r'(for|in|while|do|break|return|continue|if|else|' r'throw|try|catch' r')\b', Keyword, 'slashstartsregex'), (r'(var)\b', Keyword.Declaration, 'slashstartsregex'), (r'(attr|def|fun)\b', Keyword.Reserved), (r'(true|false)\b', Keyword.Constant), (r'(eval|throw)\b', Name.Builtin), (r'`\S+`', Name.Builtin), (r'[$a-zA-Z_]\w*', Name.Other), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-fA-F]+', Number.Hex), (r'[0-9]+', Number.Integer), (r'"', String.Double, 'dqstring'), (r"'(\\\\|\\[^\\]|[^'\\])*'", String.Single), ], 'dqstring': [ (r'\$\{[^"}]+?\}', String.Interpol), (r'\$', String.Double), (r'\\\\', String.Double), (r'\\"', String.Double), (r'[^\\"$]+', String.Double), (r'"', String.Double, '#pop'), ], } class LSLLexer(RegexLexer): """ For Second Life's Linden Scripting Language source code. .. 
versionadded:: 2.0 """ name = 'LSL' aliases = ['lsl'] filenames = ['*.lsl'] mimetypes = ['text/x-lsl'] flags = re.MULTILINE lsl_keywords = r'\b(?:do|else|for|if|jump|return|while)\b' lsl_types = r'\b(?:float|integer|key|list|quaternion|rotation|string|vector)\b' lsl_states = r'\b(?:(?:state)\s+\w+|default)\b' lsl_events = r'\b(?:state_(?:entry|exit)|touch(?:_(?:start|end))?|(?:land_)?collision(?:_(?:start|end))?|timer|listen|(?:no_)?sensor|control|(?:not_)?at_(?:rot_)?target|money|email|run_time_permissions|changed|attach|dataserver|moving_(?:start|end)|link_message|(?:on|object)_rez|remote_data|http_re(?:sponse|quest)|path_update|transaction_result)\b' lsl_functions_builtin = r'\b(?:ll(?:ReturnObjectsBy(?:ID|Owner)|Json(?:2List|[GS]etValue|ValueType)|Sin|Cos|Tan|Atan2|Sqrt|Pow|Abs|Fabs|Frand|Floor|Ceil|Round|Vec(?:Mag|Norm|Dist)|Rot(?:Between|2(?:Euler|Fwd|Left|Up))|(?:Euler|Axes)2Rot|Whisper|(?:Region|Owner)?Say|Shout|Listen(?:Control|Remove)?|Sensor(?:Repeat|Remove)?|Detected(?:Name|Key|Owner|Type|Pos|Vel|Grab|Rot|Group|LinkNumber)|Die|Ground|Wind|(?:[GS]et)(?:AnimationOverride|MemoryLimit|PrimMediaParams|ParcelMusicURL|Object(?:Desc|Name)|PhysicsMaterial|Status|Scale|Color|Alpha|Texture|Pos|Rot|Force|Torque)|ResetAnimationOverride|(?:Scale|Offset|Rotate)Texture|(?:Rot)?Target(?:Remove)?|(?:Stop)?MoveToTarget|Apply(?:Rotational)?Impulse|Set(?:KeyframedMotion|ContentType|RegionPos|(?:Angular)?Velocity|Buoyancy|HoverHeight|ForceAndTorque|TimerEvent|ScriptState|Damage|TextureAnim|Sound(?:Queueing|Radius)|Vehicle(?:Type|(?:Float|Vector|Rotation)Param)|(?:Touch|Sit)?Text|Camera(?:Eye|At)Offset|PrimitiveParams|ClickAction|Link(?:Alpha|Color|PrimitiveParams(?:Fast)?|Texture(?:Anim)?|Camera|Media)|RemoteScriptAccessPin|PayPrice|LocalRot)|ScaleByFactor|Get(?:(?:Max|Min)ScaleFactor|ClosestNavPoint|StaticPath|SimStats|Env|PrimitiveParams|Link(?:PrimitiveParams|Number(?:OfSides)?|Key|Name|Media)|HTTPHeader|FreeURLs|Object(?:Details|PermMask|PrimCount)|Parcel(?:MaxPrims|Details|Prim(?:Count|Owners))|Attached|(?:SPMax|Free|Used)Memory|Region(?:Name|TimeDilation|FPS|Corner|AgentCount)|Root(?:Position|Rotation)|UnixTime|(?:Parcel|Region)Flags|(?:Wall|GMT)clock|SimulatorHostname|BoundingBox|GeometricCenter|Creator|NumberOf(?:Prims|NotecardLines|Sides)|Animation(?:List)?|(?:Camera|Local)(?:Pos|Rot)|Vel|Accel|Omega|Time(?:stamp|OfDay)|(?:Object|CenterOf)?Mass|MassMKS|Energy|Owner|(?:Owner)?Key|SunDirection|Texture(?:Offset|Scale|Rot)|Inventory(?:Number|Name|Key|Type|Creator|PermMask)|Permissions(?:Key)?|StartParameter|List(?:Length|EntryType)|Date|Agent(?:Size|Info|Language|List)|LandOwnerAt|NotecardLine|Script(?:Name|State))|(?:Get|Reset|GetAndReset)Time|PlaySound(?:Slave)?|LoopSound(?:Master|Slave)?|(?:Trigger|Stop|Preload)Sound|(?:(?:Get|Delete)Sub|Insert)String|To(?:Upper|Lower)|Give(?:InventoryList|Money)|RezObject|(?:Stop)?LookAt|Sleep|CollisionFilter|(?:Take|Release)Controls|DetachFromAvatar|AttachToAvatar(?:Temp)?|InstantMessage|(?:GetNext)?Email|StopHover|MinEventDelay|RotLookAt|String(?:Length|Trim)|(?:Start|Stop)Animation|TargetOmega|RequestPermissions|(?:Create|Break)Link|BreakAllLinks|(?:Give|Remove)Inventory|Water|PassTouches|Request(?:Agent|Inventory)Data|TeleportAgent(?:Home|GlobalCoords)?|ModifyLand|CollisionSound|ResetScript|MessageLinked|PushObject|PassCollisions|AxisAngle2Rot|Rot2(?:Axis|Angle)|A(?:cos|sin)|AngleBetween|AllowInventoryDrop|SubStringIndex|List2(?:CSV|Integer|Json|Float|String|Key|Vector|Rot|List(?:Strided)?)|DeleteSubList|List(?:Statistics|Sort|Randomize|(?:Insert|Find|Re
place)List)|EdgeOfWorld|AdjustSoundVolume|Key2Name|TriggerSoundLimited|EjectFromLand|(?:CSV|ParseString)2List|OverMyLand|SameGroup|UnSit|Ground(?:Slope|Normal|Contour)|GroundRepel|(?:Set|Remove)VehicleFlags|(?:AvatarOn)?(?:Link)?SitTarget|Script(?:Danger|Profiler)|Dialog|VolumeDetect|ResetOtherScript|RemoteLoadScriptPin|(?:Open|Close)RemoteDataChannel|SendRemoteData|RemoteDataReply|(?:Integer|String)ToBase64|XorBase64|Log(?:10)?|Base64To(?:String|Integer)|ParseStringKeepNulls|RezAtRoot|RequestSimulatorData|ForceMouselook|(?:Load|Release|(?:E|Une)scape)URL|ParcelMedia(?:CommandList|Query)|ModPow|MapDestination|(?:RemoveFrom|AddTo|Reset)Land(?:Pass|Ban)List|(?:Set|Clear)CameraParams|HTTP(?:Request|Response)|TextBox|DetectedTouch(?:UV|Face|Pos|(?:N|Bin)ormal|ST)|(?:MD5|SHA1|DumpList2)String|Request(?:Secure)?URL|Clear(?:Prim|Link)Media|(?:Link)?ParticleSystem|(?:Get|Request)(?:Username|DisplayName)|RegionSayTo|CastRay|GenerateKey|TransferLindenDollars|ManageEstateAccess|(?:Create|Delete)Character|ExecCharacterCmd|Evade|FleeFrom|NavigateTo|PatrolPoints|Pursue|UpdateCharacter|WanderWithin))\b' lsl_constants_float = r'\b(?:DEG_TO_RAD|PI(?:_BY_TWO)?|RAD_TO_DEG|SQRT2|TWO_PI)\b' lsl_constants_integer = r'\b(?:JSON_APPEND|STATUS_(?:PHYSICS|ROTATE_[XYZ]|PHANTOM|SANDBOX|BLOCK_GRAB(?:_OBJECT)?|(?:DIE|RETURN)_AT_EDGE|CAST_SHADOWS|OK|MALFORMED_PARAMS|TYPE_MISMATCH|BOUNDS_ERROR|NOT_(?:FOUND|SUPPORTED)|INTERNAL_ERROR|WHITELIST_FAILED)|AGENT(?:_(?:BY_(?:LEGACY_|USER)NAME|FLYING|ATTACHMENTS|SCRIPTED|MOUSELOOK|SITTING|ON_OBJECT|AWAY|WALKING|IN_AIR|TYPING|CROUCHING|BUSY|ALWAYS_RUN|AUTOPILOT|LIST_(?:PARCEL(?:_OWNER)?|REGION)))?|CAMERA_(?:PITCH|DISTANCE|BEHINDNESS_(?:ANGLE|LAG)|(?:FOCUS|POSITION)(?:_(?:THRESHOLD|LOCKED|LAG))?|FOCUS_OFFSET|ACTIVE)|ANIM_ON|LOOP|REVERSE|PING_PONG|SMOOTH|ROTATE|SCALE|ALL_SIDES|LINK_(?:ROOT|SET|ALL_(?:OTHERS|CHILDREN)|THIS)|ACTIVE|PASSIVE|SCRIPTED|CONTROL_(?:FWD|BACK|(?:ROT_)?(?:LEFT|RIGHT)|UP|DOWN|(?:ML_)?LBUTTON)|PERMISSION_(?:RETURN_OBJECTS|DEBIT|OVERRIDE_ANIMATIONS|SILENT_ESTATE_MANAGEMENT|TAKE_CONTROLS|TRIGGER_ANIMATION|ATTACH|CHANGE_LINKS|(?:CONTROL|TRACK)_CAMERA|TELEPORT)|INVENTORY_(?:TEXTURE|SOUND|OBJECT|SCRIPT|LANDMARK|CLOTHING|NOTECARD|BODYPART|ANIMATION|GESTURE|ALL|NONE)|CHANGED_(?:INVENTORY|COLOR|SHAPE|SCALE|TEXTURE|LINK|ALLOWED_DROP|OWNER|REGION(?:_START)?|TELEPORT|MEDIA)|OBJECT_(?:(?:PHYSICS|SERVER|STREAMING)_COST|UNKNOWN_DETAIL|CHARACTER_TIME|PHANTOM|PHYSICS|TEMP_ON_REZ|NAME|DESC|POS|PRIM_EQUIVALENCE|RETURN_(?:PARCEL(?:_OWNER)?|REGION)|ROO?T|VELOCITY|OWNER|GROUP|CREATOR|ATTACHED_POINT|RENDER_WEIGHT|PATHFINDING_TYPE|(?:RUNNING|TOTAL)_SCRIPT_COUNT|SCRIPT_(?:MEMORY|TIME))|TYPE_(?:INTEGER|FLOAT|STRING|KEY|VECTOR|ROTATION|INVALID)|(?:DEBUG|PUBLIC)_CHANNEL|ATTACH_(?:AVATAR_CENTER|CHEST|HEAD|BACK|PELVIS|MOUTH|CHIN|NECK|NOSE|BELLY|[LR](?:SHOULDER|HAND|FOOT|EAR|EYE|[UL](?:ARM|LEG)|HIP)|(?:LEFT|RIGHT)_PEC|HUD_(?:CENTER_[12]|TOP_(?:RIGHT|CENTER|LEFT)|BOTTOM(?:_(?:RIGHT|LEFT))?))|LAND_(?:LEVEL|RAISE|LOWER|SMOOTH|NOISE|REVERT)|DATA_(?:ONLINE|NAME|BORN|SIM_(?:POS|STATUS|RATING)|PAYINFO)|PAYMENT_INFO_(?:ON_FILE|USED)|REMOTE_DATA_(?:CHANNEL|REQUEST|REPLY)|PSYS_(?:PART_(?:BF_(?:ZERO|ONE(?:_MINUS_(?:DEST_COLOR|SOURCE_(ALPHA|COLOR)))?|DEST_COLOR|SOURCE_(ALPHA|COLOR))|BLEND_FUNC_(DEST|SOURCE)|FLAGS|(?:START|END)_(?:COLOR|ALPHA|SCALE|GLOW)|MAX_AGE|(?:RIBBON|WIND|INTERP_(?:COLOR|SCALE)|BOUNCE|FOLLOW_(?:SRC|VELOCITY)|TARGET_(?:POS|LINEAR)|EMISSIVE)_MASK)|SRC_(?:MAX_AGE|PATTERN|ANGLE_(?:BEGIN|END)|BURST_(?:RATE|PART_COUNT|RADIUS|SPEED_(?:MIN|MAX))|ACCEL|TEXTURE|TARGET_KEY|OMEGA|PATTERN_(?:DROP
|EXPLODE|ANGLE(?:_CONE(?:_EMPTY)?)?)))|VEHICLE_(?:REFERENCE_FRAME|TYPE_(?:NONE|SLED|CAR|BOAT|AIRPLANE|BALLOON)|(?:LINEAR|ANGULAR)_(?:FRICTION_TIMESCALE|MOTOR_DIRECTION)|LINEAR_MOTOR_OFFSET|HOVER_(?:HEIGHT|EFFICIENCY|TIMESCALE)|BUOYANCY|(?:LINEAR|ANGULAR)_(?:DEFLECTION_(?:EFFICIENCY|TIMESCALE)|MOTOR_(?:DECAY_)?TIMESCALE)|VERTICAL_ATTRACTION_(?:EFFICIENCY|TIMESCALE)|BANKING_(?:EFFICIENCY|MIX|TIMESCALE)|FLAG_(?:NO_DEFLECTION_UP|LIMIT_(?:ROLL_ONLY|MOTOR_UP)|HOVER_(?:(?:WATER|TERRAIN|UP)_ONLY|GLOBAL_HEIGHT)|MOUSELOOK_(?:STEER|BANK)|CAMERA_DECOUPLED))|PRIM_(?:TYPE(?:_(?:BOX|CYLINDER|PRISM|SPHERE|TORUS|TUBE|RING|SCULPT))?|HOLE_(?:DEFAULT|CIRCLE|SQUARE|TRIANGLE)|MATERIAL(?:_(?:STONE|METAL|GLASS|WOOD|FLESH|PLASTIC|RUBBER))?|SHINY_(?:NONE|LOW|MEDIUM|HIGH)|BUMP_(?:NONE|BRIGHT|DARK|WOOD|BARK|BRICKS|CHECKER|CONCRETE|TILE|STONE|DISKS|GRAVEL|BLOBS|SIDING|LARGETILE|STUCCO|SUCTION|WEAVE)|TEXGEN_(?:DEFAULT|PLANAR)|SCULPT_(?:TYPE_(?:SPHERE|TORUS|PLANE|CYLINDER|MASK)|FLAG_(?:MIRROR|INVERT))|PHYSICS(?:_(?:SHAPE_(?:CONVEX|NONE|PRIM|TYPE)))?|(?:POS|ROT)_LOCAL|SLICE|TEXT|FLEXIBLE|POINT_LIGHT|TEMP_ON_REZ|PHANTOM|POSITION|SIZE|ROTATION|TEXTURE|NAME|OMEGA|DESC|LINK_TARGET|COLOR|BUMP_SHINY|FULLBRIGHT|TEXGEN|GLOW|MEDIA_(?:ALT_IMAGE_ENABLE|CONTROLS|(?:CURRENT|HOME)_URL|AUTO_(?:LOOP|PLAY|SCALE|ZOOM)|FIRST_CLICK_INTERACT|(?:WIDTH|HEIGHT)_PIXELS|WHITELIST(?:_ENABLE)?|PERMS_(?:INTERACT|CONTROL)|PARAM_MAX|CONTROLS_(?:STANDARD|MINI)|PERM_(?:NONE|OWNER|GROUP|ANYONE)|MAX_(?:URL_LENGTH|WHITELIST_(?:SIZE|COUNT)|(?:WIDTH|HEIGHT)_PIXELS)))|MASK_(?:BASE|OWNER|GROUP|EVERYONE|NEXT)|PERM_(?:TRANSFER|MODIFY|COPY|MOVE|ALL)|PARCEL_(?:MEDIA_COMMAND_(?:STOP|PAUSE|PLAY|LOOP|TEXTURE|URL|TIME|AGENT|UNLOAD|AUTO_ALIGN|TYPE|SIZE|DESC|LOOP_SET)|FLAG_(?:ALLOW_(?:FLY|(?:GROUP_)?SCRIPTS|LANDMARK|TERRAFORM|DAMAGE|CREATE_(?:GROUP_)?OBJECTS)|USE_(?:ACCESS_(?:GROUP|LIST)|BAN_LIST|LAND_PASS_LIST)|LOCAL_SOUND_ONLY|RESTRICT_PUSHOBJECT|ALLOW_(?:GROUP|ALL)_OBJECT_ENTRY)|COUNT_(?:TOTAL|OWNER|GROUP|OTHER|SELECTED|TEMP)|DETAILS_(?:NAME|DESC|OWNER|GROUP|AREA|ID|SEE_AVATARS))|LIST_STAT_(?:MAX|MIN|MEAN|MEDIAN|STD_DEV|SUM(?:_SQUARES)?|NUM_COUNT|GEOMETRIC_MEAN|RANGE)|PAY_(?:HIDE|DEFAULT)|REGION_FLAG_(?:ALLOW_DAMAGE|FIXED_SUN|BLOCK_TERRAFORM|SANDBOX|DISABLE_(?:COLLISIONS|PHYSICS)|BLOCK_FLY|ALLOW_DIRECT_TELEPORT|RESTRICT_PUSHOBJECT)|HTTP_(?:METHOD|MIMETYPE|BODY_(?:MAXLENGTH|TRUNCATED)|CUSTOM_HEADER|PRAGMA_NO_CACHE|VERBOSE_THROTTLE|VERIFY_CERT)|STRING_(?:TRIM(?:_(?:HEAD|TAIL))?)|CLICK_ACTION_(?:NONE|TOUCH|SIT|BUY|PAY|OPEN(?:_MEDIA)?|PLAY|ZOOM)|TOUCH_INVALID_FACE|PROFILE_(?:NONE|SCRIPT_MEMORY)|RC_(?:DATA_FLAGS|DETECT_PHANTOM|GET_(?:LINK_NUM|NORMAL|ROOT_KEY)|MAX_HITS|REJECT_(?:TYPES|AGENTS|(?:NON)?PHYSICAL|LAND))|RCERR_(?:CAST_TIME_EXCEEDED|SIM_PERF_LOW|UNKNOWN)|ESTATE_ACCESS_(?:ALLOWED_(?:AGENT|GROUP)_(?:ADD|REMOVE)|BANNED_AGENT_(?:ADD|REMOVE))|DENSITY|FRICTION|RESTITUTION|GRAVITY_MULTIPLIER|KFM_(?:COMMAND|CMD_(?:PLAY|STOP|PAUSE|SET_MODE)|MODE|FORWARD|LOOP|PING_PONG|REVERSE|DATA|ROTATION|TRANSLATION)|ERR_(?:GENERIC|PARCEL_PERMISSIONS|MALFORMED_PARAMS|RUNTIME_PERMISSIONS|THROTTLED)|CHARACTER_(?:CMD_(?:(?:SMOOTH_)?STOP|JUMP)|DESIRED_(?:TURN_)?SPEED|RADIUS|STAY_WITHIN_PARCEL|LENGTH|ORIENTATION|ACCOUNT_FOR_SKIPPED_FRAMES|AVOIDANCE_MODE|TYPE(?:_(?:[A-D]|NONE))?|MAX_(?:DECEL|TURN_RADIUS|(?:ACCEL|SPEED)))|PURSUIT_(?:OFFSET|FUZZ_FACTOR|GOAL_TOLERANCE|INTERCEPT)|REQUIRE_LINE_OF_SIGHT|FORCE_DIRECT_PATH|VERTICAL|HORIZONTAL|AVOID_(?:CHARACTERS|DYNAMIC_OBSTACLES|NONE)|PU_(?:EVADE_(?:HIDDEN|SPOTTED)|FAILURE_(?:DYNAMIC_PATHFINDING_DISABLED|INVALID_(?:GOAL|START)|NO_(?:NAVMESH|VALID_DES
TINATION)|OTHER|TARGET_GONE|(?:PARCEL_)?UNREACHABLE)|(?:GOAL|SLOWDOWN_DISTANCE)_REACHED)|TRAVERSAL_TYPE(?:_(?:FAST|NONE|SLOW))?|CONTENT_TYPE_(?:ATOM|FORM|HTML|JSON|LLSD|RSS|TEXT|XHTML|XML)|GCNP_(?:RADIUS|STATIC)|(?:PATROL|WANDER)_PAUSE_AT_WAYPOINTS|OPT_(?:AVATAR|CHARACTER|EXCLUSION_VOLUME|LEGACY_LINKSET|MATERIAL_VOLUME|OTHER|STATIC_OBSTACLE|WALKABLE)|SIM_STAT_PCT_CHARS_STEPPED)\b' lsl_constants_integer_boolean = r'\b(?:FALSE|TRUE)\b' lsl_constants_rotation = r'\b(?:ZERO_ROTATION)\b' lsl_constants_string = r'\b(?:EOF|JSON_(?:ARRAY|DELETE|FALSE|INVALID|NULL|NUMBER|OBJECT|STRING|TRUE)|NULL_KEY|TEXTURE_(?:BLANK|DEFAULT|MEDIA|PLYWOOD|TRANSPARENT)|URL_REQUEST_(?:GRANTED|DENIED))\b' lsl_constants_vector = r'\b(?:TOUCH_INVALID_(?:TEXCOORD|VECTOR)|ZERO_VECTOR)\b' lsl_invalid_broken = r'\b(?:LAND_(?:LARGE|MEDIUM|SMALL)_BRUSH)\b' lsl_invalid_deprecated = r'\b(?:ATTACH_[LR]PEC|DATA_RATING|OBJECT_ATTACHMENT_(?:GEOMETRY_BYTES|SURFACE_AREA)|PRIM_(?:CAST_SHADOWS|MATERIAL_LIGHT|TYPE_LEGACY)|PSYS_SRC_(?:INNER|OUTER)ANGLE|VEHICLE_FLAG_NO_FLY_UP|ll(?:Cloud|Make(?:Explosion|Fountain|Smoke|Fire)|RemoteDataSetRegion|Sound(?:Preload)?|XorBase64Strings(?:Correct)?))\b' lsl_invalid_illegal = r'\b(?:event)\b' lsl_invalid_unimplemented = r'\b(?:CHARACTER_(?:MAX_ANGULAR_(?:ACCEL|SPEED)|TURN_SPEED_MULTIPLIER)|PERMISSION_(?:CHANGE_(?:JOINTS|PERMISSIONS)|RELEASE_OWNERSHIP|REMAP_CONTROLS)|PRIM_PHYSICS_MATERIAL|PSYS_SRC_OBJ_REL_MASK|ll(?:CollisionSprite|(?:Stop)?PointAt|(?:(?:Refresh|Set)Prim)URL|(?:Take|Release)Camera|RemoteLoadScript))\b' lsl_reserved_godmode = r'\b(?:ll(?:GodLikeRezObject|Set(?:Inventory|Object)PermMask))\b' lsl_reserved_log = r'\b(?:print)\b' lsl_operators = r'\+\+|\-\-|<<|>>|&&?|\|\|?|\^|~|[!%<>=*+\-/]=?' tokens = { 'root': [ (r'//.*?\n', Comment.Single), (r'/\*', Comment.Multiline, 'comment'), (r'"', String.Double, 'string'), (lsl_keywords, Keyword), (lsl_types, Keyword.Type), (lsl_states, Name.Class), (lsl_events, Name.Builtin), (lsl_functions_builtin, Name.Function), (lsl_constants_float, Keyword.Constant), (lsl_constants_integer, Keyword.Constant), (lsl_constants_integer_boolean, Keyword.Constant), (lsl_constants_rotation, Keyword.Constant), (lsl_constants_string, Keyword.Constant), (lsl_constants_vector, Keyword.Constant), (lsl_invalid_broken, Error), (lsl_invalid_deprecated, Error), (lsl_invalid_illegal, Error), (lsl_invalid_unimplemented, Error), (lsl_reserved_godmode, Keyword.Reserved), (lsl_reserved_log, Keyword.Reserved), (r'\b([a-zA-Z_]\w*)\b', Name.Variable), (r'(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d*', Number.Float), (r'(\d+\.\d*|\.\d+)', Number.Float), (r'0[xX][0-9a-fA-F]+', Number.Hex), (r'\d+', Number.Integer), (lsl_operators, Operator), (r':=?', Error), (r'[,;{}()\[\]]', Punctuation), (r'\n+', Whitespace), (r'\s+', Whitespace) ], 'comment': [ (r'[^*/]+', Comment.Multiline), (r'/\*', Comment.Multiline, '#push'), (r'\*/', Comment.Multiline, '#pop'), (r'[*/]', Comment.Multiline) ], 'string': [ (r'\\([nt"\\])', String.Escape), (r'"', String.Double, '#pop'), (r'\\.', Error), (r'[^"\\]+', String.Double), ] } class AppleScriptLexer(RegexLexer): """ For `AppleScript source code `_, including `AppleScript Studio `_. Contributed by Andreas Amann . .. 
versionadded:: 1.0 """ name = 'AppleScript' aliases = ['applescript'] filenames = ['*.applescript'] flags = re.MULTILINE | re.DOTALL Identifiers = r'[a-zA-Z]\w*' # XXX: use words() for all of these Literals = ('AppleScript', 'current application', 'false', 'linefeed', 'missing value', 'pi', 'quote', 'result', 'return', 'space', 'tab', 'text item delimiters', 'true', 'version') Classes = ('alias ', 'application ', 'boolean ', 'class ', 'constant ', 'date ', 'file ', 'integer ', 'list ', 'number ', 'POSIX file ', 'real ', 'record ', 'reference ', 'RGB color ', 'script ', 'text ', 'unit types', '(?:Unicode )?text', 'string') BuiltIn = ('attachment', 'attribute run', 'character', 'day', 'month', 'paragraph', 'word', 'year') HandlerParams = ('about', 'above', 'against', 'apart from', 'around', 'aside from', 'at', 'below', 'beneath', 'beside', 'between', 'for', 'given', 'instead of', 'on', 'onto', 'out of', 'over', 'since') Commands = ('ASCII (character|number)', 'activate', 'beep', 'choose URL', 'choose application', 'choose color', 'choose file( name)?', 'choose folder', 'choose from list', 'choose remote application', 'clipboard info', 'close( access)?', 'copy', 'count', 'current date', 'delay', 'delete', 'display (alert|dialog)', 'do shell script', 'duplicate', 'exists', 'get eof', 'get volume settings', 'info for', 'launch', 'list (disks|folder)', 'load script', 'log', 'make', 'mount volume', 'new', 'offset', 'open( (for access|location))?', 'path to', 'print', 'quit', 'random number', 'read', 'round', 'run( script)?', 'say', 'scripting components', 'set (eof|the clipboard to|volume)', 'store script', 'summarize', 'system attribute', 'system info', 'the clipboard', 'time to GMT', 'write', 'quoted form') References = ('(in )?back of', '(in )?front of', '[0-9]+(st|nd|rd|th)', 'first', 'second', 'third', 'fourth', 'fifth', 'sixth', 'seventh', 'eighth', 'ninth', 'tenth', 'after', 'back', 'before', 'behind', 'every', 'front', 'index', 'last', 'middle', 'some', 'that', 'through', 'thru', 'where', 'whose') Operators = ("and", "or", "is equal", "equals", "(is )?equal to", "is not", "isn't", "isn't equal( to)?", "is not equal( to)?", "doesn't equal", "does not equal", "(is )?greater than", "comes after", "is not less than or equal( to)?", "isn't less than or equal( to)?", "(is )?less than", "comes before", "is not greater than or equal( to)?", "isn't greater than or equal( to)?", "(is )?greater than or equal( to)?", "is not less than", "isn't less than", "does not come before", "doesn't come before", "(is )?less than or equal( to)?", "is not greater than", "isn't greater than", "does not come after", "doesn't come after", "starts? with", "begins? with", "ends? 
with", "contains?", "does not contain", "doesn't contain", "is in", "is contained by", "is not in", "is not contained by", "isn't contained by", "div", "mod", "not", "(a )?(ref( to)?|reference to)", "is", "does") Control = ('considering', 'else', 'error', 'exit', 'from', 'if', 'ignoring', 'in', 'repeat', 'tell', 'then', 'times', 'to', 'try', 'until', 'using terms from', 'while', 'whith', 'with timeout( of)?', 'with transaction', 'by', 'continue', 'end', 'its?', 'me', 'my', 'return', 'of', 'as') Declarations = ('global', 'local', 'prop(erty)?', 'set', 'get') Reserved = ('but', 'put', 'returning', 'the') StudioClasses = ('action cell', 'alert reply', 'application', 'box', 'browser( cell)?', 'bundle', 'button( cell)?', 'cell', 'clip view', 'color well', 'color-panel', 'combo box( item)?', 'control', 'data( (cell|column|item|row|source))?', 'default entry', 'dialog reply', 'document', 'drag info', 'drawer', 'event', 'font(-panel)?', 'formatter', 'image( (cell|view))?', 'matrix', 'menu( item)?', 'item', 'movie( view)?', 'open-panel', 'outline view', 'panel', 'pasteboard', 'plugin', 'popup button', 'progress indicator', 'responder', 'save-panel', 'scroll view', 'secure text field( cell)?', 'slider', 'sound', 'split view', 'stepper', 'tab view( item)?', 'table( (column|header cell|header view|view))', 'text( (field( cell)?|view))?', 'toolbar( item)?', 'user-defaults', 'view', 'window') StudioEvents = ('accept outline drop', 'accept table drop', 'action', 'activated', 'alert ended', 'awake from nib', 'became key', 'became main', 'begin editing', 'bounds changed', 'cell value', 'cell value changed', 'change cell value', 'change item value', 'changed', 'child of item', 'choose menu item', 'clicked', 'clicked toolbar item', 'closed', 'column clicked', 'column moved', 'column resized', 'conclude drop', 'data representation', 'deminiaturized', 'dialog ended', 'document nib name', 'double clicked', 'drag( (entered|exited|updated))?', 'drop', 'end editing', 'exposed', 'idle', 'item expandable', 'item value', 'item value changed', 'items changed', 'keyboard down', 'keyboard up', 'launched', 'load data representation', 'miniaturized', 'mouse down', 'mouse dragged', 'mouse entered', 'mouse exited', 'mouse moved', 'mouse up', 'moved', 'number of browser rows', 'number of items', 'number of rows', 'open untitled', 'opened', 'panel ended', 'parameters updated', 'plugin loaded', 'prepare drop', 'prepare outline drag', 'prepare outline drop', 'prepare table drag', 'prepare table drop', 'read from file', 'resigned active', 'resigned key', 'resigned main', 'resized( sub views)?', 'right mouse down', 'right mouse dragged', 'right mouse up', 'rows changed', 'scroll wheel', 'selected tab view item', 'selection changed', 'selection changing', 'should begin editing', 'should close', 'should collapse item', 'should end editing', 'should expand item', 'should open( untitled)?', 'should quit( after last window closed)?', 'should select column', 'should select item', 'should select row', 'should select tab view item', 'should selection change', 'should zoom', 'shown', 'update menu item', 'update parameters', 'update toolbar item', 'was hidden', 'was miniaturized', 'will become active', 'will close', 'will dismiss', 'will display browser cell', 'will display cell', 'will display item cell', 'will display outline cell', 'will finish launching', 'will hide', 'will miniaturize', 'will move', 'will open', 'will pop up', 'will quit', 'will resign active', 'will resize( sub views)?', 'will select tab view item', 'will show', 
'will zoom', 'write to file', 'zoomed') StudioCommands = ('animate', 'append', 'call method', 'center', 'close drawer', 'close panel', 'display', 'display alert', 'display dialog', 'display panel', 'go', 'hide', 'highlight', 'increment', 'item for', 'load image', 'load movie', 'load nib', 'load panel', 'load sound', 'localized string', 'lock focus', 'log', 'open drawer', 'path for', 'pause', 'perform action', 'play', 'register', 'resume', 'scroll', 'select( all)?', 'show', 'size to fit', 'start', 'step back', 'step forward', 'stop', 'synchronize', 'unlock focus', 'update') StudioProperties = ('accepts arrow key', 'action method', 'active', 'alignment', 'allowed identifiers', 'allows branch selection', 'allows column reordering', 'allows column resizing', 'allows column selection', 'allows customization', 'allows editing text attributes', 'allows empty selection', 'allows mixed state', 'allows multiple selection', 'allows reordering', 'allows undo', 'alpha( value)?', 'alternate image', 'alternate increment value', 'alternate title', 'animation delay', 'associated file name', 'associated object', 'auto completes', 'auto display', 'auto enables items', 'auto repeat', 'auto resizes( outline column)?', 'auto save expanded items', 'auto save name', 'auto save table columns', 'auto saves configuration', 'auto scroll', 'auto sizes all columns to fit', 'auto sizes cells', 'background color', 'bezel state', 'bezel style', 'bezeled', 'border rect', 'border type', 'bordered', 'bounds( rotation)?', 'box type', 'button returned', 'button type', 'can choose directories', 'can choose files', 'can draw', 'can hide', 'cell( (background color|size|type))?', 'characters', 'class', 'click count', 'clicked( data)? column', 'clicked data item', 'clicked( data)? row', 'closeable', 'collating', 'color( (mode|panel))', 'command key down', 'configuration', 'content(s| (size|view( margins)?))?', 'context', 'continuous', 'control key down', 'control size', 'control tint', 'control view', 'controller visible', 'coordinate system', 'copies( on scroll)?', 'corner view', 'current cell', 'current column', 'current( field)? editor', 'current( menu)? item', 'current row', 'current tab view item', 'data source', 'default identifiers', 'delta (x|y|z)', 'destination window', 'directory', 'display mode', 'displayed cell', 'document( (edited|rect|view))?', 'double value', 'dragged column', 'dragged distance', 'dragged items', 'draws( cell)? background', 'draws grid', 'dynamically scrolls', 'echos bullets', 'edge', 'editable', 'edited( data)? column', 'edited data item', 'edited( data)? 
row', 'enabled', 'enclosing scroll view', 'ending page', 'error handling', 'event number', 'event type', 'excluded from windows menu', 'executable path', 'expanded', 'fax number', 'field editor', 'file kind', 'file name', 'file type', 'first responder', 'first visible column', 'flipped', 'floating', 'font( panel)?', 'formatter', 'frameworks path', 'frontmost', 'gave up', 'grid color', 'has data items', 'has horizontal ruler', 'has horizontal scroller', 'has parent data item', 'has resize indicator', 'has shadow', 'has sub menu', 'has vertical ruler', 'has vertical scroller', 'header cell', 'header view', 'hidden', 'hides when deactivated', 'highlights by', 'horizontal line scroll', 'horizontal page scroll', 'horizontal ruler view', 'horizontally resizable', 'icon image', 'id', 'identifier', 'ignores multiple clicks', 'image( (alignment|dims when disabled|frame style|scaling))?', 'imports graphics', 'increment value', 'indentation per level', 'indeterminate', 'index', 'integer value', 'intercell spacing', 'item height', 'key( (code|equivalent( modifier)?|window))?', 'knob thickness', 'label', 'last( visible)? column', 'leading offset', 'leaf', 'level', 'line scroll', 'loaded', 'localized sort', 'location', 'loop mode', 'main( (bunde|menu|window))?', 'marker follows cell', 'matrix mode', 'maximum( content)? size', 'maximum visible columns', 'menu( form representation)?', 'miniaturizable', 'miniaturized', 'minimized image', 'minimized title', 'minimum column width', 'minimum( content)? size', 'modal', 'modified', 'mouse down state', 'movie( (controller|file|rect))?', 'muted', 'name', 'needs display', 'next state', 'next text', 'number of tick marks', 'only tick mark values', 'opaque', 'open panel', 'option key down', 'outline table column', 'page scroll', 'pages across', 'pages down', 'palette label', 'pane splitter', 'parent data item', 'parent window', 'pasteboard', 'path( (names|separator))?', 'playing', 'plays every frame', 'plays selection only', 'position', 'preferred edge', 'preferred type', 'pressure', 'previous text', 'prompt', 'properties', 'prototype cell', 'pulls down', 'rate', 'released when closed', 'repeated', 'requested print time', 'required file type', 'resizable', 'resized column', 'resource path', 'returns records', 'reuses columns', 'rich text', 'roll over', 'row height', 'rulers visible', 'save panel', 'scripts path', 'scrollable', 'selectable( identifiers)?', 'selected cell', 'selected( data)? columns?', 'selected data items?', 'selected( data)? 
rows?', 'selected item identifier', 'selection by rect', 'send action on arrow key', 'sends action when done editing', 'separates columns', 'separator item', 'sequence number', 'services menu', 'shared frameworks path', 'shared support path', 'sheet', 'shift key down', 'shows alpha', 'shows state by', 'size( mode)?', 'smart insert delete enabled', 'sort case sensitivity', 'sort column', 'sort order', 'sort type', 'sorted( data rows)?', 'sound', 'source( mask)?', 'spell checking enabled', 'starting page', 'state', 'string value', 'sub menu', 'super menu', 'super view', 'tab key traverses cells', 'tab state', 'tab type', 'tab view', 'table view', 'tag', 'target( printer)?', 'text color', 'text container insert', 'text container origin', 'text returned', 'tick mark position', 'time stamp', 'title(d| (cell|font|height|position|rect))?', 'tool tip', 'toolbar', 'trailing offset', 'transparent', 'treat packages as directories', 'truncated labels', 'types', 'unmodified characters', 'update views', 'use sort indicator', 'user defaults', 'uses data source', 'uses ruler', 'uses threaded animation', 'uses title from previous column', 'value wraps', 'version', 'vertical( (line scroll|page scroll|ruler view))?', 'vertically resizable', 'view', 'visible( document rect)?', 'volume', 'width', 'window', 'windows menu', 'wraps', 'zoomable', 'zoomed') tokens = { 'root': [ (r'\s+', Text), (r'¬\n', String.Escape), (r"'s\s+", Text), # This is a possessive, consider moving (r'(--|#).*?$', Comment), (r'\(\*', Comment.Multiline, 'comment'), (r'[(){}!,.:]', Punctuation), (r'(«)([^»]+)(»)', bygroups(Text, Name.Builtin, Text)), (r'\b((?:considering|ignoring)\s*)' r'(application responses|case|diacriticals|hyphens|' r'numeric strings|punctuation|white space)', bygroups(Keyword, Name.Builtin)), (r'(-|\*|\+|&|≠|>=?|<=?|=|≥|≤|/|÷|\^)', Operator), (r"\b(%s)\b" % '|'.join(Operators), Operator.Word), (r'^(\s*(?:on|end)\s+)' r'(%s)' % '|'.join(StudioEvents[::-1]), bygroups(Keyword, Name.Function)), (r'^(\s*)(in|on|script|to)(\s+)', bygroups(Text, Keyword, Text)), (r'\b(as )(%s)\b' % '|'.join(Classes), bygroups(Keyword, Name.Class)), (r'\b(%s)\b' % '|'.join(Literals), Name.Constant), (r'\b(%s)\b' % '|'.join(Commands), Name.Builtin), (r'\b(%s)\b' % '|'.join(Control), Keyword), (r'\b(%s)\b' % '|'.join(Declarations), Keyword), (r'\b(%s)\b' % '|'.join(Reserved), Name.Builtin), (r'\b(%s)s?\b' % '|'.join(BuiltIn), Name.Builtin), (r'\b(%s)\b' % '|'.join(HandlerParams), Name.Builtin), (r'\b(%s)\b' % '|'.join(StudioProperties), Name.Attribute), (r'\b(%s)s?\b' % '|'.join(StudioClasses), Name.Builtin), (r'\b(%s)\b' % '|'.join(StudioCommands), Name.Builtin), (r'\b(%s)\b' % '|'.join(References), Name.Builtin), (r'"(\\\\|\\[^\\]|[^"\\])*"', String.Double), (r'\b(%s)\b' % Identifiers, Name.Variable), (r'[-+]?(\d+\.\d*|\d*\.\d+)(E[-+][0-9]+)?', Number.Float), (r'[-+]?\d+', Number.Integer), ], 'comment': [ (r'\(\*', Comment.Multiline, '#push'), (r'\*\)', Comment.Multiline, '#pop'), ('[^*(]+', Comment.Multiline), ('[*(]', Comment.Multiline), ], } class RexxLexer(RegexLexer): """ `Rexx `_ is a scripting language available for a wide range of different platforms with its roots found on mainframe systems. It is popular for I/O- and data based tasks and can act as glue language to bind different applications together. .. 
versionadded:: 2.0 """ name = 'Rexx' aliases = ['rexx', 'arexx'] filenames = ['*.rexx', '*.rex', '*.rx', '*.arexx'] mimetypes = ['text/x-rexx'] flags = re.IGNORECASE tokens = { 'root': [ (r'\s+', Whitespace), (r'/\*', Comment.Multiline, 'comment'), (r'"', String, 'string_double'), (r"'", String, 'string_single'), (r'[0-9]+(\.[0-9]+)?(e[+-]?[0-9])?', Number), (r'([a-z_]\w*)(\s*)(:)(\s*)(procedure)\b', bygroups(Name.Function, Whitespace, Operator, Whitespace, Keyword.Declaration)), (r'([a-z_]\w*)(\s*)(:)', bygroups(Name.Label, Whitespace, Operator)), include('function'), include('keyword'), include('operator'), (r'[a-z_]\w*', Text), ], 'function': [ (words(( 'abbrev', 'abs', 'address', 'arg', 'b2x', 'bitand', 'bitor', 'bitxor', 'c2d', 'c2x', 'center', 'charin', 'charout', 'chars', 'compare', 'condition', 'copies', 'd2c', 'd2x', 'datatype', 'date', 'delstr', 'delword', 'digits', 'errortext', 'form', 'format', 'fuzz', 'insert', 'lastpos', 'left', 'length', 'linein', 'lineout', 'lines', 'max', 'min', 'overlay', 'pos', 'queued', 'random', 'reverse', 'right', 'sign', 'sourceline', 'space', 'stream', 'strip', 'substr', 'subword', 'symbol', 'time', 'trace', 'translate', 'trunc', 'value', 'verify', 'word', 'wordindex', 'wordlength', 'wordpos', 'words', 'x2b', 'x2c', 'x2d', 'xrange'), suffix=r'(\s*)(\()'), bygroups(Name.Builtin, Whitespace, Operator)), ], 'keyword': [ (r'(address|arg|by|call|do|drop|else|end|exit|for|forever|if|' r'interpret|iterate|leave|nop|numeric|off|on|options|parse|' r'pull|push|queue|return|say|select|signal|to|then|trace|until|' r'while)\b', Keyword.Reserved), ], 'operator': [ (r'(-|//|/|\(|\)|\*\*|\*|\\<<|\\<|\\==|\\=|\\>>|\\>|\\|\|\||\||' r'&&|&|%|\+|<<=|<<|<=|<>|<|==|=|><|>=|>>=|>>|>|¬<<|¬<|¬==|¬=|' r'¬>>|¬>|¬|\.|,)', Operator), ], 'string_double': [ (r'[^"\n]+', String), (r'""', String), (r'"', String, '#pop'), (r'\n', Text, '#pop'), # Stray linefeed also terminates strings. ], 'string_single': [ (r'[^\'\n]+', String), (r'\'\'', String), (r'\'', String, '#pop'), (r'\n', Text, '#pop'), # Stray linefeed also terminates strings. ], 'comment': [ (r'[^*]+', Comment.Multiline), (r'\*/', Comment.Multiline, '#pop'), (r'\*', Comment.Multiline), ] } _c = lambda s: re.compile(s, re.MULTILINE) _ADDRESS_COMMAND_PATTERN = _c(r'^\s*address\s+command\b') _ADDRESS_PATTERN = _c(r'^\s*address\s+') _DO_WHILE_PATTERN = _c(r'^\s*do\s+while\b') _IF_THEN_DO_PATTERN = _c(r'^\s*if\b.+\bthen\s+do\s*$') _PROCEDURE_PATTERN = _c(r'^\s*([a-z_]\w*)(\s*)(:)(\s*)(procedure)\b') _ELSE_DO_PATTERN = _c(r'\belse\s+do\s*$') _PARSE_ARG_PATTERN = _c(r'^\s*parse\s+(upper\s+)?(arg|value)\b') PATTERNS_AND_WEIGHTS = ( (_ADDRESS_COMMAND_PATTERN, 0.2), (_ADDRESS_PATTERN, 0.05), (_DO_WHILE_PATTERN, 0.1), (_ELSE_DO_PATTERN, 0.1), (_IF_THEN_DO_PATTERN, 0.1), (_PROCEDURE_PATTERN, 0.5), (_PARSE_ARG_PATTERN, 0.2), ) def analyse_text(text): """ Check for initial comment and patterns that distinguish Rexx from other C-like languages. """ if re.search(r'/\*\**\s*rexx', text, re.IGNORECASE): # Header matches MVS Rexx requirements; this is certainly a Rexx # script. return 1.0 elif text.startswith('/*'): # Header matches general Rexx requirements; the source code might # still be any language using C comments such as C++, C# or Java. lowerText = text.lower() result = sum(weight for (pattern, weight) in RexxLexer.PATTERNS_AND_WEIGHTS if pattern.search(lowerText)) + 0.01 return min(result, 1.0) class MOOCodeLexer(RegexLexer): """ For `MOOCode <http://www.moo.mud.org/>`_ (the MOO scripting language). .. 
versionadded:: 0.9 """ name = 'MOOCode' filenames = ['*.moo'] aliases = ['moocode', 'moo'] mimetypes = ['text/x-moocode'] tokens = { 'root': [ # Numbers (r'(0|[1-9][0-9_]*)', Number.Integer), # Strings (r'"(\\\\|\\[^\\]|[^"\\])*"', String), # exceptions (r'(E_PERM|E_DIV)', Name.Exception), # db-refs (r'((#[-0-9]+)|(\$\w+))', Name.Entity), # Keywords (r'\b(if|else|elseif|endif|for|endfor|fork|endfork|while' r'|endwhile|break|continue|return|try' r'|except|endtry|finally|in)\b', Keyword), # builtins (r'(random|length)', Name.Builtin), # special variables (r'(player|caller|this|args)', Name.Variable.Instance), # skip whitespace (r'\s+', Text), (r'\n', Text), # other operators (r'([!;=,{}&|:.\[\]@()<>?]+)', Operator), # function call (r'(\w+)(\()', bygroups(Name.Function, Operator)), # variables (r'(\w+)', Text), ] } class HybrisLexer(RegexLexer): """ For `Hybris `_ source code. .. versionadded:: 1.4 """ name = 'Hybris' aliases = ['hybris', 'hy'] filenames = ['*.hy', '*.hyb'] mimetypes = ['text/x-hybris', 'application/x-hybris'] flags = re.MULTILINE | re.DOTALL tokens = { 'root': [ # method names (r'^(\s*(?:function|method|operator\s+)+?)' r'([a-zA-Z_]\w*)' r'(\s*)(\()', bygroups(Keyword, Name.Function, Text, Operator)), (r'[^\S\n]+', Text), (r'//.*?\n', Comment.Single), (r'/\*.*?\*/', Comment.Multiline), (r'@[a-zA-Z_][\w.]*', Name.Decorator), (r'(break|case|catch|next|default|do|else|finally|for|foreach|of|' r'unless|if|new|return|switch|me|throw|try|while)\b', Keyword), (r'(extends|private|protected|public|static|throws|function|method|' r'operator)\b', Keyword.Declaration), (r'(true|false|null|__FILE__|__LINE__|__VERSION__|__LIB_PATH__|' r'__INC_PATH__)\b', Keyword.Constant), (r'(class|struct)(\s+)', bygroups(Keyword.Declaration, Text), 'class'), (r'(import|include)(\s+)', bygroups(Keyword.Namespace, Text), 'import'), (words(( 'gc_collect', 'gc_mm_items', 'gc_mm_usage', 'gc_collect_threshold', 'urlencode', 'urldecode', 'base64encode', 'base64decode', 'sha1', 'crc32', 'sha2', 'md5', 'md5_file', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'exp', 'fabs', 'floor', 'fmod', 'log', 'log10', 'pow', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'isint', 'isfloat', 'ischar', 'isstring', 'isarray', 'ismap', 'isalias', 'typeof', 'sizeof', 'toint', 'tostring', 'fromxml', 'toxml', 'binary', 'pack', 'load', 'eval', 'var_names', 'var_values', 'user_functions', 'dyn_functions', 'methods', 'call', 'call_method', 'mknod', 'mkfifo', 'mount', 'umount2', 'umount', 'ticks', 'usleep', 'sleep', 'time', 'strtime', 'strdate', 'dllopen', 'dlllink', 'dllcall', 'dllcall_argv', 'dllclose', 'env', 'exec', 'fork', 'getpid', 'wait', 'popen', 'pclose', 'exit', 'kill', 'pthread_create', 'pthread_create_argv', 'pthread_exit', 'pthread_join', 'pthread_kill', 'smtp_send', 'http_get', 'http_post', 'http_download', 'socket', 'bind', 'listen', 'accept', 'getsockname', 'getpeername', 'settimeout', 'connect', 'server', 'recv', 'send', 'close', 'print', 'println', 'printf', 'input', 'readline', 'serial_open', 'serial_fcntl', 'serial_get_attr', 'serial_get_ispeed', 'serial_get_ospeed', 'serial_set_attr', 'serial_set_ispeed', 'serial_set_ospeed', 'serial_write', 'serial_read', 'serial_close', 'xml_load', 'xml_parse', 'fopen', 'fseek', 'ftell', 'fsize', 'fread', 'fwrite', 'fgets', 'fclose', 'file', 'readdir', 'pcre_replace', 'size', 'pop', 'unmap', 'has', 'keys', 'values', 'length', 'find', 'substr', 'replace', 'split', 'trim', 'remove', 'contains', 'join'), suffix=r'\b'), Name.Builtin), (words(( 'MethodReference', 'Runner', 'Dll', 
'Thread', 'Pipe', 'Process', 'Runnable', 'CGI', 'ClientSocket', 'Socket', 'ServerSocket', 'File', 'Console', 'Directory', 'Exception'), suffix=r'\b'), Keyword.Type), (r'"(\\\\|\\[^\\]|[^"\\])*"', String), (r"'\\.'|'[^\\]'|'\\u[0-9a-f]{4}'", String.Char), (r'(\.)([a-zA-Z_]\w*)', bygroups(Operator, Name.Attribute)), (r'[a-zA-Z_]\w*:', Name.Label), (r'[a-zA-Z_$]\w*', Name), (r'[~^*!%&\[\](){}<>|+=:;,./?\-@]+', Operator), (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float), (r'0x[0-9a-f]+', Number.Hex), (r'[0-9]+L?', Number.Integer), (r'\n', Text), ], 'class': [ (r'[a-zA-Z_]\w*', Name.Class, '#pop') ], 'import': [ (r'[\w.]+\*?', Name.Namespace, '#pop') ], } def analyse_text(text): """public method and private method don't seem to be quite common elsewhere.""" result = 0 if re.search(r'\b(?:public|private)\s+method\b', text): result += 0.01 return result class EasytrieveLexer(RegexLexer): """ Easytrieve Plus is a programming language for extracting, filtering and converting sequential data. Furthermore it can lay out data for reports. It is mainly used on mainframe platforms and can access several of the mainframe's native file formats. It is somewhat comparable to awk. .. versionadded:: 2.1 """ name = 'Easytrieve' aliases = ['easytrieve'] filenames = ['*.ezt', '*.mac'] mimetypes = ['text/x-easytrieve'] flags = 0 # Note: We cannot use r'\b' at the start and end of keywords because # Easytrieve Plus delimiter characters are: # # * space ( ) # * apostrophe (') # * period (.) # * comma (,) # * parentheses ( and ) # * colon (:) # # Additionally, words end once a '*' appears, indicating a comment. _DELIMITERS = r' \'.,():\n' _DELIMITERS_OR_COMMENT = _DELIMITERS + '*' _DELIMITER_PATTERN = '[' + _DELIMITERS + ']' _DELIMITER_PATTERN_CAPTURE = '(' + _DELIMITER_PATTERN + ')' _NON_DELIMITER_OR_COMMENT_PATTERN = '[^' + _DELIMITERS_OR_COMMENT + ']' _OPERATORS_PATTERN = '[.+\\-/=\\[\\](){}<>;,&%¬]' _KEYWORDS = [ 'AFTER-BREAK', 'AFTER-LINE', 'AFTER-SCREEN', 'AIM', 'AND', 'ATTR', 'BEFORE', 'BEFORE-BREAK', 'BEFORE-LINE', 'BEFORE-SCREEN', 'BUSHU', 'BY', 'CALL', 'CASE', 'CHECKPOINT', 'CHKP', 'CHKP-STATUS', 'CLEAR', 'CLOSE', 'COL', 'COLOR', 'COMMIT', 'CONTROL', 'COPY', 'CURSOR', 'D', 'DECLARE', 'DEFAULT', 'DEFINE', 'DELETE', 'DENWA', 'DISPLAY', 'DLI', 'DO', 'DUPLICATE', 'E', 'ELSE', 'ELSE-IF', 'END', 'END-CASE', 'END-DO', 'END-IF', 'END-PROC', 'ENDPAGE', 'ENDTABLE', 'ENTER', 'EOF', 'EQ', 'ERROR', 'EXIT', 'EXTERNAL', 'EZLIB', 'F1', 'F10', 'F11', 'F12', 'F13', 'F14', 'F15', 'F16', 'F17', 'F18', 'F19', 'F2', 'F20', 'F21', 'F22', 'F23', 'F24', 'F25', 'F26', 'F27', 'F28', 'F29', 'F3', 'F30', 'F31', 'F32', 'F33', 'F34', 'F35', 'F36', 'F4', 'F5', 'F6', 'F7', 'F8', 'F9', 'FETCH', 'FILE-STATUS', 'FILL', 'FINAL', 'FIRST', 'FIRST-DUP', 'FOR', 'GE', 'GET', 'GO', 'GOTO', 'GQ', 'GR', 'GT', 'HEADING', 'HEX', 'HIGH-VALUES', 'IDD', 'IDMS', 'IF', 'IN', 'INSERT', 'JUSTIFY', 'KANJI-DATE', 'KANJI-DATE-LONG', 'KANJI-TIME', 'KEY', 'KEY-PRESSED', 'KOKUGO', 'KUN', 'LAST-DUP', 'LE', 'LEVEL', 'LIKE', 'LINE', 'LINE-COUNT', 'LINE-NUMBER', 'LINK', 'LIST', 'LOW-VALUES', 'LQ', 'LS', 'LT', 'MACRO', 'MASK', 'MATCHED', 'MEND', 'MESSAGE', 'MOVE', 'MSTART', 'NE', 'NEWPAGE', 'NOMASK', 'NOPRINT', 'NOT', 'NOTE', 'NOVERIFY', 'NQ', 'NULL', 'OF', 'OR', 'OTHERWISE', 'PA1', 'PA2', 'PA3', 'PAGE-COUNT', 'PAGE-NUMBER', 'PARM-REGISTER', 'PATH-ID', 'PATTERN', 'PERFORM', 'POINT', 'POS', 'PRIMARY', 'PRINT', 'PROCEDURE', 'PROGRAM', 'PUT', 'READ', 'RECORD', 'RECORD-COUNT', 'RECORD-LENGTH', 'REFRESH', 'RELEASE', 'RENUM', 'REPEAT', 'REPORT', 'REPORT-INPUT', 'RESHOW', 
'RESTART', 'RETRIEVE', 'RETURN-CODE', 'ROLLBACK', 'ROW', 'S', 'SCREEN', 'SEARCH', 'SECONDARY', 'SELECT', 'SEQUENCE', 'SIZE', 'SKIP', 'SOKAKU', 'SORT', 'SQL', 'STOP', 'SUM', 'SYSDATE', 'SYSDATE-LONG', 'SYSIN', 'SYSIPT', 'SYSLST', 'SYSPRINT', 'SYSSNAP', 'SYSTIME', 'TALLY', 'TERM-COLUMNS', 'TERM-NAME', 'TERM-ROWS', 'TERMINATION', 'TITLE', 'TO', 'TRANSFER', 'TRC', 'UNIQUE', 'UNTIL', 'UPDATE', 'UPPERCASE', 'USER', 'USERID', 'VALUE', 'VERIFY', 'W', 'WHEN', 'WHILE', 'WORK', 'WRITE', 'X', 'XDM', 'XRST' ] tokens = { 'root': [ (r'\*.*\n', Comment.Single), (r'\n+', Whitespace), # Macro argument (r'&' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+\.', Name.Variable, 'after_macro_argument'), # Macro call (r'%' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name.Variable), (r'(FILE|MACRO|REPORT)(\s+)', bygroups(Keyword.Declaration, Whitespace), 'after_declaration'), (r'(JOB|PARM)' + r'(' + _DELIMITER_PATTERN + r')', bygroups(Keyword.Declaration, Operator)), (words(_KEYWORDS, suffix=_DELIMITER_PATTERN_CAPTURE), bygroups(Keyword.Reserved, Operator)), (_OPERATORS_PATTERN, Operator), # Procedure declaration (r'(' + _NON_DELIMITER_OR_COMMENT_PATTERN + r'+)(\s*)(\.?)(\s*)(PROC)(\s*\n)', bygroups(Name.Function, Whitespace, Operator, Whitespace, Keyword.Declaration, Whitespace)), (r'[0-9]+\.[0-9]*', Number.Float), (r'[0-9]+', Number.Integer), (r"'(''|[^'])*'", String), (r'\s+', Whitespace), # Everything else just belongs to a name (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name), ], 'after_declaration': [ (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name.Function), default('#pop'), ], 'after_macro_argument': [ (r'\*.*\n', Comment.Single, '#pop'), (r'\s+', Whitespace, '#pop'), (_OPERATORS_PATTERN, Operator, '#pop'), (r"'(''|[^'])*'", String, '#pop'), # Everything else just belongs to a name (_NON_DELIMITER_OR_COMMENT_PATTERN + r'+', Name), ], } _COMMENT_LINE_REGEX = re.compile(r'^\s*\*') _MACRO_HEADER_REGEX = re.compile(r'^\s*MACRO') def analyse_text(text): """ Perform a structural analysis for basic Easytrieve constructs. """ result = 0.0 lines = text.split('\n') hasEndProc = False hasHeaderComment = False hasFile = False hasJob = False hasProc = False hasParm = False hasReport = False def isCommentLine(line): return EasytrieveLexer._COMMENT_LINE_REGEX.match(lines[0]) is not None def isEmptyLine(line): return not bool(line.strip()) # Remove possible empty lines and header comments. while lines and (isEmptyLine(lines[0]) or isCommentLine(lines[0])): if not isEmptyLine(lines[0]): hasHeaderComment = True del lines[0] if EasytrieveLexer._MACRO_HEADER_REGEX.match(lines[0]): # Looks like an Easytrieve macro. result = 0.4 if hasHeaderComment: result += 0.4 else: # Scan the source for lines starting with indicators. for line in lines: words = line.split() if (len(words) >= 2): firstWord = words[0] if not hasReport: if not hasJob: if not hasFile: if not hasParm: if firstWord == 'PARM': hasParm = True if firstWord == 'FILE': hasFile = True if firstWord == 'JOB': hasJob = True elif firstWord == 'PROC': hasProc = True elif firstWord == 'END-PROC': hasEndProc = True elif firstWord == 'REPORT': hasReport = True # Weight the findings. if hasJob and (hasProc == hasEndProc): if hasHeaderComment: result += 0.1 if hasParm: if hasProc: # Found PARM, JOB and PROC/END-PROC: # pretty sure this is Easytrieve. 
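                        # The weights in this method are additive but
                        # deliberately budgeted: the strongest realistic
                        # combination on this branch (header comment 0.1,
                        # PROC/END-PROC with PARM 0.8, plus 0.01 each for
                        # FILE and REPORT) totals 0.92, which stays under
                        # the 1.0 ceiling enforced by the assert at the
                        # end of this method.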
result += 0.8 else: # Found PARAM and JOB: probably this is Easytrieve result += 0.5 else: # Found JOB and possibly other keywords: might be Easytrieve result += 0.11 if hasParm: # Note: PARAM is not a proper English word, so this is # regarded a much better indicator for Easytrieve than # the other words. result += 0.2 if hasFile: result += 0.01 if hasReport: result += 0.01 assert 0.0 <= result <= 1.0 return result class JclLexer(RegexLexer): """ `Job Control Language (JCL) `_ is a scripting language used on mainframe platforms to instruct the system on how to run a batch job or start a subsystem. It is somewhat comparable to MS DOS batch and Unix shell scripts. .. versionadded:: 2.1 """ name = 'JCL' aliases = ['jcl'] filenames = ['*.jcl'] mimetypes = ['text/x-jcl'] flags = re.IGNORECASE tokens = { 'root': [ (r'//\*.*\n', Comment.Single), (r'//', Keyword.Pseudo, 'statement'), (r'/\*', Keyword.Pseudo, 'jes2_statement'), # TODO: JES3 statement (r'.*\n', Other) # Input text or inline code in any language. ], 'statement': [ (r'\s*\n', Whitespace, '#pop'), (r'([a-z]\w*)(\s+)(exec|job)(\s*)', bygroups(Name.Label, Whitespace, Keyword.Reserved, Whitespace), 'option'), (r'[a-z]\w*', Name.Variable, 'statement_command'), (r'\s+', Whitespace, 'statement_command'), ], 'statement_command': [ (r'\s+(command|cntl|dd|endctl|endif|else|include|jcllib|' r'output|pend|proc|set|then|xmit)\s+', Keyword.Reserved, 'option'), include('option') ], 'jes2_statement': [ (r'\s*\n', Whitespace, '#pop'), (r'\$', Keyword, 'option'), (r'\b(jobparam|message|netacct|notify|output|priority|route|' r'setup|signoff|xeq|xmit)\b', Keyword, 'option'), ], 'option': [ # (r'\n', Text, 'root'), (r'\*', Name.Builtin), (r'[\[\](){}<>;,]', Punctuation), (r'[-+*/=&%]', Operator), (r'[a-z_]\w*', Name), (r'\d+\.\d*', Number.Float), (r'\.\d+', Number.Float), (r'\d+', Number.Integer), (r"'", String, 'option_string'), (r'[ \t]+', Whitespace, 'option_comment'), (r'\.', Punctuation), ], 'option_string': [ (r"(\n)(//)", bygroups(Text, Keyword.Pseudo)), (r"''", String), (r"[^']", String), (r"'", String, '#pop'), ], 'option_comment': [ # (r'\n', Text, 'root'), (r'.+', Comment.Single), ] } _JOB_HEADER_PATTERN = re.compile(r'^//[a-z#$@][a-z0-9#$@]{0,7}\s+job(\s+.*)?$', re.IGNORECASE) def analyse_text(text): """ Recognize JCL job by header. """ result = 0.0 lines = text.split('\n') if len(lines) > 0: if JclLexer._JOB_HEADER_PATTERN.match(lines[0]): result = 1.0 assert 0.0 <= result <= 1.0 return result class MiniScriptLexer(RegexLexer): """ For `MiniScript `_ source code. .. 
versionadded:: 2.6 """ name = 'MiniScript' aliases = ['miniscript', 'ms'] filenames = ['*.ms'] mimetypes = ['text/x-miniscript', 'application/x-miniscript'] tokens = { 'root': [ (r'#!(.*?)$', Comment.Preproc), default('base'), ], 'base': [ ('//.*$', Comment.Single), (r'(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?', Number), (r'(?i)\d+e[+-]?\d+', Number), (r'\d+', Number), (r'\n', Text), (r'[^\S\n]+', Text), (r'"', String, 'string_double'), (r'(==|!=|<=|>=|[=+\-*/%^<>.:])', Operator), (r'[;,\[\]{}()]', Punctuation), (words(( 'break', 'continue', 'else', 'end', 'for', 'function', 'if', 'in', 'isa', 'then', 'repeat', 'return', 'while'), suffix=r'\b'), Keyword), (words(( 'abs', 'acos', 'asin', 'atan', 'ceil', 'char', 'cos', 'floor', 'log', 'round', 'rnd', 'pi', 'sign', 'sin', 'sqrt', 'str', 'tan', 'hasIndex', 'indexOf', 'len', 'val', 'code', 'remove', 'lower', 'upper', 'replace', 'split', 'indexes', 'values', 'join', 'sum', 'sort', 'shuffle', 'push', 'pop', 'pull', 'range', 'print', 'input', 'time', 'wait', 'locals', 'globals', 'outer', 'yield'), suffix=r'\b'), Name.Builtin), (r'(true|false|null)\b', Keyword.Constant), (r'(and|or|not|new)\b', Operator.Word), (r'(self|super|__isa)\b', Name.Builtin.Pseudo), (r'[a-zA-Z_]\w*', Name.Variable) ], 'string_double': [ (r'[^"\n]+', String), (r'""', String), (r'"', String, '#pop'), (r'\n', Text, '#pop'), # Stray linefeed also terminates strings. ] } pygments-2.11.2/pygments/lexers/__init__.py0000644000175000017500000002577314165547207020625 0ustar carstencarsten""" pygments.lexers ~~~~~~~~~~~~~~~ Pygments lexers. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import sys import types import fnmatch from os.path import basename from pygments.lexers._mapping import LEXERS from pygments.modeline import get_filetype_from_buffer from pygments.plugin import find_plugin_lexers from pygments.util import ClassNotFound, guess_decode COMPAT = { 'Python3Lexer': 'PythonLexer', 'Python3TracebackLexer': 'PythonTracebackLexer', } __all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class', 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT) _lexer_cache = {} _pattern_cache = {} def _fn_matches(fn, glob): """Return whether the supplied file name fn matches the given glob pattern.""" if glob not in _pattern_cache: pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) return pattern.match(fn) return _pattern_cache[glob].match(fn) def _load_lexers(module_name): """Load a lexer (and all others in the module too).""" mod = __import__(module_name, None, None, ['__all__']) for lexer_name in mod.__all__: cls = getattr(mod, lexer_name) _lexer_cache[cls.name] = cls def get_all_lexers(): """Return a generator of tuples in the form ``(name, aliases, filenames, mimetypes)`` of all known lexers. """ for item in LEXERS.values(): yield item[1:] for lexer in find_plugin_lexers(): yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes def find_lexer_class(name): """Lookup a lexer class by name. Return None if not found. """ if name in _lexer_cache: return _lexer_cache[name] # lookup builtin lexers for module_name, lname, aliases, _, _ in LEXERS.values(): if name == lname: _load_lexers(module_name) return _lexer_cache[name] # continue with lexers from setuptools entrypoints for cls in find_plugin_lexers(): if cls.name == name: return cls def find_lexer_class_by_name(_alias): """Lookup a lexer class by alias. Like `get_lexer_by_name`, but does not instantiate the class. .. 
versionadded:: 2.2 """ if not _alias: raise ClassNotFound('no lexer for alias %r found' % _alias) # lookup builtin lexers for module_name, name, aliases, _, _ in LEXERS.values(): if _alias.lower() in aliases: if name not in _lexer_cache: _load_lexers(module_name) return _lexer_cache[name] # continue with lexers from setuptools entrypoints for cls in find_plugin_lexers(): if _alias.lower() in cls.aliases: return cls raise ClassNotFound('no lexer for alias %r found' % _alias) def get_lexer_by_name(_alias, **options): """Get a lexer by an alias. Raises ClassNotFound if not found. """ if not _alias: raise ClassNotFound('no lexer for alias %r found' % _alias) # lookup builtin lexers for module_name, name, aliases, _, _ in LEXERS.values(): if _alias.lower() in aliases: if name not in _lexer_cache: _load_lexers(module_name) return _lexer_cache[name](**options) # continue with lexers from setuptools entrypoints for cls in find_plugin_lexers(): if _alias.lower() in cls.aliases: return cls(**options) raise ClassNotFound('no lexer for alias %r found' % _alias) def load_lexer_from_file(filename, lexername="CustomLexer", **options): """Load a lexer from a file. This method expects a file located relative to the current working directory, which contains a Lexer class. By default, it expects the Lexer to be name CustomLexer; you can specify your own class name as the second argument to this function. Users should be very careful with the input, because this method is equivalent to running eval on the input file. Raises ClassNotFound if there are any problems importing the Lexer. .. versionadded:: 2.2 """ try: # This empty dict will contain the namespace for the exec'd file custom_namespace = {} with open(filename, 'rb') as f: exec(f.read(), custom_namespace) # Retrieve the class `lexername` from that namespace if lexername not in custom_namespace: raise ClassNotFound('no valid %s class found in %s' % (lexername, filename)) lexer_class = custom_namespace[lexername] # And finally instantiate it with the options return lexer_class(**options) except OSError as err: raise ClassNotFound('cannot read %s: %s' % (filename, err)) except ClassNotFound: raise except Exception as err: raise ClassNotFound('error when loading custom lexer: %s' % err) def find_lexer_class_for_filename(_fn, code=None): """Get a lexer for a filename. If multiple lexers match the filename pattern, use ``analyse_text()`` to figure out which one is more appropriate. Returns None if not found. """ matches = [] fn = basename(_fn) for modname, name, _, filenames, _ in LEXERS.values(): for filename in filenames: if _fn_matches(fn, filename): if name not in _lexer_cache: _load_lexers(modname) matches.append((_lexer_cache[name], filename)) for cls in find_plugin_lexers(): for filename in cls.filenames: if _fn_matches(fn, filename): matches.append((cls, filename)) if isinstance(code, bytes): # decode it, since all analyse_text functions expect unicode code = guess_decode(code) def get_rating(info): cls, filename = info # explicit patterns get a bonus bonus = '*' not in filename and 0.5 or 0 # The class _always_ defines analyse_text because it's included in # the Lexer class. The default implementation returns None which # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py # to find lexers which need it overridden. 
if code: return cls.analyse_text(code) + bonus, cls.__name__ return cls.priority + bonus, cls.__name__ if matches: matches.sort(key=get_rating) # print "Possible lexers, after sort:", matches return matches[-1][0] def get_lexer_for_filename(_fn, code=None, **options): """Get a lexer for a filename. If multiple lexers match the filename pattern, use ``analyse_text()`` to figure out which one is more appropriate. Raises ClassNotFound if not found. """ res = find_lexer_class_for_filename(_fn, code) if not res: raise ClassNotFound('no lexer for filename %r found' % _fn) return res(**options) def get_lexer_for_mimetype(_mime, **options): """Get a lexer for a mimetype. Raises ClassNotFound if not found. """ for modname, name, _, _, mimetypes in LEXERS.values(): if _mime in mimetypes: if name not in _lexer_cache: _load_lexers(modname) return _lexer_cache[name](**options) for cls in find_plugin_lexers(): if _mime in cls.mimetypes: return cls(**options) raise ClassNotFound('no lexer for mimetype %r found' % _mime) def _iter_lexerclasses(plugins=True): """Return an iterator over all lexer classes.""" for key in sorted(LEXERS): module_name, name = LEXERS[key][:2] if name not in _lexer_cache: _load_lexers(module_name) yield _lexer_cache[name] if plugins: yield from find_plugin_lexers() def guess_lexer_for_filename(_fn, _text, **options): """ Lookup all lexers that handle those filenames primary (``filenames``) or secondary (``alias_filenames``). Then run a text analysis for those lexers and choose the best result. usage:: >>> from pygments.lexers import guess_lexer_for_filename >>> guess_lexer_for_filename('hello.html', '<%= @foo %>') >>> guess_lexer_for_filename('hello.html', '
<h1>{{ title|e }}</h1>
') >>> guess_lexer_for_filename('style.css', 'a { color: }') """ fn = basename(_fn) primary = {} matching_lexers = set() for lexer in _iter_lexerclasses(): for filename in lexer.filenames: if _fn_matches(fn, filename): matching_lexers.add(lexer) primary[lexer] = True for filename in lexer.alias_filenames: if _fn_matches(fn, filename): matching_lexers.add(lexer) primary[lexer] = False if not matching_lexers: raise ClassNotFound('no lexer for filename %r found' % fn) if len(matching_lexers) == 1: return matching_lexers.pop()(**options) result = [] for lexer in matching_lexers: rv = lexer.analyse_text(_text) if rv == 1.0: return lexer(**options) result.append((rv, lexer)) def type_sort(t): # sort by: # - analyse score # - is primary filename pattern? # - priority # - last resort: class name return (t[0], primary[t[1]], t[1].priority, t[1].__name__) result.sort(key=type_sort) return result[-1][1](**options) def guess_lexer(_text, **options): """Guess a lexer by strong distinctions in the text (eg, shebang).""" if not isinstance(_text, str): inencoding = options.get('inencoding', options.get('encoding')) if inencoding: _text = _text.decode(inencoding or 'utf8') else: _text, _ = guess_decode(_text) # try to get a vim modeline first ft = get_filetype_from_buffer(_text) if ft is not None: try: return get_lexer_by_name(ft, **options) except ClassNotFound: pass best_lexer = [0.0, None] for lexer in _iter_lexerclasses(): rv = lexer.analyse_text(_text) if rv == 1.0: return lexer(**options) if rv > best_lexer[0]: best_lexer[:] = (rv, lexer) if not best_lexer[0] or best_lexer[1] is None: raise ClassNotFound('no lexer matching the text found') return best_lexer[1](**options) class _automodule(types.ModuleType): """Automatically import lexers.""" def __getattr__(self, name): info = LEXERS.get(name) if info: _load_lexers(info[0]) cls = _lexer_cache[info[1]] setattr(self, name, cls) return cls if name in COMPAT: return getattr(self, COMPAT[name]) raise AttributeError(name) oldmod = sys.modules[__name__] newmod = _automodule(__name__) newmod.__dict__.update(oldmod.__dict__) sys.modules[__name__] = newmod del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types pygments-2.11.2/pygments/lexers/special.py0000644000175000017500000000656614165547207020515 0ustar carstencarsten""" pygments.lexers.special ~~~~~~~~~~~~~~~~~~~~~~~ Special lexers. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import ast import re from pygments.lexer import Lexer from pygments.token import Token, Error, Text, Generic from pygments.util import get_choice_opt __all__ = ['TextLexer', 'OutputLexer', 'RawTokenLexer'] class TextLexer(Lexer): """ "Null" lexer, doesn't highlight anything. """ name = 'Text only' aliases = ['text'] filenames = ['*.txt'] mimetypes = ['text/plain'] priority = 0.01 def get_tokens_unprocessed(self, text): yield 0, Text, text def analyse_text(text): return TextLexer.priority class OutputLexer(Lexer): """ Simple lexer that highlights everything as ``Token.Generic.Output``. .. versionadded:: 2.10 """ name = 'Text output' aliases = ['output'] def get_tokens_unprocessed(self, text): yield 0, Generic.Output, text _ttype_cache = {} line_re = re.compile('.*?\n') class RawTokenLexer(Lexer): """ Recreate a token stream formatted with the `RawTokenFormatter`. Additional options accepted: `compress` If set to ``"gz"`` or ``"bz2"``, decompress the token stream with the given compression algorithm before lexing (default: ``""``). 
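    For example, output produced by `RawTokenFormatter` can be fed back
    through this lexer (a minimal round-trip sketch using the documented
    Pygments API)::

        from pygments import highlight
        from pygments.formatters import RawTokenFormatter
        from pygments.lexers import PythonLexer

        # raw is bytes: one "Token.Type\t'value'" line per token
        raw = highlight('print(1)', PythonLexer(), RawTokenFormatter())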
""" name = 'Raw token data' aliases = [] filenames = [] mimetypes = ['application/x-pygments-tokens'] def __init__(self, **options): self.compress = get_choice_opt(options, 'compress', ['', 'none', 'gz', 'bz2'], '') Lexer.__init__(self, **options) def get_tokens(self, text): if self.compress: if isinstance(text, str): text = text.encode('latin1') try: if self.compress == 'gz': import gzip text = gzip.decompress(text) elif self.compress == 'bz2': import bz2 text = bz2.decompress(text) except OSError: yield Error, text.decode('latin1') if isinstance(text, bytes): text = text.decode('latin1') # do not call Lexer.get_tokens() because stripping is not optional. text = text.strip('\n') + '\n' for i, t, v in self.get_tokens_unprocessed(text): yield t, v def get_tokens_unprocessed(self, text): length = 0 for match in line_re.finditer(text): try: ttypestr, val = match.group().rstrip().split('\t', 1) ttype = _ttype_cache.get(ttypestr) if not ttype: ttype = Token ttypes = ttypestr.split('.')[1:] for ttype_ in ttypes: if not ttype_ or not ttype_[0].isupper(): raise ValueError('malformed token name') ttype = getattr(ttype, ttype_) _ttype_cache[ttypestr] = ttype val = ast.literal_eval(val) if not isinstance(val, str): raise ValueError('expected str') except (SyntaxError, ValueError): val = match.group() ttype = Error yield length, ttype, val length += len(val) pygments-2.11.2/pygments/lexers/gsql.py0000755000175000017500000000726514165547207020043 0ustar carstencarsten""" pygments.lexers.gsql ~~~~~~~~~~~~~~~~~~~~ Lexers for TigerGraph GSQL graph query language :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, include, bygroups, using, this, words from pygments.token import Keyword, Punctuation, Comment, Operator, Name,\ String, Number, Whitespace, Token __all__ = ["GSQLLexer"] class GSQLLexer(RegexLexer): """ For `GSQL `_ queries (version 3.x). .. 
versionadded:: 2.10 """ name = 'GSQL' aliases = ['gsql'] filenames = ['*.gsql'] flags = re.MULTILINE | re.IGNORECASE tokens = { 'root': [ include('comment'), include('keywords'), include('clauses'), include('accums'), include('relations'), include('strings'), include('whitespace'), include('barewords'), include('operators'), ], 'comment': [ (r'\#.*', Comment.Single), (r'/\*(.|\n)*?\*/', Comment.Multiline), ], 'keywords': [ (words(( 'ACCUM', 'AND', 'ANY', 'API', 'AS', 'ASC', 'AVG', 'BAG', 'BATCH', 'BETWEEN', 'BOOL', 'BOTH', 'BREAK', 'BY', 'CASE', 'CATCH', 'COALESCE', 'COMPRESS', 'CONTINUE', 'COUNT', 'CREATE', 'DATETIME', 'DATETIME_ADD', 'DATETIME_SUB', 'DELETE', 'DESC', 'DISTRIBUTED', 'DO', 'DOUBLE', 'EDGE', 'ELSE', 'END', 'ESCAPE', 'EXCEPTION', 'FALSE', 'FILE', 'FILTER', 'FLOAT', 'FOREACH', 'FOR', 'FROM', 'GRAPH', 'GROUP', 'GSQL_INT_MAX', 'GSQL_INT_MIN', 'GSQL_UINT_MAX', 'HAVING', 'IF', 'IN', 'INSERT', 'INT', 'INTERPRET', 'INTERSECT', 'INTERVAL', 'INTO', 'IS', 'ISEMPTY', 'JSONARRAY', 'JSONOBJECT', 'LASTHOP', 'LEADING', 'LIKE', 'LIMIT', 'LIST', 'LOAD_ACCUM', 'LOG', 'MAP', 'MATCH', 'MAX', 'MIN', 'MINUS', 'NOT', 'NOW', 'NULL', 'OFFSET', 'OR', 'ORDER', 'PATH', 'PER', 'PINNED', 'POST_ACCUM', 'POST-ACCUM', 'PRIMARY_ID', 'PRINT', 'QUERY', 'RAISE', 'RANGE', 'REPLACE', 'RESET_COLLECTION_ACCUM', 'RETURN', 'RETURNS', 'RUN', 'SAMPLE', 'SELECT', 'SELECT_VERTEX', 'SET', 'SRC', 'STATIC', 'STRING', 'SUM', 'SYNTAX', 'TARGET', 'TAGSTGT', 'THEN', 'TO', 'TO_CSV', 'TO_DATETIME', 'TRAILING', 'TRIM', 'TRUE', 'TRY', 'TUPLE', 'TYPEDEF', 'UINT', 'UNION', 'UPDATE', 'VALUES', 'VERTEX', 'WHEN', 'WHERE', 'WHILE', 'WITH'), prefix=r'(?|<-', Operator), (r'[.*{}\[\]\<\>\_]', Punctuation), ], 'strings': [ (r'"([^"\\]|\\.)*"', String), (r'@{1,2}\w+', Name.Variable), ], 'whitespace': [ (r'\s+', Whitespace), ], 'barewords': [ (r'[a-z]\w*', Name), (r'(\d+\.\d+|\d+)', Number), ], 'operators': [ (r'\$|[^0-9|\/|\-](\-\=|\+\=|\*\=|\\\=|\=|\=\=|\=\=\=|\+|\-|\*|\\|\+\=|\>|\<)[^\>|\/]', Operator), (r'(\||\(|\)|\,|\;|\=|\-|\+|\*|\/|\>|\<|\:)', Operator), ], } pygments-2.11.2/pygments/lexers/sieve.py0000644000175000017500000000436114165547207020177 0ustar carstencarsten""" pygments.lexers.sieve ~~~~~~~~~~~~~~~~~~~~~ Lexer for Sieve file format. https://tools.ietf.org/html/rfc5228 https://tools.ietf.org/html/rfc5173 https://tools.ietf.org/html/rfc5229 https://tools.ietf.org/html/rfc5230 https://tools.ietf.org/html/rfc5232 https://tools.ietf.org/html/rfc5235 https://tools.ietf.org/html/rfc5429 https://tools.ietf.org/html/rfc8580 :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups from pygments.token import Comment, Name, Literal, String, Text, Punctuation, Keyword __all__ = ["SieveLexer"] class SieveLexer(RegexLexer): """ Lexer for sieve format. 
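    A small illustrative sample of the input format (standard RFC 5228
    constructs)::

        require ["fileinto"];
        if header :contains "subject" "lists" {
            fileinto "Lists";
        }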
""" name = 'Sieve' filenames = ['*.siv', '*.sieve'] aliases = ['sieve'] tokens = { 'root': [ (r'\s+', Text), (r'[();,{}\[\]]', Punctuation), # import: (r'(?i)require', Keyword.Namespace), # tags: (r'(?i)(:)(addresses|all|contains|content|create|copy|comparator|count|days|detail|domain|fcc|flags|from|handle|importance|is|localpart|length|lowerfirst|lower|matches|message|mime|options|over|percent|quotewildcard|raw|regex|specialuse|subject|text|under|upperfirst|upper|value)', bygroups(Name.Tag, Name.Tag)), # tokens: (r'(?i)(address|addflag|allof|anyof|body|discard|elsif|else|envelope|ereject|exists|false|fileinto|if|hasflag|header|keep|notify_method_capability|notify|not|redirect|reject|removeflag|setflag|size|spamtest|stop|string|true|vacation|virustest)', Name.Builtin), (r'(?i)set', Keyword.Declaration), # number: (r'([0-9.]+)([kmgKMG])?', bygroups(Literal.Number, Literal.Number)), # comment: (r'#.*$', Comment.Single), (r'/\*.*\*/', Comment.Multiline), # string: (r'"[^"]*?"', String), # text block: (r'text:', Name.Tag, 'text'), ], 'text': [ (r'[^.].*?\n', String), (r'^\.', Punctuation, "#pop"), ] } pygments-2.11.2/pygments/lexers/_stan_builtins.py0000644000175000017500000002433114165547207022100 0ustar carstencarsten""" pygments.lexers._stan_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This file contains the names of functions for Stan used by ``pygments.lexers.math.StanLexer. This is for Stan language version 2.17.0. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ KEYWORDS = ( 'break', 'continue', 'else', 'for', 'if', 'in', 'print', 'reject', 'return', 'while', ) TYPES = ( 'cholesky_factor_corr', 'cholesky_factor_cov', 'corr_matrix', 'cov_matrix', 'int', 'matrix', 'ordered', 'positive_ordered', 'real', 'row_vector', 'simplex', 'unit_vector', 'vector', 'void', ) FUNCTIONS = ( 'abs', 'acos', 'acosh', 'algebra_solver', 'append_array', 'append_col', 'append_row', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'bernoulli_cdf', 'bernoulli_lccdf', 'bernoulli_lcdf', 'bernoulli_logit_lpmf', 'bernoulli_logit_rng', 'bernoulli_lpmf', 'bernoulli_rng', 'bessel_first_kind', 'bessel_second_kind', 'beta_binomial_cdf', 'beta_binomial_lccdf', 'beta_binomial_lcdf', 'beta_binomial_lpmf', 'beta_binomial_rng', 'beta_cdf', 'beta_lccdf', 'beta_lcdf', 'beta_lpdf', 'beta_rng', 'binary_log_loss', 'binomial_cdf', 'binomial_coefficient_log', 'binomial_lccdf', 'binomial_lcdf', 'binomial_logit_lpmf', 'binomial_lpmf', 'binomial_rng', 'block', 'categorical_logit_lpmf', 'categorical_logit_rng', 'categorical_lpmf', 'categorical_rng', 'cauchy_cdf', 'cauchy_lccdf', 'cauchy_lcdf', 'cauchy_lpdf', 'cauchy_rng', 'cbrt', 'ceil', 'chi_square_cdf', 'chi_square_lccdf', 'chi_square_lcdf', 'chi_square_lpdf', 'chi_square_rng', 'cholesky_decompose', 'choose', 'col', 'cols', 'columns_dot_product', 'columns_dot_self', 'cos', 'cosh', 'cov_exp_quad', 'crossprod', 'csr_extract_u', 'csr_extract_v', 'csr_extract_w', 'csr_matrix_times_vector', 'csr_to_dense_matrix', 'cumulative_sum', 'determinant', 'diag_matrix', 'diag_post_multiply', 'diag_pre_multiply', 'diagonal', 'digamma', 'dims', 'dirichlet_lpdf', 'dirichlet_rng', 'distance', 'dot_product', 'dot_self', 'double_exponential_cdf', 'double_exponential_lccdf', 'double_exponential_lcdf', 'double_exponential_lpdf', 'double_exponential_rng', 'e', 'eigenvalues_sym', 'eigenvectors_sym', 'erf', 'erfc', 'exp', 'exp2', 'exp_mod_normal_cdf', 'exp_mod_normal_lccdf', 'exp_mod_normal_lcdf', 'exp_mod_normal_lpdf', 'exp_mod_normal_rng', 'expm1', 
'exponential_cdf', 'exponential_lccdf', 'exponential_lcdf', 'exponential_lpdf', 'exponential_rng', 'fabs', 'falling_factorial', 'fdim', 'floor', 'fma', 'fmax', 'fmin', 'fmod', 'frechet_cdf', 'frechet_lccdf', 'frechet_lcdf', 'frechet_lpdf', 'frechet_rng', 'gamma_cdf', 'gamma_lccdf', 'gamma_lcdf', 'gamma_lpdf', 'gamma_p', 'gamma_q', 'gamma_rng', 'gaussian_dlm_obs_lpdf', 'get_lp', 'gumbel_cdf', 'gumbel_lccdf', 'gumbel_lcdf', 'gumbel_lpdf', 'gumbel_rng', 'head', 'hypergeometric_lpmf', 'hypergeometric_rng', 'hypot', 'inc_beta', 'int_step', 'integrate_ode', 'integrate_ode_bdf', 'integrate_ode_rk45', 'inv', 'inv_chi_square_cdf', 'inv_chi_square_lccdf', 'inv_chi_square_lcdf', 'inv_chi_square_lpdf', 'inv_chi_square_rng', 'inv_cloglog', 'inv_gamma_cdf', 'inv_gamma_lccdf', 'inv_gamma_lcdf', 'inv_gamma_lpdf', 'inv_gamma_rng', 'inv_logit', 'inv_Phi', 'inv_sqrt', 'inv_square', 'inv_wishart_lpdf', 'inv_wishart_rng', 'inverse', 'inverse_spd', 'is_inf', 'is_nan', 'lbeta', 'lchoose', 'lgamma', 'lkj_corr_cholesky_lpdf', 'lkj_corr_cholesky_rng', 'lkj_corr_lpdf', 'lkj_corr_rng', 'lmgamma', 'lmultiply', 'log', 'log10', 'log1m', 'log1m_exp', 'log1m_inv_logit', 'log1p', 'log1p_exp', 'log2', 'log_determinant', 'log_diff_exp', 'log_falling_factorial', 'log_inv_logit', 'log_mix', 'log_rising_factorial', 'log_softmax', 'log_sum_exp', 'logistic_cdf', 'logistic_lccdf', 'logistic_lcdf', 'logistic_lpdf', 'logistic_rng', 'logit', 'lognormal_cdf', 'lognormal_lccdf', 'lognormal_lcdf', 'lognormal_lpdf', 'lognormal_rng', 'machine_precision', 'matrix_exp', 'max', 'mdivide_left_spd', 'mdivide_left_tri_low', 'mdivide_right_spd', 'mdivide_right_tri_low', 'mean', 'min', 'modified_bessel_first_kind', 'modified_bessel_second_kind', 'multi_gp_cholesky_lpdf', 'multi_gp_lpdf', 'multi_normal_cholesky_lpdf', 'multi_normal_cholesky_rng', 'multi_normal_lpdf', 'multi_normal_prec_lpdf', 'multi_normal_rng', 'multi_student_t_lpdf', 'multi_student_t_rng', 'multinomial_lpmf', 'multinomial_rng', 'multiply_log', 'multiply_lower_tri_self_transpose', 'neg_binomial_2_cdf', 'neg_binomial_2_lccdf', 'neg_binomial_2_lcdf', 'neg_binomial_2_log_lpmf', 'neg_binomial_2_log_rng', 'neg_binomial_2_lpmf', 'neg_binomial_2_rng', 'neg_binomial_cdf', 'neg_binomial_lccdf', 'neg_binomial_lcdf', 'neg_binomial_lpmf', 'neg_binomial_rng', 'negative_infinity', 'normal_cdf', 'normal_lccdf', 'normal_lcdf', 'normal_lpdf', 'normal_rng', 'not_a_number', 'num_elements', 'ordered_logistic_lpmf', 'ordered_logistic_rng', 'owens_t', 'pareto_cdf', 'pareto_lccdf', 'pareto_lcdf', 'pareto_lpdf', 'pareto_rng', 'pareto_type_2_cdf', 'pareto_type_2_lccdf', 'pareto_type_2_lcdf', 'pareto_type_2_lpdf', 'pareto_type_2_rng', 'Phi', 'Phi_approx', 'pi', 'poisson_cdf', 'poisson_lccdf', 'poisson_lcdf', 'poisson_log_lpmf', 'poisson_log_rng', 'poisson_lpmf', 'poisson_rng', 'positive_infinity', 'pow', 'print', 'prod', 'qr_Q', 'qr_R', 'quad_form', 'quad_form_diag', 'quad_form_sym', 'rank', 'rayleigh_cdf', 'rayleigh_lccdf', 'rayleigh_lcdf', 'rayleigh_lpdf', 'rayleigh_rng', 'reject', 'rep_array', 'rep_matrix', 'rep_row_vector', 'rep_vector', 'rising_factorial', 'round', 'row', 'rows', 'rows_dot_product', 'rows_dot_self', 'scaled_inv_chi_square_cdf', 'scaled_inv_chi_square_lccdf', 'scaled_inv_chi_square_lcdf', 'scaled_inv_chi_square_lpdf', 'scaled_inv_chi_square_rng', 'sd', 'segment', 'sin', 'singular_values', 'sinh', 'size', 'skew_normal_cdf', 'skew_normal_lccdf', 'skew_normal_lcdf', 'skew_normal_lpdf', 'skew_normal_rng', 'softmax', 'sort_asc', 'sort_desc', 'sort_indices_asc', 'sort_indices_desc', 'sqrt', 
'sqrt2', 'square', 'squared_distance', 'step', 'student_t_cdf', 'student_t_lccdf', 'student_t_lcdf', 'student_t_lpdf', 'student_t_rng', 'sub_col', 'sub_row', 'sum', 'tail', 'tan', 'tanh', 'target', 'tcrossprod', 'tgamma', 'to_array_1d', 'to_array_2d', 'to_matrix', 'to_row_vector', 'to_vector', 'trace', 'trace_gen_quad_form', 'trace_quad_form', 'trigamma', 'trunc', 'uniform_cdf', 'uniform_lccdf', 'uniform_lcdf', 'uniform_lpdf', 'uniform_rng', 'variance', 'von_mises_lpdf', 'von_mises_rng', 'weibull_cdf', 'weibull_lccdf', 'weibull_lcdf', 'weibull_lpdf', 'weibull_rng', 'wiener_lpdf', 'wishart_lpdf', 'wishart_rng', ) DISTRIBUTIONS = ( 'bernoulli', 'bernoulli_logit', 'beta', 'beta_binomial', 'binomial', 'binomial_logit', 'categorical', 'categorical_logit', 'cauchy', 'chi_square', 'dirichlet', 'double_exponential', 'exp_mod_normal', 'exponential', 'frechet', 'gamma', 'gaussian_dlm_obs', 'gumbel', 'hypergeometric', 'inv_chi_square', 'inv_gamma', 'inv_wishart', 'lkj_corr', 'lkj_corr_cholesky', 'logistic', 'lognormal', 'multi_gp', 'multi_gp_cholesky', 'multi_normal', 'multi_normal_cholesky', 'multi_normal_prec', 'multi_student_t', 'multinomial', 'neg_binomial', 'neg_binomial_2', 'neg_binomial_2_log', 'normal', 'ordered_logistic', 'pareto', 'pareto_type_2', 'poisson', 'poisson_log', 'rayleigh', 'scaled_inv_chi_square', 'skew_normal', 'student_t', 'uniform', 'von_mises', 'weibull', 'wiener', 'wishart', ) RESERVED = ( 'alignas', 'alignof', 'and', 'and_eq', 'asm', 'auto', 'bitand', 'bitor', 'bool', 'break', 'case', 'catch', 'char', 'char16_t', 'char32_t', 'class', 'compl', 'const', 'const_cast', 'constexpr', 'continue', 'decltype', 'default', 'delete', 'do', 'double', 'dynamic_cast', 'else', 'enum', 'explicit', 'export', 'extern', 'false', 'float', 'for', 'friend', 'fvar', 'goto', 'if', 'in', 'inline', 'int', 'long', 'lp__', 'mutable', 'namespace', 'new', 'noexcept', 'not', 'not_eq', 'nullptr', 'operator', 'or', 'or_eq', 'private', 'protected', 'public', 'register', 'reinterpret_cast', 'repeat', 'return', 'short', 'signed', 'sizeof', 'STAN_MAJOR', 'STAN_MATH_MAJOR', 'STAN_MATH_MINOR', 'STAN_MATH_PATCH', 'STAN_MINOR', 'STAN_PATCH', 'static', 'static_assert', 'static_cast', 'struct', 'switch', 'template', 'then', 'this', 'thread_local', 'throw', 'true', 'try', 'typedef', 'typeid', 'typename', 'union', 'unsigned', 'until', 'using', 'var', 'virtual', 'void', 'volatile', 'wchar_t', 'while', 'xor', 'xor_eq', ) pygments-2.11.2/pygments/lexers/go.py0000644000175000017500000000723614165547207017475 0ustar carstencarsten""" pygments.lexers.go ~~~~~~~~~~~~~~~~~~ Lexers for the Google Go language. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from pygments.lexer import RegexLexer, bygroups, words from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Punctuation, Whitespace __all__ = ['GoLexer'] class GoLexer(RegexLexer): """ For `Go `_ source. .. 
versionadded:: 1.2 """ name = 'Go' filenames = ['*.go'] aliases = ['go', 'golang'] mimetypes = ['text/x-gosrc'] flags = re.MULTILINE | re.UNICODE tokens = { 'root': [ (r'\n', Whitespace), (r'\s+', Whitespace), (r'(\\)(\n)', bygroups(Text, Whitespace)), # line continuations (r'//(.*?)$', Comment.Single), (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), (r'(import|package)\b', Keyword.Namespace), (r'(var|func|struct|map|chan|type|interface|const)\b', Keyword.Declaration), (words(( 'break', 'default', 'select', 'case', 'defer', 'go', 'else', 'goto', 'switch', 'fallthrough', 'if', 'range', 'continue', 'for', 'return'), suffix=r'\b'), Keyword), (r'(true|false|iota|nil)\b', Keyword.Constant), # It seems the builtin types aren't actually keywords, but # can be used as functions. So we need two declarations. (words(( 'uint', 'uint8', 'uint16', 'uint32', 'uint64', 'int', 'int8', 'int16', 'int32', 'int64', 'float', 'float32', 'float64', 'complex64', 'complex128', 'byte', 'rune', 'string', 'bool', 'error', 'uintptr', 'print', 'println', 'panic', 'recover', 'close', 'complex', 'real', 'imag', 'len', 'cap', 'append', 'copy', 'delete', 'new', 'make'), suffix=r'\b(\()'), bygroups(Name.Builtin, Punctuation)), (words(( 'uint', 'uint8', 'uint16', 'uint32', 'uint64', 'int', 'int8', 'int16', 'int32', 'int64', 'float', 'float32', 'float64', 'complex64', 'complex128', 'byte', 'rune', 'string', 'bool', 'error', 'uintptr'), suffix=r'\b'), Keyword.Type), # imaginary_lit (r'\d+i', Number), (r'\d+\.\d*([Ee][-+]\d+)?i', Number), (r'\.\d+([Ee][-+]\d+)?i', Number), (r'\d+[Ee][-+]\d+i', Number), # float_lit (r'\d+(\.\d+[eE][+\-]?\d+|' r'\.\d*|[eE][+\-]?\d+)', Number.Float), (r'\.\d+([eE][+\-]?\d+)?', Number.Float), # int_lit # -- octal_lit (r'0[0-7]+', Number.Oct), # -- hex_lit (r'0[xX][0-9a-fA-F]+', Number.Hex), # -- decimal_lit (r'(0|[1-9][0-9]*)', Number.Integer), # char_lit (r"""'(\\['"\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}""" r"""|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])'""", String.Char), # StringLiteral # -- raw_string_lit (r'`[^`]*`', String), # -- interpreted_string_lit (r'"(\\\\|\\[^\\]|[^"\\])*"', String), # Tokens (r'(<<=|>>=|<<|>>|<=|>=|&\^=|&\^|\+=|-=|\*=|/=|%=|&=|\|=|&&|\|\|' r'|<-|\+\+|--|==|!=|:=|\.\.\.|[+\-*/%&])', Operator), (r'[|^<>=!()\[\]{}.,;:]', Punctuation), # identifier (r'[^\W\d]\w*', Name.Other), ] } pygments-2.11.2/pygments/lexers/cddl.py0000644000175000017500000001231414165547207017767 0ustar carstencarsten""" pygments.lexers.cddl ~~~~~~~~~~~~~~~~~~~~ Lexer for the Concise data definition language (CDDL), a notational convention to express CBOR and JSON data structures. More information: https://datatracker.ietf.org/doc/rfc8610/ :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re __all__ = ['CddlLexer'] from pygments.lexer import RegexLexer, bygroups, include, words from pygments.token import ( Comment, Error, Keyword, Name, Number, Operator, Punctuation, String, Text, Whitespace, ) class CddlLexer(RegexLexer): """ Lexer for CDDL definitions. .. 
versionadded:: 2.8 """ name = "CDDL" aliases = ["cddl"] filenames = ["*.cddl"] mimetypes = ["text/x-cddl"] _prelude_types = [ "any", "b64legacy", "b64url", "bigfloat", "bigint", "bignint", "biguint", "bool", "bstr", "bytes", "cbor-any", "decfrac", "eb16", "eb64legacy", "eb64url", "encoded-cbor", "false", "float", "float16", "float16-32", "float32", "float32-64", "float64", "int", "integer", "mime-message", "nil", "nint", "null", "number", "regexp", "tdate", "text", "time", "true", "tstr", "uint", "undefined", "unsigned", "uri", ] _controls = [ ".and", ".bits", ".cbor", ".cborseq", ".default", ".eq", ".ge", ".gt", ".le", ".lt", ".ne", ".regexp", ".size", ".within", ] _re_id = ( r"[$@A-Z_a-z]" r"(?:[\-\.]+(?=[$@0-9A-Z_a-z])|[$@0-9A-Z_a-z])*" ) # While the spec reads more like "an int must not start with 0" we use a # lookahead here that says "after a 0 there must be no digit". This makes the # '0' the invalid character in '01', which looks nicer when highlighted. _re_uint = r"(?:0b[01]+|0x[0-9a-fA-F]+|[1-9]\d*|0(?!\d))" _re_int = r"-?" + _re_uint flags = re.UNICODE | re.MULTILINE tokens = { "commentsandwhitespace": [(r"\s+", Whitespace), (r";.+$", Comment.Single)], "root": [ include("commentsandwhitespace"), # tag types (r"#(\d\.{uint})?".format(uint=_re_uint), Keyword.Type), # type or any # occurence ( r"({uint})?(\*)({uint})?".format(uint=_re_uint), bygroups(Number, Operator, Number), ), (r"\?|\+", Operator), # occurrence (r"\^", Operator), # cuts (r"(\.\.\.|\.\.)", Operator), # rangeop (words(_controls, suffix=r"\b"), Operator.Word), # ctlops # into choice op (r"&(?=\s*({groupname}|\())".format(groupname=_re_id), Operator), (r"~(?=\s*{})".format(_re_id), Operator), # unwrap op (r"//|/(?!/)", Operator), # double und single slash (r"=>|/==|/=|=", Operator), (r"[\[\]{}\(\),<>:]", Punctuation), # Bytestrings (r"(b64)(')", bygroups(String.Affix, String.Single), "bstrb64url"), (r"(h)(')", bygroups(String.Affix, String.Single), "bstrh"), (r"'", String.Single, "bstr"), # Barewords as member keys (must be matched before values, types, typenames, # groupnames). # Token type is String as barewords are always interpreted as such. ( r"({bareword})(\s*)(:)".format(bareword=_re_id), bygroups(String, Whitespace, Punctuation), ), # predefined types ( words(_prelude_types, prefix=r"(?![\-_$@])\b", suffix=r"\b(?![\-_$@])"), Name.Builtin, ), # user-defined groupnames, typenames (_re_id, Name.Class), # values (r"0b[01]+", Number.Bin), (r"0o[0-7]+", Number.Oct), (r"0x[0-9a-fA-F]+(\.[0-9a-fA-F]+)?p[+-]?\d+", Number.Hex), # hexfloat (r"0x[0-9a-fA-F]+", Number.Hex), # hex # Float ( r"{int}(?=(\.\d|e[+-]?\d))(?:\.\d+)?(?:e[+-]?\d+)?".format(int=_re_int), Number.Float, ), # Int (_re_int, Number.Integer), (r'"(\\\\|\\"|[^"])*"', String.Double), ], "bstrb64url": [ (r"'", String.Single, "#pop"), include("commentsandwhitespace"), (r"\\.", String.Escape), (r"[0-9a-zA-Z\-_=]+", String.Single), (r".", Error), # (r";.+$", Token.Other), ], "bstrh": [ (r"'", String.Single, "#pop"), include("commentsandwhitespace"), (r"\\.", String.Escape), (r"[0-9a-fA-F]+", String.Single), (r".", Error), ], "bstr": [ (r"'", String.Single, "#pop"), (r"\\.", String.Escape), (r"[^'\\]+", String.Single), ], } pygments-2.11.2/pygments/lexers/textfmts.py0000644000175000017500000003546614165547207020754 0ustar carstencarsten""" pygments.lexers.textfmts ~~~~~~~~~~~~~~~~~~~~~~~~ Lexers for various text formats. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import re from pygments.lexers import guess_lexer, get_lexer_by_name from pygments.lexer import RegexLexer, bygroups, default, include from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ Number, Generic, Literal, Punctuation from pygments.util import ClassNotFound __all__ = ['IrcLogsLexer', 'TodotxtLexer', 'HttpLexer', 'GettextLexer', 'NotmuchLexer', 'KernelLogLexer'] class IrcLogsLexer(RegexLexer): """ Lexer for IRC logs in *irssi*, *xchat* or *weechat* style. """ name = 'IRC logs' aliases = ['irc'] filenames = ['*.weechatlog'] mimetypes = ['text/x-irclog'] flags = re.VERBOSE | re.MULTILINE timestamp = r""" ( # irssi / xchat and others (?: \[|\()? # Opening bracket or paren for the timestamp (?: # Timestamp (?: (?:\d{1,4} [-/])* # Date as - or /-separated groups of digits (?:\d{1,4}) [T ])? # Date/time separator: T or space (?: \d?\d [:.])* # Time as :/.-separated groups of 1 or 2 digits (?: \d?\d) ) (?: \]|\))?\s+ # Closing bracket or paren for the timestamp | # weechat \d{4}\s\w{3}\s\d{2}\s # Date \d{2}:\d{2}:\d{2}\s+ # Time + Whitespace | # xchat \w{3}\s\d{2}\s # Date \d{2}:\d{2}:\d{2}\s+ # Time + Whitespace )? """ tokens = { 'root': [ # log start/end (r'^\*\*\*\*(.*)\*\*\*\*$', Comment), # hack ("^" + timestamp + r'(\s*<[^>]*>\s*)$', bygroups(Comment.Preproc, Name.Tag)), # normal msgs ("^" + timestamp + r""" (\s*<.*?>\s*) # Nick """, bygroups(Comment.Preproc, Name.Tag), 'msg'), # /me msgs ("^" + timestamp + r""" (\s*[*]\s+) # Star (\S+\s+.*?\n) # Nick + rest of message """, bygroups(Comment.Preproc, Keyword, Generic.Inserted)), # join/part msgs ("^" + timestamp + r""" (\s*(?:\*{3}|?)\s*) # Star(s) or symbols (\S+\s+) # Nick + Space (.*?\n) # Rest of message """, bygroups(Comment.Preproc, Keyword, String, Comment)), (r"^.*?\n", Text), ], 'msg': [ (r"\S+:(?!//)", Name.Attribute), # Prefix (r".*\n", Text, '#pop'), ], } class GettextLexer(RegexLexer): """ Lexer for Gettext catalog files. .. versionadded:: 0.9 """ name = 'Gettext Catalog' aliases = ['pot', 'po'] filenames = ['*.pot', '*.po'] mimetypes = ['application/x-gettext', 'text/x-gettext', 'text/gettext'] tokens = { 'root': [ (r'^#,\s.*?$', Keyword.Type), (r'^#:\s.*?$', Keyword.Declaration), # (r'^#$', Comment), (r'^(#|#\.\s|#\|\s|#~\s|#\s).*$', Comment.Single), (r'^(")([A-Za-z-]+:)(.*")$', bygroups(String, Name.Property, String)), (r'^".*"$', String), (r'^(msgid|msgid_plural|msgstr|msgctxt)(\s+)(".*")$', bygroups(Name.Variable, Text, String)), (r'^(msgstr\[)(\d)(\])(\s+)(".*")$', bygroups(Name.Variable, Number.Integer, Name.Variable, Text, String)), ] } class HttpLexer(RegexLexer): """ Lexer for HTTP sessions. .. 
versionadded:: 1.5 """ name = 'HTTP' aliases = ['http'] flags = re.DOTALL def get_tokens_unprocessed(self, text, stack=('root',)): """Reset the content-type state.""" self.content_type = None return RegexLexer.get_tokens_unprocessed(self, text, stack) def header_callback(self, match): if match.group(1).lower() == 'content-type': content_type = match.group(5).strip() if ';' in content_type: content_type = content_type[:content_type.find(';')].strip() self.content_type = content_type yield match.start(1), Name.Attribute, match.group(1) yield match.start(2), Text, match.group(2) yield match.start(3), Operator, match.group(3) yield match.start(4), Text, match.group(4) yield match.start(5), Literal, match.group(5) yield match.start(6), Text, match.group(6) def continuous_header_callback(self, match): yield match.start(1), Text, match.group(1) yield match.start(2), Literal, match.group(2) yield match.start(3), Text, match.group(3) def content_callback(self, match): content_type = getattr(self, 'content_type', None) content = match.group() offset = match.start() if content_type: from pygments.lexers import get_lexer_for_mimetype possible_lexer_mimetypes = [content_type] if '+' in content_type: # application/calendar+xml can be treated as application/xml # if there's not a better match. general_type = re.sub(r'^(.*)/.*\+(.*)$', r'\1/\2', content_type) possible_lexer_mimetypes.append(general_type) for i in possible_lexer_mimetypes: try: lexer = get_lexer_for_mimetype(i) except ClassNotFound: pass else: for idx, token, value in lexer.get_tokens_unprocessed(content): yield offset + idx, token, value return yield offset, Text, content tokens = { 'root': [ (r'(GET|POST|PUT|DELETE|HEAD|OPTIONS|TRACE|PATCH)( +)([^ ]+)( +)' r'(HTTP)(/)(1\.[01]|2(?:\.0)?|3)(\r?\n|\Z)', bygroups(Name.Function, Text, Name.Namespace, Text, Keyword.Reserved, Operator, Number, Text), 'headers'), (r'(HTTP)(/)(1\.[01]|2(?:\.0)?|3)( +)(\d{3})(?:( +)([^\r\n]*))?(\r?\n|\Z)', bygroups(Keyword.Reserved, Operator, Number, Text, Number, Text, Name.Exception, Text), 'headers'), ], 'headers': [ (r'([^\s:]+)( *)(:)( *)([^\r\n]+)(\r?\n|\Z)', header_callback), (r'([\t ]+)([^\r\n]+)(\r?\n|\Z)', continuous_header_callback), (r'\r?\n', Text, 'content') ], 'content': [ (r'.+', content_callback) ] } def analyse_text(text): return text.startswith(('GET /', 'POST /', 'PUT /', 'DELETE /', 'HEAD /', 'OPTIONS /', 'TRACE /', 'PATCH /')) class TodotxtLexer(RegexLexer): """ Lexer for `Todo.txt `_ todo list format. .. versionadded:: 2.0 """ name = 'Todotxt' aliases = ['todotxt'] # *.todotxt is not a standard extension for Todo.txt files; including it # makes testing easier, and also makes autodetecting file type easier. filenames = ['todo.txt', '*.todotxt'] mimetypes = ['text/x-todo'] # Aliases mapping standard token types of Todo.txt format concepts CompleteTaskText = Operator # Chosen to de-emphasize complete tasks IncompleteTaskText = Text # Incomplete tasks should look like plain text # Priority should have most emphasis to indicate importance of tasks Priority = Generic.Heading # Dates should have next most emphasis because time is important Date = Generic.Subheading # Project and context should have equal weight, and be in different colors Project = Generic.Error Context = String # If tag functionality is added, it should have the same weight as Project # and Context, and a different color. Generic.Traceback would work well. 
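    # For instance, a hypothetical key:value tag rule might then read
    #     (r'\S+:\S+', Generic.Traceback)
    # mirroring the project/context rules defined below.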
# Regex patterns for building up rules; dates, priorities, projects, and # contexts are all atomic # TODO: Make date regex more ISO 8601 compliant date_regex = r'\d{4,}-\d{2}-\d{2}' priority_regex = r'\([A-Z]\)' project_regex = r'\+\S+' context_regex = r'@\S+' # Compound regex expressions complete_one_date_regex = r'(x )(' + date_regex + r')' complete_two_date_regex = (complete_one_date_regex + r'( )(' + date_regex + r')') priority_date_regex = r'(' + priority_regex + r')( )(' + date_regex + r')' tokens = { # Should parse starting at beginning of line; each line is a task 'root': [ # Complete task entry points: two total: # 1. Complete task with two dates (complete_two_date_regex, bygroups(CompleteTaskText, Date, CompleteTaskText, Date), 'complete'), # 2. Complete task with one date (complete_one_date_regex, bygroups(CompleteTaskText, Date), 'complete'), # Incomplete task entry points: six total: # 1. Priority plus date (priority_date_regex, bygroups(Priority, IncompleteTaskText, Date), 'incomplete'), # 2. Priority only (priority_regex, Priority, 'incomplete'), # 3. Leading date (date_regex, Date, 'incomplete'), # 4. Leading context (context_regex, Context, 'incomplete'), # 5. Leading project (project_regex, Project, 'incomplete'), # 6. Non-whitespace catch-all (r'\S+', IncompleteTaskText, 'incomplete'), ], # Parse a complete task 'complete': [ # Newline indicates end of task, should return to root (r'\s*\n', CompleteTaskText, '#pop'), # Tokenize contexts and projects (context_regex, Context), (project_regex, Project), # Tokenize non-whitespace text (r'\S+', CompleteTaskText), # Tokenize whitespace not containing a newline (r'\s+', CompleteTaskText), ], # Parse an incomplete task 'incomplete': [ # Newline indicates end of task, should return to root (r'\s*\n', IncompleteTaskText, '#pop'), # Tokenize contexts and projects (context_regex, Context), (project_regex, Project), # Tokenize non-whitespace text (r'\S+', IncompleteTaskText), # Tokenize whitespace not containing a newline (r'\s+', IncompleteTaskText), ], } class NotmuchLexer(RegexLexer): """ For `Notmuch `_ email text format. .. versionadded:: 2.5 Additional options accepted: `body_lexer` If given, highlight the contents of the message body with the specified lexer, else guess it according to the body content (default: ``None``). 
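    For instance, to force Python highlighting of message bodies (a usage
    sketch)::

        from pygments.lexers.textfmts import NotmuchLexer
        lexer = NotmuchLexer(body_lexer='python')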
""" name = 'Notmuch' aliases = ['notmuch'] def _highlight_code(self, match): code = match.group(1) try: if self.body_lexer: lexer = get_lexer_by_name(self.body_lexer) else: lexer = guess_lexer(code.strip()) except ClassNotFound: lexer = get_lexer_by_name('text') yield from lexer.get_tokens_unprocessed(code) tokens = { 'root': [ (r'\fmessage\{\s*', Keyword, ('message', 'message-attr')), ], 'message-attr': [ (r'(\s*id:\s*)(\S+)', bygroups(Name.Attribute, String)), (r'(\s*(?:depth|match|excluded):\s*)(\d+)', bygroups(Name.Attribute, Number.Integer)), (r'(\s*filename:\s*)(.+\n)', bygroups(Name.Attribute, String)), default('#pop'), ], 'message': [ (r'\fmessage\}\n', Keyword, '#pop'), (r'\fheader\{\n', Keyword, 'header'), (r'\fbody\{\n', Keyword, 'body'), ], 'header': [ (r'\fheader\}\n', Keyword, '#pop'), (r'((?:Subject|From|To|Cc|Date):\s*)(.*\n)', bygroups(Name.Attribute, String)), (r'(.*)(\s*\(.*\))(\s*\(.*\)\n)', bygroups(Generic.Strong, Literal, Name.Tag)), ], 'body': [ (r'\fpart\{\n', Keyword, 'part'), (r'\f(part|attachment)\{\s*', Keyword, ('part', 'part-attr')), (r'\fbody\}\n', Keyword, '#pop'), ], 'part-attr': [ (r'(ID:\s*)(\d+)', bygroups(Name.Attribute, Number.Integer)), (r'(,\s*)((?:Filename|Content-id):\s*)([^,]+)', bygroups(Punctuation, Name.Attribute, String)), (r'(,\s*)(Content-type:\s*)(.+\n)', bygroups(Punctuation, Name.Attribute, String)), default('#pop'), ], 'part': [ (r'\f(?:part|attachment)\}\n', Keyword, '#pop'), (r'\f(?:part|attachment)\{\s*', Keyword, ('#push', 'part-attr')), (r'^Non-text part: .*\n', Comment), (r'(?s)(.*?(?=\f(?:part|attachment)\}\n))', _highlight_code), ], } def analyse_text(text): return 1.0 if text.startswith('\fmessage{') else 0.0 def __init__(self, **options): self.body_lexer = options.get('body_lexer', None) RegexLexer.__init__(self, **options) class KernelLogLexer(RegexLexer): """ For Linux Kernel log ("dmesg") output. .. versionadded:: 2.6 """ name = 'Kernel log' aliases = ['kmsg', 'dmesg'] filenames = ['*.kmsg', '*.dmesg'] tokens = { 'root': [ (r'^[^:]+:debug : (?=\[)', Text, 'debug'), (r'^[^:]+:info : (?=\[)', Text, 'info'), (r'^[^:]+:warn : (?=\[)', Text, 'warn'), (r'^[^:]+:notice: (?=\[)', Text, 'warn'), (r'^[^:]+:err : (?=\[)', Text, 'error'), (r'^[^:]+:crit : (?=\[)', Text, 'error'), (r'^(?=\[)', Text, 'unknown'), ], 'unknown': [ (r'^(?=.+(warning|notice|audit|deprecated))', Text, 'warn'), (r'^(?=.+(error|critical|fail|Bug))', Text, 'error'), default('info'), ], 'base': [ (r'\[[0-9. ]+\] ', Number), (r'(?<=\] ).+?:', Keyword), (r'\n', Text, '#pop'), ], 'debug': [ include('base'), (r'.+\n', Comment, '#pop') ], 'info': [ include('base'), (r'.+\n', Text, '#pop') ], 'warn': [ include('base'), (r'.+\n', Generic.Strong, '#pop') ], 'error': [ include('base'), (r'.+\n', Generic.Error, '#pop') ] } pygments-2.11.2/pygments/lexers/arrow.py0000644000175000017500000000674214165547207020223 0ustar carstencarsten""" pygments.lexers.arrow ~~~~~~~~~~~~~~~~~~~~~ Lexer for Arrow. :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.lexer import RegexLexer, bygroups, default, include from pygments.token import Text, Operator, Keyword, Punctuation, Name, \ String, Number, Whitespace __all__ = ['ArrowLexer'] TYPES = r'\b(int|bool|char)((?:\[\])*)(?=\s+)' IDENT = r'([a-zA-Z_][a-zA-Z0-9_]*)' DECL = TYPES + r'(\s+)' + IDENT class ArrowLexer(RegexLexer): """ Lexer for Arrow: https://pypi.org/project/py-arrow-lang/ .. 
versionadded:: 2.7 """ name = 'Arrow' aliases = ['arrow'] filenames = ['*.arw'] tokens = { 'root': [ (r'\s+', Whitespace), (r'^[|\s]+', Punctuation), include('blocks'), include('statements'), include('expressions'), ], 'blocks': [ (r'(function)(\n+)(/-->)(\s*)' + DECL + # 4 groups r'(\()', bygroups( Keyword.Reserved, Whitespace, Punctuation, Whitespace, Keyword.Type, Punctuation, Whitespace, Name.Function, Punctuation ), 'fparams'), (r'/-->$|\\-->$|/--<|\\--<|\^', Punctuation), ], 'statements': [ (DECL, bygroups(Keyword.Type, Punctuation, Text, Name.Variable)), (r'\[', Punctuation, 'index'), (r'=', Operator), (r'require|main', Keyword.Reserved), (r'print', Keyword.Reserved, 'print'), ], 'expressions': [ (r'\s+', Whitespace), (r'[0-9]+', Number.Integer), (r'true|false', Keyword.Constant), (r"'", String.Char, 'char'), (r'"', String.Double, 'string'), (r'\{', Punctuation, 'array'), (r'==|!=|<|>|\+|-|\*|/|%', Operator), (r'and|or|not|length', Operator.Word), (r'(input)(\s+)(int|char\[\])', bygroups( Keyword.Reserved, Whitespace, Keyword.Type )), (IDENT + r'(\()', bygroups( Name.Function, Punctuation ), 'fargs'), (IDENT, Name.Variable), (r'\[', Punctuation, 'index'), (r'\(', Punctuation, 'expressions'), (r'\)', Punctuation, '#pop'), ], 'print': [ include('expressions'), (r',', Punctuation), default('#pop'), ], 'fparams': [ (DECL, bygroups(Keyword.Type, Punctuation, Whitespace, Name.Variable)), (r',', Punctuation), (r'\)', Punctuation, '#pop'), ], 'escape': [ (r'\\(["\\/abfnrtv]|[0-9]{1,3}|x[0-9a-fA-F]{2}|u[0-9a-fA-F]{4})', String.Escape), ], 'char': [ (r"'", String.Char, '#pop'), include('escape'), (r"[^'\\]", String.Char), ], 'string': [ (r'"', String.Double, '#pop'), include('escape'), (r'[^"\\]+', String.Double), ], 'array': [ include('expressions'), (r'\}', Punctuation, '#pop'), (r',', Punctuation), ], 'fargs': [ include('expressions'), (r'\)', Punctuation, '#pop'), (r',', Punctuation), ], 'index': [ include('expressions'), (r'\]', Punctuation, '#pop'), ], } pygments-2.11.2/pygments/lexers/_csound_builtins.py0000644000175000017500000004353614165547207022436 0ustar carstencarsten""" pygments.lexers._csound_builtins ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. :license: BSD, see LICENSE for details. """ REMOVED_OPCODES = set(''' OSCsendA beadsynt beosc buchla getrowlin lua_exec lua_iaopcall lua_iaopcall_off lua_ikopcall lua_ikopcall_off lua_iopcall lua_iopcall_off lua_opdef mp3scal_check mp3scal_load mp3scal_load2 mp3scal_play mp3scal_play2 pvsgendy socksend_k signalflowgraph sumTableFilter systime tabrowlin vbap1move '''.split()) # Opcodes in Csound 6.16.0 using: # python3 -c " # import re # from subprocess import Popen, PIPE # output = Popen(['csound', '--list-opcodes0'], stderr=PIPE, text=True).communicate()[1] # opcodes = output[re.search(r'^\$', output, re.M).end() : re.search(r'^\d+ opcodes\$', output, re.M).start()].split() # output = Popen(['csound', '--list-opcodes2'], stderr=PIPE, text=True).communicate()[1] # all_opcodes = output[re.search(r'^\$', output, re.M).end() : re.search(r'^\d+ opcodes\$', output, re.M).start()].split() # deprecated_opcodes = [opcode for opcode in all_opcodes if opcode not in opcodes] # # Remove opcodes that csound.py treats as keywords. 
# keyword_opcodes = [ # 'cggoto', # https://csound.com/docs/manual/cggoto.html # 'cigoto', # https://csound.com/docs/manual/cigoto.html # 'cingoto', # (undocumented) # 'ckgoto', # https://csound.com/docs/manual/ckgoto.html # 'cngoto', # https://csound.com/docs/manual/cngoto.html # 'cnkgoto', # (undocumented) # 'endin', # https://csound.com/docs/manual/endin.html # 'endop', # https://csound.com/docs/manual/endop.html # 'goto', # https://csound.com/docs/manual/goto.html # 'igoto', # https://csound.com/docs/manual/igoto.html # 'instr', # https://csound.com/docs/manual/instr.html # 'kgoto', # https://csound.com/docs/manual/kgoto.html # 'loop_ge', # https://csound.com/docs/manual/loop_ge.html # 'loop_gt', # https://csound.com/docs/manual/loop_gt.html # 'loop_le', # https://csound.com/docs/manual/loop_le.html # 'loop_lt', # https://csound.com/docs/manual/loop_lt.html # 'opcode', # https://csound.com/docs/manual/opcode.html # 'reinit', # https://csound.com/docs/manual/reinit.html # 'return', # https://csound.com/docs/manual/return.html # 'rireturn', # https://csound.com/docs/manual/rireturn.html # 'rigoto', # https://csound.com/docs/manual/rigoto.html # 'tigoto', # https://csound.com/docs/manual/tigoto.html # 'timout' # https://csound.com/docs/manual/timout.html # ] # opcodes = [opcode for opcode in opcodes if opcode not in keyword_opcodes] # newline = '\n' # print(f'''OPCODES = set(\''' # {newline.join(opcodes)} # \'''.split()) # # DEPRECATED_OPCODES = set(\''' # {newline.join(deprecated_opcodes)} # \'''.split()) # ''') # " OPCODES = set(''' ATSadd ATSaddnz ATSbufread ATScross ATSinfo ATSinterpread ATSpartialtap ATSread ATSreadnz ATSsinnoi FLbox FLbutBank FLbutton FLcloseButton FLcolor FLcolor2 FLcount FLexecButton FLgetsnap FLgroup FLgroupEnd FLgroup_end FLhide FLhvsBox FLhvsBoxSetValue FLjoy FLkeyIn FLknob FLlabel FLloadsnap FLmouse FLpack FLpackEnd FLpack_end FLpanel FLpanelEnd FLpanel_end FLprintk FLprintk2 FLroller FLrun FLsavesnap FLscroll FLscrollEnd FLscroll_end FLsetAlign FLsetBox FLsetColor FLsetColor2 FLsetFont FLsetPosition FLsetSize FLsetSnapGroup FLsetText FLsetTextColor FLsetTextSize FLsetTextType FLsetVal FLsetVal_i FLsetVali FLsetsnap FLshow FLslidBnk FLslidBnk2 FLslidBnk2Set FLslidBnk2Setk FLslidBnkGetHandle FLslidBnkSet FLslidBnkSetk FLslider FLtabs FLtabsEnd FLtabs_end FLtext FLupdate FLvalue FLvkeybd FLvslidBnk FLvslidBnk2 FLxyin JackoAudioIn JackoAudioInConnect JackoAudioOut JackoAudioOutConnect JackoFreewheel JackoInfo JackoInit JackoMidiInConnect JackoMidiOut JackoMidiOutConnect JackoNoteOut JackoOn JackoTransport K35_hpf K35_lpf MixerClear MixerGetLevel MixerReceive MixerSend MixerSetLevel MixerSetLevel_i OSCbundle OSCcount OSCinit OSCinitM OSClisten OSCraw OSCsend OSCsend_lo S STKBandedWG STKBeeThree STKBlowBotl STKBlowHole STKBowed STKBrass STKClarinet STKDrummer STKFMVoices STKFlute STKHevyMetl STKMandolin STKModalBar STKMoog STKPercFlut STKPlucked STKResonate STKRhodey STKSaxofony STKShakers STKSimple STKSitar STKStifKarp STKTubeBell STKVoicForm STKWhistle STKWurley a abs active adsr adsyn adsynt adsynt2 aftouch allpole alpass alwayson ampdb ampdbfs ampmidi ampmidicurve ampmidid apoleparams arduinoRead arduinoReadF arduinoStart arduinoStop areson aresonk atone atonek atonex autocorr babo balance balance2 bamboo barmodel bbcutm bbcuts betarand bexprnd bformdec1 bformenc1 binit biquad biquada birnd bob bpf bpfcos bqrez butbp butbr buthp butlp butterbp butterbr butterhp butterlp button buzz c2r cabasa cauchy cauchyi cbrt ceil cell cent centroid ceps cepsinv chanctrl 
changed changed2 chani chano chebyshevpoly checkbox chn_S chn_a chn_k chnclear chnexport chnget chngeta chngeti chngetk chngetks chngets chnmix chnparams chnset chnseta chnseti chnsetk chnsetks chnsets chuap clear clfilt clip clockoff clockon cmp cmplxprod cntCreate cntCycles cntDelete cntDelete_i cntRead cntReset cntState comb combinv compilecsd compileorc compilestr compress compress2 connect control convle convolve copya2ftab copyf2array cos cosh cosinv cosseg cossegb cossegr count count_i cps2pch cpsmidi cpsmidib cpsmidinn cpsoct cpspch cpstmid cpstun cpstuni cpsxpch cpumeter cpuprc cross2 crossfm crossfmi crossfmpm crossfmpmi crosspm crosspmi crunch ctlchn ctrl14 ctrl21 ctrl7 ctrlinit ctrlpreset ctrlprint ctrlprintpresets ctrlsave ctrlselect cuserrnd dam date dates db dbamp dbfsamp dcblock dcblock2 dconv dct dctinv deinterleave delay delay1 delayk delayr delayw deltap deltap3 deltapi deltapn deltapx deltapxw denorm diff diode_ladder directory diskgrain diskin diskin2 dispfft display distort distort1 divz doppler dot downsamp dripwater dssiactivate dssiaudio dssictls dssiinit dssilist dumpk dumpk2 dumpk3 dumpk4 duserrnd dust dust2 envlpx envlpxr ephasor eqfil evalstr event event_i exciter exitnow exp expcurve expon exprand exprandi expseg expsega expsegb expsegba expsegr fareylen fareyleni faustaudio faustcompile faustctl faustdsp faustgen faustplay fft fftinv ficlose filebit filelen filenchnls filepeak filescal filesr filevalid fillarray filter2 fin fini fink fiopen flanger flashtxt flooper flooper2 floor fluidAllOut fluidCCi fluidCCk fluidControl fluidEngine fluidInfo fluidLoad fluidNote fluidOut fluidProgramSelect fluidSetInterpMethod fmanal fmax fmb3 fmbell fmin fmmetal fmod fmpercfl fmrhode fmvoice fmwurlie fof fof2 fofilter fog fold follow follow2 foscil foscili fout fouti foutir foutk fprintks fprints frac fractalnoise framebuffer freeverb ftaudio ftchnls ftconv ftcps ftexists ftfree ftgen ftgenonce ftgentmp ftlen ftload ftloadk ftlptim ftmorf ftom ftprint ftresize ftresizei ftsamplebank ftsave ftsavek ftset ftslice ftslicei ftsr gain gainslider gauss gaussi gausstrig gbuzz genarray genarray_i gendy gendyc gendyx getcfg getcol getftargs getrow getseed gogobel grain grain2 grain3 granule gtf guiro harmon harmon2 harmon3 harmon4 hdf5read hdf5write hilbert hilbert2 hrtfearly hrtfmove hrtfmove2 hrtfreverb hrtfstat hsboscil hvs1 hvs2 hvs3 hypot i ihold imagecreate imagefree imagegetpixel imageload imagesave imagesetpixel imagesize in in32 inch inh init initc14 initc21 initc7 inleta inletf inletk inletkid inletv ino inq inrg ins insglobal insremot int integ interleave interp invalue inx inz jacktransport jitter jitter2 joystick jspline k la_i_add_mc la_i_add_mr la_i_add_vc la_i_add_vr la_i_assign_mc la_i_assign_mr la_i_assign_t la_i_assign_vc la_i_assign_vr la_i_conjugate_mc la_i_conjugate_mr la_i_conjugate_vc la_i_conjugate_vr la_i_distance_vc la_i_distance_vr la_i_divide_mc la_i_divide_mr la_i_divide_vc la_i_divide_vr la_i_dot_mc la_i_dot_mc_vc la_i_dot_mr la_i_dot_mr_vr la_i_dot_vc la_i_dot_vr la_i_get_mc la_i_get_mr la_i_get_vc la_i_get_vr la_i_invert_mc la_i_invert_mr la_i_lower_solve_mc la_i_lower_solve_mr la_i_lu_det_mc la_i_lu_det_mr la_i_lu_factor_mc la_i_lu_factor_mr la_i_lu_solve_mc la_i_lu_solve_mr la_i_mc_create la_i_mc_set la_i_mr_create la_i_mr_set la_i_multiply_mc la_i_multiply_mr la_i_multiply_vc la_i_multiply_vr la_i_norm1_mc la_i_norm1_mr la_i_norm1_vc la_i_norm1_vr la_i_norm_euclid_mc la_i_norm_euclid_mr la_i_norm_euclid_vc la_i_norm_euclid_vr la_i_norm_inf_mc 
la_i_norm_inf_mr la_i_norm_inf_vc la_i_norm_inf_vr la_i_norm_max_mc la_i_norm_max_mr la_i_print_mc la_i_print_mr la_i_print_vc la_i_print_vr la_i_qr_eigen_mc la_i_qr_eigen_mr la_i_qr_factor_mc la_i_qr_factor_mr la_i_qr_sym_eigen_mc la_i_qr_sym_eigen_mr la_i_random_mc la_i_random_mr la_i_random_vc la_i_random_vr la_i_size_mc la_i_size_mr la_i_size_vc la_i_size_vr la_i_subtract_mc la_i_subtract_mr la_i_subtract_vc la_i_subtract_vr la_i_t_assign la_i_trace_mc la_i_trace_mr la_i_transpose_mc la_i_transpose_mr la_i_upper_solve_mc la_i_upper_solve_mr la_i_vc_create la_i_vc_set la_i_vr_create la_i_vr_set la_k_a_assign la_k_add_mc la_k_add_mr la_k_add_vc la_k_add_vr la_k_assign_a la_k_assign_f la_k_assign_mc la_k_assign_mr la_k_assign_t la_k_assign_vc la_k_assign_vr la_k_conjugate_mc la_k_conjugate_mr la_k_conjugate_vc la_k_conjugate_vr la_k_current_f la_k_current_vr la_k_distance_vc la_k_distance_vr la_k_divide_mc la_k_divide_mr la_k_divide_vc la_k_divide_vr la_k_dot_mc la_k_dot_mc_vc la_k_dot_mr la_k_dot_mr_vr la_k_dot_vc la_k_dot_vr la_k_f_assign la_k_get_mc la_k_get_mr la_k_get_vc la_k_get_vr la_k_invert_mc la_k_invert_mr la_k_lower_solve_mc la_k_lower_solve_mr la_k_lu_det_mc la_k_lu_det_mr la_k_lu_factor_mc la_k_lu_factor_mr la_k_lu_solve_mc la_k_lu_solve_mr la_k_mc_set la_k_mr_set la_k_multiply_mc la_k_multiply_mr la_k_multiply_vc la_k_multiply_vr la_k_norm1_mc la_k_norm1_mr la_k_norm1_vc la_k_norm1_vr la_k_norm_euclid_mc la_k_norm_euclid_mr la_k_norm_euclid_vc la_k_norm_euclid_vr la_k_norm_inf_mc la_k_norm_inf_mr la_k_norm_inf_vc la_k_norm_inf_vr la_k_norm_max_mc la_k_norm_max_mr la_k_qr_eigen_mc la_k_qr_eigen_mr la_k_qr_factor_mc la_k_qr_factor_mr la_k_qr_sym_eigen_mc la_k_qr_sym_eigen_mr la_k_random_mc la_k_random_mr la_k_random_vc la_k_random_vr la_k_subtract_mc la_k_subtract_mr la_k_subtract_vc la_k_subtract_vr la_k_t_assign la_k_trace_mc la_k_trace_mr la_k_upper_solve_mc la_k_upper_solve_mr la_k_vc_set la_k_vr_set lag lagud lastcycle lenarray lfo lfsr limit limit1 lincos line linen linenr lineto link_beat_force link_beat_get link_beat_request link_create link_enable link_is_enabled link_metro link_peers link_tempo_get link_tempo_set linlin linrand linseg linsegb linsegr liveconv locsend locsig log log10 log2 logbtwo logcurve loopseg loopsegp looptseg loopxseg lorenz loscil loscil3 loscil3phs loscilphs loscilx lowpass2 lowres lowresx lpcanal lpcfilter lpf18 lpform lpfreson lphasor lpinterp lposcil lposcil3 lposcila lposcilsa lposcilsa2 lpread lpreson lpshold lpsholdp lpslot lufs mac maca madsr mags mandel mandol maparray maparray_i marimba massign max max_k maxabs maxabsaccum maxaccum maxalloc maxarray mclock mdelay median mediank metro metro2 mfb midglobal midiarp midic14 midic21 midic7 midichannelaftertouch midichn midicontrolchange midictrl mididefault midifilestatus midiin midinoteoff midinoteoncps midinoteonkey midinoteonoct midinoteonpch midion midion2 midiout midiout_i midipgm midipitchbend midipolyaftertouch midiprogramchange miditempo midremot min minabs minabsaccum minaccum minarray mincer mirror mode modmatrix monitor moog moogladder moogladder2 moogvcf moogvcf2 moscil mp3bitrate mp3in mp3len mp3nchnls mp3scal mp3sr mpulse mrtmsg ms2st mtof mton multitap mute mvchpf mvclpf1 mvclpf2 mvclpf3 mvclpf4 mvmfilter mxadsr nchnls_hw nestedap nlalp nlfilt nlfilt2 noise noteoff noteon noteondur noteondur2 notnum nreverb nrpn nsamp nstance nstrnum nstrstr ntof ntom ntrpol nxtpow2 octave octcps octmidi octmidib octmidinn octpch olabuffer oscbnk oscil oscil1 oscil1i oscil3 oscili oscilikt 
osciliktp oscilikts osciln oscils oscilx out out32 outall outc outch outh outiat outic outic14 outipat outipb outipc outkat outkc outkc14 outkpat outkpb outkpc outleta outletf outletk outletkid outletv outo outq outq1 outq2 outq3 outq4 outrg outs outs1 outs2 outvalue outx outz p p5gconnect p5gdata pan pan2 pareq part2txt partials partikkel partikkelget partikkelset partikkelsync passign paulstretch pcauchy pchbend pchmidi pchmidib pchmidinn pchoct pchtom pconvolve pcount pdclip pdhalf pdhalfy peak pgmassign pgmchn phaser1 phaser2 phasor phasorbnk phs pindex pinker pinkish pitch pitchac pitchamdf planet platerev plltrack pluck poisson pol2rect polyaft polynomial port portk poscil poscil3 pow powershape powoftwo pows prealloc prepiano print print_type printarray printf printf_i printk printk2 printks printks2 println prints printsk product pset ptablew ptrack puts pvadd pvbufread pvcross pvinterp pvoc pvread pvs2array pvs2tab pvsadsyn pvsanal pvsarp pvsbandp pvsbandr pvsbandwidth pvsbin pvsblur pvsbuffer pvsbufread pvsbufread2 pvscale pvscent pvsceps pvscfs pvscross pvsdemix pvsdiskin pvsdisp pvsenvftw pvsfilter pvsfread pvsfreeze pvsfromarray pvsftr pvsftw pvsfwrite pvsgain pvshift pvsifd pvsin pvsinfo pvsinit pvslock pvslpc pvsmaska pvsmix pvsmooth pvsmorph pvsosc pvsout pvspitch pvstanal pvstencil pvstrace pvsvoc pvswarp pvsynth pwd pyassign pyassigni pyassignt pycall pycall1 pycall1i pycall1t pycall2 pycall2i pycall2t pycall3 pycall3i pycall3t pycall4 pycall4i pycall4t pycall5 pycall5i pycall5t pycall6 pycall6i pycall6t pycall7 pycall7i pycall7t pycall8 pycall8i pycall8t pycalli pycalln pycallni pycallt pyeval pyevali pyevalt pyexec pyexeci pyexect pyinit pylassign pylassigni pylassignt pylcall pylcall1 pylcall1i pylcall1t pylcall2 pylcall2i pylcall2t pylcall3 pylcall3i pylcall3t pylcall4 pylcall4i pylcall4t pylcall5 pylcall5i pylcall5t pylcall6 pylcall6i pylcall6t pylcall7 pylcall7i pylcall7t pylcall8 pylcall8i pylcall8t pylcalli pylcalln pylcallni pylcallt pyleval pylevali pylevalt pylexec pylexeci pylexect pylrun pylruni pylrunt pyrun pyruni pyrunt qinf qnan r2c rand randc randh randi random randomh randomi rbjeq readclock readf readfi readk readk2 readk3 readk4 readks readscore readscratch rect2pol release remoteport remove repluck reshapearray reson resonbnk resonk resonr resonx resonxk resony resonz resyn reverb reverb2 reverbsc rewindscore rezzy rfft rifft rms rnd rnd31 rndseed round rspline rtclock s16b14 s32b14 samphold sandpaper sc_lag sc_lagud sc_phasor sc_trig scale scale2 scalearray scanhammer scans scantable scanu scanu2 schedkwhen schedkwhennamed schedule schedulek schedwhen scoreline scoreline_i seed sekere select semitone sense sensekey seqtime seqtime2 serialBegin serialEnd serialFlush serialPrint serialRead serialWrite serialWrite_i setcol setctrl setksmps setrow setscorepos sfilist sfinstr sfinstr3 sfinstr3m sfinstrm sfload sflooper sfpassign sfplay sfplay3 sfplay3m sfplaym sfplist sfpreset shaker shiftin shiftout signum sin sinh sininv sinsyn skf sleighbells slicearray slicearray_i slider16 slider16f slider16table slider16tablef slider32 slider32f slider32table slider32tablef slider64 slider64f slider64table slider64tablef slider8 slider8f slider8table slider8tablef sliderKawai sndloop sndwarp sndwarpst sockrecv sockrecvs socksend socksends sorta sortd soundin space spat3d spat3di spat3dt spdist spf splitrig sprintf sprintfk spsend sqrt squinewave st2ms statevar sterrain stix strcat strcatk strchar strchark strcmp strcmpk strcpy strcpyk strecv streson strfromurl 
strget strindex strindexk string2array strlen strlenk strlower strlowerk strrindex strrindexk strset strstrip strsub strsubk strtod strtodk strtol strtolk strupper strupperk stsend subinstr subinstrinit sum sumarray svfilter svn syncgrain syncloop syncphasor system system_i tab tab2array tab2pvs tab_i tabifd table table3 table3kt tablecopy tablefilter tablefilteri tablegpw tablei tableicopy tableigpw tableikt tableimix tablekt tablemix tableng tablera tableseg tableshuffle tableshufflei tablew tablewa tablewkt tablexkt tablexseg tabmorph tabmorpha tabmorphak tabmorphi tabplay tabrec tabsum tabw tabw_i tambourine tan tanh taninv taninv2 tbvcf tempest tempo temposcal tempoval timedseq timeinstk timeinsts timek times tival tlineto tone tonek tonex tradsyn trandom transeg transegb transegr trcross trfilter trhighest trigExpseg trigLinseg trigger trighold trigphasor trigseq trim trim_i trirand trlowest trmix trscale trshift trsplit turnoff turnoff2 turnoff2_i turnoff3 turnon tvconv unirand unwrap upsamp urandom urd vactrol vadd vadd_i vaddv vaddv_i vaget valpass vaset vbap vbapg vbapgmove vbaplsinit vbapmove vbapz vbapzmove vcella vclpf vco vco2 vco2ft vco2ift vco2init vcomb vcopy vcopy_i vdel_k vdelay vdelay3 vdelayk vdelayx vdelayxq vdelayxs vdelayxw vdelayxwq vdelayxws vdivv vdivv_i vecdelay veloc vexp vexp_i vexpseg vexpv vexpv_i vibes vibr vibrato vincr vlimit vlinseg vlowres vmap vmirror vmult vmult_i vmultv vmultv_i voice vosim vphaseseg vport vpow vpow_i vpowv vpowv_i vps vpvoc vrandh vrandi vsubv vsubv_i vtaba vtabi vtabk vtable1k vtablea vtablei vtablek vtablewa vtablewi vtablewk vtabwa vtabwi vtabwk vwrap waveset websocket weibull wgbow wgbowedbar wgbrass wgclar wgflute wgpluck wgpluck2 wguide1 wguide2 wiiconnect wiidata wiirange wiisend window wrap writescratch wterrain wterrain2 xadsr xin xout xscanmap xscans xscansmap xscanu xtratim xyscale zacl zakinit zamod zar zarg zaw zawm zdf_1pole zdf_1pole_mode zdf_2pole zdf_2pole_mode zdf_ladder zfilter2 zir ziw ziwm zkcl zkmod zkr zkw zkwm
'''.split())

DEPRECATED_OPCODES = set('''
array bformdec bformenc copy2ftab copy2ttab hrtfer ktableseg lentab maxtab mintab pop pop_f ptable ptable3 ptablei ptableiw push push_f scalet sndload soundout soundouts specaddm specdiff specdisp specfilt spechist specptrk specscal specsum spectrum stack sumtab tabgen tableiw tabmap tabmap_i tabslice tb0 tb0_init tb1 tb10 tb10_init tb11 tb11_init tb12 tb12_init tb13 tb13_init tb14 tb14_init tb15 tb15_init tb1_init tb2 tb2_init tb3 tb3_init tb4 tb4_init tb5 tb5_init tb6 tb6_init tb7 tb7_init tb8 tb8_init tb9 tb9_init vbap16 vbap4 vbap4move vbap8 vbap8move xyin
'''.split())
pygments-2.11.2/pygments/lexers/solidity.py0000644000175000017500000000614314165547207020724 0ustar carstencarsten
"""
    pygments.lexers.solidity
    ~~~~~~~~~~~~~~~~~~~~~~~~

    Lexers for Solidity.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re

from pygments.lexer import RegexLexer, bygroups, include, words
from pygments.token import Text, Comment, Operator, Keyword, Name, String, \
    Number, Punctuation, Whitespace

__all__ = ['SolidityLexer']


class SolidityLexer(RegexLexer):
    """
    For Solidity source code.

    .. versionadded:: 2.5
    """

    name = 'Solidity'
    aliases = ['solidity']
    filenames = ['*.sol']
    mimetypes = []

    flags = re.MULTILINE | re.UNICODE

    datatype = (
        r'\b(address|bool|(?:(?:bytes|hash|int|string|uint)(?:8|16|24|32|40|48|56|64'
        r'|72|80|88|96|104|112|120|128|136|144|152|160|168|176|184|192|200|208'
        r'|216|224|232|240|248|256)?))\b'
    )

    tokens = {
        'root': [
            include('whitespace'),
            include('comments'),
            (r'\bpragma\s+solidity\b', Keyword, 'pragma'),
            (r'\b(contract)(\s+)([a-zA-Z_]\w*)',
             bygroups(Keyword, Whitespace, Name.Entity)),
            (datatype + r'(\s+)((?:external|public|internal|private)\s+)?' +
             r'([a-zA-Z_]\w*)',
             bygroups(Keyword.Type, Whitespace, Keyword, Name.Variable)),
            (r'\b(enum|event|function|struct)(\s+)([a-zA-Z_]\w*)',
             bygroups(Keyword.Type, Whitespace, Name.Variable)),
            (r'\b(msg|block|tx)\.([A-Za-z_][a-zA-Z0-9_]*)\b', Keyword),
            (words((
                'block', 'break', 'constant', 'constructor', 'continue',
                'contract', 'do', 'else', 'external', 'false', 'for',
                'function', 'if', 'import', 'inherited', 'internal', 'is',
                'library', 'mapping', 'memory', 'modifier', 'msg', 'new',
                'payable', 'private', 'public', 'require', 'return',
                'returns', 'struct', 'suicide', 'throw', 'this', 'true',
                'tx', 'var', 'while'), prefix=r'\b', suffix=r'\b'),
             Keyword.Type),
            (words(('keccak256',), prefix=r'\b', suffix=r'\b'), Name.Builtin),
            (datatype, Keyword.Type),
            include('constants'),
            (r'[a-zA-Z_]\w*', Text),
            (r'[!<=>+*/-]', Operator),
            (r'[.;:{}(),\[\]]', Punctuation)
        ],
        'comments': [
            (r'//(\n|[\w\W]*?[^\\]\n)', Comment.Single),
            (r'/(\\\n)?[*][\w\W]*?[*](\\\n)?/', Comment.Multiline),
            (r'/(\\\n)?[*][\w\W]*', Comment.Multiline)
        ],
        'constants': [
            (r'("(\\"|.)*?")', String.Double),
            (r"('(\\'|.)*?')", String.Single),
            (r'\b0[xX][0-9a-fA-F]+\b', Number.Hex),
            (r'\b\d+\b', Number.Decimal),
        ],
        'pragma': [
            include('whitespace'),
            include('comments'),
            (r'(\^|>=|<)(\s*)(\d+\.\d+\.\d+)',
             bygroups(Operator, Whitespace, Keyword)),
            (r';', Punctuation, '#pop')
        ],
        'whitespace': [
            (r'\s+', Whitespace),
            (r'\n', Whitespace)
        ]
    }
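The lexer above plugs into the standard Pygments pipeline. As a quick illustration, here is a minimal usage sketch (an editorial example, not a file shipped in this archive): it feeds a small, made-up Solidity contract through `SolidityLexer` and renders ANSI-colored output with `TerminalFormatter` via the top-level `highlight` function.

from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers.solidity import SolidityLexer

# Hypothetical demo input. The 'pragma solidity' line drives the lexer's
# dedicated 'pragma' state; 'uint256' exercises the datatype pattern.
source = '''\
pragma solidity ^0.4.24;

contract Counter {
    uint256 count;

    function increment() public {
        count = count + 1;
    }
}
'''

# highlight() tokenizes with the given lexer and renders with the formatter.
print(highlight(source, SolidityLexer(), TerminalFormatter()))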
pygments-2.11.2/pygments/lexers/sql.py0000644000175000017500000010265714165547207017662 0ustar carstencarsten
"""
    pygments.lexers.sql
    ~~~~~~~~~~~~~~~~~~~

    Lexers for various SQL dialects and related interactive sessions.

    Postgres specific lexers:

    `PostgresLexer`
        A SQL lexer for the PostgreSQL dialect. Differences w.r.t. the SQL
        lexer are:

        - the keyword and data type lists are parsed from the PG docs (run
          the `_postgres_builtins` module to update them);
        - the content of $-strings is parsed using a specific lexer, e.g. the
          content of a PL/Python function is parsed using the Python lexer;
        - PG-specific constructs are parsed: E-strings, $-strings,
          U&-strings, different operators and punctuation.

    `PlPgsqlLexer`
        A lexer for the PL/pgSQL language. Adds a few specific constructs on
        top of the PG SQL lexer (such as <