pygments-2.11.2/description.rst

Pygments
~~~~~~~~
Pygments is a syntax highlighting package written in Python.
It is a generic syntax highlighter suitable for use in code hosting, forums,
wikis or other applications that need to prettify source code. Highlights
are:
* a wide range of over 500 languages and other text formats is supported
* special attention is paid to details, increasing quality by a fair amount
* support for new languages and formats is added easily
* a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
formats that PIL supports and ANSI sequences
* it is usable as a command-line tool and as a library
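As a minimal sketch of library usage (using only names from the public Pygments API):

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Highlight a small Python snippet as HTML with CSS classes.
html = highlight('print("Hello, world!")', PythonLexer(), HtmlFormatter())
print(html)
```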
Copyright 2006-2021 by the Pygments team, see ``AUTHORS``.
Licensed under the BSD license, see ``LICENSE`` for details.

pygments-2.11.2/AUTHORS

Pygments is written and maintained by Georg Brandl.
Major developers are Tim Hatch and Armin Ronacher.
Other contributors, listed alphabetically, are:
* Sam Aaron -- Ioke lexer
* Jean Abou Samra -- LilyPond lexer
* João Abecasis -- JSLT lexer
* Ali Afshar -- image formatter
* Thomas Aglassinger -- Easytrieve, JCL, Rexx, Transact-SQL and VBScript
lexers
* Muthiah Annamalai -- Ezhil lexer
* Kumar Appaiah -- Debian control lexer
* Andreas Amann -- AppleScript lexer
* Timothy Armstrong -- Dart lexer fixes
* Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers
* Jeremy Ashkenas -- CoffeeScript lexer
* José Joaquín Atria -- Praat lexer
* Stefan Matthias Aust -- Smalltalk lexer
* Lucas Bajolet -- Nit lexer
* Ben Bangert -- Mako lexers
* Max Battcher -- Darcs patch lexer
* Thomas Baruchel -- APL lexer
* Tim Baumann -- (Literate) Agda lexer
* Paul Baumgart, 280 North, Inc. -- Objective-J lexer
* Michael Bayer -- Myghty lexers
* Thomas Beale -- Archetype lexers
* John Benediktsson -- Factor lexer
* Trevor Bergeron -- mIRC formatter
* Vincent Bernat -- LessCSS lexer
* Christopher Bertels -- Fancy lexer
* Sébastien Bigaret -- QVT Operational lexer
* Jarrett Billingsley -- MiniD lexer
* Adam Blinkinsop -- Haskell, Redcode lexers
* Stéphane Blondon -- Procfile, SGF and Sieve lexers
* Frits van Bommel -- assembler lexers
* Pierre Bourdon -- bugfixes
* Martijn Braam -- Kernel log lexer, BARE lexer
* Matthias Bussonnier -- ANSI style handling for terminal-256 formatter
* chebee7i -- Python traceback lexer improvements
* Hiram Chirino -- Scaml and Jade lexers
* Mauricio Caceres -- SAS and Stata lexers
* Ian Cooper -- VGL lexer
* David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers
* Leaf Corcoran -- MoonScript lexer
* Christopher Creutzig -- MuPAD lexer
* Daniël W. Crompton -- Pike lexer
* Pete Curry -- bugfixes
* Bryan Davis -- EBNF lexer
* Bruno Deferrari -- Shen lexer
* Luke Drummond -- Meson lexer
* Giedrius Dubinskas -- HTML formatter improvements
* Owen Durni -- Haxe lexer
* Alexander Dutton, Oxford University Computing Services -- SPARQL lexer
* James Edwards -- Terraform lexer
* Nick Efford -- Python 3 lexer
* Sven Efftinge -- Xtend lexer
* Artem Egorkine -- terminal256 formatter
* Matthew Fernandez -- CAmkES lexer
* Paweł Fertyk -- GDScript lexer, HTML formatter improvements
* Michael Ficarra -- CPSA lexer
* James H. Fisher -- PostScript lexer
* William S. Fulton -- SWIG lexer
* Carlos Galdino -- Elixir and Elixir Console lexers
* Michael Galloy -- IDL lexer
* Naveen Garg -- Autohotkey lexer
* Simon Garnotel -- FreeFem++ lexer
* Laurent Gautier -- R/S lexer
* Alex Gaynor -- PyPy log lexer
* Richard Gerkin -- Igor Pro lexer
* Alain Gilbert -- TypeScript lexer
* Alex Gilding -- BlitzBasic lexer
* GitHub, Inc -- DASM16, Augeas, TOML, and Slash lexers
* Bertrand Goetzmann -- Groovy lexer
* Krzysiek Goj -- Scala lexer
* Rostyslav Golda -- FloScript lexer
* Andrey Golovizin -- BibTeX lexers
* Matt Good -- Genshi, Cheetah lexers
* Michał Górny -- vim modeline support
* Alex Gosse -- TrafficScript lexer
* Patrick Gotthardt -- PHP namespaces support
* Hubert Gruniaux -- C and C++ lexer improvements
* Olivier Guibe -- Asymptote lexer
* Phil Hagelberg -- Fennel lexer
* Florian Hahn -- Boogie lexer
* Martin Harriman -- SNOBOL lexer
* Matthew Harrison -- SVG formatter
* Steven Hazel -- Tcl lexer
* Dan Michael Heggø -- Turtle lexer
* Aslak Hellesøy -- Gherkin lexer
* Greg Hendershott -- Racket lexer
* Justin Hendrick -- ParaSail lexer
* Jordi Gutiérrez Hermoso -- Octave lexer
* David Hess, Fish Software, Inc. -- Objective-J lexer
* Ken Hilton -- Typographic Number Theory and Arrow lexers
* Varun Hiremath -- Debian control lexer
* Rob Hoelz -- Perl 6 lexer
* Doug Hogan -- Mscgen lexer
* Ben Hollis -- Mason lexer
* Max Horn -- GAP lexer
* Fred Hornsey -- OMG IDL Lexer
* Alastair Houghton -- Lexer inheritance facility
* Tim Howard -- BlitzMax lexer
* Dustin Howett -- Logos lexer
* Ivan Inozemtsev -- Fantom lexer
* Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session,
MSDOS session, BC, WDiff
* Brian R. Jackson -- Tea lexer
* Christian Jann -- ShellSession lexer
* Dennis Kaarsemaker -- sources.list lexer
* Dmitri Kabak -- Inferno Limbo lexer
* Igor Kalnitsky -- vhdl lexer
* Colin Kennedy -- USD lexer
* Alexander Kit -- MaskJS lexer
* Pekka Klärck -- Robot Framework lexer
* Gerwin Klein -- Isabelle lexer
* Eric Knibbe -- Lasso lexer
* Stepan Koltsov -- Clay lexer
* Oliver Kopp -- Friendly grayscale style
* Adam Koprowski -- Opa lexer
* Benjamin Kowarsch -- Modula-2 lexer
* Domen Kožar -- Nix lexer
* Oleh Krekel -- Emacs Lisp lexer
* Alexander Kriegisch -- Kconfig and AspectJ lexers
* Marek Kubica -- Scheme lexer
* Jochen Kupperschmidt -- Markdown processor
* Gerd Kurzbach -- Modelica lexer
* Jon Larimer, Google Inc. -- Smali lexer
* Olov Lassus -- Dart lexer
* Matt Layman -- TAP lexer
* Kristian Lyngstøl -- Varnish lexers
* Sylvestre Ledru -- Scilab lexer
* Chee Sing Lee -- Flatline lexer
* Mark Lee -- Vala lexer
* Valentin Lorentz -- C++ lexer improvements
* Ben Mabey -- Gherkin lexer
* Angus MacArthur -- QML lexer
* Louis Mandel -- X10 lexer
* Louis Marchand -- Eiffel lexer
* Simone Margaritelli -- Hybris lexer
* Kirk McDonald -- D lexer
* Gordon McGregor -- SystemVerilog lexer
* Stephen McKamey -- Duel/JBST lexer
* Brian McKenna -- F# lexer
* Charles McLaughlin -- Puppet lexer
* Kurt McKee -- Tera Term macro lexer, PostgreSQL updates, MySQL overhaul
* Joe Eli McIlvain -- Savi lexer
* Lukas Meuser -- BBCode formatter, Lua lexer
* Cat Miller -- Pig lexer
* Paul Miller -- LiveScript lexer
* Hong Minhee -- HTTP lexer
* Michael Mior -- Awk lexer
* Bruce Mitchener -- Dylan lexer rewrite
* Reuben Morais -- SourcePawn lexer
* Jon Morton -- Rust lexer
* Paulo Moura -- Logtalk lexer
* Mher Movsisyan -- DTD lexer
* Dejan Muhamedagic -- Crmsh lexer
* Ana Nelson -- Ragel, ANTLR, R console lexers
* Kurt Neufeld -- Markdown lexer
* Nam T. Nguyen -- Monokai style
* Jesper Noehr -- HTML formatter "anchorlinenos"
* Mike Nolta -- Julia lexer
* Avery Nortonsmith -- Pointless lexer
* Jonas Obrist -- BBCode lexer
* Edward O'Callaghan -- Cryptol lexer
* David Oliva -- Rebol lexer
* Pat Pannuto -- nesC lexer
* Jon Parise -- Protocol buffers and Thrift lexers
* Benjamin Peterson -- Test suite refactoring
* Ronny Pfannschmidt -- BBCode lexer
* Dominik Picheta -- Nimrod lexer
* Andrew Pinkham -- RTF Formatter Refactoring
* Clément Prévost -- UrbiScript lexer
* Tanner Prynn -- cmdline -x option and loading lexers from files
* Oleh Prypin -- Crystal lexer (based on Ruby lexer)
* Xidorn Quan -- Web IDL lexer
* Elias Rabel -- Fortran fixed form lexer
* raichoo -- Idris lexer
* Daniel Ramirez -- GDScript lexer
* Kashif Rasul -- CUDA lexer
* Nathan Reed -- HLSL lexer
* Justin Reidy -- MXML lexer
* Norman Richards -- JSON lexer
* Corey Richardson -- Rust lexer updates
* Lubomir Rintel -- GoodData MAQL and CL lexers
* Andre Roberge -- Tango style
* Georg Rollinger -- HSAIL lexer
* Michiel Roos -- TypoScript lexer
* Konrad Rudolph -- LaTeX formatter enhancements
* Mario Ruggier -- Evoque lexers
* Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements
* Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers
* Matteo Sasso -- Common Lisp lexer
* Joe Schafer -- Ada lexer
* Max Schillinger -- TiddlyWiki5 lexer
* Ken Schutte -- Matlab lexers
* René Schwaiger -- Rainbow Dash style
* Sebastian Schweizer -- Whiley lexer
* Tassilo Schweyer -- Io, MOOCode lexers
* Pablo Seminario -- PromQL lexer
* Ted Shaw -- AutoIt lexer
* Joerg Sieker -- ABAP lexer
* Robert Simmons -- Standard ML lexer
* Kirill Simonov -- YAML lexer
* Corbin Simpson -- Monte lexer
* Ville Skyttä -- ASCII armored lexer
* Alexander Smishlajev -- Visual FoxPro lexer
* Steve Spigarelli -- XQuery lexer
* Jerome St-Louis -- eC lexer
* Camil Staps -- Clean and NuSMV lexers; Solarized style
* James Strachan -- Kotlin lexer
* Tom Stuart -- Treetop lexer
* Colin Sullivan -- SuperCollider lexer
* Ben Swift -- Extempore lexer
* tatt61880 -- Kuin lexer
* Edoardo Tenani -- Arduino lexer
* Tiberius Teng -- default style overhaul
* Jeremy Thurgood -- Erlang, Squid config lexers
* Brian Tiffin -- OpenCOBOL lexer
* Bob Tolbert -- Hy lexer
* Matthias Trute -- Forth lexer
* Tuoa Spi T4 -- Bdd lexer
* Erick Tryzelaar -- Felix lexer
* Alexander Udalov -- Kotlin lexer improvements
* Thomas Van Doren -- Chapel lexer
* Daniele Varrazzo -- PostgreSQL lexers
* Abe Voelker -- OpenEdge ABL lexer
* Pepijn de Vos -- HTML formatter CTags support
* Matthias Vallentin -- Bro lexer
* Benoît Vinot -- AMPL lexer
* Linh Vu Hong -- RSL lexer
* Immanuel Washington -- Smithy lexer
* Nathan Weizenbaum -- Haml and Sass lexers
* Nathan Whetsell -- Csound lexers
* Dietmar Winkler -- Modelica lexer
* Nils Winter -- Smalltalk lexer
* Davy Wybiral -- Clojure lexer
* Whitney Young -- ObjectiveC lexer
* Diego Zamboni -- CFengine3 lexer
* Enrique Zamudio -- Ceylon lexer
* Alex Zimin -- Nemerle lexer
* Rob Zimmerman -- Kal lexer
* Vincent Zurczak -- Roboconf lexer
* Thomas Symalla -- AMDGPU Lexer
* 15b3 -- Image Formatter improvements
* Fabian Neumann -- CDDL lexer
* Thomas Duboucher -- CDDL lexer
* Philipp Imhof -- Pango Markup formatter
* Thomas Voss -- Sed lexer
* Martin Fischer -- WCAG contrast testing
* Marc Auberer -- Spice lexer
Many thanks for all contributions!
pygments-2.11.2/requirements.txt

pytest-cov
pytest-randomly
pytest>=6.0
pyflakes
pylint
tox
wcag-contrast-ratio
lxml
pygments-2.11.2/Contributing.md

Licensing
=========
The code is distributed under the BSD 2-clause license. Contributors making pull
requests must agree that they are able and willing to put their contributions
under that license.
Contribution checklist
======================
* Check the documentation for how to write
[a new lexer](https://pygments.org/docs/lexerdevelopment/),
[a new formatter](https://pygments.org/docs/formatterdevelopment/) or
[a new filter](https://pygments.org/docs/filterdevelopment/)
* When writing rules, try to merge simple rules. For instance, combine:
```python
_PUNCTUATION = [
(r"\(", token.Punctuation),
(r"\)", token.Punctuation),
(r"\[", token.Punctuation),
(r"\]", token.Punctuation),
("{", token.Punctuation),
("}", token.Punctuation),
]
```
into:
```python
(r"[\(\)\[\]{}]", token.Punctuation)
```
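To check such a merge, a quick sketch using only the standard `re` module can verify that the combined character class matches exactly the same single characters as the six original rules:

```python
import re

# The six single-character rules above collapse into one character class.
merged = re.compile(r"[\(\)\[\]{}]")

# Every character the original six rules matched is still matched...
assert all(merged.fullmatch(ch) for ch in "()[]{}")
# ...and unrelated characters are not.
assert merged.match("x") is None
```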
* Be careful with ``.*``. This matches greedily as much as it can. For instance,
  a rule like ``@.*@`` will match the whole string ``@first@ second @third@``,
  instead of matching ``@first@`` and ``@third@``. You can use ``@.*?@`` in
this case to stop early. The ``?`` tries to match _as few times_ as possible.
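The difference can be demonstrated directly with the standard `re` module:

```python
import re

text = "@first@ second @third@"

# Greedy: .* runs to the last @, swallowing the whole string.
assert re.findall(r"@.*@", text) == ["@first@ second @third@"]

# Non-greedy: .*? stops at the earliest closing @.
assert re.findall(r"@.*?@", text) == ["@first@", "@third@"]
```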
* Don't add imports of your lexer anywhere in the codebase. (In case you're
curious about ``compiled.py`` -- this file exists for backwards compatibility
reasons.)
* Use the standard importing convention: ``from token import Punctuation``
* For test cases that assert on the tokens produced by a lexer, use the following tools:
* You can use the ``testcase`` formatter to produce a piece of code that
can be pasted into a unittest file:
``python -m pygments -l lua -f testcase <<< "local a = 5"``
* Most snippets should instead be put as a sample file under
    ``tests/snippets/<lexer_alias>/*.txt``. These files are automatically
picked up as individual tests, asserting that the input produces the
expected tokens.
To add a new test, create a file with just your code snippet under a
subdirectory based on your lexer's main alias. Then run
    ``pytest --update-goldens <filename.txt>`` to auto-populate the currently
expected tokens. Check that they look good and check in the file.
Also run the same command whenever you need to update the test if the
actual produced tokens change (assuming the change is expected).
* Large test files should go in ``tests/examplefiles``. This works
similar to ``snippets``, but the token output is stored in a separate
file. Output can also be regenerated with ``--update-goldens``.
pygments-2.11.2/setup.py

#!/usr/bin/env python
from setuptools import setup
setup()
pygments-2.11.2/external/autopygmentize

#!/bin/bash
# Best effort auto-pygmentization with transparent decompression
# by Reuben Thomas 2008-2021
# This program is in the public domain.
# Strategy: first see if pygmentize can find a lexer; if not, ask file; if that finds nothing, fail
# Set the environment variable PYGMENTIZE_OPTS or pass options before the file path to configure pygments.
# This program can be used as a .lessfilter for the less pager to auto-color less's output
file="${!#}" # last argument
options=${@:1:$(($#-1))} # handle the other args as options to pass to pygmentize
file_common_opts="--brief --dereference"
case $(file --mime-type --uncompress $file_common_opts "$file") in
application/xml|image/svg+xml) lexer=xml;;
application/javascript) lexer=javascript;;
application/json) lexer=json;;
text/html) lexer=html;;
text/troff) lexer=nroff;;
text/x-asm) lexer=nasm;;
text/x-awk) lexer=awk;;
text/x-c) lexer=c;;
text/x-c++) lexer=cpp;;
text/x-crystal) lexer=crystal;;
text/x-diff) lexer=diff;;
text/x-fortran) lexer=fortran;;
text/x-gawk) lexer=gawk;;
text/x-java) lexer=java;;
text/x-lisp) lexer=common-lisp;;
text/x-lua) lexer=lua;;
text/x-makefile) lexer=make;;
text/x-msdos-batch) lexer=bat;;
text/x-nawk) lexer=nawk;;
text/x-pascal) lexer=pascal;;
text/x-perl) lexer=perl;;
text/x-php) lexer=php;;
text/x-po) lexer=po;;
text/x-python) lexer=python;;
text/x-ruby) lexer=ruby;;
text/x-shellscript) lexer=sh;;
text/x-tcl) lexer=tcl;;
text/x-tex|text/x-texinfo) lexer=latex;; # FIXME: texinfo really needs its own lexer
# Types that file outputs which pygmentize didn't support as of file 5.20, pygments 2.0
# text/calendar
# text/inf
# text/PGP
# text/rtf
# text/texmacs
# text/vnd.graphviz
# text/x-bcpl
# text/x-info
# text/x-m4
# text/x-vcard
# text/x-xmcd
text/plain) # special filenames. TODO: insert more
case $(basename "$file") in
.zshrc) lexer=sh;;
esac
# pygmentize -N is much cheaper than file, but makes some bad guesses (e.g.
# it guesses ".pl" is Prolog, not Perl)
lexer=$(pygmentize -N "$file")
;;
esac
# Find a concatenator for compressed files
concat=cat
case $(file $file_common_opts --mime-type "$file") in
application/gzip) concat=zcat;;
application/x-bzip2) concat=bzcat;;
application/x-xz) concat=xzcat;;
esac
# Find a suitable reader, preceded by a hex dump for binary files,
# or fmt for text with very long lines
prereader=""
reader=cat
encoding=$(file --mime-encoding --uncompress $file_common_opts "$file")
# FIXME: need a way to switch between hex and text view, as file often
# misdiagnoses files when they contain a few control characters
# if [[ $encoding == "binary" ]]; then
# prereader="od -x" # POSIX fallback
# if [[ -n $(which hd) ]]; then
# prereader=hd # preferred
# fi
# lexer=hexdump
# encoding=latin1
#el
# FIXME: Using fmt does not work well for system logs
# if [[ "$lexer" == "text" ]]; then
# if file "$file" | grep -ql "text, with very long lines"; then
# reader=fmt
# fi
# fi
if [[ "$lexer" != "text" ]]; then
reader="pygmentize -O inencoding=$encoding $PYGMENTIZE_OPTS $options -l $lexer"
fi
# Run the reader
if [[ -n "$prereader" ]]; then
exec $concat "$file" | $prereader | $reader
else
exec $concat "$file" | $reader
fi
pygments-2.11.2/external/markdown-processor.py

"""
The Pygments Markdown Preprocessor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This fragment is a Markdown_ preprocessor that renders source code
to HTML via Pygments. To use it, invoke Markdown like so::
import markdown
html = markdown.markdown(someText, extensions=[CodeBlockExtension()])
This uses CSS classes by default, so use
``pygmentize -S <style> -f html > pygments.css``
to create a stylesheet to be added to the website.
You can then highlight source code in your markdown markup::
[sourcecode:lexer]
some code
[/sourcecode]
.. _Markdown: https://pypi.python.org/pypi/Markdown
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
# Options
# ~~~~~~~
# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False
import re
from markdown.preprocessors import Preprocessor
from markdown.extensions import Extension
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name, TextLexer
class CodeBlockPreprocessor(Preprocessor):

    pattern = re.compile(r'\[sourcecode:(.+?)\](.+?)\[/sourcecode\]', re.S)

    formatter = HtmlFormatter(noclasses=INLINESTYLES)

    def run(self, lines):
        def repl(m):
            try:
                lexer = get_lexer_by_name(m.group(1))
            except ValueError:
                lexer = TextLexer()
            code = highlight(m.group(2), lexer, self.formatter)
            code = code.replace('\n\n', '\n&nbsp;\n').replace('\n', '<br />')
            return '\n\n<div class="code">%s</div>\n\n' % code
        joined_lines = "\n".join(lines)
        joined_lines = self.pattern.sub(repl, joined_lines)
        return joined_lines.split("\n")

class CodeBlockExtension(Extension):
    def extendMarkdown(self, md, md_globals):
        md.preprocessors.add('CodeBlockPreprocessor', CodeBlockPreprocessor(), '_begin')
pygments-2.11.2/external/rst-directive.py

"""
The Pygments reStructuredText directive
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This fragment is a Docutils_ 0.5 directive that renders source code
(to HTML only, currently) via Pygments.
To use it, adjust the options below and copy the code into a module
that you import on initialization. The code then automatically
registers a ``sourcecode`` directive that you can use instead of
normal code blocks like this::
.. sourcecode:: python
My code goes here.
If you want to have different code styles, e.g. one with line numbers
and one without, add formatters with their names in the VARIANTS dict
below. You can invoke them instead of the DEFAULT one by using a
directive option::
.. sourcecode:: python
:linenos:
My code goes here.
Look at the `directive documentation`_ to get all the gory details.
.. _Docutils: https://docutils.sourceforge.io/
.. _directive documentation:
https://docutils.sourceforge.io/docs/howto/rst-directives.html
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
# Options
# ~~~~~~~
# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False
from pygments.formatters import HtmlFormatter
# The default formatter
DEFAULT = HtmlFormatter(noclasses=INLINESTYLES)
# Add name -> formatter pairs for every variant you want to use
VARIANTS = {
# 'linenos': HtmlFormatter(noclasses=INLINESTYLES, linenos=True),
}
from docutils import nodes
from docutils.parsers.rst import directives, Directive
from pygments import highlight
from pygments.lexers import get_lexer_by_name, TextLexer
class Pygments(Directive):
    """ Source code syntax highlighting.
    """
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = True
    option_spec = {key: directives.flag for key in VARIANTS}
    has_content = True

    def run(self):
        self.assert_has_content()
        try:
            lexer = get_lexer_by_name(self.arguments[0])
        except ValueError:
            # no lexer found - use the text one instead of raising an exception
            lexer = TextLexer()
        # take an arbitrary option if more than one is given
        formatter = self.options and VARIANTS[list(self.options)[0]] or DEFAULT
        parsed = highlight('\n'.join(self.content), lexer, formatter)
        return [nodes.raw('', parsed, format='html')]

directives.register_directive('sourcecode', Pygments)
pygments-2.11.2/external/lilypond-builtins-generator.ly

%% Autogenerate a list of LilyPond keywords
\version "2.23.6"
#(use-modules (ice-9 receive)
(ice-9 regex))
#(define port (open-output-file "../pygments/lexers/_lilypond_builtins.py"))
#(define output-preamble
"\"\"\"
pygments.lexers._lilypond_builtins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LilyPond builtins.
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
\"\"\"
# Contents generated by the script lilypond-builtins-generator.ly
# found in the external/ directory of the source tree.
")
#(format port "~a" output-preamble)
#(define (dump-py-list name vals)
(let* ((string-vals
(map symbol->string vals))
(fixed-vals
(filter-map
(lambda (str)
; To avoid conflicts with Scheme builtins,
; a leading backslash is prepended to \<,
; \= and a few others. The lexer finds it
; itself, so remove it here.
(cond
((equal? str "\\\\")
#f)
((string-startswith str "\\")
(string-drop str 1))
(else
str)))
string-vals))
(sorted-vals ; reproducibility
; Avoid duplicates (e.g., identical pitches
; in different languages)
(uniq-list
(sort fixed-vals string<?)))
(formatted-vals
(map
(lambda (val)
(format #f " \"~a\"," val))
sorted-vals))
(joint-vals
(string-join formatted-vals "\n")))
(format port
"~a = [
~a
]
"
name
joint-vals)))
%% KEYWORDS
#(define keywords
'(
; Lexical modes.
notemode
lyricmode
lyricsto
addlyrics
chordmode
chords
figuremode
figures
drummode
drums
; Output definitions.
header
layout
midi
paper
; Context definitions.
;; \context is also used in music. We take it as
;; a keyword in both cases.
context
with
name
type
accepts
denies
alias
defaultchild
consists
remove
description
;; Not strictly a keyword, but can be viewed so.
inherit-acceptability
; Blocks.
book
bookpart
score
; Other.
new
etc
include
language
version))
#(dump-py-list 'keywords keywords)
%% CLEFS
#(define all-clefs
(map string->symbol (map car supported-clefs)))
#(dump-py-list 'clefs all-clefs)
%% SCALES
#(define all-scales
'(major
minor
ionian
locrian
aeolian
mixolydian
lydian
phrygian
dorian))
#(dump-py-list 'scales all-scales)
%% REPEAT TYPES
#(define all-repeat-types
'(volta percent unfold segno))
#(dump-py-list 'repeat_types all-repeat-types)
%% UNITS
#(define all-units
'(mm cm in pt staff-space))
#(dump-py-list 'units all-units)
%% CHORD MODIFIERS
#(define all-chord-modifiers
'(m dim aug maj))
#(dump-py-list 'chord_modifiers all-chord-modifiers)
%% PITCHES
#(define all-pitch-language-names
(map car language-pitch-names))
#(dump-py-list 'pitch_language_names all-pitch-language-names)
#(define all-pitch-names
(append
; We highlight rests just like pitches.
'(r R)
(map car (append-map cdr language-pitch-names))
; Drum note names.
(map car drumPitchNames)))
#(dump-py-list 'pitches all-pitch-names)
%% MUSIC FUNCTIONS AND SHORTCUTS
% View these as music functions.
#(define extra-music-functions
'(set
unset
override
revert
tweak
once
undo
temporary
repeat
alternative
tempo
change))
#(let* ((module (current-module))
(module-alist (ly:module->alist module))
(all-music-functions
(filter
(lambda (entry)
(ly:music-function? (cdr entry)))
module-alist))
(all-predefined-music-objects
(filter
(lambda (entry)
(ly:music? (cdr entry)))
module-alist)))
(receive (articulations non-articulations)
(partition
(lambda (entry)
(ly:event? (cdr entry)))
all-predefined-music-objects)
(receive (dynamics non-dynamic-articulations)
(partition
(lambda (entry)
(any
(lambda (type)
(music-is-of-type? (cdr entry)
type))
'(dynamic-event crescendo-event decrescendo-event)))
articulations)
(dump-py-list 'music_functions
(append extra-music-functions
(map car all-music-functions)))
(dump-py-list 'dynamics (map car dynamics))
(dump-py-list 'articulations (map car non-dynamic-articulations))
(dump-py-list 'music_commands (map car non-articulations)))))
%% MARKUP COMMANDS
#(let* ((markup-name-regexp
(make-regexp "(.*)-markup(-list)?"))
(modules
(cons (current-module)
(map resolve-module '((lily) (lily accreg)))))
(alist
(apply append
(map ly:module->alist modules)))
(markup-commands
(filter
(lambda (entry)
(or (markup-function? (cdr entry))
(markup-list-function? (cdr entry))))
alist))
(markup-command-names
(map
(lambda (entry)
(let* ((string-name (symbol->string (car entry)))
(match (regexp-exec markup-name-regexp string-name)))
(string->symbol (match:substring match 1))))
markup-commands))
(markup-words
(append '(markup markuplist)
markup-command-names)))
(dump-py-list 'markup_commands markup-words))
%% GROBS
#(let ((grob-names (map car all-grob-descriptions)))
(dump-py-list 'grobs grob-names))
%% CONTEXTS
#(let* ((layout-module
(ly:output-def-scope $defaultlayout))
(layout-alist
(ly:module->alist layout-module))
(all-context-defs
(filter
(lambda (entry)
(ly:context-def? (cdr entry)))
layout-alist))
(context-def-names
(map car all-context-defs)))
(dump-py-list 'contexts context-def-names))
%% TRANSLATORS
#(let* ((all-translators
(ly:get-all-translators))
(translator-names
(map ly:translator-name all-translators)))
(dump-py-list 'translators translator-names))
%% SCHEME FUNCTIONS
#(let* ((module (resolve-module '(lily)))
(module-alist (ly:module->alist module))
(all-functions
(filter
(lambda (entry)
(or (procedure? (cdr entry))
(macro? (cdr entry))))
module-alist))
(all-function-names
(map car all-functions)))
(dump-py-list 'scheme_functions all-function-names))
%% PROPERTIES
#(dump-py-list 'context_properties all-translation-properties)
#(dump-py-list 'grob_properties all-backend-properties)
%% PAPER VARIABLES
% Reference: https://lilypond.org/doc/v2.22/Documentation/notation/page-layout
#(define all-paper-variables
'(paper-height
top-margin
bottom-margin
ragged-bottom
ragged-last-bottom
markup-system-spacing
score-markup-spacing
score-system-spacing
system-system-spacing
markup-markup-spacing
last-bottom-spacing
top-system-spacing
top-markup-spacing
paper-width
line-width
left-margin
right-margin
check-consistency
ragged-right
ragged-last
two-sided
inner-margin
outer-margin
binding-offset
horizontal-shift
indent
short-indent
max-systems-per-page
min-systems-per-page
systems-per-page
system-count
page-breaking
page-breaking-system-system-spacing
page-count
blank-page-penalty
blank-last-page-penalty
auto-first-page-number
first-page-number
print-first-page-number
page-number-type
page-spacing-weight
print-all-headers
system-separator-markup
footnote-separator-markup
; Let's view these four as \paper variables.
basic-distance
minimum-distance
padding
stretchability))
#(dump-py-list 'paper_variables all-paper-variables)
%% HEADER VARIABLES
% Reference: https://lilypond.org/doc/v2.22/Documentation/notation/creating-titles-headers-and-footers.html#default-layout-of-bookpart-and-score-titles
#(define all-header-variables
'(dedication
title
subtitle
subsubtitle
instrument
poet
composer
meter
arranger
tagline
copyright
piece
opus
; The following are used in LSR snippets and regression tests.
lsrtags
doctitle
texidoc))
#(dump-py-list 'header_variables all-header-variables)
#(close-port port)
pygments-2.11.2/external/pygments.bashcomp

#!bash
#
# Bash completion support for Pygments (the 'pygmentize' command).
#
_pygmentize()
{
local cur prev
COMPREPLY=()
cur=`_get_cword`
prev=${COMP_WORDS[COMP_CWORD-1]}
case "$prev" in
-f)
FORMATTERS=`pygmentize -L formatters | grep '* ' | cut -c3- | sed -e 's/,//g' -e 's/:$//'`
COMPREPLY=( $( compgen -W '$FORMATTERS' -- "$cur" ) )
return 0
;;
-l)
LEXERS=`pygmentize -L lexers | grep '* ' | cut -c3- | sed -e 's/,//g' -e 's/:$//'`
COMPREPLY=( $( compgen -W '$LEXERS' -- "$cur" ) )
return 0
;;
-S)
STYLES=`pygmentize -L styles | grep '* ' | cut -c3- | sed s/:$//`
COMPREPLY=( $( compgen -W '$STYLES' -- "$cur" ) )
return 0
;;
esac
if [[ "$cur" == -* ]]; then
COMPREPLY=( $( compgen -W '-f -l -S -L -g -O -P -F \
-N -H -h -V -o' -- "$cur" ) )
return 0
fi
}
complete -F _pygmentize -o default pygmentize
pygments-2.11.2/external/lasso-builtins-generator-9.lasso

#!/usr/bin/lasso9
/*
Builtins Generator for Lasso 9
This is the shell script that was used to extract Lasso 9's built-in keywords
and generate most of the _lasso_builtins.py file. When run, it creates a file
containing the types, traits, methods, and members of the currently-installed
version of Lasso 9.
A list of tags in Lasso 8 can be generated with this code:
iterate(tags_list, local('i'));
#l8tags->insert(string_removeleading(#i, -pattern='_global_'));
/iterate;
#l8tags->sort;
iterate(#l8tags, local('i'));
string_lowercase(#i)+" ";
/iterate;
*/
output("This output statement is required for a complete list of methods.")
local(f) = file("_lasso_builtins-9.py")
#f->doWithClose => {
#f->openTruncate
#f->writeString('# -*- coding: utf-8 -*-
"""
pygments.lexers._lasso_builtins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Built-in Lasso types, traits, methods, and members.
:copyright: Copyright 2006-'+date->year+' by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
')
// Load and register contents of $LASSO9_MASTER_HOME/LassoModules/
database_initialize
// Load all of the libraries from builtins and lassoserver
// This forces all possible available types and methods to be registered
local(srcs =
    (:
        dir(sys_masterHomePath + '/LassoLibraries/builtins/')->eachFilePath,
        dir(sys_masterHomePath + '/LassoLibraries/lassoserver/')->eachFilePath
    )
)
with topLevelDir in delve(#srcs)
where not #topLevelDir->lastComponent->beginsWith('.')
do protect => {
    handle_error => {
        stdoutnl('Unable to load: ' + #topLevelDir + ' ' + error_msg)
    }
    library_thread_loader->loadLibrary(#topLevelDir)
    stdoutnl('Loaded: ' + #topLevelDir)
}
email_initialize
log_initialize
session_initialize
local(
    typesList = set(),
    traitsList = set(),
    unboundMethodsList = set(),
    memberMethodsList = set()
)
// types
with type in sys_listTypes
where not #type->asString->endsWith('$') // skip threads
do {
    #typesList->insert(#type)
}
// traits
with trait in sys_listTraits
where not #trait->asString->beginsWith('$') // skip combined traits
do {
    #traitsList->insert(#trait)
}
// member methods
with type in #typesList
do {
    with method in #type->getType->listMethods
    where #method->typeName == #type // skip inherited methods
    let name = #method->methodName
    where not #name->asString->endsWith('=') // skip setter methods
    where #name->asString->isAlpha(1) // skip unpublished methods
    do {
        #memberMethodsList->insert(#name)
    }
}
with trait in #traitsList
do {
    with method in #trait->getType->provides
    where #method->typeName == #trait // skip inherited methods
    let name = #method->methodName
    where not #name->asString->endsWith('=') // skip setter methods
    where #name->asString->isAlpha(1) // skip unpublished methods
    do {
        #memberMethodsList->insert(#name)
    }
}
// unbound methods
with method in sys_listUnboundMethods
let name = #method->methodName
where not #name->asString->endsWith('=') // skip setter methods
where #name->asString->isAlpha(1) // skip unpublished methods
where #typesList !>> #name
where #traitsList !>> #name
do {
    #unboundMethodsList->insert(#name)
}
// write to file
with i in (:
        pair(#typesList, "BUILTINS = {
    'Types': (
"),
        pair(#traitsList, "    ),
    'Traits': (
"),
        pair(#unboundMethodsList, "    ),
    'Unbound Methods': (
"),
        pair(#memberMethodsList, "    )
}
MEMBERS = {
    'Member Methods': (
")
    )
do {
    #f->writeString(#i->second)
    with t in (#i->first)
    let ts = #t->asString
    order by #ts
    do {
        #f->writeString("        '"+#ts->lowercase&asString+"',\n")
    }
}
#f->writeString("    )
}
")
}
pygments-2.11.2/external/moin-parser.py 0000644 0001750 0001750 00000006770 14165547207 020000 0 ustar carsten carsten
"""
    The Pygments MoinMoin Parser
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This is a MoinMoin parser plugin that renders source code to HTML via
    Pygments; you need Pygments 0.7 or newer for this parser to work.

    To use it, set the options below to match your setup and put this file in
    the data/plugin/parser subdirectory of your Moin instance, and give it the
    name that the parser directive should have. For example, if you name the
    file ``code.py``, you can get a highlighted Python code sample with this
    Wiki markup::

        {{{
        #!code python
        [...]
        }}}

    Additionally, if you set ATTACHMENTS below to True, Pygments will also be
    called for all attachments for whose filenames there is no other parser
    registered.

    You are responsible for including CSS rules that will map the Pygments CSS
    classes to colors. You can output a stylesheet file with `pygmentize`, put
    it into the `htdocs` directory of your Moin instance and then include it in
    the `stylesheets` configuration option in the Moin config, e.g.::

        stylesheets = [('screen', '/htdocs/pygments.css')]

    If you do not want to do that and are willing to accept larger HTML
    output, you can set the INLINESTYLES option below to True.

    :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

# Options
# ~~~~~~~

# Set to True if you want to highlight attachments, in addition to
# {{{ }}} blocks.
ATTACHMENTS = True

# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False


import sys

from pygments import highlight
from pygments.lexers import get_lexer_by_name, get_lexer_for_filename, TextLexer
from pygments.formatters import HtmlFormatter
from pygments.util import ClassNotFound


# wrap lines in <span>s so that the Moin-generated line numbers work
class MoinHtmlFormatter(HtmlFormatter):
    def wrap(self, source, outfile):
        for line in source:
            yield 1, '<span>' + line[1] + '</span>'

htmlformatter = MoinHtmlFormatter(noclasses=INLINESTYLES)
textlexer = TextLexer()
codeid = [0]


class Parser:
    """
    MoinMoin Pygments parser.
    """
    if ATTACHMENTS:
        extensions = '*'
    else:
        extensions = []

    Dependencies = []

    def __init__(self, raw, request, **kw):
        self.raw = raw
        self.req = request
        if "format_args" in kw:
            # called from a {{{ }}} block
            try:
                self.lexer = get_lexer_by_name(kw['format_args'].strip())
            except ClassNotFound:
                self.lexer = textlexer
            return

        if "filename" in kw:
            # called for an attachment
            filename = kw['filename']
        else:
            # called for an attachment by an older moin
            # HACK: find out the filename by peeking into the execution
            #       frame which might not always work
            try:
                frame = sys._getframe(1)
                filename = frame.f_locals['filename']
            except:
                filename = 'x.txt'
        try:
            self.lexer = get_lexer_for_filename(filename)
        except ClassNotFound:
            self.lexer = textlexer

    def format(self, formatter):
        codeid[0] += 1
        id = "pygments_%s" % codeid[0]
        w = self.req.write
        w(formatter.code_area(1, id, start=1, step=1))
        w(formatter.rawHTML(highlight(self.raw, self.lexer, htmlformatter)))
        w(formatter.code_area(0, id))
pygments-2.11.2/.github/ 0000755 0001750 0001750 00000000000 14165547207 014676 5 ustar carsten carsten
pygments-2.11.2/.github/actions/ 0000755 0001750 0001750 00000000000 14165547207 016336 5 ustar carsten carsten
pygments-2.11.2/.github/actions/pyodide-package/ 0000755 0001750 0001750 00000000000 14165547207 021364 5 ustar carsten carsten
pygments-2.11.2/.github/actions/pyodide-package/action.yml 0000644 0001750 0001750 00000000244 14165547207 023364 0 ustar carsten carsten
name: 'Update Pyodide package'
description: 'Update the WASM compiled Pygments with Pyodide'
runs:
  using: 'docker'
  image: 'birkenfeld/pyodide-pygments-builder'
pygments-2.11.2/.github/workflows/ 0000755 0001750 0001750 00000000000 14165547207 016733 5 ustar carsten carsten
pygments-2.11.2/.github/workflows/docs.yaml 0000644 0001750 0001750 00000002052 14165547207 020546 0 ustar carsten carsten
name: Docs
on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: 3.7
      - name: Checkout Pygments
        uses: actions/checkout@v1
      - name: Install Sphinx & WCAG contrast ratio
        run: pip install Sphinx wcag-contrast-ratio
      - name: Create Pyodide WASM package
        uses: ./.github/actions/pyodide-package
      - name: Sphinx build
        run: |
          cd doc
          WEBSITE_BUILD=1 make dirhtml
          cp -a ../pyodide _build/dirhtml/_static
          touch _build/dirhtml/.nojekyll
          echo -e 'pygments.org\nwww.pygments.org' > _build/dirhtml/CNAME
          echo 'Automated deployment of docs for GitHub pages.' > _build/dirhtml/README
      - name: Deploy to repo
        uses: peaceiris/actions-gh-pages@v2.5.0
        env:
          ACTIONS_DEPLOY_KEY: ${{ secrets.ACTIONS_DEPLOY_KEY }}
          EXTERNAL_REPOSITORY: pygments/pygments.github.io
          PUBLISH_BRANCH: master
          PUBLISH_DIR: ./doc/_build/dirhtml
pygments-2.11.2/.github/workflows/build.yaml 0000644 0001750 0001750 00000003237 14165547207 020723 0 ustar carsten carsten
name: Pygments
on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        python-version: ["3.5", "3.6", "3.7", "3.8", "3.9", "3.10"]
      max-parallel: 4
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install package
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install .
      - name: Test package
        run: make test TEST=-v
        if: runner.os == 'Linux'
      - name: Test package
        run: pytest
        if: runner.os == 'Windows'

  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - name: Run make check
        run: make check
      - name: Fail if the basic checks failed
        run: make check
        if: runner.os == 'Linux'

  check-mapfiles:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - name: Regenerate mapfiles
        run: make mapfiles
      - name: Fail if mapfiles changed
        run: |
          if git ls-files -m | grep mapping; then
            echo 'Please run "make mapfiles" and add the changes to a commit.'
            exit 1
          fi

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Check out regexlint
        run: git clone https://github.com/pygments/regexlint
      - name: Run regexlint
        run: make regexlint REGEXLINT=`pwd`/regexlint
pygments-2.11.2/MANIFEST.in 0000644 0001750 0001750 00000000212 14165547207 015067 0 ustar carsten carsten
include Makefile CHANGES LICENSE AUTHORS
include external/*
recursive-include tests *
recursive-include doc *
recursive-include scripts *
pygments-2.11.2/doc/ 0000755 0001750 0001750 00000000000 14165547207 014103 5 ustar carsten carsten
pygments-2.11.2/doc/make.bat 0000644 0001750 0001750 00000011754 14165547207 015520 0 ustar carsten carsten
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Pygments.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Pygments.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
:end
pygments-2.11.2/doc/download.rst 0000644 0001750 0001750 00000002360 14165547207 016445 0 ustar carsten carsten Download and installation
=========================
The current release is version |version|.
Packaged versions
-----------------
You can download it `from the Python Package Index
<https://pypi.python.org/pypi/Pygments>`_. For installation of packages from
PyPI, we recommend `Pip <https://pip.pypa.io/>`_, which works on all
major platforms.
Under Linux, most distributions include a package for Pygments, usually called
``pygments`` or ``python-pygments``. You can install it with the package
manager as usual.
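
Once the package is installed (for example with ``pip install Pygments``), a
quick way to verify that it works is to highlight a snippet from the Python
interpreter. This is a minimal sketch using the standard lexer/formatter/
``highlight()`` entry points of the library:

```python
# Minimal smoke test for a fresh Pygments install: turn one line of
# Python source into HTML markup.
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# highlight(code, lexer, formatter) returns the formatted output as a
# string; HtmlFormatter wraps it in a <div class="highlight"> block by
# default, with one <span> per highlighted token.
html = highlight('print("Hello")', PythonLexer(), HtmlFormatter())
print(html)
```

The same result is available from the command line via the ``pygmentize``
script that the package installs.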
Development sources
-------------------
We're using the Git version control system. You can get the development source
using this command::
git clone https://github.com/pygments/pygments
Development takes place at `GitHub <https://github.com/pygments/pygments>`_.
The latest changes in the development source code are listed in the `changelog
<https://github.com/pygments/pygments/blob/master/CHANGES>`_.
.. Documentation
   -------------

.. XXX todo

   You can download the documentation either as
   a bunch of rst files from the Git repository, see above, or
   as a tar.gz containing rendered HTML files: