markdown-1.2.0/.gitignore:

/node_modules/
/src/parser.*
.tern-*
/dist

markdown-1.2.0/.mocharc.cjs:

module.exports = {
  extension: ["ts"],
  spec: ["test/test-*.ts"],
  loader: "ts-node/esm/transpile-only"
}

markdown-1.2.0/CHANGELOG.md:

## 1.2.0 (2023-12-25)

### Bug fixes

Properly require whitespace before link titles.

Parse autolinks as their own nodes.

### New features

Wrap autolinks in an `Autolink` syntax node, rather than just `URL`, and exclude the wrapping angle brackets from the `URL` nodes.

## 1.1.2 (2023-12-07)

### Bug fixes

Fix a bug that could cause blockquote markers to be attached to the wrong parent node, causing them to overlap with sibling syntax nodes.

## 1.1.1 (2023-11-17)

### Bug fixes

Make sure GFM autolinking accepts URLs like test.co.uk.

Fix a bug in `Autolink` that made it fail to accept some URLs with hyphens.

## 1.1.0 (2023-08-03)

### New features

The new `Autolink` extension (included in the `GFM` extension bundle) marks some types of URLs even without angle brackets.

## 1.0.5 (2023-06-30)

### Bug fixes

Fix another issue in reuse of nodes when the input has gaps.

## 1.0.4 (2023-06-29)

### Bug fixes

Fix another bug in incremental parsing across input gaps.

## 1.0.3 (2023-06-22)

### Bug fixes

Only parse list items as tasks when there is whitespace after the checkbox brackets.

Remove an unnecessary regexp operator.

Fix a crash doing an incremental parse on input ranges with gaps between them.

## 1.0.2 (2022-09-21)

### Bug fixes

In the strikethrough extension, ignore opening marks with a space after them and closing marks with a space before them.

## 1.0.1 (2022-06-29)

### Bug fixes

Fix a crash that could occur when there were gaps in the parseable ranges right at the start of a line.

## 1.0.0 (2022-06-06)

### New features

First stable version.

## 0.16.1 (2022-05-20)

### Bug fixes

Fix a bug that prevented style tags from built-in extensions from being applied.

## 0.16.0 (2022-04-20)

### New features

This package now attaches highlighting information to its syntax tree.

It is now possible to include highlighting information when defining nodes in extensions via `NodeSpec.style`.

## 0.15.6 (2022-03-18)

### Bug fixes

Fix a bug where GFM tables occurring directly below a paragraph weren't recognized.

## 0.15.5 (2022-02-18)

### New features

The `BlockContext` type now has a `depth` property providing the number of parent nodes, and a `parentType` method allowing code to inspect the type of those nodes.

## 0.15.4 (2022-02-02)

### Bug fixes

Fix the compatibility fallback for engines without RegExp `\p` support.

## 0.15.3 (2021-12-13)

### Bug fixes

Fix a bug where, if there were multiple extensions passed to the editor, the `wrap` option got dropped from the resulting configuration.

## 0.15.2 (2021-11-08)

### Bug fixes

Fix a bug where an ordered list item after a nested bullet list would get treated as part of the bullet list item.

## 0.15.1 (2021-10-11)

### Bug fixes

Fix a bug that caused `endLeafBlock` configuration to be ignored by the parser.

## 0.15.0 (2021-08-11)

### Breaking changes

The module name has changed from `lezer-markdown` to `@lezer/markdown`.

`MarkdownParser` now extends `Parser` and follows its interface.

The Markdown parser no longer has its own support for nested parsing (but can be wrapped with `parseCode` to get a similar effect).

### New features

The new `parseCode` function can be used to set up a mixed-language parser for Markdown.

## 0.14.5 (2021-05-12)

### Bug fixes

Fix an issue where continued paragraph lines starting with tabs could cause the parser to create a tree with invalid node positions.

## 0.14.4 (2021-03-09)

### Bug fixes

Fix a bug where an unterminated nested code block could call a nested parser with a start position beyond the end of the document.

Fix a bug where the parser could return an invalid tree when `forceFinish` was called during a nested parse.

## 0.14.3 (2021-02-22)

### Breaking changes

`parseInline` has been moved to `MarkdownParser` so that it can also be called from an inline context.

### New features

Heading nodes now have different types based on their level.

The `elt` helper method can now be called with a `Tree` to wrap the result of a nested parse in an element.

The `startNested` method is now exported.

## 0.14.2 (2021-02-12)

### Bug fixes

`BlockParser.parse`'s exported type was missing an argument.

Fix a bug that would cause incorrect offsets for children nested two deep in an element passed to `BlockContext.addElement`.

## 0.14.1 (2021-02-11)

### Bug fixes

Fix table parsing when header cells are empty.

## 0.14.0 (2021-02-10)

### New features

Add an extension interface. The `configure` method now takes more options, allowing client code to define new syntax node types and parse logic.

Add extensions for subscript, superscript, strikethrough, tables, and task lists to the distribution.

## 0.13.0 (2020-12-04)

### Breaking changes

First numbered release.
markdown-1.2.0/LICENSE:

MIT License

Copyright (C) 2020 by Marijn Haverbeke and others

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

markdown-1.2.0/README.md:

# lezer-markdown

This is an incremental Markdown ([CommonMark](https://commonmark.org/) with support for extension) parser that integrates well with the [Lezer](https://lezer.codemirror.net/) parser system. It does not in fact use the Lezer runtime (that runs LR parsers, and Markdown can't really be parsed that way), but it produces Lezer-style compact syntax trees and consumes fragments of such trees for its incremental parsing.

Note that this only _parses_ the document, producing a data structure that represents its syntactic form, and doesn't help with outputting HTML.
Also, in order to be single-pass and incremental, it doesn't do some things that a conforming CommonMark parser is expected to do—specifically, it doesn't validate link references, so it'll parse `[a][b]` and similar as a link, even if no `[b]` reference is declared.

The [@codemirror/lang-markdown](https://github.com/codemirror/lang-markdown) package integrates this parser with CodeMirror to provide Markdown editor support.

The code is licensed under an MIT license.

## Interface
parser: MarkdownParser

The default CommonMark parser.

class MarkdownParser extends Parser

A Markdown parser configuration.

nodeSet: NodeSet

The parser's syntax node types.

configure(spec: MarkdownExtension) → MarkdownParser

Reconfigure the parser.

parseInline(text: string, offset: number) → Element[]

Parse the given piece of inline text at the given offset, returning an array of Element objects representing the inline content.

interface MarkdownConfig

Objects of this type are used to configure the Markdown parser.

props⁠?: readonly NodePropSource[]

Node props to add to the parser's node set.

defineNodes⁠?: readonly (string | NodeSpec)[]

Define new node types for use in parser extensions.

parseBlock⁠?: readonly BlockParser[]

Define additional block parsing logic.

parseInline⁠?: readonly InlineParser[]

Define new inline parsing logic.

remove⁠?: readonly string[]

Remove the named parsers from the configuration.

wrap⁠?: ParseWrapper

Add a parse wrapper (such as a mixed-language parser) to this parser.

type MarkdownExtension = MarkdownConfig | readonly MarkdownExtension[]

To make it possible to group extensions together into bigger extensions (such as the GitHub-flavored Markdown extension), reconfiguration accepts nested arrays of config objects.

parseCode(config: Object) → MarkdownExtension

Create a Markdown extension to enable nested parsing on code blocks and/or embedded HTML.

config
codeParser⁠?: fn(info: string) → Parser | null

When provided, this will be used to parse the content of code blocks. info is the string after the opening ``` marker, or the empty string if there is no such info or this is an indented code block. If there is a parser available for the code, it should be returned; otherwise, return null.

htmlParser⁠?: Parser

The parser used to parse HTML tags (both block and inline).

### GitHub Flavored Markdown
GFM: MarkdownConfig[]

Extension bundle containing Table, TaskList, Strikethrough, and Autolink.

Table: MarkdownConfig

This extension provides GFM-style tables, using syntax like this:

| head 1 | head 2 |
| ---    | ---    |
| cell 1 | cell 2 |
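A paragraph containing pipes only becomes a table when its second line is a delimiter row. That check can be sketched with the delimiter-row regular expression that appears in this package's `extension.ts`:

```typescript
// Delimiter row: optional leading pipe, then cells of dashes with
// optional ":" alignment markers (mirrors `delimiterLine` in extension.ts).
const delimiterLine = /^\|?(\s*:?-+:?\s*\|)+(\s*:?-+:?\s*)?$/

function isDelimiterRow(line: string): boolean {
  return delimiterLine.test(line)
}

console.log(isDelimiterRow("| ---    | ---    |")) // true
console.log(isDelimiterRow("| :---: | ---: |"))    // true
console.log(isDelimiterRow("| cell 1 | cell 2 |")) // false
```

The parser additionally requires the header row and the delimiter row to have the same cell count before it commits to table parsing.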
TaskList: MarkdownConfig

Extension providing GFM-style task list items, where list items can be prefixed with [ ] or [x] to add a checkbox.

Strikethrough: MarkdownConfig

An extension that implements GFM-style Strikethrough syntax using ~~ delimiters.

Autolink: MarkdownConfig

Extension that implements autolinking for www./http:///https:///mailto:/xmpp: URLs and email addresses.
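Autolinked URLs also get trailing punctuation trimmed, as GFM requires. A simplified sketch of that trimming, modeled on the `autolinkURLEnd` helper in this package's source (the real function also strips trailing HTML entity references):

```typescript
// Trim characters that GFM autolinking excludes from the end of a URL:
// trailing punctuation, and ")" when the URL has no matching "(".
// Simplified sketch, not the package's exact code.
function trimAutolinkEnd(url: string): string {
  let end = url.length
  const count = (s: string, ch: string) => s.split(ch).length - 1
  for (;;) {
    const last = url[end - 1]
    if (/[?!.,:*_~]/.test(last)) end--
    else if (last == ")" && count(url.slice(0, end), ")") > count(url.slice(0, end), "(")) end--
    else break
  }
  return url.slice(0, end)
}

console.log(trimAutolinkEnd("www.example.com/path."))  // "www.example.com/path"
console.log(trimAutolinkEnd("www.example.com/a_(b)"))  // "www.example.com/a_(b)" (unchanged)
```

The parenthesis counting is what lets a URL like `en.wikipedia.org/wiki/Foo_(bar)` keep its closing parenthesis while a URL written inside parentheses does not absorb the closing one.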

### Other extensions
Subscript: MarkdownConfig

Extension providing Pandoc-style subscript using ~ markers.

Superscript: MarkdownConfig

Extension providing Pandoc-style superscript using ^ markers.

Emoji: MarkdownConfig

Extension that parses two colons with only letters, underscores, and numbers between them as Emoji nodes.
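The match itself is a small regular expression check; a standalone sketch (the pattern after the opening colon is the one used in this package's `extension.ts`):

```typescript
// An Emoji node is ":", one or more letters/digits/underscores, then ":".
// Returns the end position of the node, or -1 if there is no match.
function matchEmoji(text: string, pos: number): number {
  if (text.charCodeAt(pos) != 58 /* ':' */) return -1
  const m = /^[a-zA-Z_0-9]+:/.exec(text.slice(pos + 1))
  return m ? pos + 1 + m[0].length : -1
}

console.log(matchEmoji("a :smile: b", 2)) // 9
```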

### Extension

The parser can, to a certain extent, be extended to handle additional syntax.

interface NodeSpec

Used in the configuration to define new syntax node types.

name: string

The node's name.

block⁠?: boolean

Should be set to true if this type represents a block node.

composite⁠?: fn(cx: BlockContext, line: Line, value: number) → boolean

If this is a composite block, this should hold a function that, at the start of a new line where that block is active, checks whether the composite block should continue (return value) and optionally adjusts the line's base position and registers nodes for any markers involved in the block's syntax.

style⁠?: Tag | readonly Tag[] | Object<Tag | readonly Tag[]>

Add highlighting tag information for this node. The value of this property may either be a tag or array of tags to assign directly to this node, or an object in the style of styleTags's argument to assign more complicated rules.

class BlockContext implements PartialParse

Block-level parsing functions get access to this context object.

lineStart: number

The start of the current line.

parser: MarkdownParser

The parser configuration used.

depth: number

The number of parent blocks surrounding the current block.

parentType(depth⁠?: number = this.depth - 1) → NodeType

Get the type of the parent block at the given depth. When no depth is passed, return the type of the innermost parent.

nextLine() → boolean

Move to the next input line. This should only be called by (non-composite) block parsers that consume the line directly, or leaf block parser nextLine methods when they consume the current line (and return true).

prevLineEnd() → number

The end position of the previous line.

startComposite(type: string, start: number, value⁠?: number = 0)

Start a composite block. Should only be called from block parser functions that return null.

addElement(elt: Element)

Add a block element. Can be called by block parsers.

addLeafElement(leaf: LeafBlock, elt: Element)

Add a block element from a leaf parser. This makes sure any extra composite block markup (such as blockquote markers) inside the block are also added to the syntax tree.

elt(type: string, from: number, to: number, children⁠?: readonly Element[]) → Element

Create an Element object to represent some syntax node.

interface BlockParser

Block parsers handle block-level structure. There are three general types of block parsers:

  • Composite block parsers, which handle things like lists and blockquotes. These define a parse method that starts a composite block and returns null when it recognizes its syntax.

  • Eager leaf block parsers, used for things like code or HTML blocks. These can unambiguously recognize their content from its first line. They define a parse method that, if it recognizes the construct, moves the current line forward to the line beyond the end of the block, adds a syntax node for the block, and returns true.

  • Leaf block parsers that observe a paragraph-like construct as it comes in, and optionally decide to handle it at some point. This is used for "setext" (underlined) headings and link references. These define a leaf method that checks the first line of the block and returns a LeafBlockParser object if it wants to observe that block.

name: string

The name of the parser. Can be used by other block parsers to specify precedence.

parse⁠?: fn(cx: BlockContext, line: Line) → boolean | null

The eager parse function, which can look at the block's first line and return false to do nothing, true if it has parsed (and moved past a block), or null if it has started a composite block.

leaf⁠?: fn(cx: BlockContext, leaf: LeafBlock) → LeafBlockParser | null

A leaf parse function. If no regular parse functions match for a given line, its content will be accumulated for a paragraph-style block. This method can return an object that overrides that style of parsing in some situations.

endLeaf⁠?: fn(cx: BlockContext, line: Line, leaf: LeafBlock) → boolean

Some constructs, such as code blocks or newly started blockquotes, can interrupt paragraphs even without a blank line. If your construct can do this, provide a predicate here that recognizes lines that should end a paragraph (or other non-eager leaf block).

before⁠?: string

When given, this parser will be installed directly before the block parser with the given name. The default configuration defines block parsers with names LinkReference, IndentedCode, FencedCode, Blockquote, HorizontalRule, BulletList, OrderedList, ATXHeading, HTMLBlock, and SetextHeading.

after⁠?: string

When given, the parser will be installed directly after the parser with the given name.

interface LeafBlockParser

Objects that are used to override paragraph-style blocks should conform to this interface.

nextLine(cx: BlockContext, line: Line, leaf: LeafBlock) → boolean

Update the parser's state for the next line, and optionally finish the block. This is not called for the first line (the object is constructed at that line), but for any further lines. When it returns true, the block is finished. It is okay for the function to consume the current line or any subsequent lines when returning true.

finish(cx: BlockContext, leaf: LeafBlock) → boolean

Called when the block is finished by external circumstances (such as a blank line or the start of another construct). If this parser can handle the block up to its current position, it should finish the block and return true.

class Line

Data structure used during block-level per-line parsing.

text: string

The line's full text.

baseIndent: number

The base indent provided by the composite contexts (that have been handled so far).

basePos: number

The string position corresponding to the base indent.

pos: number

The position of the next non-whitespace character beyond any list, blockquote, or other composite block markers.

indent: number

The column of the next non-whitespace character.

next: number

The character code of the character after pos.

skipSpace(from: number) → number

Skip whitespace after the given position, return the position of the next non-space character or the end of the line if there's only space after from.

moveBase(to: number)

Move the line's base position forward to the given position. This should only be called by composite block parsers or markup skipping functions.

moveBaseColumn(indent: number)

Move the line's base position forward to the given column.

addMarker(elt: Element)

Store a composite-block-level marker. Should be called from markup skipping functions when they consume any non-whitespace characters.

countIndent(to: number, from⁠?: number = 0, indent⁠?: number = 0) → number

Find the column position at to, optionally starting at a given position and column.

findColumn(goal: number) → number

Find the position corresponding to the given column.
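Both of these methods are column arithmetic in which a tab advances to the next 4-column tab stop. A standalone sketch of that computation (an illustrative reimplementation, not the package's exact code):

```typescript
// Count the column reached at string position `to`, treating a tab as a
// jump to the next multiple-of-4 tab stop (the tab-stop width Markdown
// indentation rules use). Illustrative sketch only.
function countColumn(text: string, to: number, from = 0, column = 0): number {
  for (let i = from; i < to; i++)
    column = text.charCodeAt(i) == 9 /* tab */ ? column + 4 - (column % 4) : column + 1
  return column
}

console.log(countColumn("\tfoo", 1))  // 4 (one tab reaches column 4)
console.log(countColumn("ab\tc", 3))  // 4 (tab after column 2 also lands on 4)
```

This is why string positions and columns are distinct concepts in the Line API: with tabs present, one character can span several columns.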

class LeafBlock

Data structure used to accumulate a block's content during leaf block parsing.

parsers: LeafBlockParser[]

The block parsers active for this block.

start: number

The start position of the block.

content: string

The block's text content.

class InlineContext

Inline parsing functions get access to this context, and use it to read the content and emit syntax nodes.

parser: MarkdownParser

The parser that is being used.

text: string

The text of this inline section.

offset: number

The starting offset of the section in the document.

char(pos: number) → number

Get the character code at the given (document-relative) position.

end: number

The position of the end of this inline section.

slice(from: number, to: number) → string

Get a substring of this inline section. Again uses document-relative positions.

addDelimiter(type: DelimiterType, from: number, to: number, open: boolean, close: boolean) → number

Add a delimiter at this given position. open and close indicate whether this delimiter is opening, closing, or both. Returns the end of the delimiter, for convenient returning from parse functions.

addElement(elt: Element) → number

Add an inline element. Returns the end of the element.

findOpeningDelimiter(type: DelimiterType) → number | null

Find an opening delimiter of the given type. Returns null if no delimiter is found, or an index that can be passed to takeContent otherwise.

takeContent(startIndex: number) → Element[]

Remove all inline elements and delimiters starting from the given index (which you should get from findOpeningDelimiter), resolve delimiters inside of them, and return them as an array of elements.

skipSpace(from: number) → number

Skip space after the given (document) position, returning either the position of the next non-space character or the end of the section.

elt(type: string, from: number, to: number, children⁠?: readonly Element[]) → Element

Create an Element for a syntax node.

interface InlineParser

Inline parsers are called for every character of parts of the document that are parsed as inline content.

name: string

This parser's name, which can be used by other parsers to indicate a relative precedence.

parse(cx: InlineContext, next: number, pos: number) → number

The parse function. Gets the next character and its position as arguments. Should return -1 if it doesn't handle the character, or add some element or delimiter and return the end position of the content it parsed if it can.

before⁠?: string

When given, this parser will be installed directly before the parser with the given name. The default configuration defines inline parsers with names Escape, Entity, InlineCode, HTMLTag, Emphasis, HardBreak, Link, and Image. When no before or after property is given, the parser is added to the end of the list.

after⁠?: string

When given, the parser will be installed directly after the parser with the given name.
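The parse-function contract above can be illustrated with a stub context. The `Cx` interface below is a hypothetical stand-in for `InlineContext`, reduced to the two members the sketch needs; it is not the package's real type:

```typescript
// Stub of the InlineContext surface this sketch needs (illustrative only).
interface Cx {
  char(pos: number): number
  addElement(from: number, to: number): number // returns `to`, like the real addElement
}

// An InlineParser-style parse function: return -1 to pass on the
// character, or add an element and return its end position.
function parseSmiley(cx: Cx, next: number, pos: number): number {
  if (next != 58 /* ':' */ || cx.char(pos + 1) != 41 /* ')' */) return -1
  return cx.addElement(pos, pos + 2)
}

const text = "hi :)"
const cx: Cx = {
  char: p => text.charCodeAt(p),
  addElement: (_from, to) => to
}
console.log(parseSmiley(cx, text.charCodeAt(3), 3)) // 5
```

In a real extension the function would call `cx.elt` to build the node and `cx.addElement` to register it; the stub only demonstrates the return-value convention.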

interface DelimiterType

Delimiters are used during inline parsing to store the positions of things that might be delimiters, if another matching delimiter is found. They are identified by objects with these properties.

resolve⁠?: string

If this is given, the delimiter should be matched automatically when a piece of inline content is finished. Such delimiters will be matched with delimiters of the same type according to their open and close properties. When a match is found, the content between the delimiters is wrapped in a node whose name is given by the value of this property.

When this isn't given, you need to match the delimiter eagerly using the findOpeningDelimiter and takeContent methods.

mark⁠?: string

If the delimiter itself should, when matched, create a syntax node, set this to the name of the syntax node.

class Element

Elements are used to compose syntax nodes during parsing.

type: number

The node's id.

from: number

The start of the node, as an offset from the start of the document.

to: number

The end of the node.

markdown-1.2.0/bin/build-readme.cjs:

// Build github-proof readmes that contain the package's API
// docs as HTML.

const {gather} = require("getdocs-ts")
const {build} = require("builddocs")
const {join} = require("path"), fs = require("fs")

let root = join(__dirname, "..")

function buildReadme() {
  let template = fs.readFileSync(join(root, "src", "README.md"), "utf8")
  let placeholders = template.match(/\n@\w+(?=\n|$)/g), dummy = placeholders.join("\n\n
\n\n")
  let html = build({
    mainText: dummy,
    anchorPrefix: "",
    allowUnresolvedTypes: false,
    imports: [type => {
      if (/\bcommon\b/.test(type.typeSource))
        return `https://lezer.codemirror.net/docs/ref/#common.${type.type}`
      if (/\blr\b/.test(type.typeSource))
        return `https://lezer.codemirror.net/docs/ref/#lr.${type.type}`
      if (/\bhighlight\b/.test(type.typeSource))
        return `https://lezer.codemirror.net/docs/ref/#highlight.${type.type}`
      if (type.type == "NodeSet") console.log(type.typeSource)
    }]
  }, gather({filename: join(root, "src", "index.ts"), basedir: join(root, "src"), }))
  html = html.replace(/<\/?span.*?>/g, "")
    .replace(/id="(.*?)"/g, (_, id) => `id="user-content-${id.toLowerCase()}"`)
    .replace(/href="#(.*?)"/g, (_, id) => `href="#user-content-${id.toLowerCase()}"`)
  let pieces = html.split("\n
\n")
  let i = 0
  return template.replace(/\n@\w+(?=\n|$)/g, _ => pieces[i++])
}

fs.writeFileSync(join(root, "README.md"), buildReadme())

markdown-1.2.0/package.json:

{
  "name": "@lezer/markdown",
  "version": "1.2.0",
  "description": "Incremental Markdown parser that consumes and emits Lezer trees",
  "main": "dist/index.cjs",
  "type": "module",
  "exports": {
    "import": "./dist/index.js",
    "require": "./dist/index.cjs"
  },
  "module": "dist/index.js",
  "types": "dist/index.d.ts",
  "author": "Marijn Haverbeke ",
  "license": "MIT",
  "devDependencies": {
    "ist": "^1.1.1",
    "mocha": "^10.2.0",
    "@lezer/html": "^1.0.0",
    "rollup": "^2.52.2",
    "rollup-plugin-typescript2": "^0.34.1",
    "ts-node": "^10.0.0",
    "typescript": "^4.3.4",
    "getdocs-ts": "^0.1.0",
    "builddocs": "^1.0.0"
  },
  "dependencies": {
    "@lezer/common": "^1.0.0",
    "@lezer/highlight": "^1.0.0"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/lezer-parser/markdown.git"
  },
  "scripts": {
    "watch": "rollup -w -c rollup.config.js",
    "prepare": "rollup -c rollup.config.js",
    "test": "mocha",
    "build-readme": "node bin/build-readme.cjs"
  }
}

markdown-1.2.0/rollup.config.js:

import typescript from "rollup-plugin-typescript2"

export default {
  input: "./src/index.ts",
  output: [{
    format: "cjs",
    file: "./dist/index.cjs",
    externalLiveBindings: false
  }, {
    format: "es",
    file: "./dist/index.js",
    externalLiveBindings: false
  }],
  external: ["@lezer/common", "@lezer/highlight", "@lezer/lr"],
  plugins: [
    typescript({
      check: false,
      tsconfigOverride: {
        compilerOptions: {
          lib: ["es5", "es6"],
          sourceMap: true,
          target: "es6",
          strict: false,
          declaration: true
        }
      },
      include: ["src/*.ts"]
    })
  ]
}

markdown-1.2.0/src/README.md:

# 
lezer-markdown

This is an incremental Markdown ([CommonMark](https://commonmark.org/) with support for extension) parser that integrates well with the [Lezer](https://lezer.codemirror.net/) parser system. It does not in fact use the Lezer runtime (that runs LR parsers, and Markdown can't really be parsed that way), but it produces Lezer-style compact syntax trees and consumes fragments of such trees for its incremental parsing.

Note that this only _parses_ the document, producing a data structure that represents its syntactic form, and doesn't help with outputting HTML.

Also, in order to be single-pass and incremental, it doesn't do some things that a conforming CommonMark parser is expected to do—specifically, it doesn't validate link references, so it'll parse `[a][b]` and similar as a link, even if no `[b]` reference is declared.

The [@codemirror/lang-markdown](https://github.com/codemirror/lang-markdown) package integrates this parser with CodeMirror to provide Markdown editor support.

The code is licensed under an MIT license.

## Interface

@parser

@MarkdownParser

@MarkdownConfig

@MarkdownExtension

@parseCode

### GitHub Flavored Markdown

@GFM

@Table

@TaskList

@Strikethrough

@Autolink

### Other extensions

@Subscript

@Superscript

@Emoji

### Extension

The parser can, to a certain extent, be extended to handle additional syntax.

@NodeSpec

@BlockContext

@BlockParser

@LeafBlockParser

@Line

@LeafBlock

@InlineContext

@InlineParser

@DelimiterType

@Element

markdown-1.2.0/src/extension.ts:

import {InlineContext, BlockContext, MarkdownConfig, LeafBlockParser,
        LeafBlock, Line, Element, space, Punctuation} from "./markdown"
import {tags as t} from "@lezer/highlight"

const StrikethroughDelim = {resolve: "Strikethrough", mark: "StrikethroughMark"}

/// An extension that implements
/// [GFM-style](https://github.github.com/gfm/#strikethrough-extension-)
/// Strikethrough syntax using `~~` delimiters.
export const Strikethrough: MarkdownConfig = {
  defineNodes: [{
    name: "Strikethrough",
    style: {"Strikethrough/...": t.strikethrough}
  }, {
    name: "StrikethroughMark",
    style: t.processingInstruction
  }],
  parseInline: [{
    name: "Strikethrough",
    parse(cx, next, pos) {
      if (next != 126 /* '~' */ || cx.char(pos + 1) != 126 || cx.char(pos + 2) == 126) return -1
      let before = cx.slice(pos - 1, pos), after = cx.slice(pos + 2, pos + 3)
      let sBefore = /\s|^$/.test(before), sAfter = /\s|^$/.test(after)
      let pBefore = Punctuation.test(before), pAfter = Punctuation.test(after)
      return cx.addDelimiter(StrikethroughDelim, pos, pos + 2,
                             !sAfter && (!pAfter || sBefore || pBefore),
                             !sBefore && (!pBefore || sAfter || pAfter))
    },
    after: "Emphasis"
  }]
}

function parseRow(cx: BlockContext, line: string, startI = 0, elts?: Element[], offset = 0) {
  let count = 0, first = true, cellStart = -1, cellEnd = -1, esc = false
  let parseCell = () => {
    elts!.push(cx.elt("TableCell", offset + cellStart, offset + cellEnd,
                      cx.parser.parseInline(line.slice(cellStart, cellEnd), offset + cellStart)))
  }

  for (let i = startI; i < line.length; i++) {
    let next = line.charCodeAt(i)
    if (next == 124 /* '|' */ && !esc) {
      if (!first || cellStart > -1) count++
      first = false
      if (elts) {
        if (cellStart > -1) parseCell()
        elts.push(cx.elt("TableDelimiter", i + offset, i + offset + 1))
      }
      cellStart = cellEnd = -1
    } else if (esc || next != 32 && next != 9) {
      if (cellStart < 0) cellStart = i
      cellEnd = i + 1
    }
    esc = !esc && next == 92
  }
  if (cellStart > -1) {
    count++
    if (elts) parseCell()
  }
  return count
}

function hasPipe(str: string, start: number) {
  for (let i = start; i < str.length; i++) {
    let next = str.charCodeAt(i)
    if (next == 124 /* '|' */) return true
    if (next == 92 /* '\\' */) i++
  }
  return false
}

const delimiterLine = /^\|?(\s*:?-+:?\s*\|)+(\s*:?-+:?\s*)?$/

class TableParser implements LeafBlockParser {
  // Null means we haven't seen the second line yet, false means this
  // isn't a table, and an array means this is a table and
  // we've parsed the given rows so far.
  rows: false | null | Element[] = null

  nextLine(cx: BlockContext, line: Line, leaf: LeafBlock) {
    if (this.rows == null) { // Second line
      this.rows = false
      let lineText
      if ((line.next == 45 || line.next == 58 || line.next == 124 /* '-:|' */) &&
          delimiterLine.test(lineText = line.text.slice(line.pos))) {
        let firstRow: Element[] = [], firstCount = parseRow(cx, leaf.content, 0, firstRow, leaf.start)
        if (firstCount == parseRow(cx, lineText, line.pos))
          this.rows = [cx.elt("TableHeader", leaf.start, leaf.start + leaf.content.length, firstRow),
                       cx.elt("TableDelimiter", cx.lineStart + line.pos, cx.lineStart + line.text.length)]
      }
    } else if (this.rows) { // Line after the second
      let content: Element[] = []
      parseRow(cx, line.text, line.pos, content, cx.lineStart)
      this.rows.push(cx.elt("TableRow", cx.lineStart + line.pos, cx.lineStart + line.text.length, content))
    }
    return false
  }

  finish(cx: BlockContext, leaf: LeafBlock) {
    if (!this.rows) return false
    cx.addLeafElement(leaf, cx.elt("Table", leaf.start, leaf.start + leaf.content.length,
                                   this.rows as readonly Element[]))
    return true
  }
}

/// This extension provides
/// [GFM-style](https://github.github.com/gfm/#tables-extension-)
/// tables, using syntax like this:
///
/// ```
/// | head 1 | head 2 |
/// | ---    | ---    |
/// | cell 1 | cell 2 |
/// ```
export const Table: MarkdownConfig = {
  defineNodes: [
    {name: "Table", block: true},
    {name: "TableHeader", style: {"TableHeader/...": t.heading}},
    "TableRow",
    {name: "TableCell", style: t.content},
    {name: "TableDelimiter", style: t.processingInstruction},
  ],
  parseBlock: [{
    name: "Table",
    leaf(_, leaf) { return hasPipe(leaf.content, 0) ?
      new TableParser : null },
    endLeaf(cx, line, leaf) {
      if (leaf.parsers.some(p => p instanceof TableParser) || !hasPipe(line.text, line.basePos))
        return false
      let next = cx.scanLine(cx.absoluteLineEnd + 1).text
      return delimiterLine.test(next) && parseRow(cx, line.text, line.basePos) == parseRow(cx, next, line.basePos)
    },
    before: "SetextHeading"
  }]
}

class TaskParser implements LeafBlockParser {
  nextLine() { return false }

  finish(cx: BlockContext, leaf: LeafBlock) {
    cx.addLeafElement(leaf, cx.elt("Task", leaf.start, leaf.start + leaf.content.length, [
      cx.elt("TaskMarker", leaf.start, leaf.start + 3),
      ...cx.parser.parseInline(leaf.content.slice(3), leaf.start + 3)
    ]))
    return true
  }
}

/// Extension providing
/// [GFM-style](https://github.github.com/gfm/#task-list-items-extension-)
/// task list items, where list items can be prefixed with `[ ]` or
/// `[x]` to add a checkbox.
export const TaskList: MarkdownConfig = {
  defineNodes: [
    {name: "Task", block: true, style: t.list},
    {name: "TaskMarker", style: t.atom}
  ],
  parseBlock: [{
    name: "TaskList",
    leaf(cx, leaf) { return /^\[[ xX]\][ \t]/.test(leaf.content) && cx.parentType().name == "ListItem" ?
      new TaskParser : null },
    after: "SetextHeading"
  }]
}

const autolinkRE = /(www\.)|(https?:\/\/)|([\w.+-]+@)|(mailto:|xmpp:)/gy
const urlRE = /[\w-]+(\.[\w-]+)+(\/[^\s<]*)?/gy
const lastTwoDomainWords = /[\w-]+\.[\w-]+($|\/)/
const emailRE = /[\w.+-]+@[\w-]+(\.[\w.-]+)+/gy
const xmppResourceRE = /\/[a-zA-Z\d@.]+/gy

function count(str: string, from: number, to: number, ch: string) {
  let result = 0
  for (let i = from; i < to; i++) if (str[i] == ch) result++
  return result
}

function autolinkURLEnd(text: string, from: number) {
  urlRE.lastIndex = from
  let m = urlRE.exec(text)
  if (!m || lastTwoDomainWords.exec(m[0])![0].indexOf("_") > -1) return -1
  let end = from + m[0].length
  for (;;) {
    let last = text[end - 1], m
    if (/[?!.,:*_~]/.test(last) ||
        last == ")" && count(text, from, end, ")") > count(text, from, end, "("))
      end--
    else if (last == ";" && (m = /&(?:#\d+|#x[a-f\d]+|\w+);$/.exec(text.slice(from, end))))
      end = from + m.index
    else
      break
  }
  return end
}

function autolinkEmailEnd(text: string, from: number) {
  emailRE.lastIndex = from
  let m = emailRE.exec(text)
  if (!m) return -1
  let last = m[0][m[0].length - 1]
  return last == "_" || last == "-" ? -1 : from + m[0].length - (last == "." ? 1 : 0)
}

/// Extension that implements autolinking for
/// `www.`/`http://`/`https://`/`mailto:`/`xmpp:` URLs and email
/// addresses.
export const Autolink: MarkdownConfig = { parseInline: [{ name: "Autolink", parse(cx, next, absPos) { let pos = absPos - cx.offset autolinkRE.lastIndex = pos let m = autolinkRE.exec(cx.text), end = -1 if (!m) return -1 if (m[1] || m[2]) { // www., http:// end = autolinkURLEnd(cx.text, pos + m[0].length) } else if (m[3]) { // email address end = autolinkEmailEnd(cx.text, pos) } else { // mailto:/xmpp: end = autolinkEmailEnd(cx.text, pos + m[0].length) if (end > -1 && m[0] == "xmpp:") { xmppResourceRE.lastIndex = end m = xmppResourceRE.exec(cx.text) if (m) end = m.index + m[0].length } } if (end < 0) return -1 cx.addElement(cx.elt("URL", absPos, end + cx.offset)) return end + cx.offset } }] } /// Extension bundle containing [`Table`](#Table), /// [`TaskList`](#TaskList), [`Strikethrough`](#Strikethrough), and /// [`Autolink`](#Autolink). export const GFM = [Table, TaskList, Strikethrough, Autolink] function parseSubSuper(ch: number, node: string, mark: string) { return (cx: InlineContext, next: number, pos: number) => { if (next != ch || cx.char(pos + 1) == ch) return -1 let elts = [cx.elt(mark, pos, pos + 1)] for (let i = pos + 1; i < cx.end; i++) { let next = cx.char(i) if (next == ch) return cx.addElement(cx.elt(node, pos, i + 1, elts.concat(cx.elt(mark, i, i + 1)))) if (next == 92 /* '\\' */) elts.push(cx.elt("Escape", i, i++ + 2)) if (space(next)) break } return -1 } } /// Extension providing /// [Pandoc-style](https://pandoc.org/MANUAL.html#superscripts-and-subscripts) /// superscript using `^` markers. export const Superscript: MarkdownConfig = { defineNodes: [ {name: "Superscript", style: t.special(t.content)}, {name: "SuperscriptMark", style: t.processingInstruction} ], parseInline: [{ name: "Superscript", parse: parseSubSuper(94 /* '^' */, "Superscript", "SuperscriptMark") }] } /// Extension providing /// [Pandoc-style](https://pandoc.org/MANUAL.html#superscripts-and-subscripts) /// subscript using `~` markers. 
export const Subscript: MarkdownConfig = { defineNodes: [ {name: "Subscript", style: t.special(t.content)}, {name: "SubscriptMark", style: t.processingInstruction} ], parseInline: [{ name: "Subscript", parse: parseSubSuper(126 /* '~' */, "Subscript", "SubscriptMark") }] } /// Extension that parses two colons with only letters, underscores, /// and numbers between them as `Emoji` nodes. export const Emoji: MarkdownConfig = { defineNodes: [{name: "Emoji", style: t.character}], parseInline: [{ name: "Emoji", parse(cx, next, pos) { let match: RegExpMatchArray | null if (next != 58 /* ':' */ || !(match = /^[a-zA-Z_0-9]+:/.exec(cx.slice(pos + 1, cx.end)))) return -1 return cx.addElement(cx.elt("Emoji", pos, pos + 1 + match[0].length)) } }] }
markdown-1.2.0/src/index.ts
export {parser, MarkdownParser, MarkdownConfig, MarkdownExtension, NodeSpec, InlineParser, BlockParser, LeafBlockParser, Line, Element, LeafBlock, DelimiterType, BlockContext, InlineContext} from "./markdown" export {parseCode} from "./nest" export {Table, TaskList, Strikethrough, Autolink, GFM, Subscript, Superscript, Emoji} from "./extension"
markdown-1.2.0/src/markdown.ts
import {Tree, TreeBuffer, NodeType, NodeProp, NodePropSource, TreeFragment, NodeSet, TreeCursor, Input, Parser, PartialParse, SyntaxNode, ParseWrapper} from "@lezer/common" import {styleTags, tags as t, Tag} from "@lezer/highlight" class CompositeBlock { static create(type: number, value: number, from: number, parentHash: number, end: number) { let hash = (parentHash + (parentHash << 8) + type + (value << 4)) | 0 return new CompositeBlock(type, value, from, hash, end, [], []) } /// @internal hashProp: [NodeProp, any][] constructor(readonly type: number, // Used for indentation in list items, markup character in lists readonly value: number, readonly from: number, readonly hash:
number, public end: number, readonly children: (Tree | TreeBuffer)[], readonly positions: number[]) { this.hashProp = [[NodeProp.contextHash, hash]] } addChild(child: Tree, pos: number) { if (child.prop(NodeProp.contextHash) != this.hash) child = new Tree(child.type, child.children, child.positions, child.length, this.hashProp) this.children.push(child) this.positions.push(pos) } toTree(nodeSet: NodeSet, end = this.end) { let last = this.children.length - 1 if (last >= 0) end = Math.max(end, this.positions[last] + this.children[last].length + this.from) return new Tree(nodeSet.types[this.type], this.children, this.positions, end - this.from).balance({ makeTree: (children, positions, length) => new Tree(NodeType.none, children, positions, length, this.hashProp) }) } } export enum Type { Document = 1, CodeBlock, FencedCode, Blockquote, HorizontalRule, BulletList, OrderedList, ListItem, ATXHeading1, ATXHeading2, ATXHeading3, ATXHeading4, ATXHeading5, ATXHeading6, SetextHeading1, SetextHeading2, HTMLBlock, LinkReference, Paragraph, CommentBlock, ProcessingInstructionBlock, // Inline Escape, Entity, HardBreak, Emphasis, StrongEmphasis, Link, Image, InlineCode, HTMLTag, Comment, ProcessingInstruction, Autolink, // Smaller tokens HeaderMark, QuoteMark, ListMark, LinkMark, EmphasisMark, CodeMark, CodeText, CodeInfo, LinkTitle, LinkLabel, URL } /// Data structure used to accumulate a block's content during [leaf /// block parsing](#BlockParser.leaf). export class LeafBlock { /// @internal marks: Element[] = [] /// The block parsers active for this block. parsers: LeafBlockParser[] = [] /// @internal constructor( /// The start position of the block. readonly start: number, /// The block's text content. public content: string ) {} } /// Data structure used during block-level per-line parsing. export class Line { /// The line's full text. text = "" /// The base indent provided by the composite contexts (that have /// been handled so far). 
baseIndent = 0 /// The string position corresponding to the base indent. basePos = 0 /// The number of contexts handled @internal depth = 0 /// Any markers (i.e. block quote markers) parsed for the contexts. @internal markers: Element[] = [] /// The position of the next non-whitespace character beyond any /// list, blockquote, or other composite block markers. pos = 0 /// The column of the next non-whitespace character. indent = 0 /// The character code of the character after `pos`. next = -1 /// @internal forward() { if (this.basePos > this.pos) this.forwardInner() } /// @internal forwardInner() { let newPos = this.skipSpace(this.basePos) this.indent = this.countIndent(newPos, this.pos, this.indent) this.pos = newPos this.next = newPos == this.text.length ? -1 : this.text.charCodeAt(newPos) } /// Skip whitespace after the given position, return the position of /// the next non-space character or the end of the line if there's /// only space after `from`. skipSpace(from: number) { return skipSpace(this.text, from) } /// @internal reset(text: string) { this.text = text this.baseIndent = this.basePos = this.pos = this.indent = 0 this.forwardInner() this.depth = 1 while (this.markers.length) this.markers.pop() } /// Move the line's base position forward to the given position. /// This should only be called by composite [block /// parsers](#BlockParser.parse) or [markup skipping /// functions](#NodeSpec.composite). moveBase(to: number) { this.basePos = to this.baseIndent = this.countIndent(to, this.pos, this.indent) } /// Move the line's base position forward to the given _column_. moveBaseColumn(indent: number) { this.baseIndent = indent this.basePos = this.findColumn(indent) } /// Store a composite-block-level marker. Should be called from /// [markup skipping functions](#NodeSpec.composite) when they /// consume any non-whitespace characters. 
addMarker(elt: Element) { this.markers.push(elt) } /// Find the column position at `to`, optionally starting at a given /// position and column. countIndent(to: number, from = 0, indent = 0) { for (let i = from; i < to; i++) indent += this.text.charCodeAt(i) == 9 ? 4 - indent % 4 : 1 return indent } /// Find the position corresponding to the given column. findColumn(goal: number) { let i = 0 for (let indent = 0; i < this.text.length && indent < goal; i++) indent += this.text.charCodeAt(i) == 9 ? 4 - indent % 4 : 1 return i } /// @internal scrub() { if (!this.baseIndent) return this.text let result = "" for (let i = 0; i < this.basePos; i++) result += " " return result + this.text.slice(this.basePos) } } function skipForList(bl: CompositeBlock, cx: BlockContext, line: Line) { if (line.pos == line.text.length || (bl != cx.block && line.indent >= cx.stack[line.depth + 1].value + line.baseIndent)) return true if (line.indent >= line.baseIndent + 4) return false let size = (bl.type == Type.OrderedList ? isOrderedList : isBulletList)(line, cx, false) return size > 0 && (bl.type != Type.BulletList || isHorizontalRule(line, cx, false) < 0) && line.text.charCodeAt(line.pos + size - 1) == bl.value } const DefaultSkipMarkup: {[type: number]: (bl: CompositeBlock, cx: BlockContext, line: Line) => boolean} = { [Type.Blockquote](bl, cx, line) { if (line.next != 62 /* '>' */) return false line.markers.push(elt(Type.QuoteMark, cx.lineStart + line.pos, cx.lineStart + line.pos + 1)) line.moveBase(line.pos + (space(line.text.charCodeAt(line.pos + 1)) ? 
2 : 1)) bl.end = cx.lineStart + line.text.length return true }, [Type.ListItem](bl, _cx, line) { if (line.indent < line.baseIndent + bl.value && line.next > -1) return false line.moveBaseColumn(line.baseIndent + bl.value) return true }, [Type.OrderedList]: skipForList, [Type.BulletList]: skipForList, [Type.Document]() { return true } } export function space(ch: number) { return ch == 32 || ch == 9 || ch == 10 || ch == 13 } function skipSpace(line: string, i = 0) { while (i < line.length && space(line.charCodeAt(i))) i++ return i } function skipSpaceBack(line: string, i: number, to: number) { while (i > to && space(line.charCodeAt(i - 1))) i-- return i } function isFencedCode(line: Line) { if (line.next != 96 && line.next != 126 /* '`~' */) return -1 let pos = line.pos + 1 while (pos < line.text.length && line.text.charCodeAt(pos) == line.next) pos++ if (pos < line.pos + 3) return -1 if (line.next == 96) for (let i = pos; i < line.text.length; i++) if (line.text.charCodeAt(i) == 96) return -1 return pos } function isBlockquote(line: Line) { return line.next != 62 /* '>' */ ? -1 : line.text.charCodeAt(line.pos + 1) == 32 ? 2 : 1 } function isHorizontalRule(line: Line, cx: BlockContext, breaking: boolean) { if (line.next != 42 && line.next != 45 && line.next != 95 /* '_-*' */) return -1 let count = 1 for (let pos = line.pos + 1; pos < line.text.length; pos++) { let ch = line.text.charCodeAt(pos) if (ch == line.next) count++ else if (!space(ch)) return -1 } // Setext headers take precedence if (breaking && line.next == 45 && isSetextUnderline(line) > -1 && line.depth == cx.stack.length) return -1 return count < 3 ? 
-1 : 1 } function inList(cx: BlockContext, type: Type) { for (let i = cx.stack.length - 1; i >= 0; i--) if (cx.stack[i].type == type) return true return false } function isBulletList(line: Line, cx: BlockContext, breaking: boolean) { return (line.next == 45 || line.next == 43 || line.next == 42 /* '-+*' */) && (line.pos == line.text.length - 1 || space(line.text.charCodeAt(line.pos + 1))) && (!breaking || inList(cx, Type.BulletList) || line.skipSpace(line.pos + 2) < line.text.length) ? 1 : -1 } function isOrderedList(line: Line, cx: BlockContext, breaking: boolean) { let pos = line.pos, next = line.next for (;;) { if (next >= 48 && next <= 57 /* '0-9' */) pos++ else break if (pos == line.text.length) return -1 next = line.text.charCodeAt(pos) } if (pos == line.pos || pos > line.pos + 9 || (next != 46 && next != 41 /* '.)' */) || (pos < line.text.length - 1 && !space(line.text.charCodeAt(pos + 1))) || breaking && !inList(cx, Type.OrderedList) && (line.skipSpace(pos + 1) == line.text.length || pos > line.pos + 1 || line.next != 49 /* '1' */)) return -1 return pos + 1 - line.pos } function isAtxHeading(line: Line) { if (line.next != 35 /* '#' */) return -1 let pos = line.pos + 1 while (pos < line.text.length && line.text.charCodeAt(pos) == 35) pos++ if (pos < line.text.length && line.text.charCodeAt(pos) != 32) return -1 let size = pos - line.pos return size > 6 ? -1 : size } function isSetextUnderline(line: Line) { if (line.next != 45 && line.next != 61 /* '-=' */ || line.indent >= line.baseIndent + 4) return -1 let pos = line.pos + 1 while (pos < line.text.length && line.text.charCodeAt(pos) == line.next) pos++ let end = pos while (pos < line.text.length && space(line.text.charCodeAt(pos))) pos++ return pos == line.text.length ? 
end : -1 } const EmptyLine = /^[ \t]*$/, CommentEnd = /-->/, ProcessingEnd = /\?>/ const HTMLBlockStyle = [ [/^<(?:script|pre|style)(?:\s|>|$)/i, /<\/(?:script|pre|style)>/i], [/^\s*/i.exec(after) if (comment) return cx.append(elt(Type.Comment, start, start + 1 + comment[0].length)) let procInst = /^\?[^]*?\?>/.exec(after) if (procInst) return cx.append(elt(Type.ProcessingInstruction, start, start + 1 + procInst[0].length)) let m = /^(?:![A-Z][^]*?>|!\[CDATA\[[^]*?\]\]>|\/\s*[a-zA-Z][\w-]*\s*>|\s*[a-zA-Z][\w-]*(\s+[a-zA-Z:_][\w-.:]*(?:\s*=\s*(?:[^\s"'=<>`]+|'[^']*'|"[^"]*"))?)*\s*(\/\s*)?>)/.exec(after) if (!m) return -1 return cx.append(elt(Type.HTMLTag, start, start + 1 + m[0].length)) }, Emphasis(cx, next, start) { if (next != 95 && next != 42) return -1 let pos = start + 1 while (cx.char(pos) == next) pos++ let before = cx.slice(start - 1, start), after = cx.slice(pos, pos + 1) let pBefore = Punctuation.test(before), pAfter = Punctuation.test(after) let sBefore = /\s|^$/.test(before), sAfter = /\s|^$/.test(after) let leftFlanking = !sAfter && (!pAfter || sBefore || pBefore) let rightFlanking = !sBefore && (!pBefore || sAfter || pAfter) let canOpen = leftFlanking && (next == 42 || !rightFlanking || pBefore) let canClose = rightFlanking && (next == 42 || !leftFlanking || pAfter) return cx.append(new InlineDelimiter(next == 95 ? EmphasisUnderscore : EmphasisAsterisk, start, pos, (canOpen ? Mark.Open : Mark.None) | (canClose ? Mark.Close : Mark.None))) }, HardBreak(cx, next, start) { if (next == 92 /* '\\' */ && cx.char(start + 1) == 10 /* '\n' */) return cx.append(elt(Type.HardBreak, start, start + 2)) if (next == 32) { let pos = start + 1 while (cx.char(pos) == 32) pos++ if (cx.char(pos) == 10 && pos >= start + 2) return cx.append(elt(Type.HardBreak, start, pos + 1)) } return -1 }, Link(cx, next, start) { return next == 91 /* '[' */ ? cx.append(new InlineDelimiter(LinkStart, start, start + 1, Mark.Open)) : -1 }, Image(cx, next, start) { return next == 33 /* '!' 
*/ && cx.char(start + 1) == 91 /* '[' */ ? cx.append(new InlineDelimiter(ImageStart, start, start + 2, Mark.Open)) : -1 }, LinkEnd(cx, next, start) { if (next != 93 /* ']' */) return -1 // Scanning back to the next link/image start marker for (let i = cx.parts.length - 1; i >= 0; i--) { let part = cx.parts[i] if (part instanceof InlineDelimiter && (part.type == LinkStart || part.type == ImageStart)) { // If this one has been set invalid (because it would produce // a nested link) or there's no valid link here ignore both. if (!part.side || cx.skipSpace(part.to) == start && !/[(\[]/.test(cx.slice(start + 1, start + 2))) { cx.parts[i] = null return -1 } // Finish the content and replace the entire range in // this.parts with the link/image node. let content = cx.takeContent(i) let link = cx.parts[i] = finishLink(cx, content, part.type == LinkStart ? Type.Link : Type.Image, part.from, start + 1) // Set any open-link markers before this link to invalid. if (part.type == LinkStart) for (let j = 0; j < i; j++) { let p = cx.parts[j] if (p instanceof InlineDelimiter && p.type == LinkStart) p.side = Mark.None } return link.to } } return -1 } } function finishLink(cx: InlineContext, content: Element[], type: Type, start: number, startPos: number) { let {text} = cx, next = cx.char(startPos), endPos = startPos content.unshift(elt(Type.LinkMark, start, start + (type == Type.Image ? 
2 : 1))) content.push(elt(Type.LinkMark, startPos - 1, startPos)) if (next == 40 /* '(' */) { let pos = cx.skipSpace(startPos + 1) let dest = parseURL(text, pos - cx.offset, cx.offset), title if (dest) { pos = cx.skipSpace(dest.to) // The destination and title must be separated by whitespace if (pos != dest.to) { title = parseLinkTitle(text, pos - cx.offset, cx.offset) if (title) pos = cx.skipSpace(title.to) } } if (cx.char(pos) == 41 /* ')' */) { content.push(elt(Type.LinkMark, startPos, startPos + 1)) endPos = pos + 1 if (dest) content.push(dest) if (title) content.push(title) content.push(elt(Type.LinkMark, pos, endPos)) } } else if (next == 91 /* '[' */) { let label = parseLinkLabel(text, startPos - cx.offset, cx.offset, false) if (label) { content.push(label) endPos = label.to } } return elt(type, start, endPos, content) } // These return `null` when falling off the end of the input, `false` // when parsing fails otherwise (for use in the incremental link // reference parser). function parseURL(text: string, start: number, offset: number): null | false | Element { let next = text.charCodeAt(start) if (next == 60 /* '<' */) { for (let pos = start + 1; pos < text.length; pos++) { let ch = text.charCodeAt(pos) if (ch == 62 /* '>' */) return elt(Type.URL, start + offset, pos + 1 + offset) if (ch == 60 || ch == 10 /* '<\n' */) return false } return null } else { let depth = 0, pos = start for (let escaped = false; pos < text.length; pos++) { let ch = text.charCodeAt(pos) if (space(ch)) { break } else if (escaped) { escaped = false } else if (ch == 40 /* '(' */) { depth++ } else if (ch == 41 /* ')' */) { if (!depth) break depth-- } else if (ch == 92 /* '\\' */) { escaped = true } } return pos > start ? elt(Type.URL, start + offset, pos + offset) : pos == text.length ? 
null : false } } function parseLinkTitle(text: string, start: number, offset: number): null | false | Element { let next = text.charCodeAt(start) if (next != 39 && next != 34 && next != 40 /* '"\'(' */) return false let end = next == 40 ? 41 : next for (let pos = start + 1, escaped = false; pos < text.length; pos++) { let ch = text.charCodeAt(pos) if (escaped) escaped = false else if (ch == end) return elt(Type.LinkTitle, start + offset, pos + 1 + offset) else if (ch == 92 /* '\\' */) escaped = true } return null } function parseLinkLabel(text: string, start: number, offset: number, requireNonWS: boolean): null | false | Element { for (let escaped = false, pos = start + 1, end = Math.min(text.length, pos + 999); pos < end; pos++) { let ch = text.charCodeAt(pos) if (escaped) escaped = false else if (ch == 93 /* ']' */) return requireNonWS ? false : elt(Type.LinkLabel, start + offset, pos + 1 + offset) else { if (requireNonWS && !space(ch)) requireNonWS = false if (ch == 91 /* '[' */) return false else if (ch == 92 /* '\\' */) escaped = true } } return null } /// Inline parsing functions get access to this context, and use it to /// read the content and emit syntax nodes. export class InlineContext { /// @internal parts: (Element | InlineDelimiter | null)[] = [] /// @internal constructor( /// The parser that is being used. readonly parser: MarkdownParser, /// The text of this inline section. readonly text: string, /// The starting offset of the section in the document. readonly offset: number ) {} /// Get the character code at the given (document-relative) /// position. char(pos: number) { return pos >= this.end ? -1 : this.text.charCodeAt(pos - this.offset) } /// The position of the end of this inline section. get end() { return this.offset + this.text.length } /// Get a substring of this inline section. Again uses /// document-relative positions. 
slice(from: number, to: number) { return this.text.slice(from - this.offset, to - this.offset) } /// @internal append(elt: Element | InlineDelimiter) { this.parts.push(elt) return elt.to } /// Add a [delimiter](#DelimiterType) at this given position. `open` /// and `close` indicate whether this delimiter is opening, closing, /// or both. Returns the end of the delimiter, for convenient /// returning from [parse functions](#InlineParser.parse). addDelimiter(type: DelimiterType, from: number, to: number, open: boolean, close: boolean) { return this.append(new InlineDelimiter(type, from, to, (open ? Mark.Open : Mark.None) | (close ? Mark.Close : Mark.None))) } /// Add an inline element. Returns the end of the element. addElement(elt: Element) { return this.append(elt) } /// Resolve markers between this.parts.length and from, wrapping matched markers in the /// appropriate node and updating the content of this.parts. @internal resolveMarkers(from: number) { // Scan forward, looking for closing tokens for (let i = from; i < this.parts.length; i++) { let close = this.parts[i] if (!(close instanceof InlineDelimiter && close.type.resolve && (close.side & Mark.Close))) continue let emp = close.type == EmphasisUnderscore || close.type == EmphasisAsterisk let closeSize = close.to - close.from let open: InlineDelimiter | undefined, j = i - 1 // Continue scanning for a matching opening token for (; j >= from; j--) { let part = this.parts[j] if (part instanceof InlineDelimiter && (part.side & Mark.Open) && part.type == close.type && // Ignore emphasis delimiters where the character count doesn't match !(emp && ((close.side & Mark.Open) || (part.side & Mark.Close)) && (part.to - part.from + closeSize) % 3 == 0 && ((part.to - part.from) % 3 || closeSize % 3))) { open = part break } } if (!open) continue let type = close.type.resolve, content = [] let start = open.from, end = close.to // Emphasis marker effect depends on the character count. 
Size consumed is minimum of the two // markers. if (emp) { let size = Math.min(2, open.to - open.from, closeSize) start = open.to - size end = close.from + size type = size == 1 ? "Emphasis" : "StrongEmphasis" } // Move the covered region into content, optionally adding marker nodes if (open.type.mark) content.push(this.elt(open.type.mark, start, open.to)) for (let k = j + 1; k < i; k++) { if (this.parts[k] instanceof Element) content.push(this.parts[k] as Element) this.parts[k] = null } if (close.type.mark) content.push(this.elt(close.type.mark, close.from, end)) let element = this.elt(type, start, end, content) // If there are leftover emphasis marker characters, shrink the close/open markers. Otherwise, clear them. this.parts[j] = emp && open.from != start ? new InlineDelimiter(open.type, open.from, start, open.side) : null let keep = this.parts[i] = emp && close.to != end ? new InlineDelimiter(close.type, end, close.to, close.side) : null // Insert the new element in this.parts if (keep) this.parts.splice(i, 0, element) else this.parts[i] = element } // Collect the elements remaining in this.parts into an array. let result = [] for (let i = from; i < this.parts.length; i++) { let part = this.parts[i] if (part instanceof Element) result.push(part) } return result } /// Find an opening delimiter of the given type. Returns `null` if /// no delimiter is found, or an index that can be passed to /// [`takeContent`](#InlineContext.takeContent) otherwise. findOpeningDelimiter(type: DelimiterType) { for (let i = this.parts.length - 1; i >= 0; i--) { let part = this.parts[i] if (part instanceof InlineDelimiter && part.type == type) return i } return null } /// Remove all inline elements and delimiters starting from the /// given index (which you should get from /// [`findOpeningDelimiter`](#InlineContext.findOpeningDelimiter), /// resolve delimiters inside of them, and return them as an array /// of elements. 
takeContent(startIndex: number) { let content = this.resolveMarkers(startIndex) this.parts.length = startIndex return content } /// Skip space after the given (document) position, returning either /// the position of the next non-space character or the end of the /// section. skipSpace(from: number) { return skipSpace(this.text, from - this.offset) + this.offset } /// Create an [`Element`](#Element) for a syntax node. elt(type: string, from: number, to: number, children?: readonly Element[]): Element elt(tree: Tree, at: number): Element elt(type: string | Tree, from: number, to?: number, children?: readonly Element[]): Element { if (typeof type == "string") return elt(this.parser.getNodeType(type), from, to!, children) return new TreeElement(type, from) } } function injectMarks(elements: readonly (Element | TreeElement)[], marks: Element[]) { if (!marks.length) return elements if (!elements.length) return marks let elts = elements.slice(), eI = 0 for (let mark of marks) { while (eI < elts.length && elts[eI].to < mark.to) eI++ if (eI < elts.length && elts[eI].from < mark.from) { let e = elts[eI] if (e instanceof Element) elts[eI] = new Element(e.type, e.from, e.to, injectMarks(e.children, [mark])) } else { elts.splice(eI++, 0, mark) } } return elts } // These are blocks that can span blank lines, and should thus only be // reused if their next sibling is also being reused. const NotLast = [Type.CodeBlock, Type.ListItem, Type.OrderedList, Type.BulletList] class FragmentCursor { // Index into fragment array i = 0 // Active fragment fragment: TreeFragment | null = null fragmentEnd = -1 // Cursor into the current fragment, if any. When `moveTo` returns // true, this points at the first block after `pos`. cursor: TreeCursor | null = null constructor(readonly fragments: readonly TreeFragment[], readonly input: Input) { if (fragments.length) this.fragment = fragments[this.i++] } nextFragment() { this.fragment = this.i < this.fragments.length ? 
this.fragments[this.i++] : null this.cursor = null this.fragmentEnd = -1 } moveTo(pos: number, lineStart: number) { while (this.fragment && this.fragment.to <= pos) this.nextFragment() if (!this.fragment || this.fragment.from > (pos ? pos - 1 : 0)) return false if (this.fragmentEnd < 0) { let end = this.fragment.to while (end > 0 && this.input.read(end - 1, end) != "\n") end-- this.fragmentEnd = end ? end - 1 : 0 } let c = this.cursor if (!c) { c = this.cursor = this.fragment.tree.cursor() c.firstChild() } let rPos = pos + this.fragment.offset while (c.to <= rPos) if (!c.parent()) return false for (;;) { if (c.from >= rPos) return this.fragment.from <= lineStart if (!c.childAfter(rPos)) return false } } matches(hash: number) { let tree = this.cursor!.tree return tree && tree.prop(NodeProp.contextHash) == hash } takeNodes(cx: BlockContext) { let cur = this.cursor!, off = this.fragment!.offset, fragEnd = this.fragmentEnd - (this.fragment!.openEnd ? 1 : 0) let start = cx.absoluteLineStart, end = start, blockI = cx.block.children.length let prevEnd = end, prevI = blockI for (;;) { if (cur.to - off > fragEnd) { if (cur.type.isAnonymous && cur.firstChild()) continue break } let pos = toRelative(cur.from - off, cx.ranges) if (cur.to - off <= cx.ranges[cx.rangeI].to) { // Fits in current range cx.addNode(cur.tree!, pos) } else { let dummy = new Tree(cx.parser.nodeSet.types[Type.Paragraph], [], [], 0, cx.block.hashProp) cx.reusePlaceholders.set(dummy, cur.tree!) cx.addNode(dummy, pos) } // Taken content must always end in a block, because incremental // parsing happens on block boundaries. Never stop directly // after an indented code block, since those can continue after // any number of blank lines. 
if (cur.type.is("Block")) { if (NotLast.indexOf(cur.type.id) < 0) { end = cur.to - off blockI = cx.block.children.length } else { end = prevEnd blockI = prevI prevEnd = cur.to - off prevI = cx.block.children.length } } if (!cur.nextSibling()) break } while (cx.block.children.length > blockI) { cx.block.children.pop() cx.block.positions.pop() } return end - start } } // Convert an input-stream-relative position to a // Markdown-doc-relative position by subtracting the size of all input // gaps before `abs`. function toRelative(abs: number, ranges: readonly {from: number, to: number}[]) { let pos = abs for (let i = 1; i < ranges.length; i++) { let gapFrom = ranges[i - 1].to, gapTo = ranges[i].from if (gapFrom < abs) pos -= gapTo - gapFrom } return pos } const markdownHighlighting = styleTags({ "Blockquote/...": t.quote, HorizontalRule: t.contentSeparator, "ATXHeading1/... SetextHeading1/...": t.heading1, "ATXHeading2/... SetextHeading2/...": t.heading2, "ATXHeading3/...": t.heading3, "ATXHeading4/...": t.heading4, "ATXHeading5/...": t.heading5, "ATXHeading6/...": t.heading6, "Comment CommentBlock": t.comment, Escape: t.escape, Entity: t.character, "Emphasis/...": t.emphasis, "StrongEmphasis/...": t.strong, "Link/... Image/...": t.link, "OrderedList/... BulletList/...": t.list, "BlockQuote/...": t.quote, "InlineCode CodeText": t.monospace, "URL Autolink": t.url, "HeaderMark HardBreak QuoteMark ListMark LinkMark EmphasisMark CodeMark": t.processingInstruction, "CodeInfo LinkLabel": t.labelName, LinkTitle: t.string, Paragraph: t.content }) /// The default CommonMark parser. 
export const parser = new MarkdownParser( new NodeSet(nodeTypes).extend(markdownHighlighting), Object.keys(DefaultBlockParsers).map(n => DefaultBlockParsers[n]), Object.keys(DefaultBlockParsers).map(n => DefaultLeafBlocks[n]), Object.keys(DefaultBlockParsers), DefaultEndLeaf, DefaultSkipMarkup, Object.keys(DefaultInline).map(n => DefaultInline[n]), Object.keys(DefaultInline), [] )
markdown-1.2.0/src/nest.ts
import {SyntaxNode, Parser, Input, parseMixed, SyntaxNodeRef} from "@lezer/common" import {Type, MarkdownExtension} from "./markdown" function leftOverSpace(node: SyntaxNode, from: number, to: number) { let ranges = [] for (let n = node.firstChild, pos = from;; n = n.nextSibling) { let nextPos = n ? n.from : to if (nextPos > pos) ranges.push({from: pos, to: nextPos}) if (!n) break pos = n.to } return ranges } /// Create a Markdown extension to enable nested parsing on code /// blocks and/or embedded HTML. export function parseCode(config: { /// When provided, this will be used to parse the content of code /// blocks. `info` is the string after the opening ` ``` ` marker, /// or the empty string if there is no such info or this is an /// indented code block. If there is a parser available for the /// code, it should return a function that can construct the /// [parse](https://lezer.codemirror.net/docs/ref/#common.PartialParse).
codeParser?: (info: string) => null | Parser /// The parser used to parse HTML tags (both block and inline).
htmlParser?: Parser, }): MarkdownExtension { let {codeParser, htmlParser} = config let wrap = parseMixed((node: SyntaxNodeRef, input: Input) => { let id = node.type.id if (codeParser && (id == Type.CodeBlock || id == Type.FencedCode)) { let info = "" if (id == Type.FencedCode) { let infoNode = node.node.getChild(Type.CodeInfo) if (infoNode) info = input.read(infoNode.from, infoNode.to) } let parser = codeParser(info) if (parser) return {parser, overlay: node => node.type.id == Type.CodeText} } else if (htmlParser && (id == Type.HTMLBlock || id == Type.HTMLTag)) { return {parser: htmlParser, overlay: leftOverSpace(node.node, node.from, node.to)} } return null }) return {wrap} }
markdown-1.2.0/test/
markdown-1.2.0/test/compare-tree.ts
import {Tree} from "@lezer/common" export function compareTree(a: Tree, b: Tree) { let curA = a.cursor(), curB = b.cursor() for (;;) { let mismatch = null, next = false if (curA.type != curB.type) mismatch = `Node type mismatch (${curA.name} vs ${curB.name})` else if (curA.from != curB.from) mismatch = `Start pos mismatch for ${curA.name}: ${curA.from} vs ${curB.from}` else if (curA.to != curB.to) mismatch = `End pos mismatch for ${curA.name}: ${curA.to} vs ${curB.to}` else if ((next = curA.next()) != curB.next()) mismatch = `Tree size mismatch` if (mismatch) throw new Error(`${mismatch}\n  ${a}\n  ${b}`) if (!next) break } }
markdown-1.2.0/test/spec.ts
import {Tree} from "@lezer/common" import {MarkdownParser} from ".."
const abbrev: {[abbr: string]: string} = { __proto__: null as any, CB: "CodeBlock", FC: "FencedCode", Q: "Blockquote", HR: "HorizontalRule", BL: "BulletList", OL: "OrderedList", LI: "ListItem", H1: "ATXHeading1", H2: "ATXHeading2", H3: "ATXHeading3", H4: "ATXHeading4", H5: "ATXHeading5", H6: "ATXHeading6", SH1: "SetextHeading1", SH2: "SetextHeading2", HB: "HTMLBlock", PI: "ProcessingInstructionBlock", CMB: "CommentBlock", LR: "LinkReference", P: "Paragraph", Esc: "Escape", Ent: "Entity", BR: "HardBreak", Em: "Emphasis", St: "StrongEmphasis", Ln: "Link", Al: "Autolink", Im: "Image", C: "InlineCode", HT: "HTMLTag", CM: "Comment", Pi: "ProcessingInstruction", h: "HeaderMark", q: "QuoteMark", l: "ListMark", L: "LinkMark", e: "EmphasisMark", c: "CodeMark", cI: "CodeInfo", cT: "CodeText", LT: "LinkTitle", LL: "LinkLabel" } export class SpecParser { constructor(readonly parser: MarkdownParser, readonly localAbbrev?: {[name: string]: string}) {} type(name: string) { name = (this.localAbbrev && this.localAbbrev[name]) || abbrev[name] || name return this.parser.nodeSet.types.find(t => t.name == name)?.id } parse(spec: string, specName: string) { let doc = "", buffer = [], stack: number[] = [] for (let pos = 0; pos < spec.length; pos++) { let ch = spec[pos] if (ch == "{") { let name = /^(\w+):/.exec(spec.slice(pos + 1)), tag = name && this.type(name[1]) if (tag == null) throw new Error(`Invalid node opening mark at ${pos} in ${specName}`) pos += name![0].length stack.push(tag, doc.length, buffer.length) } else if (ch == "}") { if (!stack.length) throw new Error(`Mismatched node close mark at ${pos} in ${specName}`) let bufStart = stack.pop()!, from = stack.pop()!, type = stack.pop()! 
buffer.push(type, from, doc.length, 4 + buffer.length - bufStart) } else { doc += ch } } if (stack.length) throw new Error(`Unclosed node in ${specName}`) return {tree: Tree.build({buffer, nodeSet: this.parser.nodeSet, topID: this.type("Document"), length: doc.length}), doc} } } markdown-1.2.0/test/test-extension.ts000066400000000000000000000147171454233753200176760ustar00rootroot00000000000000import {parser as cmParser, GFM, Subscript, Superscript, Emoji} from "../dist/index.js" import {compareTree} from "./compare-tree.js" import {SpecParser} from "./spec.js" const parser = cmParser.configure([GFM, Subscript, Superscript, Emoji]) const specParser = new SpecParser(parser, { __proto__: null as any, Th: "Strikethrough", tm: "StrikethroughMark", TB: "Table", TH: "TableHeader", TR: "TableRow", TC: "TableCell", tb: "TableDelimiter", T: "Task", t: "TaskMarker", Sub: "Subscript", sub: "SubscriptMark", Sup: "Superscript", sup: "SuperscriptMark", ji: "Emoji" }) function test(name: string, spec: string, p = parser) { it(name, () => { let {tree, doc} = specParser.parse(spec, name) compareTree(p.parse(doc), tree) }) } describe("Extension", () => { test("Tables (example 198)", ` {TB:{TH:{tb:|} {TC:foo} {tb:|} {TC:bar} {tb:|}} {tb:| --- | --- |} {TR:{tb:|} {TC:baz} {tb:|} {TC:bim} {tb:|}}}`) test("Tables (example 199)", ` {TB:{TH:{tb:|} {TC:abc} {tb:|} {TC:defghi} {tb:|}} {tb::-: | -----------:} {TR:{TC:bar} {tb:|} {TC:baz}}}`) test("Tables (example 200)", ` {TB:{TH:{tb:|} {TC:f{Esc:\\|}oo} {tb:|}} {tb:| ------ |} {TR:{tb:|} {TC:b {C:{c:\`}\\|{c:\`}} az} {tb:|}} {TR:{tb:|} {TC:b {St:{e:**}{Esc:\\|}{e:**}} im} {tb:|}}}`) test("Tables (example 201)", ` {TB:{TH:{tb:|} {TC:abc} {tb:|} {TC:def} {tb:|}} {tb:| --- | --- |} {TR:{tb:|} {TC:bar} {tb:|} {TC:baz} {tb:|}}} {Q:{q:>} {P:bar}}`) test("Tables (example 202)", ` {TB:{TH:{tb:|} {TC:abc} {tb:|} {TC:def} {tb:|}} {tb:| --- | --- |} {TR:{tb:|} {TC:bar} {tb:|} {TC:baz} {tb:|}} {TR:{TC:bar}}} {P:bar}`) test("Tables (example 203)", ` 
{P:| abc | def | | --- | | bar |}`) test("Tables (example 204)", ` {TB:{TH:{tb:|} {TC:abc} {tb:|} {TC:def} {tb:|}} {tb:| --- | --- |} {TR:{tb:|} {TC:bar} {tb:|}} {TR:{tb:|} {TC:bar} {tb:|} {TC:baz} {tb:|} {TC:boo} {tb:|}}}`) test("Tables (example 205)", ` {TB:{TH:{tb:|} {TC:abc} {tb:|} {TC:def} {tb:|}} {tb:| --- | --- |}}`) test("Tables (in blockquote)", ` {Q:{q:>} {TB:{TH:{tb:|} {TC:one} {tb:|} {TC:two} {tb:|}} {q:>} {tb:| --- | --- |} {q:>} {TR:{tb:|} {TC:123} {tb:|} {TC:456} {tb:|}}} {q:>} {q:>} {P:Okay}}`) test("Tables (empty header)", ` {TB:{TH:{tb:|} {tb:|} {tb:|}} {tb:| :-: | :-: |} {TR:{tb:|} {TC:One} {tb:|} {TC:Two} {tb:|}}}`) test("Tables (end paragraph)", ` {P:Hello} {TB:{TH:{tb:|} {TC:foo} {tb:|} {TC:bar} {tb:|}} {tb:| --- | --- |} {TR:{tb:|} {TC:baz} {tb:|} {TC:bim} {tb:|}}}`) test("Tables (invalid tables don't end paragraph)", ` {P:Hello | abc | def | | --- | | bar |}`) test("Task list (example 279)", ` {BL:{LI:{l:-} {T:{t:[ ]} foo}} {LI:{l:-} {T:{t:[x]} bar}}}`) test("Task list (example 280)", ` {BL:{LI:{l:-} {T:{t:[x]} foo} {BL:{LI:{l:-} {T:{t:[ ]} bar}} {LI:{l:-} {T:{t:[x]} baz}}}} {LI:{l:-} {T:{t:[ ]} bim}}}`) test("Autolink (example 622)", ` {P:{URL:www.commonmark.org}}`) test("Autolink (example 623)", ` {P:Visit {URL:www.commonmark.org/help} for more information.}`) test("Autolink (example 624)", ` {P:Visit {URL:www.commonmark.org}.} {P:Visit {URL:www.commonmark.org/a.b}.}`) test("Autolink (example 625)", ` {P:{URL:www.google.com/search?q=Markup+(business)}} {P:{URL:www.google.com/search?q=Markup+(business)}))} {P:({URL:www.google.com/search?q=Markup+(business)})} {P:({URL:www.google.com/search?q=Markup+(business)}}`) test("Autolink (example 626)", ` {P:{URL:www.google.com/search?q=(business))+ok}}`) test("Autolink (example 627)", ` {P:{URL:www.google.com/search?q=commonmark&hl=en}} {P:{URL:www.google.com/search?q=commonmark}{Entity:&hl;}}`) test("Autolink (example 628)", ` {P:{URL:www.commonmark.org/he} No quote, no ^sup^} {P:No setext either 
===}`, parser.configure({remove: ["Superscript", "Blockquote", "SetextHeading"]})) test("Autolink (.co.uk)", ` {P:{URL:www.blah.co.uk/path}}`) test("Autolink (email .co.uk)", ` {P:{URL:foo@bar.co.uk}}`) test("Autolink (http://www.foo-bar.com/)", ` {P:{URL:http://www.foo-bar.com/}}`) test("Autolink (exclude underscores)", ` {P:http://www.foo_/ http://foo_.com}`) }) markdown-1.2.0/test/test-incremental.ts000066400000000000000000000203001454233753200201440ustar00rootroot00000000000000import {Tree, TreeFragment} from "@lezer/common" import ist from "ist" import {parser} from "../dist/index.js" import {compareTree} from "./compare-tree.js" let doc1 = ` Header --- One **two** three *four* five. > Start of quote > > 1. Nested list > > 2. More content > inside the [list][link] > > Continued item > > ~~~ > Block of code > ~~~ > > 3. And so on [link]: /ref [another]: /one And a final paragraph. *** The end. ` type ChangeSpec = {from: number, to?: number, insert?: string}[] class State { constructor(readonly doc: string, readonly tree: Tree, readonly fragments: readonly TreeFragment[]) {} static start(doc: string) { let tree = parser.parse(doc) return new State(doc, tree, TreeFragment.addTree(tree)) } update(changes: ChangeSpec, reparse = true) { let changed = [], doc = this.doc, off = 0 for (let {from, to = from, insert = ""} of changes) { doc = doc.slice(0, from) + insert + doc.slice(to) changed.push({fromA: from - off, toA: to - off, fromB: from, toB: from + insert.length}) off += insert.length - (to - from) } let fragments = TreeFragment.applyChanges(this.fragments, changed, 2) if (!reparse) return new State(doc, Tree.empty, fragments) let tree = parser.parse(doc, fragments) return new State(doc, tree, TreeFragment.addTree(tree, fragments)) } } let _state1: State | null = null, state1 = () => _state1 || (_state1 = State.start(doc1)) function overlap(a: Tree, b: Tree) { let inA = new Set(), shared = 0, sharingTo = 0 for (let cur = a.cursor(); cur.next();) if (cur.tree) 
inA.add(cur.tree) for (let cur = b.cursor(); cur.next();) if (cur.tree && inA.has(cur.tree) && cur.type.is("Block") && cur.from >= sharingTo) { shared += cur.to - cur.from sharingTo = cur.to } return Math.round(shared * 100 / b.length) } function testChange(change: ChangeSpec, reuse = 10) { let state = state1().update(change) compareTree(state.tree, parser.parse(state.doc)) if (reuse) ist(overlap(state.tree, state1().tree), reuse, ">") } describe("Markdown incremental parsing", () => { it("can produce the proper tree", () => { // Replace 'three' with 'bears' let state = state1().update([{from: 24, to: 29, insert: "bears"}]) compareTree(state.tree, state1().tree) }) it("reuses nodes from the previous parse", () => { // Replace 'three' with 'bears' let state = state1().update([{from: 24, to: 29, insert: "bears"}]) ist(overlap(state1().tree, state.tree), 80, ">") }) it("can reuse content for a change in a block context", () => { // Replace 'content' with 'monkeys' let state = state1().update([{from: 92, to: 99, insert: "monkeys"}]) compareTree(state.tree, state1().tree) ist(overlap(state1().tree, state.tree), 20, ">") }) it("can handle deleting a quote mark", () => testChange([{from: 82, to: 83}])) it("can handle adding to a quoted block", () => testChange([{from: 37, insert: "> "}, {from: 45, insert: "> "}])) it("can handle a change in a post-linkref paragraph", () => testChange([{from: 249, to: 251}])) it("can handle a change in a paragraph-adjacent linkrefs", () => testChange([{from: 230, to: 231}])) it("can deal with multiple changes applied separately", () => { let state = state1().update([{from: 190, to: 191}], false).update([{from: 30, insert: "hi\n\nyou"}]) compareTree(state.tree, parser.parse(state.doc)) ist(overlap(state.tree, state1().tree), 20, ">") }) it("works when a change happens directly after a block", () => testChange([{from: 150, to: 167}])) it("works when a change deletes a blank line after a paragraph", () => testChange([{from: 207, to: 213}])) 
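In `State.update` above, each change's `from`/`to` refer to the document as left by the previous change, while the running `off` maps them back to coordinates in the original document for `TreeFragment.applyChanges`. A self-contained sketch of that bookkeeping (the `applyChanges` name is illustrative, not part of the test suite):

```typescript
// Apply a list of {from, to, insert} edits to a string, recording each
// change both in updated-document coordinates (fromB/toB) and, via the
// running offset, in original-document coordinates (fromA/toA).
function applyChanges(doc: string, changes: {from: number, to?: number, insert?: string}[]) {
  let changed: {fromA: number, toA: number, fromB: number, toB: number}[] = []
  let off = 0
  for (let {from, to = from, insert = ""} of changes) {
    doc = doc.slice(0, from) + insert + doc.slice(to)
    changed.push({fromA: from - off, toA: to - off, fromB: from, toB: from + insert.length})
    off += insert.length - (to - from)
  }
  return {doc, changed}
}
```

Applying `[{from: 1, to: 2, insert: "XY"}, {from: 4, insert: "!"}]` to `"abcdef"` gives `"aXYc!def"`, with the second change correctly reported at position 3 of the original document.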
it("doesn't get confused by removing paragraph-breaking markup", () => testChange([{from: 264, to: 265}])) function r(n: number) { return Math.floor(Math.random() * n) } function rStr(len: number) { let result = "", chars = "\n>x-" while (result.length < len) result += chars[r(chars.length)] return result } it("survives random changes", () => { for (let i = 0, l = doc1.length; i < 20; i++) { let c = 1 + r(4), changes = [] for (let i = 0, rFrom = 0; i < c; i++) { let rTo = rFrom + Math.floor((l - rFrom) / (c - i)) let from = rFrom + r(rTo - rFrom - 1), to = r(2) == 1 ? from : from + r(Math.min(rTo - from, 20)) let iR = r(3), insert = iR == 0 && from != to ? "" : iR == 1 ? "\n\n" : rStr(r(5) + 1) changes.push({from, to, insert}) l += insert.length - (to - from) rFrom = to + insert.length } testChange(changes, 0) } }) it("can handle large documents", () => { let doc = doc1.repeat(50) let state = State.start(doc) let newState = state.update([{from: doc.length >> 1, insert: "a\n\nb"}]) ist(overlap(state.tree, newState.tree), 90, ">") }) it("properly re-parses a continued indented code block", () => { let state = State.start(` One paragraph to create a bit of string length here Code Block Another paragraph that is long enough to create a fragment `).update([{from: 76, insert: " "}]) compareTree(state.tree, parser.parse(state.doc)) }) it("properly re-parses a continued list", () => { let state = State.start(` One paragraph to create a bit of string length here * List More content Another paragraph that is long enough to create a fragment `).update([{from: 65, insert: " * "}]) compareTree(state.tree, parser.parse(state.doc)) }) it("can recover from incremental parses that stop in the middle of a list", () => { let doc = ` 1. I am a list item with ***some* emphasized content inside** and the parser hopefully stops parsing after me. 2. Oh no the list continues. 
` let parse = parser.startParse(doc), tree parse.advance() ist(parse.parsedPos, doc.length, "<") parse.stopAt(parse.parsedPos) while (!(tree = parse.advance())) {} let state = new State(doc, tree, TreeFragment.addTree(tree)).update([]) ist(state.tree.topNode.lastChild!.from, 1) }) it("can reuse list items", () => { let start = State.start(" - List item\n".repeat(100)) let state = start.update([{from: 18, to: 19}]) ist(overlap(start.tree, state.tree), 80, ">") }) it("returns a tree starting at the first range", () => { let result = parser.parse("foo\n\nbar", [], [{from: 5, to: 8}]) ist(result.toString(), "Document(Paragraph)") ist(result.length, 3) ist(result.positions[0], 0) }) it("Allows gaps in the input", () => { let doc = ` The first X *y* X< >X paragraph. - And *a X<*>X list* ` let tree = parser.parse(doc, [], [{from: 0, to: 11}, {from: 12, to: 17}, {from: 23, to: 46}, {from: 51, to: 58}]) ist(tree.toString(), "Document(Paragraph(Emphasis(EmphasisMark,EmphasisMark)),BulletList(ListItem(ListMark,Paragraph(Emphasis(EmphasisMark,EmphasisMark)))))") ist(tree.length, doc.length) let top = tree.topNode, first = top.firstChild! ist(first.name, "Paragraph") ist(first.from, 1) ist(first.to, 34) let last = top.lastChild!.lastChild!.lastChild!, em = last.lastChild! 
ist(last.name, "Paragraph") ist(last.from, 39) ist(last.to, 57) ist(em.name, "Emphasis") ist(em.from, 43) ist(em.to, 57) }) it("can reuse nodes at the end of the document", () => { let doc = `* List item ~~~js function foo() { return false } ~~~ ` let tree = parser.parse(doc) let ins = 11 let doc2 = doc.slice(0, ins) + "\n* " + doc.slice(ins) let fragments = TreeFragment.applyChanges(TreeFragment.addTree(tree), [{fromA: ins, toA: ins, fromB: ins, toB: ins + 3}]) let tree2 = parser.parse(doc2, fragments) ist(tree2.topNode.lastChild!.tree, tree.topNode.lastChild!.tree) }) it("places reused nodes at the right position when there are gaps before them", () => { let doc = " {{}}\nb\n{{}}" let ast1 = parser.parse(doc, undefined, [{from: 0, to: 1}, {from: 5, to: 8}]) let frag = TreeFragment.applyChanges(TreeFragment.addTree(ast1), [{fromA: 0, toA: 0, fromB: 0, toB: 1}]) let ast2 = parser.parse(" " + doc, frag, [{from: 0, to: 2}, {from: 6, to: 9}]) ist(ast2.toString(), "Document(Paragraph)") let p = ast2.topNode.firstChild! ist(p.from, 7) ist(p.to, 8) }) }) markdown-1.2.0/test/test-markdown.ts000066400000000000000000001734571454233753200175130ustar00rootroot00000000000000import {parser} from "../dist/index.js" import {compareTree} from "./compare-tree.js" import {SpecParser} from "./spec.js" const specParser = new SpecParser(parser) function test(name: string, spec: string) { it(name, () => { let {tree, doc} = specParser.parse(spec, name) compareTree(parser.parse(doc), tree) }) } // These are the tests from revision 0.29 of the CommonMark spec, // mechanically translated to the format used here (because their // original format, providing expected HTML output, doesn't cover most // of the aspects of the output that we're interested in), and then eyeballed // to check whether the produced output corresponds to the intent of // the test. 
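The spec strings in the tests below use the brace notation defined by `SpecParser` above: `{P:` opens a node (a Paragraph, via the abbreviation table), `}` closes it, and everything else is document text. The following small sketch recovers the plain document from such a spec — essentially the `doc` accumulation in `SpecParser.parse` without the tree building; `stripMarkers` is an illustrative helper, not part of the test suite.

```typescript
// Strip {Name: ... } node markers from a spec string, keeping only
// the document text, the way SpecParser.parse accumulates `doc`.
function stripMarkers(spec: string): string {
  let doc = "", depth = 0
  for (let pos = 0; pos < spec.length; pos++) {
    let ch = spec[pos]
    if (ch == "{") {
      let name = /^(\w+):/.exec(spec.slice(pos + 1))
      if (name) { pos += name[0].length; depth++; continue }
      doc += ch
    } else if (ch == "}" && depth > 0) {
      depth--
    } else {
      doc += ch
    }
  }
  return doc
}
```

For example, `stripMarkers("{P:{Em:{e:*}hi{e:*}}}")` returns `"*hi*"` — the source text the parser is expected to turn into Paragraph(Emphasis(EmphasisMark, EmphasisMark)).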
describe("CommonMark spec", () => { test("Tabs (example 1)", ` {CB:{cT:foo baz bim}} `) test("Tabs (example 2)", ` {CB:{cT:foo baz bim}} `) test("Tabs (example 3)", ` {CB:{cT:a a } {cT:ὐ a}} `) test("Tabs (example 4)", ` {BL:{LI: {l:-} {P:foo} {P:bar}}} `) test("Tabs (example 5)", ` {BL:{LI:{l:-} {P:foo} {CB:{cT:bar}}}} `) test("Tabs (example 6)", ` {Q:{q:>} {CB:{cT:foo}}} `) test("Tabs (example 7)", ` {BL:{LI:{l:-} {CB:{cT:foo}}}} `) test("Tabs (example 8)", ` {CB:{cT:foo } {cT:bar}} `) test("Tabs (example 9)", ` {BL:{LI: {l:-} {P:foo} {BL:{LI:{l:-} {P:bar} {BL:{LI:{l:-} {P:baz}}}}}}} `) test("Tabs (example 10)", ` {P:# Foo} `) test("Tabs (example 11)", ` {HR:* * * } `) test("Precedence (example 12)", ` {BL:{LI:{l:-} {P:\`one}} {LI:{l:-} {P:two\`}}} `) test("Thematic breaks (example 13)", ` {HR:***} {HR:---} {HR:___} `) test("Thematic breaks (example 14)", ` {P:+++} `) test("Thematic breaks (example 15)", ` {P:===} `) test("Thematic breaks (example 16)", ` {P:-- ** __} `) test("Thematic breaks (example 17)", ` {HR:***} {HR:***} {HR:***} `) test("Thematic breaks (example 18)", ` {CB:{cT:***}} `) test("Thematic breaks (example 19)", ` {P:Foo ***} `) test("Thematic breaks (example 20)", ` {HR:_____________________________________} `) test("Thematic breaks (example 21)", ` {HR:- - -} `) test("Thematic breaks (example 22)", ` {HR:** * ** * ** * **} `) test("Thematic breaks (example 23)", ` {HR:- - - -} `) test("Thematic breaks (example 24)", ` {HR:- - - - } `) test("Thematic breaks (example 25)", ` {P:_ _ _ _ a} {P:a------} {P:---a---} `) test("Thematic breaks (example 26)", ` {P:{Em:{e:*}-{e:*}}} `) test("Thematic breaks (example 27)", ` {BL:{LI:{l:-} {P:foo}}} {HR:***} {BL:{LI:{l:-} {P:bar}}} `) test("Thematic breaks (example 28)", ` {P:Foo} {HR:***} {P:bar} `) test("Thematic breaks (example 29)", ` {SH2:Foo {h:---}} {P:bar} `) test("Thematic breaks (example 30)", ` {BL:{LI:{l:*} {P:Foo}}} {HR:* * *} {BL:{LI:{l:*} {P:Bar}}} `) test("Thematic breaks (example 31)", ` 
{BL:{LI:{l:-} {P:Foo}} {LI:{l:-} {HR:* * *}}} `) test("ATX headings (example 32)", ` {H1:{h:#} foo} {H2:{h:##} foo} {H3:{h:###} foo} {H4:{h:####} foo} {H5:{h:#####} foo} {H6:{h:######} foo} `) test("ATX headings (example 33)", ` {P:####### foo} `) test("ATX headings (example 34)", ` {P:#5 bolt} {P:#hashtag} `) test("ATX headings (example 35)", ` {P:{Esc:\\#}# foo} `) test("ATX headings (example 36)", ` {H1:{h:#} foo {Em:{e:*}bar{e:*}} {Esc:\\*}baz{Esc:\\*}} `) test("ATX headings (example 37)", ` {H1:{h:#} foo } `) test("ATX headings (example 38)", ` {H3:{h:###} foo} {H2:{h:##} foo} {H1:{h:#} foo} `) test("ATX headings (example 39)", ` {CB:{cT:# foo}} `) test("ATX headings (example 40)", ` {P:foo # bar} `) test("ATX headings (example 41)", ` {H2:{h:##} foo {h:##}} {H3:{h:###} bar {h:###}} `) test("ATX headings (example 42)", ` {H1:{h:#} foo {h:##################################}} {H5:{h:#####} foo {h:##}} `) test("ATX headings (example 43)", ` {H3:{h:###} foo {h:###} } `) test("ATX headings (example 44)", ` {H3:{h:###} foo ### b} `) test("ATX headings (example 45)", ` {H1:{h:#} foo#} `) test("ATX headings (example 46)", ` {H3:{h:###} foo {Esc:\\#}##} {H2:{h:##} foo #{Esc:\\#}#} {H1:{h:#} foo {Esc:\\#}} `) test("ATX headings (example 47)", ` {HR:****} {H2:{h:##} foo} {HR:****} `) test("ATX headings (example 48)", ` {P:Foo bar} {H1:{h:#} baz} {P:Bar foo} `) test("ATX headings (example 49)", ` {H2:{h:##} } {H1:{h:#}} {H3:{h:###} {h:###}} `) test("Setext headings (example 50)", ` {SH1:Foo {Em:{e:*}bar{e:*}} {h:=========}} {SH2:Foo {Em:{e:*}bar{e:*}} {h:---------}} `) test("Setext headings (example 51)", ` {SH1:Foo {Em:{e:*}bar baz{e:*}} {h:====}} `) test("Setext headings (example 52)", ` {SH1:Foo {Em:{e:*}bar baz{e:*}} {h:====}} `) test("Setext headings (example 53)", ` {SH2:Foo {h:-------------------------}} {SH1:Foo {h:=}} `) test("Setext headings (example 54)", ` {SH2:Foo {h:---}} {SH2:Foo {h:-----}} {SH1:Foo {h:===}} `) test("Setext headings (example 55)", ` 
{CB:{cT:Foo } {cT:--- } {cT:Foo}} {HR:---} `) test("Setext headings (example 56)", ` {SH2:Foo {h:----} } `) test("Setext headings (example 57)", ` {P:Foo ---} `) test("Setext headings (example 58)", ` {P:Foo = =} {P:Foo} {HR:--- -} `) test("Setext headings (example 59)", ` {SH2:Foo {h:-----}} `) test("Setext headings (example 60)", ` {SH2:Foo\\ {h:----}} `) test("Setext headings (example 61)", ` {SH2:\`Foo {h:----}} {P:\`} {SH2:} `) test("Setext headings (example 62)", ` {Q:{q:>} {P:Foo}} {HR:---} `) test("Setext headings (example 63)", ` {Q:{q:>} {P:foo bar ===}} `) test("Setext headings (example 64)", ` {BL:{LI:{l:-} {P:Foo}}} {HR:---} `) test("Setext headings (example 65)", ` {SH2:Foo Bar {h:---}} `) test("Setext headings (example 66)", ` {HR:---} {SH2:Foo {h:---}} {SH2:Bar {h:---}} {P:Baz} `) test("Setext headings (example 67)", ` {P:====} `) test("Setext headings (example 68)", ` {HR:---} {HR:---} `) test("Setext headings (example 69)", ` {BL:{LI:{l:-} {P:foo}}} {HR:-----} `) test("Setext headings (example 70)", ` {CB:{cT:foo}} {HR:---} `) test("Setext headings (example 71)", ` {Q:{q:>} {P:foo}} {HR:-----} `) test("Setext headings (example 72)", ` {SH2:{Esc:\\>} foo {h:------}} `) test("Setext headings (example 73)", ` {P:Foo} {SH2:bar {h:---}} {P:baz} `) test("Setext headings (example 74)", ` {P:Foo bar} {HR:---} {P:baz} `) test("Setext headings (example 75)", ` {P:Foo bar} {HR:* * *} {P:baz} `) test("Setext headings (example 76)", ` {P:Foo bar {Esc:\\-}-- baz} `) test("Indented code blocks (example 77)", ` {CB:{cT:a simple } {cT: indented code block}} `) test("Indented code blocks (example 78)", ` {BL:{LI: {l:-} {P:foo} {P:bar}}} `) test("Indented code blocks (example 79)", ` {OL:{LI:{l:1.} {P:foo} {BL:{LI:{l:-} {P:bar}}}}} `) test("Indented code blocks (example 80)", ` {CB:{cT: } {cT:*hi* } {cT:- one}} `) test("Indented code blocks (example 81)", ` {CB:{cT:chunk1 } {cT:chunk2 } {cT: } {cT: } {cT: } {cT:chunk3}} `) test("Indented code blocks (example 82)", ` 
{CB:{cT:chunk1 } {cT: } {cT: chunk2}} `) test("Indented code blocks (example 83)", ` {P:Foo bar} `) test("Indented code blocks (example 84)", ` {CB:{cT:foo}} {P:bar} `) test("Indented code blocks (example 85)", ` {H1:{h:#} Heading} {CB:{cT:foo}} {SH2:Heading {h:------}} {CB:{cT:foo}} {HR:----} `) test("Indented code blocks (example 86)", ` {CB:{cT: foo } {cT:bar}} `) test("Indented code blocks (example 87)", ` {CB:{cT:foo}} `) test("Indented code blocks (example 88)", ` {CB:{cT:foo }} `) test("Fenced code blocks (example 89)", ` {FC:{c:\`\`\`} {cT:< >} {c:\`\`\`}} `) test("Fenced code blocks (example 90)", ` {FC:{c:~~~} {cT:< >} {c:~~~}} `) test("Fenced code blocks (example 91)", ` {P:{C:{c:\`\`} foo {c:\`\`}}} `) test("Fenced code blocks (example 92)", ` {FC:{c:\`\`\`} {cT:aaa ~~~} {c:\`\`\`}} `) test("Fenced code blocks (example 93)", ` {FC:{c:~~~} {cT:aaa \`\`\`} {c:~~~}} `) test("Fenced code blocks (example 94)", ` {FC:{c:\`\`\`\`} {cT:aaa \`\`\`} {c:\`\`\`\`\`\`}} `) test("Fenced code blocks (example 95)", ` {FC:{c:~~~~} {cT:aaa ~~~} {c:~~~~}} `) test("Fenced code blocks (example 96)", ` {FC:{c:\`\`\`} }`) test("Fenced code blocks (example 97)", ` {FC:{c:\`\`\`\`\`} {cT: \`\`\` aaa }}`) test("Fenced code blocks (example 98)", ` {Q:{q:>} {FC:{c:\`\`\`} {q:>} {cT:aaa}}} {P:bbb} `) test("Fenced code blocks (example 99)", ` {FC:{c:\`\`\`} {cT: } {c:\`\`\`}} `) test("Fenced code blocks (example 100)", ` {FC:{c:\`\`\`} {c:\`\`\`}} `) test("Fenced code blocks (example 101)", ` {FC:{c:\`\`\`} {cT: aaa aaa} {c:\`\`\`}} `) test("Fenced code blocks (example 102)", ` {FC:{c:\`\`\`} {cT:aaa aaa aaa} {c:\`\`\`}} `) test("Fenced code blocks (example 103)", ` {FC:{c:\`\`\`} {cT: aaa aaa aaa} {c:\`\`\`}} `) test("Fenced code blocks (example 104)", ` {CB:{cT:\`\`\` } {cT:aaa } {cT:\`\`\`}} `) test("Fenced code blocks (example 105)", ` {FC:{c:\`\`\`} {cT:aaa} {c:\`\`\`}} `) test("Fenced code blocks (example 106)", ` {FC:{c:\`\`\`} {cT:aaa} {c:\`\`\`}} `) test("Fenced code blocks 
(example 107)", ` {FC:{c:\`\`\`} {cT:aaa \`\`\` }}`) test("Fenced code blocks (example 108)", ` {P:{C:{c:\`\`\`} {c:\`\`\`}} aaa} `) test("Fenced code blocks (example 109)", ` {FC:{c:~~~~~~} {cT:aaa ~~~ ~~ }}`) test("Fenced code blocks (example 110)", ` {P:foo} {FC:{c:\`\`\`} {cT:bar} {c:\`\`\`}} {P:baz} `) test("Fenced code blocks (example 111)", ` {SH2:foo {h:---}} {FC:{c:~~~} {cT:bar} {c:~~~}} {H1:{h:#} baz} `) test("Fenced code blocks (example 112)", ` {FC:{c:\`\`\`}{cI:ruby} {cT:def foo(x) return 3 end} {c:\`\`\`}} `) test("Fenced code blocks (example 113)", ` {FC:{c:~~~~} {cI:ruby startline=3 $%@#$} {cT:def foo(x) return 3 end} {c:~~~~~~~}} `) test("Fenced code blocks (example 114)", ` {FC:{c:\`\`\`\`}{cI:;} {c:\`\`\`\`}} `) test("Fenced code blocks (example 115)", ` {P:{C:{c:\`\`\`} aa {c:\`\`\`}} foo} `) test("Fenced code blocks (example 116)", ` {FC:{c:~~~} {cI:aa \`\`\` ~~~} {cT:foo} {c:~~~}} `) test("Fenced code blocks (example 117)", ` {FC:{c:\`\`\`} {cT:\`\`\` aaa} {c:\`\`\`}} `) test("HTML blocks (example 118)", ` {HB:
**Hello**,}

{P:{Em:{e:_}world{e:_}}.
{HT:
}} {HB:
} `) test("HTML blocks (example 119)", ` {HB:
hi
} {P:okay.} `) test("HTML blocks (example 120)", ` {HB:
*foo*} `) test("HTML blocks (example 122)", ` {HB:
} {P:{Em:{e:*}Markdown{e:*}}} {HB:
} `) test("HTML blocks (example 123)", ` {HB:
} `) test("HTML blocks (example 124)", ` {HB:
} `) test("HTML blocks (example 125)", ` {HB:
*foo*} {P:{Em:{e:*}bar{e:*}}} `) test("HTML blocks (example 126)", ` {HB:} `) test("HTML blocks (example 130)", ` {HB:
foo
} `) test("HTML blocks (example 131)", ` {HB:
\`\`\` c int x = 33; x\`\`\`} `) test("HTML blocks (example 132)", ` {HB: *bar* } `) test("HTML blocks (example 133)", ` {HB: *bar* } `) test("HTML blocks (example 134)", ` {HB: *bar* } `) test("HTML blocks (example 135)", ` {HB: *bar*} `) test("HTML blocks (example 136)", ` {HB: *foo* } `) test("HTML blocks (example 137)", ` {HB:} {P:{Em:{e:*}foo{e:*}}} {HB:} `) test("HTML blocks (example 138)", ` {P:{HT:}{Em:{e:*}foo{e:*}}{HT:}} `) test("HTML blocks (example 139)", ` {HB:

import Text.HTML.TagSoup

main :: IO ()
main = print $ parseTags tags
} {P:okay} `) test("HTML blocks (example 140)", ` {HB:} {P:okay} `) test("HTML blocks (example 141)", ` {HB:} {P:okay} `) test("HTML blocks (example 142)", ` {HB:} {P:{Em:{e:*}foo{e:*}}} `) test("HTML blocks (example 146)", ` {CMB:*bar*} {P:{Em:{e:*}baz{e:*}}} `) test("HTML blocks (example 147)", ` {HB:1. *bar*} `) test("HTML blocks (example 148)", ` {CMB:} {P:okay} `) test("HTML blocks (example 149)", ` {PI:'; ?>} {P:okay} `) test("HTML blocks (example 150)", ` {HB:} `) test("HTML blocks (example 151)", ` {HB:} {P:okay} `) test("HTML blocks (example 152)", ` {CMB:} {CB:{cT:}} `) test("HTML blocks (example 153)", ` {HB:
} {CB:{cT:
}} `) test("HTML blocks (example 154)", ` {P:Foo} {HB:
bar
} `) test("HTML blocks (example 155)", ` {HB:
bar
*foo*} `) test("HTML blocks (example 156)", ` {P:Foo {HT:} baz} `) test("HTML blocks (example 157)", ` {HB:
} {P:{Em:{e:*}Emphasized{e:*}} text.} {HB:
} `) test("HTML blocks (example 158)", ` {HB:
*Emphasized* text.
} `) test("HTML blocks (example 159)", ` {HB:} {HB:} {HB:} {HB:} {HB:
Hi
} `) test("HTML blocks (example 160)", ` {HB:} {HB:} {CB:{cT:}} {HB:} {HB:
} {cT: Hi } {cT:
} `) test("Link reference definitions (example 161)", ` {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 162)", ` {LR:{LL:[foo]}{L::} {URL:/url} {LT:'the title'} } {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 163)", ` {LR:{LL:[Foo*bar\\]]}{L::}{URL:my_(url)} {LT:'title (with parens)'}} {P:{Ln:{L:[}Foo*bar{Esc:\\]}{L:]}}} `) test("Link reference definitions (example 164)", ` {LR:{LL:[Foo bar]}{L::} {URL:} {LT:'title'}} {P:{Ln:{L:[}Foo bar{L:]}}} `) test("Link reference definitions (example 165)", ` {LR:{LL:[foo]}{L::} {URL:/url} {LT:' title line1 line2 '}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 166)", ` {P:{Ln:{L:[}foo{L:]}}: /url 'title} {P:with blank line'} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 167)", ` {LR:{LL:[foo]}{L::} {URL:/url}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 168)", ` {P:{Ln:{L:[}foo{L:]}}:} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 169)", ` {LR:{LL:[foo]}{L::} {URL:<>}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 170)", ` {P:{Ln:{L:[}foo{L:]}}: {HT:}(baz)} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 171)", ` {LR:{LL:[foo]}{L::} {URL:/url\`bar\`*baz} {LT:"foo\\"bar\\baz"}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 172)", ` {P:{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:url}} `) test("Link reference definitions (example 173)", ` {P:{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:first}} {LR:{LL:[foo]}{L::} {URL:second}} `) test("Link reference definitions (example 174)", ` {LR:{LL:[FOO]}{L::} {URL:/url}} {P:{Ln:{L:[}Foo{L:]}}} `) test("Link reference definitions (example 175)", ` {LR:{LL:[ΑΓΩ]}{L::} {URL:/φου}} {P:{Ln:{L:[}αγω{L:]}}} `) test("Link reference definitions (example 176)", ` {LR:{LL:[foo]}{L::} {URL:/url}} `) test("Link reference definitions (example 177)", ` {LR:{LL:[ 
foo ]}{L::} {URL:/url}} {P:bar} `) test("Link reference definitions (example 178)", ` {P:{Ln:{L:[}foo{L:]}}: /url "title" ok} `) test("Link reference definitions (example 179)", ` {LR:{LL:[foo]}{L::} {URL:/url}} {P:"title" ok} `) test("Link reference definitions (example 180)", ` {CB:{cT:[foo]: /url "title"}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 181)", ` {FC:{c:\`\`\`} {cT:[foo]: /url} {c:\`\`\`}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 182)", ` {P:Foo {Ln:{L:[}bar{L:]}}: /baz} {P:{Ln:{L:[}bar{L:]}}} `) test("Link reference definitions (example 183)", ` {H1:{h:#} {Ln:{L:[}Foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url}} {Q:{q:>} {P:bar}} `) test("Link reference definitions (example 184)", ` {LR:{LL:[foo]}{L::} {URL:/url}} {SH1:bar {h:===}} {P:{Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 185)", ` {LR:{LL:[foo]}{L::} {URL:/url}} {P:=== {Ln:{L:[}foo{L:]}}} `) test("Link reference definitions (example 186)", ` {LR:{LL:[foo]}{L::} {URL:/foo-url} {LT:"foo"}} {LR:{LL:[bar]}{L::} {URL:/bar-url} {LT:"bar"}} {LR:{LL:[baz]}{L::} {URL:/baz-url}} {P:{Ln:{L:[}foo{L:]}}, {Ln:{L:[}bar{L:]}}, {Ln:{L:[}baz{L:]}}} `) test("Link reference definitions (example 187)", ` {P:{Ln:{L:[}foo{L:]}}} {Q:{q:>} {LR:{LL:[foo]}{L::} {URL:/url}}} `) test("Link reference definitions (example 188)", ` {LR:{LL:[foo]}{L::} {URL:/url}} `) test("Paragraphs (example 189)", ` {P:aaa} {P:bbb} `) test("Paragraphs (example 190)", ` {P:aaa bbb} {P:ccc ddd} `) test("Paragraphs (example 191)", ` {P:aaa} {P:bbb} `) test("Paragraphs (example 192)", ` {P:aaa bbb} `) test("Paragraphs (example 193)", ` {P:aaa bbb ccc} `) test("Paragraphs (example 194)", ` {P:aaa bbb} `) test("Paragraphs (example 195)", ` {CB:{cT:aaa}} {P:bbb} `) test("Paragraphs (example 196)", ` {P:aaa{BR: }bbb } `) test("Blank lines (example 197)", ` {P:aaa} {H1:{h:#} aaa} `) test("Block quotes (example 198)", ` {Q:{q:>} {H1:{h:#} Foo} {q:>} {P:bar {q:>} baz}} `) test("Block 
quotes (example 199)", ` {Q:{q:>}{H1:{h:#} Foo} {q:>}{P:bar {q:>} baz}} `) test("Block quotes (example 200)", ` {Q:{q:>} {H1:{h:#} Foo} {q:>} {P:bar {q:>} baz}} `) test("Block quotes (example 201)", ` {CB:{cT:> # Foo } {cT:> bar } {cT:> baz}} `) test("Block quotes (example 202)", ` {Q:{q:>} {H1:{h:#} Foo} {q:>} {P:bar baz}} `) test("Block quotes (example 203)", ` {Q:{q:>} {P:bar baz {q:>} foo}} `) test("Block quotes (example 204)", ` {Q:{q:>} {P:foo}} {HR:---} `) test("Block quotes (example 205)", ` {Q:{q:>} {BL:{LI:{l:-} {P:foo}}}} {BL:{LI:{l:-} {P:bar}}} `) test("Block quotes (example 206)", ` {Q:{q:>} {CB:{cT:foo}}} {CB:{cT:bar}} `) test("Block quotes (example 207)", ` {Q:{q:>} {FC:{c:\`\`\`}}} {P:foo} {FC:{c:\`\`\`} }`) test("Block quotes (example 208)", ` {Q:{q:>} {P:foo - bar}} `) test("Block quotes (example 209)", ` {Q:{q:>}{P:}} `) test("Block quotes (example 210)", ` {Q:{q:>}{P:} {q:>} {q:>} } `) test("Block quotes (example 211)", ` {Q:{q:>}{P: {q:>} foo} {q:>} } `) test("Block quotes (example 212)", ` {Q:{q:>} {P:foo}} {Q:{q:>} {P:bar}} `) test("Block quotes (example 213)", ` {Q:{q:>} {P:foo {q:>} bar}} `) test("Block quotes (example 214)", ` {Q:{q:>} {P:foo} {q:>} {q:>} {P:bar}} `) test("Block quotes (example 215)", ` {P:foo} {Q:{q:>} {P:bar}} `) test("Block quotes (example 216)", ` {Q:{q:>} {P:aaa}} {HR:***} {Q:{q:>} {P:bbb}} `) test("Block quotes (example 217)", ` {Q:{q:>} {P:bar baz}} `) test("Block quotes (example 218)", ` {Q:{q:>} {P:bar}} {P:baz} `) test("Block quotes (example 219)", ` {Q:{q:>} {P:bar} {q:>}} {P:baz} `) test("Block quotes (example 220)", ` {Q:{q:>} {Q:{q:>} {Q:{q:>} {P:foo bar}}}} `) test("Block quotes (example 221)", ` {Q:{q:>}{Q:{q:>}{Q:{q:>} {P:foo {q:>} bar {q:>}{q:>}baz}}}} `) test("Block quotes (example 222)", ` {Q:{q:>} {CB:{cT:code}}} {Q:{q:>} {P:not code}} `) test("List items (example 223)", ` {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}} `) test("List items (example 224)", ` 
{OL:{LI:{l:1.} {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}}}} `) test("List items (example 225)", ` {BL:{LI:{l:-} {P:one}}} {P:two} `) test("List items (example 226)", ` {BL:{LI:{l:-} {P:one} {P:two}}} `) test("List items (example 227)", ` {BL:{LI: {l:-} {P:one}}} {CB:{cT: two}} `) test("List items (example 228)", ` {BL:{LI: {l:-} {P:one} {P:two}}} `) test("List items (example 229)", ` {Q:{q:>} {Q:{q:>} {OL:{LI:{l:1.} {P:one} {q:>}{q:>} {q:>}{q:>} {P:two}}}}} `) test("List items (example 230)", ` {Q:{q:>}{Q:{q:>}{BL:{LI:{l:-} {P:one} {q:>}{q:>}}} {q:>} {q:>} {P:two}}} `) test("List items (example 231)", ` {P:-one} {P:2.two} `) test("List items (example 232)", ` {BL:{LI:{l:-} {P:foo} {P:bar}}} `) test("List items (example 233)", ` {OL:{LI:{l:1.} {P:foo} {FC:{c:\`\`\`} {cT:bar} {c:\`\`\`}} {P:baz} {Q:{q:>} {P:bam}}}} `) test("List items (example 234)", ` {BL:{LI:{l:-} {P:Foo} {CB:{cT:bar } {cT:baz}}}} `) test("List items (example 235)", ` {OL:{LI:{l:123456789.} {P:ok}}} `) test("List items (example 236)", ` {P:1234567890. not ok} `) test("List items (example 237)", ` {OL:{LI:{l:0.} {P:ok}}} `) test("List items (example 238)", ` {OL:{LI:{l:003.} {P:ok}}} `) test("List items (example 239)", ` {P:-1. 
not ok} `) test("List items (example 240)", ` {BL:{LI:{l:-} {P:foo} {CB:{cT:bar}}}} `) test("List items (example 241)", ` {OL:{LI: {l:10.} {P:foo} {CB:{cT:bar}}}} `) test("List items (example 242)", ` {CB:{cT:indented code}} {P:paragraph} {CB:{cT:more code}} `) test("List items (example 243)", ` {OL:{LI:{l:1.} {CB:{cT:indented code}} {P:paragraph} {CB:{cT:more code}}}} `) test("List items (example 244)", ` {OL:{LI:{l:1.} {CB:{cT: indented code}} {P:paragraph} {CB:{cT:more code}}}} `) test("List items (example 245)", ` {P:foo} {P:bar} `) test("List items (example 246)", ` {BL:{LI:{l:-} {P:foo}}} {P:bar} `) test("List items (example 247)", ` {BL:{LI:{l:-} {P:foo} {P:bar}}} `) test("List items (example 248)", ` {BL:{LI:{l:-}{P: foo}} {LI:{l:-}{P:} {FC:{c:\`\`\`} {cT: bar} {c:\`\`\`}}} {LI:{l:-}{P: baz}}} `) test("List items (example 249)", ` {BL:{LI:{l:-} {P: foo}}} `) test("List items (example 250)", ` {BL:{LI:{l:-}{P:} {P:foo}}} `) test("List items (example 251)", ` {BL:{LI:{l:-} {P:foo}} {LI:{l:-}{P:}} {LI:{l:-} {P:bar}}} `) test("List items (example 252)", ` {BL:{LI:{l:-} {P:foo}} {LI:{l:-} {P:}} {LI:{l:-} {P:bar}}} `) test("List items (example 253)", ` {OL:{LI:{l:1.} {P:foo}} {LI:{l:2.}{P:}} {LI:{l:3.} {P:bar}}} `) test("List items (example 254)", ` {BL:{LI:{l:*}{P:}}} `) test("List items (example 255)", ` {P:foo *} {P:foo 1.} `) test("List items (example 256)", ` {OL:{LI: {l:1.} {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}}}} `) test("List items (example 257)", ` {OL:{LI: {l:1.} {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}}}} `) test("List items (example 258)", ` {OL:{LI: {l:1.} {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}}}} `) test("List items (example 259)", ` {CB:{cT:1. A paragraph } {cT: with two lines. 
} {cT: indented code } {cT: > A block quote.}} `) test("List items (example 260)", ` {OL:{LI: {l:1.} {P:A paragraph with two lines.} {CB:{cT:indented code}} {Q:{q:>} {P:A block quote.}}}} `) test("List items (example 261)", ` {OL:{LI: {l:1.} {P:A paragraph with two lines.}}} `) test("List items (example 262)", ` {Q:{q:>} {OL:{LI:{l:1.} {Q:{q:>} {P:Blockquote continued here.}}}}} `) test("List items (example 263)", ` {Q:{q:>} {OL:{LI:{l:1.} {Q:{q:>} {P:Blockquote {q:>} continued here.}}}}} `) test("List items (example 264)", ` {BL:{LI:{l:-} {P:foo} {BL:{LI:{l:-} {P:bar} {BL:{LI:{l:-} {P:baz} {BL:{LI:{l:-} {P:boo}}}}}}}}} `) test("List items (example 265)", ` {BL:{LI:{l:-} {P:foo}} {LI: {l:-} {P:bar}} {LI: {l:-} {P:baz}} {LI: {l:-} {P:boo}}} `) test("List items (example 266)", ` {OL:{LI:{l:10)} {P:foo} {BL:{LI:{l:-} {P:bar}}}}} `) test("List items (example 267)", ` {OL:{LI:{l:10)} {P:foo}}} {BL:{LI: {l:-} {P:bar}}} `) test("List items (example 268)", ` {BL:{LI:{l:-} {BL:{LI:{l:-} {P:foo}}}}} `) test("List items (example 269)", ` {OL:{LI:{l:1.} {BL:{LI:{l:-} {OL:{LI:{l:2.} {P:foo}}}}}}} `) test("List items (example 270)", ` {BL:{LI:{l:-} {H1:{h:#} Foo}} {LI:{l:-} {SH2:Bar {h:---}} {P:baz}}} `) test("Lists (example 271)", ` {BL:{LI:{l:-} {P:foo}} {LI:{l:-} {P:bar}}} {BL:{LI:{l:+} {P:baz}}} `) test("Lists (example 272)", ` {OL:{LI:{l:1.} {P:foo}} {LI:{l:2.} {P:bar}}} {OL:{LI:{l:3)} {P:baz}}} `) test("Lists (example 273)", ` {P:Foo} {BL:{LI:{l:-} {P:bar}} {LI:{l:-} {P:baz}}} `) test("Lists (example 274)", ` {P:The number of windows in my house is 14. 
The number of doors is 6.} `) test("Lists (example 275)", ` {P:The number of windows in my house is} {OL:{LI:{l:1.} {P:The number of doors is 6.}}} `) test("Lists (example 276)", ` {BL:{LI:{l:-} {P:foo}} {LI:{l:-} {P:bar}} {LI:{l:-} {P:baz}}} `) test("Lists (example 277)", ` {BL:{LI:{l:-} {P:foo} {BL:{LI:{l:-} {P:bar} {BL:{LI:{l:-} {P:baz} {P:bim}}}}}}} `) test("Lists (example 278)", ` {BL:{LI:{l:-} {P:foo}} {LI:{l:-} {P:bar}}} {CMB:} {BL:{LI:{l:-} {P:baz}} {LI:{l:-} {P:bim}}} `) test("Lists (example 279)", ` {BL:{LI:{l:-} {P:foo} {P:notcode}} {LI:{l:-} {P:foo}}} {CMB:} {CB:{cT:code}} `) test("Lists (example 280)", ` {BL:{LI:{l:-} {P:a}} {LI: {l:-} {P:b}} {LI: {l:-} {P:c}} {LI: {l:-} {P:d}} {LI: {l:-} {P:e}} {LI: {l:-} {P:f}} {LI:{l:-} {P:g}}} `) test("Lists (example 281)", ` {OL:{LI:{l:1.} {P:a}} {LI: {l:2.} {P:b}} {LI: {l:3.} {P:c}}} `) test("Lists (example 282)", ` {BL:{LI:{l:-} {P:a}} {LI: {l:-} {P:b}} {LI: {l:-} {P:c}} {LI: {l:-} {P:d - e}}} `) test("Lists (example 283)", ` {OL:{LI:{l:1.} {P:a}} {LI: {l:2.} {P:b}}} {CB:{cT:3. 
c}} `) test("Lists (example 284)", ` {BL:{LI:{l:-} {P:a}} {LI:{l:-} {P:b}} {LI:{l:-} {P:c}}} `) test("Lists (example 285)", ` {BL:{LI:{l:*} {P:a}} {LI:{l:*}{P:}} {LI:{l:*} {P:c}}} `) test("Lists (example 286)", ` {BL:{LI:{l:-} {P:a}} {LI:{l:-} {P:b} {P:c}} {LI:{l:-} {P:d}}} `) test("Lists (example 287)", ` {BL:{LI:{l:-} {P:a}} {LI:{l:-} {P:b} {LR:{LL:[ref]}{L::} {URL:/url}}} {LI:{l:-} {P:d}}} `) test("Lists (example 288)", ` {BL:{LI:{l:-} {P:a}} {LI:{l:-} {FC:{c:\`\`\`} {cT:b } {c:\`\`\`}}} {LI:{l:-} {P:c}}} `) test("Lists (example 289)", ` {BL:{LI:{l:-} {P:a} {BL:{LI:{l:-} {P:b} {P:c}}}} {LI:{l:-} {P:d}}} `) test("Lists (example 290)", ` {BL:{LI:{l:*} {P:a} {Q:{q:>} {P:b} {q:>}}} {LI:{l:*} {P:c}}} `) test("Lists (example 291)", ` {BL:{LI:{l:-} {P:a} {Q:{q:>} {P:b}} {FC:{c:\`\`\`} {cT:c} {c:\`\`\`}}} {LI:{l:-} {P:d}}} `) test("Lists (example 292)", ` {BL:{LI:{l:-} {P:a}}} `) test("Lists (example 293)", ` {BL:{LI:{l:-} {P:a} {BL:{LI:{l:-} {P:b}}}}} `) test("Lists (example 294)", ` {OL:{LI:{l:1.} {FC:{c:\`\`\`} {cT:foo} {c:\`\`\`}} {P:bar}}} `) test("Lists (example 295)", ` {BL:{LI:{l:*} {P:foo} {BL:{LI:{l:*} {P:bar}}} {P:baz}}} `) test("Lists (example 296)", ` {BL:{LI:{l:-} {P:a} {BL:{LI:{l:-} {P:b}} {LI:{l:-} {P:c}}}} {LI:{l:-} {P:d} {BL:{LI:{l:-} {P:e}} {LI:{l:-} {P:f}}}}} `) test("Backslash escapes (example 297)", ` {P:{Esc:\\!}{Esc:\\"}{Esc:\\#}{Esc:\\$}{Esc:\\%}{Esc:\\&}{Esc:\\'}{Esc:\\(}{Esc:\\)}{Esc:\\*}{Esc:\\+}{Esc:\\,}{Esc:\\-}{Esc:\\.}{Esc:\\/}{Esc:\\:}{Esc:\\;}{Esc:\\<}{Esc:\\=}{Esc:\\>}{Esc:\\?}{Esc:\\@}{Esc:\\[}{Esc:\\\\}{Esc:\\]}{Esc:\\^}{Esc:\\_}{Esc:\\\`}{Esc:\\|}{Esc:\\~}} `) test("Backslash escapes (example 299)", ` {P:\\ \\A\\a\\ \\3\\φ\\«} `) test("Backslash escapes (example 300)", ` {P:{Esc:\\*}not emphasized* {Esc:\\<}br/> not a tag {Esc:\\[}not a link](/foo) {Esc:\\\`}not code\` 1{Esc:\\.} not a list {Esc:\\*} not a list {Esc:\\#} not a heading {Esc:\\[}foo]: /url "not a reference" {Esc:\\&}ouml; not a character entity} `) test("Backslash 
escapes (example 301)", ` {P:{Esc:\\\\}{Em:{e:*}emphasis{e:*}}} `) test("Backslash escapes (example 302)", ` {P:foo{BR:\\ }bar} `) test("Backslash escapes (example 303)", ` {P:{C:{c:\`\`} \\[\\\` {c:\`\`}}} `) test("Backslash escapes (example 304)", ` {CB:{cT:\\[\\]}} `) test("Backslash escapes (example 305)", ` {FC:{c:~~~} {cT:\\[\\]} {c:~~~}} `) test("Backslash escapes (example 306)", ` {P:{Al:{L:<}{URL:http://example.com?find=\\*}{L:>}}} `) test("Backslash escapes (example 307)", ` {HB:
} `) test("Backslash escapes (example 308)", ` {P:{Ln:{L:[}foo{L:]}{L:(}{URL:/bar\\*} {LT:"ti\\*tle"}{L:)}}} `) test("Backslash escapes (example 309)", ` {P:{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/bar\\*} {LT:"ti\\*tle"}} `) test("Backslash escapes (example 310)", ` {FC:{c:\`\`\`} {cI:foo\\+bar} {cT:foo} {c:\`\`\`}} `) test("Inlines (example 298)", ` {P:{C:{c:\`}hi{c:\`}}lo\`} `) test("Entity and numeric character references (example 311)", ` {P:{Ent: } {Ent:&} {Ent:©} {Ent:Æ} {Ent:Ď} {Ent:¾} {Ent:ℋ} {Ent:ⅆ} {Ent:∲} {Ent:≧̸}} `) test("Entity and numeric character references (example 312)", ` {P:{Ent:#} {Ent:Ӓ} {Ent:Ϡ} {Ent:�}} `) test("Entity and numeric character references (example 313)", ` {P:{Ent:"} {Ent:ആ} {Ent:ಫ}} `) // Our implementation doesn't check for invalid entity names test("Entity and numeric character references (example 314)", ` {P:  {Ent:&x;} &#; &#x; {Ent:�} &#abcdef0; {Ent:&ThisIsNotDefined;} &hi?;} `) test("Entity and numeric character references (example 315)", ` {P:©} `) // Again, not checking for made-up entity names. 
test("Entity and numeric character references (example 316)", ` {P:{Ent:&MadeUpEntity;}} `) test("Entity and numeric character references (example 317)", ` {HB:} `) test("Entity and numeric character references (example 318)", ` {P:{Ln:{L:[}foo{L:]}{L:(}{URL:/föö} {LT:"föö"}{L:)}}} `) test("Entity and numeric character references (example 319)", ` {P:{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/föö} {LT:"föö"}} `) test("Entity and numeric character references (example 320)", ` {FC:{c:\`\`\`} {cI:föö} {cT:foo} {c:\`\`\`}} `) test("Entity and numeric character references (example 321)", ` {P:{C:{c:\`}föö{c:\`}}} `) test("Entity and numeric character references (example 322)", ` {CB:{cT:föfö}} `) test("Entity and numeric character references (example 323)", ` {P:{Ent:*}foo{Ent:*} {Em:{e:*}foo{e:*}}} `) test("Entity and numeric character references (example 324)", ` {P:{Ent:*} foo} {BL:{LI:{l:*} {P:foo}}} `) test("Entity and numeric character references (example 325)", ` {P:foo{Ent: }{Ent: }bar} `) test("Entity and numeric character references (example 326)", ` {P:{Ent: }foo} `) test("Entity and numeric character references (example 327)", ` {P:{Ln:{L:[}a{L:]}}(url {Ent:"}tit{Ent:"})} `) test("Code spans (example 328)", ` {P:{C:{c:\`}foo{c:\`}}} `) test("Code spans (example 329)", ` {P:{C:{c:\`\`} foo \` bar {c:\`\`}}} `) test("Code spans (example 330)", ` {P:{C:{c:\`} \`\` {c:\`}}} `) test("Code spans (example 331)", ` {P:{C:{c:\`} \`\` {c:\`}}} `) test("Code spans (example 332)", ` {P:{C:{c:\`} a{c:\`}}} `) test("Code spans (example 333)", ` {P:{C:{c:\`} b {c:\`}}} `) test("Code spans (example 334)", ` {P:{C:{c:\`} {c:\`}} {C:{c:\`} {c:\`}}} `) test("Code spans (example 335)", ` {P:{C:{c:\`\`} foo bar baz {c:\`\`}}} `) test("Code spans (example 336)", ` {P:{C:{c:\`\`} foo {c:\`\`}}} `) test("Code spans (example 337)", ` {P:{C:{c:\`}foo bar baz{c:\`}}} `) test("Code spans (example 338)", ` {P:{C:{c:\`}foo\\{c:\`}}bar\`} `) test("Code spans (example 339)", ` 
{P:{C:{c:\`\`}foo\`bar{c:\`\`}}} `) test("Code spans (example 340)", ` {P:{C:{c:\`} foo \`\` bar {c:\`}}} `) test("Code spans (example 341)", ` {P:*foo{C:{c:\`}*{c:\`}}} `) test("Code spans (example 342)", ` {P:[not a {C:{c:\`}link](/foo{c:\`}})} `) test("Code spans (example 343)", ` {P:{C:{c:\`}\`} `) test("Code spans (example 344)", ` {P:{HT:}\`} `) test("Code spans (example 345)", ` {P:{C:{c:\`}\`} `) test("Code spans (example 346)", ` {P:{Al:{L:<}{URL:http://foo.bar.\`baz}{L:>}}\`} `) test("Code spans (example 347)", ` {P:\`\`\`foo\`\`} `) test("Code spans (example 348)", ` {P:\`foo} `) test("Code spans (example 349)", ` {P:\`foo{C:{c:\`\`}bar{c:\`\`}}} `) test("Emphasis and strong emphasis (example 350)", ` {P:{Em:{e:*}foo bar{e:*}}} `) test("Emphasis and strong emphasis (example 351)", ` {P:a * foo bar*} `) test("Emphasis and strong emphasis (example 352)", ` {P:a*"foo"*} `) test("Emphasis and strong emphasis (example 353)", ` {P:* a *} `) test("Emphasis and strong emphasis (example 354)", ` {P:foo{Em:{e:*}bar{e:*}}} `) test("Emphasis and strong emphasis (example 355)", ` {P:5{Em:{e:*}6{e:*}}78} `) test("Emphasis and strong emphasis (example 356)", ` {P:{Em:{e:_}foo bar{e:_}}} `) test("Emphasis and strong emphasis (example 357)", ` {P:_ foo bar_} `) test("Emphasis and strong emphasis (example 358)", ` {P:a_"foo"_} `) test("Emphasis and strong emphasis (example 359)", ` {P:foo_bar_} `) test("Emphasis and strong emphasis (example 360)", ` {P:5_6_78} `) test("Emphasis and strong emphasis (example 361)", ` {P:пристаням_стремятся_} `) test("Emphasis and strong emphasis (example 362)", ` {P:aa_"bb"_cc} `) test("Emphasis and strong emphasis (example 363)", ` {P:foo-{Em:{e:_}(bar){e:_}}} `) test("Emphasis and strong emphasis (example 364)", ` {P:_foo*} `) test("Emphasis and strong emphasis (example 365)", ` {P:*foo bar *} `) test("Emphasis and strong emphasis (example 366)", ` {P:*foo bar *} `) test("Emphasis and strong emphasis (example 367)", ` {P:*(*foo)} `) 
test("Emphasis and strong emphasis (example 368)", ` {P:{Em:{e:*}({Em:{e:*}foo{e:*}}){e:*}}} `) test("Emphasis and strong emphasis (example 369)", ` {P:{Em:{e:*}foo{e:*}}bar} `) test("Emphasis and strong emphasis (example 370)", ` {P:_foo bar _} `) test("Emphasis and strong emphasis (example 371)", ` {P:_(_foo)} `) test("Emphasis and strong emphasis (example 372)", ` {P:{Em:{e:_}({Em:{e:_}foo{e:_}}){e:_}}} `) test("Emphasis and strong emphasis (example 373)", ` {P:_foo_bar} `) test("Emphasis and strong emphasis (example 374)", ` {P:_пристаням_стремятся} `) test("Emphasis and strong emphasis (example 375)", ` {P:{Em:{e:_}foo_bar_baz{e:_}}} `) test("Emphasis and strong emphasis (example 376)", ` {P:{Em:{e:_}(bar){e:_}}.} `) test("Emphasis and strong emphasis (example 377)", ` {P:{St:{e:**}foo bar{e:**}}} `) test("Emphasis and strong emphasis (example 378)", ` {P:** foo bar**} `) test("Emphasis and strong emphasis (example 379)", ` {P:a**"foo"**} `) test("Emphasis and strong emphasis (example 380)", ` {P:foo{St:{e:**}bar{e:**}}} `) test("Emphasis and strong emphasis (example 381)", ` {P:{St:{e:__}foo bar{e:__}}} `) test("Emphasis and strong emphasis (example 382)", ` {P:__ foo bar__} `) test("Emphasis and strong emphasis (example 383)", ` {P:__ foo bar__} `) test("Emphasis and strong emphasis (example 384)", ` {P:a__"foo"__} `) test("Emphasis and strong emphasis (example 385)", ` {P:foo__bar__} `) test("Emphasis and strong emphasis (example 386)", ` {P:5__6__78} `) test("Emphasis and strong emphasis (example 387)", ` {P:пристаням__стремятся__} `) test("Emphasis and strong emphasis (example 388)", ` {P:{St:{e:__}foo, {St:{e:__}bar{e:__}}, baz{e:__}}} `) test("Emphasis and strong emphasis (example 389)", ` {P:foo-{St:{e:__}(bar){e:__}}} `) test("Emphasis and strong emphasis (example 390)", ` {P:**foo bar **} `) test("Emphasis and strong emphasis (example 391)", ` {P:**(**foo)} `) test("Emphasis and strong emphasis (example 392)", ` 
{P:{Em:{e:*}({St:{e:**}foo{e:**}}){e:*}}} `) test("Emphasis and strong emphasis (example 393)", ` {P:{St:{e:**}Gomphocarpus ({Em:{e:*}Gomphocarpus physocarpus{e:*}}, syn. {Em:{e:*}Asclepias physocarpa{e:*}}){e:**}}} `) test("Emphasis and strong emphasis (example 394)", ` {P:{St:{e:**}foo "{Em:{e:*}bar{e:*}}" foo{e:**}}} `) test("Emphasis and strong emphasis (example 395)", ` {P:{St:{e:**}foo{e:**}}bar} `) test("Emphasis and strong emphasis (example 396)", ` {P:__foo bar __} `) test("Emphasis and strong emphasis (example 397)", ` {P:__(__foo)} `) test("Emphasis and strong emphasis (example 398)", ` {P:{Em:{e:_}({St:{e:__}foo{e:__}}){e:_}}} `) test("Emphasis and strong emphasis (example 399)", ` {P:__foo__bar} `) test("Emphasis and strong emphasis (example 400)", ` {P:__пристаням__стремятся} `) test("Emphasis and strong emphasis (example 401)", ` {P:{St:{e:__}foo__bar__baz{e:__}}} `) test("Emphasis and strong emphasis (example 402)", ` {P:{St:{e:__}(bar){e:__}}.} `) test("Emphasis and strong emphasis (example 403)", ` {P:{Em:{e:*}foo {Ln:{L:[}bar{L:]}{L:(}{URL:/url}{L:)}}{e:*}}} `) test("Emphasis and strong emphasis (example 404)", ` {P:{Em:{e:*}foo bar{e:*}}} `) test("Emphasis and strong emphasis (example 405)", ` {P:{Em:{e:_}foo {St:{e:__}bar{e:__}} baz{e:_}}} `) test("Emphasis and strong emphasis (example 406)", ` {P:{Em:{e:_}foo {Em:{e:_}bar{e:_}} baz{e:_}}} `) test("Emphasis and strong emphasis (example 407)", ` {P:{Em:{e:_}{Em:{e:_}foo{e:_}} bar{e:_}}} `) test("Emphasis and strong emphasis (example 408)", ` {P:{Em:{e:*}foo {Em:{e:*}bar{e:*}}{e:*}}} `) test("Emphasis and strong emphasis (example 409)", ` {P:{Em:{e:*}foo {St:{e:**}bar{e:**}} baz{e:*}}} `) test("Emphasis and strong emphasis (example 410)", ` {P:{Em:{e:*}foo{St:{e:**}bar{e:**}}baz{e:*}}} `) test("Emphasis and strong emphasis (example 411)", ` {P:{Em:{e:*}foo**bar{e:*}}} `) test("Emphasis and strong emphasis (example 412)", ` {P:{Em:{e:*}{St:{e:**}foo{e:**}} bar{e:*}}} `) test("Emphasis and strong 
emphasis (example 413)", ` {P:{Em:{e:*}foo {St:{e:**}bar{e:**}}{e:*}}} `) test("Emphasis and strong emphasis (example 414)", ` {P:{Em:{e:*}foo{St:{e:**}bar{e:**}}{e:*}}} `) test("Emphasis and strong emphasis (example 415)", ` {P:foo{Em:{e:*}{St:{e:**}bar{e:**}}{e:*}}baz} `) test("Emphasis and strong emphasis (example 416)", ` {P:foo{St:{e:**}{St:{e:**}{St:{e:**}bar{e:**}}{e:**}}{e:**}}***baz} `) test("Emphasis and strong emphasis (example 417)", ` {P:{Em:{e:*}foo {St:{e:**}bar {Em:{e:*}baz{e:*}} bim{e:**}} bop{e:*}}} `) test("Emphasis and strong emphasis (example 418)", ` {P:{Em:{e:*}foo {Ln:{L:[}{Em:{e:*}bar{e:*}}{L:]}{L:(}{URL:/url}{L:)}}{e:*}}} `) test("Emphasis and strong emphasis (example 419)", ` {P:** is not an empty emphasis} `) test("Emphasis and strong emphasis (example 420)", ` {P:**** is not an empty strong emphasis} `) test("Emphasis and strong emphasis (example 421)", ` {P:{St:{e:**}foo {Ln:{L:[}bar{L:]}{L:(}{URL:/url}{L:)}}{e:**}}} `) test("Emphasis and strong emphasis (example 422)", ` {P:{St:{e:**}foo bar{e:**}}} `) test("Emphasis and strong emphasis (example 423)", ` {P:{St:{e:__}foo {Em:{e:_}bar{e:_}} baz{e:__}}} `) test("Emphasis and strong emphasis (example 424)", ` {P:{St:{e:__}foo {St:{e:__}bar{e:__}} baz{e:__}}} `) test("Emphasis and strong emphasis (example 425)", ` {P:{St:{e:__}{St:{e:__}foo{e:__}} bar{e:__}}} `) test("Emphasis and strong emphasis (example 426)", ` {P:{St:{e:**}foo {St:{e:**}bar{e:**}}{e:**}}} `) test("Emphasis and strong emphasis (example 427)", ` {P:{St:{e:**}foo {Em:{e:*}bar{e:*}} baz{e:**}}} `) test("Emphasis and strong emphasis (example 428)", ` {P:{St:{e:**}foo{Em:{e:*}bar{e:*}}baz{e:**}}} `) test("Emphasis and strong emphasis (example 429)", ` {P:{St:{e:**}{Em:{e:*}foo{e:*}} bar{e:**}}} `) test("Emphasis and strong emphasis (example 430)", ` {P:{St:{e:**}foo {Em:{e:*}bar{e:*}}{e:**}}} `) test("Emphasis and strong emphasis (example 431)", ` {P:{St:{e:**}foo {Em:{e:*}bar {St:{e:**}baz{e:**}} bim{e:*}} bop{e:**}}} `) 
test("Emphasis and strong emphasis (example 432)", ` {P:{St:{e:**}foo {Ln:{L:[}{Em:{e:*}bar{e:*}}{L:]}{L:(}{URL:/url}{L:)}}{e:**}}} `) test("Emphasis and strong emphasis (example 433)", ` {P:__ is not an empty emphasis} `) test("Emphasis and strong emphasis (example 434)", ` {P:____ is not an empty strong emphasis} `) test("Emphasis and strong emphasis (example 435)", ` {P:foo ***} `) test("Emphasis and strong emphasis (example 436)", ` {P:foo {Em:{e:*}{Esc:\\*}{e:*}}} `) test("Emphasis and strong emphasis (example 437)", ` {P:foo {Em:{e:*}_{e:*}}} `) test("Emphasis and strong emphasis (example 438)", ` {P:foo *****} `) test("Emphasis and strong emphasis (example 439)", ` {P:foo {St:{e:**}{Esc:\\*}{e:**}}} `) test("Emphasis and strong emphasis (example 440)", ` {P:foo {St:{e:**}_{e:**}}} `) test("Emphasis and strong emphasis (example 441)", ` {P:*{Em:{e:*}foo{e:*}}} `) test("Emphasis and strong emphasis (example 442)", ` {P:{Em:{e:*}foo{e:*}}*} `) test("Emphasis and strong emphasis (example 443)", ` {P:*{St:{e:**}foo{e:**}}} `) test("Emphasis and strong emphasis (example 444)", ` {P:***{Em:{e:*}foo{e:*}}} `) test("Emphasis and strong emphasis (example 445)", ` {P:{St:{e:**}foo{e:**}}*} `) test("Emphasis and strong emphasis (example 446)", ` {P:{Em:{e:*}foo{e:*}}***} `) test("Emphasis and strong emphasis (example 447)", ` {P:foo ___} `) test("Emphasis and strong emphasis (example 448)", ` {P:foo {Em:{e:_}{Esc:\\_}{e:_}}} `) test("Emphasis and strong emphasis (example 449)", ` {P:foo {Em:{e:_}*{e:_}}} `) test("Emphasis and strong emphasis (example 450)", ` {P:foo _____} `) test("Emphasis and strong emphasis (example 451)", ` {P:foo {St:{e:__}{Esc:\\_}{e:__}}} `) test("Emphasis and strong emphasis (example 452)", ` {P:foo {St:{e:__}*{e:__}}} `) test("Emphasis and strong emphasis (example 453)", ` {P:_{Em:{e:_}foo{e:_}}} `) test("Emphasis and strong emphasis (example 454)", ` {P:{Em:{e:_}foo{e:_}}_} `) test("Emphasis and strong emphasis (example 455)", ` 
{P:_{St:{e:__}foo{e:__}}} `) test("Emphasis and strong emphasis (example 456)", ` {P:___{Em:{e:_}foo{e:_}}} `) test("Emphasis and strong emphasis (example 457)", ` {P:{St:{e:__}foo{e:__}}_} `) test("Emphasis and strong emphasis (example 458)", ` {P:{Em:{e:_}foo{e:_}}___} `) test("Emphasis and strong emphasis (example 459)", ` {P:{St:{e:**}foo{e:**}}} `) test("Emphasis and strong emphasis (example 460)", ` {P:{Em:{e:*}{Em:{e:_}foo{e:_}}{e:*}}} `) test("Emphasis and strong emphasis (example 461)", ` {P:{St:{e:__}foo{e:__}}} `) test("Emphasis and strong emphasis (example 462)", ` {P:{Em:{e:_}{Em:{e:*}foo{e:*}}{e:_}}} `) test("Emphasis and strong emphasis (example 463)", ` {P:{St:{e:**}{St:{e:**}foo{e:**}}{e:**}}} `) test("Emphasis and strong emphasis (example 464)", ` {P:{St:{e:__}{St:{e:__}foo{e:__}}{e:__}}} `) test("Emphasis and strong emphasis (example 465)", ` {P:{St:{e:**}{St:{e:**}{St:{e:**}foo{e:**}}{e:**}}{e:**}}} `) test("Emphasis and strong emphasis (example 466)", ` {P:{Em:{e:*}{St:{e:**}foo{e:**}}{e:*}}} `) test("Emphasis and strong emphasis (example 467)", ` {P:{Em:{e:_}{St:{e:__}{St:{e:__}foo{e:__}}{e:__}}{e:_}}} `) test("Emphasis and strong emphasis (example 468)", ` {P:{Em:{e:*}foo _bar{e:*}} baz_} `) test("Emphasis and strong emphasis (example 469)", ` {P:{Em:{e:*}foo {St:{e:__}bar *baz bim{e:__}} bam{e:*}}} `) test("Emphasis and strong emphasis (example 470)", ` {P:**foo {St:{e:**}bar baz{e:**}}} `) test("Emphasis and strong emphasis (example 471)", ` {P:*foo {Em:{e:*}bar baz{e:*}}} `) test("Emphasis and strong emphasis (example 472)", ` {P:*{Ln:{L:[}bar*{L:]}{L:(}{URL:/url}{L:)}}} `) test("Emphasis and strong emphasis (example 473)", ` {P:_foo {Ln:{L:[}bar_{L:]}{L:(}{URL:/url}{L:)}}} `) test("Emphasis and strong emphasis (example 474)", ` {P:*{HT:}} `) test("Emphasis and strong emphasis (example 475)", ` {P:**{HT:}} `) test("Emphasis and strong emphasis (example 476)", ` {P:__{HT:}} `) test("Emphasis and strong emphasis (example 477)", ` 
{P:{Em:{e:*}a {C:{c:\`}*{c:\`}}{e:*}}} `) test("Emphasis and strong emphasis (example 478)", ` {P:{Em:{e:_}a {C:{c:\`}_{c:\`}}{e:_}}} `) test("Emphasis and strong emphasis (example 479)", ` {P:**a{Al:{L:<}{URL:http://foo.bar/?q=**}{L:>}}} `) test("Emphasis and strong emphasis (example 480)", ` {P:__a{Al:{L:<}{URL:http://foo.bar/?q=__}{L:>}}} `) test("Links (example 481)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/uri} {LT:"title"}{L:)}}} `) test("Links (example 482)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 483)", ` {P:{Ln:{L:[}link{L:]}{L:(}{L:)}}} `) test("Links (example 484)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:<>}{L:)}}} `) test("Links (example 485)", ` {P:{Ln:{L:[}link{L:]}}(/my uri)} `) test("Links (example 486)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:}{L:)}}} `) test("Links (example 487)", ` {P:{Ln:{L:[}link{L:]}}(foo bar)} `) // Many of these don't align with the output in the spec because our // implementation doesn't check for an existing link reference when // accepting non-inline links. 
test("Links (example 488)", ` {P:{Ln:{L:[}link{L:]}}({HT:})} `) test("Links (example 489)", ` {P:{Ln:{L:[}a{L:]}{L:(}{URL:}{L:)}}} `) test("Links (example 490)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:}{L:)}}} `) test("Links (example 491)", ` {P:{Ln:{L:[}a{L:]}}( {Ln:{L:[}a{L:]}}({HT:}c)} `) test("Links (example 492)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:\\(foo\\)}{L:)}}} `) test("Links (example 493)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:foo(and(bar))}{L:)}}} `) test("Links (example 494)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:foo\\(and\\(bar\\)}{L:)}}} `) test("Links (example 495)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:}{L:)}}} `) test("Links (example 496)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:foo\\)\\:}{L:)}}} `) test("Links (example 497)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:#fragment}{L:)}}} {P:{Ln:{L:[}link{L:]}{L:(}{URL:http://example.com#fragment}{L:)}}} {P:{Ln:{L:[}link{L:]}{L:(}{URL:http://example.com?foo=3#frag}{L:)}}} `) test("Links (example 498)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:foo\\bar}{L:)}}} `) test("Links (example 499)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:foo%20bä}{L:)}}} `) test("Links (example 500)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:"title"}{L:)}}} `) test("Links (example 501)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/url} {LT:"title"}{L:)}} {Ln:{L:[}link{L:]}{L:(}{URL:/url} {LT:'title'}{L:)}} {Ln:{L:[}link{L:]}{L:(}{URL:/url} {LT:(title)}{L:)}}} `) test("Links (example 502)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/url} {LT:"title \\"""}{L:)}}} `) test("Links (example 503)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/url "title"}{L:)}}} `) test("Links (example 504)", ` {P:{Ln:{L:[}link{L:]}}(/url "title "and" title")} `) test("Links (example 505)", ` {P:{Ln:{L:[}link{L:]}{L:(}{URL:/url} {LT:'title "and" title'}{L:)}}} `) test("Links (example 506)", ` {P:{Ln:{L:[}link{L:]}{L:(} {URL:/uri} {LT:"title"} {L:)}}} `) test("Links (example 507)", ` {P:{Ln:{L:[}link{L:]}} (/uri)} `) test("Links (example 508)", ` {P:[link [foo {Ln:{L:[}bar{L:]}}]](/uri)} `) test("Links (example 509)", ` 
{P:{Ln:{L:[}link{L:]}} bar](/uri)} `) test("Links (example 510)", ` {P:[link {Ln:{L:[}bar{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 511)", ` {P:{Ln:{L:[}link {Esc:\\[}bar{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 512)", ` {P:{Ln:{L:[}link {Em:{e:*}foo {St:{e:**}bar{e:**}} {C:{c:\`}#{c:\`}}{e:*}}{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 513)", ` {P:{Ln:{L:[}{Im:{L:![}moon{L:]}{L:(}{URL:moon.jpg}{L:)}}{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 514)", ` {P:[foo {Ln:{L:[}bar{L:]}{L:(}{URL:/uri}{L:)}}](/uri)} `) test("Links (example 515)", ` {P:[foo {Em:{e:*}[bar {Ln:{L:[}baz{L:]}{L:(}{URL:/uri}{L:)}}](/uri){e:*}}](/uri)} `) test("Links (example 516)", ` {P:{Im:{L:![}[{Ln:{L:[}foo{L:]}{L:(}{URL:uri1}{L:)}}](uri2){L:]}{L:(}{URL:uri3}{L:)}}} `) test("Links (example 517)", ` {P:*{Ln:{L:[}foo*{L:]}{L:(}{URL:/uri}{L:)}}} `) test("Links (example 518)", ` {P:{Ln:{L:[}foo *bar{L:]}{L:(}{URL:baz*}{L:)}}} `) // Not the spirit of the test, because the shortcut link is still // accepted. test("Links (example 519)", ` {P:*foo {Ln:{L:[}bar* baz{L:]}}} `) test("Links (example 520)", ` {P:[foo {HT:}} `) test("Links (example 521)", ` {P:[foo{C:{c:\`}](/uri){c:\`}}} `) test("Links (example 522)", ` {P:[foo{Al:{L:<}{URL:http://example.com/?search=](uri)}{L:>}}} `) test("Links (example 523)", ` {P:{Ln:{L:[}foo{L:]}{LL:[bar]}}} {LR:{LL:[bar]}{L::} {URL:/url} {LT:"title"}} `) // This has a different shape than the original test case, because // we accept the innermost link. 
test("Links (example 524)", ` {P:[link [foo {Ln:{L:[}bar{L:]}}]]{Ln:{L:[}ref{L:]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 525)", ` {P:{Ln:{L:[}link {Esc:\\[}bar{L:]}{LL:[ref]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 526)", ` {P:{Ln:{L:[}link {Em:{e:*}foo {St:{e:**}bar{e:**}} {C:{c:\`}#{c:\`}}{e:*}}{L:]}{LL:[ref]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 527)", ` {P:{Ln:{L:[}{Im:{L:![}moon{L:]}{L:(}{URL:moon.jpg}{L:)}}{L:]}{LL:[ref]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 528)", ` {P:[foo {Ln:{L:[}bar{L:]}{L:(}{URL:/uri}{L:)}}]{Ln:{L:[}ref{L:]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 529)", ` {P:[foo {Em:{e:*}bar {Ln:{L:[}baz{L:]}{LL:[ref]}}{e:*}}]{Ln:{L:[}ref{L:]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 530)", ` {P:*{Ln:{L:[}foo*{L:]}{LL:[ref]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 531)", ` {P:{Ln:{L:[}foo *bar{L:]}{LL:[ref]}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 532)", ` {P:[foo {HT:}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 533)", ` {P:[foo{C:{c:\`}][ref]{c:\`}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 534)", ` {P:[foo{Al:{L:<}{URL:http://example.com/?search=][ref]}{L:>}}} {LR:{LL:[ref]}{L::} {URL:/uri}} `) test("Links (example 535)", ` {P:{Ln:{L:[}foo{L:]}{LL:[BaR]}}} {LR:{LL:[bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 536)", ` {P:{Ln:{L:[}Толпой{L:]}{LL:[Толпой]}} is a Russian word.} {LR:{LL:[ТОЛПОЙ]}{L::} {URL:/url}} `) test("Links (example 537)", ` {LR:{LL:[Foo bar]}{L::} {URL:/url}} {P:{Ln:{L:[}Baz{L:]}{LL:[Foo bar]}}} `) test("Links (example 538)", ` {P:{Ln:{L:[}foo{L:]}} {Ln:{L:[}bar{L:]}}} {LR:{LL:[bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 539)", ` {P:{Ln:{L:[}foo{L:]}} {Ln:{L:[}bar{L:]}}} {LR:{LL:[bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 540)", ` {LR:{LL:[foo]}{L::} {URL:/url1}} {LR:{LL:[foo]}{L::} {URL:/url2}} 
{P:{Ln:{L:[}bar{L:]}{LL:[foo]}}} `) test("Links (example 541)", ` {P:{Ln:{L:[}bar{L:]}{LL:[foo\\!]}}} {LR:{LL:[foo!]}{L::} {URL:/url}} `) test("Links (example 542)", ` {P:{Ln:{L:[}foo{L:]}}[ref[]} {P:[ref[]: /uri} `) test("Links (example 543)", ` {P:{Ln:{L:[}foo{L:]}}[ref{Ln:{L:[}bar{L:]}}]} {P:[ref{Ln:{L:[}bar{L:]}}]: /uri} `) test("Links (example 544)", ` {P:[[{Ln:{L:[}foo{L:]}}]]} {P:[[{Ln:{L:[}foo{L:]}}]]: /url} `) test("Links (example 545)", ` {P:{Ln:{L:[}foo{L:]}{LL:[ref\\[]}}} {LR:{LL:[ref\\[]}{L::} {URL:/uri}} `) test("Links (example 546)", ` {LR:{LL:[bar\\\\]}{L::} {URL:/uri}} {P:{Ln:{L:[}bar{Esc:\\\\}{L:]}}} `) test("Links (example 547)", ` {P:[]} {P:[]: /uri} `) test("Links (example 548)", ` {P:[ ]} {P:[ ]: /uri} `) test("Links (example 549)", ` {P:{Ln:{L:[}foo{L:]}{LL:[]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 550)", ` {P:{Ln:{L:[}{Em:{e:*}foo{e:*}} bar{L:]}{LL:[]}}} {LR:{LL:[*foo* bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 551)", ` {P:{Ln:{L:[}Foo{L:]}{LL:[]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 552)", ` {P:{Ln:{L:[}foo{L:]}} []} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 553)", ` {P:{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 554)", ` {P:{Ln:{L:[}{Em:{e:*}foo{e:*}} bar{L:]}}} {LR:{LL:[*foo* bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 555)", ` {P:[{Ln:{L:[}{Em:{e:*}foo{e:*}} bar{L:]}}]} {LR:{LL:[*foo* bar]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 556)", ` {P:[[bar {Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url}} `) test("Links (example 557)", ` {P:{Ln:{L:[}Foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 558)", ` {P:{Ln:{L:[}foo{L:]}} bar} {LR:{LL:[foo]}{L::} {URL:/url}} `) test("Links (example 559)", ` {P:{Esc:\\[}foo]} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Links (example 560)", ` {LR:{LL:[foo*]}{L::} {URL:/url}} 
{P:*{Ln:{L:[}foo*{L:]}}} `) test("Links (example 561)", ` {P:{Ln:{L:[}foo{L:]}{LL:[bar]}}} {LR:{LL:[foo]}{L::} {URL:/url1}} {LR:{LL:[bar]}{L::} {URL:/url2}} `) test("Links (example 562)", ` {P:{Ln:{L:[}foo{L:]}{LL:[]}}} {LR:{LL:[foo]}{L::} {URL:/url1}} `) test("Links (example 563)", ` {P:{Ln:{L:[}foo{L:]}{L:(}{L:)}}} {LR:{LL:[foo]}{L::} {URL:/url1}} `) test("Links (example 564)", ` {P:{Ln:{L:[}foo{L:]}}(not a link)} {LR:{LL:[foo]}{L::} {URL:/url1}} `) // Not really testing what it is supposed to, because the first two // bracket pairs are blindly accepted as link. test("Links (example 565)", ` {P:{Ln:{L:[}foo{L:]}{LL:[bar]}}{Ln:{L:[}baz{L:]}}} {LR:{LL:[baz]}{L::} {URL:/url}} `) test("Links (example 566)", ` {P:{Ln:{L:[}foo{L:]}{LL:[bar]}}{Ln:{L:[}baz{L:]}}} {LR:{LL:[baz]}{L::} {URL:/url1}} {LR:{LL:[bar]}{L::} {URL:/url2}} `) test("Links (example 567)", ` {P:{Ln:{L:[}foo{L:]}{LL:[bar]}}{Ln:{L:[}baz{L:]}}} {LR:{LL:[baz]}{L::} {URL:/url1}} {LR:{LL:[foo]}{L::} {URL:/url2}} `) test("Images (example 568)", ` {P:{Im:{L:![}foo{L:]}{L:(}{URL:/url} {LT:"title"}{L:)}}} `) test("Images (example 569)", ` {P:{Im:{L:![}foo {Em:{e:*}bar{e:*}}{L:]}}} {LR:{LL:[foo *bar*]}{L::} {URL:train.jpg} {LT:"train & tracks"}} `) test("Images (example 570)", ` {P:{Im:{L:![}foo {Im:{L:![}bar{L:]}{L:(}{URL:/url}{L:)}}{L:]}{L:(}{URL:/url2}{L:)}}} `) test("Images (example 571)", ` {P:{Im:{L:![}foo {Ln:{L:[}bar{L:]}{L:(}{URL:/url}{L:)}}{L:]}{L:(}{URL:/url2}{L:)}}} `) test("Images (example 572)", ` {P:{Im:{L:![}foo {Em:{e:*}bar{e:*}}{L:]}{LL:[]}}} {LR:{LL:[foo *bar*]}{L::} {URL:train.jpg} {LT:"train & tracks"}} `) test("Images (example 573)", ` {P:{Im:{L:![}foo {Em:{e:*}bar{e:*}}{L:]}{LL:[foobar]}}} {LR:{LL:[FOOBAR]}{L::} {URL:train.jpg} {LT:"train & tracks"}} `) test("Images (example 574)", ` {P:{Im:{L:![}foo{L:]}{L:(}{URL:train.jpg}{L:)}}} `) test("Images (example 575)", ` {P:My {Im:{L:![}foo bar{L:]}{L:(}{URL:/path/to/train.jpg} {LT:"title"} {L:)}}} `) test("Images (example 576)", ` 
{P:{Im:{L:![}foo{L:]}{L:(}{URL:}{L:)}}} `) test("Images (example 577)", ` {P:{Im:{L:![}{L:]}{L:(}{URL:/url}{L:)}}} `) test("Images (example 578)", ` {P:{Im:{L:![}foo{L:]}{LL:[bar]}}} {LR:{LL:[bar]}{L::} {URL:/url}} `) test("Images (example 579)", ` {P:{Im:{L:![}foo{L:]}{LL:[bar]}}} {LR:{LL:[BAR]}{L::} {URL:/url}} `) test("Images (example 580)", ` {P:{Im:{L:![}foo{L:]}{LL:[]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 581)", ` {P:{Im:{L:![}{Em:{e:*}foo{e:*}} bar{L:]}{LL:[]}}} {LR:{LL:[*foo* bar]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 582)", ` {P:{Im:{L:![}Foo{L:]}{LL:[]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 583)", ` {P:{Im:{L:![}foo{L:]}} []} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 584)", ` {P:{Im:{L:![}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 585)", ` {P:{Im:{L:![}{Em:{e:*}foo{e:*}} bar{L:]}}} {LR:{LL:[*foo* bar]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 586)", ` {P:{Im:{L:![}{Ln:{L:[}foo{L:]}}{L:]}}} {P:[{Ln:{L:[}foo{L:]}}]: /url "title"} `) test("Images (example 587)", ` {P:{Im:{L:![}Foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 588)", ` {P:!{Esc:\\[}foo]} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Images (example 589)", ` {P:{Esc:\\!}{Ln:{L:[}foo{L:]}}} {LR:{LL:[foo]}{L::} {URL:/url} {LT:"title"}} `) test("Autolinks (example 590)", ` {P:{Al:{L:<}{URL:http://foo.bar.baz}{L:>}}} `) test("Autolinks (example 591)", ` {P:{Al:{L:<}{URL:http://foo.bar.baz/test?q=hello&id=22&boolean}{L:>}}} `) test("Autolinks (example 592)", ` {P:{Al:{L:<}{URL:irc://foo.bar:2233/baz}{L:>}}} `) test("Autolinks (example 593)", ` {P:{Al:{L:<}{URL:MAILTO:FOO@BAR.BAZ}{L:>}}} `) test("Autolinks (example 594)", ` {P:{Al:{L:<}{URL:a+b+c:d}{L:>}}} `) test("Autolinks (example 595)", ` {P:{Al:{L:<}{URL:made-up-scheme://foo,bar}{L:>}}} `) test("Autolinks (example 596)", ` 
{P:{Al:{L:<}{URL:http://../}{L:>}}} `) test("Autolinks (example 597)", ` {P:{Al:{L:<}{URL:localhost:5001/foo}{L:>}}} `) test("Autolinks (example 598)", ` {P:} `) test("Autolinks (example 599)", ` {P:{Al:{L:<}{URL:http://example.com/\\[\\}{L:>}}} `) test("Autolinks (example 600)", ` {P:{Al:{L:<}{URL:foo@bar.example.com}{L:>}}} `) test("Autolinks (example 601)", ` {P:{Al:{L:<}{URL:foo+special@Bar.baz-bar0.com}{L:>}}} `) test("Autolinks (example 602)", ` {P:} `) test("Autolinks (example 603)", ` {P:<>} `) test("Autolinks (example 604)", ` {P:< http://foo.bar >} `) test("Autolinks (example 605)", ` {P:} `) test("Autolinks (example 606)", ` {P:} `) test("Autolinks (example 607)", ` {P:http://example.com} `) test("Autolinks (example 608)", ` {P:foo@bar.example.com} `) test("Raw HTML (example 609)", ` {P:{HT:}{HT:}{HT:}} `) test("Raw HTML (example 610)", ` {P:{HT:}{HT:}} `) test("Raw HTML (example 611)", ` {P:{HT:}{HT:}} `) test("Raw HTML (example 612)", ` {P:{HT:}} `) test("Raw HTML (example 613)", ` {P:Foo {HT:}} `) test("Raw HTML (example 614)", ` {P:<33> <__>} `) test("Raw HTML (example 615)", ` {P:} `) test("Raw HTML (example 616)", ` {P:}{HT:< foo>}{HT:} } `) test("Raw HTML (example 618)", ` {P:} `) test("Raw HTML (example 619)", ` {P:{HT:}{HT:}} `) test("Raw HTML (example 620)", ` {P:} `) test("Raw HTML (example 621)", ` {P:foo {CM:}} `) test("Raw HTML (example 622)", ` {P:foo } `) test("Raw HTML (example 623)", ` {P:foo foo -->} {P:foo } `) test("Raw HTML (example 624)", ` {P:foo {Pi:}} `) test("Raw HTML (example 625)", ` {P:foo {HT:}} `) test("Raw HTML (example 626)", ` {P:foo {HT:&<]]>}} `) test("Raw HTML (example 627)", ` {P:foo {HT:}} `) test("Raw HTML (example 628)", ` {P:foo {HT:}} `) test("Raw HTML (example 629)", ` {P:} `) test("Hard line breaks (example 630)", ` {P:foo{BR: }baz} `) test("Hard line breaks (example 631)", ` {P:foo{BR:\\ }baz} `) test("Hard line breaks (example 632)", ` {P:foo{BR: }baz} `) test("Hard line breaks (example 633)", ` {P:foo{BR: 
} bar} `) test("Hard line breaks (example 634)", ` {P:foo{BR:\\ } bar} `) test("Hard line breaks (example 635)", ` {P:{Em:{e:*}foo{BR: }bar{e:*}}} `) test("Hard line breaks (example 636)", ` {P:{Em:{e:*}foo{BR:\\ }bar{e:*}}} `) test("Hard line breaks (example 637)", ` {P:{C:{c:\`}code span{c:\`}}} `) test("Hard line breaks (example 638)", ` {P:{C:{c:\`}code\\ span{c:\`}}} `) test("Hard line breaks (example 639)", ` {P:{HT:}} `) test("Hard line breaks (example 640)", ` {P:{HT:}} `) test("Hard line breaks (example 641)", ` {P:foo\\} `) test("Hard line breaks (example 642)", ` {P:foo } `) test("Hard line breaks (example 643)", ` {H3:{h:###} foo\\} `) test("Hard line breaks (example 644)", ` {H3:{h:###} foo } `) test("Soft line breaks (example 645)", ` {P:foo baz} `) test("Soft line breaks (example 646)", ` {P:foo baz} `) test("Textual content (example 647)", ` {P:hello $.;'there} `) test("Textual content (example 648)", ` {P:Foo χρῆν} `) test("Textual content (example 649)", ` {P:Multiple spaces} `) }) describe("Custom Markdown tests", () => { // (Not ideal that the 3rd mark is inside the previous item, but // this'd require quite a big overhaul to fix.) 
test("Quote markers don't end up inside inner list items", ` {Q:{q:>} {BL:{LI:{l:-} {P:Hello}} {q:>} {LI:{l:-} {P:Two} {q:>}} {q:>} {LI:{l:-} {P:Three}}}} `) test("Nested bullet lists don't break ordered list parsing", ` {OL:{LI:{l:1.} {P:A} {BL:{LI:{l:*} {P:A1}} {LI:{l:*} {P:A2}}}} {LI:{l:2.} {P:B}}} `) it("Doesn't get confused by tabs indenting a list item", () => { let doc = ` - a\n\t\tb` if (parser.parse(doc).length > doc.length) throw new RangeError("Wrong tree length") }) }) markdown-1.2.0/test/test-nesting.ts000066400000000000000000000046141454233753200173240ustar00rootroot00000000000000import {Tree, NodeProp} from "@lezer/common" import {parser as html} from "@lezer/html" import ist from "ist" import {parser, parseCode} from "../dist/index.js" let full = parser.configure(parseCode({ codeParser(str) { return !str || str == "html" ? html : null }, htmlParser: html.configure({dialect: "noMatch"}) })) function findMounts(tree: Tree) { let result = [] for (let cur = tree.cursor(); cur.next();) { let mount = cur.tree?.prop(NodeProp.mounted) if (mount) result.push({at: cur.from, mount}) } return result } function test(doc: string, ...nest: [string, ...number[]][]) { return () => { let tree = full.parse(doc), mounts = findMounts(tree) ist(mounts.length, nest.length) nest.forEach(([repr, ...ranges], i) => { let {mount, at} = mounts[i] ist(mount.tree.toString(), "Document(" + repr + ")") ist(mount.overlay!.map(r => (r.from + at) + "," + (r.to + at)).join(), ranges.join()) }) } } describe("Code parsing", () => { it("parses HTML blocks", test(` Paragraph
Hello & goodbye
`, ["Element(OpenTag(StartTag,TagName,Attribute(AttributeName,Is,UnquotedAttributeValue),EndTag),Text,EntityReference,Text,CloseTag(StartCloseTag,TagName,EndTag))", 12, 51])) it("parses inline HTML", test( `Paragraph with inline tags in it.`, ["Element(OpenTag(StartTag,TagName,EndTag))", 15, 19], ["CloseTag(StartCloseTag,TagName,EndTag)", 30, 35])) it("parses indented code", test(` Paragraph. Hi `, ["DoctypeDecl,Text", 17, 33, 37, 39])) it("parses fenced code", test(` Okay ~~~

Hey

~~~`, ["Element(OpenTag(StartTag,TagName,EndTag),Text,CloseTag(StartCloseTag,TagName,EndTag))", 11, 25])) it("allows gaps in fenced code", test(` - >~~~ > >yay > ~~~`, ["DoctypeDecl,Text", 11, 27, 30, 33])) it("passes fenced code info", test(` ~~~html » ~~~ ~~~python False ~~~`, ["EntityReference", 9, 16])) it("can parse disjoint ranges", () => { let tree = parser.parse(`==foo\n==\n==ba==r\n==`, undefined, [{from: 2, to: 6}, {from: 8, to: 9}, {from: 11, to: 13}, {from: 15, to: 17}]) ist(tree.toString(), "Document(Paragraph,Paragraph)") ist(tree.length, 15) ist(tree.topNode.firstChild!.from, 0) ist(tree.topNode.firstChild!.to, 3) ist(tree.topNode.lastChild!.from, 9) ist(tree.topNode.lastChild!.to, 14) }) }) markdown-1.2.0/test/tsconfig.json000066400000000000000000000003451454233753200170340ustar00rootroot00000000000000{ "compilerOptions": { "lib": ["es2017"], "noImplicitReturns": true, "noUnusedLocals": true, "strict": true, "target": "es6", "newLine": "lf", "moduleResolution": "node" }, "include": ["*.ts"] } markdown-1.2.0/tsconfig.json000066400000000000000000000004341454233753200160540ustar00rootroot00000000000000{ "compilerOptions": { "lib": ["es2017"], "noImplicitReturns": true, "noUnusedLocals": true, "strict": true, "target": "es6", "module": "es2015", "newLine": "lf", "stripInternal": true, "moduleResolution": "node" }, "include": ["src/*.ts"] }