prosemirror-markdown-1.8.0/.gitignore

/node_modules
.tern-port
/dist

prosemirror-markdown-1.8.0/.npmignore

/node_modules
.tern-port
/test

prosemirror-markdown-1.8.0/.npmrc

package-lock=false

prosemirror-markdown-1.8.0/.tern-project

{
  "libs": ["browser"],
  "plugins": {
    "node": {},
    "complete_strings": {},
    "es_modules": {}
  }
}

prosemirror-markdown-1.8.0/CHANGELOG.md

## 1.8.0 (2022-03-14)

### New features

`MarkdownSerializer` now takes an `escapeExtraCharacters` option that can be used to control backslash-escaping behavior.

Fix types for new option

## 1.7.1 (2022-02-16)

### Bug fixes

Avoid escaping underscores surrounded by word characters.

## 1.7.0 (2022-01-06)

### New features

Upgrade markdown-it to version 12.

## 1.6.2 (2022-01-04)

### Bug fixes

Fix a bug where URL text in links and images was overzealously escaped.

## 1.6.1 (2021-12-16)

### Bug fixes

Fix a bug where `MarkdownParser.parse` could return null when the parsed content doesn't fit the schema.

Make sure underscores are escaped when serializing to Markdown.

## 1.6.0 (2021-09-21)

### New features

`MarkdownParser.tokenizer` is now public, for easier creation of parsers that base on other parsers.

## 1.5.2 (2021-09-03)

### Bug fixes

Serializing to Markdown now properly escapes '>' characters at the start of the line.

## 1.5.1 (2021-01-06)

### Bug fixes

The Markdown parser will now correctly set the `tight` attribute on list nodes.

## 1.5.0 (2020-07-17)

### New features

Markdown parse specs can now be specified as `noCloseToken`, which will cause the parser to treat them as a single token, rather than a pair of `_open`/`_close` tokens.

## 1.4.5 (2020-05-14)

### Bug fixes

Don't allow hard_break nodes in headings.

## 1.4.4 (2019-12-19)

### Bug fixes

Fix issue that broke parsing ordered lists with a starting number other than 1.

## 1.4.3 (2019-12-17)

### Bug fixes

Don't use short-hand angle bracket syntax when outputting self-linking URLs that are relative.

## 1.4.2 (2019-11-20)

### Bug fixes

Rename ES module files to use a .js extension, since Webpack gets confused by .mjs

## 1.4.1 (2019-11-19)

### Bug fixes

The file referred to in the package's `module` field now is compiled down to ES5.

## 1.4.0 (2019-11-08)

### New features

Add a `module` field to package json file.

## 1.3.2 (2019-10-30)

### Bug fixes

Code blocks in the schema no longer allow marks inside them.

Code blocks are now parsed with `preserveWhiteSpace: full`, preventing removal of newline characters.

## 1.3.1 (2019-06-08)

### Bug fixes

Fix a bug that could occur when parsing multiple adjacent pieces of text with the same style.

## 1.3.0 (2019-01-22)

### Bug fixes

Inline code containing backticks is now serialized wrapped in the appropriate amount of backticks.

### New features

The serializer now serializes links whose target is the same as their text content using \< \> syntax.

Mark opening and close string callbacks now get passed the mark's context (parent fragment and index). ## 1.2.2 (2018-11-22) ### Bug fixes Hard breaks at the end of an emphasized or strong mark are no longer serialized to invalid Markdown text. ## 1.2.1 (2018-10-19) ### Bug fixes Fixes a bug where inline mark delimiters were serialized incorrectly (the closing and opening marks were swapped, which was only noticeable when they are different). ## 1.2.0 (2018-10-08) ### Bug fixes Fixes an issue where the Markdown serializer would escape special characters in inline code. ### New features Upgrade the markdown-it dependency to version 8. ## 1.1.1 (2018-07-08) ### Bug fixes Fix bug that caused superfluous backslashes to be inserted at the start of some lines when serializing to Markdown. ## 1.1.0 (2018-06-20) ### New features You can now override the handling of softbreak tokens in a custom handler. ## 1.0.4 (2018-04-17) ### Bug fixes Fix crash when serializing marks with line breaks inside of them. ## 1.0.3 (2018-01-10) ### Bug fixes Fix dependency version range for prosemirror-model. ## 1.0.2 (2017-12-07) ### Bug fixes Code blocks are always wrapped in triple backticks when serializing, to avoid parsing corner cases around indented code blocks. ## 1.0.1 (2017-11-05) ### Bug fixes Link marks are now non-inclusive (typing after them produces non-linked text). ## 1.0.0 (2017-10-13) First stable release. prosemirror-markdown-1.8.0/CONTRIBUTING.md000066400000000000000000000072051421365711500201650ustar00rootroot00000000000000# How to contribute - [Getting help](#getting-help) - [Submitting bug reports](#submitting-bug-reports) - [Contributing code](#contributing-code) ## Getting help Community discussion, questions, and informal bug reporting is done on the [discuss.ProseMirror forum](http://discuss.prosemirror.net). ## Submitting bug reports Report bugs on the [GitHub issue tracker](http://github.com/prosemirror/prosemirror/issues). Before reporting a bug, please read these pointers. - The issue tracker is for *bugs*, not requests for help. Questions should be asked on the [forum](http://discuss.prosemirror.net). - Include information about the version of the code that exhibits the problem. For browser-related issues, include the browser and browser version on which the problem occurred. - Mention very precisely what went wrong. "X is broken" is not a good bug report. What did you expect to happen? What happened instead? Describe the exact steps a maintainer has to take to make the problem occur. A screencast can be useful, but is no substitute for a textual description. - A great way to make it easy to reproduce your problem, if it can not be trivially reproduced on the website demos, is to submit a script that triggers the issue. ## Contributing code - Make sure you have a [GitHub Account](https://github.com/signup/free) - Fork the relevant repository ([how to fork a repo](https://help.github.com/articles/fork-a-repo)) - Create a local checkout of the code. You can use the [main repository](https://github.com/prosemirror/prosemirror) to easily check out all core modules. - Make your changes, and commit them - Follow the code style of the rest of the project (see below). Run `npm run lint` (in the main repository checkout) to make sure that the linter is happy. - If your changes are easy to test or likely to regress, add tests in the relevant `test/` directory. Either put them in an existing `test-*.js` file, if they fit there, or add a new file. - Make sure all tests pass. 
Run `npm run test` to verify tests pass (you will need Node.js v6+). - Submit a pull request ([how to create a pull request](https://help.github.com/articles/fork-a-repo)). Don't put more than one feature/fix in a single pull request. By contributing code to ProseMirror you - Agree to license the contributed code under the project's [MIT license](https://github.com/ProseMirror/prosemirror/blob/master/LICENSE). - Confirm that you have the right to contribute and license the code in question. (Either you hold all rights on the code, or the rights holder has explicitly granted the right to use it like this, through a compatible open source license or through a direct agreement with you.) ### Coding standards - ES6 syntax, targeting an ES5 runtime (i.e. don't use library elements added by ES6, don't use ES7/ES.next syntax). - 2 spaces per indentation level, no tabs. - No semicolons except when necessary. - Follow the surrounding code when it comes to spacing, brace placement, etc. - Brace-less single-statement bodies are encouraged (whenever they don't impact readability). - [getdocs](https://github.com/marijnh/getdocs)-style doc comments above items that are part of the public API. - When documenting non-public items, you can put the type after a single colon, so that getdocs doesn't pick it up and add it to the API reference. - The linter (`npm run lint`) complains about unused variables and functions. Prefix their names with an underscore to muffle it. - ProseMirror does *not* follow JSHint or JSLint prescribed style. Patches that try to 'fix' code to pass one of these linters will not be accepted. prosemirror-markdown-1.8.0/LICENSE000066400000000000000000000021131421365711500167320ustar00rootroot00000000000000Copyright (C) 2015-2017 by Marijn Haverbeke and others Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. prosemirror-markdown-1.8.0/README.md000066400000000000000000000222151421365711500172110ustar00rootroot00000000000000# prosemirror-markdown [ [**WEBSITE**](http://prosemirror.net) | [**ISSUES**](https://github.com/prosemirror/prosemirror-markdown/issues) | [**FORUM**](https://discuss.prosemirror.net) | [**GITTER**](https://gitter.im/ProseMirror/prosemirror) ] This is a (non-core) module for [ProseMirror](http://prosemirror.net). ProseMirror is a well-behaved rich semantic content editor based on contentEditable, with support for collaborative editing and custom document schemas. 
This module implements a ProseMirror [schema](https://prosemirror.net/docs/guide/#schema) that corresponds to the document schema used by [CommonMark](http://commonmark.org/), and a parser and serializer to convert between ProseMirror documents in that schema and CommonMark/Markdown text. This code is released under an [MIT license](https://github.com/prosemirror/prosemirror/tree/master/LICENSE). There's a [forum](http://discuss.prosemirror.net) for general discussion and support requests, and the [Github bug tracker](https://github.com/prosemirror/prosemirror/issues) is the place to report issues. We aim to be an inclusive, welcoming community. To make that explicit, we have a [code of conduct](http://contributor-covenant.org/version/1/1/0/) that applies to communication around the project. ## Documentation * **`schema`**`: Schema`\ Document schema for the data model used by CommonMark. ### class MarkdownParser A configuration of a Markdown parser. Such a parser uses [markdown-it](https://github.com/markdown-it/markdown-it) to tokenize a file, and then runs the custom rules it is given over the tokens to create a ProseMirror document tree. * `new `**`MarkdownParser`**`(schema: Schema, tokenizer: MarkdownIt, tokens: Object)`\ Create a parser with the given configuration. You can configure the markdown-it parser to parse the dialect you want, and provide a description of the ProseMirror entities those tokens map to in the `tokens` object, which maps token names to descriptions of what to do with them. Such a description is an object, and may have the following properties: **`node`**`: ?string` : This token maps to a single node, whose type can be looked up in the schema under the given name. Exactly one of `node`, `block`, or `mark` must be set. **`block`**`: ?string` : This token (unless `noCloseToken` is true) comes in `_open` and `_close` variants (which are appended to the base token name provides a the object property), and wraps a block of content. The block should be wrapped in a node of the type named to by the property's value. If the token does not have `_open` or `_close`, use the `noCloseToken` option. **`mark`**`: ?string` : This token (again, unless `noCloseToken` is true) also comes in `_open` and `_close` variants, but should add a mark (named by the value) to its content, rather than wrapping it in a node. **`attrs`**`: ?Object` : Attributes for the node or mark. When `getAttrs` is provided, it takes precedence. **`getAttrs`**`: ?(MarkdownToken) → Object` : A function used to compute the attributes for the node or mark that takes a [markdown-it token](https://markdown-it.github.io/markdown-it/#Token) and returns an attribute object. **`noCloseToken`**`: ?boolean` : Indicates that the [markdown-it token](https://markdown-it.github.io/markdown-it/#Token) has no `_open` or `_close` for the nodes. This defaults to `true` for `code_inline`, `code_block` and `fence`. **`ignore`**`: ?bool` : When true, ignore content for the matched token. * **`tokens`**`: Object`\ The value of the `tokens` object used to construct this parser. Can be useful to copy and modify to base other parsers on. * **`tokenizer`**`: This`\ parser's markdown-it tokenizer. * **`parse`**`(text: string) → Node`\ Parse a string as [CommonMark](http://commonmark.org/) markup, and create a ProseMirror document as prescribed by this parser's rules. 
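For example, a parser for a small Markdown subset could be set up like this (a minimal sketch, assuming an ES-module setup where this package is importable as `prosemirror-markdown`; see `defaultMarkdownParser` below for the full CommonMark token spec):

```javascript
import markdownit from "markdown-it"
import {schema, MarkdownParser} from "prosemirror-markdown"

// A parser that only understands paragraphs, headings, and emphasis.
const myParser = new MarkdownParser(schema, markdownit("commonmark", {html: false}), {
  paragraph: {block: "paragraph"},
  heading: {block: "heading", getAttrs: tok => ({level: +tok.tag.slice(1)})},
  em: {mark: "em"}
})

// Returns a ProseMirror Node conforming to the CommonMark schema.
let doc = myParser.parse("# Hello\n\nSome *emphasized* text.")
```
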
* **`defaultMarkdownParser`**`: MarkdownParser`\ A parser parsing unextended [CommonMark](http://commonmark.org/), without inline HTML, and producing a document in the basic schema. ### class MarkdownSerializer A specification for serializing a ProseMirror document as Markdown/CommonMark text. * `new `**`MarkdownSerializer`**`(nodes: Object< fn(state: MarkdownSerializerState, node: Node, parent: Node, index: number) >, marks: Object, options: ?Object)`\ Construct a serializer with the given configuration. The `nodes` object should map node names in a given schema to function that take a serializer state and such a node, and serialize the node. The `marks` object should hold objects with `open` and `close` properties, which hold the strings that should appear before and after a piece of text marked that way, either directly or as a function that takes a serializer state and a mark, and returns a string. `open` and `close` can also be functions, which will be called as (state: MarkdownSerializerState, mark: Mark, parent: Fragment, index: number) → string Where `parent` and `index` allow you to inspect the mark's context to see which nodes it applies to. Mark information objects can also have a `mixable` property which, when `true`, indicates that the order in which the mark's opening and closing syntax appears relative to other mixable marks can be varied. (For example, you can say `**a *b***` and `*a **b***`, but not `` `a *b*` ``.) To disable character escaping in a mark, you can give it an `escape` property of `false`. Such a mark has to have the highest precedence (must always be the innermost mark). The `expelEnclosingWhitespace` mark property causes the serializer to move enclosing whitespace from inside the marks to outside the marks. This is necessary for emphasis marks as CommonMark does not permit enclosing whitespace inside emphasis marks, see: http://spec.commonmark.org/0.26/#example-330 * **`options`**`: ?Object`\ Optional additional options. * **`escapeExtraCharacters`**`: ?RegExp`\ Extra characters can be added for escaping. This is passed directly to String.replace(), and the matching characters are preceded by a backslash. * **`nodes`**`: Object< fn(MarkdownSerializerState, Node) >`\ The node serializer functions for this serializer. * **`marks`**`: Object`\ The mark serializer info. * **`serialize`**`(content: Node, options: ?Object) → string`\ Serialize the content of the given node to [CommonMark](http://commonmark.org/). ### class MarkdownSerializerState This is an object used to track state and expose methods related to markdown serialization. Instances are passed to node and mark serialization methods (see `toMarkdown`). * **`options`**`: Object`\ The options passed to the serializer. * **`tightLists`**`: ?bool`\ Whether to render lists in a tight style. This can be overridden on a node level by specifying a tight attribute on the node. Defaults to false. * **`wrapBlock`**`(delim: string, firstDelim: ?string, node: Node, f: fn())`\ Render a block, prefixing each line with `delim`, and the first line in `firstDelim`. `node` should be the node that is closed at the end of the block, and `f` is a function that renders the content of the block. * **`ensureNewLine`**`()`\ Ensure the current content ends with a newline. * **`write`**`(content: ?string)`\ Prepare the state for writing output (closing closed paragraphs, adding delimiters, and so on), and then optionally add content (unescaped) to the output. * **`closeBlock`**`(node: Node)`\ Close the block for the given node. 
* **`text`**`(text: string, escape: ?bool)`\ Add the given text to the document. When escape is not `false`, it will be escaped. * **`render`**`(node: Node)`\ Render the given node as a block. * **`renderContent`**`(parent: Node)`\ Render the contents of `parent` as block nodes. * **`renderInline`**`(parent: Node)`\ Render the contents of `parent` as inline content. * **`renderList`**`(node: Node, delim: string, firstDelim: fn(number) → string)`\ Render a node's content as a list. `delim` should be the extra indentation added to all lines except the first in an item, `firstDelim` is a function going from an item index to a delimiter for the first line of the item. * **`esc`**`(str: string, startOfLine: ?bool) → string`\ Escape the given string so that it can safely appear in Markdown content. If `startOfLine` is true, also escape characters that have special meaning only at the start of the line. * **`repeat`**`(str: string, n: number) → string`\ Repeat the given string `n` times. * **`getEnclosingWhitespace`**`(text: string) → {leading: ?string, trailing: ?string}`\ Get leading and trailing whitespace from a string. Values of leading or trailing property of the return object will be undefined if there is no match. * **`defaultMarkdownSerializer`**`: MarkdownSerializer`\ A serializer for the [basic schema](#schema). prosemirror-markdown-1.8.0/package.json000066400000000000000000000017701421365711500202230ustar00rootroot00000000000000{ "name": "prosemirror-markdown", "version": "1.8.0", "description": "ProseMirror Markdown integration", "main": "dist/index.js", "module": "dist/index.es.js", "license": "MIT", "maintainers": [ { "name": "Marijn Haverbeke", "email": "marijnh@gmail.com", "web": "http://marijnhaverbeke.nl" } ], "repository": { "type": "git", "url": "git://github.com/prosemirror/prosemirror-markdown.git" }, "dependencies": { "markdown-it": "^12.0.0", "prosemirror-model": "^1.0.0" }, "devDependencies": { "ist": "1.0.0", "mocha": "^9.1.2", "prosemirror-test-builder": "^1.0.0", "punycode": "^1.4.0", "rollup": "^2.26.3", "@rollup/plugin-buble": "^0.21.3", "builddocs": "^0.3.0" }, "scripts": { "test": "mocha test/test-*.js", "build": "rollup -c", "watch": "rollup -c -w", "prepare": "npm run build", "build_readme": "builddocs --name markdown --format markdown --main src/README.md src/*.js > README.md" } } prosemirror-markdown-1.8.0/rollup.config.js000066400000000000000000000005121421365711500210450ustar00rootroot00000000000000module.exports = { input: './src/index.js', output: [{ file: 'dist/index.js', format: 'cjs', sourcemap: true }, { file: 'dist/index.es.js', format: 'es', sourcemap: true }], plugins: [require('@rollup/plugin-buble')()], external(id) { return id[0] != "." && !require("path").isAbsolute(id) } } prosemirror-markdown-1.8.0/src/000077500000000000000000000000001421365711500165175ustar00rootroot00000000000000prosemirror-markdown-1.8.0/src/README.md000066400000000000000000000026141421365711500200010ustar00rootroot00000000000000# prosemirror-markdown [ [**WEBSITE**](http://prosemirror.net) | [**ISSUES**](https://github.com/prosemirror/prosemirror-markdown/issues) | [**FORUM**](https://discuss.prosemirror.net) | [**GITTER**](https://gitter.im/ProseMirror/prosemirror) ] This is a (non-core) module for [ProseMirror](http://prosemirror.net). ProseMirror is a well-behaved rich semantic content editor based on contentEditable, with support for collaborative editing and custom document schemas. 
This module implements a ProseMirror [schema](https://prosemirror.net/docs/guide/#schema) that corresponds to the document schema used by [CommonMark](http://commonmark.org/), and a parser and serializer to convert between ProseMirror documents in that schema and CommonMark/Markdown text. This code is released under an [MIT license](https://github.com/prosemirror/prosemirror/tree/master/LICENSE). There's a [forum](http://discuss.prosemirror.net) for general discussion and support requests, and the [Github bug tracker](https://github.com/prosemirror/prosemirror/issues) is the place to report issues. We aim to be an inclusive, welcoming community. To make that explicit, we have a [code of conduct](http://contributor-covenant.org/version/1/1/0/) that applies to communication around the project. ## Documentation @schema @MarkdownParser @defaultMarkdownParser @MarkdownSerializer @MarkdownSerializerState @defaultMarkdownSerializer prosemirror-markdown-1.8.0/src/from_markdown.js000066400000000000000000000225351421365711500217310ustar00rootroot00000000000000import markdownit from "markdown-it" import {schema} from "./schema" import {Mark} from "prosemirror-model" function maybeMerge(a, b) { if (a.isText && b.isText && Mark.sameSet(a.marks, b.marks)) return a.withText(a.text + b.text) } // Object used to track the context of a running parse. class MarkdownParseState { constructor(schema, tokenHandlers) { this.schema = schema this.stack = [{type: schema.topNodeType, content: []}] this.marks = Mark.none this.tokenHandlers = tokenHandlers } top() { return this.stack[this.stack.length - 1] } push(elt) { if (this.stack.length) this.top().content.push(elt) } // : (string) // Adds the given text to the current position in the document, // using the current marks as styling. addText(text) { if (!text) return let nodes = this.top().content, last = nodes[nodes.length - 1] let node = this.schema.text(text, this.marks), merged if (last && (merged = maybeMerge(last, node))) nodes[nodes.length - 1] = merged else nodes.push(node) } // : (Mark) // Adds the given mark to the set of active marks. openMark(mark) { this.marks = mark.addToSet(this.marks) } // : (Mark) // Removes the given mark from the set of active marks. closeMark(mark) { this.marks = mark.removeFromSet(this.marks) } parseTokens(toks) { for (let i = 0; i < toks.length; i++) { let tok = toks[i] let handler = this.tokenHandlers[tok.type] if (!handler) throw new Error("Token type `" + tok.type + "` not supported by Markdown parser") handler(this, tok, toks, i) } } // : (NodeType, ?Object, ?[Node]) → ?Node // Add a node at the current position. addNode(type, attrs, content) { let node = type.createAndFill(attrs, content, this.marks) if (!node) return null this.push(node) return node } // : (NodeType, ?Object) // Wrap subsequent content in a node of the given type. openNode(type, attrs) { this.stack.push({type: type, attrs: attrs, content: []}) } // : () → ?Node // Close and return the node that is currently on top of the stack. closeNode() { if (this.marks.length) this.marks = Mark.none let info = this.stack.pop() return this.addNode(info.type, info.attrs, info.content) } } function attrs(spec, token, tokens, i) { if (spec.getAttrs) return spec.getAttrs(token, tokens, i) // For backwards compatibility when `attrs` is a Function else if (spec.attrs instanceof Function) return spec.attrs(token) else return spec.attrs } // Code content is represented as a single token with a `content` // property in Markdown-it. 
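// For example, markdown-it's "fence" and "code_block" tokens have no
// `_open`/`_close` pair at all; the code text is carried directly in the
// token's `content` field, which is why they are treated as `noCloseToken`
// below.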
function noCloseToken(spec, type) { return spec.noCloseToken || type == "code_inline" || type == "code_block" || type == "fence" } function withoutTrailingNewline(str) { return str[str.length - 1] == "\n" ? str.slice(0, str.length - 1) : str } function noOp() {} function tokenHandlers(schema, tokens) { let handlers = Object.create(null) for (let type in tokens) { let spec = tokens[type] if (spec.block) { let nodeType = schema.nodeType(spec.block) if (noCloseToken(spec, type)) { handlers[type] = (state, tok, tokens, i) => { state.openNode(nodeType, attrs(spec, tok, tokens, i)) state.addText(withoutTrailingNewline(tok.content)) state.closeNode() } } else { handlers[type + "_open"] = (state, tok, tokens, i) => state.openNode(nodeType, attrs(spec, tok, tokens, i)) handlers[type + "_close"] = state => state.closeNode() } } else if (spec.node) { let nodeType = schema.nodeType(spec.node) handlers[type] = (state, tok, tokens, i) => state.addNode(nodeType, attrs(spec, tok, tokens, i)) } else if (spec.mark) { let markType = schema.marks[spec.mark] if (noCloseToken(spec, type)) { handlers[type] = (state, tok, tokens, i) => { state.openMark(markType.create(attrs(spec, tok, tokens, i))) state.addText(withoutTrailingNewline(tok.content)) state.closeMark(markType) } } else { handlers[type + "_open"] = (state, tok, tokens, i) => state.openMark(markType.create(attrs(spec, tok, tokens, i))) handlers[type + "_close"] = state => state.closeMark(markType) } } else if (spec.ignore) { if (noCloseToken(spec, type)) { handlers[type] = noOp } else { handlers[type + "_open"] = noOp handlers[type + "_close"] = noOp } } else { throw new RangeError("Unrecognized parsing spec " + JSON.stringify(spec)) } } handlers.text = (state, tok) => state.addText(tok.content) handlers.inline = (state, tok) => state.parseTokens(tok.children) handlers.softbreak = handlers.softbreak || (state => state.addText("\n")) return handlers } // ::- A configuration of a Markdown parser. Such a parser uses // [markdown-it](https://github.com/markdown-it/markdown-it) to // tokenize a file, and then runs the custom rules it is given over // the tokens to create a ProseMirror document tree. export class MarkdownParser { // :: (Schema, MarkdownIt, Object) // Create a parser with the given configuration. You can configure // the markdown-it parser to parse the dialect you want, and provide // a description of the ProseMirror entities those tokens map to in // the `tokens` object, which maps token names to descriptions of // what to do with them. Such a description is an object, and may // have the following properties: // // **`node`**`: ?string` // : This token maps to a single node, whose type can be looked up // in the schema under the given name. Exactly one of `node`, // `block`, or `mark` must be set. // // **`block`**`: ?string` // : This token (unless `noCloseToken` is true) comes in `_open` // and `_close` variants (which are appended to the base token // name provides a the object property), and wraps a block of // content. The block should be wrapped in a node of the type // named to by the property's value. If the token does not have // `_open` or `_close`, use the `noCloseToken` option. // // **`mark`**`: ?string` // : This token (again, unless `noCloseToken` is true) also comes // in `_open` and `_close` variants, but should add a mark // (named by the value) to its content, rather than wrapping it // in a node. // // **`attrs`**`: ?Object` // : Attributes for the node or mark. When `getAttrs` is provided, // it takes precedence. 
// // **`getAttrs`**`: ?(MarkdownToken) → Object` // : A function used to compute the attributes for the node or mark // that takes a [markdown-it // token](https://markdown-it.github.io/markdown-it/#Token) and // returns an attribute object. // // **`noCloseToken`**`: ?boolean` // : Indicates that the [markdown-it // token](https://markdown-it.github.io/markdown-it/#Token) has // no `_open` or `_close` for the nodes. This defaults to `true` // for `code_inline`, `code_block` and `fence`. // // **`ignore`**`: ?bool` // : When true, ignore content for the matched token. constructor(schema, tokenizer, tokens) { // :: Object The value of the `tokens` object used to construct // this parser. Can be useful to copy and modify to base other // parsers on. this.tokens = tokens this.schema = schema // :: This parser's markdown-it tokenizer. this.tokenizer = tokenizer this.tokenHandlers = tokenHandlers(schema, tokens) } // :: (string) → Node // Parse a string as [CommonMark](http://commonmark.org/) markup, // and create a ProseMirror document as prescribed by this parser's // rules. parse(text) { let state = new MarkdownParseState(this.schema, this.tokenHandlers), doc state.parseTokens(this.tokenizer.parse(text, {})) do { doc = state.closeNode() } while (state.stack.length) return doc || this.schema.topNodeType.createAndFill() } } function listIsTight(tokens, i) { while (++i < tokens.length) if (tokens[i].type != "list_item_open") return tokens[i].hidden return false } // :: MarkdownParser // A parser parsing unextended [CommonMark](http://commonmark.org/), // without inline HTML, and producing a document in the basic schema. export const defaultMarkdownParser = new MarkdownParser(schema, markdownit("commonmark", {html: false}), { blockquote: {block: "blockquote"}, paragraph: {block: "paragraph"}, list_item: {block: "list_item"}, bullet_list: {block: "bullet_list", getAttrs: (_, tokens, i) => ({tight: listIsTight(tokens, i)})}, ordered_list: {block: "ordered_list", getAttrs: (tok, tokens, i) => ({ order: +tok.attrGet("start") || 1, tight: listIsTight(tokens, i) })}, heading: {block: "heading", getAttrs: tok => ({level: +tok.tag.slice(1)})}, code_block: {block: "code_block", noCloseToken: true}, fence: {block: "code_block", getAttrs: tok => ({params: tok.info || ""}), noCloseToken: true}, hr: {node: "horizontal_rule"}, image: {node: "image", getAttrs: tok => ({ src: tok.attrGet("src"), title: tok.attrGet("title") || null, alt: tok.children[0] && tok.children[0].content || null })}, hardbreak: {node: "hard_break"}, em: {mark: "em"}, strong: {mark: "strong"}, link: {mark: "link", getAttrs: tok => ({ href: tok.attrGet("href"), title: tok.attrGet("title") || null })}, code_inline: {mark: "code", noCloseToken: true} }) prosemirror-markdown-1.8.0/src/index.js000066400000000000000000000004361421365711500201670ustar00rootroot00000000000000// Defines a parser and serializer for [CommonMark](http://commonmark.org/) text. export {schema} from "./schema" export {defaultMarkdownParser, MarkdownParser} from "./from_markdown" export {MarkdownSerializer, defaultMarkdownSerializer, MarkdownSerializerState} from "./to_markdown" prosemirror-markdown-1.8.0/src/schema.js000066400000000000000000000075521421365711500203260ustar00rootroot00000000000000import {Schema} from "prosemirror-model" // ::Schema Document schema for the data model used by CommonMark. 
export const schema = new Schema({ nodes: { doc: { content: "block+" }, paragraph: { content: "inline*", group: "block", parseDOM: [{tag: "p"}], toDOM() { return ["p", 0] } }, blockquote: { content: "block+", group: "block", parseDOM: [{tag: "blockquote"}], toDOM() { return ["blockquote", 0] } }, horizontal_rule: { group: "block", parseDOM: [{tag: "hr"}], toDOM() { return ["div", ["hr"]] } }, heading: { attrs: {level: {default: 1}}, content: "(text | image)*", group: "block", defining: true, parseDOM: [{tag: "h1", attrs: {level: 1}}, {tag: "h2", attrs: {level: 2}}, {tag: "h3", attrs: {level: 3}}, {tag: "h4", attrs: {level: 4}}, {tag: "h5", attrs: {level: 5}}, {tag: "h6", attrs: {level: 6}}], toDOM(node) { return ["h" + node.attrs.level, 0] } }, code_block: { content: "text*", group: "block", code: true, defining: true, marks: "", attrs: {params: {default: ""}}, parseDOM: [{tag: "pre", preserveWhitespace: "full", getAttrs: node => ( {params: node.getAttribute("data-params") || ""} )}], toDOM(node) { return ["pre", node.attrs.params ? {"data-params": node.attrs.params} : {}, ["code", 0]] } }, ordered_list: { content: "list_item+", group: "block", attrs: {order: {default: 1}, tight: {default: false}}, parseDOM: [{tag: "ol", getAttrs(dom) { return {order: dom.hasAttribute("start") ? +dom.getAttribute("start") : 1, tight: dom.hasAttribute("data-tight")} }}], toDOM(node) { return ["ol", {start: node.attrs.order == 1 ? null : node.attrs.order, "data-tight": node.attrs.tight ? "true" : null}, 0] } }, bullet_list: { content: "list_item+", group: "block", attrs: {tight: {default: false}}, parseDOM: [{tag: "ul", getAttrs: dom => ({tight: dom.hasAttribute("data-tight")})}], toDOM(node) { return ["ul", {"data-tight": node.attrs.tight ? "true" : null}, 0] } }, list_item: { content: "paragraph block*", defining: true, parseDOM: [{tag: "li"}], toDOM() { return ["li", 0] } }, text: { group: "inline" }, image: { inline: true, attrs: { src: {}, alt: {default: null}, title: {default: null} }, group: "inline", draggable: true, parseDOM: [{tag: "img[src]", getAttrs(dom) { return { src: dom.getAttribute("src"), title: dom.getAttribute("title"), alt: dom.getAttribute("alt") } }}], toDOM(node) { return ["img", node.attrs] } }, hard_break: { inline: true, group: "inline", selectable: false, parseDOM: [{tag: "br"}], toDOM() { return ["br"] } } }, marks: { em: { parseDOM: [{tag: "i"}, {tag: "em"}, {style: "font-style", getAttrs: value => value == "italic" && null}], toDOM() { return ["em"] } }, strong: { parseDOM: [{tag: "b"}, {tag: "strong"}, {style: "font-weight", getAttrs: value => /^(bold(er)?|[5-9]\d{2,})$/.test(value) && null}], toDOM() { return ["strong"] } }, link: { attrs: { href: {}, title: {default: null} }, inclusive: false, parseDOM: [{tag: "a[href]", getAttrs(dom) { return {href: dom.getAttribute("href"), title: dom.getAttribute("title")} }}], toDOM(node) { return ["a", node.attrs] } }, code: { parseDOM: [{tag: "code"}], toDOM() { return ["code"] } } } }) prosemirror-markdown-1.8.0/src/to_markdown.js000066400000000000000000000356461421365711500214170ustar00rootroot00000000000000// ::- A specification for serializing a ProseMirror document as // Markdown/CommonMark text. export class MarkdownSerializer { // :: (Object<(state: MarkdownSerializerState, node: Node, parent: Node, index: number)>, Object, ?Object) // Construct a serializer with the given configuration. 
The `nodes` // object should map node names in a given schema to function that // take a serializer state and such a node, and serialize the node. // // The `marks` object should hold objects with `open` and `close` // properties, which hold the strings that should appear before and // after a piece of text marked that way, either directly or as a // function that takes a serializer state and a mark, and returns a // string. `open` and `close` can also be functions, which will be // called as // // (state: MarkdownSerializerState, mark: Mark, // parent: Fragment, index: number) → string // // Where `parent` and `index` allow you to inspect the mark's // context to see which nodes it applies to. // // Mark information objects can also have a `mixable` property // which, when `true`, indicates that the order in which the mark's // opening and closing syntax appears relative to other mixable // marks can be varied. (For example, you can say `**a *b***` and // `*a **b***`, but not `` `a *b*` ``.) // // To disable character escaping in a mark, you can give it an // `escape` property of `false`. Such a mark has to have the highest // precedence (must always be the innermost mark). // // The `expelEnclosingWhitespace` mark property causes the // serializer to move enclosing whitespace from inside the marks to // outside the marks. This is necessary for emphasis marks as // CommonMark does not permit enclosing whitespace inside emphasis // marks, see: http://spec.commonmark.org/0.26/#example-330 // // options::- Optional additional options. // escapeExtraCharacters:: ?RegExp // Extra characters can be added for escaping. This is passed // directly to String.replace(), and the matching characters are // preceded by a backslash. constructor(nodes, marks, options) { // :: Object<(MarkdownSerializerState, Node)> The node serializer // functions for this serializer. this.nodes = nodes // :: Object The mark serializer info. this.marks = marks this.options = options || {} } // :: (Node, ?Object) → string // Serialize the content of the given node to // [CommonMark](http://commonmark.org/). serialize(content, options) { options = Object.assign(this.options, options) let state = new MarkdownSerializerState(this.nodes, this.marks, options) state.renderContent(content) return state.out } } // :: MarkdownSerializer // A serializer for the [basic schema](#schema). export const defaultMarkdownSerializer = new MarkdownSerializer({ blockquote(state, node) { state.wrapBlock("> ", null, node, () => state.renderContent(node)) }, code_block(state, node) { state.write("```" + (node.attrs.params || "") + "\n") state.text(node.textContent, false) state.ensureNewLine() state.write("```") state.closeBlock(node) }, heading(state, node) { state.write(state.repeat("#", node.attrs.level) + " ") state.renderInline(node) state.closeBlock(node) }, horizontal_rule(state, node) { state.write(node.attrs.markup || "---") state.closeBlock(node) }, bullet_list(state, node) { state.renderList(node, " ", () => (node.attrs.bullet || "*") + " ") }, ordered_list(state, node) { let start = node.attrs.order || 1 let maxW = String(start + node.childCount - 1).length let space = state.repeat(" ", maxW + 2) state.renderList(node, space, i => { let nStr = String(start + i) return state.repeat(" ", maxW - nStr.length) + nStr + ". 
" }) }, list_item(state, node) { state.renderContent(node) }, paragraph(state, node) { state.renderInline(node) state.closeBlock(node) }, image(state, node) { state.write("![" + state.esc(node.attrs.alt || "") + "](" + node.attrs.src + (node.attrs.title ? ' "' + node.attrs.title.replace(/"/g, '\\"') + '"' : "") + ")") }, hard_break(state, node, parent, index) { for (let i = index + 1; i < parent.childCount; i++) if (parent.child(i).type != node.type) { state.write("\\\n") return } }, text(state, node) { state.text(node.text) } }, { em: {open: "*", close: "*", mixable: true, expelEnclosingWhitespace: true}, strong: {open: "**", close: "**", mixable: true, expelEnclosingWhitespace: true}, link: { open(_state, mark, parent, index) { return isPlainURL(mark, parent, index, 1) ? "<" : "[" }, close(state, mark, parent, index) { return isPlainURL(mark, parent, index, -1) ? ">" : "](" + mark.attrs.href + (mark.attrs.title ? ' "' + mark.attrs.title.replace(/"/g, '\\"') + '"' : "") + ")" } }, code: {open(_state, _mark, parent, index) { return backticksFor(parent.child(index), -1) }, close(_state, _mark, parent, index) { return backticksFor(parent.child(index - 1), 1) }, escape: false} }) function backticksFor(node, side) { let ticks = /`+/g, m, len = 0 if (node.isText) while (m = ticks.exec(node.text)) len = Math.max(len, m[0].length) let result = len > 0 && side > 0 ? " `" : "`" for (let i = 0; i < len; i++) result += "`" if (len > 0 && side < 0) result += " " return result } function isPlainURL(link, parent, index, side) { if (link.attrs.title || !/^\w+:/.test(link.attrs.href)) return false let content = parent.child(index + (side < 0 ? -1 : 0)) if (!content.isText || content.text != link.attrs.href || content.marks[content.marks.length - 1] != link) return false if (index == (side < 0 ? 1 : parent.childCount - 1)) return true let next = parent.child(index + (side < 0 ? -2 : 1)) return !link.isInSet(next.marks) } // ::- This is an object used to track state and expose // methods related to markdown serialization. Instances are passed to // node and mark serialization methods (see `toMarkdown`). export class MarkdownSerializerState { constructor(nodes, marks, options) { this.nodes = nodes this.marks = marks this.delim = this.out = "" this.closed = false this.inTightList = false // :: Object // The options passed to the serializer. // tightLists:: ?bool // Whether to render lists in a tight style. This can be overridden // on a node level by specifying a tight attribute on the node. // Defaults to false. this.options = options || {} if (typeof this.options.tightLists == "undefined") this.options.tightLists = false } flushClose(size) { if (this.closed) { if (!this.atBlank()) this.out += "\n" if (size == null) size = 2 if (size > 1) { let delimMin = this.delim let trim = /\s+$/.exec(delimMin) if (trim) delimMin = delimMin.slice(0, delimMin.length - trim[0].length) for (let i = 1; i < size; i++) this.out += delimMin + "\n" } this.closed = false } } // :: (string, ?string, Node, ()) // Render a block, prefixing each line with `delim`, and the first // line in `firstDelim`. `node` should be the node that is closed at // the end of the block, and `f` is a function that renders the // content of the block. wrapBlock(delim, firstDelim, node, f) { let old = this.delim this.write(firstDelim || delim) this.delim += delim f() this.delim = old this.closeBlock(node) } atBlank() { return /(^|\n)$/.test(this.out) } // :: () // Ensure the current content ends with a newline. 
ensureNewLine() { if (!this.atBlank()) this.out += "\n" } // :: (?string) // Prepare the state for writing output (closing closed paragraphs, // adding delimiters, and so on), and then optionally add content // (unescaped) to the output. write(content) { this.flushClose() if (this.delim && this.atBlank()) this.out += this.delim if (content) this.out += content } // :: (Node) // Close the block for the given node. closeBlock(node) { this.closed = node } // :: (string, ?bool) // Add the given text to the document. When escape is not `false`, // it will be escaped. text(text, escape) { let lines = text.split("\n") for (let i = 0; i < lines.length; i++) { var startOfLine = this.atBlank() || this.closed this.write() this.out += escape !== false ? this.esc(lines[i], startOfLine) : lines[i] if (i != lines.length - 1) this.out += "\n" } } // :: (Node) // Render the given node as a block. render(node, parent, index) { if (typeof parent == "number") throw new Error("!") if (!this.nodes[node.type.name]) throw new Error("Token type `" + node.type.name + "` not supported by Markdown renderer") this.nodes[node.type.name](this, node, parent, index) } // :: (Node) // Render the contents of `parent` as block nodes. renderContent(parent) { parent.forEach((node, _, i) => this.render(node, parent, i)) } // :: (Node) // Render the contents of `parent` as inline content. renderInline(parent) { let active = [], trailing = "" let progress = (node, _, index) => { let marks = node ? node.marks : [] // Remove marks from `hard_break` that are the last node inside // that mark to prevent parser edge cases with new lines just // before closing marks. // (FIXME it'd be nice if we had a schema-agnostic way to // identify nodes that serialize as hard breaks) if (node && node.type.name === "hard_break") marks = marks.filter(m => { if (index + 1 == parent.childCount) return false let next = parent.child(index + 1) return m.isInSet(next.marks) && (!next.isText || /\S/.test(next.text)) }) let leading = trailing trailing = "" // If whitespace has to be expelled from the node, adjust // leading and trailing accordingly. if (node && node.isText && marks.some(mark => { let info = this.marks[mark.type.name] return info && info.expelEnclosingWhitespace })) { let [_, lead, inner, trail] = /^(\s*)(.*?)(\s*)$/m.exec(node.text) leading += lead trailing = trail if (lead || trail) { node = inner ? node.withText(inner) : null if (!node) marks = active } } let inner = marks.length && marks[marks.length - 1], noEsc = inner && this.marks[inner.type.name].escape === false let len = marks.length - (noEsc ? 1 : 0) // Try to reorder 'mixable' marks, such as em and strong, which // in Markdown may be opened and closed in different order, so // that order of the marks for the token matches the order in // active. 
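    // For example, if `active` is [em, strong] but this node's marks come
    // in as [strong, em], they are reordered to [em, strong], so the marks
    // that are already open can stay open instead of being closed and
    // reopened around this node.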
outer: for (let i = 0; i < len; i++) { let mark = marks[i] if (!this.marks[mark.type.name].mixable) break for (let j = 0; j < active.length; j++) { let other = active[j] if (!this.marks[other.type.name].mixable) break if (mark.eq(other)) { if (i > j) marks = marks.slice(0, j).concat(mark).concat(marks.slice(j, i)).concat(marks.slice(i + 1, len)) else if (j > i) marks = marks.slice(0, i).concat(marks.slice(i + 1, j)).concat(mark).concat(marks.slice(j, len)) continue outer } } } // Find the prefix of the mark set that didn't change let keep = 0 while (keep < Math.min(active.length, len) && marks[keep].eq(active[keep])) ++keep // Close the marks that need to be closed while (keep < active.length) this.text(this.markString(active.pop(), false, parent, index), false) // Output any previously expelled trailing whitespace outside the marks if (leading) this.text(leading) // Open the marks that need to be opened if (node) { while (active.length < len) { let add = marks[active.length] active.push(add) this.text(this.markString(add, true, parent, index), false) } // Render the node. Special case code marks, since their content // may not be escaped. if (noEsc && node.isText) this.text(this.markString(inner, true, parent, index) + node.text + this.markString(inner, false, parent, index + 1), false) else this.render(node, parent, index) } } parent.forEach(progress) progress(null, null, parent.childCount) } // :: (Node, string, (number) → string) // Render a node's content as a list. `delim` should be the extra // indentation added to all lines except the first in an item, // `firstDelim` is a function going from an item index to a // delimiter for the first line of the item. renderList(node, delim, firstDelim) { if (this.closed && this.closed.type == node.type) this.flushClose(3) else if (this.inTightList) this.flushClose(1) let isTight = typeof node.attrs.tight != "undefined" ? node.attrs.tight : this.options.tightLists let prevTight = this.inTightList this.inTightList = isTight node.forEach((child, _, i) => { if (i && isTight) this.flushClose(1) this.wrapBlock(delim, firstDelim(i), node, () => this.render(child, node, i)) }) this.inTightList = prevTight } // :: (string, ?bool) → string // Escape the given string so that it can safely appear in Markdown // content. If `startOfLine` is true, also escape characters that // have special meaning only at the start of the line. esc(str, startOfLine) { str = str.replace( /[`*\\~\[\]_]/g, (m, i) => m == "_" && i > 0 && i + 1 < str.length && str[i-1].match(/\w/) && str[i+1].match(/\w/) ? m : "\\" + m ) if (startOfLine) str = str.replace(/^[:#\-*+>]/, "\\$&").replace(/^(\s*\d+)\./, "$1\\.") if (this.options.escapeExtraCharacters) str = str.replace(this.options.escapeExtraCharacters, "\\$&") return str } quote(str) { var wrap = str.indexOf('"') == -1 ? '""' : str.indexOf("'") == -1 ? "''" : "()" return wrap[0] + str + wrap[1] } // :: (string, number) → string // Repeat the given string `n` times. repeat(str, n) { let out = "" for (let i = 0; i < n; i++) out += str return out } // : (Mark, bool, string?) → string // Get the markdown string for a given opening or closing mark. markString(mark, open, parent, index) { let info = this.marks[mark.type.name] let value = open ? info.open : info.close return typeof value == "string" ? value : value(this, mark, parent, index) } // :: (string) → { leading: ?string, trailing: ?string } // Get leading and trailing whitespace from a string. 
Values of // leading or trailing property of the return object will be undefined // if there is no match. getEnclosingWhitespace(text) { return { leading: (text.match(/^(\s+)/) || [])[0], trailing: (text.match(/(\s+)$/) || [])[0] } } } prosemirror-markdown-1.8.0/test/000077500000000000000000000000001421365711500167075ustar00rootroot00000000000000prosemirror-markdown-1.8.0/test/build.js000066400000000000000000000010721421365711500203440ustar00rootroot00000000000000const {builders} = require("prosemirror-test-builder") const {schema} = require("..") module.exports = builders(schema, { p: {nodeType: "paragraph"}, h1: {nodeType: "heading", level: 1}, h2: {nodeType: "heading", level: 2}, hr: {nodeType: "horizontal_rule"}, li: {nodeType: "list_item"}, ol: {nodeType: "ordered_list"}, ol3: {nodeType: "ordered_list", order: 3}, ul: {nodeType: "bullet_list"}, pre: {nodeType: "code_block"}, a: {markType: "link", href: "foo"}, br: {nodeType: "hard_break"}, img: {nodeType: "image", src: "img.png", alt: "x"} }) prosemirror-markdown-1.8.0/test/test-custom-parser.js000066400000000000000000000015271421365711500230330ustar00rootroot00000000000000const {eq} = require("prosemirror-test-builder") const ist = require("ist") const markdownit = require("markdown-it") const {schema, MarkdownParser} = require("..") const {doc, p, hard_break} = require("./build") const md = markdownit("commonmark", {html: false}) const ignoreBlockquoteParser = new MarkdownParser(schema, md, { blockquote: {ignore: true}, paragraph: {block: "paragraph"}, softbreak: {node: 'hard_break'} }) function parseWith(parser) { return (text, doc) => { ist(parser.parse(text), doc, eq) } } describe("custom markdown parser", () => { it("ignores a blockquote", () => parseWith(ignoreBlockquoteParser)("> hello!", doc(p("hello!")))) it("converts softbreaks to hard_break nodes", () => parseWith(ignoreBlockquoteParser)("hello\nworld!", doc(p("hello", hard_break(), 'world!')))) }) prosemirror-markdown-1.8.0/test/test-parse.js000066400000000000000000000150541421365711500213410ustar00rootroot00000000000000const {eq} = require("prosemirror-test-builder") const ist = require("ist") const {schema, defaultMarkdownParser, defaultMarkdownSerializer, MarkdownSerializer} = require("..") const {doc, blockquote, h1, h2, p, hr, li, ol, ol3, ul, pre, em, strong, code, a, link, br, img} = require("./build") function parse(text, doc) { ist(defaultMarkdownParser.parse(text), doc, eq) } function serialize(doc, text) { ist(defaultMarkdownSerializer.serialize(doc), text) } function same(text, doc) { parse(text, doc) serialize(doc, text) } describe("markdown", () => { it("parses a paragraph", () => same("hello!", doc(p("hello!")))) it("parses headings", () => same("# one\n\n## two\n\nthree", doc(h1("one"), h2("two"), p("three")))) it("parses a blockquote", () => same("> once\n\n> > twice", doc(blockquote(p("once")), blockquote(blockquote(p("twice")))))) // FIXME bring back testing for preserving bullets and tight attrs // when supported again it("parses a bullet list", () => same("* foo\n\n * bar\n\n * baz\n\n* quux", doc(ul(li(p("foo"), ul(li(p("bar")), li(p("baz")))), li(p("quux")))))) it("parses an ordered list", () => same("1. Hello\n\n2. Goodbye\n\n3. Nest\n\n 1. Hey\n\n 2. Aye", doc(ol(li(p("Hello")), li(p("Goodbye")), li(p("Nest"), ol(li(p("Hey")), li(p("Aye")))))))) it("preserves ordered list start number", () => same("3. Foo\n\n4. 
Bar", doc(ol3(li(p("Foo")), li(p("Bar")))))) it("parses a code block", () => same("Some code:\n\n```\nHere it is\n```\n\nPara", doc(p("Some code:"), schema.node("code_block", {params: ""}, [schema.text("Here it is")]), p("Para")))) it("parses an intended code block", () => parse("Some code:\n\n Here it is\n\nPara", doc(p("Some code:"), pre("Here it is"), p("Para")))) it("parses a fenced code block with info string", () => same("foo\n\n```javascript\n1\n```", doc(p("foo"), schema.node("code_block", {params: "javascript"}, [schema.text("1")])))) it("parses inline marks", () => same("Hello. Some *em* text, some **strong** text, and some `code`", doc(p("Hello. Some ", em("em"), " text, some ", strong("strong"), " text, and some ", code("code"))))) it("parses overlapping inline marks", () => same("This is **strong *emphasized text with `code` in* it**", doc(p("This is ", strong("strong ", em("emphasized text with ", code("code"), " in"), " it"))))) it("parses links inside strong text", () => same("**[link](foo) is bold**", doc(p(strong(a("link"), " is bold"))))) it("parses code mark inside strong text", () => same("**`code` is bold**", doc(p(strong(code("code"), " is bold"))))) it("parses code mark containing backticks", () => same("``` one backtick: ` two backticks: `` ```", doc(p(code("one backtick: ` two backticks: ``"))))) it("parses code mark containing only whitespace", () => serialize(doc(p("Three spaces: ", code(" "))), "Three spaces: ` `")) it("parses links", () => same("My [link](foo) goes to foo", doc(p("My ", a("link"), " goes to foo")))) it("parses urls", () => same("Link to ", doc(p("Link to ", link({href: "https://prosemirror.net"}, "https://prosemirror.net"))))) it("correctly serializes relative urls", () => { same("[foo.html](foo.html)", doc(p(link({href: "foo.html"}, "foo.html")))) }) it("can handle link titles", () => { same('[a](x.html "title \\"quoted\\"")', doc(p(link({href: "x.html", title: 'title "quoted"'}, "a")))) }) it("doesn't escape underscores in link", () => { same('[link](http://foo.com/a_b_c)', doc(p(link({href: "http://foo.com/a_b_c"}, "link")))) }) it("parses emphasized urls", () => same("Link to **", doc(p("Link to ", em(link({href: "https://prosemirror.net"}, "https://prosemirror.net")))))) it("parses an image", () => same("Here's an image: ![x](img.png)", doc(p("Here's an image: ", img)))) it("parses a line break", () => same("line one\\\nline two", doc(p("line one", br, "line two")))) it("parses a horizontal rule", () => same("one two\n\n---\n\nthree", doc(p("one two"), hr, p("three")))) it("ignores HTML tags", () => same("Foo < img> bar", doc(p("Foo < img> bar")))) it("doesn't accidentally generate list markup", () => same("1\\. foo", doc(p("1. foo")))) it("doesn't fail with line break inside inline mark", () => same("**text1\ntext2**", doc(p(strong("text1\ntext2"))))) it("drops trailing hard breaks", () => serialize(doc(p("a", br, br)), "a")) it("expels enclosing whitespace from inside emphasis", () => serialize(doc(p("Some emphasized text with", strong(em(" whitespace ")), "surrounding the emphasis.")), "Some emphasized text with ***whitespace*** surrounding the emphasis.")) it("drops nodes when all whitespace is expelled from them", () => serialize(doc(p("Text with", em(" "), "an emphasized space")), "Text with an emphasized space")) it("preserves list tightness", () => { same("* foo\n* bar", doc(ul({tight: true}, li(p("foo")), li(p("bar"))))) same("1. foo\n2. 
bar", doc(ol({tight: true}, li(p("foo")), li(p("bar"))))) }) it("doesn't put a code block after a list item inside the list item", () => same("* list item\n\n```\ncode\n```", doc(ul({tight: true}, li(p("list item"))), pre("code")))) it("doesn't escape characters in code", () => same("foo`*`", doc(p("foo", code("*"))))) it("doesn't escape underscores between word characters", () => same( "abc_def", doc(p("abc_def")) ) ) it("doesn't escape strips of underscores between word characters", () => same( "abc___def", doc(p("abc___def")) ) ) it("escapes underscores at word boundaries", () => same( "\\_abc\\_", doc(p("_abc_")) ) ) it("escapes underscores surrounded by non-word characters", () => same( "/\\_abc\\_)", doc(p("/_abc_)")) ) ) context("custom serializer", () => { let markdownSerializer = new MarkdownSerializer( defaultMarkdownSerializer.nodes, defaultMarkdownSerializer.marks, { escapeExtraCharacters: /[\|!]/g, } ); it("escapes extra characters from options", () => { ist(markdownSerializer.serialize(doc(p("foo|bar!"))), "foo\\|bar\\!"); }); }); })