pax_global_header00006660000000000000000000000064132422444010014506gustar00rootroot0000000000000052 comment=55d61fa8aa702f59229e6cff85793c22e580eaf5 blackfriday-1.5.1/000077500000000000000000000000001324224440100137655ustar00rootroot00000000000000blackfriday-1.5.1/.gitignore000066400000000000000000000000561324224440100157560ustar00rootroot00000000000000*.out *.swp *.8 *.6 _obj _test* markdown tags blackfriday-1.5.1/.travis.yml000066400000000000000000000012501324224440100160740ustar00rootroot00000000000000sudo: false language: go go: - 1.5.4 - 1.6.2 - tip matrix: include: - go: 1.2.2 script: - go get -t -v ./... - go test -v -race ./... - go: 1.3.3 script: - go get -t -v ./... - go test -v -race ./... - go: 1.4.3 script: - go get -t -v ./... - go test -v -race ./... allow_failures: - go: tip fast_finish: true install: - # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step). script: - go get -t -v ./... - diff -u <(echo -n) <(gofmt -d -s .) - go tool vet . - go test -v -race ./... blackfriday-1.5.1/LICENSE.txt000066400000000000000000000026101324224440100156070ustar00rootroot00000000000000Blackfriday is distributed under the Simplified BSD License: > Copyright © 2011 Russ Ross > All rights reserved. > > Redistribution and use in source and binary forms, with or without > modification, are permitted provided that the following conditions > are met: > > 1. Redistributions of source code must retain the above copyright > notice, this list of conditions and the following disclaimer. > > 2. Redistributions in binary form must reproduce the above > copyright notice, this list of conditions and the following > disclaimer in the documentation and/or other materials provided with > the distribution. > > THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS > FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE > COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, > INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, > BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; > LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER > CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT > LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN > ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE > POSSIBILITY OF SUCH DAMAGE. blackfriday-1.5.1/README.md000066400000000000000000000315531324224440100152530ustar00rootroot00000000000000Blackfriday [![Build Status][BuildSVG]][BuildURL] [![Godoc][GodocV2SVG]][GodocV2URL] =========== Blackfriday is a [Markdown][1] processor implemented in [Go][2]. It is paranoid about its input (so you can safely feed it user-supplied data), it is fast, it supports common extensions (tables, smart punctuation substitutions, etc.), and it is safe for all utf-8 (unicode) input. HTML output is currently supported, along with Smartypants extensions. It started as a translation from C of [Sundown][3]. Installation ------------ Blackfriday is compatible with any modern Go release. With Go and git installed: go get -u gopkg.in/russross/blackfriday.v2 will download, compile, and install the package into your `$GOPATH` directory hierarchy. Versions -------- Currently maintained and recommended version of Blackfriday is `v2`. 
It's being developed on its own branch: https://github.com/russross/blackfriday/tree/v2 and the
documentation is available at https://godoc.org/gopkg.in/russross/blackfriday.v2.

It is `go get`-able via [gopkg.in][6] at `gopkg.in/russross/blackfriday.v2`,
but we highly recommend using a package management tool like [dep][7] or
[Glide][8] and making use of semantic versioning. With package management you
should import `github.com/russross/blackfriday` and specify that you're using
version 2.0.0.

Version 2 offers a number of improvements over v1:

* Cleaned up API
* A separate call to [`Parse`][4], which produces an abstract syntax tree for
  the document
* Latest bug fixes
* Flexibility to easily add your own rendering extensions

Potential drawbacks:

* Our benchmarks show v2 to be slightly slower than v1. Currently in the
  ballpark of around 15%.
* API breakage. If you can't afford modifying your code to adhere to the new
  API and don't care too much about the new features, v2 is probably not for
  you.
* Several bug fixes are trailing behind and still need to be forward-ported
  to v2. See issue [#348](https://github.com/russross/blackfriday/issues/348)
  for tracking.

If you are still interested in the legacy `v1`, you can import it from
`github.com/russross/blackfriday`. Documentation for the legacy v1 can be
found here: https://godoc.org/github.com/russross/blackfriday

### Known issue with `dep`

There is a known problem with using Blackfriday v1 _transitively_ and `dep`.
Currently `dep` prioritizes semver versions over anything else, and picks the
latest one, plus it does not apply a `[[constraint]]` specifier to
transitively pulled in packages. So if you're using something that uses
Blackfriday v1, but that something does not use `dep` yet, you will get
Blackfriday v2 pulled in and your first dependency will fail to build.

There are a couple of fixes for it, documented here:
https://github.com/golang/dep/blob/master/docs/FAQ.md#how-do-i-constrain-a-transitive-dependencys-version

Meanwhile, the `dep` team is working on a more general solution to the
constraints on transitive dependencies problem:
https://github.com/golang/dep/issues/1124.


Usage
-----

### v1

For basic usage, it is as simple as getting your input into a byte slice and
calling:

    output := blackfriday.MarkdownBasic(input)

This renders it with no extensions enabled. To get a more useful feature set,
use this instead:

    output := blackfriday.MarkdownCommon(input)

### v2

For the most sensible markdown processing, it is as simple as getting your
input into a byte slice and calling:

```go
output := blackfriday.Run(input)
```

Your input will be parsed and the output rendered with a set of the most
popular extensions enabled. If you want the most basic feature set,
corresponding with the bare Markdown specification, use:

```go
output := blackfriday.Run(input, blackfriday.WithNoExtensions())
```

### Sanitize untrusted content

Blackfriday itself does nothing to protect against malicious content. If you
are dealing with user-supplied markdown, we recommend running Blackfriday's
output through an HTML sanitizer such as [Bluemonday][5].

Here's an example of simple usage of Blackfriday together with Bluemonday:

```go
import (
    "github.com/microcosm-cc/bluemonday"
    "gopkg.in/russross/blackfriday.v2"
)

// ...
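// A minimal sketch of how the pieces fit together; the literal below is only
// an illustrative placeholder for whatever user-supplied markdown you have,
// and is not part of the original example.
input := []byte("# Untrusted *input* <script>alert('xss')</script>")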
unsafe := blackfriday.Run(input) html := bluemonday.UGCPolicy().SanitizeBytes(unsafe) ``` ### Custom options, v1 If you want to customize the set of options, first get a renderer (currently only the HTML output engine), then use it to call the more general `Markdown` function. For examples, see the implementations of `MarkdownBasic` and `MarkdownCommon` in `markdown.go`. ### Custom options, v2 If you want to customize the set of options, use `blackfriday.WithExtensions`, `blackfriday.WithRenderer` and `blackfriday.WithRefOverride`. ### `blackfriday-tool` You can also check out `blackfriday-tool` for a more complete example of how to use it. Download and install it using: go get github.com/russross/blackfriday-tool This is a simple command-line tool that allows you to process a markdown file using a standalone program. You can also browse the source directly on github if you are just looking for some example code: * Note that if you have not already done so, installing `blackfriday-tool` will be sufficient to download and install blackfriday in addition to the tool itself. The tool binary will be installed in `$GOPATH/bin`. This is a statically-linked binary that can be copied to wherever you need it without worrying about dependencies and library versions. ### Sanitized anchor names Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when `EXTENSION_AUTO_HEADER_IDS` is enabled. The algorithm has a specification, so that other packages can create compatible anchor names and links to those anchors. The specification is located at https://godoc.org/github.com/russross/blackfriday#hdr-Sanitized_Anchor_Names. [`SanitizedAnchorName`](https://godoc.org/github.com/russross/blackfriday#SanitizedAnchorName) exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at [`github.com/shurcooL/sanitized_anchor_name`](https://godoc.org/github.com/shurcooL/sanitized_anchor_name). It can be useful for clients that want a small package and don't need full functionality of blackfriday. Features -------- All features of Sundown are supported, including: * **Compatibility**. The Markdown v1.0.3 test suite passes with the `--tidy` option. Without `--tidy`, the differences are mostly in whitespace and entity escaping, where blackfriday is more consistent and cleaner. * **Common extensions**, including table support, fenced code blocks, autolinks, strikethroughs, non-strict emphasis, etc. * **Safety**. Blackfriday is paranoid when parsing, making it safe to feed untrusted user input without fear of bad things happening. The test suite stress tests this and there are no known inputs that make it crash. If you find one, please let me know and send me the input that does it. NOTE: "safety" in this context means *runtime safety only*. In order to protect yourself against JavaScript injection in untrusted content, see [this example](https://github.com/russross/blackfriday#sanitize-untrusted-content). * **Fast processing**. It is fast enough to render on-demand in most web applications without having to cache the output. * **Thread safety**. You can run multiple parsers in different goroutines without ill effect. There is no dependence on global shared state. * **Minimal dependencies**. Blackfriday only depends on standard library packages in Go. 
  The source code is pretty self-contained, so it is easy to add to any
  project, including Google App Engine projects.

* **Standards compliant**. Output successfully validates using the W3C
  validation tool for HTML 4.01 and XHTML 1.0 Transitional.


Extensions
----------

In addition to the standard markdown syntax, this package implements the
following extensions:

* **Intra-word emphasis suppression**. The `_` character is commonly used
  inside words when discussing code, so having markdown interpret it as an
  emphasis command is usually the wrong thing. Blackfriday lets you treat all
  emphasis markers as normal characters when they occur inside a word.

* **Tables**. Tables can be created by drawing them in the input using a
  simple syntax:

  ```
  Name    | Age
  --------|------
  Bob     | 27
  Alice   | 23
  ```

* **Fenced code blocks**. In addition to the normal 4-space indentation to
  mark code blocks, you can explicitly mark them and supply a language (to
  make syntax highlighting simple). Just mark it like this:

  ``` go
  func getTrue() bool {
      return true
  }
  ```

  You can use 3 or more backticks to mark the beginning of the block, and the
  same number to mark the end of the block.

  To preserve classes of fenced code blocks while using the bluemonday
  HTML sanitizer, use the following policy:

  ``` go
  p := bluemonday.UGCPolicy()
  p.AllowAttrs("class").Matching(regexp.MustCompile("^language-[a-zA-Z0-9]+$")).OnElements("code")
  html := p.SanitizeBytes(unsafe)
  ```

* **Definition lists**. A simple definition list is made of a single-line
  term followed by a colon and the definition for that term.

      Cat
      : Fluffy animal everyone likes

      Internet
      : Vector of transmission for pictures of cats

  Terms must be separated from the previous definition by a blank line.

* **Footnotes**. A marker in the text that will become a superscript number;
  a footnote definition that will be placed in a list of footnotes at the end
  of the document. A footnote looks like this:

      This is a footnote.[^1]

      [^1]: the footnote text.

* **Autolinking**. Blackfriday can find URLs that have not been explicitly
  marked as links and turn them into links.

* **Strikethrough**. Use two tildes (`~~`) to mark text that should be
  crossed out.

* **Hard line breaks**. With this extension enabled (it is off by default in
  the `MarkdownBasic` and `MarkdownCommon` convenience functions), newlines
  in the input translate into line breaks in the output.

* **Smart quotes**. Smartypants-style punctuation substitution is supported,
  turning normal double- and single-quote marks into curly quotes, etc.

* **LaTeX-style dash parsing** is an additional option, where `--` is
  translated into `&ndash;`, and `---` is translated into `&mdash;`. This
  differs from most smartypants processors, which turn a single hyphen into an
  ndash and a double hyphen into an mdash.

* **Smart fractions**, where anything that looks like a fraction is translated
  into suitable HTML (instead of just a few special cases like most
  smartypants processors). For example, `4/5` becomes
  `<sup>4</sup>&frasl;<sub>5</sub>`, which renders as
  <sup>4</sup>&frasl;<sub>5</sub>.


Other renderers
---------------

Blackfriday is structured to allow alternative rendering engines. Here are a
few of note:

* [github_flavored_markdown](https://godoc.org/github.com/shurcooL/github_flavored_markdown):
  provides a GitHub Flavored Markdown renderer with fenced code block
  highlighting, clickable heading anchor links.
It's not customizable, and its goal is to produce HTML output equivalent to the [GitHub Markdown API endpoint](https://developer.github.com/v3/markdown/#render-a-markdown-document-in-raw-mode), except the rendering is performed locally. * [markdownfmt](https://github.com/shurcooL/markdownfmt): like gofmt, but for markdown. * [LaTeX output](https://bitbucket.org/ambrevar/blackfriday-latex): renders output as LaTeX. TODO ---- * More unit testing * Improve Unicode support. It does not understand all Unicode rules (about what constitutes a letter, a punctuation symbol, etc.), so it may fail to detect word boundaries correctly in some instances. It is safe on all UTF-8 input. License ------- [Blackfriday is distributed under the Simplified BSD License](LICENSE.txt) [1]: https://daringfireball.net/projects/markdown/ "Markdown" [2]: https://golang.org/ "Go Language" [3]: https://github.com/vmg/sundown "Sundown" [4]: https://godoc.org/gopkg.in/russross/blackfriday.v2#Parse "Parse func" [5]: https://github.com/microcosm-cc/bluemonday "Bluemonday" [6]: https://labix.org/gopkg.in "gopkg.in" [7]: https://github.com/golang/dep/ "dep" [8]: https://github.com/Masterminds/glide "Glide" [BuildSVG]: https://travis-ci.org/russross/blackfriday.svg?branch=master [BuildURL]: https://travis-ci.org/russross/blackfriday [GodocV2SVG]: https://godoc.org/gopkg.in/russross/blackfriday.v2?status.svg [GodocV2URL]: https://godoc.org/gopkg.in/russross/blackfriday.v2 blackfriday-1.5.1/block.go000066400000000000000000000732771324224440100154260ustar00rootroot00000000000000// // Blackfriday Markdown Processor // Available at http://github.com/russross/blackfriday // // Copyright © 2011 Russ Ross . // Distributed under the Simplified BSD License. // See README.md for details. // // // Functions to parse block-level elements. // package blackfriday import ( "bytes" "unicode" ) // Parse block-level data. // Note: this function and many that it calls assume that // the input buffer ends with a newline. func (p *parser) block(out *bytes.Buffer, data []byte) { if len(data) == 0 || data[len(data)-1] != '\n' { panic("block input is missing terminating newline") } // this is called recursively: enforce a maximum depth if p.nesting >= p.maxNesting { return } p.nesting++ // parse out one block-level construct at a time for len(data) > 0 { // prefixed header: // // # Header 1 // ## Header 2 // ... // ###### Header 6 if p.isPrefixHeader(data) { data = data[p.prefixHeader(out, data):] continue } // block of preformatted HTML: // //
// <div>
	//     ...
	// </div>
if data[0] == '<' { if i := p.html(out, data, true); i > 0 { data = data[i:] continue } } // title block // // % stuff // % more stuff // % even more stuff if p.flags&EXTENSION_TITLEBLOCK != 0 { if data[0] == '%' { if i := p.titleBlock(out, data, true); i > 0 { data = data[i:] continue } } } // blank lines. note: returns the # of bytes to skip if i := p.isEmpty(data); i > 0 { data = data[i:] continue } // indented code block: // // func max(a, b int) int { // if a > b { // return a // } // return b // } if p.codePrefix(data) > 0 { data = data[p.code(out, data):] continue } // fenced code block: // // ``` go // func fact(n int) int { // if n <= 1 { // return n // } // return n * fact(n-1) // } // ``` if p.flags&EXTENSION_FENCED_CODE != 0 { if i := p.fencedCodeBlock(out, data, true); i > 0 { data = data[i:] continue } } // horizontal rule: // // ------ // or // ****** // or // ______ if p.isHRule(data) { p.r.HRule(out) var i int for i = 0; data[i] != '\n'; i++ { } data = data[i:] continue } // block quote: // // > A big quote I found somewhere // > on the web if p.quotePrefix(data) > 0 { data = data[p.quote(out, data):] continue } // table: // // Name | Age | Phone // ------|-----|--------- // Bob | 31 | 555-1234 // Alice | 27 | 555-4321 if p.flags&EXTENSION_TABLES != 0 { if i := p.table(out, data); i > 0 { data = data[i:] continue } } // an itemized/unordered list: // // * Item 1 // * Item 2 // // also works with + or - if p.uliPrefix(data) > 0 { data = data[p.list(out, data, 0):] continue } // a numbered/ordered list: // // 1. Item 1 // 2. Item 2 if p.oliPrefix(data) > 0 { data = data[p.list(out, data, LIST_TYPE_ORDERED):] continue } // definition lists: // // Term 1 // : Definition a // : Definition b // // Term 2 // : Definition c if p.flags&EXTENSION_DEFINITION_LISTS != 0 { if p.dliPrefix(data) > 0 { data = data[p.list(out, data, LIST_TYPE_DEFINITION):] continue } } // anything else must look like a normal paragraph // note: this finds underlined headers, too data = data[p.paragraph(out, data):] } p.nesting-- } func (p *parser) isPrefixHeader(data []byte) bool { if data[0] != '#' { return false } if p.flags&EXTENSION_SPACE_HEADERS != 0 { level := 0 for level < 6 && data[level] == '#' { level++ } if data[level] != ' ' { return false } } return true } func (p *parser) prefixHeader(out *bytes.Buffer, data []byte) int { level := 0 for level < 6 && data[level] == '#' { level++ } i := skipChar(data, level, ' ') end := skipUntilChar(data, i, '\n') skip := end id := "" if p.flags&EXTENSION_HEADER_IDS != 0 { j, k := 0, 0 // find start/end of header id for j = i; j < end-1 && (data[j] != '{' || data[j+1] != '#'); j++ { } for k = j + 1; k < end && data[k] != '}'; k++ { } // extract header id iff found if j < end && k < end { id = string(data[j+2 : k]) end = j skip = k + 1 for end > 0 && data[end-1] == ' ' { end-- } } } for end > 0 && data[end-1] == '#' { if isBackslashEscaped(data, end-1) { break } end-- } for end > 0 && data[end-1] == ' ' { end-- } if end > i { if id == "" && p.flags&EXTENSION_AUTO_HEADER_IDS != 0 { id = SanitizedAnchorName(string(data[i:end])) } work := func() bool { p.inline(out, data[i:end]) return true } p.r.Header(out, work, level, id) } return skip } func (p *parser) isUnderlinedHeader(data []byte) int { // test of level 1 header if data[0] == '=' { i := skipChar(data, 1, '=') i = skipChar(data, i, ' ') if data[i] == '\n' { return 1 } else { return 0 } } // test of level 2 header if data[0] == '-' { i := skipChar(data, 1, '-') i = skipChar(data, i, ' ') if data[i] == '\n' { 
return 2 } else { return 0 } } return 0 } func (p *parser) titleBlock(out *bytes.Buffer, data []byte, doRender bool) int { if data[0] != '%' { return 0 } splitData := bytes.Split(data, []byte("\n")) var i int for idx, b := range splitData { if !bytes.HasPrefix(b, []byte("%")) { i = idx // - 1 break } } data = bytes.Join(splitData[0:i], []byte("\n")) p.r.TitleBlock(out, data) return len(data) } func (p *parser) html(out *bytes.Buffer, data []byte, doRender bool) int { var i, j int // identify the opening tag if data[0] != '<' { return 0 } curtag, tagfound := p.htmlFindTag(data[1:]) // handle special cases if !tagfound { // check for an HTML comment if size := p.htmlComment(out, data, doRender); size > 0 { return size } // check for an
tag if size := p.htmlHr(out, data, doRender); size > 0 { return size } // check for HTML CDATA if size := p.htmlCDATA(out, data, doRender); size > 0 { return size } // no special case recognized return 0 } // look for an unindented matching closing tag // followed by a blank line found := false /* closetag := []byte("\n") j = len(curtag) + 1 for !found { // scan for a closing tag at the beginning of a line if skip := bytes.Index(data[j:], closetag); skip >= 0 { j += skip + len(closetag) } else { break } // see if it is the only thing on the line if skip := p.isEmpty(data[j:]); skip > 0 { // see if it is followed by a blank line/eof j += skip if j >= len(data) { found = true i = j } else { if skip := p.isEmpty(data[j:]); skip > 0 { j += skip found = true i = j } } } } */ // if not found, try a second pass looking for indented match // but not if tag is "ins" or "del" (following original Markdown.pl) if !found && curtag != "ins" && curtag != "del" { i = 1 for i < len(data) { i++ for i < len(data) && !(data[i-1] == '<' && data[i] == '/') { i++ } if i+2+len(curtag) >= len(data) { break } j = p.htmlFindEnd(curtag, data[i-1:]) if j > 0 { i += j - 1 found = true break } } } if !found { return 0 } // the end of the block has been found if doRender { // trim newlines end := i for end > 0 && data[end-1] == '\n' { end-- } p.r.BlockHtml(out, data[:end]) } return i } func (p *parser) renderHTMLBlock(out *bytes.Buffer, data []byte, start int, doRender bool) int { // html block needs to end with a blank line if i := p.isEmpty(data[start:]); i > 0 { size := start + i if doRender { // trim trailing newlines end := size for end > 0 && data[end-1] == '\n' { end-- } p.r.BlockHtml(out, data[:end]) } return size } return 0 } // HTML comment, lax form func (p *parser) htmlComment(out *bytes.Buffer, data []byte, doRender bool) int { i := p.inlineHTMLComment(out, data) return p.renderHTMLBlock(out, data, i, doRender) } // HTML CDATA section func (p *parser) htmlCDATA(out *bytes.Buffer, data []byte, doRender bool) int { const cdataTag = "') { i++ } i++ // no end-of-comment marker if i >= len(data) { return 0 } return p.renderHTMLBlock(out, data, i, doRender) } // HR, which is the only self-closing block tag considered func (p *parser) htmlHr(out *bytes.Buffer, data []byte, doRender bool) int { if data[0] != '<' || (data[1] != 'h' && data[1] != 'H') || (data[2] != 'r' && data[2] != 'R') { return 0 } if data[3] != ' ' && data[3] != '/' && data[3] != '>' { // not an
tag after all; at least not a valid one return 0 } i := 3 for data[i] != '>' && data[i] != '\n' { i++ } if data[i] == '>' { return p.renderHTMLBlock(out, data, i+1, doRender) } return 0 } func (p *parser) htmlFindTag(data []byte) (string, bool) { i := 0 for isalnum(data[i]) { i++ } key := string(data[:i]) if _, ok := blockTags[key]; ok { return key, true } return "", false } func (p *parser) htmlFindEnd(tag string, data []byte) int { // assume data[0] == '<' && data[1] == '/' already tested // check if tag is a match closetag := []byte("") if !bytes.HasPrefix(data, closetag) { return 0 } i := len(closetag) // check that the rest of the line is blank skip := 0 if skip = p.isEmpty(data[i:]); skip == 0 { return 0 } i += skip skip = 0 if i >= len(data) { return i } if p.flags&EXTENSION_LAX_HTML_BLOCKS != 0 { return i } if skip = p.isEmpty(data[i:]); skip == 0 { // following line must be blank return 0 } return i + skip } func (*parser) isEmpty(data []byte) int { // it is okay to call isEmpty on an empty buffer if len(data) == 0 { return 0 } var i int for i = 0; i < len(data) && data[i] != '\n'; i++ { if data[i] != ' ' && data[i] != '\t' { return 0 } } return i + 1 } func (*parser) isHRule(data []byte) bool { i := 0 // skip up to three spaces for i < 3 && data[i] == ' ' { i++ } // look at the hrule char if data[i] != '*' && data[i] != '-' && data[i] != '_' { return false } c := data[i] // the whole line must be the char or whitespace n := 0 for data[i] != '\n' { switch { case data[i] == c: n++ case data[i] != ' ': return false } i++ } return n >= 3 } // isFenceLine checks if there's a fence line (e.g., ``` or ``` go) at the beginning of data, // and returns the end index if so, or 0 otherwise. It also returns the marker found. // If syntax is not nil, it gets set to the syntax specified in the fence line. // A final newline is mandatory to recognize the fence line, unless newlineOptional is true. func isFenceLine(data []byte, syntax *string, oldmarker string, newlineOptional bool) (end int, marker string) { i, size := 0, 0 // skip up to three spaces for i < len(data) && i < 3 && data[i] == ' ' { i++ } // check for the marker characters: ~ or ` if i >= len(data) { return 0, "" } if data[i] != '~' && data[i] != '`' { return 0, "" } c := data[i] // the whole line must be the same char or whitespace for i < len(data) && data[i] == c { size++ i++ } // the marker char must occur at least 3 times if size < 3 { return 0, "" } marker = string(data[i-size : i]) // if this is the end marker, it must match the beginning marker if oldmarker != "" && marker != oldmarker { return 0, "" } // TODO(shurcooL): It's probably a good idea to simplify the 2 code paths here // into one, always get the syntax, and discard it if the caller doesn't care. 
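	// When the caller asked for it, capture the optional info string that
	// follows the fence: either a bare word (e.g. "``` go") or a
	// "{curly-braced}" token whose leading/trailing whitespace inside the
	// braces is stripped.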
if syntax != nil { syn := 0 i = skipChar(data, i, ' ') if i >= len(data) { if newlineOptional && i == len(data) { return i, marker } return 0, "" } syntaxStart := i if data[i] == '{' { i++ syntaxStart++ for i < len(data) && data[i] != '}' && data[i] != '\n' { syn++ i++ } if i >= len(data) || data[i] != '}' { return 0, "" } // strip all whitespace at the beginning and the end // of the {} block for syn > 0 && isspace(data[syntaxStart]) { syntaxStart++ syn-- } for syn > 0 && isspace(data[syntaxStart+syn-1]) { syn-- } i++ } else { for i < len(data) && !isspace(data[i]) { syn++ i++ } } *syntax = string(data[syntaxStart : syntaxStart+syn]) } i = skipChar(data, i, ' ') if i >= len(data) || data[i] != '\n' { if newlineOptional && i == len(data) { return i, marker } return 0, "" } return i + 1, marker // Take newline into account. } // fencedCodeBlock returns the end index if data contains a fenced code block at the beginning, // or 0 otherwise. It writes to out if doRender is true, otherwise it has no side effects. // If doRender is true, a final newline is mandatory to recognize the fenced code block. func (p *parser) fencedCodeBlock(out *bytes.Buffer, data []byte, doRender bool) int { var syntax string beg, marker := isFenceLine(data, &syntax, "", false) if beg == 0 || beg >= len(data) { return 0 } var work bytes.Buffer for { // safe to assume beg < len(data) // check for the end of the code block newlineOptional := !doRender fenceEnd, _ := isFenceLine(data[beg:], nil, marker, newlineOptional) if fenceEnd != 0 { beg += fenceEnd break } // copy the current line end := skipUntilChar(data, beg, '\n') + 1 // did we reach the end of the buffer without a closing marker? if end >= len(data) { return 0 } // verbatim copy to the working buffer if doRender { work.Write(data[beg:end]) } beg = end } if doRender { p.r.BlockCode(out, work.Bytes(), syntax) } return beg } func (p *parser) table(out *bytes.Buffer, data []byte) int { var header bytes.Buffer i, columns := p.tableHeader(&header, data) if i == 0 { return 0 } var body bytes.Buffer for i < len(data) { pipes, rowStart := 0, i for ; data[i] != '\n'; i++ { if data[i] == '|' { pipes++ } } if pipes == 0 { i = rowStart break } // include the newline in data sent to tableRow i++ p.tableRow(&body, data[rowStart:i], columns, false) } p.r.Table(out, header.Bytes(), body.Bytes(), columns) return i } // check if the specified position is preceded by an odd number of backslashes func isBackslashEscaped(data []byte, i int) bool { backslashes := 0 for i-backslashes-1 >= 0 && data[i-backslashes-1] == '\\' { backslashes++ } return backslashes&1 == 1 } func (p *parser) tableHeader(out *bytes.Buffer, data []byte) (size int, columns []int) { i := 0 colCount := 1 for i = 0; data[i] != '\n'; i++ { if data[i] == '|' && !isBackslashEscaped(data, i) { colCount++ } } // doesn't look like a table header if colCount == 1 { return } // include the newline in the data sent to tableRow header := data[:i+1] // column count ignores pipes at beginning or end of line if data[0] == '|' { colCount-- } if i > 2 && data[i-1] == '|' && !isBackslashEscaped(data, i-1) { colCount-- } columns = make([]int, colCount) // move on to the header underline i++ if i >= len(data) { return } if data[i] == '|' && !isBackslashEscaped(data, i) { i++ } i = skipChar(data, i, ' ') // each column header is of form: / *:?-+:? 
*|/ with # dashes + # colons >= 3 // and trailing | optional on last column col := 0 for data[i] != '\n' { dashes := 0 if data[i] == ':' { i++ columns[col] |= TABLE_ALIGNMENT_LEFT dashes++ } for data[i] == '-' { i++ dashes++ } if data[i] == ':' { i++ columns[col] |= TABLE_ALIGNMENT_RIGHT dashes++ } for data[i] == ' ' { i++ } // end of column test is messy switch { case dashes < 3: // not a valid column return case data[i] == '|' && !isBackslashEscaped(data, i): // marker found, now skip past trailing whitespace col++ i++ for data[i] == ' ' { i++ } // trailing junk found after last column if col >= colCount && data[i] != '\n' { return } case (data[i] != '|' || isBackslashEscaped(data, i)) && col+1 < colCount: // something else found where marker was required return case data[i] == '\n': // marker is optional for the last column col++ default: // trailing junk found after last column return } } if col != colCount { return } p.tableRow(out, header, columns, true) size = i + 1 return } func (p *parser) tableRow(out *bytes.Buffer, data []byte, columns []int, header bool) { i, col := 0, 0 var rowWork bytes.Buffer if data[i] == '|' && !isBackslashEscaped(data, i) { i++ } for col = 0; col < len(columns) && i < len(data); col++ { for data[i] == ' ' { i++ } cellStart := i for (data[i] != '|' || isBackslashEscaped(data, i)) && data[i] != '\n' { i++ } cellEnd := i // skip the end-of-cell marker, possibly taking us past end of buffer i++ for cellEnd > cellStart && data[cellEnd-1] == ' ' { cellEnd-- } var cellWork bytes.Buffer p.inline(&cellWork, data[cellStart:cellEnd]) if header { p.r.TableHeaderCell(&rowWork, cellWork.Bytes(), columns[col]) } else { p.r.TableCell(&rowWork, cellWork.Bytes(), columns[col]) } } // pad it out with empty columns to get the right number for ; col < len(columns); col++ { if header { p.r.TableHeaderCell(&rowWork, nil, columns[col]) } else { p.r.TableCell(&rowWork, nil, columns[col]) } } // silently ignore rows with too many cells p.r.TableRow(out, rowWork.Bytes()) } // returns blockquote prefix length func (p *parser) quotePrefix(data []byte) int { i := 0 for i < 3 && data[i] == ' ' { i++ } if data[i] == '>' { if data[i+1] == ' ' { return i + 2 } return i + 1 } return 0 } // blockquote ends with at least one blank line // followed by something without a blockquote prefix func (p *parser) terminateBlockquote(data []byte, beg, end int) bool { if p.isEmpty(data[beg:]) <= 0 { return false } if end >= len(data) { return true } return p.quotePrefix(data[end:]) == 0 && p.isEmpty(data[end:]) == 0 } // parse a blockquote fragment func (p *parser) quote(out *bytes.Buffer, data []byte) int { var raw bytes.Buffer beg, end := 0, 0 for beg < len(data) { end = beg // Step over whole lines, collecting them. 
// While doing that, check for
		// fenced code and if one's found, incorporate it altogether,
		// regardless of any contents inside it
		for data[end] != '\n' {
			if p.flags&EXTENSION_FENCED_CODE != 0 {
				if i := p.fencedCodeBlock(out, data[end:], false); i > 0 {
					// -1 to compensate for the extra end++ after the loop:
					end += i - 1
					break
				}
			}
			end++
		}
		end++

		if pre := p.quotePrefix(data[beg:]); pre > 0 {
			// skip the prefix
			beg += pre
		} else if p.terminateBlockquote(data, beg, end) {
			break
		}

		// this line is part of the blockquote
		raw.Write(data[beg:end])
		beg = end
	}

	var cooked bytes.Buffer
	p.block(&cooked, raw.Bytes())
	p.r.BlockQuote(out, cooked.Bytes())
	return end
}

// returns prefix length for block code
func (p *parser) codePrefix(data []byte) int {
	if data[0] == ' ' && data[1] == ' ' && data[2] == ' ' && data[3] == ' ' {
		return 4
	}
	return 0
}

func (p *parser) code(out *bytes.Buffer, data []byte) int {
	var work bytes.Buffer

	i := 0
	for i < len(data) {
		beg := i
		for data[i] != '\n' {
			i++
		}
		i++

		blankline := p.isEmpty(data[beg:i]) > 0
		if pre := p.codePrefix(data[beg:i]); pre > 0 {
			beg += pre
		} else if !blankline {
			// non-empty, non-prefixed line breaks the pre
			i = beg
			break
		}

		// verbatim copy to the working buffer
		if blankline {
			work.WriteByte('\n')
		} else {
			work.Write(data[beg:i])
		}
	}

	// trim all the \n off the end of work
	workbytes := work.Bytes()
	eol := len(workbytes)
	for eol > 0 && workbytes[eol-1] == '\n' {
		eol--
	}
	if eol != len(workbytes) {
		work.Truncate(eol)
	}

	work.WriteByte('\n')

	p.r.BlockCode(out, work.Bytes(), "")

	return i
}

// returns unordered list item prefix
func (p *parser) uliPrefix(data []byte) int {
	i := 0

	// start with up to 3 spaces
	for i < 3 && data[i] == ' ' {
		i++
	}

	// need a *, +, or - followed by a space
	if (data[i] != '*' && data[i] != '+' && data[i] != '-') || data[i+1] != ' ' {
		return 0
	}
	return i + 2
}

// returns ordered list item prefix
func (p *parser) oliPrefix(data []byte) int {
	i := 0

	// start with up to 3 spaces
	for i < 3 && data[i] == ' ' {
		i++
	}

	// count the digits
	start := i
	for data[i] >= '0' && data[i] <= '9' {
		i++
	}

	// we need >= 1 digits followed by a dot and a space
	if start == i || data[i] != '.' || data[i+1] != ' ' {
		return 0
	}
	return i + 2
}

// returns definition list item prefix
func (p *parser) dliPrefix(data []byte) int {
	i := 0

	// need a : followed by a space
	if data[i] != ':' || data[i+1] != ' ' {
		return 0
	}
	for data[i] == ' ' {
		i++
	}
	return i + 2
}

// parse ordered or unordered list block
func (p *parser) list(out *bytes.Buffer, data []byte, flags int) int {
	i := 0
	flags |= LIST_ITEM_BEGINNING_OF_LIST
	work := func() bool {
		for i < len(data) {
			skip := p.listItem(out, data[i:], &flags)
			i += skip

			if skip == 0 || flags&LIST_ITEM_END_OF_LIST != 0 {
				break
			}
			flags &= ^LIST_ITEM_BEGINNING_OF_LIST
		}
		return true
	}

	p.r.List(out, work, flags)
	return i
}

// Parse a single list item.
// Assumes initial prefix is already removed if this is a sublist.
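// It returns the number of bytes of data consumed by the item.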
func (p *parser) listItem(out *bytes.Buffer, data []byte, flags *int) int { // keep track of the indentation of the first line itemIndent := 0 for itemIndent < 3 && data[itemIndent] == ' ' { itemIndent++ } i := p.uliPrefix(data) if i == 0 { i = p.oliPrefix(data) } if i == 0 { i = p.dliPrefix(data) // reset definition term flag if i > 0 { *flags &= ^LIST_TYPE_TERM } } if i == 0 { // if in defnition list, set term flag and continue if *flags&LIST_TYPE_DEFINITION != 0 { *flags |= LIST_TYPE_TERM } else { return 0 } } // skip leading whitespace on first line for data[i] == ' ' { i++ } // find the end of the line line := i for i > 0 && data[i-1] != '\n' { i++ } // get working buffer var raw bytes.Buffer // put the first line into the working buffer raw.Write(data[line:i]) line = i // process the following lines containsBlankLine := false sublist := 0 gatherlines: for line < len(data) { i++ // find the end of this line for data[i-1] != '\n' { i++ } // if it is an empty line, guess that it is part of this item // and move on to the next line if p.isEmpty(data[line:i]) > 0 { containsBlankLine = true raw.Write(data[line:i]) line = i continue } // calculate the indentation indent := 0 for indent < 4 && line+indent < i && data[line+indent] == ' ' { indent++ } chunk := data[line+indent : i] // evaluate how this line fits in switch { // is this a nested list item? case (p.uliPrefix(chunk) > 0 && !p.isHRule(chunk)) || p.oliPrefix(chunk) > 0 || p.dliPrefix(chunk) > 0: if containsBlankLine { // end the list if the type changed after a blank line if indent <= itemIndent && ((*flags&LIST_TYPE_ORDERED != 0 && p.uliPrefix(chunk) > 0) || (*flags&LIST_TYPE_ORDERED == 0 && p.oliPrefix(chunk) > 0)) { *flags |= LIST_ITEM_END_OF_LIST break gatherlines } *flags |= LIST_ITEM_CONTAINS_BLOCK } // to be a nested list, it must be indented more // if not, it is the next item in the same list if indent <= itemIndent { break gatherlines } // is this the first item in the nested list? if sublist == 0 { sublist = raw.Len() } // is this a nested prefix header? case p.isPrefixHeader(chunk): // if the header is not indented, it is not nested in the list // and thus ends the list if containsBlankLine && indent < 4 { *flags |= LIST_ITEM_END_OF_LIST break gatherlines } *flags |= LIST_ITEM_CONTAINS_BLOCK // anything following an empty line is only part // of this item if it is indented 4 spaces // (regardless of the indentation of the beginning of the item) case containsBlankLine && indent < 4: if *flags&LIST_TYPE_DEFINITION != 0 && i < len(data)-1 { // is the next item still a part of this list? next := i for data[next] != '\n' { next++ } for next < len(data)-1 && data[next] == '\n' { next++ } if i < len(data)-1 && data[i] != ':' && data[next] != ':' { *flags |= LIST_ITEM_END_OF_LIST } } else { *flags |= LIST_ITEM_END_OF_LIST } break gatherlines // a blank line means this should be parsed as a block case containsBlankLine: *flags |= LIST_ITEM_CONTAINS_BLOCK } containsBlankLine = false // add the line into the working buffer without prefix raw.Write(data[line+indent : i]) line = i } // If reached end of data, the Renderer.ListItem call we're going to make below // is definitely the last in the list. 
if line >= len(data) { *flags |= LIST_ITEM_END_OF_LIST } rawBytes := raw.Bytes() // render the contents of the list item var cooked bytes.Buffer if *flags&LIST_ITEM_CONTAINS_BLOCK != 0 && *flags&LIST_TYPE_TERM == 0 { // intermediate render of block item, except for definition term if sublist > 0 { p.block(&cooked, rawBytes[:sublist]) p.block(&cooked, rawBytes[sublist:]) } else { p.block(&cooked, rawBytes) } } else { // intermediate render of inline item if sublist > 0 { p.inline(&cooked, rawBytes[:sublist]) p.block(&cooked, rawBytes[sublist:]) } else { p.inline(&cooked, rawBytes) } } // render the actual list item cookedBytes := cooked.Bytes() parsedEnd := len(cookedBytes) // strip trailing newlines for parsedEnd > 0 && cookedBytes[parsedEnd-1] == '\n' { parsedEnd-- } p.r.ListItem(out, cookedBytes[:parsedEnd], *flags) return line } // render a single paragraph that has already been parsed out func (p *parser) renderParagraph(out *bytes.Buffer, data []byte) { if len(data) == 0 { return } // trim leading spaces beg := 0 for data[beg] == ' ' { beg++ } // trim trailing newline end := len(data) - 1 // trim trailing spaces for end > beg && data[end-1] == ' ' { end-- } work := func() bool { p.inline(out, data[beg:end]) return true } p.r.Paragraph(out, work) } func (p *parser) paragraph(out *bytes.Buffer, data []byte) int { // prev: index of 1st char of previous line // line: index of 1st char of current line // i: index of cursor/end of current line var prev, line, i int // keep going until we find something to mark the end of the paragraph for i < len(data) { // mark the beginning of the current line prev = line current := data[i:] line = i // did we find a blank line marking the end of the paragraph? if n := p.isEmpty(current); n > 0 { // did this blank line followed by a definition list item? 
if p.flags&EXTENSION_DEFINITION_LISTS != 0 { if i < len(data)-1 && data[i+1] == ':' { return p.list(out, data[prev:], LIST_TYPE_DEFINITION) } } p.renderParagraph(out, data[:i]) return i + n } // an underline under some text marks a header, so our paragraph ended on prev line if i > 0 { if level := p.isUnderlinedHeader(current); level > 0 { // render the paragraph p.renderParagraph(out, data[:prev]) // ignore leading and trailing whitespace eol := i - 1 for prev < eol && data[prev] == ' ' { prev++ } for eol > prev && data[eol-1] == ' ' { eol-- } // render the header // this ugly double closure avoids forcing variables onto the heap work := func(o *bytes.Buffer, pp *parser, d []byte) func() bool { return func() bool { pp.inline(o, d) return true } }(out, p, data[prev:eol]) id := "" if p.flags&EXTENSION_AUTO_HEADER_IDS != 0 { id = SanitizedAnchorName(string(data[prev:eol])) } p.r.Header(out, work, level, id) // find the end of the underline for data[i] != '\n' { i++ } return i } } // if the next line starts a block of HTML, then the paragraph ends here if p.flags&EXTENSION_LAX_HTML_BLOCKS != 0 { if data[i] == '<' && p.html(out, current, false) > 0 { // rewind to before the HTML block p.renderParagraph(out, data[:i]) return i } } // if there's a prefixed header or a horizontal rule after this, paragraph is over if p.isPrefixHeader(current) || p.isHRule(current) { p.renderParagraph(out, data[:i]) return i } // if there's a fenced code block, paragraph is over if p.flags&EXTENSION_FENCED_CODE != 0 { if p.fencedCodeBlock(out, current, false) > 0 { p.renderParagraph(out, data[:i]) return i } } // if there's a definition list item, prev line is a definition term if p.flags&EXTENSION_DEFINITION_LISTS != 0 { if p.dliPrefix(current) != 0 { return p.list(out, data[prev:], LIST_TYPE_DEFINITION) } } // if there's a list after this, paragraph is over if p.flags&EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK != 0 { if p.uliPrefix(current) != 0 || p.oliPrefix(current) != 0 || p.quotePrefix(current) != 0 || p.codePrefix(current) != 0 { p.renderParagraph(out, data[:i]) return i } } // otherwise, scan to the beginning of the next line for data[i] != '\n' { i++ } i++ } p.renderParagraph(out, data[:i]) return i } // SanitizedAnchorName returns a sanitized anchor name for the given text. // // It implements the algorithm specified in the package comment. func SanitizedAnchorName(text string) string { var anchorName []rune futureDash := false for _, r := range text { switch { case unicode.IsLetter(r) || unicode.IsNumber(r): if futureDash && len(anchorName) > 0 { anchorName = append(anchorName, '-') } futureDash = false anchorName = append(anchorName, unicode.ToLower(r)) default: futureDash = true } } return string(anchorName) } blackfriday-1.5.1/block_test.go000066400000000000000000001414211324224440100164500ustar00rootroot00000000000000// // Blackfriday Markdown Processor // Available at http://github.com/russross/blackfriday // // Copyright © 2011 Russ Ross . // Distributed under the Simplified BSD License. // See README.md for details. 
// // // Unit tests for block parsing // package blackfriday import ( "strings" "testing" ) func runMarkdownBlockWithRenderer(input string, extensions int, renderer Renderer) string { return string(Markdown([]byte(input), renderer, extensions)) } func runMarkdownBlock(input string, extensions int) string { htmlFlags := 0 htmlFlags |= HTML_USE_XHTML renderer := HtmlRenderer(htmlFlags, "", "") return runMarkdownBlockWithRenderer(input, extensions, renderer) } func runnerWithRendererParameters(parameters HtmlRendererParameters) func(string, int) string { return func(input string, extensions int) string { htmlFlags := 0 htmlFlags |= HTML_USE_XHTML renderer := HtmlRendererWithParameters(htmlFlags, "", "", parameters) return runMarkdownBlockWithRenderer(input, extensions, renderer) } } func doTestsBlock(t *testing.T, tests []string, extensions int) { doTestsBlockWithRunner(t, tests, extensions, runMarkdownBlock) } func doTestsBlockWithRunner(t *testing.T, tests []string, extensions int, runner func(string, int) string) { // catch and report panics var candidate string defer func() { if err := recover(); err != nil { t.Errorf("\npanic while processing [%#v]: %s\n", candidate, err) } }() for i := 0; i+1 < len(tests); i += 2 { input := tests[i] candidate = input expected := tests[i+1] actual := runner(candidate, extensions) if actual != expected { t.Errorf("\nInput [%#v]\nExpected[%#v]\nActual [%#v]", candidate, expected, actual) } // now test every substring to stress test bounds checking if !testing.Short() { for start := 0; start < len(input); start++ { for end := start + 1; end <= len(input); end++ { candidate = input[start:end] _ = runMarkdownBlock(candidate, extensions) } } } } } func TestPrefixHeaderNoExtensions(t *testing.T) { var tests = []string{ "# Header 1\n", "

Header 1

\n", "## Header 2\n", "

Header 2

\n", "### Header 3\n", "

Header 3

\n", "#### Header 4\n", "

Header 4

\n", "##### Header 5\n", "
Header 5
\n", "###### Header 6\n", "
Header 6
\n", "####### Header 7\n", "
# Header 7
\n", "#Header 1\n", "

Header 1

\n", "##Header 2\n", "

Header 2

\n", "###Header 3\n", "

Header 3

\n", "####Header 4\n", "

Header 4

\n", "#####Header 5\n", "
Header 5
\n", "######Header 6\n", "
Header 6
\n", "#######Header 7\n", "
#Header 7
\n", "Hello\n# Header 1\nGoodbye\n", "

Hello

\n\n

Header 1

\n\n

Goodbye

\n", "* List\n# Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n * Nested list\n # Nested header\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", "#Header 1 \\#\n", "

Header 1 #

\n", "#Header 1 \\# foo\n", "

Header 1 # foo

\n", "#Header 1 #\\##\n", "

Header 1 ##

\n", } doTestsBlock(t, tests, 0) } func TestPrefixHeaderSpaceExtension(t *testing.T) { var tests = []string{ "# Header 1\n", "

Header 1

\n", "## Header 2\n", "

Header 2

\n", "### Header 3\n", "

Header 3

\n", "#### Header 4\n", "

Header 4

\n", "##### Header 5\n", "
Header 5
\n", "###### Header 6\n", "
Header 6
\n", "####### Header 7\n", "

####### Header 7

\n", "#Header 1\n", "

#Header 1

\n", "##Header 2\n", "

##Header 2

\n", "###Header 3\n", "

###Header 3

\n", "####Header 4\n", "

####Header 4

\n", "#####Header 5\n", "

#####Header 5

\n", "######Header 6\n", "

######Header 6

\n", "#######Header 7\n", "

#######Header 7

\n", "Hello\n# Header 1\nGoodbye\n", "

Hello

\n\n

Header 1

\n\n

Goodbye

\n", "* List\n# Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header\n* List\n", "
    \n
  • List\n#Header
  • \n
  • List
  • \n
\n", "* List\n * Nested list\n # Nested header\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", } doTestsBlock(t, tests, EXTENSION_SPACE_HEADERS) } func TestPrefixHeaderIdExtension(t *testing.T) { var tests = []string{ "# Header 1 {#someid}\n", "

Header 1

\n", "# Header 1 {#someid} \n", "

Header 1

\n", "# Header 1 {#someid}\n", "

Header 1

\n", "# Header 1 {#someid\n", "

Header 1 {#someid

\n", "# Header 1 {#someid\n", "

Header 1 {#someid

\n", "# Header 1 {#someid}}\n", "

Header 1

\n\n

}

\n", "## Header 2 {#someid}\n", "

Header 2

\n", "### Header 3 {#someid}\n", "

Header 3

\n", "#### Header 4 {#someid}\n", "

Header 4

\n", "##### Header 5 {#someid}\n", "
Header 5
\n", "###### Header 6 {#someid}\n", "
Header 6
\n", "####### Header 7 {#someid}\n", "
# Header 7
\n", "# Header 1 # {#someid}\n", "

Header 1

\n", "## Header 2 ## {#someid}\n", "

Header 2

\n", "Hello\n# Header 1\nGoodbye\n", "

Hello

\n\n

Header 1

\n\n

Goodbye

\n", "* List\n# Header {#someid}\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header {#someid}\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n * Nested list\n # Nested header {#someid}\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", } doTestsBlock(t, tests, EXTENSION_HEADER_IDS) } func TestPrefixHeaderIdExtensionWithPrefixAndSuffix(t *testing.T) { var tests = []string{ "# header 1 {#someid}\n", "

header 1

\n", "## header 2 {#someid}\n", "

header 2

\n", "### header 3 {#someid}\n", "

header 3

\n", "#### header 4 {#someid}\n", "

header 4

\n", "##### header 5 {#someid}\n", "
header 5
\n", "###### header 6 {#someid}\n", "
header 6
\n", "####### header 7 {#someid}\n", "
# header 7
\n", "# header 1 # {#someid}\n", "

header 1

\n", "## header 2 ## {#someid}\n", "

header 2

\n", "* List\n# Header {#someid}\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header {#someid}\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n * Nested list\n # Nested header {#someid}\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", } parameters := HtmlRendererParameters{ HeaderIDPrefix: "PRE:", HeaderIDSuffix: ":POST", } doTestsBlockWithRunner(t, tests, EXTENSION_HEADER_IDS, runnerWithRendererParameters(parameters)) } func TestPrefixAutoHeaderIdExtension(t *testing.T) { var tests = []string{ "# Header 1\n", "

Header 1

\n", "# Header 1 \n", "

Header 1

\n", "## Header 2\n", "

Header 2

\n", "### Header 3\n", "

Header 3

\n", "#### Header 4\n", "

Header 4

\n", "##### Header 5\n", "
Header 5
\n", "###### Header 6\n", "
Header 6
\n", "####### Header 7\n", "
# Header 7
\n", "Hello\n# Header 1\nGoodbye\n", "

Hello

\n\n

Header 1

\n\n

Goodbye

\n", "* List\n# Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n * Nested list\n # Nested header\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", "# Header\n\n# Header\n", "

Header

\n\n

Header

\n", "# Header 1\n\n# Header 1", "

Header 1

\n\n

Header 1

\n", "# Header\n\n# Header 1\n\n# Header\n\n# Header", "

Header

\n\n

Header 1

\n\n

Header

\n\n

Header

\n", } doTestsBlock(t, tests, EXTENSION_AUTO_HEADER_IDS) } func TestPrefixAutoHeaderIdExtensionWithPrefixAndSuffix(t *testing.T) { var tests = []string{ "# Header 1\n", "

Header 1

\n", "# Header 1 \n", "

Header 1

\n", "## Header 2\n", "

Header 2

\n", "### Header 3\n", "

Header 3

\n", "#### Header 4\n", "

Header 4

\n", "##### Header 5\n", "
Header 5
\n", "###### Header 6\n", "
Header 6
\n", "####### Header 7\n", "
# Header 7
\n", "Hello\n# Header 1\nGoodbye\n", "

Hello

\n\n

Header 1

\n\n

Goodbye

\n", "* List\n# Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n#Header\n* List\n", "
    \n
  • List

    \n\n

    Header

  • \n\n
  • List

  • \n
\n", "* List\n * Nested list\n # Nested header\n", "
    \n
  • List

    \n\n
      \n
    • Nested list

      \n\n" + "

      Nested header

    • \n
  • \n
\n", "# Header\n\n# Header\n", "

Header

\n\n

Header

\n", "# Header 1\n\n# Header 1", "

Header 1

\n\n

Header 1

\n", "# Header\n\n# Header 1\n\n# Header\n\n# Header", "

Header

\n\n

Header 1

\n\n

Header

\n\n

Header

\n", } parameters := HtmlRendererParameters{ HeaderIDPrefix: "PRE:", HeaderIDSuffix: ":POST", } doTestsBlockWithRunner(t, tests, EXTENSION_AUTO_HEADER_IDS, runnerWithRendererParameters(parameters)) } func TestPrefixMultipleHeaderExtensions(t *testing.T) { var tests = []string{ "# Header\n\n# Header {#header}\n\n# Header 1", "

Header

\n\n

Header

\n\n

Header 1

\n", } doTestsBlock(t, tests, EXTENSION_AUTO_HEADER_IDS|EXTENSION_HEADER_IDS) } func TestUnderlineHeaders(t *testing.T) { var tests = []string{ "Header 1\n========\n", "

Header 1

\n", "Header 2\n--------\n", "

Header 2

\n", "A\n=\n", "

A

\n", "B\n-\n", "

B

\n", "Paragraph\nHeader\n=\n", "

Paragraph

\n\n

Header

\n", "Header\n===\nParagraph\n", "

Header

\n\n

Paragraph

\n", "Header\n===\nAnother header\n---\n", "

Header

\n\n

Another header

\n", " Header\n======\n", "

Header

\n", " Code\n========\n", "
Code\n
\n\n

========

\n", "Header with *inline*\n=====\n", "

Header with inline

\n", "* List\n * Sublist\n Not a header\n ------\n", "
    \n
  • List\n\n
      \n
    • Sublist\nNot a header\n------
    • \n
  • \n
\n", "Paragraph\n\n\n\n\nHeader\n===\n", "

Paragraph

\n\n

Header

\n", "Trailing space \n==== \n\n", "

Trailing space

\n", "Trailing spaces\n==== \n\n", "

Trailing spaces

\n", "Double underline\n=====\n=====\n", "

Double underline

\n\n

=====

\n", } doTestsBlock(t, tests, 0) } func TestUnderlineHeadersAutoIDs(t *testing.T) { var tests = []string{ "Header 1\n========\n", "

Header 1

\n", "Header 2\n--------\n", "

Header 2

\n", "A\n=\n", "

A

\n", "B\n-\n", "

B

\n", "Paragraph\nHeader\n=\n", "

Paragraph

\n\n

Header

\n", "Header\n===\nParagraph\n", "

Header

\n\n

Paragraph

\n", "Header\n===\nAnother header\n---\n", "

Header

\n\n

Another header

\n", " Header\n======\n", "

Header

\n", "Header with *inline*\n=====\n", "

Header with inline

\n", "Paragraph\n\n\n\n\nHeader\n===\n", "

Paragraph

\n\n

Header

\n", "Trailing space \n==== \n\n", "

Trailing space

\n", "Trailing spaces\n==== \n\n", "

Trailing spaces

\n", "Double underline\n=====\n=====\n", "

Double underline

\n\n

=====

\n", "Header\n======\n\nHeader\n======\n", "

Header

\n\n

Header

\n", "Header 1\n========\n\nHeader 1\n========\n", "

Header 1

\n\n

Header 1

\n", } doTestsBlock(t, tests, EXTENSION_AUTO_HEADER_IDS) } func TestHorizontalRule(t *testing.T) { var tests = []string{ "-\n", "

-

\n", "--\n", "

--

\n", "---\n", "
\n", "----\n", "
\n", "*\n", "

*

\n", "**\n", "

**

\n", "***\n", "
\n", "****\n", "
\n", "_\n", "

_

\n", "__\n", "

__

\n", "___\n", "
\n", "____\n", "
\n", "-*-\n", "

-*-

\n", "- - -\n", "
\n", "* * *\n", "
\n", "_ _ _\n", "
\n", "-----*\n", "

-----*

\n", " ------ \n", "
\n", "Hello\n***\n", "

Hello

\n\n
\n", "---\n***\n___\n", "
\n\n
\n\n
\n", } doTestsBlock(t, tests, 0) } func TestUnorderedList(t *testing.T) { var tests = []string{ "* Hello\n", "
    \n
  • Hello
  • \n
\n", "* Yin\n* Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "* Ting\n* Bong\n* Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "* Yin\n\n* Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "* Ting\n\n* Bong\n* Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "+ Hello\n", "
    \n
  • Hello
  • \n
\n", "+ Yin\n+ Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "+ Ting\n+ Bong\n+ Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "+ Yin\n\n+ Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "+ Ting\n\n+ Bong\n+ Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "- Hello\n", "
    \n
  • Hello
  • \n
\n", "- Yin\n- Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "- Ting\n- Bong\n- Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "- Yin\n\n- Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "- Ting\n\n- Bong\n- Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "*Hello\n", "

*Hello

\n", "* Hello \n", "
    \n
  • Hello
  • \n
\n", "* Hello \n Next line \n", "
    \n
  • Hello\nNext line
  • \n
\n", "Paragraph\n* No linebreak\n", "

Paragraph\n* No linebreak

\n", "Paragraph\n\n* Linebreak\n", "

Paragraph

\n\n
    \n
  • Linebreak
  • \n
\n", "* List\n\n1. Spacer Mixed listing\n", "
    \n
  • List
  • \n
\n\n
    \n
  1. Spacer Mixed listing
  2. \n
\n", "* List\n * Nested list\n", "
    \n
  • List\n\n
      \n
    • Nested list
    • \n
  • \n
\n", "* List\n\n * Nested list\n", "
    \n
  • List

    \n\n
      \n
    • Nested list
    • \n
  • \n
\n", "* List\n Second line\n\n + Nested\n", "
    \n
  • List\nSecond line

    \n\n
      \n
    • Nested
    • \n
  • \n
\n", "* List\n + Nested\n\n Continued\n", "
    \n
  • List

    \n\n
      \n
    • Nested
    • \n
    \n\n

    Continued

  • \n
\n", "* List\n * shallow indent\n", "
    \n
  • List\n\n
      \n
    • shallow indent
    • \n
  • \n
\n", "* List\n" + " * shallow indent\n" + " * part of second list\n" + " * still second\n" + " * almost there\n" + " * third level\n", "
    \n" + "
  • List\n\n" + "
      \n" + "
    • shallow indent
    • \n" + "
    • part of second list
    • \n" + "
    • still second
    • \n" + "
    • almost there\n\n" + "
        \n" + "
      • third level
      • \n" + "
    • \n" + "
  • \n" + "
\n", "* List\n extra indent, same paragraph\n", "
    \n
  • List\n extra indent, same paragraph
  • \n
\n", "* List\n\n code block\n", "
    \n
  • List

    \n\n
    code block\n
  • \n
\n", "* List\n\n code block with spaces\n", "
    \n
  • List

    \n\n
      code block with spaces\n
  • \n
\n", "* List\n\n * sublist\n\n normal text\n\n * another sublist\n", "
    \n
  • List

    \n\n
      \n
    • sublist
    • \n
    \n\n

    normal text

    \n\n
      \n
    • another sublist
    • \n
  • \n
\n", `* Foo bar qux `, `
  • Foo

    bar
    
    qux
    
`, } doTestsBlock(t, tests, 0) } func TestFencedCodeBlockWithinList(t *testing.T) { doTestsBlock(t, []string{ "* Foo\n\n ```\n bar\n\n qux\n ```\n", `
  • Foo

    bar
    
    qux
    
`, }, EXTENSION_FENCED_CODE) } func TestOrderedList(t *testing.T) { var tests = []string{ "1. Hello\n", "
    \n
  1. Hello
  2. \n
\n", "1. Yin\n2. Yang\n", "
    \n
  1. Yin
  2. \n
  3. Yang
  4. \n
\n", "1. Ting\n2. Bong\n3. Goo\n", "
    \n
  1. Ting
  2. \n
  3. Bong
  4. \n
  5. Goo
  6. \n
\n", "1. Yin\n\n2. Yang\n", "
    \n
  1. Yin

  2. \n\n
  3. Yang

  4. \n
\n", "1. Ting\n\n2. Bong\n3. Goo\n", "
    \n
  1. Ting

  2. \n\n
  3. Bong

  4. \n\n
  5. Goo

  6. \n
\n", "1 Hello\n", "

1 Hello

\n", "1.Hello\n", "

1.Hello

\n", "1. Hello \n", "
    \n
  1. Hello
  2. \n
\n", "1. Hello \n Next line \n", "
    \n
  1. Hello\nNext line
  2. \n
\n", "Paragraph\n1. No linebreak\n", "

Paragraph\n1. No linebreak

\n", "Paragraph\n\n1. Linebreak\n", "

Paragraph

\n\n
    \n
  1. Linebreak
  2. \n
\n", "1. List\n 1. Nested list\n", "
    \n
  1. List\n\n
      \n
    1. Nested list
    2. \n
  2. \n
\n", "1. List\n\n 1. Nested list\n", "
    \n
  1. List

    \n\n
      \n
    1. Nested list
    2. \n
  2. \n
\n", "1. List\n Second line\n\n 1. Nested\n", "
    \n
  1. List\nSecond line

    \n\n
      \n
    1. Nested
    2. \n
  2. \n
\n", "1. List\n 1. Nested\n\n Continued\n", "
    \n
  1. List

    \n\n
      \n
    1. Nested
    2. \n
    \n\n

    Continued

  2. \n
\n", "1. List\n 1. shallow indent\n", "
    \n
  1. List\n\n
      \n
    1. shallow indent
    2. \n
  2. \n
\n", "1. List\n" + " 1. shallow indent\n" + " 2. part of second list\n" + " 3. still second\n" + " 4. almost there\n" + " 1. third level\n", "
    \n" + "
  1. List\n\n" + "
      \n" + "
    1. shallow indent
    2. \n" + "
    3. part of second list
    4. \n" + "
    5. still second
    6. \n" + "
    7. almost there\n\n" + "
        \n" + "
      1. third level
      2. \n" + "
    8. \n" + "
  2. \n" + "
\n", "1. List\n extra indent, same paragraph\n", "
    \n
  1. List\n extra indent, same paragraph
  2. \n
\n", "1. List\n\n code block\n", "
    \n
  1. List

    \n\n
    code block\n
  2. \n
\n", "1. List\n\n code block with spaces\n", "
    \n
  1. List

    \n\n
      code block with spaces\n
  2. \n
\n", "1. List\n\n* Spacer Mixed listing\n", "
    \n
  1. List
  2. \n
\n\n
    \n
  • Spacer Mixed listing
  • \n
\n", "1. List\n* Mixed listing\n", "
    \n
  1. List
  2. \n
  3. Mixed listing
  4. \n
\n", "1. List\n * Mixted list\n", "
    \n
  1. List\n\n
      \n
    • Mixted list
    • \n
  2. \n
\n", "1. List\n * Mixed list\n", "
    \n
  1. List\n\n
      \n
    • Mixed list
    • \n
  2. \n
\n", "* Start with unordered\n 1. Ordered\n", "
    \n
  • Start with unordered\n\n
      \n
    1. Ordered
    2. \n
  • \n
\n", "* Start with unordered\n 1. Ordered\n", "
    \n
  • Start with unordered\n\n
      \n
    1. Ordered
    2. \n
  • \n
\n", "1. numbers\n1. are ignored\n", "
    \n
  1. numbers
  2. \n
  3. are ignored
  4. \n
\n", `1. Foo bar qux `, `
  1. Foo

    bar
    
    
    
    qux
    
`, } doTestsBlock(t, tests, 0) } func TestDefinitionList(t *testing.T) { var tests = []string{ "Term 1\n: Definition a\n", "
\n
Term 1
\n
Definition a
\n
\n", "Term 1\n: Definition a \n", "
\n
Term 1
\n
Definition a
\n
\n", "Term 1\n: Definition a\n: Definition b\n", "
\n
Term 1
\n
Definition a
\n
Definition b
\n
\n", "Term 1\n: Definition a\n\nTerm 2\n: Definition b\n", "
\n" + "
Term 1
\n" + "
Definition a
\n" + "
Term 2
\n" + "
Definition b
\n" + "
\n", "Term 1\n: Definition a\n\nTerm 2\n: Definition b\n\nTerm 3\n: Definition c\n", "
\n" + "
Term 1
\n" + "
Definition a
\n" + "
Term 2
\n" + "
Definition b
\n" + "
Term 3
\n" + "
Definition c
\n" + "
\n", "Term 1\n: Definition a\n: Definition b\n\nTerm 2\n: Definition c\n", "
\n" + "
Term 1
\n" + "
Definition a
\n" + "
Definition b
\n" + "
Term 2
\n" + "
Definition c
\n" + "
\n", "Term 1\n\n: Definition a\n\nTerm 2\n\n: Definition b\n", "
\n" + "
Term 1
\n" + "

Definition a

\n" + "
Term 2
\n" + "

Definition b

\n" + "
\n", "Term 1\n\n: Definition a\n\n: Definition b\n\nTerm 2\n\n: Definition c\n", "
\n" + "
Term 1
\n" + "

Definition a

\n" + "

Definition b

\n" + "
Term 2
\n" + "

Definition c

\n" + "
\n", "Term 1\n: Definition a\nNext line\n", "
\n
Term 1
\n
Definition a\nNext line
\n
\n", "Term 1\n: Definition a\n Next line\n", "
\n
Term 1
\n
Definition a\nNext line
\n
\n", "Term 1\n: Definition a \n Next line \n", "
\n
Term 1
\n
Definition a\nNext line
\n
\n", "Term 1\n: Definition a\nNext line\n\nTerm 2\n: Definition b", "
\n" + "
Term 1
\n" + "
Definition a\nNext line
\n" + "
Term 2
\n" + "
Definition b
\n" + "
\n", "Term 1\n: Definition a\n", "
\n
Term 1
\n
Definition a
\n
\n", "Term 1\n:Definition a\n", "

Term 1\n:Definition a

\n", "Term 1\n\n: Definition a\n\nTerm 2\n\n: Definition b\n\nText 1", "
\n" + "
Term 1
\n" + "

Definition a

\n" + "
Term 2
\n" + "

Definition b

\n" + "
\n" + "\n

Text 1

\n", "Term 1\n\n: Definition a\n\nText 1\n\nTerm 2\n\n: Definition b\n\nText 2", "
\n" + "
Term 1
\n" + "

Definition a

\n" + "
\n" + "\n

Text 1

\n" + "\n
\n" + "
Term 2
\n" + "

Definition b

\n" + "
\n" + "\n

Text 2

\n", "Term 1\n: Definition a\n\n Text 1\n\n 1. First\n 2. Second", "
\n" + "
Term 1
\n" + "

Definition a

\n\n" + "

Text 1

\n\n" + "
    \n
  1. First
  2. \n
  3. Second
  4. \n
\n" + "
\n", } doTestsBlock(t, tests, EXTENSION_DEFINITION_LISTS) } func TestPreformattedHtml(t *testing.T) { var tests = []string{ "
\n", "
\n", "
\n
\n", "
\n
\n", "
\n
\nParagraph\n", "

\n
\nParagraph

\n", "
\n
\n", "
\n
\n", "
\nAnything here\n
\n", "
\nAnything here\n
\n", "
\n Anything here\n
\n", "
\n Anything here\n
\n", "
\nAnything here\n
\n", "
\nAnything here\n
\n", "
\nThis is *not* &proceessed\n
\n", "
\nThis is *not* &proceessed\n
\n", "\n Something\n\n", "

\n Something\n

\n", "
\n Something here\n\n", "

\n Something here\n

\n", "Paragraph\n
\nHere? >&<\n
\n", "

Paragraph\n

\nHere? >&<\n

\n", "Paragraph\n\n
\nHow about here? >&<\n
\n", "

Paragraph

\n\n
\nHow about here? >&<\n
\n", "Paragraph\n
\nHere? >&<\n
\nAnd here?\n", "

Paragraph\n

\nHere? >&<\n
\nAnd here?

\n", "Paragraph\n\n
\nHow about here? >&<\n
\nAnd here?\n", "

Paragraph

\n\n

\nHow about here? >&<\n
\nAnd here?

\n", "Paragraph\n
\nHere? >&<\n
\n\nAnd here?\n", "

Paragraph\n

\nHere? >&<\n

\n\n

And here?

\n", "Paragraph\n\n
\nHow about here? >&<\n
\n\nAnd here?\n", "

Paragraph

\n\n
\nHow about here? >&<\n
\n\n

And here?

\n", } doTestsBlock(t, tests, 0) } func TestPreformattedHtmlLax(t *testing.T) { var tests = []string{ "Paragraph\n
\nHere? >&<\n
\n", "

Paragraph

\n\n
\nHere? >&<\n
\n", "Paragraph\n\n
\nHow about here? >&<\n
\n", "

Paragraph

\n\n
\nHow about here? >&<\n
\n", "Paragraph\n
\nHere? >&<\n
\nAnd here?\n", "

Paragraph

\n\n
\nHere? >&<\n
\n\n

And here?

\n", "Paragraph\n\n
\nHow about here? >&<\n
\nAnd here?\n", "

Paragraph

\n\n
\nHow about here? >&<\n
\n\n

And here?

\n", "Paragraph\n
\nHere? >&<\n
\n\nAnd here?\n", "

Paragraph

\n\n
\nHere? >&<\n
\n\n

And here?

\n", "Paragraph\n\n
\nHow about here? >&<\n
\n\nAnd here?\n", "

Paragraph

\n\n
\nHow about here? >&<\n
\n\n

And here?

\n", } doTestsBlock(t, tests, EXTENSION_LAX_HTML_BLOCKS) } func TestFencedCodeBlock(t *testing.T) { var tests = []string{ "``` go\nfunc foo() bool {\n\treturn true;\n}\n```\n", "
func foo() bool {\n\treturn true;\n}\n
\n", "``` c\n/* special & char < > \" escaping */\n```\n", "
/* special & char < > " escaping */\n
\n", "``` c\nno *inline* processing ~~of text~~\n```\n", "
no *inline* processing ~~of text~~\n
\n", "```\nNo language\n```\n", "
No language\n
\n", "``` {ocaml}\nlanguage in braces\n```\n", "
language in braces\n
\n", "``` {ocaml} \nwith extra whitespace\n```\n", "
with extra whitespace\n
\n", "```{ ocaml }\nwith extra whitespace\n```\n", "
with extra whitespace\n
\n", "~ ~~ java\nWith whitespace\n~~~\n", "

~ ~~ java\nWith whitespace\n~~~

\n", "~~\nonly two\n~~\n", "

~~\nonly two\n~~

\n", "```` python\nextra\n````\n", "
extra\n
\n", "~~~ perl\nthree to start, four to end\n~~~~\n", "

~~~ perl\nthree to start, four to end\n~~~~

\n", "~~~~ perl\nfour to start, three to end\n~~~\n", "

~~~~ perl\nfour to start, three to end\n~~~

\n", "~~~ bash\ntildes\n~~~\n", "
tildes\n
\n", "``` lisp\nno ending\n", "

``` lisp\nno ending

\n", "~~~ lisp\nend with language\n~~~ lisp\n", "

~~~ lisp\nend with language\n~~~ lisp

\n", "```\nmismatched begin and end\n~~~\n", "

```\nmismatched begin and end\n~~~

\n", "~~~\nmismatched begin and end\n```\n", "

~~~\nmismatched begin and end\n```

\n", " ``` oz\nleading spaces\n```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", "``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
``` oz\n
\n\n

leading spaces\n ```

\n", "Bla bla\n\n``` oz\ncode blocks breakup paragraphs\n```\n\nBla Bla\n", "

Bla bla

\n\n
code blocks breakup paragraphs\n
\n\n

Bla Bla

\n", "Some text before a fenced code block\n``` oz\ncode blocks breakup paragraphs\n```\nAnd some text after a fenced code block", "

Some text before a fenced code block

\n\n
code blocks breakup paragraphs\n
\n\n

And some text after a fenced code block

\n", "`", "

`

\n", "Bla bla\n\n``` oz\ncode blocks breakup paragraphs\n```\n\nBla Bla\n\n``` oz\nmultiple code blocks work okay\n```\n\nBla Bla\n", "

Bla bla

\n\n
code blocks breakup paragraphs\n
\n\n

Bla Bla

\n\n
multiple code blocks work okay\n
\n\n

Bla Bla

\n", "Some text before a fenced code block\n``` oz\ncode blocks breakup paragraphs\n```\nSome text in between\n``` oz\nmultiple code blocks work okay\n```\nAnd some text after a fenced code block", "

Some text before a fenced code block

\n\n
code blocks breakup paragraphs\n
\n\n

Some text in between

\n\n
multiple code blocks work okay\n
\n\n

And some text after a fenced code block

\n", "```\n[]:()\n```\n", "
[]:()\n
\n", "```\n[]:()\n[]:)\n[]:(\n[]:x\n[]:testing\n[:testing\n\n[]:\nlinebreak\n[]()\n\n[]:\n[]()\n```", "
[]:()\n[]:)\n[]:(\n[]:x\n[]:testing\n[:testing\n\n[]:\nlinebreak\n[]()\n\n[]:\n[]()\n
\n", } doTestsBlock(t, tests, EXTENSION_FENCED_CODE) } func TestFencedCodeInsideBlockquotes(t *testing.T) { cat := func(s ...string) string { return strings.Join(s, "\n") } var tests = []string{ cat("> ```go", "package moo", "", "```", ""), `
<blockquote>
<pre><code class="language-go">package moo
</code></pre>
</blockquote>
`, // ------------------------------------------- cat("> foo", "> ", "> ```go", "package moo", "```", "> ", "> goo.", ""), `
<blockquote>
<p>foo</p>

<pre><code class="language-go">package moo
</code></pre>

<p>goo.</p>
</blockquote>
`, // ------------------------------------------- cat("> foo", "> ", "> quote", "continues", "```", ""), `
<blockquote>
<p>foo</p>

<p>quote
continues
` + "```" + `</p>
</blockquote>
`, // ------------------------------------------- cat("> foo", "> ", "> ```go", "package moo", "```", "> ", "> goo.", "> ", "> ```go", "package zoo", "```", "> ", "> woo.", ""), `
<blockquote>
<p>foo</p>

<pre><code class="language-go">package moo
</code></pre>

<p>goo.</p>

<pre><code class="language-go">package zoo
</code></pre>

<p>woo.</p>
</blockquote>
`, } // These 2 alternative forms of blockquoted fenced code blocks should produce same output. forms := [2]string{ cat("> plain quoted text", "> ```fenced", "code", " with leading single space correctly preserved", "okay", "```", "> rest of quoted text"), cat("> plain quoted text", "> ```fenced", "> code", "> with leading single space correctly preserved", "> okay", "> ```", "> rest of quoted text"), } want := `
<blockquote>
<p>plain quoted text</p>

<pre><code class="language-fenced">code
 with leading single space correctly preserved
okay
</code></pre>

<p>rest of quoted text</p>
</blockquote>
` tests = append(tests, forms[0], want) tests = append(tests, forms[1], want) doTestsBlock(t, tests, EXTENSION_FENCED_CODE) } func TestTable(t *testing.T) { var tests = []string{ "a | b\n---|---\nc | d\n", "\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n
ab
cd
\n", "a | b\n---|--\nc | d\n", "

a | b\n---|--\nc | d

\n", "|a|b|c|d|\n|----|----|----|---|\n|e|f|g|h|\n", "\n\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n\n
abcd
efgh
\n", "*a*|__b__|[c](C)|d\n---|---|---|---\ne|f|g|h\n", "\n\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n\n
abcd
efgh
\n", "a|b|c\n---|---|---\nd|e|f\ng|h\ni|j|k|l|m\nn|o|p\n", "\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n" + "\n\n\n\n\n\n" + "\n\n\n\n\n\n" + "\n\n\n\n\n\n
abc
def
gh
ijk
nop
\n", "a|b|c\n---|---|---\n*d*|__e__|f\n", "\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n
abc
def
\n", "a|b|c|d\n:--|--:|:-:|---\ne|f|g|h\n", "\n\n\n\n\n" + "\n\n\n\n\n" + "\n\n\n\n" + "\n\n\n\n
abcd
efgh
\n", "a|b|c\n---|---|---\n", "\n\n\n\n\n\n\n\n\n\n\n
abc
\n", "a| b|c | d | e\n---|---|---|---|---\nf| g|h | i |j\n", "\n\n\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n\n\n
abcde
fghij
\n", "a|b\\|c|d\n---|---|---\nf|g\\|h|i\n", "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ab|cd
fg|hi
\n", } doTestsBlock(t, tests, EXTENSION_TABLES) } func TestUnorderedListWith_EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK(t *testing.T) { var tests = []string{ "* Hello\n", "
    \n
  • Hello
  • \n
\n", "* Yin\n* Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "* Ting\n* Bong\n* Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "* Yin\n\n* Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "* Ting\n\n* Bong\n* Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "+ Hello\n", "
    \n
  • Hello
  • \n
\n", "+ Yin\n+ Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "+ Ting\n+ Bong\n+ Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "+ Yin\n\n+ Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "+ Ting\n\n+ Bong\n+ Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "- Hello\n", "
    \n
  • Hello
  • \n
\n", "- Yin\n- Yang\n", "
    \n
  • Yin
  • \n
  • Yang
  • \n
\n", "- Ting\n- Bong\n- Goo\n", "
    \n
  • Ting
  • \n
  • Bong
  • \n
  • Goo
  • \n
\n", "- Yin\n\n- Yang\n", "
    \n
  • Yin

  • \n\n
  • Yang

  • \n
\n", "- Ting\n\n- Bong\n- Goo\n", "
    \n
  • Ting

  • \n\n
  • Bong

  • \n\n
  • Goo

  • \n
\n", "*Hello\n", "

*Hello

\n", "* Hello \n", "
    \n
  • Hello
  • \n
\n", "* Hello \n Next line \n", "
    \n
  • Hello\nNext line
  • \n
\n", "Paragraph\n* No linebreak\n", "

Paragraph

\n\n
    \n
  • No linebreak
  • \n
\n", "Paragraph\n\n* Linebreak\n", "

Paragraph

\n\n
    \n
  • Linebreak
  • \n
\n", "* List\n * Nested list\n", "
    \n
  • List\n\n
      \n
    • Nested list
    • \n
  • \n
\n", "* List\n\n * Nested list\n", "
    \n
  • List

    \n\n
      \n
    • Nested list
    • \n
  • \n
\n", "* List\n Second line\n\n + Nested\n", "
    \n
  • List\nSecond line

    \n\n
      \n
    • Nested
    • \n
  • \n
\n", "* List\n + Nested\n\n Continued\n", "
    \n
  • List

    \n\n
      \n
    • Nested
    • \n
    \n\n

    Continued

  • \n
\n", "* List\n * shallow indent\n", "
    \n
  • List\n\n
      \n
    • shallow indent
    • \n
  • \n
\n", "* List\n" + " * shallow indent\n" + " * part of second list\n" + " * still second\n" + " * almost there\n" + " * third level\n", "
    \n" + "
  • List\n\n" + "
      \n" + "
    • shallow indent
    • \n" + "
    • part of second list
    • \n" + "
    • still second
    • \n" + "
    • almost there\n\n" + "
        \n" + "
      • third level
      • \n" + "
    • \n" + "
  • \n" + "
\n", "* List\n extra indent, same paragraph\n", "
    \n
  • List\n extra indent, same paragraph
  • \n
\n", "* List\n\n code block\n", "
    \n
  • List

    \n\n
    code block\n
  • \n
\n", "* List\n\n code block with spaces\n", "
    \n
  • List

    \n\n
      code block with spaces\n
  • \n
\n", "* List\n\n * sublist\n\n normal text\n\n * another sublist\n", "
    \n
  • List

    \n\n
      \n
    • sublist
    • \n
    \n\n

    normal text

    \n\n
      \n
    • another sublist
    • \n
  • \n
\n", } doTestsBlock(t, tests, EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK) } func TestOrderedList_EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK(t *testing.T) { var tests = []string{ "1. Hello\n", "
    \n
  1. Hello
  2. \n
\n", "1. Yin\n2. Yang\n", "
    \n
  1. Yin
  2. \n
  3. Yang
  4. \n
\n", "1. Ting\n2. Bong\n3. Goo\n", "
    \n
  1. Ting
  2. \n
  3. Bong
  4. \n
  5. Goo
  6. \n
\n", "1. Yin\n\n2. Yang\n", "
    \n
  1. Yin

  2. \n\n
  3. Yang

  4. \n
\n", "1. Ting\n\n2. Bong\n3. Goo\n", "
    \n
  1. Ting

  2. \n\n
  3. Bong

  4. \n\n
  5. Goo

  6. \n
\n", "1 Hello\n", "

1 Hello

\n", "1.Hello\n", "

1.Hello

\n", "1. Hello \n", "
    \n
  1. Hello
  2. \n
\n", "1. Hello \n Next line \n", "
    \n
  1. Hello\nNext line
  2. \n
\n", "Paragraph\n1. No linebreak\n", "

Paragraph

\n\n
    \n
  1. No linebreak
  2. \n
\n", "Paragraph\n\n1. Linebreak\n", "

Paragraph

\n\n
    \n
  1. Linebreak
  2. \n
\n", "1. List\n 1. Nested list\n", "
    \n
  1. List\n\n
      \n
    1. Nested list
    2. \n
  2. \n
\n", "1. List\n\n 1. Nested list\n", "
    \n
  1. List

    \n\n
      \n
    1. Nested list
    2. \n
  2. \n
\n", "1. List\n Second line\n\n 1. Nested\n", "
    \n
  1. List\nSecond line

    \n\n
      \n
    1. Nested
    2. \n
  2. \n
\n", "1. List\n 1. Nested\n\n Continued\n", "
    \n
  1. List

    \n\n
      \n
    1. Nested
    2. \n
    \n\n

    Continued

  2. \n
\n", "1. List\n 1. shallow indent\n", "
    \n
  1. List\n\n
      \n
    1. shallow indent
    2. \n
  2. \n
\n", "1. List\n" + " 1. shallow indent\n" + " 2. part of second list\n" + " 3. still second\n" + " 4. almost there\n" + " 1. third level\n", "
    \n" + "
  1. List\n\n" + "
      \n" + "
    1. shallow indent
    2. \n" + "
    3. part of second list
    4. \n" + "
    5. still second
    6. \n" + "
    7. almost there\n\n" + "
        \n" + "
      1. third level
      2. \n" + "
    8. \n" + "
  2. \n" + "
\n", "1. List\n extra indent, same paragraph\n", "
    \n
  1. List\n extra indent, same paragraph
  2. \n
\n", "1. List\n\n code block\n", "
    \n
  1. List

    \n\n
    code block\n
  2. \n
\n", "1. List\n\n code block with spaces\n", "
    \n
  1. List

    \n\n
      code block with spaces\n
  2. \n
\n", "1. List\n * Mixted list\n", "
    \n
  1. List\n\n
      \n
    • Mixted list
    • \n
  2. \n
\n", "1. List\n * Mixed list\n", "
    \n
  1. List\n\n
      \n
    • Mixed list
    • \n
  2. \n
\n", "* Start with unordered\n 1. Ordered\n", "
    \n
  • Start with unordered\n\n
      \n
    1. Ordered
    2. \n
  • \n
\n", "* Start with unordered\n 1. Ordered\n", "
    \n
  • Start with unordered\n\n
      \n
    1. Ordered
    2. \n
  • \n
\n", "1. numbers\n1. are ignored\n", "
    \n
  1. numbers
  2. \n
  3. are ignored
  4. \n
\n", } doTestsBlock(t, tests, EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK) } func TestFencedCodeBlock_EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK(t *testing.T) { var tests = []string{ "``` go\nfunc foo() bool {\n\treturn true;\n}\n```\n", "
func foo() bool {\n\treturn true;\n}\n
\n", "``` c\n/* special & char < > \" escaping */\n```\n", "
/* special & char < > " escaping */\n
\n", "``` c\nno *inline* processing ~~of text~~\n```\n", "
no *inline* processing ~~of text~~\n
\n", "```\nNo language\n```\n", "
No language\n
\n", "``` {ocaml}\nlanguage in braces\n```\n", "
language in braces\n
\n", "``` {ocaml} \nwith extra whitespace\n```\n", "
with extra whitespace\n
\n", "```{ ocaml }\nwith extra whitespace\n```\n", "
with extra whitespace\n
\n", "~ ~~ java\nWith whitespace\n~~~\n", "

~ ~~ java\nWith whitespace\n~~~

\n", "~~\nonly two\n~~\n", "

~~\nonly two\n~~

\n", "```` python\nextra\n````\n", "
extra\n
\n", "~~~ perl\nthree to start, four to end\n~~~~\n", "

~~~ perl\nthree to start, four to end\n~~~~

\n", "~~~~ perl\nfour to start, three to end\n~~~\n", "

~~~~ perl\nfour to start, three to end\n~~~

\n", "~~~ bash\ntildes\n~~~\n", "
tildes\n
\n", "``` lisp\nno ending\n", "

``` lisp\nno ending

\n", "~~~ lisp\nend with language\n~~~ lisp\n", "

~~~ lisp\nend with language\n~~~ lisp

\n", "```\nmismatched begin and end\n~~~\n", "

```\nmismatched begin and end\n~~~

\n", "~~~\nmismatched begin and end\n```\n", "

~~~\nmismatched begin and end\n```

\n", " ``` oz\nleading spaces\n```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", "``` oz\nleading spaces\n ```\n", "
leading spaces\n
\n", " ``` oz\nleading spaces\n ```\n", "
``` oz\n
\n\n

leading spaces

\n\n
```\n
\n",
	}
	doTestsBlock(t, tests, EXTENSION_FENCED_CODE|EXTENSION_NO_EMPTY_LINE_BEFORE_BLOCK)
}

func TestTitleBlock_EXTENSION_TITLEBLOCK(t *testing.T) {
	var tests = []string{
		"% Some title\n" +
			"% Another title line\n" +
			"% Yep, more here too\n",
		"<h1 class=\"title\">" +
			"Some title\n" +
			"Another title line\n" +
			"Yep, more here too\n" +
			"</h1>",
	}
	doTestsBlock(t, tests, EXTENSION_TITLEBLOCK)
}

func TestBlockComments(t *testing.T) { var tests = []string{ "Some text\n\n\n", "

Some text

\n\n\n", "Some text\n\n\n", "

Some text

\n\n\n", "Some text\n\n\n", "

Some text

\n\n\n", } doTestsBlock(t, tests, 0) } func TestCDATA(t *testing.T) { var tests = []string{ "Some text\n\n\n", "

Some text

\n\n\n", "CDATA ]]\n\n\n", "

CDATA ]]

\n\n\n", "CDATA >\n\n]]>\n", "

CDATA >

\n\n]]>\n", "Lots of text\n\n\n", "

Lots of text

\n\n\n", "]]>\n", "]]>\n", } doTestsBlock(t, tests, 0) doTestsBlock(t, []string{ "``` html\n\n```\n", "
<![CDATA[foo]]>\n
\n", "\n", "\n", ` def func(): > pass ]]> `, ` def func(): > pass ]]> `, }, EXTENSION_FENCED_CODE) } func TestIsFenceLine(t *testing.T) { tests := []struct { data []byte syntaxRequested bool newlineOptional bool wantEnd int wantMarker string wantSyntax string }{ { data: []byte("```"), wantEnd: 0, }, { data: []byte("```\nstuff here\n"), wantEnd: 4, wantMarker: "```", }, { data: []byte("```\nstuff here\n"), syntaxRequested: true, wantEnd: 4, wantMarker: "```", }, { data: []byte("stuff here\n```\n"), wantEnd: 0, }, { data: []byte("```"), newlineOptional: true, wantEnd: 3, wantMarker: "```", }, { data: []byte("```"), syntaxRequested: true, newlineOptional: true, wantEnd: 3, wantMarker: "```", }, { data: []byte("``` go"), syntaxRequested: true, newlineOptional: true, wantEnd: 6, wantMarker: "```", wantSyntax: "go", }, } for _, test := range tests { var syntax *string if test.syntaxRequested { syntax = new(string) } end, marker := isFenceLine(test.data, syntax, "```", test.newlineOptional) if got, want := end, test.wantEnd; got != want { t.Errorf("got end %v, want %v", got, want) } if got, want := marker, test.wantMarker; got != want { t.Errorf("got marker %q, want %q", got, want) } if test.syntaxRequested { if got, want := *syntax, test.wantSyntax; got != want { t.Errorf("got syntax %q, want %q", got, want) } } } } func TestJoinLines(t *testing.T) { input := `# 标题 第一 行文字。 第 二 行文字。 ` result := `

<h1>标题</h1>

<p>第一行文字。</p>

<p>第二行文字。</p>
` opt := Options{Extensions: commonExtensions | EXTENSION_JOIN_LINES} renderer := HtmlRenderer(commonHtmlFlags, "", "") output := MarkdownOptions([]byte(input), renderer, opt) if string(output) != result { t.Error("output dose not match.") } } func TestSanitizedAnchorName(t *testing.T) { tests := []struct { text string want string }{ { text: "This is a header", want: "this-is-a-header", }, { text: "This is also a header", want: "this-is-also-a-header", }, { text: "main.go", want: "main-go", }, { text: "Article 123", want: "article-123", }, { text: "<- Let's try this, shall we?", want: "let-s-try-this-shall-we", }, { text: " ", want: "", }, { text: "Hello, 世界", want: "hello-世界", }, } for _, test := range tests { if got := SanitizedAnchorName(test.text); got != test.want { t.Errorf("SanitizedAnchorName(%q):\ngot %q\nwant %q", test.text, got, test.want) } } } blackfriday-1.5.1/doc.go000066400000000000000000000033371324224440100150670ustar00rootroot00000000000000// Package blackfriday is a Markdown processor. // // It translates plain text with simple formatting rules into HTML or LaTeX. // // Sanitized Anchor Names // // Blackfriday includes an algorithm for creating sanitized anchor names // corresponding to a given input text. This algorithm is used to create // anchors for headings when EXTENSION_AUTO_HEADER_IDS is enabled. The // algorithm is specified below, so that other packages can create // compatible anchor names and links to those anchors. // // The algorithm iterates over the input text, interpreted as UTF-8, // one Unicode code point (rune) at a time. All runes that are letters (category L) // or numbers (category N) are considered valid characters. They are mapped to // lower case, and included in the output. All other runes are considered // invalid characters. Invalid characters that preceed the first valid character, // as well as invalid character that follow the last valid character // are dropped completely. All other sequences of invalid characters // between two valid characters are replaced with a single dash character '-'. // // SanitizedAnchorName exposes this functionality, and can be used to // create compatible links to the anchor names generated by blackfriday. // This algorithm is also implemented in a small standalone package at // github.com/shurcooL/sanitized_anchor_name. It can be useful for clients // that want a small package and don't need full functionality of blackfriday. package blackfriday // NOTE: Keep Sanitized Anchor Name algorithm in sync with package // github.com/shurcooL/sanitized_anchor_name. // Otherwise, users of sanitized_anchor_name will get anchor names // that are incompatible with those generated by blackfriday. blackfriday-1.5.1/html.go000066400000000000000000000571541324224440100152740ustar00rootroot00000000000000// // Blackfriday Markdown Processor // Available at http://github.com/russross/blackfriday // // Copyright © 2011 Russ Ross . // Distributed under the Simplified BSD License. // See README.md for details. // // // // HTML rendering backend // // package blackfriday import ( "bytes" "fmt" "regexp" "strconv" "strings" ) // Html renderer configuration options. const ( HTML_SKIP_HTML = 1 << iota // skip preformatted HTML blocks HTML_SKIP_STYLE // skip embedded